PowerShell Basics – The Environment

I have to say that last year I started to write about PowerShell basics and then stopped. The main reason was that after talking with Dave Kennedy I decided to write a class for DerbyCon 2012, and boy, did I think it was going to be simple. I started out believing that I could write it in a month or two and have it done, since I use PowerShell on a daily basis. It took me over six months, I ended up with over 600 slides, and I was still modifying the slides on the airplane ride to Louisville because Microsoft released PowerShell version 3.0 as part of the Windows Management Framework 3.0 a week before the conference. The good part is that I now have more than enough material to restart the series and cover more fun stuff for the security professional and the admin alike.

I have given the PowerShell for Security Professionals class three times, and one thing I decided for the blog posts that differs from the class itself is to provide short segments of fast and easy-to-use information for people to start getting into PowerShell.

What is PowerShell?

PowerShell is Microsoft's new command-line interface for Windows systems. It provides access to:

  • Existing Windows command-line tools.
  • PowerShell cmdlets (PowerShell's own commands).
  • PowerShell functions.
  • Access to the .NET Framework API.
  • Access to WMI (Windows Management Instrumentation).
  • Access to Windows COM (Component Object Model).
  • Access to functions in Windows DLLs (Dynamic Link Libraries).

As can be seen, PowerShell provides access to many different technologies and APIs on a Windows system, making it ideal for administration and for security work alike.
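
To give a feel for that range, here is a minimal sketch touching each of those layers from a single session (the hostname in the .NET example is just an illustration):

# Existing command-line tool
ipconfig /all

# PowerShell cmdlet
Get-Process | Sort-Object WorkingSet -Descending | Select-Object -First 5

# .NET Framework API
[System.Net.Dns]::GetHostAddresses('www.microsoft.com')

# WMI
Get-WmiObject -Class Win32_OperatingSystem

# COM
$shell = New-Object -ComObject WScript.Shell
$shell.Popup('Hello from COM') | Out-Null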

Microsoft is making PowerShell the default management interface for many of its server products, such as Exchange, System Center Operations Manager, SQL Server, SharePoint Server and more. Not only that, but on Windows Server 2012 the default install is Core (a GUI-less system) and management is done via the command line or using remote administration tools. Microsoft included over 4,000 new PowerShell cmdlets to make administering the new server from the command line the easiest it has ever been.

PowerShell Versions

Depending on the environment and systems you work with, there are two main versions of PowerShell you will find yourself working with:

  • PowerShell v2 – Included with Windows 7 and Windows 2008 R2. Available as a separate download for Windows XP SP3, Windows 2003 SP2, Windows Vista SP1 and Windows 2008 SP2. It can be pushed to hosts via Windows Server Update Services. Download at http://support.microsoft.com/kb/968929
  • PowerShell v3 – Included with Windows 8 and Windows 2012. Available as a separate download for Windows 7 SP1 and Windows 2008 R2 SP1. It cannot be pushed to hosts via Windows Server Update Services. Download at http://www.microsoft.com/en-us/download/details.aspx?id=34595
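
A quick way to check which version a given host is running is the built-in $PSVersionTable variable (it exists in v2 and later; on PowerShell v1 the variable is not defined):

# Shows the versions of PowerShell and its components
$PSVersionTable

# Just the engine version
$PSVersionTable.PSVersion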

On Windows systems prior to Windows 8 and Windows 2012, PowerShell can be found under Start –> All Programs –> Accessories –> System Tools. Depending on the architecture of the operating system there will be an x86 version and an x64 version of PowerShell. In addition to the shortcut for the PowerShell terminal there will also be a shortcut for the ISE (Integrated Scripting Environment), an editor for PowerShell scripts that was included with PowerShell v2 and greatly improved in PowerShell v3. On systems running Windows 8 and Windows 2012 with the Metro interface, one just needs to type PowerShell or PowerShell_ISE to access the components. On a Windows 2012 Core system, one just needs to type powershell.exe at the command prompt to load it.

Some recommendations when loading PowerShell:

  • Since PowerShell provides access to many administrative functions, it is recommended to run it as Administrator.

  • If you are on an x64 system, make sure you run the x64 version of it (the one without x86 in the name of the shortcut).

When we launch PowerShell, we are greeted with a blue command window with white text.

One can easily determine whether or not the session is running as Administrator by looking at the title bar of the window.
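
Both recommendations can also be verified from inside the session itself. A minimal sketch using standard .NET calls:

# Is the current session elevated (running as Administrator)?
$identity  = [Security.Principal.WindowsIdentity]::GetCurrent()
$principal = New-Object Security.Principal.WindowsPrincipal($identity)
$principal.IsInRole([Security.Principal.WindowsBuiltInRole]::Administrator)

# Is this a 64-bit session? Pointer size is 8 bytes in a 64-bit process.
[IntPtr]::Size -eq 8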

I would recommend taking the chance to customize the shortcut for launching PowerShell so as to provide the best experience. Right-click on the blue PowerShell icon at the top left of the PowerShell window and select Properties, then make sure that on the Options tab the Edit Options are selected.

On the Layout tab, adjust the Screen Buffer Size Width to one where there is no need for a horizontal scroll bar, making sure that the Width fields have the same value in both the Buffer Size and the Window Size.

Ensuring a proper width will make the large amounts of output generated by some cmdlets easier to read on the screen.
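
The same adjustment can also be made from within a session through the $host variable, though it only lasts for that session, unlike the shortcut properties. A small sketch; the value of 120 columns is just an example, and the buffer is set first since the window cannot be wider than the buffer:

# Read the current sizes
$buffer = $host.UI.RawUI.BufferSize
$window = $host.UI.RawUI.WindowSize

# Set matching widths, buffer first
$buffer.Width = 120
$host.UI.RawUI.BufferSize = $buffer
$window.Width = 120
$host.UI.RawUI.WindowSize = $window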

The terminal has several keyboard shortcuts that can be used; a list of the most common ones is in the table below:

[Table: common PowerShell console keyboard shortcuts]

On PowerShell v2 the ISE can also be used as an interactive command prompt, where commands are entered in one pane and output is shown in the next; in addition, it is a script editor with syntax highlighting.

On PowerShell v3 the ISE has been greatly improved, offering a consolidated command prompt and a cmdlet help pane.

In addition, ISE v3 also provides:

  • IntelliSense for cmdlets and parameters, with a parameter help popup.
  • IntelliSense will provide values for parameters based on enumerations and pre-defined sets.
  • IntelliSense will perform smart matching for cmdlet names.
  • IntelliSense will show path options for filesystems and PSProviders.
  • IntelliSense will show variables.
  • IntelliSense will show the properties and methods available on objects.

It also provides an icon reference that makes it easier to identify in IntelliSense what one wants to choose.

The command prompt in ISE v3 can be said to be the closest one can get to the perfect terminal for PowerShell, with the exception that, since it is not a true terminal, several console commands are not supported. To get a list of the unsupported console applications, one can take a look at the $psUnsupportedConsoleApplications variable.
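
For example, from the ISE command pane (the second command assumes the variable is a modifiable list, and 'myapp.exe' is just a hypothetical name):

# List the console applications the ISE will not run interactively
$psUnsupportedConsoleApplications

# Entries can be added for other applications that misbehave in the ISE
$psUnsupportedConsoleApplications.Add('myapp.exe')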

There are other console alternatives I recommend people also try out if they find the one included with Windows too limiting.

In my next blog post I will go into running commands, exploring commands and using the help system.

Metasploit Framework Guides Updated for Using Git

I have updated my Metasploit Framework installation guides for Ubuntu and OS X to reflect the recent change where Git is now used for updating. I do have to say I'm happy now: since the addition of the Gemcache folder to host all the Gems for both the open source base and the commercial products based on it, SVN had been timing out or erroring out when updating, and this change resolves that issue. If you have used my guides for your current setup, just run:

# Preserve the existing database configuration
cp /usr/local/share/metasploit-framework/database.yml /tmp
cd /usr/local/share/
# Remove the old SVN checkout and replace it with a Git clone
rm -rf metasploit-framework
git clone https://github.com/rapid7/metasploit-framework.git
# Restore the database configuration
cp /tmp/database.yml /usr/local/share/metasploit-framework/

Should We Exploit Every Vulnerability to Prove it Exists?

Recently I made a comment on Twitter where I said that I cringe every time I hear that, to confirm a vulnerability, an exploit must be run against it to prove it. Some people agreed that it is not the perfect solution; others argued that it is the best one. Let me explain in more than 140-character chunks why I cringe. The scenario I refer to is that of an internal security team managing the security of their infrastructure on a daily basis.

  1. There are safer ways to check if a vulnerability is present after performing a patch deployment or a configuration change. Most scanners nowadays have credentialed checks, where they check versions of files, the presence of packages and even whether the server has been rebooted or not, in addition to the network validation of connecting to a possible service and interacting with it to try to determine in a safe way if the service is vulnerable or not. In most medium to large organizations we also have systems that inventory the hosts and can produce detailed reports of which patches have been installed and which have not; some of these tools are even free. Many times the security team just needs to ask for confirmation from one of the infrastructure teams, or to have read permissions on those inventory systems. Other times they may just need to put in a bit of elbow grease and determine what specific permissions they would need on an account that is used only for scanning.
  2. Not all exploit frameworks and tools have exploits and attacks for every vulnerability you may be exposed to. In fact, remote network exploits are becoming rarer all the time, and the numbers have shifted to the client side. Even with my love of the Metasploit Framework, I know that Canvas, Core Impact and many other tools will each have exploits the others do not, and many just never get added to the tools; others would require that we automate the user actions that would run the vulnerable software against a file or an attacker system to prove it is vulnerable. This means that if exploitation is the only way, one is leaving a very large number of possible vulnerabilities missed.
  3. I do not discard the use of exploits as a verification method. It could be used for certain critical vulnerabilities where we may have implemented countermeasures and a patch is not present. But this has to be done in a planned way, where both the security team and the other infrastructure teams participate so as to deploy, test and validate. Running an exploit against every reported vulnerable system is risky, since many exploits may crash a service or the server; if done without planning and proper communication between the teams, this could have business-impact consequences and further deteriorate any existing political or personal problems in an organization.

One of the arguments I got was that in many companies the teams do not talk, are just not willing to work together, or by design there is a separation of roles and responsibilities that prohibits working together. To be honest, I see this as a big problem of management and leadership in an organization. Are there companies like this? Yes. Should we try our best to change this if we work in such a company? Absolutely. If we are in that situation our success will vary, or we may not be successful at all, but that does not make running exploits for confirmation, without planning or knowing the risks they may cause, the right option and solution. I know that some will agree and others will not, but I felt it was better to write it down than to send public tweets and direct messages all day long, and this way I can convey my reasoning for the comment. I hope my $0.02 on the subject may be helpful to someone, and I'm open to opinions and counterarguments.

Trojan Horse by Mark Russinovich Review

[Image: Trojan Horse book cover]

Trojan Horse is Mark Russinovich's second techno-thriller, his first being Zero Day. Mark is a Technical Fellow in the Platform and Services Division at Microsoft and is very well known in the information technology arena as an expert in security and operating systems. He is also the author of several Microsoft Press books, in addition to being a regular contributor to TechNet Magazine and Windows IT Pro magazine.

In the first book, Zero Day, we meet Jeff Aiken, a forensics specialist who runs his own company and travels from client to client helping them analyze how they were compromised. The book covers how Jeff works to determine how malware got into the systems, driven by his need to find the where, what and who of the infections and security breaches he investigates. When he finds that there is more to the malware he is investigating, and that it is related to several events around the world, we see how Daryl Haugen from the US Computer Emergency Readiness Team helps him put the pieces together. We also see how, when the terrorists find out what he is doing, the dangers move from the digital to the physical world, where the attacks are no longer viruses and Trojans but a trained soldier for hire sent out to get them. We also learn about his past before the fateful attacks of September 11 and how they affected his life. The story in that book centers around a plot by the terrorist group al-Qaeda to repeat their attack on the West, but with computer malware instead of planes and bombs. As part of the story, Mark covers many areas that those in the security community know very well:

1. How difficult it is for antivirus companies to really protect us from all types of malware.

2. How many criminal and political organizations that lack the resources to write their own tools and develop new attacks go out and outsource those skills from the vast pool of security professionals and coders who are willing to find and sell zero-day exploits to the highest bidder and are not driven by any political or religious motives.

3. How companies many times do not take the security of their products seriously enough and do not prioritize the patching of security holes.

4. The complexity and political motivations of the federal government as it tries to control and regulate security and react to emerging threats.

He does all this with what I found to be a very good mix of technical information, plausible scenarios, drama, action and a bit of romance. This second book continues the adventures of Jeff Aiken and Daryl Haugen as they run their own company and are called in to help investigate an infection on government systems that is changing information so as to influence politics and events in the Middle East. We see how Jeff Aiken is again driven by his fascination with discovering who is behind the infection and what they are doing. This brings Jeff to the attention of governments that want to stop his work and silence him so that their agenda is not affected and they can succeed in their goals. This book differs from the original in that instead of covering a terrorist organization, we see how governments like China and Iran use the Internet as their new battleground and area of operation for covert action. We also see how even the US government is advancing these technologies so as to address threats not in a kinetic manner but through technological means, infiltrating and taking the proper actions covertly using the Internet, and even jumping into systems deemed secure and air-gapped. Mark also covers several areas of interest for security professionals in our industry:

* How private companies help the government by providing the appropriate skill sets to develop exploits and security research that can be used offensively in covert actions.

* The shift of malware from collecting information to modifying it, so as to alter events and actions in the physical realm.

* How digital supremacy affects and influences the politics and actions of governments.

* How governments use their offensive technological resources in aid of other governments for political gain.

* How many governments are willing to shift from a digital to a kinetic approach to protect such secrets and actions.

The story takes us through Europe as Jeff moves from country to country, trying to save the woman he loves and stop the plans of the Iranian government and of the Chinese government that is providing them with the technological means to carry out those plans for economic gain. The book keeps the reader engaged at all times, and we see how Mark's writing style has improved and evolved in this second book. It has the right mix of action and technology, making it one of my favorite books this year. I hope to see more books from Mark that continue with Jeff Aiken and his adventures in the digital and physical worlds.

Changing Ubuntu LTS 12.04 Back to GNOME Classic

I really tried to use Unity on the new Ubuntu LTS as much as possible and make it part of my workflow, but while at times I liked Unity, at others I hated it. So I decided to go back to GNOME Classic. In case you are in the same boat as me, just open a Terminal and run:

 

# Refresh the package lists and bring the system up to date
sudo apt-get update
sudo apt-get upgrade
# Install the classic GNOME session
sudo apt-get install gnome-panel

 

After installing the packages, log out of the account, and on the login screen click on the area shown in the picture below:

 

[Screenshot: session selection button on the login screen]

 

Now select GNOME Classic.

 

[Screenshot: choosing GNOME Classic from the session list]

 

Now when you log on you should be all set:

[Screenshot: the GNOME Classic desktop]

Hope this information is useful for others that find themselves in the same situation as me.