RDP TLS Certificate Deployment Using GPO

Remote Desktop has been the go-to remote administration tool for many IT professionals, and sadly many even expose it to the internet, leading to brute-force and man-in-the-middle attacks. I still remember the first time I saw how easy these attacks are, from Irongeek's examples using Cain & Abel: http://www.irongeek.com/i.php?page=videos/cain-rdp-terminal-server-mitm-sniff and http://www.irongeek.com/i.php?page=security/cain-rdp-mitm-parser. Since then I have taken great care to make sure RDP connections in my network and customer networks are as secure as possible. Here is an example of how to deploy TLS certificates for use with RDP via GPO, and how to configure some non-Microsoft systems.


WinRM SSL Certificate Deployment via GPO

I really like using WinRM (Windows Remote Management) to manage my servers and lab. It serves as the basis for the server-management model Microsoft is moving toward, and it provides several advantages:

  • It is secured by default, leveraging strong encryption to protect traffic.
  • It is standards based, allowing interoperability with other platforms.
  • It is SOAP based, making it firewall friendly and easier to route in segmented environments.
  • I can leverage Kerberos for authentication.
  • It allows me to query WMI information from a host without opening RPC ports or dealing with DCOM itself.

One of the problems with WinRM is that, in the name of compatibility, it allows its security to be downgraded, whether for interoperability or by user error. In Windows one can disable encryption when connecting and also specify Basic authentication, exposing credentials. In a Windows-only environment one could enforce encryption and stronger authentication methods, but we rarely see that kind of environment, which is why I prefer SSL when possible: it protects me from user naiveté and lays a foundation for future integrations.
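As a quick illustration of my own (a minimal sketch, assuming an elevated PowerShell session on the host), the relevant service settings can be inspected and locked down through the WSMan: drive:

    # Inspect the WinRM service security settings (elevated session).
    Get-Item WSMan:\localhost\Service\AllowUnencrypted
    Get-ChildItem WSMan:\localhost\Service\Auth

    # Harden the service: refuse unencrypted traffic and Basic authentication.
    Set-Item WSMan:\localhost\Service\AllowUnencrypted -Value $false
    Set-Item WSMan:\localhost\Service\Auth\Basic -Value $false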


Patching with WSUS Offline Open Source Project

I'm constantly spinning up lab VMs for testing and validating ideas, and also to constantly practice the basics of system administration. Sadly, there are periods when I forget to update the template images I use to clone and sysprep Windows machines. I started playing with a solution called WSUS Offline from http://www.wsusoffline.net/, an open source project that provides several automated ways to install critical patches on a machine, making it safe to connect that machine to the web to download non-critical or security patches. This is great not only for virtual labs but also when participating in Red vs. Blue challenges and you are on the blue team looking for a quick way to apply security patches.


Updating Group Policy Objects Remotely

One of the recommendations I always give people who ask my opinion on updating to new versions of Windows is, whether you upgrade or deploy new servers, to always do your Active Directory Domain Controllers first. By updating the DCs first, one can start implementing stronger authentication as clients are migrated, and also start implementing policies that address the new versions of Windows as they join the domain.

One of the features I like in Windows Server 2012 and Windows Server 2012 R2 is the starter GPO that allows the PowerShell cmdlet Invoke-GPUpdate to remotely schedule gpupdate.exe, so GPO settings can be updated at a time of our choosing.
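As a taste of what the full post covers, here is a minimal sketch of the cmdlet in action (the computer name is hypothetical, and the GroupPolicy RSAT module plus the firewall rules from that starter GPO are assumed):

    # Schedule an immediate policy refresh on a remote host.
    Invoke-GPUpdate -Computer 'LAB-CLIENT01' -Force -RandomDelayInMinutes 0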


Merging Nessus XML Reports with PowerShell

One of the questions friends most frequently ask me is how they can merge Nessus XML reports. In this blog post I will cover how to do it using PowerShell. The process is very simple, since the Microsoft .NET Framework makes working with this type of data a breeze.

When we look at an exported Nessus report in XML format, the structure is a simple one, as can be seen in this screenshot:

The root element for the document is NessusClientData_v2, and under it we find these child elements:

  • Policy - a copy of the policy used to perform the scan.
  • Report - a collection of child elements called ReportHost, each containing all the information about one scanned host and what was found on it.
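A bare sketch of that layout (element names as above; the attribute values shown are hypothetical):

    <NessusClientData_v2>
      <Policy>...</Policy>
      <Report name="Lab Scan">
        <ReportHost name="192.168.1.10">...</ReportHost>
        <ReportHost name="192.168.1.11">...</ReportHost>
      </Report>
    </NessusClientData_v2>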

Nessus requires that a report contain these elements, with a validly structured Policy, for it to be imported. With this information at hand, the best way to merge the data is to copy the ReportHost elements from one scan into another, provided all the scans used the same policy.

In PowerShell the simplest way to do this is to read each of the files we want to join together as .NET XmlDocument objects using the PowerShell [xml] type accelerator:

[xml]$Report1 = Get-Content -Path 'C:\Users\Carlos Perez\Downloads\Lab1_k6tcev.nessus'
[xml]$Report2 = Get-Content -Path 'C:\Users\Carlos Perez\Downloads\Lab2_j25p67.nessus'

Now we have two variables, $Report1 and $Report2, each containing one of the reports. We select the ReportHost nodes from the report we want to merge, using XPath and the SelectNodes() method of the XmlDocument object:

$ReportHostsToAdd = $Report2.NessusClientData_v2.Report.SelectNodes("ReportHost")

Now that we have the nodes from that report, we can add them to the other report by importing each node using the ImportNode() method. Importing a node creates an XmlNode object owned by the importing document; we then attach the object to the document using the AppendChild() method.

foreach($ReportHost in $ReportHostsToAdd)
{
    $Node = $Report1.ImportNode($ReportHost, $true)
    $Report1.NessusClientData_v2.Report.AppendChild($Node)
}

Once all the nodes have been appended under the Report element, we can save the document using the Save() method, which must be given a full path.

$Report1.Save('C:\users\Carlos Perez\Desktop\consolidated.nessus')

Creating an Advanced Function

Now that we know the steps needed to merge two reports, we can build an advanced function around them. We start from the ISE snippet menu, selecting the Advanced Function snippet by pressing Ctrl+J.

The structure of an advanced function is simple (a bare skeleton is shown after this list):

  • Param block - contains each of the parameters the function takes and the options for each parameter.
  • Begin block - script block executed only once, at the start of the function's execution.
  • Process block - script block executed once for each object the function receives from the pipeline.
  • End block - script block executed at the end of the function, after all pipeline objects have been processed.
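As a minimal sketch (the function name here is just a placeholder), the skeleton looks like this:

    function Verb-Noun
    {
        [CmdletBinding()]
        Param
        (
            # Parameter declarations and their options go here.
        )

        Begin
        {
            # Runs once, before any pipeline input is processed.
        }
        Process
        {
            # Runs once for each object received from the pipeline.
        }
        End
        {
            # Runs once, after all pipeline input has been processed.
        }
    }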

I always prefer to start by naming my function with an approved verb and a descriptive, non-plural noun, so as to follow Microsoft's PowerShell cmdlet naming guidelines. To get a list of approved verbs, we can just type Get-Verb in a PowerShell session. For this function we will use the verb Merge and the noun NessusReport. After naming the function, I decide whether I will use parameter sets. When processing files I like to follow, where possible, the same two parameters used by most PowerShell core cmdlets, Path and LiteralPath, where LiteralPath is the parameter that accepts files from the pipeline via the alias PSPath. So my parameters for this function are:

  1. Report - the report that will serve as the master report into which all other reports will be merged. Its options are:
    • Mandatory.
    • Accepts values from the pipeline by property name.
    • First position if parameter name not specified.
    • Validate that the file exists.
  2. OutFile - The file that will contain all merged ReportHost elements.
    • Mandatory.
    • Accepts values from the pipeline by property name.
    • Second position if parameter name not specified.
  3. Path - Relative path to the file we want to merge.
    • Mandatory.
    • Accepts values from the pipeline by property name.
    • Third position if parameter name not specified.
    • Validate that the file exists.
    • Part of parameter set named "Path"
  4. LiteralPath - Full path to the file we want to merge.
    • Mandatory.
    • Accepts values from the pipeline by property name.
    • Third position if parameter name not specified.
    • Validate that the file exists.
    • Part of parameter set named "LiteralPath"
    • Has alias of PSPath.
  5. ReportName - name for the merged report, shown when it is imported into Nessus.
    • Optional

After planning the parameters, we can define them in the function:

    Param
    (
        # Report that will serve as the master report into which all other
        # reports will be merged.
        [Parameter(Mandatory=$true,
                   ValueFromPipelineByPropertyName=$true,
                   Position=0)]
        [ValidateScript({Test-Path -Path $_})]
        [string]
        $Report,

        # The file that will contain all merged ReportHost elements.
        [Parameter(Mandatory=$true,
                   ValueFromPipelineByPropertyName=$true,
                   Position=1)]
        [string]
        $OutFile,

        # Relative path to the file we want to merge.
        [Parameter(Mandatory=$true,
                   ValueFromPipelineByPropertyName=$true,
                   Position=2,
                   ParameterSetName = 'Path')]
        [ValidateScript({Test-Path -Path $_})]
        [string]
        $Path,

        # Full path to the file we want to merge.
        [Parameter(Mandatory=$true,
                   ValueFromPipelineByPropertyName=$true,
                   Position=2,
                   ParameterSetName = 'LiteralPath')]
        [ValidateScript({Test-Path -Path $_})]
        [Alias('PSPath')]
        [string]
        $LiteralPath,

        # Name for the merged report that is shown when imported in to Nessus.
        [Parameter(Mandatory=$false)]
        [string]
        $ReportName

    )

We can verify that the function and its parameter sets are configured properly by using the Get-Help cmdlet:
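For example, once the function has been loaded into the session:

    Get-Help Merge-NessusReport

The help output should show one syntax block per parameter set.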

Since we will open the master report only once, to merge the other reports into it, we create its XmlDocument object in the Begin block:

    Begin
    {
        [xml]$MainReport = Get-Content -Path $Report
    }

In the Process block we create an XmlDocument object for each file passed via the pipeline, selecting the proper parameter set, and import the ReportHost elements from each file:

    Process
    {
        switch($PSCmdlet.ParameterSetName)
        {
            'Path' 
            { 
                Write-Verbose -Message "Merging $($Path) in to $($Report)"
                [xml]$Report2Merge = Get-Content -Path $Path
            }

            'LiteralPath' 
            {   
                Write-Verbose -Message "Merging $($LiteralPath) in to $($Report)"
                [xml]$Report2Merge = Get-Content -LiteralPath $LiteralPath
            }
        }

        $ReportHostsToAdd = $Report2Merge.NessusClientData_v2.Report.SelectNodes("ReportHost")
        foreach($ReportHost in $ReportHostsToAdd)
        {
            $Node = $MainReport.ImportNode($ReportHost, $true)
            $MainReport.NessusClientData_v2.Report.AppendChild($Node)
        }
    }

In the End block we resolve the path of the file we will save the data to, and also set the name of the report if one was provided. Since the file does not exist yet, we cannot use the Resolve-Path cmdlet; instead we use the $ExecutionContext.SessionState.Path.GetUnresolvedProviderPathFromPSPath() method to resolve a full path and save the content to the file:

    End
    {
        if ($ReportName.Length -gt 0)
        {
            $MainReport.NessusClientData_v2.Report.name = $ReportName
        }

        $MergedFile = $ExecutionContext.SessionState.Path.GetUnresolvedProviderPathFromPSPath($OutFile)
        $MainReport.Save($MergedFile)
    }

We can now test the function by piping a list of files to it and providing the right parameters:
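For example (the paths here are hypothetical), the FileInfo objects emitted by Get-ChildItem bind to the LiteralPath parameter through its PSPath alias:

    # Merge every scan in the Results folder into the master report.
    Get-ChildItem -Path 'C:\Scans\Results\*.nessus' |
        Merge-NessusReport -Report 'C:\Scans\Master.nessus' `
                           -OutFile 'C:\Scans\Consolidated.nessus' `
                           -ReportName 'Consolidated Lab Scan'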

I hope you find this simple example useful. It can be expanded and more error handling added, but I wanted to keep it simple.