Networking Pivoting via SSH – Scanning with Nessus Professional behind a Firewall or NAT.

In this post I'm going to cover the process of scanning a network behind a firewall or NAT using network pivoting via SSH, without being limited to proxychains or specific ports and protocols. Essentially this uses SSH tunneling, virtual tap adapters, some routing and masquerading in iptables. The beauty of this method is that the prerequisites are very low: for the most part no additional packages or standalone tools are required, we can use what is shipped with most Linux builds.

There are many use cases for this: scanning an internal network without being on prem, cloud environments, and various pentesting scenarios, which can often be the stumbling point once a shell has been landed. Traditionally this type of task would have been done with proxychains, through some form of shell access via a netcat listener, Metasploit or an SSH dynamic port forward, which I have previously walked through here. However this is an extremely slow method and relies on being able to tunnel through a single port with proxychains; I have never had any luck scanning with more complex tools like Nessus this way. Full TCP connect scans (-sT) with nmap work great, Nessus not so much.
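For context, the dynamic port forward approach looks something like this, where user, pivot and the target range are placeholders and 9050 is the usual proxychains default port:

# ssh -D 9050 user@pivot

# proxychains nmap -sT -Pn -n 192.168.1.0/24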

Let's take the following scenario and set the pivot up:

Networking Pivoting via SSH

We can use tunctl or ip tuntap; the difference is that ip tuntap is part of the iproute2 suite and is therefore generally supported on most Linux operating systems. Tunctl can usually be downloaded from your repo of choice, e.g. on Ubuntu it's part of the apt repository. In this example we will be working with Kali as the scanning system and an Ubuntu server as the pivot point, which has SSH accessible. (It is worth mentioning at this point that it doesn't matter which end the SSH connection is initiated from.)

First we need to create a virtual tunnel, which means creating a virtual interface at each end of the tunnel. For this we are going to use a tap interface. For reference, a tap interface operates at layer 2 and a tun interface operates at layer 3.

Using tunctl: First we will need to install tunctl, which is part of the uml-utilities package:

# apt install uml-utilities

Create the virtual tap interface with the following command:

# tunctl -t tap0

Using ip tuntap: First verify that your installed version of iproute2 supports tuntap; type 'ip' and you will see whether the command is available:

# ip

Create the virtual tap interface with the following command:

# ip tuntap add dev tap0 mode tap

Once this is set up, assign it an IP address and raise the interface, using a different address for each end of the tunnel:

So on the scanner:

# ip a a 10.100.100.100/24 dev tap0 && ip link set tap0 up

On the pivot server:

# ip a a 10.100.100.101/24 dev tap0 && ip link set tap0 up

On each end of the tunnel we will also need to make sure our SSH config allows us to tunnel. Let's modify the /etc/ssh/sshd_config file by adding 'PermitTunnel yes' to the end and restarting the service. More about this option can be found in the sshd_config man page.
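On an Ubuntu pivot that can be done along these lines (on some distros the service is named sshd rather than ssh):

# echo 'PermitTunnel yes' >> /etc/ssh/sshd_config

# systemctl restart ssh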

Now for the magic: let's bring the tunnel up by establishing an SSH session. This will need to be done with a privileged account:

# ssh -o Tunnel=ethernet -w 0:0 root@11.1.1.11

Let's cover off these options:

  • -o = allows us to specify options
  • Tunnel=ethernet = requests a layer 2 (tap) tunnel rather than the default layer 3 tun device
  • -w 0:0 = specifies the local and remote tunnel device numbers (local:remote), here device 0 at each end; 'any' can be used instead to take the next available device.

Next let's take a moment to verify the tunnel is up with a couple of quick checks:

First verify the link is up with ethtool:

# ethtool tap0

You will notice the link is up; try this without the SSH connection established and you will find the link is down.

Second verify you can ping the other end of the tunnel:

# ping 10.100.100.101

Again, disconnect your SSH session and watch the ICMP responses stop.

Next, in order to get our traffic to the destination servers/subnets, we need to add some routes on Kali to tell the system to send the traffic via the other end of the tunnel. Something similar to this, with 192.168.1.0/24 being the network you are targeting:

# ip route add 192.168.1.0/24 via 10.100.100.101

# ip route add 192.168.2.0/24 via 10.100.100.101

Finally we need to set up some iptables rules and turn our pivot point into a router by enabling IPv4 forwarding:

# echo 1 > /proc/sys/net/ipv4/ip_forward

# iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

# iptables -t nat -A POSTROUTING -o tap0 -j MASQUERADE

# iptables -A INPUT -i eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT

# iptables -A INPUT -i tap0 -m state --state RELATED,ESTABLISHED -j ACCEPT

# iptables -A FORWARD -j ACCEPT
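Note that these rules and the ip_forward setting will not survive a reboot. If you need them to persist, iptables-save will capture the running ruleset; on Debian-based systems the iptables-persistent package will restore it from this location at boot:

# iptables-save > /etc/iptables/rules.v4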

At this point the pivot should be up and running; test this by doing some basic checks against a known host on your target network.
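For example, something along these lines from the Kali box, where 192.168.1.10 stands in for a host you know is up on the target subnet:

# ping -c 4 192.168.1.10

# nmap -sT -Pn 192.168.1.10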

Happy pivoting testers!

Self Signed Certificates + Remote Desktop Protocol = MiTM and Creds – This is a problem, don’t ignore it!

In this post I am going to highlight the risks of using self-signed certificates with Remote Desktop Protocol (RDP): why it's a problem and what we can do to fix it! Hopefully, by demonstrating the impact, it will raise awareness of how serious an issue this can actually be.

On an internal network the issue stems from the fact that when you connect over Remote Desktop to a computer or server that is using a self-signed certificate, you are not able to verify the endpoint's authenticity, i.e. that it is who it says it is.

Unfortunately we are all too familiar with the classic RDP certificate warning prompt like the one below, and most of the time we blindly click 'Yes, I accept', often without actually reading what the message is saying.

OK, let's see what all the fuss is about then. Let's consider the following devices in our lab:

DC16: 192.168.1.10 – Windows Server 2016 Domain Controller

WEB16: 192.168.1.52 – Windows Server 2016 Web Server

W10: 192.168.1.51 – Windows 10 Client

Kali: 192.168.1.50 – Kali Linux, our attacker.

The attacker can essentially sit on the same network and create a Man-in-The-Middle (MiTM) condition between the Windows 10 client and the web server when self-signed certificates are in use. Let's expand on the scenario slightly: imagine we have an admin logged in to our Windows 10 client who wants to investigate an issue on the web server, and so goes to establish a remote desktop session to it. Let's consider what can happen.

To demonstrate this attack we are going to use ‘Seth’ a tool to perform a MitM attack and extract clear text credentials from RDP connections. Code is located here: https://github.com/SySS-Research/Seth , you can find a more detailed talk about the tool here by its creator Adrian Vollmer https://www.youtube.com/watch?v=wdPkY7gykf4.

On our attacking machine we are going to start Seth:
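The invocation follows the pattern interface, attacker IP, victim IP and target/gateway IP, so for our lab it would be something like the below; check the project README for the exact syntax of your version:

./seth.sh eth0 192.168.1.50 192.168.1.51 192.168.1.52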

Meanwhile our admin is going about his/her daily tasks on our Windows 10 client, and then decides to connect to our web server via RDP:

The usual connection sequence takes place; the admin receives the all too familiar warning box and continues to establish the connection. Meanwhile, over on our attacking box, the connection has been intercepted and the MiTM attack carried out successfully. Seth has captured the NTLMv2 hash as well as the clear text credentials. Oh dear.

As you can see this is not an optimal configuration, and one which we would very much like to avoid. It can be avoided by using a signed certificate from your internal CA or another trusted certificate authority. Getting certificates installed on your devices isn't all that difficult; I discuss this further, with a link to a how-to, here. In addition, we can stop our clients from connecting to anything we don't trust via GPO. Remember we need to connect to our servers by name, not IP, as the IP address is not what is in the certificate's common name field and the connection will therefore be untrusted.

Well I hope this has helped demonstrate the impact of self-signed certificates and why they should be addressed on the inside.

Generating a certificate for a non-domain joined device using an internal AD CA – ie pfsense

I thought I would walk through the process of generating a certificate for a non-domain joined device using an internal Active Directory Certificate Services (AD CS) certificate authority. In this example it is going to be for the web GUI of a pfsense firewall. I've talked before about the challenges of self-signed certificates in this post, so thought this would be useful to further demonstrate how it can be done for other devices that are not joined to a domain. Like most things, if you have never set something like this up before, you won't necessarily know how to go about doing it. This post aims to fill that gap; hopefully you will see it isn't as difficult as it sounds.

For our lab we have AD CS set up and pfsense on the same network; pfsense is actually acting as the gateway for the network. It's a key piece of equipment that we want technical security assurance around, including being able to validate that when we connect to the device for management it is who we think it is, and importantly who it says it is, so that we are not in a position to be caught by a man-in-the-middle attack!

Let's start on the pfsense web configurator page:

As we can see this is using a self-signed certificate and is therefore untrusted. So we want a certificate on our firewall that is signed by a trusted certificate authority, ideally one that is already in our root certificate store. If you have an internal AD CS, the root CA certificate will most likely be there already.

Typically with a network device such as this we first want to generate a Certificate Signing Request (CSR) to then take to our CA to be signed. You can usually achieve this via a shell session to the device or, in most cases, through the web GUI. Whilst the steps I'm going through are specific to pfsense, the concept is the same for all devices. With pfsense we are able to do this here:

We can see in the above screenshot the self-signed certificate that comes with the device. To start the process we click on the green Add/Sign button at the bottom. As you can see below, the method we want to use is 'Create a Certificate Signing Request'.

Continue down the page adding all the relevant info. Three key areas to take note of are the 'Common Name', the 'Alternative Names' and selecting 'Server Certificate' for the certificate type. These are important as this is how we will identify the authenticity of the device. The 'Common Name' is effectively its short name, and under 'Alternative Names' we will want to add the Fully Qualified Domain Name (FQDN). In this case I'm naming the firewall FW1, and jango.com is the domain name 🙂 .

Once at the end of the page, select save and you should see our certificate request in a pending state; the screen should look like this:

Next export the CSR, download it to your local machine and open it in notepad. Highlight the text and copy it to your clipboard for later. The file should look like this:

Next we are going to find our way to the AD CS certificate enrollment web page. This is commonly the CA name followed by '/certsrv/default.asp'; in my lab the CA is held on the DC, so it will be http://DC16/certsrv/default.asp, just like below:

Next we select ‘Request a certificate’:

Here we don't have many options as this is a fairly default install of the certificate services, however select 'Advanced Certificate Request'. On the next screen, as below, paste the CSR into the request window, select the default 'Web Server' template from the 'Certificate Template' drop down menu and click submit:

Next we have the opportunity to download the signed certificate in various formats.

In this instance we are going to download the certificate in Base 64 encoded format. Open up the certificate file in notepad, highlight the contents and copy it to the clipboard; it should look like this:

Next we go back to the pfsense web GUI and complete the certificate signing request from the certificate page, under 'System' –> 'Certificate Manager' –> 'Certificates'. We do this by selecting the update CSR button, pasting the contents of the certificate into the 'Final Certificate data' field like below, and selecting 'update':

The certificate will be loaded and will look like this:

As we can see from the above screenshot, our Subject Alternative Names are listed as FW1 and FW1.jango.com, meaning that when we access the page with these names the connection will be validated correctly. Access it via IP address instead and the browser will warn us that it has not been able to validate the endpoint and the connection is therefore untrusted.

Next we change how the certificate is used; essentially we are binding it to port 443, the web GUI itself. We do this in System –> Advanced –> Admin Access: select the descriptive name we gave the certificate earlier and click 'Save' at the bottom of the page:

Next, reload the web GUI page using your common name or subject alternative name. At this point bear in mind you most likely will need a manual DNS entry for FW1, so head over to the DNS console and quickly create one. Once you have done that, reload the pfsense web GUI, and hey presto!
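As an aside, if the DNS role lives on the domain controller, the record can also be added from an elevated prompt with dnscmd; the IP below is a placeholder for your firewall's address:

dnscmd DC16 /RecordAdd jango.com FW1 A 192.168.1.1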

and..

Now we have a certificate signed by our internal AD CA and can verify what we are connecting to is actually correct.

I hope this has helped demystify the process of obtaining an internally signed certificate from our AD CA for the weird and wonderful non-domain joined devices we have on the network.

LocalAccountTokenFilterPolicy accessing the C$ with a local account

Just a quick post on the LocalAccountTokenFilterPolicy setting: what it is and why we have it. As a pentester and administrator of Windows systems I'm bumping into this all the time. The classic scenario is that you're trying to access the C$ of a machine with a local account and being blocked. You check all the usuals (firewall, creds etc) and are banging your head against a brick wall. It's more than likely the Remote User Account Control (UAC) LocalAccountTokenFilterPolicy setting in Windows that is stopping you.

Whether UAC access token filtering takes effect depends on the type of account you are connecting with: it does not affect domain accounts in the local Administrators group, only local accounts. Even if the local account is in the Administrators group, UAC filtering means the action being taken runs as a standard user until elevated. Think of when you launch CMD or PowerShell logged in with an admin account: it runs in the context of a standard user until you elevate or relaunch as administrator. So when we try to connect to the C$ with a local account that is in the Administrators group, we are blocked by UAC. Setting LocalAccountTokenFilterPolicy to disable the filtering will allow us to connect.

When the Remote User Account Control (UAC) LocalAccountTokenFilterPolicy value is set to 0, remote UAC access token filtering is enabled; when it is set to 1, the filtering is disabled and remote connections receive a full administrator token. We can set this with the following one liner:

cmd /c reg add HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\system /v LocalAccountTokenFilterPolicy /t REG_DWORD /d 1 /f
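To confirm the value has taken, query it back:

cmd /c reg query HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\system /v LocalAccountTokenFilterPolicy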

The same applies to running a credentialed or authenticated Nessus scan with a local account that is part of the Administrators group. To enumerate the system, Nessus will connect to the C$, and it will fail unless LocalAccountTokenFilterPolicy is set to 1. There are other prerequisites, however those are out of scope of this post.

Well hope this helps!

Problems with VirtualBox Guest Additions in Kali – Quick Tip!

Problems with VirtualBox Guest Additions in Kali. This post serves more as a reminder to myself, however it might also help others troubleshooting problems with VirtualBox Guest Additions. I think it's safe to say I use VirtualBox a lot; I will be lucky if a day goes by where I'm not in VirtualBox using a VM. I therefore use and rely on Guest Additions working correctly: features such as mapped drives back to the host, USB passthrough and display options are all useful, to name a few. It can be frustrating when Guest Additions breaks and a mapped network drive disappears or your display shrinks. Once all the usual troubleshooting checks have been done I normally move on to VirtualBox Guest Additions. It's also fair to say I have the most issues with Kali. This might be because it's the VirtualBox VM I use the most, or because Kali 2.0 is rolling and thus based on Debian Testing, so we have a multitude of updates happening constantly, both to packages and the underlying operating system. The following is my usual check list, and 99% of the time it sorts out the issue for Kali 2:

  1. Update VirtualBox to the latest version.
  2. Then update VirtualBox Guest Additions to the latest version as well.
  3. Update Kali: apt-get update && apt-get upgrade -y
  4. Update Kali: apt-get dist-upgrade -y
  5. Update the kernel headers: apt-get update && apt-get install -y linux-headers-$(uname -r)
  6. Re-install VirtualBox Guest Additions directly from VirtualBox:

cp -r /media/cdrom0/ /tmp/
cd /tmp/cdrom0

Make the VBoxLinuxAdditions.run file executable:

chmod u+x VBoxLinuxAdditions.run

Install it:
./VBoxLinuxAdditions.run

If successful, your output should be similar to this:

VirtualBox Guest Additions Installation Kali

If you get errors during the installation, work back through them, ensuring the above steps have been executed and completed successfully.

Well I hope this helps someone out in a jam.

Automatic Updates in Ubuntu Server 18.04.1 LTS with Apt and unattended-upgrades package

In this post we look at how we can automate security updates and package upgrades for an Ubuntu Server 18.04.1 LTS, including scheduled reboots. Automatic updates in Ubuntu Server are a real win.

This is a fairly straightforward affair. We will be working with the unattended-upgrades package, which can be used to automatically install updates to the system. We have granular control, being able to configure updates for all packages or just security updates, blacklist packages, send notifications and auto reboot. A very useful set of features.

Let's look at the main configuration file /etc/apt/apt.conf.d/50unattended-upgrades.

A couple of key lines in this file will want our attention, depending on what type of updates you want to automate. If you know the software that runs on the server well enough, and depending on the criticality of the service it provides, you have the following options for the type of updates to automate; uncommenting (removing the '//' from) the various lines will enable those types of updates:

Unattended-Upgrade::Allowed-Origins {
        "${distro_id}:${distro_codename}";
        "${distro_id}:${distro_codename}-security";
//      "${distro_id}:${distro_codename}-updates";
//      "${distro_id}:${distro_codename}-proposed";
//      "${distro_id}:${distro_codename}-backports";
};

The next section of the file dictates which packages should not be upgraded, i.e. if you have a certain set of dependencies and don't want the software to upgrade due to compatibility issues, list them here:

// List of packages to not update (regexp are supported)
Unattended-Upgrade::Package-Blacklist {
// "vim";
// "libc6";
// "libc6-dev";
// "libc6-i686";
};

To get notifications for any problems or package upgrades add your email address to the below section:

// Send email to this address for problems or packages upgrades
// If empty or unset then no email is sent, make sure that you
// have a working mail setup on your system. A package that provides
// 'mailx' must be installed. E.g. "user@example.com"
Unattended-Upgrade::Mail "email@email.com";

The next two sections dictate when the server should be rebooted, without confirmation. Here we have the Unattended-Upgrade::Automatic-Reboot option set to 'true', and Unattended-Upgrade::Automatic-Reboot-Time set to '02:00' am.

// Automatically reboot *WITHOUT CONFIRMATION*
// if the file /var/run/reboot-required is found after the upgrade
Unattended-Upgrade::Automatic-Reboot "true";

// If automatic reboot is enabled and needed, reboot at the specific
// time instead of immediately
// Default: "now"
Unattended-Upgrade::Automatic-Reboot-Time "02:00";

To then enable the automatic updates, edit the file /etc/apt/apt.conf.d/20auto-upgrades, creating it if it doesn't exist, and add the text below. The frequency of each step is dictated by the number in quotes next to it: everything with a 1 will happen every day, and the 7 represents once a week.

APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";
APT::Periodic::AutocleanInterval "7";

A couple of useful log files to keep an eye on: /var/log/unattended-upgrades/unattended-upgrades.log will give you information about the updates and whether a reboot is required, while /var/log/unattended-upgrades/unattended-upgrades-shutdown.log covers the shutdown side. Issuing the command 'last reboot' will also give you information about any restarts that have happened.
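For example, to watch an upgrade run as it happens and then review recent restarts:

tail -f /var/log/unattended-upgrades/unattended-upgrades.log

last reboot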

I hope this has been helpful and keeps you current with your patching!

Linux Password Policy – Using PAM, pam_unix and pam_cracklib with Ubuntu Server 18.04.1

Linux password policy is often overlooked. This post aims to raise awareness of how we can up our game in terms of password complexity for Linux systems. Setting up password complexity in Linux, specifically Ubuntu Server 18.04.1, is achieved through Pluggable Authentication Modules (PAM). To authenticate a user, an application such as ssh hands the authentication off to PAM to determine whether the credentials are correct. There are various modules that can be modified within PAM to set up aspects like password complexity, account lockout and other restrictions. We can check what modules are installed by issuing:

sudo man -k pam_

By default Ubuntu requires a minimum of 6 characters. This is controlled by the module pam_unix, which is used for traditional password authentication and is configured on Debian/Ubuntu systems in the file /etc/pam.d/common-password (on RedHat/CentOS systems it's /etc/pam.d/system-auth). Modules work in a rule/stack manner, processing one rule then another depending on the control arguments. A certain amount of configuration can be done in the pam_unix module, however for more granular control there is another module called pam_cracklib. This allows for all the specific control that one might want for a secure, complex password.

A basic set of requirements for password complexity might be:

A minimum of one upper case
A minimum of one lower case
A minimum of one digit
A minimum of one special character
A minimum of 15 characters
Password History 15

Let's work through how we would implement this on a test Ubuntu 18.04.1 server. We will use pam_cracklib, a pluggable authentication module which can be used in the password stack. pam_cracklib checks passwords against specific criteria, based on default values and what you specify. For example, by default it will run the password through a routine to see if it is part of a dictionary, then go on to check the specifics you may have set, like password length.

First let's install the module; it is available in the Ubuntu repository:

sudo apt install libpam-cracklib

The install process will automatically add a line to the /etc/pam.d/common-password file that is used for the additional password control. I've highlighted it below:

Password complexity in Linux

We can then further modify this line for additional complexity. Working on the above criteria we would add:

ucredit=-1 : A minimum of one upper case
lcredit=-1 : A minimum of one lower case
dcredit=-1 : A minimum of one digit
ocredit=-1 : A minimum of one special character
minlen=15 : A minimum of 15 characters.

Note that a negative value such as -1 means the password must contain at least that many characters of the given class (a positive value would instead grant 'credit' towards the minlen length requirement). There is nothing to stop you increasing this; for example ocredit=-3 would require the user to include 3 special characters.

Password history is actually controlled by pam_unix so we will touch on this separately.

Default values that get added are:

retry=3 : Prompt user at most 3 times before returning an error. The default is 1.
minlen=8 : A minimum of 8 characters.
difok=3 : The number of characters in the new password that must differ from the old password.

Our new arguments would be something like this:

password requisite pam_cracklib.so retry=3 minlen=15 difok=3 ucredit=-1 lcredit=-1 dcredit=-1 ocredit=-1

For password history first we need to create a new file for pam_unix to store old passwords (hashed of course). Without this password changes will fail.

touch /etc/security/opasswd
chown root:root /etc/security/opasswd
chmod 600 /etc/security/opasswd

Add 'remember=15' to the end of the pam_unix line and you're done, at least for now. Both lines should look like this:
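For reference, reconstructed from a default Ubuntu 18.04 common-password file (the control field on the pam_unix line may differ slightly on your system), the two lines would read:

password requisite pam_cracklib.so retry=3 minlen=15 difok=3 ucredit=-1 lcredit=-1 dcredit=-1 ocredit=-1
password [success=1 default=ignore] pam_unix.so obscure use_authtok try_first_pass sha512 remember=15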

These changes are instant, no need to reboot or restart any service.

Now all that is left to do is test your new password policy. Whilst this provides good password complexity, I would always suggest you use a public/private key pair for SSH access and disable password authentication for that service entirely.

I hope this helps.

Traffic Shaping in Linux – controlling your bandwidth

In certain scenarios whilst pentesting there may be a requirement to control the bandwidth coming from your testing device, otherwise known as traffic shaping. In this post I will walk through how we can do traffic shaping in Linux, in a few different ways, some better than others. All testers should be accountable for the amount of traffic they generate while testing, so it is always a good idea to log and monitor the amount of traffic you are sending and receiving. I will typically do this with 'iftop', which I open before sending any traffic.
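iftop is available in most distros' repositories and is simply pointed at the interface you are working with:

iftop -i eth2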

iftop looks like this:

Here we can see sent, received and total accumulation in the bottom left. In the bottom middle are the peak rates, and over on the right hand side we can see the transmission rates over 2, 10 and 40 second intervals. A couple of useful toggle keys while iftop is open: 'h' for help, 'p' to display ports, and 's' and 'd' to hide/show source and destination.

On to the traffic shaping. In most Linux distros tc (traffic control) is available; this can be used to configure traffic manipulation at the Linux kernel level. tc is packaged with iproute2, the shiny new(ish) tool set for configuring networking in Linux.

In my view tc is reasonably complex to configure if you simply need to reduce your bandwidth on an interface. Enter Wondershaper. Wondershaper allows you to limit your bandwidth in a simple manner, and it does this using tc under the hood. Wondershaper is available through the apt repository where apt is being used.
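For comparison, even a bare-bones rate cap done directly with tc means adding a queueing discipline. A minimal sketch using a token bucket filter on the egress of eth2 looks like this (figures illustrative; the second command removes the cap again):

tc qdisc add dev eth2 root tbf rate 10mbit burst 32kbit latency 400ms

tc qdisc del dev eth2 root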

You can limit your traffic on an interface to 10Mbps upload and download like below; values are in kilobits per second.

wondershaper [interface] [downlink] [uplink]

wondershaper eth2 10000 10000

To clear the limits set:

wondershaper clear

To see the limits set use:

wondershaper eth2

Testing…

Using iPerf we can test the bandwidth reduction by Wondershaper. The setup I am using for this test is two virtual machines, each with a cheap physical USB 10/100 Ethernet adapter passed through, physically connected via an Ethernet cable. Interfaces are set to 100 Full. Running iperf with no restrictions gives us the following results:
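For reference, the iperf invocations behind these tests are just the classic server/client pair, with 192.168.10.1 standing in for the server VM's address:

iperf -s

iperf -c 192.168.10.1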

I’m not surprised by the 55.5Mbits/sec rate.

Throttling our connection to 10Mbits/sec with wondershaper:

Great, we see a distinct change in bandwidth, running consistently just under 10 Mbits/sec across the 10 seconds.

Throttling the connection further to 1 Mbit/sec:

And again we see our bandwidth dropping further to less than 1Mbit/sec.

Other ways I have seen offered up as solutions are turning auto-negotiation off and setting your link speed and duplex manually. However, I would argue this is not traffic shaping. It may work in certain circumstances, but I have had mixed success with virtual machines, and it doesn't give you the granular control of tc and Wondershaper.

Conclusion: a very useful tool for controlling your bandwidth in Linux. For a quick fix use Wondershaper; for more granular control dive in and configure tc manually.