Serverless Domain Hunting: Track Newly Registered Domains With Ease

There are plenty of options when it comes to automating Python scripts. You can use cron jobs if you're on *nix, scheduled tasks on Windows, or native libraries like schedule and APScheduler. Tracking newly registered domains, however, requires constant uptime and leaves little room for the errors that creep in when you schedule code by hand or when an issue in your development environment causes delays.

In this post, we’ll leverage AWS serverless architecture and show you how to stay ahead of adversaries and keep an eye on emerging domains—with little effort and without breaking the bank.

Domain Hunting?

Advanced threat actors and cybercriminals are just like us (humans): they thrive on taking the path of least resistance that still gets the job done. This is no different when it comes to domain registration. Many bad actors leave behind patterns and tendencies in how they register domains. These patterns likely stem from attempts to optimize operations for speed and from the old adage, “if it’s not broken, don’t fix it.”

Since most of us cannot access high-priced services from popular security vendors, creating our own hunting tool offers the advantage of tailoring the solution to our specific needs and interests, enabling more precise and targeted detection of potential threats. We can easily create our own database of keywords or other criteria we are interested in to uncover attackers.

Why AWS?

We can use AWS Lambda functions and a few lines of code in our favorite programming language to monitor domains at scale without worrying about managing infrastructure. With Lambda, you only pay for what you consume, so you can start thinking of the many tasks you could automate with a few clicks and retrieve results in near real time.

With a few other AWS services like EventBridge, Simple Notification Service, and S3, we can create a recurring schedule for our code to run, receive notifications on the status of the domain tracker, and save our results with near-unlimited storage space.

Setup

This post assumes you have an active AWS account (IAM user, not root) and know your way around the dashboard. If not, please first read up on some essential prerequisites.

With that out of the way, let’s start by creating and configuring two Lambda functions: one to download a list of newly registered domains, and one to pick out only the domain names of interest to us.

Info on Lambda functions: https://aws.amazon.com/lambda/

First lambda function to download new domains list

Our first function, “newly-reg-domains”, is rather simple but accomplishes the following:

  1. Import required libraries and set our AWS constants that will allow us to upload our file to an S3 Bucket.
  2. Set the base URL for the WHOIS Database Service (WhoisDS), which is where the list of newly registered domains resides.
  3. The daily list consists of domains registered over the previous day. The first function in the code calculates the date two days prior, then base64-encodes that date and the “.zip” file extension to build the request URL.
  4. Send an HTTP GET request to the WhoisDS URL along with our encoded date to grab the zip file.
  5. Unzip and decode the file in memory.
  6. Write the domains as a text file to our S3 bucket, “new-domains-landing”.
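Putting those steps together, a minimal sketch of the first function might look like the following. The bucket name and output key format are assumptions to adjust for your setup, and the WhoisDS URL pattern shown is how the free list was served at the time of writing and may change:

```python
import base64
import datetime
import io
import zipfile

# Assumed names -- substitute your own bucket.
BUCKET = "new-domains-landing"
BASE_URL = "https://www.whoisds.com/whois-database/newly-registered-domains/"


def encoded_date(days_back: int = 2) -> str:
    """Base64-encode '<YYYY-MM-DD>.zip' for the WhoisDS download URL."""
    day = datetime.date.today() - datetime.timedelta(days=days_back)
    return base64.b64encode(f"{day:%Y-%m-%d}.zip".encode()).decode()


def lambda_handler(event, context):
    # Lazy imports: boto3 ships with the Lambda runtime; requests must be
    # bundled in the deployment package or attached as a layer.
    import boto3
    import requests

    resp = requests.get(f"{BASE_URL}{encoded_date()}/nrd", timeout=30)
    resp.raise_for_status()
    # The archive holds a single text file of newline-separated domains.
    with zipfile.ZipFile(io.BytesIO(resp.content)) as zf:
        domains = zf.read(zf.namelist()[0]).decode()
    key = f"domains-{datetime.date.today():%Y-%m-%d}.txt"
    boto3.client("s3").put_object(Bucket=BUCKET, Key=key, Body=domains)
    return {"statusCode": 200, "body": key}
```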

Second lambda function using fuzzy matching

The second function uses our initial S3 bucket as a trigger (more on this below) to execute the code. Basic workflow of the function:

  1. Whenever a file is uploaded to the landing bucket, the code is executed.
  2. Create an S3 client session and name source and destination buckets.
  3. Download only the most recent file from the bucket, and give it a file name.
  4. Open the file and match domains against a set of keywords using the RapidFuzz library.
  5. Write only the files that matched to our second bucket, “new-domains-analyzed”.
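A sketch of the second function is below. The post's version uses RapidFuzz; this sketch swaps in the standard library's difflib.SequenceMatcher so it runs without extra dependencies (RapidFuzz's fuzz.ratio is a faster drop-in, scored 0–100 rather than 0–1). The keyword list and threshold are placeholders:

```python
import difflib

# Hypothetical watchlist -- replace with your own keywords of interest.
KEYWORDS = {"paypal", "microsoft", "netflix"}


def match_domains(domains, keywords=KEYWORDS, threshold=0.8):
    """Return domains whose first label fuzzily matches a watched keyword."""
    hits = []
    for domain in domains:
        label = domain.split(".")[0].lower()
        if any(
            difflib.SequenceMatcher(None, label, kw).ratio() >= threshold
            for kw in keywords
        ):
            hits.append(domain)
    return hits


def lambda_handler(event, context):
    import boto3  # provided by the Lambda runtime

    s3 = boto3.client("s3")
    # The S3 trigger delivers the bucket and key of the newly uploaded object.
    record = event["Records"][0]["s3"]
    bucket, key = record["bucket"]["name"], record["object"]["key"]
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode()
    hits = match_domains(body.splitlines())
    s3.put_object(Bucket="new-domains-analyzed", Key=key, Body="\n".join(hits))
```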

Info on S3 buckets: https://aws.amazon.com/s3/

The next step is relatively easy: create two buckets, one to receive the initial list of domains and one for the domain names found to match our search criteria.

S3 buckets

Creating a trigger that executes lambda code is as easy as navigating to the configuration tab above your function and clicking on the “Triggers” tab to the left.

Creating a trigger for our lambda function

We now have two lambda functions that can download a complete list of newly registered domains and use fuzzy string matching to extract domain names of interest. In addition, we also have two S3 buckets to store the results files and a trigger that waits for files to be uploaded before executing code.

We still don’t have a way to schedule our first piece of Python code to run and reach out to the database service. We’ll change this by using Amazon EventBridge.

Info on EventBridge: https://aws.amazon.com/eventbridge/

EventBridge is a serverless service that lets us create a schedule to invoke our code at set intervals and a rule that matches events for processing.

EventBridge cron schedule

For this project, I decided to use a cron-style schedule that targets our newly-reg-domains lambda function and executes it within 5 minutes of the selected time.
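The post sets this up in the console; an equivalent sketch with the AWS CLI (EventBridge Scheduler) might look like the following. The schedule name, region, account ID, and role ARN are placeholders, and the flexible time window is what gives the "within 5 minutes" behavior:

```shell
# Run daily at 09:00 UTC, invoking the function within a 5-minute window.
aws scheduler create-schedule \
  --name newly-reg-domains-daily \
  --schedule-expression "cron(0 9 * * ? *)" \
  --flexible-time-window '{"Mode": "FLEXIBLE", "MaximumWindowInMinutes": 5}' \
  --target '{"Arn": "arn:aws:lambda:us-east-1:123456789012:function:newly-reg-domains",
             "RoleArn": "arn:aws:iam::123456789012:role/scheduler-invoke-role"}'
```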

EventBridge rule

Rules in EventBridge wait for events to happen, in our case, object creation in the new-domains-landing bucket.

We now have a set schedule for our code to run, with results saved securely and no worries about taking up disk space. Let’s now add a way to be notified by email that our code has run without any errors.

Info on Simple Notification Service (SNS): https://aws.amazon.com/sns/

SNS Topic for email notification
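Once a topic exists and an email address is subscribed to it, publishing a status message from either Lambda function takes only a few lines. A sketch, where the topic ARN and message format are assumptions:

```python
def build_status_message(key: str, matched: int, total: int) -> str:
    """Summarize a run for the notification email body."""
    return (
        "Domain tracker run complete.\n"
        f"File: {key}\n"
        f"Matched {matched} of {total} domains."
    )


def notify(topic_arn: str, message: str) -> None:
    """Publish the summary to an SNS topic; email subscribers receive it."""
    import boto3  # provided by the Lambda runtime

    boto3.client("sns").publish(
        TopicArn=topic_arn,
        Subject="Newly registered domains tracker",
        Message=message,
    )
```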

If you are like me, you probably think, “this is great, but it’s a pain to log in to AWS every day and download files from buckets.” I couldn’t agree more. Finally, we’ll set up further automation using GitHub Actions to download the most recent fuzzy-matched file from our bucket and push it to a repo.

Info on GitHub Actions: https://github.com/features/actions

We first need to create a workflow file to retrieve the file from the bucket. This file needs to be placed in your repository at “.github/workflows/s3_download.yml”.

GitHub Actions workflow file
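A sketch of such a workflow is below. The schedule, bucket region, and output directory are assumptions, and the two AWS secrets must be defined under your repository's settings:

```yaml
# .github/workflows/s3_download.yml -- a sketch; adjust names to your setup.
name: Download matched domains
on:
  schedule:
    - cron: "30 9 * * *"   # daily, shortly after the Lambda runs
  workflow_dispatch:        # allow manual runs too
permissions:
  contents: write           # needed for the git push step
jobs:
  download:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-python@v4
        with:
          python-version: "3.11"
      - run: pip install boto3
      - name: Download latest file from S3
        run: python .github/workflows/scripts/s3_download.py
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          AWS_DEFAULT_REGION: us-east-1
      - name: Commit and push
        run: |
          git config user.name "github-actions"
          git config user.email "actions@users.noreply.github.com"
          git add results/
          git commit -m "Add latest matched domains" || echo "Nothing new"
          git push
```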

This file will set up an environment to grab the file from the bucket on a set schedule. After a successful download, the file will be committed and pushed to our current working repository. Below is the Python file, which does the bulk of the work, placed at “.github/workflows/scripts/”.

Python script to download file from S3 bucket

Result of the GitHub Actions workflow file:

Domain name file

Conclusion

By understanding the patterns and tendencies of adversaries bent on causing chaos in networks, we can quickly develop real-time strategies that allow us to identify and block suspicious domains before they are used against organizations.

Using AWS Lambda and other serverless applications can make it extremely easy to create, deploy, and maintain a highly available domain monitoring system uniquely tailored to our areas of interest.

You can do more with these lists from here (add machine learning to identify suspicious domain names, automatically add typosquatting domains to a firewall block list, etc.). Still, more than enough has been covered to inspire readers to explore additional serverless hunting opportunities.

Hope you enjoyed this brief post.

