
Anatomy of a Breach: Dissecting Insider Data Theft

Last week, news outlets lit up when a healthcare firm notified many thousands of its customers of a recent data breach. While the specifics have not been made public, it appears to be a clear case of a malicious insider threat: reportedly, an employee with access to customer information copied that data and shared it with third parties.

Dtex hasn’t worked with this company directly, so we can’t speculate too much about the details of this particular event, beyond the information available to the public. However, we have seen plenty of cases that are similar to this one — and the trend is growing more pervasive. Several similar stories have hit the news in recent years. Insider data theft is a real enough threat that CISOs around the world talk to us every day about avoiding this exact scenario.

It’s important to note that these acts do tend to play out in surprisingly predictable patterns. In fact, every time we see situations like this, we also see a fairly consistent series of behavioral events.

The key to stopping these events is to use that consistent pattern of behavior for detection.

Looking for the Signs: The Insider Threat Kill Chain

Many organizations expend a lot of energy trying to make sure they can catch data exfiltration directly — the moment a user hits "transfer," the green upload bar starts in Dropbox, the minute a USB device is ejected. And yes, it is important to be looking for these things, but they're only one piece of the puzzle. More importantly, once you reach that step, the data has already left.

Insiders who are planning to steal data tend to follow a certain set of steps leading up to the data theft, and these steps offer the most reliable avenue for detecting and stopping events before data leaves your organization.

We call this the insider threat kill chain, and this is how it works:

Step 1: Reconnaissance

The malicious actor looks for what they’re going to steal. They start perusing network folders that they don’t normally look at, or they begin researching methods for getting data out of the organization.

In essence, this is the “preparation” stage.

Typical signs of reconnaissance include…

- Accessing a new or unusual location in a document repository
- An unusual increase in error or "access denied" messages
- Failed attempts to mount USB devices and access external websites
- An unusually rapid rate of opening files in a short period of time
- Network scanning and use of network tools
- Running applications they've never run before, especially administrative applications
- An increase in access to sensitive documentation
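The common thread in these signals is deviation from a user's personal baseline. As a minimal sketch (not DTEX's actual detection method), a per-user z-score over daily activity counts can flag the kind of spike described above:

```python
from statistics import mean, stdev

def is_anomalous(history, today, z_threshold=3.0):
    """Flag today's activity count if it deviates sharply from the
    user's own historical baseline (a simple z-score test)."""
    if len(history) < 2:
        return False  # not enough history to establish a baseline
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return today != mu  # flat baseline: any change stands out
    return (today - mu) / sigma > z_threshold

# A user who normally opens ~20 files a day suddenly opens 400.
baseline = [18, 22, 19, 25, 21, 20, 23]
print(is_anomalous(baseline, 400))  # True
print(is_anomalous(baseline, 24))   # False
```

Real systems model many signals at once, but even this toy version shows why a personal baseline matters: 400 file opens is alarming for this user, while it might be routine for a build server.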

Step 2: Circumvention

This is when the malicious actor takes steps to evade existing security mechanisms to gain access to the sensitive data. Employees aren’t stupid — they know you have all kinds of cybersecurity solutions in place to stop them from doing what they’re about to do. Experienced users may even know how those solutions are misconfigured.

So, naturally, that means they’ll have to bypass or sneak through the cracks in any security controls you have in place before carrying out their plan.

In many cases we see, research can be a big giveaway — like when users start googling phrases such as, “How to disable CarbonBlack” or “How to bypass the corporate firewall” or “Unblocked versions of Dropbox.”

Sometimes, circumvention can be as simple as a user taking their laptop off the corporate network to a coffee shop or another public network in order to avoid perimeter security.

On the other end of the spectrum, we've also seen IT-savvy users employ more technical methods like proxy servers, portable applications, or anonymous VPN connections.

Here are a few examples of the types of user activity that indicate circumvention:

- Use of tools like Tor ("The Onion Router"), VPNs, and proxy servers to engage in anonymous internet activity
- File transfers through instant messaging or remote support tools to evade DLP restrictions
- Use of virtual machine environments to conceal endpoint activities
- Sharing information online, whether through copy/paste sites like Pastebin, communities like Reddit, or social networks like Facebook or LinkedIn
- Disabling or bypassing security software, or researching how to do so
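Research queries like the ones quoted earlier are simple to screen for. Here is a toy keyword-pairing check — the term lists are illustrative examples, not a real product rule:

```python
# Illustrative watch lists; a real deployment would use curated, larger sets.
ACTION_TERMS = ("disable", "bypass", "unblocked")
SECURITY_TERMS = ("carbonblack", "firewall", "dropbox", "dlp", "proxy")

def flag_search_query(query):
    """Flag a search query that pairs an evasion verb with the name
    of a security control or restricted service."""
    q = query.lower()
    return any(a in q for a in ACTION_TERMS) and any(s in q for s in SECURITY_TERMS)

print(flag_search_query("How to disable CarbonBlack"))             # True
print(flag_search_query("Unblocked versions of Dropbox"))          # True
print(flag_search_query("firewall configuration best practices"))  # False
```

Requiring both an evasion term and a security term keeps ordinary administrative research from tripping the rule.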

Step 3: Aggregation

This is when the malicious actor collects all of the data that they’re about to steal in one place — when, for example, they drag everything to one folder, or compress the archive, or pull all of the data onto their computer from the corporate network.

The aggregation step underscores why you need to be looking for unusual employee behavior based on their personal baseline. It’s unrealistic to block all file movement in your organization. Even if you’re trying to strictly police it, your employees are probably suffering — not because they’re trying to steal, but because they’re trying to get their jobs done.

We also see employees stealing data they already have legitimate access to for their jobs — which means that simply blocking employees from moving or modifying data unrelated to their role isn't enough to stop the potential for data theft.

The key here is to know immediately when a user is copying or moving files in a number, type, or quantity unlike their normal day-to-day operations. You should be looking for things like:

- Unusual amounts of file copies, movements, and deletions
- Unusual amounts of file activity in high-risk locations and sensitive file types
- Unusual creation of files that are all exactly the same size
- Unusual transfer of files from numerous disparate locations
- Saving files to an unusual location on a user's endpoint
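The "files that are all exactly the same size" signal, for instance, is the fingerprint of a multi-part archive staged for exfiltration. A minimal sketch of spotting such a run (the paths and threshold are made up for illustration):

```python
from collections import Counter

def identical_size_runs(file_events, min_run=5):
    """Return file sizes that recur suspiciously often among newly
    created files -- typical of a split archive (part1.rar ... partN.rar)."""
    size_counts = Counter(size for _path, size in file_events)
    return [size for size, n in size_counts.items() if n >= min_run]

# Nine 100 MB parts appear in a staging folder alongside normal work files.
events = [(f"C:/Users/a/staging/part{i}.rar", 104_857_600) for i in range(9)]
events.append(("C:/Users/a/report.docx", 48_213))
print(identical_size_runs(events))  # [104857600]
```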

Step 4: Obfuscation

Malicious users commonly try to cover their tracks before they actually remove the data from the organization (or afterwards, depending on the specifics).

Sometimes, they do this with actions as simple as renaming files – for example, changing “Customer Information.zip” to “Vacation Pics.pdf”, or something equally innocuous.

Other times, we see users go through more extreme measures, like disabling security tools altogether (particularly if they’re super users or administrators).

However, if you know what to look for, this can be a giveaway.

Here are a few things Dtex monitors:

- Unusual rates and sizes of file compression
- Clearing cookies and Event Viewer logs, or unusual use of private browser modes
- Hiding sensitive information in image, video, or other misleading file types (e.g., steganography using the Alternate Data Stream)
- Unusual rates of file renaming, especially from a sensitive file type to an innocuous file type
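The rename pattern in the example above ("Customer Information.zip" becoming "Vacation Pics.pdf") reduces to an extension check. A minimal sketch, with assumed example extension sets rather than any real policy:

```python
import os

# Assumed example sets; a real policy would be broader and configurable.
SENSITIVE_EXTS = {".zip", ".rar", ".csv", ".sql", ".db"}
INNOCUOUS_EXTS = {".pdf", ".jpg", ".png", ".mp4"}

def suspicious_rename(old_name, new_name):
    """Flag a rename that recasts a sensitive file type as an innocuous one."""
    old_ext = os.path.splitext(old_name)[1].lower()
    new_ext = os.path.splitext(new_name)[1].lower()
    return old_ext in SENSITIVE_EXTS and new_ext in INNOCUOUS_EXTS

print(suspicious_rename("Customer Information.zip", "Vacation Pics.pdf"))  # True
print(suspicious_rename("draft_v1.docx", "draft_v2.docx"))                 # False
```

A single rename like this is weak evidence on its own; it's the unusual *rate* of such renames, against the user's baseline, that makes the signal reliable.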

Step 5: Exfiltration

This is the moment every CISO dreads: the actual data theft occurs, and the stolen information leaves your organization. Ideally, you would have caught the malicious actor before they reach this point, but unfortunately, we've all seen enough to know that's not always the case.

Detecting the data exfiltration itself is fairly straightforward: you’re looking for any unusual movement of files off of your endpoint or network, whether that be through removable devices, cloud services, email, IM, or any other of the numerous transfer methods available today.
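As a sketch of that last line of defense (channel names and the 10x threshold are illustrative assumptions, not product defaults), an outbound transfer can be compared against the user's normal daily volume:

```python
def flag_outbound_transfer(channel, transferred_bytes, baseline_daily_bytes, ratio=10):
    """Flag an outbound transfer on a risky channel that dwarfs the
    user's normal daily volume. Thresholds here are illustrative."""
    risky_channels = {"usb", "cloud", "email", "im"}
    return channel in risky_channels and transferred_bytes > ratio * baseline_daily_bytes

# A user who normally moves ~50 MB a day suddenly pushes 5 GB to a cloud drive.
print(flag_outbound_transfer("cloud", 5_000_000_000, 50_000_000))  # True
print(flag_outbound_transfer("cloud", 60_000_000, 50_000_000))     # False
```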

Don’t Fall Victim to Hurricanes

Attempting to stop data theft by focusing on blocking or detecting data exfiltration alone is like a meteorologist who warns of a hurricane by measuring how much rain has fallen in his backyard: it’s too little, way too late.

By paying attention to user behavior patterns, security experts can often detect data theft before it happens.

Take this instance from our analysts, for example.

In this case, the user's malicious intent is less flagrantly obvious in each individual step, which underscores why it's so important to look at the full progression of a user's behavior with a big-picture view. Perhaps none of these steps are damning on their own; at least, not until the user exfiltrates the data.

However, by recognizing that the employee is taking a highly suspicious path of action — which lines up perfectly with the insider threat kill chain pattern of behavior — this organization can intervene before the user is able to steal data.
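One way to operationalize that big-picture view — a sketch, not DTEX's actual scoring model — is to weight alerts by how far along the kill chain a user has progressed, so that later stages and compounding progression dominate the score:

```python
KILL_CHAIN = ["reconnaissance", "circumvention", "aggregation",
              "obfuscation", "exfiltration"]

def kill_chain_score(observed_stages):
    """Weight later stages exponentially, so a user who has reached
    aggregation scores far higher than one with only isolated
    reconnaissance alerts."""
    return sum(2 ** KILL_CHAIN.index(s)
               for s in set(observed_stages) if s in KILL_CHAIN)

print(kill_chain_score({"reconnaissance"}))                                  # 1
print(kill_chain_score({"reconnaissance", "circumvention", "aggregation"}))  # 7
```

Because the score compounds across stages, an analyst can triage on trajectory rather than on any single alert.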

It’s also important to note that this method covers a historically difficult blind spot: super users. Obviously, organizations can’t block administrative users effectively, since they need such granular control of IT and security systems in the course of their job. But, by looking at the “big picture” of user behavior, you can still detect when these users are acting suspiciously.

What Can We Learn?

By looking at high-profile breaches from a high level, even with few public details, we can see an important takeaway: user behavior is key.

This is a critical reminder of the importance of recognizing common threads in these types of attacks. In the recent customer data theft case, we don’t know the specifics of how this employee stole all of those files. But just from our experiences mitigating, investigating, and stopping similar attacks, we can make an educated guess that it included some of the hallmark steps that we discussed above.

The key to preventing insider data theft lies in detecting those preliminary steps and patterns. No one will ever be able to forcibly block 100% of data theft attempts, and as technology, and the way it's used in the professional world, becomes increasingly open, outright blocking grows less feasible every day. But organizations can still learn from these experiences and adapt. Users are people, and people are often predictable. User behavior and visibility are, increasingly, critical elements in stopping data breaches.