In this week’s Analyst Breach Insights, we’re discussing Macy’s recent data breach. On November 14, Macy’s notified customers that their website had been breached from October 7 to October 14. According to their announcement, an “unauthorized third party” added code to Macys.com checkout pages in order to siphon off payment data.
You may be thinking that this sounds a bit familiar. There’s a reason why: barely more than a year ago, Macy’s was hit with another disturbing breach. From April through June of last year, a third-party attacker stole usernames and passwords from macys.com and bloomingdales.com accounts. This event even led to Macy’s getting hit with a class-action lawsuit that accused them of engaging in “reckless and negligent” security practices.
Clearly, there’s something bigger going on here.
There’s the issue of last month’s breach, yes. But there’s also a broader question: why is Macy’s continually getting breached, even after facing heavy consequences for it? Where is the recovery effort going wrong?
Albert Einstein is often quoted as saying that the definition of insanity is doing the same thing over and over again and expecting different results. In many ways, that’s exactly what organizations in this situation are doing. And that will continue to be the case until they rebuild from the ground up, using enterprise-wide, real-time visibility to make educated and intentional choices about their security approaches.
The Small Picture: The Individual Breach
On a smaller scale, there’s the issue of this particular breach. Let’s begin by stating the obvious: the incident was certainly preventable (or at the very least noticeable, considering this wasn’t typical web application communication). There are many ways that Macy’s could and should have detected it much more quickly.
We know that this breach occurred because a third party managed to add code to the Macys.com checkout page in order to siphon off payment data. That brief piece of information alone tells us four important facts:
1. A user or administrative account was likely involved at some stage of the process.
2. This account or server was hijacked by a third party.
3. This account either had the privileges to make changes to the macys.com web pages, or the attacker escalated its privileges until it did.
4. The server was behaving abnormally (meaning it was sending data somewhere it was not supposed to).
Every one of these four points is separately something that should have been detected had Macy’s been looking at the right data. Even if firewalls, AV, or perimeter tools had failed to keep the attacker out, visibility into abnormal behavior should have immediately notified them that something was seriously amiss.
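The third and fourth points are the most mechanically detectable. As a minimal sketch, unauthorized changes to served pages can be caught by comparing file hashes against a trusted baseline (the paths below are hypothetical, and a real deployment would keep the baseline out of an attacker’s reach):

```python
import hashlib
from pathlib import Path

def snapshot(web_root):
    """Hash every file under web_root (a hypothetical web content directory)."""
    root = Path(web_root)
    return {
        str(p.relative_to(root)): hashlib.sha256(p.read_bytes()).hexdigest()
        for p in root.rglob("*")
        if p.is_file()
    }

def changed_files(baseline, current):
    """Return files that were added or modified since the baseline was taken."""
    return sorted(
        path for path, digest in current.items()
        if baseline.get(path) != digest
    )
```

Run on a schedule, a check like this would surface both a modified checkout page and any newly dropped script, regardless of how the attacker got in.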
The same goes for the breach Macy’s experienced last year, in which many user accounts were compromised and began behaving erratically. It could have and should have been detected — but went unnoticed for months.
Clearly, Macy’s had not addressed the root of the problem, which stems from a much broader issue.
“It goes beyond tools and staff,” Dtex VP of Field Engineering Steven Spadaccini said. “If you don’t make modifications in how you’re processing the end data, and if you’re not deploying the right machine learning and analytics to assist the human learning element, nothing will change. Humans are only as good as the data they’re focused on. Otherwise there’s a perfect storm of doing the wrong things to look at the wrong data.”
The Big Picture: Rebuilding from the Ground Up
In this situation, it isn’t enough to spot-patch the vulnerabilities that led to this particular breach. Because Macy’s has a history of being hit with data breaches, clearly something bigger is amiss — and whatever their reactive solutions are, they aren’t enough. This is a systemic problem, and solving a systemic problem means starting all over.
“Obviously, Macys.com should go through wholesale changes of their security architecture, policies, and procedures,” Spadaccini said. “Additionally, they should focus on who has access to these systems and applications and monitor activity on their servers. Secondly, they should have a detailed understanding of their web applications and who they are allowed to communicate with. Any deviation from that behavior should begin an immediate notification to your threat investigation.”
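The “who they are allowed to communicate with” idea can be sketched as a simple egress allowlist check; the application names, hosts, and log schema below are hypothetical:

```python
# Hypothetical per-application allowlist of outbound destinations.
ALLOWED_DESTINATIONS = {
    "checkout-app": {"payments.example.com", "inventory.example.com"},
}

def flag_unexpected_egress(connections):
    """Return (app, host) pairs where an app contacted a host it is not
    known to talk to, the kind of deviation that should trigger an alert."""
    return [
        (app, host)
        for app, host in connections
        if host not in ALLOWED_DESTINATIONS.get(app, set())
    ]
```

A skimmer posting card data to an attacker-controlled host would fail this check on the very first connection.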
We frequently espouse the benefits of enterprise-wide monitoring. But even that won’t help an organization that doesn’t know what to look for, or what constitutes “bad” activity. If you don’t understand how data moves through your organization or how users handle it, how can you tell whether a particular program is harmful? How do you know whether certain server activity is worthy of concern? How do you know which user accounts are genuinely behaving erratically and which are just system admins performing maintenance?
It all comes down to making informed decisions about your security approach, and that means having the data to give you a complete picture of how your enterprise works.
For example, one of our customers began their Dtex deployment by rolling out enterprise-wide and then simply letting Dtex collect data, untouched, for three weeks. They then analyzed this significant batch of data to determine their “hotspots” — files and locations that came into contact with the most users and were therefore at the greatest inherent risk of data theft or accidental exposure. Other Dtex customers use our visibility specifically to identify gaps and blind spots left by other solutions.
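That collect-first, analyze-later approach can be illustrated with a toy version of the hotspot calculation, assuming access events recorded as (user, file) pairs; the schema and paths are hypothetical:

```python
from collections import defaultdict

def find_hotspots(access_log, top_n=3):
    """Rank files by the number of distinct users who touched them."""
    users_per_file = defaultdict(set)
    for user, path in access_log:
        users_per_file[path].add(user)
    ranked = sorted(
        users_per_file.items(), key=lambda kv: len(kv[1]), reverse=True
    )
    return [(path, len(users)) for path, users in ranked[:top_n]]
```

Files near the top of the ranking are where monitoring and access controls pay off most.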
All of these organizations are using visibility to make informed, holistic decisions about their security posture. And in the case of Macy’s and other organizations that keep getting hit with breaches, that’s where it needs to start: truly understanding the enterprise, and a willingness to completely rework the system based on the facts that knowledge uncovers.
Ultimately, the key to identifying and detecting bad behavior, whether that behavior comes from an insider, a hacker, or malware, is contextualizing it within the big picture and pinpointing abnormalities. This is not only the strategy that could have prevented this particular breach, but it’s also the backbone of successfully restructuring a failing security posture.
“Our recommendation is always to prioritize understanding behavior — whether that be a server performing a function or a user conducting activity — and elevating activity that deviates from the norm,” Spadaccini said. “If none of the other servers or users are doing it, it warrants an investigation every time. Dtex does this by applying machine learning and threat-based analytics to the right data to highlight this sort of activity. But many other tools, like those that focus on log data or malware data, cannot. Without this contextual insight, your security is based on a flawed platform — and that’s why companies like Macy’s will continue to get breached unless they fundamentally re-evaluate.”
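The “if none of the other servers or users are doing it” heuristic can be sketched as a peer-comparison check; the account names and action labels below are hypothetical:

```python
def flag_rare_actions(events, min_actors=2):
    """Flag actions performed by fewer than min_actors distinct accounts,
    i.e. behavior that no (or almost no) peer exhibits."""
    actors_by_action = {}
    for account, action in events:
        actors_by_action.setdefault(action, set()).add(account)
    return sorted(
        action for action, accounts in actors_by_action.items()
        if len(accounts) < min_actors
    )
```

A web server that is the only machine in its fleet posting data to an unknown host stands out immediately under this comparison.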