
On the OPM Breach Anniversary, How Far Have We Come?

A condensed version of this post first appeared on ITSP Magazine.

It has been two years since we first heard about one of the largest data breaches in the history of the federal government, which hit the Office of Personnel Management (OPM) and exposed the sensitive personal information of more than 22 million current and former employees. From personnel files and fingerprint data to security background clearance information – including extensive behavior and lifestyle details, financial information, and even the names of relatives – the breadth and depth of the information obtained meant devastating consequences that would be felt for years to come, and left us all wondering: how could this happen?

There’s no denying that, since then, security threats and data breaches have started to border on ubiquitous. The total number of data breaches in that same year (2015) was 781 – jumping to 1,093 in 2016 and just recently hitting a half-year record high of 791 in 2017, according to the Identity Theft Resource Center. From major enterprises like Yahoo and Verizon to smaller entities like the Bronx Lebanon Hospital Center and wildlife sporting license sites, it’s clear that nobody is immune to security threats or the catastrophic effects that may follow.

But just how far have we come, exactly, in the two years since that catastrophic event, across both the public and private sectors? I spent some time re-reading the Oversight Committee Report, published in September 2016, and it seemed prudent to resurface some of the challenges that contributed to the OPM data breach and reflect on the progress made since.

The “It Won’t Happen to Me” Mindset

One of the first points noted in the report’s Executive Summary is that – among other factors – the lax state of OPM’s Information Security left the agency’s systems exposed for an experienced hacker to infiltrate and compromise.

Today, the challenge remains that most organizations – particularly small and medium-sized ones – do not consider themselves prime targets for cybercrime. They believe there is nothing of interest for someone to steal – nothing like the unaired content targeted in the recent HBO hack – or that there is no notoriety to be gained by attacking an unknown enterprise.

Recent industry research indicates a significant disconnect continues to exist, notably among SMBs, between awareness and action when it comes to cyber security. While many SMBs are concerned about cyber attacks (58 percent), more than half are not allocating any budget at all to risk mitigation. And despite the staggering figure that 60 percent of small companies go out of business within six months of a cyber attack, only 14 percent of small businesses rate their ability to mitigate cyber risks, vulnerabilities and attacks as highly effective.

The reality is that any organization, big or small, is a target. Intellectual property, money, personnel information – these are only a few examples of assets a hacker may want. Understanding that businesses of all shapes and sizes are at risk is the first step in moving from defense to offense.

Outdated and Costly Legacy Systems

The report then goes on to note, “There is a pressing need for federal agencies to modernize legacy IT in order to mitigate the cybersecurity threat inherent in unsupported, end of life IT systems and applications… the agency missed opportunities to prioritize the purchase and deployment of certain cutting-edge tools that would have prevented this attack.”

It’s no secret that the government is plagued with legacy systems. Federal agencies, as a whole, spend over $89 billion annually on IT, but a majority of that money (upwards of 70 percent) goes toward maintaining and operating legacy IT systems. The catch is that some of these systems are so outdated that they can no longer be patched or updated with new security capabilities. This well-known gap in agency security posture is a prime target for malicious actors.

But this isn’t a challenge specific to the public sector. Recent research from the Ponemon Institute shows that companies are still overwhelmingly relying on legacy technologies and governance practices to address potential threat vectors. A prime example: 94 percent of those surveyed indicated that they still use a traditional network firewall to mitigate threats, even as they acknowledge an increasingly complex and evolving threat landscape that now includes things like unsecured IoT devices, botnets, DDoS attacks and anonymized malicious activities.

The bright spot here is that we have options, and there is solid evidence that we are moving in the right direction. Cloud-based infrastructure provides a direct path to modernization and efficiency, particularly for budget-constrained organizations and agencies – and we see those investments, and the associated funds, being prioritized. On both the private and public sector fronts, we’re increasing our overall security spend while beginning to shift investment from prevention-only approaches to those that also focus on detection and response.

Hyper-Focus on Outside Adversaries

A related point, as we’re talking about perimeter-focused defenses like network firewalls, is one that’s easy to gloss over in the Committee’s lengthy report: “The OPM data breaches illustrate the challenge of securing large, and therefore high value, data repositories when defenses are geared towards perimeter defenses…”

This is an area where we largely continue to struggle, on all fronts. Despite the increased visibility of insider threats, and the potentially extensive damage they can do, the emphasis – as shown in security budgets and priorities – continues to be placed on external threat vectors. Let me be clear: there is absolutely a need to protect against these, and I’m the first to advocate for building a comprehensive, layered defense. But more often than not, the discussion of insider threat gets drowned out by that of outside risk and hardening perimeter defenses when the fact is that more and more external actors are finding and exploiting vulnerabilities from within.

Even once insider threats are acknowledged, the perception is that incidents and breaches are driven by malicious employees or actors, but a significant portion – up to 68 percent – of the risk is due to employee carelessness. Whether full-time employees or part-time contractors, the fact remains that users are engaging in activities every day that have the potential to compromise sensitive information.

While it’s an oversimplification to immediately link personal email usage to malicious intent, it is impossible to ignore the fact that personal email accounts can absolutely be used as an avenue for data theft. And therein lies the problem for IT: while most people using personal email at work are not doing anything nefarious, how do they find the ones who are? How do they see when an employee is compromised through phishing or other means, or recognize bad actors representing themselves as trusted employees? It is the lack of rich intelligence around compromised or malicious users that drives lock-and-block controls across an organization.
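To make the problem concrete, here is a minimal sketch of the kind of rule an IT team might apply to email gateway logs – flagging outbound mail sent to non-corporate domains with unusually large attachments. The domain list, size threshold, and event fields are all hypothetical, and a single rule like this is only one signal; it needs the broader user context discussed above to separate carelessness from malice.

```python
# Hypothetical event feed: each event is a dict drawn from an email gateway log.
CORPORATE_DOMAINS = {"corp.example.com"}   # assumed corporate domain(s)
ATTACHMENT_LIMIT_MB = 10                   # illustrative threshold

def is_suspicious(event):
    """Flag mail sent to a personal domain with a large attachment."""
    domain = event["recipient"].rsplit("@", 1)[-1].lower()
    external = domain not in CORPORATE_DOMAINS
    return external and event["attachment_mb"] > ATTACHMENT_LIMIT_MB

events = [
    {"user": "jdoe",   "recipient": "jdoe@corp.example.com", "attachment_mb": 40},
    {"user": "jdoe",   "recipient": "jd.backup@gmail.com",   "attachment_mb": 85},
    {"user": "asmith", "recipient": "friend@gmail.com",      "attachment_mb": 0.1},
]
flagged = [e["user"] for e in events if is_suspicious(e)]
print(flagged)  # only the large transfer to a personal account is flagged
```

Note that the same rule leaves the small personal-email event alone – the point is not to block personal email outright, but to surface the rare combination of signals worth investigating.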

The Internet has absolutely helped workers get things done faster, better and more easily. At the same time, risk assessments reveal a significant amount of inappropriate workplace activity – using anonymous VPNs, gambling, and illegally downloading resources – that opens new channels for bad actors to enter the organization undetected. What is needed is user intelligence to shine a light on where risky behavior is compromising the enterprise, whether maliciously or negligently.

The Path Forward

Regardless of which sector you sit in, there’s a fundamental flaw in our approach to enterprise security that continues to ring as true in 2017 as it did in 2015: we’re relying on border protection to mitigate internal risk. Even with the overemphasis on securing traditional perimeters, the average time between a malware infection and discovery of the attack is more than 200 days (a gap that has barely narrowed in recent years). In the case of the DNC, the bad actors are believed to have been in the network for over a year. The amount of damage that can be done in that amount of time is almost unquantifiable.

To see where risky behavior is compromising the enterprise, CIOs and CISOs need to look inward. Having access to user intelligence in near real time can be an invaluable tool in bridging this gap, enabling an organization to see areas of risk without infringing on user privacy. The addition of context and machine learning to user behavior metadata can help an organization both detect and prevent data breaches at scale. Legacy forms of employee monitoring like keystroke logging or video capture of screen content cannot adapt to the modern enterprise, or live within today’s employee privacy requirements. Systems that look for internal threats without the addition of user intelligence are missing the critical contextual data that cuts through the noise.

In the two years following the attack, OPM implemented a number of programs to help prevent such an attack from happening again, and to mitigate the damage if one does. The agency created its Continuous Diagnostics and Mitigation program, which offers a full suite of tools and sensors to scan for and respond to threats on its networks; began a major push to encrypt its data and require strong authentication for each of its internal users; and implemented a Zero Trust security model. While there is still work to be done, the active response to that fateful day in 2015 is something any enterprise should take note of and keep in mind.