Insider Risk Insights - DTEX Blog

Layers and Telemetry — What We Can Learn from the Stanford University Breach

On April 1st, no fooling, the Stanford University student newspaper (The Stanford Daily) reported that hackers had leaked massive amounts of student personal information, including Social Security numbers, addresses, emails, family members, and financial information.

The hackers obtained the data from the University's MedSecureSend secure file transfer system, which leverages Accellion FTA technology. Accellion alerted customers to vulnerabilities in its legacy File Transfer Appliance solution in December and encouraged them to apply security patches. Like many other Accellion customers, including Shell, University of California Berkeley, UC Los Angeles, UC Davis, University of Colorado, and the University of Miami, it seems Stanford University did not apply the patches, and hackers were able to exploit the application's vulnerabilities to harvest PHI data.

This is an all-too-familiar story, one that those of us who practice cybersecurity have heard hundreds of times over. Why does this continue to happen? Why weren't patches applied? What other 'layers' of security were present or missing that might have detected abnormal activity signaling the early stages of an attack?

Why does this continue to happen? It's about security architecture and visibility. Layers matter, whether you're preparing for a winter hike, attending a college football game on a brisk fall day, or securing your organization against cyber threats.

A comprehensive cybersecurity strategy takes a multi-layered approach. It starts with protection, using tools we all know and love: NGAV, network/endpoint DLP, vulnerability management, VPNs, NGFWs, and secure web gateways. Next is detection, because it's impossible to stop every possible compromise. It is better to assume that a compromise is inevitable and prepare for it with the visibility to recognize the attack surface and detect indicators of attack and anomalies. This is where and why SIEMs, EDR platforms, and vulnerability assessment tools are so important. Last, but not least, is forensics: examining endpoint and network logs, IoCs, and application data to collate evidence and establish a sequence of events for an incident. This is critical to presenting facts and determining impact.
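To make the forensic layer concrete, here is a minimal sketch of what "collating evidence and establishing a sequence of events" can look like in practice: merging events from separate endpoint and network logs into one ordered timeline. The log formats, field names, and event strings are illustrative assumptions, not any particular product's schema.

```python
# Hypothetical forensic timeline builder: merge events from multiple
# log sources and order them by timestamp. All data below is invented
# for illustration; real logs would come from EDR/SIEM exports.
from datetime import datetime

endpoint_log = [
    {"ts": "2021-03-29T02:14:05", "source": "endpoint", "event": "unknown process launched"},
    {"ts": "2021-03-29T02:16:40", "source": "endpoint", "event": "large archive created"},
]
network_log = [
    {"ts": "2021-03-29T02:15:12", "source": "network", "event": "outbound connection to unrecognized host"},
]

def build_timeline(*logs):
    """Flatten the log sources into one list sorted by timestamp."""
    merged = [event for log in logs for event in log]
    return sorted(merged, key=lambda e: datetime.fromisoformat(e["ts"]))

for e in build_timeline(endpoint_log, network_log):
    print(e["ts"], e["source"], e["event"])
```

Even this trivial ordering shows why timestamps and centralized log collection matter: the interleaved sequence (process launch, outbound connection, archive creation) tells a story that no single log does on its own.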

Did Stanford University have each of these layers in place? Maybe, maybe not. The point of this post is not to judge. Getting cybersecurity right in practice is hard. Projects take time, changes cause outages and interrupt user experience, and finding cybersecurity talent is getting more difficult every day. These are the reasons breaches like this continue to happen and why applications are not patched immediately. So, what’s the fix?

Sure, patching more quickly and building strong, talented cybersecurity teams would help, but that would not completely fix the problem. Why? Because it's not just about making what we have work better; it's about what's missing.

The Stanford University breach could have been prevented if Stanford's IT and cybersecurity teams had been able to recognize anomalous events related to the applications, machines, data, and user accounts involved in the exfiltration of the student PHI. Hackers typically perform activities that probe the defenses and weaknesses of a target while simultaneously looking to evade detection. These types of reconnaissance, circumvention, and aggregation activities would have generated unknown processes and connections to irregular network destinations that strongly suggest malicious or compromised activity. With this type of enterprise telemetry, specifically server monitoring and the visibility to understand what's normal, Stanford's IT and cybersecurity teams may have had the opportunity to investigate the activity and stop PHI from being exfiltrated.
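The idea of "understanding what's normal" can be sketched as a simple baseline comparison: learn the set of processes and network destinations a server usually exhibits, then flag anything outside that set. This is an illustrative toy, not a DTEX implementation; the baseline contents and observed values are assumptions.

```python
# Illustrative anomaly flagging against a learned baseline of "normal"
# server behavior. Real baselining would be statistical and per-host;
# this sketch uses plain set differences to show the concept.
baseline = {
    "processes": {"sshd", "nginx", "postgres"},    # assumed known-good processes
    "destinations": {"10.0.0.5", "10.0.0.9"},      # assumed usual network peers
}

def flag_anomalies(observed_processes, observed_destinations, baseline):
    """Return alerts for telemetry that deviates from the baseline."""
    alerts = []
    for proc in observed_processes - baseline["processes"]:
        alerts.append(f"unknown process: {proc}")
    for dest in observed_destinations - baseline["destinations"]:
        alerts.append(f"irregular destination: {dest}")
    return sorted(alerts)

alerts = flag_anomalies(
    observed_processes={"sshd", "curl"},
    observed_destinations={"10.0.0.5", "203.0.113.77"},
    baseline=baseline,
)
```

Here the unfamiliar process and the never-before-seen destination would each surface as an alert worth investigating, which is exactly the early-warning opportunity described above.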

Even if the MedSecureSend application built on Accellion FTA had been patched as advised, enterprise telemetry would still be a must-have at the forensic layer of an organization's architecture. Hackers will get in, but breaches don't have to occur if organizations understand how systems, applications, data, and humans interact, so they can spot deviations from 'normal' and stop the activities that lead to a breach while an attack is in motion.

I'd welcome your thoughts on LinkedIn about layered security best practices, especially how we can better use IoCs and IoAs alongside human behavior and technology interaction to see and stop attacks.