
The Rise of AI and Blended Attacks: Key Takeaways from RSAC 2024

The 2024 RSA Conference can be summed up in two letters: AI.

AI was everywhere. It was the main topic of more than 130 sessions. Almost every company with a booth in the Expo Hall advertised AI as a component in their solution. Even casual conversations with colleagues over lunch turned to AI.

In 2023, generative AI was all the rage because it was so new and there were so many open questions about its security. This year, the goal seemed to be simply to talk about AI as much as possible because people want to hear about it, even though they aren't sure where AI fits into their security outlook. Is it a threat? Is it a tool? And what type of AI is being discussed? Unfortunately, AI has become shorthand for ChatGPT or Copilot, and if you wanted to learn more about predictive AI/ML solutions, you had to look beyond the session title. However, one topic around AI began to pick up steam as the conference went on…

The Looming Threat of Shadow AI

A growing number of CISOs and cybersecurity experts are concerned about shadow AI and the insider risk it will bring. A recent ISACA study found that 35% of respondents said AI increases their productivity, so they want to use the technology in the workplace, yet only 42% of companies allow their employees to use generative AI tools. Meanwhile, a recent DTEX report shows that 92% of organizations identify internal use of AI as a key security concern.

It isn’t just that employees are using the technology to find answers or help solve problems. Many are using it as a shortcut, not only doing so without permission but also feeding sensitive company or customer data into the AI model. For example, at one company, members of the board of directors wanted a summary of corporate objectives and procedures. They scanned books and binders full of company information into an AI tool and asked it to write the summary.

The summary was perfect, just what the board members wanted, but now information about that company’s financial health, its intellectual property, and other sensitive data had left the company’s control and was potentially available to anyone using the tool.

Without policies around who can use AI and how it can be used, and without adding ethical parameters to generative AI use, organizations are opening themselves up to levels of insider risk that haven’t been seen before.

Lack of Data Governance

A question that frequently came up in both conversations and sessions at RSA Conference was how to best deal with the data within LLMs, especially the data that your company doesn’t own. The threat of shadow AI has accelerated the concern around the amount of data generated within a company, what the data is, and who has control over it.

Government and industry regulations are increasing, but organizations struggle to manage their data well enough to stay compliant. Data classification is foundational to governance, yet too many companies are unprepared or unable to handle even basic data hygiene.
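Basic data hygiene often starts with simple, rule-based classification: tagging content that matches known sensitive-data patterns so governance policies can be applied before that data goes anywhere near an AI tool. As a minimal sketch (the patterns and labels below are illustrative assumptions, not any vendor's or regulator's schema), such a check might look like:

```python
import re

# Illustrative patterns only; real programs use far broader rule sets
# (and often ML classifiers) tuned to their own data and jurisdictions.
PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "credit_card": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
}

def classify(text: str) -> str:
    """Return a coarse sensitivity label for a chunk of text."""
    hits = [name for name, pattern in PATTERNS.items() if pattern.search(text)]
    if hits:
        # Contains identifiable sensitive data; block or escalate.
        return "restricted"
    # Default label for unmatched internal company data.
    return "internal"
```

Even a crude first pass like this gives a governance program something to enforce; the harder organizational work is deciding who owns the labels and what each label permits.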

While cybersecurity professionals in attendance at RSA Conference bemoaned the struggles around data governance, most predicted that addressing the problem will become a priority over the next 12 months, and that by next year’s event, we’ll see new regulations around data and AI, as well as greater governance over third-party data.

Budget Woes

AI/ML tools can help organizations classify and manage their data, and even accelerate investigations to detect or deter insider risks from escalating into a data breach. However, one of the most startling takeaways from RSA Conference was the minimal budgets that security teams are working with.

Security budgets are being slashed by up to a third of the previous year’s funding, and the spend dedicated specifically to insider risk management is disproportionately low, according to the 2023 Ponemon Cost of Insider Risks Global Report. While C-suites may say they need to close the cyber skills gap and onboard experienced cyber professionals, CISOs find it difficult to hire good people who can hit the ground running because they aren’t allocated appropriate pay and compensation budgets. Turning to MSPs to make up for a lack of in-house staff can only go so far when budgets are already tight and decision makers don’t see the need for security spending.

Another spending-related consideration is where the budget cuts will land. A number of CISOs expressed concern that security awareness training isn’t keeping up with the changing threat environment and the growing role of generative AI risks, yet training may end up being cut if there are higher priorities in the security office. At a time when highly specialized training is needed to address more sophisticated phishing attacks and a rise in misinformation campaigns, companies are pausing training or falling back on inexpensive, outdated programs.

Blurred Lines: The Rise of Blended Attacks 

Another key takeaway from RSA was the shift in perception of cyber-attacks from being purely internal or external to being a combination of both. Increasingly, adversaries are blending traditional external Advanced Persistent Threats with insider-driven exploitation, such as social engineering, to access and steal data and IP. DTEX’s Blurred Lines session, hosted by Mohan Koo and featuring Kevin Mandia (Mandiant) and Brad Maiorino (RTX), revealed new insights into the threat landscape: in 2023, there were 97 zero-days found in the wild, the second-highest count ever. The good news is that SOC teams are getting better at detecting intrusions faster. The bad news is that adversaries are now targeting insiders more aggressively than ever before. Why? Because for many adversaries, exploiting an insider is easier and cheaper, and they can often get away with it.

The panel also raised the topic of roles and accountability within an insider risk program. While the SOC has traditionally been charged with handling external threats, insider risk is a different story: the SOC should manage the digital data, but legal and HR should ultimately handle the investigation. That division makes sense for what is fundamentally a human challenge. As adversaries increasingly socially engineer insiders, organizations will be under pressure to develop best practices for managing AI and insider risk early in the game.

Informative ‘Blurred Lines’ session hosted by DTEX Systems on the convergence of internal and external threats.

These weren’t the only topics getting attention at RSA Conference this year. There was also an uptick in vendor fatigue, concerns about CISO burnout, and worries that too much focus on AI will push other problems aside. However, one common theme (besides artificial intelligence) was that if organizations slack on areas like data governance or budgets, or fail to build policies around generative AI use, risky insider behaviors will go unchecked, leading to a rise in insider-driven incidents.

With the cost and frequency of insider-initiated security incidents at an all-time high, investing in a dedicated insider risk program has never been more important. To learn more about the evolving threat landscape and how to navigate security threats from the inside out, download the 2024 Insider Risk Investigations Report. 

Download 2024 Insider Risk Investigations Report