The latest MIT NANDA report on the state of AI in business is a wake-up call for enterprise leaders. Despite the hype, the reality is stark: 95% of organizations are seeing no measurable business value from generative AI. The gap between high adoption of enterprise GenAI initiatives and low transformation is so wide that MIT researchers coined a term for it: the GenAI divide.
At DTEX, we see this divide as a significant opportunity. Specifically, an opportunity to rethink how enterprises govern AI usage, protect data, and empower their workforce. And it starts with understanding the rise of shadow AI. Human ingenuity means people will keep finding ways to leverage GenAI even when their organizations haven't worked out exactly what is needed, which creates an opening to learn from those employees while protecting them at the same time.
Shadow AI: the insider risk you may not have seen coming
One of the most interesting findings from the MIT report is the emergence of a “shadow AI economy”. While only 40% of companies officially procure subscriptions to tools like ChatGPT or Claude, over 90% of employees use them regularly — often multiple times a day.
“In many cases, shadow AI users reported using LLMs multiple times a day every day of their weekly workload through personal tools, while their companies’ official AI initiatives remained stalled in pilot phase.”
This isn’t just a productivity story. It’s a governance story. It’s a security story.
Employees are bypassing stalled enterprise AI initiatives and turning to consumer-grade tools to get work done. That means sensitive data is flowing through unmanaged channels. It means compliance blind spots. It means insider risk.
At DTEX, we’ve long said that data loss is a human problem. Shadow AI is the latest manifestation of that truth. And it’s why our Risk-Adaptive DLP solutions are built to detect and respond to human and agentic AI activity, not just file movement. Even as humans begin to command AI agents to do their bidding, the human will ultimately be liable for the acts of that AI.
DTEX: the control plane for human and AI collaboration
The report suggests that the GenAI divide isn’t about model quality or infrastructure. It’s about integration. But I’d also argue that in many stalled initiatives the LLM was never granted access to the right data to solve the problem. In other words, a failed GenAI initiative is often really a failure to solve a data governance problem.
That’s where DTEX comes in.
Our platform is designed to align with how people actually work. We don’t just monitor endpoints. We understand behavior. We don’t just block activity. We guide it. And we don’t just react to threats. We anticipate them.
DTEX’s AI security capabilities provide browser-agnostic oversight of GenAI usage, including prompt inspection and real-time risk scoring. Whether an employee is using a sanctioned tool or a personal account, DTEX sees it, understands it, and responds proportionately. At the same time, our privacy-by-design approach means DTEX captures only what is proportionate to the risk, ensuring employees’ fundamental right to privacy is still respected.
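To make the idea of proportionate, risk-based response concrete, here is a minimal illustrative sketch in Python. The patterns, weights, and thresholds are invented for illustration (this is not DTEX's implementation); it simply shows how a GenAI prompt could be scored and handled according to its risk rather than blocked outright.

```python
import re

# Hypothetical indicators of sensitive content in a GenAI prompt.
# A real deployment would rely on behavioral context, not just regexes.
SENSITIVE_PATTERNS = {
    "credential": (re.compile(r"(?i)\b(password|api[_-]?key|secret)\b"), 40),
    "source_code": (re.compile(r"(?i)\b(def |class |import |SELECT .+ FROM)\b"), 25),
    "customer_data": (re.compile(r"(?i)\b(ssn|credit card|account number)\b"), 35),
}

def score_prompt(prompt: str, unsanctioned_tool: bool) -> int:
    """Return a 0-100 risk score for a single GenAI prompt."""
    score = 0
    for _name, (pattern, weight) in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            score += weight
    if unsanctioned_tool:
        score += 20  # personal or unsanctioned accounts carry extra risk
    return min(score, 100)

def respond(score: int) -> str:
    """Proportionate response: educate first, escalate only when warranted."""
    if score >= 70:
        return "block_and_alert"
    if score >= 40:
        return "warn_user_in_real_time"
    if score >= 20:
        return "log_for_review"
    return "allow"

# Example: an employee pastes an API key into a personal chatbot account.
s = score_prompt("Here is our api_key=sk-123, can you debug this?", unsanctioned_tool=True)
print(s, respond(s))  # 60 warn_user_in_real_time
```

The point of the sketch is the graded response: most prompts pass through untouched, riskier ones trigger in-the-moment guidance, and only the highest-risk activity is blocked and escalated.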
And the DTEX Ai3 Risk Assistant delivers a GenAI approach built on integrated workflows, which is exactly what the report suggests enterprise organizations need.
What enterprise buyers really want from AI vendors
The MIT report makes it clear that understanding the enterprise buyer’s mindset is essential. Buyers prioritize:
- Workflow alignment over flashy demos.
- Minimal disruption over maximal ambition.
- Clear data boundaries over vague promises.
- Continuous improvement over one-time deployments.
DTEX checks all these boxes.
Our Risk-Adaptive Framework™ automates policy updates based on behavioral changes, eliminating the need for static rule management. Our Behavior-Based Classification identifies sensitive data without inspecting the content directly, protecting privacy while securing IP. Our Ai3 Risk Assistant guides investigations with contextual insights, not just alerts.
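As an illustration of what “risk-adaptive” means in practice, here is a small hypothetical sketch in Python. The signal names, weights, and policy tiers are invented for illustration and are not taken from the Risk-Adaptive Framework itself; it shows how enforcement could tighten automatically as a user’s behavior drifts from their own baseline, instead of relying on static rules.

```python
from dataclasses import dataclass

@dataclass
class BehaviorProfile:
    """Rolling behavioral baseline vs. recent activity for one user."""
    baseline_uploads_per_day: float
    recent_uploads_per_day: float
    baseline_genai_prompts_per_day: float
    recent_genai_prompts_per_day: float
    offhours_activity_ratio: float  # 0.0-1.0 share of activity outside work hours

def behavioral_risk(p: BehaviorProfile) -> float:
    """Score how far recent behavior deviates from the user's own baseline."""
    upload_drift = max(0.0, p.recent_uploads_per_day - p.baseline_uploads_per_day)
    prompt_drift = max(0.0, p.recent_genai_prompts_per_day - p.baseline_genai_prompts_per_day)
    return upload_drift * 2.0 + prompt_drift * 0.5 + p.offhours_activity_ratio * 10.0

def policy_tier(risk: float) -> str:
    """Map the behavioral risk score to a policy tier; no static per-user rules."""
    if risk >= 30:
        return "restrict: block uploads to unsanctioned GenAI tools, notify analyst"
    if risk >= 15:
        return "coach: show in-context guidance before risky actions"
    return "observe: standard monitoring only"

# Example: a user whose GenAI usage and off-hours activity have spiked.
profile = BehaviorProfile(
    baseline_uploads_per_day=2, recent_uploads_per_day=9,
    baseline_genai_prompts_per_day=5, recent_genai_prompts_per_day=25,
    offhours_activity_ratio=0.4,
)
risk = behavioral_risk(profile)
print(round(risk, 1), "->", policy_tier(risk))  # 28.0 -> coach: show in-context guidance before risky actions
```

The design choice the sketch captures is that policy strength is derived continuously from observed behavior, so the rules adapt as the person’s risk changes rather than being hand-edited after the fact.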
Bridging the divide: from shadow AI to strategic AI
The GenAI divide is real. But it’s not insurmountable.
Organizations don’t need to ban personal AI tools. They need to govern them intelligently. They don’t need to force adoption of brittle enterprise systems. They need to integrate AI into existing workflows. And they don’t need to fear insider risk. They need to manage it proactively.
DTEX is uniquely positioned to help enterprises cross the divide. We provide the visibility, context, and control needed to turn shadow AI from a liability into a strategic asset.
The takeaway from MIT’s research is clear: shadow AI is both an insider risk and an opportunity. And with DTEX, it’s an opportunity that’s actionable today.
Ready to see how it works? Request a demo and meet strategic AI.