Change is constant. For some, it signals opportunity. For others, anxiety. Either way, it brings risk.
As Artificial Intelligence (AI) adoption accelerates, organizations are seeing more reductions in force, realignments, and fundamental shifts in how work gets done. Each of these changes amplifies insider risk, not only from the rising number of departures, but also from those who remain and push harder to prove their relevance: the “job huggers.”
What’s driving the risk
AI embedded in workflows: AI isn’t a side project anymore; it’s now stitched into the fabric of enterprise operations. Every new deployment means more data moving, more tools in play, and more access points to manage (think UPS logistics routing, Microsoft’s Copilot code generation, IBM’s HR chatbot rollout). Each integration expands the surface area where insider risk may creep in.
Layoffs and the “job hugger” effect: Employees watch colleagues in similar positions walk out the door in waves, and they start to wonder if they’re next. Some respond by clinging tighter to their roles, working longer hours, holding onto privileges, and stacking tasks to prove their relevance. These “job huggers” aren’t malicious by default, but their behavior creates risk: they resist giving up access, hoard data, and stretch influence in ways that can compromise security.
Exit timelines and sloppy offboarding: When layoffs hit at such a large scale, speed becomes the priority. HR and IT are under pressure to move people out quickly, and that’s when mistakes happen. Credentials linger, shadow accounts remain active, and access isn’t fully revoked (as seen across multiple AI-driven layoff waves in 2025). The result is a perfect storm: potentially disgruntled employees with live credentials, and forgotten accounts that become open doors for misuse.
The two risk groups
Departing employees: As noted above, compressed exit timelines magnify risk. Departing employees are under enormous stress and often pushed out quickly. Some will collect data on the way out: customer lists, proprietary code, or internal playbooks. Others slip through the cracks of rushed offboarding, with accounts left active longer than intended. Unrevoked credentials become ticking time bombs, whether misused intentionally or exposed accidentally.
Employees who stay (“job huggers/hoarders”): The other group is made up of those who remain. These “job huggers” inherit privileges by default as colleagues depart, often without intentional grants or oversight. To stay relevant, they lean heavily on AI tools and automations, sometimes pushing them into workflows without proper guardrails. The result is more data flowing through more channels handled by employees under pressure to prove their worth. Mistakes multiply, workarounds become normalized, and unseen data flows expand beyond what can easily be tracked.
As DTEX Systems CTO Rajan Koo noted in IT Brew, “Job huggers are employees who cling to their current role even if they aren’t fully happy in that position. They’re not disengaged, they’re searching for relevance and security. But the behaviors that make them feel indispensable can also create risk.”
The common thread: Both groups expand insider data risk. Departing employees exit fast, often with lingering access or collected data. Job huggers stay long, accumulating privilege and accelerating data movement with AI. One group leaves holes behind, while the other stretches the boundaries of access. Together, they form the dual challenge that insider risk management must confront in the age of AI adoption.
Why insider risk management needs HR now more than ever
Bring insider risk management in early, not after the fact. Too often, insider risk management is treated as a cleanup crew, called in once layoffs are under way or privileges have already drifted. That’s backwards. As noted in Crawl, Walk, Run: How to Kickstart Your Insider Risk Program, “Insider Risk Management must be part of the conversation at the planning stage, not bolted on after decisions are made.” If HR is forecasting reductions, realignments, or AI‑driven restructuring, insider risk management should be right there, teeing up monitoring for the behaviors that inevitably follow. These include stress, data hoarding, privilege accumulation, and sloppy offboarding.
Behavioral monitoring starts before the badge is turned in. Early involvement lets insider risk managers tune detection to catch anomalies while they’re still signals, not incidents. In The Proactive Power of Tabletop Exercises in Insider Risk Management, we opined, “By embedding insider risk management into the planning cycle, monitoring can be tuned to detect behavioral anomalies before they become incidents.” Watch for unusual data collection, sudden privilege use, or spikes in after‑hours activity. These patterns emerge before exits or when job huggers begin stretching to remain relevant.
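One way to catch these signals while they are still signals is to compare each user’s activity against their own history rather than a global threshold. The sketch below, in Python, flags a spike in after-hours activity against a per-user baseline. The business hours, event records, and z-score threshold are illustrative assumptions, not DTEX product behavior.

```python
from collections import Counter
from datetime import datetime
from statistics import mean, stdev

WORK_START, WORK_END = 8, 18  # assumed business hours (local time)

def after_hours_daily_counts(events: list[datetime]) -> list[int]:
    """Count after-hours events per calendar day for one user."""
    per_day = Counter(e.date() for e in events
                      if not WORK_START <= e.hour < WORK_END)
    return list(per_day.values())

def is_anomalous(baseline_counts: list[int], today_count: int,
                 z: float = 3.0) -> bool:
    """Flag today's after-hours activity only if it sits far above
    the user's own historical baseline (a simple z-score test)."""
    if len(baseline_counts) < 2:
        return False  # not enough history to establish a baseline
    mu, sigma = mean(baseline_counts), stdev(baseline_counts)
    # Floor sigma so a near-constant baseline doesn't make every
    # small deviation look like an incident.
    return today_count > mu + z * max(sigma, 1.0)
```

A user who normally generates two or three after-hours events a day would only be flagged on a clear spike, which is the difference between a tunable early signal and an alert storm.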
HR provides the trigger, insider risk management provides the lens. HR knows people: who’s moving, who’s leaving, and who’s absorbing risk through workload and uncertainty. Insider risk management translates that context into defensible risk analysis. In Rethinking Cyber Talent: Building Teams for Insider Risk Success, we emphasize, “HR knows who’s leaving, who’s staying, and who’s under pressure.” Align exit timelines with account revocation, privilege reviews with role consolidation, and behavioral baselines with AI adoption. Without HR’s early context, insider risk management is left chasing shadows after the fact.
Planning checklist: IRM and HR controls that prevent regret later
- Terminate SaaS accounts cleanly: HR triggers cascade into Salesforce, Jira, Google Workspace before exits begin.
- Kill tokens and keys: OAuth refresh tokens, API keys, and PATs scheduled for revocation at termination.
- Purge MFA/device trust: Remove remembered browsers, wipe enrolled laptops, reset push MFA tied to personal devices.
- Audit non‑human identities early: Shared mailboxes, bot accounts, and RPA users assigned owners before layoffs.
- Set expiry on external shares: Drive, Box, and SharePoint links auto‑expire; external guests removed at exit.
- Privilege review during consolidation: Role changes trigger access reviews, not silent accumulation.
- Workflow connector reassignment: AI/automation flows owned by departing users reassigned or shut down.
- Endpoint offboarding checklist: Devices checked in, wiped, and removed from EDR/MDM before HR closes the file.
- Contractor/vendor account sync: Parallel identities outside HR systems included in termination planning.
- Audit trail validation: Every termination leaves a log: deprovision events, token revocations, privilege reviews.
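The checklist above works best when the HR trigger drives every step through one pipeline, so nothing depends on a person remembering it and every action lands in the audit trail. The sketch below shows that shape; the `Termination` event, step names, and log format are illustrative assumptions (each placeholder would call the relevant admin API), not a real DTEX or vendor interface.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Termination:
    """Hypothetical HR trigger: who is leaving and when."""
    user_id: str
    effective: datetime

@dataclass
class OffboardingRun:
    event: Termination
    audit_log: list[str] = field(default_factory=list)

    def _log(self, action: str) -> None:
        stamp = datetime.now(timezone.utc).isoformat(timespec="seconds")
        self.audit_log.append(f"{stamp} {self.event.user_id} {action}")

    def revoke_saas_accounts(self) -> None:
        # Placeholder: disable the user in each SaaS admin API
        # (Salesforce, Jira, Google Workspace).
        self._log("saas_accounts_disabled")

    def kill_tokens_and_keys(self) -> None:
        # Placeholder: revoke OAuth refresh tokens, API keys, and PATs.
        self._log("tokens_revoked")

    def purge_device_trust(self) -> None:
        # Placeholder: drop remembered browsers, wipe MDM-enrolled
        # devices, reset push MFA tied to personal devices.
        self._log("device_trust_purged")

    def expire_external_shares(self) -> None:
        # Placeholder: expire Drive/Box/SharePoint links, remove guests.
        self._log("external_shares_expired")

    def run(self) -> list[str]:
        for step in (self.revoke_saas_accounts, self.kill_tokens_and_keys,
                     self.purge_device_trust, self.expire_external_shares):
            step()
        return self.audit_log  # every termination leaves a log
```

The point of the shape, not the placeholders, is the last line of the checklist: when deprovisioning runs as one sequence, the audit trail falls out for free instead of being reconstructed after the fact.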
Conclusion
AI is reshaping how organizations operate. Processes are streamlined, decisions accelerated, and new efficiencies are discovered. That’s the opportunity. But with every leap forward, security must evolve in lockstep. Insider risk management isn’t about slowing progress; it’s about ensuring that innovation doesn’t outpace control.
The path forward is clear:
- Faster privilege control so accounts and access don’t linger longer than they should.
- Earlier risk signals so behavioral anomalies are spotted before they become incidents.
- Risk‑adaptive controls that account for both human and non‑human identities, adjusting dynamically as roles, automations, and AI tools expand.
And the scale of the challenge is undeniable: nearly half of U.S. employees identify as job huggers (Monster survey, 2025). That means the behaviors we’ve discussed aren’t edge cases; they’re mainstream. Meeting that reality requires more than monitoring keystrokes. It requires being attuned to personnel, understanding the pressures they face, and aligning controls with human context.
Change will always be constant. Risk will always be present. But by embedding insider risk management at the planning stage and aligning it with HR, organizations can turn disruption into resilience. The result isn’t just protection; it’s organizational confidence: confidence that AI can drive growth without opening doors to insider risk.
Insider risk in the age of AI isn’t a calamity waiting to happen; it’s an opportunity to lead with foresight. Organizations that seize it will not only safeguard their data — they’ll safeguard their future.
Request a demo to learn how DTEX can support your protective security and resilience against human, data and AI risks in 2026 and beyond.