Adoption of generative AI is growing in the workplace — but most of it isn’t happening through IT channels. Recent data shows that nearly 28 percent of US employees are using generative AI at work, with more than 10 percent using it daily and nearly a quarter using it at least weekly. Workers report that these tools help them complete tasks faster, brainstorm ideas, and develop content more efficiently, increasing productivity.
Most of this use is unapproved. The recent Microsoft and LinkedIn Work Trend Index found that 78 percent of AI users bring their own tools to work through personal accounts. This is part of a broader trend that Gartner projects will reach 75 percent of employees by 2027, up from 41 percent in 2022. Like previous waves of shadow IT, these tools often fill a void left by enterprise software, offering speed, convenience, and usability that official systems lack. But the stakes are high: unsanctioned AI use can lead to compliance issues, data breaches, and potential loss of competitive advantage.
Shadow AI is here — and IT can’t afford to ignore it
Shadow AI is a modern version of shadow IT — employees choosing tools themselves because they are more effective or easier to use than those provided by IT. Historically, this has looked like transferring work via USB drives, sending files to personal email accounts, or using unapproved third-party apps that make tasks faster and easier.
The problem is that IT departments have no visibility into or control over these tools. Data processed in personal AI accounts may include sensitive company information, intellectual property, or customer data, leaving it outside enterprise security protections. Without proper oversight, these tools introduce compliance risks, security gaps, and potential exposure of proprietary information, making unregulated AI adoption a serious concern for enterprises.
Leveraging Shadow AI for Productivity and Adoption
Historically, IT leaders have learned to work with shadow IT rather than trying to stamp it out. Employees naturally gravitate toward tools that make their jobs faster and easier, even if those tools aren’t officially approved. Many of today’s “enterprise essentials”—smartphones, cloud services, collaboration platforms—were once banned as security risks before their value became undeniable.
Generative AI is following the same trajectory. The key for organizations is to understand which AI tools employees are adopting and why. By creating compliant, secure paths to popular tools, IT teams can redirect budgets toward software that people actually want to use while achieving measurable productivity gains. This approach treats shadow AI as a source of insight: voluntary adoption signals which tools work, helping IT build software stacks that are embraced rather than enforced.
How IT can guide safe and effective AI use
Enterprise technology has steadily democratized over the decades. Software that was once notoriously complex now rivals consumer apps in usability. Cloud platforms have evolved to support distributed teams at scale, and cybersecurity practices have matured to balance protection with user experience. Generative AI needs to follow suit.
Leading AI platforms already offer enterprise tiers: premium options with controlled data retention and enhanced security. But they require IT-level deployment and monitoring, not ad hoc management by individual employees. By integrating AI directly into the tech stack, IT leaders can gain visibility into usage, provide security training, and support controlled experiences that drive performance.
At DEPL, we architect our language AI solutions with two priorities in mind: a seamless end-user experience and trust for security leaders. Speed, usability and accuracy matter, but trust is fundamental. This is why we build information security into our systems from the ground up, not as an afterthought.
The future of AI in the enterprise
Trust is the currency of enterprise AI adoption — and it’s fragile. Once compromised, it is extremely difficult to rebuild. As organizational AI strategies mature, the challenge is to balance productivity with security, creating room for innovation while keeping risks in check.
Shadow AI follows a familiar pattern: employees adopt tools that increase their effectiveness, regardless of governance policies. For IT leaders, this creates an opportunity for realignment. By turning shadow AI into structured insight, organizations can capture the productivity gains employees are already realizing while keeping risk firmly in check.
The question isn’t whether AI will reinvent work — it already has. The question is whether IT will lead this change or be dragged along by it.
About the author
Edward Crook is Chief of Staff at global AI and product research company DEPL, where he leads cross-departmental projects to support business growth. He is also an executive member of the Strategy Outtinker Network and, prior to DEPL, was Chief Strategy Officer at Brandwatch. He brings significant strategy and operations experience across the UK, Germany, and the US, and specializes in growth-stage businesses, market expansion, and B2B SaaS. His work has been recognized by Information Age’s Data 50 and the Customer Success Collective. Edward holds an MA in Linguistics from the University of Sussex and an MBA from the Berlin School of Economics and Law.