Safety & Governance

Shadow AI

Definition
The use of AI tools by employees without IT or management approval, bypassing corporate security policies and data governance controls. Shadow AI parallels shadow IT but carries higher risk, because AI tools ingest, and may retain or train on, the data submitted to them.
Why it matters
Shadow AI is the biggest immediate risk most organizations face from AI adoption. When employees paste confidential data into ChatGPT, upload proprietary documents to AI summarizers, or build unauthorized AI workflows with company data, they create data exposure risks that security teams cannot monitor or control. The problem is widespread: surveys show 60-80% of knowledge workers use AI tools that IT has not approved. Banning AI outright does not work; it just drives usage further underground. The solution is providing sanctioned AI tools that are as easy to use as the unsanctioned ones, while implementing data loss prevention (DLP) policies that prevent sensitive data from reaching external AI services.
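As a concrete illustration of the DLP idea above, here is a minimal sketch of a prompt gate that scans outbound text for sensitive patterns before it can reach an external AI service. The pattern names, regexes, and function names are illustrative assumptions, not a real DLP product; production systems use far broader detection (classifiers, fingerprinting, exact-data matching).

```python
import re

# Hypothetical patterns for illustration only; real DLP systems use
# much more robust detection than a few regexes.
SENSITIVE_PATTERNS = {
    "api_key": re.compile(r"\b(?:sk|pk)-[A-Za-z0-9]{20,}\b"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_prompt(text: str) -> list[str]:
    """Return the names of sensitive-data patterns found in an outbound prompt."""
    return [name for name, pat in SENSITIVE_PATTERNS.items() if pat.search(text)]

def gate_prompt(text: str) -> str:
    """Block a prompt containing sensitive data before it leaves the network."""
    hits = scan_prompt(text)
    if hits:
        raise ValueError(f"Prompt blocked by DLP policy: {', '.join(hits)}")
    return text
```

In practice a gate like this would sit in a proxy or browser extension, so employees never see the external service receive the flagged content.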
In practice
Samsung banned ChatGPT after engineers inadvertently uploaded proprietary source code in 2023. JPMorgan, Goldman Sachs, and Citigroup restricted employee use of external AI tools. A 2024 survey by Salesforce found that more than 55% of workers using generative AI at work had not received approval. Companies are responding with enterprise AI platforms that keep data within corporate boundaries (Microsoft 365 Copilot, custom Claude deployments), AI usage policies, endpoint monitoring for unauthorized AI tool access, and training programs that channel AI enthusiasm into approved tools. The emergence of the Chief AI Officer role is partly a response to shadow AI governance challenges.
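The endpoint-monitoring response mentioned above can be sketched as an allowlist check on the destination host of an outbound AI request. The host names below are hypothetical examples of sanctioned deployments, not a recommended policy.

```python
from urllib.parse import urlparse

# Illustrative allowlist of sanctioned AI endpoints; an organization
# would populate this from its own approved-tools policy.
APPROVED_AI_HOSTS = {
    "copilot.microsoft.com",        # example: enterprise Copilot tenant
    "claude.internal.example.com",  # example: custom internal deployment
}

def is_sanctioned(url: str) -> bool:
    """True if an outbound AI request targets an approved endpoint."""
    return urlparse(url).hostname in APPROVED_AI_HOSTS
```

A monitoring agent or network proxy would log or block requests where `is_sanctioned` returns False, giving security teams the visibility that shadow AI otherwise removes.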
