Netskope, a leader in Secure Access Service Edge (SASE), released its latest Cloud and Threat Report on August 4, 2025, highlighting a 50% surge in generative AI (genAI) platform usage among enterprise end-users in the three months ending May 2025. The report underscores the rapid rise of shadow AI—unsanctioned AI applications used by employees—posing significant security risks, with over half of current app adoption classified as shadow AI.
GenAI Platform Surge: 50% increase in users and 73% rise in network traffic from February to May 2025.
Shadow AI Prevalence: Over 50% of genAI app adoption is unsanctioned, amplifying data security risks.
Top Platforms: Microsoft Azure OpenAI (29%), Amazon Bedrock (22%), Google Vertex AI (7.2%).
SaaS GenAI Apps: Netskope tracks 1,550+ distinct apps, up from 317 in February 2025; organizations use ~15 apps on average.
Data Uploads: Monthly data to genAI apps rose from 7.7 GB to 8.2 GB quarter-over-quarter.
Popular Apps: ChatGPT declined in enterprise use, while Gemini, Copilot, Claude, Perplexity AI, and Grammarly gained traction; Grok entered the top 10.
On-Premises AI: 34% of organizations use LLM interfaces like Ollama (33%); 67% access Hugging Face; 39% use GitHub Copilot.
Security Recommendations: Assess genAI usage, enforce approved app policies, implement DLP, and monitor continuously.
The Netskope Threat Labs Cloud and Threat Report details a 50% increase in genAI platform users from February to May 2025, with network traffic up 73%. These platforms, enabling custom AI apps and agents, are the fastest-growing shadow AI category due to their ease of use. In May 2025, 41% of organizations used at least one genAI platform, led by Microsoft Azure OpenAI (29%), Amazon Bedrock (22%), and Google Vertex AI (7.2%). Shadow AI, where employees use unapproved tools, accounts for over 50% of app adoption, with 72% of genAI users accessing apps via personal accounts, heightening risks of data leakage.
Ray Canzanese, Director of Netskope Threat Labs, noted, “The rapid growth of shadow AI places the onus on organizations to identify who is creating new AI apps and AI agents using genAI platforms and where they are building and deploying them.”
Netskope now tracks over 1,550 distinct genAI SaaS apps, up from 317 in February 2025, with organizations using an average of 15 apps (up from 13). Monthly data uploads to these apps increased from 7.7 GB to 8.2 GB. Enterprises are consolidating around purpose-built tools such as Google Gemini and Microsoft Copilot, which saw significant adoption gains. ChatGPT, though still the most popular app (used by 84% of organizations), saw its first decline in enterprise usage since 2023. Other apps, including Anthropic Claude, Perplexity AI, Grammarly, and Gamma, grew, and Grok entered the top 10 most-used apps. Grok nonetheless remains among the most-blocked apps, though block rates are declining as organizations adopt granular controls.
Organizations are increasingly deploying genAI locally, with 34% using LLM interfaces like Ollama (33% adoption), LM Studio (0.9%), and Ramalama (0.6%). Employee engagement with AI marketplaces is also rising, with 67% of organizations seeing users download resources from Hugging Face. Agentic AI is gaining traction, with 39% of organizations using GitHub Copilot and 5.5% running on-premises AI agents. Additionally, 66% of organizations have users making API calls to api.openai.com, and 13% to api.anthropic.com, indicating that direct API integration is widespread alongside on-premises deployments.
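Traffic to endpoints like api.openai.com is typically surfaced from web-proxy or firewall logs. The sketch below illustrates one simple way to tally such traffic; the host watchlist and the "user host ..." log format are assumptions for illustration, not Netskope's method or configuration:

```python
from collections import Counter

# Assumed watchlist: the two API endpoints named in the report, plus
# illustrative genAI platform hosts. Extend to suit your environment.
GENAI_HOSTS = {
    "api.openai.com",
    "api.anthropic.com",
    "generativelanguage.googleapis.com",
}

def tally_genai_traffic(log_lines):
    """Count requests per genAI host from assumed '<user> <host> ...' log lines."""
    hits = Counter()
    for line in log_lines:
        _user, host = line.split()[:2]
        if host in GENAI_HOSTS:
            hits[host] += 1
    return hits
```

Running this over a day of proxy logs gives a per-host request count, which is enough to answer the report's first question: which genAI endpoints are in use, and how heavily.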
Shadow AI introduces significant risks, including:
Data Leakage: 75% of enterprise users upload data to genAI apps, including sensitive information like source code (46% of data policy violations), passwords, and intellectual property, risking breaches and compliance violations.
Unsanctioned Usage: 72% of genAI users access apps via personal accounts, bypassing IT oversight.
On-Premises Risks: Local genAI infrastructure, used by 54% of organizations, introduces supply chain vulnerabilities, data leakage, and prompt injection risks.
Commentary on X reflects growing concern, with 68% of organizations reporting data leakage from AI usage, underscoring the need for robust security frameworks.
Netskope recommends the following to mitigate shadow AI risks:
Assess GenAI Landscape: Identify all genAI tools in use, their users, and usage patterns.
Bolster App Controls: Enforce policies allowing only approved genAI apps, using blocking mechanisms and real-time user coaching (73% of users heed coaching warnings).
Inventory Local Controls: Apply frameworks like OWASP Top 10 for LLMs for on-premises AI infrastructure.
Continuous Monitoring: Track shadow AI instances and stay updated on AI ethics, regulations, and adversarial threats.
Agentic AI Policies: Partner with AI adopters to create actionable policies to limit unsanctioned agentic AI use.
DLP adoption is rising, with 45% of organizations using it to control data flow to genAI apps, and 73% blocking at least one app (top 25% block 14.6 apps on average).
The report aligns with industry trends toward AI-driven solutions, as seen in Amazon’s Q2 2025 results, where AWS’s 17.5% growth was fueled by Amazon Bedrock, and Genesys’ $1.5B investment for AI-powered CX orchestration. However, Netskope’s findings underscore a gap in enterprise security, with shadow AI outpacing control implementation. The 30x increase in data sent to genAI apps over the past year highlights the urgency of advanced DLP and governance.
Netskope, a leader in modern security and networking, addresses the needs of both security and networking teams by providing optimized access and real-time, context-based security for people, devices, and data anywhere they go. Thousands of customers, including more than 30 of the Fortune 100, trust the Netskope One platform, its Zero Trust Engine, and its powerful NewEdge network to reduce risk and gain full visibility and control over cloud, AI, SaaS, web, and private applications—providing security and accelerating performance without trade-offs.