Legit Survey: Half Concerned Over AI Code Vulnerabilities


October 2, 2025

A new survey from Legit Security uncovers significant consumer apprehension toward AI-generated code in applications, with nearly half expressing worries about potential vulnerabilities. The survey of 1,000 U.S. consumers highlights the critical need for transparency and robust security practices as AI integrates deeper into software development.

Quick Intel

  • Nearly half of consumers concerned about AI-generated code vulnerabilities.
  • 1 in 4 would lose trust in favorite apps using AI-written code.
  • 26% would avoid apps with AI code vulnerabilities; 33% more cautious in downloads.
  • Top concerns: security vulnerabilities (34%), unpredictable behavior (23%), data training (21%).
  • Trust factors: official app stores (53%), privacy policies (46%), known brands (45%).
  • Generational gap: Boomers are twice as likely to lose trust; 34% of Gen Z trust apps more when AI is used.

Consumer Trust and AI Vulnerabilities

The survey, commissioned by Legit Security and executed by Dynata, reveals that while AI adoption in software is inevitable, consumer trust hinges on responsible implementation. One in four respondents say they would lose trust in their preferred applications upon learning they rely on AI-written code, and that confidence erodes further when vulnerabilities arise from AI-generated code.

Key Concerns and Trust Indicators

Security vulnerabilities top consumer fears at 34%, followed by unpredictable application behavior at 23% and data training issues at 21%. Factors boosting perceived security include official app stores at 53%, privacy policies at 46%, and well-known brands at 45%. These insights stress the importance of visible accountability in AI-driven development to maintain user confidence.

Generational Perspectives on AI Risk

The report identifies stark generational differences in AI tolerance. Over 40% of Boomers fear AI vulnerabilities and are twice as likely to lose trust when AI use is disclosed. In contrast, younger users demonstrate greater resilience, with 34% of Gen Z reporting increased trust in applications that use AI, reflecting varied attitudes toward technological innovation.

Leadership Commentary

"AI itself isn't a dirty word to consumers. The real issue is whether companies use it responsibly," says Roni Fuchs, co-founder and CEO at Legit. "Most people don't reject apps just because they leverage AI-generated code. Many of them understand it's inevitable. The real breaking point comes when AI introduces a vulnerability. At that moment, trust erodes fast, and potentially permanently. As AI adoption accelerates across the software development lifecycle, the mandate is clear: companies must make preventing, detecting, and remediating vulnerabilities in AI-generated code a non-negotiable priority. Anything less risks losing your users' trust."

"There is urgency for engineering teams to ensure that AI-generated code can be safe, secure, and trustworthy," says Liav Caspi, co-founder and CTO at Legit. "Cybersecurity Awareness Month has traditionally emphasized consumer best practices, but these findings highlight that developer practices matter a lot, too. Users are right to be concerned about how AI is being leveraged in the applications they use daily, and while they will keep downloading apps with AI, visible signals of security and accountability are needed to create this trust."

Released at the onset of National Cybersecurity Awareness Month, the survey calls for engineering teams to prioritize secure AI code practices. Explore the full findings in Legit Security's blog post and visit www.legitsecurity.com for solutions in AI-native application security.

About Legit Security

The Legit Security AI-Native ASPM platform is a new way to manage application security in a world of AI-first development, providing a cleaner way to manage and scale AppSec and address risks. Fast to implement, easy to use, and AI-native, Legit has an unmatched ability to discover and visualize the entire software factory attack surface, including a prioritized view of AppSec data from siloed scanning tools. As a result, organizations have the visibility, context, and automation they need to quickly find, fix, and prevent the application risk that matters most. Spend less time chasing low-risk findings and more time innovating.

  • Tags: AI Code Security, App Security, Cybersecurity, Consumer Trust