
Perfai Launches Extension to Autonomously Test AI-Generated Code


November 24, 2025

Perfai has launched a new extension for AI coding assistants and MCP (Model Context Protocol) tools, providing developers with an integrated solution to test, fix, and validate AI-generated code directly within their existing editors. This extension integrates with popular platforms like VS Code, GitHub Copilot, Cursor, and Claude, adding a layer of autonomous testing and auto-fixing without disrupting established developer workflows.

Quick Intel

  • Perfai launches a new extension for AI coding assistants and MCP tools.

  • It autonomously tests and auto-fixes security vulnerabilities in AI-generated code.

  • The tool integrates directly into VS Code, Copilot, Cursor, and other editors.

  • Case studies found AI-generated apps contain twice as many vulnerabilities as manually written code.

  • It provides real-time detection of breakpoints, unsafe flows, and exposed data paths.

  • The platform offers engineering leaders a centralized dashboard for security oversight.

The Critical Need for AI Code Security

As AI-coded applications ship faster, they often carry significant security gaps. Perfai's research, drawn from multiple case studies, reveals that AI-generated apps contain twice as many vulnerabilities as manually coded software. In one specific instance, Perfai detected 59 open vulnerabilities across 71 endpoints in a Replit-built app, exposing sensitive data including user documents and live conversation data. In another case, a Copilot-coded ERP rollout contained 2,216 hidden vulnerabilities that had passed manual QA, leaving financial records and employee data exposed.

How the Perfai Extension Works

The core problem is that AI coding assistants can write features but often miss the risks created between those features. Perfai's new extension addresses this by running autonomous testing inside the developer's workspace. It detects breakpoints, unsafe flows, missing checks, and exposed paths as soon as they appear. When an issue is identified, Perfai's auto-fix agent repairs it with minimal code changes, then re-tests the fix to ensure stability.

Seamless Integration and Leadership Visibility

With MCP integration, Perfai can test AI-generated code as it is being written. Developers do not need to adopt new dashboards or alter their workflow, as everything operates within their familiar coding environment. Perfai reports findings in real time, suggests fixes, and validates them. Simultaneously, it sends results to a central dashboard, giving engineering leaders—CISOs, CIOs, CTOs—a clear, consolidated view of the organization's application security posture, including fixes applied and risk trends across teams.
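MCP itself is an open, JSON-RPC-based protocol, so an editor-side tool call of the kind described above has a well-defined shape: the client sends a `tools/call` request naming a tool, and the server returns a `content` array. The sketch below follows that spec-level framing only; the tool name `scan_generated_code` and the fake finding it returns are assumptions for illustration, not Perfai's real interface:

```python
# Minimal stdlib sketch of dispatching an MCP "tools/call" request.
# The JSON-RPC framing follows the MCP spec; the tool name and the
# fabricated finding are hypothetical.

import json

def handle_tools_call(request: dict) -> dict:
    """Dispatch a JSON-RPC 'tools/call' request to a stand-in scanner."""
    name = request["params"]["name"]
    args = request["params"]["arguments"]
    if name == "scan_generated_code":
        # A real server would run the scanner here; we fake one finding.
        findings = [{"endpoint": args["endpoint"],
                     "issue": "exposed-data-path"}]
        result = {"content": [{"type": "text",
                               "text": json.dumps(findings)}]}
    else:
        result = {"content": [{"type": "text", "text": "unknown tool"}],
                  "isError": True}
    return {"jsonrpc": "2.0", "id": request["id"], "result": result}

request = {
    "jsonrpc": "2.0", "id": 1, "method": "tools/call",
    "params": {"name": "scan_generated_code",
               "arguments": {"endpoint": "/users"}},
}
response = handle_tools_call(request)
```

Because the findings come back as a tool result inside the same protocol the coding assistant already speaks, the editor can surface them inline, which is what lets this kind of integration avoid a separate dashboard for developers.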

This release also establishes the foundation for Perfai's next wave of agents focused on quality and functional testing, aiming to create a complete in-editor testing experience for AI-built applications. By removing the guesswork from reviewing AI-generated code, Perfai's extension provides a faster, safer way to build with AI.


About Perfai

Perfai is an autonomous agentic AI platform for app testing. Its agents explore apps, detect issues, generate fixes, and validate them automatically. Perfai helps teams ship safer, more stable software without slowing development.

Tags: AI Security, Cybersecurity, AI