AppSecure, a cybersecurity firm specializing in penetration testing, identified a critical vulnerability in Meta.AI’s GraphQL API that could have exposed users’ AI prompts and responses. The flaw, discovered by CEO Sandeep Hodkasia, was responsibly reported and fixed with no evidence of misuse, underscoring the need for robust security in AI platforms.
- AppSecure found a flaw in Meta.AI’s GraphQL API that exposed user data.
- The useAbraImagineReimagineMutation query lacked an authorization check.
- Reported on December 26, 2024; patched temporarily on January 24, 2025, and permanently on April 24, 2025.
- Meta awarded AppSecure $10,000 plus $4,550 for related findings.
- No evidence of exploitation was found; user data remained safe.
- The incident highlights the need for proactive security in generative AI systems.
During a security research exercise, AppSecure’s CEO Sandeep Hodkasia uncovered a vulnerability in Meta.AI’s GraphQL API, specifically in the useAbraImagineReimagineMutation query. The flaw stemmed from a missing authorization check, allowing any logged-in user to manipulate the media_set_id parameter and access other users’ prompts and AI-generated content. This posed a significant risk to user privacy on Meta’s generative AI chatbot platform.
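The flaw described above is a classic insecure direct object reference (IDOR): the server trusted a client-supplied `media_set_id` without confirming the requester owned that media set. The details of Meta’s backend are not public, so the sketch below uses entirely hypothetical names (`resolve_media_set`, `MEDIA_SETS`) to illustrate the kind of ownership check that was missing, assuming a simple resolver that looks up a media set by ID.

```python
class AuthorizationError(Exception):
    """Raised when a requester tries to read a resource they do not own."""


# Hypothetical in-memory store standing in for the real backend.
MEDIA_SETS = {
    "ms_1001": {"owner_id": "user_a", "prompt": "a red fox", "media": ["img1.png"]},
    "ms_2002": {"owner_id": "user_b", "prompt": "private prompt", "media": ["img2.png"]},
}


def resolve_media_set(requesting_user_id: str, media_set_id: str) -> dict:
    """Return a media set only if the requesting user owns it."""
    media_set = MEDIA_SETS.get(media_set_id)
    if media_set is None:
        raise KeyError(media_set_id)
    # The check that was missing: being logged in is not enough; the
    # media_set_id supplied by the client must belong to the requester.
    if media_set["owner_id"] != requesting_user_id:
        raise AuthorizationError("media_set does not belong to requester")
    return media_set
```

With this check in place, user_a can fetch ms_1001, but a request for ms_2002 raises AuthorizationError instead of leaking another user’s prompt and media.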
“This wasn’t about chasing a bounty — it was about securing a system millions are starting to trust,” clarifies Sandeep. “If a platform as robust as Meta.AI can have such loopholes, it’s a clear signal that other AI-first companies must proactively test their platforms before users’ data is put at risk.”
AppSecure reported the vulnerability to Meta on December 26, 2024. Meta responded promptly, deploying a temporary fix on January 24, 2025, and a permanent resolution on April 24, 2025. In their official statement, Meta said: “You demonstrated an issue where a malicious actor could access users' prompts and AI-generated media via a certain GraphQL query, potentially allowing an attacker to access users’ private media. We mitigated this and found no evidence of abuse.” Meta awarded AppSecure $10,000 for the primary vulnerability and an additional $4,550 for related issues identified during the investigation.
The discovery underscores the growing attack surface in generative AI platforms, where a single missing access-control check can expose user-generated content and prompt histories. AppSecure’s findings emphasize the need for proactive security testing, particularly for AI systems handling sensitive data. The firm’s hands-on approach involves simulating real-world attacks to identify hidden flaws, helping organizations strengthen their defenses before threats emerge.
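One common way testers simulate this class of attack is to take an authenticated session and systematically request resource IDs that should belong to other users, flagging any that succeed. The helper below is a generic, hypothetical sketch of that probe loop (the names `probe_idor` and `fetch` are illustrative, not from AppSecure’s methodology), assuming `fetch` raises `PermissionError` when access is correctly denied.

```python
from typing import Callable, Iterable, List


def probe_idor(
    fetch: Callable[[str, str], dict],
    attacker_session: str,
    candidate_ids: Iterable[str],
) -> List[str]:
    """Return the resource IDs an attacker session can read but should not.

    `fetch(session, resource_id)` is expected to raise PermissionError
    when the server correctly denies access to a foreign resource.
    """
    leaked = []
    for resource_id in candidate_ids:
        try:
            fetch(attacker_session, resource_id)
        except PermissionError:
            continue  # correct behavior: access denied
        leaked.append(resource_id)  # unauthorized read succeeded
    return leaked
```

A run that returns an empty list suggests the endpoint enforces ownership; any IDs in the result are candidate IDOR findings worth manual verification.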
“Security is not just about fixing problems after they appear; it’s about anticipating risks and acting before damage occurs,” adds Sandeep. “That's why leading companies work with us to identify real-world risks early and build AI platforms that stay secure and reliable from the very beginning.”
As a CREST-accredited penetration testing firm, AppSecure has a strong reputation for responsibly uncovering vulnerabilities in AI-focused platforms. By examining user interactions and backend processes, the company helps businesses address risks early. This discovery highlights the critical role of proactive cybersecurity in protecting user trust and data integrity as AI adoption accelerates.
AppSecure’s work serves as a reminder for AI-first companies to prioritize security testing. With no evidence of exploitation and a swift fix by Meta, this incident demonstrates the value of responsible disclosure and collaboration in safeguarding emerging technologies.
AppSecure Security is a CREST-accredited penetration testing firm that identifies and addresses critical vulnerabilities through real-world attack simulations. Its experienced team focuses on testing web applications, APIs, and networks to expose hidden risks before threats can cause harm. By following industry standards and taking a proactive approach, AppSecure helps businesses strengthen their defenses and stay ahead of evolving cyber challenges, making it a trusted partner for comprehensive security solutions.