OpenAI has unveiled Aardvark, a new agentic AI system powered by GPT-5 and tailored for cybersecurity.
Described by OpenAI as being able to “think like a security researcher,” Aardvark is designed to identify potential software vulnerabilities before hackers can exploit them and to suggest fixes.
“Aardvark looks for bugs as a human security researcher might: by reading code, analyzing it, writing and running tests, using tools, and more,” OpenAI wrote in a blog post.
Humans are kept in the loop: Aardvark proposes changes as suggestions for engineers to review and apply rather than implementing them autonomously.
Originally developed as an in-house tool to help engineers identify and resolve issues, the system uses large language model-powered reasoning to understand code behavior, identify vulnerabilities and prioritize the issues.
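To make that workflow concrete, the sketch below shows a minimal LLM-assisted review loop in the spirit of what is described: read a code change, reason about its runtime behavior, flag likely issues, and suggest fixes. It is a hypothetical illustration only, not Aardvark’s actual pipeline; the prompt, the `review_diff` helper, and the model name are placeholders.

```python
# Hypothetical sketch of an LLM-assisted security review loop.
# Not Aardvark's pipeline; prompt and model name are placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

REVIEW_PROMPT = (
    "You are a security reviewer. Read the following diff, reason about how "
    "the code behaves at runtime, and list likely vulnerabilities (logic flaws, "
    "missing validation, privacy leaks). For each finding, suggest a fix."
)

def review_diff(diff_text: str) -> str:
    """Send a code diff to the model and return its security findings."""
    response = client.chat.completions.create(
        model="gpt-5",  # placeholder; substitute a model you have access to
        messages=[
            {"role": "system", "content": REVIEW_PROMPT},
            {"role": "user", "content": diff_text},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    sample_diff = """\
+ def reset_password(user_id, new_password):
+     # No check that the caller is authorized to change this account
+     db.update_password(user_id, new_password)
"""
    print(review_diff(sample_diff))
```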
Its success as an internal tool led to OpenAI’s decision to expand its use cases to commercial partners.
In tests, Aardvark identified 92% of both known and deliberately introduced vulnerabilities, and was able to uncover bugs such as logic flaws, incomplete fixes, and privacy issues.
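An “incomplete fix” of the kind mentioned above might look like the hypothetical example below: a naive patch strips `../` from a filename once, but an input such as `....//` collapses back to `../` after stripping, so path traversal is still possible. The file names and directory are invented for illustration, not drawn from Aardvark’s findings.

```python
# Illustrative example of an incomplete fix, with a safer alternative.
import os

BASE_DIR = "/var/app/uploads"

def read_upload_insecure(filename: str) -> bytes:
    # Incomplete fix: a single pass of stripping "../" can be bypassed,
    # e.g. "....//" becomes "../" after replacement.
    cleaned = filename.replace("../", "")
    with open(os.path.join(BASE_DIR, cleaned), "rb") as f:
        return f.read()

def read_upload_secure(filename: str) -> bytes:
    # Safer approach: resolve the final path and verify it stays under BASE_DIR.
    candidate = os.path.realpath(os.path.join(BASE_DIR, filename))
    if not candidate.startswith(BASE_DIR + os.sep):
        raise ValueError("path escapes upload directory")
    with open(candidate, "rb") as f:
        return f.read()
```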
The system is now available in private beta to “validate and refine” its capabilities in the field, with OpenAI inviting companies to apply to deploy the tool across a range of environments. The company also said it will offer Aardvark pro bono for non-commercial use cases.
Details on when Aardvark will be made publicly available have not been disclosed, though OpenAI said the launch comes amid a spike in software risk across industries, with more than 40,000 vulnerabilities reported in 2024.
“By catching vulnerabilities early, validating real-world exploitability, and offering clear fixes, Aardvark can strengthen security without slowing innovation,” OpenAI stated.

