The World Digital Technology Academy (WDTA) introduced its AI STR Series: Single AI Agent Runtime Security Testing Standards at a UN-hosted event in Geneva. This framework addresses the growing need for secure and ethical AI deployment across industries like finance and healthcare, emphasizing full lifecycle management to ensure safety and trust.
WDTA launched the AI STR Series for Single AI Agent Security Testing on July 11, 2025.
Standards focus on lifecycle management, from data governance to certification.
Targets risks in autonomous driving, healthcare, finance, and manufacturing.
Fourth in WDTA’s AI STR (Safety, Trust, Responsibility) certification suite.
Pilot certifications underway in finance and healthcare, with Asia-Pacific next.
Supports UN’s Global Digital Compact for ethical AI deployment.
At the “Global Consultation on the Social Aspects of Digital Technologies and AI” co-hosted by the United Nations Research Institute for Social Development (UNRISD) and WDTA in Geneva, the AI STR Series was unveiled to address security concerns in the rapidly expanding AI agent ecosystem. “Fair data governance and the integration of AI safety with ethics and social values are key to promoting global sustainable development,” said Peter Major, vice-chair of the UN Commission on Science and Technology for Development (CSTD) and Honorary Chairman of WDTA. The standards aim to embed ethics and responsibility across AI lifecycles.
The AI STR Series tackles the Collingridge dilemma, in which governance becomes harder once technologies are widely adopted. “Once new technologies are embedded in society, governance becomes exponentially harder—this is the Collingridge dilemma,” said Yale Li, WDTA Executive Chairman. The framework establishes enforceable testing and certification protocols to ensure safety from development to deployment, covering industries such as autonomous driving and healthcare.
Developed by a global task force spanning Asia, Europe, and North America, including experts from Microsoft, Google, and Ant Group, the AI STR Series emphasizes full lifecycle management. It includes data governance, model deployment, automated testing tools, and certification procedures to mitigate risks like data leaks and model tampering. Pilot certifications are active in finance and healthcare, with plans to expand to the Asia-Pacific region.
This is the fourth installment in WDTA’s AI STR (Safety, Trust, Responsibility) certification suite, following standards for Generative AI, Large Language Model (LLM) Security Testing, and LLM Supply Chain Security. The suite supports the UN’s Global Digital Compact, promoting secure and ethical AI globally. “2025 has seen AI agents proliferate across content creation, knowledge retrieval, workflow automation, and beyond,” said Yale Li. “But their deployment has been shadowed by mounting security concerns. These standards aim to put a ‘safety belt’ on the rapidly advancing AI agent ecosystem.”
The AI STR Series positions WDTA as a leader in global AI governance, fostering trust and safety in AI deployment. By integrating ethical standards with robust testing and certification, the framework aims to ensure that AI technologies drive sustainable development while addressing security challenges across industries.