
SAS, a global leader in data and AI, released findings from the IDC Data and AI Impact Report: The Trust Imperative, highlighting a surge in trust for generative AI (GenAI) over traditional AI despite insufficient investment in ethical safeguards. Based on a survey of 2,375 IT and business leaders worldwide, the study exposes a critical misalignment between perceived trust and actual trustworthiness, underscoring the need for robust governance to maximize AI's ROI and mitigate risks.
The IDC report reveals a striking contradiction: GenAI, with its humanlike interactivity, garners high trust (48% "complete trust") compared to agentic AI (33%) and traditional AI (18%), despite being less reliable and explainable. Quantum AI, still nascent, earns 26% complete trust, reflecting premature confidence. Concerns nonetheless persist: 62% of respondents worry about data privacy, 57% about transparency, and 56% about ethical use. Yet only 40% of organizations invest in trustworthy AI practices such as governance and explainability. "Our research shows a contradiction: that forms of AI with humanlike interactivity and social familiarity seem to encourage the greatest trust, regardless of actual reliability or accuracy," said Kathy Lange, Research Director of the AI and Automation Practice at IDC. "As AI providers, professionals and personal users, we must ask: GenAI is trusted, but is it always trustworthy? And are leaders applying the necessary guardrails and AI governance practices to this emerging technology?"
Despite 78% of respondents claiming full trust in AI, only 2% prioritize AI governance frameworks, and fewer than 10% develop responsible AI policies. This deprioritization risks stunted ROI: trustworthy AI leaders, those investing in governance, are 1.6 times more likely to achieve double or greater returns. The rapid rise of GenAI (81% adoption vs. 66% for traditional AI) heightens these risks, as organizations fail to match trust with accountability measures, leaving systems vulnerable to misuse and inefficiency.
Weak data infrastructure hampers AI success, with 49% citing non-centralized or non-optimized cloud environments as a top issue, followed by insufficient governance (44%) and skill shortages (41%). Accessing relevant data sources (58%), ensuring privacy and compliance (49%), and maintaining data quality (46%) are leading challenges, underscoring the need for robust data strategies to support AI’s integration into critical processes.
"For the good of society, businesses and employees – trust in AI is imperative," said Bryan Harris, Chief Technology Officer at SAS. "In order to achieve this, the AI industry must increase the success rate of implementations, humans must critically review AI results, and leadership must empower the workforce with AI."
The study calls for urgent investment in governance, data infrastructure, and skills to align AI trust with trustworthiness, ensuring ethical, high-ROI deployments in an AI-driven world.
SAS is a global leader in data and AI. With SAS software and industry-specific solutions, organizations transform data into trusted decisions. SAS gives you THE POWER TO KNOW®. Learn more at sas.com.