
Gartner Survey Reveals 34% of Organisations Are Already Using or Implementing AI Application Security Tools
Announcement posted by Gartner on 19 Sep 2023
Thirty-four percent of organisations are either already using or implementing artificial intelligence (AI) application security tools to mitigate the accompanying risks of generative AI (GenAI), according to a new survey from Gartner, Inc. Over half (56%) of respondents said they are also exploring such solutions.
The Gartner Peer Community survey was conducted from April 1 to April 7 among 150 IT and information security leaders at organisations where GenAI or foundation models are in use, planned for use, or being explored.
Twenty-six percent of survey respondents said they are currently implementing or using privacy-enhancing technologies (PETs), while 25% said the same of ModelOps and 24% of model monitoring (see Figure 1).
Figure 1. Organisations Using or Planning to Use Tools to Address Risks Related to Generative AI (Percentage of Respondents)

"IT and security and risk management leaders must, in addition to implementing security tools, consider supporting an enterprise-wide strategy for AI TRiSM (trust, risk and security management)," said Avivah Litan, Distinguished VP Analyst at Gartner. "AI TRiSM manages data and process flows between users and companies who host generative AI foundation models, and must be a continuous effort, not a one-off exercise to continuously protect an organisation."
IT Is Ultimately Responsible for GenAI Security
While 93% of IT and security leaders surveyed said they are at least somewhat involved in their organisation's GenAI security and risk management efforts, only 24% said they own this responsibility.
Among the respondents who do not own the responsibility for GenAI security and/or risk management, 44% reported that the ultimate responsibility for GenAI security rested with IT. For 20% of respondents, their organisation's governance, risk and compliance departments owned the responsibility.
Top-of-Mind Risks
The risks associated with GenAI are significant, continuous and will constantly evolve. Survey respondents indicated that undesirable outputs and insecure code are among their top-of-mind risks when using GenAI:
- 57% of respondents are concerned about leaked secrets in AI-generated code.
- 58% of respondents are concerned about incorrect or biased outputs.
"Organisations that don't manage AI risk will witness their models not performing as intended and, in the worst case, can cause human or property damage," said Litan. "This will result in security failures, financial and reputational loss, and harm to individuals from incorrect, manipulated, unethical or biased outcomes. AI malperformance can also cause organisations to make poor business decisions."
Gartner clients can read more in the report "Generative AI Security and Risk Management."
About Gartner for Cybersecurity Leaders
Gartner for Cybersecurity Leaders equips security leaders with the tools to help reframe roles, align security strategy to business objectives and build programs to balance protection with the needs of the organisation. Additional information is available at https://www.gartner.com/en/cybersecurity.
Follow news and updates from Gartner for Cybersecurity Leaders on X and LinkedIn using #GartnerSEC. Visit the Gartner Newsroom for more information and insights.
About Gartner
Gartner, Inc. (NYSE: IT) delivers actionable, objective insight that drives smarter decisions and stronger performance on an organisation's mission-critical priorities. To learn more, visit gartner.com.