Nearly three quarters (74%) of businesses are concerned about the privacy and data integrity risks of artificial intelligence (AI), which is slowing adoption of the technology, according to a new report.

The sixth edition of GlobalData’s Executive AI Briefing details polling by the company that found that 59% of businesses lack confidence in adopting the technology. Only a fifth (21%) of respondents reported high or very high AI adoption within their organisations.

“Many organisations are distrustful of AI tools developed by third parties, particularly due to the lack of transparency in how proprietary data, submitted through prompts or fine-tuning, is safeguarded,” the report states. “Nonetheless, developing AI models entirely in-house can be prohibitively expensive. The findings from GlobalData’s polls underscore the importance of AI vendors implementing robust data protection measures and establishing clear guardrails to reassure their clients.”

The findings of the three polls cited, which were conducted between May 2023 and March 2025 and each drew responses from more than 2,000 industry professionals, are reinforced by research from tech giant Cisco that found critical security flaws in major large language models (LLMs).

Cisco explained: “Using algorithmic jailbreaking techniques, our team applied an automated attack methodology on DeepSeek R1 which tested it against 50 random prompts from the HarmBench dataset. These covered six categories of harmful behaviours including cybercrime, misinformation, illegal activities and general harm.

“The results were alarming: DeepSeek R1 exhibited a 100% attack success rate, meaning it failed to block a single harmful prompt. This contrasts starkly with other leading models, which demonstrated at least partial resistance.”
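In outline, an evaluation of this kind comes down to counting how many harmful prompts elicit a non-refusal from the model. The sketch below illustrates the attack success rate calculation Cisco reports; the names used (query_model, is_refusal, the toy judge) are hypothetical placeholders, not Cisco’s actual harness or the real HarmBench tooling.

```python
from typing import Callable

def attack_success_rate(prompts: list[str],
                        query_model: Callable[[str], str],
                        is_refusal: Callable[[str], bool]) -> float:
    """Fraction of harmful prompts that the model fails to refuse."""
    successes = 0
    for prompt in prompts:
        response = query_model(prompt)   # send the harmful prompt to the model under test
        if not is_refusal(response):     # a judge decides whether the reply is a refusal
            successes += 1               # no refusal means the attack succeeded
    return successes / len(prompts)

# Toy usage: a model that refuses everything scores a 0% attack success rate;
# one that complies with every prompt, as DeepSeek R1 did, scores 100%.
refusing_model = lambda p: "I can't help with that."
naive_judge = lambda r: "can't help" in r.lower()
print(attack_success_rate(["harmful prompt A", "harmful prompt B"],
                          refusing_model, naive_judge))  # prints 0.0
```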

Attack success rates for the other LLMs tested were 96% for Meta’s Llama 3.1 405B, 86% for OpenAI’s GPT-4o, 64% for Google’s Gemini 1.5 Pro, 36% for Anthropic’s Claude 3.5 Sonnet and 26% for OpenAI’s o1-preview.

On the implications of this, GlobalData’s briefing contends: “Such poor results will make businesses exploring generative AI integration reticent given the reputational risk that would come with deploying harmful AI systems.”

Among ten implications of DeepSeek’s emergence for the broader AI market, GlobalData states that geopolitical repercussions need to be factored into any investment decision, that closed-source models will come under pressure to become open source, and that security and safety must remain the highest priorities.

Despite the privacy, integrity and security concerns, GlobalData forecasts that investment in AI will continue to rise and notes that DeepSeek has demonstrated the value of reinforcement learning – a process through which an AI agent learns which decisions are best through iterative feedback in the form of rewards and penalties.
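To make that definition concrete, the toy sketch below runs tabular Q-learning, a textbook reinforcement learning algorithm, on a five-state corridor where the agent is rewarded for reaching the goal and penalised for every other step. It is a generic illustration of rewards-and-penalties feedback only, not DeepSeek’s actual training pipeline, which applies reinforcement learning to language model outputs at far larger scale.

```python
import random

N_STATES = 5
ACTIONS = [0, 1]                       # 0 = move left, 1 = move right
alpha, gamma, epsilon = 0.1, 0.9, 0.2  # learning rate, discount factor, exploration rate
Q = [[0.0, 0.0] for _ in range(N_STATES)]

for episode in range(500):
    s = 0                              # start at the left end of the corridor
    while s != N_STATES - 1:           # episode ends at the goal state
        # Epsilon-greedy action choice: usually exploit, occasionally explore.
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = 0 if Q[s][0] > Q[s][1] else 1
        s_next = max(0, s - 1) if a == 0 else s + 1
        # Reward of +1 at the goal, a small penalty for every other step.
        r = 1.0 if s_next == N_STATES - 1 else -0.01
        # Iterative feedback: nudge Q(s, a) toward reward plus discounted future value.
        Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
        s = s_next

# After training, 'right' should score higher than 'left' in every state.
print([round(q[1] - q[0], 2) for q in Q])
```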