As leaders meet at the World Economic Forum in Davos to discuss global collaboration on AI, technology consulting company Accenture has announced ten innovation hubs for generative AI (GenAI) around the globe.
The GenAI hubs, which include one in London, are part of the company’s $3bn investment in AI, intended to help Accenture’s clients make the most of the technology.
The move, which Accenture hopes will capitalise on the AI boom, comes as experts argue that stronger global collaboration is needed to regulate AI successfully.
Research company GlobalData forecasts that the overall AI market will be worth $909bn by 2030, having grown at a compound annual growth rate (CAGR) of 35% between 2022 and 2030.
In the GenAI space, revenues are expected to grow from $1.8bn in 2022 to $33bn in 2027 at a CAGR of 80%.
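For context, a compound annual growth rate compounds year on year, so those GenAI figures are internally consistent: $1.8bn growing at roughly 80% a year for five years lands near $33bn. The minimal Python sketch below illustrates that arithmetic (purely illustrative; the function is ours, not GlobalData’s model):

```python
# Illustrative CAGR check (not GlobalData's model): end = start * (1 + cagr) ** years

def implied_end_value(start: float, cagr: float, years: int) -> float:
    """Project a starting value forward at a constant compound annual growth rate."""
    return start * (1 + cagr) ** years

# GenAI revenue: $1.8bn in 2022 growing at 80% a year over five years (2022-2027)
print(round(implied_end_value(1.8, 0.80, 5), 1))  # ~34.0, in line with the ~$33bn forecast
```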
As AI investment and adoption continue to grow, so do the challenges that come with them.
AI challenges such as privacy, job displacement, cybersecurity and human bias have all been points of contention for governments and technology companies.
The US and Europe have taken steps to regulate AI with notably different approaches.
The US has taken a largely non-regulatory approach, investing instead in AI risk management and advanced research. The EU’s approach rests on more centralised, comprehensive legislation covering different areas of emerging technology.
While the two share similar outlooks on how AI should function, their approaches to enforcement could hardly be further apart.
The EU AI Act, which is unlikely to be enforced before 2025, has received significant backlash from some of the bloc’s largest companies.
In a letter signed by dozens of firms, the EU was told the legislation “would jeopardize Europe’s competitiveness and technological sovereignty without effectively tackling the challenges we are and will be facing.”
Experts remain divided on how to govern AI
Laura Petrone, analyst at GlobalData, told Verdict that global cooperation is critical to developing responsible AI and ensuring that AI-related ethical principles are reflected in standards and companies’ best practices.
“It’s even more critical when regulation is still at an early stage,” she said.
Petrone added: “Governments, businesses, and civil society actors must speak to one another about how to address these risks and put in place guardrails.”
Anthony Deighton, general manager at data company Tamr, agrees that global collaboration in AI serves as “a fundamental pillar for its successful, ethical, and responsible deployment.”
“This collaborative approach offers a multifaceted perspective on the creation and implementation of AI technologies, guaranteeing that these tools incorporate an awareness of a broad spectrum of social, cultural, and ethical considerations,” Deighton told Verdict.
However, some experts believe it will be difficult for global companies and governments to reach a unified agreement on AI regulation.
“While we have strong global momentum right now in terms of putting in place structures to ensure that global corporations and governments adopt AI securely and responsibly, it’s highly unlikely we will see a consensus on global AI regulation anytime soon,” Dr Ellison Anne Williams, founder and CEO of cybersecurity company Enveil, told Verdict.
“How AI is governed in different countries will manifest differently — which is consistent with what we’ve seen for privacy more broadly,” she added.
For now, the regulation of AI rests largely in the hands of its Big Tech creators. Many have criticised this, arguing that those who profit from the technology should not be the ones setting its guardrails.
Some argue that tougher measures are needed to truly protect society against AI’s potential dangers.
“Relying too much on heads of Big Tech – such as Meta, Google and Microsoft – will be like inviting them to referee their own football game,” Michael Queenan, CEO and co-founder of Nephos Technologies, told Verdict.
“Big Tech companies, who are set to profit hugely from AI, shouldn’t be able to dominate conversations about setting guardrails and regulating its use,” Queenan said.