How the world can regulate artificial intelligence (AI) from an ethical and environmental point of view is among the key considerations for the technology.
GlobalData’s recently published Artificial Intelligence – Executive Briefing identifies key areas for consideration in relation to the development of AI. While the technology is set to have revolutionary impacts across a wide variety of industries, there are concerns about whether a global standard of AI regulation can be achieved. In addition, the report highlights AI’s complex relationship with environmental sustainability.
GlobalData forecasts a compound annual growth rate (CAGR) of 35% for the AI industry between 2022 and 2030, taking the market to a potential $909bn by 2030. Indeed, a recent GlobalData poll across the company’s network of B2B websites as part of its Thematic Intelligence: Tech Sentiment Polls Q3 2023 found AI to be viewed as a highly promising technology, with 53% of 368 respondents believing it will live up to all of its hype.
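The forecast implies a rough 2022 base value for the market, which the article does not state explicitly. A minimal arithmetic sketch, assuming the 35% CAGR compounds annually over the eight years from 2022 to 2030:

```python
# Back out the implied 2022 market size from GlobalData's figures.
# (Illustrative only: the report does not state the base-year value.)
cagr = 0.35
value_2030 = 909  # $bn, per GlobalData forecast
years = 2030 - 2022

implied_2022 = value_2030 / (1 + cagr) ** years
print(round(implied_2022, 1))  # roughly $82bn
```

Reversing the calculation, $82bn compounding at 35% a year for eight years returns to approximately $909bn.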
Currently, generative AI is the fastest-growing AI value chain, with adoption of the technology seen across a variety of industries over the past 12 months.
Although it is currently used mainly for basic data analysis, virtual assistants and content production, the technology is predicted to eventually deliver fully automated content, blockbuster films, predictive customer support, real-time investment and risk recommendations, and predictive health models.
AI regulation
Progression in regulation has been seen with the EU’s AI Act, which is due to come into force in late 2023 or 2024. The regulation is aimed at ensuring AI models are non-discriminatory and environmentally friendly, guaranteeing safety, transparency and traceability.
In November, meanwhile, the UK government will hold the AI Safety Summit to discuss risks and mitigation strategies, aiming to lead international cooperation in AI regulation.
On the UK’s geopolitical position and the global regulation of AI, GlobalData’s Thematic Research Director Josep Bori comments: “This could be difficult in the current geopolitical position of the UK after Brexit. Standard setting and global regulatory frameworks are better handled by large international organisations and regulatory bodies, rather than single countries.
“It is unclear to what extent the UK can lead the global AI safety movement, despite its academic strengths.”
Assessing the unclear future of AI regulation, Bori says: “Developments in generative AI are coming fast, and AI regulation is in very early stages, so we believe the situation will remain fluid for quite some time. We do expect divergence across jurisdictions and frequent regulatory changes. Whether the global regulatory framework will eventually converge to a minimum common level or just outright diverge is unclear.”
Corroborating this sense of uncertainty about whether global regulatory standards on AI can be achieved, Benjamin Chin, Associate Analyst in Thematic Intelligence at GlobalData, adds: “In the coming years, more governments will enact AI regulations seeking higher technical and ethical standards. However, differing approaches across countries, as well as the active role of big tech, will mean that a unified vision of AI regulation, which could benefit everyone, is unlikely to be achieved.”
Environmental and ethical considerations of AI
AI has a multifaceted relationship with the environment, with the executive briefing highlighting the technology’s potential to be both detrimental and beneficial. While training large language models (LLMs), for example, can consume vast amounts of energy, the same models can also aid sustainability, for instance by monitoring renewable energy consumption in smart grids.
Bori notes that the environmental impacts of the technology have garnered more attention over the last year and foresees environmental considerations of LLMs becoming increasingly important.
“Given the importance of complying with environmental, social, and governance frameworks, we believe energy consumption or carbon footprint will over time become a competitive factor in generative AI, with companies potentially selecting LLMs with lower carbon footprints even if they are lower-performing,” he says.
Of this, Chin adds: “From an environmental standpoint, as per the sixth edition of the AI Index Report 2023 published by Stanford University, the carbon dioxide-equivalent emissions produced by GPT-3 stood at 502 tons in 2022. In comparison, a return flight from London to New York generates about 986kg of CO2 per passenger.”
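Putting those two figures on the same scale makes the comparison concrete. A quick sketch, treating the quoted 502 tons as metric tonnes of CO2-equivalent:

```python
# Compare GPT-3's reported training emissions with per-passenger
# flight emissions, using the figures quoted above.
gpt3_emissions_kg = 502 * 1000    # 502 tonnes CO2e (AI Index Report 2023)
flight_per_passenger_kg = 986     # London-New York return flight, per passenger

equivalent_flights = gpt3_emissions_kg / flight_per_passenger_kg
print(round(equivalent_flights))  # ~509 return flights
```

In other words, training GPT-3 reportedly emitted roughly as much CO2-equivalent as around 500 passengers flying return between London and New York.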
Remarking on the ethical complications of LLMs, Chin continues: “In terms of ethics, the top concern is that as LLMs incorporate huge amounts of data, they will be more likely to absorb mainstream views and perpetuate existing social biases.”