Speaking at an event at the Nordic Business Forum 2023, NYU marketing professor Scott Galloway posed a question to the audience on tech regulation.
“Who here has parents who are 89?” he asked, referring to the age of some members of Congress, such as Charles E. Grassley and the late Dianne Feinstein. Most of the audience raised their hands, to which Galloway followed up: “And who here would want their parents to handle discussions on AI?”
Though abrupt, Galloway’s question raises an important discussion about the relationship between regulators and the tech industry when it comes to AI. Verdict has already questioned whether the regulation of AI is ever possible, but who should drive the writing of these laws?
The average age of Congress members in 2023 is 58, according to FiscalNote, well above the median age of US citizens, which stands at 38.
Although AI has existed in university computer labs since the early 1960s, it has only been released to the public in the last few years. Is Congress too old to be computer literate and, if so, how can digitally illiterate regulators regulate AI?
In response to Galloway’s questions, GlobalData principal analyst Laura Petrone says she feels the professor’s comments were “quite extreme” and required “some distinction” to be understood.
For Petrone, it is not a question of age but rather of ensuring that more technological and legal experts are involved in writing AI regulations. To achieve this, she states that collaboration between tech companies and regulators, regardless of age, will be critical.
One significant event to watch, in Petrone’s opinion, will be the upcoming UK AI Safety Summit.
“It will be interesting to see the level of collaboration achieved at [the UK Summit] and to what extent tech companies will be willing to disclose these internal details,” she concluded.
Like the US Congressional Forums on AI, the UK’s AI Safety Summit is likely to be held predominantly behind closed doors, with only 100 attendees admitted. Whilst Big Tech has been invited to attend these talks, how can regulators ensure that tech experts are leading the discussion rather than passively listening?
Dr Peter van der Putten, director of the AI Lab at Pegasystems, outlined the benefits of inviting tech experts to join the AI regulation process.
“Everyone is a stakeholder in AI as everyone is affected by it,” began van der Putten. “AI expertise helps to understand where the real opportunities and risks lie.”
“Lawmakers shouldn’t make sweeping assumptions that the tech world is opposed to AI regulation, but differentiate between tech focussing on responsible AI applications, versus tech companies opposing any sort of regulation,” he stated.
Refereeing their own game?
However, collaboration between Big Tech and regulators cannot be the sole driving force behind AI regulation.
Michael Queenan, CEO of data company Nephos Technologies, instead stresses the importance of including “impartial” voices in AI discussions.
“Relying too much on heads of Big Tech – such as Meta, Google and Microsoft – will be like inviting them to referee their own football game,” Queenan posits.
Expanding on this, Queenan reminds Verdict that Big Tech is positioned to profit immensely from AI. If GlobalData forecasts are accurate, AI is on track to become a global market worth $984bn by 2030.
Likening AI regulatory discussions to the growth of social media sites in the 2000s, Queenan states that it is currently impossible to comprehend the full social consequences AI could have.
“What is needed is involvement from neutral experts that can offer valuable advice on how to avoid the real dangers that AI poses, without any ulterior motives,” Queenan stated.
Including Big Tech in AI regulatory discussions may require these companies to provide an uncomfortable degree of transparency into just how they are set to profit from AI and where this money comes from.
Whilst companies like Microsoft-backed OpenAI and Google’s parent company Alphabet have publicly promised to prioritise user privacy and avoid bias within their AI software, Tim O’Reilly, founder and CEO of technology training company O’Reilly, says these companies are often too vague in describing the methods they use to achieve these goals. O’Reilly helps businesses upskill their employees through training on new technologies, including AI.
In O’Reilly’s opinion, disclosures about these companies’ practices too often emerge piecemeal via white papers, earnings calls and even whistleblowers.
“The best place to start with AI [regulation] is by mandating transparency and building regulatory institutions for enforcing accountability,” he stated.
Providing only general reassurances, says O’Reilly, is unacceptable.
Whilst tech companies can provide regulators with the technical knowledge required to understand AI, the importance of impartiality and transparency cannot be overstated when involving Big Tech in regulation.