New large language models (LLMs) from OpenAI and Google have the tech community in an uproar right now.
Users of OpenAI’s ChatGPT have generally marvelled at how impressively the model strings together eloquent sentences and even generates functional code. But these capabilities inspire fear as well as excitement. OpenAI co-founder Sam Altman’s description of the recently launched GPT-4 underlines the point: the model, Altman speculated, could prove either “dangerous” or “the greatest technology humanity has yet developed”.
There are plenty of good reasons to be afraid of these advanced LLMs. Altman himself has expressed concern that GPT-4 could be used to produce huge quantities of credible-sounding misinformation or to generate code for cybercriminals. Another worry is the number of jobs advanced AI could automate. After all, why pay for manual content creation when generative AI can produce it at almost no cost? Ever since the Industrial Revolution, technological advances have prompted fears of rising unemployment as machines automate tasks that humans were previously paid to perform. Are those fears justified today?
Eloquence vs truth
The crucial question people should ask themselves is this: am I paid for the aesthetic quality of the content I produce, or for its factual credibility? As LLMs currently stand, they can recreate the former but not the latter. An LLM is, at bottom, a very good mimic of language and writing: trained on enormous bodies of written text, it can produce output that is grammatically sound and even stylistically and tonally congruous with the user’s prompt.
However, there is no guarantee that an LLM’s output will be factually accurate, because there is no guarantee that the text it was trained on is factually accurate. Consider how much online content is oversimplified or misleading. Even well-respected media platforms can produce content that over-generalises or is outright biased. Social media platforms and online forums are worse still.
Most LLM developers are not forthcoming about how their models are trained, but any reliance on online content (and where else would developers find such troves of accessible training data?) compromises an LLM’s ability to reliably produce factually accurate output.
A now-infamous example of this came from Google’s LLM Bard, which in its first demo wrongly claimed that the James Webb Space Telescope had taken the first image of a planet outside our solar system; the blunder wiped roughly 7% off the market value of Google’s parent company, Alphabet.
LLMs: which jobs are at risk?
On the one hand, journalists, research analysts, and consultants can breathe easy for now. You may have heard office jokes that your report-writing or research job is now automatable. In truth, it isn’t yet. Imagine McKinsey announced tomorrow that it was completely automating its research and report-writing operations with GPT-4, replacing swathes of human analysts in the process. It would effectively be admitting that its research reports offer eloquent writing rather than genuine insight, since eloquence is the only quality of a human analyst that an LLM can mimic.
This is not to say that McKinsey (and companies like it) has no use for LLMs. LLMs can expedite the research and drafting of written content, but human supervision is needed to ensure that content’s factual accuracy.
On the other hand, creative writers may be in trouble. As stated previously, LLMs cannot reliably produce factually accurate content, but they can recreate the aesthetic qualities of good writing (grammatical accuracy, appropriate tone, style, structure, and so on) that creative writers are ultimately paid for. The creative potential of generative AI was demonstrated most clearly in September 2022, when an AI-generated image took first prize in a fine art contest in the US. Even back in 2016, an AI-written novella made it past the first round of screening for a national literary prize in Japan. AI has progressed considerably since then, and that progress could threaten the livelihoods of creatives.
What should be done?
LLMs can be excellent tools for content creation and research. Luckily, they still require human supervision when performing these functions, and therefore do not currently threaten the jobs of their human supervisors. Creative writers, unfortunately, are more vulnerable. This needs to be addressed, because a world in which literary art is created by machines is unquestionably a dystopia.