Meta “waited patiently on the sidelines” as artificial intelligence (AI) applications like ChatGPT were released, learning from their mistakes before announcing its own large language model (LLM) system, an expert has claimed.
Meta’s new LLM, named LLaMA, aims to help scientists and researchers explore new applications for AI technology.
Meta’s announcement comes at a time when LLMs are at the height of mainstream popularity, powering applications like OpenAI’s ChatGPT, Google’s unreleased Bard and Microsoft’s Bing AI.
“In the increasingly high-profile generative AI race, it is no longer an advantage to possess a language model but a necessity to ensure that you are still regarded as a competitor,” Emma Taylor, analyst at research firm GlobalData, told Verdict.
Meta claims its new LLM stands out from competing models on the market for several reasons.
According to the company, LLaMA will come in several sizes, ranging from 7 billion parameters to 65 billion parameters – parameters being the numerical weights a model learns during training, not instructions written by hand.
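For readers outside the field, a parameter count is simply a tally of those learned weights, and it follows directly from a network’s layer shapes. The minimal PyTorch sketch below is illustrative only – the layer sizes loosely echo LLaMA-7B’s published dimensions, and this is not Meta’s code:

```python
# Minimal sketch of what "parameters" are: the learned weights of a model.
# Layer sizes are illustrative (loosely echoing LLaMA-7B's published
# hidden/feed-forward dimensions); this is not Meta's actual code.
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(4096, 11008),  # one weight matrix of 4096 x 11008, plus biases
    nn.ReLU(),
    nn.Linear(11008, 4096),
)

# Every weight and bias tensor counts toward the headline figure.
n_params = sum(p.numel() for p in model.parameters())
print(f"{n_params:,} parameters")  # roughly 90 million for this toy stack
```

Repeating that counting exercise across dozens of stacked transformer layers is how headline figures like 7 billion or 65 billion parameters arise.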
Larger models, like OpenAI’s GPT-3 with 175 billion parameters, have been successful in expanding the technology’s capabilities, but they cost more to operate.
“Meta has waited patiently on the sidelines as models like ChatGPT were released, learning from the mistakes of others and working out how to differentiate its product,” Taylor said.
Taylor points to other models being rushed into public view and making highly visible mistakes.
“This includes Google’s Bard, which made a factual error in its first public demo, causing Google’s share price to plummet,” Taylor said.
Bigger isn’t always better, says Meta
Meta says that smaller models trained on more tokens – pieces of words – are easier to retrain and fine-tune for specific product use cases.
The smallest version of Meta’s LLaMA, LLaMA 7B, is trained on one trillion tokens, according to Meta’s announcement.
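A token is likewise easy to see in practice. The sketch below uses OpenAI’s open-source tiktoken library purely as an illustrative stand-in (LLaMA ships its own SentencePiece-based tokenizer) to break a sentence into the sub-word pieces a model actually consumes:

```python
# Tokenization sketch using OpenAI's open-source tiktoken library as an
# illustrative stand-in; LLaMA uses its own SentencePiece-based tokenizer.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
ids = enc.encode("Smaller models trained on more tokens")

# Decode each id individually to see the sub-word pieces.
print([enc.decode([token_id]) for token_id in ids])
```

Counting such pieces across an entire training corpus is what yields totals like the one trillion tokens cited for LLaMA 7B.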
Meta claims that, despite the rising sophistication of LLMs, “full research access to them remains limited because of the resources that are required to train and run such large models.”
“This restricted access has limited researchers’ ability to understand how and why these large language models work,” the Meta announcement read.
Mark Zuckerberg’s metaverse-centric company believes working with smaller LLMs will help researchers improve known issues such as bias, toxicity, and misinformation.
AI has not escaped tightening budgets
Despite the rising popularity of applications like ChatGPT, AI has not escaped the tightening of purse strings from venture capitalists, with investment in the sector plummeting in 2022.
In 2013, 298 deals were completed with a total value of $1.6bn. Deal activity continued to grow over the decade, peaking in 2021 with a whopping 3,694 deals worth a total of $127bn, according to GlobalData.
In 2022, the number of deals completed fell to 3,507, at a much lower total value of $72.9bn.
GlobalData is the parent company of Verdict and its sister publications.