Nearly 70 years since Alan Turing posed the question "Can machines think?", artificial intelligence (AI) is finally beginning to have an impact on the global economy. Proponents of AI believe that it has the potential to transform the world as we know it. So what is AI in business, and what are the main themes?
Within the AI industry, there are seven key technology categories: machine learning, data science, conversational platforms, computer vision, AI chips, smart robots and context-aware computing.
Machine learning
Machine learning (ML) is an application of AI that gives computer systems the ability to learn and improve from data without being explicitly programmed. Examples include predictive data models or software platforms that analyse behavioural data. A rapidly growing number of companies are taking machine learning and applying it to industry-specific challenges, such as detecting bank fraud or offering personalised recommendations based on past purchases. Deep learning (DL) is a subset of machine learning, built on artificial neural networks that attempt to mimic the way neurons in the human brain talk to each other.
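To make "learning from data" concrete, the sketch below trains a simple classifier with the open source scikit-learn library. The synthetic dataset stands in for something like labelled bank transactions; every value and parameter here is an illustrative assumption, not drawn from any real fraud system.

```python
# A minimal machine learning sketch, assuming scikit-learn is installed.
# The synthetic "transactions" are stand-ins for real labelled data.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Generate a toy dataset: 1,000 "transactions" with 10 numeric features,
# where roughly 5% belong to the rare (fraud-like) class.
X, y = make_classification(n_samples=1000, n_features=10,
                           weights=[0.95], random_state=42)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)

# The model learns patterns from examples, not hand-written rules.
model = RandomForestClassifier(n_estimators=100, random_state=42)
model.fit(X_train, y_train)

print(f"Held-out accuracy: {model.score(X_test, y_test):.2f}")
```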
Data science
Data science is an interdisciplinary field that aims to derive knowledge or insights from large amounts of data, both structured and unstructured. Machine learning is one of the tools data scientists use for tasks such as data mining, which identifies patterns in data with software, and data analysis through the application of predictive algorithms.
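The short pandas sketch below illustrates one routine data-science task, mining a table for simple patterns; the sales figures are invented purely for illustration.

```python
# A minimal data-mining sketch, assuming pandas is installed.
# The sales table is invented for illustration.
import pandas as pd

sales = pd.DataFrame({
    "region":  ["North", "North", "South", "South", "South"],
    "product": ["A", "B", "A", "A", "B"],
    "revenue": [120.0, 80.0, 200.0, 150.0, 90.0],
})

# Aggregate to surface a pattern: which region/product pairs earn most?
summary = (sales.groupby(["region", "product"])["revenue"]
                .sum()
                .sort_values(ascending=False))
print(summary)
```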
Conversational platforms
Conversational platforms employ a variety of technologies, including speech recognition, natural language processing (NLP), contextual awareness and machine learning, to enable human-like interaction with computer systems. Virtual personal assistants, like Amazon's Alexa, can schedule appointments, provide weather updates and play music based on voice commands, while a growing number of companies across industries are implementing virtual agents in areas such as customer service and human resources.
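The toy sketch below shows the first step such a platform performs on a user utterance: working out the intent. Production systems use trained NLP models; the keyword lookup assumed here merely shows the shape of the problem.

```python
# A toy intent-matching sketch. Real conversational platforms use trained
# NLP models; this keyword lookup is a deliberately simple stand-in.
INTENTS = {
    "weather":  {"weather", "forecast", "rain", "sunny"},
    "music":    {"play", "song", "music"},
    "schedule": {"appointment", "meeting", "calendar"},
}

def detect_intent(utterance):
    words = set(utterance.lower().split())
    # Pick the intent whose keyword set overlaps the utterance most.
    best = max(INTENTS, key=lambda intent: len(words & INTENTS[intent]))
    return best if words & INTENTS[best] else "unknown"

print(detect_intent("Will it rain tomorrow"))  # -> weather
```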
Computer vision
This category includes all technology that attempts to capture and interpret images or videos in a meaningful or useful way, including facial recognition software and visual search tools.
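As a concrete illustration, the sketch below uses the open source OpenCV library and its bundled Haar cascade model to detect faces in an image; "photo.jpg" is a placeholder filename, and the package is assumed to be installed.

```python
# A minimal face-detection sketch using OpenCV's bundled Haar cascade.
# Assumes the opencv-python package is installed; "photo.jpg" is a
# placeholder for any image you want to scan.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

image = cv2.imread("photo.jpg")
if image is None:
    raise SystemExit("photo.jpg not found")

gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Returns one (x, y, width, height) bounding box per detected face.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
print(f"Detected {len(faces)} face(s)")
```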
AI chips
The explosion of interest in AI has fuelled demand for chips with the high levels of computing power required to rapidly crunch huge datasets. Graphics processing units (GPUs) have become standard for machine learning.
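As a rough illustration of why GPUs have become the standard (assuming the PyTorch library is installed), the sketch below runs the kind of large matrix multiplication that dominates machine learning workloads, on a GPU if one is available:

```python
# A small sketch showing the CPU/GPU switch in PyTorch. Assumes PyTorch
# is installed; falls back to the CPU if no GPU is present.
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"
print(f"Running on: {device}")

# Large matrix multiplications like this dominate neural-network training;
# GPUs perform them massively in parallel.
a = torch.randn(2048, 2048, device=device)
b = torch.randn(2048, 2048, device=device)
c = a @ b
print(c.shape)  # torch.Size([2048, 2048])
```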
Smart robots
Smart robots are capable of anticipating and adapting to certain situations, based on the interpretation of data derived from an array of sensors, such as 3D cameras, ultrasound transmitters, force sensors and obstacle detectors. They can work safely alongside humans in factories, where they are known as collaborative robots, or co-bots. They also have applications in areas such as healthcare, providing monitoring and assistance for the elderly, and retail, offering in-store assistance to customers; a toy sketch of the sense-and-adapt loop follows below.
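The loop below illustrates the adapt-to-sensor-data idea in miniature; the distance readings and safety threshold are invented stand-ins for a real ultrasound or obstacle-detector feed.

```python
# A toy control loop: a robot slows and stops as an obstacle approaches.
# Readings and threshold are invented stand-ins for real sensor data.
SAFE_DISTANCE_CM = 50

def choose_action(distance_cm):
    # Adapt behaviour as an obstacle (perhaps a human co-worker) nears.
    if distance_cm < SAFE_DISTANCE_CM / 2:
        return "stop"
    if distance_cm < SAFE_DISTANCE_CM:
        return "slow"
    return "proceed"

for reading in [120.0, 60.0, 40.0, 15.0]:  # simulated sensor stream
    print(f"{reading:6.1f} cm -> {choose_action(reading)}")
```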
Context-aware computing
This refers to systems that adapt their behaviour according to the physical environment in which they are operating. Contextual information can include location, orientation, temperature, light, pressure and humidity. We would also include in this category those technologies that enable interaction through gestures, such as hand or eye movements. Context-awareness is typically a feature of IoT products, like Google's Nest thermostats, and is also being incorporated into the virtual/augmented reality technology being produced by the likes of Magic Leap.
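The minimal sketch below, loosely modelled on a smart thermostat, shows behaviour adapting to context; the rules and values are invented for illustration.

```python
# A minimal context-aware sketch, loosely modelled on a smart thermostat.
# All rules and values are invented for illustration.
def target_temperature(occupied, outside_temp_c, hour):
    """Adapt the heating set-point to occupancy, weather and time of day."""
    if not occupied:
        return 16.0                      # save energy in an empty home
    if hour >= 22 or hour < 6:
        return 18.0                      # cooler for sleeping
    return 21.0 if outside_temp_c < 10 else 20.0

print(target_temperature(occupied=True, outside_temp_c=4.0, hour=19))  # 21.0
```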
Why does AI matter for business?
Incumbents in virtually every industry are set to face some kind of game-changing disruption from artificial intelligence technologies. In terms of the speed of adoption of AI, vertical markets can be grouped into three categories:
- High AI adoption, including the technology industry as well as banking and financial services, automotive and telecoms.
- Medium AI adoption, including retail, media, healthcare, insurance, transport and logistics, and travel and tourism.
- Low AI adoption, covering construction, energy, education, and the public sector.
In a recent GlobalData survey of more than 3,000 companies worldwide, some 60% of respondents identified AI platforms and chatbots, machine learning and deep learning as current technology priorities. Of the 40% who did not, approximately one-third identified them as planned technology priorities.
It is not only companies that are making AI investment a priority, but countries too. China is the most obvious example, pledging to become the world's AI leader by the end of the next decade, but governments in other nations are also backing large spending projects to ensure they do not miss out on the positive effects of the AI boom.
In the UK, the government has touted its investment in innovative technologies like AI and, in April 2018, secured a deal between private and public groups expected to deliver almost £1bn ($1.4bn) of investment. French president Emmanuel Macron has said that his government will invest €1.5bn ($1.8bn) in AI research by the end of his term in 2022, while the European Union has said that €20bn ($24bn) needs to be poured into AI over two years if the region is to keep pace with the US and China.
The US remains the leader in the development of AI technology, but China has already begun to erode some of its advantages and many observers are concerned about the lack of leadership on this issue coming from the Trump White House.
What are the big themes around AI?
The era of "AI for X"
The potential applications of AI are endless and, as the technology becomes more mainstream, it will be used to tackle increasingly niche use-cases. Already there are examples of companies using AI to brew beer, compose music and suggest skincare routines.
Edge computing
Edge computing, in which more data processing is done at the edge of the network, nearer to the data source, reduces latency and enables actions to be triggered in real time. It will become increasingly prevalent in AI-enabled technologies, in particular smart robots, autonomous vehicles and other consumer devices. Apple's A11 Bionic chip, found in the iPhone X, boasts a neural engine capable of handling algorithms for functions like facial recognition and augmented reality on the device, rather than in the cloud.
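The toy sketch below captures the pattern: raw readings are handled on the device, and only significant events travel upstream. The threshold and readings are invented for illustration.

```python
# A toy sketch of the edge-computing pattern: sensor readings are
# processed on the device and only significant events are forwarded to
# the cloud. Threshold and readings are invented for illustration.
THRESHOLD = 0.9

def process_on_edge(readings):
    # React locally with no network round-trip; only anomalies go upstream.
    return [r for r in readings if r > THRESHOLD]

sensor_stream = [0.2, 0.4, 0.95, 0.3, 0.99]
events_for_cloud = process_on_edge(sensor_stream)
print(f"Forwarded {len(events_for_cloud)} of {len(sensor_stream)} readings")
```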
Quantum computing
The race to reach quantum supremacy, the point at which a quantum computer can carry out calculations faster than a classical computer ever could, is well underway, with Google, IBM and Microsoft leading the pack. AI, and particularly machine learning, stands to benefit, as quantum computers should be able to complete extremely complex calculations, involving very large data sets, in a fraction of the time it would take today's machines. For effort-intensive AI chores like classification, regression and clustering, quantum computing will open up an entirely new realm of performance and scale.
Capsule networks
Influential researcher Geoffrey Hinton is often referred to as the father of deep learning, with his work on artificial neural networks having laid the foundation for the recent AI boom. His most recent research, however, has sought to address issues with machine learning systems, particularly around computer vision, with a new approach called capsule networks. Promising improved error rates and potentially requiring less data, capsule networks could become a key building block for future AI systems.
Open source AI
In order for AI to reach its full potential, it needs to be accessible to as many people as possible. Rather than jealously guarding their intellectual property, technology companies are actively encouraging people to experiment with, and build on, their technology, whether through application programming interfaces (APIs), software development kits (SDKs) or open source software platforms like Google's TensorFlow or Microsoft's Cognitive Toolkit. This democratisation of AI is likely to accelerate the development of the field and attract developers, two crucial elements in ensuring its longevity.
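As an illustration of how little code these platforms demand, the sketch below defines and trains a small neural network with TensorFlow's Keras API; the random data is a stand-in for a real labelled dataset, and the library is assumed to be installed.

```python
# A minimal open source AI sketch using TensorFlow's Keras API.
# The random data is a placeholder for a real labelled dataset.
import numpy as np
import tensorflow as tf

X = np.random.rand(500, 8).astype("float32")
y = (X.sum(axis=1) > 4.0).astype("float32")  # toy binary labels

model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(16, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(X, y, epochs=5, verbose=0)

loss, accuracy = model.evaluate(X, y, verbose=0)
print(f"Training-set accuracy: {accuracy:.2f}")
```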
Voice as the user interface (UI)
Until relatively recently, voice interactions with automated systems, such as the interactive voice response (IVR) systems used by banks and utilities, were almost universally disappointing, with slow response times and poor levels of recognition and understanding resulting in frustration. Recent advances in speech recognition and natural language processing have improved the experience of talking to a computer, to the extent that one in six Americans now owns a smart speaker through which they can interact with a virtual assistant like Amazon's Alexa, Google Assistant or Microsoft's Cortana. Now established in the home, voice is also increasingly prevalent in cars, hotels, shops and offices, playing a variety of roles. Yet there is still plenty of scope for improvement, with even the leading virtual assistants often struggling to understand simple commands or provide an experience that goes much beyond a simple question-and-answer format.
The ethics of AI
The more widespread use of AI is raising complex ethical issues, the relevance of which will only increase as the technology's influence in areas like medicine, finance and the law becomes more pervasive. An important area of focus as the AI industry matures will be ensuring, as far as possible, that human biases and prejudices are not passed on to AI systems through data, algorithms and interaction. Also vital will be ensuring that the economic benefits derived from the use of AI are shared across society, rather than deepening pre-existing inequalities.
The war for AI talent
The AI industry is facing a major talent shortage. According to a recent study by Chinese technology giant Tencent, there are currently only about 300,000 AI researchers worldwide, while market demand runs into the millions. This shortfall has already sent salaries skyrocketing: an October 2017 report in The New York Times found that AI specialists with just a few years of experience were earning as much as $500,000 per year, while the very best could earn millions. In an attempt to tackle this issue, the likes of Google and Facebook have set up deep learning classes, both for employees and the general public.
Data privacy and data protection
The misuse and mishandling of personal data is currently a hot topic. Increased regulation around the storage and processing of data is highly likely; indeed, it is already in force in Europe in the form of the General Data Protection Regulation (GDPR), which took effect on 25 May 2018. The AI industry is reliant on large data sets, and any restrictions on its ability to use them could have significant consequences.
Cracking open AI's black box
One specific aspect of GDPR that has caused significant discussion in the AI industry is the provision on the right to obtain an explanation of a decision based on automated processing. This goes to the heart of a major criticism of AI, particularly where it is used to make judgements that directly affect people's lives, such as approving loans or recommending medical treatment: the decision-making process is opaque and lacks accountability. There is still debate about just how much of an impact this aspect of GDPR will have and, indeed, whether it is even possible to explain the decision-making processes of AI systems. What is not in doubt is that AI has a black box problem, and that, left unaddressed, it will continue to erode public confidence in the technology. A team of researchers recently created a system that could point to the evidence it used to answer a question and describe how it interpreted that evidence; as the AI industry matures, this type of transparency will be increasingly necessary.
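The sketch below shows one very simple form of such transparency: asking a trained scikit-learn model which inputs most influenced its decisions. Feature importances are far cruder than the evidence-citing system described above, and the data here is synthetic, but the goal, an inspectable decision, is the same.

```python
# A minimal explainability sketch: inspecting which features drove a
# model's decisions. Synthetic data; a crude stand-in for richer
# explanation methods.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=5,
                           n_informative=2, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Rank the inputs by how much each contributed to the model's decisions.
feature_names = [f"feature_{i}" for i in range(5)]
ranked = sorted(zip(feature_names, model.feature_importances_),
                key=lambda pair: -pair[1])
for name, importance in ranked:
    print(f"{name}: {importance:.2f}")
```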
What is the history of AI?
In 1950, British mathematician and computer scientist Alan Turing published the seminal paper "Computing Machinery and Intelligence", in which he considered the question: can machines think? Six years later, at a conference at Dartmouth College, the term "Artificial Intelligence" was accepted as the name of the field of study into thinking machines.
The progress of AI, from Arthur Samuel's first game-playing program in 1952 to the all-conquering AlphaGo program created by Google DeepMind, has been far from linear. There have been two significant periods of reduced funding and interest in AI, known as AI winters, the first of which ran from 1974 to 1980 and the second from 1987 to 1993. Following the end of this second winter, interest began slowly to pick up, helped by Deep Blue's landmark victory over world chess champion Garry Kasparov in 1997. Just over a decade later, Google had built its first autonomous car and the current AI boom was truly underway.
In 2010, Microsoft introduced Kinect for its Xbox games console, bringing gesture control, a type of AI technology, to the mass market. Nintendo's Wii console had pioneered motion control, but Microsoft achieved it purely by tracking human movement, without the use of a remote controller.
From 2011, a wide range of virtual assistants appeared on connected devices, including Apple's Siri, Microsoft's Cortana and Google Now. The more data they processed, the more they learnt and the better they performed.
In 2011, IBM's Watson system beat two champions on the quiz show Jeopardy, demonstrating that a computer could interpret nuanced, natural-language questions and provide answers.
In 2014, Amazon launched Echo, a voice-activated intelligent speaker powered by Alexa, its AI engine. The more voice commands Echo heard, the more it learned and the more intelligent it became, gaining ground rapidly on Google's voice recognition engine, which was, and still is, technically superior.
In 2016, Google DeepMind's AlphaGo algorithm beat the world champion of the Chinese board game Go, a feat that many experts had predicted would take much longer to achieve.
In 2017, Libratus, an AI computer program built by researchers at Carnegie Mellon University, beat four of the world's top poker players at no-limit Texas Hold'em. Mastering poker had previously been seen as a major challenge for machines because, unlike in Go and chess, the available information is imperfect: players cannot see each other's cards.
By 2020, AI is expected to be, in the words of Andrew Ng, one of the best-known names in modern AI, "the new electricity", powering a range of products. Software developers will plug their programmes into machine learning platforms using cloud-based APIs to create a wide variety of intelligent apps that get smarter as more people use them.
According to a 2014 poll of AI experts around the world conducted by Vincent Müller and Oxford University's Nick Bostrom, there is a 50% chance that full human-level AI will have been achieved by 2040, rising to 90% by 2075.
Moreover, 30 years after full AI has been reached, the experts predict the arrival of super-intelligent AI systems, where super-intelligence is defined by Bostrom as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest". At this point, it is possible that humans may no longer be able to control the machines they have created.
The AI story… how did this theme get here and where is it going?
- 1642: Pascal invents the first digital calculating machine.
- 1854: George Boole invents Boolean algebra.
- 1913: Whitehead and Russell revolutionise formal logic in Principia Mathematica.
- 1948: Von Neumann asserts that a general computer can simulate any effective procedure.
- 1950: Alan Turing develops the Turing Test to assess a machine's ability to exhibit intelligent (human-like) behaviour.
- 1952: Arthur Samuel writes the first game-playing program for draughts (checkers).
- 1956: The phrase "Artificial Intelligence" is first aired at a Dartmouth College conference.
- 1959: John McCarthy and Marvin Minsky found the MIT AI Lab.
- 1965: Joseph Weizenbaum (MIT) builds Eliza, an interactive program capable of carrying on a dialogue (in English).
- 1973: The Lighthill Report, heavily critical of AI research, sets study of the area back in the UK and US.
- 1997: IBM's Deep Blue defeats world chess champion Garry Kasparov.
- 1998: Tim Berners-Lee publishes the landmark Semantic Web Road Map paper.
- 2005: TiVo popularises recommendation technology based on tracking web activity and media usage.
- 2009: Google builds its first autonomous car.
- 2010: Microsoft Kinect for the Xbox is the first gaming device to track human body movement.
- 2011: IBM Watson beats human champions in the TV game show Jeopardy.
- 2011: Apple's natural language-based virtual assistant Siri appears on the iPhone 4S.
- 2014: Tesla introduces Autopilot, driver-assistance software intended as a step towards fully autonomous driving.
- 2014: Amazon launches Echo, its intelligent voice-activated speaker, which includes the Alexa virtual assistant.
- 2015: Baidu launches Duer, its intelligent assistant.
- 2016: Google DeepMind's AlphaGo algorithm beats world Go champion Lee Sedol 4-1.
- 2017: Libratus, designed by Carnegie Mellon researchers, beats four top players at no-limit Texas Hold'em poker.
- 2018: Alibaba's AI model scores better than humans in a Stanford University reading and comprehension test.
- 2020: AI becomes the new "electricity": developers plug into machine learning APIs for a wide variety of apps.
- 2030: China aims to be the world's primary AI innovation centre by this date.
- 2040: 50% probability of full human-level AI, according to a poll of AI experts.
This article was produced in association with GlobalData Thematic Research.