With new technology constantly emerging and being hailed as the latest trend or must-have, people can be forgiven for losing sight of its real-life applications. Autonomous cars have long been a staple of science fiction, but over the last decade they have moved closer to general availability as more and more companies develop their own offerings. Tesla, Google’s Waymo and GM’s Cruise are some of the leaders in this industry, driving the advancement of self-driving vehicles.
However, while autonomous vehicles have the potential to reshape transport and society, one of the major issues facing developers is their security. From glitchy software to duped artificial intelligence (AI) and environmental tampering, there are many security and safety vulnerabilities that need solutions before these vehicles can be considered secure and safe for use on our roads. Below we look at autonomous car cybersecurity issues that need solutions to future-proof self-driving vehicles.
New tech on the block
The advent of the internet of things (IoT) and connected technology has enabled huge changes for both businesses and wider society. For autonomous cars, IoT can equip vehicles with real-time data, helping them detect operational issues, understand traffic conditions and respond to their changing surroundings. Self-driving technology uses IoT capabilities alongside AI and machine learning to develop an operating system that focuses on two areas: perception and decision-making.
Autonomous vehicles need to perceive the road better than the best human driver could. The first challenge comes down to improving computer vision, enhancing the perception system with cameras and radar, and building highly detailed maps of the environment so the vehicle can process its surroundings. The second challenge is developing an AI system that can make reasonable decisions on the road, such as when to change lanes and what speed to travel at. This opens up the issue beyond technology into regulation and ethics.
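To make the two-part structure concrete, below is a minimal sketch of how a perception-and-decision loop might be organised. The class names, sensor fields and thresholds are illustrative assumptions, not any vendor’s actual implementation; real systems use probabilistic sensor fusion and far richer planners.

```python
from dataclasses import dataclass
from enum import Enum, auto


class Action(Enum):
    KEEP_LANE = auto()
    CHANGE_LANE = auto()
    SLOW_DOWN = auto()


@dataclass
class Detection:
    """A single fused observation from camera and radar (illustrative)."""
    distance_m: float           # range to the object ahead
    relative_speed_mps: float   # negative means the object is slower than us
    lane_offset: int            # 0 = our lane, +/-1 = adjacent lanes


def perceive(camera_objects, radar_objects):
    """Fuse camera and radar outputs into a single list of detections.

    Production systems use probabilistic fusion (e.g. Kalman filters);
    here objects are simply paired by index to keep the sketch short.
    """
    detections = []
    for cam, rad in zip(camera_objects, radar_objects):
        detections.append(Detection(
            distance_m=rad["range_m"],            # radar is strong on range
            relative_speed_mps=rad["rel_speed"],  # and on relative speed
            lane_offset=cam["lane_offset"],       # camera is strong on lane assignment
        ))
    return detections


def decide(detections, safe_gap_m=40.0):
    """Pick a high-level action from the fused detections.

    Thresholds are placeholders; a real planner would weigh much more
    context (maps, traffic rules, passenger comfort, regulatory limits).
    """
    ahead = [d for d in detections if d.lane_offset == 0]
    if not ahead:
        return Action.KEEP_LANE
    closest = min(ahead, key=lambda d: d.distance_m)
    if closest.distance_m < safe_gap_m and closest.relative_speed_mps < 0:
        # Something slower is too close: change lane only if the next lane is clear.
        adjacent_clear = all(d.lane_offset != 1 for d in detections)
        return Action.CHANGE_LANE if adjacent_clear else Action.SLOW_DOWN
    return Action.KEEP_LANE
```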
Ethics and security
The ethics of AI has been a hot topic for several years now, and autonomous car developers are no closer to finding an answer to the serious ethical issues that self-driving cars present. Safety is one of the most important factors to consider when developing these vehicles, but how do companies judge where the ethical lines lie when it comes to safety?
Most companies know that they alone cannot define the ethical decisions behind the software, which has to balance the safety of the passengers against the safety of people outside the vehicle. The big challenge, then, lies in creating regulations that formalise the limits of reasonable decision-making so that companies can program the vehicles to act within these parameters.
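If regulators did publish such limits, they could in principle be encoded as hard constraints that any planned manoeuvre must satisfy before it is executed. The sketch below is a simplified illustration only; the parameters (maximum deceleration, minimum following gap, speed over the posted limit) are hypothetical examples, not values drawn from any real regulation.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class RegulatoryLimits:
    """Hypothetical, regulator-defined bounds on automated driving decisions."""
    max_deceleration_mps2: float = 6.0     # hardest braking allowed outside emergencies
    min_following_gap_s: float = 2.0       # minimum time gap to the vehicle ahead
    max_speed_over_limit_mps: float = 0.0  # no exceeding the posted limit


@dataclass
class PlannedManoeuvre:
    deceleration_mps2: float
    following_gap_s: float
    speed_over_limit_mps: float


def within_limits(plan: PlannedManoeuvre, limits: RegulatoryLimits) -> bool:
    """Return True only if the planned manoeuvre respects every regulatory bound."""
    return (
        plan.deceleration_mps2 <= limits.max_deceleration_mps2
        and plan.following_gap_s >= limits.min_following_gap_s
        and plan.speed_over_limit_mps <= limits.max_speed_over_limit_mps
    )


# Example: a plan that brakes harder than the allowed bound is rejected and re-planned.
plan = PlannedManoeuvre(deceleration_mps2=7.5, following_gap_s=2.5, speed_over_limit_mps=0.0)
assert not within_limits(plan, RegulatoryLimits())
```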
Another area to consider is the security of the firmware and software; not only does it face the typical threat of cyber attacks, but for self-driving vehicles security means safety. Automakers must be able to ensure that their software and firmware are secure, a task made more complex by the connectivity of an IoT system, where a single vulnerability could open the whole system up to further threats.
At the same time, the software must be reliable to ensure that the cars can run continuously and do not break down because of a glitchy update. Companies such as Tesla have a very security-conscious approach to development, with security testing and research forming part of the normal product development process. This is not always the case for traditional automakers, who, in contrast, do not have as mature an approach to security.
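A common safeguard for over-the-air updates is to install new firmware only if it carries a valid cryptographic signature from the manufacturer. The sketch below shows the idea using the third-party `cryptography` package and Ed25519 keys; the update format, the `install_firmware` hook and the key handling are illustrative assumptions, not any automaker’s real process.

```python
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey,
    Ed25519PublicKey,
)


def sign_update(private_key: Ed25519PrivateKey, firmware_image: bytes) -> bytes:
    """Manufacturer side: sign the firmware image before distribution."""
    return private_key.sign(firmware_image)


def verify_and_install(public_key: Ed25519PublicKey,
                       firmware_image: bytes,
                       signature: bytes) -> bool:
    """Vehicle side: install the update only if the signature checks out."""
    try:
        public_key.verify(signature, firmware_image)
    except InvalidSignature:
        # Reject the update and keep running the current, known-good firmware.
        return False
    # install_firmware(firmware_image)  # hypothetical installer hook
    return True


# Demo with a freshly generated key pair. In practice the public key is
# provisioned into the vehicle at manufacture and the private key never
# leaves the manufacturer's signing infrastructure.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

image = b"firmware image bytes"
sig = sign_update(private_key, image)

assert verify_and_install(public_key, image, sig)
assert not verify_and_install(public_key, image + b"tampered", sig)
```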
Manipulating environments
Manufacturers will also have to consider factors outside the technology inside the car. The vehicles must be able to monitor and evaluate their surroundings as they travel along motorways and country lanes. While detailed maps can be created and fed into the operating system, a vulnerability to consider is the manipulation of the environment itself. Criminals don’t have to resort to hacking the car when they can ‘hack’ the environment the vehicle is travelling through. This was demonstrated when researchers at McAfee stuck a two-inch strip of tape on a 35mph speed limit sign and successfully tricked two Tesla cars into reading the limit as 85mph and autonomously accelerating towards it.
In addition to intentional manipulation of the environment, roads are often subject to temporary conditions such as reduced speed limits or lane closures, so simply storing a database of GPS coordinates will not be enough to solve this problem.
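One mitigation that could address both issues is a plausibility check: compare what the camera reads on a sign against the stored map limit and any live roadworks feed, and fall back to the more conservative value when they disagree sharply. The sketch below is a simplified illustration under that assumption; the threshold, function name and data sources are placeholders rather than a description of any deployed system.

```python
from typing import Optional


def plausible_speed_limit(perceived_mph: int,
                          mapped_mph: int,
                          temporary_mph: Optional[int] = None,
                          max_jump_mph: int = 20) -> int:
    """Reconcile a camera-read speed limit with map and live roadworks data.

    - If a temporary limit (e.g. from a roadworks feed) is known, it wins.
    - If the camera reading jumps implausibly far from the mapped limit
      (as in the taped 35mph sign read as 85mph), fall back to the mapped value.
    - Otherwise trust the sign, since stored maps can be stale.
    """
    if temporary_mph is not None:
        return min(temporary_mph, mapped_mph)
    if abs(perceived_mph - mapped_mph) > max_jump_mph:
        return min(perceived_mph, mapped_mph)  # conservative fallback
    return perceived_mph


# The taped-sign scenario: the camera reads 85 on a road mapped at 35.
assert plausible_speed_limit(perceived_mph=85, mapped_mph=35) == 35
# A lane closure with a temporary 50mph limit on a road mapped at 70mph.
assert plausible_speed_limit(perceived_mph=70, mapped_mph=70, temporary_mph=50) == 50
```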
Looking to the future of autonomous car cybersecurity
There are promising developments bringing autonomous cars closer to reality, but there are also important issues still in need of solutions. In addition to ensuring the technology and software are robust and secure, automakers and industry bodies need to consider how to program AI to make the right ethical decisions when much of the area is subjective.
Michael Aminzade is the MD of cyber at 6point6, a technology consultancy with a focus on digital, emerging technology and cyber.
Read more: Roborace: The futuristic motorsport providing a testbed for autonomous cars