The NHS has already started looking to AI to improve patient services and cut costs. The technology is currently being used to improve patient care through AI-powered apps, and the NHS is implementing technology that will allow NHS 111 inquiries to be handled by robots within two years.
According to the Department of Health and Social Care, the technology is already being used across the NHS to improve the early diagnosis of some types of cancer, reduce the number of unnecessary operations caused by false positives, select patients suitable for clinical trials, create patient care plans, and predict and diagnose disease.
However, with the use of artificial intelligence comes the issue of data management and privacy, particularly when dealing with sensitive data of a medical nature.
As a result, the UK government has created an AI code of conduct designed to ensure that the best and safest data-driven technologies are used by the NHS and that patient data is protected.
The AI code of conduct is made up of 10 principles setting out how the government will make it easier for companies to work with the NHS to develop new technologies.
It sets out what good practice in this area should look like, aiming to reassure patients that the technology can be utilised without compromising privacy and to make it easier for the government to work with suppliers on the development of new technology.
The ten principles of the AI code of conduct are:
- Understand users, their needs and the context
- Define the outcome and how the technology will contribute to it
- Use data that is in line with appropriate guidelines for the purpose for which it is being used
- Be fair, transparent and accountable about what data is being used
- Make use of open standards
- Be transparent about the limitations of the data used and algorithms deployed
- Show what type of algorithm is being developed or deployed, the ethical examination of how the data is used, how its performance will be validated and how it will be integrated into health and care provision
- Generate evidence of effectiveness for the intended use and value for money
- Make security integral to the design
- Define the commercial strategy
Further updates of the AI code of conduct are due to be published at the end of 2019.
“The ethics of AI will be a key challenge over this next decade”
The ethics of AI must be a key consideration when implementing data-driven health, especially when collaborating with innovators from sectors that are not necessarily familiar with medical ethics.
Dr Jabe Wilson, consulting director of text and data analytics at Elsevier, which provides informatics and AI solutions to healthcare, believes that the AI code of conduct indicates a willingness to invest in and better understand the benefits of AI:
“The new AI guidelines for the NHS are to be welcomed, as we see growing interest from governments and organisations in the ethical use of AI, and providing standards for auditing its use is one key aspect of this. It’s important we understand how training data is generated and gathered, so we can take steps to eliminate potential bias. Additionally, when AI is used in decision support systems within the NHS, it’s critical to provide a transparent and understandable rationale for the decision, to guarantee fairness and allow for an appeal or review to be granted.”
He believes that the next challenge for the NHS is ensuring that the data analysis is accurate:
“In doing this, the big challenge for the NHS will be identifying the provenance of data and understanding what the selection process was for the original research. Work is being done on creating artificial training data and looking at how to make decisions transparent, but it’s important the NHS puts in place systems capable of gathering and normalising data to ensure analysis can be conducted accurately.”
Dr Nick Lynch, consultant for The Pistoia Alliance, believes that the ethics of AI is set to become a key issue in the field:
“The ethics of AI will be a key challenge over this next decade. To date, AI’s success has been on solving intellectual challenges (e.g. chess, ‘Go’) but the real test arrives as AI takes on ethical decisions. This is hard enough for humans, but how we expect AI to do this is a defining moment for the role of AI in our society and especially in the NHS and healthcare settings, where it will be a matter of life and death.
“The data an AI system uses to make decisions have come from our society and from research, and ensuring there are no unintended biases in these data is a challenge to be solved. In addition, we must be able to show there are no unintended consequences in an AI-driven healthcare decision. The NHS’s guidelines on AI are a positive step in the right direction – in The Pistoia Alliance, we believe the life science and healthcare industries must come together to solve these AI challenges, so we can develop solutions that are understandable to both regulators and patients.”