Researchers have developed a technique that could help to distinguish genuine videos from those that have been manipulated, offering a potential solution to the rising threat of deepfakes.
Deepfakes are altered videos that are almost impossible to distinguish from the real thing. They are made possible by advances in artificial intelligence and machine learning, notably the development of generative adversarial networks (GANs). A GAN pairs two neural networks: a generator, which produces artificial outputs such as images or video, and a discriminator, which analyses those outputs and judges how convincing they are. In the case of deepfakes, the generator superimposes one image or video on to another while the discriminator checks the result, pushing the output to become ever more realistic.
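To make the adversarial setup concrete, below is a minimal sketch of a single GAN training step, assuming PyTorch; the architectures, sizes and names are illustrative rather than taken from any real deepfake system.

```python
# Minimal GAN training step (illustrative, assuming PyTorch).
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 28 * 28

# Generator: maps random noise to a synthetic image.
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, img_dim), nn.Tanh())

# Discriminator: scores how "real" an image looks (1 = real, 0 = fake).
D = nn.Sequential(nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
                  nn.Linear(256, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_images):          # real_images: (batch, img_dim)
    batch = real_images.size(0)
    fake_images = G(torch.randn(batch, latent_dim))

    # Discriminator learns to separate real images from generated ones.
    opt_d.zero_grad()
    loss_d = (bce(D(real_images), torch.ones(batch, 1)) +
              bce(D(fake_images.detach()), torch.zeros(batch, 1)))
    loss_d.backward()
    opt_d.step()

    # Generator learns to fool the discriminator.
    opt_g.zero_grad()
    loss_g = bce(D(fake_images), torch.ones(batch, 1))
    loss_g.backward()
    opt_g.step()
    return loss_d.item(), loss_g.item()
```

Trained in this loop, the generator improves precisely because the discriminator keeps catching its failures, which is why GAN outputs become so difficult to distinguish from genuine footage.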
However, researchers from the NYU Tandon School of Engineering have demonstrated how neural networks could potentially be used to verify the authenticity of an image from acquisition to delivery.
The process involves replacing the photo development pipeline with a neural network, which places carefully crafted artefacts into a digital image as it is created, before there is any opportunity to alter it.
These extremely sensitive artefacts are designed to survive the post-processing that typically takes place after an image is captured by a digital device, such as stabilisation and lighting adjustments. However, they will distort if any manipulation takes place after this, essentially serving as a hidden watermark.
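The study itself trains the camera's development pipeline end to end to produce these learned artefacts. As a rough illustration of the embed-then-verify idea only, the toy sketch below uses a classical spread-spectrum watermark in NumPy and SciPy: a secret pseudo-random pattern is added at capture time and later checked block by block. All names, strengths and thresholds here are assumptions for illustration, not the paper's method.

```python
# Toy semi-fragile watermark: embed a secret noise pattern at capture
# time, then verify it block by block. Purely illustrative; the NYU
# Tandon system learns its artefacts inside a neural imaging pipeline.
import numpy as np
from scipy.ndimage import uniform_filter

SECRET_SEED = 42   # hypothetical secret shared by camera and verifier
STRENGTH = 3.0     # amplitude of the hidden pattern (illustrative)

def _pattern(shape):
    # Pseudo-random white-noise pattern derived from the shared seed.
    return np.random.default_rng(SECRET_SEED).standard_normal(shape)

def embed(image):
    """Add the faint pattern at capture time, before any editing."""
    return image.astype(float) + STRENGTH * _pattern(image.shape)

def verify(image, block=32, threshold=0.05):
    """Return the top-left corners of blocks where the pattern is gone.

    A high-pass residual (pixel minus local mean) suppresses smooth
    image content so the white-noise pattern dominates; regions that
    have been replaced or repainted lose their correlation with it.
    Assumes a 2D greyscale image; the threshold is illustrative.
    """
    residual = image - uniform_filter(image.astype(float), size=5)
    pattern = _pattern(image.shape)
    tampered = []
    h, w = image.shape
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            r = residual[y:y + block, x:x + block].ravel()
            q = pattern[y:y + block, x:x + block].ravel()
            corr = np.dot(r, q) / (np.linalg.norm(r) * np.linalg.norm(q) + 1e-9)
            if corr < threshold:
                tampered.append((y, x))
    return tampered   # empty list -> no tampering detected
```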
“If the camera itself produces an image that is more sensitive to tampering, any adjustments will be detected with high probability,” said Nasir Memon, professor of computer science and engineering at NYU Tandon and co-author of the study. “These watermarks can survive post-processing; however, they’re quite fragile when it comes to modification: if you alter the image, the watermark breaks.”
By studying the watermark, the system would be able to determine whether the image is the original or whether it has been altered in any way.
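Continuing the toy sketch above, a hypothetical end-to-end check might look like this, with a flat patch pasted over one region breaking the watermark there:

```python
# Hypothetical end-to-end check using the sketch above.
ramp = np.linspace(0, 255, 256)
photo = np.add.outer(ramp, ramp) / 2.0   # smooth stand-in for a sensor image
captured = embed(photo)                  # watermark applied in-camera

print(verify(captured))                  # [] -> untouched image passes

forged = captured.copy()
forged[60:132, 60:132] = 128.0           # paste a flat patch over one region
print(verify(forged))                    # flags the fully spliced blocks
```

In the researchers' system the artefacts are learned jointly with the development pipeline rather than fixed in advance, which is what allows them to tolerate benign post-processing while remaining fragile to deliberate edits.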
Tackling deepfake news
These manipulated videos are easy to make and hard to spot, and they are increasingly being used to spread fake news and disinformation.
However, the recent spread of a video that appeared to show United States House of Representatives Speaker Nancy Pelosi drunk or unwell during an interview shows how even basic video manipulation can successfully fool the public. The video, which social media network Facebook has refused to remove, seems to have been intentionally slowed down to give this impression.
While altering the video in this way likely took only seconds, it has attracted millions of views and comments questioning Pelosi’s health, demonstrating the potential such videos have to manipulate voters.
While this isn’t the first attempt to develop a system that ensures image authenticity, the researchers note that past attempts have focused on verifying the authenticity of the finished image, rather than protecting it throughout the imaging pipeline.
With AI-driven processing likely to find its way into cameras’ imaging pipelines in the coming years, implementing the technique should not prove especially difficult, and it could help to restore some trust in the fight against disinformation.
“We have the opportunity to dramatically change the capabilities of next-generation devices when it comes to image integrity and authentication,” NYU Tandon research assistant Pawel Korus and Memon said. “Imaging pipelines that are optimised for forensics could help restore an element of trust in areas where the line between real and fake can be difficult to draw with confidence.”
However, the researchers noted that further study is needed before that can happen. Prototype testing showed considerable variance in the system’s performance, with manipulation detected at accuracies ranging from 45% to 90%, without any reduction in image quality.