Researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) developed ‘PhotoGuard,’ a system that disrupts AI image manipulation to protect users against unwarranted modification and to safeguard image authenticity.
According to their study, the tool is an approach to mitigating the risks of malicious image editing by various diffusion models. The researchers’ objective is to ‘immunize’ images, making them resistant to unwarranted manipulation by these models.
“Personal images can be inappropriately altered and used for blackmail, resulting in significant financial implications when executed on a large scale,” said Hadi Salman, the lead author of the study.
In severe cases, image-editing models pose a threat to the digital community. They could be used to frame people for malicious acts, stage false crimes, and deceive, through manipulations that damage the reputational, emotional, or financial aspects of an individual’s life. Anyone could be a victim.
“This is a reality for victims at all levels, from individuals bullied at school to society-wide manipulation,” he added.
In a recent news release from MIT, the lab discussed two methods the tool uses to protect images from unauthorized modification. These methods are referred to as the "encoder" attack and the "diffusion" attack.
The "encoder" attack works by transforming the image’s internal representation so that it appears completely random to the AI model. This makes it difficult for the model to understand or meaningfully modify the image.
The "diffusion" attack, on the other hand, optimizes the perturbations applied to the image. These changes are carefully calculated to steer the model’s output toward a specific target or reference image, which acts as a kind of "image modifier."
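The optimization idea behind the diffusion attack can be sketched in a few lines. The snippet below is a toy illustration only: a fixed linear map (`edit`) stands in for the full generative model, since a real diffusion attack backpropagates through the entire diffusion process, which is far more expensive. Every name here is hypothetical, not PhotoGuard’s actual code.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for an end-to-end image-editing model: a fixed linear map.
# (A real diffusion attack differentiates through the whole diffusion
# process; only the optimization idea is illustrated here.)
A = rng.normal(size=(64, 64)) / 8.0

def edit(x):
    """Hypothetical editing model applied to a flattened 8x8 image."""
    return A @ x

image = rng.uniform(0, 1, 64)    # image to immunize
target = rng.uniform(0, 1, 64)   # reference "target" image

eps = 0.05                       # per-pixel perturbation budget
delta = np.zeros_like(image)

# Optimize the perturbation so the model's output on the immunized image
# is steered toward its output on the target image.
for _ in range(300):
    residual = edit(image + delta) - edit(target)
    grad = A.T @ residual        # gradient of 0.5 * ||residual||^2 w.r.t. delta
    delta = np.clip(delta - 0.05 * grad, -eps, eps)

before = np.linalg.norm(edit(image) - edit(target))
after = np.linalg.norm(edit(image + delta) - edit(target))
print(before, after)             # the gap shrinks after immunization
```

The clipping step keeps every pixel alteration within the budget `eps`, which is what keeps the immunized image visually indistinguishable from the original.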
With ‘PhotoGuard’ applied, attempts by AI models to manipulate an image become nearly impossible: the encoder attack introduces minor adjustments to the image’s latent representation, causing the model to perceive it as a random identity.
Through a technique involving adversarial perturbations, the tool applies ‘minuscule alterations’ to pixel values that disrupt the ability of AI models to manipulate the image.
“AI models view an image differently from how humans do. It sees an image as a complex set of mathematical data points that describe every pixel’s color and position – this is the image’s latent representation,” according to the release.
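The release’s description of latent representations and ‘minuscule alterations’ maps onto a standard projected-gradient loop. Below is a minimal, self-contained sketch of the encoder-attack idea, assuming a toy linear ‘encoder’ in place of a real diffusion model’s deep image encoder; all names are illustrative rather than PhotoGuard’s actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in "encoder": a fixed linear map from 64 pixels to a
# 16-dimensional latent vector. A real encoder is a deep network.
W = rng.normal(size=(16, 64))

def encode(x):
    return W @ x

image = rng.uniform(0, 1, 64)        # flattened 8x8 "image"
target_latent = rng.normal(size=16)  # a random latent "identity"

eps = 0.05                           # cap on each pixel alteration
delta = np.zeros_like(image)

# Projected gradient descent: nudge pixel values so the image's latent
# representation drifts toward the random target, while every change
# stays below eps (the "minuscule alterations" from the release).
for _ in range(200):
    residual = encode(image + delta) - target_latent
    grad = W.T @ residual            # gradient of 0.5 * ||residual||^2
    delta = np.clip(delta - 0.005 * grad, -eps, eps)
    delta = np.clip(image + delta, 0.0, 1.0) - image  # keep valid pixels

before = np.linalg.norm(encode(image) - target_latent)
after = np.linalg.norm(encode(image + delta) - target_latent)
print(before, after)                 # the latent moves toward the random identity
```

Once the latent representation sits near a random identity, a model that edits images through that representation no longer has a coherent starting point, which is the disruption the article describes.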
The timely effort behind ‘PhotoGuard’ is vital, as threats arising from generative AI systems can put people at risk.
Safeguarding tools that immunize images against modification could create a safe space for sharing photos, especially on social media platforms, free from unwarranted and malicious alterations. This shows that AI is adaptive, dynamic, and transformative.
AI can now be used to combat harmful AI-generated tools themselves. In an era when advanced generative models like DALL-E 2 continue to progress, limitations and restrictions must still be imposed.