NIXSolutions: MIT’s PhotoGuard Defending Images Against AI Attacks

Engineers at the Computer Science and Artificial Intelligence Laboratory (CSAIL) at the Massachusetts Institute of Technology (MIT) have proposed PhotoGuard technology, which makes it more difficult for artificial intelligence algorithms to modify images.


AI Image Generators: A New Era in Graphics Processing

The AI-powered image generators DALL-E and Stable Diffusion are just the beginning of a new era in graphics processing. AI can not only generate new images but also edit existing ones with high quality, which leaves room for abuse in the form of deepfakes. MIT CSAIL engineers have proposed PhotoGuard technology to help protect against such incidents.

Protecting Images through Two Methods

The technology comprises two methods of attacking AI algorithms: an "encoder" attack and a "diffusion" attack. The first targets the hidden (latent) representation of the protected image: it changes individual pixels in a way that is imperceptible to humans but prevents the AI from recognizing the content of the image, thereby blocking its ability to edit it.
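The idea behind the encoder attack can be illustrated with a minimal, runnable sketch. This is an assumption-laden toy, not PhotoGuard's implementation: the real method perturbs an image so that a diffusion model's image encoder maps it to a useless latent vector, whereas here the "encoder" is just a fixed random linear map, and the optimizer is plain projected gradient descent with a hand-derived gradient.

```python
import numpy as np

rng = np.random.default_rng(0)

D_PIX, D_LAT = 64, 16                  # flattened image size, latent size
W = rng.normal(size=(D_LAT, D_PIX))    # toy stand-in "encoder": z = W @ x

def encode(x):
    return W @ x

def encoder_attack(x, z_target, eps=0.05, lr=0.01, steps=200):
    """Projected gradient descent: find a small per-pixel perturbation
    (|delta_i| <= eps) that drives encode(x + delta) toward z_target."""
    delta = np.zeros_like(x)
    for _ in range(steps):
        residual = encode(x + delta) - z_target   # latent-space error
        grad = 2.0 * W.T @ residual               # gradient of ||residual||^2
        delta -= lr * np.sign(grad)               # signed gradient step
        delta = np.clip(delta, -eps, eps)         # keep perturbation tiny
    return delta

x = rng.uniform(0, 1, size=D_PIX)      # the image to immunize
z_target = np.zeros(D_LAT)             # a "meaningless" target latent

delta = encoder_attack(x, z_target)
print(np.linalg.norm(encode(x)), np.linalg.norm(encode(x + delta)))
```

The perturbation stays within a tight pixel budget, yet the protected image's latent representation collapses toward the meaningless target, which is what deprives the editing model of usable content.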

Advanced “Diffusion” Attack Method

The second, more advanced and resource-intensive "diffusion" attack disguises one image as another in the "eyes" of the AI, notes NIXSolutions. The AI then tries to edit only the picture it "sees" and leaves the original untouched, so the image it generates ends up looking unrealistic.
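The extra cost of the diffusion attack comes from optimizing through the full generation pipeline rather than just the encoder. The sketch below is again a toy under stated assumptions: the real attack backpropagates through a diffusion model's denoising process, while here the "pipeline" is a composition of two random linear maps, and the decoy-matching objective is a simplified stand-in for making the model see one image as another.

```python
import numpy as np

rng = np.random.default_rng(1)

D_PIX, D_LAT = 64, 16
W = rng.normal(size=(D_LAT, D_PIX))    # toy stand-in encoder
V = rng.normal(size=(D_PIX, D_LAT))    # toy stand-in decoder/generator

def pipeline(x):
    # Stand-in for the full generation pipeline the AI runs on an input image.
    return V @ (W @ x)

def diffusion_attack(x, x_decoy, eps=0.05, lr=0.005, steps=300):
    """End-to-end PGD: perturb x so the *pipeline output* for x matches the
    output for a decoy image; the AI then 'edits' the decoy, not the original."""
    J = V @ W                          # Jacobian of the (linear) toy pipeline
    y_decoy = pipeline(x_decoy)
    delta = np.zeros_like(x)
    for _ in range(steps):
        residual = pipeline(x + delta) - y_decoy
        grad = 2.0 * J.T @ residual    # gradient through the whole pipeline
        delta -= lr * np.sign(grad)
        delta = np.clip(delta, -eps, eps)
    return delta

x = rng.uniform(0, 1, size=D_PIX)        # original image to immunize
x_decoy = rng.uniform(0, 1, size=D_PIX)  # image the AI should "see" instead
delta = diffusion_attack(x, x_decoy)
```

Because every optimization step differentiates through the entire pipeline instead of a single encoder, this variant is markedly more expensive, which mirrors the article's note that the diffusion attack is the more resource-intensive of the two.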