Artificial intelligence (AI) text-to-image models have become increasingly popular in recent years, with the ability to generate realistic and creative images from simple text prompts.
However, a new tool developed by researchers at the University of Chicago could pose a serious threat to this technology.
The tool, called Nightshade, allows users to "poison" AI text-to-image models by making imperceptible changes to images. These changes are too small for the human eye to notice, but when the altered images are scraped into a model's training data, they can cause the model to generate incorrect or even nonsensical images in response to related prompts.
The researchers behind Nightshade say that the tool could be used by artists and other creators to protect their work from being used without their permission in AI text-to-image models. It could also be used by researchers to study the vulnerabilities of AI text-to-image models and to develop new defenses against adversarial attacks.
How does Nightshade work?
Nightshade works by adding small perturbations to images that are specifically designed to confuse AI text-to-image models. The perturbations are invisible to a human viewer, but a model trained on enough poisoned images learns corrupted associations and makes large mistakes when generating images for the affected concepts.
For example, the researchers were able to use Nightshade to poison a popular AI text-to-image model called DALL-E 2. After training on poisoned images of cats, the model generated an image of a dog when prompted for a cat; after training on poisoned images of cars, it generated a cow when prompted for a car.
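Nightshade's real optimization works in the feature space of diffusion models and is far more involved than this, but the core idea of the section above can be sketched in plain NumPy. In this toy illustration, a random linear projection stands in for the model's feature extractor and two random vectors stand in for the "cat" and "dog" concept embeddings; every name, dimension, and constant here is invented for illustration, not taken from Nightshade itself.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (assumptions, not Nightshade's real components):
# a linear "feature extractor" W and two concept embeddings.
D_PIX, D_FEAT = 64, 8
W = rng.standard_normal((D_PIX, D_FEAT)) / np.sqrt(D_PIX)
cat_emb = rng.standard_normal(D_FEAT)
dog_emb = rng.standard_normal(D_FEAT)

def nearest_concept(img):
    """Label an image by whichever concept embedding its features are closer to."""
    f = img @ W
    return "cat" if np.linalg.norm(f - cat_emb) < np.linalg.norm(f - dog_emb) else "dog"

def poison(img, target_emb, eps=0.05, steps=200, lr=0.01):
    """Nudge img so its features drift toward target_emb while keeping
    every per-pixel change inside +/-eps (the 'imperceptibility' budget).
    This is projected gradient descent on ||features - target||^2."""
    delta = np.zeros_like(img)
    for _ in range(steps):
        f = (img + delta) @ W
        grad = 2 * W @ (f - target_emb)        # gradient of the squared distance
        delta = np.clip(delta - lr * grad, -eps, eps)  # project back into the box
    return img + delta

# Build a "cat" image whose features land exactly on cat_emb.
img = np.linalg.lstsq(W.T, cat_emb, rcond=None)[0]

eps = 0.05
poisoned = poison(img, dog_emb, eps=eps)
```

The key property is the constraint, not the attack strength: `poisoned` differs from `img` by at most `eps` per pixel, yet its features have measurably moved toward the "dog" embedding, which is the direction a poisoned training set pushes the model's learned association.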
The backdrop of the revolt
Back in December 2022, artists on ArtStation were protesting against AI-generated images being allowed on the platform. The site's most popular section, "Explore," had featured computer-created images, sparking outrage among human artists who felt threatened by AI technology.
Many artists flooded their portfolios with the message "No to AI-Generated Images" in solidarity with the movement started by costume designer Imogen Chayes and cartoonist Nicholas Kole.
Later, especially as Hollywood productions began using AI technologies, the question "Is AI starting to take people's jobs?" came to the fore, and many sectors effectively declared war on AI.
Nightshade can be considered a "mecha-biological" weapon against text-to-image generative AI models.
Now you: Is war the way to go, or should AI companies be more transparent about the data they use, and should countries begin drafting AI regulations?
Thank you for being a gHacks reader. The post Will Socrates and AI meet the same fate? appeared first on gHacks Technology News.