New ‘poisoning’ tool spells trouble for AI text-to-image tech

Professional artists and photographers offended by generative-AI companies using their work to train their technology may soon have an effective way to respond that won’t require going to the courts.

Generative AI burst onto the scene about a year ago with the launch of OpenAI’s ChatGPT chatbot. The tool is extremely adept at interacting in a very natural, human-like way, but to achieve that capability it had to be trained on reams of data scraped from the web.

Similar generative-AI tools are also capable of creating images from text prompts, but, like ChatGPT, they are trained on images scraped from the web.

This means that artists and photographers are having their work used – without consent or compensation – by tech firms to create their own generative-AI tools.

To combat this, a team of researchers has developed a tool called Nightshade that is capable of confusing the training model, causing it to produce erroneous images in response to prompts.

Recently outlined in an article in MIT Technology Review, Nightshade “poisons” training data by making invisible changes to the pixels of a piece of art before it is uploaded to the web.

“Using it to ‘poison’ this training data could damage future iterations of image-generating AI models, such as DALL-E, Midjourney, and Stable Diffusion, by rendering some of their outputs useless – dogs become cats, cars become cows, and so on,” MIT Technology Review reports, noting that the research behind Nightshade has been submitted for peer review.
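For readers curious what “invisible changes to the pixels” means in practice, here is a minimal sketch of the general idea of a perturbation too small for the eye to notice. It is only a toy example using random noise, not Nightshade’s actual method, which reportedly optimizes the perturbation so a model learns the image as a different concept; the file names, function name, and epsilon value are purely illustrative, and Pillow and NumPy are assumed to be available.

# Illustrative sketch only -- NOT Nightshade's algorithm.
import numpy as np
from PIL import Image

def add_imperceptible_noise(in_path, out_path, epsilon=2):
    """Shift each pixel by at most `epsilon` intensity levels.

    The change is invisible to a human viewer but alters the raw values
    a scraper would feed into a training pipeline.
    """
    img = np.asarray(Image.open(in_path).convert("RGB"), dtype=np.int16)
    # Random +/- epsilon perturbation; a real poisoning attack would instead
    # compute a perturbation crafted to mislead the target model.
    noise = np.random.randint(-epsilon, epsilon + 1, size=img.shape, dtype=np.int16)
    poisoned = np.clip(img + noise, 0, 255).astype(np.uint8)
    Image.fromarray(poisoned).save(out_path)

# Example usage (hypothetical file names):
# add_imperceptible_noise("artwork.png", "artwork_protected.png")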

While image-creation tools are already impressive and constantly improving, the way they are trained has proven controversial, with the creators of several tools currently facing lawsuits from artists claiming that their work has been used without permission or payment.

University of Chicago professor Ben Zhao, who led the research team behind Nightshade, said such a tool could help shift the balance of power back to artists, acting as a deterrent to tech companies that ignore copyright and intellectual property.

“Data sets for large AI models can contain billions of images, so the more poisoned images are scraped into a model, the more damage the technique will cause,” MIT Technology Review said in its report.

The team plans to make Nightshade open source when it is released, so that others can refine it and make it more effective.

Aware of its potential to disrupt, the team behind Nightshade said it should be used as “the last defense for content creators against web scrapers that disrespect their rights.”

To tackle the problem, DALL-E creator OpenAI has recently begun allowing artists to remove their work from its training data, but the process has been described as extremely onerous, as each request requires its own application, along with details of the image to be removed.

Making the removal process much easier could go some way toward discouraging artists from resorting to tools like Nightshade, something that could cause far more problems for OpenAI and others in the long run.
