Ansel Adams Estate Condemns Adobe for Selling A.I.-Generated Images Mimicking the Photographer’s Style
The black-and-white landscape dupes, which have since been taken down, violated Adobe’s generative A.I. policies
Photographer Ansel Adams’ estate is speaking out against Adobe for selling A.I.-generated images that mimic the late photographer’s awe-inspiring black-and-white landscapes.
In a social media post on Friday, the estate attached a screenshot of an image titled Nature’s Symphony: Ansel Adams-Style Landscape Photography—A.I. Generated, which was being sold on Adobe’s stock image marketplace for $79.99.
“You are officially on our last nerve with this behavior,” wrote the estate, which tagged Adobe’s official Threads account in the post.
Adams, a San Francisco-born photographer and environmentalist who died in 1984, has long been celebrated for his crisp black-and-white photographs of the American West. In the A.I.-generated copycat, snowy mountains tower over a tranquil lake in dramatic black and white that mimics Adams’ work. The stock image, however, is clearly digitally rendered: It looks flat and empty compared with the textured, detailed photographs it was created to recall.
Adobe responded publicly the next day, thanking the estate for flagging the content, as it “goes against our generative A.I. content policy.” The company’s official policy allows content created using A.I. to be hosted and sold on its platform, but it explicitly prohibits images “created using prompts containing other artist names, or created using prompts otherwise intended to copy another artist.”
The company assured the estate that the content had been removed, adding that it had “reached out via IG DM to share a way to get in touch directly in the future.”
The estate, however, says it had reached out privately to Adobe multiple times beginning in August 2023 with concerns about A.I.-generated images. “Assuming you want to be taken seriously re: your purported commitment to ethical, responsible A.I., while demonstrating respect for the creative community, we invite you to become proactive about complaints like ours, and to stop putting the onus on individual artists/artists’ estates to continuously police our IP on your platform, on your terms,” wrote the estate in another post.
Adobe spokesperson Bassil Elkadi tells the Verge’s Jess Weatherbed that the company is “actively in touch” with the photographer’s estate, adding that “appropriate steps were taken given the user violated Stock terms.” Artnet’s Min Chen reports that the contributor has been blocked for these violations.
Previously, Adobe told the Verge that it generally moderates all “crowdsourced” images before they’re available on the Adobe Stock platform. The company has “an experienced team of moderators who review submissions,” said Matthew Smith, vice president of Adobe Stock.
In another social media post following the initial exchange, the Adams estate thanked Adobe for removing the “latest round” of A.I.-generated Ansel Adams dupes, adding, “We expect that it will stick this time.”
The estate continued: “We don’t have a problem with anyone taking inspiration from Ansel’s photography, but we strenuously object to the unauthorized use of his name to sell products of any kind, including digital products, and this includes A.I.-generated output—regardless of whether his name has been used on the input side, or whether a given model has been trained on his work.”
The emergence of A.I. image generators like DALL-E and Midjourney has triggered a wave of debates over copyright infringement. In January, a list of artists whose work may have been used to train Midjourney’s image generator made the rounds on the internet, prompting creatives to voice legal and ethical concerns. In several controversial cases, A.I. image generators have been used to “recreate” and “finish” works of art; they have even produced images that have won prizes at state fairs and hung in galleries.
Last year, researchers at the University of Chicago created a tool for artists concerned about their work being used to train A.I. systems without their consent. Called Nightshade, the program alters pixels in a way that humans can’t detect but that complicates a computer’s ability to comprehend an image.
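For readers curious about the mechanics, here is a minimal, illustrative sketch of the general idea behind such tools: nudge pixel values within a tiny bound so the change is invisible to a human viewer. This is only a toy demonstration; Nightshade itself computes carefully optimized, targeted perturbations designed to mislead model training, not random noise, and the function and file names below are hypothetical.

```python
import numpy as np
from PIL import Image

def add_imperceptible_noise(path_in: str, path_out: str, epsilon: int = 3) -> None:
    """Shift each pixel channel by at most +/- epsilon intensity levels
    (out of 255), a change far too small for a human viewer to notice.

    Toy sketch only: real protection tools like Nightshade optimize the
    perturbation against a model rather than drawing it at random."""
    img = np.asarray(Image.open(path_in).convert("RGB"), dtype=np.int16)
    noise = np.random.randint(-epsilon, epsilon + 1, size=img.shape)
    protected = np.clip(img + noise, 0, 255).astype(np.uint8)
    Image.fromarray(protected).save(path_out)

# Hypothetical usage; the file names are placeholders.
add_imperceptible_noise("landscape.jpg", "landscape_protected.jpg")
```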
“Right now, there’s very little incentive for companies to change the way that they have been operating—which is to say, ‘Everything under the sun is ours, and there’s nothing you can do about it,’” Ben Zhao, a computer scientist at the University of Chicago and leader of the Nightshade team, told Hyperallergic’s Elaine Velie in October. “We’re just sort of giving them a little bit more nudge towards the ethical front.”