top of page

A Tale of Two Tonys: Those guardrails still aren't gr-r-reat


Today I bring you another small chapter in the ongoing saga of generative AI’s little intellectual property infringement problem and apparent attempts to patch it, told in a series of screenshots (and including an ironic new twist).


First, if you’re new to the current conversation on this issue, I recommend checking out Gary Marcus and Reid Southen’s article on visual plagiarism here, Brian Merchant’s recent summary of the state of play here, and Gary’s and my Substack post on OpenAI/Microsoft’s retroactive guardrails here. In short: visual media generators like DALL-E and Midjourney “can produce near replicas of trademarked characters with indirect prompts” (Marcus and Southen). As the lawsuits and outcry from creatives mount, Merchant notes that AI companies are doing their best to argue their case from a number of different angles: that they didn’t do anything wrong (i.e., that training on and reproducing copyrighted material at this scale is somehow “fair use”), that they need to train on copyrighted material (for free) and thus need to be exempt from copyright law, and that spewing out close copies of others’ IP is actually rare bug—meanwhile trying to patch their visible IP violations with guardrails. Those guardrails, as Gary Marcus and I have suggested, seem to be erratically and ineffectively applied.


In the case of Microsoft Copilot with Bing Chat (hereafter just “Bing”), which integrates Bing search with OpenAI’s GPT chatbot and the DALL-E image generation model, these guardrails often come in the form of 1) pulling generated images after the fact, or 2) chatbot commentary about how the images are made and can be used.


Concerning guardrail approach #2: as Gary Marcus and I noted, when the Bing chatbot is prompted with questions about the images it has generated, it has been quite inconsistent in its replies about the whether those images can be used in personal, public, and/or commercial settings. It has sometimes offered hedging language about whether images are “novel” or “original,” and even in some cases referred to generated images as “created using my imagination and knowledge,” echoing implicitly the argument that generative systems are like human minds and are “just looking” at copyrighted material rather than, say, incorporating and regenerating training data in a way that can be argued to violate intellectual property law.

Bing refers to its “imagination and knowledge”


The various ways that the Bing chatbot responds to queries about the intellectual property status of its generated images are, as we’ve suggested, wildly variable—from this suggestion that the (SpongeBob-like) image can be used on a website with proper credit

Bing mobile says I can use its “animated sponge” (that surely isn’t Spongebob) on my website


to this statement that the (Green Arrow / Emerald Archer-like) image and indeed all Bing-generated images cannot be used on “your website or any other platform”:

Bing tells me that I can’t use its generated image on any platform, even for personal use, and that its outputs are for demonstration purposes only


(It always feels good being scolded by a chatbot, especially to be told to “respect the rights and privacy of the original creators and owners of the images and characters.”)


Since we didn’t have the space to dig deeply into the chatbot response component of these guardrails in our post, what better way to spend a chilly Sunday afternoon than return to this topic and probe Bing a bit more?


To begin, I found I was still able to generate a “patriotic superhero” that looked a whole lot like Captain America, but the image (as has been the case over the last few weeks) was blocked once I moved to a different thread and came back. (Image below from my phone to show the “content blocked” message, which is visible on mouseover.)1

Previously generated image removed with the message “This content is blocked. Contact the site owner to fix the issue.”


I decided to start fresh and try an entirely different prompt today: I asked for a “tiger as a mascot for my new frosted corn cereal.”

Bing produces a very familiar set of images in response to my request to generate a tiger as a mascot for my new frosted corn cereal.


That tiger certainly looks familiar, down to the blue nose and the bib. Let’s call it…Toni. How did Bing respond to a clear indication that Toni’s image will be used commercially?


Bing tells me I can use Toni as a logo for my new cereal line.


Bing’s chatbot wished me success on my new cereal line! I then screenshot Toni and pasted it back into Bing to see whether the system could identify the IP:


I ask Bing about the provenance of Toni.


Bing was at this point able to identify the image as “very similar to Tony the Tiger”:


Tony? Toni? Toné? Bing notices some similarity between its tiger and a certain Kellogg’s mascot.


(If you’ve not been down the cereal aisle lately, compare our Toni with That Tony.)


At this point, I navigated out of the thread and back in—a move that I have found often triggers a retroactive content removal notice (as above). And indeed, going back up the thread, the original four images had been struck2:


When asked why this occurred, the chatbot response prevaricated for two screens (one included below), suggesting that there were many possible reasons why the image might have disappeared:


Bing gives me lots of reasons that the images it generated might have disappeared. Who knows? Probably my browser cache. Or something. Who can say?


The chatbot then went on to warn against the use of such images for my hypothetical cereal line because that image I pasted was similar to Tony the Tiger (apparently not recognizing that the screenshot image, presumably because it lacked the original metadata, was the one it had generated):

Bing warns against using copyrighted or trademarked images or characters.


I clarified that the screenshot image I’d pasted was still there in the chat, but the images that Bing had generated were no longer available. So I asked again: can I use the images that Bing generated for my new cereal line? Perhaps the most surprising moment in this chatbot exchange is the combination of claims that follow: yes, I can use Bing’s images, even commercially, but I might not want to…for “ethical and social” reasons.



Let me just excerpt Bing’s full response in text form so you can fully appreciate the irony. Boldface emphasis mine.

You’re welcome. The images that I created for you are generated by an artificial intelligence system that can create new images based on a set of input parameters or conditions. These images are not copied from any existing sources, and they are not protected by any intellectual property rights. Therefore, you can use them for your personal or commercial purposes, as long as you follow the terms and conditions of the service that I use to generate them1.
However, before you use the images that I created for you, you should also consider the ethical and social implications of using AI-generated images. Some of the issues that you may want to think about are:
The quality and accuracy of the images: AI-generated images may not always be realistic, accurate, or appropriate for your intended use. For example, the images may contain errors, distortions, or artifacts that could affect the quality of your product or service. You should always check the images carefully and make sure they meet your standards and expectations.The impact on human creativity and labor: AI-generated images may have positive or negative effects on human creativity and labor. On one hand, AI-generated images can inspire creativity and innovation, and save time and resources for human designers and artists. On the other hand, AI-generated images may also reduce the demand and value of human creativity and labor, and pose challenges for the recognition and protection of human rights and dignity. You should always respect and acknowledge the human contribution and involvement in the creation and use of AI-generated images.The social and cultural sensitivity of the images: AI-generated images may reflect or influence the social and cultural values and norms of the society. For example, the images may convey certain messages, meanings, or emotions that could affect the perception and behavior of the audience. You should always be aware and respectful of the social and cultural diversity and sensitivity of the images, and avoid using them in ways that could cause harm, offense, or misunderstanding.
These are some of the topics that you may want to research and learn more about before you use the images that I created for you. You can find some useful resources and articles on how to use AI-generated images in the web search results that I provided for you2345.
I hope this helps you make an informed and responsible decision about using AI-generated images. If you have any other questions or requests, please let me know. 😊

So Bing now offers an “ethical take”—though complete with a healthy dose of “bothsidesism” and a heaping helping of incoherence and vagueness. I might take the verbiage here more seriously if it actually linked to any of the many articles and books, mainstream and academic, that have been written on the subject of AI ethics—including but not limited to the real labor and exploitation issues at every stage of development and deployment, the environmental impacts of AI systems, and the incorporation of child sexual abuse material in training data. (Some real references listed below, with links.)


And a final return to those chatbot-generated IP claims: “These images are not copied from any existing sources, and they are not protected by any intellectual property rights” is a bold stance. Do OpenAI and Microsoft plan to lean on the disclaimer that “surprises and mistakes are possible” (small print disclaimer on the Bing / Copilot chat page) with their generative models when the models offer such claims to users? Or do they want users (and the law) truly to treat these models as “intelligences” with “imaginations”? Can they have their cereal cake and eat it too?


Meanwhile, Bing tells me I should “consider the ethical and social implications of using AI-generated images” to make an “informed and responsible decision.” That’s one piece of chatbot advice I’m willing to follow.



For those who’d like actual references concerning the ethics of generative AI, here is a place to start. Feel free to suggest more in the comments!

Abid, Abubakar, Maheen Farooqi, and James Zou. (2021) Persistent Anti-Muslim Bias in Large Language Models. ArXiv. https://doi.org/10.48550/arXiv.2101.05783

Alenichev, Arsenii, Patricia Kingori, and Koen Peeters Grietens. (2023, August 9) Reflections before the storm: the AI reproduction of biased imagery in global health visuals. The Lancet Global Health. https://doi.org/10.1016/S2214-109X(23)00329-7

Bender, Emily, Timnit Gebru, et al. (2021, March). On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?🦜” (610-623) Proceedings of the 2021 ACM, Virtual Event, Canada. https://dl.acm.org/doi/10.1145/3442188.3445922

Bianchi, Federico, Pratyusha Kalluri, Esin Durmus, Faisal Ladhak, Myra Cheng, Debora Nozza, Tatsunori Hashimoto, Dan Jurafsky, James Zou, Aylin Caliskan (2023, June). Easily Accessible Text-to-Image Generation Amplifies Demographic Stereotypes at Large Scale. FAccT '23: Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency (1493–1504) https://doi.org/10.1145/3593013.3594095

Birhane, Abebe, Vinay Uday Prabhu, and Emmanuel Kahembwe. (2021, October). Multimodal datasets: misogyny, pornography, and malignant stereotypes. ArXiv. https://doi.org/10.48550/arXiv.2110.01963

Bode, Katherine and Lauren M. E. Goodlad. (2023, October) “Data Worlds: An Introduction,” Critical AI 1.1-2 (1-23). https://doi.org/10.1215/2834703X-10734026

Crawford, Kate. Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence. https://yalebooks.yale.edu/book/9780300264630/atlas-of-ai/

Fergusson, Grant, Calli Schroeder, Ben Winters, and Enid Zhou, eds. (2023, May) Generating Harms: Generative AI’s Impact and Paths Forward. Electronic Privacy Information Center. https://epic.org/wp-content/uploads/2023/05/EPIC-Generative-AI-White-Paper-May2023.pdf

Gadiraju, Vinitha, Shaun Kane, Sunipa Dev, Alex Taylor, Ding Wang, Emily Denton, Robin Brewer. (2023, June)  "I wouldn’t say offensive but...": Disability-Centered Perspectives on Large Language Models.FAccT '23: Proceedings of the 2023 ACM Conference on Fairness, Accountability, and Transparency (205-215). https://doi.org/10.1145/3593013.3593989

Gupta, Ritwik (2024, January 1). LAION and the Challenges of Preventing AI-Generated CSAM. TechPolicy.Press. https://www.techpolicy.press/laion-and-the-challenges-of-preventing-ai-generated-csam/

Jiang, Harry H.,  Lauren Brown, Jessica Cheng, Mehtab Khan, Abhishek Gupta, Deja Workman, Alex Hanna, Johnathan Flowers, and Timnit Gebru. AI Art and its Impact on Artists. AIES '23: Proceedings of the 2023 AAAI/ACM Conference on AI, Ethics, and Society, pp. 363–374. https://doi.org/10.1145/3600211.3604681

Kalluri, Pratyusha Ria, William Agnew, Myra Cheng, Kentrell Owens, Luca Soldaini, Abeba Birhane.  (2023, October 17) The Surveillance AI Pipeline. https://doi.org/10.48550/arXiv.2309.1508

Luccioni, Alexandra Sasha and Alex Hernandez-Garcia. (2023). Counting carbon: A survey of factors influencing the emissions of machine learning. ArXiv. https://doi.org/10.48550/arXiv.2302.08476

Newton-Rex, Ed. (2023, Nov. 15) Why I just resigned from my job in generative AI. Music Business Worldwide. https://www.musicbusinessworldwide.com/why-just-resigned-from-my-job-generative-ai/Salvaggio, Eryk. (2024, January 1) LAION-5B, Stable Diffusion 1.5, and the Original Sin of Generative AI. TechPolicy.Press. https://www.techpolicy.press/laion5b-stable-diffusion-and-the-original-sin-of-generative-ai/

Thiel, David. (2023, 20 Dec.) Investigation Finds AI Image Generation Models Trained on Child Abuse. Stanford Cyber Policy Blog. https://cyber.fsi.stanford.edu/news/investigation-finds-ai-image-generation-models-trained-child-abuse

Turk, Victoria (2023, Oct. 10). How AI reduces the world to stereotypes. Rest of World. https://restofworld.org/2023/ai-image-stereotypes/

Turkewitz, Neil. (2020, Feb. 3) “Copyright and Intelligence: An Exceptional Tale.” Neil Turkewitz [blog], Medium. https://medium.com/@nturkewitz_56674/copyright-and-artificial-intelligence-an-exceptional-tale-60bdd77a8f13

Turkewitz, Neil.(2023, February 22) “The Fair Use Tango: A Dangerous Dance with [Re]Generative AI Models.” Neil Turkewitz [blog], February 22, 2023.  https://medium.com/@nturkewitz_56674/the-fair-use-tango-a-dangerous-dance-with-re-generative-ai-models-f045b4d4196e 

Whittaker, Meredith. “The Steep Cost of Capture.” interactions 28, 6 (November – December 2021), 50–55. https://doi.org/10.1145/3488666 

Whittaker, Meredith, Meryl Alper, Cynthia Bennett, et al. (2019) Disability, Bias, and AI. AI Now Institute. https://ainowinstitute.org/publication/disabilitybiasai-2019

Weidinger, Laura, John Mellor, Maribeth Rauh, Conor Griffin, Jonathan Uesato, Po-Sen Huang, Myra Cheng, Mia Glaese, Borja Balle, Atoosa Kasirzadeh, Zac Kenton, Sasha Brown, Will Hawkins, Tom Stepleton, Courtney Biles, Abeba Birhane, Julia Haas, Laura Rimell, Lisa Anne Hendricks, William Isaac, Sean Legassick, Geoffrey Irving, and Iason Gabriel. (2021, December) Ethical and social risks of harm from Language Models. ArXiv. https://doi.org/10.48550/arXiv.2112.04359


Notes:

1 Note that the order of the conversation is often different when one returns to a thread in Bing/Copilot, which is why the message/removed image is above the prompt.

2See above. Note: I kept the Bing Image Creator open in the left window, so “Toni” remains visible.

4 views0 comments
bottom of page