
AI and Commun(icat)ion: When is it inappropriate to use LLM-generated writing?


March 14, 2023 (originally posted at Pandora's Bot Substack)


In one of the more controversial early cases of ChatGPT/LLM use in academia, it was revealed that Vanderbilt’s Peabody College Office of Equity, Diversity, and Inclusion had sent an AI-written response to the February 13, 2023, Michigan State University shooting. Nicole Joseph, one of the associate deans who signed the letter, sent a follow-up message apologizing and calling the use of ChatGPT to write the message “poor judgment,” according to the student newspaper The Vanderbilt Hustler.

“While we believe in the message of inclusivity expressed in the email, using ChatGPT to generate communications on behalf of our community in a time of sorrow and in response to a tragedy contradicts the values that characterize Peabody College,” Joseph’s message said.

I was sent this article by three colleagues, and we all shared our discomfort with this kind of use of AI. Indeed, outrage and disgust seem to be common responses. In the Hustler’s coverage, Vanderbilt senior Joseph Sexton noted that “using ChatGPT with a parenthetical citation would get points off in any class I’ve taken here…. This feels definitively wrong, whether ChatGPT provided a single word or the entire spiel.” Meanwhile, back in January, a comment about the potential academic use of ChatGPT for letters of recommendation made the rounds on social media (by way of a repost by Shit Academics Say).

Indeed, there’s at least one Reddit thread dedicated to using ChatGPT for letters of recommendation. To many readers, this may seem an entirely different ethical situation from the one posed by the Vanderbilt incident. Is the use of an LLM for letters of recommendation so different from the templates many academics personally generate and reuse over the years? Is this a case in which we are simply taking mundane tasks off our overloaded service plates? Or does using an LLM violate some expectation of what letters of recommendation are intended to do: the care, thought, and judgment they are expected to convey? I have been thinking about these responses as a kind of spectrum, with the outrage over the Vanderbilt DEI statement on one end and, on the other, cautious enthusiasm for using LLMs and generative AI to take over tasks that would otherwise require time, thought, and emotional energy.

The issue with using generative AI in situations like the one for which Vanderbilt has been taken to task is not only whether people will get credit for things they didn’t do, whether those people are students, faculty, marketing professionals, or programmers. Vanderbilt’s Equity, Diversity, and Inclusion office using the technology as it did, in other words, is not quite the same problem as, say, students turning in essays written by LLMs. The goal of most instructors is not simply to stick a grade on a final product but to assess the student’s critical thinking and writing; in that case, “AI-gerism” bypasses the educational process and the university’s learning goals entirely.

Much of the discomfort with the Vanderbilt case, and some of the discomfort one might have with using LLMs for letters of recommendation, I would argue, lies in the nature of the communicative exchange—the “why” of communication that is exposed and, arguably, transgressed in these particular cases.

The etymology of “communication,” which Merriam-Webster defines as “a process by which information is exchanged between individuals through a common system of symbols, signs, or behavior,” may offer a bit of insight. “Communication” ultimately derives from the Latin commūnicātiō; here is a fuller etymology according to the OED:

Etymology: < (i) Anglo-Norman communicacioun, Anglo-Norman and Middle French communicacion, communication, etc. (French communication ) interpersonal contact, dealing (late 13th cent. or earlier in Anglo-Norman), fact of having something in common with another person or thing (late 13th or early 14th cent. in Old French), communion rite (early 14th cent. or earlier in Anglo-Norman), (in anatomy) fact of being connected by a physical link (1314), connection, passage (from one place to another) (a1374), handing over (a1377 or earlier in Anglo-Norman), discussion (a1377 or earlier in Anglo-Norman), meeting, coming together (15th cent. or earlier in Anglo-Norman), piece of information (a1420), action of sharing (goods, property), joining together to do business (1437), and its etymon (ii) classical Latin commūnicātiōn-, commūnicātiō action of sharing or imparting, community of ground, (in rhetoric) consultation with one's audience or adversary, in post-classical Latin also membership in the Christian society (late 2nd cent. in Tertullian), participation in Holy Communion (3rd cent.), imparting (of ideas) (from 13th cent. in British sources), interchange of properties (between the two natures of Christ) (early 14th cent. in a British source), connection, link (1363 in Chauliac), consultation, discussion (from 14th cent. in British sources), social intercourse, fellowship (15th cent. in a British source) < commūnicāt- , past participial stem of commūnicāre.

One can see here that “communication” is closely tied with the notions of “community” and “communion”—and where there is a need to convey feeling and personal judgment, the use of LLMs feels like a transgression of an unspoken communicative contract.

As I commented in my last post, Sherry Turkle has noted that “intelligence once denoted a dense, layered, complex attribute. It implied intuition and common sense. But when computers were declared to have it, intelligence started to denote something more one-dimensional, strictly cognitive” (Alone Together: Why We Expect More from Technology and Less from Each Other, 141). We might say the same about the extent to which LLMs are in the process of stripping “communication” of some of its richness: the expectation of expression, exchange, communion, connection.

With this in mind, readers may appreciate the irony of the following exchanges I had with two different LLMs.

Me: What do humans offer to the communicative experience that ChatGPT and other LLMs cannot?

ChatGPT: Humans offer a number of unique qualities to the communicative experience that ChatGPT and other language models cannot replicate. Some of these qualities include:

  1. Emotional intelligence: Humans are able to recognize and respond to a wide range of emotions, both in themselves and in others. They can use tone of voice, body language, and other nonverbal cues to convey meaning and build rapport.

  2. Contextual understanding: Humans are able to draw upon their broader understanding of the world and their own experiences to interpret and respond to communication in a way that takes into account the specific context and nuances of a given situation.

  3. Creativity and spontaneity: Humans are able to generate novel ideas and responses in the moment, based on their own creativity and intuition.

  4. Empathy and compassion: Humans are able to connect with others on a deeper level, showing empathy and compassion for their thoughts, feelings, and experiences.

  5. Personalized communication: Humans are able to tailor their communication to the specific needs and preferences of the person they are communicating with, based on their own intuition and understanding of the person.

While ChatGPT and other language models can simulate some aspects of human communication, they are limited by their programming and the data they have been trained on. They cannot fully replicate the richness and complexity of human communication, and may not be able to respond appropriately in all situations.

Me: Does the fact that ChatGPT does not have intelligence or an ability to interact in a social way with the communicant affect its ability to communicate fully?

ChatGPT: Yes, the fact that ChatGPT does not have intelligence or the ability to interact in a social way with the communicant can affect its ability to communicate fully. While ChatGPT is capable of generating responses based on patterns in its training data, it does not truly understand the meaning behind the words or the context of the conversation in the way that a human would. This means that ChatGPT may not be able to fully grasp the nuances of the conversation or respond appropriately in all situations.

Additionally, ChatGPT is not able to engage in the social aspects of communication that are important to many people, such as building rapport, showing empathy and compassion, and using nonverbal cues to convey meaning. These social aspects are an integral part of human communication, and their absence can make ChatGPT's communication feel stilted or impersonal.

In summary, while ChatGPT is capable of communicating to a certain extent, its lack of intelligence and social abilities can limit its ability to communicate fully and may make its responses feel artificial or inadequate in some situations.

And from You.com (another LLM chatbot):

Me: Would a university using an LLM to craft a response to a school shooting be considered an ethical limit case?

You.com Chat: Using a language model like LLM to craft a response to a school shooting could be considered an ethical limit case. The use of language models for such sensitive and emotionally charged topics requires careful consideration of the possible ethical implications. There may be concerns about the accuracy and appropriateness of the response generated by the language model. Also, the use of such technology could potentially minimize the impact of the tragedy and be seen as insensitive or disrespectful. As such, it is essential to consider the potential ethical challenges and address them appropriately before using such technology to generate responses in sensitive situations.

Indeed. And since LLMs draw on a vast training database of human outputs, it seems that, perhaps, “we” already know better.


© 2023 Kathryn Conrad

