A Case Study on AI-Generated Content: The Avianca Airlines ChatGPT Lawsuit
Jul 14, 2023
Whether you love it or hate it, generative AI is here to stay. However, as with any technology, it isn’t perfect, and you must double-check (if not triple-check) any generated output.
While post-editing content on our latest platform, Eye2.ai, we encountered several issues with generative AI. So today, we will examine the common mistakes we discovered while providing post-editing services for AI-generated content. We will also show how post-editing lets you enjoy the fast delivery of generative AI tools like ChatGPT without compromising quality.
The Post-Editing Process of Eye2.ai
AI-generated content has become increasingly prevalent in various fields, including writing, customer service, and data analysis. While AI has made significant advancements in generating coherent and meaningful text, several common mistakes can occur, leading to misinterpretations or inappropriate responses. For this reason, post-editing and checking the content is vital to ensure that your message is successfully conveyed to your target audience.
At Eye2.ai, we are no strangers to post-editing, thanks to our experience in machine translation post-editing on our other platform, machinetranslation.com. Although the post-editing process differs somewhat between AI-generated and machine-translated content, the principle remains the same: our professional experts thoroughly check, edit, and conduct quality assurance to guarantee that the content meets industry standards.
Observed Common Mistakes in the Article
In this case study, we look at a particular post-editing project: an AI-generated article about a man who sued Avianca Airlines, whose lawyer used ChatGPT for legal research and ended up paying fines. The case went viral and landed in news headlines.
Below, we have listed some of the common mistakes we noticed while examining and post-editing the AI-generated content.
Lack of Contextual Understanding
AI often fails to fully understand context, which can lead to misinterpretations or inappropriate responses. While AI can handle straightforward information, it struggles with ambiguous, complex, or emotionally charged ideas.
The highlighted text from the AI-generated article is an example of misinterpreting what really happened in the Avianca Airlines case: no passenger was denied entry into a country due to ChatGPT’s faulty advice on obtaining a visa.
Overgeneralization or Stereotyping
AI models are trained on large datasets and might unintentionally propagate biases present in the data, resulting in outputs that can be insensitive or inappropriate. We have noticed these ethical issues ourselves when using generative AI software like ChatGPT for translation and localization, where its output contained gender and racial biases.
In our example, the last sentence offered no insight into what actually occurred; it merely gave an overgeneralized statement about how AI can generate misinformation and about the importance of human insight in legal matters.
Lack of Originality and Creativity
Although AI can mimic creativity by combining existing concepts in new ways, it can’t truly innovate or think outside of what it’s been trained on. This matters because the value of content writing lies in its ability to capture the audience’s attention and convey its message effectively.
One of the major problems with the AI-generated article is that it tells us the Avianca Airlines case serves as a compelling example of the pitfalls of generative AI, but it then fails to provide a coherent account of what happened: the small details and facts that would support such a strong claim about the problems AI has with technical or specialized fields like law.
False Information
ChatGPT has the potential to reach a large audience, and if it disseminates false information, it can mislead and misinform people. This is particularly concerning for sensitive topics, such as health, science, politics, or legal matters, where accurate information is crucial for making informed decisions. Publishing false information on such topics can also severely damage your credibility.
ChatGPT's training data only goes up to 2021, so it cannot access the latest news and information. Because of this, we had to manually add all the information about the lawsuit against Avianca Airlines and how it led to the revelation that the plaintiff’s lawyer had used generative AI and failed to double-check the generated content.
Lack of Human Touch
Besides lacking creativity, content generated by AI systems like ChatGPT can be impersonal. While AI can generate text that resembles human language, it often lacks empathy, emotion, and the ability to understand context the way humans do. This can result in impersonal and detached writing, making it difficult for readers to establish a genuine connection or feel understood.
AI also fails to account for the needs of its readers, whether that means providing additional context or writing an attention-grabbing introduction – much like the example we presented above.
Anticipated Errors That Were Also Present
Repetition and Redundancy
AI can sometimes generate repetitive or redundant content, especially in longer texts, which makes the writing monotonous. We noticed the same thing occurring while editing this article. That is why you should identify these repetitions and redundancies early in the post-editing process, as you first read through the AI-generated content.
Notice the highlighted text provided above. At the start of the conclusion, the AI stated that “AI models present opportunities and risks”, only to repeat the same claim two paragraphs later.
Unnatural Article Structure
Generative AI systems like ChatGPT can produce text that sounds unnatural or overly formal. The language can be excessively verbose or repetitive, or it can fail to capture the nuances of everyday language. This can make the writing feel stilted and robotic, leading to a less engaging and satisfying reading experience.
In this particular example, notice how the conclusion is as long as the article’s other sections, even though a conclusion should normally be a concise summary.
Conclusion
AI-generated content has limitations, and understanding the common mistakes associated with it is essential for responsible usage. The errors above show how human expertise and insight are vital to avoiding issues such as loss of credibility, lower user engagement, and failure to convey your message effectively. By acknowledging these challenges, developers and users can work towards refining AI-generated content.
Get in Touch
Let’s chat about how Eye2.ai can help you transform your AI content today.