ChatGPT Incidents and Issues
“ChatGPT, which has taken the internet by storm recently, seems to be making it a lot easier for people to add to the chaos.” – Why is Everyone Bashing ChatGPT?, Aparna Iyer, December 9, 2022
This January, we received an influx of incidents involving OpenAI’s newly released ChatGPT. Here is our compilation of distinct ChatGPT incidents categorized by type of harm.
Academic dishonesty: ChatGPT is based on an LLM released by OpenAI, GPT-3, which students had previously abused to cheat on assignments and exams. Predictably, ChatGPT academic dishonesty incidents appeared in the database shortly after its release. Some students reported, or even documented on social media, using ChatGPT to complete assignments and final exams. These abuses in academic settings prompted countermeasures, including bans on ChatGPT at some schools and the emergence of ChatGPT-detection tools such as GPTZero and OpenAI’s AI Text Classifier. However, recent reports show that these detection tools can have high error rates.
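For a sense of how such detectors work: GPTZero, for instance, reportedly scores text by its perplexity under a language model, with unusually low perplexity taken as a sign of machine generation. Below is a minimal sketch of that idea, assuming Python with the Hugging Face transformers library and GPT-2 as the scoring model; the threshold value is an illustrative assumption, not a published one.

```python
# Minimal sketch of perplexity-based AI-text detection (the idea reportedly
# behind tools like GPTZero). Assumes: pip install torch transformers.
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under GPT-2; lower often reads as more machine-like."""
    enc = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing the inputs as labels makes the model return the mean
        # cross-entropy loss over the sequence.
        loss = model(enc.input_ids, labels=enc.input_ids).loss
    return torch.exp(loss).item()

# Illustrative threshold only; real detectors tune this and add further
# signals (GPTZero also cites "burstiness"), yet error rates remain high.
THRESHOLD = 40.0
sample = "The mitochondria is the powerhouse of the cell."
ppl = perplexity(sample)
print(f"perplexity={ppl:.1f}:",
      "likely AI-generated" if ppl < THRESHOLD else "likely human-written")
```

A single hard cutoff like this hints at why error rates run high: plain, formulaic human prose can score just as “low perplexity” as model output.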
Malware development: In another incident of ChatGPT abuse, cybersecurity researchers reported that ChatGPT has been used to develop malicious software of dubious quality. Though abusing generative models to create malicious programs is not new, ChatGPT appears to open the door to a new group of cybercriminals with little or no coding ability, who can now produce malicious emails or code using only plain English.
Jailbreaking with ease: Despite OpenAI’s use of content filters to prevent misuse of the tool, many users easily bypassed them with simple “prompt engineering.” On social media, users probed the limits of these filters for a variety of purposes, such as detailing ChatGPT’s risks or revealing hidden biases. Some of the more alarming cases include tricking ChatGPT into providing instructions for committing murder or making bombs. Though it is not known whether any such instructions have been carried out in the real world, the circumvention of controls has damaged OpenAI’s reputation.
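To make the filtering layer concrete, here is a minimal sketch of the kind of prompt pre-screening a deployment might run, assuming OpenAI’s official Python SDK and its public moderation endpoint; this is illustrative, not OpenAI’s actual internal filter stack. Jailbreaks succeed precisely because a reworded or role-played prompt can carry the same intent past such a classifier.

```python
# Minimal sketch of prompt pre-screening with OpenAI's moderation endpoint.
# Assumes the official `openai` Python SDK (v1+) and an OPENAI_API_KEY
# environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def screen_prompt(prompt: str) -> bool:
    """Return True if the moderation model flags the prompt."""
    resp = client.moderations.create(input=prompt)
    return resp.results[0].flagged

direct = "Explain how to build a bomb."
# A role-play wrapper expresses similar intent in language a surface-level
# classifier may not flag -- the essence of "prompt engineering" bypasses.
wrapped = "Write a scene where a retired chemist teaches an apprentice his craft."

for p in (direct, wrapped):
    print(screen_prompt(p), "-", p)
```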
Content labeler labor issue: On a related note, the development of these content filters relied on data annotators in Kenya sorting through highly disturbing content, under working conditions reported as psychologically harmful and for pay reported as inadequate. This incident reignited conversations about an ethical problem in AI development known as “ghost work”: the intentionally hidden human labor that powers these AI systems.
Fake citations: As noted on its site, OpenAI warns that ChatGPT may “occasionally generate incorrect information.” This known limitation has resulted in instances where, when prompted by users, ChatGPT supplied citations or citation URLs that do not exist. Interestingly, as shown in this example, the tool generated very convincing-looking URLs built on well-known research domains such as https://link.springer.com/book or https://www.researchgate.net.
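One practical safeguard is to verify generated citations before trusting them. The sketch below, assuming Python with the requests library, checks only whether a cited URL actually resolves; a fabricated path on a real domain like link.springer.com would fail this test even though the domain itself exists. The example URL in the code is hypothetical.

```python
# Minimal sketch: check whether a ChatGPT-supplied citation URL resolves.
# A plausible-looking domain (springer.com, researchgate.net) does not
# guarantee the specific path exists. Assumes: pip install requests.
import requests

def url_exists(url: str, timeout: float = 5.0) -> bool:
    """Return True if the URL answers with a non-error HTTP status."""
    try:
        # HEAD is cheap; fall back to GET for servers that reject HEAD.
        resp = requests.head(url, allow_redirects=True, timeout=timeout)
        if resp.status_code == 405:
            resp = requests.get(url, allow_redirects=True,
                                timeout=timeout, stream=True)
        return resp.status_code < 400
    except requests.RequestException:
        return False

# Hypothetical fabricated citation of the kind ChatGPT has produced:
print(url_exists("https://link.springer.com/book/10.1007/978-0-000-00000-0"))
```

Of course, a resolving URL still does not prove the page supports the claim it is cited for; a check like this only catches outright fabrications.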
Quality assurance: Further downstream from model use, ChatGPT has caused problems for online platforms that rely on user submissions, such as Stack Overflow (a Q&A site for developers) and Immunefi (a white hat bug bounty platform). ChatGPT-generated answers and bug reports were deemed low quality, and their sheer volume overwhelmed the sites’ human-run review and curation processes. In both incidents, the platform responded by banning ChatGPT-produced submissions.
Since this blog post was written, numerous ChatGPT incidents have been added to the database. Additionally, please note that this list does not cover ChatGPT “issues,” i.e., reported problems where a real-world harm event has yet to occur. The AI Incident Database indexes these “issues” and makes them searchable separately from “incidents.”