AI Incident Roundup – October ’22
Welcome to this month’s edition of The Monthly Roundup, a newsletter designed to give you a digestible recap of the latest incidents and reports in the AI Incident Database.
Estimated reading time: 5 minutes
🗞️ New Incidents
Emerging incidents that occurred last month:
Incident #377: Weibo Model Has Difficulty Detecting Shifts in Censored Speech
- What happened? The Chinese social media site Weibo's content moderation model has difficulty keeping up with the shifting slang users coin in defiance of Chinese state censors.
- How is the AI not working? Although the site says it has refined its “keyword identification model” to filter intentionally misspelled words and homophones, the diversity and ever-evolving nature of online language in China make it unlikely the model will ever fully ban controversial language (a simplified illustration follows this incident's summary).
- What was the impact of this incident? Chinese citizens were able to undermine Weibo's censorship of banned language, allowing them to discuss controversial topics such as government corruption.
- Who was involved? Weibo developed and deployed an AI system, which harmed Weibo and the Chinese government.
- 🚨 Editor's Note: Identifying an alleged "harm" to a party does not imply that the broader community is responsible for mitigating or preventing that harm. Although this incident meets the criteria, database editors make no claims about whether it should be prevented from recurring. The AIID indexes all incidents meeting its incident definition; our responsibility is to make such incidents known (e.g., Incident #13, where language toxicity models were shown to be easily fooled).
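As a minimal sketch of the cat-and-mouse dynamic described above (this is not Weibo's actual system; the blocklist, terms, and function names are hypothetical placeholders), consider how a static keyword filter is evaded the moment users substitute misspellings, homophones, or new slang:

```python
# Hypothetical illustration only -- not Weibo's "keyword identification
# model". A static blocklist catches exact matches but misses trivial
# variants until each new variant is added by hand.

BANNED_TERMS = {"banned_word"}  # placeholder for a censored term

def is_blocked(post: str) -> bool:
    """Naive keyword matching: flag a post only if it contains an
    exact banned term from the blocklist."""
    return any(term in post.lower() for term in BANNED_TERMS)

print(is_blocked("a post about banned_word"))   # True: exact match caught
print(is_blocked("a post about b4nned_w0rd"))   # False: misspelling evades
print(is_blocked("a post about fresh_slang"))   # False: new coinage evades
```

Each time the blocklist is patched to cover one variant, users can coin another, which is why this dynamic tends to favor the writers over the filter.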
Incident #383: Google Home Mini Speaker Reportedly Read N-Word in Song Title Aloud
- What happened? Users reported that the Google Home Mini speaker read aloud the previously censored n-word in a song title. It is unclear when or how Google's speakers stopped censoring the word.
- Who was involved? Google Home developed and deployed an AI system, which harmed Black Google Home Mini users and Google Home Mini users.
Incident #384: Glovo Driver in Italy Fired via Automated Email after Being Killed in Accident
- What happened? Delivery company Glovo's automated system sent an email terminating an employee for "non-compliance with terms and conditions" after the employee was killed in a car accident while making a delivery on Glovo's behalf.
- Who was involved? Glovo developed and deployed an AI system, which harmed Sebastian Galassi and Sebastian Galassi's family.
Incident #385: Canadian Police's Release of Suspect's AI-Generated Facial Photo Reportedly Reinforced Racial Profiling
- What happened? The Edmonton Police Service (EPS) in Canada released a facial image of a Black male suspect generated by an algorithm using DNA phenotyping, which was denounced by the local community as racial profiling.
- How does the AI work? The AI system, called Snapshot, creates a composite facial sketch from physical appearance attributes predicted by DNA phenotyping, the process of inferring physical appearance and ancestry from unidentified DNA evidence.
- How did this AI cause harm? DNA phenotyping composites are approximations of appearance, and it is not clear that Snapshot profiles match their subjects. In this case, because the AI-generated composite depicted a Black man, its release raised concerns about racial profiling in a marginalized community.
- Who was involved? Parabon NanoLabs developed an AI system deployed by the Edmonton Police Service, which harmed Black residents in Edmonton.
📎 New Developments
Older incidents that have new reports or updates.
- Incident #376: RealPage's Algorithm Pushed Rent Prices High, Allegedly Artificially
- Incident #373: Michigan's Unemployment Benefits Algorithm MiDAS Issued False Fraud Claims for Thousands of People
- Incident #267: Clearview AI Algorithm Built on Photos Scraped from Social Media Profiles without Consent
- Incident #382: Instagram's Exposure of Harmful Content Contributed to Teenage Girl’s Suicide
🗄 From the Archives
Every edition, we feature one or more historical incidents that we find thought-provoking.
While in October we saw humans outwitting an automated system to avoid state censorship in China, other historical incidents recently added to the database highlight AI systems violating privacy in furtherance of state or commercial interests. An incident from 2019 echoes the concern of state surveillance: the Ugandan government reportedly used facial recognition software to monitor political opposition. Meanwhile, in the private sector, Uber allegedly violated drivers’ data privacy rights to monitor performance in 2020, and McDonald’s faced a lawsuit in 2021 for potentially violating Illinois privacy laws by collecting voice data through its drive-through chatbot.
Beyond the deliberate use of AI systems to collect and use private data, there are several earlier examples of automated systems mistakenly collecting or sharing such data. In 2018, an Amazon Echo mistakenly sent a recorded private conversation between a husband and wife to one of the husband’s employees without either of their knowledge. GPT-2, the predecessor to GPT-3, was reportedly able to recite Personally Identifiable Information (PII) that it learned through training on massive amounts of data from the internet.
These occurrences from the last few years highlight recurring privacy concerns that legal systems are increasingly addressing through rights to privacy and data protection.
– by Janet Schwartz
👇 Diving Deeper
- All new incidents added to the database in the last month, grouped by topic:
- Privacy & surveillance: #354, #357, #360, #361, #371, #372, #377
- Facial recognition: #358, #365, #368, #373, #375, #385
- Bias & discrimination: #355, #356, #359, #367, #374, #383
- Social media: #362, #363, #366, #380, #382
- Impactful errors: #364, #379, #384
- Unfair competition: #370, #376
- Autonomous vehicles: #378, #381
- Labor displacement: #369
- Explore clusters of similar incidents in the Spatial Visualization
- Check out the Table View for a complete listing of all incidents
- Learn about alleged developers, deployers, and harmed parties on the Entities Page
🦾 Support our Efforts
Still reading? Help us change the world for the better!
- Share this newsletter on LinkedIn, Twitter, and Facebook
- Submit incidents to the database
- Contribute to the database’s functionality