Description: Google’s automated detection of abusive images of children incorrectly flagged a parent’s photo intended for a healthcare provider, resulting in a false police report of child abuse and the loss of access to his online accounts and information.
Entities
Alleged: Google developed and deployed an AI system, which harmed a software engineer named Mark and parents using telemedicine services.
Incident Stats
Incident ID
303
Report Count
2
Incident Date
2022-08-21
Editors
Khoa Lam
Incident Reports
Reports Timeline
nytimes.com · 2022
- View the original report at its source
- View the report at the Internet Archive
Mark noticed something amiss with his toddler. His son’s penis looked swollen and was hurting him. Mark, a stay-at-home dad in San Francisco, grabbed his Android smartphone and took photos to document the problem so he could track its progr…
siliconangle.com · 2022
- View the original report at its source
- View the report at the Internet Archive
The application of artificial intelligence in various aspects of life has been mostly a net positive, but what happens when machine learning algorithms cannot detect the difference between an innocent health photo and child exploitation mat…
Variants
A "variant" is an incident that shares the same causative factors, produces similar harms, and involves the same intelligent systems as a known AI incident. Rather than index variants as entirely separate incidents, we list variations of incidents under the first similar incident submitted to the database. Unlike other submission types to the incident database, variants are not required to have reporting in evidence external to the Incident Database. Learn more from the research paper.