A Michigan woman was wrongly accused of robbery and carjacking after a facial recognition system misidentified her as a suspect, raising fresh alarms over the accuracy and accountability of AI surveillance tools.
Porcha Woodruff, a Detroit resident, was left stunned when police showed up at her door with an arrest warrant—while she was eight months pregnant. The charge? Alleged involvement in a recent robbery and carjacking. The only evidence tying her to the crime was a facial recognition match, which later turned out to be incorrect.

Despite having no connection to the crime, Woodruff was detained, booked, and held in custody for several hours before being released on bond. It wasn’t until days later that prosecutors dropped the charges, citing insufficient evidence.
The incident has sparked outrage among civil rights advocates, who argue that facial recognition systems, especially when used without proper oversight, can result in life-altering mistakes. Critics point out that these systems have been shown to misidentify women and people of color at higher rates, leaving those groups particularly vulnerable to false matches.
“This isn’t just about flawed software—it’s about flawed systems that are too quick to trust technology over real investigation,” said one civil liberties advocate.
Woodruff has filed a federal lawsuit against the City of Detroit, demanding accountability and raising broader questions about how such technology is being used in policing. Her case is not an isolated one—Detroit police have previously come under scrutiny for similar wrongful arrests linked to facial recognition.
As the legal battle unfolds, calls are growing for stricter regulation of facial recognition in law enforcement, or even a moratorium on its use, until safeguards are firmly in place.