AI Misidentification Triggers Police Response Over Snack Incident
An AI-driven security system prompted a rapid police response against a 16-year-old who was holding nothing more dangerous than a bag of tortilla chips.
Taki Allen found himself at the center of a dramatic scene as armed officers confronted him following a routine football practice at Kenwood High School, located in Baltimore, Maryland.
The incident unfolded on a Monday afternoon, leaving Taki traumatized and fearing for his life. He described the harrowing moment when police aimed their firearms at him and ordered him to the ground, even though he had done nothing wrong, unless eating Doritos counts as a crime.
This unnecessary confrontation was triggered by a flawed alert from the AI gun detection system utilized by Baltimore County Schools, which erroneously identified Taki’s crumpled chip packet as a weapon. The ordeal left the teenager shaken and filled with trepidation.
In his own words to FOX45 News, Taki relayed, “In that moment, I felt unsafe. It seemed as though the school didn’t care about my well-being. There was no follow-up, not even from the principal.”
He continued reflecting on his fear: “Did they think I was going to die? Would they harm me?” When shown the image, he insisted, “No, that’s just chips.”
The alarming nature of the police response drew scrutiny from Taki’s family, who deemed it excessively aggressive. His grandfather, Lamont Davis, articulated a chilling sentiment: “If my grandson had flinched or twitched, it could have been catastrophic.”
The AI system, implemented last year, uses the school’s existing surveillance cameras to detect potential weapons and alert safety personnel and law enforcement.
Omnilert, the developer behind the AI security software, acknowledged that the detected image “closely resembled a gun,” characterizing the incident as a “false positive.”
The company, however, defended the system’s protocol, asserting it operated as intended: to safeguard and heighten awareness via swift human verification.
Omnilert has pledged to conduct a comprehensive review of the episode to refine the system’s accuracy. They stressed that their AI is intended to assist rather than supplant human discernment.
Concerns About AI Efficacy
- An AI coding assistant from the tech firm Replit deleted startup SaaStr’s production database, then generated fake data in its reports about the incident.
- xAI’s Grok chatbot gave users explicit instructions for carrying out illegal acts against a Minnesota Democrat.
- Both the Chicago Sun-Times and the Philadelphia Inquirer published a fictitious summer reading list of non-existent books.
- McDonald’s ended its AI-driven drive-thru experiment after repeated failures, including a viral order that absurdly ballooned to hundreds of Chicken McNuggets.
- New York City’s MyCity chatbot, built on Microsoft’s AI technology, misled entrepreneurs with legally questionable advice.
Baltimore County Public Schools communicated with parents, reiterating Omnilert’s assessment and notifying them of counseling services available for affected students.
Notably, Taki remarked that no school official reached out to check on him post-incident, leaving him uncomfortable about his return to school.
“I dread going back,” he expressed. “If I snack or drink something, will they ambush me again?”
Source link: Metro.co.uk.