Kenwood High School in Maryland uses an AI-powered security system called Omnilert, which taps into existing surveillance cameras to continuously scan for potential weapons. But the system doesn’t always get it right. On the evening of October 20, it mistakenly flagged a bag of Doritos – a popular snack in the US – as a firearm.
The chips belonged to Taki Allen, a 17-year-old student who was waiting for a ride after football practice around 7 p.m. when police sirens suddenly blared and eight patrol cars pulled up. Officers jumped out and pointed their guns at him. “I thought I was going to die… they had guns on me,” Allen told CNN. He was forced to kneel, handcuffed and searched – only for officers to find nothing but an empty bag of chips on the ground.
According to principal Kate Smith, the AI alert had already been canceled before police arrived. However, the update wasn’t communicated in time, a school district spokesperson told WBAL-TV 11. Superintendent Myriam Rogers called the incident “unfortunate” and announced a full review. Psychological support will be provided to the students involved.
Calls for reform after false alarm
After the incident, Allen’s grandfather, Lamont Davis, called for accountability and reforms in the use of AI surveillance. The event also sparked intense debate on social media. On Reddit, users voiced strong criticism of the police, school officials and the technology involved. Many argued that the real issue wasn’t the AI itself but the way people responded after the alert. “Human error, not AI failure,” one of the top comments read.
A system designed to detect weapons early makes sense – especially in the US, where school shootings remain a serious concern. But it’s crucial to ensure these tools don’t become a risk themselves. That means minimizing false alarms and maintaining clear communication between security staff and law enforcement.
Image source: Darya Sannikova/Pexels; Andre Moura/Pexels