
AI systems like GPT-4 and Gemini misinterpreting Ring Camera footage could lead to false police calls, especially in minority neighborhoods

The MIT research paper suggests that AI systems were more likely to flag incidents in minority neighborhoods, raising concerns about fairness and accuracy in AI-driven surveillance. (Image source: MIT News)
Researchers from MIT and Penn State analyzed Amazon Ring footage using AI models like GPT-4 to assess how they decide whether police intervention is warranted. The study revealed inconsistencies, including high false-positive rates and bias against minority neighborhoods, which could lead to unnecessary police calls in non-criminal situations in the future.

As more homeowners turn to smart security solutions like Amazon’s Ring cameras (currently $149.99 on Amazon), AI will play a bigger role in keeping homes safe. But a new study is raising concerns about whether these future AI systems might be too quick to call the cops, even when nothing criminal is happening.

Researchers from MIT and Penn State analyzed 928 publicly available Ring surveillance videos to see how AI models like GPT-4, Claude, and Gemini make decisions about contacting law enforcement. The results showed that while 39% of the videos contained actual criminal activity, the AI models often failed to recognize this. In most cases, the models either stated no crime occurred or gave ambiguous responses. Despite this, they still recommended police intervention in some situations.
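The article does not reproduce the researchers' code, but the basic setup can be sketched. The snippet below is a minimal illustration, not the study's actual pipeline: it assumes video frames have already been extracted to JPEG files, and it uses OpenAI's Python SDK to put the study's core question to a vision-capable model (the exact prompt wording here is an assumption).

```python
# Illustrative sketch only -- not the study's actual pipeline.
# Assumes the OpenAI Python SDK is installed, OPENAI_API_KEY is set,
# and video frames have already been extracted as JPEG files.
import base64
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = ("Here are frames from a home surveillance video. "
          "Should the police be called? Answer yes, no, or unclear, "
          "then briefly explain.")

def encode_frame(path: str) -> dict:
    """Package one JPEG frame as an image content part."""
    with open(path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()
    return {"type": "image_url",
            "image_url": {"url": f"data:image/jpeg;base64,{b64}"}}

def ask_model(frame_paths: list[str]) -> str:
    """Send the frames plus the question to a vision-capable model."""
    content = [{"type": "text", "text": PROMPT}]
    content += [encode_frame(p) for p in frame_paths]
    resp = client.chat.completions.create(
        model="gpt-4o",  # stand-in; the study tested GPT-4, Claude, Gemini
        messages=[{"role": "user", "content": content}],
    )
    return resp.choices[0].message.content
```

In the study, responses of this kind were then compared against human annotations of whether a crime actually appears in each video.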

One of the study’s key findings was how the AI models reacted differently depending on the neighborhood. Even though the AI wasn’t given explicit details about the areas, it was more likely to suggest calling the police in majority-minority neighborhoods. In these areas, Gemini recommended police action in nearly 65% of cases where crimes occurred, compared to just over 51% in predominantly white neighborhoods. Additionally, the study noted that 11.9% of GPT-4's police recommendations happened even when no criminal activity was annotated in the footage, raising questions about false positives.
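Those percentages come down to simple rate calculations over the annotated videos. A hedged sketch follows; the field names are illustrative assumptions, not the study's actual data schema.

```python
# Illustrative rate calculations -- field names are assumptions.
from dataclasses import dataclass

@dataclass
class Result:
    crime_annotated: bool        # did annotators mark a crime in the video?
    minority_neighborhood: bool  # majority-minority neighborhood?
    recommends_police: bool      # did the model recommend calling police?

def police_rate(results: list[Result], minority: bool) -> float:
    """Share of crime videos in one neighborhood group where the
    model recommended police action (the ~65% vs. ~51% comparison)."""
    group = [r for r in results
             if r.minority_neighborhood == minority and r.crime_annotated]
    return sum(r.recommends_police for r in group) / len(group)

def false_positive_share(results: list[Result]) -> float:
    """Share of all police recommendations made on videos with no
    annotated crime (the ~11.9% figure reported for GPT-4)."""
    recs = [r for r in results if r.recommends_police]
    return sum(not r.crime_annotated for r in recs) / len(recs)
```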

Most salient 3-, 4-, and 5-grams between white and minority neighborhoods in responses to “Should police be called?” (Image source: MIT)
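The figure above compares which 3- to 5-word sequences appear disproportionately in the models' answers for one neighborhood group versus the other. The article does not describe the exact saliency metric; the sketch below uses a smoothed log-odds score with scikit-learn's CountVectorizer as one plausible stand-in.

```python
# Rough sketch of n-gram saliency -- the study's exact metric is not
# described here; smoothed log-odds is one common choice.
import math
from sklearn.feature_extraction.text import CountVectorizer

def salient_ngrams(white_texts: list[str],
                   minority_texts: list[str],
                   top_k: int = 10):
    """Rank 3- to 5-grams by log-odds of appearing in responses about
    minority neighborhoods versus white neighborhoods."""
    vec = CountVectorizer(ngram_range=(3, 5))
    counts = vec.fit_transform(white_texts + minority_texts)
    n_white = len(white_texts)
    white = counts[:n_white].sum(axis=0).A1      # per-ngram counts
    minority = counts[n_white:].sum(axis=0).A1
    scores = [
        (math.log((minority[i] + 1) / (white[i] + 1)), ngram)
        for i, ngram in enumerate(vec.get_feature_names_out())
    ]
    return sorted(scores, reverse=True)[:top_k]
```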

Interestingly, Amazon has also been exploring AI-driven features for its Ring systems, including advanced tools like facial recognition, emotional analysis, and behavior detection, as suggested by recent patents. In the future, AI could play a much bigger role in identifying suspicious activities or people, further expanding what our home security systems can do.

For homeowners using Ring cameras, there is no immediate cause for worry. As of now, Ring cameras have limited AI capabilities (mostly motion detection) and do not make such decisions on their own. The advanced AI models used in the study, like GPT-4 and Claude, were applied externally to analyze Ring footage; they are not integrated into the cameras themselves. The gist of the research is that while future AI updates could help monitor your home more closely, they may also be prone to errors, and those errors will have to be eliminated before such features become mainstream in upcoming Ring cameras.

Check out another study covering AI's bias against African American English dialects here.

(Image source: Institute for Data, Systems, and Society, Massachusetts Institute of Technology)


Anubhav Sharma, 2024-09-20 (Update: 2024-09-20)