Washington judge rejects AI-enhanced video due to inaccurate image modifications in triple murder case
Superior Court Judge Leroy McCullogh has rejected the use of an AI-enhanced video submitted as evidence during a triple murder case in King County, Washington. Two key issues exist with using such videos to prosecute a criminal case. First, AI image enhancement both adds details to and removes details from the original. Second, no method exists to prove that the AI modifications produce a video that accurately represents the actual scene.
AI technology has advanced quickly to the point where it can repair damaged photos and improve poor-quality ones. AI does this by generating details learned from training across millions of existing images. Simply put, when AI sees a region of a low-quality image that resembles a grassy field, it draws on the most likely grassy fields it has seen before and attempts to patch in a higher-quality replacement. Importantly, generative AI does not know what is real, so enhanced videos can contain details that do not exist in real life or be stripped of critical details.
Generative AI combines the input of millions of images with algorithms to create a massive set of numbers that represent those images. The original images are not retained by the AI, so an enhanced image relies on “opaque methods to represent what the AI ‘thinks’ should be shown”, according to Judge McCullogh.
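The filling-in described above can be illustrated with a toy numerical sketch. This is not from the article or the court record, and a real generative model uses a neural network trained on millions of images rather than the simple averaging used here; all numbers are invented for illustration. The point it demonstrates is that the "enhanced" pixels come from the training data, not from the original scene:

```python
import numpy as np

# Toy sketch of generative "enhancement": a damaged region is filled
# using statistics learned from a training set, not from the original
# scene. All data is synthetic; the averaging "model" is a stand-in
# for a real neural network.
rng = np.random.default_rng(0)

# "Training set": bright patches the model has seen before
# (think: grassy fields from other people's photos).
training_patches = rng.normal(loc=0.5, scale=0.1, size=(1000, 4, 4))

# The "learned" fill is simply the average training patch here.
learned_fill = training_patches.mean(axis=0)

# Damaged image: a darker scene with a 4x4 region missing (NaN).
image = rng.normal(loc=0.2, scale=0.05, size=(8, 8))
image[2:6, 2:6] = np.nan

# "Enhancement": patch the hole with the learned fill.
enhanced = image.copy()
enhanced[2:6, 2:6] = learned_fill

# The filled region takes its brightness (~0.5) from the training
# data, not from the darker original scene (~0.2): the model has
# invented plausible-looking detail that was never actually there.
print("original scene mean:", float(np.nanmean(image)))
print("hallucinated region mean:", float(enhanced[2:6, 2:6].mean()))
```

The gap between the two printed means is the hallucination in miniature: nothing in the original image justified the brighter fill, only the model's prior experience did.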
Readers who are thinking of becoming lawyers should take a moment to read up on how generative AI works to avoid making mistakes such as submitting filings based on fake cases.