YouTube now asks viewers to detect generative AI slop when rating videos

YouTube has a complicated attitude toward generative AI. It allows some machine-learning-produced videos on the platform and even encourages the technology's use. Yet the company has also vowed to crack down on AI slop. A new rating system has emerged that asks viewers to detect this undesirable content.
VidIQ is one of several social media accounts that noticed the pop-ups in the YouTube app. The prompt bluntly asks, “Does this feel like AI slop?” or, alternatively, whether “low-quality AI” was a factor. Possible responses range from “Not at all” to “Extremely.” For now, the new strategy appears to be in a limited testing phase.
Content creators are free to use generative AI tools to enhance their videos, such as generating voice-overs, making edits, or designing graphics rather than producing that work themselves. Unfortunately, a growing number of uploads are assembled with little human oversight. Even so, such videos are generally permitted as long as they are not deemed low-quality; otherwise, channel owners risk losing monetization.
How does YouTube filter out AI slop?
To determine whether a video meets basic standards, YouTube relies on both automated and human review. Neither has proven adequate: one recent study found that over 20% of YouTube Shorts were poorly produced, repetitive, or misleading. That may be why the company is adding a new element to the conventional like/dislike options.
Relying on viewers has its drawbacks: some are not savvy enough to spot clever deepfakes, and there is a subjective element, as supporters of a channel may hesitate to flag its videos.
Some critics believe the new rating system will lead to more AI slop, not less. If the changes become widespread, users will supply massive amounts of labeled data. One commenter, TukiFromKL, suspects the platform is training its own models to generate content that is harder to detect. With some results more convincing than others, those models could learn how to effectively fool audiences.