Google offers $12 million for hackers to tackle generative AI attacks
Google is extending its existing bug bounty scheme to reward hackers and security researchers who identify attack scenarios specific to generative AI. The Vulnerability Rewards Program (VRP) sets out clear guidelines on which reports qualify for financial rewards. Qualifying issues can be submitted via the Bug Hunter website, with a reward paid out if the report is deemed valid.
Last year, Google paid security researchers $12 million for disclosing security flaws. In August, the company joined other industry delegates and security researchers at Defcon for the largest public generative AI red-teaming event thus far, an exercise supported by the White House to surface possible issues.
We believe expanding the VRP will incentivize research around AI safety and security, and bring potential issues to light that will ultimately make AI safer for everyone. We're also expanding our open source security work to make information about AI supply chain security universally discoverable and verifiable.
- Google spokesperson
Up to this point, the company has relied on its own "AI Red Team" to research AI threats, discover bugs and develop system defenses.
We leverage attackers' tactics, techniques and procedures (TTPs) to test a range of system defenses.
- Daniel Fabian, head of Google Red Teams
Under the new rules, external hackers can simulate attacks to identify flaws in Google's AI systems and services, but they must operate within a strict framework. The expansion of the incentive scheme came shortly before a new executive order from President Biden.
On 30th October, Joe Biden issued the first binding government action on artificial intelligence. The executive order mandates thorough evaluation of AI models before they are used by government agencies.