Microsoft is offering up to $15,000 to bug hunters that pinpoint vulnerabilities of Critical or Important severity in its AI-powered “Bing experience”.
“The new Microsoft AI bounty program comes as a result of key investments and learnings over the last few months, including an AI security research challenge and an update to Microsoft’s vulnerability severity classification for AI systems,” says Lynn Miyashita, a technical program manager with the Microsoft Security Response Center.
Microsoft is asking bug hunters to probe the AI-powered Bing experiences on bing.com in Browser, as well as the Bing integration in Microsoft Edge and the Bing integration in the iOS and Android versions of Microsoft Start and Skype mobile apps.
Manipulate the model’s response to individual inference requests, but do not modify the model itself Manipulate a model during the training phase Infer information about the model’s training data, architecture and weights, or inference-time input data Influence/change Bing’s chat behavior in a way that impacts all other users.
As per usual, the quality of the report accompanying a submission will also influence the amount of the bounty: a critical issue allowing model manipulation can net bug hunters $6,000 if the report is of low quality or $15,000 is it’s of high quality.
Earlier this year, DEF CON’s AI Village hosted a public assessment of LLMs aimed at finding bugs in and uncovering the potential for misuse of AI models.