The US government revealed Apple has signed up to a set of voluntary commitments designed to enhance AI safety, joining 15 other companies in agreeing to the guidelines.
Players that have already backed the measures include OpenAI, Alphabet, Meta Platforms and Amazon. After receiving commitments from the initial group, President Joe Biden’s administration put a related executive order in place.
The original pledges included internal and external security testing of AI systems before release, and sharing information on managing AI risks with industry, academia and governments.
US authorities noted Apple’s move to join the voluntary pledge further cements the guidelines “as cornerstones of responsible AI innovation”.
Additionally, the government provided updates on work completed following the executive order, claiming it has: issued a “call to action” to combat harmful and abusive AI-generated content; released a guide on designing “trustworthy” AI tools for the education sector; and launched a $23 million project to promote the use of “privacy-enhancing technologies”.