White House Touts Agency Achievements for Development and Safe Use of Technology
Apple is the latest tech giant to sign on to a list of voluntary commitments for artificial intelligence development pushed by the Biden administration. Fifteen technology heavyweights have already pledged to follow the guidance.
The White House is extracting promises of secure and trustworthy development from Silicon Valley, a strategy it adopted after an AI regulatory push in Congress looked unlikely to succeed. The commitments include investing in AI model cybersecurity, red-teaming models for misuse and national security risks, and accepting vulnerability reports from third parties. Companies also say they will watermark AI-generated audio and visual material (see: IBM, Nvidia, Others Commit to Develop ‘Trustworthy’ AI).
The strategy predates an October 2023 executive order that requires foundation model developers to report the results of red-team safety tests to the government.
Apple’s decision to enroll in the White House commitments comes at a time when the company’s approach to AI development and use is conservative compared to that of its peers. The smartphone giant’s strategy has so far been to acquire early-stage startups to establish its foothold in the space. It had bought 32 such firms by the end of 2023 and is focused on enhancing its existing products and services with AI, in contrast to its peers’ strategy of rolling out new AI features and applications.
Today’s White House announcement about Apple is timed to the nine-month anniversary of the AI executive order, giving the administration a chance to tout the steps federal agencies have taken since the order took effect.
Among the highlights cited by the White House:
- The AI Safety Institute released for public comment proposed guidance for the evaluation of misuse of dual-use foundation models.
- The National Institute of Standards and Technology published a final framework on managing generative AI risks and securely developing dual-use foundation models.
- The Department of Energy stood up testbeds and tools to evaluate harms AI models may pose to critical infrastructure.
- The United States led the unanimous adoption of a United Nations General Assembly resolution addressing global AI challenges and expanded support for the U.S.-led political declaration on the responsible military use of AI and autonomy, which 55 nations have endorsed.
The administration also celebrated the hiring of more than 200 people to work on AI issues across the government.