Accountability Needed to Unleash Full Potential of AI, Says NTIA Administrator
The Biden administration is calling for mandatory audits into high-risk artificial intelligence systems and greater clarity on where liability for applications gone wrong should lie in the AI supply chain.
The recommendations come from a National Telecommunications and Information Administration report published Wednesday morning that calls for accountability conceived "as a chain of inputs linked to consequences."
“To achieve real accountability and harness all of AI’s benefits, the United States – and the world – needs new and more widely available accountability tools and information, an ecosystem of independent AI system evaluation, and consequences for those who fail to deliver on commitments or manage risks properly,” the 77-page report states.
“We need accountability to unleash the full potential of AI,” NTIA Administrator Alan Davidson said in a statement, adding that the agency’s recommendations “will empower businesses, regulators and the public to hold AI developers and deployers accountable for AI risks.”
The recommendations align with President Joe Biden’s October executive order on AI, which invoked the Defense Production Act to require developers of high-risk AI models to notify the government when they’re training such tools (see: White House Issues Sweeping AI Executive Order).
NTIA officials told reporters Tuesday afternoon that the private sector should welcome independent audits of certain high-risk systems, since they would boost public and marketplace confidence.
The report also calls for possible pre-release certification of claims for AI models used in high-risk sectors such as healthcare and finance, as well as periodic recertification to confirm that a model still does what its makers claimed.
NTIA said the federal government should collaborate with the private sector to improve standard information disclosures through new concepts like “AI nutrition labels,” which would present standardized information similar to food labels mandated by the Food and Drug Administration. The report calls on federal agencies to develop recommendations on how best to apply existing liability rules and standards to AI systems.
Courts and lawmakers may ultimately define where liability lies for harms stemming from AI models along the supply chain running from developers to users, but the agency proposes convening legal experts and stakeholders to hammer out how existing regulation applies. The third-party evaluation method of accountability the NTIA endorses may well hinge on exposure to and protection from liability, the report says.
Additional recommendations include tasking the federal government with strengthening its capacity to address cross-sectoral risks associated with AI tools, as well as developing a national registry of high-risk AI deployments and a national database for reporting AI adverse incidents.