ICO Call for Evidence to Focus on Legal Basis for Scraped Training Data
The British data regulator is set to analyze the privacy implications of processing scraped data used for training generative artificial intelligence algorithms.
The Information Commissioner's Office on Monday announced that it is soliciting comments from AI developers, legal experts and other industry stakeholders on how privacy rights might be affected by developments in generative AI.
Since most generative AI systems are trained on data scraped from the public internet, which may include a large swath of personally identifiable information such as names and contact details, the primary concern is that AI developers could be processing that data in violation of existing privacy laws.
The ICO’s consultation seeks to understand if current data processing practices followed by AI developers violate privacy requirements stipulated in the U.K. General Data Protection Regulation and the Data Protection Act of 2018.
The consultation will focus on whether AI developers meet the "lawfulness" requirement under the U.K. GDPR, which sets out six lawful bases a company can rely on to show that its data processing is compliant. These include obtaining consent from users and demonstrating that the processing serves the legitimate interests of the business or its customers, among others.
“Training generative AI models on web scraped data can be feasible if generative AI developers take their legal obligations seriously and key to this is the effective consideration of the legitimate interest,” the ICO said.
The consultation will close on March 1. Based on the responses received, the agency intends to release guidance on AI in the coming months.
The United Kingdom does not have a comprehensive artificial intelligence regulation, although the British government has told its data, competition, healthcare, media and financial regulators to monitor AI within their jurisdictions, giving the ICO authority to investigate potential data and privacy aspects of AI.
In October 2023, the ICO rebuked instant messaging app Snapchat for failing to properly assess the privacy risks to users of My AI, the platform's generative artificial intelligence-powered chatbot. In 2022, the agency imposed a fine of 7.5 million pounds on facial recognition firm Clearview AI for unlawfully obtaining U.K. citizens' facial images to power the company's database (see: UK Privacy Watchdog Pursues Clearview AI Fine After Reversal).