Artificial Intelligence & Machine Learning, Data Privacy, Data Security
Official at AI Summit Says Framing AI as a Solution for Every Issue Is Problematic

Artificial intelligence poses “profound” privacy risks, right from how models are trained to how the systems are prompted, Signal President Meredith Whittaker warned.
Speaking Monday at a session on the threats posed by AI at the French AI Action Summit, Whittaker said the current deployment of AI across the military, media and other technology sectors violates fundamental rights to privacy and expression.
Despite ongoing concerns about the technology, companies are still rolling out poorly secured products in a bid to “integrate AI into every corner,” while not being “mindful about the consequences,” she added.
“AI relies on data. There is nothing AI can know, nothing it can be intelligent about, that it didn’t find in the data: the training data, data imported from reinforcement learning, data provided through a prompt, or access to your photo library that it uses for inferences or predictions. So AI has pretty profound privacy consequences,” Whittaker said.
A case in point is Microsoft’s Recall, a feature that automatically captures and retrieves screenshots, Whittaker added.
The feature, part of Copilot+ PCs, allows Windows to periodically take snapshots of users’ screens so it can surface data from apps, websites, images and documents. Because those snapshots can capture sensitive information such as banking credentials in plain text, experts warned that the feature posed grave privacy and security risks (see: Microsoft’s Recall Stokes Security and Privacy Concerns).
Amid the criticism, Microsoft reworked the feature’s security controls and later rolled out a revised version.
“They were storing the screenshot data on your desktop unencrypted. This could include anything that you were doing on your desktop,” Whittaker said of Recall, adding that companies and users need to stop seeing AI as the solution to everything.
“I think we need to be really cautious about framing AI as a solution to these problems when what is really at work is a hunger for data and companies’ push for billions of dollars in investment,” Whittaker said.
Whittaker further raised concerns about the push from governments to backdoor encrypted services such as Signal. In the wake of escalating attacks by Chinese threat actors such as Salt Typhoon, any attempt to backdoor existing IT systems could increase hacking risks, she added (see: CISA First Spotted Salt Typhoon Hackers in Federal Networks).
“It is an incredibly perilous time to be proposing such things, given the concentration of power in the tech industry,” Whittaker said.