California User’s Class Action Suit Says LinkedIn Violated Contract, Privacy Regs
A LinkedIn user has sued the company for flouting privacy requirements by allowing third-party companies to access user data – including Premium users’ private messages – to train their artificial intelligence models.
A proposed class action lawsuit filed in federal court this week by Alessandro De La Torre, a LinkedIn Premium user in California, accused the job networking site of breaching its privacy commitments by disclosing “Premium customers’ private and confidential communications to third parties” – including Microsoft subsidiaries – “to train generative AI models.”
This unauthorized access occurred even though LinkedIn Premium offered its customers enhanced privacy protections, with the company pledging not to permit access to “life-altering information” without adequately notifying users or seeking their permission, the lawsuit said.
De La Torre claims that between July 2021 and September 2024 he used LinkedIn to send and receive numerous InMail messages about potential financing for startups, job-seeking efforts and attempts to reconnect with former colleagues, and that “exposure of such information could jeopardize plaintiff’s professional relationships, compromise business opportunities, and negatively impact his career prospects.”
“This raises grave privacy issues: Private discussions could surface in other Microsoft products, and customers’ data is now permanently embedded in AI systems without their consent, exposing them to future unauthorized use of their personal information,” the plaintiff alleges.
LinkedIn’s AI policies drew criticism last year after the company quietly changed a setting to permit the use of customer data for AI training by default. Though the company suspended the feature, the lawsuit claimed that LinkedIn “discreetly” made further privacy-compromising changes to its policies.
These include a notice that user data could be used to train AI models by LinkedIn and “another provider,” a clarification that user data already included in its AI training datasets will not be deleted, and the removal of earlier assurances about the use of privacy-enhancing features for training datasets.
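The dispute highlights how consequential a single default setting can be. For illustration only, here is a minimal sketch of consent-gated selection of training data, in which records are excluded unless a user has affirmatively opted in; the record structure and flag names are hypothetical and do not reflect LinkedIn’s actual systems.

```python
from dataclasses import dataclass

# Hypothetical data model: a message record carrying an explicit
# AI-training consent flag that defaults to False (opt-in, not opt-out).
@dataclass
class MessageRecord:
    user_id: str
    text: str
    ai_training_opt_in: bool = False  # excluded unless the user opts in

def select_training_records(records: list[MessageRecord]) -> list[MessageRecord]:
    """Keep only messages whose owners affirmatively opted in to AI training."""
    return [r for r in records if r.ai_training_opt_in]

records = [
    MessageRecord("u1", "Private note about startup financing"),  # no opt-in
    MessageRecord("u2", "Draft of a public post", ai_training_opt_in=True),
]
print(len(select_training_records(records)))  # prints 1
```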
“This behavior suggests that LinkedIn was fully aware that it had violated its contractual promises and privacy standards and aimed to minimize public scrutiny and potential legal repercussions,” the plaintiff alleged.
In addition to breach of contract, the plaintiff accused the company of violating the federal Stored Communications Act and California’s Unfair Competition Law, and is seeking $1,000 in statutory damages under the Stored Communications Act.
A LinkedIn spokesperson said the lawsuit makes “false claims with no merit.”
Privacy and AI Governance
Security and privacy experts have been raising concerns about the opaque ways in which companies process data scraped from the internet to train their models.
Central to the issue is whether companies process that data in violation of existing privacy requirements, and whether they apply additional safeguards, such as anonymization, before using it.
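For illustration, a minimal sketch of one such safeguard follows: regex-based redaction of obvious identifiers before text enters a training corpus. The patterns and message data are hypothetical toy examples; production anonymization typically relies on far stronger techniques, such as named-entity-based redaction or differential privacy.

```python
import re

# Toy patterns for two common identifier types; real pipelines use
# dedicated PII-detection tooling rather than hand-rolled regexes.
EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def redact_pii(text: str) -> str:
    """Replace email addresses and phone-like strings with placeholders."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    return text

messages = ["Reach me at jane.doe@example.com or +1 (555) 123-4567."]
training_corpus = [redact_pii(m) for m in messages]
print(training_corpus)  # ['Reach me at [EMAIL] or [PHONE].']
```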
ChatGPT maker OpenAI is among the leading generative AI companies that have come under privacy scrutiny in recent months (see: Italian Data Regulator Launches Probe Into OpenAI’s Sora).
Although the U.S. government in 2023 issued an executive order to address risks stemming from powerful AI models, federal AI governance efforts now remain in limbo following President Donald Trump’s revocation of the order this week (see: President Trump Scraps Biden’s AI Safety Executive Order).
Even in the absence of a dedicated AI law, companies using individuals’ private messages and sharing them with third parties is in itself a gross violation of the basic expectation of privacy – one that could fuel more lawsuits in the U.S., said Enza Iannopollo, principal analyst at Forrester.
“In general, we expect to see a significant increase in the volume of lawsuits moving forward. Our data suggests that in the next 12 months, lawsuits specifically related to AI will increase by 20% in the US. Considering how expensive and damaging to reputation this can be, companies must take note.”
With the executive order revoked, the U.S. may end up with a piecemeal approach to AI governance as different states introduce their own AI rules and guidelines, Iannopollo said.
“Companies must prepare for a more complex and fragmented environment when it comes to AI-related requirements,” Iannopollo said.