LinkedIn sued for training AI models using customer information
Earlier this week, LinkedIn premium customers reportedly filed a proposed class action against the platform. The customers claimed that LinkedIn shared private messages with third parties without permission and that the disclosed details were used to train generative artificial intelligence (Gen AI) models.
The class action suit claimed that LinkedIn had quietly introduced a privacy setting last August that allowed users to enable or disable the sharing of their personal data. Customers then said that the platform quietly updated its privacy policy in September to state that their shared data could be used to train AI models, reported Reuters.
Additionally, a "frequently asked questions" hyperlink stated that opting out of the feature would not affect any training that had already occurred.
The complaint reportedly stated that LinkedIn's attempt to introduce the AI training feature discreetly indicated it was aware it was violating customers' privacy and its promise to use personal information only to improve the platform.
The lawsuit was reportedly filed in California federal court on behalf of LinkedIn premium customers who used InMail messages and whose private data was shared with third parties for AI training before 18 September last year.
The suit seeks unspecified damages for breach of contract and violations of California's unfair competition law, as well as US$1,000 per customer for violations of the federal Stored Communications Act.
In a conversation with MARKETING-INTERACTIVE, a LinkedIn spokesperson said, "These are false claims with no merit".
The lawsuit follows a case in October last year, when Hong Kong's privacy watchdog flagged concerns over LinkedIn's privacy policy, which allowed its Gen AI models to be trained on users' data and content by default.
The Office of the Privacy Commissioner for Personal Data (PCPD) said that LinkedIn's privacy policy update had raised concerns among data protection authorities in other jurisdictions. The PCPD was also concerned about whether LinkedIn's default opt-in setting for using users' personal data to train generative AI models correctly reflected users' choices. The PCPD had therefore written to LinkedIn to enquire into the matter.
In a conversation with MARKETING-INTERACTIVE at the time, the PCPD said it had received seven complaints in relation to data privacy on LinkedIn from October 2023 to 7 October 2024. The complainants were concerned about their personal data being disclosed without consent and fake accounts impersonating them.
Ada Chung Lai-ling, the privacy commissioner for personal data, reminded LinkedIn users to stay vigilant regarding the recently adjusted policy and to make an informed decision about whether to permit the use of their personal data for AI training.
In response, LinkedIn told the South China Morning Post that it had started informing users of the change through multiple channels, citing a previous blog post written by Blake Lawit, senior vice president and general counsel of LinkedIn.
"In our privacy policy, we have added language to clarify how we use the information you share with us to develop the products and services of LinkedIn and its affiliates, including by training AI models used for content generation ('generative AI') and through security and safety measures," said the blog post.