How to stop LinkedIn from training AI on your data

LinkedIn admitted Wednesday that it has been training its own AI on many users' data without seeking consent. Because the new opt-out covers only future AI training, there is currently no way for users to exclude their data from training that has already occurred.

In a blog post detailing updates coming on Nov. 20, LinkedIn general counsel Blake Lawit confirmed that LinkedIn’s user agreement and privacy policy will be changed to better explain how users’ personal data powers AI on the platform.

Under the new privacy policy, LinkedIn now warns users that “we may use your personal data… [to] Develop and train artificial intelligence (AI) models, develop, provide, and personalize our Services, and gain insights with the help of AI, automated systems, and inferences, so that our Services can be more relevant and useful to you and others.”

A Frequently Asked Questions page explains that personal data may be collected any time a user interacts with generative AI or other AI features, as well as when a user composes a post, changes their preferences, provides feedback to LinkedIn, or otherwise uses the site.

That data is stored until the user deletes the AI-generated content. LinkedIn recommends that users use its data access tool to delete or request deletion of data collected about past LinkedIn activities.

LinkedIn’s AI models that power generative AI features “may be trained by LinkedIn or another provider,” such as Microsoft, which provides some AI models through its Azure OpenAI service, the FAQ said.

A key privacy risk for users, LinkedIn’s FAQ noted, is that users who “provide personal data as an input to a generative AI-powered feature” could end up seeing their “personal data being provided as an output.”


LinkedIn says it “seeks to minimize personal data in the data sets used to train the models,” relying on “privacy-enhancing technologies to redact or remove personal data from the training dataset.”

While Lawit’s blog avoided clarifying whether already collected data can be removed from AI training data sets, the FAQ confirmed that users who were automatically opted in to sharing personal data for AI training can only opt out of the data collection “going forward.”

The FAQ states that opting out “does not affect training that has already taken place.”

A LinkedIn spokesperson told Ars that opting users in to AI training “by default” will “benefit all members.”

“People can choose to opt out, but they come to LinkedIn to be found for jobs and networking, and generative AI is part of how we’re helping professionals with that change,” a LinkedIn spokesperson said.

By allowing opt-outs from future AI training, the spokesperson added, LinkedIn is giving “people using LinkedIn even more choice and control when it comes to how we use data to train our generative AI technology.”

How to Opt Out of AI Training on LinkedIn

Users can opt out of AI training by going to the “Data privacy” section of their account settings, then turning off the option allowing collection of “data for generative AI improvement,” which LinkedIn otherwise turns on automatically for most users.

The only exception is for users in the European Economic Area or Switzerland, who are protected by stricter privacy laws that either require platforms to obtain consent before collecting personal data or require platforms to justify the collection as a legitimate interest. Those users won’t see an option to opt out because they were never opted in, LinkedIn has repeatedly confirmed.


Additionally, users can “object to the use of their personal data for training” generative AI models that are not used to generate LinkedIn content, such as models used for personalization or content moderation purposes, The Verge noted, by submitting the LinkedIn Data Processing Objection Form.

Last year, LinkedIn shared its AI principles, promising to take “meaningful steps to reduce the potential risks of AI.”

One risk noted in the updated user agreement is that using LinkedIn’s generative features to help fill out a profile or generate suggestions when writing a post could produce content that is “inaccurate, incomplete, delayed, misleading, or irrelevant to your purposes.”

Users are advised not to share misleading information or spread AI-generated content that may violate LinkedIn’s community guidelines. They are also urged to be careful when relying on any information shared on the platform.

“As with all content and other information on our Services, even if it is labeled as generated by ‘AI,’ please review it carefully before trusting it,” LinkedIn’s user agreement states.

In 2023, LinkedIn said it always “strives to explain in clear and simple ways how our use of AI affects people” because users’ “understanding of AI starts with transparency.”

If legislation like the EU’s AI Act and the GDPR, with its strong privacy protections, were enacted elsewhere, situations like this would be far less jarring for unsuspecting users. It would also put all companies and their users on a level playing field when it comes to training AI models, resulting in fewer nasty surprises and angry customers.
