Artificial intelligence is here to stay, and many users are turning against tech giants like Microsoft and Meta over data privacy concerns.
While the companies are aware of the criticism, not all responses have been equally reassuring.
Microsoft announced its Recall AI tool for Copilot+ PC as an “everyday AI assistant,” but delayed the release of the tool after a series of data privacy concerns. Credit: Microsoft
Microsoft
Microsoft was forced to delay the release of an artificial intelligence tool called “Recall” after a wave of backlash.
The feature was introduced last month and is billed as “your everyday AI companion.”
It takes a screen capture of your device every few seconds and creates a searchable library of content, including passwords, conversations, private photos, and more.
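Recall's basic recipe is straightforward to illustrate: capture the screen on a timer, extract any visible text, and index it for search. The Python sketch below shows that general pattern only; it is a hypothetical illustration, not Microsoft's code, and assumes the third-party mss and pytesseract libraries (plus the Tesseract OCR engine) are installed.

```python
# Illustrative sketch of a Recall-style loop: screenshot, OCR, index.
# Hypothetical example only -- not Microsoft's implementation.
import time
import sqlite3

import mss                  # third-party: cross-platform screen capture
import pytesseract          # third-party: wrapper around the Tesseract OCR engine
from PIL import Image

# Full-text-searchable store for everything that appears on screen.
db = sqlite3.connect("recall_demo.db")
db.execute("CREATE VIRTUAL TABLE IF NOT EXISTS shots USING fts5(taken_at, text)")

with mss.mss() as screen:
    while True:
        raw = screen.grab(screen.monitors[0])           # capture the whole screen
        img = Image.frombytes("RGB", raw.size, raw.rgb)
        text = pytesseract.image_to_string(img)         # pull out any visible text
        db.execute("INSERT INTO shots VALUES (?, ?)",
                   (time.strftime("%Y-%m-%d %H:%M:%S"), text))
        db.commit()
        time.sleep(5)                                   # every few seconds
```

A single full-text query against that table (for example, SELECT taken_at FROM shots WHERE shots MATCH 'password') is all it takes to surface anything that was ever on screen, which is precisely what alarmed privacy experts.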
The release has been postponed indefinitely after a flurry of criticism from data privacy experts, including the UK's Information Commissioner's Office.
Following the outrage, Microsoft announced changes to Recall ahead of its public launch.
“Recall will move from a broadly available preview experience for Copilot+ PC on June 18, 2024 to a preview first available in the Windows Insider Program (WIP) in the coming weeks,” the company told the US Sun.
“After receiving feedback about Recall from our Windows Insider community, we plan to make Recall (Preview) available to all Copilot+ PCs soon, as we usually do.”
Asked to comment on claims that the tool is a security risk, the company declined to respond.
Recall was the centerpiece of a new line of Copilot+ computers announced at Microsoft's developer conference last month.
The company's vice president, Yusuf Mehdi, said the tool uses AI to “give you access to almost anything you've ever seen before on a PC.”
Shortly after the launch, the ICO vowed to investigate Microsoft over concerns about user privacy.
On June 13, Microsoft announced a series of updates to the upcoming tool, including turning Recall off by default.
The company has continually reaffirmed its “commitment to responsible AI,” with privacy and security as its guiding principles.
Adobe
Adobe has overhauled its terms of use after customers expressed concerns that their work could be used to train artificial intelligence.
The software company faced intense criticism over vague language in a terms of service update earlier this month.
Customers complained that they couldn't access their accounts unless they agreed to grant Adobe a “worldwide, royalty-free license to copy, display, distribute, modify, and sublicense your work.”
Some users suspected that the company had access to their work and was using it to train generative AI models.
Adobe made headlines when it reissued its terms of use, giving the company permission to “copy, display, distribute, modify, and sublicense” users' works. Credit: Getty
Adobe executives, including David Wadhwani, president of digital media, and chief trust officer Dana Rao, argued that the terms had been misunderstood.
In a statement, the company denied that it trained its generative AI on customers' content, took ownership of customers' work, or allowed access to customers' content beyond what was required by law.
The dispute is the latest development in a long-running feud between Adobe and its users over the use of AI technology.
The company, which dominates the market for graphic design and video editing tools, released its Firefly AI model in March 2023.
Firefly and similar programs are trained on datasets of existing work to create text, images, music, or video in response to user prompts.
Artists raised alarms after noticing their names being used as tags for AI-generated images in Adobe Stock search results, and in some cases the AI art appeared to mimic the artists' style.
Illustrator Kelly McKernan was one of the more outspoken critics.
“Hey @Adobe, if Firefly is supposed to be ethically built then why are these AI-generated stock images using my name as a prompt in the dataset?” she tweeted.
Many assumed the company was using their work to train Adobe Firefly, which had already been accused of stealing from artists. Credit: Getty
These concerns were compounded after the terms of service update, with YouTuber Sasha Yanshin announcing that he was cancelling his Adobe license “after many years as a customer.”
“This is insanity. No sane creator could accept this,” he wrote.
“You pay a huge subscription fee every month and they want to own your content and your entire business.”
Adobe executives have acknowledged that the language in the updated terms of use was vague at best.
Adobe's chief product officer, Scott Belsky, acknowledged in a social media post that the summary was “unclear” and that “trust and transparency are crucial today.”
Belsky and Rao addressed the backlash in a news release on Adobe's official blog, writing that they saw an opportunity to “provide greater clarity and address concerns raised by our community.”
Adobe's latest terms of use state that its software “does not use local or cloud content to train the generative AI.”
The only exception is work submitted to the Adobe Stock marketplace, which may be used to train Firefly.
Adobe officials insisted they had no plans to train AI on user data without permission, but the backlash forced the company to reissue its terms of use. Credit: Reuters
Meta
Meta has come under fire for using data from billions of Facebook and Instagram users to train its artificial intelligence tools.
In May, suspicions arose that the company had quietly changed its privacy policies in anticipation of backlash over scraping content from social media.
One of the first to sound the alarm was Martin Keary, vice president of product design at Muse Group.
Keary, who is based in the UK, said he had received a notice that the company planned to start training its AI on user content.
Following the wave of backlash, Meta issued an official statement to its European users.
Meta has faced allegations that it trains its generative AI on content from platforms like Facebook and Instagram. Credit: Alamy
The company argued that it doesn't train its AI on private messages, but only on the content users choose to make public, and that it has never obtained information from the accounts of users under the age of 18.
At the end of 2023, an opt-out form was made available under the name Data Subject Rights for Third-Party Information Used for Meta’s AI.
At the time, the company said its latest open-source language model, Llama 2, had not been trained on user data.
But that appears to have changed: while EU users can opt out, US users have no legal basis to do so due to the lack of national privacy laws.
EU users can fill out the Data Subject Rights form through the Privacy Policy page in the Settings section of Instagram or Facebook.
But the company says it can only act on a user's request once the user demonstrates that the model “knows” them.
While the company offers an opt-out form for users in the EU, the lack of a national data privacy law in the US leaves other users with few options. Credit: Getty
The form instructs users to submit the prompts they entered into an AI tool that returned their personal information, along with evidence of those responses.
There is also a disclaimer informing users that any opt-out requests will only be made in accordance with “local law.”
Activists from NOYB (the European Center for Digital Rights) have filed complaints against the tech giant in nearly a dozen countries.
The Irish Data Protection Commission (DPC) then formally requested that Meta address the complaints.
But the company hit back at the DPC, saying the dispute was “a setback for European innovation”.
Meta maintains that its approach complies with legal regulations, including the EU's General Data Protection Regulation. The company did not immediately respond to a request for comment.
Amazon has come under fire for publishing AI-generated books through its Kindle Direct Publishing platform. Credit: AFP
Amazon
The online retailer came under fire after dozens of AI-generated books appeared on its platform.
The problem began last year, when authors discovered books for sale under their names that they had not written.
Compounding the problem has been a proliferation of books containing false and potentially harmful information, including about mushroom foraging.
One of the most vocal critics has been author Jane Friedman: “I would rather have my books pirated than have this happen,” she declared in an August 2023 blog post.
The company removed titles falsely attributed to authors and changed its content guidelines to require disclosure of AI-generated content. Credit: AP
The company announced the new restrictions in a Kindle Direct Publishing forum post in September. KDP allows authors to publish books and sell them on Amazon.
“While we have not seen a spike in our publishing numbers, we are lowering the volume limits we have in place on new title creations in order to help protect against abuse,” the statement read.
Amazon claimed it was “actively monitoring the rapid evolution of generative AI and its impact on reading, writing, and publishing.”
The tech giant subsequently removed the AI-generated books that had been falsely attributed to Friedman.
What are the arguments against AI?
Artificial intelligence is a highly contentious issue, and it seems like everyone takes a position on it. Below are some common arguments against artificial intelligence:
Job loss – Some industry experts argue that AI will create new niches in the job market, with new roles emerging as others disappear. However, many artists and writers argue that generative AI tools are unethical, as they are trained on their work and would not function otherwise.
Ethics – When AI is trained on datasets, much of the content is scraped from the internet, most often, if not always, without notifying the owners of the work.
Privacy – Content from personal social media accounts may be fed into and used to train language models. This is a growing concern as Meta rolls out its AI assistant on platforms like Facebook and Instagram. There are also legal issues: in 2016, the EU adopted the General Data Protection Regulation to protect personal data, and similar legislation is expected to be enacted in the US.
Misinformation – Because AI tools pull information from the internet, they can take it out of context or hallucinate answers that make no sense. Tools like Bing's Copilot and Google's generative AI in search are always at risk of serving up misinformation. Some critics argue that this could have deadly effects, such as an AI dispensing false health advice.
In response to the backlash, the company also updated its content guidelines to add a section about AI-generated books.
The new rules state in part, “When you publish a new book through KDP, or edit and republish an existing book, you must notify us about any AI-generated content (text, images, translations).”
Amazon uses the term “AI-generated” to describe work created by artificial intelligence, even if the user has “subsequently made significant edits.”
Vendors are not required to disclose AI-assisted content, which Amazon defines as content that a vendor has created independently and then improved using AI tools.
However, it is up to the vendor to determine whether the content meets the platform's guidelines.
Amazon also limited authors to self-publishing three books a day in an effort to slow the pace of AI-driven content creation.