OpenAI Sued For $3 Billion

OpenAI and its corporate partner, Microsoft, are in the legal spotlight after being hit with a class-action lawsuit alleging that they stole data from internet users to train their AI models. Filed on June 28 in federal court in San Francisco, CA, the lawsuit names sixteen anonymous plaintiffs and demands $3 billion in damages.

The artificial intelligence and cryptocurrency industries have both seen their share of legal drama lately, but this lawsuit against OpenAI appears more serious than the ones facing Binance and Coinbase.

OpenAI Accused Of Data Scraping

The plaintiffs' primary claim is that OpenAI surreptitiously scraped 300 billion words from the internet without registering as a data broker or obtaining the necessary consent. This vast quantity of data, they argue, includes users' private information, which OpenAI continues to gather unlawfully. The lawsuit also alleges that millions of unsuspecting consumers worldwide are continually subjected to this data collection.

Comparisons with Clearview AI

OpenAI’s alleged data collection practices draw comparisons to Clearview AI, another AI firm infamous for scraping data from the internet without obtaining explicit user consent. Clearview AI has faced numerous lawsuits, including one from the American Civil Liberties Union (ACLU), for harvesting social media images to build a facial recognition tool for police use. Following a settlement last year, Clearview AI had to halt its services to most private U.S. entities and individuals.

Microsoft And OpenAI: AI Tools, Stolen Data, and Privacy Breaches

Prominent AI tools developed by OpenAI and Microsoft, such as the language models GPT-3.5 and GPT-4, the image model DALL-E, and the text-to-speech model VALL-E, are among those named in the lawsuit. According to the plaintiffs, their internet activity over the years was used to train these AI models without their consent. They argue that their personal data, including names, contact information, email addresses, payment information, and more, was stolen from a variety of online applications and platforms.

The lawsuit asserts that this data theft unjustly enriched the defendants, enabling them to build billion-dollar AI businesses, including but not limited to ChatGPT. The plaintiffs and the classes they represent, the lawsuit argues, therefore deserve damages equal to the value of their stolen data or their share of the profits earned from it.

Desired Remedies and Legal Precedents

The plaintiffs demand that OpenAI and Microsoft take significant steps to respect and protect users’ privacy. First, the companies must disclose what data they collect and how they intend to use it. Second, they should adhere to ethical guidelines and compensate the plaintiffs for their stolen data. Finally, internet users should be able to opt out of data collection, and all illicit data collection should cease.

In November, both OpenAI and Microsoft were defendants in a similar class-action lawsuit. Programmers alleged that GitHub Copilot, an AI coding tool from Microsoft-owned GitHub, had been trained on their code without permission, in violation of their open-source licenses.

OpenAI Under Scrutiny

OpenAI has faced repeated criticism for the opacity surrounding its training methods and datasets, as well as for possible copyright infringement. The release of GPT-4 in March amplified these concerns. Many AI researchers worry that this lack of transparency could cause harm and limits outside scientists’ ability to identify flaws and biases in the systems.

The lawsuit also highlights the potential “existential threat” of unregulated AI and calls for “immediate legal intervention.” High-profile calls to pause or regulate the rapid spread of AI systems have been increasing, including an open letter signed by experts and tech leaders such as Elon Musk. Recent developments include Italy’s temporary ban of ChatGPT over concerns that it violated European data protection law.

The lawsuit insists that respecting the law doesn’t mean hindering AI innovation; rather, the plaintiffs argue, it would ensure a safer and fairer future for AI. At the time of writing, OpenAI had not responded to requests for comment, while Microsoft declined to comment.
