OpenAI’s ChatGPT generating false information? Probe launched

The US Federal Trade Commission (FTC) is investigating OpenAI, the creator of the popular artificial intelligence-powered chatbot ChatGPT, over concerns that the technology is generating false information.

The investigation centres on the potential harm caused to consumers and the possible mishandling of user data by OpenAI's technology.

In a letter to OpenAI, the FTC requested information regarding incidents in which users were falsely disparaged and asked for details on the company’s efforts to prevent such incidents from recurring. 

The inquiry comes as regulators increasingly scrutinise the risks associated with artificial intelligence (AI) technology.

FTC Chair Lina Khan expressed her agency’s concerns about ChatGPT’s output during a congressional committee hearing, saying: “We’ve heard about reports where people’s sensitive information is showing up in response to an inquiry from somebody else.”

“We’ve heard about libel, defamatory statements, flatly untrue things that are emerging. That’s the type of fraud and deception that we are concerned about.”

OpenAI CEO Sam Altman, who appeared before Congress earlier this year, acknowledged that the technology could be prone to errors. He underscored the need for regulations and the establishment of a new agency to oversee AI safety.

The FTC's investigation focuses not only on the potential harm to users but also on OpenAI's data privacy practices and the methods used to train its AI models.

The company’s large language model, GPT-4, forms the foundation of ChatGPT and is licensed to numerous other companies for their own applications.

While OpenAI has made efforts to enhance the safety and reliability of ChatGPT, concerns about offensive or inaccurate content generated by the AI model have persisted. 

In April, Italy banned the use of ChatGPT due to privacy concerns, only reinstating it after OpenAI implemented age verification tools and provided additional information on its privacy policies.

OpenAI and the FTC have yet to comment on the ongoing investigation.

As the use of AI technology, particularly large language models, becomes more prevalent, it is crucial for regulators to address the potential risks to consumers. 

The outcome of the FTC’s investigation will have implications not only for OpenAI but also for the wider AI industry, as companies race to develop and deploy similar technologies while grappling with issues of accuracy, privacy, and user protection.
