OpenAI, the creator of the popular chatbot ChatGPT, has released a software tool to identify text generated by artificial intelligence, the company said in a blog post on Wednesday.
ChatGPT is a free program that generates text in response to a prompt, including articles, essays, jokes and even poetry. It has gained wide popularity since its debut in November, while raising concerns about copyright and plagiarism.
The AI classifier, a language model trained on a dataset of pairs of human-written and AI-written text on the same topic, aims to distinguish text written by AI systems from a variety of providers, which could help address issues such as automated misinformation campaigns and academic dishonesty, the company said.
OpenAI acknowledges that in its public beta the detection tool is very unreliable on texts under 1,000 characters, and that AI-written text can be edited to evade the classifier.
“We’re making this classifier publicly available to get feedback on whether imperfect tools like this one are useful,” OpenAI said.
“We recognize that identifying AI-written text has been an important point of discussion among educators, and equally important is recognizing the limits and impacts of AI-generated text classifiers in the classroom.”
Since the chatbot's launch, some of the largest US school districts, including New York City, have banned it over concerns that students will use the text generator to cheat or plagiarise.
Outside developers have built third-party detection tools, including GPTZeroX, to help educators detect AI-generated text.
OpenAI said it is engaging with educators to discuss ChatGPT’s capabilities and limitations, and will continue to work on the detection of AI-generated text.