The makers of ChatGPT, the bot that stirred controversy with its ability to mimic human writing, on Tuesday unveiled a tool to identify whether written works are produced by artificial intelligence.
The announcement sparked intense debate at schools and universities in the United States and around the world over concerns that students could use the software to complete assignments and cheat on examinations.
In a blog post published on Tuesday, US-based OpenAI said its detection software had been trained “to discriminate between language produced by a person and text authored by AIs from a range of suppliers.”
OpenAI, which recently received a large cash infusion from Microsoft, built the bot to respond to straightforward prompts with reams of text drawn from data amassed online.
OpenAI warned that its tool is not perfect, especially when dealing with texts shorter than 1,000 characters.
The release came shortly after news that ChatGPT had passed exams at a US law school, writing essays on topics ranging from constitutional law to taxation.
Educational institutions have rushed to ban the AI tool, even though ChatGPT still makes factual mistakes.
“We recognize that identifying AI-written text has been an important point of discussion among educators, and equally important is recognizing the limits and impacts of AI generated text classifiers in the classroom,” OpenAI said in the post.
“We are engaging with educators in the US to learn what they are seeing in their classrooms and to discuss ChatGPT’s capabilities and limitations.”
OpenAI recommends using the classifier only with English text, as it performs worse in other languages.
Writing by Adeniyi Bakare; editing by Julian Osamoto