Using AI Technology to Spot AI Text

The world as we know it is transforming due to advancements in artificial intelligence technology. AI is currently used for automating business processes, gaining insight through data analysis, and engaging with customers and employees.

AI algorithms have even become advanced enough to generate text that convinces the average reader. The good news is that AI technology can also be used to distinguish AI-generated text from human writing.

What is the Technology?

Researchers from Harvard University and MIT have created a tool that can pinpoint AI-generated text. The Giant Language Model Test Room (GLTR) detects statistical patterns that reveal whether content was written by a language model or by a human.

How Does it Work?

The GLTR algorithm examines statistical word choice within a text to identify telltale patterns. If the text was created with a language model, its words appear far more predictably than they would in human writing. Although the sentence structure and grammar may be accurate, the text can lack depth and meaning.

GLTR scans sequences of roughly sixty words to spot predictable words and phrases, using color to highlight how statistically likely each word is. Words with the highest probability of appearing are marked green, lower-probability words are yellow and red, and the words least likely to appear are purple.
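
To make this concrete, below is a minimal sketch of that per-word scoring in Python. It assumes the Hugging Face transformers library and uses GPT-2 as the scoring model (the model behind the public GLTR demo); the top-10 / top-100 / top-1,000 rank buckets mirror GLTR's green/yellow/red/purple scheme.

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def rank_tokens(text):
    """For each token, find its rank within the model's predicted distribution."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        logits = model(ids).logits  # shape: (1, seq_len, vocab_size)
    ranks = []
    # The prediction at position i - 1 scores the token observed at position i.
    for i in range(1, ids.shape[1]):
        probs = torch.softmax(logits[0, i - 1], dim=-1)
        rank = int((probs > probs[ids[0, i]]).sum()) + 1
        ranks.append((tokenizer.decode([int(ids[0, i])]), rank))
    return ranks

def color(rank):
    """Map a token's rank to GLTR-style highlight colors."""
    if rank <= 10:
        return "green"
    if rank <= 100:
        return "yellow"
    if rank <= 1000:
        return "red"
    return "purple"

for token, rank in rank_tokens("The quick brown fox jumps over the lazy dog."):
    print(f"{token!r:>12}  rank={rank:<6} {color(rank)}")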

Text authored by humans should contain a varied mix of green, yellow, red, and purple. If a passage appears mostly green and yellow, there is a strong likelihood that AI generation is at work.
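
Building on per-word ranks like those above, a simple aggregate check could flag text whose words fall overwhelmingly into the green and yellow buckets. The 90% cutoff here is a hypothetical threshold chosen only for illustration, not a figure published by the GLTR team.

def looks_generated(ranks, predictable_cutoff=100, flag_fraction=0.9):
    """Flag text when nearly every token is among the model's top predictions.

    ranks              -- per-token ranks, e.g. from rank_tokens() above
    predictable_cutoff -- ranks at or below this count as green/yellow
    flag_fraction      -- hypothetical cutoff, chosen only for illustration
    """
    predictable = sum(1 for r in ranks if r <= predictable_cutoff)
    return predictable / len(ranks) >= flag_fraction

# Human writing usually mixes in low-probability (red/purple) words,
# keeping the predictable fraction below the cutoff.
print(looks_generated([3, 1, 250, 7, 4200, 2, 12, 88]))  # False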

Why is it Important?

The spread of misinformation is becoming automated. AI text generators are growing smarter, faster, and more precise, creating the potential for malicious attacks, fake news, and slanderous claims.

One artificial intelligence research company, OpenAI, demonstrated the capabilities of an algorithm that creates strikingly realistic text passages. The researchers fed large amounts of text into a complex machine-learning model, which then generated realistic content of its own. They claimed the results were so convincing that the model should not be released for public use.

Fake text has the potential to become the next global political threat. AI-generated fake text could be used to impersonate influential people or to mass-produce troll-grade propaganda across social networks and platforms.

Does it Work?

According to research by the GLTR team, people detect fake text at a substantially higher rate when using the tool.

A Harvard experiment asked students to identify AI-generated text, first on their own and then with the help of GLTR.

The results showed that students spotted around half of the fake text unaided, but their detection rate rose to 72% when using the tool.

Future Implications

Technology has advanced to the point of being able to mass-generate AI text, and with these advancements comes the potential for fake news, slanderous claims, and smear campaigns. The GLTR initiative may help spot these false claims, preserving political integrity and keeping public opinion from being swayed by fabricated content.

As with any new technology, AI is still a work in progress on the way to its full potential. Artificial intelligence and machine learning continue to change the way humans interact with the world.

To learn more about Bitvore's advanced AI techniques and how unstructured data sources can be used in predictive analysis to improve corporate decision-making, download the FAQ - Unstructured Alternative Data in Predictive Modeling.

Download Unstructured Alt-Data FAQ
