How Can You Detect If Content Was Created By ChatGPT And Other AIs?


Artificial Intelligence (AI) is capable of producing increasingly human-like writing, pictures, music, and video. There have been reports of students using it for cheating, and an industry has emerged around AI-authored books claimed by people as their own work.

However, there is also at least one reported case of a teacher (apparently ineptly) using AI to incorrectly “prove” his students had cheated – leading him to fail all of them.

There is also a recent case of a photographer winning a competition by submitting an AI-generated picture rather than one he took himself. In this case, the photographer had good intentions and returned his award after revealing what he had done.

Fortunately, some fairly accurate – for the moment – methods exist for detecting when works have been created with the help of AI. In this article, I will look at what tools exist, how they work, and why they could be vital for security and for protecting academic and artistic integrity.

Why Is AI Content Detection Important?

As AI-created content becomes more commonplace, its potential to cause disruptive and potentially harmful consequences increases. A great example is the phenomenon of “deepfakes,” where realistic images or videos of real people can be made, appearing to show them doing or saying things they have never done. This has already been used to create pornographic content of people without their consent and to put words in the mouths of politicians, including Barack Obama. You can find a video of Trump being arrested (made even before he was) and of Joe Biden singing Baby Shark (which, as far as I know, he has never done!).

Some of this might seem funny, but there’s the potential for it to have damaging consequences for the people involved – or for society at large if it influences democratic processes.

AI has been used to clone human voices to commit fraud. In one case, it was used to attempt to trick a family into believing that their daughter had been kidnapped in order to extort ransom money. In another, a company executive was persuaded to transfer more than $240,000 by a deep-faked voice that he believed to be his boss’s.

If it’s used by students to cheat on essays and exams, it could damage the integrity of education systems and the reputations of schools and colleges. This could result in students being inadequately prepared for the careers they hope to enter and the devaluation of diplomas and certificates.

All of this highlights the importance of robust countermeasures to educate the public on the dangers of AI and, where possible, to detect or even prevent its misuse. Unless this issue is addressed, AI could lead to widespread disinformation, manipulation, and damage. So, what exactly can be done?

Methods for Detecting AI-Generated Content

Fortunately, there are a number of methods available for detecting AI-generated content.

Firstly, there are digital tools that use their own AI algorithms to attempt to determine whether a piece of text, an image, or a video was created using AI.

You can find several AI text detectors freely available online. The AI Content Detector claims to be 97.8% reliable and can examine any piece of text for signs that it wasn’t written by a human. It does this by being trained on the patterns that ChatGPT and other Large Language Models produce when they generate text, then comparing the submitted text against those patterns to judge whether it is natural human writing or AI-created text.

This is possible because, to a computer, AI-generated content is relatively predictable, since it is built from probabilities. A measure called “perplexity” captures how surprising a piece of text is to a language model: text that consistently chooses the most probable words has low perplexity, and the lower the perplexity, the higher the chance it was created by AI.
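To make this concrete, below is a minimal sketch of perplexity scoring in Python. It assumes the open-source Hugging Face transformers library and the small GPT-2 model purely for illustration; commercial detectors don’t disclose which models, features, or thresholds they actually use.

```python
# Minimal sketch: scoring a text's perplexity with GPT-2.
# Lower perplexity = more predictable to the model, which (very roughly)
# correlates with machine-generated writing. Illustrative only.
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Return GPT-2's perplexity for the given text."""
    inputs = tokenizer(text, return_tensors="pt")
    with torch.no_grad():
        # Passing the input IDs as labels makes the model return the average
        # next-token cross-entropy loss; perplexity is exp(loss).
        outputs = model(**inputs, labels=inputs["input_ids"])
    return math.exp(outputs.loss.item())

sample = "The quick brown fox jumps over the lazy dog."
print(f"Perplexity: {perplexity(sample):.1f}")
```

In practice there is no universal cut-off: short passages and formulaic human writing can also score low perplexity, which is one reason these detectors are only fairly accurate and shouldn’t be treated as proof on their own.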

If you need a high degree of assurance, you can check the same text against multiple AI detectors. Other useful tools include the Writer AI Content Detector and Crossplag.

For detecting deepfakes, companies including Facebook and Microsoft are collaborating on the Deepfake Detection Challenge. This project regularly releases datasets that can be used to train detection algorithms. It has also inspired a contest on the collaborative data science portal Kaggle, with users competing to find the most effective algorithms.

Recognizing the threat that AI-generated video and images could pose to national security, military organizations have joined the fight too. The US Defense Advanced Research Projects Agency (DARPA) has created tools that aim to determine whether images have been created or manipulated by AI. One of them, known as MediFor, works by comparing a suspect image against real-world imagery, looking for telltale signs such as lighting and coloring effects that don’t correspond with reality. Another, known as SemaFor, analyzes the consistency between pictures and the text captions or news stories that accompany them.

Finally, we shouldn’t overlook the role that human judgment and critical thinking can play in AI content detection. Humans have a sense of “gut instinct” that – while certainly not infallible – can help us when it comes to determining authenticity. Casting a critical eye and applying what we know – is Joe Biden really likely to create a video of himself singing along to Baby Shark? – is essential, rather than delegating all responsibility to machines.

The Future of AI Detection – An Arms Race?

It’s likely we are only witnessing the very early stages of what will be an “arms race” scenario as AI becomes more efficient at creating lifelike content, and the creators of detection tools race to keep up.

This isn’t a battle that will be fought only between technologists. As the implications for society become clearer, governments and citizens’ groups will find they have an important role as legislators, educators, and custodians of “the truth.” If we discover that we are no longer able to trust what we read, watch, see, and hear, our ability to make informed decisions in every walk of life, from politics to science, will be compromised.

Bringing together technological solutions, human judgment, and the informed oversight and, when necessary, intervention of regulators and lawmakers will be our best defense against these emerging challenges.

To stay on top of the latest on new and emerging business and tech trends, make sure to subscribe to my newsletter, follow me on Twitter, LinkedIn, and YouTube, and check out my books Future Skills: The 20 Skills and Competencies Everyone Needs to Succeed in a Digital World and The Future Internet: How the Metaverse, Web 3.0, and Blockchain Will Transform Business and Society.
