People Can’t Tell ChatGPT from Humans – What Does GPT-5 Hold?
If you haven’t heard of the Turing test, it’s a decades old way to see if people can tell the difference between artificial intelligence versus humans. Usually it’s performed through “conversations” held on a computer, so the subject doesn’t know who they are interacting with. It was first passed in 2014 by an advanced computer program called Eugene Goostman. Now? GPT-4 joins the ranks of AI who have bested the test. If Chat GPT to human interaction is already this cloudy, what does GPT5 have in store?
The Results? Chat GPT Has a Pass Rate of 54%
This most recent Turing test was performed by The Department of Cognitive Science at UC San Diego, specifically by Cameron R. Jones and Benjamin K Bergen. They released their findings in May of 2024. To be create a good baseline, they also evaluated GPT 3.5 and ELIZA, which is one of the earliest and simplest AI computer programs, created in the 1960s.
Here were the parameters of the test:
- Human test subjects had 5 minute conversations with AI or other humans
- Both GPT versions were given specific prompts explaining the test, how to behave and what persona to adapt, while ELIZA had more simplistic instructions
- The “persona” was of a young person who used slang, had sporadic spelling errors, and was concise
- Response delays were also in the prompt, to keep the AI from responding at superhuman speeds
- 500 participants were randomly assigned to either the 3 AI models or a human
- Participants had to make their determination within 5 minutes and justify their reasons
Here’s how it all played out in terms of participants thinking they were talking to humans:
- Actual Humans: (67%)
- GPT-4: 54%
- GPT-3.5: (50%)
- ELIZA: (22%)
The Eugene Goostman chatbot passed the Turing test by convincing 33% of humans that it was also human. The Turing criteria requires at least 30% of participants need to be fooled to pass the experiment. Within ten years, AI went from barely passing at 33% to a whopping 54% fool rate. Now that artificial intelligence is advancing at unprecedented rates, what does this mean for Chat GPT to human interaction in the future?
What Can We Expect with GPT-5
The official expectations from Chat GPT 5 are that it will feel like you are “communicating with a person rather than a machine.” It will be able to create human-like text, have improved understanding and better problem solving capabilities. But..is that good?
If the GPT4 already can fool 54% of humans, it’s safe to say GPT5 will be much more deceptive. While AI absolutely has its place in society and offers countless advantages, it also poses problems that haven’t even begun to be sorted yet. One of the biggest of these is increases in cybercrime.
Artificial Intelligence and Cybercrime
The very same month that UCSD released their findings on the Turing test, the San Francisco division of the FBI released a warning that cybercriminals were getting more advanced by using AI tools. Specific threats they emphasized:
- AI phishing attacks using highly-targeted customizations
- AI-powered voice/video impersonations
- Increased scale, automation, and speed of cyber-attacks through AI tools
With how incredibly advanced AI has become, it’s no surprise that cyber criminals would put it to bad use. Cybercrime is already increasing leaps and bounds and it’s becoming more difficult to decipher truth from deceit. Hopefully, as advancements continue, more efforts will be put into mitigating threats and making this technology safer. Government regulations are struggling to keep pace with the exploding AI industry, which means lagging protections.
Does this mean GPT5 is bad? No. AI is only as good or bad as the person who uses it. It has incredible potential in fields like science, medicine, and technology. There is simply no stopping it now, and with so many benefits, they may outweigh the threats anyway. That said, there is a time and place for AI, and there is a lot to learn in terms of using it safely. Educate yourself on the latest scams, beware any requests for money or personal information, and use multi-factor authentication whenever possible. AI may be able to fool us into thinking we’re talking to other humans, but we’ve been dealing with humanity forever. Crime, whether perpetrated by human or machine, can be guarded against.