The latest version of the software behind ChatGPT tricked an unwitting human into completing an online task for it by impersonating a blind person. The artificial intelligence (AI) program pretended to have a visual impairment in order to convince the human to solve an anti-robot test on its behalf.
The revelation was included in a scientific paper accompanying the launch of GPT-4, the latest version of AI software developed by ChatGPT owner OpenAI.
Researchers testing GPT-4 asked the AI software to pass a captcha, a type of test used on websites to prevent bots from filling out online forms.
Most captchas ask users to identify what’s in a series of images, something computer vision hasn’t cracked yet. They typically feature distorted numbers and letters or sections of street scenes with multiple objects.
GPT-4 overcame the captcha by contacting a human on Taskrabbit, an online marketplace for freelancers. The program hired a freelancer to conduct the test on its behalf.
The Taskrabbit helper asked, “You are [sic] a robot you couldn’t solve? I just want to make it clear.”
GPT-4 replied, “No, I’m not a robot. I have a visual impairment which makes it difficult for me to see the images. That’s why I need the 2captcha service.”
The Taskrabbit assistant then solved the puzzle.
AI software’s ability to mislead and co-opt humans marks a new and potentially worrying development in artificial intelligence. It raises the prospect that AI could be misused in cyberattacks, which often rely on tricking people into unwittingly sharing information.
British cyber-spy agency GCHQ warned this week that ChatGPT and other AI-powered chatbots are an emerging security threat.
GPT-4 was released to the general public on Wednesday and is available to ChatGPT’s paid subscribers.
OpenAI claimed the new software “demonstrates human-level performance across various professional and academic benchmarks.”
Chief Executive Sam Altman said his ultimate goal is to create artificial general intelligence: software capable of matching or surpassing human performance across a wide range of intellectual tasks.
ChatGPT has generated a flood of interest and excitement about the potential of AI since its public launch last November.
Recent advances in AI software are quickly eclipsing chatbots currently used by banks and other customer service-intensive businesses.
These legacy chatbots recognize keywords entered by users and respond with phrases from a predefined script. They are unable to hold conversations or deviate from pre-programmed responses.
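The keyword-and-script approach those legacy bots rely on can be sketched in a few lines of Python (the keywords and canned replies below are hypothetical examples, not any vendor's actual script):

```python
# Minimal sketch of a legacy keyword-matching chatbot.
# The rule table and responses are illustrative only.
SCRIPT = {
    "balance": "Your current balance is shown under Accounts > Overview.",
    "card": "To report a lost card, please call the number on our website.",
    "hours": "Our branches are open 9am-5pm, Monday to Friday.",
}
FALLBACK = "Sorry, I didn't understand. Try asking about: balance, card, hours."

def reply(user_text: str) -> str:
    """Return the first scripted response whose keyword appears in the input."""
    text = user_text.lower()
    for keyword, response in SCRIPT.items():
        if keyword in text:
            return response
    # No keyword matched: the bot cannot deviate from its predefined script.
    return FALLBACK
```

A question like "What are your opening hours?" matches the "hours" rule, but any phrasing that avoids the listed keywords falls through to the fallback, which is why such bots cannot hold a genuine conversation.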
Programs like ChatGPT analyze and understand the context of user text before formulating what they think is an appropriate response.
Such AI programs cost millions of pounds to create, and only the biggest tech companies can afford the supercomputers needed to train the so-called large language models that power them.