OpenAI on Tuesday released GPT-4, the latest version of the artificial intelligence language model behind ChatGPT, the chatbot that has been making waves in the tech industry.
GPT-4 can take images as input, meaning it can look at a photo and give the user general information about what it shows.
The model also draws on a larger store of information, allowing it to give more accurate answers and to write code in all major programming languages.
GPT-4 can now read, parse or generate up to 25,000 words of text and appears to be markedly smarter than its predecessor: it scored in the 90th percentile on the Uniform Bar Exam, while its predecessor scored around the 10th percentile, according to OpenAI.
ChatGPT, which launched just a few months ago, is already considered the fastest-growing consumer application in history, reaching 100 million monthly active users within two months of its release. According to a UBS study, it took TikTok nine months to reach that many users and Instagram almost three years.
“Although less capable than humans in many real-world scenarios, [GPT-4] demonstrates human-level performance on various professional and academic benchmarks,” OpenAI wrote in its press release, adding that the language model scored 700/800 on the math SAT.
While impressive, OpenAI acknowledged that the program is still "far from perfect."
"It is still flawed, still limited, and it still seems more impressive on first use than it does after you spend more time with it," OpenAI CEO Sam Altman tweeted.
Artificial intelligence models, including ChatGPT, have raised concerns and generated disturbing headlines in recent months. In education, students have used the systems to complete writing assignments, and educators are torn over whether these tools are disruptive or could be used as learning aids.
These systems are also prone to generating inaccurate information; Google's AI chatbot Bard made a factual error in its first public demo. It is a shortcoming OpenAI hopes to address: GPT-4 is 40% more likely to produce factually accurate responses than its predecessor, according to the company.
Misinformation and potentially biased output are also concerns. AI language models are trained on large datasets that can contain biases related to race, gender, religion and more, which can cause the model to generate biased or discriminatory responses.
Many have pointed out the malicious ways people could use models like ChatGPT, such as running phishing scams or spreading misinformation to intentionally disrupt important events like elections.
OpenAI says it spent months making GPT-4 safer, adding that the company worked with "over 50 experts for early feedback in areas like AI safety and security."
GPT-4 is 82% less likely to provide users with "prohibited content," such as illegal or morally objectionable material, according to OpenAI.
This story, "OpenAI releases GPT-4 and claims its chatbot is significantly smarter than previous versions," originally appeared on abcnews.go.com.