The creators of ChatGPT are releasing an updated version of the AI behind their powerful chatbot.
OpenAI’s impressive software took the internet by storm late last year with its ability to generate human-like responses to almost any text prompt you throw at it, from writing stories to coming up with chat-up lines.
It proved such a revelation that tech giant Microsoft is using a version of the same technology to power its new Bing search engine, while rival Google is developing its own chatbot.
OpenAI has now introduced the next generation of its GPT model, called GPT-4 (ChatGPT is powered by GPT-3.5).
It’s a “large multimodal model” that the company says “solves difficult problems with great accuracy due to its broader common sense and problem-solving abilities.”
What is a “multimodal model”?
While ChatGPT is based on a language model that can only recognize and generate text, a multimodal model can work with other forms of media as well.
Professor Oliver Lemon, an artificial intelligence expert at Heriot-Watt University in Edinburgh, explained: “This means that it combines not only text, but possibly images as well.
“Not only can you interact in a conversation with the text, but you can also ask questions about the image.”
In a blog post announcing GPT-4, OpenAI confirmed that it can accept images as input, recognizing and interpreting them.
In one example, the model was asked to explain why a certain picture was interesting.
OpenAI said GPT-4 “demonstrated human-level performance on a variety of professional and academic benchmarks,” with results showing improved accuracy compared to previous versions.
The release is limited to subscribers of the company’s premium ChatGPT Plus, while others must join a waitlist.
New artificial intelligence can ‘see’
OpenAI’s announcement comes after a Microsoft executive teased that GPT-4 would be released this week.
The U.S. tech giant recently made a multibillion-dollar investment in the company.
Speaking on stage last week, Microsoft Germany chief technology officer Andreas Braun indicated that image recognition is indeed one of GPT-4’s capabilities, German news site Heise reported.
OpenAI employee Andrej Karpathy tweeted that the feature means the AI can “see”.
However, any expectation that GPT-4 might be able to generate images, in the same way GPT-3.5 generates text, appears misplaced — that capability is already widely available elsewhere.
There are already dedicated AI tools for generating images, such as OpenAI’s own Dall-E 2, which can create images from simple text prompts.
Other generative AIs being developed by companies such as Meta and Google can make videos and music.
Meta’s Make-A-Video hasn’t been released to the public, but the company says it lets people generate snappy and shareable video clips based on text prompts.
Google researchers revealed earlier this year that they had developed an AI that can make short music tracks, again based only on short text prompts. Like Meta’s video tools, it’s not available to the public.
The success of ChatGPT appears to be forcing tech companies that had been wary of deploying their own AI technology to act.
As a result, Google has reportedly accelerated plans for its ambitious chatbot, called Bard, having previously imposed strict restrictions on releasing such models.
Tech companies have been burned before by releasing undercooked artificial intelligence for public use. Back in 2016, Microsoft was left red-faced when its chatbot Tay was taught to say offensive things.