The new model, accessible through the $20 paid version of ChatGPT, is multimodal, meaning it can accept input in both text and image formats (such as screenshots of test questions), and can then parse and respond to these queries in text.

OpenAI's new GPT-4 model can be accessed through the $20 paid version of the ChatGPT chatbot. Photo: AFP
OpenAI says it has embedded its new software into a number of applications, including the language learning app Duolingo, which is using the software to build conversational language bots; the education company Khan Academy, which is designing an online tutor; and Morgan Stanley Wealth Management, which is testing an internal chatbot using GPT-4 to retrieve and synthesize information for its employees.
The ability to accept images and text as input means that the model can now generate detailed descriptions and answer questions based on the content of a photograph.
The company said it has partnered with the Danish startup Be My Eyes – which connects visually impaired people with volunteers – to build a GPT-4-based virtual volunteer that can guide or assist blind or visually impaired people.
GPT-4's predecessor, GPT-3.5, garnered the attention of millions late last year. OpenAI describes GPT-4 as its "most advanced system," claiming it is more reliable and handles nuanced queries far better than its predecessor.
However, the company noted some limitations: “While capable, GPT-4 has similar limitations to previous GPT models: it is not entirely reliable...”. It added: “Caution should be exercised when using GPT-4 outputs, especially in contexts where reliability is critical.”
Earlier this year, Microsoft confirmed a “billion-dollar investment” in OpenAI, betting on the future of general AI. GPT-4 will support Microsoft’s Bing chatbot, which was released in beta earlier this year. Microsoft also expects to announce integration into its consumer products in the coming days.
Meanwhile, Google has opened its own conversational chatbot, Bard, to a limited group of testers and announced that Google Cloud customers will, for the first time, be able to access its large language model, PaLM, to build applications.
To assess the technology's potential for harm, OpenAI subjected GPT-4 to rigorous testing and identified risks related to misinformation, privacy, and cybersecurity. The company revealed that GPT-4 could “generate potentially harmful content, such as attack planning advice or hate speech. It could represent biases and differing worldviews …”.
OpenAI also said it worked with an external organization to test whether GPT-4 can perform automated actions without human intervention, and concluded that it “probably” does not yet have this capability.
Mai Anh (AFP, FT, CNA)