GPT-4 is “multimodal,” meaning it can answer questions that include images and text.

GPT-3.5 accepts only text questions, while the latest version of this large language model can also take images as input, recognizing and analyzing the objects they contain.
GPT-3.5 is limited to responses of about 3,000 words, while GPT-4 can generate responses of over 25,000 words.
GPT-4 is 82% less likely than its predecessor to respond to requests for disallowed content and scores 40% higher on some factual-accuracy tests.
It will also allow developers to decide their AI's tone of voice and level of detail.
OpenAI says ChatGPT users will soon have the option to change the chatbot's tone and response style.
The latest version has outperformed its predecessor on the US bar exam and the Graduate Record Examination (GRE).
According to OpenAI, GPT-4 has the same limitations as its predecessors and is "less capable than humans in many real-world situations."
Inaccurate responses, known as "hallucinations," remain a challenge for many AI programs, including GPT-4.
GPT-4 lacks knowledge of events that occurred after September 2021.
GPT-4 is being integrated into Microsoft's Bing AI chatbot and the popular language learning platform Duolingo.
Mai Anh (according to Reuters)