
OpenAI's fireworks: GPT-4, an AI model with simultaneous text and image interpretation, has been introduced

After months of rumors and speculation, OpenAI (the creator of ChatGPT) has finally introduced its GPT-4 artificial intelligence model. GPT-4 is the latest in the company's line of language models, the tools used to build services such as ChatGPT and the latest version of Bing.

According to The Verge, OpenAI says its new AI model is “more creative and collaborative than ever before” and can “solve difficult problems with greater accuracy.” Unlike the previous version, the GPT-4 language model can analyze image inputs in addition to text inputs, but it only responds with text.

OpenAI says it has partnered with several companies, including Duolingo, Stripe, and Khan Academy, to bring its new language model to their services. Users of the ChatGPT Plus subscription service, which costs $20 per month, can access the new language model. In a separate statement, Microsoft confirmed that the new version of Bing is based on GPT-4. OpenAI plans to make the GPT-4 language model API available to developers soon.
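For developers waiting on that API, a GPT-4 request would presumably look much like an existing GPT-3.5 call with the model name swapped. The snippet below is a minimal sketch using the ChatCompletion interface in OpenAI's current Python client; the "gpt-4" model identifier and the prompt are assumptions for illustration, not confirmed details of the upcoming release.

```python
# Minimal sketch, assuming GPT-4 is exposed through the same ChatCompletion
# endpoint OpenAI's Python client already uses for GPT-3.5. The "gpt-4"
# model name is an assumption until the developer API is actually available.
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; use your own key

response = openai.ChatCompletion.create(
    model="gpt-4",  # assumed identifier; GPT-3.5 uses "gpt-3.5-turbo"
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Explain what is new in GPT-4 in one sentence."},
    ],
)

# The reply comes back as ordinary text in the first choice.
print(response.choices[0].message.content)
```

Since image input was not part of the initial public rollout, the sketch sticks to text-only messages.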

According to OpenAI, the difference between GPT-4 and GPT-3.5 is “imperceptible” in ordinary conversation. GPT-3.5 is the model that powers ChatGPT. Sam Altman, CEO of OpenAI, tweeted that GPT-4 is “still incomplete and limited” and seems more impressive on first use than it does after you have worked with it for a while.

The differences between GPT-4 and the previous model are more visible in standardized tests such as the bar exam, the LSAT, and the SAT math test, where GPT-4 has managed to score in the 88th percentile or above on a number of them.

