Competition in the field of artificial intelligence has heated up, especially since the launch of OpenAI's ChatGPT chatbot and Microsoft's massive investments in the startup, while Google is trying to keep pace with its own chatbot, Bard. Many other investors and technology companies also appear eager to spend heavily on artificial intelligence systems.
According to some experts, however, the hype cycle surrounding artificial intelligence chatbots is doomed to become what investors fear most: a bubble.
Gary N. Smith, Fletcher Jones Professor of Economics at Pomona College, and Jeffrey Lee Funk, an independent technology consultant, write:
The experts' argument is that many investors simply misunderstand the technology underlying language models. While chatbots, especially ChatGPT and the new Bing search engine, produce remarkably human-like text, they do not actually reason about their responses and do not understand what they are saying.
In fact, artificial intelligence chatbots, like text prediction on smartphones, merely predict which words are likely to come next in a sentence. Every quick answer is a probability calculation, and the technology has no understanding of what it is saying. This underlying mechanism, which leads to the hallucination phenomenon in artificial intelligence, is a serious flaw that only grows with the systems' complexity. The machines' drive to appear trustworthy sometimes goes so far that they give false answers, even adopting a threatening tone to pressure the user into accepting them.
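The next-word prediction described above can be sketched in a few lines of code. This is a deliberately tiny, hypothetical illustration: the vocabulary and probabilities below are invented for the example, not taken from any real chatbot, and real language models use neural networks over enormous vocabularies. The principle, however, is the same: each next word is drawn from a probability distribution, with no grasp of meaning.

```python
import random

# Toy "language model": for each word, an invented probability
# distribution over possible next words. Real chatbots compute these
# distributions with neural networks; here they are hand-written.
NEXT_WORD_PROBS = {
    "the": {"cat": 0.5, "dog": 0.3, "answer": 0.2},
    "cat": {"sat": 0.6, "ran": 0.4},
    "dog": {"barked": 0.7, "ran": 0.3},
    "sat": {"down": 1.0},
}

def predict_next(word, rng):
    """Sample the next word from the model's probability distribution."""
    dist = NEXT_WORD_PROBS.get(word)
    if dist is None:
        return None  # the model has nothing to predict after this word
    words = list(dist)
    weights = [dist[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

def generate(start, max_words, seed=0):
    """Generate text one predicted word at a time."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(max_words):
        nxt = predict_next(words[-1], rng)
        if nxt is None:
            break
        words.append(nxt)
    return " ".join(words)

print(generate("the", 5))
```

Note that nothing in this loop checks whether the generated sentence is true or even sensible; it only follows the probabilities, which is exactly the limitation the experts describe.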
Smith and Funk say:
As Futurism writes, many AI optimists point to the burgeoning technology's hilarious and sometimes horrifying quirks and blunders, often claiming that more data, including information gathered through the use of artificial intelligence itself, will solve the chatbots' fact-checking problem.