PaLM 2 vs. GPT-4: When Google is in a world of its own


This table does not offer an exact one-to-one comparison. The report shows that Google used two techniques to improve PaLM 2's benchmark performance: one is chain-of-thought prompting and the other is self-consistency.

Self-consistency means that the model generates several candidate answers and then picks the one that appears most often as its final answer. For example, if the first sample says A, the second says B, and the third says A, then A is the most frequent answer, so the model selects A.
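The voting step described above can be sketched in a few lines of Python. This is a minimal illustration of the idea, not code from PaLM 2 or any real model API; the `sample_answer` callable is a hypothetical stand-in for one sampled model generation.

```python
from collections import Counter

def self_consistency(sample_answer, n_samples=3):
    """Sample several answers and return the most frequent one.

    `sample_answer` is a hypothetical callable standing in for one
    sampled generation from a language model.
    """
    answers = [sample_answer() for _ in range(n_samples)]
    most_common_answer, _ = Counter(answers).most_common(1)[0]
    return most_common_answer

# Mimicking the example in the text: samples come back A, B, A -> choose A.
samples = iter(["A", "B", "A"])
print(self_consistency(lambda: next(samples)))  # A
```

In practice the samples are drawn with a nonzero temperature so the reasoning paths differ, and the vote is taken over the final answers only.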

Chain-of-thought prompting, in turn, pushes the model to reason through its answer step by step. Many recent studies have shown that chain-of-thought reasoning improves the performance of language models. For example, Khan Academy, whose tutoring features build on OpenAI's models, uses this same chain-of-thought technique to answer users' math questions more reliably: before a question is ever posed to a user, the platform's AI trainers have worked out all the steps to the answer once themselves, so that when the user responds, the model can draw on this "memory" to guide them step by step toward the correct answer.
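A minimal sketch of what such a prompt looks like: the prompt contains a worked example whose solution is spelled out step by step, so the model imitates that reasoning style for the new question. The example problem and wording here are illustrative, not taken from Khan Academy or any real product.

```python
# Few-shot chain-of-thought template: one worked example with explicit
# intermediate steps, followed by the new question. (Illustrative only.)
COT_PROMPT = """\
Q: A shop sells pens at 3 for $2. How much do 12 pens cost?
A: Let's think step by step.
12 pens is 12 / 3 = 4 groups of 3 pens.
Each group costs $2, so 4 * 2 = $8.
The answer is $8.

Q: {question}
A: Let's think step by step.
"""

def build_cot_prompt(question: str) -> str:
    """Fill the template; the result would be sent to a language model."""
    return COT_PROMPT.format(question=question)

print(build_cot_prompt("A train travels 60 km in 1.5 hours. What is its speed?"))
```

The model then continues the text after "Let's think step by step", producing intermediate reasoning before the final answer; self-consistency can be layered on top by sampling this completion several times and voting.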

With that background, let's look at the scores. Google says that chain-of-thought prompting improved PaLM 2's performance over PaLM on every test. The MATH benchmark is especially striking: the score increases more than 4-fold in the model equipped with chain-of-thought prompting and more than 6-fold in the model equipped with self-consistency.

However, while Google compares PaLM 2's scores with its competitor's GPT-4 results, OpenAI reported chain-of-thought prompting only for the GSM-8K test, where GPT-4 reached 92.2, still ahead of PaLM 2. Comparing that score with the Flan-PaLM 2 model is not a fair comparison, because that model was fine-tuned on specialized benchmark data. Note, too, that Google, for whatever reason, declined to list GPT-4's MGSM score.

Why doesn't Google talk about its plans to make artificial intelligence safer?

Another notable point about Google's technical report is its silence on the concerns that people and various industries have been raising about artificial intelligence: concerns such as AI replacing humans in the workplace, the use of AI in weapons, copyright issues, and the overall safety of AI for the human race.

A large part of Google's report is devoted to "Responsible AI", but the tech giant's focus there is on using correct pronouns in translation. This topic particularly caught my attention because, some time ago, Geoffrey Hinton, the "godfather of artificial intelligence", left Google after 10 years in order to speak freely about the dangers AI poses to jobs and even to humans themselves.

Meanwhile, when OpenAI discusses the safety of its language model in the GPT-4 report, it shows examples in which the chatbot refuses illegal or dangerous requests, such as instructions for making a bomb.

At the end of its report, OpenAI says it is working with independent researchers to better understand and assess the potential effects of artificial intelligence and to plan for dangerous capabilities that may emerge in future systems. The question that arises is: what are Google's plans for AI safety? Why has it chosen, for now, to limit the discussion of AI problems to translation?

Will Google catch up with its competitors in the field of artificial intelligence?

What seems strange in all this is how Google, with all its resources, the billions of dollars it has spent on AI research and development, and a head start on its competitors in this field, is still lagging behind a much smaller company like OpenAI.

It was Google that, in 2017, introduced the Transformer neural network with the publication of the paper "Attention Is All You Need"; the architecture that made large language models possible, and without which the ChatGPT chatbot could not have been built.

It is interesting to note that of the paper's 8 authors, only one is still at Google; the rest have left to launch or join their own AI ventures, including Adept AI Lab and Air Street Capital, and of course OpenAI. There are even rumors that some Google AI researchers are leaving the company because Google has been accused of training the Bard chatbot on ChatGPT data without permission.
