How likely is an artificial-intelligence apocalypse?

In 1945, shortly before the first nuclear bomb was tested in the New Mexico desert, Enrico Fermi, one of the physicists who had helped build it, made a wager with his fellow scientists. Would the unprecedented heat of the blast ignite a nuclear firestorm in the atmosphere? And if so, would that firestorm destroy only New Mexico, or the entire world? The test was not quite as reckless as Fermi’s mischievous bet suggests: Hans Bethe, another physicist, had calculated that such an inferno was almost certainly impossible.

These days, concern about “existential threats”, those that endanger the human race as a species rather than humans as individuals, is no longer confined to military scientists. Nuclear war, nuclear winter, pandemics (whether natural, like covid-19, or engineered), asteroid strikes and plenty of other scenarios could kill most humans, or even wipe out humanity entirely.

According to The Economist, the latest existential threat on the list is artificial intelligence (AI): the end of the world, and of the human race, at the hands of this technology. In May, a group of leading figures in the field signed an open letter stating: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

But how reasonable is it to worry? On July 10th a group of researchers, including Ezra Karger, an economist at the Federal Reserve Bank of Chicago, and Philip Tetlock, a political scientist at the University of Pennsylvania, published a paper that tries to answer that question by systematically surveying two different groups of forecasters.

On one side were subject-matter experts in nuclear war, biological weapons, artificial intelligence and even the topic of extinction itself. On the other was a group of professional “superforecasters”: general-purpose prognosticators with a proven record of accurate predictions on all sorts of topics, from election results to the outbreak of wars.

The researchers recruited 89 superforecasters, 65 domain experts and 15 experts on extinction risk in general. The volunteers were asked to assess two kinds of disaster. A “catastrophe” was defined as an event that kills just 10% of the world’s population, or about 800 million people; for comparison, the second world war killed roughly 3% of a global population of around 2 billion at the time. “Extinction”, meanwhile, was defined as an event that wipes out the whole of humanity, with the possible exception of at most 5,000 lucky (or unlucky) survivors.
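To put those thresholds in perspective, the arithmetic is easy to check; the short Python sketch below uses round, assumed population figures (roughly 8 billion today, roughly 2 billion in the 1940s) rather than numbers taken from the study itself.

```python
# Rough scale check for the study's definitions (illustrative sketch only;
# the population figures are round assumptions, not data from the paper).
world_pop_today = 8_000_000_000      # approximate world population, 2020s
world_pop_1940s = 2_000_000_000      # approximate world population around WWII

catastrophe_threshold = 0.10 * world_pop_today  # "catastrophe": 10% of humanity
ww2_toll = 0.03 * world_pop_1940s               # WWII toll cited as ~3% of the population then

print(f"Catastrophe threshold: ~{catastrophe_threshold / 1e6:.0f} million deaths")
print(f"WWII, for comparison:  ~{ww2_toll / 1e6:.0f} million deaths")
```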

If we have to go, we all go together

Both groups were asked to provide predictions about everything from terminal outcomes, such as extinction caused by AI or nuclear war, to smaller questions, such as which worrying advances in AI capabilities might serve as signposts on the road to a future catastrophe.

The study’s most striking result was that the domain experts, who tend to dominate public conversations about existential risk, take a much bleaker view of the future than the superforecasters. On average, the experts put the probability of a catastrophe by 2100 at about 20% and the probability of human extinction at about 6%; the superforecasters put those figures at 9% and 1% respectively.

That wide gap conceals some interesting detail. The two groups differed most when assessing the risks posed by artificial intelligence. On average, the superforecasters estimated a 2.1% probability of an AI-caused catastrophe, and a 0.38% probability of an AI-caused extinction, by the end of the century. The AI experts, by contrast, put those probabilities at 12% and 3% respectively. In a reversal, the superforecasters were more pessimistic than the experts about the risks from natural disease.

Perhaps the most interesting result was that, although the two groups disagreed about the precise level of risk, both ranked AI as their biggest worry, whether the question was catastrophe or extinction. Dan Maryland, a superforecaster who took part in the study, believes one reason AI looks so dangerous is that it acts as a “force multiplier” on other threats, such as nuclear weapons.

In the case of a nuclear war or an asteroid strike, AI (in the form of armed robots, say) could kill humans directly. It could also sharpen the axes of other executioners: if humans used AI to help design more potent bioweapons, for instance, it would be fundamentally, if indirectly, implicated in the resulting disaster.

The superforecasters, while pessimistic about AI, were notably uncertain in their views about it. The world has lived with nuclear weapons for nearly 80 years, and the fact that an all-out nuclear war has not yet happened is valuable data that can inform predictions about its future likelihood. Artificial intelligence, at least in its current sense, is much newer: today’s powerful machine-learning models date back only to the early 2010s, and the field is still developing and expanding rapidly, which leaves far less data on which to base predictions.

Dr Tetlock, in particular, has done a great deal of work on the problem of forecasting. He was the first to identify and name “superforecasters”: people who seem to be unusually good at predicting the future. Such people share several traits, including careful, statistically minded thinking and an awareness of the cognitive biases that might lead them astray. Despite their lack of specific expertise, superforecasters have a record of outperforming specialists in many technical fields, from finance to geopolitics.

The difference of opinion between the groups seems to reflect, at least in part, differences in their models of how the world works. Catastrophic risks depend not only on how complex or powerful a technology is, but also on how humans react to it. For example, after the nuclear scares of the early cold war, the United States and the Soviet Union, the world’s two biggest nuclear powers, began to cooperate. By establishing “hotlines” between Moscow and Washington, agreeing to inspect each other’s weapons and signing treaties designed to limit the size of their stockpiles, the two superpowers helped reduce the risk of nuclear war.

But the superforecasters and the AI experts apparently had very different views about how societies would respond to small-scale harms caused by AI. The superforecasters tended to think such harms would prompt greater scrutiny and tighter regulation, heading off bigger problems later. The domain experts, by contrast, tended to think that commercial and geopolitical incentives might outweigh safety concerns, even after real damage had been done.

The superforecasters and the experts also had different views about the limits of intelligence. Christy Morrell, another superforecaster who took part in the study, puts it simply: “it’s not that easy to kill all humans.” Doing so, Morrell notes, would probably require “a certain amount of ability to interact with the physical world”, and “we probably need a lot of progress in the world of robotics before we get to that point.”
