Why The Existential Threat Of AI May Be Overblown


OpenAI CEO Sam Altman’s recent Congressional testimony has intensified the national conversation about the potential existential risks posed by artificial intelligence. Although he championed the merits of AI and the great benefits it can provide to humanity, Altman also voiced his fear that the technology industry could “cause significant harm to the world,” going so far as to endorse a new federal agency to regulate AI and to call for the licensing of AI firms. While his concerns merit attention, it’s essential to weigh them against what we actually know about AI and existential risk, as opposed to what is mere speculation.

One notable voice sounding the alarm over AI risk is Eliezer Yudkowsky, who has made the extraordinary claim that the “most likely result of building a superhumanly smart AI… is that literally everyone on Earth will die.” Superintelligent AI is usually taken to mean an AI system that surpasses the intelligence and capabilities of the smartest human beings in nearly every field.

Yet, in a recent podcast with economist Russell Roberts, Yudkowsky was unable to articulate any coherent mechanism behind this outrageous claim. In other words, he could not offer a plain-English account of how the world gets from chatbots answering questions on the internet to the literal end of the human race. Indeed, digging through the arguments raised by clearer-thinking AI pessimists, one can extract reasons for optimism rather than foreboding.

An illustrative example is the concept of “instrumental convergence”: the idea that there are intermediate goals an AI might set for itself on the way to achieving whatever terminal goal humans program into it. For example, an AI tasked with producing widgets might decide that accumulating money is the most effective strategy for accomplishing that end, since money lets it buy factories, hire workers, and so on. The notion suggests a general tendency for superintelligent AIs to converge on similar intermediate strategies even when their final goals differ widely, as the toy sketch below illustrates.
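
To make the idea concrete, here is a minimal toy sketch in Python. The goals, resources, and planner here are entirely invented for illustration; this is not a model of any real AI system, only a demonstration of how agents with unrelated terminal goals can share the same first instrumental step.

```python
# Toy illustration of "instrumental convergence": planners with different
# terminal goals all adopt the same instrumental subgoal first.
# All goals, resources, and the planner are hypothetical and invented
# purely for illustration.

# Each terminal goal lists the resources it ultimately requires.
TERMINAL_GOALS = {
    "produce_widgets": ["factory", "workers"],
    "plant_forests": ["land", "saplings"],
    "run_experiments": ["lab", "equipment"],
}


def plan(goal: str) -> list[str]:
    """Return a naive plan for the given terminal goal.

    In this toy world every resource must be purchased, so every plan
    begins with the same instrumental subgoal: accumulate money.
    """
    prerequisites = TERMINAL_GOALS[goal]
    steps = ["accumulate_money"]  # shared instrumental subgoal
    steps += [f"buy_{resource}" for resource in prerequisites]
    steps.append(goal)
    return steps


if __name__ == "__main__":
    for goal in TERMINAL_GOALS:
        print(f"{goal}: {' -> '.join(plan(goal))}")
```

All three plans begin with the same money-gathering step despite unrelated final goals; that shared first step is the convergence the concept describes.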

Read the full article on Forbes.