One of the most influential thinkers in the field of AI risk explains why superintelligent AI amounts to a global suicide bomb. In this essential session, Nate Soares, co-author with Eliezer Yudkowsky of If Anyone Builds It, Everyone Dies, sets out why superintelligent AI could be catastrophic for our species, and how we can stop it.
Soares is president of the Machine Intelligence Research Institute and one of its most senior researchers. He has worked in AI alignment for over a decade, following earlier roles at Microsoft and Google. He talks to Patricia Clarke, technology reporter at the Observer.