According to an article published in The Independent, SpaceX mogul Elon Musk and celebrated physicist Stephen Hawking have signed an open letter requesting more robust research into the possible dangers associated with artificial intelligence.
The letter, created by the Future of Life Institute, calls for increased study of artificial intelligence generally, but stresses the importance of determining “how to reap its benefits while avoiding potential pitfalls.”
Musk and Hawking join a list of hundreds of signatories that includes representatives from Google, Microsoft, GE, Yahoo! and a host of academic institutions such as Oxford, Cambridge, Harvard and MIT.
The Future of Life Institute requires no qualifications to join this list by signing the letter and accepts new signatories on its website.
In an interview at the MIT AeroAstro Centennial Symposium, Musk likened the danger of building artificial intelligence to “summoning the demon.”
Earlier in the interview, he emphasized the importance of regulatory oversight of the development of artificial intelligence, calling the technology his best guess for humanity’s “biggest existential threat.”
In a December 2014 interview with the BBC, Hawking raised similar concerns, warning that “the development of full artificial intelligence could spell the end of the human race.”
He argued that the danger of artificial intelligence lay in the potential for the technology to outstrip the intelligence of humanity: “Humans, who are limited by slow biological evolution, couldn’t compete and would be superseded.”
The letter itself emphasizes the importance of diverse perspectives on the ethics and societal role of artificial intelligence, noting that the technology could have important ramifications in economics, government and law.
In this context, the letter colloquially defines intelligence as “the ability to make good decisions, plans or inferences.”
In addition to warning against hazards, the letter describes successes of current research in the field:
“The establishment of shared theoretical frameworks, combined with the availability of data and processing power, has yielded remarkable successes in various component tasks such as speech recognition, image classification, autonomous vehicles, machine translation, legged locomotion and question-answering systems.”
The letter also points to the potential societal benefits of artificial intelligence, claiming that “we cannot predict what we might achieve when this intelligence is magnified by the tools A.I. may provide, but the eradication of disease and poverty are not unfathomable.”
The letter links to a priorities document, which lays out a more comprehensive set of principles and guidelines for research. This document examines long-term considerations meant to ensure that research on and development of artificial intelligence remain beneficial and “aligned with human interests.”
It also lists short-term priorities, such as optimizing the economic impact of artificial intelligence and developing more effective legal and ethical frameworks to govern the field.
“I think the risk with a document like this is that they say ‘human interests,’ but humans are not a homogenous group,” said senior Steven Lewis. “A proposal like this is great in concept but might be risky in application.”
The full letter and additional information can be found online at futureoflife.org.