SpaceX founder Elon Musk reacts at a press conference after the SpaceX Falcon 9 rocket, carrying the Crew Dragon spacecraft, took off on an unmanned test flight to the International Space Station from the Kennedy Space Center in Cape Canaveral, Florida, on March 2, 2019.
Mike Blake | Reuters
Tech billionaire Elon Musk likes to think he knows a thing or two about artificial intelligence (AI), but the research community thinks his confidence is misplaced.
The boss of Tesla and SpaceX has repeatedly warned that AI will soon become as intelligent as humans, and that when it does we should all be afraid, because humanity’s very existence is at stake.
Several AI researchers from different companies told CNBC that they see Musk’s comments on AI as wide of the mark, and urged the public not to take his views on the technology too seriously. The smartest computers can still only excel at a “narrow” selection of tasks, they said, and there is a long way to go before AI reaches human level.
“A large part of the community thinks he’s a negative distraction,” said an AI researcher with close ties to the community, who wished to remain anonymous because their company may do work for one of Musk’s businesses.
“He is sensationalist, veering wildly between openly worrying about the downfall risk of the technology and then hyping the AGI (artificial general intelligence) agenda. While his very real accomplishments are recognized, his outlandish remarks lead the general public to an unrealistic understanding of the state of AI’s maturity.”
An AI scientist who specializes in speech recognition, and who wished to remain anonymous to avoid a public backlash, said Musk is “not always looked upon favorably” by the AI research community.
“My instinctive reaction is aversion, because he spouts so much nonsense,” said another AI researcher at a U.K. university who asked to remain anonymous. “But then he delivers such extraordinary things. It always leaves me wondering: does he know what he’s doing? Is all the visionary stuff just a ploy for bringing something innovative to market?”
CNBC contacted Musk and his representatives for this article but is yet to receive a response.
Investing in A.I.
Musk’s relationship with AI goes back several years and he certainly has an eye for promising AI start-ups.
He was one of the first investors in DeepMind, the British firm widely regarded as one of the world’s leading AI labs. The company was acquired by Google in January 2014 for around $600 million, netting Musk and other early investors, such as his PayPal co-founder Peter Thiel, a healthy return on their investments.
But his motivations for investing in AI are not purely financial. In March 2014, just two months after the DeepMind acquisition, Musk warned that AI is “potentially more dangerous than nukes,” suggesting his investment may have been driven by concern about where the technology was heading.
The following year, he helped set up OpenAI, a new billion-dollar AI research lab in San Francisco intended to rival DeepMind, with a particular focus on AI safety.
Musk has another business looking to push the boundaries of AI. Founded in 2016, Neuralink wants to merge people’s brains with AI using a Bluetooth-enabled chip that sits in the skull and communicates with a person’s phone. Last July, the company announced that human trials would begin in 2020.
In many ways, Musk’s investments in AI have kept him close to the very technology he is so afraid of.
“It really is the scariest problem to me”
Musk is one of the most famous tech figures in the world, so his alarmist views on AI can potentially reach millions of people.
A number of other tech leaders, including Microsoft’s Bill Gates, believe that someday super-intelligent machines will exist, but they tend to be a little more diplomatic when they express their thoughts to an audience. Musk, on the other hand, doesn’t hold back.
In September 2017, Musk said on Twitter that AI could be the “most likely” cause of a third world war. His comment was in response to Russian President Vladimir Putin, who had said that the nation leading in AI “will become the ruler of the world.”
Earlier that year, in July 2017, Musk warned that robots would eventually become better than humans at everything, and that this would lead to widespread job disruption.
“There certainly will be job disruption,” he said. “Because what’s going to happen is robots will be able to do everything better than us … I mean all of us. Yeah, I am not sure exactly what to do about this. This is really the scariest problem to me, I will tell you.”
He added: “Transport will be one of the first to go fully autonomous. But when I say everything – the robots will be able to do everything, bar nothing.”
Musk didn’t stop there.
“I have exposure to the most cutting-edge AI, and I think people should be really concerned by it,” he said. “AI is a fundamental risk to the existence of human civilization.”
The cutting-edge AI he refers to is most likely being developed by scientists at OpenAI, and possibly also at Tesla.
Somewhat awkwardly, OpenAI has repeatedly tried to distance itself from Musk and his comments on AI. OpenAI employees don’t always like seeing “Elon Musk’s OpenAI” in headlines, for example.
Musk resigned from the OpenAI board in February 2018, but continued to share his hard-hitting views on the direction AI is taking in public forums.
An OpenAI spokesperson said Musk left the board to avoid future conflicts of interest with Tesla.
“As Tesla continues to focus more on AI, Elon has chosen to leave the OpenAI board of directors to eliminate potential future conflicts. We are very fortunate that he is always ready to advise us.”
Feud with Zuckerberg
Researchers at places like the Centre for the Study of Existential Risk at Cambridge University or the Future of Humanity Institute in Oxford may not disagree with all of Musk’s comments.
But his July 2017 comments were the last straw for some.
In a rare public disagreement with another tech leader, Facebook CEO Mark Zuckerberg accused Musk of fearmongering and said his comments were “pretty irresponsible.”
Musk responded by saying that Zuckerberg’s understanding of the subject was limited.
Facebook CEO Mark Zuckerberg at the F8 Developer Conference in 2017.
David Paul Morris | Bloomberg via Getty Images
Undeterred, in August 2017 Musk called AI a bigger threat than North Korea and said people should be more concerned about the rise of the machines than they currently are.
The prolific tweeter told his millions of followers: “If you’re not concerned about AI safety, you should be. Vastly more risk than North Korea.” The tweet was accompanied by a photo of a poster bearing the words “In the end the machines will win.”
Zuckerberg isn’t the only Facebook employee to question Musk’s views on AI. Edward Grefenstette, a Facebook AI researcher who formerly worked at DeepMind, has repeatedly challenged Musk’s views. “If you needed more evidence @elonmusk is an opportunistic moron who was in the right place at the right time once, here you go,” he said on Twitter this month, after Musk tweeted “FREE AMERICA NOW” in reference to coronavirus lockdowns.
Yann LeCun, chief AI scientist at Facebook, has questioned Musk’s views on AI more than once. In September 2018, he said it was “crazy” for Musk to call for more AI regulation.
It’s not just Facebook employees who disagree with Musk on AI. Former Google CEO Eric Schmidt said in May 2018 that Musk was “exactly wrong” about AI.
In March 2018, at the South by Southwest technology conference in Austin, Texas, Musk doubled down on his 2014 comments, saying he thought AI was far more dangerous than nuclear weapons and adding that there needed to be a regulatory body overseeing the development of superintelligence.
These relatively extreme views on AI are shared by only a small minority of AI researchers. But Musk’s celebrity status means they reach a large audience, which frustrates the people actually doing AI research.