Elon Musk told Tucker Carlson that his mind was blown by the degree of access the US government had to Twitter users’ private information and data.
In an exclusive two-part interview that will air tonight and tomorrow on Fox News’ “Tucker Carlson Tonight,” Musk said that “The degree to which government agencies effectively had full access to everything that was going on on Twitter blew my mind.”
Musk also noted that when he became the owner of the social platform after a $44 billion purchase in October 2022, he “was not aware of that.”
“Would that include people’s DMs?” Carlson queried in a clip released on social media.
“Yes,” Musk replied definitively.
The SpaceX CEO has seemingly made it his mission to expose the government’s involvement with Twitter’s previous management, partnering with a series of journalists to release the “Twitter Files.”
The investigation divulged that under former CEO Jack Dorsey, Twitter’s liberal-leaning executives suppressed the spread of damaging news about Democrats, notably the New York Post’s Hunter Biden laptop bombshell.
The Twitter Files also exposed how conservative voices were censored on the platform, and revealed that the FBI and CIA were both previously involved in content moderation.
In another segment of his Tucker Carlson interview, Musk warned about how dangerous the development of artificial intelligence could be for humanity.
“AI is more dangerous than, say, mismanaged aircraft design or production maintenance or bad car production,” Musk warned.
“In the sense that it has the potential, however small one may regard that probability, but it is non-trivial, it has the potential of civilization destruction.”
He also detailed his plans to build an alternative to ChatGPT, whose creator, OpenAI, he initially funded but later lost control of to Microsoft and Google.
“I’m going to start something which I call TruthGPT, or a maximum truth-seeking AI that tries to understand the nature of the universe,” Musk explained.
“And I think this might be the best path to safety in the sense that an AI that cares about understanding the universe is unlikely to annihilate humans because we are an interesting part of the universe.”
In late March, Musk and a group of AI experts signed an open letter calling on AI labs to “pause for at least 6 months” the training of their most powerful systems and asking world governments to “step in and institute a moratorium” if they did not.
The letter demanded that a worldwide set of “shared safety protocols for advanced AI design and development” be created and overseen by independent experts.
“AI systems with human-competitive intelligence can pose profound risks to society and humanity,” the letter stated.
“Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources,” the letter continued, quoting the Asilomar AI Principles.
“Recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”
Google CEO Sundar Pichai voiced similar concerns in a Sunday night interview with CBS’s “60 Minutes.”
“I think we have to be very thoughtful,” Pichai said. “And I think these are all things society needs to figure out as we move along. It’s not for a company to decide.”
He also noted that the technology’s applications would affect “every product across every company” in the very near future and would disrupt the work of “knowledge workers” like accountants and software engineers.
Pichai disclosed that Google’s AI is already operating beyond its programmed parameters.
The system taught itself to translate Bengali despite never being instructed to do so, a phenomenon that has baffled the program’s engineers.
“There is an aspect of this which we call, all of us in the field call it as a ‘black box,’” he added.
“You know, you don’t fully understand. And you can’t quite tell why it said this, or why it got wrong.”
The interviewer, Scott Pelley, questioned how the company could “turn it loose on society” without fully understanding AI.
“Let me put it this way. I don’t think we fully understand how a human mind works either,” Pichai responded.