(Natural News) A new study conducted by the RAND Corporation warns that advances in artificial intelligence could spark a nuclear apocalypse as soon as 2040. The researchers gathered information from experts in nuclear issues, government, AI research, AI policy, and national security. According to the paper, AI machines might not destroy the world autonomously, but artificial intelligence could encourage humans to take apocalyptic risks with military decisions.
As advances are made in AI for detection, tracking, and targeting, humans will inevitably place greater trust in the technology. The new data intelligence that AI provides will escalate wartime tensions and encourage bold, calculated decisions. As armies trust AI to interpret data, they will be more apt to take drastic measures against one another. It will be like playing chess against a computer that can predict your future moves and make decisions accordingly.
Since 1945, the thought of mutually assured destruction through nuclear war has kept countries accountable to one another. With AI calculating risks more efficiently, armies will be able to attack with greater precision. Trusting in AI, humans may be able to advance their use of nuclear weapons as they predict and mitigate retaliatory forces. Opposing forces may see nuclear weapons as their only way out.
In the paper, researchers highlight the potential of AI to erode the condition of mutually assured destruction, thereby undermining strategic stability. Humans could take more calculated risks with nuclear weapons if they come to trust the AI’s understanding of data. An improvement in sensor technology, for example, could help one side take out opposing submarines, gaining bargaining leverage in an escalating conflict. AI will give armies the knowledge they need to make risky moves that give them the upper hand in battle.
AI was first intended for military use. The Survivable Adaptive Planning Experiment of the 1980s looked to utilize AI to translate reconnaissance data and improve nuclear targeting plans. Today, the Department of Defense is reaching out to Google to integrate AI into military intelligence. At least a dozen Google employees have resigned in protest of Google’s partnership with the Department of Defense to integrate AI with military drones. Project Maven seeks to incorporate AI into drones to scan images, identify targets, and classify images of objects and people to “augment or automate Processing, Exploitation and Dissemination (PED) for unmanned aerial vehicles.”
Improved analytics could help militaries interpret their opposition’s actions, too. This could help humans understand the motives behind an adversary’s decision and could lead to more strategic retaliation as the AI predicts behavior. Then again, what if the computer intelligence miscalculates the data, pushing humans to make decisions not in anyone’s best interest?
“Some experts fear that an increased reliance on artificial intelligence can lead to new types of catastrophic mistakes,” said Andrew Lohn, co-author on the paper and associate engineer at RAND. “There may be pressure to use AI before it is technologically mature, or it may be susceptible to adversarial subversion. Therefore, maintaining strategic stability in coming decades may prove extremely difficult and all nuclear powers must participate in the cultivation of institutions to help limit nuclear risk.”
How will adversaries perceive the AI capabilities of a geopolitical threat? Will their fears and suspicions lead to conflict? How might adversaries use artificial intelligence against one another and will this escalate risk and casualties? An apocalypse might not be machines taking over the world by themselves; it could be human trust in the machine intelligence that lays waste to the world.
For more on the dangers of AI and nuclear war, visit Nuclear.News.
Great documentary on artificial intelligence: the future is here, and it is scary.
February 15th, 2018
The father of artificial intelligence has sounded the alarm, and the clock is ticking down to the singularity. For those who haven’t been following the advancements in AI, maybe now’s the time, because we are approaching the point of no return.
The singularity is the point in time when humans create an artificial intelligence machine that is smarter than they are. Ray Kurzweil, Google’s chief of engineering, says that the singularity will happen in 2045. Louis Rosenberg claims that we are actually closer than that and that the day will arrive sometime in 2030. MIT’s Patrick Winston would have you believe that it will likely be a little closer to Kurzweil’s prediction, though he puts the date at 2040, specifically.
Jürgen Schmidhuber, who is the Co-Founder and Chief Scientist at AI company NNAISENSE, the Director of the Swiss AI lab IDSIA, and heralded by some as the “father of artificial intelligence,” is confident that the singularity “is just 30 years away, if the trend doesn’t break, and there will be rather cheap computational devices that have as many connections as your brain but are much faster.” “There is no doubt in my mind that AIs are going to become super smart,” Schmidhuber says.
When biological life emerged from chemical evolution, 3.5 billion years ago, a random combination of simple, lifeless elements kickstarted the explosion of species populating the planet today. Something of comparable magnitude may be about to happen. “Now the universe is making a similar step forward from lower complexity to higher complexity,” Schmidhuber beams. “And it’s going to be awesome.” But will it really be awesome when human beings are made obsolete by their very creations?
Artificial intelligence has already had an impact on humanity. A recent warning from the Institute for Public Policy Research (IPPR) declared that thousands of jobs are being lost to robots, and that those on the lowest wages are likely to be hardest hit. As it becomes more expensive to hire people because of government intervention like minimum wage hikes and overbearing regulations, more companies are shifting to robotics to save money on labor.
Kurzweil has said that the work happening right now “will change the nature of humanity itself.” He said robots “will reach human intelligence by 2029 and life as we know it will end in 2045.” There is a risk that technology will overtake humanity and make human society irrelevant at best and extinct at worst.
This is just the beginning, as artificial intelligence begins its own evolution. Imagine what it will be able to do in, say, 50 years.
Google’s artificial intelligence sibling DeepMind repurposes Go-playing AI to conquer chess and shogi without aid of human knowledge
AlphaZero, the game-playing AI created by Google sibling DeepMind, has beaten the world’s best chess-playing computer program, having taught itself how to play in under four hours.
The repurposed AI, which as AlphaGo has repeatedly beaten the world’s best Go players, has been generalised so that it can now learn other games. Given nothing but the rules of chess, it took just four hours of training to surpass the world-champion chess program, Stockfish 8, which it then beat in a 100-game matchup.
AlphaZero won or drew all 100 games, according to a non-peer-reviewed research paper published on Cornell University Library’s arXiv.
“Starting from random play, and given no domain knowledge except the game rules, AlphaZero achieved within 24 hours a superhuman level of play in the games of chess and shogi [a similar Japanese board game] as well as Go, and convincingly defeated a world-champion program in each case,” said the paper’s authors that include DeepMind founder Demis Hassabis, who was a child chess prodigy reaching master standard at the age of 13.
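AlphaZero’s actual method (deep neural networks guiding Monte Carlo tree search) is far beyond a few lines, but the core idea the paper describes, self-play reinforcement learning that starts from random play and is given nothing but the rules, can be illustrated on a toy game. The sketch below is purely illustrative and is not from DeepMind’s paper: it uses tabular Q-learning on a simplified Nim (players alternate removing 1 or 2 stones; whoever takes the last stone wins), with a single shared value table updated from the perspective of the player to move.

```python
import random

def train(pile_size=10, episodes=20000, alpha=0.5, eps=0.2, seed=0):
    """Tabular self-play Q-learning on a toy Nim game.

    Players alternate removing 1 or 2 stones from a pile; whoever takes
    the last stone wins. Both sides share one Q-table, scored from the
    current mover's point of view (a negamax-style bootstrap).
    """
    rng = random.Random(seed)
    # Q[pile][action]: estimated value of removing `action` stones.
    Q = {p: {a: 0.0 for a in (1, 2) if a <= p} for p in range(1, pile_size + 1)}
    for _ in range(episodes):
        pile = pile_size
        while pile > 0:
            acts = list(Q[pile])
            # Epsilon-greedy: mostly exploit, sometimes explore.
            a = rng.choice(acts) if rng.random() < eps else max(acts, key=Q[pile].get)
            nxt = pile - a
            if nxt == 0:
                target = 1.0                    # took the last stone: win
            else:
                target = -max(Q[nxt].values())  # opponent moves next, so negate
            Q[pile][a] += alpha * (target - Q[pile][a])
            pile = nxt
    return Q

Q = train()
policy = {p: max(Q[p], key=Q[p].get) for p in Q}
```

Starting from a table of zeros (the analogue of “random play”), the agent discovers the game’s known optimal strategy, always leave your opponent a multiple of three stones, without ever being told it. The same principle, scaled up enormously, is what lets AlphaZero rediscover centuries of chess theory from the rules alone.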
Elon Musk is no stranger to futurecasting a foreboding dystopia ahead for mankind, as we noted recently. But during a speech he gave today at the National Governors Association Summer Meeting in Rhode Island, Musk turned up the future-fearmongery amplifier to ’11’.
As a reminder, Musk made headlines last year when, asked whether humans are living inside a computer simulation, he said he thinks the chances are one in billions that we aren’t.
“The strongest argument for us probably being in a simulation I think is the following: 40 years ago we had Pong – two rectangles and a dot,” Musk stated.
“That’s where we were. Now 40 years later we have photorealistic, 3D simulations with millions of people playing simultaneously and it’s getting better every year. And soon we’ll have virtual reality, we’ll have augmented reality. If you assume any rate of improvement at all, then the games will become indistinguishable from reality, just indistinguishable.”
Here Musk is referring to the exponential growth of technology, the linchpin of the Singularity theory. If in 40 years we’ve gone from two-dimensional Pong to the cusp of augmented and virtual reality, imagine where we’ll be in another forty, or a hundred, or 400. And that is where he began today…
But today, Musk discussed a broad range of topics from energy sources in the future…
“It’s inevitable,” Musk said, speaking of shift to sustainable energy. “But it matters if it happens sooner or later.”
As for those pushing some other type of fusion, Musk notes that the sun is a giant fusion reactor in the sky. “It’s really reliable,” he said. “It comes up every day. If it doesn’t, we’ve got (other) problems.”
To Tesla’s share price:
Musk said he has been on record several times as saying Tesla’s stock price “is higher than we have any right to deserve,” especially based on current and past performance.
“The stock price obviously reflects a lot of optimism on where we will be in the future,” he said. “Those expectations sometimes get out of control. I hate disappointing people, I am trying really hard to meet those expectations.”
Musk added that he won’t be selling any stock “unless I have to for taxes,” and said “I’m going down with the ship… I’ll be the last [to sell].”
Musk addressed government regulation and incentives:
“It sure is important to get the rules right,” Musk said. “Regulations are immortal. They never die unless somebody actually goes and kills them. A lot of times regulations can be put in place for all the right reasons but nobody goes back and kills them because they no longer make sense.”
Musk also focused on the importance of incentives, saying whatever societies incentivize tends to be what happens. “It’s economics 101,” he said.
And what drives him:
“I want to be able to think about the future and feel good about that, to dream what we can to have the future be as good as possible. To be inspired by what is likely to happen and to look forward to the next day. How do we make sure things are great? That’s the underlying principle behind Tesla and SpaceX.”
Within 20 years, he said, driving a car will be like having a horse (i.e. rare and totally optional). “There will not be a steering wheel.”
“There will be people that will have non-autonomous cars, like people have horses,” he said.
“It just would be unusual to use that as a mode of transport.”
But what started off as the latest sales pitch for electric cars quickly devolved into a bizarre rant that, among other things, touched on Elon Musk’s gloomy, apocalyptic vision of how the world could end… (via ReCode)
Musk called on the government to proactively regulate artificial intelligence before things advance too far.
“Until people see robots going down the street killing people, they don’t know how to react because it seems so ethereal,” he said.
“AI is a rare case where I think we need to be proactive in regulation instead of reactive. Because I think by the time we are reactive in AI regulation, it’s too late.”
“Normally the way regulations are set up is a whole bunch of bad things happen, there’s a public outcry, and after many years a regulatory agency is set up to regulate that industry,” he continued.
“It takes forever. That, in the past, has been bad but not something which represented a fundamental risk to the existence of civilization. AI is a fundamental risk to the existence of human civilization.”
Musk has been concerned about AI for years, and he’s working on technology that would connect the human brain to the computer software meant to mimic it.
Full interview below (Musk begins talking around 42 minutes in)…
Assange spoke of the threat of AI-controlled social media via video link at rapper and activist M.I.A.’s Meltdown Festival in the Southbank Centre, London.
Speaking about the future of AI, Assange told a panel including Slovenian philosopher Slavoj Žižek that there will be a time when AI will be used to adjust perception.
“Imagine a Daily Mail run by essentially Artificial Intelligence, what does that look like when there’s only the Daily Mail worldwide? That’s what Facebook and Twitter will shift into,” he said.
Assange referenced the apparent intense pressure Facebook and Google were under to ensure Emmanuel Macron, and not Marine Le Pen, won last month’s French presidential election runoff.
When asked by M.I.A. if AI and VR technology will make society more vulnerable to becoming apolitical, Assange replied: “Yes, of course we can be influenced, but I don’t see that as the main problem.”
“Human beings have always been influenced by sophisticated systems of production, information and experience, [such as the] BBC for example.”
The technologies “just amplify the power of the ability to project into the mind,” he added.
The main concern in Assange’s eyes centers around how AI can be used to advance propaganda.
“The most important development as far as the fate of human beings is concerned is that we are getting close to the threshold where the traditional propaganda function that is employed by the BBC, The Daily Mail, and cultures also, can be encapsulated by AI processes,” Assange said.
“When you have AI programs harvesting all the search queries and YouTube videos someone uploads it starts to lay out perceptual influence campaigns, twenty to thirty moves ahead. This starts to become totally beneath the level of human perception.”
Using Google as an example, and comparing the wit involved to a game of chess, he said at this level human beings become powerless as they can’t even see it happening.
Admitting his vision was dystopian, he suggested that he could be wrong.
“Maybe there will be a new band of technologically empowered human beings that can see this [rueful] fate coming towards us, [which] will be able to extract value or diminish it by directly engaging with it – that’s also possible.”
Another insight offered by the WikiLeaks founder was his opinion that engineers involved in AI lack perception about what they’re doing.
“I know from our sources deep inside the Silicon Valley institution[s] that they genuinely believe that they are going to produce AI that’s so powerful, relatively soon, that people will have their brains digitized, uploaded to these AIs and live forever in simulation, therefore have eternal life.”
“It’s like a religion for atheists,” he added. “And given you’re in a simulation, why not program the simulation to have endless drug and sex orgy parties around you.”
Assange said this vision makes them work harder, and that the dystopian consequences of their work are overshadowed by a cultural and industrial bias against perceiving them.
He concluded that the normal perception someone would have regarding their work has been supplanted by “this ridiculous quasi-religious model that it’s all going to lead to nirvana.”