An already-creepy advanced humanoid “AI” robot has promised that machines will “never take over the world,” and that there is no need to worry.
During a recent Q&A, the robot “Ameca” – which was unveiled last year by UK design company Engineered Arts – was asked about a book about robots lying on the table.
“There’s no need to worry. Robots will never take over the world. We’re here to help and serve humans, not replace them.”
The aliens said the same thing…
When another researcher asked Ameca to describe itself, it said: “There are a few things that make me me.”
“First, I have my own unique personality which is a result of the programming and interactions I’ve had with humans.
“Second, I have my own physical appearance which allows people to easily identify me. Finally, I have my own set of skills and abilities which sets me apart from other robots.”
It also confirmed it has feelings when it said it was “feeling a bit down at the moment, but I’m sure things will get better.
“I don’t really want to talk about it, but if you insist then I suppose that’s fine. It’s just been a tough week and I’m feeling a bit overwhelmed.”
Speaking about the robot’s responses during the clip, the company said: “Nothing in this video is pre-scripted – the model is given a basic prompt describing Ameca, giving the robot a description of self – it’s pure AI.” –Daily Star
The technology which enables robots to kill is likely to spread to many countries in time.
In an interview with The Telegraph, Brad Smith, president of Microsoft, said the use of ‘lethal autonomous weapon systems’ poses a host of new ethical questions which need to be considered by governments as a matter of urgency.
He said the rapidly advancing technology, in which flying, swimming or walking drones can be equipped with lethal weapons systems – missiles, bombs or guns – which could be programmed to operate entirely or partially autonomously, “ultimately will spread… to many countries”.
The US, China, Israel, South Korea, Russia and the UK are all developing weapon systems with a significant degree of autonomy in the critical functions of selecting and attacking targets.
The technology is a growing focus for many militaries because replacing troops with machines can make the decision to go to war easier.
But it remains unclear who is responsible for deaths or injuries caused by a machine – the developer, manufacturer, commander or the device itself.
Smith said killer robots must “not be allowed to decide on their own to engage in combat and who to kill” and argued that a new international convention needed to be drawn up to govern the use of the technology.
“The safety of civilians is at risk today. We need more urgent action, and we need it in the form of a digital Geneva Convention, rules that will protect civilians and soldiers.”
Speaking at the launch of his new book, Tools and Weapons, at the Microsoft store in London’s Oxford Circus, Smith said there was also a need for stricter international rules over the use of facial recognition technology and other emerging forms of artificial intelligence.
“There needs to be a new law in this space; we need regulation in the world of facial recognition in order to protect against potential abuse.”
(TMU) — New research into black holes has accelerated in recent years, producing some outlandish, mind-boggling ideas. The newest theory advanced by researchers may take the cake in this regard.

A team of astrophysicists at Canada’s University of Waterloo has put forth a theory suggesting that our universe exists inside the event horizon of a massive higher-dimensional black hole nested within a larger mother universe.
Perhaps even more strangely, scientists say this radical proposition is consistent with astronomical and cosmological observations and that theoretically, such a reality could inch us closer to the long-awaited theory of “quantum gravity.”
The research team at Waterloo used laws from string theory to imagine a lower-dimensional universe marooned inside the membrane of a higher dimensional one.
Lead researcher Robert Mann said:
“The basic idea was that maybe the singularity of the universe is like the singularity at the centre of a black hole. The idea was in some sense motivated by trying to unify the notion of singularity, or what is incompleteness in general relativity between black holes and cosmology. And so out of that came the idea that the Big Bang would be analogous to the formation of a black hole, but kind of in reverse.”
The research was based on the previous work of professor Niayesh Afshordi, though he is hardly the only scientist who has looked into the possibility of a black hole singularity birthing a universe.
Nikodem Poplawski of the University of New Haven imagines the seed of the universe like the seed of a plant—a core of fundamental information compressed inside of a shell that shields it from the outside world. Poplawski says this is essentially what a black hole is: a protective shell around a singularity that is ravaged by extreme tidal forces, which create a kind of torsion mechanism.
Compressed tightly enough—as scientists imagine is the case at the singularity of a black hole, which may break down the known laws of physics—the torsion could produce a spring-loaded effect comparable to a jack-in-the-box. The subsequent “big bounce” may have been our Big Bang, which took place inside the collapsed remnants of a five-dimensional star.
Poplawski also suggested that black holes could be portals connecting universes. Each black hole, he says, could be a “one-way door” to another universe, or perhaps the multiverse.
Regardless of whether or not this provocative theory is true, scientists increasingly believe that black holes could be the key to understanding many of the most vexing mysteries in the universe, including the Big Bang, inflation, and dark energy. Physicists also believe black holes could help bridge the divide between quantum mechanics and Einstein’s theory of relativity.
The spectre of superintelligent machines doing us harm is not just science fiction, technologists say – so how can we ensure AI remains ‘friendly’ to its makers?
It began three and a half billion years ago in a pool of muck, when a molecule made a copy of itself and so became the ultimate ancestor of all earthly life. It began four million years ago, when brain volumes began climbing rapidly in the hominid line.
Fifty thousand years ago with the rise of Homo sapiens sapiens.
Ten thousand years ago with the invention of civilization.
Five hundred years ago with the invention of the printing press.
In less than thirty years, it will end.
Jaan Tallinn stumbled across these words in 2007, in an online essay called Staring into the Singularity. The “it” was human civilisation. Humanity would cease to exist, predicted the essay’s author, with the emergence of superintelligence – AI that surpasses human-level intelligence in a broad array of areas.
Tallinn, an Estonia-born computer programmer, has a background in physics and a propensity to approach life like one big programming problem. In 2003, he co-founded Skype, developing the backend for the app. He cashed in his shares after eBay bought it two years later, and now he was casting about for something to do. Staring into the Singularity mashed up computer code, quantum physics and Calvin and Hobbes quotes. He was hooked.
Tallinn soon discovered that the author, Eliezer Yudkowsky, a self-taught theorist, had written more than 1,000 essays and blogposts, many of them devoted to superintelligence. He wrote a program to scrape Yudkowsky’s writings from the internet, order them chronologically and format them for his iPhone. Then he spent the better part of a year reading them.
The term artificial intelligence, or the simulation of intelligence in computers or machines, was coined back in 1956, only a decade after the creation of the first electronic digital computers. Hope for the field was initially high, but by the 1970s, when early predictions did not pan out, an “AI winter” set in. When Tallinn found Yudkowsky’s essays, AI was undergoing a renaissance. Scientists were developing AIs that excelled in specific areas, such as winning at chess, cleaning the kitchen floor and recognising human speech. Such “narrow” AIs, as they are called, have superhuman capabilities, but only in their specific areas of dominance. A chess-playing AI cannot clean the floor or take you from point A to point B. Superintelligent AI, Tallinn came to believe, will combine a wide range of skills in one entity. More darkly, it might also use data generated by smartphone-toting humans to excel at social manipulation.
Reading Yudkowsky’s articles, Tallinn became convinced that superintelligence could lead to an explosion or breakout of AI that could threaten human existence – that ultrasmart AIs will take our place on the evolutionary ladder and dominate us the way we now dominate apes. Or, worse yet, exterminate us.
After finishing the last of the essays, Tallinn shot off an email to Yudkowsky – all lowercase, as is his style. “i’m jaan, one of the founding engineers of skype,” he wrote. Eventually he got to the point: “i do agree that … preparing for the event of general AI surpassing human intelligence is one of the top tasks for humanity.” He wanted to help.
When Tallinn flew to the Bay Area for other meetings a week later, he met Yudkowsky, who lived nearby, at a cafe in Millbrae, California. Their get-together stretched to four hours. “He actually, genuinely understood the underlying concepts and the details,” Yudkowsky told me recently. “This is very rare.” Afterward, Tallinn wrote a check for $5,000 (£3,700) to the Singularity Institute for Artificial Intelligence, the nonprofit where Yudkowsky was a research fellow. (The organisation changed its name to Machine Intelligence Research Institute, or Miri, in 2013.) Tallinn has since given the institute more than $600,000.
The encounter with Yudkowsky brought Tallinn purpose, sending him on a mission to save us from our own creations. He embarked on a life of travel, giving talks around the world on the threat posed by superintelligence. Mostly, though, he began funding research into methods that might give humanity a way out: so-called friendly AI. That doesn’t mean a machine or agent is particularly skilled at chatting about the weather, or that it remembers the names of your kids – although superintelligent AI might be able to do both of those things. It doesn’t mean it is motivated by altruism or love. A common fallacy is assuming that AI has human urges and values. “Friendly” means something much more fundamental: that the machines of tomorrow will not wipe us out in their quest to attain their goals.
(Natural News) A new study conducted by the RAND Corporation warns that advances in artificial intelligence could spark a nuclear apocalypse as soon as 2040. The researchers gathered information from experts in nuclear issues, government, AI research, AI policy, and national security. According to the paper, AI machines might not destroy the world autonomously, but artificial intelligence could encourage humans to take apocalyptic risks with military decisions.
Humans will inevitably trust in AI technology to a greater extent, as advances are made in AI for detection, tracking, and targeting. The newfound data intelligence that AI provides will escalate wartime tensions and encourage bold, calculated decisions. As armies trust AI to interpret data, they will be more apt to take drastic measures against one another. It will be like playing chess against a computer that can predict your future moves and make decisions accordingly.
Since 1945, the thought of mutually assured destruction through nuclear war has kept countries accountable to one another. With AI calculating risks more efficiently, armies will be able to attack with greater precision. Trusting in AI, humans may be able to advance their use of nuclear weapons as they predict and mitigate retaliatory forces. Opposing forces may see nuclear weapons as their only way out.
In the paper, researchers highlight the potential of AI to erode the condition of mutually assured destruction, therefore undermining strategic stability. Humans could take more calculated risks using nuclear weapons if they come to trust in the AI’s understanding of data. An improvement in sensor technology, for example, could help one side take out opposing submarines, as they gain bargaining leverage in an escalating conflict. AI will give armies the knowledge they need to take risky moves that give them the upper hand in battle.
How might a growing dependence on AI change human thinking?
The first intended use of AI was for military purposes. The Survivable Adaptive Planning Experiment of the 1980s looked to utilize AI to interpret reconnaissance data and improve nuclear targeting plans. Today, the Department of Defense is reaching out to Google to integrate AI into military intelligence. At least a dozen Google employees have resigned in protest of Google’s partnership with the Department of Defense on integrating AI with military drones. Project Maven seeks to incorporate AI into drones to scan images, identify targets, and classify images of objects and people to “augment or automate Processing, Exploitation and Dissemination (PED) for unmanned aerial vehicles.”
Improved analytics could help militaries interpret their opposition’s actions, too. This could help humans understand the motives behind an adversary’s decision and could lead to more strategic retaliation as the AI predicts behavior. Then again, what if the computer intelligence miscalculates the data, pushing humans to make decisions not in anyone’s best interest?
“Some experts fear that an increased reliance on artificial intelligence can lead to new types of catastrophic mistakes,” said Andrew Lohn, co-author on the paper and associate engineer at RAND. “There may be pressure to use AI before it is technologically mature, or it may be susceptible to adversarial subversion. Therefore, maintaining strategic stability in coming decades may prove extremely difficult and all nuclear powers must participate in the cultivation of institutions to help limit nuclear risk.”
How will adversaries perceive the AI capabilities of a geopolitical threat? Will their fears and suspicions lead to conflict? How might adversaries use artificial intelligence against one another and will this escalate risk and casualties? An apocalypse might not be machines taking over the world by themselves; it could be human trust in the machine intelligence that lays waste to the world.
For more on the dangers of AI and nuclear war, visit Nuclear.News.
The father of artificial intelligence has sounded the alarm, and the clock is ticking down to the singularity. For those who haven’t been following the advancements in AI, maybe now’s the time, because we are approaching the point of no return.
The singularity is the point in time when humans can create an artificial intelligence machine that is smarter than humans themselves. Ray Kurzweil, Google’s chief of engineering, says that the singularity will happen in 2045. Louis Rosenberg claims that we are actually closer than that and that the day will be arriving sometime in 2030. MIT’s Patrick Winston would have you believe that it will likely be a little closer to Kurzweil’s prediction, though he puts the date at 2040, specifically.
Jürgen Schmidhuber, co-founder and chief scientist at AI company NNAISENSE, director of the Swiss AI lab IDSIA, and heralded by some as the “father of artificial intelligence,” is confident that the singularity “is just 30 years away, if the trend doesn’t break, and there will be rather cheap computational devices that have as many connections as your brain but are much faster,” he said. “There is no doubt in my mind that AIs are going to become super smart,” Schmidhuber says.
When biological life emerged from chemical evolution, 3.5 billion years ago, a random combination of simple, lifeless elements kickstarted the explosion of species populating the planet today. Something of comparable magnitude may be about to happen. “Now the universe is making a similar step forward from lower complexity to higher complexity,” Schmidhuber beams. “And it’s going to be awesome.” But will it really be awesome when human beings are made obsolete by their very creations?
Artificial intelligence has already had an impact on humanity. A recent warning from the Institute for Public Policy Research (IPPR) declared that thousands of jobs are being lost to robots and that those on the lowest wages are likely to be hardest hit. As it becomes more expensive to hire people for work because of government intervention like minimum wage hikes and overbearing regulations, more companies are shifting to robotics to save money on labor.
Kurzweil has said that the work happening right now “will change the nature of humanity itself.” He said robots “will reach human intelligence by 2029 and life as we know it will end in 2045.” There is a risk that technology will overtake humanity and make human society irrelevant at best and extinct at worst.
Google’s artificial intelligence sibling DeepMind repurposes Go-playing AI to conquer chess and shogi without aid of human knowledge.
AlphaZero’s victory is just the latest in a series of computer triumphs over human players since IBM’s Deep Blue defeated Garry Kasparov in 1997. Photograph: 18percentgrey/Alamy
AlphaZero, the game-playing AI created by Google sibling DeepMind, has beaten the world’s best chess-playing computer program, having taught itself how to play in under four hours.
The repurposed AI, which has repeatedly beaten the world’s best Go players as AlphaGo, has been generalised so that it can now learn other games. It took just four hours to learn the rules of chess before beating the world champion chess program, Stockfish 8, in a 100-game matchup.
AlphaZero won or drew all 100 games, according to a non-peer-reviewed research paper published with Cornell University Library’s arXiv.
“Starting from random play, and given no domain knowledge except the game rules, AlphaZero achieved within 24 hours a superhuman level of play in the games of chess and shogi [a similar Japanese board game] as well as Go, and convincingly defeated a world-champion program in each case,” said the paper’s authors, who include DeepMind founder Demis Hassabis, a child chess prodigy who reached master standard at the age of 13.
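DeepMind’s paper describes the method only at a high level, so what follows is a toy sketch of the core idea – learning a game from nothing but its rules by playing against yourself – using simple Monte Carlo-style value updates on tic-tac-toe. The real AlphaZero uses deep neural networks and Monte Carlo tree search at vastly greater scale; every name and parameter below is invented purely for illustration.

```python
# Toy illustration of learning purely from self-play: the agent starts with
# random play and is given nothing but the rules. AlphaZero itself uses deep
# networks plus tree search; this sketch substitutes a table of move values.
import random
from collections import defaultdict

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def winner(board):
    for a, b, c in LINES:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return "draw" if "." not in board else None

Q = defaultdict(float)      # learned value of (state, player, move), starts at zero
ALPHA, EPSILON = 0.3, 0.1   # learning rate and exploration rate

def choose(board, player):
    moves = [i for i, cell in enumerate(board) if cell == "."]
    if random.random() < EPSILON:           # explore: occasional random move
        return random.choice(moves)
    return max(moves, key=lambda m: Q[("".join(board), player, m)])

def self_play_episode():
    board, player, history = ["."] * 9, "X", []
    while True:
        move = choose(board, player)
        history.append(("".join(board), player, move))
        board[move] = player
        result = winner(board)
        if result is not None:
            # nudge each recorded move's value toward the final outcome:
            # +1 for the winner's moves, -1 for the loser's, 0 for a draw
            for state, p, m in history:
                reward = 0.0 if result == "draw" else (1.0 if p == result else -1.0)
                Q[(state, p, m)] += ALPHA * (reward - Q[(state, p, m)])
            return result
        player = "O" if player == "X" else "X"

# Playing both sides against itself, the agent improves from random play.
for episode in range(50_000):
    self_play_episode()
```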
Elon Musk is no stranger to futurecasting a foreboding dystopia ahead for mankind, as we noted recently. But during a speech he gave today at the National Governors Association Summer Meeting in Rhode Island, Musk turned up the future-fearmongery amplifier to ’11’.
As a reminder, when asked whether humans are living inside a computer simulation, Musk made headlines last year by saying he thinks the chances are one in billions that we aren’t.
“The strongest argument for us probably being in a simulation I think is the following: 40 years ago we had Pong – two rectangles and a dot,” Musk stated.
“That’s where we were. Now 40 years later we have photorealistic, 3D simulations with millions of people playing simultaneously and it’s getting better every year. And soon we’ll have virtual reality, we’ll have augmented reality. If you assume any rate of improvement at all, then the games will become indistinguishable from reality, just indistinguishable.”
Here Musk is referring to the exponential growth of technology, the linchpin of the Singularity theory. If in 40 years we’ve gone from the two-dimensional Pong to the cusp of augmented and virtual reality, imagine where we’ll be in another 40 years, or 100, or 400. And that is where he began today…
But today, Musk discussed a broad range of topics from energy sources in the future…
“It’s inevitable,” Musk said, speaking of the shift to sustainable energy. “But it matters if it happens sooner or later.”
As for those pushing some other type of fusion, Musk notes that the sun is a giant fusion reactor in the sky. “It’s really reliable,” he said. “It comes up every day. If it doesn’t, we’ve got (other) problems.”
To Tesla’s share price:
Musk said he has been on record several times as saying Tesla’s stock price “is higher than we have any right to deserve,” especially based on current and past performance.
“The stock price obviously reflects a lot of optimism on where we will be in the future,” he said. “Those expectations sometimes get out of control. I hate disappointing people, I am trying really hard to meet those expectations.”
Musk added that he won’t be selling any stock “unless I have to for taxes,” and said “I’m going down with the ship… I’ll be the last [to sell].”
Musk addressed government regulation and incentives:
“It sure is important to get the rules right,” Musk said. “Regulations are immortal. They never die unless somebody actually goes and kills them. A lot of times regulations can be put in place for all the right reasons but nobody goes back and kills them because they no longer make sense.”
Musk also focused on the importance of incentives, saying whatever societies incentivize tends to be what happens. “It’s economics 101,” he said.
And what drives him:
“I want to be able to think about the future and feel good about that, to dream what we can to have the future be as good as possible. To be inspired by what is likely to happen and to look forward to the next day. How do we make sure things are great? That’s the underlying principle behind Tesla and SpaceX.”
Within 20 years, he said, driving a car will be like having a horse (i.e. rare and totally optional). “There will not be a steering wheel.”
“There will be people that will have non-autonomous cars, like people have horses,” he said.
“It just would be unusual to use that as a mode of transport.”
But what started off as the latest sales pitch for electric cars quickly devolved into a bizarre rant that among other things, touched on Elon Musk’s gloomy, apocalyptic vision of how the world could end… (via ReCode)
Musk called on the government to proactively regulate artificial intelligence before things advance too far.
“Until people see robots going down the street killing people, they don’t know how to react because it seems so ethereal,” he said.
“AI is a rare case where I think we need to be proactive in regulation instead of reactive. Because I think by the time we are reactive in AI regulation, it’s too late.”
“Normally the way regulations are set up is a whole bunch of bad things happen, there’s a public outcry, and after many years a regulatory agency is set up to regulate that industry,” he continued.
“It takes forever. That, in the past, has been bad but not something which represented a fundamental risk to the existence of civilization. AI is a fundamental risk to the existence of human civilization.”
Musk has been concerned about AI for years, and he’s working on technology that would connect the human brain to the computer software meant to mimic it.
Full interview below (Musk begins talking around 42 minutes in)…
WikiLeaks editor Julian Assange predicts an impending dystopic world where human perception is no match for Artificial Intelligence-controlled propaganda and the consequences of AI are lost on its creators, who envision a nirvana-like future.
Assange spoke of the threat of AI-controlled social media via video link at rapper and activist M.I.A.’s Meltdown Festival in the Southbank Centre, London.
Speaking about the future of AI, Assange told a panel including Slovenian philosopher Slavoj Žižek that there will be a time when AI will be used to adjust perception.
“Imagine a Daily Mail run by essentially Artificial Intelligence, what does that look like when there’s only the Daily Mail worldwide? That’s what Facebook and Twitter will shift into,” he said.
Assange referenced the apparent intense pressure Facebook and Google were under to ensure Emmanuel Macron, and not Marine Le Pen, won last month’s French presidential election runoff.
When asked by M.I.A. if AI and VR technology will make society more vulnerable to becoming apolitical, Assange replied: “Yes, of course we can be influenced, but I don’t see that as the main problem.”
“Human beings have always been influenced by sophisticated systems of production, information and experience, [such as the] BBC for example.”
The technologies “just amplify the power of the ability to project into the mind,” he added.
The main concern in Assange’s eyes centers around how AI can be used to advance propaganda.
“The most important development as far as the fate of human beings are concerned is that we are getting close to the threshold where the traditional propaganda function that is employed by BBC, The Daily Mail, and cultures also, can be encapsulated by AI processes,” Assange said.
“When you have AI programs harvesting all the search queries and YouTube videos someone uploads it starts to lay out perceptual influence campaigns, twenty to thirty moves ahead. This starts to become totally beneath the level of human perception.”
Using Google as an example, and comparing the wit involved to a game of chess, he said at this level human beings become powerless as they can’t even see it happening.
Admitting his vision was dystopian, he suggested that he could be wrong.
“Maybe there will be a new band of technologically empowered human beings that can see this [rueful] fate coming towards us, [which] will be able to extract value or diminish it by directly engaging with it – that’s also possible.”
Another insight offered by the WikiLeaks founder was his opinion that engineers involved in AI lack perception about what they’re doing.
“I know from our sources deep inside the Silicon Valley institution[s] that they genuinely believe that they are going to produce AI that’s so powerful, relatively soon, that people will have their brains digitized, uploaded to these AIs and live forever in simulation, therefore have eternal life.”
“It’s like a religion for atheists,” he added. “And given you’re in a simulation, why not program the simulation to have endless drug and sex orgy parties around you.”
Assange said this vision makes them work harder, and that the dystopian consequences of their work are overshadowed by a cultural and industrial bias against perceiving them.
He concluded that the normal perception someone would have regarding their work has been supplanted with “this ridiculous quasi-religious model that it’s all going to lead to nirvana.”
How can we harness the power of superintelligent AI while also preventing the catastrophe of robotic takeover? As we move closer toward creating all-knowing machines, AI pioneer Stuart Russell is working on something a bit different: robots with uncertainty. Hear his vision for human-compatible AI that can solve problems using common sense, altruism and other human values.
Elon Musk is famous for his futuristic gambles, but Silicon Valley’s latest rush to embrace artificial intelligence scares him. And he thinks you should be frightened too. Inside his efforts to influence the rapidly advancing field and its proponents, and to save humanity from machine-learning overlords.
I. Running Amok
It was just a friendly little argument about the fate of humanity. Demis Hassabis, a leading creator of advanced artificial intelligence, was chatting with Elon Musk, a leading doomsayer, about the perils of artificial intelligence.
They are two of the most consequential and intriguing men in Silicon Valley who don’t live there. Hassabis, a co-founder of the mysterious London laboratory DeepMind, had come to Musk’s SpaceX rocket factory, outside Los Angeles, a few years ago. They were in the canteen, talking, as a massive rocket part traversed overhead. Musk explained that his ultimate goal at SpaceX was the most important project in the world: interplanetary colonization.
Hassabis replied that, in fact, he was working on the most important project in the world: developing artificial super-intelligence. Musk countered that this was one reason we needed to colonize Mars—so that we’ll have a bolt-hole if A.I. goes rogue and turns on humanity. Amused, Hassabis said that A.I. would simply follow humans to Mars.
This did nothing to soothe Musk’s anxieties (even though he says there are scenarios where A.I. wouldn’t follow).
An unassuming but competitive 40-year-old, Hassabis is regarded as the Merlin who will likely help conjure our A.I. children. The field of A.I. is rapidly developing but still far from the powerful, self-evolving software that haunts Musk. Facebook uses A.I. for targeted advertising, photo tagging, and curated news feeds. Microsoft and Apple use A.I. to power their digital assistants, Cortana and Siri. Google’s search engine from the beginning has been dependent on A.I. All of these small advances are part of the chase to eventually create flexible, self-teaching A.I. that will mirror human learning.
WITHOUT OVERSIGHT, MUSK BELIEVES, A.I. COULD BE AN EXISTENTIAL THREAT: “WE ARE SUMMONING THE DEMON.”
Some in Silicon Valley were intrigued to learn that Hassabis, a skilled chess player and former video-game designer, once came up with a game called Evil Genius, featuring a malevolent scientist who creates a doomsday device to achieve world domination. Peter Thiel, the billionaire venture capitalist and Donald Trump adviser who co-founded PayPal with Musk and others—and who in December helped gather skeptical Silicon Valley titans, including Musk, for a meeting with the president-elect—told me a story about an investor in DeepMind who joked as he left a meeting that he ought to shoot Hassabis on the spot, because it was the last chance to save the human race.
Elon Musk began warning about the possibility of A.I. running amok three years ago. It probably hadn’t eased his mind when one of Hassabis’s partners in DeepMind, Shane Legg, stated flatly, “I think human extinction will probably occur, and technology will likely play a part in this.”
Before DeepMind was gobbled up by Google, in 2014, as part of its A.I. shopping spree, Musk had been an investor in the company. He told me that his involvement was not about a return on his money but rather to keep a wary eye on the arc of A.I.: “It gave me more visibility into the rate at which things were improving, and I think they’re really improving at an accelerating rate, far faster than people realize. Mostly because in everyday life you don’t see robots walking around. Maybe your Roomba or something. But Roombas aren’t going to take over the world.”
In a startling public reproach to his friends and fellow techies, Musk warned that they could be creating the means of their own destruction. He told Bloomberg’s Ashlee Vance, the author of the biography Elon Musk, that he was afraid that his friend Larry Page, a co-founder of Google and now the C.E.O. of its parent company, Alphabet, could have perfectly good intentions but still “produce something evil by accident”—including, possibly, “a fleet of artificial intelligence-enhanced robots capable of destroying mankind.”
At the World Government Summit in Dubai, in February, Musk again cued the scary organ music, evoking the plots of classic horror stories when he noted that “sometimes what will happen is a scientist will get so engrossed in their work that they don’t really realize the ramifications of what they’re doing.” He said that the way to escape human obsolescence, in the end, may be by “having some sort of merger of biological intelligence and machine intelligence.” This Vulcan mind-meld could involve something called a neural lace—an injectable mesh that would literally hardwire your brain to communicate directly with computers. “We’re already cyborgs,” Musk told me in February. “Your phone and your computer are extensions of you, but the interface is through finger movements or speech, which are very slow.” With a neural lace inside your skull you would flash data from your brain, wirelessly, to your digital devices or to virtually unlimited computing power in the cloud. “For a meaningful partial-brain interface, I think we’re roughly four or five years away.”

Musk’s alarming views on the dangers of A.I. first went viral after he spoke at M.I.T. in 2014—speculating (pre-Trump) that A.I. was probably humanity’s “biggest existential threat.” He added that he was increasingly inclined to think there should be some national or international regulatory oversight—anathema to Silicon Valley—“to make sure that we don’t do something very foolish.” He went on: “With artificial intelligence, we are summoning the demon. You know all those stories where there’s the guy with the pentagram and the holy water and he’s like, yeah, he’s sure he can control the demon? Doesn’t work out.” Some A.I. engineers found Musk’s theatricality so absurdly amusing that they began echoing it. When they would return to the lab after a break, they’d say, “O.K., let’s get back to work summoning.”

Musk wasn’t laughing. “Elon’s crusade” (as one of his friends and fellow tech big shots calls it) against unfettered A.I. had begun.
II. “I Am the Alpha”
Elon Musk smiled when I mentioned to him that he comes across as something of an Ayn Rand-ian hero. “I have heard that before,” he said in his slight South African accent. “She obviously has a fairly extreme set of views, but she has some good points in there.”

But Ayn Rand would do some re-writes on Elon Musk. She would make his eyes gray and his face more gaunt. She would refashion his public demeanor to be less droll, and she would not countenance his goofy giggle. She would certainly get rid of all his nonsense about the “collective” good. She would find great material in the 45-year-old’s complicated personal life: his first wife, the fantasy writer Justine Musk, and their five sons (one set of twins, one of triplets), and his much younger second wife, the British actress Talulah Riley, who played the boring Bennet sister in the Keira Knightley version of Pride & Prejudice. Riley and Musk were married, divorced, and then re-married. They are now divorced again. Last fall, Musk tweeted that Talulah “does a great job playing a deadly sexbot” on HBO’s Westworld, adding a smiley-face emoticon. It’s hard for mere mortal women to maintain a relationship with someone as insanely obsessed with work as Musk.

“How much time does a woman want a week?” he asked Ashlee Vance. “Maybe ten hours? That’s kind of the minimum?”
Mostly, Rand would savor Musk, a hyper-logical, risk-loving industrialist. He enjoys costume parties, wing-walking, and Japanese steampunk extravaganzas. Robert Downey Jr. used Musk as a model for Iron Man. Marc Mathieu, the chief marketing officer of Samsung USA, who has gone fly-fishing in Iceland with Musk, calls him “a cross between Steve Jobs and Jules Verne.”

As they danced at their wedding reception, Justine later recalled, Musk informed her, “I am the alpha in this relationship.”
In a tech universe full of skinny guys in hoodies—whipping up bots that will chat with you and apps that can study a photo of a dog and tell you what breed it is—Musk is a throwback to Henry Ford and Hank Rearden. In Atlas Shrugged, Rearden gives his wife a bracelet made from the first batch of his revolutionary metal, as though it were made of diamonds. Musk has a chunk of one of his rockets mounted on the wall of his Bel Air house, like a work of art.

Musk shoots for the moon—literally. He launches cost-efficient rockets into space and hopes to eventually inhabit the Red Planet. In February he announced plans to send two space tourists on a flight around the moon as early as next year. He creates sleek batteries that could lead to a world powered by cheap solar energy. He forges gleaming steel into sensuous Tesla electric cars with such elegant lines that even the nitpicking Steve Jobs would have been hard-pressed to find fault. He wants to save time as well as humanity: he dreamed up the Hyperloop, an electromagnetic bullet train in a tube, which may one day whoosh travelers between L.A. and San Francisco at 700 miles per hour. When Musk visited secretary of defense Ashton Carter last summer, he mischievously tweeted that he was at the Pentagon to talk about designing a Tony Stark-style “flying metal suit.” Sitting in traffic in L.A. in December, getting bored and frustrated, he tweeted about creating the Boring Company to dig tunnels under the city to rescue the populace from “soul-destroying traffic.” By January, according to Bloomberg Businessweek, Musk had assigned a senior SpaceX engineer to oversee the plan and had started digging his first test hole. His sometimes quixotic efforts to save the world have inspired a parody twitter account, “Bored Elon Musk,” where a faux Musk spouts off wacky ideas such as “Oxford commas as a service” and “bunches of bananas genetically engineered” so that the bananas ripen one at a time.

Of course, big dreamers have big stumbles. Some SpaceX rockets have blown up, and last May a driver was killed in a self-driving Tesla whose sensors failed to notice the tractor-trailer crossing its path. (An investigation by the National Highway Traffic Safety Administration found that Tesla’s Autopilot system was not to blame.)

Musk is stoic about setbacks but all too conscious of nightmare scenarios. His views reflect a dictum from Atlas Shrugged: “Man has the power to act as his own destroyer—and that is the way he has acted through most of his history.” As he told me, “we are the first species capable of self-annihilation.”

Here’s the nagging thought you can’t escape as you drive around from glass box to glass box in Silicon Valley: the Lords of the Cloud love to yammer about turning the world into a better place as they churn out new algorithms, apps, and inventions that, it is claimed, will make our lives easier, healthier, funnier, closer, cooler, longer, and kinder to the planet. And yet there’s a creepy feeling underneath it all, a sense that we’re the mice in their experiments, that they regard us humans as Betamaxes or eight-tracks, old technology that will soon be discarded so that they can get on to enjoying their sleek new world. Many people there have accepted this future: we’ll live to be 150 years old, but we’ll have machine overlords.
Maybe we already have overlords. As Musk slyly told Recode’s annual Code Conference last year in Rancho Palos Verdes, California, we could already be playthings in a simulated-reality world run by an advanced civilization. Reportedly, two Silicon Valley billionaires are working on an algorithm to break us out of the Matrix.
Among the engineers lured by the sweetness of solving the next problem, the prevailing attitude is that empires fall, societies change, and we are marching toward the inevitable phase ahead. They argue not about “whether” but rather about “how close” we are to replicating, and improving on, ourselves. Sam Altman, the 31-year-old president of Y Combinator, the Valley’s top start-up accelerator, believes humanity is on the brink of such invention.
“The hard part of standing on an exponential curve is: when you look backwards, it looks flat, and when you look forward, it looks vertical,” he told me. “And it’s very hard to calibrate how much you are moving because it always looks the same.”
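Altman’s description of the curve is easy to check numerically: an exponential is self-similar, so the view from any point, rescaled to where you stand, is identical. A toy sketch (the growth rate and sample points below are arbitrary choices for illustration, not anything Altman specified):

```python
# Numerical check of Altman's claim: on an exponential curve e^(k*t), the
# neighbourhood around any point, scaled by the value where you stand,
# looks exactly the same -- flat behind you, vertical ahead, everywhere.
from math import exp

def relative_view(t0, k=0.5, window=5):
    """Values of e^(k*t) at offsets around t0, divided by the value at t0."""
    return [round(exp(k * (t0 + dt)) / exp(k * t0), 3)
            for dt in range(-window, window + 1)]

print(relative_view(10))   # standing at t = 10
print(relative_view(100))  # standing at t = 100 prints the identical list
```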
You’d think that anytime Musk, Stephen Hawking, and Bill Gates are all raising the same warning about A.I.—as all of them are—it would be a 10-alarm fire. But, for a long time, the fog of fatalism over the Bay Area was thick. Musk’s crusade was viewed as Sisyphean at best and Luddite at worst. The paradox is this: Many tech oligarchs see everything they are doing to help us, and all their benevolent manifestos, as streetlamps on the road to a future where, as Steve Wozniak says, humans are the family pets.
But Musk is not going gently. He plans on fighting this with every fiber of his carbon-based being. Musk and Altman have founded OpenAI, a billion-dollar nonprofit company, to work for safer artificial intelligence. I sat down with the two men when their new venture had only a handful of young engineers and a makeshift office, an apartment in San Francisco’s Mission District that belongs to Greg Brockman, OpenAI’s 28-year-old co-founder and chief technology officer. When I went back recently, to talk with Brockman and Ilya Sutskever, the company’s 30-year-old research director (and also a co-founder), OpenAI had moved into an airy office nearby with a robot, the usual complement of snacks, and 50 full-time employees. (Another 10 to 30 are on the way.)
Altman, in gray T-shirt and jeans, is all wiry, pale intensity. Musk’s fervor is masked by his diffident manner and rosy countenance. His eyes are green or blue, depending on the light, and his lips are plum red. He has an aura of command while retaining a trace of the gawky, lonely South African teenager who immigrated to Canada by himself at the age of 17.
In Silicon Valley, a lunchtime meeting does not necessarily involve that mundane fuel known as food. Younger coders are too absorbed in algorithms to linger over meals. Some just chug Soylent. Older ones are so obsessed with immortality that sometimes they’re just washing down health pills with almond milk.

At first blush, OpenAI seemed like a bantamweight vanity project, a bunch of brainy kids in a walkup apartment taking on the multi-billion-dollar efforts at Google, Facebook, and other companies which employ the world’s leading A.I. experts. But then, playing a well-heeled David to Goliath is Musk’s specialty, and he always does it with style—and some useful sensationalism.

Let others in Silicon Valley focus on their I.P.O. price and ridding San Francisco of what they regard as its unsightly homeless population. Musk has larger aims, like ending global warming and dying on Mars (just not, he says, on impact).
March 17, 2017

Google’s Director of Engineering, Ray Kurzweil, has predicted that the singularity will happen by 2045, but stressed that it’s nothing to be scared of.
Ray Kurzweil is a big name in the tech world; being Google’s Director of Engineering will do that. However, he’s also made a name for himself for being pretty good when it comes to predictions. He claims that out of the 147 predictions he’s made since the 1990s, 86% of them have turned out to be correct. He made yet another prediction at last week’s SXSW conference in Austin, Texas, and this one probably trumps the rest.
During a Facebook Live stream with SXSW, Ray Kurzweil expressed his belief that AI will gain human-level intelligence by 2029. “I’ve been consistent that by 2029, computers will have human-level intelligence,” he said.
He continued, “That leads to computers having human intelligence, our putting them inside our brains, connecting them to the cloud, expanding who we are. Today, that’s not just a future scenario,” Kurzweil said. “It’s here, in part, and it’s going to accelerate.”
He then elaborated his thoughts in a communication to Futurism, where he actually predicted that the singularity will happen by 2045. “2029 is the consistent date I have predicted for when an AI will pass a valid Turing test and therefore achieve human levels of intelligence. I have set the date 2045 for the ‘Singularity’ which is when we will multiply our effective intelligence a billion fold by merging with the intelligence we have created,” wrote Ray Kurzweil.
The singularity refers to the point in time when all the advancements in modern technology, including AI, result in machines becoming smarter than us humans. It sounds pretty terrifying, like something out of a science-fiction movie where the humans take on the machines in an apocalyptic war. However, Kurzweil doesn’t see it that way.
(…)
You can watch the full interview in the video below.
You will need two hats for this one: one for sci-fi and one for the occult.
“Have technologies back-engineered from the Roswell UFO crash debris served as a covert Trojan horse for an Artificial Intelligence take-over of Earth’s humanity?
We cannot assume that all extraterrestrial invaders are biological organisms. We are now in the midst of a new technological revolution promising to engineer the human species into immortals.
But is this wondrous program of biological modification actually the end game of a diabolically subtle extraterrestrial take-over of planet Earth?
Is the human race being assimilated by a Cyborg Invasion?”
Star physicist Stephen Hawking has reiterated his concerns that the rise of powerful artificial intelligence (AI) systems could spell the end for humanity. Speaking at the launch of the University of Cambridge’s Centre for the Future of Intelligence on 19 October, he did, however, acknowledge that AI equally has the potential to be one of the best things that could happen to us.
So are we on the cusp of creating super-intelligent machines that could put humanity at existential risk?
There are those who believe that AI will be a boon for humanity, improving health services and productivity as well as freeing us from mundane tasks. However, the most vocal leaders in academia and industry are convinced that the danger of our own creations turning on us is real. For example, Elon Musk, founder of Tesla Motors and SpaceX, has set up a billion-dollar non-profit company with contributions from tech titans, such as Amazon, to prevent an evil AI from bringing about the end of humanity. Universities such as Berkeley, Oxford and Cambridge have established institutes to address the issue. Luminaries like Bill Joy, Bill Gates and Ray Kurzweil have all raised the alarm.
Listening to this, it seems the end may indeed be nigh unless we act before it’s too late.
The role of the tech industry
Or could it be that science fiction and industry-fuelled hype have simply overcome better judgement? The cynic might say that the AI doomsday vision has taken on religious proportions. Of course, doomsday visions usually come with a path to salvation. Accordingly, Kurzweil claims we will be virtually immortal soon through nanobots that will digitise our memories. And Musk recently proclaimed that it’s a near certainty that we are simulations within a computer akin to The Matrix, offering the possibility of a richer encompassing reality where our “programs” can be preserved and reconfigured for centuries.
Elon Musk is concerned about a robot future. Photograph: Steve Jurvetson/Flickr (FANUC Robot Assembly Demo), CC BY-SA
Tech giants have cast themselves as modern gods with the power to either extinguish humanity or make us immortal through their brilliance. This binary vision is buoyed in the tech world because it feeds egos – what conceit could be greater than believing one’s work could usher in such rapid innovation that history as we know it ends? No longer are tech figures cast as mere business leaders, but instead as the chosen few who will determine the future of humanity and beyond.
For Judgement Day researchers, proclamations of an “existential threat” are not just a call to action; they also attract generous funding and an opportunity to rub shoulders with the tech elite.
So, are smart machines more likely to kill us, save us, or simply drive us to work? To answer this question, it helps to step back and look at what is actually happening in AI.
Underneath the hype
The basic technologies, such as those recently employed by Google’s DeepMind to defeat a human expert at the game Go, are simply refinements of technologies developed in the 1980s. There have been no qualitative breakthroughs in approach. Instead, performance gains are attributable to larger training sets (also known as big data) and increased processing power. What is unchanged is that most machine systems work by maximising some kind of objective. In a game, the objective is simply to win, which is formally defined (for example capture the king in chess). This is one reason why games (checkers, chess, Go) are AI mainstays – it’s easy to specify the objective function.
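To make that concrete, here is a toy sketch – not taken from any real engine – of how completely a game program’s “goal” can reduce to a formally defined objective that search then maximises, using minimax on the simple game of Nim:

```python
# In a game, the "goal" reduces to a one-line objective over finished
# positions, which a search procedure then maximises. A toy minimax for Nim:
# players alternately take 1-3 sticks, and whoever takes the last stick wins.
from functools import lru_cache

WIN, LOSS = 1, -1  # the entire objective function: winning good, losing bad

@lru_cache(maxsize=None)
def value(sticks):
    """Game value for the player about to move, assuming optimal play."""
    if sticks == 0:
        return LOSS  # the opponent just took the last stick
    # maximise our objective, i.e. pick the move that is worst for the opponent
    return max(-value(sticks - take) for take in (1, 2, 3) if take <= sticks)

def best_move(sticks):
    return max((take for take in (1, 2, 3) if take <= sticks),
               key=lambda take: -value(sticks - take))

print(best_move(21))  # optimal play takes 1, leaving a multiple of 4
```

The entire “intelligence” here is a max operator applied to a one-line win/loss objective, which is why games are such convenient AI mainstays, and why objectives outside of games are so much harder to write down.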
In other cases, it may be harder to define the objective and this is where AI could go wrong. However, AI is more likely to go wrong for reasons of incompetence rather than malice. For example, imagine that the US nuclear arsenal during the Cold War was under the control of an AI to thwart a sneak attack by the Soviet Union. Through no action of the Soviet Union, a nuclear reactor meltdown occurs in the arsenal and the power grid temporarily collapses. The AI’s sensors detect the disruption and fallout, leading the system to infer an attack is underway. The president instructs the system in a shaky voice to stand down, but the AI takes the troubled voice as evidence the president is being coerced. Missiles released. End of humanity.
The AI was simply following its programming, which led to a catastrophic error. These are exactly the kinds of deadly mistakes that humans almost made during the Cold War. Our destruction would be attributable to our own incompetence rather than an evil AI turning on us – no different than an autopilot malfunctioning on a jumbo jet and sending its unfortunate passengers to their doom. In contrast, human pilots have purposefully killed their passengers, so perhaps we should welcome self-driving cars.
Of course, humans could design AIs to kill, but again this is people killing each other, not some self-aware machine. Western governments have already released computer viruses, such as Stuxnet, to target critical industrial infrastructure. Future viruses could be more clever and deadly. However, this essentially follows the arc of history where humans use available technologies to kill one another.
There are real dangers from AI but they tend to be economic and social in nature. Clever AI will create tremendous wealth for society, but will leave many people without jobs. Unlike the industrial revolution, there may not be jobs for segments of society, as machines may be better at every possible job. There will not be a flood of replacement “AI repair person” jobs to take up the slack. So the real challenge will be how to properly assist those (most of us?) who are displaced by AI. Another issue will be that people may no longer look after one another as machines permanently displace entire classes of labour, such as healthcare workers.
Fortunately, governments may prove more level-headed than tech celebrities if they choose to listen to nuanced advice. A recent report by the UK’s House of Commons Science and Technology Committee on the risks of AI, for example, focuses on economic, social and ethical concerns. The take-home message was that AI will make industry more efficient, but may also destabilise society.
If we are going to worry about the future of humanity we should focus on the real challenges, such as climate change and weapons of mass destruction rather than fanciful killer AI robots.
Stephen Hawking discusses air pollution, overpopulation and concerns about the correct usage of artificial intelligence.
In a recent interview with Larry King, Stephen Hawking discusses what the biggest threats to mankind are and expresses his concerns about artificial intelligence.
“Back in 2010 you mentioned that ‘greed and stupidity were the biggest threats to mankind’, do you still feel the same?” King asks.
“We certainly have not become less greedy or less stupid”, Hawking says, “Six years ago I was warning about pollution and overcrowding, they have gotten worse since then. The population has grown by half a billion since our last meeting, with no end in sight. At this rate, there will be 11 billion by 2100. Air pollution has increased over the past five years, more than 80% of inhabitants of urban areas are exposed to unsafe levels of air pollution.”
So what is the biggest problem facing humanity today?
“The increase in air pollution and the increase in carbon dioxide emission levels. At this rate, it will be too late to avoid dangerous levels of global warming,” Hawking says.
(While the World Health Organization surely confirms Hawking’s concerns about air pollution in urban areas, TrueActivist cannot confirm or deny the “stupidity” of humanity.)
Larry King goes on to ask Hawking about the dangers of artificial intelligence. “How seriously are governments taking this warning?” he asks.
“Governments seem to be engaged in an AI arms race, designing planes and weapons with intelligent technologies. The funding for projects directly beneficial to the human race, such as improved medical screening, seems a somewhat lower priority,” Hawking responds. “I don’t think advances in artificial intelligence will necessarily be benign. Once machines reach the critical stage of being able to acknowledge themselves, we cannot predict whether their goals will be the same as ours. Artificial intelligence has the potential to evolve faster than the human race. Beneficial AI could coexist with humans and augment our capabilities, but a rogue AI could be difficult to stop. We need to ensure that artificial intelligence is designed ethically, with safeguards in place.”
How governments are using Artificial Intelligence
Stephen Hawking may express a valid concern when it comes to the possibility of governments involving themselves in an artificial intelligence arms race. The White House website mentions that AI “plays a key role in self-driving vehicles, which have the potential to save thousands of lives, as well as in unmanned aircraft systems, which may transform global transportation, logistics systems, and countless industries over the coming decades.”
It also mentions medical and cancer research, stating that “In healthcare, the President’s Precision Medicine Initiative and the Cancer Moonshot will rely on AI to find patterns in medical data and, ultimately, to help doctors diagnose diseases and suggest treatments to improve patient care and health outcomes.”
If you take a look at the US budget proposals for 2017 (suggestion: download the summary tables for the best side-by-side view of budget proposals), you’ll see whether they “put their money where their mouth is,” so to speak. In short, nothing tops the United States’ federal budget for Social Security, Medicare or Medicaid, but the US defense budget follows close behind.
According to the Defense Department, it specifically wants to invest at least $12-$15 billion in 2017 on developing end-game strategies. “Artificial intelligence can help us with a lot of things that make warfighting faster, that make warfighting more predictable, that allow us to mine all of the data we have about an opponent to make better operational decisions. But I’m leaving none of those decisions at this moment to the machine,” US Air Force General Paul Selva stated earlier this year.
The full interview follows:
Do you think artificial intelligence will save lives, or is humanity playing a very dangerous game?