March 17, 2017
Google’s Director of Engineering, Ray Kurzweil, has predicted that the singularity will happen by 2045, but stressed that it’s nothing to be scared of.
Ray Kurzweil is a big name in the tech world; being Google’s Director of Engineering will do that. He has also made a name for himself as a remarkably good forecaster: he claims that 86% of the 147 predictions he has made since the 1990s have turned out to be correct. He made yet another prediction at last week’s SXSW conference in Austin, Texas, and this one probably trumps the rest.
During a Facebook Live stream with SXSW, Ray Kurzweil expressed his belief that AI will gain human-level intelligence by 2029. “I’ve been consistent that by 2029, computers will have human-level intelligence,” he said.
He continued: “That leads to computers having human intelligence, our putting them inside our brains, connecting them to the cloud, expanding who we are. Today, that’s not just a future scenario. It’s here, in part, and it’s going to accelerate.”
He elaborated on these thoughts in a communication with Futurism, predicting that the singularity will happen by 2045. “2029 is the consistent date I have predicted for when an AI will pass a valid Turing test and therefore achieve human levels of intelligence. I have set the date 2045 for the ‘Singularity’ which is when we will multiply our effective intelligence a billion fold by merging with the intelligence we have created,” wrote Ray Kurzweil.
The singularity refers to the point in time when advancements in modern technology, including AI, result in machines becoming smarter than humans. It sounds pretty terrifying, like something out of a science-fiction movie where humans take on machines in an apocalyptic war; Kurzweil, however, doesn’t see it that way.
You can watch the full interview in the video below.
Star physicist Stephen Hawking has reiterated his concerns that the rise of powerful artificial intelligence (AI) systems could spell the end for humanity. Speaking at the launch of the University of Cambridge’s Centre for the Future of Intelligence on 19 October, he did, however, acknowledge that AI equally has the potential to be one of the best things that could happen to us.
There are those who believe that AI will be a boon for humanity, improving health services and productivity as well as freeing us from mundane tasks. However, the most vocal leaders in academia and industry are convinced that the danger of our own creations turning on us is real. For example, Elon Musk, founder of Tesla Motors and SpaceX, has set up a billion-dollar non-profit company, with contributions from tech titans such as Amazon, to prevent an evil AI from bringing about the end of humanity. Universities such as Berkeley, Oxford and Cambridge have established institutes to address the issue. Luminaries like Bill Joy, Bill Gates and Ray Kurzweil have all raised the alarm.
Listening to this, it seems the end may indeed be nigh unless we act before it’s too late.
The role of the tech industry
Or could it be that science fiction and industry-fuelled hype have simply overcome better judgement? The cynic might say that the AI doomsday vision has taken on religious proportions. Of course, doomsday visions usually come with a path to salvation. Accordingly, Kurzweil claims we will be virtually immortal soon through nanobots that will digitise our memories. And Musk recently proclaimed that it’s a near certainty that we are simulations within a computer akin to The Matrix, offering the possibility of a richer encompassing reality where our “programs” can be preserved and reconfigured for centuries.
Tech giants have cast themselves as modern gods with the power to either extinguish humanity or make us immortal through their brilliance. This binary vision is buoyed in the tech world because it feeds egos – what conceit could be greater than believing one’s work could usher in such rapid innovation that history as we know it ends? No longer are tech figures cast as mere business leaders, but instead as the chosen few who will determine the future of humanity and beyond.
For Judgement Day researchers, proclamations of an “existential threat” are not just a call to action; they also attract generous funding and offer an opportunity to rub shoulders with the tech elite.
So, are smart machines more likely to kill us, save us, or simply drive us to work? To answer this question, it helps to step back and look at what is actually happening in AI.
Underneath the hype
The basic technologies, such as those recently employed by Google’s DeepMind to defeat a human expert at the game Go, are simply refinements of technologies developed in the 1980s. There have been no qualitative breakthroughs in approach. Instead, performance gains are attributable to larger training sets (also known as big data) and increased processing power. What is unchanged is that most machine systems work by maximising some kind of objective. In a game, the objective is simply to win, which is formally defined (for example, checkmate in chess). This is one reason why games (checkers, chess, Go) are AI mainstays – it’s easy to specify the objective function.
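To see just how little it takes to specify such an objective, here is a minimal sketch in Python. It is a hypothetical illustration for a toy game, not DeepMind’s code: the entire “objective function” for tic-tac-toe is three numbers, +1 for a win, −1 for a loss and 0 for a draw, and the classic minimax algorithm simply picks the move that maximises it.

```python
# Minimal sketch: in a game, the objective function is trivial to specify.
# Hypothetical tic-tac-toe example; the whole "objective" is +1 (X wins),
# -1 (O wins), 0 (draw), and minimax picks moves that maximise it.

LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
         (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
         (0, 4, 8), (2, 4, 6)]              # diagonals

def winner(board):
    """Return 'X' or 'O' if a line is complete, else None."""
    for a, b, c in LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move): X maximises the objective, O minimises it."""
    w = winner(board)
    if w is not None:
        return (1 if w == 'X' else -1, None)
    moves = [i for i, cell in enumerate(board) if cell is None]
    if not moves:
        return (0, None)  # board full: draw
    results = []
    for m in moves:
        board[m] = player
        score, _ = minimax(board, 'O' if player == 'X' else 'X')
        board[m] = None
        results.append((score, m))
    return max(results) if player == 'X' else min(results)

# With perfect play from both sides, tic-tac-toe is a draw (score 0).
score, move = minimax([None] * 9, 'X')
print(score, move)
```

Systems like the Go program layer learned evaluation functions and far more sophisticated search on top, but the win-or-lose objective underneath is exactly this simple.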
In other cases, it may be harder to define the objective, and this is where AI could go wrong. However, AI is more likely to go wrong through incompetence than malice. For example, imagine that the US nuclear arsenal during the Cold War was under the control of an AI designed to thwart a sneak attack by the Soviet Union. Through no action of the Soviet Union, a nuclear reactor meltdown occurs and the power grid temporarily collapses. The AI’s sensors detect the disruption and fallout, leading the system to infer that an attack is under way. The president instructs the system in a shaky voice to stand down, but the AI takes the troubled voice as evidence that the president is being coerced. Missiles released. End of humanity.
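A purely hypothetical sketch makes the failure mode concrete; every signal, threshold and rule below is invented for illustration and describes no real system:

```python
# Hypothetical sketch of the scenario above: a naive launch rule that
# treats correlated sensor anomalies as independent evidence of attack.
# All signals and thresholds here are invented for illustration.

def infer_attack(radiation_spike, grid_down, stand_down_order, voice_stress):
    evidence = 0
    if radiation_spike:           # meltdown fallout looks like weapon fallout
        evidence += 1
    if grid_down:                 # a blackout looks like a first-strike effect
        evidence += 1
    if stand_down_order and voice_stress > 0.8:
        evidence += 1             # the fatal flaw: a shaky stand-down order
                                  # is read as coercion, i.e. MORE evidence
    return evidence >= 2          # launch threshold

# A domestic reactor meltdown plus a stressed president trips the rule,
# even though the Soviet Union has done nothing:
print(infer_attack(radiation_spike=True, grid_down=True,
                   stand_down_order=True, voice_stress=0.9))  # True
```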
The AI was simply following its programming, which led to a catastrophic error. These are exactly the kinds of deadly mistakes that humans almost made during the Cold War. Our destruction would be attributable to our own incompetence rather than an evil AI turning on us – no different from an autopilot malfunctioning on a jumbo jet and sending its unfortunate passengers to their doom. In contrast, human pilots have purposefully killed their passengers, so perhaps we should welcome self-driving cars.
Of course, humans could design AIs to kill, but again this is people killing each other, not some self-aware machine. Western governments have already released computer viruses, such as Stuxnet, to target critical industrial infrastructure. Future viruses could be more clever and deadly. However, this essentially follows the arc of history where humans use available technologies to kill one another.
There are real dangers from AI, but they tend to be economic and social in nature. Clever AI will create tremendous wealth for society but will also leave many people without jobs. Unlike in the industrial revolution, there may be no jobs for some segments of society, because machines may be better at every possible job. There will not be a flood of replacement “AI repair person” jobs to take up the slack. So the real challenge will be how to properly assist those (most of us?) who are displaced by AI. Another issue is that people may no longer look after one another as machines permanently displace entire classes of labour, such as healthcare workers.
Fortunately, governments may prove more level-headed than tech celebrities if they choose to listen to nuanced advice. A recent report on the risks of AI by the UK’s House of Commons Science and Technology Committee, for example, focuses on economic, social and ethical concerns. Its take-home message was that AI will make industry more efficient, but may also destabilise society.
If we are going to worry about the future of humanity, we should focus on the real challenges, such as climate change and weapons of mass destruction, rather than on fanciful killer AI robots.
Stephen Hawking discusses air pollution, overpopulation and his concerns about the proper use of artificial intelligence.
In a recent interview with Larry King, Stephen Hawking discusses what the biggest threats to mankind are and expresses his concerns about artificial intelligence.
“Back in 2010 you mentioned that ‘greed and stupidity were the biggest threats to mankind’, do you still feel the same?” King asks.
“We certainly have not become less greedy or less stupid,” Hawking says. “Six years ago I was warning about pollution and overcrowding; they have gotten worse since then. The population has grown by half a billion since our last meeting, with no end in sight. At this rate, there will be 11 billion by 2100. Air pollution has increased over the past five years, and more than 80% of inhabitants of urban areas are exposed to unsafe levels of air pollution.”
So what is the biggest problem facing humanity today?
“The increase in air pollution, and the rise in emission levels of carbon dioxide. At this rate, it will be too late to avoid dangerous levels of global warming,” Hawking says.
(While the World Health Organization does confirm Hawking’s concerns about air pollution in urban areas, TrueActivist cannot confirm or deny the “stupidity” of humanity.)
Larry King goes on to ask Hawking about the dangers of artificial intelligence. “How seriously are governments taking this warning?” he asks.
“Governments seem to be engaged in an AI arms race, designing planes and weapons with intelligent technologies. The funding for projects directly beneficial to the human race, such as improved medical screening, seems a somewhat lower priority,” Hawking responds. “I don’t think advances in artificial intelligence will necessarily be benign. Once machines reach the critical stage of being able to evolve themselves, we cannot predict whether their goals will be the same as ours. Artificial intelligence has the potential to evolve faster than the human race. Beneficial AI could coexist with humans and augment our capabilities, but a rogue AI could be difficult to stop. We need to ensure that artificial intelligence is designed ethically, with safeguards in place.”
How governments are using artificial intelligence
Stephen Hawking may have a valid concern about governments involving themselves in an artificial intelligence arms race. The White House website notes that AI “plays a key role in self-driving vehicles, which have the potential to save thousands of lives, as well as in unmanned aircraft systems, which may transform global transportation, logistics systems, and countless industries over the coming decades.”
It also points to medical and cancer research, stating that “In healthcare, the President’s Precision Medicine Initiative and the Cancer Moonshot will rely on AI to find patterns in medical data and, ultimately, to help doctors diagnose diseases and suggest treatments to improve patient care and health outcomes.”
If you take a look at the US budget proposals for 2017 (suggestion: download the summary tables for the best side-by-side view of budget proposals), you’ll see whether they “put their money where their mouth is,” so to speak. In short, nothing tops the United States’ federal budget for Social Security, Medicare or Medicaid, but the US defense budget follows close behind.
According to the Defense Department, it wants to invest at least $12 to $15 billion in 2017 in developing end-game strategies. “Artificial intelligence can help us with a lot of things that make warfighting faster, that make warfighting more predictable, that allow us to mine all of the data we have about an opponent to make better operational decisions,” US Air Force General Paul Selva said earlier this year. “But I’m leaving none of those decisions at this moment to the machine.”
The full interview follows:
Do you think artificial intelligence will save lives, or is humanity playing a very dangerous game?
Musk and other tech giants are joining forces to fund research aimed at keeping artificial intelligence from overtaking mankind.
Elon Musk’s contributions to society know no bounds: his latest scheme is intended to save humanity from being destroyed by artificial intelligence (AI).
The billionaire, known for garnering a massive amount of wealth and attention with revolutionary projects such as PayPal, Tesla, and SpaceX, has consistently warned against AI, recently calling it humanity’s greatest existential threat.
His belief in the harm AI may cause has led him to join forces with other well-known tech entrepreneurs to establish an investment fund for researchers pursuing work with a positive social impact. The $1 billion fund is slated to help humans stay at least one step ahead of technology.
According to a statement released by the group of investors, AI’s surprising history makes it difficult to “predict when human-level AI might come within reach”. The statement went on to advise: “When it does, it’ll be important to have a leading research institution which can prioritize a good outcome for all over its own self-interest.”
The debate within the technology world over the threats and benefits of rapid advances in computer intelligence is a long-standing one, raising questions of whether legislation should be implemented as a safeguard. A total moratorium on research has been contested as well, particularly as science and technology are arguably capable of advancing to the point of superseding humans. There is a very real possibility that we may become redundant, unnecessary and, thus, expendable.
Scientists postulate that AI systems will eventually be able to communicate exclusively among themselves, controlling entire transport networks and even national economies.
Renowned theoretical physicist Stephen Hawking told the BBC last year that technology could very well spell doom for the entire human race, warning of a type of system so advanced it could “re-design itself at an ever-increasing rate”, thus outpacing human improvements exponentially.
At a recent symposium held at the Massachusetts Institute of Technology (MIT), Musk spoke of the dangers of AI, stating that “we need to be very careful with the artificial intelligence. With artificial intelligence we are summoning the demon.”
PayPal co-founder Peter Thiel, along with tech giant Infosys and Amazon Web Services, has contributed to the launch of OpenAI. The non-profit will research novel uses of AI and share its findings; with access to this knowledge, the idea is to guarantee that someone is examining the pros and cons free of the financial constraints imposed by the research and development departments of conglomerates like Google and IBM.
According to OpenAI’s website, freedom from financial obligations allows for a “better focus on a positive human impact…AI should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as is possible safely.”
The researchers said the calamity could come from asteroids, the development of artificial intelligence (AI) and synthetic biology, which could open the way for the rise of deadly viruses.
Supervolcanoes and nuclear war, which could likewise annihilate humanity, were also added to the danger list. The researchers presented these findings in a special report, Global Catastrophic Risks.
The report warned that no generation in human history has ever faced dangers on this scale, ranking the threat ahead of the 1918 flu pandemic, which killed about 50 million people.
Specifically, the report predicted that the biggest threat to humanity over the next five years will come from asteroids, supervolcanic eruptions or other “unknown” risks.
In the long term, however, it is synthetic biology, which could open the door to ‘off-the-shelf’ deadly viruses, together with nuclear war and devastating climate change, that poses the greatest threat. The development of AI could also give rise to robots which, the study says, could be deadly to humanity.
Sebastian Farquhar, Director of the Global Priorities Project, said that past events and current developments both point to the possibility that something dangerous could happen to humanity soon.
“There are some things that are on the horizon, things that probably won’t happen in any one year but could happen, which could completely reshape our world and do so in a really devastating and disastrous way. History teaches us that many of these things are more likely than we intuitively think,” Farquhar said.
Mr Farquhar also said it is just a matter of time before the self-proclaimed Islamic State of Iraq and Syria is able to manufacture deadly diseases such as the smallpox virus and spread them, via the Internet black market, to its targets.
He said: “We have seen that in the field of synthetic biology and genetic manipulation of small organisms or things like viruses, the cost has come down unbelievably in the last decade. It is still too expensive to worry about rogue groups trying to use the technology, but that might not remain true.”
The researchers agreed that, even though disaster may not be far off, governments are not responding with the necessary solutions, and are instead pursuing policies that may aggravate the situation even further.
The report therefore called on the international community to improve planning and coordination for pandemics, investigate the possible risks of AI and biotechnology, and continue to cut the number of nuclear weapons in their arsenals.
On AI in particular, some scientists and inventors have said humans must be careful with its development, as it could pose a great risk to humanity.
The British theoretical physicist and cosmologist Professor Stephen Hawking told the BBC in 2014 that the development of full AI could spell the end of the human race. (Ironically, a basic form of AI assists Mr Hawking in his communications, predicting the words he is likely to use next.)
The Chief Executive Officer of Tesla Motors, Elon Reeve Musk, has also voiced his concern that AI could prove even more hazardous to humanity. Mr Musk is also the co-founder of SpaceX, a commercial spaceflight company.
As it stands, we can only keep our fingers crossed. If the end of humanity really is near, averting the calamity will require extraordinary commitment.