In an interview with The Telegraph, Brad Smith, president of Microsoft, said the use of ‘lethal autonomous weapon systems’ poses a host of new ethical questions which need to be considered by governments as a matter of urgency.
He said the rapidly advancing technology – flying, swimming or walking drones equipped with lethal weapons such as missiles, bombs or guns, and programmed to operate entirely or partially autonomously – “ultimately will spread… to many countries”.
The US, China, Israel, South Korea, Russia and the UK are all developing weapon systems with a significant degree of autonomy in the critical functions of selecting and attacking targets.
The technology is a growing focus for many militaries because replacing troops with machines can make the decision to go to war easier.
But it remains unclear who is responsible for deaths or injuries caused by a machine – the developer, manufacturer, commander or the device itself.
Smith said killer robots must “not be allowed to decide on their own to engage in combat and who to kill” and argued that a new international convention needed to be drawn up to govern the use of the technology.
“The safety of civilians is at risk today. We need more urgent action, and we need it in the form of a digital Geneva Convention, rules that will protect civilians and soldiers.”
Speaking at the launch of his new book, Tools and Weapons, at the Microsoft store in London’s Oxford Circus, Smith said there was also a need for stricter international rules over the use of facial recognition technology and other emerging forms of artificial intelligence.
“There needs to be a new law in this space; we need regulation in the world of facial recognition in order to protect against potential abuse.”
(TMU) — New research into black holes has accelerated in recent years, producing some outlandish, mind-boggling ideas. The newest theory advanced by researchers may take the cake in this regard.
A team of astrophysicists at Canada’s University of Waterloo has put forth a theory suggesting that our universe exists inside the event horizon of a massive higher-dimensional black hole nested within a larger mother universe.
Perhaps even more strangely, scientists say this radical proposition is consistent with astronomical and cosmological observations and that theoretically, such a reality could inch us closer to the long-awaited theory of “quantum gravity.”
The research team at Waterloo used tools from string theory to imagine a lower-dimensional universe marooned inside the membrane of a higher-dimensional one.
Lead researcher Robert Mann said:
“The basic idea was that maybe the singularity of the universe is like the singularity at the centre of a black hole. The idea was in some sense motivated by trying to unify the notion of singularity, or what is incompleteness in general relativity between black holes and cosmology. And so out of that came the idea that the Big Bang would be analogous to the formation of a black hole, but kind of in reverse.”
The research was based on the previous work of professor Niayesh Afshordi, though he is hardly the only scientist who has looked into the possibility of a black hole singularity birthing a universe.
Nikodem Poplawski of the University of New Haven imagines the seed of the universe like the seed of a plant: a core of fundamental information compressed inside a shell that shields it from the outside world. Poplawski says this is essentially what a black hole is – a protective shell around a singularity, where matter ravaged by extreme tidal forces creates a kind of torsion mechanism.
Compressed tightly enough—as scientists imagine is the case at the singularity of a black hole, which may break down the known laws of physics—the torsion could produce a spring-loaded effect comparable to a jack-in-the-box. The subsequent “big bounce” may have been our Big Bang, which took place inside the collapsed remnants of a five-dimensional star.
Poplawski also suggested that black holes could be portals connecting universes. Each black hole, he says, could be a “one-way door” to another universe, or perhaps the multiverse.
Regardless of whether or not this provocative theory is true, scientists increasingly believe that black holes could be the key to understanding many of the most vexing mysteries in the universe, including the Big Bang, inflation, and dark energy. Physicists also believe black holes could help bridge the divide between quantum mechanics and Einstein’s theory of relativity.
The spectre of superintelligent machines doing us harm is not just science fiction, technologists say – so how can we ensure AI remains ‘friendly’ to its makers?
It began three and a half billion years ago in a pool of muck, when a molecule made a copy of itself and so became the ultimate ancestor of all earthly life. It began four million years ago, when brain volumes began climbing rapidly in the hominid line.
Fifty thousand years ago with the rise of Homo sapiens sapiens.
Ten thousand years ago with the invention of civilization.
Five hundred years ago with the invention of the printing press.
In less than thirty years, it will end.
Jaan Tallinn stumbled across these words in 2007, in an online essay called Staring into the Singularity. The “it” was human civilisation. Humanity would cease to exist, predicted the essay’s author, with the emergence of superintelligence: AI that surpasses human-level intelligence in a broad array of areas.
Tallinn, an Estonia-born computer programmer, has a background in physics and a propensity to approach life like one big programming problem. In 2003, he co-founded Skype, developing the backend for the app. He cashed in his shares after eBay bought it two years later, and now he was casting about for something to do. Staring into the Singularity mashed up computer code, quantum physics and Calvin and Hobbes quotes. He was hooked.
Tallinn soon discovered that the author, Eliezer Yudkowsky, a self-taught theorist, had written more than 1,000 essays and blogposts, many of them devoted to superintelligence. He wrote a program to scrape Yudkowsky’s writings from the internet, order them chronologically and format them for his iPhone. Then he spent the better part of a year reading them.
The term artificial intelligence, or the simulation of intelligence in computers or machines, was coined back in 1956, only a decade after the creation of the first electronic digital computers. Hope for the field was initially high, but by the 1970s, when early predictions did not pan out, an “AI winter” set in. When Tallinn found Yudkowsky’s essays, AI was undergoing a renaissance. Scientists were developing AIs that excelled in specific areas, such as winning at chess, cleaning the kitchen floor and recognising human speech. Such “narrow” AIs, as they are called, have superhuman capabilities, but only in their specific areas of dominance. A chess-playing AI cannot clean the floor or take you from point A to point B. Superintelligent AI, Tallinn came to believe, will combine a wide range of skills in one entity. More darkly, it might also use data generated by smartphone-toting humans to excel at social manipulation.
Reading Yudkowsky’s articles, Tallinn became convinced that superintelligence could lead to an explosion or breakout of AI that could threaten human existence – that ultrasmart AIs will take our place on the evolutionary ladder and dominate us the way we now dominate apes. Or, worse yet, exterminate us.
After finishing the last of the essays, Tallinn shot off an email to Yudkowsky – all lowercase, as is his style. “i’m jaan, one of the founding engineers of skype,” he wrote. Eventually he got to the point: “i do agree that … preparing for the event of general AI surpassing human intelligence is one of the top tasks for humanity.” He wanted to help.
When Tallinn flew to the Bay Area for other meetings a week later, he met Yudkowsky, who lived nearby, at a cafe in Millbrae, California. Their get-together stretched to four hours. “He actually, genuinely understood the underlying concepts and the details,” Yudkowsky told me recently. “This is very rare.” Afterward, Tallinn wrote a check for $5,000 (£3,700) to the Singularity Institute for Artificial Intelligence, the nonprofit where Yudkowsky was a research fellow. (The organisation changed its name to Machine Intelligence Research Institute, or Miri, in 2013.) Tallinn has since given the institute more than $600,000.
The encounter with Yudkowsky brought Tallinn purpose, sending him on a mission to save us from our own creations. He embarked on a life of travel, giving talks around the world on the threat posed by superintelligence. Mostly, though, he began funding research into methods that might give humanity a way out: so-called friendly AI. That doesn’t mean a machine or agent is particularly skilled at chatting about the weather, or that it remembers the names of your kids – although superintelligent AI might be able to do both of those things. It doesn’t mean it is motivated by altruism or love. A common fallacy is assuming that AI has human urges and values. “Friendly” means something much more fundamental: that the machines of tomorrow will not wipe us out in their quest to attain their goals.
“Our citizens should know the urgent facts…but they don’t because our media serves imperial, not popular interests. They lie, deceive, connive and suppress what everyone needs to know, substituting managed news misinformation and rubbish for hard truths…”—Oliver Stone