Predictive behavior technology is all the rage in everything from advertising to policing to medicine, and is something that we have covered extensively at Activist Post (see our archives here). Technocrats everywhere believe that the supreme being of the universe should be a computer algorithm because, after all, in its perfection it knows us better than we know ourselves.
The following research from the University of Southern California is a chilling example of how the State could easily employ this technology for literal interventions where potential violence could occur. Beyond the micromanagement of adult relationships, note the final direction at the end of the article: parent-child relationships.
Are we really this lazy to turn over our most intimate interactions to the advice of a computer and hope that it can help manage our every emotion? Are we really that eager to completely eradicate human free will?
Monitoring Troubles of the Heart
Mobile sensing system developed by joint USC Dornsife and USC Viterbi team could give couples the power to anticipate each other’s emotional states and adapt behavior
Your partner comes in and slams a door. What was that about? Something you did? What if you knew to anticipate it because you were notified in advance from an automated text message that he/she didn’t have a great day at work? Might that change the dynamic of your interactions?
You had a bad day. The last thing you need is to get into an argument when you get home because your partner also had a bad day. What if technology could automatically send you a notification advising you to do a short meditation module to restore your mental state? How might this affect the quality of your interactions with your partner?

In the near future, researchers from the USC Viterbi School of Engineering and the USC Dornsife College of Letters, Arts and Sciences believe technology might be employed to help de-escalate potential conflicts between couples. In a collaboration between the Signal Analysis and Interpretation Laboratory (SAIL) in the Ming Hsieh Department of Electrical Engineering and the Family Studies Project in the Psychology Department at USC Dornsife, researchers used multimodal ambulatory measures to develop a system that detects whether conflict has occurred between a couple: a sort of seismometer of the shakes, rattles and rolls in a relationship.
The research, documented in “Using Multimodal Wearable Technology to Detect Conflict among Couples,” by Adela C. Timmons, Theodora Chaspari, Sohyun C. Han, Laura Perrone, Shrikanth S. Narayanan, and Gayla Margolin, is published by the IEEE Computer Society this month.
To detect intra-couple conflict, the researchers, with support from the National Science Foundation, developed algorithms that assess couples' emotional states by pulling together data from various sources, including wearables, mobile phones, and physiological signals (or bio-signals). The data collected included body temperature, heart activity, sweat, audio recordings, and assessments of language content and vocal intensity. The algorithm analyzing this data proved up to 86 percent accurate in detecting conflict episodes, judged against participants' hourly self-reports of when conflict occurred. The authors believe the study is the first in which passively collected ambulatory data has been used to detect conflict behavior in daily life.
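The pipeline described above, fusing wearable bio-signals into a binary conflict classifier scored against hourly self-reports, might look in rough outline like the sketch below. The feature set, the synthetic data, and the plain logistic-regression model are illustrative assumptions, not the study's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins for hourly measurements: each row is one hour for one
# participant, with features such as heart rate, skin conductance, body
# temperature (C), and vocal intensity (dB). Conflict hours shift the means.
def make_hours(n, conflict):
    base = np.array([70.0, 2.0, 36.6, 55.0])
    shift = np.array([15.0, 1.5, 0.3, 10.0]) if conflict else 0.0
    return base + shift + rng.normal(0.0, 1.0, size=(n, 4))

X = np.vstack([make_hours(200, False), make_hours(200, True)])
y = np.array([0] * 200 + [1] * 200)   # hourly self-reports: 1 = conflict

# Standardize the fused features, then fit a logistic-regression classifier
# by plain gradient descent.
X = (X - X.mean(axis=0)) / X.std(axis=0)
w, b = np.zeros(4), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * (p - y).mean()

# Training accuracy against the self-reported labels.
acc = ((1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5) == y).mean()
print(f"training accuracy: {acc:.2f}")
```

On cleanly separated synthetic data like this, accuracy is of course far higher than anything achievable on real ambulatory recordings; the point is only the shape of the pipeline, not the 86 percent figure.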
Theodora Chaspari, an electrical engineering Ph.D. student in Shri Narayanan's SAIL lab at USC Viterbi, explains why this particular collaboration appealed to her and the SAIL group: "We could help beyond pure engineering domains, providing more quantitative measures of human behavior."
Lead author Adela C. Timmons, a psychology Ph.D. student on Gayla Margolin's Family Studies Project team at USC Dornsife, runs the USC Couple Mobile Sensing Project with Chaspari, with "the eventual goal of developing interventions to improve couple functioning." Beyond helping couples who often cannot replicate at home the interventions and behavioral strategies they learn and practice in a therapist's office, Timmons spoke about the importance of this research in detecting conflict and perhaps helping couples minimize it in their relationships. She notes that negative relationships (or the absence of positive relationships) have long been recognized as a health risk, while high-quality relationships can provide health benefits. Research has also shown that people in healthy relationships experience less stress, and chronic stress is known to cause "wear and tear" on the body.
The authors say the next step in the research is to use such unobtrusive, passive technologies to anticipate conflict, perhaps five minutes before it might occur, by letting software estimate the likelihood that conflict will arise. The complement to anticipating conflict is developing early interventions: real-time behavioral prompts such as a text notification of a partner's psychological state, or a prompt to meditate before bringing that conflict home.
Chaspari acknowledges that this is not a one-size-fits-all approach. Machine learning software can learn which signals are most informative for a given individual: for any given person, certain factors might carry more weight in predicting conflict.
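That per-person weighting idea can be illustrated with a toy experiment. Here the data, the two "partners," and the feature names are all invented for illustration; the feature most correlated with the conflict label stands in for the factor a personalized model would weight most heavily:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two hypothetical partners: for person A, heart rate is the tell-tale
# signal of conflict; for person B, vocal intensity is.
def person_hours(n, strong_feature):
    y = rng.integers(0, 2, size=n)        # 1 = self-reported conflict hour
    X = rng.normal(0.0, 1.0, size=(n, 3)) # heart rate, skin conductance, vocal dB
    X[:, strong_feature] += 2.0 * y       # only one feature reacts to conflict
    return X, y

names = ["heart_rate", "skin_conductance", "vocal_intensity"]
top_signal = {}
for label, feat in [("person A", 0), ("person B", 2)]:
    X, y = person_hours(300, feat)
    # Absolute correlation of each feature with the conflict label, as a
    # proxy for the weight a personalized model would learn.
    corr = [abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(3)]
    top_signal[label] = names[int(np.argmax(corr))]

print(top_signal)
```

A one-size-fits-all model would average these two profiles together and dilute both signals; fitting per person recovers each partner's dominant predictor.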
Once the system has been validated, the authors anticipate it could be applied to other important relationships, such as the parent-child dynamic.
Martin Rees is Emeritus Professor of Cosmology and Astrophysics, at the University of Cambridge, the Astronomer Royal, a member of Britain’s House of Lords, and a former President of the Royal Society. The following interview was conducted at Trinity College, Cambridge, by The Conversation’s Matt Warren.
Q: How big is the universe … and is it the only one?
Our cosmic horizons have grown enormously over the last century, but there is a definite limit to the size of the observable universe. It contains all the things from which light has been able to reach us since the Big Bang, about 14 billion years ago. But the new realisation is that the observable universe may not be all of reality. There may be more beyond the horizon, just as there’s more beyond the horizon when you’re observing the ocean from a boat.
What’s more, the galaxies are likely to go on and on beyond this horizon, but more interestingly, there is a possibility that our Big Bang was not the only one. There may have been others, spawning other universes, disconnected from ours and therefore not observable, and possibly even governed by different physical laws. Physical reality on this vast scale could therefore be much more varied and interesting than what we can observe.
The universe we can observe is governed by the same laws everywhere. We can observe a distant galaxy and see that the atoms emitting the light are just the same as the ones in the lab. But there may be physical domains that are governed by completely different laws. Some may have no gravity, or not allow for nuclear physics. Ours may not even be a typical domain.
Even in our own universe, there are only so many ways you can assemble the same atoms, so if it is large enough it is possible that there is another Earth, even another avatar of you. If this were the case, however, the universe would have to exceed the observable one by a factor so large that merely writing the number down would require more digits than there are atoms in the observable universe. Rest assured, if there's another you, they are a very, very long way away. They might even be making the same mistakes.
Q: So how likely is alien life in this vast expanse?
We know now that planets exist around many, even most, stars. We know that in our Milky Way galaxy there are likely millions of planets that are in many ways like the Earth, with liquid water. The question then is whether life has developed on them – and we can’t yet answer that.
Although we know that a complex biosphere evolved on Earth via Darwinian selection, beginning some 4 billion years ago, we don't yet understand the actual origin of life – the transition from complex chemistry to the first metabolising, replicating structures. The good news is that within the next ten or 20 years we should have a better idea of how that happened and, crucially, how likely it was to happen. This will give us a better understanding of how likely it is to happen elsewhere. In that time, we will also have technologies that allow us to search for alien life more effectively.
But just because there’s life elsewhere doesn’t mean that there is intelligent life. My guess is that if we do detect an alien intelligence, it will be nothing like us. It will be some sort of electronic entity.
If we look at our history on Earth, it has taken about 4 billion years to get from the first protozoa to our current, technological civilisation. But if we look into the future, then it’s quite likely that within a few centuries, machines will have taken over – and they will then have billions of years ahead of them.
In other words, the period of time occupied by organic intelligence is just a thin sliver between early life and the long era of the machines. Because such civilisations would develop at different rates, it’s extremely unlikely that we will find intelligent life at the same stage of development as us. More likely, that life will still be either far simpler, or an already fully electronic intelligence.
Q: Do you believe that machines will develop intelligence?
There are many people who would bet on it. The second question, however, is whether that necessarily implies consciousness – or whether that is limited to the wet intelligence we have within our skulls. Most people, however, would argue that it is an emergent property and could develop in a machine mind.
Q: So if the universe is populated by electronic super minds, what questions will they be pondering?
We can’t conceive that any more than a chimp can guess the things that we spend our time thinking about. I would guess, however, that these minds aren’t on planets. While we depend on a planet and an atmosphere, these entities would be happy in zero G, floating freely in space. This might make them even harder to detect.
Q: How would humanity respond to the discovery of alien life?
It would certainly make the universe more interesting, but it would also make us less unique. The question is whether it would provoke in us any sense of cosmic modesty. Conversely, if all our searches for life fail, we’d know more certainly that this small planet really is the one special place, the single pale, blue dot where life has emerged. That would make what happens to it not just of global significance, but an issue of galactic importance, too.
And we are likely to be fixed to this world. We will be able to look deeper and deeper into space, but travelling to worlds beyond our solar system will be a post-human enterprise. The journey times are just too great for mortal minds and bodies. If you’re immortal, however, these distances become far less daunting. That journey will be made by robots, not us.
Q: What scientific advances would you like to see over the coming century?
Cheap, clean energy, for one. Artificial meat is another. But the idea is often easier than the application. I like to tell my students the story of two beavers standing in front of a huge hydroelectric dam. “Did you build that?” asks one. “No,” says the other. “But it is based on my idea”. That’s the essential balance between scientific insight and engineering development.
Q: Michael Gove [the British politician who was a leader of the campaign for the UK to leave the EU] said people have had enough of experts. Have they?
I wouldn’t expect anything more from Mr Gove, but there is clearly a role for experts. If we’re sick, we go to a doctor, we don’t look randomly on the internet. But we must also realise that most experts only have expertise within their own area, and if we are scientists we should accept that. When science impacts on public policy, there will be elements of economics, ethics and politics where we as scientists speak only as laymen. We need to know where the demarcation line is between where we are experts and where we are just citizens.
If you want to influence public policy as a scientist, there are two ways to do it. You can aspire to be an adviser within government, which can be very frustrating. Or you can try and influence policy indirectly. Politicians are very much driven by what’s in their inbox and what’s in the press, so the scientists with the greatest influence are those who go public, and speak to everyday people. If an idea is picked up by voters, the politicians won’t ignore it.
Q: Brexit – good or bad?
I am surprised to find myself agreeing with Lord Heseltine [former UK Conservative government minister] and Tony Blair [former Labour prime minister], but it is a real disaster, which we have stumbled into. There is a lot of blame to be shared around, by Boris Johnson et al, but also by Jeremy Corbyn [leader of the UK Labour party] for not fighting his corner properly. I have been a member of the Labour Party for a very long time, but I feel badly let down by Corbyn – especially as Labour voters supported Remain two to one. He has been an ineffective leader, and also ambivalent on this issue. A different leader, making a vocal case for Remain, could have tilted the vote.
On the other side, Boris Johnson [now UK foreign secretary – who campaigned for Britain to leave the EU] has been most reprehensible. At least Gove has opinions, which he has long expressed. Boris Johnson had no strong opinions, and the honourable thing to do if that is the case is to remain quiet. But he changed his stance opportunistically (as in the Eton debating society) and swung the vote.
Q: But why is it such a disaster?
My concerns are broad geopolitical ones. In the world as it is now, with America becoming isolationist and an increasingly dominant Russia, for Europe to establish itself as a united and powerful counterweight is more important than ever. We are jeopardising something that has held Europe together, in peace, for 60 years, and could also break up the United Kingdom in the process. We will be remembered for that and it is something to deplore.
One thing astronomers bring to the table is an awareness that we have a long potential future, as well as the universe’s long past – and that this future could be jeopardised by what happens in the coming decades.
Q: More broadly, how much danger is the human race in?
I have spent a lot of time considering how we as a species can make it into the next century – and there are two main classes of problems. First, the collective impact of humanity as its footprint on the planet increases due to a growing population more demanding of resources. Second, the possible misuse by error or design of ever more powerful technology – and most worryingly, bio-tech.
There is certainly a high chance of a major global setback this century, most likely from the second threat, which increasingly allows individual groups to have a global impact. Added to this is the fact that the world is increasingly connected, so anything that happens has a global resonance. This is something new and actually makes us more vulnerable as a species than at any time in our past.
Q: So terrorism will pose an even greater threat in the coming century?
Yes, because of these technologies, terrorists or fanatics will be able to have a greater impact. But there’s also the simple danger of these technologies being misused. Engineering or changing viruses, for example, can be used in benign ways – to eradicate Zika, for example – but there’s obviously a risk that such things can get out of control.
Nuclear technology requires large, conspicuous and heavily protected facilities. But the facilities needed for bio-tech, for example, are small-scale, widely understood, widely available and dual-use. It is going to be very hard indeed to regulate it properly.
In the short and intermediate term, this is even more worrying than the risks posed by climate change – although in the long term, that will be a very major problem, especially as both people and politicians find it very difficult to focus on things further down the line.
I have been very involved in campaigns to get all countries involved in research and development into alternative, clean energy sources. Making them available and cheap is the only way we are going to move towards a low carbon future. The level of money invested in this form of research should be equivalent to the amount spent on health or defence, and nuclear fusion and fourth generation nuclear fission should be part of that.
Q: In the medieval world, people would start building cathedrals that only later generations would finish. Have we lost that long-term perspective?
That’s right. In fact, one very important input behind the political discussion prior to the Paris climate agreement was the 2015 Papal Encyclical. I’m a council member of the Pontifical Academy of Sciences, which helped to initiate the scientific meetings which were important in ensuring that the encyclical was a highly respected document. Whatever one thinks of the Catholic church, one cannot deny its long-term vision, its global range and its concern for the world’s poor. I believe that the encyclical, six months before the Paris conference, had a big impact on the leaders and people in South America, Africa and Asia. Religion clearly still has a very important role to play in the world.
Q: Have you ever encountered anything in the cosmos that has made you wonder whether a creator was behind it?
No. Personally, I don’t have any religious beliefs. But I describe myself as a cultural Christian, in that I was brought up in England and the English church was an important part of that. Then again, if I had been born in Iran, I’d probably go to the mosque.
Elon Musk is famous for his futuristic gambles, but Silicon Valley’s latest rush to embrace artificial intelligence scares him. And he thinks you should be frightened too. Inside his efforts to influence the rapidly advancing field and its proponents, and to save humanity from machine-learning overlords.
I. Running Amok
It was just a friendly little argument about the fate of humanity. Demis Hassabis, a leading creator of advanced artificial intelligence, was chatting with Elon Musk, a leading doomsayer, about the perils of artificial intelligence.
They are two of the most consequential and intriguing men in Silicon Valley who don’t live there. Hassabis, a co-founder of the mysterious London laboratory DeepMind, had come to Musk’s SpaceX rocket factory, outside Los Angeles, a few years ago. They were in the canteen, talking, as a massive rocket part traversed overhead. Musk explained that his ultimate goal at SpaceX was the most important project in the world: interplanetary colonization.
Hassabis replied that, in fact, he was working on the most important project in the world: developing artificial super-intelligence. Musk countered that this was one reason we needed to colonize Mars—so that we’ll have a bolt-hole if A.I. goes rogue and turns on humanity. Amused, Hassabis said that A.I. would simply follow humans to Mars.
This did nothing to soothe Musk’s anxieties (even though he says there are scenarios where A.I. wouldn’t follow).
An unassuming but competitive 40-year-old, Hassabis is regarded as the Merlin who will likely help conjure our A.I. children. The field of A.I. is rapidly developing but still far from the powerful, self-evolving software that haunts Musk. Facebook uses A.I. for targeted advertising, photo tagging, and curated news feeds. Microsoft and Apple use A.I. to power their digital assistants, Cortana and Siri. Google’s search engine from the beginning has been dependent on A.I. All of these small advances are part of the chase to eventually create flexible, self-teaching A.I. that will mirror human learning.
WITHOUT OVERSIGHT, MUSK BELIEVES, A.I. COULD BE AN EXISTENTIAL THREAT: “WE ARE SUMMONING THE DEMON.”
Some in Silicon Valley were intrigued to learn that Hassabis, a skilled chess player and former video-game designer, once came up with a game called Evil Genius, featuring a malevolent scientist who creates a doomsday device to achieve world domination. Peter Thiel, the billionaire venture capitalist and Donald Trump adviser who co-founded PayPal with Musk and others—and who in December helped gather skeptical Silicon Valley titans, including Musk, for a meeting with the president-elect—told me a story about an investor in DeepMind who joked as he left a meeting that he ought to shoot Hassabis on the spot, because it was the last chance to save the human race.
Elon Musk began warning about the possibility of A.I. running amok three years ago. It probably hadn’t eased his mind when one of Hassabis’s partners in DeepMind, Shane Legg, stated flatly, “I think human extinction will probably occur, and technology will likely play a part in this.”
Before DeepMind was gobbled up by Google, in 2014, as part of its A.I. shopping spree, Musk had been an investor in the company. He told me that his involvement was not about a return on his money but rather to keep a wary eye on the arc of A.I.: “It gave me more visibility into the rate at which things were improving, and I think they’re really improving at an accelerating rate, far faster than people realize. Mostly because in everyday life you don’t see robots walking around. Maybe your Roomba or something. But Roombas aren’t going to take over the world.”
In a startling public reproach to his friends and fellow techies, Musk warned that they could be creating the means of their own destruction. He told Bloomberg’s Ashlee Vance, the author of the biography Elon Musk, that he was afraid that his friend Larry Page, a co-founder of Google and now the C.E.O. of its parent company, Alphabet, could have perfectly good intentions but still “produce something evil by accident”—including, possibly, “a fleet of artificial intelligence-enhanced robots capable of destroying mankind.”
At the World Government Summit in Dubai, in February, Musk again cued the scary organ music, evoking the plots of classic horror stories when he noted that “sometimes what will happen is a scientist will get so engrossed in their work that they don’t really realize the ramifications of what they’re doing.” He said that the way to escape human obsolescence, in the end, may be by “having some sort of merger of biological intelligence and machine intelligence.” This Vulcan mind-meld could involve something called a neural lace—an injectable mesh that would literally hardwire your brain to communicate directly with computers. “We’re already cyborgs,” Musk told me in February. “Your phone and your computer are extensions of you, but the interface is through finger movements or speech, which are very slow.” With a neural lace inside your skull you would flash data from your brain, wirelessly, to your digital devices or to virtually unlimited computing power in the cloud. “For a meaningful partial-brain interface, I think we’re roughly four or five years away.”

Musk’s alarming views on the dangers of A.I. first went viral after he spoke at M.I.T. in 2014—speculating (pre-Trump) that A.I. was probably humanity’s “biggest existential threat.” He added that he was increasingly inclined to think there should be some national or international regulatory oversight—anathema to Silicon Valley—“to make sure that we don’t do something very foolish.” He went on: “With artificial intelligence, we are summoning the demon. You know all those stories where there’s the guy with the pentagram and the holy water and he’s like, yeah, he’s sure he can control the demon? Doesn’t work out.” Some A.I. engineers found Musk’s theatricality so absurdly amusing that they began echoing it. When they would return to the lab after a break, they’d say, “O.K., let’s get back to work summoning.”

Musk wasn’t laughing.
“Elon’s crusade” (as one of his friends and fellow tech big shots calls it) against unfettered A.I. had begun.
II. “I Am the Alpha”
Elon Musk smiled when I mentioned to him that he comes across as something of an Ayn Rand-ian hero. “I have heard that before,” he said in his slight South African accent. “She obviously has a fairly extreme set of views, but she has some good points in there.”

But Ayn Rand would do some re-writes on Elon Musk. She would make his eyes gray and his face more gaunt. She would refashion his public demeanor to be less droll, and she would not countenance his goofy giggle. She would certainly get rid of all his nonsense about the “collective” good. She would find great material in the 45-year-old’s complicated personal life: his first wife, the fantasy writer Justine Musk, and their five sons (one set of twins, one of triplets), and his much younger second wife, the British actress Talulah Riley, who played the boring Bennet sister in the Keira Knightley version of Pride & Prejudice. Riley and Musk were married, divorced, and then re-married. They are now divorced again. Last fall, Musk tweeted that Talulah “does a great job playing a deadly sexbot” on HBO’s Westworld, adding a smiley-face emoticon. It’s hard for mere mortal women to maintain a relationship with someone as insanely obsessed with work as Musk.

“How much time does a woman want a week?” he asked Ashlee Vance. “Maybe ten hours? That’s kind of the minimum?”
Mostly, Rand would savor Musk, a hyper-logical, risk-loving industrialist. He enjoys costume parties, wing-walking, and Japanese steampunk extravaganzas. Robert Downey Jr. used Musk as a model for Iron Man. Marc Mathieu, the chief marketing officer of Samsung USA, who has gone fly-fishing in Iceland with Musk, calls him “a cross between Steve Jobs and Jules Verne.”

As they danced at their wedding reception, Justine later recalled, Musk informed her, “I am the alpha in this relationship.”
In a tech universe full of skinny guys in hoodies—whipping up bots that will chat with you and apps that can study a photo of a dog and tell you what breed it is—Musk is a throwback to Henry Ford and Hank Rearden. In Atlas Shrugged, Rearden gives his wife a bracelet made from the first batch of his revolutionary metal, as though it were made of diamonds. Musk has a chunk of one of his rockets mounted on the wall of his Bel Air house, like a work of art.

Musk shoots for the moon—literally. He launches cost-efficient rockets into space and hopes to eventually inhabit the Red Planet. In February he announced plans to send two space tourists on a flight around the moon as early as next year. He creates sleek batteries that could lead to a world powered by cheap solar energy. He forges gleaming steel into sensuous Tesla electric cars with such elegant lines that even the nitpicking Steve Jobs would have been hard-pressed to find fault. He wants to save time as well as humanity: he dreamed up the Hyperloop, an electromagnetic bullet train in a tube, which may one day whoosh travelers between L.A. and San Francisco at 700 miles per hour. When Musk visited secretary of defense Ashton Carter last summer, he mischievously tweeted that he was at the Pentagon to talk about designing a Tony Stark-style “flying metal suit.” Sitting in traffic in L.A. in December, getting bored and frustrated, he tweeted about creating the Boring Company to dig tunnels under the city to rescue the populace from “soul-destroying traffic.” By January, according to Bloomberg Businessweek, Musk had assigned a senior SpaceX engineer to oversee the plan and had started digging his first test hole.
His sometimes quixotic efforts to save the world have inspired a parody Twitter account, “Bored Elon Musk,” where a faux Musk spouts off wacky ideas such as “Oxford commas as a service” and “bunches of bananas genetically engineered” so that the bananas ripen one at a time.

Of course, big dreamers have big stumbles. Some SpaceX rockets have blown up, and last May a driver was killed in a self-driving Tesla whose sensors failed to notice the tractor-trailer crossing its path. (An investigation by the National Highway Traffic Safety Administration found that Tesla’s Autopilot system was not to blame.)

Musk is stoic about setbacks but all too conscious of nightmare scenarios. His views reflect a dictum from Atlas Shrugged: “Man has the power to act as his own destroyer—and that is the way he has acted through most of his history.” As he told me, “we are the first species capable of self-annihilation.”

Here’s the nagging thought you can’t escape as you drive around from glass box to glass box in Silicon Valley: the Lords of the Cloud love to yammer about turning the world into a better place as they churn out new algorithms, apps, and inventions that, it is claimed, will make our lives easier, healthier, funnier, closer, cooler, longer, and kinder to the planet. And yet there’s a creepy feeling underneath it all, a sense that we’re the mice in their experiments, that they regard us humans as Betamaxes or eight-tracks, old technology that will soon be discarded so that they can get on to enjoying their sleek new world. Many people there have accepted this future: we’ll live to be 150 years old, but we’ll have machine overlords.
Maybe we already have overlords. As Musk slyly told Recode’s annual Code Conference last year in Rancho Palos Verdes, California, we could already be playthings in a simulated-reality world run by an advanced civilization. Reportedly, two Silicon Valley billionaires are working on an algorithm to break us out of the Matrix.
Among the engineers lured by the sweetness of solving the next problem, the prevailing attitude is that empires fall, societies change, and we are marching toward the inevitable phase ahead. They argue not about “whether” but rather about “how close” we are to replicating, and improving on, ourselves. Sam Altman, the 31-year-old president of Y Combinator, the Valley’s top start-up accelerator, believes humanity is on the brink of such invention.
“The hard part of standing on an exponential curve is: when you look backwards, it looks flat, and when you look forward, it looks vertical,” he told me. “And it’s very hard to calibrate how much you are moving because it always looks the same.”
You’d think that anytime Musk, Stephen Hawking, and Bill Gates are all raising the same warning about A.I.—as all of them are—it would be a 10-alarm fire. But, for a long time, the fog of fatalism over the Bay Area was thick. Musk’s crusade was viewed as Sisyphean at best and Luddite at worst. The paradox is this: Many tech oligarchs see everything they are doing to help us, and all their benevolent manifestos, as streetlamps on the road to a future where, as Steve Wozniak says, humans are the family pets.
But Musk is not going gently. He plans on fighting this with every fiber of his carbon-based being. Musk and Altman have founded OpenAI, a billion-dollar nonprofit company, to work for safer artificial intelligence. I sat down with the two men when their new venture had only a handful of young engineers and a makeshift office, an apartment in San Francisco’s Mission District that belongs to Greg Brockman, OpenAI’s 28-year-old co-founder and chief technology officer. When I went back recently, to talk with Brockman and Ilya Sutskever, the company’s 30-year-old research director (and also a co-founder), OpenAI had moved into an airy office nearby with a robot, the usual complement of snacks, and 50 full-time employees. (Another 10 to 30 are on the way.)
Altman, in gray T-shirt and jeans, is all wiry, pale intensity. Musk’s fervor is masked by his diffident manner and rosy countenance. His eyes are green or blue, depending on the light, and his lips are plum red. He has an aura of command while retaining a trace of the gawky, lonely South African teenager who immigrated to Canada by himself at the age of 17.
In Silicon Valley, a lunchtime meeting does not necessarily involve that mundane fuel known as food. Younger coders are too absorbed in algorithms to linger over meals. Some just chug Soylent. Older ones are so obsessed with immortality that sometimes they’re just washing down health pills with almond milk.

At first blush, OpenAI seemed like a bantamweight vanity project, a bunch of brainy kids in a walkup apartment taking on the multi-billion-dollar efforts at Google, Facebook, and other companies that employ the world’s leading A.I. experts. But then, playing a well-heeled David to Goliath is Musk’s specialty, and he always does it with style, and some useful sensationalism.

Let others in Silicon Valley focus on their I.P.O. price and ridding San Francisco of what they regard as its unsightly homeless population. Musk has larger aims, like ending global warming and dying on Mars (just not, he says, on impact).
March 17, 2017

Google’s Director of Engineering, Ray Kurzweil, has predicted that the singularity will happen by 2045, but stressed that it’s nothing to be scared of.
Ray Kurzweil is a big name in the tech world; being Google’s Director of Engineering will do that. But he has also made a name for himself as being remarkably good when it comes to predictions: he claims that, of the 147 predictions he has made since the 1990s, 86 percent have turned out to be correct. He made yet another prediction at last week’s SXSW conference in Austin, Texas, and this one probably trumps the rest.
During a Facebook Live stream with SXSW, Ray Kurzweil expressed his belief that AI will gain human-level intelligence by 2029. “I’ve been consistent that by 2029, computers will have human-level intelligence,” he said.
He continued, “That leads to computers having human intelligence, our putting them inside our brains, connecting them to the cloud, expanding who we are. Today, that’s not just a future scenario,” Kurzweil said. “It’s here, in part, and it’s going to accelerate.”
He then elaborated on his thoughts in a communication to Futurism, predicting that the singularity will happen by 2045. “2029 is the consistent date I have predicted for when an AI will pass a valid Turing test and therefore achieve human levels of intelligence. I have set the date 2045 for the ‘Singularity’ which is when we will multiply our effective intelligence a billion fold by merging with the intelligence we have created,” wrote Ray Kurzweil.
The singularity refers to the point in time when advancements in modern technology, including AI, result in machines becoming smarter than humans. It sounds pretty terrifying, like something out of a science-fiction movie in which the humans take on the machines in an apocalyptic war; Kurzweil, however, doesn’t see it that way.
Ya whatever, I am going to create an A.I. app that will evaporate all the value of fake money circulating the world, and at the same time get rid of all the psychopaths ruling the world. The only things left will be sunshine, water, beans, seeds, and cooperating communities instead of fake money that only exists in the minds of those who believe such nonsense. This will be a moot point anyway because as soon as the singularity hits, when AI becomes conscious, we will be doomed to slavery to the machines. Perhaps they will do a better job than humans. Look at history, look at the news today: there is no humanity left in humanity.
American billionaire Mark Cuban says the world’s first trillionaires will be the ones who invest in artificial intelligence (AI) technology.
“I am telling you, the world’s first trillionaires are going to come from somebody who masters AI and all its derivatives and applies it in ways we never thought of,” the Shark Tank billionaire said on Sunday at the SXSW Conference and Festivals in Austin.
Faster-than-ever computer processors, along with exponentially larger data sets, are currently laying the cornerstone for the rapid expansion of artificial intelligence into new industries like insurance, according to Cuban.
“We will see more technological advances over the next ten years than we have over the last thirty. It’s just going to blow everything away,” he said.
To prove the point, the investor said Google, which had recently started using AI, added $9 billion in revenue as a result.
“Whatever you are studying right now if you are not getting up to speed on deep learning, neural networks, etc., you lose. We are going through the process where software will automate software, automation will automate automation,” said Cuban.
The most wanted jobs and skill sets in the labor market will definitely change, according to the businessman.
“I would not want to be a Certified Public Accountant (CPA) right now. I would not want to be an accountant right now. I would rather be a philosophy major,” he said.
“Knowing how to critically think and assess them from a global perspective I think is going to be more valuable than what we see as exciting careers today which might be programming or CPA or those types of things,” Cuban added.
At the same time, the billionaire warned that low-skilled employees had already been losing jobs to robots and automation. Cuban called for deeper consideration of ways to create good jobs for Americans who have been put out of work by robots and AI.
How are they keeping this dying patient, the economy, together?
Perhaps we are long past the point of an organic, “real” economy. Instead, autotrading and artificial intelligence appear to be auto-investing in the stock market and other parts of the economy in order to keep it afloat.
Meanwhile, the individual will be increasingly barred from using cash and forced onto a digital tracking system.
Matthew McKinley of Texas Shrugged Books explains why he thinks that the system hasn’t crashed yet in spite of overwhelming systemic problems, and plenty of room for crisis.
Basically, everything is rigged, and we are at the mercy of a more organized, data-loving computer.
I am sick of the talking heads who all swore up and down that 2016 would be the year of collapse, and of course, it didn’t happen. All of these talking heads are so arrogant, they can never admit they were wrong. Saying “I was wrong” allows one to stop running down the wrong path, reassess, and find the right track. In my opinion, some form of A.I. and supercomputer is injecting the system with “printed” money so nothing really fails. I am not just talking about banks. I think, at this point, “regular industry” of a certain size is also getting these magic funds. This is almost the only way to explain how nothing has really failed since 2009. This magic show will go on for as long as the rest of the world still covets our fake monopoly money painted green. So, unless a group of countries begins using real money or rejecting the dollar some other way, by bartering locally for example, this system will go on that long. It could be another decade; I just don’t know at this point, but it cannot go on forever. A system based on lies, manipulation, and fakery can’t go on forever.
So, is a supercomputer running the economy? Is that the big secret as to why things stay afloat, ultimately making the system more and more dominated by technology?
Things have been awfully shaky, and as McKinley argues, it only makes sense that a different kind of manipulation is taking place to keep things afloat.
As Ramsay sees it, today’s financial “ecosystem” is set up to give an unfair advantage to those playing a game controlled by high-frequency trading (run by automated computer algorithms) and secret “dark pool” investors. He argues that the market is rigged against retail investors and has questioned the tactics involved in using algorithms to buy and sell shares in fractions of a second.
Financial trading expert and critic Max Keiser called the entire system a hologram, capable of masking deflation and inflation through the feedback loop of these computer algorithms, programmed behind the scenes to manipulate markets for human interests:
In place of reliable price signals (based on the supply and demand of buying and selling) we have price signals that are generated by computer algorithms; i.e., computers executing program trading, high frequency trading and algorithmic trading — that account for up to 70% of the trading activity on the NYSE (or 100%, if you consider any shares traded — not involved in program trading — can’t buck the pricing monopoly of the computers).
Program traders have a virtually infinite line of credit, pay virtually zero commissions, and are backed by banks on Wall St. with strong political connections who are ready to bail out any losing bets these computers make.
Plus, the computers are able to do something normal buyers and sellers can’t do. They can pick a price they want a security to trade at and then fill in all the necessary trading volume needed to get the price of the security to that point. In other words, you can program computers to rig markets.
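The volume-filling tactic Keiser describes can be sketched as a toy simulation. This is a deliberately naive model, not a real trading system: the linear price-impact assumption and every number below are illustrative.

```python
# Toy model of the tactic described above: pick a target price and keep
# submitting buy orders until the quote gets there. Assumes a naive
# fixed-impact model in which each order nudges the quote up by the same
# amount; all figures are hypothetical.
def push_price_to_target(price, target, impact_per_order, order_size):
    """Submit fixed-size buy orders until the quoted price reaches target.

    Returns the final price and the total share volume it took.
    """
    volume = 0
    while price < target:
        price += impact_per_order  # naive model: every order moves the quote
        volume += order_size
    return price, volume

# Push a $100 stock to $101 with 500-share orders that each (by assumption)
# move the quote up by $0.0625: sixteen orders, 8,000 shares in total.
final_price, shares = push_price_to_target(100.0, 101.0, 0.0625, 500)
print(final_price, shares)  # 101.0 8000
```

The point of the sketch is Keiser's: if credit and commissions are effectively no object, the algorithm simply loops until the price is wherever it was told to put it.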
High-frequency trading – done in milliseconds by computers working on behalf of quasi-anonymous, dark pool investors – has already outpaced any/all human trading, certainly for the average schmo.
Things are not what they seem; it isn’t your granddaddy’s financial market.
Real money no longer exists, and the rest of us are trapped inside of a system of debt slavery, fake news and bad info, and rule by fake money.
FILE PHOTO — General Motors Chairman and CEO Mary Barra announces that Chevrolet will begin testing a fleet of Bolt autonomous vehicles in Michigan during a news conference in Detroit, Michigan, U.S., December 15, 2016. REUTERS/Rebecca Cook/File Photo
General Motors Co plans to deploy thousands of self-driving electric cars in test fleets in partnership with ride-sharing affiliate Lyft Inc, beginning in 2018, two sources familiar with the automaker’s plans said this week.
It is expected to be the largest such test of fully autonomous vehicles by any major automaker before 2020, when several companies have said they plan to begin building and deploying such vehicles in higher volumes. Alphabet Inc’s Waymo subsidiary, in comparison, is currently testing about 60 self-driving prototypes in four states.
Most of the specially equipped versions of the Chevrolet Bolt electric vehicle will be used by San Francisco-based Lyft, which will test them in its ride-sharing fleet in several states, one of the sources said. GM has no immediate plans to sell the Bolt AV to individual customers, according to the source.
The sources spoke only on condition of anonymity because GM has not announced its plans yet.
GM executives have said in interviews and investor presentations during the past year they intend to mass-produce autonomous vehicles and deploy them in ride services fleets. However, GM officials have not revealed details of the scale of production, or the timing of the deployment of those vehicles.
In a statement on Friday, GM said: “We do not provide specific details on potential future products or technology rollout plans. We have said that our AV technology will appear in an on-demand ride sharing network application sooner than you might think.”
Lyft declined to comment.
GM’s crosstown rival Ford Motor Co has said it plans to begin building its first self-driving vehicles at a suburban Detroit plant in late 2020, for deployment in on-demand ride sharing fleets in 2021. Fiat Chrysler Automobiles is providing a small number of Chrysler Pacifica minivans to Waymo, which is converting them for self-driving tests.
GM’s Maven car sharing operation likely will be involved with Lyft in developing a commercial ride sharing business around self-driving vehicles such as the Bolt AV, GM executive Mike Ableson told Reuters in a November interview.
“If you assume the cost of these autonomous vehicles, the very early ones, will be six figures, there aren’t very many retail customers that are willing to go out and spend that kind of money,” Ableson said. “But even at that sort of cost, with a ride sharing platform, you can build a business.”
Chief Executive Mary Barra in mid-December said GM would begin building a fully autonomous version of the Bolt EV in early 2017 at its Orion Township plant north of Detroit.
GM is testing about 40 Bolt AVs in San Francisco and Scottsdale, Arizona, and plans to extend testing this year to Detroit, the automaker said in December.