Published on 13 Jun 2019
What do a Yemeni refugee in the queue for food aid, a checkout worker in a British supermarket and a depressed university student have in common? They’re all being sifted by some form of artificial intelligence.
Advanced nations and the world’s biggest companies have thrown billions of dollars behind AI – a set of computing practices, including machine learning, that collect masses of our data, analyse it, and use it to predict what we will do.
Yet cycles of hype and despair are inseparable from the history of AI. Is that clunky robot really about to take my job? How do the non-geeks among us distinguish AI’s promise from the hot air and decide where to focus concern?
Computer scientist Jaron Lanier ought to know. An inventor of virtual reality, Lanier worked with AI pioneer Marvin Minsky, one of the people who coined the term “artificial intelligence” in the 1950s. Lanier insists AI, then and now, is mostly a marketing term. In our interview, he recalled years of debate with Minsky about whether AI was real or a myth:
“At one point, [Minsky] said to me, ‘Look, whatever you think about this, just play along, because it gets us funding, this’ll be great.’ And it’s true, you know … in those days, the military was the principal source of funding for computer science research. And if you went into the funders and you said, ‘We’re going to make these machines smarter than people some day and whoever isn’t on that ride is going to get left behind and big time. So we have to stay ahead on this, and boy! You got funding like crazy.'”
But at worst, he says, AI can be more insidious: a ploy the powerful use to shirk responsibility for the decisions they make. If “computer says, ‘no,'” as the old joke goes, to whom do you complain?
We’d all better find out quickly. Whether or not you agree with Lanier about the term AI, machine learning is getting more sophisticated, and it’s in use by everyone from the tech giants of Silicon Valley to cash-strapped local authorities. From credit to jobs to policing to healthcare, we’re ceding more and more power to algorithms, or rather – to the people behind them.
Many applications of AI are incredible: we could use it to improve wind farms or spot cancer sooner. But that isn’t the only, or even the main, AI trend. The worrying ones involve the assessment and prediction of people – and, in particular, grading for various kinds of risk.
As a human rights lawyer doing “war on terror” cases, I thought a lot about our attitudes to risk. Remember Vice President Dick Cheney’s “one percent doctrine”? He said that any risk – even one percent – of a terror attack had, in the post-9/11 world, to be treated as a certainty.
That was just a complex way of saying that the US would use force based on the barest suspicion about a person. This attitude survived the transition to a new administration – and the shift to a machine learning-driven process in national security, too.
During President Barack Obama’s drone wars, suspicion didn’t even need to be personal – in a “signature strike”, it could be a nameless profile, generated by an algorithm, analyzing where you went and who you talked to on your mobile phone. This was made clear in an unforgettable comment by ex-CIA and NSA director Michael Hayden: “We kill people based on metadata,” he said.
Now a similar logic pervades the modern marketplace, the sense that total certainty and zero risk – that is, zero risk for the class of people Lanier describes as “closest to the biggest computer” – is achievable and desirable. This is what is crucial for us all to understand: AI isn’t just about Google and Facebook targeting you with advertisements. It’s about risk.
The police in Los Angeles believed it was possible to use machine learning to predict crime. London’s Metropolitan Police, and others, want to use it to see your face wherever you go. Credit agencies and insurers want to build a better profile to understand whether you might get heart disease, or drop out of work, or fall behind on payments.
It used to be common to talk about “the digital divide”. This originally meant that the skills and advantages of connected citizens in rich nations would massively outrun poorer citizens without computers and the Internet. The solution: get everyone online and connected. This drove policies like One Laptop Per Child – and it drives newer ones, like Digital ID, the aim to give everyone on Earth a unique identity, in the name of economic participation. And connectivity has, at times, indeed opened people to new ideas and opportunities.
But it also comes at a cost. Today, a new digital divide is opening. One between the knowers and the known. The data miners and optimisers, who optimise, of course, according to their values, and the optimised. The surveillance capitalists, who have the tools and the skills to know more about everyone, all the time, and the world’s citizens.
AI has ushered in a new pecking order, largely set by our proximity to this new computational power. This should be our real concern: how advanced computing could be used to preserve power and privilege.
This is not a dignified future. People are right to be suspicious of this use of AI, and to seek ways to democratise this technology. I use an iPhone and enjoy, on this expensive device, considerably less personalised tracking of me by default than a poorer user of an Android phone.
When I apply for a job in law or journalism, a panel of humans interviews me; not an AI using “expression analysis” as I would experience applying for a job in a Tesco supermarket in the UK. We can do better than to split society into those who can afford privacy and personal human assessment – and everyone else, who gets number-crunched, tagged, and sorted.
Unless we head off what Shoshana Zuboff calls “the substitution of computation for politics” – where decisions are taken outside of a democratic contest, in the grey zone of prediction, scoring, and automation – we risk losing control over our values.
The future of artificial intelligence belongs to us all. The values that get encoded into AI ought to be a matter for public debate and, yes, regulation. Just as we banned certain kinds of discrimination, should certain inferences by AI be taken off the table? Should AI firms have a statutory duty to allow in auditors to test for bias and inequality?
Is a certain platform size (say, Facebook and Google, which drive much AI development now and supply services to over two billion people) just too big – like Big Rail, Big Steel, and Big Oil of the past? Do we need to break up Big Tech?
Everyone has a stake in these questions. Friendly panels and hand-picked corporate “AI ethics boards” won’t cut it. Only by opening up these systems to critical, independent enquiry – and increasing the power of everyone to participate in them – will we build a just future for all.
The views expressed in this article are the author’s own and do not necessarily reflect Al Jazeera’s editorial stance.
Do you have trouble with your computer? Does it freeze or shut down at the most inopportune times? Does it go haywire suddenly or arbitrarily erase important data for no reason? Does it vomit forth long-dead languages in a deep, growling voice? If so, then you may have a serious problem. According to some reports, your computer or even your smartphone may actually be possessed by an evil spirit.
Demonic possession is a very well-documented phenomenon among human beings, and has been for centuries, but what of modern devices such as computers and smartphones? Can these machines serve as some sort of conduit for evil forces? One person who would say yes to that is the Reverend Jim Peasboro, of Savannah, Georgia, in the United States, who has spent a lot of time denouncing computers as powerful tools of the Devil for corrupting our souls. So far, so standard, but Peasboro goes beyond just using the Devil working through computers as a metaphor for their bad influence on our youth, and rather seems to believe that demonic forces can literally possess computers.
Peasboro has written a whole book on this, called The Devil in the Machine: Is Your Computer Possessed by a Demon?, in which he outlines his belief that possession by demons can be experienced by anything with a mind, including humans, animals, and even the processor of your computer. According to Peasboro, “Any PC built after 1985 has the storage capacity to house an evil spirit,” with storage capacity seeming to make a difference, and he asserts that “one in 10 computers in America now houses some type of evil spirit.” He seems to take this all quite literally, and claims that these malicious spirits are responsible for seeping through our screens to exert their influence, which has led to much of the crime and gun violence among young people seen in the country. As to other effects of these malevolent cyber-demons he says:
I learned that many members of my congregation became in touch with a dark force whenever they used their computers. Decent, happily married family men were drawn irresistibly to pornographic websites and forced to witness unspeakable abominations. Housewives who had never expressed an impure thought were entering Internet chat rooms and found themselves spewing foul, debasing language they would never use normally…One woman wept as she confessed to me, ‘I feel when I’m on the computer as if someone else or something else just takes over.’
Surely this seems like it still must be just talking in metaphor. After all, the Internet can indeed be a scary, lawless badland where terrible people do terrible things, we get that, but Peasboro truly seems to think that it is not just the barrage of impure images and the limitless new opportunities to be exposed to violence and pornography online, but rather actual supernatural demons worming their way into our technology to reach out into us. The most bizarre story he tells is of coming face to face with one of these demons while inspecting a computer that was suspected of being possessed. He would say of the experience:
The program began talking directly to me, openly mocked me. It typed out, ‘Preacher, you are a weakling and your God is a damn liar.’ Then the device went haywire and started printing out what looked like gobbledygook…I later had an expert in dead languages examine the text. It turned out to be a stream of obscenities written in a 2,800-year-old Mesopotamian dialect!
Well, that seems like it’s definitely not your typical computer virus. If your computer starts berating you and spewing forth ancient Mesopotamian, then you probably have a bigger problem than just using Windows 10. You may be thinking about what you can do if that is the case, and Peasboro has the answer for you on that, saying that if you suspect that your computer is possessed by the Devil then all you have to do is consult a clergyman, or if that doesn’t work he says “Technicians can replace the hard drive and reinstall the software, getting rid of the wicked spirit permanently.” Hope you have that warranty handy.
Don’t just take Peasboro’s word for it all, though. There are other pretty spooky cases out there that seem to point to real demons lashing out and pushing through our computer screens. One weird account comes from a poster on the site dreamsofdunamis, who says that as she was surfing the net one evening she came across a car ad filled with what seemed to be sinister and cryptic Illuminati symbolism. As she scrolled down, she found more creepy symbols and a line of spectral black and white figures, and at that point she claims to have actually felt a demon physically leap out from the screen and pass through her. This is all strange enough as it is, but whatever presence had come through the computer had apparently gone on to prowl around the house, as her son soon came into the room complaining of having been woken up and attacked by some sort of terrifying entity. She said of what happened next:
It sounded just like the black and white ghostly picture that I had just seen on that web page just moments before. The symbols were the same, so I knew it had to have come from that site.
As I apologized to my child, I realized that I may have to stop surfing the web late at night, for I did not want to disturb my kids sleep like this any more.
I shared this with my child, and told him that as soon as I had sensed the demon come through the screen, I cast it out in the name of Jesus, exited the page, and closed up the computer. I was surprised that the demon did not leave once it had been cast out. I was also surprised (and a bit frustrated,) that the demon attacking him was almost instantaneous; there was no pause or time elapse from when it went through my computer screen to when it entered into my child’s bedroom to attack him.
My child then reminded me, that the demon that I had cast out had probably left, but there were numerous demons that could come through just one demonic doorway. And in this case, viewing the photo was the doorway into our house.
She then goes on to speculate that the demon had come through the image on her computer, and that others might have entered her house as well. This caused her to go about praying to cast out any residual demonic forces lurking within the home. As she did this, she claims to have heard a startling, loud noise, like something wet hitting the floor nearby, and the family looked to see a shadowy, winged figure about four feet in height, which crouched there for a moment before screaming as if in pain and falling backwards, seemingly phasing right through the wall. She goes on to claim that her family has been attacked on several occasions by such supernatural forces coming through their computer screen or even TV. She says of this:
Our first encounter with demons coming out of computer and TV screens, happened several years ago, when one of my kids had clicked on a video that promised the viewer a glimpse of a real alien. We were all sitting there at the kitchen table, with the kids doing their school work, and this one kid had finished early, so as a reward, I told him he could use the computer while he waited for the rest of the kids to finish.
Well, most of the youtube video that he had decided to view, was silent and dark, which caused one to lean in closer to the computer screen, to see if you could see anything. Suddenly, a drawing of an alien’s face flashed upon the screen, and a loud roar came from the speakers, and as everyone there at the table turned to look at the computer screen, a large black ghost-like hook, (reminiscent of Peter Pan’s Captain Hook, but very very black and wraith-like,) reached out through the computer screen and tried to stab itself into my child’s forehead. It glanced off the surface of his skin, and then gave an even louder roar of frustration, once it realized it had failed in its attack. The claw then evaporated back into the computer screen. Laughter was then heard coming from the video, as the perps laughed out loud at their supposed joke.
As you can imagine, we were all left quite shaken, after seeing such a thing. It was a lesson none of us have forgotten!
When most people think of AI’s relative strengths over humans, they think of its convergent intelligence. With superior memory capacity and processing power, computers outperform people at rules-based games, complex calculations, and data storage: chess, advanced math, and Jeopardy. What computers lack, some might say, is any form of imagination, or rule-breaking curiosity—that is, divergence.
But what if that common view is wrong? What if AI’s real comparative advantage over humans is precisely its divergent intelligence—its creative potential? That’s the subject of the latest episode of the podcast Crazy/Genius, produced by Kasia Mychajlowycz and Patricia Yacob.
AI’s divergent potential is one of the hottest subjects in the field. This spring, several dozen computer scientists published an unusual paper on the history of AI. This paper was not a work of research. It was a collection of stories—some ominous, some hilarious—that showed AI shocking its own designers with its ingenuity. Most of the stories involved a kind of AI called machine learning, where programmers give the computer data and a problem to solve without explicit instructions, in the hopes that the algorithm will figure out how to answer it.
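That setup can be sketched in a few lines of Python. This is my own toy illustration, not code from the paper: the program is handed example data and a measure of its error, with no explicit rule, and gradient descent lets it discover the hidden relationship (here, y = 2x) on its own.

```python
# Minimal machine-learning sketch: the program gets data, not rules.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]  # hidden rule the algorithm must discover: y = 2x

w = 0.0    # the single parameter to be learned
lr = 0.01  # learning rate

for _ in range(1000):
    # Gradient of the mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    w -= lr * grad

print(round(w, 3))  # converges very close to 2.0
```

Nothing in the loop says “multiply by two”; the rule emerges from minimising error against the examples, which is the sense in which the algorithm “figures out how to answer it.”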
First, an ominous example. One algorithm was supposed to figure out how to land a virtual airplane with minimal force. But the AI soon discovered that if it crashed the plane, the program would register a force so large that it would overwhelm its own memory and count it as a perfect score. So the AI crashed the plane, over and over again, presumably killing all the virtual people on board. This is the sort of nefarious rules-hacking that makes AI alarmists fear that a sentient AI could ultimately destroy mankind. (To be clear, there is a cavernous gap between a simulator snafu and SkyNet.)
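The paper does not publish the simulator’s code, but the bug is easy to reproduce in miniature. In this hypothetical sketch (the register width and scoring rule are my assumptions, not details from the paper), the scorer stores the impact force in a small fixed-width slot, so a catastrophic crash force wraps around to zero and registers as a perfect landing:

```python
def landing_score(impact_force: int) -> int:
    # Buggy scorer: force is recorded in a simulated 8-bit register,
    # so values overflow modulo 256. A gigantic crash force can wrap
    # to zero and look like a perfectly gentle touchdown.
    recorded = impact_force % 256  # simulated overflow
    return -recorded               # closer to 0 is "better"

gentle = landing_score(3)        # soft landing: score -3
crash = landing_score(256_000)   # 256000 % 256 == 0: a "perfect" crash

print(gentle, crash)
```

An optimiser maximising this score will prefer the crash, exactly the kind of loophole the AI in the story exploited.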
In the classic 1973 heist movie The Sting, two con men—played by Robert Redford and Paul Newman—build a fictitious world in a Depression-era Chicago basement to defraud a corrupt banker. They make an offtrack-betting room, hire actors to ensure the scene is convincing, and even enlist pretend law enforcement to fake-bust their mark. The film is memorable because it is one of the finest movies in the genre, well written and funny, but also because the duo’s work is so meticulously detailed.
The con has changed since then, both short and long. In this age, the online equivalent of The Sting is a phishing site: a fake reality that lives online, set up to capture precious information such as logins and passwords, bank-account numbers, and the other functional secrets of modern life. You don’t get to see these spaces being built, but—like The Sting’s betting room—they can be perfect in every detail. Or they can be thrown together at the last minute like a clapboard set.
This might be the best way to think about phishing: a set built for you, to trick information out of you; built either by con men or, in the case of the recent spear-phishing attack caught and shut down by Microsoft, by spies and agents working for (or with) interfering governments, which seems a bit more sinister than Paul Newman with a jaunty smile and a straw hat.
But that’s the untargeted stuff. Enticing someone to click on a phishing link, in an email or elsewhere, is where a targeted attack, also known as spear-phishing, comes in: learning about someone’s life and habits to know just what email would get them unthinkingly to click. A reality built for one person, or one cohort of people. The con is on, the set is built, and the actors are hired to make the sting, all from a web browser.
Protecting your identity in the lawless wild of the Internet is often considered a lost cause. But an article on SurvivalBlog.com tackled how you can set up a secure virtual machine for browsing online.
Start by getting a gaming laptop with Windows 10 Professional. To make sure the purchase can’t be traced back to you, buy a used unit with cash or have someone else order it from Amazon for you.
Windows 10 Professional includes BitLocker, a full-disk encryption program. To use it, click the Start button, go to Settings, and enter “Manage BitLocker” in the search bar. Use BitLocker to encrypt your C: drive; it will walk you through the process.
Create a secure password for your laptop. Go for a sentence that has 21 characters and contains punctuation. Make it something you can remember easily.
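As a rough sanity check, the length-and-punctuation rule of thumb above can be expressed in a few lines of Python. This is my own sketch, not part of the original article, and it only checks the two stated criteria; it is not a full password-strength meter.

```python
import string

def passphrase_ok(phrase: str, min_len: int = 21) -> bool:
    """Rough check: at least min_len characters and some punctuation."""
    long_enough = len(phrase) >= min_len
    has_punct = any(ch in string.punctuation for ch in phrase)
    return long_enough and has_punct

print(passphrase_ok("correct horse battery"))     # 21 chars, no punctuation
print(passphrase_ok("My dog ate 3 whole pies!"))  # 24 chars, has "!"
```

A memorable sentence with a bit of punctuation passes; a bare word list of the right length does not.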
After you set your password, BitLocker will ask whether you want a recovery key for recovering your drive. That’s up to you; if you create one, keep it somewhere safe.
The next step is to set up a virtual private network (VPN), which encrypts the data you exchange with a VPN provider. A hacker will only see encrypted data.
Make sure you pick a VPN provider that can be trusted to not sell your data. Two good choices are Proton VPN and Private Internet Access VPN.
Now you need to get a secure email service. Proton Mail is a good choice and free, to boot. But there are others you may want to consider. Do be warned that many secure mail services support themselves by advertising pornography.
After getting your VPN and secure email, update the defenses of your laptop. Windows 10 comes with its own antivirus, Windows Defender. Go to Settings and look up “check for updates.” Bring Defender and everything else up to date.
If you feel you need more protection, consider getting ESET or F-Secure antivirus programs. Always remember that these programs cannot protect you from every danger, so do not do risky things.
All this effort is intended to set up your laptop as the host for a virtual machine, a computer that is made of pure software. A virtual machine is much more secure than a physical computer. It is also much easier to replace because new ones can simply be downloaded or set up.
Download two pieces of virtual machine software: VirtualBox for Windows, and Tails for Linux. As a bonus, VirtualBox is free, and installing it is a matter of clicking “Next” a few times.
In VirtualBox, select the “MS Edge Windows 10 Stable” virtual machine for download. The download page lists the default user name and password; you will need these later.
Go to your Downloads folder and extract the contents of the file. Look for the .ovf file, which will import the virtual machine into VirtualBox. Double-click this file and, when the computer prompts you, pick “Import.”
You will need to take “snapshots” of your new virtual machine so that you can restore it to those earlier states. Take a snapshot immediately after you import it.
Select the virtual machine and pick “Snapshots.” Click the camera icon and name the snapshot “Baseline.” You can restore the virtual machine to this state whenever you need to.
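If you prefer the command line, VirtualBox ships with the VBoxManage tool, which can take and restore snapshots without the GUI. The VM name below is an assumption; substitute whatever name your imported machine shows in VirtualBox.

```shell
# Take a baseline snapshot of the freshly imported VM
# ("MSEdge - Win10" is an example name -- use your VM's actual name)
VBoxManage snapshot "MSEdge - Win10" take "Baseline" \
    --description "Clean state right after import"

# Later, roll the VM back to that clean state (power it off first)
VBoxManage snapshot "MSEdge - Win10" restore "Baseline"
```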
Start the virtual machine. Enter the default user name and password. Then open the “Input” menu, pick “Keyboard,” and choose “Insert Ctrl-Alt-Del.” That brings up an option to change the password. Give it a strong one.
Find more ways to shield your identity every time you go online at Glitch.news.