The ability to hide in plain sight isn’t just useful when you’re running from the law or hiding out in witness protection: It can also be a useful survival skill in disaster situations. Known as the “gray man theory,” blending in during a catastrophe is an art in and of itself.
A simple disguise
Dr. Eilidh Noyes, lecturer in cognitive psychology at the University of Huddersfield and leader of the disguise study, teamed up with Dr. Rob Jenkins, from the University of York, to investigate how disguises affect facial recognition.
To conduct the research, 26 volunteer models were photographed three ways: As themselves, disguised to alter their appearance (evasion), and disguised as another volunteer (impersonation). The models were allowed to disguise themselves by any means available. Changes to hairstyle, hair color, facial hair and makeup were all permitted — but items like hats, scarves and other accessories not allowed in passport photos were excluded.
The scientists ultimately found that simple disguises like these reduced facial recognition accuracy by 30 percent. Evasion disguises worked slightly better than impersonation disguises.
Commenting on the research, Dr. Noyes said: “Our models used inexpensive simple disguises and there were no make-up artists involved. If people want to, it’s very easy to change their appearance.”
“Even simple disguise reduces the accuracy of human face recognition. Next, we will test how computer face recognition algorithms fare on the same tasks,” the study leader said.
“Most reports of human face recognition ability consider performance for ‘cooperative’ face images, meaning that the photographed person makes no attempt to change their appearance. Therefore, we might have previously overestimated real-world face recognition performance,” Noyes commented. Based on their findings, it certainly seems that Dr. Noyes’ hypothesis is correct.
There are many reasons why a person might want to hide in plain sight, and not all of them are criminal. More to the point, there are ways to become “invisible” without even needing to wear a disguise, and that ability could one day save your life.
Gray Man Theory
Getting “lost in the crowd” isn’t always a bad thing. Writing for The Bug Out Bag Guide, Chris Ruiz reports that the “gray man theory” can be extremely useful in a disaster situation — especially if you are in a crowded place when a catastrophic event occurs.
As Ruiz explains, when something terrible happens, everyone around you is going to have the same goal: Get to safety. But not everyone is going to have a plan in place. Being able to move through a crowd and act on your bug out plan without drawing unnecessary attention to yourself (and any gear you might be carrying) is essential. Otherwise, you will end up with a target on your back.
How you look is important when you’re trying to blend in: Avoid bright colors and distinctive branding. These things will not only get you noticed in the moment; they also make you easy to spot and track afterward. Avoiding tactical-looking clothing or camouflage is also advisable (unless you’re going hunting). Ruiz also recommends carrying at least one item that can help disguise your appearance if need be.
Keeping any gear you have on you concealed is also very important. Ruiz suggests selecting a backpack or handbag that will blend in with commuters. This way, when disaster strikes, you won’t automatically be singled out as the guy carrying survival gear.
Learn more about bugging out at Preparedness.news.
Here’s a small sample of the current headlines related to facial recognition:
- Most Americans Trust Facial Recognition Technology – But Not At The Airport
- Terrifying AI Matches DNA to Facial Recognition Databases
- Why Is Facial Recognition Important? 3 Signs It’s Too Big to Ignore Any Longer
- Facial recognition smart glasses could make public surveillance discreet and ubiquitous
Even the Washington Post published a warning titled “Don’t smile for surveillance: Why airport face scans are a privacy trap.”
Questions surrounding the emerging technology have reached enough of a tipping point that just this week, House Democrats questioned the Department of Homeland Security over the use of facial recognition tech on U.S. citizens. The Hill reported that more than 20 House Democrats sent a letter on Friday to the DHS over Customs and Border Protection’s (CBP) use of facial recognition technology at U.S. airports. CBP says it is rolling out the facial recognition program at a number of airports under a congressional mandate and an executive order from President Donald Trump. Lawmakers say the program was supposed to focus on foreign passengers, not Americans.
The group of lawmakers wrote:
“We write to express concerns about reports that the U.S. Customs and Border Protection (CBP) is using facial recognition technology to scan American citizens under the Biometric Exit Program.”
The letter to DHS comes shortly after a representative of the Government Accountability Office (GAO) told the House Oversight and Reform Committee that the FBI has access to hundreds of millions of photos that are used for facial recognition searches. Gretta Goodwin of the GAO said the FBI uses expansive databases of photos—including driver’s licenses, passports, and mugshots—to search for potential criminals. Goodwin noted that the FBI has a database of 36 million mugshots and access to more than 600 million photos, including access to 21 state driver’s license databases.
Rep. Jim Jordan of Ohio reminded Ms. Goodwin that the FBI has access to more photos than there are people in the country. “There are only 330 million people in the country,” Jordan stated.
The TSA was also questioned about its use of facial recognition at airports. Austin Gould, the TSA’s assistant administrator for Requirements and Capabilities Analysis, said the facial recognition program has been helpful for travelers. However, critics say the potential benefits of saved time and reduced passenger congestion should not outweigh the greater risk to privacy. The TSA plans to have facial recognition tech at the top 20 airports for international travelers by 2021 and at all airports by 2023. The TSA has also previously expressed its desire to scan the face of every single American who enters an airport.
The pushback against facial recognition—and biometric technology in general—has moved beyond words in some areas. Most recently, San Francisco became the first city to ban government use of facial recognition. Following that success, California lawmakers are considering AB 1215, a bill that would extend the ban across the entire state. The Electronic Frontier Foundation (EFF) spoke in favor of the bill, stating that the technology has been shown to have disproportionately high error rates for women, the elderly, and people of color. The EFF also warned about the dangers of combining face recognition technology with police body cameras.
The editorial board of the Guardian also recently spoke out about the privacy threats, calling the technology “especially inaccurate and prone to bias.” The board noted that a recent test of Amazon’s facial recognition software by the American Civil Liberties Union falsely identified 28 members of Congress as known criminals. Although the technology is currently dangerous due to its inaccuracy, the Guardian warns:
“It may be too late to stop the collection of this data. But the law must ensure that it is not stored and refined in ways that will harm the innocent and, as Liberty warns, slowly poison our public life.”
It’s clear that the debate over the benefits and threats of facial recognition technology is not going anywhere anytime soon. It’s up to us as individuals to educate ourselves and inform our peers about the threats to privacy and freedom that become more apparent every day.
Facial recognition advertising as depicted in Minority Report is coming to your local stores sooner than you thought. The technology is going so mainstream that it will soon be part of a new social contract.
Published on 13 Jun 2019
What do a Yemeni refugee in the queue for food aid, a checkout worker in a British supermarket and a depressed university student have in common? They’re all being sifted by some form of artificial intelligence.
Advanced nations and the world’s biggest companies have thrown billions of dollars behind AI – a set of computing practices, including machine learning, that collate masses of our data, analyse them, and use them to predict what we will do.
Yet cycles of hype and despair are inseparable from the history of AI. Is that clunky robot really about to take my job? How do the non-geeks among us distinguish AI’s promise from the hot air and decide where to focus concern?
Computer scientist Jaron Lanier ought to know. An inventor of virtual reality, Lanier worked with AI pioneer Marvin Minsky, one of the people who coined the term “artificial intelligence” in the 1950s. Lanier insists AI, then and now, is mostly a marketing term. In our interview, he recalled years of debate with Minsky about whether AI was real or a myth:
“At one point, [Minsky] said to me, ‘Look, whatever you think about this, just play along, because it gets us funding, this’ll be great.’ And it’s true, you know … in those days, the military was the principal source of funding for computer science research. And if you went into the funders and you said, ‘We’re going to make these machines smarter than people some day and whoever isn’t on that ride is going to get left behind and big time. So we have to stay ahead on this, and boy! You got funding like crazy.'”
But at worst, he says, AI can be more insidious: a ploy the powerful use to shirk responsibility for the decisions they make. If “computer says, ‘no,'” as the old joke goes, to whom do you complain?
We’d all better find out quickly. Whether or not you agree with Lanier about the term AI, machine learning is getting more sophisticated, and it’s in use by everyone from the tech giants of Silicon Valley to cash-strapped local authorities. From credit to jobs to policing to healthcare, we’re ceding more and more power to algorithms, or rather – to the people behind them.
Many applications of AI are incredible: we could use it to improve wind farms or spot cancer sooner. But that isn’t the only, or even the main, AI trend. The worrying ones involve the assessment and prediction of people – and, in particular, grading for various kinds of risk.
As a human rights lawyer doing “war on terror” cases, I thought a lot about our attitudes to risk. Remember Vice President Dick Cheney’s “one percent doctrine”? He said that any risk – even one percent – of a terror attack would, in the post-9/11 world, be treated as a certainty.
That was just a complex way of saying that the US would use force based on the barest suspicion about a person. This attitude survived the transition to a new administration – and the shift to a machine learning-driven process in national security, too.
During President Barack Obama’s drone wars, suspicion didn’t even need to be personal – in a “signature strike”, it could be a nameless profile, generated by an algorithm, analyzing where you went and who you talked to on your mobile phone. This was made clear in an unforgettable comment by ex-CIA and NSA director Michael Hayden: “We kill people based on metadata,” he said.
Now a similar logic pervades the modern marketplace, the sense that total certainty and zero risk – that is, zero risk for the class of people Lanier describes as “closest to the biggest computer” – is achievable and desirable. This is what is crucial for us all to understand: AI isn’t just about Google and Facebook targeting you with advertisements. It’s about risk.
The police in Los Angeles believed it was possible to use machine learning to predict crime. London’s Metropolitan Police, and others, want to use it to see your face wherever you go. Credit agencies and insurers want to build a better profile to understand whether you might get heart disease, or drop out of work, or fall behind on payments.
It used to be common to talk about “the digital divide”. This originally meant that the skills and advantages of connected citizens in rich nations would massively outrun those of poorer citizens without computers and the Internet. The solution: get everyone online and connected. This drove policies like One Laptop Per Child – and it drives newer ones, like Digital ID, which aims to give everyone on Earth a unique identity in the name of economic participation. And connectivity has, at times, indeed opened people to new ideas and opportunities.
But it also comes at a cost. Today, a new digital divide is opening. One between the knowers and the known. The data miners and optimisers, who optimise, of course, according to their values, and the optimised. The surveillance capitalists, who have the tools and the skills to know more about everyone, all the time, and the world’s citizens.
AI has ushered in a new pecking order, largely set by our proximity to this new computational power. This should be our real concern: how advanced computing could be used to preserve power and privilege.
This is not a dignified future. People are right to be suspicious of this use of AI, and to seek ways to democratise this technology. I use an iPhone and enjoy, on this expensive device, considerably less personalised tracking of me by default than a poorer user of an Android phone.
When I apply for a job in law or journalism, a panel of humans interviews me; not an AI using “expression analysis” as I would experience applying for a job in a Tesco supermarket in the UK. We can do better than to split society into those who can afford privacy and personal human assessment – and everyone else, who gets number-crunched, tagged, and sorted.
Unless we head off what Shoshana Zuboff calls “the substitution of computation for politics” – where decisions are taken outside of a democratic contest, in the grey zone of prediction, scoring, and automation – we risk losing control over our values.
The future of artificial intelligence belongs to us all. The values that get encoded into AI ought to be a matter for public debate and, yes, regulation. Just as we banned certain kinds of discrimination, should certain inferences by AI be taken off the table? Should AI firms have a statutory duty to allow in auditors to test for bias and inequality?
Is a certain platform size (say, Facebook and Google, which drive much AI development now and supply services to over two billion people) just too big – like Big Rail, Big Steel, and Big Oil of the past? Do we need to break up Big Tech?
Everyone has a stake in these questions. Friendly panels and hand-picked corporate “AI ethics boards” won’t cut it. Only by opening up these systems to critical, independent enquiry – and increasing the power of everyone to participate in them – will we build a just future for all.
The views expressed in this article are the author’s own and do not necessarily reflect Al Jazeera’s editorial stance.
by JD Heyes
September 14, 2017
Soon it will be impossible to cover up your face and hide your identity while engaging in criminal activity, thanks to up-and-coming facial recognition technology.
The bad news is, you won’t be able to hide in plain view either, just to protect your privacy.
As reported by the UK’s Daily Mail, the technology under development has already progressed far enough to virtually “unmask” people in most situations. The Disguised Face Identification (DFI) system employs an AI network to map facial features hidden behind scarves, headgear, and even fake beards and mustaches in order to identify people.
No doubt the system can be integrated with criminal databases so that flagging of wanted people can be done instantaneously; in fact, such systems already exist for automobiles. As Natural News has reported as far back as 2013, police departments have been using license plate readers that allow cops to instantly identify people wanted for various crimes as they drive by their vehicles.
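To make that instant-flagging idea concrete, here is a toy Python sketch of the lookup logic such systems presumably rely on; the plate strings and the hot list are invented for illustration and are not any real agency’s data or code.

```python
# Toy sketch: plates read by a camera are checked against a "hot list".
# A set gives constant-time membership tests, which is what makes the
# flagging effectively instantaneous as cars drive by.
wanted_plates = {"7ABC123", "4XYZ987"}  # hypothetical flagged plates

def scan_stream(plates):
    """Flag any plate that appears on the hot list."""
    for plate in plates:
        if plate in wanted_plates:
            print(f"ALERT: flagged vehicle {plate}")

# Example: a short stream of reads from one cruiser-mounted camera.
scan_stream(["5DEF456", "7ABC123", "8GHI789"])
```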
Police aren’t concerned about privacy and the incredible amount of hackable data being collected by the readers. Rather, they’re more concerned with revenues: As the Boston Globe reported in May 2013, one $24,000 plate reader paid for itself in just 11 days. “We located more uninsured vehicles in our first month . . . using [the camera] in one cruiser than the entire department did the whole year before,” said Boston PD Sgt. Robert Griffin.
Now, authorities want to take instant database identification a big step further with new facial recognition technology, which will put a quick end to remaining anonymous in public.
“This is very interesting for law enforcement and other organizations that want to capture criminals,” said Amarjot Singh, a University of Cambridge researcher who helped develop the DFI system, in an interview with Inverse.
Here’s how the technology works: DFI utilizes a deep-learning neural network that the research team ‘trained’ by feeding it images of test subjects wearing several different kinds of disguises. The training images also included simple and complex backgrounds, challenging the AI to identify disguised features under a variety of scenarios.
Notes the Daily Mail:
AI identifies people by measuring the distances and angles between 14 facial points — ten for the eyes, three for the lips, and one for the nose.
It uses these readings to estimate the hidden facial structure, and then compares this with learned images to unveil the person’s true identity.
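To give a rough sense of what measuring distances and angles between 14 facial points might look like in code, here is an illustrative Python sketch; the landmark format, the signature construction, and the gallery comparison are my own simplifications, not the DFI team’s actual implementation.

```python
# Illustrative sketch: turn 14 (x, y) facial landmarks into a vector of
# pairwise distances and angles, then match it against stored identities.
import math
from itertools import combinations

def geometry_signature(points):
    """points: 14 (x, y) landmark coordinates from some keypoint detector."""
    signature = []
    for (x1, y1), (x2, y2) in combinations(points, 2):
        signature.append(math.hypot(x2 - x1, y2 - y1))  # distance
        signature.append(math.atan2(y2 - y1, x2 - x1))  # angle
    return signature

def closest_identity(signature, gallery):
    """gallery: dict of name -> stored signature of the same length."""
    def sq_dist(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    return min(gallery, key=lambda name: sq_dist(signature, gallery[name]))
```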
Good, you say. In this age of masked Antifa terrorists, it will be good for police to have the technology to identify who is actually responsible for attacking other people, burning cars, and destroying businesses. (Related: America’s universities now becoming terrorist training hubs for Antifa.)
But what about when the technology misidentifies someone as being guilty of committing a crime or act of violence? Because that’s bound to happen; no technology is 100-percent effective or, in this case, foolproof.
Also, there is so much potential for abuse with this technology. If it is deployed widely, authorities will literally be able to track you no matter where you go.
Plus, this technology dramatically alters the relationship between American citizens and all levels of government. Our founders and subsequent generations established a system of justice that presumes innocence until guilt is proven; technologies like DFI and license plate readers are flipping that paradigm to one of “presumed guilty until authorities can prove you are innocent with a wash through government criminal databases.”
And, of course, there is the dramatic loss of privacy and the threat in the Internet age of having more of your personal information stolen from yet another database.
“…[T]his is maybe the third or fourth most worrying ML paper I’ve seen recently re: AI and emergent authoritarianism. Historical crossroads,” tweeted Dr. Zeynep Tufekci, a sociologist at the University of North Carolina, in posting the research to Twitter.
“Yes, we can & should nitpick this and all papers but the trend is clear. Ever-increasing new capability that will serve authoritarians well,” he added.
J.D. Heyes is a senior writer for NaturalNews.com and NewsTarget.com, as well as editor of The National Sentinel.
A German artist has revealed a new technology that he hopes will make it easier for individuals to avoid the growing Surveillance State.
Adam Harvey is an artist and “technologist” based in Berlin, Germany, who is well known for using his artistic prowess to create art and fashion that can disrupt facial recognition technology. Harvey has been profiled in the past for his elaborate ideas on styling hair and makeup in ways that prevent faces from being recognized by surveillance cameras outfitted with facial recognition software.
The Hyperface project involves printing patterns onto clothing or textiles, which then appear to have eyes, mouths and other features that a computer can interpret as a face.
Speaking at the Chaos Communications Congress hacking conference in Hamburg, Harvey said: ‘As I’ve looked at in an earlier project, you can change the way you appear, but, in camouflage you can think of the figure and the ground relationship. There’s also an opportunity to modify the ‘ground’, the things that appear next to you, around you, and that can also modify the computer vision confidence score.’
According to Harvey, the Hyperface project will work by “overloading an algorithm with what it wants, oversaturating an area with faces to divert the gaze of the computer vision algorithm.” To do this he is working with Hyphen-Labs to create patterns that can be worn or wrapped over an object or person. “It can be used to modify the environment around you, whether it’s someone next to you, whether you’re wearing it, maybe around your head or in a new way,” Harvey told The Guardian.
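As a rough illustration of that oversaturation idea, the Python sketch below counts face detections returned by an off-the-shelf OpenCV detector for two scenes; the file names are placeholders, and this is my own toy demonstration rather than Hyperface code.

```python
# Count candidate face detections in a plain scene versus one draped in
# face-like, Hyperface-style patterns: the more candidates the detector
# returns, the more its "gaze" is diluted away from any real face.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def count_faces(path):
    gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
    boxes = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return len(boxes)

print("plain scene:    ", count_faces("scene_plain.jpg"))      # e.g. 1
print("patterned scene:", count_faces("scene_patterned.jpg"))  # e.g. dozens
```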
Harvey also discussed how certain researchers are studying facial characteristics and movements in order to identify potential criminals or to infer a person’s age, gender, and mood. Interestingly, Harvey suggested that the attempt to use facial recognition software to study the human species in such a detailed way is reminiscent of the American eugenics movement of the early 20th century. That movement went on to be an important inspiration for Adolf Hitler’s philosophy.
“A lot of other researchers are looking at how to take that very small data and turn it into insights that can be used for marketing,” Harvey said. “What all this reminds me of is Francis Galton and eugenics. The real criminal, in these cases, are people who are perpetrating this idea, not the people who are being looked at.”
Harvey’s previous project focused on hair and makeup to deflect the watchful eyes of Big Brother and Sister. The belief was that the odd patterns and colors would work in the same fashion Harvey hopes his new cloth patterns will. Harvey’s website, CV Dazzle, explained the process:
OpenCV is one of the most widely used face detectors. This algorithm performs best on frontal face imagery and excels at computational speed. It’s ideal for real-time face detection and is used widely in mobile phone apps, web apps, robotics, and scientific research. OpenCV’s detector is based on the Viola-Jones algorithm, a cascading set of features that scans across an image at increasing sizes. By understanding how the algorithm detects a face, the process of designing an “anti-face” becomes more intuitive.
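For readers who want to see that scanning process in action, here is a minimal Python sketch using OpenCV’s stock Haar cascade; the image file names are placeholders and the parameter values are illustrative, not anything CV Dazzle prescribes.

```python
# Minimal Viola-Jones face detection with OpenCV's bundled Haar cascade.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("portrait.jpg")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# scaleFactor sets how much the search window grows between passes
# (the "increasing sizes"); minNeighbors sets how many overlapping
# hits are required before a detection is kept.
faces = cascade.detectMultiScale(gray, scaleFactor=1.05, minNeighbors=6)

for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
cv2.imwrite("portrait_detected.jpg", img)
```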
Derrick Broze is an investigative journalist and liberty activist. He is the Lead Investigative Reporter for ActivistPost.com and the founder of TheConsciousResistance.com. Follow him on Twitter. Derrick is the author of three books: The Conscious Resistance: Reflections on Anarchy and Spirituality; Finding Freedom in an Age of Confusion, Vol. 1; and Finding Freedom in an Age of Confusion, Vol. 2.
Derrick is available for interviews. Please contact Derrick@activistpost.com
Image Credit: Kaspersky