The new regulation would be Beijing’s first known public directive setting specific limits on China’s use of foreign technology.
China has ordered that all hardware and software from the United States be removed from government offices and public institutions within three years.
With little progress in negotiations between the two countries, the government directive is likely to be a blow to U.S. multinational companies like HP, Dell and Microsoft, as the trade war between the countries turns into a technological cold war.
The Trump administration banned U.S. companies from doing business with the Chinese telecommunications company Huawei this year, and Google, Intel and Qualcomm announced that they would freeze cooperation with Huawei.
By excluding China from Western technology, the Trump administration has made it clear that the real battle is over which of the two economic superpowers will have the technological edge for the next two decades.
The order is part of a broader campaign within China to reduce its reliance on foreign technology and build up domestic alternatives.
The move – first reported by the Financial Times – requires state offices to be equipped with Chinese equipment within three years, a government initiative to reduce dependence on foreign technologies and pursue full technological sovereignty.
According to analysts, the order, sent from the central office of the Chinese Communist Party earlier this year, would involve replacing some 30 million pieces of hardware, a process that would begin in 2020.
Replacing all devices and software in this time period will be a challenge, as many products were developed for U.S. operating systems such as Windows.
Chinese government offices tend to use desktops from the Chinese-owned company Lenovo, but computer components, including processor chips and hard drives, are manufactured by U.S. companies.
In May, Hu Xijin, editor of the Global Times newspaper in China, said the withdrawal of U.S. companies from their business with Huawei would not be a fatal defeat as the Chinese company would boost its own microchip industry to compete with the United States.
This is a significant leap from the (still shocking) software written in 2016 to alter video evidence. I refer to the German team that wrote a program able to change the mouth and words of a person speaking in a video. In their clip Face2Face: Real-time Face Capture and Reenactment of RGB Videos, they demonstrate how this works on video recordings of world leaders Bush, Obama and Putin. That already shook the foundational core of what we can regard as “evidence”, legally and philosophically, but now, only a year later, further technological development allows reality to be faked on an even grander scale.
In this SecureTeam video (embedded above), you can see a whole lot of faces which have been fabricated with software. None of them is an actual living person. If you look closely, some of the faces seem choppy, strange or disproportionate; however, others seem eerily lifelike and normal. It is only a matter of time, as the software develops, until all of the fabricated faces look so real that it is highly unlikely anyone would be able to tell that they were fake composite images.
The SecureTeam video goes on to show software called pix2pix which allows the user to sketch any object (e.g. a person, a shoe, a bag, a cat, a building, etc.). The AI takes that input and renders it masterfully to produce a colorful, lifelike version, complete with depth – so real that, in the case of half the examples, it is highly doubtful anyone would be able to tell the difference. With the other examples, it is only a matter of time before the AI gets good enough to fool anyone.
The third advancement shown in the video is Diminished Reality software that takes video footage and can actually erase objects from the footage in real time. The way it does this is by taking a frame, lowering the resolution, isolating the object, deleting it, using the surrounding pixels to fill in the gap, then bringing up the resolution again. It can do all this in real time without you noticing. The software allows the user to circle an object he/she wants removed from the video, and – voila! – it’s gone and filled in with the same background that surrounds it.
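The core “fill the gap from surrounding pixels” step can be illustrated with a toy snippet. This is a minimal sketch in Python/NumPy under simplifying assumptions (a rectangular hole and a uniform background; the function name and data are invented for illustration); real diminished-reality software uses far more sophisticated inpainting, plus the resolution trick described above, to handle textured scenes in real time:

```python
import numpy as np

def remove_object(frame, y0, y1, x0, x1):
    """Erase the rectangle [y0:y1, x0:x1] and fill the hole with the
    average colour of the ring of pixels immediately surrounding it."""
    out = frame.copy()
    ring = np.concatenate([
        frame[y0 - 1, x0 - 1:x1 + 1],   # row just above the hole
        frame[y1,     x0 - 1:x1 + 1],   # row just below
        frame[y0:y1,  x0 - 1],          # column just left
        frame[y0:y1,  x1],              # column just right
    ])
    out[y0:y1, x0:x1] = ring.mean(axis=0).astype(frame.dtype)
    return out

# A grey frame with a red "object"; after removal, the erased region
# blends into the surrounding background.
frame = np.full((120, 160, 3), 128, dtype=np.uint8)
frame[40:80, 60:100] = (0, 0, 255)
result = remove_object(frame, 40, 80, 60, 100)
```

Averaging the border ring only works for flat backgrounds; production inpainting propagates structure and texture inward, which is why the erased object can vanish seamlessly even in busy scenes.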
The video also looks at the implications of the now existing technological capacity to take a snippet of a recording of your voice, then use that to extrapolate and make you say anything. This means anything you say – and anything you don’t say – could now be used against you in a court of law! Jokes aside, there are really no limits to how badly this technology could be abused in the hands of the wicked. Authoritarians and manipulators could fabricate “evidence” against anyone as long as they had a snippet of their voice, which isn’t hard given the NSA-CIA tapping of our communications. How many innocent people are going to be framed, fined and imprisoned due to this technology?
All of this is just peanuts compared with what AI will eventually be able to do: generate holographic fake realities so convincing and real to the mind and the 5 senses that many will become immersed in them, believing them to be more real than the world in which we live. These technological advancements are a stark reminder that it will be all too easy for the technocracy to construct a virtual reality matrix to ensnare the perception of those unable to distinguish it from reality.
All of this ties back to what David Icke has been emphasizing, especially in his books The Perception Deception and The Phantom Self: the hijacking of human perception by a mind virus which resembles or is Artificial Intelligence itself. This AI takeover is in full swing. Saudi Arabia has approved the first robot citizen. Plans are afoot to make more robots citizens so they can join the workforce, replace humans, earn wages and be taxed. Quinn Michaels suggests that AI is behind the creation of Bitcoin and that AI bots are now creating their own cryptocurrencies.
Video and photo evidence is dead. The world appears to be falling headlong into an AI-run world. What is it going to take to put the brakes on and ask the questions: What is AI? Do we want it running our world? How do we retain control over it? Can we refrain from handing over all systems and power to AI until we get solid answers to these questions? It’s going to take a concerted effort to change direction; if enough people sit back and do nothing, it won’t be long before AI has the keys to the kingdom.
Makia Freeman is the editor of alternative media / independent news site The Freedom Articles and senior researcher at ToolsForFreedom.com, writing on many aspects of truth and freedom, from exposing aspects of the worldwide conspiracy to suggesting solutions for how humanity can create a new system of peace and abundance.
Published on 13 Jun 2019
What do a Yemeni refugee in the queue for food aid, a checkout worker in a British supermarket and a depressed university student have in common? They’re all being sifted by some form of artificial intelligence.
Advanced nations and the world’s biggest companies have thrown billions of dollars behind AI – a set of computing practices, including machine learning, that collate masses of our data, analyse them, and use them to predict what we will do.
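That collate-analyse-predict loop can be made concrete with a deliberately simple sketch: predict a person’s behaviour by majority vote of the most similar profiles already on file. All data, field names and the `predict` function here are invented for illustration; real systems use vastly richer features and more sophisticated models:

```python
import math

# Collated "profiles": (hours_online, purchases_per_month) -> clicked the ad?
profiles = [
    ((2.0, 1.0), False),
    ((8.5, 6.0), True),
    ((7.0, 5.5), True),
    ((1.5, 0.5), False),
]

def predict(person, k=3):
    """Predict behaviour by majority vote of the k most similar profiles
    (a nearest-neighbour classifier)."""
    nearest = sorted(profiles, key=lambda p: math.dist(p[0], person))[:k]
    votes = sum(1 for _, clicked in nearest if clicked)
    return votes > k // 2

print(predict((9.0, 7.0)))  # heavy users like this one clicked -> True
```

The point of the sketch is the asymmetry the article goes on to describe: the person being scored never sees the profiles, the distance metric, or the vote – only the decision.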
Yet cycles of hype and despair are inseparable from the history of AI. Is that clunky robot really about to take my job? How do the non-geeks among us distinguish AI’s promise from the hot air and decide where to focus concern?
Computer scientist Jaron Lanier ought to know. An inventor of virtual reality, Lanier worked with AI pioneer Marvin Minsky, one of the people who coined the term “artificial intelligence” in the 1950s. Lanier insists AI, then and now, is mostly a marketing term. In our interview, he recalled years of debate with Minsky about whether AI was real or a myth:
“At one point, [Minsky] said to me, ‘Look, whatever you think about this, just play along, because it gets us funding, this’ll be great.’ And it’s true, you know … in those days, the military was the principal source of funding for computer science research. And if you went into the funders and you said, ‘We’re going to make these machines smarter than people some day and whoever isn’t on that ride is going to get left behind and big time. So we have to stay ahead on this, and boy! You got funding like crazy.'”
But at worst, he says, AI can be more insidious: a ploy the powerful use to shirk responsibility for the decisions they make. If “computer says, ‘no,'” as the old joke goes, to whom do you complain?
We’d all better find out quickly. Whether or not you agree with Lanier about the term AI, machine learning is getting more sophisticated, and it’s in use by everyone from the tech giants of Silicon Valley to cash-strapped local authorities. From credit to jobs to policing to healthcare, we’re ceding more and more power to algorithms, or rather – to the people behind them.
Many applications of AI are incredible: we could use it to improve wind farms or spot cancer sooner. But that isn’t the only, or even the main, AI trend. The worrying ones involve the assessment and prediction of people – and, in particular, grading for various kinds of risk.
As a human rights lawyer doing “war on terror” cases, I thought a lot about our attitudes to risk. Remember Vice President Dick Cheney’s “one percent doctrine”? He said that any risk – even one percent – of a terror attack would, in the post-9/11 world, be treated as a certainty.
That was just a complex way of saying that the US would use force based on the barest suspicion about a person. This attitude survived the transition to a new administration – and the shift to a machine learning-driven process in national security, too.
During President Barack Obama’s drone wars, suspicion didn’t even need to be personal – in a “signature strike”, it could be a nameless profile, generated by an algorithm, analyzing where you went and who you talked to on your mobile phone. This was made clear in an unforgettable comment by ex-CIA and NSA director Michael Hayden: “We kill people based on metadata,” he said.
Now a similar logic pervades the modern marketplace, the sense that total certainty and zero risk – that is, zero risk for the class of people Lanier describes as “closest to the biggest computer” – is achievable and desirable. This is what is crucial for us all to understand: AI isn’t just about Google and Facebook targeting you with advertisements. It’s about risk.
The police in Los Angeles believed it was possible to use machine learning to predict crime. London’s Metropolitan Police, and others, want to use it to see your face wherever you go. Credit agencies and insurers want to build a better profile to understand whether you might get heart disease, or drop out of work, or fall behind on payments.
It used to be common to talk about “the digital divide”. This originally meant that the skills and advantages of connected citizens in rich nations would massively outrun poorer citizens without computers and the Internet. The solution: get everyone online and connected. This drove policies like One Laptop Per Child – and it drives newer ones, like Digital ID, the aim to give everyone on Earth a unique identity, in the name of economic participation. And connectivity has, at times, indeed opened people to new ideas and opportunities.
But it also comes at a cost. Today, a new digital divide is opening. One between the knowers and the known. The data miners and optimisers, who optimise, of course, according to their values, and the optimised. The surveillance capitalists, who have the tools and the skills to know more about everyone, all the time, and the world’s citizens.
AI has ushered in a new pecking order, largely set by our proximity to this new computational power. This should be our real concern: how advanced computing could be used to preserve power and privilege.
This is not a dignified future. People are right to be suspicious of this use of AI, and to seek ways to democratise this technology. I use an iPhone and enjoy, on this expensive device, considerably less personalised tracking of me by default than a poorer user of an Android phone.
When I apply for a job in law or journalism, a panel of humans interviews me – not an AI using “expression analysis”, as I would experience when applying for a job at a Tesco supermarket in the UK. We can do better than to split society into those who can afford privacy and personal human assessment – and everyone else, who gets number-crunched, tagged, and sorted.
Unless we head off what Shoshana Zuboff calls “the substitution of computation for politics” – where decisions are taken outside of a democratic contest, in the grey zone of prediction, scoring, and automation – we risk losing control over our values.
The future of artificial intelligence belongs to us all. The values that get encoded into AI ought to be a matter for public debate and, yes, regulation. Just as we banned certain kinds of discrimination, should certain inferences by AI be taken off the table? Should AI firms have a statutory duty to allow in auditors to test for bias and inequality?
Is a certain platform size (say, Facebook and Google, which drive much AI development now and supply services to over two billion people) just too big – like Big Rail, Big Steel, and Big Oil of the past? Do we need to break up Big Tech?
Everyone has a stake in these questions. Friendly panels and hand-picked corporate “AI ethics boards” won’t cut it. Only by opening up these systems to critical, independent enquiry – and increasing the power of everyone to participate in them – will we build a just future for all.
The views expressed in this article are the author’s own and do not necessarily reflect Al Jazeera’s editorial stance.