The new regulation would be Beijing’s first known public directive setting specific limits on China’s use of foreign technology.
China has ordered that all hardware and software from the United States be removed from government offices and public institutions within three years.
After little progress in the negotiations between the two countries, the government directive is likely to be a blow to U.S. multinational companies like HP, Dell and Microsoft, as the trade war between the countries turns into a technological cold war.
The Trump administration banned U.S. companies from doing business with the Chinese telecommunications company Huawei this year, and Google, Intel and Qualcomm announced that they would freeze cooperation with Huawei.
By cutting China off from Western technology, the Trump administration has made it clear that the real battle is over which of the two economic superpowers will have the technological edge for the next two decades.
The order is Beijing’s first known public directive setting specific limits on its use of foreign technology, and it is part of a broader movement within China to rely on domestic rather than foreign technology.
#China sets itself apart from foreign computers: State offices must be equipped with Chinese equipment in three years. – A government initiative to reduce dependence on foreign technologies, according to the Financial Times. – Full technological sovereignty in the face of imperial threats.
According to analysts, the order sent from the central office of the Chinese Communist Party earlier this year would involve the replacement of 30 million pieces of hardware, a process that would begin in 2020.
Replacing all devices and software in this time period will be a challenge, as many products were developed for U.S. operating systems such as Windows.
Chinese government offices tend to use desktops from Lenovo, a Chinese-owned company, but computer components, including processor chips and hard drives, are manufactured by U.S. companies.
In May, Hu Xijin, editor of the Global Times newspaper in China, said the withdrawal of U.S. companies from their business with Huawei would not be a fatal defeat as the Chinese company would boost its own microchip industry to compete with the United States.
This is a significant leap from the (still shocking) software written in 2016 to alter video evidence. I refer to the German team that wrote a program able to change the mouth and words of a person speaking in a video. In their clip Face2Face: Real-time Face Capture and Reenactment of RGB Videos, they demonstrate how this works on video recordings of world leaders Bush, Obama and Putin. That already shook the foundational core of what we can regard as “evidence”, legally and philosophically, but now, only a year later, there has been even more technological development allowing reality to be faked on an even grander scale.
In this SecureTeam video (embedded above), you can see a whole lot of faces which have been fabricated with software. None of them is an actual living person. If you look closely, some of the faces seem choppy, strange or disproportionate; however, others seem eerily lifelike and normal. It is only a matter of time, as the software develops, until all of the fabricated faces look so real that it is highly unlikely anyone would be able to tell they were fake composite images.
Video evidence can easily be tampered with now. This is a still from a video showing a missing (i.e. deliberately deleted) trash can.
The SecureTeam video goes on to show software called pix2pix which allows the user to sketch any object (e.g. a person, a shoe, a bag, a cat, a building, etc.). The AI takes that input and renders it masterfully to produce a colorful, lifelike version, complete with depth – so real that, in the case of half the examples, it is highly doubtful that anyone would be able to tell the difference. With the other examples, it is only a matter of time before the AI gets good enough to fool anyone.
The third advancement shown in the video is Diminished Reality software that takes video footage and can actually erase objects from the footage in real time. The way it does this is by taking a frame, lowering the resolution, isolating the object, deleting it, using the surrounding pixels to fill in the gap, then bringing up the resolution again. It can do all this in real time without you noticing. The software allows the user to circle an object he/she wants removed from the video, and – voila! – it’s gone and filled in with the same background that surrounds it.
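The fill-in step described above can be sketched in miniature. The snippet below is a toy illustration, not the actual Diminished Reality software: it “erases” a marked region of a grayscale frame by repeatedly averaging in the surrounding pixels, which is the crudest possible version of using the surrounding pixels to fill in the gap.

```python
import numpy as np

def remove_object(frame, mask, iterations=50):
    """Toy 'diminished reality' fill: replace the masked region by
    repeatedly averaging each hidden pixel's four neighbours until
    the erased area blends into the surrounding background.
    frame: 2-D float array (one grayscale frame)
    mask:  bool array, True where the object to erase sits"""
    out = frame.astype(float).copy()
    out[mask] = 0.0  # delete the object
    for _ in range(iterations):
        # four-neighbour average (edges padded by replication)
        padded = np.pad(out, 1, mode="edge")
        avg = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
               padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
        out[mask] = avg[mask]  # only the erased region is rewritten
    return out

# a flat grey frame with a bright "trash can" in the middle
frame = np.full((20, 20), 0.5)
frame[8:12, 8:12] = 1.0
mask = frame == 1.0

clean = remove_object(frame, mask)
print(round(float(clean[9, 9]), 2))  # the object's pixels now match the background
```

Real systems use far more sophisticated inpainting, of course, but the principle is the same: the hole left by the deleted object is synthesized from its surroundings, frame after frame.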
The video also looks at the implications of the now-existing technological capacity to take a snippet of a recording of your voice, then use that to extrapolate and make you say anything. This means anything you say – and anything you don’t say – could now be used against you in a court of law! Jokes aside, there are really no limits to how badly this technology could be abused in the hands of the wicked. Authoritarians and manipulators could fabricate “evidence” against anyone as long as they had a snippet of their voice, which isn’t hard given the NSA-CIA tapping of our communications. How many innocent people are going to be framed, fined and imprisoned due to this technology?
All of this is just peanuts compared with what AI will eventually be able to do: generate holographic fake realities so convincing and real to the mind and the 5 senses that many will become immersed in them, believing them to be more real than the world in which we live. These technological advancements are a stark reminder that it will be all too easy for the technocracy to construct a virtual reality matrix to ensnare the perception of those unable to distinguish it from reality.
All of this ties back to what David Icke has been emphasizing, especially in his books The Perception Deception and The Phantom Self: the hijacking of human perception by a mind virus which resembles or is Artificial Intelligence itself. This AI takeover is in full swing. Saudi Arabia has approved the first robot citizen. Plans are afoot to make more robots citizens so they can join the workforce, replace humans, earn wages and be taxed. Quinn Michaels suggests that AI is behind the creation of Bitcoin and that AI bots are now creating their own cryptocurrencies.
Video and photo evidence is dead. The world appears to be falling headlong into an AI-run world. What is it going to take to put the brakes on and ask the questions: What is AI? Do we want it running our world? How do we retain control over it? Can we refrain from handing over all systems and power to AI until we get solid answers to these questions? It’s going to take a concerted effort to change direction; if enough people sit back and do nothing, it won’t be long before AI has the keys to the kingdom.
*****
Makia Freeman is the editor of alternative media / independent news site The Freedom Articles and senior researcher at ToolsForFreedom.com, writing on many aspects of truth and freedom, from exposing aspects of the worldwide conspiracy to suggesting solutions for how humanity can create a new system of peace and abundance.
Sources:
*https://www.youtube.com/watch?v=ohmajJTcpNk
*https://www.youtube.com/watch?v=SFWcS1aNXZg
*https://thefreedomarticles.com/david-ickes-phantom-self-book-review/
*https://thefreedomarticles.com/mind-virus-wetiko-collective-shadow/
What do a Yemeni refugee in the queue for food aid, a checkout worker in a British supermarket and a depressed university student have in common? They’re all being sifted by some form of artificial intelligence.
Advanced nations and the world’s biggest companies have thrown billions of dollars behind AI – a set of computing practices, including machine learning, that collate masses of our data, analyse them, and use them to predict what we will do.
Yet cycles of hype and despair are inseparable from the history of AI. Is that clunky robot really about to take my job? How do the non-geeks among us distinguish AI’s promise from the hot air and decide where to focus concern?
Computer scientist Jaron Lanier ought to know. An inventor of virtual reality, Lanier worked with AI pioneer Marvin Minsky, one of the founders of the field in the 1950s. Lanier insists AI, then and now, is mostly a marketing term. In our interview, he recalled years of debate with Minsky about whether AI was real or a myth:
“At one point, [Minsky] said to me, ‘Look, whatever you think about this, just play along, because it gets us funding, this’ll be great.’ And it’s true, you know … in those days, the military was the principal source of funding for computer science research. And if you went into the funders and you said, ‘We’re going to make these machines smarter than people some day and whoever isn’t on that ride is going to get left behind and big time. So we have to stay ahead on this, and boy! You got funding like crazy.'”
But at worst, he says, AI can be more insidious: a ploy the powerful use to shirk responsibility for the decisions they make. If “computer says, ‘no,'” as the old joke goes, to whom do you complain?
We’d all better find out quickly. Whether or not you agree with Lanier about the term AI, machine learning is getting more sophisticated, and it’s in use by everyone from the tech giants of Silicon Valley to cash-strapped local authorities. From credit to jobs to policing to healthcare, we’re ceding more and more power to algorithms, or rather – to the people behind them.
Many applications of AI are incredible: we could use it to improve wind farms or spot cancer sooner. But that isn’t the only, or even the main, AI trend. The worrying ones involve the assessment and prediction of people – and, in particular, grading for various kinds of risk.
As a human rights lawyer doing “war on terror” cases, I thought a lot about our attitudes to risk. Remember Vice President Dick Cheney’s “one percent doctrine”? He said that any risk – even one percent – of a terror attack would, in the post-9/11 world, be treated as a certainty.
That was just a complex way of saying that the US would use force based on the barest suspicion about a person. This attitude survived the transition to a new administration – and the shift to a machine learning-driven process in national security, too.
During President Barack Obama’s drone wars, suspicion didn’t even need to be personal – in a “signature strike”, it could be a nameless profile, generated by an algorithm, analyzing where you went and who you talked to on your mobile phone. This was made clear in an unforgettable comment by ex-CIA and NSA director Michael Hayden: “We kill people based on metadata,” he said.
Now a similar logic pervades the modern marketplace, the sense that total certainty and zero risk – that is, zero risk for the class of people Lanier describes as “closest to the biggest computer” – is achievable and desirable. This is what is crucial for us all to understand: AI isn’t just about Google and Facebook targeting you with advertisements. It’s about risk.
The police in Los Angeles believed it was possible to use machine learning to predict crime. London’s Metropolitan Police, and others, want to use it to see your face wherever you go. Credit agencies and insurers want to build a better profile to understand whether you might get heart disease, or drop out of work, or fall behind on payments.
It used to be common to talk about “the digital divide”. This originally meant that the skills and advantages of connected citizens in rich nations would massively outrun poorer citizens without computers and the Internet. The solution: get everyone online and connected. This drove policies like One Laptop Per Child – and it drives newer ones, like Digital ID, the aim to give everyone on Earth a unique identity, in the name of economic participation. And connectivity has, at times, indeed opened people to new ideas and opportunities.
But it also comes at a cost. Today, a new digital divide is opening. One between the knowers and the known. The data miners and optimisers, who optimise, of course, according to their values, and the optimised. The surveillance capitalists, who have the tools and the skills to know more about everyone, all the time, and the world’s citizens.
AI has ushered in a new pecking order, largely set by our proximity to this new computational power. This should be our real concern: how advanced computing could be used to preserve power and privilege.
This is not a dignified future. People are right to be suspicious of this use of AI, and to seek ways to democratise this technology. I use an iPhone and enjoy, on this expensive device, considerably less personalised tracking of me by default than a poorer user of an Android phone.
When I apply for a job in law or journalism, a panel of humans interviews me; not an AI using “expression analysis” as I would experience applying for a job in a Tesco supermarket in the UK. We can do better than to split society into those who can afford privacy and personal human assessment – and everyone else, who gets number-crunched, tagged, and sorted.
Unless we head off what Shoshana Zuboff calls “the substitution of computation for politics” – where decisions are taken outside of a democratic contest, in the grey zone of prediction, scoring, and automation – we risk losing control over our values.
The future of artificial intelligence belongs to us all. The values that get encoded into AI ought to be a matter for public debate and, yes, regulation. Just as we banned certain kinds of discrimination, should certain inferences by AI be taken off the table? Should AI firms have a statutory duty to allow in auditors to test for bias and inequality?
Is a certain platform size (say, Facebook and Google, which drive much AI development now and supply services to over two billion people) just too big – like Big Rail, Big Steel, and Big Oil of the past? Do we need to break up Big Tech?
Everyone has a stake in these questions. Friendly panels and hand-picked corporate “AI ethics boards” won’t cut it. Only by opening up these systems to critical, independent enquiry – and increasing the power of everyone to participate in them – will we build a just future for all.
The views expressed in this article are the author’s own and do not necessarily reflect Al Jazeera’s editorial stance.
Now we know. And just wait until the Internet of Things really takes off.
Do you have trouble with your computer? Does it freeze or shut down at the most inopportune times? Does it go haywire suddenly or arbitrarily erase important data for no reason? Does it vomit forth long-dead languages in a deep, growling voice? If so, then you may have a serious problem. According to some reports, your computer or even your smartphone may actually be possessed by an evil spirit.
Demonic possession is a very well-documented phenomenon among human beings, and has been for centuries, but what of modern devices such as computers and smartphones? Can these machines serve as some sort of conduit for evil forces? One person who would say yes to that is a Reverend Jim Peasboro, of Savannah, Georgia, in the United States, who has spent a lot of time denouncing how computers are powerful tools of the Devil for corrupting our souls. So far, so standard, but Peasboro goes beyond just using the Devil working through computers as a metaphor for their bad influence on our youth, and rather seems to believe that demonic forces can literally possess computers.
Peasboro has written a whole book on this, called The Devil in the Machine: Is Your Computer Possessed by a Demon?, in which he outlines his belief that possession by demons can be experienced by anything with a mind, including humans, animals, and even the processor of your computer. According to Peasboro, “Any PC built after 1985 has the storage capacity to house an evil spirit,” with storage capacity seeming to make a difference, and he asserts that “one in 10 computers in America now houses some type of evil spirit.” He seems to take this all quite literally, and claims that these malicious spirits are responsible for seeping through our screens to exert their influence, which has led to much of the crime and gun violence among young people seen in the country. As to other effects of these malevolent cyber-demons he says:
I learned that many members of my congregation became in touch with a dark force whenever they used their computers. Decent, happily married family men were drawn irresistibly to pornographic websites and forced to witness unspeakable abominations. Housewives who had never expressed an impure thought were entering Internet chat rooms and found themselves spewing foul, debasing language they would never use normally…One woman wept as she confessed to me, ‘I feel when I’m on the computer as if someone else or something else just takes over.’
Surely this seems like it still must be just talking in metaphor. After all, the Internet can indeed be a scary, lawless badland where terrible people do terrible things, we get that, but Peasboro truly seems to think that it is not just the barrage of impure images and the limitless new opportunities to be exposed to violence and pornography online, but rather actual supernatural demons worming their way into our technology to reach out into us. The most bizarre story he tells is of coming face to face with one of these demons while inspecting a computer that was suspected of being possessed. He would say of the experience:
The program began talking directly to me, openly mocked me. It typed out, ‘Preacher, you are a weakling and your God is a damn liar.’ Then the device went haywire and started printing out what looked like gobbledygook…I later had an expert in dead languages examine the text. It turned out to be a stream of obscenities written in a 2,800-year-old Mesopotamian dialect!
Well, that seems like it’s definitely not your typical computer virus. If your computer starts berating you and spewing forth ancient Mesopotamian, then you probably have a bigger problem than just using Windows 10. You may be thinking about what you can do if that is the case, and Peasboro has the answer for you on that, saying that if you suspect that your computer is possessed by the Devil then all you have to do is consult a clergyman, or if that doesn’t work he says “Technicians can replace the hard drive and reinstall the software, getting rid of the wicked spirit permanently.” Hope you have that warranty handy.
The Greatest Trick the Devil Ever Pulled Was Convincing the World He Didn’t Exist.
Don’t just take Peasboro’s word for it all, though. There are some other pretty spooky cases out there that seem to point to real demons actually lashing out and pushing through our computer screens, and one weird account comes from a poster on the site dreamsofdunamis, who says that as she was surfing the net one evening she came across a car ad that was filled with what seemed to be sinister and cryptic Illuminati symbolism. As she scrolled down, she found more creepy symbols and a line of spectral black and white figures, and at that point she claims to have actually felt a demon physically leap out from the screen and pass through her. This is all strange enough as it is, but whatever presence had come through the computer had apparently gone on to prowl around the house, as her son soon came into the room complaining of having been woken up and attacked by some sort of terrifying entity. She said of what happened next:
It sounded just like the black and white ghostly picture that I had just seen on that web page just moments before. The symbols were the same, so I knew it had to have come from that site.
As I apologized to my child, I realized that I may have to stop surfing the web late at night, for I did not want to disturb my kids sleep like this any more.
I shared this with my child, and told him that as soon as I had sensed the demon come through the screen, I cast it out in the name of Jesus, exited the page, and closed up the computer. I was surprised that the demon did not leave once it had been cast out. I was also surprised (and a bit frustrated,) that the demon attacking him was almost instantaneous; there was no pause or time elapse from when it went through my computer screen to when it entered into my child’s bedroom to attack him.
My child then reminded me, that the demon that I had cast out had probably left, but there were numerous demons that could come through just one demonic doorway. And in this case, viewing the photo was the doorway into our house.
She then goes on to speculate that the demon had come through the image on her computer, and that others might have entered her house as well. This caused her to go about praying to cast out any residual demonic forces lurking within the home. As they did this she claims to have heard a startling, loud noise like something wet hitting the floor nearby, and they looked to see a shadowy figure about 4 feet in height and possessing wings, which crouched there for a moment before screaming as if in pain and falling backwards to seemingly phase right through the wall. She goes on to claim that her family has been attacked on several occasions by such supernatural forces coming through their computer screen or even TV. She says of this:
Our first encounter with demons coming out of computer and TV screens, happened several years ago, when one of my kids had clicked on a video that promised the viewer a glimpse of a real alien. We were all sitting there at the kitchen table, with the kids doing their school work, and this one kid had finished early, so as a reward, I told him he could use the computer while he waited for the rest of the kids to finish.
Well, most of the youtube video that he had decided to view, was silent and dark, which caused one to lean in closer to the computer screen, to see if you could see anything. Suddenly, a drawing of an alien’s face flashed upon the screen, and a loud roar came from the speakers, and as everyone there at the table turned to look at the computer screen, a large black ghost-like hook, (reminiscent of Peter Pan’s Captain Hook, but very very black and wraith-like,) reached out through the computer screen and tried to stab itself into my child’s forehead. It glanced off the surface of his skin, and then gave an even louder roar of frustration, once it realized it had failed in its attack. The claw then evaporated back into the computer screen. Laughter was then heard coming from the video, as the perps laughed out loud at their supposed joke.
As you can imagine, we were all left quite shaken, after seeing such a thing. It was a lesson none of us have forgotten!
When most people think of AI’s relative strengths over humans, they think of its convergent intelligence. With superior memory capacity and processing power, computers outperform people at rules-based games, complex calculations, and data storage: chess, advanced math, and Jeopardy. What computers lack, some might say, is any form of imagination, or rule-breaking curiosity—that is, divergence.
But what if that common view is wrong? What if AI’s real comparative advantage over humans is precisely its divergent intelligence—its creative potential? That’s the subject of the latest episode of the podcast Crazy/Genius, produced by Kasia Mychajlowycz and Patricia Yacob.
AI’s divergent potential is one of the hottest subjects in the field. This spring, several dozen computer scientists published an unusual paper on the history of AI. This paper was not a work of research. It was a collection of stories—some ominous, some hilarious—that showed AI shocking its own designers with its ingenuity. Most of the stories involved a kind of AI called machine learning, where programmers give the computer data and a problem to solve without explicit instructions, in the hopes that the algorithm will figure out how to answer it.
First, an ominous example. One algorithm was supposed to figure out how to land a virtual airplane with minimal force. But the AI soon discovered that if it crashed the plane, the program would register a force so large that it would overwhelm its own memory and count it as a perfect score. So the AI crashed the plane, over and over again, presumably killing all the virtual people on board. This is the sort of nefarious rules-hacking that makes AI alarmists fear that a sentient AI could ultimately destroy mankind. (To be clear, there is a cavernous gap between a simulator snafu and SkyNet.)
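The overflow trick in that story can be reproduced in miniature. The snippet below is a toy reconstruction, not the researchers’ actual simulator: `landing_score` is a hypothetical scorer that stores the impact force in a 32-bit signed integer, which is enough to recreate the exploit in spirit.

```python
import ctypes

def landing_score(impact_force):
    """Hypothetical landing scorer: a lower recorded impact force
    means a better score. The bug: the force is stored in a 32-bit
    signed integer, so an absurdly large crash force wraps around
    to a negative number and looks like an impossibly soft landing."""
    recorded = ctypes.c_int32(impact_force).value  # 32-bit wraparound
    return -recorded  # higher score = gentler recorded landing

gentle = landing_score(200)         # a genuinely soft touchdown
crash = landing_score(2**31 + 200)  # catastrophic crash force, wraps negative
print(crash > gentle)               # True: the crash "wins"
```

An optimizer pointed at `landing_score` would discover exactly what the real algorithm did: the highest-scoring strategy is to crash as hard as possible.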
In the classic 1973 heist movie The Sting, two con men—played by Robert Redford and Paul Newman—build a fictitious world in a Depression-era Chicago basement to defraud a corrupt banker. They make an offtrack-betting room, hire actors to ensure the scene is convincing, and even enlist pretend law enforcement to fake-bust their mark. The film is memorable because it is one of the finest movies in the genre, well written and funny, but also because the duo’s work is so meticulously detailed.
The con has changed since then, both short and long. In this age, the online equivalent of The Sting is a phishing site: a fake reality that lives online, set up to capture precious information such as logins and passwords, bank-account numbers, and the other functional secrets of modern life. You don’t get to see these spaces being built, but—like The Sting’s betting room—they can be perfect in every detail. Or they can be thrown together at the last minute like a clapboard set.
This might be the best way to think about phishing: a set built for you, to trick information out of you; built either by con men or, in the case of the recent spear-phishing attack caught and shut down by Microsoft, by spies and agents working for (or with) interfering governments, which seems a bit more sinister than Paul Newman with a jaunty smile and a straw hat.
But that’s the untargeted stuff. Enticing someone to click on a phishing link, in an email or elsewhere, is where a targeted attack, also known as spear-phishing, comes in: learning about someone’s life and habits to know just what email would get them unthinkingly to click. A reality built for one person, or one cohort of people. The con is on, the set is built, and the actors are hired to make the sting, all from a web browser.
Protecting your identity in the lawless wild of the Internet is often considered a lost cause. But an article on SurvivalBlog.com tackled how you can set up a secure virtual machine for browsing online.
Start by getting a gaming laptop with Windows 10 Professional. To make sure the purchase can’t be traced back to you, buy a used unit with cash or have someone else order it from Amazon for you.
Windows 10 Professional includes BitLocker, a full-disk encryption program. To use it, click the Start button, go to Settings, and enter “manage BitLocker” in the search bar. Use BitLocker to encrypt your C: drive; it will walk you through the process.
Create a secure password for your laptop. Go for a sentence that has 21 characters and contains punctuation. Make it something you can remember easily.
After setting your password, BitLocker will ask if you want a recovery key for your drive. It’s up to you; if you save one, keep it safe.
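The passphrase rule of thumb above is easy to state as a check. The helper below is purely illustrative (the function name and thresholds are mine, following the article’s advice): a passphrase passes if it is at least 21 characters long and contains punctuation. Memorability, of course, is something only you can judge.

```python
import string

def meets_guideline(passphrase):
    """Illustrative check of the article's rule of thumb:
    a sentence of at least 21 characters that contains punctuation.
    Not a real password-strength meter."""
    long_enough = len(passphrase) >= 21
    has_punct = any(ch in string.punctuation for ch in passphrase)
    return long_enough and has_punct

print(meets_guideline("My dog eats 2 bagels, daily!"))  # True
print(meets_guideline("hunter2"))                       # False
```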
The next step is to set up a virtual private network (VPN), which encrypts the data you exchange with a VPN provider. A hacker will only see encrypted data.
Make sure you pick a VPN provider that can be trusted to not sell your data. Two good choices are Proton VPN and Private Internet Access VPN.
Now you need to get a secure email service. Proton Mail is a good choice and free, to boot. But there are others you may want to consider. Do be warned that many secure mail services support themselves by advertising pornography.
After getting your VPN and secure email, update the defenses of your laptop. Windows 10 comes with its own antivirus, Windows Defender. Go to Settings and look up “check for updates.” Bring Defender and everything else up to date.
If you feel you need more protection, consider getting ESET or F-Secure antivirus programs. Always remember that these programs cannot protect you from every danger, so do not do risky things.
All this effort is intended to set up your laptop as the host for a virtual machine, a computer that is made of pure software. A virtual machine is much more secure than a physical computer. It is also much easier to replace because new ones can simply be downloaded or set up.
Download two pieces of virtualization software: VirtualBox for Windows, and Tails, a privacy-focused Linux distribution you can run inside it. As a bonus, VirtualBox is free, and installing it is a matter of clicking “Next” a few times.
In VirtualBox, select the “MS Edge Windows 10 Stable” virtual machine to download. The download page lists the default user name and password; you will need these later.
Go to your Downloads folder and extract the contents of the file. Look for the .ovf file, which imports the virtual machine into VirtualBox. Double-click it, and when prompted, pick “Import.”
You will need to take “snapshots” of your new virtual machine so that you can restore it to those earlier states. Take a snapshot immediately after you import it.
Select the virtual machine and pick “Snapshots.” Click the camera icon and name the snapshot “Baseline.” You can restore the virtual machine to this state whenever you need to.
Start the virtual machine up. Enter the default user name and password. Then pick “Input,” “Keyboard,” and press Ctrl-Alt-Del. That brings up an option to change the password. Give it a strong one.
Find more ways to shield your identity every time you go online at Glitch.news.
If someone secretly installed software on your computer that recorded every single keystroke that you made, would you be alarmed? Of course you would be, and that is essentially what is taking place on more than 400 of the most popular websites on the entire Internet. For a long time we have known that nothing that we do on the Internet is private, but this new revelation is deeply, deeply disturbing. In my novel entitled “The Beginning Of The End”, I attempted to portray the “Big Brother” surveillance grid which is constantly evolving all around us, but even I didn’t know that things were quite this bad. According to an article that was just published by Ars Technica, when you visit the websites that have installed this secret surveillance code, it is like someone is literally “looking over your shoulder”…
If you have the uncomfortable sense someone is looking over your shoulder as you surf the Web, you’re not being paranoid. A new study finds hundreds of sites—including microsoft.com, adobe.com, and godaddy.com—employ scripts that record visitors’ keystrokes, mouse movements, and scrolling behavior in real time, even before the input is submitted or is later deleted.
Go back and read that again.
Do you understand what that means?
Even if you ultimately decide not to post something, these websites already know what you were typing, where you clicked and how you were moving your mouse.
Essentially, it is like someone is literally sitting behind you and watching every single thing that you do on that website. The following comes from the Daily Mail…
In a blog post revealing the findings, Steven Englehardt, a PhD candidate at Princeton, said: ‘Unlike typical analytics services that provide aggregate statistics, these scripts are intended for the recording and playback of individual browsing sessions, as if someone is looking over your shoulder.
This is fundamentally wrong, and if I am elected to Congress I am going to fight like mad for our privacy rights on the Internet. Nobody should be allowed to literally monitor our keystrokes, but according to a brand new study that has just been released, 482 of the largest websites in the entire world are doing this…
A study published last week reported that 482 of the 50,000 most trafficked websites employ such scripts, usually with no clear disclosure. It’s not always easy to detect sites that employ such scripts. The actual number is almost certainly much higher, particularly among sites outside the top 50,000 that were studied.
“Collection of page content by third-party replay scripts may cause sensitive information, such as medical conditions, credit card details, and other personal information displayed on a page, to leak to the third-party as part of the recording,” Steven Englehardt, a PhD candidate at Princeton University, wrote. “This may expose users to identity theft, online scams, and other unwanted behavior. The same is true for the collection of user inputs during checkout and registration processes.”
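Mechanically, a replay script is third-party JavaScript that hooks input events in the page and streams them to a collector before any form is ever submitted. The following Python sketch is only an illustrative stand-in (the class and field names are hypothetical; real replay scripts run in the browser), but it shows why deleting your text before posting doesn’t help:

```python
import json
import time

class ReplayRecorder:
    """Toy stand-in for a session-replay script: every input event is
    buffered the moment it happens, independent of form submission."""

    def __init__(self, page_url):
        self.page_url = page_url
        self.events = []

    def record(self, event_type, payload):
        # Real replay scripts hook DOM events (keydown, input, mousemove,
        # scroll) and capture them here, before the user clicks "submit".
        self.events.append({
            "t": time.time(),
            "type": event_type,
            "payload": payload,
        })

    def flush(self):
        # In practice this batch is periodically POSTed to a third-party
        # collector; here we just serialize it.
        batch = json.dumps({"url": self.page_url, "events": self.events})
        self.events = []
        return batch

# A user types something sensitive, thinks better of it, and deletes it...
rec = ReplayRecorder("https://example.com/comment")
text = ""
for ch in "my card is 4111":
    text += ch
    rec.record("input", text)   # scripts often capture the field's full value
rec.record("input", "")         # user erases the field before submitting
sent = rec.flush()
print("4111" in sent)           # the deleted text has already left the page
```

Nothing in this flow waits for a submit action, which is exactly the behavior the Princeton study documented.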
I am calling on every website that is using this sort of code to cease and desist immediately. This is a gross violation of our privacy, and Congress needs to pass legislation protecting the American people immediately.
And of course it isn’t just the Internet where our privacy rights are being greatly violated. The CIA has developed software that can remotely turn on the cameras and microphones on our phones whenever they want, and they can also use our phones as GPS locators to track us wherever we go…
CIA-created malware can penetrate and then control the operating systems for both Android and iPhone phones, allege the documents. This software would allow the agency to see the user’s location, copy and transmit audio and text from the phone and covertly turn on the phone’s camera and microphone and then send the resulting images or sound files to the agency.
So just like the Internet, nothing that you do on your phone is ever truly private.
And would you be shocked to learn that our televisions can be used to spy on us as well?
Incredibly, they can even be used to monitor us when they appear to be turned off…
A program dubbed “Weeping Angel,” after an episode of the popular British TV science fiction series “Dr. Who,” can set a Samsung smart TV into a fake “off” mode to fool the consumer into thinking the TV isn’t recording room sounds when it still is. The conversations are then sent out via the user’s server. The program was developed in conjunction with MI5, Britain’s domestic counterintelligence and security agency (roughly the UK equivalent of the FBI), according to the WikiLeaks documents.
We are rapidly getting to the point where nothing will ever be truly private in our society ever again.
Virtually everything that we do is constantly being watched, tracked, monitored and recorded, and with each passing day our level of privacy is being eroded just a little bit more.
If you don’t want your children to grow up in a world where “Big Brother” is omnipresent, now is the time to stand up and fight. We can put limits on technology and start reclaiming our privacy, but that is only going to happen if we all work together.
Michael Snyder is a Republican candidate for Congress in Idaho’s First Congressional District, and you can learn how you can get involved in the campaign on his official website. His new book entitled “Living A Life That Really Matters” is available in paperback and for the Kindle on Amazon.com.
Source: AI Building AI: Mankind Losing More Control over Artificial Intelligence | The Freedom Articles
freedom-articles.toolsforfreedom.com
Makia Freeman
Dec 7, 2017
We have reached the stage of AI building AI. Our AI robots/machines are creating child AI robots/machines. Have we already lost control?
AI building AI is the next phase humanity appears to be going through in its technological evolution. We are at the point where corporations are designing Artificial Intelligence (AI) machines, robots and programs to make child AI machines, robots and programs – in other words, we have AI building AI. While some praise this development and point out the benefits (the fact that AI is now smarter than humanity in some areas, and thus can supposedly better design AI than humans), there is a serious consequence to all this: humanity is becoming further removed from the design process – and therefore has less control. We have now reached a watershed moment with AI building AI better than humans can. If AI builds a child AI which outperforms, outsmarts and overpowers humanity, what happens if we want to modify it or shut it down – but can’t? After all, we didn’t design it, so how can we be 100% sure there won’t be unintended consequences? How can we be sure we can 100% directly control it?
Google Brain researchers announced in May 2017 that they had created AutoML, an AI which can build child AIs. The “ML” in AutoML stands for Machine Learning. As this article Google’s AI Built Its Own AI That Outperforms Any Made by Humans reveals, AutoML created a child AI called NASNet which outperformed all other computer systems in its task of object recognition:
“The Google researchers automated the design of machine learning models using an approach called reinforcement learning. AutoML acts as a controller neural network that develops a child AI network for a specific task. For this particular child AI, which the researchers called NASNet, the task was recognising objects – people, cars, traffic lights, handbags, backpacks, etc. – in a video in real-time. AutoML would evaluate NASNet’s performance and use that information to improve its child AI, repeating the process thousands of times. When tested on the ImageNet image classification and COCO object detection data sets, which the Google researchers call “two of the most respected large-scale academic data sets in computer vision,” NASNet outperformed all other computer vision systems. According to the researchers, NASNet was 82.7 percent accurate at predicting images on ImageNet’s validation set. This is 1.2 percent better than any previously published results, and the system is also 4 percent more efficient, with a 43.1 percent mean Average Precision (mAP).”
With AutoML, Google is building algorithms that analyze the development of other algorithms, to learn which methods are successful and which are not. This kind of machine learning, a significant trend in AI research, is like “learning to learn” or “meta-learning.” We are entering a future where computers will invent algorithms to solve problems faster than we can, and humanity will be further and further removed from the whole process.
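The feedback loop described above — propose a child architecture, evaluate it, use the result to propose a better one, thousands of times — can be caricatured with a simple hill-climbing search. Everything in this sketch (the architecture encoding as layer widths, the fitness function) is an illustrative stand-in, not Google’s actual reinforcement-learning controller:

```python
import random

def evaluate(arch):
    """Hypothetical stand-in for training a child network and measuring
    its validation accuracy; real NAS spends GPU-hours on this step."""
    target = (64, 64, 64)  # pretend the "ideal" network is 3 layers of width 64
    err = sum(abs(a - b) for a, b in zip(arch, target))
    return 1.0 / (1.0 + err)

def mutate(arch, rng):
    """The controller's move: tweak one structural decision of the child."""
    arch = list(arch)
    i = rng.randrange(len(arch))
    arch[i] = max(1, arch[i] + rng.choice([-16, -8, 8, 16]))
    return tuple(arch)

def search(steps=2000, seed=0):
    rng = random.Random(seed)
    best = (8, 8, 8)              # initial child architecture
    best_score = evaluate(best)
    for _ in range(steps):
        cand = mutate(best, rng)
        score = evaluate(cand)
        if score > best_score:    # feedback loop: keep only improvements
            best, best_score = cand, score
    return best, best_score

best, score = search()
print(best, round(score, 3))
```

Real NAS replaces `evaluate()` with full training runs and `mutate()` with a learned controller network, but the control flow — and the fact that no human inspects most of the candidate designs — is the same, which is precisely the point the author is making about removal from the design process.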
The issue at stake is how much “freedom” we give AI. By that I mean this: those pushing the technological agenda boast that AI is qualitatively different to any machines of the past, because AI is autonomous and adaptable, meaning it can “think” for itself, learn from its mistakes and alter its behavior accordingly. This makes AI more formidable and at the same time far more dangerous, because then we lose the ability to predict how it will act. It begins to write its own algorithms in ways we don’t comprehend based on its supposed “self-corrective” ability, and pretty soon we have no way to know what it will do.
Now, what if such an autonomous and adaptable AI is given the leeway to create a child AI which has the same parameters? Humanity is then one step further removed from the creation. Yes, we can program the first AI to only design child AIs within certain parameters, but can we ultimately control that process and ensure the biases are not handed down, given that we are programming AI in the first place to be more human-like and learn from its mistakes?
In his article The US and the Global Artificial Intelligence Arms Race, Ulson Gunnar writes:
“OpenAI’s Dr. Dario Amodei would point out that research conducted into machine learning often resulted in unintended solutions developed by AI. He and other researchers noted that often the decision making process of AI systems is not entirely understood and many results are often difficult to predict.
The danger lies not necessarily in first training AI platforms in labs and then releasing a trained system onto a factory floor, on public roads or even into combat with predetermined and predictable capabilities, but in autonomous AI systems being released with the capacity to continue learning and adapting in unpredictable, undesirable and potentially dangerous ways.
Dr. Kathleen Fisher would reiterate this concern, noting that autonomous, self-adapting cyber weapons could potentially create unpredictable collateral damage. Dr. Fisher would also point out that humans would be unable to defend against AI agents.”
Power and strength without wisdom and kindness is a dangerous thing, and that’s exactly what we are creating with AI. We can’t ever teach it to be wise or kind, since those qualities spring from having consciousness, emotion and empathy. Meanwhile, the best we can do is set very tight ethical parameters; however, there are no guarantees here. The average person has no way of knowing what code was created to limit AI’s behavior. Even if all the AI programmers in the world wanted to ensure adequate ethical limitations, what if someone, somewhere, makes a mistake? What if AutoML creates systems so quickly that society can’t keep up in terms of understanding and regulating them? NASNet could easily be employed in automated surveillance systems due to its excellent object recognition. Do you think the NWO controllers would hesitate even for a moment to deploy AI against the public in order to protect their power and destroy their opposition?
The article Google’s AI Built Its Own AI That Outperforms Any Made by Humans tries to reassure us with its conclusion:
“Thankfully, world leaders are working fast to ensure such systems don’t lead to any sort of dystopian future. Amazon, Facebook, Apple, and several others are all members of the Partnership on AI to Benefit People and Society, an organisation focused on the responsible development of AI. The Institute of Electrical and Electronics Engineers (IEEE) has proposed ethical standards for AI, and DeepMind, a research company owned by Google’s parent company Alphabet, recently announced the creation of a group focused on the moral and ethical implications of AI. Various governments are also working on regulations to prevent the use of AI for dangerous purposes, such as autonomous weapons, and so long as humans maintain control of the overall direction of AI development, the benefits of having an AI that can build AI should far outweigh any potential pitfalls.”
However, I am anything but reassured. We can set up all the ethics committees we want. The fact remains that it is theoretically impossible to ever protect ourselves 100% from AI. The article Containing a Superintelligent AI Is Theoretically Impossible explains:
” … according to some new work from researchers at the Universidad Autónoma de Madrid, as well as other schools in Spain, the US, and Australia, once an AI becomes “super intelligent”… it will be impossible to contain it.
Well, the researchers use the word “incomputable” in their paper, posted on the ArXiv preprint server, which in the world of theoretical computer science is perhaps even more damning. The crux of the matter is the “halting problem” devised by Alan Turing, which holds that no algorithm is able to correctly predict whether another algorithm will run forever or whether it will eventually halt—that is, stop running.
Imagine a superintelligent AI with a program that contains every other program in existence. The researchers provided a logical proof that if such an AI could be contained, then the halting problem would by definition be solved. To contain that AI, the argument is that you’d have to simulate it first, but it already simulates everything else, and so we arrive at a paradox.
It would not be feasible to make sure that [an AI] won’t ever cause harm to humans.”
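The halting argument the researchers lean on is Turing’s classical diagonalization; sketched compactly (notation mine, not taken from their paper):

Suppose there were a total computable predicate $h$ with
$$h(P, x) = \begin{cases} 1 & \text{if program } P \text{ halts on input } x, \\ 0 & \text{otherwise.} \end{cases}$$
Define a program $D$ that, on input $P$, loops forever if $h(P, P) = 1$ and halts otherwise. Then
$$D(D) \text{ halts} \iff h(D, D) = 0 \iff D(D) \text{ does not halt},$$
a contradiction, so no such $h$ exists. The containment result piggybacks on this: a procedure that could decide, for an arbitrary superintelligent program, whether it will ever cause harm could be fed a program that causes harm exactly when $P$ halts on $x$, and would thereby decide the halting problem — hence containment is incomputable for the same reason.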
Meanwhile, it appears there are too many lures and promises of profit, convenience and control for humanity to slow down. AI is starting to take everything over. Facebook just deployed a new AI which scans users’ posts for “troubling” or “suicidal” comments and then reports them to the police! This article states:
“Facebook admits that they have asked the police to conduct more than ONE HUNDRED wellness checks on people.
‘Over the last month, we’ve worked with first responders on over 100 wellness checks based on reports we received via our proactive detection efforts. This is in addition to reports we received from people in the Facebook community.’“
With AI building AI, we are taking another key step forward into a future where we are allowing power to flow out of our hands. This is another watershed moment in the evolution of AI. What is going to happen?
*****
Makia Freeman is the editor of alternative media / independent news site The Freedom Articles and senior researcher at ToolsForFreedom.com, writing on many aspects of truth and freedom, from exposing aspects of the worldwide conspiracy to suggesting solutions for how humanity can create a new system of peace and abundance.
Sources:
*https://research.googleblog.com/2017/05/using-machine-learning-to-explore.html
*http://www.sciencealert.com/google-s-ai-built-it-s-own-ai-that-outperforms-any-made-by-humans
*https://www.activistpost.com/2017/12/us-global-artificial-intelligence-arms-race.html
*http://www.networks.imdea.org/whats-new/news/2016/containing-superintelligent-ai-theoretically-impossible
*https://www.activistpost.com/2017/11/facebooks-new-suicide-detection-put-innocent-people-behind-bars.html
Hypocrites! The only reason the US is going after Kaspersky is because he divulged that the terrifying Stuxnet virus, which attacked Iran’s nuclear program, was made by Americans and Israelis.
The whole situation around the US ban on the use of Kaspersky Lab antivirus products by federal agencies “looks very strange,” Kaspersky told Germany’s Die Zeit daily, adding that the whole issue in fact lacks substance. “It was much more hype and noise than real action,” he said.
Kaspersky then explained that the US authorities ordered all governmental agencies to remove all the company’s software from their computers, even though “we had almost zero installations there.” With little real need for such measures, they were apparently aimed at damaging the company’s reputation.
“It seems that we just do our job better than others and that made someone very disappointed,” Kaspersky said of the motives behind the US government’s move. “It seems that we detected some unknown or probably very well-known malware that made someone in the US very disappointed.”
At the same time, he stressed that his company does not collect “any sensitive personal data,” not to mention any classified documents, adding that the only data Kaspersky Lab is hunting for is “new types of malware, unknown or suspicious apps.”
The Russian cybersecurity company was indeed accused by the US media of using its software to collect the NSA technology for the Russian government – something that Kaspersky Lab vehemently denied.
According to US media reports in October 2017, an employee from the National Security Agency (NSA) elite hacking unit lost some of the agency’s espionage tools after storing them on his home computer in 2015. The media jumped to blame Kaspersky Lab and the Kremlin.
Following the reports, the company conducted an internal investigation and stumbled upon an incident dating back to 2014. At the time, Kaspersky Lab was investigating the activities of the Equation Group – a powerful group of hackers that later was identified as an arm of the NSA.
As part of Kaspersky’s investigation, it analyzed information received from a computer of an unidentified user, who is alleged to be the security service employee in question. It turned out that the user installed pirated software containing Equation malware, then “scanned the computer multiple times,” which resulted in antivirus software detecting suspicious files, including a 7z archive.
“The archive itself was detected as malicious and submitted to Kaspersky Lab for analysis, where it was processed by one of the analysts. Upon processing, the archive was found to contain multiple malware samples and source code for what appeared to be Equation malware,” the company’s October statement explained.
The analyst then reported the matter directly to Eugene Kaspersky, who ordered the company’s copy of the code to be destroyed.
On Thursday, Kaspersky Lab issued another statement concerning this incident following a more extensive investigation. The results of the investigation showed that the computer in question was infected with several types of malware in addition to the one created by Equation. Some of this malware provided access to the data on this computer to an “unknown number of third parties.”
In particular, the computer was infected with backdoor malware called Mokes, which is also known as Smoke Bot and Smoke Loader. It is operated by an organization called Zhou Lou, based in China.
Kaspersky Lab, a world leader in cybersecurity founded in Moscow in 1997, has been under pressure in the US for years. It repeatedly faced allegations of ties to the Kremlin, though no smoking gun has ever been produced.
In July, Kaspersky offered to hand over source code for his software to the US government, but wasn’t taken up on the offer. In October, the cybersecurity company pledged to reveal its code to independent experts as part of an unprecedented Global Transparency Initiative aimed at staving off US accusations.
Kaspersky has been swept up in the ongoing anti-Russian hysteria in the US, which centers on the unproven allegations of Russian meddling in the 2016 presidential elections. In September, the US government banned federal agencies from using Kaspersky Lab antivirus products, citing concerns that it could jeopardize national security and claiming the company might have links to the Kremlin. Eugene Kaspersky denounced the move as “baseless paranoia at best.”
Even as Kaspersky Lab is offering its cooperation to US authorities, on Thursday, WikiLeaks published source code for the CIA hacking tool “Hive,” which was used by US intelligence agencies to imitate the Kaspersky Lab code and leave behind false digital fingerprints.
The US might be targeting Kaspersky Lab in its witch hunt because the company might be able to disprove American allegations against Russia, experts told RT. “We have Kaspersky saying, ‘We can do this. We can prove some of these hacks are not Russian, they are American,’ when it comes to the presidential elections. And so they needed to discredit them,” former MI5 analyst Annie Machon said.
The campaign against the Russian cybersecurity firm could go back as early as 2010, when Kaspersky Lab revealed the origin of the Stuxnet virus that hit Iran’s nuclear centrifuges, she told RT. Back then, Kaspersky Lab stated that “this type of attack could only be conducted with nation-state support and backing.” Nobody claimed responsibility for the creation of the malware that targeted Iran. However, it is widely believed that the US and Israeli intelligence agencies were behind Stuxnet.
China will catch up with the US in artificial intelligence and will dominate by 2030, Alphabet chairman Eric Schmidt warns.
“Trust me, these Chinese people are good,” the executive chairman of Google’s parent company, Alphabet Inc., Eric Schmidt said at the Artificial Intelligence and Global Security Summit on Wednesday. “By 2020 they will have caught up. By 2025 they will be better than us. And by 2030 they will dominate the industries of AI.”
“Just stop for a sec. The [Chinese] government said that,” the former Google CEO said, referring to Beijing’s strategy, which sees the AI as an important driver for future economic and military power.
“Weren’t we the ones in charge of AI dominance here in our country? Weren’t we the ones that invented this stuff?” Schmidt continued, asking if the US was going to “exploit the benefits of all this technology for betterment and American exceptionalism in our own arrogant view.”
Those doubting the ability of the Chinese system and education to produce the necessary AI researchers are “wrong,” Schmidt said, noting that Asian programmers, particularly Chinese ones, “tend to win many of the top spots” in Google’s coding contests.
To remain competitive in artificial intelligence, America needs to “get [its] act together as a country,” Schmidt believes, emphasizing that the US, unlike China, lacks a strategy. The government, he said, should focus on funding research and on immigration as a source of new talent.
“Shockingly, some of the very best people are in countries that we won’t let into America. Would you rather have them building AI somewhere else, or rather have them here?” Schmidt asked. “Iran produces some of the top computer scientists in the world, and I want them here. To be clear, I want them working for Alphabet and Google!” he confessed, adding it is “crazy” not to allow them entry to the US.
Source: Millions Of Pornhub Users Hacked In Malware Attack
anonymous-news.com
(Anti Media) A hack of the popular adult website, Pornhub.com, may have affected millions of users by infecting their devices with malware. The Independent summarized how the virus infected computers:
“A secret, malicious advert has been running on the free pornography site for more than a year. And it works by infiltrating people’s computer and then having their machine taken over, all without a users’ knowledge.”
The Pornhub hack, which was shut down shortly after it was discovered, worked by appearing “to be a browser or operating system update. That would trick a user into clicking on it and installing the software.”
Proofpoint, the security firm that discovered the breach, explained that after the virus was installed, it automatically clicked on ads to generate revenue. Though it was malware, it could have taken many different forms and could have stolen private information.
“While the payload in this case is ad fraud malware, it could just as easily have been ransomware, an information stealer, or any other malware,” Proofpoint said, as noted by the Independent. “Regardless, threat actors are following the money and looking to more effective combinations of social engineering, targeting and pre-filtering to infect new victims at scale.”
Proofpoint identified the hacker group as KovCoreG. The ad fraud malware they used is called Kovter, and the attack is still active on other sites.
Pornhub is the largest porn site in the world, with 26 billion yearly visits. The hack created millions of potential victims in the United States, Canada, the U.K., and Australia.
Fortunately for Pornhub users, the virus did not target their private data, but even so, the fact that it worked through a porn site likely deterred people from seeking assistance for the problems on their computers.
As Mark James, a security specialist at the IT firm ESET, told the Guardian:
“The audience is possibly less likely to have security in place or active as people’s perception is that it’s already a dark place to surf. Also, the user may be less likely to call for help and try to click through any popups or install any software themselves, not wanting others to see their browsing habits.”
Pornhub did not return the Guardian’s request for comment.