If someone secretly installed software on your computer that recorded every single keystroke that you made, would you be alarmed? Of course you would be, and that is essentially what is taking place on more than 400 of the most popular websites on the entire Internet. For a long time we have known that nothing that we do on the Internet is private, but this new revelation is deeply, deeply disturbing. In my novel entitled “The Beginning Of The End”, I attempted to portray the “Big Brother” surveillance grid which is constantly evolving all around us, but even I didn’t know that things were quite this bad. According to an article that was just published by Ars Technica, when you visit the websites that have installed this secret surveillance code, it is like someone is literally “looking over your shoulder”…
If you have the uncomfortable sense someone is looking over your shoulder as you surf the Web, you’re not being paranoid. A new study finds hundreds of sites—including microsoft.com, adobe.com, and godaddy.com—employ scripts that record visitors’ keystrokes, mouse movements, and scrolling behavior in real time, even before the input is submitted or is later deleted.
Go back and read that again.
Do you understand what that means?
Even if you ultimately decide not to post something, these websites already know what you were typing, where you clicked and how you were moving your mouse.
Essentially, it is like someone is literally sitting behind you and watching every single thing that you do on that website. The following comes from the Daily Mail…
In a blog post revealing the findings, Steven Englehardt, a PhD candidate at Princeton, said: ‘Unlike typical analytics services that provide aggregate statistics, these scripts are intended for the recording and playback of individual browsing sessions, as if someone is looking over your shoulder.
This is fundamentally wrong, and if I am elected to Congress I am going to fight like mad for our privacy rights on the Internet. Nobody should be allowed to literally monitor our keystrokes, but according to a brand new study that has just been released, 482 of the largest websites in the entire world are doing this…
A study published last week reported that 482 of the 50,000 most trafficked websites employ such scripts, usually with no clear disclosure. It’s not always easy to detect sites that employ such scripts. The actual number is almost certainly much higher, particularly among sites outside the top 50,000 that were studied.
“Collection of page content by third-party replay scripts may cause sensitive information, such as medical conditions, credit card details, and other personal information displayed on a page, to leak to the third-party as part of the recording,” Steven Englehardt, a PhD candidate at Princeton University, wrote. “This may expose users to identity theft, online scams, and other unwanted behavior. The same is true for the collection of user inputs during checkout and registration processes.”
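The mechanics Englehardt describes boil down to one thing: every input event is serialized and buffered the moment it happens, not when a form is submitted. The following is a minimal, purely illustrative model of that event stream; the class and method names are invented for this sketch, and real replay scripts run as JavaScript in the browser and POST each batch to a third-party server:

```python
# Illustrative model (not any vendor's actual code) of how a session-replay
# script can capture input before it is ever submitted.
import time

class ReplayRecorder:
    """Buffers every input event with a timestamp, exactly as typed."""

    def __init__(self):
        self.events = []

    def on_keystroke(self, field, char):
        # Recorded immediately -- deleting the text later does not
        # remove these events from the buffer.
        self.events.append((time.time(), "key", field, char))

    def on_mouse_move(self, x, y):
        self.events.append((time.time(), "mouse", x, y))

    def flush(self):
        # In a real deployment this batch would be sent to a
        # third-party server, independent of any form submission.
        batch, self.events = self.events, []
        return batch

recorder = ReplayRecorder()
for ch in "my secret":
    recorder.on_keystroke("comment-box", ch)
# The user deletes the draft -- but the batch already holds every keystroke.
print(len(recorder.flush()))  # 9 events, one per character
```

This is why “even before the input is submitted or is later deleted” matters: the recording happens per event, so there is nothing the user can do afterward to take it back.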
I am calling on every website that is using this sort of code to cease and desist immediately. This is a gross violation of our privacy, and Congress needs to pass legislation protecting the American people immediately.
And of course it isn’t just the Internet where our privacy rights are being greatly violated. The CIA has developed software that can remotely turn on the cameras and microphones on our phones whenever they want, and they can also use our phones as GPS locators to track us wherever we go…
CIA-created malware can penetrate and then control the operating systems for both Android and iPhone phones, allege the documents. This software would allow the agency to see the user’s location, copy and transmit audio and text from the phone and covertly turn on the phone’s camera and microphone and then send the resulting images or sound files to the agency.
So just like the Internet, nothing that you do on your phone is ever truly private.
And would you be shocked to learn that our televisions can be used to spy on us as well?
Incredibly, they can even be used to monitor us when they appear to be turned off…
A program dubbed “Weeping Angel,” after a monster from the popular British TV science fiction series “Doctor Who,” can set a Samsung smart TV into a fake “off” mode to fool the consumer into thinking the TV isn’t recording room sounds when it still is. The conversations are then sent out via the user’s server. The program was developed in conjunction with MI5, Britain’s domestic counterintelligence and security agency, according to the WikiLeaks documents.
We are rapidly getting to the point where nothing will ever be truly private in our society ever again.
Virtually everything that we do is constantly being watched, tracked, monitored and recorded, and with each passing day our level of privacy is being eroded just a little bit more.
If you don’t want your children to grow up in a world where “Big Brother” is omnipresent, now is the time to stand up and fight. We can put limits on technology and start reclaiming our privacy, but that is only going to happen if we all work together.
Michael Snyder is a Republican candidate for Congress in Idaho’s First Congressional District, and you can learn how you can get involved in the campaign on his official website. His new book entitled “Living A Life That Really Matters” is available in paperback and for the Kindle on Amazon.com.
Dec 7, 2017
AI building AI is the next phase humanity appears to be going through in its technological evolution. We are at the point where corporations are designing Artificial Intelligence (AI) machines, robots and programs to make child AI machines, robots and programs – in other words, we have AI building AI. While some praise this development and point out the benefits (the fact that AI is now smarter than humanity in some areas, and thus can supposedly better design AI than humans), there is a serious consequence to all this: humanity is becoming further removed from the design process – and therefore has less control. We have now reached a watershed moment with AI building AI better than humans can. If AI builds a child AI which outperforms, outsmarts and overpowers humanity, what happens if we want to modify it or shut it down – but can’t? After all, we didn’t design it, so how can we be 100% sure there won’t be unintended consequences? How can we be sure we can 100% directly control it?
Google Brain researchers announced in May 2017 that they had created AutoML, an AI which can build child AIs. The “ML” in AutoML stands for Machine Learning. As this article Google’s AI Built Its Own AI That Outperforms Any Made by Humans reveals, AutoML created a child AI called NASNet which outperformed all other computer systems in its task of object recognition:
“The Google researchers automated the design of machine learning models using an approach called reinforcement learning. AutoML acts as a controller neural network that develops a child AI network for a specific task. For this particular child AI, which the researchers called NASNet, the task was recognising objects – people, cars, traffic lights, handbags, backpacks, etc. – in a video in real-time. AutoML would evaluate NASNet’s performance and use that information to improve its child AI, repeating the process thousands of times. When tested on the ImageNet image classification and COCO object detection data sets, which the Google researchers call “two of the most respected large-scale academic data sets in computer vision,” NASNet outperformed all other computer vision systems. According to the researchers, NASNet was 82.7 percent accurate at predicting images on ImageNet’s validation set. This is 1.2 percent better than any previously published results, and the system is also 4 percent more efficient, with a 43.1 percent mean Average Precision (mAP).”
With AutoML, Google is building algorithms that analyze the development of other algorithms, learning which methods succeed and which do not. This approach, a significant trend in AI research, is often described as “learning to learn” or “meta-learning.” We are entering a future where computers will invent algorithms to solve problems faster than we can, and humanity will be further and further removed from the whole process.
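The propose-evaluate-improve loop the researchers describe can be sketched in a few lines. Everything below is a toy stand-in: the search space, the reward function, and the names `child_accuracy` and `controller_search` are invented for illustration (actual AutoML trains real child networks with a neural controller), but the feedback loop has the same shape — the controller proposes a child, scores it, and keeps whatever improves the reward:

```python
# Toy sketch of a controller/child architecture-search loop.
import random

def child_accuracy(depth):
    # Stand-in for "train the child network and measure its accuracy":
    # here we simply pretend accuracy peaks at depth 12.
    return 1.0 - abs(depth - 12) / 20.0

def controller_search(steps=200, seed=0):
    rng = random.Random(seed)
    best_depth, best_acc = 1, child_accuracy(1)
    for _ in range(steps):
        # The controller proposes a child architecture near the current best.
        candidate = max(1, best_depth + rng.choice([-2, -1, 1, 2]))
        acc = child_accuracy(candidate)
        # The "reinforcement" step: keep proposals that improve the reward.
        if acc > best_acc:
            best_depth, best_acc = candidate, acc
    return best_depth, best_acc

depth, acc = controller_search()
print(depth, round(acc, 2))
```

The unsettling point the article makes follows directly from this structure: the human only writes the outer loop, while the actual designs come out of the search.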
The issue at stake is how much “freedom” we give AI. By that I mean this: those pushing the technological agenda boast that AI is qualitatively different from any machines of the past, because AI is autonomous and adaptable, meaning it can “think” for itself, learn from its mistakes and alter its behavior accordingly. This makes AI more formidable and at the same time far more dangerous, because then we lose the ability to predict how it will act. It begins to write its own algorithms in ways we don’t comprehend based on its supposed “self-corrective” ability, and pretty soon we have no way to know what it will do.
Now, what if such an autonomous and adaptable AI is given the leeway to create a child AI which has the same parameters? Humanity is then one step further removed from the creation. Yes, we can program the first AI to only design child AIs within certain parameters, but can we ultimately control that process and ensure our biases are not handed down, given that we are programming AI in the first place to be more human-like and learn from its mistakes?
In his article The US and the Global Artificial Intelligence Arms Race, Ulson Gunnar writes:
“OpenAI’s Dr. Dario Amodei would point out that research conducted into machine learning often resulted in unintended solutions developed by AI. He and other researchers noted that often the decision making process of AI systems is not entirely understood and many results are often difficult to predict.
The danger lies not necessarily in first training AI platforms in labs and then releasing a trained system onto a factory floor, on public roads or even into combat with predetermined and predictable capabilities, but in autonomous AI systems being released with the capacity to continue learning and adapting in unpredictable, undesirable and potentially dangerous ways.
Dr. Kathleen Fisher would reiterate this concern, noting that autonomous, self-adapting cyber weapons could potentially create unpredictable collateral damage. Dr. Fisher would also point out that humans would be unable to defend against AI agents.”
Power and strength without wisdom and kindness is a dangerous thing, and that is exactly what we are creating with AI. We can’t ever teach it to be wise or kind, since those qualities spring from having consciousness, emotion and empathy. Meanwhile, the best we can do is set very tight ethical parameters; however, there are no guarantees here. The average person has no way of knowing what code was created to limit AI’s behavior. Even if all the AI programmers in the world wanted to ensure adequate ethical limitations, what if someone, somewhere, makes a mistake? What if AutoML creates systems so quickly that society can’t keep up in terms of understanding and regulating them? NASNet could easily be employed in automated surveillance systems due to its excellent object recognition. Do you think the NWO controllers would hesitate even for a moment to deploy AI against the public in order to protect their power and destroy their opposition?
The Google’s AI Built Its Own AI That Outperforms Any Made by Humans article tries to reassure us with its conclusion:
“Thankfully, world leaders are working fast to ensure such systems don’t lead to any sort of dystopian future. Amazon, Facebook, Apple, and several others are all members of the Partnership on AI to Benefit People and Society, an organisation focused on the responsible development of AI. The Institute of Electrical and Electronics Engineers (IEEE) has proposed ethical standards for AI, and DeepMind, a research company owned by Google’s parent company Alphabet, recently announced the creation of a group focused on the moral and ethical implications of AI. Various governments are also working on regulations to prevent the use of AI for dangerous purposes, such as autonomous weapons, and so long as humans maintain control of the overall direction of AI development, the benefits of having an AI that can build AI should far outweigh any potential pitfalls.”
However, I am anything but reassured. We can set up all the ethics committees we want. The fact remains that it is theoretically impossible to ever protect ourselves 100% from AI. The article Containing a Superintelligent AI Is Theoretically Impossible explains:
” … according to some new work from researchers at the Universidad Autónoma de Madrid, as well as other schools in Spain, the US, and Australia, once an AI becomes “super intelligent”… it will be impossible to contain it.
Well, the researchers use the word “incomputable” in their paper, posted on the ArXiv preprint server, which in the world of theoretical computer science is perhaps even more damning. The crux of the matter is the “halting problem” devised by Alan Turing, which holds that no algorithm is able to correctly predict whether another algorithm will run forever or whether it will eventually halt—that is, stop running.
Imagine a superintelligent AI with a program that contains every other program in existence. The researchers provided a logical proof that if such an AI could be contained, then the halting problem would by definition be solved. To contain that AI, the argument is that you’d have to simulate it first, but it already simulates everything else, and so we arrive at a paradox.
It would not be feasible to make sure that [an AI] won’t ever cause harm to humans.”
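The halting-problem argument in the quote can be made concrete with Turing’s classic diagonal construction. This is a sketch, not proof machinery: `halts` stands in for the hypothetical perfect oracle (the names here are invented for illustration), and the contradiction shows why no correct implementation of it can exist:

```python
# Turing's diagonal argument: assume a perfect halts() oracle existed,
# then build a program that forces it to be wrong about itself.

def halts(program, arg):
    """Hypothetical oracle: would return True iff program(arg) halts.
    The construction below shows no correct version can be written."""
    raise NotImplementedError("provably impossible to implement correctly")

def troublemaker(program):
    # Do the opposite of whatever the oracle predicts about
    # running `program` on itself.
    if halts(program, program):
        while True:      # oracle said "halts" -> loop forever
            pass
    else:
        return           # oracle said "loops" -> halt immediately

# Feeding troublemaker to itself: whichever answer halts() gives for
# troublemaker(troublemaker), the program does the opposite, so any
# claimed oracle must be wrong somewhere. The containment paper's
# "program that contains every other program" runs into the same wall.
```

This is why the researchers say “incomputable” rather than merely “hard”: a perfect containment check would double as a halting-problem solver, which is ruled out in principle, not just in practice.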
Meanwhile, it appears there are too many lures and promises of profit, convenience and control for humanity to slow down. AI is starting to take everything over. Facebook just deployed a new AI which scans users’ posts for “troubling” or “suicidal” comments and then reports them to the police! This article states:
“Facebook admits that they have asked the police to conduct more than ONE HUNDRED wellness checks on people.
‘Over the last month, we’ve worked with first responders on over 100 wellness checks based on reports we received via our proactive detection efforts. This is in addition to reports we received from people in the Facebook community.’“
With AI building AI, we are taking another key step forward into a future where we are allowing power to flow out of our hands. This is another watershed moment in the evolution of AI. What is going to happen?
Makia Freeman is the editor of alternative media / independent news site The Freedom Articles and senior researcher at ToolsForFreedom.com, writing on many aspects of truth and freedom, from exposing aspects of the worldwide conspiracy to suggesting solutions for how humanity can create a new system of peace and abundance.
Hypocrites! The only reason the US is going after Kaspersky is because he divulged that the terrifying Stuxnet virus, which attacked Iran’s nuclear program, was made by Americans and Israelis.
The whole situation around the US ban on the use of Kaspersky Lab antivirus products by federal agencies “looks very strange,” Kaspersky told Germany’s Die Zeit daily, adding that the whole issue in fact lacks substance. “It was much more hype and noise than real action,” he said.
Kaspersky then explained that the US authorities ordered all governmental agencies to remove all the company’s software from their computers, even though “we had almost zero installations there.” With little real need for such measures, they were apparently aimed at damaging the company’s reputation.
“It seems that we just do our job better than others and that made someone very disappointed,” Kaspersky said of the motives behind the US government’s move. “It seems that we detected some unknown or probably very well-known malware that made someone in the US very disappointed.”
At the same time, he stressed that his company does not collect “any sensitive personal data,” not to mention any classified documents, adding that the only data Kaspersky Lab is hunting for is “new types of malware, unknown or suspicious apps.”
The Russian cybersecurity company was indeed accused by the US media of using its software to collect the NSA technology for the Russian government – something that Kaspersky Lab vehemently denied.
According to US media reports in October 2017, an employee from the National Security Agency (NSA) elite hacking unit lost some of the agency’s espionage tools after storing them on his home computer in 2015. The media jumped to blame Kaspersky Lab and the Kremlin.
Following the reports, the company conducted an internal investigation and stumbled upon an incident dating back to 2014. At the time, Kaspersky Lab was investigating the activities of the Equation Group – a powerful group of hackers that later was identified as an arm of the NSA.
As part of Kaspersky’s investigation, it analyzed information received from a computer of an unidentified user, who is alleged to be the security service employee in question. It turned out that the user installed pirated software containing Equation malware, then “scanned the computer multiple times,” which resulted in antivirus software detecting suspicious files, including a 7z archive.
“The archive itself was detected as malicious and submitted to Kaspersky Lab for analysis, where it was processed by one of the analysts. Upon processing, the archive was found to contain multiple malware samples and source code for what appeared to be Equation malware,” the company’s October statement explained.
The analyst then reported the matter directly to Eugene Kaspersky, who ordered the company’s copy of the code to be destroyed.
On Thursday, Kaspersky Lab issued another statement concerning this incident following a more extensive investigation. The results of the investigation showed that the computer in question was infected with several types of malware in addition to the one created by Equation. Some of this malware provided access to the data on this computer to an “unknown number of third parties.”
In particular, the computer was infected with backdoor malware called Mokes, which is also known as Smoke Bot and Smoke Loader. It is operated by an organization called Zhou Lou, based in China.
Kaspersky Lab, a world leader in cybersecurity founded in Moscow in 1997, has been under pressure in the US for years. It repeatedly faced allegations of ties to the Kremlin, though no smoking gun has ever been produced.
In July, Kaspersky offered to hand over source code for his software to the US government, but wasn’t taken up on the offer. In October, the cybersecurity company pledged to reveal its code to independent experts as part of an unprecedented Global Transparency Initiative aimed at staving off US accusations.
Kaspersky has been swept up in the ongoing anti-Russian hysteria in the US, which centers on the unproven allegations of Russian meddling in the 2016 presidential elections. In September, the US government banned federal agencies from using Kaspersky Lab antivirus products, citing concerns that it could jeopardize national security and claiming the company might have links to the Kremlin. Eugene Kaspersky denounced the move as “baseless paranoia at best.”
Even as Kaspersky Lab is offering its cooperation to US authorities, on Thursday, WikiLeaks published source code for the CIA hacking tool “Hive,” which was used by US intelligence agencies to imitate the Kaspersky Lab code and leave behind false digital fingerprints.
The US might be targeting Kaspersky Lab in its witch hunt because the company might be able to disprove American allegations against Russia, experts told RT. “We have Kaspersky saying, ‘We can do this. We can prove some of these hacks are not Russian, they are American,’ when it comes to the presidential elections. And so they needed to discredit them,” former MI5 analyst Annie Machon said.
The campaign against the Russian cybersecurity firm could go back as early as to 2010, when Kaspersky Lab revealed the origin of the Stuxnet virus that hit Iran’s nuclear centrifuges, she told RT. Back then, Kaspersky Lab stated that “this type of attack could only be conducted with nation-state support and backing.” Nobody claimed responsibility for the creation of the malware that targeted Iran. However, it is widely believed that the US and Israeli intelligence agencies were behind Stuxnet.
China will catch up with the US in artificial intelligence and will dominate the field by 2030, Alphabet chairman Eric Schmidt warns.
“Trust me, these Chinese people are good,” the executive chairman of Google’s parent company, Alphabet Inc., Eric Schmidt said at the Artificial Intelligence and Global Security Summit on Wednesday. “By 2020 they will have caught up. By 2025 they will be better than us. And by 2030 they will dominate the industries of AI.”
“Just stop for a sec. The [Chinese] government said that,” the former Google CEO said, referring to Beijing’s strategy, which sees the AI as an important driver for future economic and military power.
“Weren’t we the ones in charge of AI dominance here in our country? Weren’t we the ones that invented this stuff?” Schmidt continued, asking if the US was going to “exploit the benefits of all this technology for betterment and American exceptionalism in our own arrogant view.”
Those doubting the ability of the Chinese system and education to produce the necessary AI researchers are “wrong,” Schmidt said, noting that Asian programmers, particularly Chinese ones, “tend to win many of the top spots” in Google’s coding contests.
To remain competitive in artificial intelligence, America needs to “get [its] act together as a country,” Schmidt believes, emphasizing that the US, unlike China, lacks a strategy. The government, he said, should focus on research funding and on immigration as a source of new talent.
“Shockingly, some of the very best people are in countries that we won’t let into America. Would you rather have them building AI somewhere else, or rather have them here?” Schmidt asked. “Iran produces some of the top computer scientists in the world, and I want them here. To be clear, I want them working for Alphabet and Google!” he confessed, adding it is “crazy” not to allow them entry to the US.
(Anti Media) A hack of the popular adult website, Pornhub.com, may have affected millions of users by infecting their devices with malware. The Independent summarized how the virus infected computers:
“A secret, malicious advert has been running on the free pornography site for more than a year. And it works by infiltrating people’s computer and then having their machine taken over, all without a users’ knowledge.”
The Pornhub hack, which was shut down shortly after it was discovered, worked by appearing “to be a browser or operating system update. That would trick a user into clicking on it and installing the software.”
Proofpoint, the security firm that discovered the breach, explained that after the virus was installed, it automatically clicked on ads to generate revenue. Though it was malware, it could have taken many different forms and could have stolen private information.
“While the payload in this case is ad fraud malware, it could just as easily have been ransomware, an information stealer, or any other malware,” Proofpoint said, as noted by the Independent. “Regardless, threat actors are following the money and looking to more effective combinations of social engineering, targeting and pre-filtering to infect new victims at scale.”
Proofpoint identified the hacker group as KovCoreG. The ad fraud malware they used is called Kovter, and the attack is still active on other sites.
Pornhub is the largest porn site in the world, with 26 billion yearly visits. The hack created millions of potential victims in the United States, Canada, the U.K., and Australia.
Fortunately for Pornhub users, the virus did not target their private data, but even so, the fact that it worked through a porn site likely deterred people from seeking assistance for the problems on their computers.
As Mark James, a security specialist at the IT firm ESET, told the Guardian:
“The audience is possibly less likely to have security in place or active as people’s perception is that it’s already a dark place to surf. Also, the user may be less likely to call for help and try to click through any popups or install any software themselves, not wanting others to see their browsing habits.”
Pornhub did not return the Guardian’s request for comment.
September 17, 2017
Talk about a nightmare. It is being reported that criminals were able to hack into Equifax and make off with the credit information of 143 million Americans. We are talking about names, Social Security numbers, dates of birth, home addresses and even driver’s license numbers. If this data breach was an earthquake, we would be talking about a magnitude-10.0 on the identity theft scale. We have never seen anything like this before, and to say that this will be “disastrous” for the credit industry would be a massive understatement. What really disturbed me about this story is that this hack reportedly occurred between “mid-May and July of this year”…
Credit monitoring company Equifax has been hit by a high-tech heist that exposed the Social Security numbers and other sensitive information about 143 million Americans. Now the unwitting victims have to worry about the threat of having their identities stolen.
The Atlanta-based company, one of three major U.S. credit bureaus, said Thursday that “criminals” exploited a U.S. website application to access files between mid-May and July of this year.
So why didn’t we learn about this until September?
Somebody out there really needs to answer that question for us.
And even though the “143 million” number is being thrown around constantly, according to USA Today we may never know the true number of victims…
When asked if there’s a way to quantify how many people have been harmed, John Ulzheimer, a credit expert and former employee at Equifax and credit score firm FICO, said: “There’s no way to know, and there may never be a way to know.”
Personally, I don’t see how Equifax can possibly survive after this. Their stock price is already crashing, and now it has come out that they had put a “music major” in charge of data security…
When Congress hauls in Equifax CEO Richard Smith to grill him, it can start by asking why he put someone with degrees in music in charge of the company’s data security.
And then they might also ask him if anyone at the company has been involved in efforts to cover up Susan Mauldin’s lack of educational qualifications since the data breach became public.
It would be fascinating to hear Smith try to explain both of those extraordinary items.
Also, we are now finding out that Equifax has not just had security problems here in the United States.
According to the New York Post, data breaches have been taking place all over the globe…
Hackers had access to the names, dates of birth and e-mail addresses of nearly 400,000 people in the United Kingdom, said Equifax’s British subsidiary in a statement last week.
In Canada, sensitive data belonging to 10,000 consumers may have been hacked in the breach, said a statement from the Canadian Automobile Association.
In Argentina, one of the company’s portals was so easily accessible that it allowed quick exposure to the personal information of more than 14,000 people.
As noted above, the public didn’t learn about any of this until September.
But once top Equifax officials learned what had happened, some of them started dumping their shares of Equifax very rapidly…
Three Equifax executives — not the ones who are departing — sold shares worth a combined $1.8 million just a few days after the company discovered the breach, according to documents filed with securities regulators.
Equifax shares have lost a third of their value since it announced the breach.
Needless to say, the SEC is going to be looking into this very closely.
As we move forward, there is a tremendous amount of concern as to how much this data breach will affect the U.S. economy.
Only time will tell, but without a doubt it will have an impact. For example, according to Bloomberg this data breach could potentially have an absolutely disastrous impact on store-branded credit cards…
Equifax Inc.’s massive data breach could make an already tough market outlook even more daunting for the firms behind Gap Inc.’s and Ann Taylor’s store-branded credit cards.
Those retailers’ banking partners, including Synchrony Financial and Alliance Data Systems Corp., could see fewer account originations as more consumers freeze their credit to avoid hack-related fraud. Consumers have to take extra steps — including calling the credit bureau, going online or paying fees — to lift a block and get a new card.
“If people are defaulting to credit freezes, then if you’re a Macy’s retailer trying to sell credit cards, you can’t get that done at the point of sale,” said Vincent Caintic, an analyst at Stephens Inc. “It could become a regular thing, these freezes. It does slow down the origination process and it’s probably going to increase acquisition costs.”
If you believe that your data may have been compromised in this breach, there are some things that you can do right away to help protect against identity theft. You can sign up for 24-hour-a-day credit monitoring, you can request fraud alerts, you can enable two-factor authentication, and beyond all of that you could go as far as to freeze your credit.
But if everybody in America suddenly started freezing their credit, that would slow down economic activity dramatically. So needless to say authorities are hoping that does not happen.
In this case, Equifax needs to step up and do the right thing. They need to inform all of the victims (even if that means reaching out to 143 million different people), and they should automatically provide free credit monitoring for all of those that were affected.
I seriously doubt that Equifax will take these measures, and I also seriously doubt that Equifax will be able to survive much longer.
When you bungle something as badly as Equifax has done, it is nearly impossible to restore faith in an organization. The credit information of 143 million Americans is now in the hands of criminals, and the potential damage that could be done is absolutely off the charts.