More than 1.08 million animals were used in scientific “experiments” in this state, and that number is now rising for the first time in supposedly three years.
Think it’s not that inhumane to experiment on mice and other rodents? The animals being experimented upon also include koalas, pound dogs, rabbits, horses, monkeys and sheep.
This is happening in the Australian state of Victoria.
The official justification is that most of these animals were “simply observed” (does that mean a control group? Probably not), but other animals suffered through horrific, violent “tests” in the name of science, as an article in the Herald Sun phrased it.
This isn’t in the name of science, it’s in the name of scientism and money.
One striking detail in these figures is that the number of animals subjected to genetic modification rose tremendously, with 133,852 animals being used across 259 separate projects.
Animals used in experiments during 2016 in Victoria
Lab Mice — 398,277
Fish — 294,950
Sheep — 85,677
Guinea pigs — 3,962
Pound dogs — 2,945
Lab rabbits — 1,464
Horses — 704
Pound cats — 597
Marmosets — 95
Macaques — 86
Reptiles — 52
Some animals were implanted with who knows what for long-term monitoring. Others were left with the long-term, major physical impediments and deformities that animal experimentation so often causes, and some of the animals died.
What is the Australian mainstream media saying about these animals? They’re trying to look on the bright side and support the practice, insisting that the total number of animals being killed “in the name of science” is decreasing.
“Only 6897 animals” have died in tests where death was the “proposed end point,” laments the Australian mainstream media. That’s so much better than zero animals dying in a study that basically intended for them to die.
Many of these animals were euthanized after the tests: a whopping 79,858 were put down and discarded.
These numbers for just the state of Victoria alone were revealed in what they simply call annual data. The mainstream reported that it was simply the “sad cost of scientific advancement.”
People who aren’t insane can probably recognize that this isn’t “scientific advancement.” There is no quantifiable evidence that it advanced anything for anybody, except the depth of the pockets of the entities funding this research.
The original article was a little vague on whether these figures were for one year or a longer period, but they appear to be annual, and for some reason the Australian state of Victoria is particularly bad about this.
“The state government records every live specimen used to help tackle numerous human diseases, products and breeding practices.”
It is believed that Australia is one of the worst countries in the world when it comes to animal experimentation. For some reason Victoria in particular is a place where a lot of “research” is done, and a lot of animals have to suffer.
Make no mistake, these animals do not suffer for some vague notion of “the advancement of science” or for the betterment of mankind.
No, research grants are provided to academics, researchers, and scientists because the people with money want something generated from that money. Money is not given out for free or for some vague altruistic purpose, just as scholarships are not handed to students for no reason: the funders want a return.
Agriculture Victoria, a department of the state government, is supposed to ensure that all tests are approved and overseen, to “ensure ethical treatment.”
But their standard of “ethical treatment” allows tens of thousands of animals, many of them perhaps barely affected by the tests, to be euthanized and discarded, and permits procedures that leave permanent, grotesque deformities and injuries. That is a deeply warped sense of “ethics.”
To justify all this, a spokesman for the department said:
“The use of animals in research and teaching is, at this moment, an unfortunate necessity for the purposes of promoting the health of humans, animals and the environment.
Victorian legislation ensures that unnecessary animal use is not permitted and mandates the considerate and respectful treatment of research animals.”
So why would this horrific animal experimentation be performed in Australia, and not elsewhere?
Take it from Paul Ehrlich, the population control advocate and ruthless “kill the poor” type. There is an extensive video about him worth watching in its own right, and at one point in it he said something interesting.
Ehrlich said that Australia, not America, would tolerate a conversation about placing sterilization-inducing chemicals in the public water supply: sterilizing people en masse to ensure they have no more children, with antidotes handed out only to families the government deemed worthy of reproducing.
Unfortunately, as a person who has lived in both Australia and America, I can confirm this is correct. Australia is more open than America to a conversation about everybody being sterilized and allowed to reproduce only with government permission.
It’s not the people of Australia inherently; it’s a culture there, inherited largely from Britain, in which people are made to feel safe and docile, willing to accept whatever the government does.
Well, this is the state’s sense of ethics. Millions of animals are nothing but test subjects.
When will we have enough technology? Do we really need more “scientific advancement,” or do we need to just be at peace with the plants, animals, and people of this planet?
By Valérie Ouellet, Dave Seglins and Rachel Houlihan
Posted: Nov 06, 2017
Montreal Canadiens and Loblaw among Canadian entities found in massive international offshore leak
The names of more than 3,000 Canadian companies, trusts, foundations and individuals appear in the Paradise Papers, a leak of millions of records from offshore law firm Appleby and the corporate registries of 19 tax havens. (Gary Hershorn/Reuters)
A supermarket giant, an NHL hockey team, several billionaires and a yacht captain.
These are just a few of the roughly 3,300 Canadian companies, trusts, foundations and individuals whose names appear in the Paradise Papers, a leak of millions of records from offshore law firm Appleby and the corporate registries of 19 tax havens.
The leak, revealed Sunday by CBC/Radio Canada and the Toronto Star, in partnership with the International Consortium of Investigative Journalists (ICIJ), is the largest ever involving Canadians who keep money in tax havens. It contains more than five times more Canadian companies and individuals than the 625 found in last year’s Panama Papers leak.
The records also show that Canada is one of Appleby’s biggest markets for offshore financial services clients, behind the U.S., the U.K. and China.
Leaked data from the law firm shines a light on hundreds of well-known companies and wealthy Canadians who benefit from offshore trusts and corporations set up in countries where they pay little or no taxes, as a way to legally avoid — or potentially evade — paying taxes at home.
The Paradise Papers name hundreds of Canadian contacts working as accountants, attorneys or consultants for clients of Appleby. Also named are companies, individuals and 306 organizers and beneficiaries of trust funds incorporated in Bermuda or the Cayman Islands.
When news of the leak broke Sunday, John Power, a spokesperson for the minister of national revenue, said “the CRA is reviewing links to Canadian entities and will take appropriate action.”
Canadian offshore clients
The Montreal Canadiens, an Appleby client since 1980, set up two trusts in Bermuda, including an employee benefit fund that was shut down in 2010. In a statement to CBC News, the organization says its offshore business was “in full compliance with the existing Canadian tax legislation.”
Canadian supermarket giant Loblaw says it paid all appropriate taxes on the two subsidiaries it set up with Appleby’s help in Barbados and Bermuda in 2005. They were used to invest tens of millions collected from users of its President’s Choice Financial MasterCard, according to leaked documents.
In a statement, Loblaw writes: “the CRA is aware of all of our international income. Our activities […] are legal and transparent.”
The late Carl M. Dare asked for Appleby’s help to administer a trust set up in 1975. According to leaked documents, the patriarch of Canada’s Dare Foods cookie and candy empire, who died in 2014, incorporated his $5-million fund in the Cayman Islands “for members of [his] family.”
The Appleby files also name lesser-known Canadians, including several doctors, engineers, geologists, housewives, a police officer, a retired admiral of the Canadian navy, a speech pathologist, students, teachers, two writers and a scientist living in the Yukon, most of them acting as officers for corporations or benefiting from trusts.
Dennis Howlett of Canadians For Tax Fairness says the leak reveals a bigger problem that goes beyond wealthy individuals.
“It’s big corporations taking advantage of subsidiaries in tax havens to shift their profits and pay a lot lower taxes,” he said.
Howlett said the Canadian government facilitates this practice by signing tax agreements with tax havens. “This is legal, and it should not be legal. That’s the point.”
Drinks at the Ritz
Appleby repeatedly targeted Canada between 2011 and 2013 to look for new clients, especially in the mining industry. The firm’s strategy included networking trips to Vancouver, Calgary and the annual convention of the Prospectors & Developers Association of Canada in Toronto.
Records suggest all the wining and dining — at Toronto’s Ritz-Carlton, the Fairmont Royal York and top-ranked restaurant Canoe — may have paid off, as the firm secured 127 new Canadian accounts in 2012 alone.
The Paradise Papers reveal Appleby’s clients in 2014 included at least 17 Canada-based resource companies.
Appleby billed Canadian clients and law firms at least $12 million between 2009 and 2013, including $8.2 million through its Bermuda office.
In a 2013 email exchange, a pair of partners and a senior analyst discussed ramping up marketing efforts in Canada, including targeting national accounting and law firms.
“There has been significant law firm consolidation over the years especially on the national level so perhaps well worth us looking again.”
Appleby was clearly enthusiastic about signing up Canadians as offshore clients, but it was also interested in setting up its own shop here.
As last year’s Panama Papers investigation revealed, Canada is fast becoming a tax haven because of the combination of its sterling reputation and registry rules that allow the true owners of companies to hide their identities.
In 2007, Appleby launched “Project Kerry,” a plan to open a headquarters in a new jurisdiction. The firm’s short list of potential sites included the European island of Jersey; Mauritius, an island nation in the Indian Ocean; the British Isle of Man; and Halifax.
A memo to managing partners says Halifax is a logical long-term choice, with direct flights to the firm’s Bermuda and London offices, “very reasonable operating costs” and “very significant payroll tax rebates.”
While weighing pros and cons, Appleby worried that its clients who had incorporated in a tax-free jurisdiction may now have to pay taxes if they did business in Canada.
The firm also wondered if Canadian law enforcement could obtain warrants to access sensitive documents on offshore shell companies stored in Halifax.
In a memorandum prepared in August 2007, Appleby partner Michael Burns says he asked a local Halifax lawyer to evaluate whether keeping a server outside Canada “would constitute a complete and lawfully appropriate foil to the normal warrant search and seizure processes which generally apply to Canadian businesses.”
Burns says the initial informal advice from the lawyer “suggests there may be considerable cause for optimism in the outcome.”
CBC partners reached out to Burns, but he declined to comment.
In 2009, Appleby instead chose the Isle of Man for its new office, noting in an email that Halifax was a “very close second.”
Halifax has been trying to transform itself into a destination for offshore service providers for a few years now, according to Howlett.
“Even if it is not illegal, it is definitely unethical,” he said of the strategy.
Howlett said he hopes the Paradise Papers leak acts as a wake-up call for federal and provincial governments as well as average Canadians.
In a statement published online shortly after being contacted by the ICIJ, Appleby says it investigated allegations made by the consortium and was “satisfied that there is no evidence of any wrongdoing, either on the part of ourselves or our clients.”
Appleby goes on to say it doesn’t tolerate illegal behaviour and provides advice to clients on “legitimate and lawful ways to conduct their business.” The firm says it’s “not infallible,” and when mistakes have happened, it notified the relevant authorities.
CBC’s conclusions rely on a 2014 copy of Appleby’s Master Client Database, authenticated by the ICIJ. CBC/Radio Canada and the Toronto Star analyzed thousands of addresses and names contained in that data to match offshore entities with their parent company using a unique address code assigned by Appleby. CBC’s definition of a “Canadian” found in this leak relies on finding at least one of the following within the data: a mailing, residential or business address in Canada; a Canadian passport number; or Appleby’s direct reference to a Canadian birthplace or nationality. The ICIJ excluded incomplete, pending and duplicate accounts.
Most of management theory is inane, writes our correspondent, the founder of a consulting firm. If you want to succeed in business, don’t get an M.B.A. Study philosophy instead
During the seven years that I worked as a management consultant, I spent a lot of time trying to look older than I was. I became pretty good at furrowing my brow and putting on somber expressions. Those who saw through my disguise assumed I made up for my youth with a fabulous education in management. They were wrong about that. I don’t have an M.B.A. I have a doctoral degree in philosophy—nineteenth-century German philosophy, to be precise. Before I took a job telling managers of large corporations things that they arguably should have known already, my work experience was limited to part-time gigs tutoring surly undergraduates in the ways of Hegel and Nietzsche and to a handful of summer jobs, mostly in the less appetizing ends of the fast-food industry.

The strange thing about my utter lack of education in management was that it didn’t seem to matter. As a principal and founding partner of a consulting firm that eventually grew to 600 employees, I interviewed, hired, and worked alongside hundreds of business-school graduates, and the impression I formed of the M.B.A. experience was that it involved taking two years out of your life and going deeply into debt, all for the sake of learning how to keep a straight face while using phrases like “out-of-the-box thinking,” “win-win situation,” and “core competencies.” When it came to picking teammates, I generally held out higher hopes for those individuals who had used their university years to learn about something other than business administration.
After I left the consulting business, in a reversal of the usual order of things, I decided to check out the management literature. Partly, I wanted to “process” my own experience and find out what I had missed in skipping business school. Partly, I had a lot of time on my hands. As I plowed through tomes on competitive strategy, business process re-engineering, and the like, not once did I catch myself thinking, Damn! If only I had known this sooner! Instead, I found myself thinking things I never thought I’d think, like, I’d rather be reading Heidegger! It was a disturbing experience. It thickened the mystery around the question that had nagged me from the start of my business career: Why does management education exist?

Management theory came to life in 1899 with a simple question: “How many tons of pig iron bars can a worker load onto a rail car in the course of a working day?” The man behind this question was Frederick Winslow Taylor, the author of The Principles of Scientific Management and, by most accounts, the founding father of the whole management business.

Taylor was forty-three years old and on contract with the Bethlehem Steel Company when the pig iron question hit him. Staring out over an industrial yard that covered several square miles of the Pennsylvania landscape, he watched as laborers loaded ninety-two-pound bars onto rail cars. There were 80,000 tons’ worth of iron bars, which were to be carted off as fast as possible to meet new demand sparked by the Spanish-American War. Taylor narrowed his eyes: there was waste there, he was certain. After hastily reviewing the books at company headquarters, he estimated that the men were currently loading iron at the rate of twelve and a half tons per man per day.
Taylor stormed down to the yard with his assistants (“college men,” he called them) and rounded up a group of top-notch lifters (“first-class men”), who in this case happened to be ten “large, powerful Hungarians.” He offered to double the workers’ wages in exchange for their participation in an experiment. The Hungarians, eager to impress their apparent benefactor, put on a spirited show. Huffing up and down the rail car ramps, they loaded sixteen and a half tons in something under fourteen minutes. Taylor did the math: over a ten-hour day, it worked out to seventy-five tons per day per man. Naturally, he had to allow time for bathroom breaks, lunch, and rest periods, so he adjusted the figure approximately 40 percent downward. Henceforth, each laborer in the yard was assigned to load forty-seven and a half tons of pig iron per day, with bonus pay for reaching the target and penalties for failing.

When the Hungarians realized that they were being asked to quadruple their previous daily workload, they howled and refused to work. So Taylor found a “high-priced man,” a lean Pennsylvania Dutchman whose intelligence he compared to that of an ox. Lured by the promise of a 60 percent increase in wages, from $1.15 to a whopping $1.85 a day, Taylor’s high-priced man loaded forty-five and three-quarters tons over the course of a grueling day—close enough, in Taylor’s mind, to count as the first victory for the methods of modern management.

Taylor went on to tackle the noble science of shoveling and a host of other topics of concern to his industrial clients.
He declared that his new and unusual approach to solving business problems amounted to a “complete mental revolution.” Eventually, at the urging of his disciples, he called his method “scientific management.” Thus was born the idea that management is a science—a body of knowledge collected and nurtured by experts according to neutral, objective, and universal standards.

At the same moment was born the notion that management is a distinct function best handled by a distinct group of people—people characterized by a particular kind of education, way of speaking, and fashion sensibility. Taylor, who favored a manly kind of prose, expressed it best in passages like this:
… the science of handling pig iron is so great and amounts to so much that it is impossible for the man who is best suited to this type of work to understand the principles of this science, or even to work in accordance with these principles, without the aid of a man better educated than he is.
From a metaphysical perspective, one could say that Taylor was a “dualist”: there is brain, there is brawn, and the two, he believed, very rarely meet.
Taylor went around the country repeating his pig iron story and other tales from his days in the yard, and these narratives formed something like a set of scriptures for a new and highly motivated cult of management experts. This vanguard ultimately vaulted into the citadel of the Establishment with the creation of business schools. In the spring of 1908, Taylor met with several Harvard professors, and later that year Harvard opened the first graduate school in the country to offer a master’s degree in business. It based its first-year curriculum on Taylor’s scientific management. From 1909 to 1914, Taylor visited Cambridge every winter to deliver a series of lectures—inspirational discourses marred only by the habit he’d picked up on the shop floor of swearing at inappropriate moments.
Yet even as Taylor’s idea of management began to catch on, a number of flaws in his approach were evident. The first thing many observers noted about scientific management was that there was almost no science to it. The most significant variable in Taylor’s pig iron calculation was the 40 percent “adjustment” he made in extrapolating from a fourteen-minute sample to a full workday. Why time a bunch of Hungarians down to the second if you’re going to daub the results with such a great blob of fudge? When he was grilled before Congress on the matter, Taylor casually mentioned that in other experiments these “adjustments” ranged from 20 percent to 225 percent. He defended these unsightly “wags” (wild-ass guesses, in M.B.A.-speak) as the product of his “judgment” and “experience”—but, of course, the whole point of scientific management was to eliminate the reliance on such inscrutable variables.

One of the distinguishing features of anything that aspires to the name of science is the reproducibility of experimental results. Yet Taylor never published the data on which his pig iron or other conclusions were based. When Carl Barth, one of his devotees, took over the work at Bethlehem Steel, he found Taylor’s data to be unusable. Another, even more fundamental feature of science—here I invoke the ghost of Karl Popper—is that it must produce falsifiable propositions. Insofar as Taylor limited his concern to prosaic activities such as lifting bars onto rail cars, he did produce propositions that were falsifiable—and, indeed, were often falsified. But whenever he raised his sights to management in general, he seemed capable only of soaring platitudes. At the end of the day his “method” amounted to a set of exhortations: Think harder! Work smarter! Buy a stopwatch!

The trouble with such claims isn’t that they are all wrong. It’s that they are too true. When a congressman asked him if his methods were open to misuse, Taylor replied, No.
If management has the right state of mind, his methods will always lead to the correct result. Unfortunately, Taylor was right about that. Taylorism, like much of management theory to come, is at its core a collection of quasi-religious dicta on the virtue of being good at what you do, ensconced in a protective bubble of parables (otherwise known as case studies).

Curiously, Taylor and his college men often appeared to float free from the kind of accountability that they demanded from everybody else. Others might have been asked, for example: Did Bethlehem’s profits increase as a result of their work? Taylor, however, rarely addressed the question head-on. With good reason. Bethlehem fired him in 1901 and threw out his various systems. Yet this evident vacuum of concrete results did not stop Taylor from repeating his parables as he preached the doctrine of efficiency to countless audiences across the country.

In the management literature these days, Taylorism is presented, if at all, as a chapter of ancient history, a weird episode about an odd man with a stopwatch who appeared on the scene sometime after Columbus discovered the New World. Over the past century Taylor’s successors have developed a powerful battery of statistical methods and analytical approaches to business problems. And yet the world of management remains deeply Taylorist in its foundations.

At its best, management theory is part of the democratic promise of America. It aims to replace the despotism of the old bosses with the rule of scientific law. It offers economic power to all who have the talent and energy to attain it. The managerial revolution must be counted as part of the great widening of economic opportunity that has contributed so much to our prosperity.
But, insofar as it pretends to a kind of esoteric certitude to which it is not entitled, management theory betrays the ideals on which it was founded.

That Taylorism and its modern variants are often just a way of putting labor in its place need hardly be stated: from the Hungarians’ point of view, the pig iron experiment was an infuriatingly obtuse way of demanding more work for less pay. That management theory represents a covert assault on capital, however, is equally true. (The Soviet five-year planning process took its inspiration directly from one of Taylor’s more ardent followers, the engineer H. L. Gantt.) Much of management theory today is in fact the consecration of class interest—not of the capitalist class, nor of labor, but of a new social group: the management class.

I can confirm on the basis of personal experience that management consulting continues to worship at the shrine of numerology where Taylor made his first offering of blobs of fudge. In many of my own projects, I found myself compelled to pacify recalcitrant data with entirely confected numbers. But I cede the place of honor to a certain colleague, a gruff and street-smart Belgian whose hobby was to amass hunting trophies. The huntsman achieved some celebrity for having invented a new mathematical technique dubbed “the Two-Handed Regression.” When the data on the correlation between two variables revealed only a shapeless cloud—even though we knew damn well there had to be a correlation—he would simply place a pair of meaty hands on the offending bits of the cloud and reveal the straight line hiding from conventional mathematics.
The thing that makes modern management theory so painful to read isn’t usually the dearth of reliable empirical data. It’s that maddening papal infallibility. Oh sure, there are a few pearls of insight, and one or two stories about hero-CEOs that can hook you like bad popcorn. But the rest is just inane. Those who looked for the true meaning of “business process re-engineering,” the most overtly Taylorist of recent management fads, were ultimately rewarded with such gems of vacuity as “BPR is taking a blank sheet of paper to your business!” and “BPR means re-thinking everything, everything!”
Each new fad calls attention to one virtue or another—first it’s efficiency, then quality, next it’s customer satisfaction, then supplier satisfaction, then self-satisfaction, and finally, at some point, it’s efficiency all over again. If it’s reminiscent of the kind of toothless wisdom offered in self-help literature, that’s because management theory is mostly a subgenre of self-help. Which isn’t to say it’s completely useless. But just as most people are able to lead fulfilling lives without consulting Deepak Chopra, most managers can probably spare themselves an education in management theory.
Guantánamo Bay is a disgrace to humanity. It should have been closed yesterday, as it flies in the face of everything humanity believes about justice. If it remains open, it should be filled with all the sold-out morons in Congress and the Senate. Every president since JFK, and their extended families, should be sent there for crimes against humanity. And it does not even belong to the USA: it sits on the territory of a sovereign nation, Cuba. Yankees go home, mofos!
Torture is not acceptable. Are we not sick and tired of the pathetic cowards who believe otherwise? And these cowards sit in the US Congress and the Senate. Pathetic human beings.
Shut down your churches and your universities, USA, as you obviously pay no attention to truth and ethics. The world is getting sick of your lies and propaganda.
John Oliver examines the legal and moral issues surrounding the military prison at Guantánamo Bay.
When Donald Trump blurted out that not paying his taxes “makes me smart,” he was revealing a truth about the American narcissist. Senator Lindsey Graham was being equally arrogant when he stated, “It’s really American to avoid paying taxes, legally…It’s a game we play.”
The game has become very popular, with an incomprehensible three-quarters of Fortune 500 companies stashing profits in offshore tax havens, avoiding over $700 billion in U.S. taxes.
Who Are the Narcissists?
They are people who don’t feel any responsibility to the society that made them rich, largely because they believe in the “self-made” myth. Their numbers are growing. For every 100 households with $100 million in assets in 2010, there are now 160.
Some of the super-rich care about average Americans, and some are well-intentioned philanthropists, but in general, as numerous studies have shown, super-wealthy people tend to be imbued with a distinct sense of entitlement. They believe their talents and attributes have earned them a rightful position of status over everyone else.
One study showed that those in the wealthy classes tend to behave more unethically than average citizens, especially at the highest levels, where career success has been associated with Machiavellianism—doing anything necessary to get ahead. A recent study of 261 U.S. senior professionals found that 21 percent had clinically significant levels of psychopathic traits, compared to about one percent in the general population. That’s roughly the same rate as for prisoners.
Narcissists Blame U.S. for Collapse of the Job Market
Stunning hypocrisy: Apple claims to be responsible for “creating and supporting 1.9 million jobs” while actually employing 115,000; but the company complains that “the U.S. has stopped producing people with the skills we need.” Yet Apple undermines job creation in its role as the biggest overseas profit hoarder and is a leading tax avoider. Apple CEO Tim Cook said, “We pay all the taxes we owe—every single dollar.”
Apple’s store workers make less than $30,000 per year. That’s typical of today’s jobs, as 7 of the 8 fastest-growing occupations pay less than a living wage. Even the Wall Street Journal admits that “many middle-wage occupations, those with average earnings between $32,000 and $53,000, have collapsed.”
For its part, Congress has done little to restore these jobs, and in fact has gone out of its way to stifle job creation attempts. The narcissists in Congress are preoccupied with their own security rather than the securing of a strong society. As we spend a trillion dollars on the military, Asian nations are spending almost as much on infrastructure.
‘They Will Die’
A Forbes writer summarizes: “Somewhere, right now, a cash-strapped parent or budget-limited patient with a severe allergy will skip acquiring an EpiPen. And someday, they will need it in a life-threatening situation…and they won’t have it. And they will die.”
The effects of greater health spending on the wealthy are becoming clear. The richest 1% of American males live nearly 15 years longer than the poorest 1% (10 years for women).
A lack of empathy on the global scale is confirmed by the Global Forum for Health Research, which estimates that less than 10 percent of the world’s health research budget is spent on health problems that account for 90 percent of global disease.
Billions for One Man to ‘Live Forever’
Amidst all this health trauma, the empathy-devoid focus on self is manifested in the effort by billionaires to prolong their own lives.
According to the Washington Post, “Larry Ellison has proclaimed his wish to live forever.” He and fellow Silicon Valley CEOs Peter Thiel and Larry Page are “using their billions to rewrite the nation’s science agenda,” as some scientists marvel at the “superiority complex” of the big-money men.
100 Million Narcissist-Lovers
Narcissism is defined in part as involving an “inflated sense of their own importance…a lack of empathy for others.” It is scary enough that such a man is running for president. But scarier yet is that so many Americans support him.
A phenomenon called the Dunning-Kruger Effect suggests that uninformed people don’t know they’re uninformed, and so they have no reason to question their misperceptions. In Donald Trump’s case, they are happy to share in the narcissism. Even to the extent of a profanity spouted by Trump himself: “I could stand in the middle of Fifth Avenue and shoot somebody and I wouldn’t lose any voters.” Like other great narcissists, Trump is a very important man in his own head.
Paul Buchheit’s essays, videos and poems can be found at YouDeserveFacts.org.
There has been a troubling uptick in talk of “ethics” recently within scientific circles focused on robotics, artificial intelligence, and brain research. I say troubling because, embedded within the standard appeals for caution that should appear in science, there also seems to be a tacit admission that things might be quickly spiraling out of control: we are told of meetings, conventions, and workshops that have the ring of emergency scrambles more than debating society confabs.
Yesterday, Activist Post republished commentary from Harvard which cited a 52-page Stanford study looking into what Artificial Intelligence might look like in the year 2030. That report admits that much of what the general public believes to be science fiction – like pre-crime, for example – is already being implemented or is well on the way to impacting people’s day-to-day lives. We have seen the same call for ethical standards and caution about “killer robots” when, in fact, robots are already killing and injuring humans. Really all that is left to be considered, presumably, is the degree to which these systems should be permitted to become fully autonomous.
The same dichotomy between properly addressing the role of future technology and “uh oh, I think the genie is out of the bottle” also appears in the following article from Arizona State University. Some readers might remember that university as the source of a whistleblower who came to Activist Post some years ago with extreme concern about a secret DARPA program being conducted at Arizona State that aimed to develop a form of remote mind control using Transcranial Magnetic Stimulation. One way this technology could become remote-controlled is via “neural dust” or “smart dust,” which would literally open a two-way connection between brain and computer. You will read more about where that technology stands today in the article below, as well as about other implants slated for development.
It used to be the case that I would highlight a select few words from university, military, and scientific press releases; this time, the entire article would have to be highlighted, as it runs the full gamut of open admission about what previously has been “conspiratorial” or “sci-fi” (there is even mention of geoengineering here).
Lastly, can we really entrust the exact same players who are developing these systems – many for profit and control – to be involved in the formulation of an ethical framework?
If you share a concern that the technology we have developed is beginning to take on a life of its own, please share this information as we try to keep pace and hopefully corral our own creations into the most positive functions possible.
Considering Ethics Now Before Radically New Brain Technologies Get Away From Us
By Andrew Maynard, Arizona State University
Imagine infusing thousands of wireless devices into your brain, and using them to both monitor its activity and directly influence its actions. It sounds like the stuff of science fiction, and for the moment it still is – but possibly not for long.
Brain research is on a roll at the moment. And as it converges with advances in science and technology more broadly, it’s transforming what we are likely to be able to achieve in the near future.
Spurring the field on is the promise of more effective treatments for debilitating neurological and psychological disorders such as epilepsy, Parkinson’s disease and depression. But new brain technologies will increasingly have the potential to alter how someone thinks, feels, behaves and even perceives themselves and others around them – and not necessarily in ways that are within their control or with their consent.
This is where things begin to get ethically uncomfortable.
Because of concerns like these, the U.S. National Academies of Sciences, Engineering and Medicine (NAS) are co-hosting a meeting of experts this week on responsible innovation in brain science.
Berkeley’s ‘neural dust’ sensors are one of the latest neurotech advances.
Where are neurotechnologies now?
Brain research is intimately entwined with advances in the “neurotechnologies” that not only help us study the brain’s inner workings, but also transform the ways we can interact with and influence it.
For example, researchers at the University of California Berkeley recently published the first in-animal trials of what they called “neural dust” – implanted millimeter-sized sensors. They inserted the sensors in the nerves and muscles of rats, showing that these miniature wirelessly powered and connected sensors can monitor neural activity. The long-term aim, though, is to introduce thousands of neural dust particles into human brains.
The UC Berkeley sensors are still relatively large, on par with a coarse piece of sand, and just report on what’s happening around them. Yet advances in nanoscale fabrication are likely to enable their further miniaturization. (The researchers estimate they could be made thinner than a human hair.) And in the future, combining them with technologies like optogenetics – using light to stimulate genetically modified neurons – could enable wireless, localized brain interrogation and control.
Used in this way, future generations of neural dust could transform how chronic neurological disorders are managed. They could also enable hardwired brain-computer interfaces (the original motivation behind this research), or even be used to enhance cognitive ability and modify behavior.
The BRAIN Initiative is one of the Obama administration’s ‘Grand Challenges.’ Jason Reed/Reuters
In 2013, President Obama launched the multi-year, multi-million dollar U.S. BRAIN Initiative (Brain Research through Advancing Innovative Neurotechnologies). The same year, the European Commission launched the Human Brain Project, focusing on advancing brain research, cognitive neuroscience and brain-inspired computing. There are also active brain research initiatives in China, Japan, Korea, Latin America, Israel, Switzerland, Canada and even Cuba.
Together, these represent an emerging and globally coordinated effort to not only better understand how the brain works, but to find new ways of controlling and enhancing it (in particular in disease treatment and prevention); to interface with it; and to build computers and other artificial systems that are inspired by it.
Cutting-edge tech comes with ethical questions
This week’s NAS workshop – organized by the Organization for Economic Cooperation and Development and supported by the National Science Foundation and my home institution of Arizona State University – isn’t the first gathering of experts to discuss the ethics of brain technologies. In fact there’s already an active international community of experts addressing “neuroethics.”
Many of these scientific initiatives do have a prominent ethics component. The U.S. BRAIN initiative for example includes a Neuroethics Workgroup, while the E.C. Human Brain Project is using an Ethics Map to guide research and development. These and others are grappling with the formidable challenges of developing future neurotechnologies responsibly.
It’s against this backdrop that the NAS workshop sets out to better understand the social and ethical opportunities and challenges emerging from global brain research and neurotechnologies. A goal is to identify ways of ensuring these technologies are developed in ways that are responsive to social needs, desires and concerns. And it comes at a time when brain research is beginning to open up radical new possibilities that were far beyond our grasp just a few years ago.
Transcranial magnetic stimulation uses a powerful and rapidly changing electrical current to excite neural processes in the brain, similar to direct stimulation with electrodes. Eric Wassermann, M.D., CC BY
In 2010, for instance, researchers at MIT demonstrated that Transcranial Magnetic Stimulation, or TMS – a noninvasive neurotechnology – could temporarily alter someone’s moral judgment. Another noninvasive technique called transcranial Direct Current Stimulation (tDCS) delivers low-level electrical currents to the brain via electrodes on the scalp; it’s being explored as a treatment for clinical conditions from depression to chronic pain – while already being used in consumer products and by do-it-yourselfers to allegedly self-induce changes in mental state and ability.
Crude as current capabilities using TMS and tDCS are, they are forcing people to think about the responsible development and use of technologies that could potentially change behavior, personality and thinking, at the flick of a switch. And the ethical questions they raise are far from straightforward.
For instance, should students be allowed to take exams while using tDCS? Should teachers be able to use tDCS in the classroom? Should TMS be used to prevent a soldier’s moral judgment from interfering with military operations?
These and similar questions grapple with what is already possible. Complex as they are, they pale against the challenges emerging neurotechnologies are likely to raise.
Preparing now for what’s to come
As research leads to an increasingly sophisticated and fine-grained understanding of how our brains function, related neurotechnologies are likely to become equally sophisticated. As they do, our abilities to precisely control function, thinking, behavior and personality, will extend far beyond what is currently possible.
To get a sense of the emerging ethical and social challenges such capabilities potentially raise, consider this speculative near-future scenario:
Imagine that in a few years’ time, the UC Berkeley neural dust has been successfully miniaturized and combined with optogenetics, allowing thousands of micrometer-sized devices to be seeded through someone’s brain that are capable of monitoring and influencing localized brain functions. Now imagine this network of neural transceivers is wirelessly connected to an external computer, and from there, to the internet.
Such a network – a crude foreshadowing of science fiction author Iain M. Banks’ “neural lace” (a concept that has already grabbed the attention of Elon Musk) – would revolutionize the detection and treatment of neurological conditions, potentially improving quality of life for millions of people. It would enable external devices to be controlled through thought, effectively integrating networked brains into the Internet of Things. It could help overcome restricted physical abilities for some people. And it would potentially provide unprecedented levels of cognitive enhancement, by allowing people to interface directly with cloud-based artificial intelligence and other online systems.
Think Apple’s Siri or Amazon’s Echo hardwired into your brain, and you begin to get the idea.
Yet this neurotech – which is almost within reach of current technological capabilities – would not be risk-free. These risks could be social – a growing socioeconomic divide perhaps between those who are neuro-enhanced and those who are not. Or they could be related to privacy and autonomy – maybe the ability of employers and law enforcement to monitor, and even alter, thoughts and feelings. The innovation might threaten personal well-being and societal cohesion through (hypothetical) cyber substance abuse, where direct-to-brain code replaces psychoactive substances. It could make users highly vulnerable to neurological cyberattacks.
Of course, predicting and responding to possible future risks is fraught with difficulties, and depends as much on who considers what a risk (and to whom) as it does the capabilities of emerging technologies to do harm. Yet it’s hard to avoid the likely disruptive potential of near-future neurotechnologies. Thus the urgent need to address – as a society – what we want the future of brain technologies to look like.
Moving forward, the ethical and responsible development of emerging brain technologies will require new thinking, along with considerable investment, in what might go wrong, and how to avoid it. Here, we can learn from thinking about responsible and ethical innovation that has come to light around recombinant DNA, nanotechnology, geoengineering and other cutting-edge areas of science and technology.
To develop future brain technologies both successfully and responsibly, we need to do so in ways that avoid potential pitfalls while not stifling innovation. We need approaches that ensure ordinary people can easily find out how these technologies might affect their lives – and they must have a say in how they’re used.
All this won’t necessarily be easy – responsible innovation rarely is. But through initiatives like this week’s NAS workshop and others, we have the opportunity to develop brain technologies that are profoundly beneficial, without getting caught up in an ethical minefield.
“Our citizens should know the urgent facts…but they don’t because our media serves imperial, not popular interests. They lie, deceive, connive and suppress what everyone needs to know, substituting managed news misinformation and rubbish for hard truths…”—Oliver Stone