Most of management theory is inane, writes our correspondent, the founder of a consulting firm. If you want to succeed in business, don’t get an M.B.A. Study philosophy instead
During the seven years that I worked as a management consultant, I spent a lot of time trying to look older than I was. I became pretty good at furrowing my brow and putting on somber expressions. Those who saw through my disguise assumed I made up for my youth with a fabulous education in management. They were wrong about that. I don’t have an M.B.A. I have a doctoral degree in philosophy—nineteenth-century German philosophy, to be precise. Before I took a job telling managers of large corporations things that they arguably should have known already, my work experience was limited to part-time gigs tutoring surly undergraduates in the ways of Hegel and Nietzsche and to a handful of summer jobs, mostly in the less appetizing ends of the fast-food industry.
The strange thing about my utter lack of education in management was that it didn’t seem to matter. As a principal and founding partner of a consulting firm that eventually grew to 600 employees, I interviewed, hired, and worked alongside hundreds of business-school graduates, and the impression I formed of the M.B.A. experience was that it involved taking two years out of your life and going deeply into debt, all for the sake of learning how to keep a straight face while using phrases like “out-of-the-box thinking,” “win-win situation,” and “core competencies.” When it came to picking teammates, I generally held out higher hopes for those individuals who had used their university years to learn about something other than business administration.
After I left the consulting business, in a reversal of the usual order of things, I decided to check out the management literature. Partly, I wanted to “process” my own experience and find out what I had missed in skipping business school. Partly, I had a lot of time on my hands. As I plowed through tomes on competitive strategy, business process re-engineering, and the like, not once did I catch myself thinking, Damn! If only I had known this sooner! Instead, I found myself thinking things I never thought I’d think, like, I’d rather be reading Heidegger! It was a disturbing experience. It thickened the mystery around the question that had nagged me from the start of my business career: Why does management education exist?
Management theory came to life in 1899 with a simple question: “How many tons of pig iron bars can a worker load onto a rail car in the course of a working day?” The man behind this question was Frederick Winslow Taylor, the author of The Principles of Scientific Management and, by most accounts, the founding father of the whole management business.
Taylor was forty-three years old and on contract with the Bethlehem Steel Company when the pig iron question hit him. Staring out over an industrial yard that covered several square miles of the Pennsylvania landscape, he watched as laborers loaded ninety-two-pound bars onto rail cars. There were 80,000 tons’ worth of iron bars, which were to be carted off as fast as possible to meet new demand sparked by the Spanish-American War. Taylor narrowed his eyes: there was waste there, he was certain. After hastily reviewing the books at company headquarters, he estimated that the men were currently loading iron at the rate of twelve and a half tons per man per day.
Taylor stormed down to the yard with his assistants (“college men,” he called them) and rounded up a group of top-notch lifters (“first-class men”), who in this case happened to be ten “large, powerful Hungarians.” He offered to double the workers’ wages in exchange for their participation in an experiment. The Hungarians, eager to impress their apparent benefactor, put on a spirited show. Huffing up and down the rail car ramps, they loaded sixteen and a half tons in something under fourteen minutes. Taylor did the math: over a ten-hour day, it worked out to seventy-five tons per day per man. Naturally, he had to allow time for bathroom breaks, lunch, and rest periods, so he adjusted the figure approximately 40 percent downward. Henceforth, each laborer in the yard was assigned to load forty-seven and a half pig tons per day, with bonus pay for reaching the target and penalties for failing.
When the Hungarians realized that they were being asked to quadruple their previous daily workload, they howled and refused to work. So Taylor found a “high-priced man,” a lean Pennsylvania Dutchman whose intelligence he compared to that of an ox. Lured by the promise of a 60 percent increase in wages, from $1.15 to a whopping $1.85 a day, Taylor’s high-priced man loaded forty-five and three-quarters tons over the course of a grueling day—close enough, in Taylor’s mind, to count as the first victory for the methods of modern management.
Taylor went on to tackle the noble science of shoveling and a host of other topics of concern to his industrial clients. He declared that his new and unusual approach to solving business problems amounted to a “complete mental revolution.” Eventually, at the urging of his disciples, he called his method “scientific management.” Thus was born the idea that management is a science—a body of knowledge collected and nurtured by experts according to neutral, objective, and universal standards.
At the same moment was born the notion that management is a distinct function best handled by a distinct group of people—people characterized by a particular kind of education, way of speaking, and fashion sensibility. Taylor, who favored a manly kind of prose, expressed it best in passages like this:
… the science of handling pig iron is so great and amounts to so much that it is impossible for the man who is best suited to this type of work to understand the principles of this science, or even to work in accordance with these principles, without the aid of a man better educated than he is.
From a metaphysical perspective, one could say that Taylor was a “dualist”: there is brain, there is brawn, and the two, he believed, very rarely meet.
Taylor went around the country repeating his pig iron story and other tales from his days in the yard, and these narratives formed something like a set of scriptures for a new and highly motivated cult of management experts. This vanguard ultimately vaulted into the citadel of the Establishment with the creation of business schools. In the spring of 1908, Taylor met with several Harvard professors, and later that year Harvard opened the first graduate school in the country to offer a master’s degree in business. It based its first-year curriculum on Taylor’s scientific management. From 1909 to 1914, Taylor visited Cambridge every winter to deliver a series of lectures—inspirational discourses marred only by the habit he’d picked up on the shop floor of swearing at inappropriate moments.
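The arithmetic behind the quota is worth making explicit. What follows is a minimal back-of-envelope sketch in Python, reconstructed only from the figures quoted above; it is illustrative, not Taylor's actual worksheet, and the gap between what the extrapolation yields and the quota Taylor actually set is a first taste of the "adjustment" discussed in the next paragraph.

# A back-of-envelope reconstruction of Taylor's pig iron arithmetic,
# using only the figures quoted in the passage above (illustrative only).

MEN = 10                   # the "large, powerful Hungarians"
TONS_LOADED = 16.5         # tons loaded during the timed trial
TRIAL_MINUTES = 14         # "something under fourteen minutes"
WORKDAY_MINUTES = 10 * 60  # a ten-hour working day

# Step 1: assume the trial pace can be sustained all day.
tons_per_man_minute = TONS_LOADED / MEN / TRIAL_MINUTES
raw_daily_load = tons_per_man_minute * WORKDAY_MINUTES
print(f"extrapolated load: {raw_daily_load:.1f} tons per man per day")  # ~70.7; Taylor called it 75

# Step 2: the "scientific" part: knock roughly 40 percent off the figure
# to allow for rest, lunch, and bathroom breaks.
ADJUSTMENT = 0.40
adjusted_quota = raw_daily_load * (1 - ADJUSTMENT)
print(f"adjusted quota: {adjusted_quota:.1f} tons")  # ~42.4; the quota actually set was 47.5

The numbers do not reconcile cleanly, which is rather the point.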
Yet even as Taylor’s idea of management began to catch on, a number of flaws in his approach were evident. The first thing many observers noted about scientific management was that there was almost no science to it. The most significant variable in Taylor’s pig iron calculation was the 40 percent “adjustment” he made in extrapolating from a fourteen-minute sample to a full workday. Why time a bunch of Hungarians down to the second if you’re going to daub the results with such a great blob of fudge? When he was grilled before Congress on the matter, Taylor casually mentioned that in other experiments these “adjustments” ranged from 20 percent to 225 percent. He defended these unsightly “wags” (wild-ass guesses, in M.B.A.-speak) as the product of his “judgment” and “experience”—but, of course, the whole point of scientific management was to eliminate the reliance on such inscrutable variables.
One of the distinguishing features of anything that aspires to the name of science is the reproducibility of experimental results. Yet Taylor never published the data on which his pig iron or other conclusions were based. When Carl Barth, one of his devotees, took over the work at Bethlehem Steel, he found Taylor’s data to be unusable. Another, even more fundamental feature of science—here I invoke the ghost of Karl Popper—is that it must produce falsifiable propositions. Insofar as Taylor limited his concern to prosaic activities such as lifting bars onto rail cars, he did produce propositions that were falsifiable—and, indeed, were often falsified. But whenever he raised his sights to management in general, he seemed capable only of soaring platitudes. At the end of the day his “method” amounted to a set of exhortations: Think harder! Work smarter! Buy a stopwatch!
The trouble with such claims isn’t that they are all wrong. It’s that they are too true. When a congressman asked him if his methods were open to misuse, Taylor replied, No. If management has the right state of mind, his methods will always lead to the correct result. Unfortunately, Taylor was right about that. Taylorism, like much of management theory to come, is at its core a collection of quasi-religious dicta on the virtue of being good at what you do, ensconced in a protective bubble of parables (otherwise known as case studies).
Curiously, Taylor and his college men often appeared to float free from the kind of accountability that they demanded from everybody else. Others might have been asked, for example: Did Bethlehem’s profits increase as a result of their work? Taylor, however, rarely addressed the question head-on. With good reason. Bethlehem fired him in 1901 and threw out his various systems. Yet this evident vacuum of concrete results did not stop Taylor from repeating his parables as he preached the doctrine of efficiency to countless audiences across the country.
In the management literature these days, Taylorism is presented, if at all, as a chapter of ancient history, a weird episode about an odd man with a stopwatch who appeared on the scene sometime after Columbus discovered the New World. Over the past century Taylor’s successors have developed a powerful battery of statistical methods and analytical approaches to business problems. And yet the world of management remains deeply Taylorist in its foundations.
At its best, management theory is part of the democratic promise of America. It aims to replace the despotism of the old bosses with the rule of scientific law. It offers economic power to all who have the talent and energy to attain it. The managerial revolution must be counted as part of the great widening of economic opportunity that has contributed so much to our prosperity. But, insofar as it pretends to a kind of esoteric certitude to which it is not entitled, management theory betrays the ideals on which it was founded.
That Taylorism and its modern variants are often just a way of putting labor in its place need hardly be stated: from the Hungarians’ point of view, the pig iron experiment was an infuriatingly obtuse way of demanding more work for less pay. That management theory represents a covert assault on capital, however, is equally true. (The Soviet five-year planning process took its inspiration directly from one of Taylor’s more ardent followers, the engineer H. L. Gantt.) Much of management theory today is in fact the consecration of class interest—not of the capitalist class, nor of labor, but of a new social group: the management class.
I can confirm on the basis of personal experience that management consulting continues to worship at the shrine of numerology where Taylor made his first offering of blobs of fudge. In many of my own projects, I found myself compelled to pacify recalcitrant data with entirely confected numbers. But I cede the place of honor to a certain colleague, a gruff and street-smart Belgian whose hobby was to amass hunting trophies. The huntsman achieved some celebrity for having invented a new mathematical technique dubbed “the Two-Handed Regression.” When the data on the correlation between two variables revealed only a shapeless cloud—even though we knew damn well there had to be a correlation—he would simply place a pair of meaty hands on the offending bits of the cloud and reveal the straight line hiding from conventional mathematics.
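For the quantitatively curious, the huntsman's technique is easy to reproduce. The sketch below is purely illustrative (the data are random numbers and the "offending" threshold is invented); it simply shows how covering up the inconvenient parts of a shapeless cloud conjures a respectable correlation out of thin air.

# A tongue-in-cheek sketch of the "Two-Handed Regression": the data and
# the threshold here are invented for illustration; the only technique on
# display is discarding the points that spoil the story.
import random
import statistics

random.seed(0)

# A shapeless cloud: y has essentially nothing to do with x.
xs = [random.uniform(0, 10) for _ in range(200)]
ys = [random.uniform(0, 10) for _ in range(200)]

def correlation(x, y):
    """Plain Pearson correlation coefficient."""
    mx, my = statistics.mean(x), statistics.mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

print(f"honest correlation: {correlation(xs, ys):+.2f}")  # hovers near zero

# The two-handed step: cover up every point that strays too far from the
# line we already "know" is there (here, y = x), then measure what's left.
kept = [(a, b) for a, b in zip(xs, ys) if abs(b - a) < 2.0]
kx, ky = zip(*kept)
print(f"two-handed correlation: {correlation(kx, ky):+.2f}")  # suddenly respectable

Conventional statistics calls this cherry-picking; reported without the discarded points, it looks like rigor.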
The thing that makes modern management theory so painful to read isn’t usually the dearth of reliable empirical data. It’s that maddening papal infallibility. Oh sure, there are a few pearls of insight, and one or two stories about hero-CEOs that can hook you like bad popcorn. But the rest is just inane. Those who looked for the true meaning of “business process re-engineering,” the most overtly Taylorist of recent management fads, were ultimately rewarded with such gems of vacuity as “BPR is taking a blank sheet of paper to your business!” and “BPR means re-thinking everything, everything!”
Each new fad calls attention to one virtue or another—first it’s efficiency, then quality, next it’s customer satisfaction, then supplier satisfaction, then self-satisfaction, and finally, at some point, it’s efficiency all over again. If it’s reminiscent of the kind of toothless wisdom offered in self-help literature, that’s because management theory is mostly a subgenre of self-help. Which isn’t to say it’s completely useless. But just as most people are able to lead fulfilling lives without consulting Deepak Chopra, most managers can probably spare themselves an education in management theory.
When Donald Trump blurted out that not paying his taxes “makes me smart,” he was revealing a truth about the American narcissist. Senator Lindsey Graham was being equally arrogant when he stated, “It’s really American to avoid paying taxes, legally…It’s a game we play.”
The game has become very popular, with an incomprehensible three-quarters of Fortune 500 companies stashing profits in offshore tax havens, avoiding over $700 billion in U.S. taxes.
Who Are the Narcissists?
They are people who don’t feel any responsibility to the society that made them rich, largely because they believe in the “self-made” myth. Their numbers are growing. For every 100 households with $100 million in assets in 2010, there are now 160.
Some of the super-rich care about average Americans, and some are well-intentioned philanthropists, but in general, as numerous studies have shown, super-wealthy people tend to be imbued with a distinct sense of entitlement. They believe their talents and attributes have earned them a rightful position of status over everyone else.
One study showed that those in the wealthy classes tend to behave more unethically than average citizens, especially at the highest levels, where career success has been associated with Machiavellianism—doing anything necessary to get ahead. A recent study of 261 U.S. senior professionals found that 21 percent had clinically significant levels of psychopathic traits, compared to about one percent in the general population. That’s roughly the same rate as for prisoners.
Narcissists Blame U.S. for Collapse of the Job Market
Stunning hypocrisy: Apple claims to be responsible for “creating and supporting 1.9 million jobs” while actually employing 115,000; but the company complains that “the U.S. has stopped producing people with the skills we need.” Yet Apple undermines job creation in its role as the biggest overseas profit hoarder and is a leading tax avoider. Apple CEO Tim Cook said, “We pay all the taxes we owe—every single dollar.”
Apple’s store workers make less than $30,000 per year. That’s typical of today’s jobs, as 7 of the 8 fastest-growing occupations pay less than a living wage. Even the Wall Street Journal admits that “many middle-wage occupations, those with average earnings between $32,000 and $53,000, have collapsed.”
For its part, Congress has done little to restore these jobs, and in fact has gone out of its way to stifle job creation attempts. The narcissists in Congress are preoccupied with their own security rather than the securing of a strong society. As we spend a trillion dollars on the military, Asian nations are spending almost as much on infrastructure.
‘They Will Die’
A Forbes writer summarizes: “Somewhere, right now, a cash-strapped parent or budget-limited patient with a severe allergy will skip acquiring an EpiPen. And someday, they will need it in a life-threatening situation…and they won’t have it. And they will die.”
The effects of greater health spending on the wealthy are becoming clear. The richest 1% of American males live nearly 15 years longer than the poorest 1% (10 years for women).
A lack of empathy on the global scale is confirmed by the Global Forum for Health Research, which estimates that less than 10 percent of the world’s health research budget is spent on health problems that account for 90 percent of global disease.
Billions for One Man to ‘Live Forever’
Amidst all this health trauma, the empathy-devoid focus on self is manifested in the effort by billionaires to prolong their own lives.
According to the Washington Post, “Larry Ellison has proclaimed his wish to live forever.” He and fellow Silicon Valley CEOs Peter Thiel and Larry Page are “using their billions to rewrite the nation’s science agenda,” as some scientists marvel at the “superiority complex” of the big-money men.
100 Million Narcissist-Lovers
Narcissism is defined in part as involving an “inflated sense of their own importance…a lack of empathy for others.” Scary enough with such a man running for president. But scarier yet is that so many Americans support him.
A phenomenon called the Dunning-Kruger Effect suggests that uninformed people don’t know they’re uninformed, and so they have no reason to question their misperceptions. In Donald Trump’s case, they are happy to share in the narcissism. Even to the extent of a boast spouted by Trump himself: “I could stand in the middle of Fifth Avenue and shoot somebody and I wouldn’t lose any voters.” Like other great narcissists, Trump is a very important man in his own head.
Paul Buchheit’s essays, videos and poems can be found at YouDeserveFacts.org.
There seems to be a troubling uptick in talk of “ethics” recently within scientific circles focused on robotics, artificial intelligence, and brain research. I say troubling because embedded within the standard appeals for caution that should appear in science, there also seems to be a tacit admission that things might be quickly spiraling out of control, as we are told of meetings, conventions, and workshops that have the ring of emergency scrambles more than debating society confabs.
Yesterday, Activist Post republished commentary from Harvard which cited a 52-page Stanford study looking into what Artificial Intelligence might look like in the year 2030. That report admits that much of what the general public believes to be science fiction – pre-crime, for example – is already being implemented or is well on the way to impacting people’s day-to-day lives. We have seen the same call for ethical standards and caution about “killer robots” when, in fact, robots are already killing and injuring humans. Really, all that is left to be considered, presumably, is the degree to which these systems should be permitted to become fully autonomous.
The same dichotomy between properly addressing the role of future technology and “uh oh, I think the genie is out of the bottle” also appears in the following article from Arizona State University, which some readers might remember was the source of a whistleblower who came to Activist Post some years ago with extreme concern about a secret DARPA program being conducted at Arizona State that aimed to develop a form of remote mind control using Transcranial Magnetic Stimulation. One of the ways that this technology could become remote-controlled is via “neural dust” or “smart dust” that would literally open a two-way connection between brain and computer. You will read more about where that technology stands today in the article below, as well as about other forms of implants that are slated for development.
It used to be the case that I would highlight a select few words from university, military, and scientific press releases; this time, the entire article would have to be highlighted, as it runs the full gamut of open admission about what previously has been “conspiratorial” or “sci-fi” (there is even mention of geoengineering here).
Lastly, can we really entrust the exact same players who are developing these systems – many for profit and control – to be involved in the formulation of an ethical framework?
If you share a concern that the technology we have developed is beginning to take on a life of its own, please share this information as we try to keep pace and hopefully corral our own creations into the most positive functions possible.
Considering Ethics Now Before Radically New Brain Technologies Get Away From Us
By Andrew Maynard, Arizona State University
Imagine infusing thousands of wireless devices into your brain, and using them to both monitor its activity and directly influence its actions. It sounds like the stuff of science fiction, and for the moment it still is – but possibly not for long.
Brain research is on a roll at the moment. And as it converges with advances in science and technology more broadly, it’s transforming what we are likely to be able to achieve in the near future.
Spurring the field on is the promise of more effective treatments for debilitating neurological and psychological disorders such as epilepsy, Parkinson’s disease and depression. But new brain technologies will increasingly have the potential to alter how someone thinks, feels, behaves and even perceives themselves and others around them – and not necessarily in ways that are within their control or with their consent.
This is where things begin to get ethically uncomfortable.
Because of concerns like these, the U.S. National Academies of Sciences, Engineering and Medicine (NAS) are co-hosting a meeting of experts this week on responsible innovation in brain science.
Berkeley’s ‘neural dust’ sensors are one of the latest neurotech advances.
Where are neurotechnologies now?
Brain research is intimately entwined with advances in the “neurotechnologies” that not only help us study the brain’s inner workings, but also transform the ways we can interact with and influence it.
For example, researchers at the University of California Berkeley recently published the first in-animal trials of what they called “neural dust” – implanted millimeter-sized sensors. They inserted the sensors in the nerves and muscles of rats, showing that these miniature wirelessly powered and connected sensors can monitor neural activity. The long-term aim, though, is to introduce thousands of neural dust particles into human brains.
The UC Berkeley sensors are still relatively large, on par with a coarse piece of sand, and just report on what’s happening around them. Yet advances in nanoscale fabrication are likely to enable their further miniaturization. (The researchers estimate they could be made thinner than a human hair.) And in the future, combining them with technologies like optogenetics – using light to stimulate genetically modified neurons – could enable wireless, localized brain interrogation and control.
Used in this way, future generations of neural dust could transform how chronic neurological disorders are managed. They could also enable hardwired brain-computer interfaces (the original motivation behind this research), or even be used to enhance cognitive ability and modify behavior.
The BRAIN Initiative is one of the Obama administration’s ‘Grand Challenges.’ Jason Reed/Reuters
In 2013, President Obama launched the multi-year, multi-million dollar U.S. BRAIN Initiative (Brain Research through Advancing Innovative Neurotechnologies). The same year, the European Commission launched the Human Brain Project, focusing on advancing brain research, cognitive neuroscience and brain-inspired computing. There are also active brain research initiatives in China, Japan, Korea, Latin America, Israel, Switzerland, Canada and even Cuba.
Together, these represent an emerging and globally coordinated effort to not only better understand how the brain works, but to find new ways of controlling and enhancing it (in particular in disease treatment and prevention); to interface with it; and to build computers and other artificial systems that are inspired by it.
Cutting-edge tech comes with ethical questions
This week’s NAS workshop – organized by the Organization for Economic Cooperation and Development and supported by the National Science Foundation and my home institution of Arizona State University – isn’t the first gathering of experts to discuss the ethics of brain technologies. In fact there’s already an active international community of experts addressing “neuroethics.”
Many of these scientific initiatives do have a prominent ethics component. The U.S. BRAIN Initiative, for example, includes a Neuroethics Workgroup, while the E.C. Human Brain Project is using an Ethics Map to guide research and development. These and others are grappling with the formidable challenges of developing future neurotechnologies responsibly.
It’s against this backdrop that the NAS workshop sets out to better understand the social and ethical opportunities and challenges emerging from global brain research and neurotechnologies. A goal is to identify ways of ensuring these technologies are developed in ways that are responsive to social needs, desires and concerns. And it comes at a time when brain research is beginning to open up radical new possibilities that were far beyond our grasp just a few years ago.
Transcranial magnetic stimulation uses a powerful and rapidly changing electrical current to excite neural processes in the brain, similar to direct stimulation with electrodes. Eric Wassermann, M.D., CC BY
In 2010, for instance, researchers at MIT demonstrated that Transcranial Magnetic Stimulation, or TMS – a noninvasive neurotechnology – could temporarily alter someone’s moral judgment. Another noninvasive technique called transcranial Direct Current Stimulation (tDCS) delivers low-level electrical currents to the brain via electrodes on the scalp; it’s being explored as a treatment for clinical conditions from depression to chronic pain – while already being used in consumer products and by do-it-yourselfers to allegedly self-induce changes in mental state and ability.
Crude as current capabilities using TMS and tDCS are, they are forcing people to think about the responsible development and use of technologies that could potentially change behavior, personality and thinking ability at the flick of a switch. And the ethical questions they raise are far from straightforward.
For instance, should students be allowed to take exams while using tDCS? Should teachers be able to use tDCS in the classroom? Should TMS be used to prevent a soldier’s moral judgment from interfering with military operations?
These and similar questions grapple with what is already possible. Complex as they are, they pale against the challenges emerging neurotechnologies are likely to raise.
Preparing now for what’s to come
As research leads to an increasingly sophisticated and fine-grained understanding of how our brains function, related neurotechnologies are likely to become equally sophisticated. As they do, our ability to precisely control function, thinking, behavior and personality will extend far beyond what is currently possible.
To get a sense of the emerging ethical and social challenges such capabilities potentially raise, consider this speculative near-future scenario:
Imagine that in a few years’ time, the UC Berkeley neural dust has been successfully miniaturized and combined with optogenetics, allowing thousands of micrometer-sized devices to be seeded through someone’s brain that are capable of monitoring and influencing localized brain functions. Now imagine this network of neural transceivers is wirelessly connected to an external computer, and from there, to the internet.
Such a network – a crude foreshadowing of science fiction author Iain M. Banks’ “neural lace” (a concept that has already grabbed the attention of Elon Musk) – would revolutionize the detection and treatment of neurological conditions, potentially improving quality of life for millions of people. It would enable external devices to be controlled through thought, effectively integrating networked brains into the Internet of Things. It could help overcome restricted physical abilities for some people. And it would potentially provide unprecedented levels of cognitive enhancement, by allowing people to interface directly with cloud-based artificial intelligence and other online systems.
Think Apple’s Siri or Amazon’s Echo hardwired into your brain, and you begin to get the idea.
Yet this neurotech – which is almost within reach of current technological capabilities – would not be risk-free. These risks could be social – a growing socioeconomic divide perhaps between those who are neuro-enhanced and those who are not. Or they could be related to privacy and autonomy – maybe the ability of employers and law enforcement to monitor, and even alter, thoughts and feelings. The innovation might threaten personal well-being and societal cohesion through (hypothetical) cyber substance abuse, where direct-to-brain code replaces psychoactive substances. It could make users highly vulnerable to neurological cyberattacks.
Of course, predicting and responding to possible future risks is fraught with difficulties, and depends as much on who considers what a risk (and to whom) as it does on the capabilities of emerging technologies to do harm. Yet it’s hard to avoid the likely disruptive potential of near-future neurotechnologies. Thus the urgent need to address – as a society – what we want the future of brain technologies to look like.
Moving forward, the ethical and responsible development of emerging brain technologies will require new thinking, along with considerable investment, in what might go wrong, and how to avoid it. Here, we can learn from thinking about responsible and ethical innovation that has come to light around recombinant DNA, nanotechnology, geoengineering and other cutting-edge areas of science and technology.
To develop future brain technologies both successfully and responsibly, we need to do so in ways that avoid potential pitfalls while not stifling innovation. We need approaches that ensure ordinary people can easily find out how these technologies might affect their lives – and they must have a say in how they’re used.
All this won’t necessarily be easy – responsible innovation rarely is. But through initiatives like this week’s NAS workshop and others, we have the opportunity to develop brain technologies that are profoundly beneficial, without getting caught up in an ethical minefield.
United States — Documents and contracts leaked when hackers recently breached the servers of the Fraternal Order of Police have revealed “guarantees that disciplinary records and complaints made against officers are kept secret or even destroyed,” according to a new analysis by the Guardian.
Questions have surrounded how police officers simply move on to the same job with another department after facing disciplinary action or complaints — and these leaked documents appear to offer at least a partial explanation:
“A Guardian analysis of dozens of contracts obtained from the servers of the Fraternal Order of Police (FOP) found that more than a third featured clauses allowing — and often mandating — the destruction of records of civilian complaints, departmental investigations, or disciplinary actions after a negotiated period of time.
“The review also found 30% of the 67 leaked police contracts, which were struck between cities and police unions, included provisions barring public access to records of past civilian complaints, departmental investigations, and disciplinary actions.”
Anyone attempting to hold police accountable for inordinate use of force and other offenses is often stymied by the so-called Blue Wall of Silence — the invisible, and seemingly impenetrable, tendency for police to close ranks to protect their own when an officer is accused of misconduct in some form. But as the Guardian’s report shows, that mythical Blue Wall has very concrete manifestations.
In fact, as University of Nebraska professor of criminology Samuel Walker explained, there could be “no justification” for such purges of officers’ records, particularly since those records would provide evidence of use of force against civilians.
“The public has a right to know,” he stated. “If there was a controversial beating, we ought to know what [disciplinary] action was actually taken. Was it reprimanded? A suspension?”
As many Americans already know, police have a different operating procedure than those who are often unwitting victims of their violent tactics. Law enforcement officers are alerted to a person’s prior record even if that person has served time for a crime — thus ostensibly ‘learning their lesson’ in the eyes of the law — yet officers’ own disciplinary histories are routinely wiped away. As FOP president Chuck Canterbury asserted, “Disciplinary files are removed because they affect career advancement. People make mistakes and if they learn from them, they should be removed. This is standard HR practice.”
“Expungement” clauses in officers’ contracts, from small towns to major cities, allow the purging of formal investigations and reprimands from their records, while other departments, which actually maintain such records, hold them to internal scrutiny only — and disallow any public disclosure.
For example, the “Police Officers’ Bill of Rights” found in the 2009-2012 FOP contracts in Ralston, Nebraska, according to The Guardian, stated, “Unless agreed to by an Officer, the City shall not divulge the reason for any disciplinary action that is not appealed to the Civil Service Commission.” And the city was obligated to “make every reasonable effort” to ensure a photograph of the officer didn’t end up in the hands of the public or the media.
Canterbury justified non-disclosure of complaint records by saying, “It’s mostly the false or unsustained complaints that officers feel unduly hurt their careers. Nobody expunges guilty adjudicated use-of-forces, so if these acts are found unsustained in the first place, why should they continue to have any bearing on officers?”
Why? Perhaps because investigations into such complaints are most often completed by the officer’s own police department, and therefore lack impartiality — and because such investigations notoriously find officers not guilty of any offense, even in the face of video evidence.
“These are public employees, so their performance should be available to the public,” said civil rights attorney and former police officer, Devon M. Jacob. “There’s no reason matters of waste or wrongdoing should be kept away from the public. I disagree with this idea that unsustained complaints or investigations don’t matter.”
Despite Canterbury’s claims that departments are superior to civilian review boards “because civilians have no knowledge of law enforcement or expertise on procedures,” Ron Hampton, former director of the National Black Police Association, aptly stated, “People just don’t feel that the police can investigate themselves thoroughly or impartially.”
Just when you think the world can’t get any more bizarre, it does.
Case in point: the latest in “let’s play Mother Nature” news is that United States researchers now have their sights set on growing human organs … inside farm animals. Oh, but it gets better. The point of all of this? It’s to then take the Franken-organs and use them for transplant procedures, despite the fact that not enough is known about any of this. As a result, there are lots of folks having Island of Dr. Moreau thoughts.
Talk about inter-species dilemmas, ethical boundaries being pushed, and of course, a distinct departure from Mother Nature unfolding as it should: without humans severely interfering, and ultimately compromising life, every step of the way.
The plan involves growing human tissue inside the likes of pigs and sheep, so that livers, hearts and other organs can be created and used for transplants. Such injections of cells from one species into the embryo of another create mixtures that are referred to as “chimeras.” In the case of incubating human organs in farm animals, human-animal chimeras are created.(1)
The NIH’s stance: not funding these Franken-efforts
The eyebrow-raising technique has drawn criticism from the National Institutes of Health (NIH), which, just a few months ago, reversed its previously held decision about such methods. A September 2015 announcement by NIH said that, “The National Institutes of Health (NIH) is informing the research community that it will not fund research in which human pluripotent cells are introduced into non-human vertebrate animal pre-gastrulation stage embryos while the Agency considers a possible policy revision in this area.” The agency goes on to say that, “NIH will not consider requests for administrative supplements or revisions to any grants or modification to R&D contracts that include costs for or involve research introducing human pluripotent cells into non-human vertebrate animal pre-gastrulation stage embryos. Ongoing NIH awards will be addressed with the awardees on a case-by-case basis.”(1,2)
The NIH, therefore, has made it clear that it frowns on the idea across the board, ranging from current research funding requests and contract proposals that are pending submission, to peer-reviewed competing applications. It was the discovery that such efforts were being funded from other sources (including a $1.4 million grant from the U.S. Army that will focus on growing human hearts in swine) that spurred the NIH to make such declarations.(1,2)
Researchers pressing forward despite ‘negativity towards all chimerism studies’
In particular, three research teams are said to be involved with human-animal chimera efforts (two in California and one from the University of Minnesota). Despite there not being any published scientific papers touting these teams’ so-called successes, MIT Technology Review believes that approximately 20 pregnancies of pig-human or sheep-human chimeras have taken place over the past year in the United States. However, none of these animals have been brought to term.(1)
As you might guess, human-animal chimera advocates are scratching their heads over the NIH’s funding decisions, most notably in a letter touting the benefits of growing human organs in farm animals. The letter, penned by several university professionals, including Daniel Garry, a cardiologist who leads a chimera project at the University of Minnesota, states, “By eliminating federal funding for this research, the NIH casts a shadow of negativity towards all chimerism studies regardless of whether human cells are involved.” The letter appeared in Science magazine, where the authors also state their collective belief that such efforts are essential for learning purposes, including gaining an understanding of disease, development and therapeutic discoveries.(3)
Animals with human hair and human intelligence on the horizon?
On the flip side are those who fear that some of these animals might end up taking on behaviors and physical characteristics that are eerily representative of humans. We’re talking about animals with close to human-like thinking ability, or perhaps ending up with patches of human hair. “We are not near the island of Dr. Moreau, but science moves fast,” says NIH ethicist David Resnik. However, he says that, “The specter of an intelligent mouse stuck in a laboratory somewhere screaming ‘I want to get out’ would be very troubling to people.” The scenario he presents is worrisome to many people, although Hiromitsu Nakauchi says he’s not concerned.
Nakauchi is a stem-cell biologist at Stanford University who has attempted to make human-sheep chimeras. The picture painted by Resnik, he feels, is an exaggeration. “If the extent of human cells is 0.5 percent,” he says, “it’s very unlikely to get thinking pigs or standing sheep. But if it’s large, like 40 percent, then we’d have to do something about that.”
Sources for this article include: