Materialism (Part One) Steven Yates
“ … we are living in a material world
And I am a material girl.
You know that we are living in a material world
And I am a material girl.” ~Madonna, “Material Girl” (1984).
Madonna’s first big hit defined, in part anyway, the mindset of a generation. What did this generation want? Material affluence, which became an ideal in the 1980s and beyond after its rejection two decades before. While that earlier era had its problems, as we’ll see, the new materialism opened the door to the kind of business behavior that brought about the S&L crisis, then, roughly a decade later, the Enron and WorldCom debacles, and finally the worst economic meltdown since the Great Depression. No, greed is not good! Greed does not work! But the story is longer and more complicated than this.
The word materialism has more than one meaning. It does not refer just to a preoccupation with material goods, affluence, or pleasures and excess, although those are legitimate uses of the term. As explained in my Four Cardinal Errors (2011) and in my ebook Philosophy Is Not Dead (2014), materialism also names a comprehensive philosophical worldview which began to replace Christianity as an intellectual and cultural force, first in Europe in the latter half of the 1700s and then more rapidly in the 1800s. In the 1830s, Auguste Comte (1798 – 1857) offered a philosophical ideology known as positivism, which would ensure that materialism became the dominant philosophy of science. Meanwhile, Karl Marx (1818 – 1883) developed his dialectical materialism underwriting a theory of historical progress resulting from material economic forces that would culminate in Communism. Charles Darwin (1809 – 1882) offered a materialist theory of the origins of species, including humanity, with his theory of evolution by natural selection which quickly became the mainstay of modern biology. Wilhelm Wundt (1832 – 1920) developed an experimental psychology suitable for studying human beings conceived as little bundles of responses to stimuli: material boys and girls. As some of his leading students were Americans (e.g., G. Stanley Hall of Johns Hopkins University), he became the intellectual godfather of behaviorism as well as educational psychology as materialism came to the U.S.
Philosophically, the replacement of Christianity with materialism culminated when Friedrich Nietzsche’s (1844 – 1900) Zarathustra proclaimed that “God is dead!” followed by Nietzsche’s own call for a “revaluation of all values.” Nietzsche was one of the most ruthlessly honest thinkers who ever lived. He realized that once God and transcendence vanished from your worldview, everything changed. Everything those notions made meaningful also vanished. The death of God meant the death of morality in any standard sense, and this could not be evaded forever. Bertrand Russell (1872 – 1970), in “A Free Man’s Worship” (1903), was more hopeful, seeking ideals of peace and justice even in the dead universe disclosed by materialist science. The impotence of those ideals was revealed in the disasters of the decades that followed: the world wars, the brutal dictatorships, the casual genocides, the many hidden cruelties such as global sex trafficking (young girls kidnapped and forced into prostitution so sociopathic pimps could get rich), and finally the rising specter of scientific oligarchy by a technocratic elite.
Optimists insist that despite the carnage we have made incredible levels of technological progress thanks to scientific advances. This much is true. We have cured diseases, sent men into space and returned them safely, and now have the means to communicate with one another visually across oceans (e.g., with Skype). Violent crime in advanced nations has fallen consistently over time. Poverty has diminished. But we inhabit a world of dangerous imbalances. Viewed ethically, something is terribly wrong, and every thinking person knows this. Technology has also produced weapons capable of laying waste to continents. Some of its products threaten the food chain, the disruption of which would precipitate mass extinctions. Technologists splice genes, work on AI, and speak of “transhumanism.” I am reminded of one of the great movie lines of all time, from Jurassic Park: “Your scientists were so preoccupied with whether or not they could that they didn’t stop to think if they should” (Dr. Ian Malcolm, played by Jeff Goldblum).
Materialism as trained philosophers use the term refers to a metaphysics, or theory of reality, a range of views we can summarize succinctly with the following:
(1) Reality means spatiotemporal reality, the physical universe. No sense is to be made of anything existing “outside of,” or “beyond” space and time, such as a God, or Heaven or Hell. Eternity and transcendence are meaningless concepts.
(2) What exists in spatiotemporal reality comes down to material things — physical entities (particles, forces, energy, however we cash out the specifics), which obey unyielding laws of physics and chemistry. Again, no sense is to be made of anything outside of these laws, such as God creating a world ex nihilo or performing miracles which would suspend or violate universal causality. The universe is self-existing and uncreated; it may have an origin and its present form an explanation, but these need not resort to any form of causality “beyond” physical or material nature. Naturalism, in other words, is the mandated methodology.
(3) Our only reliable means of knowing the world is naturalistic science, based on observation, hypothesis, experiment, data collection, theorizing, and replication. Natural science is just the use of the collective sense experience of trained and disciplined observers and experimenters in their various specialized domains. Science, unlike religion in this view, is not infallible but is self-correcting and progressive. Its authority, even if never settled, is decisive in what we can legitimately say we know about the universe. Outside of science, all is superstition, emotion, and unreason. Scientific methods have given rise to technological and economic progress, the world of relative comfort we now enjoy, and this is their primary validation.
(4) A human being is no less physical than anything else in nature. Our differences from other animals are differences in complexity, not in kind. Evolution did not “aim” to produce us. It had/has no “goals.” Our existence is a grand cosmic accident, as is life itself. The mind is just the brain (or, perhaps, the brain, senses, and central nervous system). Free will is an illusion created by ignorance of the actual causes of our behaviors. It makes no sense to say we can act outside the world’s causal structure.
(5) The diagnosis of the human predicament comes down to the prevalence of superstition, a lack of disciplined scientific reason, flawed institutions (governmental, commercial, educational); prejudice, hatred, and fear of what is different; and — in general — ignorance. The cure: knowledge, through universal education, leading to consciousness of public goods, more responsible governance, better use of science and technology. Thus we will find our way to our philosophical adulthood, giving up childish notions about a ghostly man in the sky and standing on our own feet.
(6) In terms of ethics, if there is no God or transcendence, then as Russell observed, morality must be found in this world, or in ourselves. While Darwinians have had ideas about morality as an evolutionary adaptation, in the last analysis if materialism is true there is nothing standing over us, even figuratively, insisting that we “be moral” and threatening to punish us if we disobey — aside from social sanction and the state, of course. Morality is in this sense an invention of cultures (a cultural artifact, an anthropologist might say), not a discovery of something built into nature. Morality differs from place to place and from time to time, as anthropologist Ruth Benedict (1887 – 1948) spelled out in her magnum opus Patterns of Culture (1934): cultural relativism, on which each culture selects, from the many possible patterns of human behavior, those it approves. What is “immoral” is whatever set of patterns a culture refuses to use.
Philosophers have struggled since the late 1700s to find some basis for stable morality in a material world, a very practical endeavor, as it was clear even then: if the majority of people did not behave ethically at least most of the time in the sense of being honest in their dealings, respecting others, helping and not hurting them, taking responsibility for their mistakes, etc., very soon you did not have a society. You had breakdown and chaos. This, though, was compatible with the idea that all morality does is serve as a kind of glue that cements cultures together by supplying a basis for resolving disagreements and solving societal problems, without any transcendent grounding. The Darwinians would say it had survival value. Cultures not adopting such mores do not survive. Simple as that.
Is this the only reason we should be moral in the material world? What if I can get away with not being moral as my culture defines morality? Is there any reason I shouldn’t make the attempt?
Materialism (Part Two) Steven Yates
“Same old song, just a drop of water in an endless sea
All we do crumbles to the ground although we refuse to see.
Dust in the wind
All we are is dust in the wind.” ~Kansas, “Dust in the Wind” (1977)
I’ve always enjoyed progressive rock, even if it raises my Christian friends’ brows sometimes. Much of it is well done, and sounds like some thought went into it. As implied by my referencing Madonna at the outset, popular music is often a good guide to the zeitgeist of a culture. Many singers/songwriters are sensitive to this in ways academics are not. Our cultural worldview, as I’ve emphasized, is fundamentally materialist, and even those uninterested in the philosophical specifics laid out in Part One will find themselves immersed in its consequences, one of which is the exclusive preoccupation with material goods amidst ethical ambiguity. One of the questions underwriting this ambiguity was best put by one of the first philosophy professors I was a teaching assistant for, back in the early 1980s. Are there any absolute values? was the question she posed in class. Must we rest content with the relativism of the anthropologists?
The German philosopher Immanuel Kant (1724 – 1804) believed we could deduce absolute duties from Pure Reason, and they would apply to all rational beings. He called his main principle the categorical imperative: always act as if the maxim or principle guiding your action could apply to everyone (I am paraphrasing, of course). Always tell the truth out of respect for the truth and respect for others as moral agents. Always keep your promises out of the same respect. Honor contracts. What is morally wrong is making exceptions for oneself, or treating oneself as a special case. Morality is universal or it is useless. Kant had problems, however, when universal duties appeared to conflict, as they sometimes did.
Great Britain’s John Stuart Mill (1806 – 1873), a Utilitarian, argued that morality is a matter of following the greatest happiness principle: your action is ethical if it creates a greater balance of happiness over unhappiness in the world, where happiness tends to mean pleasures of various sorts (those of the mind, such as scientific knowledge or appreciation of the arts, take precedence over those of the body, involving sensuality and appetites). This kind of position logically permits the sacrifice of some if it brings about enough knowledge and social benefits for the rest to enjoy a greater balance of happiness. And by the way, these are not idle games played by intellectuals locked away in academic cubicles. Mill’s ideas were widely studied and absorbed into governing bodies throughout the English-speaking world. They came to affect policy decisions in a variety of arenas, furthered by people who had scarcely even heard of Mill himself. The sacrifice of hundreds of black men in Macon Co., Ala., during the Tuskegee syphilis experiment is consistent with utilitarian thought! The public health community got away with this for decades! Also compatible with utilitarianism is every decision to send the children of the masses to fight wars of choice!
So is it the case that, as Russian novelist Fyodor Dostoevsky’s (1821 – 1881) character Ivan Karamazov put it, “If God does not exist, then everything is permitted”? Twentieth-century secular ethics has been a struggle against this wretched conclusion, as well as against the relativism of anthropologists such as Benedict. Thus far, the results are less than promising!
A few major thinkers of the later twentieth century weighed in with fresh proposals. Among the best known is John Rawls (1921 – 2002), who pursued a theory of social justice as fairness. He sought to identify rules that would be adopted by rational persons from behind a veil of ignorance: that is, from an ideal vantage point where the adopter does not know his race or class standing or other particulars. What principles would be most worth embracing by the rational and fair-minded? Rawls’s answer: every person should have basic liberties no government can take away, to the extent compatible with equal liberties for all (the liberty principle); “offices and positions” should be open to all persons regardless of race and sex (an equality of opportunity principle); inequalities, to be acceptable, must work to the advantage of the worst off (the difference principle).
Rawls’s critics noted that his original position (behind the veil of ignorance) works only under the assumption that most people are risk averse: they would not want to risk the results of principles that left disadvantaged groups to fend for themselves, as they might turn out to belong to one such group. That assumption is questionable, however, and others wondered whether the thought experiment is even realistic. Can anyone actually imagine himself behind a “veil of ignorance”? It certainly doesn’t comport with the identity politics that has arisen since Rawls wrote his major work A Theory of Justice (1971). Rawls did not see any connection between morality and justice on the one hand and metaphysics or worldviews on the other. The idea that these areas can be divorced from one another is part of secular ethics in the material world.
One of Rawls’s Harvard colleagues, Robert Nozick (1938 – 2002), developed an individualist ethic, as have other notable libertarian philosophers such as Tibor R. Machan (1939 – 2016*), some influenced by Ayn Rand (1905 – 1982). They focused instead on negative rights of individuals: rights to be left alone, which imply no duties to others except the duty to leave them alone. These they contrasted with supposed positive rights to specific goods someone else is obligated to supply, which, they argued, lead to collectivism. Their view was that all individuals have the right to act freely, pursue their own goals, and keep the fruits of their labors (private property) so long as they do not interfere with the same negative rights of others. All should deal voluntarily with one another in the free market. According to the non-aggression principle, central to the libertarian ethos, what is forbidden is physical aggression or coercion against others.
This view appeals to defenders of freedom and Constitutionally limited government, obviously, since to the libertarian, government is the primary aggressor against individuals’ rights, to be kept very small (minarchism, what Nozick called the night watchman state) or eliminated altogether (anarchocapitalism). The downside is that individuals rendered helpless, e.g., by illness or infirmity late in life, would have no inherent right to care, as that would be a positive right. For libertarian purists, even Social Security is a collectivized and forcible taking from some and giving to others. Negative rights do not do you much good, however, if all they come down to is a “right” to starve, or to die helpless. Families are considered responsible for helping their own, but the reality is that in industrial civilization family members have had to scatter everywhere in search of work, often leaving elderly parents behind. Nothing in libertarianism forbids a person from acting on his own to help, e.g., Alzheimer’s patients who are alone. This is hardly reassuring, though. An ethic of purely negative rights seems neither realistic nor humane. Libertarians assumed, moreover, that free market dynamics plus Nozick’s night watchman state would be sufficient to control corporate greed or prevent the dominance of the state by corporations acting in concert as they hungered after power. History suggests that this is wrong: the locus of power is not government per se but networked corporate leviathans that can buy political loyalty. One need only read John Perkins’s (1945 – ) Confessions of an Economic Hit Man (2004) to see the role corporations have played in controlling governments and bringing about a wide variety of regime changes and cultural catastrophes against those who resisted.
While these various notions have all received extensive discussion and debate, no one position has emerged as dominant. Richard Rorty (1931 – 2007), arguably the last major philosopher of the twentieth century (and possibly the last major philosopher the West will produce), put it like this in his Consequences of Pragmatism (1982), again to paraphrase: in the actual world, people have the rights and obligations society says they have, no more and no less. We are back to the anthropological view. What neither Rorty nor the anthropologists quite tell us is that “society” devolves upon authority: upon those with the capacity to enforce their will on others, or to use language in ways that ensure psychological conditioning and de facto control. One of Rorty’s favorite philosophers was educationist John Dewey (1859 – 1952). Dewey, who had studied under the Wundtian G. Stanley Hall whom we mentioned earlier, had also seen merit in behaviorism.
All of the philosophers we have considered were atheists except for Kant, who believed society benefited from a general belief in God, although from a philosophical standpoint Kant decoupled God from morality. Later philosophers simply built on this separation. Kant did not believe our reason was capable of settling the question of whether or not God exists; its built-in categories, as Kant called them, limited what it could know. But we cannot really evade the choice: believe in God and His commands, or not? To refuse to choose is to be an operational atheist, acting as if God does not exist while going along with what is fashionable, ethically speaking.
Rorty’s implicit answer to Dostoevsky is: “If God doesn’t exist, then everything is permitted that your fellows allow, the state permits, or that you can get away with.” The infamous “eleventh commandment”: thou shalt not get caught. If your culture has not convinced you that you shouldn’t lie, cheat, steal, or go on stage and perform nearly naked (think Miley Cyrus!), then so much the worse for your culture! Any ethical objections to the idea that corporations may do as they please and call it “the free market at work” turn out to be toothless.
Parts Three and Four will appear next week.
*Sadly, Professor Machan passed away after the first appearance of this essay.