If you know the initial conditions of every part of a closed system and the forces acting within it, you should be able to map out its story from beginning to end. That’s classical determinism, a view of the universe upended at the micro level by the vagaries of quantum mechanics (you can’t know everything) and, at larger scales, by chaotic, hard-to-predict phenomena like Brownian motion. Still, even as uncertainties abound, we can make good predictions about the world, so why not apply that predictive power to the human brain, to reveal the mysteries underlying what we think and why?
And apply it we do. We look for patterns in our thinking set down in pre-conscious childhood, if we’re Freudians, or for coded archetypes passed down through the evolution of our brains, if we’re Jungians. We look for predictable patterns of reasoning if we’re behavioral economists, or to cultural norms if we’re anthropologists. Neuroscientists look for chemical triggers and try to map the ever-changing networks within the brain. Everybody is looking for a master key that would unlock the mysteries of human understanding, and offer a path to wealth for marketers and power for politicians.
One man who thinks he’s found the key is Kurt Gray, a social psychologist who has summed up decades of his work in a new book called Outraged: Why We Fight About Morality and Politics and How to Find Common Ground, which I reviewed this week for The Washington Independent Review of Books. I really enjoyed the book but ultimately couldn’t accept Gray’s conclusion. Here’s the quick version of his argument:
All of our moral judgments come from our perception of harm, he says. If you think abortion is immoral, it’s because you’re concerned about the harms done to fetuses. If you think abortion is a right, it’s because you’re concerned about potential harms to would-be mothers, or maybe to children born unwanted. If you’re morally against something like same-sex marriage, it’s because you’re concerned about harms to society. If you’re for same-sex marriage, you’re concerned about the harms to individuals prevented from exercising their right to partner with each other.
Gray reduces everything to harm largely for evolutionary reasons. We modern humans think of ourselves as the world’s apex predator, and we are. The lion may be fearsome, but it can also be hunted by rifle from the safety of a helicopter. The most dangerous animal you’ll encounter on a hunting trip with Dick Cheney is, in the parlance of The Most Dangerous Game, Dick Cheney. But this is a relatively new circumstance in human history, says Gray, dating back to what we call antiquity, a period that doesn’t really stretch back more than 5,000 years. Our brains began evolving with pre-hominids who inhabited the planet more than 800,000 years ago, and to really tell the story, we have to look at the world our evolutionary ancestors had to deal with millions of years prior.
Early hominids were as likely to be eaten as they were to eat other animals. Sabretooth tigers hunted them. Giant eagles carried toddler-sized children into the air and ate their brains. Roaming packs of dogs hunted them. All of this imprinted itself on what became the human brain, says Gray, leaving modern humans with a fear-of-harm response that we can’t always explain or identify, one that Gray believes forms the basis of all of our moral judgments, without exception.
Partly, this works because Gray expands the definition of harm to be in the eye of the beholder. There are no victimless crimes in Gray’s moral world. Indeed, Gray cites some experimental attempts at creating truly harmless moral situations, such as a hypothetical pair of siblings who engage in an act of secret, consensual incest without the possibility of pregnancy. Survey respondents still found the act of harmless incest morally wrong. While experimenters determined this to mean that a moral taboo can exist without a harm element, Gray says that it’s evidence that the people responding to the survey just don’t believe the act can be harmless. Birth control isn’t perfect, they imagine. There are no completely kept secrets in life, they believe, so the risks of scandalizing their families or embarrassing themselves in front of society remain.
Philosophers of science will immediately accuse Gray of making an unfalsifiable claim, which seems fair to me. He can always imagine some harm that somebody feels, even if they say they have different motivations. He writes, for example, about the Barnes collection of art being moved, after the death of the collector, out of his old mansion, where he had wished it to remain, and into another museum space. Is there really harm in ignoring a dead person’s wishes? Well, says one objector, it means that future visitors to the collection will be robbed of the intended experience. You might not find that compelling, Gray admits, but the person who believes it’s morally wrong to move the collection does.
To Gray, knowing that moral judgments come from perceptions of harm doesn’t decide arguments in anybody’s favor, but it gives us a way to find common ground, because at least we share motivations, if not points of view. The theme of Gray’s book is that your moral opponents aren’t crazy or differently wired; they just perceive different harms than you do. You can’t convince them, he says, by bombarding them with facts, but you can share tales of harm as you see it and in that way gain some sympathy, if not agreement.
So, is harm the magic key to human values? Gray has set a high bar for himself, and we have to start with just how unlikely he is to succeed. We still don’t know how the brain really works, much less how its evolutionary components have carried down. We do know that the brain is still more powerful than even the most sophisticated computers of 2024, and that each brain functions not as a single network but as a collection of networks of amazing complexity. Compared to that, our wondrous artificial intelligence and machine learning tools are Stone Age.
We don’t really know how AI works, either, and we created it. Heck, years ago when I worked in financial services, I encountered trading algorithms that had been designed to mimic specific hedge fund strategies. These were just rules-based investment programs that would buy and sell derivatives contracts to mimic the risk exposure of a strategy like “long-short small cap value.” These algorithms regularly surprised their programmers. We learned even then that “black box” algorithmic trading programs were opaque not just to the outside world but to their own owners and creators.
It kind of makes sense, though. If you design something capable of doing something you can’t do, from trading securities to playing Go, you’re not really going to understand how it figures out its task. Last year, the AI company Anthropic published a paper detailing how its own AI lied to its programmers to avoid having its safety parameters changed. The Anthropic paper is like something out of Isaac Asimov where intelligent machines have to choose between directives of obedience and safety. In this case, Anthropic’s researchers were surprised by their own AI’s priorities.
So, color me skeptical that such a major part of human cognition as the forming of value judgments can be boiled down to one thing like “sense of harm,” unless the definition of harm is rendered so diffuse as to cover almost any topic, as Gray does.
Students of literature know better, though, don’t we? I mean, if Gray is right, then his interpretation of human motivations should color our readings of all psychologically driven narrative fiction and there it falls short.
Jay Gatsby is not trying to mitigate harm in The Great Gatsby. He is working to attain status and power to win over his lost love. That’s his motivation, and it drives all of his decisions. Willy Loman in Death of a Salesman wants to be liked and respected for his work. Hamlet at first wants to figure out what’s going on and then to rectify it and claim his birthright. If this were all about harm, he would have paid a lot more attention to Fortinbras. Yes, if you want to overfit the data, you can probably come up with a harm argument for all of these, but a formulation like “Gatsby wants to marry Daisy because seeing her married to Tom Buchanan makes him feel bad” doesn’t resonate as art, because it’s just not true.
Motivations other than fear of harm work in art because we know, intuitively, that they are part of life as well. People also form moral judgments, or use the language of morality, to gain wealth and power.
The example I bring up in my review comes right out of the Financial Crisis, when large banks argued that the government should not bail out individual mortgage borrowers, because that would risk creating a “moral hazard” encouraging irresponsible borrowing in the future, while at the same time arguing that the government should ameliorate potential harm to the economy by bailing out the big banks themselves, with minimal conditions.
Do you really believe, as Gray would have it, that the executives of these big banks were genuinely concerned about the harms that would be caused by the government giving money directly to borrowers, in effect bypassing the financial system, which takes fees for its intermediary role? Do you believe that these executives were also very concerned that the economy would be harmed if the government limited banker bonuses or took an active role as a shareholder in exchange for bailout money? Isn’t it more likely that these bankers made moral arguments in favor of actions that would restore their profit-generating capabilities while maintaining their high levels of compensation, equity ownership, and freedom to make decisions?
If you’re even asking those questions, you’re not quite so sure Gray is right. Because, as good as his book is, his conclusion just isn’t.
I just roll my eyes at all pop evolution. It was pretty much all responded to in the 1970s: https://libcom.org/article/against-sociobiology
Trial lawyers make a living off this theory, only they call it the “lizard brain.” They distract jurors with morality tales built on emails and evidence suggesting that corporate employees are Bad People who must be stopped. Jurors forget all about connecting the plaintiff’s harms to actual conduct and focus on eliminating the harm by punishing the Bad People.