The foundation of morality is woven into the fabric of the universe

Jonmaas · Published in Predict · 12 min read · Apr 15, 2024

The origins of right and wrong can be demonstrated empirically, through a replicable experiment

Header image of the cosmos and a Prisoner’s Dilemma flowchart. Photos courtesy of Stephen Rahn from stocksnap.io and Christopher X Jon Jensen (CXJJensen) & Greg Riestenberg from Wikimedia Creative Commons.

There are many types of moral frameworks out there — from Kantian thought, which (among other things) places a value on the intent of an action, to Consequentialism, which holds that results, and not intent, are paramount.

Even Machiavellianism is a moral framework of sorts, and David Benatar’s antinatalist viewpoint — which claims that it is a moral crime to reproduce — is also a moral framework.

But let’s take a step back to the concept of moral frameworks in general. Does morality have any real, provable foundation? Does the concept of right and wrong go any deeper than its labels?

Well, the Moral Nihilists, who claim morality does not exist, would say no.

And many of the devout would say there are moral foundations in the divine, but instead of observable evidence — we just have to have faith.

This article will take another path, one that argues that yes — morality has a foundation, and it can be demonstrated through a replicable experiment.

All right, let’s jump in.

The experiment that shows the basis of morality (and other things) — Robert Axelrod’s Game Theory tournament

Robert Axelrod — image courtesy of Wikimedia Creative Commons
Robert Axelrod

Robert Axelrod is a political scientist who organized a massive Game Theory tournament in the late 1970s, and this experiment yielded profound results.

First of all, a primer on both Game Theory and Prisoner’s Dilemma

Game Theory is a model that often relies on the Prisoner’s Dilemma thought experiment. The Prisoner’s Dilemma is a simple game where there are two prisoners in two separate rooms, each being interrogated by the police. Neither has any contact with the other, and each is given one of two options. They can Cooperate with the other prisoner by saying I don’t know anything. Or they can Defect from the other prisoner by stating The other prisoner committed the crime.

Game Theory square — courtesy of Wikimedia Commons

If they both Cooperate, they each get 1 year in prison. If they both Defect, they each get 2 years in prison.

But if one Cooperates and one Defects?

The Cooperator gets 3 years in prison, and the Defector walks free — and that is the way to ‘win’ the game, by Defecting while the other Cooperates.
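The payoff structure can be sketched as a small lookup table (a minimal sketch in Python; the year values are the ones given above, where fewer years in prison is better):

```python
# Prisoner's Dilemma payoffs as (my_years, their_years) in prison,
# keyed by (my_move, their_move). 'C' = Cooperate, 'D' = Defect.
# Values follow the article: mutual cooperation = 1 year each,
# mutual defection = 2 years each, the lone cooperator gets 3 years,
# and the lone defector walks free.
PAYOFFS = {
    ('C', 'C'): (1, 1),
    ('C', 'D'): (3, 0),
    ('D', 'C'): (0, 3),
    ('D', 'D'): (2, 2),
}

def years_served(my_move, their_move):
    """Return the prison years for the first player."""
    return PAYOFFS[(my_move, their_move)][0]
```

For instance, `years_served('D', 'C')` returns 0 — the lone defector walks free while the cooperator serves 3 years.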

Prisoner’s Dilemma is played out everywhere, from Diplomacy to Consumerism to Nature

This thought experiment is more than just an abstract concept.

Axelrod and so many others play this game because it is seen everywhere.

The US and the Soviet Union played the game on multiple levels during the Cold War. Both nations stockpiled nuclear weapons, which was a game of mutual defection, but neither was the first to fire them, which was another game of repeated cooperation.

In the consumer’s landscape there is also a game of Prisoner’s Dilemma — case in point the SUV.

An SUV
Photo by Maksym Tymchyk 🇺🇦 on Unsplash

All drivers would be safer if we all cooperated by refusing to get an SUV.

But any consumer can defect and get an SUV, and enjoy its spaciousness and safety advantage. This means the other cooperating consumers are now left with lower-to-the-ground, more accident-prone cars — until they themselves defect and get a Hummer.

Game Theory is also found in nature.

Two fish
These two fish might cooperate — or defect — with every swish of their tails. Photo by David Clode on Unsplash

Think of two small fish swimming by a larger, predatory fish.

Each coordinated swish of their tails is either a cooperation or a defection.

If they cooperate by staying with one another, they appear bigger, but hold some risk.

If they both defect by fleeing in separate directions, they have a bigger risk because the predatory fish can now pursue one of them.

But if one defects by swimming away at an oblique angle, the defector will win by leaving the cooperator to be the predator’s next meal, while the defector swims to safety.

Axelrod invited thinkers — or rather their algorithms — to join in a massive Prisoner’s Dilemma game

Axelrod had political scientists, logicians and other thinkers show up at his summit, each with an algorithm.

We’re not talking about advanced, computer-driven algorithms here per se — this was the 1970s, so think in terms of simple computer programs, or even pencil and paper decision trees.

These algorithms could often be expressed quite easily, e.g.:

Always defect.

Or —

Cooperate until the opponent defects, and then keep defecting until the opponent cooperates two times in a row.
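Rules like these are easy to express as functions of the opponent’s history (a hypothetical sketch; the function names and the exact forgiveness logic are my own reading of the second rule):

```python
def always_defect(opponent_history):
    """Defect no matter what the opponent has done."""
    return 'D'

def cooperate_until_burned(opponent_history):
    """Cooperate until the opponent defects, then keep defecting
    until the opponent cooperates twice in a row."""
    if 'D' not in opponent_history:
        return 'C'  # the opponent has never defected
    # The opponent has defected at some point: forgive only once
    # their last two moves were both cooperations.
    if opponent_history[-2:] == ['C', 'C']:
        return 'C'
    return 'D'
```

So against a history of `['C', 'D', 'C']` the second rule still defects, but after `['C', 'D', 'C', 'C']` it forgives and cooperates again.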

Instead of having a prisoner avoid jail time, Axelrod assigned positive points to each game win, and then —

Axelrod observed which algorithm got the most points.

From single games to repeated games he measured it all, and whatever algorithm got the most points would win.

And with one-on-one matchups, defection ruled the roost

If two algorithms went head-to-head, fortune favored the defector.

In fact, the algorithm Always defect cannot be beaten one-on-one. You can only hope to tie.
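This head-to-head result can be checked with a short iterated match (a sketch; the 5/3/1/0 point values are the standard Axelrod tournament payoffs rather than figures from this article, and tit-for-tat stands in for a typical cooperative opponent):

```python
# Standard Axelrod tournament points (higher is better) — an assumption,
# since the article scores in prison years: mutual cooperation = 3 each,
# mutual defection = 1 each, lone defector = 5, lone cooperator = 0.
POINTS = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def play_match(strat_a, strat_b, rounds=10):
    """Play an iterated match and return (score_a, score_b)."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a = strat_a(hist_b)  # each side sees only the opponent's history
        move_b = strat_b(hist_a)
        pa, pb = POINTS[(move_a, move_b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

def always_defect(opponent_history):
    return 'D'

def tit_for_tat(opponent_history):
    """Cooperate first, then copy the opponent's previous move."""
    return opponent_history[-1] if opponent_history else 'C'
```

Over ten rounds, `play_match(always_defect, tit_for_tat)` comes out 14 to 9 in the defector’s favor; no opponent can ever finish ahead of always-defect, only tie it.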

But that was not the end of the experiment.

Axelrod began grouping algorithms into clusters, and everything changed after that.

With clusters of games, cooperation = winning

When Axelrod had clusters of algorithms play one another for total points, the path to victory changed.

Clusters of defectors did not score well as a group, whereas clusters of cooperative algorithms did quite well.

And in fact, the more cooperative, the better.

The more efficient the cooperation, the better.

One-on-one, defection rules the roost.

But in groups, cooperation is king.
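The cluster result can be reproduced with a small round-robin (again a sketch under the standard 5/3/1/0 tournament point values, which are an assumption; the two uniform clusters are illustrative, not Axelrod’s actual entrant pool):

```python
from itertools import combinations

# Standard tournament point values (an assumption; the article uses years):
POINTS = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def play_match(strat_a, strat_b, rounds=10):
    """Iterated match; returns (score_a, score_b)."""
    hist_a, hist_b = [], []
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = strat_a(hist_b), strat_b(hist_a)
        pa, pb = POINTS[(move_a, move_b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(move_a)
        hist_b.append(move_b)
    return score_a, score_b

def always_defect(opponent_history):
    return 'D'

def always_cooperate(opponent_history):
    return 'C'

def cluster_total(strategies, rounds=10):
    """Round-robin every pair within a cluster; return the group's total points."""
    total = 0
    for strat_a, strat_b in combinations(strategies, 2):
        score_a, score_b = play_match(strat_a, strat_b, rounds)
        total += score_a + score_b
    return total

defector_cluster = [always_defect] * 4
cooperator_cluster = [always_cooperate] * 4
```

Here the four cooperators collectively earn 360 points to the defector cluster’s 120: one-on-one, each defector is unbeatable, yet the defecting cluster is poorer as a group.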

The implications of this were big — perhaps bigger than the experimenters realized

There was a demonstrable value to cooperation here, which bodes well for human societies and ant colonies both.

Cooperative societies, like cooperative ant colonies, tend to flourish better than their defecting counterparts.

But for our purposes, let’s take a step back and look at how deep this insight went.

The algorithms in Axelrod’s games are non-living systems, and that means something

This was deeper than theories of statecraft, or diplomacy, or consumerism, or even biological altruism.

These algorithms, made with computer programs — or even pencil and paper — do not think.

They can act in some fashion, but they are not sentient, or even alive.

They don’t even have matter or energy.

They exist only in terms of logic and math.

Yet this experiment proceeds with replicable results, which means —

That the value of cooperation lies deeper than biology, chemistry — or perhaps even physics.

A flowchart of the conceptual structure of the universe — Math, logic and cooperation are at the foundation, and then go upwards to Physics, Chemistry, Biology and ultimately Morality
Change the laws of physics and our observable universe would change, but Axelrod’s results might not.

The value of cooperation might exist on the level of math, in other words:

The value of cooperation is woven into the fabric of the universe.

The value of cooperation is in the fabric of the universe?

It appears so.

Axelrod proved that cooperative systems tend to flourish, and perhaps outcompete less cooperative systems.

This is shown at the level of math, which suggests that anything ‘above’ the level of math will obey the same dynamic.

In short, algorithms find value when cooperating in clusters — and so will animals, humans, and nations. One could even predict that alien species would cooperate to some degree, and perhaps that their level of cooperation would be commensurate with their technological advancement.

Heck, if you want to see how inconceivably deep the value of cooperation goes, look in the mirror.

A woman looking in a small mirror — Photo by Candace McDaniel on StockSnap
Every time you look in the mirror, you are observing 36 trillion human cells working in cooperation, and cooperating with 38 trillion non-human cells.

You have 36 trillion cells operating in perfect coordination to keep you going, and about 38 trillion non-human cells that are at the very least leaving you alone — which is itself a form of cooperation.

So now that we’ve shown that the value of cooperation is woven into the fabric of the universe, what does this have to do with morality?

The link to morality, shown by Yuval Noah Harari

The public intellectual Yuval Noah Harari argues that no creature has the cooperative power, and most importantly the cooperative flexibility, of humans:

We control the world basically because we are the only animals that can cooperate flexibly in very large numbers.

Now, there are other animals — like the social insects, the bees, the ants — that can cooperate in large numbers, but they don’t do so flexibly. Their cooperation is very rigid. There is basically just one way in which a beehive can function. And if there’s a new opportunity or a new danger, the bees cannot reinvent the social system overnight. They cannot, for example, execute the queen and establish a republic of bees, or a communist dictatorship of worker bees.

We are so flexible in our cooperation, that we stop seeing how flexible we are

Photo by Brian Yurasits on Unsplash

Let’s say you are camping with friends, maybe even acquaintances you don’t know that well. You are going to be at a site for a few days, so you collectively deem one place ‘the kitchen,’ one place ‘the tent area,’ and one place far away ‘the bathroom.’

There’s a torrential rain and everything needs to be rearranged.

So — after a brief discussion — another place is deemed ‘the kitchen,’ another place becomes ‘the tent area,’ and another place far away is now ‘the bathroom.’

A campsite in the woods — photo courtesy of Awar Memen and Unsplash.
Rearranging a campsite, and the rules of the campsite, is easy for humans. Photo by Awar Meman on Unsplash

A new person arrives (a stranger who seems nice, and is, so everyone accepts her), and after effortlessly following these new rules, she finds that one rock looks like a great place to have drinks, and deems it ‘the bar.’

A silhouette of three women outdoors celebrating — photo courtesy of Levi Guzman and Unsplash.
Humans can effortlessly make many fictions, including calling a place at a campsite ‘the bar.’ Photo by Levi Guzman on Unsplash

Everyone falls in line with that.

And then the new person says there are some reports of bears in the area, so you make a new rule — no wandering off by yourself after sundown. If you want to go anywhere in the dark, bring at least one person with you.

Those actions, so mundane to the human mind, show a deep flexibility of cooperation not found in any other species

Many species can cooperate, but no creature can delineate a campsite with fictional borders, and then change those borders when the environment changes —

And then effortlessly follow a stranger’s new and abstract rules, just as easily as she follows the group’s rules.

Are these fictional cooperative rules morality?

At the very least, they are linked.

It feels wrong to go to the bathroom in the new kitchen, and it is. It feels wrong to venture out alone after sundown, and it is.

A tent beneath tall trees and the night sky with stars. Photo by Denys Nevozhai and Unsplash.
If the group makes a rule against leaving the campsite alone after sundown, it feels wrong to break it. Photo by Denys Nevozhai on Unsplash

There are rules, and you are breaking them.

Humans swim in a sea of cooperative flexibility and —

Human morality is another effect of this.

Human moral systems are a form of cooperation, and the multitude of human moral systems shows the flexibility of human cooperation

Or at the very least, human morality is the human way of understanding the forces of cooperation.

In short, here is the proof:

1. Robert Axelrod’s experiments show that cooperation holds unambiguous power

2. This experiment showed the power of cooperation lies at the level of non-sentient, immaterial algorithms — which suggests that the power of cooperation is written into the level of math and logic, which make up the fabric of the universe

3. Humans cooperate in extraordinarily flexible ways. Some moral systems with a political bent (Communism, Fascism and Liberalism) are overt forms of mass cooperation, but other moral systems (Kantian thought, Consequentialism, and even Machiavellianism and Antinatalism) are forms of cooperation as well

4. Human moral systems have their roots in cooperation, which has its roots in the fabric of the universe, i.e. math and logic

5. Thus, human morality is rooted in the fabric of the universe. It can show itself in many ways, but the roots are there.

In short, cooperation has a power found throughout the universe, and human morality is one way of either expressing this power, or reacting to it

Everywhere in our universe, 5 + 5 will equal 10, and cooperative systems will tend to flourish more than non-cooperative systems.

Axelrod proved the latter, and it can be proved again and again with further experiments.

It can, of course, also be observed everywhere you look. Well-organized (and cooperating) Roman armies conquered disorganized (and non-cooperating) tribes, and a group of spiders is no match for a colony of driver ants.

And no species, including driver ants, shows the cooperative flexibility found in Homo sapiens.

We express our cooperation in many ways, from coming up with laws to sustaining one another with the shared fiction of money.

We also cooperate by making moral systems. This might be an expression of cooperative power, or a reaction to it.

But still, we humans make moral systems, and — as Homo sapiens are wont to do — we make an inordinate number of them through the flexibility of our cooperation, and our fictions.

Case study: the matriarch spider versus the human juror

All right, let’s compare moral systems, from a female spider and a human juror.

A female spider has one way of being, which we could call a morality: The female spider chooses a decent mate, and then eats him if she can. She traps flies and feels no empathy towards them when she is consuming them. When it comes time for her spiderlings to leave her, she turns her own enzymes against herself in her final act, allowing her offspring to commit matriphagy and gain one final meal before they go out into the world.

A human juror, on the other hand, vacillates between Kantian and Consequentialist morality in the course of their day — even if they don’t know it.

Immanuel Kant’s philosophy, among many things, placed a great value in intent.

Jurors in most human legal systems take intent into account. If the accused shot someone, there are questions: was it an accident, done on purpose, a crime of passion, or premeditated? To a juror, these answers matter. The victim is still shot regardless, but the juror must determine the judgment (and morality) of the act by taking intent into account.

When the juror goes to lunch, however, she becomes a Consequentialist. Consequentialism often acts as a foil to Kantian thought, because it finds moral value in results.

When the juror sees a piece of moldy bread in the cafeteria, she does not purchase it, no matter the baker’s intent. She chooses the bread she wants, because in the capitalist consumer landscape — results are paramount.

Three round loaves of bread and a wheat stalk. Photo courtesy of Wesual Click and Unsplash.
When a consumer sees this bread — they do not care about the baker’s intent. They just evaluate it as good bread. Photo by Wesual Click on Unsplash

What do the matriarch spider and the juror have in common? They both hold a morality that is rooted in cooperation, which in turn is rooted in the fabric of the universe.

There is a reason why matriarch spiders act the way they do — their actions are rooted in a rigid form of Darwinian evolution, which in turn relies on the logic of cooperation.

Yes, the spider cooperates, albeit in an extremely limited way. She cooperates with her mate until she attempts to consume him, and then cooperates with her offspring by offering her body as sustenance.

And of course her countless cells cooperate with one another during her lifetime.

Human morality is more flexible and can rely on abstract notions, but it is a logical cooperation. In the jury box intent matters, and in the cafeteria, results do. That is the cooperative morality Homo sapiens can navigate with ease.

But like the spider’s narrow morality, a human’s expansive and flexible morality is rooted in the cooperative forces of this universe.

Human morality is flexible, but it is not made up from nothing

Las Vegas from above at night — photo courtesy of Rocker Sta and Unsplash.
Humans have agreed that the laws are slightly different in Las Vegas, and they are. Photo by Rocker Sta on Unsplash

We humans can make up rules. We can change the rules.

And we can also make up moral systems, and then change them.

But we don’t make up moral systems from nothing.

There is a root of cooperation there, a root that lies beneath biology, chemistry and even physics.

It lies in the fabric of the universe.

So though the determination of what might be right and what might be wrong can still be open to interpretation (and inevitably is), the foundation of right and wrong is not non-existent.

The foundation of morality lies in the value of cooperation, which lies in the fabric of the universe, and is very, very real.

Jonathan Maas has a few books on Amazon and has directed the sci-fi movie Spanners, which is free to watch on YouTube.

He is a member of the Los Angeles Philosophy and Ethics society, whose conversations helped inform this thesis.

For further reading he suggests Robert Axelrod’s The Evolution of Cooperation, and Yuval Noah Harari’s Sapiens. For a hopeful book that suggests math is on the side of open and tolerant societies, he suggests The Return of Great Power Rivalry by Matthew Kroenig.

