(in)coherence of Altruism

Quite often, when we speak about ethics, the question of altruism comes up: it is taken to be the default way that truly moral/ethical people act – or should act. Altruism is defined by the OED as “the fact of caring about the needs and happiness of other people more than your own,” or as “unselfishness, devotion to the welfare of others, opposite of egoism,” from Old French altrui, “of or to others,” from Latin alteri, “other.”

When we speak of altruism, we mean the idea that we act in a way such that our focus is solely the benefit of others, without regard for our own benefit – without regard for getting something out of it. In this way, it is argued, we are really engaged in doing good for the sake of goodness. Where we do get something out of it, we are merely pursuing our own benefit, and the fact that others are helped is incidental. That is, we would have done whatever was to our benefit anyway… the goodness that happened along the way was incidental.

The argument is also commonly made that there is no altruism, because we always derive some benefit from our actions – even if we only feel good for having helped others. The argument then goes that this lack of possible altruism is proof that there is no such thing as goodness itself, or objective right and wrong. That is because the final standard is our own benefit, not doing some impartial “good.”

This argument (both sides of it) is all sorts of problematic. The purpose of this article is to shed some light on this problem, and to give us a functional answer and explanation.

First, belonging to any ethical system inherently comes at a cost. This is because every ethical system has requirements: things we must do, things we can’t do, things we have to sacrifice, etc. So, there is a cost component.

Second, the idea of belonging to an ethical system has a notion of working for a benefit (highest moral value/GHE). The simple example we used was the med student working towards becoming a doctor. Therefore, the work that goes into becoming a doctor is done with an eye on the benefit to be achieved. So, there is a benefit component.

Third, the question of the ultimate goal – the highest moral value – sets the agenda for all our other values, interpretation, meaning, etc. What we get is a pyramid with the highest moral value (GHE) sitting at the top (details: here). Like this:

Generic Hierarchy of Values

So, everything that we do happens against the background of that ultimate goal. But that means that our notions of values (right and wrong) happen in reference to this goal (which we assume is the Truth). The resulting system looks like this:

Good and Bad in context of the Hierarchy of Values

This simple distinction is actually more complex. If you imagine values on a Cartesian coordinate system, anything that moves you up the Y-axis is good, and anything that moves you down the Y-axis is bad. But up and down are not the only possible motions. There is also the X-axis to consider. Once things are not oriented solely up and down, they are headed – sometimes more, sometimes less – in the general direction of good or bad. So, you get a rank-ordering that looks something like this (pardon the atrocious Paint skills):

Action Value in a Hierarchy of Values

What is happening is that all your actions, commitments, engagements, etc. fall somewhere on this spectrum. You are hoping to get everything aligned so that it falls in line with the “best” axis. That way, all the things you do are leading you towards the best possible outcome – everything you do is directed towards the Truth.

What that means is that every action is good or bad (to some extent) depending on its orientation relative to your highest moral value. Even neutral actions (the few there are that fall solely along the X-axis) can still be bad, because the time and effort they consume could (generally) have gone into activities that lead us up the pyramid; instead, we are spending those resources and staying in place.
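To make the geometry concrete, here is a minimal Python sketch – my own illustration, not part of the original diagrams. It assumes each action can be modelled as a 2D displacement vector, where the Y-component is movement toward (or away from) the highest moral value; the action names and numbers are invented.

```python
import math

# Hypothetical sketch: each action is a 2D displacement vector.
# Y-component: progress toward (+) or away from (-) the highest moral value.
# X-component: "sideways" motion that neither advances nor reverses it.
actions = {
    "study for boards": (1.0, 3.0),
    "volunteer at a clinic": (2.0, 2.0),
    "binge a TV series": (3.0, 0.0),    # purely "neutral": X-axis only
    "cheat on an exam": (1.0, -4.0),
}

def y_progress(vector):
    """Value an action by how far it moves you up the Y-axis."""
    _, y = vector
    return y

# Rank-order the actions, most upward-oriented first.
for name, vec in sorted(actions.items(), key=lambda kv: y_progress(kv[1]), reverse=True):
    x, y = vec
    angle = math.degrees(math.atan2(y, x))  # orientation relative to the X-axis
    print(f"{name:22s}  y-progress={y:+.1f}  orientation={angle:6.1f} deg")

# Note: the "neutral" action scores zero on the Y-axis, yet it still consumes
# time and effort that could have gone into an upward move.
```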

In any case, everything we do either gets us closer to, or away from, the Truth and our ultimate goal. So, what we’re doing is always to our benefit or detriment, with no real middle ground. Whatever we do, whatever we sacrifice, we’re always doing it with our eye on the benefit we’re aiming for. What we believe is “good” or “bad” is a system of sacrifice/investment and corresponding benefits that is ultimately aimed at the Truth.

This gives us an economic notion of “return on investment” (ROI). When we have the option of doing X or Y, we weigh the cost of doing either against the benefits we derive from each. And, if we’re coherent, we pick the action with the greatest benefit at the end of it. This way, our behavior is optimized. Since time and effort are the only resources we have, this translates into the returns you get for investing your life and effort: the better the returns (according to your highest moral value), the better the investment.
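As a toy illustration of that weighing, here is a hedged Python sketch. It assumes cost (time and effort spent) and benefit (progress toward the highest moral value) can be put in comparable units; the option names and figures are invented.

```python
# Hypothetical sketch: compare two options by return on investment.
options = {
    "X": {"cost": 10.0, "benefit": 25.0},
    "Y": {"cost": 4.0, "benefit": 12.0},
}

def roi(option):
    """Net return per unit of time and effort invested."""
    return (option["benefit"] - option["cost"]) / option["cost"]

for name, opt in options.items():
    print(f"Option {name}: ROI = {roi(opt):.2f}")

best = max(options, key=lambda name: roi(options[name]))
print(f"Coherent choice: {best}")
```

With these invented numbers, option Y returns more per unit invested, so it would be the coherent pick.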

However, that answer presents a different problem: if belief in good/evil depends on the returns on investment, then all good behavior is actually selfish and self-interested – there is no altruism (no being “good” because it is the “right thing to do”). That conclusion does not sit well with us. We like to think that good people are good, and that bad people are selfish and calculating. The lack of altruism makes it seem as if we’re not really good – we’re merely acting out whatever gets us what we want – and the benefit to others (the “good”) is incidental.

Yet, the notion of altruism is problematic as well. What would it mean to act without regard for the outcome (without caring how it affects you – and ultimately, everything you do affects you)? To just do “good”?

The first problem presented by the idea of altruism is that it assumes that there is a universally recognized and agreed-to “good.” Notice that I am not arguing here whether there actually is such a good, just that it has to be recognized and agreed-to. This is critical. Whatever your highest moral value, you have things you believe are, in fact, universal goods. Whether your belief has hit the mark or not is a separate issue. Altruism supposes that we have actually agreed on the good, since it requires us to simply do that “good,” as if we all had, in fact, agreed. That’s how people would determine what an altruistic act is – an act that is “good,” and is not to your personal benefit.

The second problem, once we take the above diagrams into account, is that we can’t even define “good” without reference to ROI benefits. Good, for Christians, is what gets you to heaven. Good, for the med-school student, is what gets him the MD title. Now, what about Christian med students? Well, Christianity is an umbrella term. That is, it functions as a top-level highest moral value. You can coherently pick any activities under that umbrella, so long as their goals do not contradict the goals of Christianity. But, to be rational and consistent, you would have to restrict your activities to only those that are the “best” actions for your highest moral value. So, yes, you can aim to be a Christian and attend med school, provided that your attendance is not, somehow, heading you down the Y-axis.

But here’s the real problem: if you truly acted without regard for consequences, then there would be no difference between helping the needy and setting your money on fire. How so? 

Well, why is setting fire to your money not as good as helping the needy? Because of the consequences. More precisely, because the consequences are connected to the idea of good and bad. Helping the needy is “good” because it shows respect for the value of human life, for “God’s creation,” or any number of other explanations. But why is that good? Because showing respect is of benefit to you (the type of benefit depends on the specific explanation): it is “good” in that it heads up along the Y-axis towards your highest moral value. If you had a highest moral value/GHE that was focused solely on the survival of the fittest, then helping the needy would be “evil” (this argument is actually presented by the Chinese philosopher Han Feizi, who says that “charity is a sin”). In the end, we cannot get away from the ROI language, because the fact that we have values also means that we evaluate things. That’s the reason we can make a distinction between helping people and burning money.

Even if we were to drop silly actions like burning money, acting without regard for the consequences would be like giving away money randomly, without regard for things like need. We can, I think, agree that giving money to Bill Gates and giving it to a person starving to death are not the same. But the difference is the consequences of what that money does – or the context, if you prefer.

In that case, no matter the action, we are not really talking about sacrifice so much as an investment. An investment is paying some cost X now, but getting some higher return Y later. The key to a functional investment is that Y > X (the return is greater than the investment). So, when you invest in your 20s, you are losing something that is of immediate benefit to you (money). But you are expecting a significantly higher return when you turn 65. When you invest time, effort, etc. into becoming an MD, you are losing things of value now, but expecting a far greater return later. The same holds true of religion: you invest now, and get your returns later. The only real difference is the start date of your benefits.
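For a purely numerical illustration of Y > X, here is a small sketch with invented figures and an assumed growth rate – nothing more than the arithmetic of the analogy.

```python
# Hypothetical numbers only: the Y > X logic of an investment.
cost_now = 10_000           # X: what you give up in your 20s
annual_return = 0.05        # assumed yearly growth rate
years = 40                  # roughly age 25 to age 65

value_later = cost_now * (1 + annual_return) ** years   # Y
print(f"Invested now   (X): {cost_now:,.0f}")
print(f"Expected later (Y): {value_later:,.0f}")
print(f"Functional investment (Y > X): {value_later > cost_now}")
```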

That means that there is no altruism – not technically and not coherently. And that is a good thing. Whenever you do something “good,” it brings you closer to your Highest Moral Value. Whenever you do something “bad,” it takes you further away from that goal. There is no “doing good” without regard for the consequences (the relation of action to your highest moral value) – because the consequences (up or down) are what makes the action good or bad.

Everything you do affects you. It affects you in the present, and it affects you down the line. Every action, because it is made up of your time and effort (and resources are a combination of those two), must affect you. How you invest your time in the moment affects the position you have on the pyramid down the line.

Perhaps we can rephrase the problem of altruism and have it make sense this way: what we mean by altruism is the idea of caring about the material needs and happiness of other people more than your own material needs, happiness, or worldly status. In this case, altruism is about sacrificing one’s material goods in the moment. However, the only meaningful sacrifice in that case is sacrifice in a secular sense (secular, because the religious position tends not to be restricted to hard materialism). If we assume that the only available goods are material, and that death means the end of all possible benefits, then sacrificing goods now (without hope of repayment) is a real loss. As soon as we have any notion of an immaterial continuance of life (i.e. an afterlife), all such sacrifices become investments. That is, they will be rewarded/punished at some point, even if that point extends beyond your lifetime. It is only in a secular world, where there is a deadline for recouping investments, that altruism might be a coherent notion.
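That “deadline” logic can be put in a small hedged sketch. All the numbers are invented, and the `horizon` parameter is a hypothetical cutoff after which no returns count.

```python
# Hypothetical sketch of the "deadline" point. All numbers are invented.
def net_outcome(cost_now, payoff, payoff_year, horizon):
    """Total outcome when returns arriving after `horizon` count for nothing."""
    received = payoff if payoff_year <= horizon else 0.0
    return received - cost_now

# Give away 100 units now; the corresponding "reward" arrives long after death.
secular = net_outcome(cost_now=100, payoff=300, payoff_year=200, horizon=80)
unbounded = net_outcome(cost_now=100, payoff=300, payoff_year=200, horizon=float("inf"))

print(f"Secular horizon (deadline at death): net = {secular:+.0f}")    # a pure loss
print(f"Unbounded horizon (afterlife):       net = {unbounded:+.0f}")  # an investment
```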

However, even in a secular system, the idea of altruism ultimately falls flat. It falls flat because “good” is only that which is in line with your Highest Moral Value. In a secular system, all goods are bounded by that deadline. And so, losing goods in the moment, in a way that will never benefit you, is incoherent behavior. That is, there is no reasonable way to call something good unless it produces a good – and the production of good is a benefit – so the “altruistic” action (producing no benefit) could not be good.

Alternatively, unless we had some other standard of goodness being fulfilled by the action, we could not differentiate between different actions that are counterproductive to our own goals and benefits. How do we distinguish between burning money and giving it away with no benefit to ourselves? Both are bad for us; we lose the same resource in both; we gain nothing in either. The only difference is that one benefits someone else. So why is that good, while burning money is not? That is only possible if benefitting other people is somehow good – in other words, if it is a thing we aim for, so that acting that way is of benefit to us (which it cannot be in this case, because, by the wording of the system, we can derive no benefit from it).

Thus, what we would mean by altruism is something like acting against your best interests. But in that case, we could not claim to be doing good – because all “good” is defined as inherently in our best interest. What we would be doing is evil – because it is aimed down the negative Y-axis. At that point, the idea of altruism fails entirely.

To summarize, the idea of doing good that has no benefit for you is incomprehensible. If an act is good, then it is to your benefit, because it moves you up the pyramid. Thus, we may try to define altruistic behavior in a secular sense: the idea that I will not receive any material benefit from my action, but the action is good. But in that case, we are positing that “good” is universally agreed upon (it’s not), and that it is good even with a bad effect on the person carrying out the action – which makes no sense, since goodness is related to your idea of Truth and value. So, altruism devolves into acting against your best interests – against your own good – which is the same as acting in an evil way. Thus, it makes no sense there either.

The conclusion we’re left with is as follows: all things good and bad have a good or bad effect on the person carrying them out. Their goodness or badness is relative to that person’s understanding of the Truth and their highest moral value.
