GreaseSpot Cafe

Does Neuroscience Refute Ethics?


Ron G.

http://www.mises.org/story/1893

Does Neuroscience Refute Ethics?

by Lucretius

[Posted on Wednesday, August 24, 2005]

The study of the brain, also known as neuroscience, has expanded considerably in recent years from its modest beginnings as a branch of physiology, and is now poised to become the queen of the sciences. The advent of techniques such as functional magnetic resonance imaging (fMRI), for example, has attracted many with little interest in traditional neurobiology, which is largely concerned with tracing anatomical pathways and elucidating electro-chemical communication between brain cells.

These new neuroscientists are interested in the kinds of questions traditionally addressed by the humanities and the social sciences. The answers they give have attracted much attention in the popular press. We now have affective neuroscience, social neuroscience, cognitive neuroscience; indeed, a neuroscience of everything under the sun. As all human activities can be related to the brain, neuroscience seems to be in a perfect position to bring the prestige of the natural sciences to the "soft" disciplines.

Morality and law are among the latest subjects to be invaded by neuroscience. Here I discuss a recent attempt, based on brain imaging evidence, to debunk universal moral principles and revive utilitarianism in moral judgments, not in the Misesian sense of rules that foster social good, but rather the view that there is no real right or wrong or fixed moral standard by which anything can be judged. (In a forthcoming article, I will discuss a related attempt by the same researchers, again resorting to arguments from neuroscience, to attack the concept of free will and personal responsibility in law.)

Neuroscience and morality

In 2001 a group of researchers published in the journal Science a study examining the neural basis of moral judgment. Interestingly, the principal author of this paper, Joshua Greene, is an analytic philosopher by training, having received his Ph.D. from Princeton under the late David Lewis (you can read about his background here).

The moral dilemmas posed to the experimental subjects, while their brains were being scanned, were described thus:

"A runaway trolley is hurtling down the tracks toward five people who will be killed if it proceeds on its present course. The only way to save them is to hit a switch that will turn the trolley onto an alternate set of tracks where it will kill one person instead of five. Ought you to turn the trolley in order to save five people at the expense of one? Most people say yes. Now consider a similar problem, the footbridge dilemma. As before, a trolley threatens to kill five people. You are standing next to a large stranger on a footbridge that spans the tracks in between the oncoming trolley and the five people. In this scenario, the only way to save the five people is to push this stranger off the bridge, onto the tracks below. He will die if you do this, but his body will stop the trolley from reaching the others. Ought you to save the five others by pushing this stranger to his death? Most people say no."[3]

Using fMRI, Greene et al. found that brain areas associated with emotion were activated when the footbridge version of the dilemma was presented, but not when the trolley version was presented. Some moral dilemmas, therefore, appear to engage more "emotional processing" than others. They argue that people are more likely to sacrifice one life to save five if the scenario does not engage their emotional brain areas, as in the trolley case; and they call this type of dilemma "impersonal."

By contrast, in the footbridge case, where one must kill a stranger to save five, the emotional brain areas are engaged, and as a result people are less likely to make this decision; this type of dilemma they call "personal."

So far, so good. Regardless of the validity of their data, Greene et al. have stayed within the boundary of experimental science. In a later study, however, they went further[2]. This time they employed a different moral dilemma: should one smother a crying baby to death to protect the lives of many when enemy soldiers are approaching? Here they compared brain activation patterns between subjects who approve (utilitarians) and those who do not (deontologists).

For those new to philosophical jargon, utilitarians believe that morality is a matter of promoting the greater good, while deontologists argue that there are absolute moral principles that can never be violated regardless of the consequences. Hence according to utilitarians, one should kill the baby to save everyone else, but according to deontologists, one should not, since murder is simply wrong.

Greene et al. observed greater activity in brain regions associated with emotion when subjects disapproved of killing the baby, and greater activity in brain regions associated with "cognitive control" when the utilitarian judgment prevailed. Cognitive control processes, moreover, can work against the social-emotional response, resulting in more utilitarian judgments, that is, a greater tendency to approve of smothering the baby. In one brain region (the right anterior dorsolateral prefrontal cortex), activity increased for participants who made the utilitarian choice but decreased for those who made the non-utilitarian judgment. Again, emotions drive individuals to reject choices that, while violating moral principles, result in more aggregate welfare.

The shock comes from the conclusion drawn by these authors: "The social-emotional responses that we've inherited from our primate ancestors . . . undergird the absolute prohibitions that are central to deontology. In contrast, the 'moral calculus' that defines utilitarianism is made possible by more recently evolved structures in the frontal lobes that support abstract thinking and high-level cognitive control." To put it bluntly, the old emotional brain represents the view of the deontologists, who believe in universal rules of morality, whereas the new rational brain embodies the utilitarian view[2].

At this point, the vigilant reader can already detect some bias surfacing in the interpretation of the data. According to Greene et al., there are two sets of brain structures in competition when humans make moral decisions: the old emotional brain regions and the new rational brain regions. The form of the argument, of course, is not new, but in place of the traditional dichotomy of reason and emotion we now have "areas associated with cognitive control and working memory" and "areas associated with emotion."

The benefit of this transformation is the ability to make an evolutionary argument about the brain areas. The reasoning goes something like this: because the old emotional brain evolved to deal only with personal situations, it is ill-equipped to do the moral calculus, weighing costs and benefits and choosing the action that yields the highest aggregate welfare. For such advanced cognition, the more recently evolved brain areas must be recruited.

Unfortunately, however, Greene et al. are not interested in pursuing their evolutionary line of reasoning further than is convenient for their argument. The dorsolateral prefrontal cortex, a brain area that Greene et al. found so important for utilitarian calculation, is indeed a "recent structure" on the evolutionary scale, but its period of greatest expansion in the primates was still millions of years before universal moral principles arrived on the scene.

For example, rules such as "thou shalt not kill" and "thou shalt not steal" were not found among humans 40,000 years ago, though there is no known biological difference between their dorsolateral prefrontal cortex and ours. Moreover, as a deontologist, Kant would presumably be classified by Greene et al. as someone with an overdeveloped emotional brain. But the Kantian Categorical Imperative (we should always act according to a rule that could become universal law; we should always treat another human being as an end, never merely as a means) is not the type of morality that characterized our hunter-gatherer ancestors.

It should also be pointed out that a measure of brain activity like the fMRI signal can at best be correlated with a psychological function; it cannot demonstrate a causal role for the brain area in question. Just because a particular brain area became more active when a decision was made does not mean that that area influenced the decision. In fact, the fMRI signal does not even provide a direct measure of the spiking of neurons, so we do not know whether it reflects the inputs or outputs of the activated area.

Nor, for that matter, is the classification of the brain areas as either "emotional" or "cognitive" beyond dispute; it is in fact a common sleight of hand among the new phrenologists. Most fMRI studies link a particular area to a particular function. Never mind the validity of the psychological function being posited; the assumption, rather, is that if there is a scholarly name for a function, such as cognitive control, there must be a brain area for it. As a result, many areas are burdened with dozens of labels. Now if you find multiple areas activated, you can search the literature and find what you are looking for among the functions attributed to these areas. This approach rarely fails, and in the hands of skillful practitioners it is almost guaranteed to succeed.

So much for the scientific arguments. Greene's own views on morality can be found in an opinion piece written for Nature Reviews Neuroscience[1]. Not surprisingly, Greene is on the side of the utilitarians. His opinions now fortified by empirical evidence of the sort described above, our utilitarian neuroscientist argues, in essence and in the manner of a Miss Universe contestant, that we should care more about the world in order to make it a better place. This simple message, however, is presented in the scholastic format fashionable among contemporary analytic philosophers, and deserves some unpacking.

Using yet another moral dilemma, taken from Peter Unger's work, Greene asks: why should we feel it is morally obligatory to help an injured man on the road, yet ignore letters asking for donations to respectable charities? Since there is no difference between these cases when analyzed in terms of total human welfare, he argues, our reaction results from our having evolved a brain that cares more about the personal than the impersonal, and so fails to take into account "impersonal" situations, such as the conditions of starving children in Africa. Our moral sentiments are therefore deficient in scope. For if viewed in terms of total welfare, not giving to charities is just as wrong as not helping the wounded on the road.

In this connection Greene mentions with approval the fact that Peter Singer, the famed utilitarian professor, donates about 20% of his income to charity. But if we must act on utilitarian assumptions, why stop at modest contributions? Why only 20%, why not all? If we really must perform the moral calculus as the utilitarians urge us, the only rational thing to do is to donate everything we have and starve to death. The money Greene spends on groceries is more than enough to feed, say, several starving babies in third-world countries. Surely starving oneself to death, in terms of the moral calculus, is no different from smothering the baby to save others, the paradigm of utilitarian thinking.

Why does such a stifling notion of moral responsibility seem ridiculous, especially when recommended by someone whose research requires hundreds of thousands of dollars in taxpayers' money? To answer this question it helps to remember that Greene's arguments are intended to attack morality that relies on universal principles. Such a morality contains very few positive recommendations, among which, as Greene suggests, is the rule that if we see a seriously injured man on the road we should help him. The prohibitions are rather more extensive: we should not murder, steal, rob, and so on. It is a morality of rather modest scope, in comparison to the utilitarian quest for cosmic justice.

Earlier I explained that utilitarians do things for the greater good, while deontologists follow absolute moral principles. This brief definition is not quite adequate, because it is not clear what is meant by "the greater good." For utilitarians like Greene, the greater good or welfare could be calculated by the individual agent based on his beliefs about the world; it is a product of individual calculation.

It was the central concern of Hayek, especially in his later years, to show why this assumption is not valid. According to Hayek, while it is often possible to calculate the immediate consequences of one's actions, it is nearly impossible to calculate, given the limited information available, the long-term consequences. But these can be discovered, albeit indirectly, simply by observing those rules that have survived the longest period of selection, that have been independently developed in various cultures, or, originating in one culture, have spread to others in the course of history. These rules and practices are themselves selected, the unit of selection being the group of human beings following them. They are universal due to their long-term consequences for the groups that follow them, and their existence implies some sort of overall advantage.

On this account, "moral calculus" is an oxymoron, because the whole purpose of morality is to get rid of the individual calculation. Utilitarians of all stripes mistake calculations of this type, which every primitive can perform, for rationality, and they think a cost-benefit analysis is necessary for behaving morally. What they fail to recognize, above all, is that there can be more intelligent ways of information gathering and "calculation" beyond the individual actor involved in the decision making. The development of morality transforms temporary action-outcome contingencies into perennial and universal rules. As Hume famously put it, "the rules of morality are not the conclusions of our reason." That moral principles are not necessarily based on reason does not invalidate them, because the product of a gradual process of selection can be superior to the results of individual calculation.

The above is the crux of Hayek's argument, which we do not need to accept in toto to see the flawed assumptions behind utilitarian thinking. As Hayek was fond of pointing out, it is often precisely the altruistic urges that are primitive, and drive the irrational behavior of so-called progressives. By contrast, universal principles derived from cultural selection avoid the individual bias that taints the utilitarian analysis. The instinctual altruism of doing visible good, for instance, is replaced by an impersonal system of coordinating resources, namely capitalism, which, not surprisingly, is the favorite target of those who can get no satisfaction of their primitive altruistic urges[4].

How theoretical altruism (pity for all in the abstract sense, delivered from the armchair) has become the opium of the intellectuals is a fascinating topic. Unfortunately the urge to do good, though overwhelming in our universities, more often than not stands in the way of accomplishing any good. It is, indeed, only with the decline of moral responsibility that there has emerged a large group of professional intellectuals whose job it is to proclaim concern for the poor and the suffering.

Professing care has become the latest function of the professors, and enhancing aggregate welfare the pet project of governments everywhere, much to the chagrin of those being cared for and whose welfare is considered in the aggregate. Just as a little sacrifice for the greater good has ever been the refrain of dictators justifying murder, plunder, and torture, so traditional moral prohibitions, those petty moral principles, must be trampled in the name of aggregate welfare, as indeed they have been.

Greene's second confusion concerns moral realism, the idea that moral truths are objective. He argues against moral realism, claiming that moral principles are subjective, that what is moral is only in the eyes of the beholder. For if there are no moral facts, only opinions, then there cannot be universal moral principles that everyone must follow.

Immediately, however, an objection arises: something being objective has nothing to do with whether we should follow it. For example, although the tree outside of my house exists objectively, independent of human beings, the tree-fact does not command my obedience. Love, on the other hand, is subjective, but we should not on that account dismiss it as irrelevant in human affairs, especially when choosing whom to marry. That moral principles are produced by humans, that they only apply to the human inhabitants of this planet for a brief time in its history, does not mean that they are to be ignored.

Moreover, moral realism cannot be experimentally tested by neuroscience. If you scan someone's brain while he is making moral decisions, you will find certain patterns of neural activation. But moral truths are not shown to be subjective merely because you find brain activation (or a change in heart rate, for that matter). Moral subjectivism can never be falsified by any experiment. Greene's confusion stems from a fundamental misunderstanding of what an empirical science can and cannot do; for him, neuroscience is just another tool with which to beat down opposing points of view.

With the scholastic and pseudoscientific façade removed, what Greene really wants to say is that, because people believe in objective moral truths, they are very dogmatic in that regard. And he detests such dogmatism. Like a good moral relativist, he wishes everyone to go on comfortably with his or her moral truths, a state of affairs that is supposed to produce peace and harmony. For example, if I believe it's wrong for you to steal my wallet, and you disagree because morals are subjective, I would simply shrug and walk away.

For Greene wants moral relativism, though he finds it easier to argue for moral subjectivism, or rather to attack moral realism. Accordingly, he believes that universal moral principles can be abolished by showing that there are no objective moral truths. Here we encounter a particularly interesting trait of many contemporary intellectuals, which explains the conceptual morass that Greene has gotten himself into. On the one hand, they believe that, so far as morality is concerned, we should not be so dogmatic; we should not believe so firmly in our moral principles, for others have their own.

On the other hand, they advocate some version of utilitarianism and collectivism deemed so mandatory that anyone questioning it must be labeled as primitive and stupid, like the deontologist whose emotional brain is overdeveloped.

Thus the goal of this concealed search for moral relativism is to get others to abandon their belief in a few moral principles; and thus Greene's studies are just an attempt to prove his own collectivist truth with brain science. Only in this case we discover a new twist. He uses facts about brain activity to argue that 1) there are no moral facts, it is all a matter of opinion; and 2) we should all become utilitarians and donate to charity.

To be fair, Greene did not urge us, à la the Communist Manifesto, to unite and become utilitarians immediately. He merely described the two sides for us. One side is controlled by the primitive emotional brain, which evolved before this world of multicultural community: a brain suitable for the harsh, prehistoric ghetto of our hunter-gatherer ancestors, who had no interest in peace or happiness. The other side, with its more evolved brain capable of cognitive control, is more rational and fit for the world today.

Our brains are a combination of the two, which are perpetually at war within our skull. Greene would have us believe that, given these facts, we would know which side to choose. But students of liberty should recognize in this instance just another collectivist fantasy. Greene's brand of neuroscience is not science, but a new addition to the category of "politics by other means."

----------

Lucretius is a neurobiologist living in Maryland. He will read the blog and answer comments there.

[1] Greene, J., From neural 'is' to moral 'ought': what are the moral implications of neuroscientific moral psychology?, Nat Rev Neurosci, 4 (2003) 846-9.

[2] Greene, J.D., Nystrom, L.E., Engell, A.D., Darley, J.M. and Cohen, J.D., The neural bases of cognitive conflict and control in moral judgment, Neuron, 44 (2004) 389-400.

[3] Greene, J.D., Sommerville, R.B., Nystrom, L.E., Darley, J.M. and Cohen, J.D., An fMRI investigation of emotional engagement in moral judgment, Science, 293 (2001) 2105-8.

[4] Hayek, F.A., The Fatal Conceit, The University of Chicago Press, Chicago, 1988.


quote:
But students of liberty should recognize in this instance just another collectivist fantasy. Greene's brand of neuroscience is not science, but a new addition to the category of "politics by other means."

Ever hear the old saying about "When you're a hammer, everything starts looking like a nail"? Well, I'm reminded of that when I read that phrase at the end of what appears to be an objective scientific paper, but which winds up being Yet Another "The Commies (collectivists) are out to get us using Yet Another Means to Take Over the World." (Shades of Pinky and the Brain! :rolleyes:)

It's like the knee-jerk approach that folks like Jesse Jackson and Al Sharpton use to portray everything thru 'racial lenses' (you know, there is a racial meaning behind everything, or at least most things, people say and do). So it is with this supposed 'mass collectivist hive mind' that 'students of liberty' are supposed to be on the lookout for.

Utilitarians? I've met a few, and quite a number of them are members of, friends of, or agreeable with the Libertarian Party and its philosophies. ... And you KNOW how Communist/Collectivist they aren't. ;)


quote:
In this connection Greene mentions with approval the fact that Peter Singer, the famed utilitarian professor, donates about 20% of his income to charity.

Peter Singer’s name recognition should be characterized in terms of infamy rather than fame.

It has sprung largely from his advocacy of a state of affairs in which parents would be allowed to have disabled newborns euthanized for up to 28 days or so following birth. Among things qualifying a baby for parentally approved extermination in Singer's view reportedly are Down’s Syndrome and hemophilia (see http://www.jewishworldreview.com/cols/hentoff091399.asp and http://www.michaelspecter.com/ny/1999/1999...hilosopher.html ).

Nat Hentoff, who has referred to Singer as an “apostle of infanticide,” quotes Singer as having written:

'"Human babies are not born self-aware, or capable of grasping that they exist over time. They are not persons."'

'"[T]he life of a newborn is of less value than the life of a pig, a dog, or a chimpanzee."'

(Italicization is mine.)

Singer also has some lower-tier notoriety due to a quasi-review (titled “Heavy Petting”) he did of a book on bestiality. In the piece, Singer, an animal rights whacko, conducted jihad against the distinctiveness of humans by minimizing the sexual barrier between man and beast. Commenting on an orangutan’s attempted rape of a human female, Singer maintained the attack should be inoffensive in respect to its being committed by an animal, though he recognized it as disturbing on the basis of its potential violence. It can be read at http://www.nerve.com/Opinions/Singer/heavyPetting/main.asp .


quote:
Nat Hentoff, who has referred to Singer as an “apostle of infanticide,” quotes Singer as having written:

'"Human babies are not born self-aware, or capable of grasping that they exist over time. They are not persons."'

'"[T]he life of a newborn is of less value than the life of a pig, a dog, or a chimpanzee."'

(Italicization is mine.)

Singer also has some lower-tier notoriety due to a quasi-review (titled “Heavy Petting”) he did of a book on bestiality. In the piece, Singer, an animal rights whacko, conducted jihad against the distinctiveness of humans by minimizing the sexual barrier between man and beast.

Sounds like a man vee pee and craig would really respect and love to have seen involved with TWI. Imagine, he coulda been on the BOD with those kinds of beliefs. :rolleyes:

