Sam Harris: Science can answer moral questions

It seems to me that we need not have any illusions about a causal agent living within the human mind to condemn such a mind as unethical, negligent, or even evil, and therefore liable to occasion further harm. What we condemn in another person is the intention to do harm—and thus any condition or circumstance (e.g., accident, mental illness, youth) that makes it unlikely that a person could harbor such an intention would mitigate guilt, without any recourse to notions of free will. Likewise, degrees of guilt could be judged, as they are now, by reference to the facts of the case: the personality of the accused, his prior offenses, his patterns of association with others, his use of intoxicants, his confessed intentions with regard to the victim, etc. If a person’s actions seem to have been entirely out of character, this will influence our sense of the risk he now poses to others. If the accused appears unrepentant and anxious to kill again, we need entertain no notions of free will to consider him a danger to society.

Of course, we hold one another accountable for more than those actions that we consciously plan, because most voluntary behavior comes about without explicit planning.108 But why is the conscious decision to do another person harm particularly blameworthy? Because consciousness is, among other things, the context in which our intentions become completely available to us. What we do subsequent to conscious planning tends to most fully reflect the global properties of our minds—our beliefs, desires, goals, prejudices, etc. If, after weeks of deliberation, library research, and debate with your friends, you still decide to kill the king—well, then killing the king really reflects the sort of person you are. Consequently, it makes sense for the rest of society to worry about you.

While viewing human beings as forces of nature does not prevent us from thinking in terms of moral responsibility, it does call the logic of retribution into question. Clearly, we need to build prisons for people who are intent upon harming others. But if we could incarcerate earthquakes and hurricanes for their crimes, we would build prisons for them as well.109 The men and women on death row have some combination of bad genes, bad parents, bad ideas, and bad luck—which of these quantities, exactly, were they responsible for? No human being stands as author to his own genes or his upbringing, and yet we have every reason to believe that these factors determine his character throughout life. Our system of justice should reflect our understanding that each of us could have been dealt a very different hand in life. In fact, it seems immoral not to recognize just how much luck is involved in morality itself.

Consider what would happen if we discovered a cure for human evil. Imagine, for the sake of argument, that every relevant change in the human brain can be made cheaply, painlessly, and safely. The cure for psychopathy can be put directly into the food supply like vitamin D. Evil is now nothing more than a nutritional deficiency.

If we imagine that a cure for evil exists, we can see that our retributive impulse is profoundly flawed. Consider, for instance, the prospect of withholding the cure for evil from a murderer as part of his punishment. Would this make any moral sense at all? What could it possibly mean to say that a person deserves to have this treatment withheld? What if the treatment had been available prior to the person’s crime? Would he still be responsible for his actions? It seems far more likely that those who had been aware of his case would be indicted for negligence. Would it make any sense at all to deny surgery to the man in example 5 as a punishment if we knew the brain tumor was the proximate cause of his violence? Of course not. The urge for retribution, therefore, seems to depend upon our not seeing the underlying causes of human behavior.

Despite our attachment to notions of free will, most of us know that disorders of the brain can trump the best intentions of the mind. This shift in understanding represents progress toward a deeper, more consistent, and more compassionate view of our common humanity—and we should note that this is progress away from religious metaphysics. It seems to me that few concepts have offered greater scope for human cruelty than the idea of an immortal soul that stands independent of all material influences, ranging from genes to economic systems.

And yet one of the fears surrounding our progress in neuroscience is that this knowledge will dehumanize us. Could thinking about the mind as the product of the physical brain diminish our compassion for one another? While it is reasonable to ask this question, it seems to me that, on balance, soul/body dualism has been the enemy of compassion. For instance, the moral stigma that still surrounds disorders of mood and cognition seems largely the result of viewing the mind as distinct from the brain. When the pancreas fails to produce insulin, there is no shame in taking synthetic insulin to compensate for its lost function. Many people do not feel the same way about regulating mood with antidepressants (for reasons that appear quite distinct from any concern about potential side effects). If this bias has diminished in recent years, it has been because of an increased appreciation of the brain as a physical organ.

However, the issue of retribution is a genuinely tricky one. In a fascinating article in The New Yorker, Jared Diamond recently wrote of the high price we often pay for leaving vengeance to the state.110 He compares the experience of his friend Daniel, a New Guinea highlander, who avenged the death of a paternal uncle and felt exquisite relief, to the tragic experience of his late father-in-law, who had the opportunity to kill the man who murdered his family during the Holocaust but opted instead to turn him over to the police. After spending only a year in jail, the killer was released, and Diamond’s father-in-law spent the last sixty years of his life “tormented by regret and guilt.” While there is much to be said against the vendetta culture of the New Guinea Highlands, it is clear that the practice of taking vengeance answers to a common psychological need.

We are deeply disposed to perceive people as the authors of their actions, to hold them responsible for the wrongs they do us, and to feel that these debts must be repaid. Often, the only compensation that seems appropriate requires that the perpetrator of a crime suffer or forfeit his life. It remains to be seen how the best system of justice would steward these impulses. Clearly, a full account of the causes of human behavior should undermine our natural response to injustice, at least to some degree. It seems doubtful, for instance, that Diamond’s father-in-law would have suffered the same pangs of unrequited vengeance if his family had been trampled by an elephant or laid low by cholera. Similarly, we can expect that his regret would have been significantly eased if he had learned that his family’s killer had lived a flawlessly moral life until a virus began ravaging his medial prefrontal cortex.

It may be that a sham form of retribution could still be moral, if it led people to behave far better than they otherwise would. Whether it is useful to emphasize the punishment of certain criminals—rather than their containment or rehabilitation—is a question for social and psychological science. But it seems quite clear that a retributive impulse, based upon the idea that each person is the free author of his thoughts and actions, rests on a cognitive and emotional illusion—and perpetuates a moral one.

It is generally argued that our sense of free will presents a compelling mystery: on the one hand, it is impossible to make sense of it in causal terms; on the other, there is a powerful subjective sense that we are the authors of our own actions.111 However, I think that this mystery is itself a symptom of our confusion. It is not that free will is simply an illusion: our experience is not merely delivering a distorted view of reality; rather, we are mistaken about the nature of our experience. We do not feel as free as we think we feel. Our sense of our own freedom results from our not paying attention to what it is actually like to be what we are. The moment we do pay attention, we begin to see that free will is nowhere to be found, and our subjectivity is perfectly compatible with this truth. Thoughts and intentions simply arise in the mind. What else could they do? The truth about us is stranger than many suppose: The illusion of free will is itself an illusion.

Oops, should have listened to this before uploading. Skip Chapter 3 - Belief.mp3 and download this instead:

http://thepiratebay.org/torrent/5922347/The_Moral_Landscape_by_Sam_Harris_-_fixed_audio_files

I also like the way Sam answers this question in chapter 3

[b]Do We Have Freedom of Belief?[/b]

While belief might prove difficult to pinpoint in the brain, many of its mental properties are plain to see. For instance, people do not knowingly believe propositions for bad reasons. If you doubt this, imagine hearing the following account of a failed New Year’s resolution:

This year, I vowed to be more rational, but by the end of January, I found that I had fallen back into my old ways, believing things for bad reasons. Currently, I believe that robbing others is a harmless activity, that my dead brother will return to life, and that I am destined to marry Angelina Jolie, just because these beliefs make me feel good.

This is not how our minds work. A belief—to be actually believed—entails the corollary belief that we have accepted it because it seems to be true. To really believe a proposition—whether about facts or values—we must also believe that we are in touch with reality in such a way that if it were not true, one would not believe it. We must believe, therefore, that we are not flagrantly in error, deluded, insane, self-deceived, etc. While the preceding sentences do not suffice as a full account of epistemology, they go a long way toward uniting science and common sense, as well as reconciling their frequent disagreements. There can be no doubt that there is an important difference between a belief that is motivated by an unconscious emotional bias (or other nonepistemic commitments) and a belief that is comparatively free of such bias.

And yet many secularists and academics imagine that people of faith knowingly believe things for reasons that have nothing to do with their perception of the truth. A written debate I had with Philip Ball—who is a scientist, a science journalist, and an editor at Nature—brought this issue into focus. Ball thought it reasonable for a person to believe a proposition just because it makes him “feel better,” and he seemed to think that people are perfectly free to acquire beliefs in this way. People often do this unconsciously, of course, and such motivated reasoning has been discussed above. But Ball seemed to think that beliefs can be consciously adopted simply because a person feels better while under their spell. Let’s see how this might work. Imagine someone making the following statement of religious conviction:

I believe Jesus was born of a virgin, was resurrected, and now answers prayers because believing these things makes me feel better. By adopting this faith, I am merely exercising my freedom to believe in propositions that make me feel good.

How would such a person respond to information that contradicted his cherished belief? Given that his belief is based purely on how it makes him feel, and not on evidence or argument, he shouldn’t care about any new evidence or argument that might come his way. In fact, the only thing that should change his view of Jesus is a change in how the above propositions make him feel. Imagine our believer undergoing the following epiphany:

For the last few months, I’ve found that my belief in the divinity of Jesus no longer makes me feel good. The truth is, I just met a Muslim woman who I greatly admire, and I want to ask her out on a date. As Muslims believe Jesus was not divine, I am worried that my belief in the divinity of Jesus could hinder my chances with her. As I do not like feeling this way, and very much want to go out with this woman, I now believe that Jesus was not divine.

Has a person like this ever existed? I highly doubt it. Why do these thoughts not make any sense? Because beliefs are intrinsically epistemic: they purport to represent the world as it is. In this case, our man is making specific claims about the historical Jesus, about the manner of his birth and death, and about his special connection to the Creator of the Universe. And yet while claiming to represent the world in this way, it is perfectly clear that he is making no effort to stay in touch with the features of the world that should inform his belief. He is only concerned about how he feels. Given this disparity, it should be clear that his beliefs are not based on any foundation that would (or should) justify them to others, or even to himself.

Of course, people do often believe things in part because these beliefs make them feel better. But they do not do this in the full light of consciousness. Self-deception, emotional bias, and muddled thinking are facts of human cognition. And it is a common practice to act as if a proposition were true, in the spirit of: “I’m going to act on X because I like what it does for me and, who knows, X might be true.” But these phenomena are not at all the same as knowingly believing a proposition simply because one wants it to be true.

Strangely, people often view such claims about the constraints of rationality as a sign of “intolerance.” Consider the following from Ball:

I do wonder what [Sam Harris] is implying here. It is hard to see it as anything other than an injunction that “you should not be free to choose what you believe.” I guess that if all Sam means is that we should not leave people so ill-informed that they have no reasonable basis on which to make those decisions, then fair enough. But it does seem to go further—to say that “you should not be permitted to choose what you believe, simply because it makes you feel better.” Doesn’t this sound a little like a Marxist denouncement of “false consciousness,” with the implication that it needs to be corrected forthwith? I think (I hope?) we can at least agree that there are different categories of belief—that to believe one’s children are the loveliest in the world because that makes you feel better is a permissible (even laudable) thing. But I slightly shudder at the notion, hinted here, that a well-informed person should not be allowed to choose their belief freely … surely we cannot let ourselves become proscriptive to this degree?70

What cognitive freedom is Ball talking about? I happen to believe that George Washington was the first president of the United States. Have I, on Ball’s terms, chosen this belief “freely”? No. Am I free to believe otherwise? Of course not. I am a slave to the evidence. I live under the lash of historical opinion. While I may want to believe otherwise, I simply cannot overlook the incessant pairing of the name “George Washington” with the phrase “first president of the United States” in any discussion of American history. If I wanted to be thought an idiot, I could profess some other belief, but I would be lying. Likewise, if the evidence were to suddenly change—if, for instance, compelling evidence of a great hoax emerged and historians reconsidered Washington’s biography, I would be helplessly stripped of my belief—again, through no choice of my own. Choosing beliefs freely is not what rational minds do.

This does not mean, of course, that we have no mental freedom whatsoever. We can choose to focus on certain facts to the exclusion of others, to emphasize the good rather than the bad, etc. And such choices have consequences for how we view the world. One can, for instance, view Kim Jong-il as an evil dictator; one can also view him as a man who was once the child of a dangerous psychopath. Both statements are, to a first approximation, true. (Obviously, when I speak about “freedom” and “choices” of this sort, I am not endorsing a metaphysical notion of “free will.”)

As to whether there are “different categories of belief”: perhaps, but not in the way that Ball suggests. I happen to have a young daughter who does strike me as the “loveliest in the world.” But is this an accurate account of what I believe? Do I, in other words, believe that my daughter is really the loveliest girl in the world? If I learned that another father thought his daughter the loveliest in the world, would I insist that he was mistaken? Of course not. Ball has mischaracterized what a proud (and sane and intellectually honest) father actually believes. Here is what I believe: I believe that I have a special attachment to my daughter that largely determines my view of her (which is as it should be). I fully expect other fathers to have a similar bias toward their own daughters. Therefore, I do not believe that my daughter is the loveliest girl in the world in any objective sense. Ball is simply describing what it’s like to love one’s daughter more than other girls; he is not describing belief as a representation of the world. What I really believe is that my daughter is the loveliest girl in the world for me.

One thing that both factual and moral beliefs generally share is the presumption that we have not been misled by extraneous information.71 Situational variables, like the order in which unrelated facts are presented, or whether identical outcomes are described in terms of gains or losses, should not influence the decision process. Of course, the fact that such manipulations can strongly influence our judgment has given rise to some of the most interesting work in psychology. However, a person’s vulnerability to such manipulations is never considered a cognitive virtue; rather, it is a source of inconsistency that cries out for remedy.

Consider one of the more famous cases from the experimental literature, The Asian Disease Problem:72

Imagine that the United States is preparing for the outbreak of an unusual Asian disease, which is expected to kill 600 people. Two alternative programs to combat the disease have been proposed. Assume that the exact scientific estimates of the consequences of the programs are as follows:


If Program A is adopted, 200 people will be saved.


If Program B is adopted, there is a one-third probability that 600 people will be saved and a two-thirds probability that no people will be saved.

Which one of the two programs would you favor?

In this version of the problem, a significant majority of people favor Program A. The problem, however, can be restated this way:

If Program A is adopted, 400 people will die.


If Program B is adopted, there is a one-third probability that nobody will die and a two-thirds probability that 600 people will die.


Which one of the two programs would you favor?

Put this way, a majority of respondents will now favor Program B. And yet there is no material or moral difference between these two scenarios, because their outcomes are the same. What this shows is that people tend to be risk-averse when considering potential gains and risk seeking when considering potential losses, so describing the same event in terms of gains and losses evokes different responses. Another way of stating this is that people tend to overvalue certainty: finding the certainty of saving life inordinately attractive and the certainty of losing life inordinately painful. When presented with the Asian Disease Problem in both forms, however, people agree that each scenario merits the same response. Invariance of reasoning, both logical and moral, is a norm to which we all aspire. And when we catch others departing from this norm, whatever the other merits of their thinking, the incoherency of their position suddenly becomes its most impressive characteristic.
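The claim that the two framings describe the same outcome is easy to check with a little arithmetic. The following sketch (plain Python, using only the 600-person figure and the probabilities stated in the problem) computes the expected result of each program under both framings:

```python
# Asian Disease Problem: 600 people at risk in both framings.
at_risk = 600

# "Gain" frame: Program A saves 200 for certain; Program B saves 600
# with probability 1/3 and nobody with probability 2/3.
saved_A = 200
expected_saved_B = (1 / 3) * 600 + (2 / 3) * 0

# "Loss" frame: under Program A, 400 die for certain; under Program B,
# nobody dies with probability 1/3 and 600 die with probability 2/3.
deaths_A = 400
expected_deaths_B = (1 / 3) * 0 + (2 / 3) * 600

print(saved_A, expected_saved_B)      # 200 200.0
print(deaths_A, expected_deaths_B)    # 400 400.0

# The two frames are arithmetically identical descriptions:
print(at_risk - saved_A == deaths_A)  # True
```

In expectation, Program B is identical to Program A in both frames; only the gain/loss wording differs, which is why the preference reversal counts as a framing effect rather than a rational disagreement.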

Of course, there are many other ways in which we can be misled by context. Few studies illustrate this more powerfully than one conducted by the psychologist David L. Rosenhan,73 in which he and seven confederates had themselves committed to psychiatric hospitals in five different states in an effort to determine whether mental health professionals could detect the presence of the sane among the mentally ill. In order to get committed, each researcher complained of hearing a voice repeating the words “empty,” “hollow,” and “thud.” Beyond that, each behaved perfectly normally. Upon winning admission to the psychiatric ward, the pseudopatients stopped complaining of their symptoms and immediately sought to convince the doctors, nurses, and staff that they felt fine and were fit to be released. This proved surprisingly difficult. While these genuinely sane patients wanted to leave the hospital, repeatedly declared that they experienced no symptoms, and became “paragons of cooperation,” their average length of hospitalization was nineteen days (ranging from seven to fifty-two days), during which they were bombarded with an astounding range of powerful drugs (which they discreetly deposited in the toilet). None were pronounced healthy. Each was ultimately discharged with a diagnosis of schizophrenia “in remission” (with the exception of one who received a diagnosis of bipolar disorder). Interestingly, while the doctors, nurses, and staff were apparently blind to the presence of normal people on the ward, actual mental patients frequently remarked on the obvious sanity of the researchers, saying things like “You’re not crazy. You’re a journalist.”

In a brilliant response to the skeptics at one hospital who had heard of this research before it was published, Rosenhan announced that he would send a few confederates their way and challenged them to spot the coming pseudopatients. The hospital kept vigil, while Rosenhan, in fact, sent no one. This did not stop the hospital from “detecting” a steady stream of pseudopatients. Over a period of a few months fully 10 percent of their new patients were deemed to be shamming by both a psychiatrist and a member of the staff. While we have all grown familiar with phenomena of this sort, it is startling to see the principle so clearly demonstrated: expectation can be, if not everything, almost everything. Rosenhan concluded his paper with this damning summary: “It is clear that we cannot distinguish the sane from the insane in psychiatric hospitals.”

There is no question that human beings regularly fail to achieve the norms of rationality. But we do not merely fail—we fail reliably. We can, in other words, use reason to understand, quantify, and predict our violations of its norms. This has moral implications. We know, for instance, that the choice to undergo a risky medical procedure will be heavily influenced by whether its possible outcomes are framed in terms of survival rates or mortality rates. We know, in fact, that this framing effect is no less pronounced among doctors than among patients.74 Given this knowledge, physicians have a moral obligation to handle medical statistics in ways that minimize unconscious bias. Otherwise, they cannot help but inadvertently manipulate both their patients and one another, guaranteeing that some of the most important decisions in life will be unprincipled.75

Admittedly, it is difficult to know how we should treat all of the variables that influence our judgment about ethical norms. If I were asked, for instance, whether I would sanction the murder of an innocent person if it would guarantee a cure for cancer, I would find it very difficult to say “yes,” despite the obvious consequentialist argument in favor of such an action. If I were asked to impose a one in a billion risk of death on everyone for this purpose, however, I would not hesitate. The latter course would be expected to kill six or seven people, and yet it still strikes me as obviously ethical. In fact, such a diffusion of risk aptly describes how medical research is currently conducted. And we routinely impose far greater risks than this on friends and strangers whenever we get behind the wheel of our cars. If my next drive down the highway were guaranteed to deliver a cure for cancer, I would consider it the most ethically important act of my life. No doubt the role that probability is playing here could be experimentally calibrated. We could ask subjects whether they would impose a 50 percent chance of death upon two innocent people, a 10 percent chance on ten innocent people, etc. How we should view the role that probability plays in our moral judgments is not clear, however. It seems difficult to imagine ever fully escaping such framing effects.
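The expected-death arithmetic behind the "six or seven people" figure, and behind the calibration experiments the paragraph suggests, can be sketched in a few lines. The ~6.8 billion world population is an assumption here, roughly the figure around the book's 2010 publication:

```python
# Diffuse risk: a one-in-a-billion chance of death imposed on everyone.
population = 6.8e9        # assumed world population, circa 2010
risk_per_person = 1e-9

expected_deaths = population * risk_per_person
print(round(expected_deaths, 1))  # 6.8 -- Harris's "six or seven people"

# The proposed calibration experiments hold the expected toll fixed while
# varying how it is distributed: a 50% chance imposed on 2 people and a
# 10% chance imposed on 10 people both have an expectation of 1 death.
print(0.5 * 2, 0.1 * 10)
```

The point of such a calibration would be to measure how moral intuitions shift as the same expected harm is spread over more people at lower individual probability.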

Science has long been in the values business. Despite a widespread belief to the contrary, scientific validity is not the result of scientists abstaining from making value judgments; rather, scientific validity is the result of scientists making their best effort to value principles of reasoning that link their beliefs to reality, through reliable chains of evidence and argument. This is how norms of rational thought are made effective.

To say that judgments of truth and goodness both invoke specific norms seems another way of saying that they are both matters of cognition, as opposed to mere sentiment. That is why one cannot defend one’s factual or moral position by reference to one’s preferences. One cannot say that water is H2O or that lying is wrong simply because one wants to think this way. To defend such propositions, one must invoke a deeper principle. To believe that X is true or that Y is ethical is also to believe others should share these beliefs under similar circumstances.

The answer to the question “What should I believe, and why should I believe it?” is generally a scientific one. Believe a proposition because it is well supported by theory and evidence; believe it because it has been experimentally verified; believe it because a generation of smart people have tried their best to falsify it and failed; believe it because it is true (or seems so). This is a norm of cognition as well as the core of any scientific mission statement. As far as our understanding of the world is concerned—there are no facts without values.

Precis please - you’re getting as bad as TeleoPhroners.

Free will isn’t even an illusion but morality is still real and there’s no such thing as freedom of belief.

Exactly how the emergence of observable neural signatures – signalling a certain action or response an appreciable time interval before that action or response enters the subject’s consciousness – negates free will is clear only to Sam Harris. He seems to think that without self-awareness there can be no free will. You don’t need to think very hard to see that this is nonsense.

'Luthon64

Do the neural signals travel back in time? Is it Quantum?

Ask Sam Harris. He seems to know all the answers.

'Luthon64

My reading of the long introduction to his book suggests that Sam has seen work similar to that described in the following quote from the intro, and that he foresees this research expanding to explain how our thoughts, actions and feelings might be partially influenced by the chemistry of our brains. That chemistry might well be a result of past experiences and of nurture as babies. He foresees denial of and opposition to such research from both scientists and the religious.
As far as Sam Harris is concerned, the current expansion of our understanding of the human brain and the processes that take place in it is the next frontier of human understanding and research. Nor does he foresee a quick or easy resolution.

From “The Moral Landscape” Sam Harris.

Consider, for instance, the connection between early childhood experience, emotional bonding, and a person’s ability to form healthy relationships later in life. We know, of course, that emotional neglect and abuse are not good for us, psychologically or socially. We also know that the effects of early childhood experience must be realized in the brain. Research on rodents suggests that parental care, social attachment, and stress regulation are governed, in part, by the hormones vasopressin and oxytocin,11 because they influence activity in the brain’s reward system. When asking why early childhood neglect is harmful to our psychological and social development, it seems reasonable to think that it might result from a disturbance in this same system. While it would be unethical to deprive young children of normal care for the purposes of experiment, society inadvertently performs such experiments every day. To study the effects of emotional deprivation in early childhood, one group of researchers measured the blood concentrations of oxytocin and vasopressin in two populations: children raised in traditional homes and children who spent their first years in an orphanage.12 As you might expect, children raised by the State generally do not receive normal levels of nurturing. They also tend to have social and emotional difficulties later in life. As predicted, these children failed to show a normal surge of oxytocin and vasopressin in response to physical contact with their adoptive mothers.

Reading this and other related threads on the forum, the essential question is, was, and will remain what we really mean by “free will.” I do not deny – and have never denied – that our choices and reflections are constrained by various factors, not least a host of physiological ones. What has always intrigued me, and will no doubt continue to intrigue me until someone at least shows just cause for it, is exactly how these observations rule out free will. Limit it, yes, but nobody who’s thought about it for even a second will reasonably attempt to equate “free will” with wholly unrestrained choice. But make choices entirely free of discretion? You’ll need to present a much more cohesive and persuasive argument than Harris’s, or than any other free-will denier has managed thus far. My view on this particular issue is that it is one area where reductionism has most obviously failed to illuminate the whole picture.

Moreover, to argue for an in-principle incompatibility between our materialistic (philosophically speaking) conceptions of mind/brain and such ideas as intentionality and will (free or otherwise) is to ignore the entire realm of emergent phenomena and the infancy in which the physics thereof presently finds itself. As said before, if free will is an illusion (or, “not even [that],” according to the latest info), the proponents of this stance have yet to give any kind of account of either the reasons for, or the utility of, this alleged illusion – i.e. why do we have this illusion? Instead of acknowledging this grave and crucial difficulty, we are treated to a series of stale samey-samey rationalisations.

'Luthon64

Agreed, if there was an essential question that would be it. I still have no idea what it is you really mean by “free will” or how you differentiate it from ordinary, fully caused will. If there is an essential difference I don’t see it.

But neither Sam nor I are ignoring intentionality and will, these are real emergent phenomena, I just don’t see how it is meaningful to call them free. From the above:

Of course, there is a distinction between voluntary and involuntary actions, but it does nothing to support the common idea of free will (nor does it depend upon it). The former are associated with felt intentions (desires, goals, expectations, etc.) while the latter are not. All of the conventional distinctions we like to make between degrees of intent—from the bizarre neurological complaint of alien hand syndrome to the premeditated actions of a sniper—can be maintained: for they simply describe what else was arising in the mind at the time an action occurred. A voluntary action is accompanied by the felt intention to carry it out, while an involuntary action isn’t. Where our intentions themselves come from, however, and what determines their character in every instant, remains perfectly mysterious in subjective terms. Our sense of free will arises from a failure to appreciate this fact: we do not know what we will intend to do until the intention itself arises. To see this is to realize that you are not the author of your thoughts and actions in the way that people generally suppose. This insight does not make social and political freedom any less important, however. The freedom to do what one intends, and not to do otherwise, is no less valuable than it ever was.

Sam Harris wants to disregard the is/ought, fact/value dichotomies and in doing so circumvent the naturalistic fallacy and claim that science can determine an ought from an is.

I wonder if he is aware of natural law theory and whether he realises which direction he is going with his “new” “idea”. Does he even have some sort of understanding of the terms “good” and “goodness”?

As is quite typical of discussions of Harris’s work (and he’s not the first - read Binmore, for example), people appeal to the authority of Hume’s Guillotine without bothering to read the text carefully. In this, I include many biggish-name moral philosophers. As a non-big-name ex-moral-philosopher, I wish that more participants in this debate would be willing to contemplate the notions that a) if Hume were around now, he’d be one of Harris’s loudest cheerleaders, seeing as his primary commitment was to empiricism, and b) traditional moral philosophy is by and large useless, because many of its premises can now be informed, and in many cases disproved, by knowledge that wasn’t available to the traditional names still cited as authorities.

Hume said (emphasis added):

In every system of morality, which I have hitherto met with, I have always remark'd, that the author proceeds for some time in the ordinary ways of reasoning, and establishes the being of a God, or makes observations concerning human affairs; when all of a sudden I am surpriz'd to find, that instead of the usual copulations of propositions, is, and is not, I meet with no proposition that is not connected with an ought, or an ought not. This change is imperceptible; but is however, of the last consequence. For as this ought, or ought not, expresses some new relation or affirmation, [i]'tis necessary that it shou'd be observ'd and explain'd; and at the same time that a reason should be given[/i]; for what seems altogether inconceivable, how this new relation can be a deduction from others, which are entirely different from it.

In other words, he says people do a bad job of it - not that the job is impossible.

Hume was also aware of the limitations of empiricism.

Traditional moral philosophy is by and large useless? I am curious what exactly you understand about “traditional moral philosophy” to make such a sweeping statement. Have you read anything by Alasdair MacIntyre and Philippa Foot?

You might recall that I’ve given up on trying to engage in substantive discussion with you. If not, this is a reminder. As for your question, I’ll go with a simple “yes”.

You might also recall that in your “attempt” to engage in “substantive discussion” with me, you insinuated that others share my delusions, basically saying I am delusional. Surely you can try harder?

A simple yes to what btw?

Q.E.D.

I doubt that. Let him answer though.

I don’t have a problem with the concept that scientific methods can be applied in the pursuit of morals. Where I have some differences with Harris’s approach, it is not because said approach is too empirical, but rather because it is not scientific enough. The issue of free will has not been resolved beyond reasonable doubt. I have expressed my views regarding free will in the Naturalism thread and do not wish to become repetitive.

The introduction of “freedom of religion” here can be confusing. The term is usually associated with a civil right. Neither this nor a neurological lack of choice à la “absence of free will” seems to be topical here. What Harris reasons is that we cannot choose to believe something just because it suits us. I think he has a point when it comes to the rational, skeptical mind, but that his argument does not apply to all people. There are people who filter facts and accept only those that support their beliefs, while rejecting others. As an opponent put it in a school debate many years ago: “Should there be a conflict between religion and science, we do not need to accept that scientific evidence.” This deliberate rejection of facts that do not fit in with one’s beliefs is not restricted to religious beliefs. In the weeks before Sharemax went belly-up, indignant shareholders vilified the financial journalists who were pointing out obvious discrepancies in its financial figures.

Irrational people can consciously choose beliefs.