The Blue Brain Blues - Materialist ethics and simulated minds

Just spotted it on RD.net. Been thinking about this for quite a while myself:

By STEVE ZARA - RD.NET Added: Sunday, 18 July 2010 at 10:46 PM

A materialist does not believe in magic. A materialist does not believe that anything more than the interactions of forces and particles in the physical world is needed to explain that world and everything in it. Not many people are materialists; the majority of those alive, and who have lived, believe that there are extra aspects to the world, usually termed “spiritual” or “supernatural”. But we who don’t subscribe to the idea of those extras are increasing in number.

However, I am not going to argue here the truth or otherwise of the materialist view. What I want to show is that it has consequences. Serious moral consequences, and in an area of research and technology that will be of increasing importance to humanity. The moral consequences may be surprising, and yet I will suggest that they follow inevitably from the materialist position. And, for reasons I will explain later, they may - and I feel should – change the way that certain scientists and technologists approach their work.

http://richarddawkins.net/articles/490048-the-blue-brain-blues-materialist-ethics-and-simulated-minds

Here’s what Dawkins had to say:

Comment 4 by Richard Dawkins

Steve Zara’s point is either true, in which case we need to have an important moral conversation, or false, in which case we have to worry about the truth of materialism itself (which I don’t for one moment). It may indeed be that ‘Blue Brain’ technology is away in the future, but one of the things moral philosophers do is anticipate moral dilemmas that future technologies may bring. And conventional AI, too, would raise a similar moral question, which might hit us rather sooner.

Steve’s article provoked me to think. Thank you.

Richard

What do you guys think? Would you feel empathy for a simulated human mind?

I’m also quite interested to know if theists would feel sorry for the ‘Blue Brain’ and if so why? One of the commenters who professed to be a “naturalistic dualist” (whatever that is) agreed rather grumpily.

Doesn’t anyone care? I mean, I get it that we want artificial minds; they would be very useful. But if we have to torture the prototypes to develop them, it may not be worth it. Come on, please somebody respond. Even if it’s just to tell me I’m being dumb.

42

Yeah, bet Marvin would just love that. ;D

Here’s the TED talk by Henry Markram, the director of Blue Brain:

Would mammalian attributes built into Blue Brain be actual consciousness or remain simulations thereof? I find it hard to imagine compassion for a machine as long as it is not a form of life.

Well, from a materialist’s perspective there is no essential difference between a simulation of consciousness and consciousness. Presumably, the more realistic the simulation, the more real it would feel.

Here’s another comment I quite liked:

Comment 73 by curby
  1. You can’t simulate consciousness and you can’t simulate pain.

There are three kinds of things in the universe: matter, energy, and information. Things generally include all 3. For example, what makes a cow a cow are the carbon, oxygen, nitrogen, hydrogen, (and other) atoms, the stored chemical and electrical energy, and how all the components are related to each other (the information) which determines how they will behave over time.

Simulation is the process of extracting the (relevant) information of an object and reimplementing it in a different medium (usually a computer). So it makes sense to simulate a traffic jam or the weather, but it doesn’t make sense to simulate pure information itself. What could it possibly mean to simulate a poem or a piece of music? When you upload a music file to your iPod you’re not “simulating” the music, you’re copying it.

To the best of our knowledge, the mind, our consciousness, and everything that’s involved (including pain) is information implemented in the brain: software running on hardware. Of course, it’s possible that consciousness depends on some form of exotic matter or energy that would be lost in a simulation, but the burden of proof lies most definitely with the proponents of such an idea. The atoms that make up the brain are constantly being exchanged through normal metabolism, and no exotic “soul” matter or energy has ever been detected, and there’s not even an understood way for something like that to evolve or appear in a fetus/baby.

Because our consciousness is information, we can’t simulate it; we can only implement it in different media (e.g., wet brains and silicon computers). And just like it doesn’t matter whether a piece of music is encoded on an Apple or on a PC, it also doesn’t matter what medium a consciousness is encoded on.

So any pain that an artificial agent will feel will be very real.

  2. Intelligence and agency inherently involve suffering of one form or another.

Any intelligent agent is by its very nature goal driven. It strives to achieve its goals and it prefers some states to others. An agent’s goals and preferences are always defined externally. For humans, our goals/preferences are defined by our genes. This includes primitive goals/preferences such as: sex is good and tissue damage is bad. It also includes more complex goals/preferences: having respect and a high status in our community is good, members of our tribe suffering is bad.

Pain and suffering for an agent is being in a state that is defined to be negative for that agent (for humans, a negative state would be one where we perceive tissue damage, or a state where we perceive ourselves to be despised by everyone around us). Similarly pleasure and happiness are states defined to be positive for an agent.

A chess-playing program is a goal-driven, intelligent agent (though its intelligence is very limited). It prefers to be in a state where it is winning and it tries to avoid states where it is losing. In a very real sense, the program suffers when it finds itself in a bad position during a game of chess, and it will try to desperately act to get itself out of such a situation. Though this kind of pain is probably comparable to the pain felt by a shrimp or an insect, it does raise an important question. At what point is a goal-driven system intelligent enough that its pain becomes morally significant?

Note that we can’t avoid this problem by simply creating an agent that is always happy. Any intelligent agent that we build that has no goals to achieve and that is in the most preferable state that it can be (however we want to define such a state) will not do anything at all. It will have no rational reason to do so (you can do the same thing with humans by constantly pumping them full of morphine). It’s a bit pointless to create an intelligent agent just so that it can sit around doing nothing but feeling happy.

We define the goals and preferences of the agents we create. Is it ethical to create a human-level intelligent being and specify its goals and preferences so that it desperately wants to be our slave and servant, and it is desperately unhappy if it can’t?
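
Curby’s chess example got me playing around, so here’s a toy sketch of what a “goal-driven agent” amounts to (my own Python doodle, nothing to do with Blue Brain or any real chess engine; the little number-line world and all the names are made up). The agent’s “preferences” are nothing more than an evaluation function handed to it from outside, “suffering” is just sitting in a low-valued state, and the last couple of lines show curby’s other point: if every state is rated equally pleasant, the agent has no reason to do anything at all.

[code]
# Toy goal-driven agent in curby's sense: its "preferences" are just an
# externally supplied evaluation function over states. Illustrative sketch only.

def greedy_agent(state, actions, evaluate):
    """Return the action leading to the most preferred successor state,
    or None if no action improves on staying put."""
    best_action, best_value = None, evaluate(state)
    for action in actions:
        value = evaluate(action(state))
        if value > best_value:
            best_action, best_value = action, value
    return best_action

# A one-dimensional world: the state is a number and the externally defined
# goal is to reach 10. "Pain" is simply distance from the goal.
def evaluate(state):
    return -abs(10 - state)

actions = [lambda s: s + 1, lambda s: s - 1]

state = 3
while (act := greedy_agent(state, actions, evaluate)) is not None:
    state = act(state)
print(state, evaluate(state))          # 10 0 -- the agent sits in its "happiest" state

# An agent whose every state is equally pleasant has no rational reason to act.
always_content = lambda s: 0
print(greedy_agent(3, actions, always_content))   # None -- nothing is preferred
[/code]

None of which settles whether such a thing “really” suffers, of course; curby’s question is at what level of complexity that kind of negative valuation starts to matter morally.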

Well, at least there are lots of comments on Dawkins’ site. Reading them all got me thinking of Marvin again:

"Listen," said Ford, who was still engrossed in the sales brochure, "they make a big thing of the ship's cybernetics. A new generation of Sirius Cybernetics Corporation robots and computers, with the new GPP feature."

“GPP feature?” said Arthur. “What’s that?”

“Oh, it says Genuine People Personalities.”

“Oh,” said Arthur, “sounds ghastly.”

A voice behind them said, “It is.” The voice was low and hopeless and accompanied by a slight clanking sound. They span round and saw an abject steel man standing hunched in the doorway.

“What?” they said.

“Ghastly,” continued Marvin, “it all is. Absolutely ghastly. Just don’t even talk about it. Look at this door,” he said, stepping through it. The irony circuits cut into his voice modulator as he mimicked the style of the sales brochure. “All the doors in this spaceship have a cheerful and sunny disposition. It is their pleasure to open for you, and their satisfaction to close again with the knowledge of a job well done.” As the door closed behind them it became apparent that it did indeed have a satisfied sigh-like quality to it.

“Hummmmmmmyummmmmmm ah!” it said.

Marvin regarded it with cold loathing whilst his logic circuits chattered with disgust and tinkered with the concept of directing physical violence against it. Further circuits cut in saying, Why bother? What’s the point? Nothing is worth getting involved in. Further circuits amused themselves by analysing the molecular components of the door, and of the humanoids’ brain cells. For a quick encore they measured the level of hydrogen emissions in the surrounding cubic parsec of space and then shut down again in boredom. A spasm of despair shook the robot’s body as he turned.

“Come on,” he droned, “I’ve been ordered to take you down to the bridge. Here I am, brain the size of a planet and they ask me to take you down to the bridge. Call that job satisfaction? 'Cos I don’t.”

The Hitchhiker’s Guide to the Galaxy - Douglas Adams

;D

But my dear friend Peter,

How can you espouse compassion for Blue Brain and then poke fun at poor, depressed Marvin?
Are you now going to mock robots at traffic intersections and make them feel bad? You must consider that, just like you, they have no free will. I think they have a point. And the robots at rail intersections have an even stronger point. I think you should admire that and stop teasing them.

The grin was directed at the reader. ;D I feel more empathy for Marvin than I do for most fictional characters.

Oh sure, take that Mary chick who claimed she was a virgin after cheating on her fiancé and falling pregnant, for instance.

Been carefully considering my reply. Actually, in such a situation I might conceivably feel compassion for Mary. She may have been raped by a Roman legionnaire, who knows? They used to stone people at the time. Also, the virgin birth part of the myth may have been added later.

There’s a summary of Steve’s responses to arguments here on his new blog:

http://zarbi.posterous.com/blue-brain-blues-discussions

I especially liked his conclusion:

[i]If what you say is true, shouldn't materialists have a moral responsibility to campaign against brain simulation research, or at least insist that such research is performed with caution, and monitored?[/i]

Yes! That was one of the reasons I wrote the article. Because it would be such a fascinating situation. It’s not unusual for religious people to campaign against the progress of science in certain areas (such as stem cell research). The brain simulation situation is one where a moral objection to research could come from a position of hard materialism. I found it really interesting that this would be a case of materialist atheists not only having a moral position, but having that moral position as a direct consequence of their belief that there is only a physical world, and minds don’t come from or connect to a supernatural realm. To put it more simply, discovering a situation where it is likely that some atheists felt the need to campaign against some scientific research because of their atheism seemed pretty astonishing to me, and I thought it would be interesting to discuss even if only for that aspect.

Damn. I read “Blue Train” and thought this would be an interesting thread.

Did you watch the TED talk? I thought that was rather exciting.