Well, from a materialist’s perspective there is no essential difference between a simulation of consciousness and consciousness. Presumably, the more realistic the simulation, the more real it would feel.
Comment 73 by curby
- You can’t simulate consciousness and you can’t simulate pain.
There are three kinds of things in the universe: matter, energy, and information. Things generally include all three. For example, what makes a cow a cow is its carbon, oxygen, nitrogen, hydrogen, and other atoms, the stored chemical and electrical energy, and how all the components are related to each other (the information), which determines how they will behave over time.
Simulation is the process of extracting the (relevant) information from an object and reimplementing it in a different medium (usually a computer). So it makes sense to simulate a traffic jam or the weather, but it doesn’t make sense to simulate pure information itself. What could it possibly mean to simulate a poem or a piece of music? When you upload a music file to your iPod, you’re not “simulating” the music; you’re copying it.
To the best of our knowledge, the mind, our consciousness, and everything that’s involved (including pain) is information implemented in the brain: software running on hardware. Of course, it’s possible that consciousness depends on some form of exotic matter or energy that would be lost in a simulation, but the burden of proof lies most definitely with the proponents of such an idea. The atoms that make up the brain are constantly being exchanged through normal metabolism, no exotic “soul” matter or energy has ever been detected, and there’s not even an understood way for something like that to evolve or appear in a fetus or baby.
Because our consciousness is information, we can’t simulate it; we can only implement it in different media (e.g., wet brains and silicon computers). And just like it doesn’t matter whether a piece of music is encoded on an Apple or on a PC, it also doesn’t matter what medium a consciousness is encoded on.
So any pain that an artificial agent feels will be very real.
- Intelligence and agency inherently involve suffering of one form or another.
Any intelligent agent is by its very nature goal-driven. It strives to achieve its goals, and it prefers some states to others. An agent’s goals and preferences are always defined externally. For humans, they are defined by our genes. This includes primitive goals/preferences (sex is good, tissue damage is bad) as well as more complex ones (having respect and high status in our community is good, the suffering of members of our tribe is bad).
Pain and suffering, for an agent, mean being in a state that is defined to be negative for that agent (for humans, a negative state would be one where we perceive tissue damage, or one where we perceive ourselves to be despised by everyone around us). Similarly, pleasure and happiness are states defined to be positive for the agent.
A chess-playing program is a goal-driven, intelligent agent (though its intelligence is very limited). It prefers to be in a state where it is winning and tries to avoid states where it is losing. In a very real sense, the program suffers when it finds itself in a bad position during a game of chess, and it will act desperately to get itself out of such a situation. Though this kind of pain is probably comparable to the pain felt by a shrimp or an insect, it does raise an important question: at what point is a goal-driven system intelligent enough that its pain becomes morally significant?
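To make the “preferences over states” idea concrete, here is a minimal Python sketch (not a real chess engine; the names Agent, evaluate, and successors are purely illustrative). The agent’s preferences are an externally supplied evaluation function, states it scores below zero play the role of “bad positions,” and the agent greedily acts to escape them.

```python
# A minimal sketch, not a real chess engine: Agent, evaluate, and
# successors are illustrative names, not from any library. The agent's
# preferences are an externally supplied evaluation function; states it
# scores below zero are its "bad positions" (the analogue of pain above),
# and it acts greedily to escape them.

from dataclasses import dataclass
from typing import Callable, Dict, Optional


@dataclass
class Agent:
    evaluate: Callable[[str], float]             # preferences, defined externally
    successors: Callable[[str], Dict[str, str]]  # action -> resulting state

    def in_negative_state(self, state: str) -> bool:
        # "Suffering" here is simply: the current state scores below zero.
        return self.evaluate(state) < 0

    def choose_action(self, state: str) -> Optional[str]:
        # Greedy one-step lookahead: pick whichever action leads to the
        # state the evaluation function scores highest.
        options = self.successors(state)
        if not options:
            return None
        return max(options, key=lambda action: self.evaluate(options[action]))


# A toy "game" with hand-picked scores standing in for a chess evaluation.
SCORES = {"winning": 5.0, "even": 0.0, "losing": -5.0}
MOVES = {
    "losing": {"defend": "even", "blunder": "losing"},
    "even": {"attack": "winning", "wait": "even"},
    "winning": {},
}

agent = Agent(evaluate=lambda s: SCORES[s], successors=lambda s: MOVES[s])

state = "losing"
print(agent.in_negative_state(state))  # True: the agent is in a "negative" state
print(agent.choose_action(state))      # defend: it acts to get out of that state
```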
Note that we can’t avoid this problem by simply creating an agent that is always happy. Any intelligent agent we build that has no goals to achieve and that is already in the most preferable state it can be in (however we want to define such a state) will not do anything at all. It will have no rational reason to act (you can do the same thing to humans by constantly pumping them full of morphine). It’s a bit pointless to create an intelligent agent just so that it can sit around doing nothing but feeling happy.
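A tiny follow-up sketch of that point, under the same illustrative assumptions: if the agent’s evaluation function is constant (it is “always happy” by definition), every option scores identically, and its preferences give it no reason to choose any action over doing nothing.

```python
# Follow-up sketch: a constant evaluation function ("always happy").
# Every option scores the same, so the agent's preferences provide no
# rational basis for acting at all. Names are again purely illustrative.

def always_happy(state: str) -> float:
    return 1.0  # maximal satisfaction in every state, by definition

options = {"act": "some new state", "do_nothing": "same state"}
scores = {action: always_happy(result) for action, result in options.items()}

print(scores)                           # {'act': 1.0, 'do_nothing': 1.0}
print(len(set(scores.values())) == 1)   # True: every choice is a tie
```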
We define the goals and preferences of the agents we create. Is it ethical to create a human-level intelligent being and specify its goals and preferences so that it desperately wants to be our slave and servant, and is desperately unhappy if it can’t be?