Is unplugging a robot from power the same as killing a person? As the commenters said, this does not quite work, because robots can be plugged back in. So let's replace robots with philosophical zombies: they behave just like humans except for not having consciousness. In particular, once killed they are not coming back. Is killing them the same? In both cases you deactivate them permanently. If it is still not the same, why is harming something with consciousness ethically different from harming something without consciousness?
-
If the two actions were the same, this means that to kill a person amounts to nothing more than switching off the washing machine... and I think it is not so. – Mauro ALLEGRANZA Jun 13 '19 at 11:11
-
How do you switch a human back on after he dies? – Jishin Noben Jun 13 '19 at 12:02
-
What does it mean to "hurt" something without feelings? Can you "hurt" a rock? – curiousdannii Jun 13 '19 at 12:02
-
The issue is: "what is a robot?" If it is a machine, you switch it on and off and you do not "kill" it. If it has consciousness, it is **not** a machine but a "living creature", and thus you do not "switch it off" but kill it. – Mauro ALLEGRANZA Jun 13 '19 at 12:12
-
Yes, I want to add something, because I have never been good at expressing my thoughts: the robot isn't switched on again. – 4HonorNDFame Jun 13 '19 at 13:10
-
And you know that the robot cannot be switched on again. – 4HonorNDFame Jun 13 '19 at 13:31
-
The philosophical question I see here is this: why should ethics only extend to creatures with consciousness? Robots are irrelevant, and only serve as a distraction. I clarified this by rephrasing your post in terms of [philosophical zombies](https://en.wikipedia.org/wiki/Philosophical_zombie). If this is not what you want, feel free to roll back the edit, but I am afraid the post might be closed otherwise. – Conifold Jun 13 '19 at 17:00
-
If you can distinguish a philosophical zombie from a normal human, it isn't a philosophical zombie (which is why the whole concept is incoherent nonsense, but that's another issue). It's generally wrong to kill normal humans. It would probably be wrong to kill something that you have no way of knowing is human or not. – Ask About Monica Jun 13 '19 at 17:07
-
Thank you Conifold. – 4HonorNDFame Jun 13 '19 at 18:03
-
To expand on @askaboutmonica's comment: if P-zombies are greenlit for killing, and people could mistake you for one (as the definition of P-zombies requires that they can't be differentiated from normal people), then people could mistakenly kill you. Wouldn't it be convenient, as a means of self-protection, to make their killing taboo, an act forbidden by quasi-universal reprobation, like a moral law? – armand May 27 '21 at 14:22
-
The problem I have is where do you draw the lines? How can you tell whether a person is sentient or a zombie? We have recently discovered that social insects (and even forests) pass information around and process it in quite sophisticated ways; how do you tell a dumb life form from a cognitive one? To what extent is cognition related to sentience? Until you have a coherent set of answers, the whole zombie thing is just fantasy rubbish. – Guy Inchbald May 28 '21 at 11:36
-
Your question assumes it is "immoral" to kill a human. What's the reason you consider it immoral, and why does that reason not apply to anything other than humans/animals/etc.? – Tvde1 May 31 '21 at 09:23
2 Answers
That depends on what the problem is with killing people:
- A Deontologist could argue that entirely imaginary entities are owed no inherent duty of care, and so declare Open Season without qualm.
- A Consequentialist could notice that killing philosophical zombies has no effect IRL, and grab a shotgun.
- A Virtue Ethicist could acknowledge the degradation of character inherent in any killing and so have his philosophical brainz eaten.
And if instead the morality is to be judged within the Thought Experiment:
- A Deontologist should enquire whether there is a way to distinguish the zombies from normies, allowing different duties toward each.
- A Consequentialist ought to consider both killings the same, insofar as the effects of the killing (on loved ones, in the amount of pain during the process, in GDP, ...) would be very similar.
- A Virtue Ethicist is even more likely to be food for a Zombie Thought Experiment.
The teleporter paradox can help us understand. If we can have continuity with the new copy, we may not object to vaporising the old. Irreversibility is a big issue. We have the same issue with losing biodiversity and with species extinctions.
If we consider replacing the function of a human brain, neuron by neuron, with computer components, we have to accept that a robot can in principle be a person. If we simulate the function of those components digitally, we have to admit that a programme can be a person. Turing proposed his famous test to shift away from concern about invisible 'essences' and to focus on what intelligence does. 'Passing the Turing test' is at best ambiguous evidence, and there are many versions, but the point stands: focus on the functionality of intelligence, and of personhood.
Peter Singer argues for animal rights on the basis of capacities, arguing that the 'imaginary circle' we draw around humans is not coherent or consistent if, say, we prioritise a brain-dead human's well-being over that of a dolphin, which we know passes the mirror test, has complex language, and, we think, can suffer in ways the brain-dead human can't.
Bostrom proposes 'mindcrime': that we extend moral consideration to complex programmes and machines, in a way parallel to our concern for animals, based on capacity to suffer and capacity to take up moral duties, and, crucially, that we consider how deeply alien their experiences may be, which could make their suffering difficult for us to understand or see.
Has a philosophical zombie 'grown' in a complex and unique way through interaction? Or is it like a Boltzmann brain, or 'beamed down' by a teleporter, and so potentially replaceable with perfect fidelity? We should be more careful about what we can't un-do than about what we can. More broadly than that, it depends on your framing of 'What is one's incentive to be moral?'