Admittedly, I have very little background in philosophy, but I believe this is the right place to ask my question. In the field of artificial intelligence, we build programs that learn parameter configurations to maximize a reward signal. Increasingly, this looks like the technology that will allow robots to behave like (and possibly outperform) humans.
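To make concrete what I mean by "this kind of computational process", here is a minimal toy sketch of my own (a made-up hill-climber, not any real system): it repeatedly adjusts a parameter to increase a reward, and it has no built-in reason to ever stop.

```python
import random

def reward(theta):
    # Hypothetical reward function for illustration: highest when theta is near 3.0.
    return -(theta - 3.0) ** 2

theta = 0.0
while True:  # the process has no internal stopping condition
    candidate = theta + random.uniform(-0.1, 0.1)
    if reward(candidate) > reward(theta):
        theta = candidate  # keep any change that increases reward
    # Terminating this loop from the outside is the "stopping" my question is about.
```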
So, when does it become immoral to stop this kind of computational process? At what point do we define this process as alive and having a purpose to its life, such that stopping it would be akin to killing a living being?
Imagine, for example, a human with a psychological condition that causes them to care about only one goal. Presumably this person would still eat, drink, sleep, and do everything else necessary to stay alive, since staying alive is necessary to pursue the goal. Would it be moral to kill this person?
The only distinction I can see between this person and the computational process is that the person makes an active effort to stay alive, while continued existence is the program's default condition. Is this a difference that fundamentally changes the ethics of murder? Is there some other distinction that I'm missing?