Is either deontology or utilitarianism adequate to handle moral issues in cyber technology? Or do we need some other theory?
Kyzil Quentero
- Please see [SEP, Ethics of Artificial Intelligence and Robotics](https://plato.stanford.edu/entries/ethics-ai/) for general reading. Questions here are expected to be more specific. – Conifold Apr 05 '21 at 09:11
- @Conifold Good link, but surprisingly sketchy on ethics and [superintelligence](https://plato.stanford.edu/entries/ethics-ai/#Sing), e.g. "*Arguments for risk from superintelligence say that rationality and morality are entirely independent dimensions*" and "*These discussions of risk are usually not connected to the general problem of ethics under risk.*" – Chris Degnen Apr 05 '21 at 13:00
1 Answer
Where AI becomes competent enough that people grow dependent on it and vulnerable to manipulation, an 'ethics of care' would be pertinent. Good luck with that, though.
[Ethics of care (Wikipedia)](https://en.wikipedia.org/wiki/Ethics_of_care)
Carol Gilligan, who is considered the originator of the ethics of care, criticized the application of generalized standards as "morally problematic, since it breeds moral blindness or indifference".
Chris Degnen
- It would be interesting to clarify exactly how care ethics' claim goes further than utilitarianism and deontology. Maybe it's a basic participatory position. – Chris Degnen Apr 05 '21 at 12:39