
Is anyone of deontology or utilitarianism adequate to handle moral issues in cyber technology? Or do we need some other theory?

curiousdannii
  • Please see [SEP, Ethics of Artificial Intelligence and Robotics](https://plato.stanford.edu/entries/ethics-ai/) for general reading. Questions here are expected to be more specific. – Conifold Apr 05 '21 at 09:11
  • @Conifold Good link, but surprisingly sketchy on ethics and [superintelligence](https://plato.stanford.edu/entries/ethics-ai/#Sing), e.g. "*Arguments for risk from superintelligence say that rationality and morality are entirely independent dimensions*" and "*These discussions of risk are usually not connected to the general problem of ethics under risk.*" – Chris Degnen Apr 05 '21 at 13:00

1 Answer


Where AI becomes competent enough that people grow dependent on it and vulnerable to manipulation, the 'ethics of care' would be pertinent. Good luck with that, though.

https://en.wikipedia.org/wiki/Ethics_of_care

Carol Gilligan, who is considered the originator of the ethics of care, criticized the application of generalized standards as "morally problematic, since it breeds moral blindness or indifference".

Chris Degnen