3

Which modern philosophers examine problems of ethics and knowledge among artificial intelligent agents, setting their relationship to humans aside?

What I'm looking for:

  • The assumption that AIs can potentially be conscious in the same way as humans
  • Arguments about when an action by an AI towards another AI might be moral or immoral, leaving consequences for humans out of the picture
  • Criteria for when a proposition held by an AI is true or false, without reference to any humans involved
  • Discussion of how to assign meaning to the utterances of AIs to other AIs

What I'm not looking for:

  • Discussion of how the creation of AI might cause ethical or practical problems for human society
  • The "friendly AI" problem
  • Discussion of whether AIs can be conscious in the same way as humans. I'm looking for a philosopher who takes it for granted that they can and goes from there. (Well, it's okay if they also argue the point, but that argument is probably not what I'm interested in, unless there's some novel perspective I haven't seen.)

The notion is that if we don't have a general concept of ethics and meaning that can apply to any agent - human, artificial, or alien - then we don't really understand ethics or meaning.

causative
  • Is there a philosopher of ethics who fully accommodates inter-species interaction? AIs can be and often are so vastly different from one another, they might as well be separate species. A lot of human sense of morality relates to reproduction and survival of the group. When agents of greatly diverse nature interact, much of the historical human reasoning may be unfit. With that said, I generally believe in the possibility of creating an ethics framework capable of considering all living things and complex machinery. But not all subjects would necessarily agree to it. – Michael Feb 22 '22 at 17:42
  • @Michael I can think of a number of ways an ethical system might be proposed for non-humans. For instance, social contract theory should work fine for non-humans intelligent enough to agree to a social contract. Reduction of pain/increase of joy could work for agents capable of experiencing pain or joy. The prisoner's dilemma and tit-for-tat could be applicable to any intelligent agents (see the sketch after this comment thread). Preference utilitarianism could be applicable to any agents capable of holding preferences. – causative Feb 22 '22 at 17:58
  • I agree that there are plenty of options and principles that could apply. But what about non-embodiment, or the possibility of making copies of an agent's mind? What about updates and other substantial changes to an agent's functioning or capability? Dilemmas of identity, ownership, and delineation could complicate traditional assumptions. With humans, a mind and body tend to have a one-to-one, non-changing relation. With AI, other possibilities exist, and things can get blurry. – Michael Feb 22 '22 at 18:30
  • @Michael Well, I see those as reasons to question traditional assumptions and try to find more solid foundations that can handle such scenarios. Great comments btw. – causative Feb 22 '22 at 18:51
  • 1
    What is with these close votes? I'm not asking multiple questions, I'm asking for a philosopher who discusses certain types of issues. Nor is it opinion-based, it's a reference request not a general call for opinions. Since the close votes come with downvotes this feels more like "-1 disagree." – causative Feb 23 '22 at 00:17
  • The question does not define AI, and is otherwise very broad. This seems to ask for reading recommendations rather than any specific reference. – tkruse Feb 23 '22 at 04:50
  • When I read the question, I hoped you were asking about any actual, artificially intelligent philosophers. Have we taken any steps in this direction? : ) – Futilitarian Feb 24 '22 at 04:57
  • 1
    @Futilitarian -- There is something to that effect called [Philosopher AI](https://philosopherai.com/), but I cannot say I have used it myself. It looks to use GPT-3, which certainly can give philosophical output if prompted in that way. I *have* played with [GPT-J](https://textsynth.com/playground.html), but usually for other ends. I bet it can give philosophy too with the right prompt. In fact, I gave a related example prompt [for this question](https://philosophy.stackexchange.com/questions/89034/please-help-with-suggestions-for-existential-or-philosophical-inspired-team-name). – Michael Feb 24 '22 at 10:00
  • 1
    Check out Haraway's '[A Cyborg Manifesto'](https://en.wikipedia.org/wiki/A_Cyborg_Manifesto) – CriglCragl Mar 02 '22 at 10:04
  • 1
    @CriglCragl good answer! – causative Mar 02 '22 at 22:16
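
As a concrete illustration of the substrate-neutrality point raised in the comments above, here is a minimal sketch of two abstract agents playing the iterated prisoner's dilemma with the tit-for-tat strategy. It is only an illustration, not anything proposed in the thread; the names (`TitForTatAgent`, `play`) and the standard payoff values are assumptions chosen for the example.

```python
# Minimal sketch: iterated prisoner's dilemma between two abstract agents.
# Names and payoff values are illustrative assumptions, not from the thread.

COOPERATE, DEFECT = "C", "D"

# Standard prisoner's dilemma payoffs: (my_move, their_move) -> my_score
PAYOFFS = {
    (COOPERATE, COOPERATE): 3,
    (COOPERATE, DEFECT): 0,
    (DEFECT, COOPERATE): 5,
    (DEFECT, DEFECT): 1,
}

class TitForTatAgent:
    """Cooperates on the first round, then mirrors the opponent's last move."""

    def __init__(self):
        self.opponent_last_move = None

    def choose(self):
        # No history yet: extend cooperation first.
        if self.opponent_last_move is None:
            return COOPERATE
        return self.opponent_last_move

    def observe(self, opponent_move):
        self.opponent_last_move = opponent_move

def play(agent_a, agent_b, rounds=10):
    """Run the iterated game and return the two cumulative scores."""
    score_a = score_b = 0
    for _ in range(rounds):
        move_a, move_b = agent_a.choose(), agent_b.choose()
        score_a += PAYOFFS[(move_a, move_b)]
        score_b += PAYOFFS[(move_b, move_a)]
        agent_a.observe(move_b)
        agent_b.observe(move_a)
    return score_a, score_b

if __name__ == "__main__":
    # Two tit-for-tat agents settle into mutual cooperation: prints (30, 30).
    print(play(TitForTatAgent(), TitForTatAgent()))
```

The only interface an agent needs is `choose` and `observe`; any entity implementing it, human, artificial, or alien, can participate, which mirrors the question's demand for a concept of ethics that applies to any agent.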

2 Answers

1

'Friendly AI' presumably means the AI alignment problem.

Bostrom and Chalmers are clearly essential modern reading. I already mentioned Donna Haraway.

Some relevant discussions on this site, including resources:

What would be the first indication that an Artificial Intelligence Entity (AIE) has become sentient?

PhilPapers Survey 2020, Why do so many physicalists deny consciousness of future AI systems?

A lot of interesting discussion is happening in science fiction: Which Philosophers talk about the Future, the technologies beyond AI and John Searle?

CriglCragl
  • Upvote for Haraway, but I did mention I'm *not* looking for AI alignment or discussion of whether an AI might be conscious. – causative Oct 03 '22 at 23:04
  • At the same time, Haraway's essay isn't really what I'm looking for. It's more from the perspective of using "cyborg" as imagery and metaphor for feminism, with little about actual cyborgs or robots. – causative Oct 04 '22 at 00:20
0

Ethics requires choice, so you are begging the question of whether AI necessitates freedom. Asimov's rules for robots are just silly.

Isaac Asimov's "Three Laws of Robotics":

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

  • I would describe Asimov's Laws of Robotics as a kind of thought experiment that allowed him to get to grips with issues about misalignment, rather than just conflict. – CriglCragl Oct 03 '22 at 20:47