
I'm a very selfish person, so these ethics questions are really hard for me to do.

This one goes like so:

You are making a computer system for your manager. You realize the system would hold sensitive customer information so you present your manager with 2 options: A cheaper, less secure system or a more expensive, more secure system. Your manager thinks for a while and chooses the cheaper, less secure system.

Should you refuse to make the system? Why or why not?

Initial thoughts: No, because I need to get paid.

I don't think I'm going to get any marks for that though.

  • Somewhat obvious question: why would you give your manager two options and then refuse to build one? When you "present your manager with 2 options", you are making a recommendation of sorts. Why recommend something you'll refuse to make? – NoName Dec 10 '17 at 15:22

4 Answers


See these parts in your question:

I'm a very selfish person,

Your manager thinks for a while ...

I need to get paid

You are lucky that you know you are selfish. (Since you know that, you can give it up any time you wish.) Selfish people usually don't think much about the real consequences of their selfishness. If you are really a selfish person, you shouldn't care about the manager's future. Acting against your nature would disturb your mind, so you should consider that as well.

And since your manager thought for a while, you can assume his decision is reasonable and that the system might be for an ordinary purpose.

So you shouldn't refuse to make the system. He is your manager (you may think so in this case). Also, you need to get paid, and nobody will pay you if you don't deliver anything.

If you don't wish to stick to your selfishness, I would give an answer only after analyzing the real problem.

SonOfThought

Maybe think risk (to others) versus benefit (to you), and a personal "selfishness threshold" defined by the risk/benefit ratio below which you'll comply with the request (and above which you'll refuse).

And consider that in the context of alternative scenarios, e.g., your original...

Scenario 1 You're making a computer system that holds sensitive customer information, so you present your manager with 2 options: A cheaper, less secure system, or a more expensive, more secure system. Your manager chooses the cheaper, less secure system.

versus...

Scenario 2 You're building a bridge that holds people's lives in its hands, so you present your manager with 2 options: A cheaper, less secure bridge (think Tacoma Narrows), or a more expensive, more secure bridge. Your manager chooses the cheaper, less secure bridge.

So, presumably, the same "selfishness threshold" that might permit you to proceed with Scenario 1 might simultaneously inhibit you from proceeding with Scenario 2, where the risk is so much greater (assuming people value their lives more than their data).

Everybody's selfish to some extent. The only question is the "extent", i.e., how much risk you'll expose others to in furtherance of your own benefit. We've tried to quantify that somewhat by introducing a "selfishness threshold". While it's far from a perfect measure, just saying (as per your question) "I'm a very selfish person", without any further characterization/information, is pretty much meaningless.
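Just to make the decision rule above concrete, here's a minimal sketch. The function name, the numbers, and the threshold value are all illustrative assumptions, not part of any real ethical calculus: comply while the risk/benefit ratio stays at or below your threshold, refuse once it exceeds it.

```python
def should_comply(risk_to_others: float, benefit_to_you: float,
                  selfishness_threshold: float) -> bool:
    """Return True (comply) if the risk/benefit ratio is within the threshold."""
    if benefit_to_you <= 0:
        # No benefit to you: no reason to expose others to any risk.
        return False
    return risk_to_others / benefit_to_you <= selfishness_threshold

# Scenario 1: insecure database -- moderate risk relative to the same benefit.
print(should_comply(risk_to_others=2.0, benefit_to_you=1.0,
                    selfishness_threshold=5.0))   # complies

# Scenario 2: unsafe bridge -- risk vastly outweighs the same benefit.
print(should_comply(risk_to_others=100.0, benefit_to_you=1.0,
                    selfishness_threshold=5.0))   # refuses
```

The interesting (and unresolved) part is of course how anyone would actually assign numbers to "risk to others"; the sketch only shows that the threshold idea is a coherent decision rule once you do.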

  • This question is related to computer ethics. I think this answer fits well only if it is a general question. So, is Scenario 2 needed here? – SonOfThought Dec 10 '17 at 10:02
  • @SonOfThought Oh, "computer-ethics" only. Then maybe not. I hadn't noticed the tag, and the constraint isn't explicitly mentioned in the question. Even so, it's pretty easy to conjure up a computer-related Scenario 2, maybe some medical diagnostic software with two possible designs, cheap-and-not-so-good versus expensive-and-better. Then we're still talking Scenario 2 = possible-life-and-death versus your Scenario 1 = possible-data-breach. So the threshold measurement discussion still goes through. Ultimately, I don't think your question really is specifically computer-related, only your example. –  Dec 11 '17 at 07:43
  • I was answering using the limited facts. Also, we have to say "Yes" or "No" to the last question. I just tried to answer without causing (more) disturbance to either of them. – SonOfThought Dec 12 '17 at 14:57

There is no threshold by which you can define one as selfish.
At the core, everybody and everything is selfish, from a point of view.

For instance, you may think helping others unconditionally, or giving things away, involves no selfishness, but contemplating it deeply should tell you that you do it for your own happiness or your ideology, which is a subtle form of selfishness, since you're satisfying your own need/want.

So, in the end, it doesn't matter. All we can try is to do the best we can considering our factors.

If I were you, I'd try my hardest to convince my manager by explaining what the choice means and why it's important.
If he still doesn't agree, you may proceed with the less secure one, on the assumption that the major portion of the wrongdoing is the manager's, and that your share of it is so tiny that it hardly matters.
Because, ultimately, even if you quit the job, there'll be someone else to do the same job for the same compensation.

And also, you're not making a nuclear bomb or anything for World War III.
So you might also want to consider the degree of impact/harm it would do.

Gokul NC

Oh, but you do deserve a mark even if your question and my reply disturb the moral pieties.

If ethical egoism is a coherent moral theory - and it is among the options standardly discussed (if usually rejected) in ethics textbooks - then human conduct should be based solely on self-interest. 'Self-interest' is likely to provoke a volley of shots here, so let's say that ethical egoism is the view that everybody ought to look out for her- or himself alone, or that everyone ought to concern him- or herself only with their own welfare (as conceived by them).

As an ethical egoist you could defend, 'No, because I need to be paid', with complete moral propriety and consistency.

There would be a cost to ethical egoism if you value love, friendship, comradeship, but there is no logical need to include these values in your idea of your own welfare.

Ethical egoism is rejected by Kantians, most utilitarians, and human rights theorists, but how strong are their theories? Aren't the journals, and questions on Philosophy Stack Exchange, standing proof that none of the relevant theories is free from objections just as strong as any that they pitch at ethical egoism?

But if one plumps for utilitarianism, ethical egoism and utilitarianism can be squared on two assumptions. (1) If we accept Jeremy Bentham's dictum that each person is the best judge of their own interests, the maximisation of interests is more likely to be achieved if each person aims purely at their own interest. After all, they (rather than some well-meaning other agent) know best what it is. (2) Interests, even best judged, may of course conflict, but this is an empirical and contingent point. Conditions are imaginable in which they do not. If they do not, and if each person is the best judge of their own interests, interests are more likely than not to be maximised by ethical egoism.

References: P. Facione, D. Scherer & T. Attig, 'Values and Society', NJ, 1978, esp. p. 45. D. Emmons, 'Refuting the Egoist', 'Personality', 50, 1969, 309-19. And not to forget an older classic, H. Sidgwick, 'Methods of Ethics', 7th ed., 1907: Bk II, ch. 1.

Geoffrey Thomas