
What are the arguments for and against this? Any resources that are easy to read would help

Geoffrey Thomas
Cygni P

3 Answers


This is currently a major topic in academic philosophy of science. Among people who specialize in this topic — including myself — a strong majority now think that ethical values do and should play a role in evaluating scientific claims.

One major argument for this claim is the argument from inductive risk. Inductive risk simply refers to the risk of believing a false claim ("false positive" error) or rejecting a true claim ("false negative" error) whenever we evaluate the claim using limited evidence and cognitive capabilities. Which, of course, is pretty much all the time in empirical science. In this context, when evaluating a claim, we need to determine the relative importance of the two kinds of error. Is it worse to believe a false claim or reject a true claim? The argument from inductive risk points out that setting this balance requires us to consider the downstream consequences of making each kind of error, including the non-epistemic consequences of acting on our beliefs. "Which is worse?" is ultimately a question about values. In this way, and perhaps in others, values have a role to play in evaluating empirical claims.

Here are some readings to get you started:

Dan Hicks
  • This seems a good answer, but it is all about interpretation and hypothesis. A scientific claim is not an interpretation or hypothesis. Where there is a claim it should be bullet-proof and unaffected by ethics. At least, this view of what constitutes a scientific claim explains why our answers don't agree. A lot of so-called scientific claims are just guesswork, but these are not really scientific claims, just the opinions of some scientists. Still, I wish they'd read your answer and stop promoting damaging philosophical guesswork as the 'scientific view'. –  Jan 22 '19 at 15:48
  • "Where there is a claim it should be bullet-proof...." This assumes that false positives are a higher priority than false negatives. Which requires a value judgment. For example, the tobacco industry argued for decades that we shouldn't regulate tobacco until we could be certain that it caused cancer, while also promoting research that suggested it might not cause cancer. More acceptance of the risk of false negatives might have led to earlier regulation of tobacco, which could easily have lengthened millions of lives. See here: – Dan Hicks Jan 22 '19 at 18:02
  • Evaluating science isn't justifying it. This is slightly beside the OP's question, but interesting nevertheless. What you are doing sounds a lot like risk management: https://en.wikipedia.org/wiki/Risk_management – Manu de Hanoi Jan 22 '19 at 18:47
  • @DanHicks - Good point. Where the jury is out on an issue, ethics might determine our actions. But what I would call a scientific claim is a claim justified by data and experiment. I would agree with Manu that the issue you raise is about risk management. The main thing for me would be to distinguish scientific claims from the speculations of scientists. They are regularly confused. –  Jan 23 '19 at 11:57
  • @PeterJ To point out the part of Manu's comment you missed: a claim is always a hypothesis. The jury is always out, to one degree or another, or science would just stop. No claim is 'justified' by experiment, there is always more data to be gathered in the future that might change it in some way. That door is never closed. So the risk is never gone. Newtonian physics was settled fact, until it wasn't. –  Jan 26 '19 at 19:58
  • @Jobermark - Very true, but I can't see that it makes a difference to the principles involved. Geoffrey Thomas gives my view below. –  Jan 27 '19 at 09:54
  • @PeterJ If one of your principles is something that is never true, that doesn't matter? This is a philosophy forum, not one of competing dogmatic assertions. –  Jan 27 '19 at 15:49
  • @Jobermark My view is a matter of definitions, not dogmas. Where a claim is scientific, (and is not just an opinion or optional speculation such as the idea that homosexuality is a mental disorder), then ethics can have nothing to with it. The problem is that many people confuse scientific opinions with scientific claims. Hell, some people think Materialism is scientific claim. If this is what you mean by a 'scientific claim' then all bets are off. I dismiss such claims as mere opinion. I'd agree that the promotion of opinions and speculations raises ethical issues. . . . –  Jan 28 '19 at 10:02
  • @PeterJ What are the scientific claims of clinical psychology if whether or not something is a mental disorder is an optional speculation? Are there any? Is no part of that field then a science? Should we just throw it out? If there aren't any facts in that branch of medicine, are there any facts in any part of medicine? Is human biology all speculation? A scientific claim is a claim, it is something to be accepted or falsified. That does not mean it is right, it means it is *a claim*. If it might be wrong, ethics demands damage control. –  Feb 04 '19 at 20:42
  • @jobermark - I'm sorry you read my remarks in such an extreme way and don't understand your reaction to them. Are you suggesting that any claim made by a scientist is a scientific claim? I prefer to think such a claim must be backed by experiments and reliable data. A great deal of trouble is caused by so-called scientific claims that are here today and gone tomorrow. If these interim opinions and guesses are going to be called scientific claims then science loses its credibility. –  Feb 05 '19 at 09:28
  • @PeterJ The examples I have given are all backed by experiments and data. –  Feb 05 '19 at 19:13

The leading answer is obviously correct, but it may not be concrete enough.

In the 1970s, homosexuality per se stopped being classified as a mental disorder because of political pressure to re-analyze a long-established scientific consensus to the contrary. Knowing in retrospect that this line of study had a cultural bias behind it, all of that work was subjected to closer scrutiny, and much of it was basically discarded. The 'justification' of that existing work was changed by an ethical consideration.

More recently, here are three cases I have followed that reached the point of mass popularization only after the scientific community raised its ordinary standards of acceptance and delayed approving, publishing, or citing the work, because accepting it too readily could have had unfortunate effects.

  • Thinking revived by The Bell Curve, which considers the relative intelligence of classes and races and revives citation of older eugenicist views that are now broadly controverted.

  • The work explained in The Man Who Would Be Queen, which proposes an overall diagnosis that most (but not all) male-to-female transsexuals really have a totally different disease, of which their transsexuality is a symptom.

  • The studies, including those of James Cantor, that indicate pedophilia is a physiological brain-configuration problem that cannot be treated, which leads to the specter of using brain scans to detect criminals.

(The choice of this list is obviously skewed entirely by my vested interest in the case I introduced this with. There are equally strong examples that would be preferred by someone on the political right. The last two authors also pretty much 'won', because they are good scientists. So this is not about silencing opinion; it is about varying the standards of rigor in a socially productive way.)

Doing that is using ethics to change the criteria for 'justification'. It is a real thing, and a good one.

Scientific claims are never settled, but they do take on a greater degree of presumed correctness if they are repeatedly cited. They can contribute, ultimately, to paradigmatic principles.

So it is very important that potentially damaging claims not succeed too easily. Once they do, we get led down the tortured paths documented in the kind of history popularized in Stephen Jay Gould's The Mismeasure of Man.

In the shorter term, while there is always the opportunity to publish controverting papers, doing the research involved can be adversarial when the subject is politically tense. So the pushback comes as irrelevant political arguments instead of good science offered in response.

The argument then becomes established through 'ad baculum' attrition, as people are deterred from threatening their careers by engaging in a politically contentious process just to get published, or even to speak about their work. Given the way this sort of thing gets handled outside science, nobody is going to repeat these experiments, even to contest them. So we need to be more certain the originals hold water, or worse science results.

Avoiding this is good science, even if it directly alters the justification process by holding people who do audacious work to an occasionally unfair higher standard.

It also counteracts the far subtler effects alluded to in Conifold's first comment. The fact that humans are doing science, that we have a particular sense of what is simple, of when a measure is good, of how mechanical a mechanism has to be, of when an argument is spurious, and so on, automatically skews the kinds of hypotheses that arise and how we combine them. People are political animals, science is a social process, and we have a strong tendency to all have the same thoughts, especially if we share a formative culture. Those tendencies inevitably bias the small decisions made in day-to-day scientific work, and those biases can be aspects of harmful broader social trends.

Knowing that, it is perfectly reasonable to push back with other aspects of social processes, like arguments about ethics.

  • It is certainly good practice to make sure the research is sound and the data is not being misinterpreted, but I see no examples here of scientific claims being modified by ethical evaluation, just poor or optional interpretations of the data being corrected by care and thoughtfulness. –  Jan 28 '19 at 10:10
  • @PeterJ And I see you changing the rules so you won't be wrong. –  Feb 04 '19 at 20:24
  • What is your point? Can you provide an example of a scientific claim being modified by an ethical evaluation? If not, then why argue? –  Feb 05 '19 at 09:32
  • @PeterJ What would such an example need to be? Something that was settled for years and then changed because of ethical considerations? Done. Something that would ordinarily have been considered accepted but was not, because of a potential ethical consideration? Done. Something that meets some bizarre prejudice you have about what is and is not scientific? Impossible, because no such thing exists. My point is that you are using a standard that does not exist. You are ignoring the core of the philosophy of science for the last 60 years. –  Feb 05 '19 at 19:08
  • @PeterJ My point is that you are not arguing, you are asserting dogmatically that what you refuse to see does not exist. –  Feb 05 '19 at 19:16
  • @PeterJ Statements about physics or chemistry may never involve ethical claims other than those internal to the process, but other sciences exist. To claim ethics never intervenes pretty much insists these impersonal sciences are the only ones that exist, and that science with implications for medicine, especially psychology, just isn't science. –  Feb 05 '19 at 20:31
  • It seems we'll have to agree to differ. –  Feb 06 '19 at 10:55

According to Popper, what is logical positivism's role in scientific ethics?

Popper's position is that logical positivism is false, so it has no role in scientific ethics. Popper criticised logical positivism in a couple of ways. Logical positivists wanted to justify induction; but, as Popper pointed out, induction is impossible, so this program was doomed from the start.

In addition, they wanted to adopt methodological naturalism: they would observe what scientists do, and that would tell them the methods of science. As a result, experimental science would tell us how science works, and there would be no need for a separate field of philosophy of science. This naturalism couldn't address the problem of induction, since it presupposed that induction is possible, and it would have similar problems with any other controversy about methodology on which scientists disagree. Naturalists would also need to decide what sorts of activities constitute science and which people are scientists. So the naturalists would just shift the problem of methodology to deciding those questions instead of addressing actual methodological problems directly. See The Logic of Scientific Discovery by Popper, Part I, Chapters 1 and 2, and Section 17 of Popper's Unended Quest for his criticisms of logical positivism.

Popper's position on scientific ethics is that scientists are fallible, that any scientific theory may be mistaken and that scientists should criticise their own theories and seek criticism from others. See "The World of Parmenides" Essay 2 Addendum 2 for more details.

alanf