
Our future, our universe, and other weighty topics


Saturday, March 21, 2015

Why Peer-Reviewed Experiments Are Not Better Evidence Than Scrupulous Photo Blogs

There are many photographs on the Internet purporting to serve as evidence for various types of claims. Some claim to show paranormal phenomena such as ghosts, orbs, or UFOs. Others may claim to support health-related claims, or to document unexplained anomalies such as Bigfoot. Still others may be offered to back up more modest claims, such as the claim that “my cat is really smart” or “my dog has special abilities.” But many a modern scientist may think that all such evidence can be ignored, on the grounds that it does not meet the “gold standard” of publication in a peer-reviewed journal.

But is experimental evidence described in a peer-reviewed journal generally better evidence than photographic evidence on a scrupulous photo blog (one that is careful to require that photos come from a dated, identified source, rather than passing along anonymously posted photos)? I think it is not. I will now give some reasons to support this claim.

Reason #1: Photographic evidence is in general better evidence than mere experimental evidence.

In general, photographic evidence is more convincing than experimental evidence. For example, imagine I do some experiments suggesting that John Blackheart killed his wife. Whatever such experiments suggest, they do not have the evidential value of a photograph showing John Blackheart killing his wife. Similarly, I may run some fancy computer experiments suggesting that a particular star will soon explode. But such experiments do not have the evidential value of a photograph showing the star actually blowing up.

Reason #2: The number of hard-to-detect “ways to go wrong” in a complex experiment is much greater than the number of hard-to-detect “ways to go wrong” when taking a photograph.

Once we dismiss some invalid claims of skeptics (some of whom incorrectly imagine that the air in front of our cameras is typically filled with enough dust to mislead us, or that ordinary breath is sufficient to produce what looks like a ghost in a night photo), we find that the number of ways you can go wrong when taking a photo is pretty small. In almost every case in which a photographic mistake might mislead you, the mistake is easily detectable. For example, if you point your camera at a bright light, it may produce lens flare that misleads you; but lens flare has a very characteristic look, so it is easy to detect (and easy to avoid). Similarly, if you move your camera violently while taking a picture, that may produce a ghostly look that misleads you; but that, too, has a very characteristic look that is easy to detect and avoid. The same holds true for the mistake of photographing your camera strap: it may create a weird-looking white streak that misleads you, but it has a very characteristic look that is easy to recognize. So in general, detecting errors in photos is pretty easy, and avoiding such errors is also pretty easy.

But when it comes to experiments, we have a totally different situation. There are countless subtle ways in which a complicated experiment might go wrong. A scientist might make a mistake in setting up the “controls” used with the experiment. Or he might make a mistake in any of a number of measurements used in performing the experiment. Such a mistake could involve using some piece of fancy equipment incorrectly, which is very easy to do (since such instruments are often harder to operate than an old VCR). Or a mistake (such as ordinary human clerical error) might be made in writing down a result after reading a scientific instrument. Or a mistake might be made when tabulating or summarizing data collected from different sources; such a mistake might be as easy to make as a typographical error entered into a spreadsheet. Similarly, scientific work based on computer experiments offers 1001 opportunities for error. Such experiments often involve many thousands of lines of code, and a programming bug might lurk in any one of those lines, as the sketch below illustrates. Experimental bias might also lead to errors in the design of the experiment or the interpretation of the data.
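To make that last point concrete, here is a hypothetical Python sketch (the data, the variable names, and the bug itself are all invented for illustration). A single mistyped character misaligns two columns of data, yet the analysis runs to completion with no error message:

```python
# Hypothetical illustration: a one-character analysis bug that silently
# skews a result. Suppose two instrument logs are meant to be paired
# row by row, but the second list is accidentally sliced from index 1.
doses     = [1.0, 2.0, 3.0, 4.0, 5.0]
responses = [2.1, 3.9, 6.2, 8.0, 9.8]

# Buggy pairing: responses[1:] shifts every reading by one row,
# quietly dropping a data point and mismatching dose with response.
pairs_buggy   = list(zip(doses, responses[1:]))
pairs_correct = list(zip(doses, responses))

def slope(pairs):
    """Least-squares slope through the origin, y ~ b*x."""
    num = sum(x * y for x, y in pairs)
    den = sum(x * x for x, y in pairs)
    return num / den

print(slope(pairs_correct))  # about 2.0, the true dose-response slope
print(slope(pairs_buggy))    # about 2.65, wrong but plausible-looking
```

The buggy version reports a slope of about 2.65 instead of the true 2.0, and nothing in the output looks suspicious. Multiply that risk across thousands of lines of analysis code, and the opportunity for undetected error is large.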

[Photo caption: With this thing, you're one button click away from a false result]

In short, photographs are much simpler than complicated experiments, and offer much less opportunity for error.

Reason #3: Many scientists have financial incentives to cheat on experiments, but most bloggers do not have any financial incentive to cheat when producing photographs.

The overwhelming majority of blogs make no significant money for those who write them. So almost all bloggers have no financial incentive to cheat by posting fake photos.

But when it comes to scientific papers, one often finds a different situation. Many scientists have financial incentives to cheat on experiments. The most obvious case is scientists who are taking money (directly or indirectly) from corporations that desire a particular experimental result (corporations such as oil companies, tobacco companies, and pharmaceutical companies). More generally, it would seem that many scientists have a financial incentive to cheat in order to produce some dramatic experimental result (although this does not tell us anything about what percentage of them cheat, and we may presume that most do not cheat). A scientist who can claim to have produced some breakthrough result is more likely to keep his job, or to get a better job.

Reason #4: Peer-review is greatly overrated, because it is an anonymous process that does not involve auditing the data behind a scientific paper, and therefore does little or nothing to show that the paper was produced without cheating.

We have heard so much hype about peer review being some “gold standard” that one might imagine that when a paper is peer-reviewed by scientists who didn't write it, those scientists drop in on the paper's authors to check their source data and log books, to make sure things were done without any cheating. But nothing of the sort happens. Instead, peer review is an anonymous process: a paper's authors never meet those who review it.

Accordingly, peer review offers little opportunity to verify that an experiment was done without cheating. A peer reviewer may be able to detect whether a paper commits math errors or errors of fact (such as listing the wrong chemical formula for a particular molecule). But a peer reviewer cannot catch an experimenter who simply faked things in order to claim an experimental breakthrough and enhance his job prospects. Detecting such a thing would require face-to-face visits, which don't occur under the peer-review process.

Reason #5: You have no way of detecting whether an experimenter cheated while doing the research for a peer-reviewed paper, but you can investigate the authenticity of Internet photos.

There is software you can use to detect fake photos. One example is the free site www.fotoforensics.com. The site is easy to use: go to a blog, click on a photo to get a URL that shows only that photo, and then paste that URL into the “URL” slot on the www.fotoforensics.com site.
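For readers who want to experiment locally, here is a minimal Python sketch of Error Level Analysis (ELA), the kind of technique fotoforensics.com is built around. This is my own simplified illustration, not the site's actual code, and the file names are placeholders; it requires the Pillow library (pip install Pillow):

```python
# A minimal sketch of Error Level Analysis (ELA). The idea: resave a
# JPEG at a known quality, then look at how much each region changed.
# Areas that were edited and recompressed tend to stand out from the
# rest of the image, which shares one uniform compression history.
from PIL import Image, ImageChops, ImageEnhance

def ela(path, out_path="ela_result.png", quality=90):
    original = Image.open(path).convert("RGB")

    # Resave the image at a fixed JPEG quality.
    original.save("resaved.jpg", "JPEG", quality=quality)
    resaved = Image.open("resaved.jpg")

    # Compute the pixel-by-pixel difference between the two versions.
    diff = ImageChops.difference(original, resaved)

    # Amplify the (usually faint) differences so they become visible.
    extrema = diff.getextrema()
    max_diff = max(channel_max for _, channel_max in extrema) or 1
    diff = ImageEnhance.Brightness(diff).enhance(255.0 / max_diff)

    diff.save(out_path)
    return out_path

# Usage: ela("suspect_photo.jpg") writes an image in which uniform dark
# noise suggests a single compression history, while bright patches flag
# regions that may have been pasted in or retouched.
```

This is only a rough screening tool, not proof by itself, but it shows that an ordinary reader can subject an Internet photo to at least some independent checking.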

Could you use any similar technique to check the validity of the original source data behind peer-reviewed scientific papers? No, because scientists rarely publish such data.

Reason #6: Scientific experiments are often robbed of their evidence value by a subsequent experiment on the same topic, but a similar type of thing is rarely possible with a photograph.

A scientific paper published in PLoS Medicine had the startling title “Why Most Published Research Findings Are False.” One of the points made in this paper is that scientific studies are very often contradicted by later scientific studies on the same topic. For example, one study may show that Substance X causes cancer, while a later study may show that Substance X does not cause cancer.

As reported here, the head of global cancer research at Amgen identified 53 “landmark” scientific papers and had his team try to reproduce as many of the results as they could. It was found that 47 of the 53 results could not be replicated.

So it is very common – perhaps even probable – that a randomly selected peer-reviewed paper of experimental results will be “undone” by subsequent research. But there is no such problem with a photograph. Suppose one investigator gets a photograph seeming to show a ghost (or an orb with a face) at a particular location. A later photograph of the same spot that shows no such anomaly does not at all undo or cancel out the previous photograph. The evidence of that photograph still stands; it cannot be disproved by taking additional photographs at the same site. While such photographs (if made in sufficient number) may show that some paranormal thing does not usually occur at a location, they can never show that the first photograph did not capture a paranormal sight that may occur only rarely.

Reason #7: The total amount of peer review on a popular “photo blog post” exceeds the peer review for the average scientific paper.

A typical scientific paper is reviewed by two people, and then receives no further peer review after its publication. Part of the reason is that online scientific journals offer no convenient mechanism for posting comments on a published paper. It is true that the http://arxiv.org server (which hosts lots of physics papers) has a “trackback” mechanism by which you can write a blog post and have it listed on the same page where the original paper is available online (kind of buried at the bottom). But you can do that only by using a nerdy, hard-to-use “embedding” trick. Partly because it is so hard to post an online comment on a scientific paper, most peer-reviewed scientific papers end up being reviewed by only two peers of the papers' authors.

But a popular post on a photo blog may end up getting dozens of user comments, many of which qualify as a form of peer review. So a popular post may end up getting ten times more peer review than the average scientific paper.

Conclusion

In short, there is no sound basis for thinking that experimental evidence published in peer-reviewed journals is a form of evidence superior to photographic evidence published on scrupulous web sites. The myth that publication in a peer-reviewed journal is some greatly superior “gold standard” is a convenient excuse used by many materialists for ignoring evidence that may upset their worldview and contradict their dogmatic assumptions. But there is no sound basis for assuming that an experimental result published in a peer-reviewed journal is more likely to be true than something shown abundantly in photographs outside of a peer-reviewed journal.
