
Our future, our universe, and other weighty topics


Saturday, June 22, 2019

Poll Suggests Very Many Atheists Reject Naturalism

Darwinism is the belief that all organisms have a common ancestor, and that the world's species arose by purely natural processes, mainly because of random mutations that were favored by “natural selection” (a term that refers merely to the superior reproduction rate of fitter organisms). Nowadays when an advocate of Darwinism hears about objections to Darwinism, he will often suggest that such objections are merely based on religion. This claim has always been very dubious, because it is quite possible to make a very detailed case against the claims of Darwinism without ever stating any religious doctrine. For example, without mentioning any religious doctrine, a writer might discuss the failure of Darwinism to explain the origin of life, the failure of Darwinism to explain the appearance of the useless initial stages of complex biological innovations, the failure of Darwinism to explain the Cambrian Explosion, and the failure of Darwinism to explain the origin of language.

Recently a poll appeared profiling the beliefs of the non-religious, such as atheists and agnostics. An interesting finding from this "Understanding Unbelief" poll (conducted by some university scientists) is that a significant minority of atheists and agnostics seem to doubt Darwinism.

Below is a finding from page 17 of the poll:


Percent agreeing “strongly” or “somewhat” with the statement “Humans have developed over time from simpler, non-human life forms.”


Country            Atheists/Agnostics    General Population
Brazil                     66                    50
China                      74                    87
Denmark                    69                    59
Japan                      49                    68
United Kingdom             74                    63
USA                        80                    49
Average                    69                    63


The results in the second column are not surprising for the USA. It has been known for a long time that roughly half of the US population rejects the textbook story about the origin of humans. What is surprising here is how the question reveals that Darwinism seems to be doubted by very substantial fractions of atheists and agnostics. A full 20% of atheists and agnostics in the US do not agree “strongly” or “somewhat” with the claim that “humans have developed over time from simpler, non-human life forms.” The corresponding figure is about 25% in the UK, about one third in Brazil, and about half in Japan.

This question does not exactly measure full belief in Darwinism, because it says nothing about what caused humans to appear. Had the question been worded to exactly measure belief in Darwinism, it might have been something like: “Do you agree strongly or somewhat with this statement: humans have developed over time from simpler, non-human life forms, purely because of natural factors such as random mutation and natural selection?” Since this question is more specific, asking people to endorse a particular belief about what caused the origin of humans, the percentages of people answering “Yes” would almost certainly have been smaller. What the survey has revealed is that even when they are given a “human origins” statement that says nothing about causes, and that matches textbook explanations, a very substantial fraction of atheists and agnostics fail to say they support the statement "strongly" or "somewhat."

In light of such a survey, you should not believe claims that objections to Darwinism stem purely from religious belief. If that were true, we might have expected 90% or 95% of the atheists or agnostics to agree “strongly” or “somewhat” with the claim that “humans have developed over time from simpler, non-human life forms,” rather than an average of only 69% of them agreeing "strongly" or "somewhat" with such a statement.

On page 13 there was a question asking atheists and agnostics whether they believed in a “universal spirit or life force.” In Denmark, China, the US, and the UK, the answer was “Yes” for about 18% to 27% of the atheists asked the question, and a similar fraction of agnostics answered “Yes.”

There is a reason why the result on page 13 should surprise no one. The term “God” is loaded with historical and cultural baggage, and much of that baggage is negative. The term has negative connotations for very many people, but many such people are not hostile to the underlying idea of a supreme mind behind the universe. Use the term “God” in a poll question, and many people will think of things they dislike, like the image of an angry bearded figure on a throne. But many of those same people may respond affirmatively if you ask about the possibility of some Cosmic Mind or Universal Spirit or “intelligent guiding force behind nature.” For example, in a poll of Danish citizens, 28% said "they believe there is a God," but a separate 47% said "they believe there is some sort of spirit or life force," with only 24% saying, "they do not believe there is any sort of spirit, God or life force."

Therefore you should never be surprised to hear a conversation like the one below.

Joe: Do you believe in God?
Jim: God? Angry old guy on a throne in the clouds? I don't believe in that kind of bull.
Joe: Okay, I got you. But what about some intelligent ordering principle or mind guiding the universe to a purposeful result?
Jim: Well, sure, there's probably something like that.

When I do a Google search for the definition of naturalism, the first definition I get is "the philosophical belief that everything arises from natural properties and causes, and supernatural or spiritual explanations are excluded or discounted." Such a belief is synonymous with materialism. Page 3 of the poll states, "Only minorities of atheists or agnostics in each of our countries appear to be thoroughgoing naturalists." On page 13 the poll indicates that there is a substantial minority of atheists (about 10% to 30%) who believe in life after death.

The poll had a pretty good sample size: 900 atheists and agnostics were polled in each of several countries, and in each country 200 members of the general population were polled, with those 200 having characteristics matching those of the country as a whole.
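
To put "pretty good sample size" in rough numbers, here is a minimal back-of-the-envelope sketch (my own arithmetic, not anything from the poll itself) of the 95% margin of error that such sample sizes imply, using the standard formula for a proportion:

```python
# Rough 95% margin of error for a polled proportion: 1.96 * sqrt(p*(1-p)/n).
# The worst case is p = 0.5, which maximizes the sampling variance.
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for proportion p with sample size n."""
    return z * math.sqrt(p * (1 - p) / n)

# n = 900 atheists/agnostics per country; n = 200 general-population respondents
for n in (900, 200):
    moe = margin_of_error(p=0.5, n=n)
    print(f"n = {n}: about +/- {moe * 100:.1f} percentage points")
```

The atheist/agnostic figures thus carry a margin of error of roughly 3 percentage points, and the general-population figures roughly 7, which is small enough that the large gaps discussed above are unlikely to be mere sampling noise.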


Wednesday, June 19, 2019

Blame Mainly Authors for Junk Science (Which Is Everywhere)

A remarkable article by Alex Gillis entitled “The Rise of Junk Science” was recently published. The story is told in simplistic “good guy/bad guy” terms in which the blame is put on a group of publishers who supposedly are not restrictive enough in excluding poor-quality scientific papers. The story tells us that in order to make money while incurring very low operating costs, certain publishers of scientific papers seem to be doing very little peer review to exclude bad papers. A remarkable claim is attributed to a professor named Eduardo Franco:

"These companies have become so successful, Franco says, that for the first time in history, scientists and scholars worldwide are publishing more fraudulent and flawed studies than legitimate research—maybe ten times more. Approximately 10,000 bogus journals run rackets around the world, with thousands more under investigation, according to Cabell’s International, a publishing-services company. 'We’re publishing mainly noise now,' Franco laments. 'It’s nearly impossible to hear real signals, to discover real findings.'"

I know of no hard facts that substantiate these claims by Franco, which I suspect are exaggerated. I also failed to find any justification in the article for the claim that great numbers of “bogus journals” are running “rackets.” The only thing the article describes is scientific journals that publish scientific papers without doing much to exclude bad ones.

Let us imagine that you start an open-access scientific journal with a non-restrictive publication policy. You decide that anyone who writes up a scientific experiment can publish it in your journal. Are you guilty of running a “bogus journal” and running a racket because you do not get scientists to peer-review the work submitted? I think not. You are simply someone who has started a journal with a publication policy that differs from social norms. Of course, if you claim that your journal is peer-reviewed but you do not actually engage in peer review (by having scientists review submitted articles), that would be bogus, because it would be a misrepresentation.

Peer review of scientific papers has always been a mixture of good and bad. The good done by peer review is that some bad papers get excluded from publication. But it is not at all true that peer review is an effective quality-control system. One reason is that peer review does not involve reviewing the source data behind an experiment.

Imagine you submit to a scientific journal a paper describing an experiment involving animals. If your paper is being peer-reviewed, the reviewer does not come over to your laboratory and ask to check the data you used to write up your paper. The peer-reviewer does not ask to see your daily log book or see photographs you took to document the experiment. Instead, the peer-reviewer assumes the honesty of the person writing the paper.

So what types of things are excluded by peer-reviewers? Things like this:
  1. Obvious logical errors or obvious procedural errors that can be detected by reading the paper.
  2. Obvious mathematical errors that can be found in the paper.
  3. Deviations from belief customs of scientists. A paper may be rejected by peer-reviewers mainly because it presents evidence against a cherished belief of scientists, or if the paper seems to have sympathy for some idea that is a taboo in the scientific community.
  4. Papers producing null results, which fail to confirm the hypothesis they were testing. Such papers are very often excluded on the grounds of being uninteresting, and sometimes excluded because a scientist would prefer to believe the hypothesis is correct. 
Because peer review acts like a censorship system, it does great harm. Peer review helps to keep scientists in “filter bubbles” in which they read only about results that are consistent with their worldviews. The scientist reading his peer-reviewed journal, and seeing only results consistent with a materialist worldview, may be like a 1970s Soviet Union citizen reading his daily edition of Pravda, and reading only information compatible with a Marxist-Leninist worldview. The exclusion of null results (experiments that did not confirm the hypothesis tested) is a very great problem, one that often leads scientists to think certain effects are more common, or certain claims better substantiated, than they really are.

And since you can't very effectively police bad scientific papers without doing a detailed audit that examines the source data, peer review doesn't do very much to prevent scientific fraud. A more effective system would be one with no peer review except for a small percentage of experimental papers, randomly selected to undergo a thorough audit, with the auditor allowed to conduct detailed interviews with all the experimenters and to inspect the original source data. A scientist would be unlikely to commit fraud if he thought there was a 5% chance his experiment would have to face such a detailed audit.
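
The arithmetic behind that deterrent is simple: even a small per-paper audit chance compounds across a career. Below is a minimal sketch, using the hypothetical 5% audit rate mentioned above:

```python
# Chance that at least one of a scientist's papers gets audited, assuming
# each paper independently has a 5% chance of being randomly selected.
audit_rate = 0.05  # the hypothetical rate from the discussion above

for papers in (1, 10, 20, 50):
    p_at_least_one = 1 - (1 - audit_rate) ** papers
    print(f"{papers:3d} papers: {p_at_least_one:.0%} chance of at least one audit")
```

Over a 50-paper career the chance of escaping every audit falls below 10%, so a would-be fraudster could not count on never being examined.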

Peer review as it has been traditionally practiced is such a mixed bag that it is no obvious evil for a scientific journal to dispense with it altogether and allow unrestricted publication for any scientist presenting a paper. That would let in some more bad papers, but it would also allow the publication of many papers that should have been published but were not, because they were wrongly blocked by a typical peer-review system.

It seems, therefore, that if there are many junk science papers being published, the people we should mainly blame are not publishers failing to uphold dubious peer-review conventions, but instead the scientists who wrote the junk science papers. It's rather silly to be suggesting “there's so much junk science – damn those bad publishers,” when the main person to be blamed for a bad science paper is the author of that paper, not its publisher. 

One big problem with the Gillis article is that it creates a simplistic narrative that may lead you to think that junk science exists almost entirely in junk science journals that do not follow proper peer-review standards. But the truth is that junk science is all over the place.  Very many of the scientific papers published in the most reputable scientific journals are junk science papers. 

There are several reasons why a sizable fraction of the scientific papers published should be called junk science. One reason is that very many scientific papers consist of groundless speculation, flights of fancy in which imaginative guesswork runs wild.  For example, a large fraction of the scientific papers published in cosmology journals, particle physics journals, neuroscience journals and evolutionary biology journals consist of such runaway speculation. 

Another reason is that a sizable fraction of all experimental papers involve sample sizes that are too small. A rule of thumb in animal studies is that at least 15 animals should be used in each study group (including the control group) in order to have moderately compelling evidence without a high chance of a false alarm. This guideline is very often ignored in scientific studies, which frequently use a much smaller number of animals. In this post I give numerous examples of memory studies that were guilty of such a problem.

The issue was discussed in a paper in the journal Nature entitled “Power failure: why small sample size undermines the reliability of neuroscience.” The article tells us that neuroscience studies tend to be unreliable because they use sample sizes that are too small. When the sample size is too small, there is too high a chance that the effect reported by a study is just a false alarm.
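
The point can be illustrated with a simple Monte Carlo sketch (my own illustration, not anything from the Nature paper): even when a fairly large effect really exists, small study groups miss it much of the time.

```python
# Monte Carlo sketch: how often does a two-sample t-test detect a true
# effect of size d = 0.8 (conventionally "large") at various group sizes?
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
effect = 0.8      # true group difference, in standard deviations
trials = 5000     # simulated experiments per group size

for n in (5, 8, 15, 30):
    hits = 0
    for _ in range(trials):
        control = rng.normal(0.0, 1.0, n)
        treated = rng.normal(effect, 1.0, n)
        _, p = stats.ttest_ind(control, treated)
        if p < 0.05:
            hits += 1
    print(f"n = {n:2d} per group: effect detected in {hits / trials:.0%} of runs")
```

Under these assumptions the detection rate is very roughly 20% with 5 subjects per group, and only a bit over 50% even at the 15-per-group rule of thumb; and as the Nature paper argues, the lower the power of studies in a field, the greater the chance that any reported positive result is a false alarm.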

The paper received widespread attention, but did little or nothing to change practices in neuroscience. A 2017 follow-up paper found that "concerns regarding statistical power in neuroscience have mostly not yet been addressed." Exactly the same problem exists in the field of psychology.  It is interesting that the peer-review process (supposedly designed to produce high-quality papers) totally fails nowadays to prevent the publication of scientific studies that are probably false alarms because a too-small sample size was used. 

An additional reason for junk science in mainstream journals is that a great deal of biomedical research is paid for by pharmaceutical companies trying to drive particular research results (such as results suggesting the effectiveness of the medicine they are selling). Yet another reason is that the modern scientist is indoctrinated in unproven belief dogmas that he or she is encouraged to support, and often ends up writing dubious papers trying to support these far-fetched ideas. Such papers may commit any of the sins listed in my post, "The Building Blocks of Bad Science Literature."

A widely discussed 2005 paper entitled "Why Most Published Research Findings Are False" stated the following:

"Simulations show that for most study designs and settings, it is more likely for a research claim to be false than true. Moreover, for many current scientific fields, claimed research findings may often be simply accurate measures of the prevailing bias."

There are very many junk-science studies in even the best science journals. Scientists know some of the ways in which the amount of junk science papers can be reduced (such as increasing the sample size and statistical power of experimental studies). But thus far there has been little progress in moving towards more rigorous standards that would reduce the number of junk science papers. 

Many science textbooks contain a great deal of junk science, mixed in with factual statements. Since textbooks do little more than summarize what is written in science journals,  a large number of false published research findings will inevitably result in a huge number of false claims being made in science textbooks. 



"Junk" means "something of little value," and when I speak of "junk science" here I include any science paper that is of little value, for reasons such as being too speculative or too trivial or because of drawing conclusions or making insinuations that are poorly supported or not likely to be true. 

Most scientific claims must be critically scrutinized, and we must always be asking questions such as, "Do we really have proof for such a thing?" and "Why might they have gone wrong when reaching such a conclusion?" and "What alternate explanations are there for the observations?" We cannot simply trust something because it is published by some publisher with a good reputation. 

Sunday, June 16, 2019

Study Hints Your Brain Isn't Making Your Dreams

There are many problems with neuroscience studies that greatly affect their reliability, and that should cause us to believe that a large fraction of them are false alarms. One of the biggest problems is insufficient sample sizes, a problem so bad that it led one neuroscientist to conclude that most published neuroscience studies are false. Another huge problem is the low use of what are called blinding protocols.

Imagine a study involving two different groups of subjects, either humans or animals, who differ in some way. Perhaps one group received some stimulus, and the other did not; or perhaps one group reported some tendency or experience, and the other did not. When a blinding protocol is used, the scientists analyzing these subjects will not know which group the subjects belonged to. So, for example, if 10 test subjects are given a pill, and 10 control subjects are not given the pill, then when scientists are studying data from the subjects they will not know whether the subjects got the pill or did not get the pill.

A blinding protocol such as this can be important for reducing experimental bias. For example, if such a protocol were not used, and you were a scientist asked to compare 10 subjects who you knew were given a pill and 10 subjects who you knew were not given the pill, it would be all too easy for you to show some bias in your analysis caused by your knowledge of whether or not the subjects had been given the pill.
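
Implementing such a protocol is not complicated. Below is a minimal sketch (the names and structure are my own illustration, not any laboratory's actual procedure): each subject gets a random code, the analysts see only coded measurements, and the key linking codes to groups is withheld until the analysis is finished.

```python
# Minimal blinding sketch: analysts receive only coded measurements;
# the code-to-group key is held by a third party until analysis is locked.
import random

random.seed(1)

# 10 subjects got the pill, 10 did not (the example from the text above).
subjects = [(f"P{i}", "pill") for i in range(10)] + \
           [(f"C{i}", "control") for i in range(10)]
random.shuffle(subjects)

blinding_key = {}   # kept away from the analysts
blinded_data = []   # what the analysts actually see: (code, measurement)

for code, (subject_id, group) in enumerate(subjects):
    blinding_key[code] = group
    measurement = random.gauss(0.0, 1.0)  # stand-in for a real measurement
    blinded_data.append((code, measurement))

# Analysts score blinded_data with no idea which subjects got the pill.
# Only after their analysis is finalized is the key applied:
pill_scores = [m for code, m in blinded_data if blinding_key[code] == "pill"]
control_scores = [m for code, m in blinded_data if blinding_key[code] == "control"]
```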

Proper blinding protocols are not often used in neuroscience studies. But recently we had an example of an interesting study that used such a protocol. The study (called the Dream Catcher study) was about whether or not scientists could detect a brain signature of dreaming. Nine subjects went to sleep in a laboratory, each with an EEG reader attached to his or her head. The brain waves of the subjects were recorded, and at random intervals subjects were woken up. The subjects were then asked to recall any dreams they were having when woken up. From such cases a Data Team accumulated 27 cases of dreamless sleep, and 27 cases of dreaming sleep, along with the corresponding EEG readings from the brain.

The EEG readings were then given to some other people in an Analysis Team, consisting of people who did not know whether any particular case they were analyzing was a case of dreaming sleep or a case of dreamless sleep. These “blinded” analysts were asked to predict from the EEG readings whether particular cases were examples of dreaming sleep or dreamless sleep.

The result was null: the predictions of the analysts (using the EEG data) were no better than what would be expected by chance. The experiment is consistent with the hypothesis that your brain is not actually the source of your dreams.
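
With 54 yes/no judgments, whether the analysts beat chance can be checked with a simple binomial test. Below is a minimal sketch of that check; the hit count used here is purely illustrative, not the study's actual figure:

```python
# Did blinded analysts classify dreaming vs. dreamless sleep better than
# a coin flip? Test the hit count against chance (p = 0.5) over 54 cases.
from scipy.stats import binomtest

n_cases = 54     # 27 dreaming + 27 dreamless cases, as in the study
n_correct = 29   # illustrative hit count only, NOT the study's actual number

result = binomtest(n_correct, n=n_cases, p=0.5, alternative="greater")
print(f"{n_correct}/{n_cases} correct, p = {result.pvalue:.2f}")
# Here the p-value comes out far above 0.05 -- indistinguishable from chance.
```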



A previous study by Tononi and others claimed to find some neural correlate of dreaming. In a science news article, Tononi tried to suggest that the difference was due to “trouble” in the methodology of the “Dream Catcher” study, which found no evidence of a neural correlate of dreaming. But such an insinuation does not seem fair. Although it involved only 9 subjects, the “Dream Catcher” study involved 54 different cases, and a sample size of 54 is generally regarded as adequate. The “Dream Catcher” study actually used a protocol much better than that of the Tononi study, which failed to use blinding.

The Tononi study has one claim of predictive success, but only a very dubious one. In one experiment, sleeping people (connected to EEG brain wave readers) were awoken 84 times, based on criteria in their brain waves that might predict they were dreaming. The paper tells us that the vast majority of these observational cases were thrown away, leaving only 36 cases that were used to judge predictive success; and in those 36 cases the prediction was pretty good. But this “picking 36 out of 84” smells like cherry-picking to get the desired predictive success, so it is very unimpressive. Among the reasons the Tononi study gives for discarding observational cases (in the Methods section for Experiment 3) is that the "sleep stage could not be confirmed." That is a most dubious procedure, since the whole idea is to show whether we can tell whether people are dreaming from their brain waves, and all the dreamers had their brain waves continuously monitored. If there actually were a brain wave signal showing dreaming, there should be no reason to throw out most of the observational cases on the grounds that "sleep stage could not be confirmed," since the brain waves in such a case would let you know what the sleep stage was.
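
A simple simulation shows why such post hoc discarding of cases is suspect. This is not a reconstruction of how the Tononi team selected its cases; it merely illustrates that even coin-flip "predictions" look impressive if you keep only the cases where the prediction happened to match:

```python
# Start with 84 chance-level "predictions" of whether subjects were dreaming,
# then keep only 36 cases, preferring the ones where the guess was right.
# Purely illustrative; this is not the study's actual data or procedure.
import random

random.seed(0)

truth = [random.choice([True, False]) for _ in range(84)]    # was the subject dreaming?
guesses = [random.choice([True, False]) for _ in range(84)]  # coin-flip predictions

all_hits = sum(t == g for t, g in zip(truth, guesses))
print(f"All 84 cases: {all_hits}/84 correct ({all_hits / 84:.0%})")

# Post hoc selection: rank cases so that correct guesses come first, keep 36.
ranked = sorted(range(84), key=lambda i: truth[i] != guesses[i])
kept = ranked[:36]
kept_hits = sum(truth[i] == guesses[i] for i in kept)
print(f"Kept 36 cases:  {kept_hits}/36 correct ({kept_hits / 36:.0%})")
```

Chance-level guessing gets about half of the 84 cases right, which is more than 36; so the "kept" subset can easily show perfect or near-perfect accuracy even though the predictions contained no information at all.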

What I would like to see is many more neuroscience experiments using proper blinding protocols. Here is an experiment that neuroscientists have not done (to the best of my knowledge), but should be doing:

  1. Do a brain scan on 20 subjects (called Group A). Tell the subjects to think of absolutely nothing during the brain scan other than the blackness of outer space.
  2. Do a brain scan on 20 other subjects (called Group B). Tell the subjects to do some mental task, such as summing the first 20 integers step by step: 1+2=3, 1+2+3=6, 1+2+3+4=10, 1+2+3+4+5=15, and so on until a total for the first 20 integers is reached.
  3. Shuffle the brain scans, and submit them to some other “blinded” scientists who do not know whether the subjects were in Group A or Group B. Ask the scientists to predict whether the people were actively engaging in calculation, or simply thinking of the blackness of space.

I predict that the predictive success would not actually be better than chance. The likely reason is that the human brain is not actually the cause of human thought. No one has a coherent idea as to how neurons could produce thinking or ideas. There are strong reasons for believing that fast accurate complex thought should be impossible for a brain, because of the very high noise levels in a human brain (as discussed here), and because signal transmission should actually be very slow in a brain (for reasons discussed here).

Here is another experiment that neuroscientists have not done (to the best of my knowledge), but should be doing:
  1. Do a brain scan on 20 subjects (called Group A). Tell the subjects to think of absolutely nothing during the brain scan other than the blackness of outer space.
  2. Do a brain scan on 20 other subjects (called Group B). Tell the subjects to do some task involving memory recall, such as remembering all the vacations they have ever had (or trying to recall everyone they can remember with a name beginning with the letter “A,” everyone they can remember with a name beginning with the letter “B,” and so forth).
  3. Shuffle the brain scans, and submit them to some other “blinded” scientists who do not know whether the subjects were in Group A or Group B. Ask the scientists to predict whether the people were actively engaging in memory recall, or simply thinking of the blackness of space.

I predict that the predictive success would not actually be better than chance. The likely reason is that the human brain is not actually the cause of human recall. Given the short lifetime of synapse proteins and other forms of instability in the brain, no one has a coherent idea as to how a brain could store memories lasting for decades, or how a brain could instantly recall memories without any addressing system that might allow such a thing. There are strong reasons for believing that the brain is not the storage place of human memory. 

Studies with the protocol above have not been done (to the best of my knowledge). But scientists have done studies in which people have their brains scanned while the people are thinking or recalling. Such studies show no real evidence of neural correlates of thinking or neural correlates of recall. Typically the change in signal strength from one brain region to another (which is the most important thing to consider) is no greater than 1%, about what we would expect from random variations. Such results (discussed here) are consistent with what we would expect if the brain is not a storage place for memories, and if the brain is not the source of our thoughts. Memory and thought are very likely faculties of a spiritual aspect of man, something quite distinct from the brain.

An interesting aspect of dreaming is how we can recall names, locations and even intellectual principles during dreaming, even though we may have never thought of such things in years.  Recently I had a dream in which I recalled the principle that you can compute the price of a bond from its yield, a principle I haven't used, read about or thought about in many years.  Nobody has a coherent detailed explanation as to how such abstract principles could ever be stored as neural states or synapse states, and it is all the more impossible to explain how a sleeping person's brain could recall such a principle. 

Thursday, June 13, 2019

Anonymously, Scientists Report the Paranormal as Much as Average People

For a long time, skeptics have attempted to use gaslighting to explain away observations of the paranormal. Gaslighting is when someone tries to shake confidence in an observational report by raising doubts about the mind or observational skills of the observer. A person engaging in gaslighting may try to suggest that you only reported seeing something because:

  1. you were hysterical;
  2. you simply got confused, and “mixed up” the observational details;
  3. you overreacted;
  4. you confabulated, filling in details later that you didn't actually see;
  5. you failed to observe carefully because you're not a “trained observer”;
  6. you're just a “fantasy-prone” person who confuses your imagination with reality;
  7. perhaps you hallucinated;
  8. perhaps you were intoxicated or under the influence of drugs.

Gaslighting is used by skeptics of the paranormal and also by the defense attorneys of accused rapists and sex abusers.



A recent study by Dean Radin and four other scientists helps to discredit claims that reports of the paranormal come mainly from people with poor observational skills. The scientists sent an email to 254,102 people, asking them to fill out a survey about extraordinary experiences they may have had.  Many of the people who got the email were scientists. Of the people who filled out the whole survey, 283 were from the general population, 175 were scientists and 441 were “enthusiasts” who had been identified as having an interest in the paranormal or the extraordinary.

The subjects were asked to answer “Yes” or “No” to questions about a large variety of possible paranormal experiences. Large fractions of the scientists who completed the survey answered “Yes” to some of the questions. For example:


Percent of survey-answering scientists answering “Yes”:

  • Felt as though you were in touch with someone when they were far away from you? 59.2%
  • Received important information through your dreams? 59.4%
  • Known something about the future that you had no normal way to know? 48.0%
  • Felt as though you were really in touch with someone who died? 39.1%
  • Experienced your awareness traveling outside of your body? 27.0%
  • Known information about past events or an individual’s past experiences without any possible way of you knowing it? 43.4%
  • Seen events that happened at a great distance as they were happening? 15.5%
  • Caused your body to float in the air for any period of time using only your mind? 10.9%


From the results above, you might conclude that large fractions of scientists have paranormal experiences. But that conclusion might not be correct. The vast majority of people who got the survey email did not answer it. So it could be that only small percentages of scientists experience such things, and that people having such experiences were more likely to answer the survey.

But the survey does seem to offer important evidence relating to whether people who are “trained observers” are less likely to report paranormal experiences than average people. The comparison below matches answers given by scientists in the survey against answers given by non-scientists.


Percent answering “Yes” (general population vs. scientists):

  • Felt as though you were in touch with someone when they were far away from you? General population 52.7%, scientists 59.2%
  • Received important information through your dreams? General population 43.1%, scientists 59.4%
  • Known something about the future that you had no normal way to know? General population 47.3%, scientists 48.0%
  • Felt as though you were really in touch with someone who died? General population 41.3%, scientists 39.1%
  • Experienced your awareness traveling outside of your body? General population 20.2%, scientists 27.0%
  • Known information about past events or an individual’s past experiences without any possible way of you knowing it? General population 35.2%, scientists 43.4%
  • Seen events that happened at a great distance as they were happening? General population 12.1%, scientists 15.5%
  • Caused your body to float in the air for any period of time using only your mind? General population 7.8%, scientists 10.9%


We see from the comparison above that the percentage of scientists who reported such paranormal experiences was usually higher than the percentage of non-scientists who reported such things. The “trained observers” reported more paranormal experiences than the average person. This debunks claims or insinuations that reports of the paranormal are caused by unreliable observers.

It is interesting that the survey fails to ask about some of the main types of paranormal observations. If I had got the survey, I would have had to answer nothing but “No,” except for one or two questions, even though I have had very many paranormal-seeming observational events (as described here).

The survey should have had several additional questions, including the following:

  • Did you ever have a strong impression that a thought was traveling between your mind and the mind of someone else, even though nothing was spoken, typed or written?
  • Did you ever repeatedly get in photographs some type of  anomaly that you cannot explain?
  • Did you ever see some event around your home or office that you cannot explain, such as a lamp seeming to turn on by itself, a door seeming to open or unlock itself,  or a TV seeming to change channels by itself?
  • Did you ever notice some object that seemed to have appeared in an inexplicable way?
  • Did you ever see with your own eyes some sight that you cannot explain, such as something like a ghost or a UFO?
  • Did you ever have a thought of someone, just before the person unexpectedly called or unexpectedly appeared?

If these questions had been asked, an even higher percentage of respondents would probably have answered in the affirmative.