
Our future, our universe, and other weighty topics


Wednesday, August 13, 2025

There's Very Much Hard Fraud in Scientific Research, But Even More Soft Fraud

Reading the Science News page on Google News today, I came across a shocking story at the mainstream "Inside Higher Ed" site, an article entitled "The Growing Problem of Scientific Research Fraud." It starts out by saying, "When a group of researchers at Northwestern University uncovered evidence of widespread—and growing—research fraud in scientific publishing, editors at some academic journals weren’t exactly rushing to publish the findings." We then hear a little about what sounds like a "censor the bad news" affair.

But the researchers' paper did eventually get published. We read this:

"Last week Amaral and his colleagues published their findings in the Proceedings of the National Academy of Sciences of the United States of America. They estimate that they were able to detect anywhere between 1 and 10 percent of fraudulent papers circulating in the literature and that the actual rate of fraud may be 10 to 100 times more."

The article quotes a researcher as saying, "If this trend goes unchecked, science will be ruined and misinformation is going to dominate the literature.” Figure 5 in the paper includes this graph:

fraud in scientific research

The "paper mill products" line shows fraudulent papers. The "PubPeer commented" line shows papers suspected of fraud and mentioned on a site where scientists can anonymously discuss suspicions of fraud. The "retracted" line shows papers retracted because of low quality or problems discovered in them. The great majority of junk science papers are never retracted. Notice the trend lines: a larger and larger fraction of scientific papers are fraudulent or junk.

The graph above is from the newly published paper "The entities enabling scientific fraud at scale are large, resilient, and growing rapidly." The link for the paper's PDF file is here.

In previous posts I discussed the issue of fraud in biology research. The posts were these:

An article in the journal Nature asks "How big is science's fake-paper problem?"  We read this:

"An unpublished analysis shared with Nature suggests that over the past two decades, more than 400,000 research articles have been published that show strong textual similarities to known studies produced by paper mills. Around 70,000 of these were published last year alone (see ‘The paper-mill problem’). The analysis estimates that 1.5–2% of all scientific papers published in 2022 closely resemble paper-mill works. Among biology and medicine papers, the rate rises to 3%."

What's so bad about a scientific paper resembling the product of a paper mill? The article gives us a bit of a clue, without explaining it very well. It says, "Paper-mill studies are produced in large batches at speed, and they often follow specific templates, with the occasional word or image swapped." The average reader will have no idea what this refers to, so let me explain.

In computer programming a template is some body of text containing placeholders. The template can be used to make many different versions of a narrative, by simply replacing the placeholders with specific examples.  For example, the page here gives us a template for producing a press release announcing some scientific research. The template starts out like this:

"Scientists today announced that they are the first to successfully demonstrate SCIENTIFIC FINDING. This has long been one of the holy grails of SCIENTIFIC FIELD. 'This finding radically alters our understanding of the field, to say the least,' says FIRST AUTHOR, a SCIENTIFIC FIELDologist from INSTITUTION who led the research. 'We were stunned when we made the discovery. For a few minutes we just didn’t believe what we were seeing,'  says FIRST AUTHOR, then SECOND AUTHOR (a student of FIRST AUTHOR) yelled "We’ve done it!" and we started dancing around the LAB/OBSERVATORY/FIELD SITE. It was very exciting.”

If you are writing a scientific press release, you could manually replace the capitalized phrases to match some new research.  But templates such as these can also be inputs to computer programs. Computer programs can generate countless different versions of the narratives in a template, by doing search and replace of the capitalized words. 

So, for example, imagine you want 10,000 different versions of the story below:

"MALE HUMAN ONE had a good life, but he knew that something was missing. He tried using dating apps to meet Miss Right, but somehow it never worked out. But one day MALE HUMAN ONE had a stroke of luck.  He was at the BUSINESS PLACE ONE where he was a regular customer. He looked to his left, and was stunned by the beauty of a female he had never met before: FEMALE HUMAN ONE. MALE HUMAN ONE felt sure that he wanted to strike up a conversation with the beautiful stranger, but he couldn't think of what to say. He thought of saying TRITE OVERUSED PICKUP LINE, but thought that would never work.  Suddenly, he had a good idea. Walking up to the stranger he said, ORIGINAL WITTY ICE-BREAKING LINE." 

It would be very easy to write a computer program that generated 10,000 different versions of this story. The program could simply run in a loop, each time replacing the phrases MALE HUMAN ONE, FEMALE HUMAN ONE and BUSINESS PLACE ONE with items randomly extracted from a list, or randomly generated. Similarly, the program could replace TRITE OVERUSED PICKUP LINE and ORIGINAL WITTY ICE-BREAKING LINE with items randomly chosen from lists of such lines.
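A minimal Python sketch of such a program, using a shortened version of the story above as the template (the word lists here are invented purely for illustration):

```python
import random

# A shortened version of the story template above; the capitalized
# placeholders are swapped out on each pass of the loop.
TEMPLATE = ('MALE HUMAN ONE was at the BUSINESS PLACE ONE when he saw '
            'FEMALE HUMAN ONE. Walking up to her, he said, '
            '"ORIGINAL WITTY ICE-BREAKING LINE"')

# Note the order: FEMALE HUMAN ONE must be replaced before MALE HUMAN ONE,
# because the latter is a substring of the former.
SUBSTITUTIONS = [
    ("FEMALE HUMAN ONE", ["Maria", "Aiko", "Sofia", "Elena"]),
    ("MALE HUMAN ONE", ["John", "Carlos", "Ravi", "Pete"]),
    ("BUSINESS PLACE ONE", ["coffee shop", "bookstore", "diner"]),
    ("ORIGINAL WITTY ICE-BREAKING LINE",
     ["Is this seat taken, or is it just shy?",
      "I never do this, but hello."]),
]

def generate_story():
    """Fill each placeholder with a randomly chosen item."""
    story = TEMPLATE
    for placeholder, choices in SUBSTITUTIONS:
        story = story.replace(placeholder, random.choice(choices))
    return story

# Run in a loop to produce 10,000 versions of the story.
stories = [generate_story() for _ in range(10_000)]
print(stories[0])
```

With only a handful of items per list this yields fewer distinct stories than runs, but expanding the lists (or generating items programmatically) multiplies the variations, which is the essence of the paper-mill template trick.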

It seems that paper mills are doing something similar to generate phony scientific papers, which amount to phony narratives. We hear in the Nature article that machine-learning software is being used to look for papers that are suspected products of paper mills. One estimate is that 3% of recent biology and medicine papers are fake papers produced by paper mills, a figure higher than for any of the other fields mentioned. We read this: "A June 2022 report by the Committee on Publication Ethics, based in Eastleigh, UK, said that for most journals, 2% of submitted papers are likely to have come from paper mills, and the figure could be higher than 40% for some."

Why would such wrongdoing occur? If you are a scientist living in a "publish or perish" culture, it may be expected that you will author a certain number of papers each year. There is an effect called publication bias, in which scientific journals prefer to publish papers reporting positive results. If you are a scientist doing experiments that have recently produced only null results, you may resort to paying some paper mill to get some result that will have a higher chance of getting published. The paper mill companies are typically in foreign countries, and have discreet names such as Suichow Editorial Services. 

A researcher named Bernhard A. Sabel has developed what he thinks is a fairly simple way to spot paper mill papers in biology and medicine: look for papers whose author email addresses are private or hospital emails rather than college or university emails such as joesmith@harvard.edu. Sabel's technique is entirely different from the technique mentioned earlier in this post.
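As a rough sketch of how such a screen might work in code (the domain lists below are invented for illustration and are not Sabel's actual criteria):

```python
# A rough sketch of the red-flag idea: flag author emails that are not
# at an academic domain. These lists are illustrative only.
ACADEMIC_SUFFIXES = (".edu", ".ac.uk", ".edu.cn", ".ac.jp")
FREE_PROVIDERS = ("gmail.com", "yahoo.com", "hotmail.com", "163.com")

def looks_non_academic(email: str) -> bool:
    """Return True if the email address is a possible red flag."""
    domain = email.lower().rsplit("@", 1)[-1]
    if domain in FREE_PROVIDERS:
        return True  # private email provider: a red flag
    # Anything not ending in a known academic suffix is also flagged.
    return not domain.endswith(ACADEMIC_SUFFIXES)

print(looks_non_academic("joesmith@harvard.edu"))  # False: academic
print(looks_non_academic("drsmith@gmail.com"))     # True: red flag
```

A real screen would need far more complete domain lists (hospital domains, national academic suffixes, and so on); this only conveys the shape of the heuristic.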

The latest version of a paper by Sabel describes the paper mill industry:

"The major source of fake publications are 1,000+ 'academic support' agencies – so-called 'paper mills' – located mainly in China, India, Russia, UK, and USA (Abalkina, 2021; Else, 2021; Pérez-Neri et al., 2022). Paper mills advertise writing and editing services via the internet and charge hefty fees to produce and publish fake articles in journals listed in the Science Citation Index (SCI) (Christopher, 2021; Else, 2022). Their services include manuscript production based on fabricated data, figures, tables, and text semi-automatically generated using artificial intelligence (AI). Manuscripts are subsequently edited by an army of scientifically trained professionals and ghostwriters."

Sabel mentions a case of a paper mill that emailed a scientific journal offering a sum of $1000 if the journal published one of the papers the paper mill (calling itself an editorial services firm) helped to produce. 

A paper by Sabel states this:

"More than 1,000 paper mills openly advertise their services on Baidu and Google to 'help prepare' academic term papers, dissertations, and articles intended for SCI publications. Most paper mills are located in China, India, UK, and USA, and some are multinational. They use sophisticated, state-of-the-art AI-supported text generation, data and statistical manipulation and fabrication technologies, image and text pirating, and gift or purchased authorships. Paper mills fully prepare – and some guarantee – publication in an SCI journal and charge hefty fees ($1,000-$25,000; in Russia: $5,000) (Chawla, 2022) depending on the specific services ordered (topic, impact factor of target journal, with/without faking data by fake 'experimentation')."

Sabel estimates that paper mills are a major business, earning revenue of about a billion dollars per year. He estimates that close to 150,000 papers carry red flags indicating possible paper-mill authorship.

paper mill

Publicly available AI programs such as ChatGPT are making this kind of hard fraud easier. Such programs can do a million and one things. Ask such a program to generate some type of information on some topic, and you might get some largely fictional or largely inaccurate output (sometimes called "AI slop") that can be pasted into a scientific paper. 

The discussion above is largely about what we might call hard fraud, which may be defined as fraud involving data that is fake or made up. But we should not limit a discussion of scientific fraud to hard fraud alone. There is also what we can call soft fraud. Soft fraud in scientific research may be defined as the use of extremely misleading analysis, data-gathering, and data-presentation techniques to give the impression that something was discovered when no such thing occurred. Soft fraud is extremely abundant in scientific research. To read about some of the things going on when soft fraud occurs in scientific literature, read my posts here:

The Building Blocks of Bad Science Literature

50 Types of Questionable Research Practices

To understand the financial factors that drive such hard and soft fraud, you need to "follow the money" by considering factors like those diagrammed below. Read here for an explanation of the diagram. 

Factors Driving Scientific Fraud



Tuesday, August 12, 2025

Sneaky Racism at a Major Science News Site

In the week I wrote this post (auto-scheduled to be published months later), I noticed something that troubled me when I went to a major science news site. The site had a link to an article on an opinion site which has long been guilty of sneaky racism. Since I don't wish to be guilty of the same thing I am criticizing in this post (linking to racist articles and racist sites), I won't include any site names, article names, or URLs in this post.

The article link on the site was an example of a sneaky way to promote racism. The method involves having a prominent link to a rather innocent-looking article on a site that includes racist posts. Readers are thereby drawn to that site, where they may find other posts that promote racism, once the readers start looking around on the site they have been drawn to. 

A few days later I found another example of what seemed like sneaky racism on the same major science news site. That site featured an article describing in detail a scientific paper that seemed to be promoting racism in a sneaky, subtle way. The scientific paper took the approach of conducting an opinion poll in which professors were asked about their agreement with a small set of opinions, one of which was a racist claim about racial inferiority. The paper seemed to have an agenda of promoting the claim that a certain small minority of professors agreed with the racist claim. By talking up this paper, the major science news site thereby promoted (in a sneaky way) the very racist claim that people were being polled about.

Why should we not be terribly surprised at such conduct going on at a major science news site? The answer may be that many such sites serve largely to promote Darwinism, and claims or insinuations of racial inferiority have sporadically appeared within Darwinist culture many times. You can understand why Darwinist culture has so often been associated with racism when you consider why, under the assumptions of Darwinism, the idea of inferior races is very useful.

Darwinism teaches the idea that humans accidentally evolved from some lower species that was like an ape or a chimpanzee. The gigantic gulf between animals and humans has always made such an idea seem preposterous to the person who adequately considers how great such a gulf is. The gulf between mere animals and humans who could build cities, design governments, make scientific discoveries and write great works of philosophy is like the oceanic gulf that separates the United States from Europe, or the gulf between our sun and Alpha Centauri. 

If you are trying to maintain that such a gigantic gulf was crossed by mere accidents of nature such as random mutations, it is helpful if you can advance the idea of inferior humans, the idea of lesser races that are a kind of stepping stone between apes and humans. This is one reason why racism so often was embraced by Darwinists. By believing in inferior human races, they could say to themselves: "It wasn't so great a gulf between apes and humans, because inferior races were kind of like stepping stones along the way."

This is one reason why you will find racism popping up again and again in Darwinist culture: because the idea of inferior human races is useful within the context of Darwinist ideology.  To those who reject such an ideology, the misguided idea of inferior races is not useful. 

racism in science garb

People should study the abundant evidence telling us that the human mind cannot be the product of the brain, and the abundant evidence that the brain itself (and every other human organ and every type of human cell) cannot be explained by genes, which do not specify how to build humans or any of their organs or any type of human cell.  People should also study the abundant evidence for paranormal psychical phenomena, suggesting so strongly that every human is a soul. People adequately making such studies will tend to scorn the illogical idea that some race of humans might be intellectually inferior because of genetic differences.  Through deep study you can understand the very many reasons why evolution does not explain DNA, DNA does not explain bodies, and bodies do not explain minds. Once you have reached such insight, one of many benefits will be that you will help to immunize yourself against the virus of racism dressed up in genetic jargon, the type of thinking a major science news site seemed to recently promote. 

Postscript: I see that weeks after writing this post, the same major science news site is again linking multiple times to the same racist web site. I also see on another day that the same major science news site is linking to a post on another site that cites without disapproval a blatantly racist quote from Darwin's The Descent of Man. The site claims to have the high purpose of educating us about genetics, although a close inspection shows the site is largely about promoting genetic fiddling and lobbying for the pesticide industry. The site includes a link to a racist neuroscience article making false claims that the brains of certain people become "wired for aggression." Then on a later date the same major science news site I mentioned earlier once again featured as one of its main articles a piece promoting racism in an indirect way. It seems there still roams about the monster that is racism in science garb, with the help of the "science news" infosystem. But the monster is clever about donning disguises, so clever you may not notice when it sprays poison in your direction.

Saturday, August 9, 2025

Asking "Why Is the Universe Just Right for Life," Hawking's Collaborator Has No Sensible or Coherent Answer

In Quanta Magazine we recently had a podcast interview with physicist Thomas Hertog, entitled "Why Did the Universe Begin?" In the podcast Hertog discusses some of the issues he discussed in his 2023 New Scientist article "Why Is the Universe Just Right for Life?" Hertog offers no coherent or sensible answers to either of these questions, nor does he even give any coherent-sounding or sensible-sounding speculations while trying to answer them. But along the way in the article and the interview we get some revealing confessions.

Hertog collaborated for years with the well-known physicist Stephen Hawking. After the interviewer says "you said some of the first words that Stephen ever said to you was, the universe we observe appears designed," we have this confession from Hertog:

"I do not believe, and Stephen certainly did not believe, that there was an actual designer or a ‘God’ behind this whole thing. He would rather keep religion out of the physics of the Big Bang. But then on the other hand, the laws of physics as we know them, seem mysteriously fit for life. They seem fine-tuned. It’s as if the universe was destined to bring forward life at some point."

Later in the interview Hertog gives more details, saying this:

"Down at the level of fundamental physics, down at the level of the particle forces and the composition of the universe, the fact that we have three dimensions of space and all that, it seems to me fine-tuned to bring forth life. Change any of these properties of the laws, and quickly you end up with a lifeless universe."

We have here multiple confessions. One is a confession that the universe appears fine-tuned. Then there is the confession that Hawking and Hertog were atheists, the type of people who would not believe in a designed universe no matter how well-designed the universe appeared to be. In his New Scientist article Hertog quotes Hawking as stating, "The universe appears designed." I have added this statement to my very long list of scientist confessions that you can read here, which is the longest collection available anywhere of scientists making confessions they do not normally make. 

In his New Scientist article Hertog elaborates further on cosmic fine-tuning:

"Of all the universes that could exist, ours is spectacularly well configured to bring forth life....The universe’s biofriendliness, it turns out, concerns the laws of physics themselves. There are numerous features in these laws that render the universe just right for living things...But the density of vacuum energy seems to be 10¹²⁰ times lower than physicists expect based on theory. If the vacuum energy density of the universe were just a tad larger, however, its repulsive effect would be stronger and acceleration would have kicked in much earlier. This would have meant that matter was so sparsely distributed that it couldn’t clump together to form stars and galaxies, once again precluding the formation of life. The laws of physics and cosmology have many more such life-engendering properties. It almost feels as if the universe is a fix – a big one."

In his New Scientist article Hertog claims that Hawking rejected the idea of the multiverse as an attempt to explain cosmic fine-tuning. He states this:

"Stephen’s reticence to embrace the multiverse grew stronger in the early 2000s, when it became clear that it didn’t actually explain anything....Multiverse cosmology is like a debit card without a PIN or an IKEA flatpack closet without a manual: useless."

We then have in the article some incoherent mumbo-jumbo that does nothing to explain why we have a habitable universe. It is some cockamamie musing that attempts to throw in a mention of "natural selection" without actually offering anything resembling the so-called "natural selection" of Darwin. We read this:

"Stephen and I came to understand what went on in the early universe as a process akin to that of natural selection on Earth, with an interplay of variation and selection playing out in this primeval environment. Variation happens because random quantum jumps cause frequent small excursions from deterministic behaviour and occasional larger ones. Selection enters the picture because some of these excursions, especially the larger ones, can be amplified and frozen-in thanks to quantum observation. This then gives rise to new rules that help shape the subsequent evolution. The interaction between these two competing forces in the furnace of the big bang produces a branching process – somewhat analogous to how biological species would emerge billions of years later – in which dimensions, forces and particles first diversify and then acquire their effective form when the universe expands and cools. And just like in Darwinian evolution, this introduces a subtle backward-in-time element to our hypothesis. It is as if the collective quantum observations retroactively fix the outcome of the big bang. For this reason, Stephen liked to refer to our idea as 'top-down cosmology'  to drive home the point that we read the fundamentals of the universe ex post facto, somewhat like how biologists reconstruct the tree of life. 'We create the universe as much as the universe creates us,'  he once told me."

This is nonsensical incoherent hogwash. All of the references to natural selection and evolution are spurious, as they refer to a time when life did not exist. We have some scrambled effusion with a little Darwin seasoning sprinkled in to try to make the mess sound a little more sensible. No, we don't create the universe. No, observations after the Big Bang cannot possibly "fix" the Big Bang in terms of making it retroactively compatible with life's eventual appearance. The passage has a reference to "quantum observation," but the reference makes no sense, because what is being talked about is a time when there were no observers. The claim by Hertog at the beginning of the quote that he and Hawking "came to understand what went on in the early universe" is very vain groundless boasting. Jumbled, unreasonable, incoherent-sounding speculation is not understanding something. What went on in the early universe is a mystery a thousand miles over Hertog's head. 

egotism of scientist

The subsequent paragraphs in Hertog's New Scientist article are laughable. He starts talking about the hazy, murky concept of a holographic universe, as if that had some relevance to his discombobulated musings. Hertog isn't making a speck of sense when he says this:

"In this cosmological setting, it turns out it is the dimension of time that holographically pops out. History itself is holographically encrypted. What’s more, time emerges in the ex post facto manner that we had envisioned. The past is contingent on the present in holographic cosmology, not the other way around. In a holographic approach to cosmology, venturing far back in time means taking a fuzzy look at the cosmological hologram. It is like zooming out, an operation whereby we discard more and more of the entangled information that the hologram encodes. Holography suggests that not only time, but also the physical laws that shape our universe, disappear back into the big bang."

This statement, like the previous paragraph I quoted by Hertog, belongs in some compilation that we might entitle "scientists shoveling incoherent or unbelievable baloney and BS." In the more recent Quanta Magazine podcast interview, we don't get anything any better. Hertog gives us this far-from-enlightening statement:

"I guess the crux of the hypothesis that Stephen and I ended up developing is that this process of simplification and unification, maybe it just goes on all the way, and maybe ultimately even the distinction between space and time disappears. That’s the crux of his hypothesis. And the unsettling thing, of course, is that the Big Bang — the origin of time — would also become the origin of law. The laws themselves sort of evaporate going all the way backwards."

Nothing that Hertog says in the Quanta Magazine podcast interview or his New Scientist article makes any sense in terms of helping to explain why the universe is just right for life or why the universe came to exist. In the interview he talks on and on about the concept of a holographic universe, and it all sounds as incoherent, confused and irrelevant as his quote above referencing that concept.  The holographic universe theory is the harebrained speculation that the universe's volume is an illusion. It was stated by physicist Leonard Susskind like this: "The three-dimensional world of ordinary experience—the universe filled with galaxies, stars, planets, houses, boulders, and people—is a hologram, an image of reality coded on a distant two-dimensional surface." A theory so silly does nothing to explain the universe's origin or the fine-tuning of the universe's laws and fundamental constants.

Hertog has no sensible or coherent answer to questions such as "why is the universe just right for life" and "why did the universe begin." There is a sensible and coherent answer to such questions.  It is the answer that a transcendent power and wisdom wanted our universe to exist and caused it to have characteristics that would allow creatures such as us to exist. 

Thursday, August 7, 2025

More Old Periodical Accounts of Clairvoyance or Telepathy

Before it senselessly became taboo for publications such as Scientific American to honestly and fairly report on successful results of telepathy, such publications would sometimes admit the existence of telepathy. My post here documents how in 1941 the editors of Scientific American confessed that telepathy was proven.

The photo below is from a Scientific American article in 1924 that you can read here. On the left of each pair is a drawing that one person attempted to mentally transmit to another. On the right of each pair is a drawing made by the person attempting to receive such telepathy transmissions. We see some striking successes. 

successful ESP test

One problem with tests of the type shown above is that it is hard to quantify the improbability of the result. A different type of test, developed at Duke University, did allow exact quantification of the improbability of the achieved results. The Duke University psychologist Joseph Rhine did many tests using a deck of Zener cards. In such tests an experimenter uses a pack of cards that all look the same on one side. The other side of each card has one of the five symbols shown below. The deck contains an equal number of each symbol.

Below is part of a 1938 newspaper account of the work of Duke University psychologist Joseph Rhine. We have a reference to card-guessing experiments involving Zener cards that have five possible symbols on one side of the card. I have touched up a few illegible characters in the original. With such cards, each correct guess of the symbol on one card has a chance probability of 1 in 5, or 20%.

ESP test results

You can read more details about Rhine's experiments with Hubert Pearce in my post here. I describe the evidence from these tests as "smoking gun" evidence for ESP. 

The account below appeared on page 90 of  the April 15, 1940 edition of Life magazine, which during its heyday was one of the three or four leading weekly magazines in the United States. We read of tests with Zener cards that have five possible symbols, on one side of the card. The chance probability of guessing one of the cards correctly is 1 in 5. 

clairvoyance test

The article correctly states the rough likelihood of guessing a sequence of 9 of these cards correctly, purely by chance. Since there are five possible symbols, each equally likely, the chance of correctly guessing a sequence of 9 cards is 1 in 5 to the ninth power, which is 1 in 1,953,125. The text above seems to suggest that this feat of guessing all the cards correctly occurred in two consecutive attempts to guess 9 cards in a row. The likelihood of that occurring by chance in such a series of 18 guesses would be 1 in 5 raised to the 18th power, which would be 1 in 3,814,697,265,625. You can calculate something like that using one of the large exponents calculators online, such as the one shown below:


We read explicitly in the account that the student Linzmayer guessed correctly 15 of the Zener cards in a row. The likelihood of that occurring purely by chance in a series of 15 guesses would be 1 in 5 to the 15th power, which would be 1 in 30,517,578,125.

On page 90 the Life magazine story gives us the same claim made at the top of this post, that Hubert Pearce correctly guessed the symbols on 25 consecutive Zener cards. We get the same estimate as above, that the probability of such a thing occurring by chance is 1 in 298,023,223,876,953,125. The mathematical basis for this calculation is simple: since there are five possible symbols on each Zener card, the probability is simply 1 divided by 5 raised to the 25th power. The screen below shows the calculation of the number.
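For readers who prefer to check these large-exponent figures themselves, all the powers of 5 quoted in this post can be verified with a few lines of Python:

```python
# Each Zener-card guess has a 1-in-5 chance, so a run of n consecutive
# correct guesses has a chance probability of 1 in 5**n.
for n in (9, 15, 18, 25):
    print(f"{n} cards in a row: 1 in {5**n:,}")
```

Running this confirms the figures above: 1,953,125 for 9 cards, 30,517,578,125 for 15 cards, 3,814,697,265,625 for 18 cards, and 298,023,223,876,953,125 for 25 cards.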


The tests above were mainly tests of clairvoyance, not telepathy. What is the difference? Clairvoyance is typically described as an ability to acquire information in some extra-sensory way, with little or no chance of getting the information through some kind of mind-reading of someone else who knew it. So when a pack of cards is shuffled, and no one knows the next card until it is turned, and someone tries to guess what the card is, that is a test of clairvoyance. But if you and I sit in separate rooms, and I try to mentally transmit a thought or image to you, then such a test is called a test of telepathy. It can actually be hard to separate clairvoyance and telepathy in some tests. For example, if I hold up a card and try to mentally transmit it to you in some other room or some other house, success in such tests could come either from telepathy (you reading my mind) or from some clairvoyance in which you were able to see the cards by extrasensory perception.

On page 92 the same Life magazine article tells us of the remarkable success in a long-distance test carried out over a span of 250 miles. It is a test that can be considered either a test of clairvoyance or a test of telepathy:

ESP test

The wording is ambiguous, leaving us in doubt whether the 16 out of 25 result was obtained on two consecutive days, or whether it was one result of 16 out of 25 spanning two days. I cleared up the ambiguity by searching in the writings of Joseph Rhine, to find his account. It occurs on page 60 of his book Extra-sensory Perception. We read this:

remote ESP test

So the results on the first three tests were:

Test 1: 19 out of 25 correct
Test 2: 16 out of 25 correct
Test 3: 16 out of 25 correct

Rhine does not do a good job of explaining how unlikely this result is. But by using what is called a binomial probability calculator, we can estimate that. The first three tests add up to 51 successes out of 75 guesses, in a test in which the expected chance result is 15 (one fifth of 75). Using the Wolfram Alpha binomial probability calculator, we can calculate the chance of that level of success by using the inputs below:


The probability of getting a result as good as this by chance is calculated above. It is roughly 1 in 10 to the 19th power or 1 in 10,000,000,000,000,000,000.
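The same tail probability can also be computed exactly in pure Python, without an online calculator, by summing the binomial terms for 51 or more hits (`binomial_tail` is just an illustrative helper name, not a library function):

```python
from fractions import Fraction
from math import comb

def binomial_tail(n, k, p):
    """Exact probability of k or more successes in n trials,
    each with success probability p."""
    p = Fraction(p)
    return sum(comb(n, i) * p**i * (1 - p)**(n - i)
               for i in range(k, n + 1))

# 51 or more hits out of 75 guesses, each with a 1-in-5 chance.
prob = binomial_tail(75, 51, Fraction(1, 5))
print(f"chance probability: about 1 in {float(1 / prob):.3g}")
```

Because `Fraction` keeps the arithmetic exact, there is no rounding error even at these tiny probabilities; the result lands in the neighborhood of the 1-in-10-to-the-19th-power figure quoted above.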

The Life magazine article also discusses, on page 95, tests of clairvoyance and telepathy that Joseph Rhine did with the medium Eileen Garrett. These were very unusual, in that the medium would go into a trance and then start speaking as if she were another personality called Uvani. Often when this occurs with mediums, there is an impression of communication with some unearthly realm, such as an afterlife realm. Instead of quoting the Life magazine summary, I will quote the original report of these tests, which appears at the beginning of the December 1934 edition (Volume 3, Issue 2) of the scientific journal Character and Personality. On page 100 we read this report by Rhine:

"The experimentation with the Mrs. Garrett personality began on April 10 and lasted until April 28, approximately three weeks. During this period 14,425 tests or trials were given her in the normal state in clairvoyance and telepathy combined.

The work with the Uvani personality in clairvoyant and telepathic perception began on the 17th of April and lasted until the 25th. The amount of work per day, as well as the number of days, was limited by Uvani’s disinclination toward the experiments, which was in contrast to Mrs. Garrett’s willingness and patience. Only 1,575 trials were obtained with Uvani.

In all there were performed 16,000 trials at clairvoyant and telepathic perception. This number includes all the results, high scores and low. The most probable number of hits expected by chance for this number of trials would be 3,200 but the actual results were 4,018, or 818 hits above the chance mean. This gives an average per 25 of 6.3, and when evaluated for anti-chance significance, a value of X (i.e., deviation divided by probable error) equal to 24.0. This gives odds against the chance hypothesis of such a huge number—one with well over 50 digits—that it is beyond a moment’s question that chance is not the explanation."
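Rhine's arithmetic in the quoted passage can be checked directly. The "probable error" of a binomial count is the classical statistic equal to 0.6745 times the standard deviation, so a short sketch (using the figures Rhine states: 16,000 trials, a 1-in-5 chance per trial, 4,018 hits) reproduces his numbers:

```python
from math import sqrt

# Verify the statistics Rhine reports for the 16,000 Garrett trials.
n, p, hits = 16000, 0.2, 4018
expected = n * p                    # most probable number of hits: 3200
deviation = hits - expected        # 818 hits above the chance mean
per_25 = hits / n * 25             # average hits per run of 25 cards
sd = sqrt(n * p * (1 - p))         # binomial standard deviation
probable_error = 0.6745 * sd       # the classical "probable error"
x = deviation / probable_error     # Rhine's X statistic
print(round(per_25, 1), round(x, 1))  # 6.3 and 24.0, matching Rhine's figures
```

Both the average of 6.3 hits per 25 and the X value of 24.0 come out exactly as Rhine reports them.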

Although he states the experiment's results very exactly, Rhine does not express very clearly how far above chance these results are. But a binomial probability calculator can give a clearer idea of the improbability. Using the Wolfram Alpha binomial probability calculator, the improbability can be calculated with the inputs shown below:

very successful ESP test

The tests involved guessing the symbol shown on a Zener card, which has 5 possible symbols. The tests were done with a deck containing an equal number of each of the five symbols. The probability of guessing correctly 4018 or more out of 16,000 cards is about 7.7 times 10 to the minus 56th power. This is a probability of less than 1 in 10,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000,000. We would never expect chance to produce such a result, even if half of every person's life were spent doing telepathy tests.
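This tail probability can also be double-checked in log space, since the binomial coefficients for 16,000 trials are far too large for ordinary floating-point numbers. A minimal sketch, using the same inputs (16,000 trials, a 1-in-5 chance per trial, 4,018 or more hits):

```python
from math import lgamma, log

n, k, p = 16000, 4018, 0.2

# Log of P(X = k), the largest single term in the upper tail,
# computed with log-gamma to avoid overflow.
log_pk = (lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1)
          + k * log(p) + (n - k) * log(1 - p))

# Sum the rest of the tail relative to that term, using the
# term-to-term ratio of the binomial distribution.
total, ratio = 1.0, 1.0
for j in range(k, n):
    ratio *= (n - j) / (j + 1) * p / (1 - p)
    if ratio < 1e-18:
        break  # remaining terms are negligible
    total += ratio

log10_tail = (log_pk + log(total)) / log(10)
print(log10_tail)  # about -55: a probability around 7.7 times 10 to the minus 56th power
```

The log-space sum comes out near 10 to the minus 55th power, confirming a probability far too small to ever arise by chance.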

The reported test result is one of the best test results ever achieved in a test of telepathy. I can recall only one better result: the result of the Riess test, reported here

The experimental evidence for the reality of telepathy is overwhelming. Also, we have two hundred years of well-documented evidence for the reality of clairvoyance. The refusal of today's materialists to seriously study such evidence is a sign of how fatal these results are to the dogmas they teach.

Another page of the Life magazine article reveals the appalling "head in the sand" attitude of scientists toward telepathy. By 1952 the evidence for telepathy was overwhelming. But in that year Lucien Warner sent out a questionnaire to 515 members of the American Psychological Association. Of the 360 who answered, one sixth (about 60 people) said that ESP was either an established fact or a likely one. But why so few, despite such a wealth of evidence? The reason is plain: two-thirds of those responding confessed that they had never read a single paper on the topic. So there was a "head in the sand" majority who refused even to look at the evidence, and a group of about 60 who apparently had studied some of it and were convinced.