Precognition: Science or Pseudoscience?
Can people predict the future better than chance?
Imagine you're sitting in a psychology lab, trying to guess which of two curtains hides a randomly selected erotic image — before the computer has even decided where to place it. This is exactly what happened in Daryl Bem's controversial 2011 experiment that seemed to show people could glimpse the future. Now, an international team of researchers has conducted the most transparent replication attempt ever undertaken in psychology, with real-time data streaming, video documentation, and external auditors watching every move. The results paint a fascinating picture of how science grapples with extraordinary claims.
Large replication study found no evidence for ESP abilities.
In 2011, psychologist Daryl Bem published controversial research claiming people could sense future events before they happened. His study sparked fierce debate in psychology, with some calling it groundbreaking and others demanding rigorous replication. An international team of researchers decided to test Bem's claims using the most transparent methods possible.
This study didn't replicate Bem's ESP findings, but it pioneered revolutionary transparency methods that could transform how we verify scientific credibility across all fields.
Key Findings
- Participants guessed correctly 49.89% of the time - actually slightly worse than the 50% expected by pure chance.
- This directly contradicted Bem's original finding of 53.07% accuracy.
- The ultra-transparent methods showed no evidence for extrasensory perception.
What Is This About?
Researchers across multiple laboratories recreated Bem's original experiment where participants tried to predict which side of a computer screen would show an image. They used extraordinary transparency measures: all data was uploaded in real-time, sessions were video-recorded, and external auditors monitored the process. Both believers and skeptics helped design the study to ensure fairness.
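To see why 50% is the benchmark here, consider a quick simulation of the two-option guessing task with no precognition at all. This is an illustrative sketch, not the study's actual software; the trial count and seed are arbitrary placeholders.

```python
import random

def simulate_guessing(n_trials, seed=0):
    """Simulate the two-curtain task assuming pure chance: the target
    side and the participant's guess are independent coin flips, so
    the expected hit rate is 50%."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_trials):
        target = rng.choice(["left", "right"])  # where the image lands
        guess = rng.choice(["left", "right"])   # participant's guess
        hits += (target == guess)
    return hits / n_trials

rate = simulate_guessing(100_000)
print(rate)  # hovers near 0.50, never exactly on it
```

Any real dataset will wobble around 0.50 just as this simulation does, which is why a result like 49.89% needs a formal test rather than eyeballing.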
Multi-laboratory replication of Bem's ESP experiment using enhanced transparency methods including real-time data reporting, video documentation, and external auditing.
Found 49.89% correct guesses versus Bem's original 53.07%, failing to replicate the ESP effect (chance level: 50%).
How Good Is the Evidence?
Participants scored 49.89% correct guesses - essentially at chance level (50%) and far below Bem's claimed 53.07%. This is like flipping a coin and getting heads slightly less than half the time.
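Whether 49.89% differs meaningfully from 50% depends on the number of trials, which this summary does not state. The sketch below shows how such a comparison is typically made - a two-sided z-test against chance using the normal approximation to the binomial - with a hypothetical trial count as a placeholder, not the study's actual figure.

```python
import math

def z_test_vs_chance(hits, trials, p0=0.5):
    """Two-sided z-test of an observed hit rate against chance (p0).
    Normal approximation to the binomial; reasonable for large trials."""
    p_hat = hits / trials
    se = math.sqrt(p0 * (1 - p0) / trials)  # standard error under H0
    z = (p_hat - p0) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Placeholder numbers only - 10,000 trials is assumed for illustration.
z, p = z_test_vs_chance(hits=4989, trials=10_000)
print(z, p)  # small |z|, large p: indistinguishable from chance
```

Under this placeholder, the deviation from 50% is well within what coin-flipping noise produces, which is the sense in which the result is "at chance."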
ESP proponents argue that the sterile laboratory conditions and skeptical atmosphere may have inhibited psychic abilities, and that some individuals might have stronger abilities than others. Skeptics point out that extraordinary claims require extraordinary evidence, and this rigorous replication with enhanced transparency found no effect. Both sides agree the study represents a gold standard for research transparency.
- Mainstream: The study confirms ESP claims lack scientific support and demonstrates proper replication methodology.
- Moderate: While this particular test found no ESP effect, the transparency methods could benefit all psychological research.
- Frontier: Laboratory conditions may not capture real-world psychic phenomena, but the rigorous approach is commendable.
Misconception: 'This study proves ESP doesn't exist.' Reality: This study failed to find evidence for ESP under these specific conditions, but science rarely 'proves' absolute negatives.
To establish ESP scientifically would require multiple independent replications showing consistent effects above chance, ideally with effect sizes large enough to be practically meaningful. This study meets the transparency and rigor criteria but found no supporting evidence.
We found 49.89% successful guesses, while Bem reported a 53.07% success rate, with the chance level being 50%. Thus, Bem's findings were not replicated in our study.
Stance: Skeptical
What Does It Mean?
This study livestreamed every data point as it was collected, with external auditors monitoring the research in real-time - creating the most transparent scientific investigation ever conducted in psychology. It's like having a glass laboratory where every decision and result is visible to the world as it happens.
It's like having a 'gut feeling' about which elevator will arrive first, then testing whether your hunches are actually better than random guessing - and finding they're not.
If these transparency methods become standard practice, they could revolutionize scientific publishing by making research fraud far harder to conceal and allowing real-time verification of results. This could restore public trust in science and accelerate discovery by reducing the time wasted on irreproducible findings. The collaborative approach between believers and skeptics could become a model for investigating any controversial phenomenon.
This study shows how pre-registration and real-time data sharing can make research more trustworthy by preventing researchers from changing their methods after seeing the results.
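One simple way real-time deposition can prevent after-the-fact tampering is to cryptographically chain each deposited record to the one before it. This is an illustrative sketch of the general technique, not the study's actual data pipeline; the record fields are hypothetical.

```python
import hashlib
import json

def chain_records(records):
    """Link each deposited record to the previous one via SHA-256,
    so altering any earlier record changes every later hash and the
    tampering becomes detectable."""
    prev_hash = "0" * 64  # genesis value for the first record
    chained = []
    for rec in records:
        payload = json.dumps(rec, sort_keys=True) + prev_hash
        prev_hash = hashlib.sha256(payload.encode()).hexdigest()
        chained.append({"record": rec, "hash": prev_hash})
    return chained

# Hypothetical session records deposited as they are collected.
sessions = [{"session": 1, "hits": 18, "trials": 36},
            {"session": 2, "hits": 17, "trials": 36}]
log = chain_records(sessions)
```

An auditor who recomputes the chain from the raw records and gets a different final hash knows the data were modified after deposition - the same property that makes pre-registered, streamed data hard to quietly revise.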
Understanding Terms
What This Study Claims
Findings
- The study failed to replicate Bem's ESP findings, with a 49.89% success rate compared to Bem's 53.07%. (Confidence: strong)

Methodology

- Video-documented trial sessions, piloting, checklists, and laboratory logs can help ascertain as-intended protocol delivery. (Confidence: moderate)
- The study demonstrated feasible methodologies for enhancing research credibility through transparency measures. (Confidence: moderate)
- Real-time data deposition and video documentation can effectively monitor research integrity. (Confidence: moderate)
- External research auditors can effectively monitor research integrity in parapsychology studies. (Confidence: moderate)

This summary is for general information about current research. It does not constitute medical advice. The scientific interpretation of these results is debated among researchers. If personally affected, please consult qualified professionals.