Thursday, June 12, 2025

Science for Sale: The Crisis of Credibility

I am as much of a secular rationalist as they come. Once I learned about the scientific method in middle school, I saw it as the most trustworthy tool humans have for understanding the world: something solid that stood in place of unquestioning faith, and still the best instrument we have for navigating reality. While that may be true in the physical sciences, the unfortunate reality is that we are nowhere near such a place in social psychology. My disenchantment with academia began in college, during my pursuit of a career in research psychology, when I discovered that it surpasses even corporate America in its politics, nepotism, and dishonesty.

Psychological research, particularly social psychology research, is often ineffective and unreliable due to weak statistical standards, a replication crisis, cultural pressures like "publish or perish," and deeper methodological flaws. Capitalism, not rationalism, primarily drives which research gets done and why. Money and ego undermine the very fabric of the scientific method, making it nearly impossible to conduct impartial research within any academic institution. Research is often published not for the sake of understanding the world, but to serve the needs of capital. The consequences have included the widespread dissemination of misleading findings, the marginalization of important but unprofitable lines of inquiry, and the reinforcement of existing power structures. Studies that might challenge dominant economic or social paradigms are underfunded or ignored, while research that aligns with the interests of industry, tech, or pharmaceutical giants is prioritized. This results in a distorted intellectual landscape where knowledge is commodified and public trust in science erodes. Over time, the ideal of science as an objective, truth-seeking endeavor is replaced by a performative shell that mirrors the market forces it is supposed to critically examine.  
 
For this reason, I have a great deal of compassion for the "do your own research" crowd even while wholeheartedly disagreeing with them. Laypeople should not have to navigate multiple paywalled scientific journals to come to their own conclusions.

In the hard sciences of physics, biology, and chemistry, hypothesis testing often requires a p-value of 0.01 or lower to claim statistical significance. (A p-value is the probability of getting results at least as extreme as the ones observed if nothing but random chance were at work; a p-value of 0.01 means there is only a 1% probability that the researchers would have landed on their findings by chance alone.) In the social sciences, however, a p-value of 0.05 is the norm, meaning 1 in 20 "significant" findings could simply be random chance. As a result, in a neuroscience lab (which for all intents and purposes IS biochemistry), you face a much lower burden of proof to demonstrate your findings than in a biochem lab. In fact, something that nobody in your university's Psychology Department will tell you is that many research labs prefer biochem degree holders over graduates of their own programs!
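To make that difference concrete, here is a minimal Python sketch (purely illustrative, not drawn from any study discussed here) that simulates thousands of experiments in which there is no real effect at all and counts how often a "significant" result appears under each threshold:

```python
# Minimal sketch: how often a "significant" result appears when there is
# no real effect, under the 0.05 vs 0.01 thresholds. Purely illustrative.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n_experiments = 10_000   # hypothetical experiments with NO true effect
n_per_group = 30         # participants per group in each experiment

false_pos_05 = 0
false_pos_01 = 0
for _ in range(n_experiments):
    # Both groups are drawn from the same distribution: any "effect" is noise.
    control = rng.normal(0, 1, n_per_group)
    treatment = rng.normal(0, 1, n_per_group)
    _, p = stats.ttest_ind(control, treatment)
    false_pos_05 += p < 0.05
    false_pos_01 += p < 0.01

print(f"'Significant' at p < 0.05: {false_pos_05 / n_experiments:.1%}")  # ~5%, i.e. ~1 in 20
print(f"'Significant' at p < 0.01: {false_pos_01 / n_experiments:.1%}")  # ~1%, i.e. ~1 in 100
```

At the 0.05 cutoff, roughly one "discovery" in twenty is pure noise; at 0.01, it drops to about one in a hundred.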

The p-value of 0.05 was first introduced by Ronald Fisher in his 1925 book Statistical Methods for Research Workers. He suggested it as a convenient cutoff, but he never intended it to become a hard rule; he described it as a flexible guideline for when an experimental result might warrant further investigation. Over time, what began as a suggestion about what might be worth pursuing hardened into a bright-line test of what counts as statistically significant. Psychology academics adopted and enshrined this convention during a period when the field was desperately trying to be seen as a "hard science" - otherwise very few academics would have had anything worth publishing.

This issue becomes more problematic when you factor in the Replication Crisis. In a landmark effort published in 2015, a large team of researchers called the Open Science Collaboration attempted to replicate 100 psychology studies that had appeared in top journals in 2008, mostly from the fields of social psychology and cognitive psychology. While 97% of the original studies had reported statistically significant results, only about 36% of the replications did. The effect sizes (the strength of the findings) were, on average, about half as large in the replications, and many replicated results were weaker, inconsistent, or the effect disappeared outright. The crisis swept up work by a researcher you might be familiar with: Amy Cuddy, who went viral with her TED Talk on doing power poses in front of the mirror to increase confidence.
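A toy simulation helps explain why effects shrink like this on replication. The numbers below are assumptions for illustration (a small true effect, modest samples), not the OSC data: when journals only see the "significant" results of underpowered studies, the published effect sizes are inflated, and faithful replications come back smaller and often non-significant.

```python
# Toy model (not the OSC data): why effects shrink on replication when
# journals only publish "significant" results from underpowered studies.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
true_effect = 0.25     # assumed small true effect, in standard-deviation units
n = 30                 # participants per group (underpowered for an effect this small)
n_studies = 5_000

published_d, replication_sig = [], []
for _ in range(n_studies):
    a = rng.normal(0, 1, n)
    b = rng.normal(true_effect, 1, n)
    _, p = stats.ttest_ind(a, b)
    if p < 0.05:                      # only "significant" originals get published
        d = (b.mean() - a.mean()) / np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2)
        published_d.append(d)
        # exact replication with the same sample size
        a2 = rng.normal(0, 1, n)
        b2 = rng.normal(true_effect, 1, n)
        _, p2 = stats.ttest_ind(a2, b2)
        replication_sig.append(p2 < 0.05)

print(f"True effect size:            {true_effect}")
print(f"Mean published effect size:  {np.mean(published_d):.2f}")   # inflated
print(f"Replications significant:    {np.mean(replication_sig):.1%}")  # well under 100%
```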

I want to point out that while groups like the Open Science Collaboration exist to challenge the exact issue I am describing, they are still entangled in the same ecosystem as everyone else. Their funding sources include the US federal government and a Mark Zuckerberg-backed nonprofit. If a study seriously undermined a major funder's interests, it's plausible that the OSC would deprioritize it, slow-walk its dissemination, or fail to champion it - not from malice, but from institutional survival instinct. This is the very paradox the OSC tries to solve but cannot fully escape under capitalist conditions.

In Amy Cuddy's case, the issue with her research was more than just a failure to replicate. She was found to have used "questionable research practices" (QRPs), meaning she cherry-picked data to achieve statistical significance while ignoring results that didn't fit her narrative. I first heard about this situation in Devon Price's book Unlearning Shame, in which Price explains how he was taught to use QRPs during his own graduate research training. I would argue that the fact that these methods are called "questionable research practices" rather than something more definite - lying, cheating, fraud - says everything a layperson needs to know about how research is conducted in the field of psychology.
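To see how a QRP can inflate findings without anyone technically fabricating a data point, here is a hedged sketch of one common move: measure many outcomes, then report only whichever one "worked." The outcome counts and sample sizes are hypothetical, not taken from any particular study.

```python
# Sketch of one common QRP: measure many outcomes, report only whichever
# one "worked." Numbers are illustrative; no real study is being simulated.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n_experiments = 5_000
n_per_group = 40
n_outcomes = 10          # e.g. ten questionnaire measures, none with a true effect

at_least_one_hit = 0
for _ in range(n_experiments):
    p_values = []
    for _ in range(n_outcomes):
        control = rng.normal(0, 1, n_per_group)
        treatment = rng.normal(0, 1, n_per_group)
        _, p = stats.ttest_ind(control, treatment)
        p_values.append(p)
    # The "questionable" move: keep the smallest p-value, drop the rest.
    at_least_one_hit += min(p_values) < 0.05

print(f"Chance of a publishable 'finding' with zero real effects: "
      f"{at_least_one_hit / n_experiments:.0%}")   # roughly 40%, not 5%
```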

Price writes that questionable research practices are a systemic problem. He recounts the history of the replication crisis, where it came to light that, frankly, many psychological researchers were just "making shit up." I don't know where Price earned his degrees, but as an undergraduate I was required to take a full year of 200-level statistics and research methods courses before I could even enroll in upper-level psychology courses. Perhaps a dose of shame for a doctoral student who does not understand basic statistical methodology isn't entirely unhealthy.

This problem extends to popular science. Many books, including those by PhD holders like Judson Brewer (Unwinding Anxiety), the aforementioned Unlearning Shame, and authors like Malcolm Gladwell (Blink), often state that a single study has "proven" a hypothesis - something I was taught never to do in my 200-level classes. These are not necessarily bad books, but the way these authors discuss findings is a product of the academic culture they inhabit, a culture they in turn help perpetuate. If there is no room for genuinely critical minds in psychological research, and if researchers who try to do the right thing are dragged down by their peers, what is the point?

In fact, there are examples of researchers outside of psychology who were outright shunned for findings we now accept to be true. Take Dr. Kilmer McCully, a Harvard-trained physician and researcher. In the late 1960s, McCully published research suggesting that elevated levels of the amino acid homocysteine, not just cholesterol, could lead to cardiovascular disease - a departure from the prevailing cholesterol-centric view of heart disease at the time. His research was not well received by the medical establishment. Harvard-affiliated Massachusetts General Hospital, where he was employed, did not renew his research grant, and he was forced to leave his position, costing him any chance at tenure. McCully eventually found a position at the Veterans Affairs Medical Center in Providence, where he continued his research. Over time, subsequent studies validated his findings; today, the role of homocysteine in cardiovascular health is widely acknowledged, and McCully is credited with pioneering this field of study.

Nevertheless, there was a systematic attempt to stop his research for daring to use the scientific method for its intended purpose: to pursue truth, regardless of political or institutional consequences. This brings me to some open questions in relation to Price's work on shame. At what point should the gatekeepers of such a system feel genuine shame? When does their complicity demand more than a quiet retraction or a pivot to a new grant cycle? What would it mean for them to truly rectify the damage - intellectually, ethically, even personally - before they allow themselves the luxury of moving on or forgiving themselves?

More pressingly: if data is being manipulated to fit agendas, if replication is avoided because the incentives reward novelty over reliability, and if entire academic careers are built on shaky, unreplicated findings, then are we actually advancing knowledge at all? Or are we just staging an elaborate theater of credibility? At what point can we even say strides are being made in social psychology, or are we simply becoming more efficient at justifying our own biases in increasingly scientific-sounding ways? If this is how people want to operate, what is the point of anything at all? Do words even have meaning? I am sincere in asking these questions.

Imagine we could travel back 2,500 years and converse with Plato, Aristotle, or Socrates. If we tried to explain modern science - quantum mechanics, relativity, germ theory, or even basic chemistry - they would likely be bewildered. These concepts are built on centuries of empirical discovery, mathematical modeling, and experimental replication that simply didn't exist in their time. But if we shifted the conversation to human psychology - virtue, motivation, desire, the divided self - they might understand us surprisingly well. In fact, many of the psychological insights we consider cutting-edge today were already being explored in their dialogues.

These issues are not limited to the US. Consider David Nutt in the UK. He performed a multi-criteria decision analysis in which he ranked alcohol and tobacco as more harmful than LSD, MDMA, or cannabis (Lancet 2007; F test p < 0.001). After he published his findings and pointed out that horseback riding causes more brain injuries than ecstasy, the Home Secretary fired him from his advisory post and ministries withdrew related research contracts.

Academic incentives prioritize novel, flashy findings over careful, incremental science, and over simply checking whether another researcher's findings can be replicated at all. This is one reason QRPs are such a problem in research. Journals favor "positive" results over null findings, creating a strong bias toward publishing studies that find something, even if it's not real! Career advancement hinges on publication count, not research quality, which creates perverse incentives to manipulate data until statistical significance is found. Consider the example of a pharmaceutical company trying to release a new drug. Due to the high stakes and potential liabilities, it is required to run multiple independent audits and trials to confirm the drug's effects. There are institutional checks - FDA oversight, liability law, and third-party replication - that make publishing unreliable or irreproducible findings incredibly risky. Ironically, this means that even in an industry driven by profit, there is often more methodological accountability than in academic social psychology. The absence of real-world consequences for failed replications or methodological shortcuts makes it easy for bad science to flourish in academia specifically.

Even when studies find an effect, the subjects are almost always college psychology students participating in studies run by their own professors for extra credit, often fully aware of what the study is about. There is even an acronym for this overreliance on a particular kind of research participant - WEIRD: Western, Educated, Industrialized, Rich, and Democratic.

For example, when I attended UMass I would often participate in game theory studies run by the Economics department. To get meaningful results in game theory, subjects have to play with real money; people behave differently when you give them fake money - think chips at a casino. The studies filled up fast because they paid reasonably well, and the only reliable way to get a slot was to own a smartphone so you could sign up the moment the recruitment email went out. This was the early 2010s, when not everyone had a smartphone in their pocket, which added yet another layer of class bias to the subject pool. In fact, this was the straw that broke the camel's back for me personally when it came to switching to a smartphone: the money was too good to pass up. When I did sign up, I made sure to do a literature review of whatever I thought the professor was trying to study, because I wanted to win the game and make as much money as possible - I was broke. Other students have told me I was an outlier in doing this, but was I? I consider myself a bright guy, but at UMass I wasn't a big fish in a small pond; I wasn't even a business student, and I was average by UMass academic standards. How many other students were doing something similar?

When these students are tested, the constructs used to measure results are oftentimes vague and poorly defined - constructs such as "grit," "mindsets," "priming," or "brain power" with no clear operational definitions. For example, when I was doing research, we were primarily interested in the N400. The N400 is part of the normal brain response to words and other meaningful (or potentially meaningful) stimuli, including visual and auditory words, sign language signs, pictures, faces, environmental sounds, and smells... so what does it actually mean when an N400 response gets triggered? Well, that's non-concrete and rather fuzzy. The N400 is primarily associated with semantic processing, that is, how the brain understands meaning. When a word fits well into a sentence or context, the N400 is smaller; when a word is unexpected, incongruent, or hard to integrate, the N400 is larger. The normal sentence "She spread the warm bread with butter" produces a small N400 because the word makes sense, while the incongruent sentence "She spread the warm bread with socks" produces a large N400 response.
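For readers who want the mechanics, here is a minimal sketch of how an N400 effect is typically quantified: average the EEG voltage in a window around 300-500 ms after the word appears, separately for congruent and incongruent endings, and compare. The data, sampling rate, and window below are synthetic assumptions, not recordings from my old lab.

```python
# Minimal sketch of quantifying an N400 effect: average the single-electrode
# signal in a post-word window (~300-500 ms) for congruent vs. incongruent
# sentence endings, then take the difference. All data here are synthetic.
import numpy as np

sampling_rate = 500                                 # Hz (assumed)
times = np.arange(-0.2, 0.8, 1 / sampling_rate)     # seconds relative to word onset
window = (times >= 0.3) & (times <= 0.5)            # the N400 window

def mean_amplitude(trials: np.ndarray) -> float:
    """Average voltage in the N400 window across trials (trials x samples)."""
    return float(trials[:, window].mean())

# Synthetic data: incongruent endings ("...bread with socks") get an extra
# negative deflection around 400 ms; congruent endings do not.
rng = np.random.default_rng(3)
n_trials = 60
noise = lambda: rng.normal(0, 2.0, (n_trials, times.size))
n400_bump = -4.0 * np.exp(-((times - 0.4) ** 2) / (2 * 0.05 ** 2))  # microvolts

congruent = noise()
incongruent = noise() + n400_bump

effect = mean_amplitude(incongruent) - mean_amplitude(congruent)
print(f"N400 effect (incongruent minus congruent): {effect:.2f} microvolts")  # negative
```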

Why is this important? As a researcher, I would tell you that the N400 shows the brain automatically and rapidly processes meaning, even when you're not actively trying to. It's one of the clearest examples of how neuroelectric signals reflect real-time cognitive processing. Now why is THAT important? You tell me... What I see is that the absence of strong underlying theories makes replication and extension difficult, and that the field of psychology, in general, often prioritizes interesting phenomena over solid, predictive models.

This overreliance on a specific type of human subject points to a much deeper, more foundational problem. We've been industrialized for less than 300 years, and we're arguably already in a post-industrial age. This brief, historically bizarre period of human existence created a new kind of person, a new way of thinking, and, in a way, a new science to manage them. The question isn't just whether the research is relevant, but whether the entire discipline of psychology, as we know it, is a temporary tool for a temporary way of life. This is a central theme in the work of documentarian Adam Curtis. In The Century of the Self, Curtis argues that modern psychology, particularly the psychoanalytic ideas of Sigmund Freud, was co-opted by corporate and political power structures. Freud's nephew, Edward Bernays, used these insights to invent the field of public relations, pioneering techniques to manage the irrational desires of the masses and channel them into consumerism. In this view, psychology didn't just seek to understand the modern self; it helped to construct it - a self defined by perpetual anxiety and a constant need for fulfillment through purchasing.

This argument is reinforced by scholars like Philip Cushman in books such as Constructing the Self, Constructing America. Cushman posits that post-WWII American society cultivated an "empty self": a person disconnected from community, tradition, and shared meaning. Both psychotherapy and advertising rushed in to fill this void, promising wholeness through therapeutic intervention or the accumulation of goods. If psychology's primary role has been to diagnose and service this historically specific, empty, consumer self, then its findings are not timeless truths about the "human mind." Instead, they are maintenance instructions for the engine of capitalism. This context makes the field's current replication crisis and subservience to market forces seem less like a recent failure and more like the inevitable outcome of its own origin story...

So where does this all leave us? If power poses work for you, then they are good enough. And while I am not a religious person, maybe don't be so critical of others' belief systems just because they are a little less rational than your own. Understanding how rationalism has been co-opted by capitalism can help us see where the "do your own research" crowd is coming from, even if we hold them in contempt. Ultimately, as things stand today, the very systems meant to guarantee intellectual integrity have failed all of us. I think this also helps us understand where Donald Trump was coming from when he said, in relation to the Covid pandemic in June of 2020, "If we stopped testing right now, we'd have very few cases, if any." When everything is for sale, Covid testing is no different from paying a consultant to prove what you already believe, or from hiring expert witnesses who always find in favor of the person who hired them.

As laypeople, what can we do? We can follow a few rules of thumb. Never trust any single person, author, or publication claiming one study "proves" anything. Our faith in science should be evidence-based; this is taught in elementary school but bears repeating: only after many studies from many different sources should we consider that a finding might be valid. It is absolutely appropriate to always be skeptical of any science - that's the point. If you're so inclined, find the original scientific article, which should be well cited. Ask: Has this research been replicated by anyone without a financial incentive? What was the p-value? What perverse incentives might influence these individuals and institutions?

Ultimately, I think our culture places too much value on material wealth, fame, and ego. This problem will not get better until the institutions and societal constructs around us fundamentally change to serve something other than capital.
