The Truth About Dentistry

Dentistry seems to have the lowest evidentiary standards of any health profession. Horrifying. No wonder so many dentists commit suicide.

From The Atlantic in 2019:

When you’re in the dentist’s chair, the power imbalance between practitioner and patient becomes palpable. A masked figure looms over your recumbent body, wielding power tools and sharp metal instruments, doing things to your mouth you cannot see, asking you questions you cannot properly answer, and judging you all the while. The experience simultaneously invokes physical danger, emotional vulnerability, and mental limpness. A cavity or receding gum line can suddenly feel like a personal failure. When a dentist declares that there is a problem, that something must be done before it’s too late, who has the courage or expertise to disagree? When he points at spectral smudges on an X-ray, how are we to know what’s true? In other medical contexts, such as a visit to a general practitioner or a cardiologist, we are fairly accustomed to seeking a second opinion before agreeing to surgery or an expensive regimen of pills with harsh side effects. But in the dentist’s office—perhaps because we both dread dental procedures and belittle their medical significance—the impulse is to comply without much consideration, to get the whole thing over with as quickly as possible.

The uneasy relationship between dentist and patient is further complicated by an unfortunate reality: Common dental procedures are not always as safe, effective, or durable as we are meant to believe. As a profession, dentistry has not yet applied the same level of self-scrutiny as medicine, or embraced as sweeping an emphasis on scientific evidence. “We are isolated from the larger health-care system. So when evidence-based policies are being made, dentistry is often left out of the equation,” says Jane Gillette, a dentist in Bozeman, Montana, who works closely with the American Dental Association’s Center for Evidence-Based Dentistry, which was established in 2007. “We’re kind of behind the times, but increasingly we are trying to move the needle forward.”

Consider the maxim that everyone should visit the dentist twice a year for cleanings. We hear it so often, and from such a young age, that we’ve internalized it as truth. But this supposed commandment of oral health has no scientific grounding. Scholars have traced its origins to a few potential sources, including a toothpaste advertisement from the 1930s and an illustrated pamphlet from 1849 that follows the travails of a man with a severe toothache. Today, an increasing number of dentists acknowledge that adults with good oral hygiene need to see a dentist only once every 12 to 16 months.

Many standard dental treatments—to say nothing of all the recent innovations and cosmetic extravagances—are likewise not well substantiated by research. Many have never been tested in meticulous clinical trials. And the data that are available are not always reassuring.

The Cochrane organization, a highly respected arbiter of evidence-based medicine, has conducted systematic reviews of oral-health studies since 1999. In these reviews, researchers analyze the scientific literature on a particular dental intervention, focusing on the most rigorous and well-designed studies. In some cases, the findings clearly justify a given procedure. For example, dental sealants—liquid plastics painted onto the pits and grooves of teeth like nail polish—reduce tooth decay in children and have no known risks. (Despite this, they are not widely used, possibly because they are too simple and inexpensive to earn dentists much money.)

…Fluoridation of drinking water seems to help reduce tooth decay in children, but there is insufficient evidence that it does the same for adults. Some data suggest that regular flossing, in addition to brushing, mitigates gum disease, but there is only “weak, very unreliable” evidence that it combats plaque. As for common but invasive dental procedures, an increasing number of dentists question the tradition of prophylactic wisdom-teeth removal; often, the safer choice is to monitor unproblematic teeth for any worrying developments. Little medical evidence justifies the substitution of tooth-colored resins for typical metal amalgams to fill cavities. And what limited data we have don’t clearly indicate whether it’s better to repair a root-canaled tooth with a crown or a filling. When Cochrane researchers tried to determine whether faulty metal fillings should be repaired or replaced, they could not find a single study that met their standards.

“The body of evidence for dentistry is disappointing,” says Derek Richards, the director of the Centre for Evidence-Based Dentistry at the University of Dundee, in Scotland. “Dentists tend to want to treat or intervene. They are more akin to surgeons than they are to physicians. We suffer a little from that. Everybody keeps fiddling with stuff, trying out the newest thing, but they don’t test them properly in a good-quality trial.”

* When physicians complete their residency, they typically work for a hospital, university, or large health-care organization with substantial oversight, strict ethical codes, and standardized treatment regimens. By contrast, about 80 percent of the nation’s 200,000 active dentists have individual practices, and although they are bound by a code of ethics, they typically don’t have the same level of oversight.

* Among other problems, dentistry’s struggle to embrace scientific inquiry has left dentists with considerable latitude to advise unnecessary procedures—whether intentionally or not. The standard euphemism for this proclivity is overtreatment. Favored procedures, many of which are elaborate and steeply priced, include root canals, the application of crowns and veneers, teeth whitening and filing, deep cleaning, gum grafts, fillings for “microcavities”—incipient lesions that do not require immediate treatment—and superfluous restorations and replacements, such as swapping old metal fillings for modern resin ones. Whereas medicine has made progress in reckoning with at least some of its own tendencies toward excessive and misguided treatment, dentistry is lagging behind. It remains “largely focused upon surgical procedures to treat the symptoms of disease,” Mary Otto writes. “America’s dental care system continues to reward those surgical procedures far more than it does prevention.”

“Excessive diagnosis and treatment are endemic,” says Jeffrey H. Camm, a dentist of more than 35 years who wryly described his peers’ penchant for “creative diagnosis” in a 2013 commentary published by the American Dental Association. “I don’t want to be damning. I think the majority of dentists are pretty good.” But many have “this attitude of ‘Oh, here’s a spot, I’ve got to do something.’ I’ve been contacted by all kinds of practitioners who are upset because patients come in and they already have three crowns, or 12 fillings, or another dentist told them that their 2-year-old child has several cavities and needs to be sedated for the procedure.”

The Natural Cures They Don’t Want You To Know About!

Surgeon David Gorski writes:

One of the biggest medical conspiracy theories for a long time has been that there exist out there all sorts of fantastic cures for cancer and other deadly diseases but you can’t have them because (1) “they” don’t want you to know about them (as I like to call it, the Kevin Trudeau approach) and/or (2) the evil jackbooted thugs of the FDA are so close-minded and blinded by science that they crush any attempt to market such drugs and, under the most charitable reading of this myth, dramatically slow down the approval of such cures. The first version usually involves “natural” cures or various other alternative medicine cures that are being “suppressed” by the FDA, FTC, state medical boards, and various other entities, usually at the behest of their pharma overlords. The second version is less extreme but no less fantasy-based. It tends to be tightly associated with libertarian and small-government fantasists and a loose movement in medicine with similar beliefs known as the “health freedom” movement, whose adherents posit that, if only the heavy hand of government were removed and the jack-booted thugs of the FDA called off, free market innovation would flourish, the floodgates would open, and these cures, so long suppressed by an overweening regulatory apparatus and held back by the dam of the FDA, would flow to the people. (Funny how it didn’t work out that way before the Pure Food and Drug Act of 1906.) Of course, I can’t help but note that in general, in this latter idea, these fantastical benefits seem to be reserved only for those who have the cash, because, well, the free market fixes everything.

The idea that the FDA is keeping cures from desperate terminally ill people, either intentionally or unintentionally, through its insistence on a rigorous, science-based approval process in which drugs are taken through preclinical work, phase 1, phase 2, and phase 3 testing before approval is one of the major driving beliefs commonly used to justify so-called “right-to-try” laws.

* I can totally understand the patient impetus for these laws, given that I have had family members with terminal illnesses. Unfortunately, however, forces like the Goldwater Institute are taking advantage of the very human desire not to die and not to be forced to watch one’s loved ones die, all in order to push bad legislation. Indeed, the Goldwater Institute uses terminally ill patients desperate for their lives in much the same way Stanislaw Burzynski uses them: As shields and weapons in their battle against the FDA and state medical boards. That’s why, as I’ve morbidly joked before, being against right-to-try in the eye of the public is not unlike being against Mom, apple pie, the American flag, and puppies, hence the reluctance of even doctors doing clinical trials to publicly voice opposition. The most predictable attack against anyone who dares to publicly oppose these bills has been to portray opponents as not just callous, but as practically twirling their mustaches with delight and cackling evilly while watching terminally ill patients die without hope.

* Not surprisingly, libertarians are declaring this a big “win” for patients’ rights. It’s nothing of the sort. The flavor of the arguments can best be seen in two articles from Reason.com’s Nick Gillespie, who is clearly clueless about clinical trials. Basically, he took to Reason.com to gloat, referencing an article from over two weeks ago that he entitled The Upside of Ebola (Yes, There May Actually Be One). It’s about as blatant a move to take advantage of the Ebola outbreak to promote bad right-to-try legislation as I’ve ever seen. The subtitle exults:

A rising death toll, mass panic, scary mortality rate—what could possibly be good about the out-of-control epidemic? It may accelerate the adoption of laws giving patients more power.

Yeah, sure. Thousands of people are dying of a horrible disease in Africa while people in the U.S. are freaking out about the possibility of the virus causing outbreaks right here at home, and Gillespie sees these events, apparently more than anything else, as an opportunity to push his libertarian agenda with respect to medicine:

Ebola’s arrival and seeming spread in America is causing mass panic, tasteless Internet jokes, and incredibly poorly timed magazine covers. Can anything good come out of the disease, which has no known cure and a terrifying mortality rate of 50 percent?

Yes. To the extent it forces a conversation about the regulations surrounding the development of new drugs and the right of terminal patients to experiment with their own bodies, Ebola in the United States may well accelerate adoption of so-called right-to-try laws. These radical laws allow terminally ill patients access to drugs, devices, and treatments that haven’t yet been fully approved by the Federal Drug Administration and other medical authorities. The patients and their estates agree not to bring legal action against caregivers, pharmaceutical companies, and insurers.

You don’t have to be a doctrinaire libertarian—though it helps—to see the value in letting people with nothing left to lose experiment on themselves. They may get a new lease on life. The rest of us get meaningful information that may speed up the development of the next great medical intervention.

Actually, you do rather have to be a doctrinaire libertarian to have a reality distortion field as powerful as Nick Gillespie’s that leads him to write drivel like this. Ebola and right-to-try laws. Hmmmm. How is one thing not like the other (or not related to the other)? First of all, Gillespie’s rationale is a complete non sequitur, clearly designed to take advantage of the Ebola panic to persuade people that right-to-try laws are a good idea, even though such laws would not have had one whit of an effect on the odd patient in the US who might be infected with Ebola. After all, Ebola, as deadly as it is, is not a terminal illness. Second, I can’t help but note that existing FDA mechanisms got ZMapp to American Ebola patients rather rapidly, with no right-to-try laws necessary. But excuse me. What Gillespie says is that Ebola and ZMapp are “forcing a conversation.” I suppose that’s true, but it’s the wrong conversation, a profoundly deceptive conversation, in which an advocate of right-to-try laws shamelessly plays on people’s fears of Ebola to promote these bad laws. Claiming that there is “no good argument against right-to-try” (wrong, wrong, wrong), Gillespie also shamelessly attacks straw men, representing the primary argument against right-to-try as giving patients “false hope.” There are lots of other reasons why these are bad laws.

But Gillespie is just getting warmed up:

But what’s already cruel is the FDA’s drug-testing process. It’s massively expensive and overly long, costing between $800 million and $1 billion to bring a drug to market and taking a decade or more to complete the approval process. There’s every reason to believe that the FDA approval process is killing as many or more people than it saves, especially as the FDA doesn’t allow approvals from Europe and elsewhere to stand in for trials here.

Uh, no. There is not “every reason to believe” anything of the sort. See? Once again, there’s the myth that there are all these fantastic cures out there that the FDA, through its bureaucratic inertia, is keeping from you. I am rather grateful, though, that Gillespie, through his link, makes his intent very clear. The article to which he links is entitled Kill The FDA (Before It Kills Again), in which, referencing the movie Dallas Buyers Club—which I finally saw on cable and was surprised to find that, leaving aside its historical inaccuracies about the AIDS epidemic in the 1980s, taken just as a movie it was at best just OK (I was seriously disappointed)—Gillespie proclaims that the FDA “continues to choke down the supply of life-saving and life-enhancing drugs that everyone agrees will play a massive role not just in reducing future health care costs but in improving the quality of all our lives.” And what is his rationale? Wrap your mind around this:

As my Reason colleague Ronald Bailey has written, this means the FDA’s caution “may be killing more people than it saves.” How’s that? “If it takes the FDA ten years to approve a drug that saves 20,000 lives per year that means that 200,000 people died in the meantime.”

Completely missing from Bailey’s and Gillespie’s equation are the drugs that the FDA doesn’t approve because they lack efficacy and safety: drugs that could have let even more than those 20,000 people a year die, or that could even have actively killed some of them. As even Bailey concedes, it was the FDA that prevented, for example, the approval of thalidomide in the US, sparing Americans the rash of birth defects seen elsewhere in the world. Bailey’s argument is, at best, tenuous and, at worst, misleading.
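To make concrete what that omission does to the arithmetic, here is a minimal back-of-the-envelope sketch. Only the ten-year delay and the 20,000-lives-per-year figures come from Bailey’s quote above; every other number (how many bad candidates accompany each good drug, how much harm an unsafe or ineffective drug would do if approved) is a purely hypothetical placeholder, not an estimate from Bailey, Gillespie, or anyone else.

```python
# Hypothetical back-of-the-envelope comparison of both sides of Bailey's ledger.
# Only the first two numbers come from Bailey's quote; the rest are made-up
# placeholders chosen to illustrate the structure of the argument, not real data.

APPROVAL_DELAY_YEARS = 10      # Bailey's assumed review time
LIVES_SAVED_PER_YEAR = 20_000  # Bailey's assumed benefit of one good drug

# Bailey's side: deaths attributable to delaying one genuinely effective drug.
deaths_from_delay = APPROVAL_DELAY_YEARS * LIVES_SAVED_PER_YEAR  # 200,000

# The omitted side: candidate drugs rejected because they were unsafe or
# ineffective. Suppose, hypothetically, that for every good drug there are
# several bad candidates, each of which would have caused some deaths per year
# had it been approved and sold over the same decade.
BAD_CANDIDATES_PER_GOOD_DRUG = 5      # hypothetical
DEATHS_PER_BAD_DRUG_PER_YEAR = 5_000  # hypothetical
YEARS_ON_MARKET_IF_APPROVED = 10      # hypothetical

deaths_averted_by_rejection = (
    BAD_CANDIDATES_PER_GOOD_DRUG
    * DEATHS_PER_BAD_DRUG_PER_YEAR
    * YEARS_ON_MARKET_IF_APPROVED
)  # 250,000 under these made-up inputs

print(f"Deaths attributable to approval delay:  {deaths_from_delay:,}")
print(f"Deaths averted by rejecting bad drugs:  {deaths_averted_by_rejection:,}")
```

Which of the two totals comes out larger depends entirely on inputs that neither Bailey nor Gillespie supplies, which is exactly the point: the 200,000 figure, standing alone, is not “every reason to believe” anything.

Gillespie notes: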

A 2006 Government Accountability Office (GAO) study found that the number of new drug applications submitted to the FDA between 1993 and 2004 increased by just 38 percent despite an increase in research and development of 147 percent. The mismatch, said GAO, was the result of many factors, ranging from basic issues with translating discoveries into usable drugs, patent law, and dubious business decisions by drug makers. But the problems also included “uncertainty regarding regulatory standards for determining whether a drug should be approved as safe and effective,” a reality that almost certainly made pharmaceutical companies more likely to tweak old drugs rather than go all in on new medicines.

Notice how this is another non sequitur applied to right-to-try laws, given that the answer to this problem would be regulatory clarity, not state-by-state right-to-try laws. Think of it this way: What’s more uncertain? The FDA or having different laws in different states regarding “right to try”? That Gillespie, in an article promoting right-to-try, cites his own earlier piece claiming that the FDA is killing you is a very good indication of what these laws are really about. They are not about helping patients. That’s how they are sold to desperately ill patients, but in reality libertarians like Gillespie and Bailey are using desperately ill patients in the same way that Stanislaw Burzynski is: As a powerful tool to sway public opinion against the FDA and towards their viewpoint.

There’s a reason that certain aspects of these laws are not as widely emphasized in the PR offensive in favor of right to try. It’s because they are pure “health freedom” and libertarian wingnuttery. For example, if you look at the Goldwater Institute template for right to try laws, which, unfortunately, has been the basis of every right to try law passed and under consideration, you’ll notice a number of highly problematic clauses. As I’ve discussed multiple times, there is the requirement that the drug or device need only have passed phase 1 trials, which, given how few of the drugs that pass phase 1 actually make it through to approval, is a really low bar, especially since most phase 1 trials involve fewer than around 25 patients.

More disturbing are the financial aspects. The Goldwater Institute legislative language template (to which the Michigan legislation is virtually identical) allows drug companies to charge patients, with no provision to help patients pay for exercising right-to-try. Indeed, it specifically states that the bill “does not require any governmental agency to pay costs associated with the use, care, or treatment of a patient with an investigational drug, biological product, or device” and that insurance companies do not have to pay for costs associated with the use of such therapies. You know what this means? Insurance companies could refuse to pay for care related to complications that might occur because of experimental treatments. You use an experimental drug and suffer a complication? Too bad! Your insurance company can cut you off! Now, it’s unlikely that government entities like Medicare or Medicaid would do that, but insurance companies certainly will.

Basically, what this law says is: if you can pay for it yourself, with no help, you can have it. If not, you’re SOL. As I’ve pointed out, if there’s one thing worse than dying of a terminal illness, it’s suffering unnecessary complications from a drug that is incredibly unlikely to save or significantly prolong your life and bankrupting yourself and your family in the process. Right-to-try encourages just that. What’s more compassionate: attacking the FDA and degrading the approval process that requires drug safety and efficacy while dangling false hope in front of patients, or standing up and protecting patients from the harm such a policy could cause? Let’s just put it this way: I predict that Stanislaw Burzynski will soon be sending antineoplastons to patients in right-to-try states if, as he keeps bragging, the FDA has allowed him to reopen his clinical trials. After all, his antineoplastons would qualify just fine under right-to-try laws if they’re back under clinical trial. Indeed, if there’s one thing the decades-long battle between the FDA and Burzynski tells us, it’s that the FDA actually bends over way too far backwards to allow manufacturers of dubious drugs to prove themselves.

Finally, the anti-FDA rhetoric, such as that linked to by Gillespie, is a very good indication that the true purpose of right-to-try legislation is to neuter the FDA’s power to control drug approval, thus greatly loosening or even eliminating the hurdles in the drug approval process. It is no coincidence that the strongest, richest, and most vocal proponents of these laws are the Goldwater Institute and libertarians like Nick Gillespie and Ronald Bailey, who think that the FDA is “killing us.” Those articles are a definite tell. It’s also clearly a strategy to get right-to-try passed in as many states as possible and get referendums passed by wide margins to pressure the federal government to weaken the FDA.

In the end, though, right-to-try laws are what I like to call “placebo” laws in that they make people who pass them and support them feel good but don’t actually address the problem that they are supposedly intended to address. Drug approval regulatory authority lies with the FDA; it could completely ignore state right-to-try laws. The FDA also has a compassionate use program and rarely turns down such requests. Admittedly, the application process is long and probably too onerous, but the answer to that problem is not state right-to-try laws. It’s to address the issue at the federal level. I’ve also said in an interview that, now that my state government has foolishly passed a right-to-try law, one of two things is likely to happen: Either nothing, because federal authority trumps state authority, or disaster for patients, doctors, and, yes, biotech and drug companies. Everybody, myself included, wants to help terminally ill patients. After all, I’ve seen too many of them. Right to try and similar misguided efforts, however, are not the way.

The Problem With Challenge Trials

Marcia Angell writes: There are two specific problems with even the most carefully done challenge studies of a Covid-19 vaccine. First, we still know very little about this novel virus, including what hidden or long-term effects it might have on even young, healthy volunteers. And second, will a vaccine be equally effective in the elderly and chronically ill, those who are most vulnerable to Covid-19? Elliott acknowledges this problem, but I think it may be more serious than he implies.

But more generally, I worry about the erosion of our hard-won ethical consensus (starting with the Nuremberg Code) that people should not be used as means to an end if they might be harmed. There is also a risk of bribery or coercion in enrolling volunteers, even if they are officially unpaid. I believe this erosion of our ethical standards, even for a good cause, would be a very unfortunate precedent. We would then be on the proverbial slippery slope downhill.

Carl Elliott replies: First, if research subjects in the United States are sickened or injured in a trial, they may well face financial ruin on top of their illness. Most sponsors require subjects to pay for their own medical care, and virtually none guarantees compensation for pain, suffering, or the inability to work. Second, many of the current industry sponsors of vaccine trials have a record of burying, spinning, and rigging their research. (The list of such sponsors includes Merck, for which Lipsitch consults.) Third, even if a trial leads to a vaccine, we have been given no guarantees that it will be made available to those who can’t afford it, raising the possibility that the sacrifices made in vaccine trials will yield benefits primarily to the rich and well insured.

None of these problems is unique to challenge studies. What is unique about those studies is the extraordinary number of people willing to volunteer without being told about the fine print. This is a recipe for exploitation. Eyal and Lipsitch claim their priority in their article was “to explain how to select participants with minimal likelihood of dying,” yet in the study design they proposed, some subjects up to the age of forty-five would be exposed to the coronavirus after getting only a placebo vaccine.

Like Marcia Angell, I’m disturbed by the use of subjects as a means to an end, especially when the risks are unknown. If I left the door to Covid-19 vaccine challenge studies ajar, I can think of no one better than Angell to close it. In fact, we may need to close a lot more doors. In Phase I trials, researchers routinely use subjects as a means to an end, even when the risks are significant and the subjects are vulnerable. That slope was slippery and we have reached the bottom. We need to find a way back up.

Should antidepressants be used for major depressive disorder?

From a 2019 meta-analysis in the British Medical Journal: Conclusions: The benefits of antidepressants seem to be minimal and possibly without any importance to the average patient with major depressive disorder. Antidepressants should not be used for adults with major depressive disorder before valid evidence has shown that the potential beneficial effects outweigh the harmful effects.

Wikipedia notes:

[Harvard psychology professor Irving] Kirsch’s analysis of the effectiveness of antidepressants was an outgrowth of his interest in the placebo effect. His first meta-analysis was aimed at assessing the size of the placebo effect in the treatment of depression.[7] The results not only showed a sizeable placebo effect, but also indicated that the drug effect was surprisingly small. This led Kirsch to shift his interest to evaluating the antidepressant drug effect.

The controversy surrounding this analysis led Kirsch to obtain files from the U.S. Food and Drug Administration (FDA) containing data from trials that had not been published, as well as data from published trials. Analyses of the FDA data showed the average effect size of antidepressant drugs to be 0.32, clinically insignificant according to the National Institute for Health and Clinical Excellence (NICE) 2004 guidelines, which required a Cohen’s d of at least 0.50 for clinical significance.[8] No evidence was cited to support this cut-off and it was criticised for being arbitrary;[9] NICE removed the specification of criteria for clinical relevance in its 2009 guidelines.[10][11]

Kirsch challenges the chemical-imbalance theory of depression, writing “It now seems beyond question that the traditional account of depression as a chemical imbalance in the brain is simply wrong.” [12] In 2014, in the British Psychological Society’s Research Digest, Christian Jarrett included Kirsch’s 2008 antidepressant placebo effect study in a list of the 10 most controversial psychology studies ever published.[13]

In September 2019 Irving Kirsch published a review in BMJ Evidence-Based Medicine, which concluded that antidepressants are of little benefit in most people with depression and thus they should not be used until evidence shows their benefit is greater than their risks.
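The NICE cutoff mentioned in the excerpt above is expressed in Cohen’s d, which is simply the difference between two group means divided by their pooled standard deviation. Here is a minimal sketch of the calculation; the group means, standard deviations, and sample sizes are made-up placeholders (not Kirsch’s FDA data), chosen so the result lands near the 0.32 average he reported.

```python
import math

def cohens_d(mean_a: float, mean_b: float,
             sd_a: float, sd_b: float,
             n_a: int, n_b: int) -> float:
    """Standardized mean difference between two groups, using the pooled SD."""
    pooled_sd = math.sqrt(
        ((n_a - 1) * sd_a ** 2 + (n_b - 1) * sd_b ** 2) / (n_a + n_b - 2)
    )
    return (mean_a - mean_b) / pooled_sd

# Hypothetical end-of-trial depression scores (lower = less depressed).
# These numbers are illustrative placeholders, not Kirsch's data.
d = cohens_d(mean_a=16.4, mean_b=14.0,   # placebo vs. drug group means
             sd_a=7.5, sd_b=7.5,
             n_a=120, n_b=120)

print(f"Cohen's d = {d:.2f}")                    # 0.32 with these inputs
print("Meets the NICE 2004 cutoff?", d >= 0.50)  # False
```

In plain terms, a d of 0.32 means the average drug-treated patient ends up about a third of a standard deviation better off than the average placebo-treated patient.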

Marcia Angell writes in the New York Review of Books:

It seems that Americans are in the midst of a raging epidemic of mental illness, at least as judged by the increase in the numbers treated for it. The tally of those who are so disabled by mental disorders that they qualify for Supplemental Security Income (SSI) or Social Security Disability Insurance (SSDI) increased nearly two and a half times between 1987 and 2007—from one in 184 Americans to one in seventy-six. For children, the rise is even more startling—a thirty-five-fold increase in the same two decades. Mental illness is now the leading cause of disability in children, well ahead of physical disabilities like cerebral palsy or Down syndrome, for which the federal programs were created.

A large survey of randomly selected adults, sponsored by the National Institute of Mental Health (NIMH) and conducted between 2001 and 2003, found that an astonishing 46 percent met criteria established by the American Psychiatric Association (APA) for having had at least one mental illness within four broad categories at some time in their lives. The categories were “anxiety disorders,” including, among other subcategories, phobias and post-traumatic stress disorder (PTSD); “mood disorders,” including major depression and bipolar disorders; “impulse-control disorders,” including various behavioral problems and attention-deficit/hyperactivity disorder (ADHD); and “substance use disorders,” including alcohol and drug abuse. Most met criteria for more than one diagnosis. Of a subgroup affected within the previous year, a third were under treatment—up from a fifth in a similar survey ten years earlier.

Nowadays treatment by medical doctors nearly always means psychoactive drugs, that is, drugs that affect the mental state. In fact, most psychiatrists treat only with drugs, and refer patients to psychologists or social workers if they believe psychotherapy is also warranted. The shift from “talk therapy” to drugs as the dominant mode of treatment coincides with the emergence over the past four decades of the theory that mental illness is caused primarily by chemical imbalances in the brain that can be corrected by specific drugs. That theory became broadly accepted, by the media and the public as well as by the medical profession, after Prozac came to market in 1987 and was intensively promoted as a corrective for a deficiency of serotonin in the brain. The number of people treated for depression tripled in the following ten years, and about 10 percent of Americans over age six now take antidepressants. The increased use of drugs to treat psychosis is even more dramatic.

Marcia Angell follows up in the July 14, 2011 issue:

One of the leaders of modern psychiatry, Leon Eisenberg, a professor at Johns Hopkins and then Harvard Medical School, who was among the first to study the effects of stimulants on attention deficit disorder in children, wrote that American psychiatry in the late twentieth century moved from a state of “brainlessness” to one of “mindlessness.” By that he meant that before psychoactive drugs (drugs that affect the mental state) were introduced, the profession had little interest in neurotransmitters or any other aspect of the physical brain. Instead, it subscribed to the Freudian view that mental illness had its roots in unconscious conflicts, usually originating in childhood, that affected the mind as though it were separate from the brain.

But with the introduction of psychoactive drugs in the 1950s, and sharply accelerating in the 1980s, the focus shifted to the brain. Psychiatrists began to refer to themselves as psychopharmacologists, and they had less and less interest in exploring the life stories of their patients. Their main concern was to eliminate or reduce symptoms by treating sufferers with drugs that would alter brain function. An early advocate of this biological model of mental illness, Eisenberg in his later years became an outspoken critic of what he saw as the indiscriminate use of psychoactive drugs, driven largely by the machinations of the pharmaceutical industry.

When psychoactive drugs were first introduced, there was a brief period of optimism in the psychiatric profession, but by the 1970s, optimism gave way to a sense of threat. Serious side effects of the drugs were becoming apparent, and an antipsychiatry movement had taken root, as exemplified by the writings of Thomas Szasz and the movie One Flew Over the Cuckoo’s Nest. There was also growing competition for patients from psychologists and social workers. In addition, psychiatrists were plagued by internal divisions: some embraced the new biological model, some still clung to the Freudian model, and a few saw mental illness as an essentially sane response to an insane world. Moreover, within the larger medical profession, psychiatrists were regarded as something like poor relations; even with their new drugs, they were seen as less scientific than other specialists, and their income was generally lower.

The 10 Most Controversial Psychology Studies Ever Published

From the British Psychological Society:

* 5. Loftus’ “Lost in The Mall” Study
In 1995 and ’96, Elizabeth Loftus, James Coan and Jacqueline Pickrell documented how easy it was to implant in people a fictitious memory of having been lost in a shopping mall as a child. The false childhood event is simply described to a participant alongside true events, and over a few interviews it soon becomes absorbed into the person’s true memories, so that they think the experience really happened. The research and other related findings became hugely controversial because they showed how unreliable and suggestible memory can be. In particular, this cast doubt on so-called “recovered memories” of abuse that originated during sessions of psychotherapy. This is a highly sensitive area and experts continue to debate the nature of false memories, repression and recovered memories. One challenge to the “lost in the mall” study was that participants may really have had the childhood experience of having been lost, in which case Loftus’ methodology was recovering lost memories of the incident rather than implanting false memories. This criticism was refuted in a later study in which Loftus and her colleagues implanted in people the memory of having met Bugs Bunny at Disneyland. Cartoon aficionados will understand why this memory was definitely false.

* 8. The Kirsch Anti-Depressant Placebo Effect Study
In 2008 Irving Kirsch, a psychologist who was then based at the University of Hull in the UK, analysed all the trial data on anti-depressants, published and unpublished, submitted to the US Food and Drug Administration. He and his colleagues concluded that for most people with mild or moderate depression, the extra benefit of anti-depressants versus placebo is not clinically meaningful. The results led to headlines like “Depression drugs don’t work” and provided ammunition for people concerned with the overprescription of antidepressant medication. But there was also a backlash. Other experts analysed Kirsch’s dataset using different methods and came to different conclusions. Another group made similar findings to Kirsch, but interpreted them very differently – as showing that drugs are more effective than placebo. Kirsch is standing his ground. Writing earlier this year, he said: “Instead of curing depression, popular antidepressants may induce a biological vulnerability making people more likely to become depressed in the future.”

* 9. Judith Rich Harris and the “Nurture Assumption”
You could fill a library or two with all the books that have been published on how to be a better parent. The implicit assumption, of course, is that parents play a profound role in shaping their offspring. Judith Rich Harris challenged this idea with a provocative paper published in 1995 in which she proposed that children are shaped principally by their peer groups and their experiences outside of the home. She followed this up with two best-selling books: The Nurture Assumption and No Two Alike. Writing for the BPS Research Digest in 2007, Harris described some of the evidence that supports her claims: “identical twins reared by different parents are (on average) as similar in personality as those reared by the same parents … adoptive siblings reared by the same parents are as dissimilar as those reared by different parents … [and] … children reared by immigrant parents have the personality characteristics of the country they were reared in, rather than those of their parents’ native land.” Harris has powerful supporters, Steven Pinker among them, but her ideas also unleashed a storm of controversy and criticism. “I am embarrassed for psychology,” Jerome Kagan told Newsweek after the publication of Harris’ Nurture Assumption.

* 10. Libet’s Challenge to Free Will
Your decisions feel like your own, but Benjamin Libet’s study using electroencephalography (EEG) appeared to show that preparatory brain activity precedes your conscious decisions of when to move. One controversial interpretation is that this challenges the notion that you have free will. The decision of when to move is made non-consciously, so the argument goes, and then your subjective sense of having willed that act is tagged on afterwards. Libet’s study and others like it have inspired deep philosophical debate. Some philosophers like Daniel Dennett believe that neuroscientists have overstated the implications of these kinds of findings for people’s conception of free will. Other researchers have pointed out flaws in Libet’s research, such as people’s inaccuracy in judging the instant of their own will. However, the principle of non-conscious neural activity preceding conscious will has been replicated using fMRI, and influential neuroscientists like Sam Harris continue to argue that Libet’s work undermines the idea of free will.
