Randomized, Controlled Trials for Parachutes
Can you randomly assign people to “parachute” versus “no-parachute” treatments to “scientifically” test their effectiveness as a preventive device against “gravitational challenge”?
A satirical meta-analysis in the BMJ illustrates the absurdity of this proposition.
Despite the humor, the article raises two important points:
1) Evidence-based medicine has limitations
Evidence-based medicine–the practice of making medical decisions only on the basis of conclusive scientific evidence–is the ideal. Of course we want our medical decisions to be based on sound scientific findings. What most people don’t realize is that medical evidence is thin (at best). It doesn’t take long working in medical research to discover that we actually know very little about practicing medicine. Human bodies are complex, and diseases typically don’t make just one thing go wrong within the body. If a person has kidney failure and we treat the kidneys, they may develop problems with their heart. We may then be able to fix the kidneys and the heart, but now there is liver toxicity from the drugs. This is just one example among thousands of possible scenarios. Thus, doctors cannot simply follow guidelines and protocols based on the “evidence.” Our complex bodies develop complex problems, some of which have never been seen before. Doctors must be trained to adapt in these situations, think critically about how different pieces of evidence may apply to a unique situation, and act prudently in the face of uncertainty.
2) Although randomized, controlled trials are considered to be the “gold standard” in medical/epidemiological research, they are not without their flaws
Here is a big word to throw out there–equipoise. In epidemiological research, equipoise is the balance between the potential benefit and the potential harm of a given drug, device, or other intervention. We cannot ethically randomize one group of people to parachutes and another to no parachutes and then push them out of a plane to see whether parachutes increase survival. We have good reason to believe parachutes are very effective at what they do, so we can’t send people to their certain death simply to gather “evidence” of their effectiveness. To get around this, we use other study designs to gather the evidence behind “evidence-based” guidelines. These other designs–like cohort studies (following a group of people over time) and case-control studies (comparing a group with a disease to a similar group without it to see how they differ)–suffer from a major flaw: they can’t demonstrate causality. Let’s use cigarettes as an example.
Hypothetically, we think smoking cigarettes causes lung cancer. In fact, we have some very good evidence that it causes lung cancer–so much so that we can’t take two groups of people, have one group smoke like chimneys and the other eat pretzels, and see who gets more lung cancer. But we can do a case-control study. We find a bunch of people who have lung cancer and a bunch of other, similar people who don’t. We sit them down and ask all of them, “Do you smoke cigarettes?” If the lung cancer group has more smokers in it, then we can say there is a link between smoking and lung cancer. BUT, we can’t say that smoking CAUSES lung cancer based on this evidence alone. I know this is going to sound ridiculous when I say it, but we can’t prove (from this evidence) that lung cancer doesn’t cause smoking. Maybe people get lung cancer and say, “Screw it, I’m going to start smoking.” A case-control study can’t prove this didn’t happen. In the case of smoking and lung cancer, the stark difference in lung cancer rates between smokers and non-smokers almost conclusively proves that smoking causes cancer. In addition to studies in humans, we also have evidence from controlled trials in other animals, as well as biological evidence. Usually this much evidence doesn’t accumulate for one particular condition or disease. Generally, differences between cases and controls aren’t very big, and biological evidence may be sparse or impossible to collect.
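To make the case-control comparison concrete, here is a minimal sketch of the measure such a study actually yields: an odds ratio. The counts below are made up purely for illustration (not real smoking data), and note that the odds ratio quantifies association, not causation–exactly the limitation described above.

```python
# Hypothetical case-control counts (illustrative numbers, NOT real data)
cases_smokers, cases_nonsmokers = 80, 20        # lung cancer group
controls_smokers, controls_nonsmokers = 30, 70  # similar group without lung cancer

# Odds of smoking among cases, divided by odds of smoking among controls.
# An odds ratio well above 1 suggests a link between exposure and disease.
odds_ratio = (cases_smokers / cases_nonsmokers) / (
    controls_smokers / controls_nonsmokers
)
print(f"Odds ratio: {odds_ratio:.1f}")  # prints "Odds ratio: 9.3"
```

The calculation says smokers are far more common among cases than controls, but by itself it cannot say which direction the arrow of causality points.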
So, all study designs are flawed, and gathering evidence can be cumbersome. However, the least-discussed aspect of evidence-based medicine is the need for more evidence. It seems blatantly obvious that people want their medical decisions based on scientific findings. Since we don’t currently have much evidence, why don’t we gather more?! Despite this obvious deficiency with an equally obvious solution, much of the squabbling in the “evidence-based medicine” debate comes from doctors who don’t want strict rules or guidelines taking away their autonomy to treat their patients as they see fit. What we need is more evidence. Many questions can be answered through careful research and analysis. But this research and analysis costs money…typically a lot of money. Randomized, controlled trials can easily cost hundreds of millions of dollars (another drawback I didn’t mention earlier). We need to invest more money in generating the evidence for our evidence-based medicine.
Tying it back to electronic medical records
One place we can start is investing in the highest-quality electronic medical records systems. (I know you’re saying, “Wow, how can he tie evidence-based medicine back to electronic medical records? He’s good.”) With more accurate and comprehensive records systems, we can abstract data electronically from those records and look at what conditions a patient had, what treatment they received, and how well the treatment worked. Doing this on a national scale could generate a huge repository of evidence on many, many conditions. Additionally, since different illnesses are treated differently across the nation, natural experiments are occurring daily. We could see how treatments for heart disease in one region differ from those in another and determine which patients do better. Since investments in health information technology are a growing necessity anyway, integrating data analysis tools into them would add only a minor cost that could reap huge rewards in the end.
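The natural-experiment idea above can be sketched in a few lines. The records and field names below are entirely hypothetical (no real EMR schema is assumed); the point is just to show the shape of the analysis: abstract a few fields per patient, then tally outcomes by region and treatment.

```python
from collections import defaultdict

# Hypothetical abstracted records: (region, treatment, survived).
# Field names and values are made up for illustration.
records = [
    ("Northeast", "drug_a", True),
    ("Northeast", "drug_a", False),
    ("Northeast", "drug_a", True),
    ("South", "drug_b", True),
    ("South", "drug_b", True),
    ("South", "drug_b", False),
    ("South", "drug_b", False),
]

# Tally [survived, total] for each (region, treatment) pair
totals = defaultdict(lambda: [0, 0])
for region, treatment, survived in records:
    totals[(region, treatment)][0] += survived
    totals[(region, treatment)][1] += 1

for (region, treatment), (ok, n) in sorted(totals.items()):
    print(f"{region}/{treatment}: {ok}/{n} survived ({ok / n:.0%})")
```

With national-scale data, the same grouping across regions that happen to favor different treatments is what turns routine record-keeping into a source of comparative evidence.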
[original BMJ link via Not Totally Rad]