
Meta-epidemiologic consideration of confounding for health care decision making

Ewald, Hannah. Meta-epidemiologic consideration of confounding for health care decision making. 2018, Doctoral Thesis, University of Basel, Faculty of Science.


Official URL: http://edoc.unibas.ch/diss/DissB_12836


Abstract

Plain language summary.
As patients, we all want to believe that there is the right medical solution for every ailment and that our doctor knows best. What we usually don’t know is that our doctor’s knowledge is based on experience and on evidence. However, the evidence can be flawed, exaggerated, or may not actually apply to us. While many things can go wrong in clinical studies, the main focus of this dissertation is the concept of confounding. Confounding occurs when a specific exposure and outcome have a common cause. For example, suppose that more breast cancer patients who receive surgery (the observed “exposure”) survive than patients who receive chemotherapy. Concluding that surgery is better for survival may, however, be confounded by cancer stage, because those who were operated on had a less advanced cancer stage and were thus more likely to survive to begin with. Minimizing the impact of such confounding in research on treatment effects is important because confounding can distort the estimates of a treatment effect and thus lead to wrong conclusions and, ultimately, to wrong treatment decisions.
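To make the example concrete, the following is a small, purely hypothetical Python simulation sketch (not part of the thesis, all numbers invented): surgery is given mostly to patients with early-stage cancer, survival depends only on stage, and yet the crude comparison makes surgery look beneficial until one accounts for stage.

    # Hypothetical illustration of confounding by cancer stage; all numbers are invented.
    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000

    # Confounder: cancer stage (1 = advanced, 0 = early)
    advanced = rng.binomial(1, 0.5, n)

    # Exposure: surgery is far more likely for early-stage disease
    surgery = rng.binomial(1, np.where(advanced == 1, 0.2, 0.8))

    # Outcome: survival depends on stage only; the treatment has no effect at all
    survive = rng.binomial(1, np.where(advanced == 1, 0.3, 0.8))

    # Crude comparison: surgery appears to improve survival
    crude = survive[surgery == 1].mean() - survive[surgery == 0].mean()
    print(f"Crude survival difference (surgery - chemotherapy): {crude:+.2f}")

    # Comparing within each stage removes the spurious benefit
    for label, stage in (("early", 0), ("advanced", 1)):
        m = advanced == stage
        diff = survive[m & (surgery == 1)].mean() - survive[m & (surgery == 0)].mean()
        print(f"Within {label}-stage patients: {diff:+.2f}")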
For many health topics, there are myriads of studies available, and whether or not their results can give us reliable answers to what we want to know depends on a variety of factors. The most important factor is the study design. Randomized controlled trials (RCTs) are the current gold standard for producing evidence for treatment decisions. They measure the causal effect of a treatment versus a control on a specific outcome. The key element is that study participants are randomly assigned to the treatment or the control (which could be a placebo or another treatment). The randomization tries to balance all known characteristics (such as age) and unknown characteristics (such as undiagnosed diseases) of the participants, which means that it also balances all known and unknown confounding factors. The only difference between the participant groups is then the allocation to treatment or control. This would be the perfect study design if the circumstances were ideal, i.e. if every participant adhered to the assigned treatment and stayed in the study until the end. In reality, participants often do not adhere (e.g. because the exercise program of a weight-loss study is too demanding) or become lost to follow-up (e.g. because they moved away or no longer wanted to be in the study). However, not every clinical question can be answered in an RCT. Another important research design is the observational study, in which the exposure of patients to an intervention or a control is not decided by the study investigators (hence “observational”) and may therefore depend on a number of other known and unknown factors, e.g. doctors’ decisions or patients’ preferences. This study design is very prone to confounding and requires careful statistical analysis. Statistical methods can then be used to retrospectively address confounding, including confounding that changes over time. One such statistical method is marginal structural models (MSM). MSM allow a causal interpretation of results under the assumptions that all confounding factors are known, correctly measured, and properly implemented in the statistical models. However, even with the latest statistical methods, RCTs and observational studies may not give the same answer when trying to solve the same question.
Hence, the aims of the doctoral projects were 1) to evaluate the extent to which confounding is actively considered in the conclusions of observational studies; 2) to evaluate the agreement of treatment effects from non-randomized studies using MSM with reported effects from RCTs on the same topic; and 3) to evaluate when MSM are used in RCTs and how these results differ from the main (non-MSM) results of the same trial.
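As a rough illustration of the idea behind MSM described above (again not taken from the thesis), the sketch below uses inverse probability weighting for a single time point: each patient is weighted by the inverse of the probability of the exposure they actually received, given the measured confounder. This assumes, as stated above, that all confounders are known, correctly measured, and correctly modelled; here there is only one, and all data are invented.

    # Minimal sketch of inverse probability weighting, the estimation idea behind
    # simple marginal structural models. Assumes the only confounder (stage) is
    # known and correctly measured; data and names are purely illustrative.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000
    advanced = rng.binomial(1, 0.5, n)                            # measured confounder
    surgery = rng.binomial(1, np.where(advanced == 1, 0.2, 0.8))  # non-randomized exposure
    survive = rng.binomial(1, np.where(advanced == 1, 0.3, 0.8))  # outcome; no true treatment effect

    # Probability of receiving surgery given the confounder (estimated from the data)
    p_surgery = np.where(advanced == 1,
                         surgery[advanced == 1].mean(),
                         surgery[advanced == 0].mean())

    # Weight each patient by the inverse probability of the exposure actually received
    weights = np.where(surgery == 1, 1.0 / p_surgery, 1.0 / (1.0 - p_surgery))

    # In the weighted pseudo-population, exposure is independent of the measured
    # confounder, so the weighted contrast no longer shows a spurious benefit
    ipw_effect = (np.average(survive[surgery == 1], weights=weights[surgery == 1])
                  - np.average(survive[surgery == 0], weights=weights[surgery == 0]))
    print(f"IPW-weighted survival difference: {ipw_effect:+.2f}")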
First, we assessed the scope of the issue within the health professionals’ literature: are authors of scientific papers aware of the problem of confounding for the interpretation of their results, and do they present their results in light of its possible impact? Second, if observational studies use MSM to reduce the impact of confounding and allow a causal interpretation, their results should be similar to those from RCTs on the same clinical question. To assess how well they agree, we used established approaches to compare the effects; for example, we determined how often the effects from both designs indicated concordantly that a treatment is beneficial or not. Third, we conducted an empirical analysis of where and why MSM are used to analyze randomized comparisons, a rather new and emerging approach to addressing confounding within randomized trials, and how these results compare to the non-MSM results of the same trial.
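One simple way such agreement can be summarised, sketched below with invented numbers (not results from the thesis), is to check for each matched pair of estimates whether both point in the same direction and how far apart they are on the ratio scale.

    # Hypothetical pairs of hazard ratios for the same clinical question:
    # one from an MSM-based observational analysis, one from an RCT.
    pairs = [
        (0.70, 0.95),
        (1.20, 0.85),
        (0.60, 0.65),
    ]

    for msm_hr, rct_hr in pairs:
        concordant = (msm_hr < 1) == (rct_hr < 1)   # do both suggest benefit (HR < 1)?
        ratio = msm_hr / rct_hr                     # relative difference between estimates
        print(f"MSM {msm_hr:.2f} vs RCT {rct_hr:.2f}: "
              f"{'concordant' if concordant else 'opposite'} direction, "
              f"ratio of estimates {ratio:.2f}")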
We found that observational studies in general tend to discuss confounding unsatisfactorily or not at all. If confounding was mentioned, it was either deemed irrelevant to the research in question, or the results were not placed in the context of the cautious interpretation they require. Studies that did report possible limitations due to confounding were, however, actually cited more often by other researchers than studies that deemed an influence of confounding unlikely. This suggests that carefully reported research may have more impact on science than other research.
When MSM were applied to observational study data, the effects often pointed in the opposite direction to those from randomized studies on the same research question (i.e. one showed harm and the other benefit of the intervention) and were more favorable to the experimental treatment. This was even more pronounced when the studies focused on informing health care decision making rather than on statistical methodology.
MSM were applied to RCTs to minimize the influence of confounding that arises when study participants do not adhere to the protocol. Within the main publication and the publication reporting the MSM-based results (sometimes the same publication), authors reported on average six analyses for one outcome in the same population at the same point in time. Most of these results, however, pointed in the same direction and had broadly similar effect sizes, which means that the clinical interpretation is often similar.
We can never be certain that we know all confounding factors, have measured them correctly, and have implemented them correctly in the statistical models. Even research that used causal modelling techniques may still come to different answers than RCTs evaluating the same clinical question. Hence, confounding should be acknowledged more carefully in non-randomized research; doing so is not associated with lower citation impact. Results from causal modelling can serve as useful sensitivity analyses that help researchers gain a broader picture of the impact of other influencing factors. Health care decision makers should remain cautious when using non-randomized evidence to guide their decisions.
Advisors: Tanner, Marcel and Hemkens, Lars Gerrit and Fretheim, Atle
Faculties and Departments: 03 Faculty of Medicine > Departement Public Health > Sozial- und Präventivmedizin > Malaria Vaccines (Tanner)
09 Associated Institutions > Swiss Tropical and Public Health Institute (Swiss TPH) > Former Units within Swiss TPH > Malaria Vaccines (Tanner)
UniBasel Contributors: Ewald, Hannah and Tanner, Marcel
Item Type: Thesis
Thesis Subtype: Doctoral Thesis
Thesis no: 12836
Thesis status: Complete
Number of Pages: 1 online resource (x, 98 leaves)
Language: English
Last Modified: 08 Feb 2020 15:00
Deposited On: 10 Dec 2018 13:57
