Envisioning and Implementing New Psychiatric Diagnostic Systems via Causal Modeling
An interview with Dr. Glenn Saxe, clinician, researcher, and computational psychiatrist, on the limitations of our current diagnostic system and how Causal Data Science methodologies can move the needle.
I first met Dr. Saxe through an introduction from one of my senior mentors. In the classic setting of the St. Regis Hotel in NYC, I realized that Dr. Saxe is one of those rare geniuses who has directed his energies with uncommon focus and persistence, compelled to do his level best to help others using all the means at his disposal.
We connected around a shared love of physics and mathematics, though clearly his command of the formalism and dedication in his professional career rendered the conversation asymmetrical — he is a master, and I am an aficionado. Reading his work on Trauma Systems Therapy, and the application of causal data science to mental health, was edifying and inspiring.
His work could be pivotal at a time when mental illness is increasingly salient, when polythetic diagnostic systems, characterized by overlapping and often indistinct clusters of disorders, don't always best serve patient care, and when the rise of machine intelligence creates opportunities for a paradigm shift. Polythetic means having many properties in common, but not all; the term can describe a classification system, a group of organisms, or a definition.
For example, difficulty with attention and cognition is a symptom shared across many different conditions, non-specific and transdiagnostic. Clinical evaluation and even formal diagnostic testing often leave a muddy picture. While diagnosis ought to be precise and specific, foundational for effective treatment planning and implementation, in reality the same person might be comprehensively evaluated by three different experts who agree on core diagnoses such as major depression and anxiety disorders, but differ on less well-defined diagnoses — often as a function of their personal or organizational orientation.
So, for instance, a program which treats trauma may diagnose PTSD, whereas a program which focuses on personality disorders may diagnose the same person with Borderline Personality Disorder. In reality, both may be relevant — while the diagnostic validity is disputed by experts, people in need often are caught in a tug of war.
This is a profound problem as factors including conflict of interest come into play — for example, if someone holds to a specific therapeutic model developed to treat a particular condition, they are likely to diagnose that condition more often, even if they are trying to be “objective”.
With great pleasure, therefore, I present an interview with Dr. Saxe on this state of affairs, and what he ardently believes needs to be done. Refreshingly, he is not only pointing out challenge areas but also offering solutions. Not miracles, but tools and methods which could vastly alleviate suffering and help identify the best ways to direct scarce resources.
Grant H. Brenner: Why do we need to have a better understanding of mental illness? Why are new approaches needed to decrease the burden of mental disorders?
Glenn Saxe: Recent estimates indicate that approximately 970 million people globally are living with mental disorders. The impact of these conditions is profound, affecting individuals, families, and communities in ways that are difficult to quantify. Unfortunately, progress in alleviating the burden of mental disorders has not kept up with advancements in other areas of medicine. As more people seek mental health treatment, we are witnessing an increase in the prevalence of various disorders, which raises significant concerns. To effectively address these trends, it’s essential that we keep an open mind, exploring all possibilities — including the notion that the problem is in the paradigm itself. This is a conclusion I have reached.
GHB: You offer a perspective which many in the field may contest, though you are by no means alone. What is your stance toward the current body of clinical evidence?
GS: We often hear about exciting developments in the field, and many professionals highlight evidence-based treatments as a sign of progress. However, a closer look at this evidence shows crucial gaps. For starters, implementing these treatments in regular care settings is quite challenging, often falling short of the high standards set for the trials. This means that many individuals receiving the treatment in usual care settings receive weaker versions of these treatments than what was tested, so many couldn't be expected to benefit. Furthermore, the results from clinical trials may not be as robust as initially reported. Numerous studies have pointed this out, with one of the largest and most comprehensive analyses published by Leichsenring and colleagues in World Psychiatry (2022) a few years ago. Their meta-analysis examined the effects of interventions from 102 meta-analyses, which included a staggering 3,782 Randomized Clinical Trials (RCTs) with 650,514 participants across various major mental disorders. Hard to get more comprehensive than this. The findings revealed that treatment effects were generally modest, and biased reporting of these effects was quite common. These results have profound implications for the trustworthiness of our body of evidence. However, these results are utterly unsurprising. I believe they were inevitable.
GHB: Why inevitable? I think those results surprised many people.
GS: The results were inevitable because they are entirely consistent with the level of etiological complexity most in the field believe mental disorders have. If we understand many causes contribute to the expression of a mental disorder, it’s not hard to see why a definitive, comprehensive study on the efficacy of mental disorder interventions would show such modest results.
Let’s consider why this would be the case for any mental disorder. Play along with me. Pick any mental disorder, like Major Depression, Schizophrenia, PTSD, Autism, or another that you know a lot about. Now, think about the fewest factors you believe must be involved in the disorder’s expression. You can consider factors from any area — genes, molecules, brain circuits, social influences, cultural impact, developmental considerations, or anything else that comes to mind. Add up all those factors, and let’s keep it on the conservative side. For the disorder you selected, what would you estimate as the minimal number of causes contributing to its expression? Is it around 10, 20, 30, 50, 500?
GHB: For the sake of discussion, let’s go with twenty.
GS: Okay. Twenty causes. That's a reasonable, even modest, number. Now, here's something to know about causal factors. The only way an intervention can decrease the expression of a mental disorder is by successfully targeting a cause of that expression. An intervention targeting a non-causal factor simply won't help — it could even make things worse, for example by causing adverse reactions or delaying definitive treatment.
Now let's talk about why I found the results of the Leichsenring umbrella meta-analysis unsurprising. Let's consider the nature of the interventions they studied: a wide diversity of interventions for mental disorders was studied in almost 4,000 RCTs. These interventions mainly address only one or a few causes of disorders: psychotropic medications targeting serotonin or noradrenaline activity, for example, or psychosocial therapies targeting cognitive distortions or emotional dysregulation. Most of us, like you, believe that mental disorders have more than twenty causes. So, if our interventions only target one or two of them, what about the other eighteen or nineteen?
You might then say: well, maybe those interventions just so happened to target those one or two causes with much larger effects than the others. I'd agree with you that this could result in an intervention's large effect. However, this leaves us with a big nagging question. How, exactly, did the intervention developers know — out of the twenty or so possible causes — which ones had the disproportionately large effects?
Unless mental disorders are a lot less multicausal than most of us believe, it’s hard for me to imagine our interventions would be able to precisely hit those large-effect targets unless they were selected or designed from robust causal knowledge from scientific research. However, our field has acknowledged that it does not possess such a robust body of causal knowledge from its research. So, the fact that we keep publishing reports of our interventions yielding significant effects in our clinical trials lacks credibility, to say the least.
GHB: What makes you say the field acknowledges that it does not possess such a robust body of scientific knowledge? The scientific literature on mental disorders is extensive.
GS: The field undoubtedly acknowledged this in 1980 with the release of DSM-III, when it turned away from diagnosis by cause for the explicit reason that its body of causal knowledge was insufficient to classify patients. The DSM has been revised several times since 1980 but has never returned to a causal nosology, for the same reason. If the field believes its causal knowledge is insufficient for diagnosis, why would it believe it is sufficient to guide treatment?
Yet, it believes the evidence from its clinical trials. As you said, many in the field were surprised or even shocked by the results of the Leichsenring umbrella meta-analysis. If people understood the nature of causality, they wouldn’t be surprised or shocked. They would understand that the only way clinical trials on our interventions for our multicausal disorders could repeatedly find significant effects without scientific guidance is if we are the luckiest field in human history. It’s like winning the lottery every time. If these findings were true, it would make our etiological literature unnecessary because we can achieve such effects without it. A field that is focused on causality is a field that is constrained by reality for the conclusions it draws. The Leichsenring umbrella meta-analysis provided a hard dose of reality for the field, and anyone who was surprised by the results should pause to ask themselves, why?
Now, even though mental disorders may have many causes, that doesn't mean all of them have equally small effects. If this were true, the situation would be pretty hopeless. You'd need to deliver twenty different interventions for your twenty-cause disorder to be able to help your patients. Fortunately, this is not how nature works. Causes can be expected to vary in magnitudes of effect, and in complex systems, a small number of causes should be expected to carry disproportionately large effects. So, the central scientific challenge in our field is to discover those causes with disproportionately large effects so that we can select or develop treatments to target them.
GHB: What is your stance toward some of the "chemical" models of psychiatric illness, like the catecholamine and serotonin hypotheses, which are often used to explain how common antidepressants are supposed to work? We are starting to see there is much more to the story, including the role of the glutamate system — which has to do with learning and plasticity — as well as understanding network models of the brain.
GS: The catecholamine and serotonin hypotheses were proposed in the 1960s and 1970s to explain the causes of depression. This led to the development of tricyclic antidepressants, MAOIs, SSRIs, and similar medications. The results are in. These agents yield clinically significant effects for some, only modest effects for others, and leave a significant proportion of people with depression still burdened by their disorder. These results align entirely with the field's belief in the multicausal nature of mental disorders because even if our multicausal disorders had a modest number of causes with disproportionately large effects, patients should be expected to differ over which ones caused the expression of their disorder. Multicausality puts us in a personalized and precise intervention world.
Your question about the catecholamine and serotonin hypotheses, as well as ketamine and the glutamatergic system, reveals much about our field's progress. You had to go back fifty or sixty years to find causal hypotheses and findings that have truly impacted care. Most medications used in practice today are based on theories regarding norepinephrine, serotonin, dopamine, and mental disorders from so long ago. To my knowledge, ketamine is the first medication based on a different causal theory of depression that is beginning to influence care. Why do we so rarely see a finding from our etiological literature guiding our routine care?
We’ve published tens of thousands of articles yielding countless findings on factors associated with mental disorders. Where are these findings in our clinics? We assert that these countless discoveries, particularly those related to brain imaging, genomics, and molecular factors, showcase our impressive advancements. Consider a finding on any of these factors, or any others the field deems most significant for any mental disorder: genes, molecules, brain circuits, social influences, or anything else. Do we regularly evaluate patients based on that factor and use that information to inform our decisions on how to help them? What purpose do we have for these findings if not that? Here’s the best way to assess the reality of advances in a medical field for any disorder: examine the changes in its diagnostic criteria over time.
GHB: What do you mean? Why should a change in diagnostic criteria be so significant?
GS: Pick any medical disorder that has significantly reduced morbidity or mortality since, say, 1980. It's not hard to find such disorders: heart disease and many cancers, for example. Look at its diagnostic criteria today and back in 1980. You will inevitably see very significant changes over time. Medical fields advance their ability to help their patients by embedding their most significant scientific findings in the diagnostic criteria for their disorders. They memorialize their progress in diagnostic criteria.
Now — I hate to ask — pick any psychiatric disorder in DSM-5 that was also there in DSM-III: Major Depression, Schizophrenia, Anxiety Disorders, or any other disorder. Look at its diagnostic criteria now and forty-five years ago. How much did the criteria change? Can you find evidence that forty-five years of scientific advances are memorialized in the diagnostic criteria? Not much change. Not much memorialization.
Think of what this means. How — other than by informing diagnostic criteria — can a scientific finding benefit a patient? A finding benefits a patient if it helps their clinician classify them into a group of patients sharing a pathological process contributing to their health risk and — especially — into a subcategory of those patients who would respond to an intervention to lessen that risk. In other words, for a scientific finding to benefit a patient, it must indicate factors causal to their health risk and inform their diagnostic classification. So, if our diagnoses preclude classification by cause, then they also preclude our most promising scientific findings from benefiting our patients. Perhaps we shouldn't be so surprised that we can't find our science in our clinics.
We often hear or read the argument that the vast complexities of the disorders we study make advancement much more challenging than for other medical fields. We use this argument to temper expectations, including from those who hope that our research may somehow help alleviate their suffering. You know, the impatient ones who read about an exciting advance in our brain imaging research and go to their doctor hoping a brain scan can help them, or who ask for a blood test for the molecular finding they just read about whose authors said would revolutionize their care.
I agree our progress is challenged by the multicausal nature of mental disorders and, perhaps, we shouldn’t expect a similar level of progress as for other medical disorders for this reason. However, that is not why I strenuously object to using this argument to temper expectations for our progress. I object because our field has not behaved in a way that is consistent with this argument. Why do I say this? Well, if you claim that mental disorders are so incredibly complex that it is exceptionally challenging to make progress, then we should observe evidence of the effort to grapple with this challenge in our empirical and theoretical literature. Such evidence would be seen in the plethora of research articles that could not find evidence to support research hypotheses because the complexity of our disorders makes discovering their true nature so elusive. Such evidence would be seen in many lines of research arriving at dead ends and theories falsified because they proved inconsistent with empirical evidence. Our literature would demonstrate ample proof of our struggle to clarify the nature of our disorders, given the immense challenges this clarification involves. We would see hard-fought advancements built on previous painful failures as we learned from our mistakes. We would have a rich theoretical foundation, continually built on top of our errors as we tirelessly struggle to learn. And, of course, we would make such refinements in our understanding relevant to our patients with corresponding refinements in our diagnostic criteria, and then use clinical data to both improve diagnosis and validate or invalidate the etiological knowledge from which their criteria originated.
I don’t see evidence of such a struggle in our literature, do you? Instead, each decade seems to usher in new waves of seemingly exciting findings based on new and improved measurement methods, such as better brain imaging technology or more sophisticated genomic or molecular analysis. Are we really deepening our understanding in a way that can help our patients?
We can’t have it both ways. If we argue that our disorders are so multicausal that we should be modest in our expectations of progress compared to other medical disorders, we cannot simultaneously reference our scientific literature that predominantly highlights our successes and claim it as evidence of our advancement. I fully agree that our progress will be challenged by the multicausal nature of mental disorder etiology. Therefore, our advances will come from identifying the most rigorous methods to uncover causes from the available data, considering the complexity of disorder etiology. Additionally, because this pursuit presents numerous challenges, we must move forward within a scientific culture that relentlessly focuses on learning from errors and misunderstandings.
GHB: How do we cope with the complexity of multicausality beyond the obvious implications for figuring out what treatments are predictably effective, as we begin to get a clear sense of the true scope of the problem?
GS: Figuring out what treatments are predictably effective defines the clinical application of causal knowledge, whether a disorder is multicausal or singularly causal. When we know a cause of a disorder, then we can predict whether a person with the disorder will respond to an intervention known to target that cause, if the person’s clinical data indicates the cause’s relevance for them.
For instance, if evidence shows that one of the causes of children’s clinically significant anxiety is over-protective parenting and a particular child with clinically significant anxiety has over-protective parents, then we can anticipate that a treatment targeting parental over-protection will help reduce the child’s anxiety. Conversely, treatment not addressing this pattern of parenting will be ineffective for this child (unless it targets another relevant cause).
The approach to multicausality — assuming we have good scientific evidence — is the same. Let's extend our example to the multicausal context. Suppose that in the population of children with clinically significant anxiety, the anxiety for a great proportion of them was caused by at least one of four factors: 1) a variant of Gene X, 2) the activity of a Brain Circuit Y, 3) a child's tendency to think ruminatively, and 4) overprotective parenting. Then, to help these children, interventions would be selected or developed to target those causes that are changeable.
Some, like Gene X, may be unchangeable, but others may be responsive to interventions. Suppose Medication A was known to reduce the activity of Brain Circuit Y; Psychotherapy B was known to reduce child ruminative thinking; and Family Therapy Z was known to reduce overprotective parenting. Then, children’s clinical data on these three changeable causes could be used to select which of these three treatments would, alone or in combination with others, reduce their anxiety. Again, multicausality puts us in the world of personalized and precise interventions.
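For readers who think in code, the treatment-selection logic of this hypothetical example can be sketched in a few lines. All of the names here (Gene X, Brain Circuit Y, the three treatments) come from the illustrative scenario above, not from any real clinical protocol; this is a toy sketch of the matching idea, nothing more.

```python
# Toy sketch of cause-matched treatment selection, using the hypothetical
# causes and treatments from the example above (not real clinical entities).

# Each changeable cause is mapped to the treatment assumed to target it.
TREATMENT_TARGETS = {
    "Medication A": "brain_circuit_Y_overactive",
    "Psychotherapy B": "ruminative_thinking",
    "Family Therapy Z": "overprotective_parenting",
}

def select_treatments(clinical_data: dict) -> list:
    """Return the treatments whose targeted cause is present for this child."""
    return [
        treatment
        for treatment, cause in TREATMENT_TARGETS.items()
        if clinical_data.get(cause, False)
    ]

# A child whose data indicate rumination and overprotective parenting,
# but normal Brain Circuit Y activity. The Gene X variant is present
# but unchangeable, so no treatment maps to it.
child = {
    "gene_X_variant": True,
    "brain_circuit_Y_overactive": False,
    "ruminative_thinking": True,
    "overprotective_parenting": True,
}
print(select_treatments(child))  # ['Psychotherapy B', 'Family Therapy Z']
```

The point of the sketch is only that, once causes are known and measured, choosing among interventions becomes a lookup from each patient's causal profile — which is exactly what a non-causal nosology cannot support.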
GHB: Okay. This is clear and seems compelling, a “no brainer” [pun intended] as they say. What do you think stands in the way of change?
GS: I think our field has convinced itself that it can make progress without causal knowledge, with tragic consequences. It recognizes it can’t advance its causal understanding through randomized experiments because manipulating variables in humans is usually not possible for ethical or practical reasons, which is why our human etiological literature is almost entirely observational. It also understands that, even if randomized etiological experiments could be conducted more often, it would still have great difficulty clarifying etiology due to the numerous causes of mental disorders.
This puts us in quite a bind because — if we believe that discovering causes is — practically speaking — impossible, and we wish to demonstrate our value as a scientific field, then we have to figure out how to show our progress without relying on causal knowledge. How might we accomplish this? Oh. I don’t know. Okay. Here’s something we might do. We could decide that our diagnoses could be used to classify patients without reference to causes. Although we would be the first discipline in the history of medicine to try this, we could argue that this approach is still scientific by emphasizing the evidence of diagnostic reliability while ignoring evidence of validity. We could also decide that the evidence of the efficacy of our interventions in our clinical trials is all we need to demonstrate that our treatments work, even if they were not designed to target causes and even if they can’t be used in typical care settings at anything close to their fidelity standards. We could also prioritize publishing findings showing statistically significant associations between biopsychosocial factors and mental disorders without giving much attention to whether they might be confounded or whether they can translate to patient benefit.
A medical field misaligned with causality is a medical field misaligned with science. A medical field misaligned with science is a medical field misaligned with its patients.
GHB: What would you do, if you were in charge?
GS: Since we can’t move forward without discovering causes and only have human observational data to work with, we must figure out how to discover causes using this type of data. It is crucial that our observational causal discovery methods are suitable for multicausal disorders since the risk of false discovery due to confounding is significantly heightened in the context of multicausality. A confound, by definition, is a common cause; it is a cause of the disorder that is also causal of a factor statistically associated with the disorder.
In cases of confounding, the statistical association cannot be seen as evidence of the factor’s effect on the disorder; instead, it is merely a statistical artifact arising from its common causal origin. Intervening on the confounded factor will never alter the disorder’s outcome. If confounds are common causes, and a disorder has at least twenty causes, then each of them can confound any factor under study if that factor was an effect — even a downstream effect — of any one of them. Our conventional means of managing confounding, through statistical control of a suspected confound or its matching between groups, even with methods like propensity scoring, is completely unsuitable in the context of multicausality.
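A small simulation makes the confounding point concrete. This is my own toy illustration, not anything from Dr. Saxe's studies: a common cause Z drives both a factor X and a disorder score Y, so X and Y are strongly correlated in observational data even though intervening on X does nothing to Y.

```python
import numpy as np

# Toy demonstration of confounding: Z is a common cause of both X and Y.
# X and Y are statistically associated, yet X has no effect on Y.
rng = np.random.default_rng(0)
n = 50_000
Z = rng.normal(size=n)        # the confound (common cause)
X = Z + rng.normal(size=n)    # factor under study: driven only by Z
Y = Z + rng.normal(size=n)    # disorder outcome: driven only by Z

# Observationally, X and Y look related (theoretical correlation is 0.5):
obs_corr = np.corrcoef(X, Y)[0, 1]
print(f"observed corr(X, Y) = {obs_corr:.2f}")

# "Intervene" on X by setting it independently of Z. Y is unchanged,
# and the association vanishes, exposing it as a statistical artifact:
X_do = rng.normal(size=n)
int_corr = np.corrcoef(X_do, Y)[0, 1]
print(f"corr under intervention = {int_corr:.2f}")
```

The observed correlation of about 0.5 would tempt us to target X with an intervention; the second number shows why that intervention would never alter the outcome.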
Fortunately, powerful methods designed and validated to infer causes in observational data exist, even in multicausal contexts. This is why I’ve invested heavily in Causal Data Science methods in my research. We are not the only field that cannot use experimental research to uncover causes. These methods have been around for some time and have contributed to discoveries in fields like economics and various medical disciplines. My group was the first to apply them to mental disorders, and other groups have begun to utilize these methods for mental disorders in recent years. These techniques are rooted in the groundbreaking work of UCLA computer scientist Judea Pearl. Scalable algorithms tailored for their application, particularly in medical disorders, were developed by my research partner Constantin Aliferis at the University of Minnesota.
Causal Data Science methods are designed to identify causal relationships in observational data, even with thousands of variables, by excluding confounding through rigorous conditional independence tests. Such tests enable the identification of variables related to an outcome that become independent when other variables are considered. These methods can also reveal signatures of confounding from unmeasured variables. When both measured and unmeasured confounding are excluded in this manner, estimates of the effect of a factor on an outcome become unbiased, allowing for the estimation of an intervention’s effect on an outcome through its anticipated change to a cause. Ultimately, these methods develop causal models consistent with the causal processes that generated the data. The methods are highly complex and technical, and I’ve just provided a very general overview of them. Interested readers can learn more from the references included at the end of this article.
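The core building block described here, the conditional independence test, can be illustrated with a minimal partial-correlation sketch. To be clear, this is an assumption-laden toy of my own, far simpler than the scalable algorithms Saxe and Aliferis actually use: once the measured common cause Z is conditioned on, the spurious association between X and Y disappears.

```python
import numpy as np

# Toy conditional independence check via partial correlation:
# regress Z out of both X and Y, then correlate the residuals.
rng = np.random.default_rng(1)
n = 50_000
Z = rng.normal(size=n)        # measured common cause
X = Z + rng.normal(size=n)
Y = Z + rng.normal(size=n)

def partial_corr(x, y, z):
    """Correlation of x and y after linearly removing z from each."""
    coef_x = np.polyfit(z, x, 1)        # fit x on z (slope, intercept)
    coef_y = np.polyfit(z, y, 1)        # fit y on z
    rx = x - np.polyval(coef_x, z)      # residual of x given z
    ry = y - np.polyval(coef_y, z)      # residual of y given z
    return np.corrcoef(rx, ry)[0, 1]

marginal = np.corrcoef(X, Y)[0, 1]      # substantial: X and Y look related
conditional = partial_corr(X, Y, Z)     # near zero: independent given Z
print(f"corr(X, Y) = {marginal:.2f}, corr(X, Y | Z) = {conditional:.2f}")
```

Finding that a dependence evaporates when another variable is conditioned on is the signature that the association was confounded rather than causal; the real methods run such tests systematically across thousands of variables, with additional machinery for unmeasured confounding.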
Constantin and I, along with our teams at NYU and the University of Minnesota, have been collaborating for twelve or thirteen years on adapting these methods for mental health observational data, and we've published several articles demonstrating their utility in discovering causes. For example, in some of our earlier studies on the risk of PTSD in children hospitalized for injuries, we reported that variables such as acute pain, low heart rate variability, and parental acute stress, as well as SNPs for the COMT and CRHR1 genes, were causal of PTSD. In another study, on risk for PTSD in recruits to the police academy [see below], we reported that factors such as acoustic startle responses to low threat during academy training, and SNPs for the Histidine Decarboxylase (HDC) and Mineralocorticoid Receptor (MR) genes, were causally related to PTSD one year after graduation from the academy.
While these results are exciting, they are based on some of the first applications of Causal Data Science to mental disorders, so we must be very cautious. I believe the greatest value of these early studies lies in proving the concept that Causal Data Science methods can be effectively applied to identify causes from observational data on mental disorders. This indicates that our field need not remain trapped in its non-causal paradigm. I hope more researchers in the field — especially those who are most skeptical — will give these methods a try, so that we, as a field, can collectively forge a robust clinical science based on causal discovery to advance knowledge to better help our patients.
GHB: Is there anything else you want to add? Where can readers learn more about your work?
GS: We can rebuild our field by sticking closely to the first principles of medicine and staying consistent with our core beliefs. To reduce the burden of mental disorders, our interventions must target causes. To know whether a patient will be responsive to an intervention, they must be classified by the causes it targets. Mental disorders are multicausal, so we must discover those causes with the largest effects. The data we have available is observational, so we must use rigorous methods to enable the discovery of causes from this data. Our true north is our patients, so failure is not an option.
I’ve included some references to our work and the work of others that describe Causal Data Science methods and their application to mental disorders. Your readers may want to start with the one linked here, which we published a couple of years ago in Frontiers in Psychiatry.
Mental health progress requires causal diagnostic nosology and scalable causal discovery
We call this article our manifesto, and it provides details on the points I have made in this interview.
See also Saxe and colleagues' application of computational causal discovery to PTSD causes among police officer trainees.
Additional Reading
Teach Your Children Well — Preparing for the Intelligence Revolution
The Unreasonable Effectiveness of Mathematics for Everything
Making Effective Choices in the Timeless Present Moment
Time: A Computational Construct of Perceptual Change
Self-Other-Help Books from GHB
Making Your Crazy Work For You: From Trauma and Isolation to Self-Acceptance and Love
Relationship Sanity: Creating and Maintaining Healthy Relationships
Irrelationship: How We Use Dysfunctional Relationships to Hide from Intimacy
References
Leichsenring F, Steinert C, Rabung S, Ioannidis JPA. The efficacy of psychotherapies and pharmacotherapies for mental disorders in adults: an umbrella review and meta-analytic evaluation of recent meta-analyses. World Psychiatry. 2022 Feb;21(1):133–145. doi: 10.1002/wps.20941. PMID: 35015359; PMCID: PMC8751557.
Saxe GN, Bickman L, Ma S and Aliferis C (2022) Mental health progress requires causal diagnostic nosology and scalable causal discovery. Front. Psychiatry 13:898789. doi: 10.3389/fpsyt.2022.898789
Saxe GN, Ma S, Morales LJ, Galatzer-Levy IR, Aliferis C, Marmar CR. Computational causal discovery for post-traumatic stress in police officers. Transl Psychiatry. 2020 Aug 11;10(1):233. doi: 10.1038/s41398-020-00910-6. PMID: 32778671; PMCID: PMC7417525.
Bio: Glenn N. Saxe, MD is a distinguished psychiatrist, researcher, and leader in the field of child and adolescent trauma. He serves as the Director of the Center on Causal Data Science for Child and Adolescent Maltreatment Prevention (The CHAMP Center), where he advances research to prevent child maltreatment through innovative data-driven approaches. Dr. Saxe is also the Director of the Trauma Systems Therapy (TST) Training Center, a nationally recognized model for treating traumatized children, as well as the Director of the Center for Child Welfare Practice Innovation, where he works to improve outcomes for vulnerable youth.
A Professor of Child & Adolescent Psychiatry at the New York University Grossman School of Medicine, Dr. Saxe is a key faculty member in the Department of Child and Adolescent Psychiatry at the Child Study Center. His work focuses on the impact of trauma on children, developing effective interventions, and improving child welfare systems.
Dr. Saxe has made significant contributions to the field through his research, publications, and leadership, helping to shape trauma-informed care practices worldwide. His dedication to advancing scientific understanding and implementing real-world solutions continues to improve the lives of children affected by adversity.
Disclaimer: This Blog Post ("Our Blog Post") is not intended to be a substitute for professional advice. The views of interviewees do not necessarily reflect Dr. Brenner's views. We will not be liable for any loss or damage caused by your reliance on information obtained through Our Blog Post. Please seek the advice of professionals, as appropriate, regarding the evaluation of any specific information, opinion, advice, or other content. We are not responsible and will not be held liable for third-party comments on Our Blog Post. Any user comment on Our Blog Post that in our sole discretion restricts or inhibits any other user from using or enjoying Our Blog Post is prohibited and may be reported to Medium. Grant H. Brenner. All rights reserved.