Abstract
In 2020, cognitive neuroscientist Itiel Dror developed a framework describing how cognitive processes and external pressures bias the decisions of forensic experts. Dror’s model highlights how ostensibly objective data, such as toxicology or fingerprint analyses, can be affected by bias driven by contextual, motivational, and organizational factors. Forensic mental health evaluations, often more subjective than physical forensic evidence analysis, are particularly vulnerable to these cognitive biases. Dror identified six expert fallacies, such as the belief that bias affects only unethical or incompetent practitioners, and proposed a pyramidal model showing how biases infiltrate expert decisions. This article adapts Dror’s model to forensic mental health, exploring how biases influence data collection and interpretation and proposing mitigation strategies such as Linear Sequential Unmasking-Expanded (LSU-E). We emphasize that mitigating cognitive biases requires structured, external strategies, as self-awareness alone is insufficient. By applying Dror’s concepts and framework, we offer a practical approach for reducing bias and improving the fairness and accuracy of forensic mental health assessments.
At face value, laboratory results and physical evidence would appear shielded from evaluator bias. Nevertheless, cognitive neuroscientist Itiel Dror1 found evidence of cognitive contamination contributing to errors in forensic scientists’ analysis of toxicology, bloodstain pattern analysis (BPA), DNA, fingerprinting, bitemarks, handwriting, voice, pathology, and firearms. Dror1 and others2 have argued that cognitive biases are often rooted in unconscious processes and the human brain’s tendency to seek shortcuts. These processes can lead experts to systematic processing errors stemming from “fast thinking”2 or snap judgments based on minimal data.1 Kahneman2 theorized that human thinking comprises two systems, both subject to bias. System 1 thinking is fast, reflexive, intuitive, and low effort. It is subconscious and emerges from innate predispositions and learned experience-based patterns. System 2 thinking is slow, effortful, and intentional, executed through logic, deliberate memory search, and conscious rule application.
Given the subjective nature of the data used to form their opinions, forensic evaluators may be even more prone to cognitive biases than forensic scientists. Extensive literature shows that humans are subject to the wide range of unconscious cognitive biases and contextual noise that Dror describes in his models.3 This vulnerability is compounded by the complexity, volume, and diversity of data sources and the need to form multiple subordinate opinions inherent to forensic reports. In forensic evaluation, bias can enter at multiple points, making the process fraught with its influence. Indeed, other manifestations of bias in forensic assessments include the following: gender bias (female defendants may be more likely than male defendants to be declared legally insane or to be diagnosed with borderline personality disorder), misattribution of symptoms and presentations because of evaluees’ neurodiversity (e.g., interpersonal disengagement interpreted as lacking empathy and misdiagnosis of antisocial personality disorder among those with autism spectrum disorder), and racial disparities in diagnosis, such as with attention-deficit/hyperactivity disorder (e.g., misdiagnosis of trauma effects in immigrants who are refugees).4–7
Despite awareness of biasing effects in forensic mental health assessments, a recent systematic review noted that most studies focused on biasing effects, but few described effective mitigation strategies.8 Frameworks for mitigating bias require understanding the pathways or sources of bias. Dror1 identified expert fallacies that increase the risk for bias and proposed a cognitive-based method for mitigating it. This approach has been applied in various domains (DNA analysis, police conflict management).1,9 Biases are formed by the interpretation of data, which includes what information the expert collects, gives weight to, or disregards as noise.1–3 After initial training, forensic evaluators may operate in feedback vacuums, cut off from corrective feedback, peer review, and consultation. Consequently, fallacies and biasing influences can threaten objectivity and fairness in their evaluations. These biases undermine the validity of a forensic evaluator’s findings and, eventually, justice.10 Evaluators are ethically obligated to conduct fair, unbiased evaluations, yet all people are subject to cognitive bias. Therefore, all forensic evaluators must adopt strategies to mitigate bias. Yet, given the implicit or unconscious nature of biasing factors, it may not be apparent to forensic evaluators how biases infiltrate the evaluation process, and mitigation strategies chosen without that understanding may prove ineffective.
We suggest that Dror’s approaches apply to understanding and mitigating bias in forensic mental health assessments. Dror1 proposes two main pathways to bias: the fallacies experts embrace, including the bias blind spot, and the ways the human brain processes information, described in his pyramidal structure.3 We suggest that Dror’s approach to minimizing cognitive contamination through the Linear Sequential Unmasking method of ordered information processing can also be applied to minimizing bias in forensic mental health evaluations. To this end, we adapt Dror’s concepts and examine ways the six expert fallacies and the pyramid of biasing elements influence forensic mental health assessments. We then explore effective strategies to mitigate bias and illustrate select applications.
Pathways to Bias
Six Expert Fallacies
Dror1 emphasized that biases do not reflect the expert’s character. Cognitive biases are inherently implicit; their very nature hides them from awareness. Consequently, they are challenging to identify. Highlighting expert-held fallacies provides a pathway for understanding why forensic experts may resist acknowledging their vulnerability to bias. Dror identified six fallacies commonly held by experts and described how even seasoned evaluators can fall into the cognitive trap of believing these vulnerabilities do not apply to them (see Table 1).1
Table 1. Six Fallacies and Cognitive Bias Commonly Held by Experts (Ref. 1, p 7999)
The first fallacy is that only the unethical practitioner commits cognitive biases. Practitioners may incorrectly conclude that bias is the domain of unscrupulous peers who are driven by greed or ideology and who do not value justice and truth.1,3 Yet vulnerability to cognitive bias is a human attribute that does not reflect a person’s character. The notion that one is subject to bias may conflict with self-perception as an ethical practitioner, an identity valued in the field. Forensic psychiatrists and psychologists may correctly view themselves as ethical practitioners who strive to adhere to the ethics mandates and aspirational goals outlined in their respective disciplines’ ethics codes and principles.11,12 Nevertheless, as humans in a complex world, even the most ethical practitioners are vulnerable to cognitive biases.3 Part of this confusion stems from conflating the cognitive biases on which we focus with intentional discriminatory biases.
A second fallacy is that biases result from incompetence, that only incompetent evaluators are biased. Deviations from best practices, such as using outdated instruments, are overt and easily detected. An evaluation can be well-written and logical and use widely accepted violence or sexual risk assessment instruments yet conceal biased data gathering (e.g., an overreliance on criminal history). The evaluator may omit any relevant mention of the defendant’s race and how the risk instrument may be biased against Blacks or other people of color13 or its applicability to indigenous populations.14 In juvenile cases, the evaluator may falsely attribute conduct to characterological failings, disregarding environmental pathways to such behavior (e.g., school expulsion rates, the school-to-prison pipeline).15 Technical competence does not obviate the crucial role of bias-mitigating actions. Bias mitigation is essential to preparing competent, valid forensic evaluations. Recognizing one’s vulnerabilities and deploying mitigating strategies can augment competence.
The third fallacy is “expert immunity,” the notion that experts are shielded from bias merely by being experts. Paradoxically, the mantle of “expert” may itself enhance the risk of bias. Expert status gained through training, education, and professional practice may lead forensic evaluators to adopt cognitive efficiencies or shortcuts. For example, experts may selectively attend to certain kinds of data that comport with preconceived notions and assumptions while neglecting novel, potentially salient data points.1 The very cognitive mechanisms that allow experts to identify relevant information and form valuable predictive expectations from their knowledge base and clinical experience can also create bias and lead to errors.1,3 For example, a forensic evaluator who has conducted thousands of malingering evaluations and even trains others in conducting such evaluations may not acknowledge cognitive blind spots and may thus be vulnerable to error. That expert’s experience may be that all those who endorse visual hallucinations with tactile properties (seeing and feeling the hallucination) have been found to malinger psychosis. That expert-held view can foreclose consideration of alternate hypotheses (e.g., alcohol hallucinosis) that can produce that type of symptom.
The fourth fallacy, technological protection, may lead forensic scientists to wrongly believe that technological methods (such as instrumentation, machine learning, and artificial intelligence) eliminate bias. Analogously, forensic experts may think that using actuarial risk tools with operationalized factors statistically linked to criminal recidivism renders them invulnerable to subjective decision-making.16 The use of algorithms and associated statistical values may foster a false sense of empiricism. Although research-supported risk factors and tools reduce bias inherent to subjective or idiosyncratic methods, these tools and algorithms are not immune to biasing effects. For example, the technological protection offered by statistical algorithms can be offset by inadequate normative representation of racial groups, which can skew data by overestimating risk in minority groups.13,17–21 Often, the normative sample is predominantly white. Alternatively, to justify using risk tools, some may merely compare effect sizes between whites and other racial groups to conclude that the tools work “equally well” across racial groups.22,23 The data are viewed as ideologically neutral and the statistical analysis as objective; therefore, the reasoning goes, the application to the individual under evaluation is unbiased. The assumption that statistical data and significance represent “good psychological science” ignores that the risk factors are based on the values of the researchers (those of the dominant culture and socioeconomic status), who define what represents maladaptive behavior.24 Rogers et al.25 note that “good” psychological science is defined as ideologically neutral (and, by extension, so are the statistical methods and values reported) and therefore universally generalizable. Such an approach neglects the lived circumstances of the criminal defendant (often poor, minority, and from an extended rather than nuclear family structure) and can result in unintentional racial bias, such as attributing personality characteristics and recidivism risk to race.22
The fifth fallacy, the bias blind spot, is reflected in the finding that forensic experts tend to perceive others, but not themselves, as vulnerable to bias.8 Because cognitive biases are beyond awareness, experts do not recognize their biases or susceptibility. Believing they are not subject to bias may lead evaluators to ignore effective bias reduction strategies, thinking they do not apply to them.26
The sixth fallacy is that experts may believe they can control bias through willpower. Even the eminent psychologist Daniel Kahneman, who identified an array of human cognitive biases, acknowledged that he was not immune to biased thinking.1,3
Human Brain Processing
Dror1 developed a neurocognitive theoretical model of bias using a pyramidal structure (see Fig. 1). Dror’s model of biasing factors gives a clear structure for understanding how bias can infiltrate decisions in forensic evaluation. Based on how the brain processes cognitive information, the model emphasizes the decision-maker over situation and data.1,3 In other words, the model examines how experts receive, perceive, and interpret data and the potential for cognitive contamination. The model amplifies Kahneman’s2 observations: reliance on mental shortcuts is a default arising from the brain’s tendency to choose the path of least resistance, especially under stress, including time pressure. The pyramidal approach permits a comprehensive understanding of bias, incorporating its manifestations at the data, contextual, and brain structure and processing levels and showing their proportional contributions. The pyramid is a compelling metaphor for illustrating that human nature is the greatest biasing influence.1

Figure 1. Sources of bias. This figure is from an open-access article published under an ACS AuthorChoice License (Ref. 1, p 7998), which permits copying and redistribution of the article or any adaptations for non-commercial purposes. The author has also given permission for use of the figure.
Dror1,3 suggested three broad sources of bias formed by how the brain processes information. First, factors related to case-specific materials (specifically, the data, reference material, and content) are at the pyramid’s apex. Next are factors related to interactions between the evaluator and their environment, culture, and experience (specifically, base rates, organizational factors, education, and training). At the pyramid’s foundation lie the most fundamental sources of bias: those related to human nature (specifically, the personal evaluator factors and the brain’s information processing mechanisms). Dror argues that decision-making is underpinned by and results from the expert’s cognitive data processing. This cognitive processing interacts with factors within the pyramid and creates bias. The human brain’s vulnerability to cognitive contamination gives rise to bias.1,26–30
At the level of case-specific materials, irrelevant case facts,26,31 reference materials,32,33 and the referral context can be sources of bias.34,35 Dror1,3 found that fingerprint experts asked to match two sets of prints (one from the crime scene, one taken from the suspect) matched the two prints more often when told that a witness placed the suspect at the crime scene than when that information was not included in the referral. Dror labels such material as “task-irrelevant contextual information,” which can lead to overlooking data, underweighting the absence of data, or failing to consider alternative explanations. Forensic mental health experts can face similar biasing effects based on irrelevant case information or the referral source. For example, in high-profile homicide cases, the nature of the crime facts may unduly influence assessment of future violence risk. Case material, such as transcripts of testimony by sexual abuse victims or crime scene photographs of homicide victims, can be emotionally charged and affect decisions.
A similar concern arises in competency to stand trial assessments, which require the evaluator to answer at least five questions: whether the individual has a mental disorder, has a factual understanding of the proceedings, has a rational understanding of the proceedings, can assist counsel with the case, and whether impairments in competency (if any) result from the mental disorder.36 Within each of these questions lie more considerations. For example, the evaluator must determine whether any signs and symptoms displayed are genuine or feigned, whether the signs indicate a mental disorder is present, whether certain aberrant beliefs represent delusions, and whether a delusion affects the defendant’s understanding of the case to a degree that compromises competence. Data in preexisting records and evaluations and the referring attorney’s framing can influence mental health diagnoses and competency opinions.
The context of the referral can create bias. For example, a referral for psychological assessment on a competency to stand trial restoration unit might state that the defendant is suspected of malingering. The specified reason for the referral can lead evaluators to inadvertently prioritize signs of malingering and overlook genuine indicators of mental illness. The referral context can also signal outcome expectations (e.g., excessive interest from an institutional administrator in a high-profile case or adversarial biases favoring the retaining party).27–29 Murrie et al.28 found that psychiatrists’ and psychologists’ ratings of offender case files were affected by which side retained them, a phenomenon called allegiance bias. Evaluators who believed they were retained by the prosecution assigned higher scores on risk instruments than those who believed they were retained by defense counsel.
Another source of potential bias can be the influence of the forensic evaluator’s environmental, cultural, and professional experiential factors. Some identified biases relate to the working environment, such as organizational culture, goals, and targets (e.g., expectations of a certain percentage of findings in one direction or another) or adherence to professional peer group consensus about methods or tests.16 Differences of opinion among evaluators may reflect independence from organizational pressures and a lack of excessive groupthink30 in evaluator cohorts. Contrary to the notion that inter-rater reliability is inherently good, too much agreement could reflect environmentally imposed biasing influences.
Education and training influence how evaluators conduct assessments, gather information, and interpret data. Some evaluators might focus too much on either neurobiological or psychosocial factors. Experts’ experience may also become a biasing factor. They may jump to the wrong conclusion based on their experienced (and thus expected) base rates for a particular outcome. For instance, working in a state hospital with high violence rates might cause an evaluator to overpredict violence in other groups. In forensic assessments, expected base rates could influence findings of incompetency to stand trial. As an example, the expert’s experience as a psychiatrist on a forensic inpatient unit with a high rate of malingered psychosis in competency cases may lead to bias through an expectancy of malingering. A forensic psychologist who treats sexual abuse victims may overestimate the risk of sexual violence posed by a defendant under evaluation for probation placement and treatment. Forensic mental health professionals in justice settings may be vulnerable to reducing all behavior to symptoms of mental illness or disruptive behavior to pejorative characterizations (e.g., malingerer, psychopath), creating diagnostic errors.31
Prior experiences could also create risk aversion. For example, inaccurately assessing the risk of a defendant who went on to commit a highly publicized violent crime could lead an evaluator to overestimate violence risk in future assessments. These tendencies may create idiosyncratic differences in interpreting an evaluee’s behavior. They may lead to applying the thresholds of diagnoses in the Diagnostic and Statistical Manual of Mental Disorders, Fifth Edition, Text Revision (DSM-5-TR)32 differently33–35 and can be obstacles to forming alternative hypotheses, conducting differential diagnoses, or seeking professional consultation. For example, an evaluator who is unaware of, ignores, or minimizes the impact of neurodiversity, such as in individuals with autism spectrum disorder, may attribute interpersonal disconnection to callousness and mistake it for antisociality, as noted earlier.6 Evaluee defensiveness because of cognitive impairment, low intellectual functioning, or even illiteracy may likewise lead to a misdiagnosis of antisocial personality.6
Another source of bias comes from how human beings process information. Personal history, ideology, or emotions can bias expert opinions and lead evaluators to weigh data points in an idiosyncratic manner. Various evaluator characteristics affect forensic opinions. These include personality features (e.g., agreeableness, skepticism, empathy) and attitudes (e.g., about treatability, the value of public safety versus civil liberties, and people convicted of sex offenses). For example, a cynical evaluator may misdiagnose genuine functional impairment with idiosyncratic presentations as “malingering” in exculpation-related evaluations, such as not guilty by reason of insanity (NGRI) cases. Such cynicism may also lead one to attribute behavior to callousness and to minimize or fail to assess the presence of severe mental illness.37 As another example, past sexual trauma experienced by an expert might cause an overperception of sexual deviance.37 Ideological stances on sexually violent predator (SVP) laws or the death penalty can affect the evaluation of commitment criteria and mitigation factors.38–40 Professional ideology may lead experts to disagree on whether certain diagnoses, such as antisocial personality disorder, meet the legal definition of a mental disorder in the context of SVP civil commitment.38,39 Most fundamentally, the structure of the human brain itself biases perceptions and interpretations, unconsciously affecting each level of the pyramid and each part of the evaluation.1,3
Approaches to Mitigate Bias
A conceptual understanding of bias alone cannot reduce it. Further, in their review of 22 studies addressing forensic psychologist bias, Neal and colleagues8 noted that studies often focus on the types of bias in forensic psychology rather than on ways to mitigate them; fewer than a third of the studies they reviewed addressed mitigation. Still, understanding the fallacies and the forms and sources of bias permits the development of practical steps to mitigate and counter it. Below, we outline some of these steps.
Recognizing our inherent vulnerability to bias is a crucial first step, but it is not enough. This recognition must be the impetus to address bias and deploy countermeasures. Evaluators must reject fallacies that equate bias susceptibility with incompetence or unethical behavior. Recognizing the fallibility of introspection and willpower is also fundamental: because bias is inherent to the human condition, it cannot be mitigated through reflection or willpower alone. As Pronin and Kugler41 point out, belief in the efficacy of introspection is itself a cognitive illusion contributing to the bias blind spot. Nevertheless, training about bias and its unconscious nature has been shown to help evaluators recognize and effectively mitigate bias in forensic evaluation.3,41
Linear Sequential Unmasking-Expanded
Linear Sequential Unmasking-Expanded (LSU-E), developed by Dror and Kukucka,42 offers a bias mitigation methodology. Initially developed as Linear Sequential Unmasking (LSU) to minimize bias in the interpretation of DNA, fingerprinting, firearms, and other physical forensic evidence, the method was later expanded (LSU-E) to further improve forensic decision-making by minimizing noise and improving decision quality.43 Central to LSU-E is the recognition that the order in which information is presented can significantly influence evaluators’ interpretations. Thus, the method advocates starting with objective data before considering subjective contextual information and its relevance and biasing potential.
Dror and Kukucka42 suggest first ensuring, as much as possible, that evaluators are not exposed to irrelevant information at all. Because the order in which information is reviewed is essential, they suggest that experts begin their analysis with more objective information before considering information that is more subjective in nature. Contextual information has a significant impact on deciding, for example, whether a death was a suicide or homicide. A crime scene investigator therefore should not receive information about the suspected manner of death (e.g., supposed suicide) before coming onto the scene. Such information causes cognitive priming that sets up a priori expectations and hypotheses, biasing the perception and interpretation of the crime scene and the actual evidence (e.g., interpreting the body’s position as supporting suicide versus homicide rather than gathering information that is not expectation-driven).42,43 Dror and Kukucka indicate that this ordering of information can reduce noise created by contextual information, increase objectivity, and reduce bias.
LSU-E has applicability in various criminal forensic assessment contexts (e.g., competency, NGRI, and capacity assessments). The process would be for evaluators to examine the data in the records and the objective signs and behavior exhibited by the evaluee before reviewing subjective or contextual information, such as diagnoses and conclusions of prior evaluators. Prior psychiatric diagnoses are important but may lead the forensic evaluator to search for signs supporting those conditions when examining the evaluee. LSU-E is not about depriving the evaluator of such information; it is about not starting with it. The evaluator first examines the more objective data, independently forms hypotheses based on those data, and only then uses other sources of information to develop an opinion.
In these varied contexts, including trial competency, NGRI, and civil commitment evaluations, evaluators would review arrest reports and clinical records before probation officer reports or previous evaluators’ findings. Such reports reflect another person’s data selection and interpretation, and adopting them wholesale can perpetuate potentially biased interpretations, phenomena termed the bias cascade and bias snowball effects.44 A hypothetical sketch of this ordered review appears below.
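To make the sequencing concrete, the following sketch (in Python) orders hypothetical case sources from most objective to most contextual and documents initial hypotheses before the contextual sources are unmasked. The source names, objectivity ranks, and two-tier threshold are illustrative assumptions, not part of Dror and Kukucka’s published method.

```python
# A minimal, hypothetical sketch of an LSU-E-style review sequence for a
# competency evaluation. Objectivity ranks are assumptions for illustration.
from dataclasses import dataclass, field

@dataclass
class Source:
    name: str
    objectivity: int  # 1 = most objective; higher numbers are more contextual

@dataclass
class Review:
    initial_hypotheses: list[str] = field(default_factory=list)
    contextual_notes: list[str] = field(default_factory=list)

def lsu_e_sequence(sources: list[Source]) -> list[Source]:
    """Order sources from most objective to most contextual before review."""
    return sorted(sources, key=lambda s: s.objectivity)

case_sources = [
    Source("prior evaluator's competency opinion", objectivity=4),
    Source("probation officer report", objectivity=3),
    Source("clinical records (raw observations)", objectivity=2),
    Source("arrest report and observed behavior", objectivity=1),
]

review = Review()
for source in lsu_e_sequence(case_sources):
    if source.objectivity <= 2:
        # Form and document hypotheses from the objective data first.
        review.initial_hypotheses.append(f"hypothesis formed after {source.name}")
    else:
        # Unmask contextual sources only after hypotheses are on record,
        # noting how (and whether) each one changes the emerging opinion.
        review.contextual_notes.append(f"{source.name} reviewed last")

print(review.initial_hypotheses)
print(review.contextual_notes)
```

The point of the sketch is the workflow, not the data structures: the contextual sources are still reviewed, but only after the evaluator’s data-based hypotheses are on the record.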
In addition to ordering information by its objectivity, LSU-E treats relevance and salience as crucial, prioritizing review of what is essential and most likely to be relevant to the case. An LSU-E approach would prioritize the identification of irrelevant information and even omit it from review when appropriate. For instance, toxicology reports may be essential, and thus highly relevant, to a coroner’s assessment of the cause of death, whereas for determining the manner of death, as in suicide, collateral interviews are relevant.
Irrelevant, prejudicial, or emotionally charged data can often be isolated from other documents and disregarded in the analysis. For example, an earlier stigmatizing finding of psychopathy would serve only as another data point to address after forming a data-based opinion. This approach addresses a concerning source of bias, “chart momentum,” in which outdated or inaccurate diagnoses persist in records (akin to the bias cascade described above).
The LSU-E method prioritizes raw data over prior conclusions. This can be especially helpful in complex, sensitive cases, such as sexual risk assessments. Evidence, such as surveillance footage, DNA matches, or lists and descriptions of child exploitation material, can be more reliable than the respondent’s subjective accounts or another evaluator’s findings. Direct review of the evidence can subordinate consideration of potentially emotionally charged, prejudicial conclusions.
To illustrate applying LSU-E to the diagnostic formulation of a predicate condition, the evaluator would rely on objectively described facts, such as direct quotations that illustrate thought disorder or delusions. The biasing impact of chart momentum, wherein potentially inaccurate or resolved diagnoses are perpetuated in chart notes over time, can be mitigated by relying on objective data points to substantiate diagnostic conclusions. Additionally, evaluators can weigh raw data over other clinicians’ conclusions. For sexual risk assessment cases, such as amenability for outpatient treatment or SVP commitment, objective sources would include surveillance videos of behavior (e.g., from security cameras), social media communications, texts sent by the accused individual to a child’s phone, sexual abuse physical examinations of the victim, and DNA evidence of semen on the victim. More subjective sources that could be influenced by context include child protective services’ reports or forensic interviews of the child, which could be affected by the child’s distress or reaction to authority. Other subjective sources include reports by collateral parties, such as the accused’s parents, teachers, or friends, which may be biased by anger, allegiance, or other emotional elements.
Depending on the context, agencies can remove potentially biasing data elements, such as race or socioeconomic status, from the data. Factors that degrade accuracy in certain assessments can also be removed or explicitly addressed. For example, the severity of the crime is unrelated to recidivism in some instances. Crime details could be removed from the case file, or the evaluator can describe how the case factors did or did not enter the risk determination and explain how emotionally evocative crime details may unduly affect risk assessment.
Moving Beyond LSU-E
The LSU-E method faces limitations in complex psycho-legal assessments, particularly in civil cases where case files can be extensive, encompassing decades of treatment records. The volume and disorder common to case files complicate the identification of an optimal information sequence; in cases with decades-old arrest reports and treatment records, or in complex civil litigation (such as claims by plaintiffs alleging historically remote sexual assaults), identifying the least biasing sequence may be impracticable or even impossible. Rigorous application of the method may also risk oversimplifying intricate matters such as causality, which can be pivotal in civil litigation. Nevertheless, the relevance of certain case details, such as socioeconomic status or emotionally laden factors, must be critically weighed, as they can unduly influence risk evaluations or causality determinations. Given these limitations, we suggest the strategies outlined below to address the various biasing factors and their interactions and to augment LSU-E. There are two broad pathways for mitigating bias: at the organizational level and at the individual evaluator level.
At the organizational level, policies and procedures are essential to bias mitigation for those working within state or federal agencies (as employees or contractors). External structure is helpful because most individuals are blind to their susceptibility and cannot entirely escape their unconscious proclivities, despite their best efforts. Policies and procedures can protect against biased or inconsistent decision-making. They also compel decision-makers and evaluators to examine a range of evidence outside their individual experiences. Requiring multiple forensic expert opinions can reduce bias, although this approach may not be feasible where competent evaluators are in short supply. One example is in California’s statutes for postprison civil commitments of sexually violent predators13 and offenders with mental health disorders (OMD).45 Multiple evaluations provide decision-makers with diverse perspectives, revealing neglected data, varying evaluator thresholds, and controversial subjects. Knowing another evaluator will prepare a separate report can enhance vigilance. Imperfect inter-rater reliability highlights the benefit of multiple evaluations, suggesting independence from organizational pressures and avoidance of excessive consensus. Differing opinions can reflect consideration of new research or improved assessment methods.41
Another strategy may be limiting or prohibiting access to the opinions of referral sources, other experts, and irrelevant data. For example, the California Department of State Hospitals (DSH) blinds OMD civil commitment evaluators to opinions from the California Department of Corrections and Rehabilitation (CDCR).45 DSH also keeps evaluators uninformed about the high-profile nature of cases or high-ranking officials’ interest in them. Evaluators should likewise be shielded from administrative pressures, such as bed space needs.45
Further, organizations and private practitioners can monitor the rates of their diagnoses and conclusions (e.g., incompetent versus competent, sane versus insane), including rates of these findings for various groups (race, gender, etc.); misalignment with base rates could reveal potential bias,46 as sketched below. Private practitioners can avoid being subject to the retaining parties’ case framing by requesting that they provide only the referral question, case files, and access to the evaluee early in the process. Practitioners can also vary the sources of their referrals to maintain a balanced perspective.
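As a simple illustration of such monitoring, the following sketch tallies hypothetical findings by group and flags rates that deviate from an expected base rate. The findings, the expected rate, and the tolerance threshold are invented for demonstration; real monitoring would require jurisdiction-specific norms.

```python
# Illustrative sketch of monitoring finding rates by group against an expected
# base rate; all numbers below are hypothetical, not normative values.
from collections import Counter

findings = [
    ("incompetent", "group_a"), ("competent", "group_a"),
    ("incompetent", "group_b"), ("incompetent", "group_b"),
    ("competent", "group_a"), ("incompetent", "group_b"),
]

EXPECTED_INCOMPETENT_RATE = 0.30  # hypothetical jurisdiction-wide base rate
FLAG_MARGIN = 0.15                # hypothetical tolerance before flagging

totals = Counter(group for _, group in findings)
incompetent = Counter(group for finding, group in findings
                      if finding == "incompetent")

for group, n in totals.items():
    rate = incompetent[group] / n
    if abs(rate - EXPECTED_INCOMPETENT_RATE) > FLAG_MARGIN:
        print(f"{group}: incompetency rate {rate:.0%} deviates from expected "
              f"{EXPECTED_INCOMPETENT_RATE:.0%}; review for potential bias")
```

A flagged deviation is a prompt for review, not proof of bias; legitimate differences in referral streams can also move group rates.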
Finally, organizations can develop quality assurance programs to monitor reports for bias and evaluator trends, reviewing for pejorative language, insubstantial evidence, cherry-picking, and erroneous reasoning. Trends and court outcomes can be monitored for bias through indicators like differences in diagnoses by race and gender. Standardized assessment protocols can ensure adherence to current best practices. Quality assurance teams can consult on cases with access to files and raw data, offering detached perspectives. Organizations can support bias awareness through mentorship, training in cognitive bias, and peer comparisons.3 Supervision or peer consultation forums can provide objectivity, although effectiveness may be limited if raw data access is restricted.
At the individual level, evaluators can counter bias through the Claim, Evidence, Reasoning (CER)47 method. CER involves stating a forensic opinion, listing supporting evidence, and explaining the reasoning. This method counters bias by requiring evaluators to justify their assertions with transparent reasoning, helping to identify gaps in data or logic and reducing reliance on cognitive shortcuts.47,48 Kukor48 suggests that bias can be countered when the forensic evaluator must explain how the evaluator “knows” each assertion with explicit evidence and reasoning. Doing so can reduce expert overreliance on cognitive shortcuts, pattern matching, and overconfidence. A corollary practice is to clearly explicate alternatives and countervailing information. Evaluators can make a point of gathering and listing the evidence, and the absence of evidence, supporting the most salient potential conclusions and their opposites. To illustrate, in a violence risk assessment, an evaluator would list and consider incidents of violence, research-supported risk factors, and absent protective factors. The evaluator would then list periods without violent behavior, protective factors, and the absence of research-supported risk factors.43,47 With this process, evaluators can avoid the biasing aspects of cognitive ease and the fast, reflexive, intuitive, and subconscious thinking of System 1 by injecting sufficient rigor and structure into their assessments.47,48 Using an explicit reasoning structure, such as CER, which requires evidence for the conclusion, including countervailing data and identifying missing information, could counter expert bias by compelling consideration of the full range of data.47–50
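A minimal sketch of how a CER-style completeness check might be structured follows; the field names and the rule that every claim needs evidence, counterevidence, and reasoning are our illustrative assumptions, not Kukor’s published method.

```python
# Hypothetical sketch of a CER-style check on a forensic opinion.
from dataclasses import dataclass, field

@dataclass
class CERClaim:
    claim: str
    evidence: list[str] = field(default_factory=list)
    countervailing: list[str] = field(default_factory=list)
    reasoning: str = ""

    def gaps(self) -> list[str]:
        """Flag assertions lacking evidence, counterevidence, or reasoning."""
        problems = []
        if not self.evidence:
            problems.append("no supporting evidence listed")
        if not self.countervailing:
            problems.append("no countervailing data considered")
        if not self.reasoning:
            problems.append("reasoning linking evidence to claim is missing")
        return problems

opinion = CERClaim(
    claim="Elevated violence risk",
    evidence=["two documented assaults", "research-supported risk factors present"],
    countervailing=[],  # e.g., a five-year violence-free period would belong here
    reasoning="Pattern of instrumental violence plus active risk factors.",
)
for gap in opinion.gaps():
    print(f"Review needed: {gap}")
```

Here the empty countervailing list is flagged, forcing the evaluator to document the violence-free periods and protective factors described above before finalizing the opinion.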
Evaluators can further avoid biases from System 1 thinking by using the logical, deliberative System 2 processes, such as those intrinsic to structured, objective instruments.8 Adhering strictly to criteria and scoring rules can counteract bias from cognitive ease or fatigue. Structured approaches ensure a thorough review of empirically relevant data, which can mitigate confirmatory search strategies and neglect of base rate information.2,12 Combining objective measures with explicit, detailed reasoning helps link data to conclusions without falling into unconscious bias and cognitive ease-related errors and promotes the mitigation of structural racism.13,14,18,19 In their reports and testimony, evaluators should cite the limitations of an actuarial or other instrument, such as the sample population on which the instrument was normed and its applicability to the individual under evaluation (e.g., there may not have been adequate sample representation by age, race, or gender).
Hegel’s51 dialectical reasoning is another overarching strategy evaluators can adapt, aligning with CER and other structured approaches, such as CHESS,52 an acronym for Claim (preliminary opinion), Hierarchy (of the supporting evidence), Exposure (considering the weaknesses of the evidence), Studying (examining and revising the claim), and Synthesizing (the revised opinion). Although dialectical reasoning was not originally intended for forensic applications, it offers a way to mitigate bias in this arena. The method involves resolving contradictions to come closer to objective truth through three key stages: stating a proposition (thesis), identifying contradictions (antithesis), and synthesizing the two (synthesis). The dialectical method is a cyclical process that allows for ongoing revision, encouraging transparency and potentially revealing biases (as in CHESS).52 It aligns with the scientific method’s pursuit of truth through experimentation and theory refinement. We can illustrate the strategy with NGRI opinions. The evaluator can list available facts suggesting that the defendant did understand what they were doing, such as efforts to avoid detection (e.g., using gloves, scrubbing web search history), and available counterevidence, such as signs of irrationality (e.g., incoherent speech, disorganized behavior). The evaluator can then explicitly reconcile the evidence to form an opinion and update that opinion as new information arises.
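The cycle can be caricatured in a few lines of code; the stage comments track the CHESS acronym, but the example facts and the synthesis wording are hypothetical, not a validated procedure.

```python
# Hypothetical sketch of a CHESS-style dialectical cycle for an NGRI opinion.
def chess_cycle(claim: str, thesis: list[str], antithesis: list[str]) -> str:
    # Claim: the preliminary opinion.
    # Hierarchy: thesis evidence is assumed pre-ranked by probative weight.
    # Exposure: each countervailing fact must be confronted explicitly.
    exposures = [f"counter: {fact}" for fact in antithesis]
    # Studying: the claim is re-examined against the exposed weaknesses.
    # Synthesizing: the revised opinion reconciles both sides on the record.
    return (f"Synthesized opinion on '{claim}': supported by {thesis}; "
            f"explicitly reconciled against {exposures}")

print(chess_cycle(
    "defendant did not appreciate the wrongfulness of the act",
    thesis=["incoherent speech at arrest", "disorganized behavior at the scene"],
    antithesis=["wore gloves", "scrubbed web search history"],
))
```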
The approach counters shallow thinking by actively seeking and engaging with evidence contradicting initial impressions.50 It can avoid what Brodsky and Pope53 describe as a “Procrustean Report,” in which data are manipulated to fit assertions and templates. Acknowledging that the nature of data, context, and human interpretation constrain truth, the pursuit of which Norko identifies as a “foundational aim and value of forensic evaluators” (Ref. 54, p 10), Hegel’s51 approach promotes a nuanced understanding of truth, making it valuable for forensic evaluation. The method is also helpful in resolving false dichotomies, such as the reification or demonization of actuarial risk assessment instruments, by explicitly describing their benefits, limitations, and applications for the case at hand.46
Evaluators can mitigate their susceptibility to bias by managing themselves and their caseloads. They can be aware of their personal ideologies, preconceptions, and emotional vulnerabilities and avoid cases that trigger excessive fear or repulsion. For example, someone who actively protests sex offender placements in their communities should not perform sex offender risk assessments. Evaluators should ensure adequate rest and nutrition and avoid making decisions when hungry, tired, or under emotional strain. Individual evaluators and organizations can avoid excessive time pressure by carefully scheduling cases and court appearances and avoiding taking or assigning too many cases. In so doing, they can avoid the bias-inducing information overload and cognitive fatigue that excessive caseloads can bring.
Conclusion
Forensic mental health experts can be susceptible to fallacies regarding their lack of bias and to the types of errors related to case material, the context of the evaluation, and the motivation and ideology of the evaluator. As Skeem and Lowenkamp55 have observed, policies and procedures targeting bias are more practical to implement than trying to change human inclinations. Mitigating biases through motivation and willpower alone is not effective. Nonetheless, forensic evaluators are ethically responsible for identifying and countering potential biases in their conclusions. Criminal forensic evaluations are too consequential to neglect bias mitigation.56
Attenuating bias begins with acknowledging that human susceptibility to bias is expected and natural, not subject to willful control, and not caused by moral depravity or intentional violation of ethics standards. Accepting this premise and acknowledging the bias blind spot are essential first steps. Nevertheless, we cannot mitigate biases through self-reflection or will alone.1,37,41 Even prominent experts are subject to the bias blind spot, biasing brain processes, external procedures, and information quality.2 To paraphrase Einstein, we cannot solve the problem of bias with the same mind that generates it and blinds us to it. Instead, we need intentional, disciplined strategies and external structures. Because implicit biases are inherently difficult to detect, we need various mitigation strategies to reduce their impact. That said, external metrics and targets could themselves influence opinions in a biased way, indicating the need for a measured, dialectical approach.
The nature of data, context, and the human brain’s processing of complex material can exert biasing effects.3,57 As a description of someone’s situation based on evolving scientific knowledge, any evaluation is an imperfect representation of reality and truth.48–50 Rigorous application of the mitigating strategies46,47,53–55 and structured dialectical reasoning51,52 may provide avenues for reducing fallacies and sources of bias. Although mitigating strategies require additional time and cognitive effort, they offer pathways for minimizing cognitive bias. By encouraging comprehensive and critical analysis of evidence, these strategies contribute to accurate, transparent, and reliable forensic evaluations.
Admittedly, there are multiple challenges to implementing these efforts. Organizational pressures may dampen independent opinions or lead to excessive alignment with organizational or partisan priorities15 (e.g., overvaluing inter-rater reliability in diagnoses, devaluing differences of opinion, or undue influence of public protest). Reviewing criminal history records of arrest and prosecution is a necessary task, although such records can be influenced by biases within the criminal justice system. Raw data, such as surveillance videos, social media text exchanges initiated by an individual seeking children for sexual contact, or information from a coroner’s evaluation of the decedent, all carry the risk of bias. Mitigation strategies have limitations, yet the layers of Dror’s1 pyramid help ensure that we attack bias comprehensively and holistically. LSU-E and the other suggested approaches offer methods for mitigating bias. These strategies can serve as external guardrails against the biasing forces inherent to the human condition.
Methods for mitigating bias remain in the preliminary stages of development. Nonetheless, given the high-stakes nature of criminal forensic psychiatric and psychological evaluations where an individual’s civil liberty may be curtailed indefinitely or where an individual faces potentially ruinous financial or psychological consequences, forensic evaluations should have robust protections for mitigating expert fallacies and biases.
Footnotes
The findings and conclusions in this article are those of the authors and do not necessarily represent the views or opinions of the California Department of State Hospitals or the California Health and Human Services Agency, or any federal, state, county government entity, university, or private affiliation.
Disclosures of financial or other potential conflicts of interest: None.
© 2025 American Academy of Psychiatry and the Law