Editor:
We hope that you will consider publishing this letter as our response to Dr. Schacht’s1 letter and Dr. Leong’s2 commentary on our article, “Assessing Competency Competently: Toward a Rational Standard for Competency-to-Stand-Trial Assessments” (J Am Acad Psychiatry Law 32:231–45, 2004). We thank both Drs. Schacht and Leong for their thoughtful responses, but we respectfully disagree with their analyses and conclusions.
Whatever the methodological limitations of our study (which are addressed herein), the responses of the 273 forensic psychiatrists and psychologists who participated clearly indicated confusion about the meaning of the different tests of competency to stand trial. Some respondents were unaware that the Supreme Court construed the federal statutory test to be the Dusky test. Some differentiated among the three tests; some did not. Some decided competency on the basis of mental disorder alone or on treatment considerations that are irrelevant to the determination of competency. In addition, some courts distinguished the rational manner test from the rational understanding test; some did not. The existing confusion must be addressed. We cannot simply accept Dr. Leong’s assertion that there are no rational reasons for changing the competency-to-stand-trial standard. The standard used to measure competency must be understood and applied consistently by the courts and by the forensic psychiatrists and psychologists who offer testimony on the matter. Clearly, that is not happening now.
From a methodological perspective, one would like to have evidence of both interrater and intrarater reliability before trying to prove that a construct is valid. In a sense, our study is a reliability study, because we asked respondents (albeit obliquely) to indicate whether they thought the three standards of competence were identical or dissimilar. We predicted that subjects would see the differences among the standards and would apply them and make judgments about the vignettes based on those differences. We found poor agreement on the meaning of those standards and poor agreement about how to apply them to different fact situations, which suggests that they are unreliable constructs. Furthermore, our study assessed predictive validity, in that we asked respondents to use the three standards to make judgments about the two vignettes, and we predicted how they would apply the standards. Essentially, respondents did a poor job (although better on the second vignette than on the first). This suggests that, at least for the vignettes we posed, these federal and state legal definitions lack predictive validity, because they did not predict well (i.e., reliably) how evaluators would respond.
Dr. Schacht notes that the “brief vignettes” may not have been psychometrically sound. In fact, we never claimed that they were. On the contrary, we made every effort to point out the limitations of the vignettes and did not try to extrapolate our findings beyond those limitations. Nevertheless, we would point out that real criminal defendants undergoing competency-to-stand-trial evaluations are also not “psychometrically adequate.” Our vignettes were based on actual fact situations in which competency to stand trial was raised, evaluated, and decided. We believe that our vignettes approximated real-life situations, with all the inherent limitations thereof. From this perspective, our study might more accurately be considered an examination of effectiveness rather than efficacy.
Our study was a first attempt (a pilot study, if you will) to see whether those performing evaluations on a regular basis would agree on the operational meaning of the three standards of competence and on the appropriate application of each to different trial scenarios, regardless of the limited information provided. The answer is that they could not. The next step in improving the reliability (and universality) of evaluations of competence to stand trial is to identify the sources of confusion and disagreement and eliminate them.
As professionals, we have an obligation to improve our methodology. Accordingly, we should work to shed light on these issues by engaging in dialogue with each other; with our coprofessionals (forensic psychologists, lawyers, and judges) who also deal with the individuals affected by the problem (i.e., criminal defendants); and with the public at large. Our study uncovered a fundamental problem in the fairness (or, conversely, the arbitrariness) of competence evaluations, and we must respond. Dr. Leong’s implied suggestion that the problem be ignored (because the legal profession will not change), or that we be satisfied with imperfect competency assessment instruments, may be convenient, but it is not optimally ethical.
As two of the authors of our study are practicing forensic psychiatrists who routinely conduct these evaluations, we certainly would have preferred not to uncover this problem. We cannot, however, deny the devastating implications of our findings. Something appears to be fundamentally wrong with competency-to-stand-trial assessments. Instead of ignoring the problem and conducting “business as usual,” we must work together to solve the problem. The conclusion of our article contains specific recommendations to do just that.