Regular Article
The California Scale of Appreciation: A New Instrument to Measure the Appreciation Component of Capacity to Consent to Research

https://doi.org/10.1097/00019442-200203000-00007

Capacity to consent is a linchpin of the ethical conduct of clinical care and research, and it needs to be measured reliably. The authors describe the development of a new measure of the "appreciation" component of capacity, the California Scale of Appreciation (CSA), an 18-item instrument rated according to the concept of a "patently false belief" (PFB): a belief that is grossly improbable. Thirty-nine patients with schizophrenia or a related psychotic disorder (27 outpatients and 12 inpatients) and 15 normal-comparison subjects participated. Each subject's audiotaped interview was rated by three evaluators. Answers to each item were scored as "capable," "incapable," or "uncertain capacity," and each rater also assigned each subject an overall rating in one of these three categories. Total CSA scores were calculated and correlated with scores on standardized instruments for assessing psychopathology and cognitive impairment. The mean total CSA score was significantly lower in the patients than in the normal-comparison subjects; however, a majority of the patients were found to be fully "capable" on the CSA. The CSA is a potentially useful instrument for measuring the appreciation component of capacity in persons with psychotic disorders. Its generalizability to other patient populations and to other types of protocols remains to be determined.
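As a reading aid, here is a minimal sketch of how a total CSA score might be tallied from the three item ratings described above. The numeric mapping (capable = 2, uncertain capacity = 1, incapable = 0) and the function name are illustrative assumptions; this excerpt does not give the authors' actual scoring rules.

from typing import List

# Assumed point values for the three CSA item ratings; the published
# scoring rules may differ.
RATING_POINTS = {"capable": 2, "uncertain": 1, "incapable": 0}

def csa_total(item_ratings: List[str]) -> int:
    """Sum a subject's item ratings into a total CSA score."""
    if len(item_ratings) != 18:
        raise ValueError("the CSA has 18 items")
    return sum(RATING_POINTS[r] for r in item_ratings)

# Example: 'capable' on 16 items, 'uncertain' on 2.
ratings = ["capable"] * 16 + ["uncertain"] * 2
print(csa_total(ratings))  # 34 of a maximum 36 under this mapping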

Section snippets

METHODS

The CSA was developed in several stages. We used as our guide the standard elements of informed consent: presentation of the nature of, risks and benefits of, and alternatives to a procedure. We included research-specific items, such as randomization and the use of a placebo. Also included were potential patently false beliefs (PFBs) that might motivate a decision, such as a subject's belief that his doctor is all-powerful. Once satisfied that the instrument covered the critical issues relevant to research…

RESULTS

Demographic and clinical characteristics, as well as CSA ratings of the patients and normal-comparison subjects, are presented in Table 2. At the time of assessment, the patients had mild-to-moderate levels of psychotic symptoms, as indicated by their scores on the Positive and Negative Syndrome Scale (PANSS).
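The abstract notes that total CSA scores were correlated with standardized measures of psychopathology and cognitive impairment. A minimal sketch of such a correlation, assuming paired per-subject totals (the arrays below are invented for illustration, not the study's data):

import numpy as np
from scipy.stats import pearsonr

# Invented paired scores for six subjects; not the study's data.
csa_totals = np.array([34, 28, 36, 22, 30, 33])
panss_totals = np.array([55, 70, 48, 82, 66, 58])

r, p = pearsonr(csa_totals, panss_totals)  # Pearson r and its p-value
print(f"r = {r:.2f}, p = {p:.3f}")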

The CSA showed good interrater agreement on individual items as well as on the total CSA score (Table 3). Overall Cronbach's alpha values for the internal consistency of CSA scores across all subjects ranged from 0.83 to 0.88; for…
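For reference, Cronbach's alpha for a k-item scale is (k / (k - 1)) * (1 - sum of item variances / variance of total scores). A minimal sketch of the computation, assuming item scores are held in a subjects-by-items array (the toy data are illustrative, not the study's ratings):

import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for a (subjects x items) score matrix."""
    k = scores.shape[1]                         # number of items
    item_var = scores.var(axis=0, ddof=1)       # per-item sample variance
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_var.sum() / total_var)

# Toy matrix: 5 subjects x 4 items (illustration only).
data = np.array([[2, 2, 1, 2],
                 [1, 1, 1, 0],
                 [2, 2, 2, 2],
                 [0, 1, 0, 1],
                 [2, 1, 2, 2]])
print(round(cronbach_alpha(data), 2))  # ~0.86 for this toy matrix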

DISCUSSION

We have described a new instrument designed to measure the appreciation component of capacity to consent to psychiatric research. In a preliminary assessment of the instrument, we found that it had good interrater reliability, internal consistency, and face validity. Operationalizing the concept of a PFB in a semistructured interview with detailed scoring instructions appeared to allow raters to evaluate subjects in a consistent way. The mean total CSA score was significantly lower in the…


    We acknowledge helpful advice from Paul Appelbaum, M.D., who critiqued an early draft of this manuscript.

This work was supported, in part, by a grant from the Greenwall Foundation, by NIMH grants MH-43693, MH-49671, and MH-59101, and by the Department of Veterans Affairs.
