Research Article | Regular Articles

Differences in Expert Witness Knowledge: Do Mock Jurors Notice and Does It Matter?

Caroline T. Parrott, PhD, Tess M. S. Neal, PhD, Jennifer K. Wilson, MA, and Stanley L. Brodsky, PhD

Journal of the American Academy of Psychiatry and the Law Online, March 2015, 43 (1) 69–81

Dr. Parrott is Staff Psychologist, Taylor Hardin Secure Medical Facility, Tuscaloosa, AL. Dr. Neal is Assistant Professor, New College of Interdisciplinary Arts and Sciences, Arizona State University, Phoenix, AZ. Ms. Wilson is a doctoral candidate and Dr. Brodsky is Professor, Department of Psychology, The University of Alabama, Tuscaloosa, AL. Portions of these results were presented at the 2013 Conference of the American Psychology-Law Society (AP-LS) in Portland, OR, March 7–9, 2013.

Abstract

The knowledge of experts presumably affects their credibility and the degree to which the trier of fact agrees with them. However, specific effects of demonstrated knowledge are largely unknown. In this experiment, we manipulated a forensic expert's level of knowledge in a mock-trial paradigm. We tested the influence of low versus high expert knowledge on mock juror perceptions of expert credibility, on agreement with the expert, and on sentencing. We also tested expert gender as a potential moderator. Knowledge effects were statistically significant; however, these differences carried little practical utility in predicting mock jurors' ultimate decisions. Contrary to the hypotheses that high knowledge would yield increased credibility and agreement, knowledge manipulations influenced only perceived expert likeability. The low-knowledge expert was perceived as more likeable than the high-knowledge counterpart, a paradoxical finding. No significant differences across expert gender were found. Implications for conceptualizing expert witness knowledge and credibility and their potential effects on juror decision-making are discussed.

Knowledge and competence are characteristics that serve a key role in human interactions. Interpersonal effectiveness and positive impression management are affected by perceptions of intellectual ability, knowledge, and skill. Knowledge may be communicated through self-proclamation, assertiveness, substantive content, or experience.1–4 An expert's acquired knowledge and competence may serve an important role in a courtroom, where lives and livelihoods may hang in the balance. After all, knowledge relates initially to whether a particular professional is retained to testify.5 The rules of evidence explicitly identify knowledge, experience, training, education, or skill as the practical foundations on which the witness is deemed an expert and permitted to testify by the court.6

Unlike other credibility influences, such as confidence or trustworthiness,7 expert knowledge is mandated by the court in the rules of evidence governing acceptance of a witness as an expert. Thus, an expert witness's knowledge is doubly subjected to scrutiny by the court and the trier of fact. As suggested by previous research, the court's sanction of a witness as expert likely serves as a heuristic to triers of fact in their evaluations of experts' credibility in many cases.8–11 Conversely, a witness's expert status has the potential to backfire and create distrust in the form of skepticism that the expert is a hired gun or feelings of comparative inferiority on the part of the trier of fact.12,13 The effects of expert qualifications and displays of knowledge during testimony are largely unknown.

Expert Witness Credibility in the Courtroom

An expert witness's credibility has the potential to influence jurors' consideration of the testimony. Both expert knowledge and credibility have been shown to influence disputing parties and third-party decision-makers. Expert witness credibility has been instrumental in verdicts and sentencing recommendations in both criminal trials and civil proceedings with mock, potential, and real jurors.14–16

Constructing Expert Witness Credibility

The Witness Credibility Model (WCM)7 is a framework that conceptualizes witness credibility as a composite of four factors: confidence, likeability, knowledge, and trustworthiness. The four-factor model is effectively captured by the Witness Credibility Scale (WCS).7 Numerous studies have used the WCS and validated its usefulness in evaluating perceptions of expert witnesses.7,10,17–23 Across these investigations, the WCS has held its conceptual strength and demonstrated adequate internal consistency and reliability.

Knowledge in the Courtroom Setting

Expert Qualification

Because of the potential influence of expert witness testimony, experts must be qualified by the courts. In a survey of judges, jurors, lawyers, and experts in civil trials, Champagne et al.24 found knowledge and expertise to be the most desired characteristics in an expert witness. Moreover, perceptions of knowledge were closely linked to impressive educational credentials and reputation as a leading expert in the field.24 Legal requirements generally define expertise as acquired through relevant experience, training, knowledge, education, or skill.6 Expert knowledge may be demonstrated through academic degrees obtained, positions held, particular populations evaluated or treated, professional certifications or licensure, board certification, membership in professional organizations, professional publications, prior court experience as an expert, and honors and awards.25 Thus, the qualification process becomes almost synonymous with credentialing.4

Commons et al.26 suggest that the way expert witness qualifications are presented to jurors could affect how jurors view the expert and, by extension, perhaps how they evaluate the testimony. Hurwitz et al.27 conducted a language and content analysis of actual trial transcripts. They concluded that the jurors perceived expert witnesses as more credible if the experts presented content related to their credentials or experience (i.e., expertise) and objectivity (i.e., trustworthiness) during expert qualification.

Knowledge on the Stand

In the credentialing procedures for an expert witness, the court treats each of the five characteristics outlined in the Federal Rules of Evidence—experience, education, training, skill, and knowledge—as independently representative of expertise.6 However, research has shown that experience does not necessarily equate with improved accuracy or knowledge.28–30 Thus, to be viewed as expert by the trier of fact (and not just by the rules of evidence), expert witnesses should demonstrate mastery of their craft, conveying their knowledge through testimony.24 As Champagne and colleagues24 reported, jurors especially appreciate experts who can make testimony understandable to the lay person and communicate technical information simply and clearly by avoiding or explaining any jargon. Scholars have described how triers of fact may benefit from knowledge woven into a comprehensive story of the evidence.31,32 Testimony should accordingly be consistent with commonsense understanding of physical evidence and the testimony of other witnesses.32 Researchers have also explored jurors' sensitivity to differences in the quality and presentation style of data cited by expert witnesses, as well as the presence or absence of an expert.33 However, to our knowledge, no study has experimentally isolated and manipulated level or degree of expert knowledge on the stand to test its influence on decision-making.

Two of the four Witness Credibility Scale factors, likeability10,17 and confidence,18,20,23 have been experimentally manipulated and studied in relation to the Witness Credibility Model (Table 1). The main effect of knowledge on expert credibility has yet to receive similar empirical attention. Neal et al.10 studied expert witness knowledge, but only as it interacted with likeability. That is, they did not isolate knowledge in that study; rather, they varied knowledge and likeability at the same time and studied their interactions rather than their main effects.

Table 1. Definitions and Examples of the Four Witness Credibility Model (WCM) Factors

They found that likeability and knowledge did interact in the expert witness role, with higher levels of likeability and knowledge being associated with higher credibility. However, they were not able to discern to what degree expert knowledge alone affects perceptions of credibility.

Gender as a Moderator of Perceived Knowledge

Prior research has found inconsistencies in whether the expert's gender moderates perceptions of expert witness credibility.34 For example, studies have found differential effects based on the experts' gender, either in favor of men10,35 or in favor of women.22,36 Other studies have uncovered complex interactions in the ways in which male and female experts are perceived. For example, Neal et al.10 found that experts who met threshold expectations of likeability and knowledge were not perceived differently based on their gender; however, when they were not likeable or particularly knowledgeable, male experts were perceived significantly more positively and were more persuasive than female experts. We included expert gender as an independent variable in the present study to explore further the relation between expert gender and credibility.

The Current Study

Perceptions of expert witness credibility may vary as a function of knowledge presentation. For example, some experts may not deem it necessary to discuss their specialized knowledge once qualified. Others may present displays of their knowledge judiciously and throughout their testimony in an effort to emphasize their expertise. Studying how various demonstrations of expert knowledge influence juror decision-making is a step toward understanding the effectiveness of expert testimony.

In the current study, we focused specifically on the main effect of expert witness knowledge. We sought to examine juror perceptions of expert credibility in response to varying degrees of expert knowledge. We manipulated expert knowledge as the independent variable (high versus low knowledge) while holding the other WCS constructs constant. We expected to find a difference between high- and low-knowledge manipulations of the expert on the following dependent variables: the three other components of credibility (trustworthiness, likeability, and confidence), sentencing recommendations, and agreement with the expert's opinion on the likelihood of future violence. We specifically hypothesized that the very knowledgeable expert (compared with the less knowledgeable counterpart) would be rated significantly higher on credibility outcomes and would yield more mock juror agreement with the expert regarding the likelihood of defendant future violence and sentencing recommendations. Drawing on the inconsistent findings regarding the effects of expert witness gender on perceptions of credibility in prior research, we explored gender effects in the current study. That is, given the potential interaction of expert gender and knowledge on credibility,10 we included expert gender as a second independent variable.

Methods

Study Design and Operational Definitions

This study was a 2 (high versus low knowledge) × 2 (male versus female expert witness), between-subjects, factorial design. Thus, the independent variables were knowledge (high versus low) and gender (male versus female). We defined expert knowledge as "the degree to which an expert is perceived to be well-informed, competent, or perceptive and to possess or exhibit intelligence, insight, understanding, or expertise" (Ref. 10, p 490). A literature review identified components associated with high knowledge, as displayed in Table 2. This conceptualization has been supported in previous work in which the interactions between knowledge and likeability were examined.10

Table 2. Conceptual Components of Knowledge in Previous Research

Our operational definition of knowledge included substantive content and clarity of testimony, credentials, relevant experience, self-proclaimed expertise, assertiveness, and familiarity with the case. The specific manipulated conceptions of high and low knowledge, again drawing on Neal et al.,10 are detailed in Table 1.

Participants

Undergraduate psychology students (n = 155) at a large public university participated for course credit. The U.S. Supreme Court decided in Witherspoon v. Illinois37 that jurors who sit on capital murder trials must be death qualified—that is, willing and able to consider capital punishment as a sentencing option. Because our stimulus material was based on the sentencing phase of a capital murder trial, those mock jurors who indicated absolute opposition to the death penalty were excluded from our analyses (n = 13), and six mock jurors were removed due to missing data, reducing the total sample size from 155 to 136. Mock jurors who were ineligible due to the Witherspoon criteria were distributed equally across the study conditions and reflected the overall demographic makeup of eligible participants. The sample was 81 percent female and ranged in age from 18 to 43 years (mean (M) = 18.76; standard deviation (SD) = 2.54); it was 79 percent Caucasian, 13 percent African-American, and 8 percent other racial or ethnic backgrounds.

Stimuli

We developed four separate videos, each approximately five minutes in length, to match the experimental conditions: male expert witness–high knowledge; male expert witness–low knowledge; female expert witness–high knowledge; female expert witness–low knowledge. Real expert witnesses, rather than actors, testified in this mock scenario. When the video opened, the judge explained to the mock jurors that the hearing was the capital sentencing phase of the trial of Mr. Jones, a defendant who had already been found guilty of first-degree murder. The judge explained that the only task before the jurors was to decide whether Mr. Jones should be sentenced to death or life in prison. The judge explained the standard for burden of proof before the expert testified. In each video, the expert witness testified under both direct and cross-examination about evaluation of Mr. Jones' likelihood of future violence. In all conditions, the expert testified to the substantial likelihood that the defendant would reoffend. The video script was adapted from a jury sentencing proceeding used in previous studies.9,10,17,22 The script was modified to reflect the knowledge manipulations described above and included either a male or female expert matched for age, race, and clothing.

Procedure and Materials

Before the study began, Institutional Review Board approval was obtained from the University of Alabama Office of Research Compliance for research with human subjects. Information about the study procedures and details regarding informed consent were provided to participants, and then they viewed a randomly assigned video condition. After watching the video, all participants individually completed the following questionnaires.

Witness Credibility Scale

The Witness Credibility Scale (WCS) was used to assess the credibility of the expert.7 The scale contains 20 bipolar adjectives rated on a 1- to 10-point Likert scale (e.g., unkind (1) to kind (10); dishonest to honest; shaken to poised). The WCS generates an overall credibility rating (α = 0.96 in this study), with higher scores indicating higher credibility. The WCS also yields a multidimensional measure of expert credibility defined by four subordinate domains: trustworthiness (α = 0.95), confidence (α = 0.92), likeability (α = 0.90), and knowledge (α = 0.93). Given the present study's interest in how expert knowledge may relate to operational, potentially changeable facets of credibility (e.g., likeability and confidence), expert credibility was assessed at the facet level (i.e., trustworthiness, confidence, and likeability) instead of at the global level (overall credibility) in this study.
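For readers who want to see how internal consistency coefficients of this kind can be computed, the following is a minimal Python sketch of coefficient alpha for one WCS subscale. The function and the ratings matrix are hypothetical illustrations, not the authors' analysis code or data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Coefficient alpha for a participants-by-items rating matrix."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical ratings: 6 participants x 5 knowledge-subscale items (1-10 each)
knowledge_items = np.array([
    [9, 8, 9, 10, 9],
    [7, 6, 7, 8, 7],
    [8, 8, 9, 9, 8],
    [5, 6, 5, 6, 5],
    [9, 9, 10, 9, 9],
    [6, 7, 6, 7, 6],
])
print(round(cronbach_alpha(knowledge_items), 2))
```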

Future Violence Likelihood Rating

Participants were asked to rate from 1 to 100 percent the likelihood that the defendant would commit future acts of violence. These ratings reflected how believable the participant found the expert who opined that the defendant was likely to commit future violent acts. Thus, this outcome reflects mock jurors' evaluations of the defendant's likelihood of engaging in future violence and mock jurors' agreement with the expert.

Sentencing Rating

On Likert-type scales ranging from extremely likely to extremely unlikely, participants rated their likelihood of sentencing the defendant to each of the two available sentencing options: life in prison without parole (LWOP) or death. To create a single continuous sentencing variable, these two Likert-type ratings were converted to standardized z scores. Then, the death penalty z scores were multiplied by −1, and the LWOP z scores were multiplied by +1. Finally, the two sets of z scores were summed to create a single continuous sentencing variable that conveys both direction and strength. That is, the more negative the score, the more likely the participant would be to assign the death penalty (representing agreement with the expert). The more positive the score, the more likely the participant would be to assign LWOP (disagreement with the expert).
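As an arithmetic illustration of this composite, here is a minimal sketch assuming two hypothetical NumPy arrays of Likert ratings, one value per mock juror; the variable names and values are invented for the example.

```python
import numpy as np
from scipy.stats import zscore

# Hypothetical Likert ratings (e.g., 1 = extremely unlikely ... 6 = extremely likely)
death = np.array([6, 5, 2, 4, 1, 6, 3, 5])  # likelihood of choosing the death penalty
lwop = np.array([1, 2, 6, 3, 6, 2, 5, 1])   # likelihood of choosing LWOP

# Standardize each rating, weight death z scores by -1 and LWOP z scores by +1, then sum
sentencing = -1 * zscore(death, ddof=1) + 1 * zscore(lwop, ddof=1)

# Negative scores lean toward a death sentence (agreement with the expert);
# positive scores lean toward LWOP (disagreement with the expert).
print(np.round(sentencing, 2))
```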

Demographics

A demographics questionnaire elicited participants' age, gender, and degree of death penalty support.

Manipulation Checks

The Knowledge subscale of the Witness Credibility Scale was used as a manipulation check. This subscale comprises five items, including queries about whether the expert seems logical, informed, wise, educated, and scientific (again, α = 0.93 in this sample).7 In addition, we included one question about the attractiveness of the expert witness.

Target attractiveness can influence person perception such that greater attraction is positively associated with more favorable judgments.38,39 Given that our primary dependent variables in this study are credibility assessments, before data collection, we matched the relative attractiveness of the experts used as stimuli in this study. Results suggested that attractiveness would not covary with the independent variables (e.g., gender). We included the attractiveness question in the main study as a manipulation check.

Results

Manipulation Check

Knowledge

The knowledge manipulation check indicated that our manipulation of knowledge was successful for each expert; that is, the high-knowledge expert was perceived as more knowledgeable than the low-knowledge expert (F(1,135) = 6.31, p = .013) (high knowledge M = 39.83, SD = 8.53 versus low knowledge M = 35.97, SD = 9.33). Because knowledge was rated on a 10-point scale with five items per construct, the possible range in ratings was from 5 to 50. Thus, both experts were rated as relatively knowledgeable. As expected, however, the low-knowledge expert was rated as significantly less knowledgeable than the high-knowledge expert, with a medium effect (η2 = 0.042).
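For illustration, the F test and the η2 effect size for this kind of two-group manipulation check can be computed as in the sketch below. The arrays are hypothetical stand-ins for knowledge-subscale totals, not the study data.

```python
import numpy as np
from scipy.stats import f_oneway

# Hypothetical knowledge-subscale totals (five items rated 1-10, so totals range from 5 to 50)
high = np.array([44, 38, 41, 35, 47, 40, 36, 42])
low = np.array([37, 30, 39, 28, 35, 41, 33, 36])

F, p = f_oneway(high, low)

# Eta squared for a one-way design: SS_between / SS_total
combined = np.concatenate([high, low])
grand_mean = combined.mean()
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in (high, low))
ss_total = ((combined - grand_mean) ** 2).sum()
eta_squared = ss_between / ss_total

print(round(F, 2), round(p, 3), round(eta_squared, 3))
```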

Attractiveness

To ensure that the potential covariate of attractiveness was independent of the manipulations in this study,40 we matched the female and male experts on attractiveness before the manipulations were tested. We then tested this manipulation check in our study sample by asking the following question about each expert: how physically attractive did you find this expert witness, rated on a 10-point Likert scale (not at all attractive (1) to extremely attractive (10))? A significant difference in attractiveness emerged (F(1, 135) = 4.65; p = .033; η2 = 0.034). The female expert was rated as significantly more attractive (M = 5.01, SD = 1.78) than her male counterpart (M = 4.31, SD = 2.01) (a small to medium effect). Thus, attractiveness was an unexpected confounder that may diminish some portion of the effect of gender or knowledge or both (the portion associated with attractiveness) on outcomes.

There is a debate in the literature about how to address confounders in multivariate analysis of covariance (MANCOVA).41 One key determinant of whether analysis of covariance (ANCOVA) can be implemented when a covariate and an independent variable are confounded (as in our study) is whether the group difference on the covariate arises by chance or instead reflects a meaningful difference that is systematically produced by the independent variable.42–46 MANCOVA is generally appropriate for random-assignment designs if the covariate arises by chance, because the analysis would be removing only "noise variance from group, not anything substantive about group" (Ref. 41, p 45).44 In our study, attractiveness is likely to have differed between the male and female experts by chance. We have no reason to believe that the gender of the expert is the factor that influences the difference in attractiveness or that this difference would generalize to all female experts. Although it can be difficult to substantiate causal relationships between a covariate and an independent variable,47 it is widely accepted that attractiveness is a dimension independent of gender.48,49 Men and women vary in attractiveness, and these variables should not be conflated. Thus, our primary analysis used a MANCOVA that included attractiveness as a covariate, allowing the variance introduced by this unexpected covariate to be reduced.

Main Analyses

For our primary analyses, we conducted a MANCOVA with the two independent variables (knowledge condition: high versus low; and expert gender: male versus female) on the dependent variables. We included five dependent variables: three credibility dimensions to examine witness credibility at the facet level (trustworthiness, confidence, and likeability), a continuous sentencing variable, and ratings of the defendant's likelihood of engaging in future violence (i.e., agreement with the expert's opinion). We included expert witness attractiveness as a covariate. Because participant age, gender, or race did not moderate any of the effects in the initial model, we did not include them in our final models.

Tests of multivariate normality, multicollinearity, and homogeneity of variance-covariance matrices revealed no significant violations.40,50 Because of significant violations of Levene's test of equality of variance (for both sentencing and likeability), we set a conservative α level of 0.025 for these outcomes and used Pillai's trace to examine test statistics.40 Finally, the sample size requirement for MANCOVA procedures (at least 20 participants per cell49) was met, with the sample distributed relatively evenly across conditions. Adjusted means and descriptive information by condition are provided in Table 3.

Table 3. Means (and Standard Deviations) Defined by Expert Gender and Knowledge
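The assumption checks and the omnibus model described above can be sketched along the following lines. This is a minimal illustration assuming a long-format pandas DataFrame with hypothetical column names (trust, confidence, likeability, sentencing, violence, knowledge, gender, attract) and a hypothetical file name; it is not the authors' analysis script.

```python
import pandas as pd
from scipy.stats import levene
from statsmodels.multivariate.manova import MANOVA

# Hypothetical data file: one row per mock juror
df = pd.read_csv("mock_juror_data.csv")

# Levene's test of equality of error variances for each outcome across the 2 x 2 cells
cells = [group for _, group in df.groupby(["knowledge", "gender"])]
for dv in ["trust", "confidence", "likeability", "sentencing", "violence"]:
    stat, p = levene(*[cell[dv] for cell in cells])
    print(dv, round(stat, 2), round(p, 3))

# MANCOVA: five outcomes regressed on knowledge, gender, their interaction,
# and the attractiveness covariate; mv_test() reports Pillai's trace among other statistics
model = MANOVA.from_formula(
    "trust + confidence + likeability + sentencing + violence ~ knowledge * gender + attract",
    data=df,
)
print(model.mv_test())
```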

The MANCOVA results indicated that significant multivariate main effects emerged for the knowledge conditions (Pillai's trace = 0.146; F(6, 127) = 4.34; p < .001; ηp2 = 0.146). There was no significant main effect of expert gender (Pillai's trace = 0.79; F(6, 127) = 2.19; p = .059; ηp2 = 0.79), indicating that expert witness gender was not systematically related to any of the dependent variables. The interaction between expert knowledge and gender was not systematically related to any of the dependent variables (Pillai's trace = .02; F(6, 125) = .572; p = .720; ηp2 = .02).

Follow-Up Analysis to the MANCOVA

We initially conducted a discriminant function analysis (DFA) to identify how the dependent variables discriminated the high- versus low-knowledge groups. Essentially, DFA flips the approach to understanding the relationship between knowledge (the independent variable, or IV) and the dependent variables (DVs) used in the MANCOVA. Should a dependent variable explain a portion of the separation between high- and low-knowledge groups (i.e., if the DV can help explain the differences in the IV), it is likely that the significant main effect of the MANCOVA is attributable to the relationship between the IV conditions (knowledge in this case) and the particular dependent variable.51 In this case, our discriminant analysis revealed one discriminant function that explained 100 percent of the variance: canonical R2 = 0.15 (small effect size).

The discriminant function showed that the differences in knowledge could be explained in terms of one underlying dimension (Wilks' lambda = 0.85; χ2(5) = 21.96; p < .001). The correlations between outcomes and the discriminant function51–53 revealed that likeability loaded highly onto the discriminant function (r = 0.53), followed by sentencing recommendation (r = 0.26), confidence (r = −0.20), chance of committing future acts of violence (agreement with the expert) (r = −0.17), and finally the low loading of trustworthiness (r = −0.04). Indeed, the DFA results indicate that the function discriminates between the low- and high-knowledge groups (based on the nonstandardized canonical discriminant functions evaluated at the group means). Although likeability tended to contribute the most to group separation of high versus low knowledge, the difference between knowledge groups may well be related to sentencing recommendation and, to a lesser extent, to agreement with the expert and perceptions of expert witness confidence. However, expert trustworthiness does not appear to relate systematically to group separation of knowledge.
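To illustrate the follow-up procedure, here is a minimal sketch of a two-group discriminant analysis with structure loadings and a canonical R2. The matrix X, the labels y, and the random stand-in data are hypothetical; the sketch shows the technique rather than reproducing the study's results.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

# X: n x 5 matrix of outcomes (trust, confidence, likeability, sentencing, violence)
# y: knowledge condition (0 = low, 1 = high); with two groups there is only one function
rng = np.random.default_rng(0)
X = rng.normal(size=(136, 5))        # hypothetical stand-in data
y = rng.integers(0, 2, size=136)

lda = LinearDiscriminantAnalysis(n_components=1)
scores = lda.fit_transform(X, y).ravel()

# Structure loadings: correlation of each outcome with the discriminant scores
loadings = [np.corrcoef(X[:, j], scores)[0, 1] for j in range(X.shape[1])]

# Canonical R2 for two groups: squared correlation between the scores and group membership
canonical_r2 = np.corrcoef(scores, y)[0, 1] ** 2

print(np.round(loadings, 2), round(canonical_r2, 3))
```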

To further understand these data, we conducted planned comparisons using univariate analyses with a Bonferroni correction of p = .01. In concert with the DFA, likeability was the only WCM facet to be systematically and significantly related to knowledge condition (F(1, 130) = 5.57; p = .020; ηp2 = 0.041). To our surprise, the highly knowledgeable expert was rated as significantly less likeable (M = 35.62, SD = 8.46) than the less knowledgeable expert (M = 39.58, SD = 8.26). No other significant effects of knowledge on the remaining WCM facets (i.e., confidence or trustworthiness), sentencing recommendations, future violence predictions (agreement with the expert), or gender interactions emerged.

Supplemental Analysis

We conducted these supplemental analyses to examine the variable of attractiveness in more depth. Even though attractiveness was deemed a nonsystematic covariate in the present study, it is still possible that by entering the covariate into the model, "the covariate will in effect get credit for any relationship of their shared variance [with the independent variable] that is also shared with the dependent variable" (Ref. 41, p 45). The result may be a diminished estimate of the relationship between gender and the dependent variables. This possibility is of particular concern in the present study because the multivariate main effect of expert gender approaches statistical significance (p = .059) when attractiveness is included in the model.

Attractiveness did in fact exert a significant multivariate main effect in the overall model (Pillai's trace = .087; F(6, 127) = 2.44; p = .038; ηp2 = .087). When attractiveness was removed from the analysis, a significant multivariate main effect for expert witness gender emerged (Pillai's trace = .98; F(5, 128) = 2.77; p = .021, ηp2 = 0.98), and the significant main effect for knowledge condition remained (Pillai's trace = .15; F(5, 128) = 4.52; p < .001; ηp2 = .15). The interaction between the expert's knowledge and gender was not systematically related to any of the dependent variables when attractiveness was removed from the model (Pillai's trace = .02; F(5, 128) = 0.607; p = .694; ηp2 = .02).

These results indicate that a portion of the effect of gender on the dependent variables may in fact have been removed in the main analyses (the portion of the effect that covaried with attractiveness).40 However, as noted above, this effect is more likely to be explained by gender's covariate relationship with attractiveness in our particular stimuli. Thus, the implication is that exploring the potential effect of gender (as possibly mediated by attractiveness) was not theoretically supported.

Discussion

In this study, we experimentally manipulated level of knowledge in an expert forensic mental health professional's testimony on the stand in a mock-trial paradigm. We sought to test the relation between lower and higher degrees of demonstrated expert knowledge and juror perceptions of expert credibility, agreement with the expert, and sentencing decisions. We also tested for potential moderating effects of expert gender. Our knowledge manipulations were successful from an empirical standpoint, operationally defining high versus low demonstrated expert knowledge.

We hypothesized that high knowledge would yield increased credibility as well as increased agreement with the expert. Although knowledge did exert an effect on one facet of credibility (i.e., likeability), it did so in a manner counter to our predictions. Knowledge influenced perceptions of expert likeability such that the expert with lower knowledge was paradoxically perceived as more likeable than the higher knowledge counterpart. The second part of our hypothesis, which predicted a positive relationship between knowledge and agreement with the expert, was not supported in this study. In other words, it appears that our defined levels of very knowledgeable versus less knowledgeable did not influence the mock jurors' ultimate opinions of the defendant's risk of future violence (agreement with the expert) or sentencing. Thus, knowledge manipulations influenced perceptions of some facets of credibility, yet carried little predictive utility in understanding mock jurors' ultimate decisions. Our results do not necessarily imply that an expert's knowledge has little effect on perceptions of credibility and subsequent juror decisions. Let us examine alternative explanations of our findings.

It is plausible that this study evidenced a ceiling effect, likely to exist in actual testimony, where the peripheral cue of being an expert extended a blanket influence of knowledge. Recall that differences between the low- and high-knowledge experts were statistically significant and yielded a medium effect. However, both experts were perceived as relatively knowledgeable. Moreover, knowledge levels did not contribute to credibility outcomes except in regard to likeability. Mock jurors also did not differentiate between very knowledgeable and less knowledgeable experts for agreement ratings and sentencing. These findings collectively suggest that mock jurors may have relied on the courts' discretion in allowing only qualified people with specialized knowledge to take the role of expert.6 That is, jurors may make an assumption that the expert is knowledgeable without critically evaluating the foundation of his or her knowledge. These results align with previous research that suggests the primary persuasive influence in expert testimony is the witness's status as an expert. Research has shown that jurors may not sufficiently evaluate the foundational research of expert opinions and that they may defer to the clinical opinion of the expert over an opinion rooted in actuarial evidence.9,54–56 The current study adds to the literature. Even when knowledge is varied (high versus low), there does not appear to be a critical evaluation of the witness, perhaps due to the witness's qualification as an expert. Thus, differential decision-making that could otherwise result from differences in expert knowledge may not be elicited.

However, low knowledge did increase the expert's likeability, and that result suggests that additional social–cognitive processes are at work. The negative relation that emerged between level of expert knowledge and perceived likeability implies that aspects of higher versus lower knowledge may influence expert likeability. Although we are cautious about speculating on underlying processes that were not directly examined in this study, it is possible, for example, that learning about an expert's qualifications would create a psychological distance between the expert and the mock juror. Social psychology research supports the competence–liking paradox; that is, the person with the most knowledge is often not the most liked.57,58 In court, and in life, however, it would seem beneficial to like the more knowledgeable person, as he may increase our chances of being correct and competent. Nevertheless, likeability for a knowledgeable expert comes at a cost to the juror, who may feel that he pales in comparison to the all-knowing expert. Perceptions of similarity and mutual liking decrease when a person perceived as superior is a factor.57 In fact, the pratfall effect suggests that competence with some degree of fallibility is perhaps the most liked combination59 and that a juror's gender and self-esteem may play into this phenomenon.60,61 Another possibility is that the highly knowledgeable experts were disliked because of character cues elicited from the high-knowledge content (e.g., perceived narcissism). Thus, it is plausible that differences in knowledge (e.g., the perceived narcissism in very knowledgeable experts) are more or less interpreted as differences in likeability (e.g., less likeable). In other words, experts may benefit from Baldoni's recommendation: “Never act like the smartest guy in the room.”62

When it comes to credibility, mock jurors may defer to the court and view very knowledgeable and less knowledgeable experts as knowledgeable because of their expert status. Thus, although it may seem that differences in knowledge have little influence on credibility determinations, differences in demonstrations of knowledge (e.g., high- versus low-knowledge presentations on the stand) may elicit psychological and peripheral cues to an expert's likeability. The evaluative, social, and cognitive influences that could be responsible for the negative knowledge–likeability link found in this and other research10 deserve future empirical attention, particularly given the potential influence of expert likeability on mock juror decision-making.17

Overall, the degree to which jurors are sensitive to differences in an expert's knowledge is not clear. Perhaps a continuum of perceived knowledge exists and exerts a meaningful influence on credibility. More likely, however, jurors assign a knowledge threshold to the person who is deemed an expert by the courts, consistent with heuristic models of jurors' evidence interpretation.63,64 Thus, perhaps the relative quality of the witness's expertise lacks a significant, observable influence on decision-making. This finding dovetails with prior witness credibility research. Despite the influence of manipulations on overall credibility, the components of credibility often lack direct or explicitly observable influence on individual jurors' explicit decision-making.10,17

The finding that differences in knowledge may affect the expert's perceived credibility but that expert differences did not translate into differences in jurors' ultimate decisions is potentially good news. The decision of the trier of fact is supposed to be based on the content of testimony, the substance of a case, and the strength of evidence.65 These findings add to the body of research showing that other variables affect the decision-maker, but only incrementally. That is, a variable such as expert witness knowledge is but one of many pieces of information decision-makers must integrate in formulating a decision. Experts and trial consultants may still benefit from recognizing that in close cases (i.e., those in which the verdict could go either way) or in cases in which opposing experts testify, expert knowledge may exert a substantive influence. In such instances, it might be beneficial to keep in mind that displays of knowledge may not always work in one's favor, at least to the extent that they diminish one's likeability.17

Effects of Expert Witness Gender

We found that expert gender had no effect on perceptions of credibility or mock juror decisions. Further, no statistically significant interactions regarding expert gender emerged in the present study. These results are encouraging: they suggest that jurors may not be using gender as a peripheral cue to assess expert knowledge or credibility.

Implications for Testifying Experts and the Attorneys Who Select Them

We constructed large differences in high- and low-knowledgeable experts in this study. However, the mock jurors did not pick up on the differences in the experts to the degree that we expected. These findings suggest that in uncontested cases or cases where the evidence is overwhelmingly strong for one side, the expert's basic credentials, accomplishments, and demonstrated knowledge may not make much of a difference to jurors. Experts and attorneys in such cases may not have to fret about relatively unaccomplished experts; so long as they meet a threshold level of perceived knowledge, various credentials may not matter. For example, it may make no difference whether the expert attended an Ivy League university or a lesser known institution, is board certified, or has published in scientific journals.

What this research cannot speak to is whether differences in the level of experts' knowledge would make a difference to judges, for example, in bench trials. Judges are probably more sophisticated about discerning relative degrees of expert knowledge. Furthermore, because our participants were exposed to only one expert, the results cannot show whether judges or jurors would notice relative differences in experts' knowledge if there were opposing experts in a single case. Whereas a meta-analysis found similar effects of unopposed and opposed expert testimony on juror decision-making,66 other studies have revealed particular contexts in which opposing testimony may have a uniquely strengthened effect.13 Perhaps if the high- and low-knowledge experts had been compared side by side, their differences would have become more salient, and a stronger effect would have been found.

Strengths, Limitations, and Future Directions

The knowledge manipulations used in this study were developed by amalgamating conceptual components from a variety of prior research projects. Ours is the second study to use these knowledge manipulations (see Ref. 10 for the first use). The current study was the first to test the unique effects of knowledge on mock jurors' determinations of witness credibility and decision-making. A strength of this design is the resultant ability to interpret direct causal relations of expert knowledge and gender to credibility and case-related decision-making. Additional strengths that added ecological validity were the use of actual PhD forensic psychologists as experts in the videotaped scenarios and the filming of the stimulus videos in a well-simulated environment.

However, to achieve the control needed for experimental manipulation of expert knowledge in this preliminary study, we did not fully capture some real-world elements of a capital trial. Limitations include the lack of voir dire or deliberation and the use of a college student mock juror sample.67 College students often provide a large, easily accessible population for the purposes of initial mock jury research.68 While a review of jury simulation research concluded that the use of students as mock jurors is not necessarily a cause for concern,69 recent research suggests some differences between college and community samples.70,71 Nevertheless, the use of a college sample has been deemed no more problematic to generalizability than other common variables (e.g., trial context and jurisdiction).68

Other limitations of our sample include that it was largely Caucasian (79%), female (81%), and young (average age, 19). Although the characteristics of this sample do not reflect those of an average jury pool, the mock jurors in this study were jury-eligible citizens and may serve in actual trials at some point. A replication of our results with a more diverse sample within a paradigm that further extends the realistic nature of the trial process would allay some validity concerns, increase the generalizability of the findings, and increase the confidence that the field can place in these results. Analyses were also complicated because the particular female expert in this study was perceived as more attractive than the particular male expert, suggesting that our MANCOVA results should be interpreted with caution. It is possible that the gender manipulation was somewhat weakened by the difference in attractiveness and thus underestimated the relationship between gender and the dependent variables.41,43 In future research, investigators should seek to avoid confounding due to attractiveness, possibly by including multiple male and female experts for comparison.

Studying witness knowledge in a capital proceeding potentially limits the generalizability of our findings to other court proceedings. Of course, this criticism is not unique to capital proceedings. The same argument could be made for any other potential proceeding. Had we chosen to study expert knowledge in a civil commitment proceeding, for example, those findings might have been relevant only for other civil commitment proceedings. We chose to study expert knowledge in a capital case for several reasons. First, because the possibility of a death sentence sets the verdict apart from other sentencing outcomes, and because capital trials are among the most contentious cases, mock jurors' motivation to attend to the task and to the expert may have been maximized in this context. Second, lawyers and experts may seek the most consultation in capital cases, given the resources devoted to them and the high stakes (death versus life) that are partially contingent on the effectiveness of testimony.

Third, in all the other studies we have published that involved experimentally manipulating elements of expert credibility, we have used the same basic mock-trial stimuli (Table 1). For meaningful comparison of the findings from the current study with the body of research that has developed on witness credibility, we wanted to hold constant as many details as possible, other than the credibility behaviors that have been manipulated across the various studies. Finally, most prior research on expert witness testimony in which clinical versus actuarial testimony effectiveness was varied (related to expert witness knowledge) has been in capital sentencing paradigms, which allows us to build on this line of research.

We also note that the witness's self-proclaimed expertise (e.g., the expert's statement that “as far as I know I've never been wrong”) may have introduced a confounder of perceived arrogance coupled with high knowledge. To the extent that this confounder was present and systematically affected perceived likeability, this aspect of the knowledge presentation may have influenced more than just perceived expertise by lessening expert likeability and hampering the effects of increased confidence in the highly knowledgeable expert. Future research should explore the relative influence of various types of high expert knowledge displays on the stand.

Overall, manipulations of expert knowledge did not affect credibility or significantly predict mock jurors' decisions in the hypothesized manner. In this discussion, we presented hypotheses about why these findings may have emerged, emphasizing support found for heuristic models' explanatory value in understanding how expert testimony may influence jurors' evaluation of an expert's credibility. Given the centrality of expert knowledge to the courts' reliance on expert testimony, future research should seek to clarify its role in juror evaluations of expert evidence. In short, in answer to the questions we originally set out to explore, it appears that mock jurors do notice variations in expert witness knowledge; however, this difference may not carry weight when it comes down to influences on evidence interpretation and decision-making.

Acknowledgments

The authors thank Michael and Desiree Griffin for their assistance in preparing the video stimuli and Maddy Semon for data collection and management.

Footnotes

  • Disclosures of financial or other potential conflicts of interest: None.

  • © 2015 American Academy of Psychiatry and the Law

References

1. Lee E: Categorical person perception in computer-mediated communication: effects of character representation and knowledge bias on sex inference and informational social influence. Media Psychol 9:309–29, 2007
2. Kern J: Predicting the impact of assertive, empathic-assertive, and nonassertive behavior: the assertiveness of the assertee. Behav Ther 13:486–98, 1982
3. Ware J, Williams R: The Dr. Fox effect: a study of lecturer effectiveness and ratings of instruction. J Med Educ 50:149–56, 1975
4. Brodsky SL: Testifying in Court: Guidelines and Maxims for the Expert Witness. Washington, DC: American Psychological Association, 1991
5. Barsky AE, Gould JW: Clinicians in Court: A Guide to Subpoenas, Depositions, Testifying, and Everything Else You Need to Know. New York: The Guilford Press, 2002
6. U.S.C.S. Fed. Rules Evid. R 702 (2011), Federal Rules of Evidence, Article VII, Opinions and Expert Testimony, Rule 702, Testimony by Experts
7. Brodsky SL, Griffin MP, Cramer RJ: The Witness Credibility Scale: an outcome measure for expert witness research. Behav Sci & L 28:892–907, 2010
8. Higgins ET, Scholer AA: When is personality revealed? A motivated cognition approach, in Handbook of Personality: Theory and Research (ed 3). Edited by John OP, Robins RW, Pervin LA. New York: The Guilford Press, 2008
9. Krauss DA, Sales BD: The effects of clinical and scientific expert testimony on juror decision making in capital sentencing. Psychol Pub Pol'y & L 7:267–310, 2001
10. Neal TM-S, Guadagno RE, Eno CA, et al: Warmth and competence on the witness stand: implications for credibility of male and female expert witnesses. J Am Acad Psychiatry Law 40:488–97, 2012
11. Titcomb C, Brodsky SL, Nagle J: Looking beyond our assumptions: how mock jurors perceive expert witness testimony in insanity defense cases. Poster presented at the 31st Conference of the American Society of Trial Consultants, New Orleans, LA, June 7–9, 2012
12. Cooper J, Neuhaus IM: The "hired gun" effect: assessing the effect of pay, frequency of testifying, and credentials on the perception of expert testimony. Law & Hum Behav 24:149–71, 2000
13. Levett LM, Kovera MB: Psychological mediators of the effects of opposing expert testimony on juror decisions. Psychol Pub Pol'y & L 15:124–48, 2009
14. Bornstein BH: The impact of different types of scientific testimony on mock jurors' liability verdicts. Psychol Crime & L 10:429–46, 2004
15. Ruva CL, Bryant JB: The impact of age, speech style, and question form on perceptions of witness credibility and trial outcome. J Appl Soc Psychol 34:1919–44, 2004
16. Williams FP, McShane MD: Psychological testimony and the decisions of prospective death-qualified jurors, in Death Penalty in America: Current Research. Edited by Bohm RM. Cincinnati, OH: American Publishing Company, 1991, pp 71–88
17. Brodsky SL, Neal TMS, Cramer RJ, et al: Credibility in the courtroom: how likeable should an expert witness be? J Am Acad Psychiatry Law 37:525–32, 2009
18. Cramer RJ, DeCoster J, Harris PB, et al: A confidence-credibility model of expert witness persuasion: mediating effects and implications for trial consultation. Consult Psychol J Pract Res 63:129–17, 2011
19. Cramer RJ, Neal TMS, DeCoster JM, et al: Witness self-efficacy: development and validation of the construct. Behav Sci & L 28:784–800, 2010
20. Cramer RJ, Neal TMS, Brodsky SL, et al: Self-efficacy and confidence: theoretical distinctions and implications for trial consultation. Consult Psychol J Pract Res 61:319–34, 2009
21. Griffin MP, Clark J: Juror expectations concerning technology implementation in the courtroom. Crim L Bull 43:238–49, 2007
22. Neal TMS, Brodsky SL: Expert witness credibility as a function of eye contact behavior and gender. Crim Just & Behav 35:1515–26, 2008
23. Cramer RJ, Brodsky SL, DeCoster J: Expert witness confidence and juror personality: their impact on credibility and persuasion in the courtroom. J Am Acad Psychiatry Law 37:63–74, 2009
24. Champagne A, Shuman D, Whitaker E: An empirical examination of the use of expert witnesses in American courts. Jurimetrics J 31:375–92, 1991
25. Melton GB, Petrila J, Poythress NG, et al: Psychological Evaluations for the Courts: A Handbook for Mental Health Professionals and Lawyers (ed 3). New York: Guilford Press, 2007
26. Commons ML, Gutheil TG, Hilliard JT: On humanizing the expert witness: a proposed narrative approach to expert witness qualification. J Am Acad Psychiatry Law 38:302–4, 2010
27. Hurwitz SD, Miron MS, Johnson BT: Source credibility and the language of expert testimony. J Appl Soc Psychol 22:1909–39, 1992
28. Brodsky SL: The Expert Expert Witness: More Maxims and Guidelines for Testifying in Court. Washington, DC: American Psychological Association, 1999
29. Faust D: Are there sufficient foundations for mental health experts to testify in court? No, in Controversial Issues in Mental Health. Edited by Kirk SA, Einbinder SD. Boston: Allyn and Bacon, 1994, pp 196–201
30. Garb HN: Clinical judgment, clinical training, and professional experience. Psychol Bull 105:387–96, 1989
31. Brodsky SL: Coping with Cross-Examination and Other Pathways to Effective Testimony. Washington, DC: American Psychological Association, 2004
32. Sundby SE: The jury as critic: an empirical look at how capital juries perceive expert and lay testimony. Va L Rev 83:1109–88, 1997
33. Kovera MB, Gresham AW, Borgida E, et al: Does expert psychological testimony inform or influence juror decision-making?—a social cognitive analysis. J Appl Psychol 82:178–91, 1997
34. Neal TMS: Women as expert witnesses: a review of the literature. Behav Sci & L 32:164–79, 2014
35. Larson BA, Brodsky SL: When cross-examination offends: how men and women assess intrusive questioning of male and female expert witnesses. J Appl Soc Psychol 40:811–30, 2010
36. Amina M: Juror perception of experts in civil disputes: the role of race and gender. Law & Psychol Rev 22:179–97, 1998
37. Witherspoon v. Illinois, 391 U.S. 510 (1968)
38. Hosoda M, Stone-Romero EF, Coats G: The effects of physical attractiveness on job-related outcomes: a meta-analysis of experimental studies. Pers Psychol 56:431–62, 2003
39. Ritts V, Patterson ML, Tubbs ME: Expectations, impressions, and judgments of physically attractive students: a review. Rev Educ Res 62:413–26, 1992
40. Tabachnick BG, Fidell LS: Using Multivariate Statistics (ed 5). Boston: Pearson Education, 2007
41. Miller GA, Chapman JP: Misunderstanding analysis of covariance. J Abnorm Psychol 110:40–8, 2001
42. Cochran WG: Analysis of covariance: its nature and uses. Biometrics 44:261–81, 1957
43. Evans SH, Anastasio EJ: Misuse of analysis of covariance when treatment effect and covariate are confounded. Psychol Bull 69:225–34, 1968
44. Maxwell SE, Delaney HD: Designing Experiments and Analyzing Data: A Model Comparison Perspective. Belmont, CA: Wadsworth, 1990
45. Overall JE, Woodward JA: Nonrandom assignment and the analysis of covariance. Psychol Bull 84:588–94, 1977
46. Wildt AR, Ahtola OT: Analysis of Covariance. Beverly Hills, CA: Sage, 1978
47. Cohen J, Cohen P: Applied Multiple Regression/Correlation Analysis for the Behavioral Sciences (ed 2). Hillsdale, NJ: Erlbaum, 1983
48. Eagly AH, Ashmore RD, Makhijani MG, et al: What is beautiful is good, but…: a meta-analytic review of research on the physical attractiveness stereotype. Psychol Bull 110:109–28, 1991
49. Langlois JH, Kalakanis L, Rubenstein AR, et al: Maxims or myths of beauty?—a meta-analytic and theoretical review. Psychol Bull 126:390–423, 2000
50. Pallant J: SPSS Survival Manual: A Step-by-Step Guide to Data Analysis Using SPSS for Windows (ed 3). New York: McGraw-Hill, 2000
51. Field A: Discovering Statistics Using SPSS (ed 3). Thousand Oaks, CA: Sage Publications, Inc., 2009
52. Bargman RE: Interpretation and use of a generalized discriminant function, in Essays in Probability and Statistics. Edited by Bose RC. Chapel Hill, NC: University of North Carolina Press, 1970
53. Bray JH, Maxwell SE: Multivariate Analysis of Variance. Sage University Paper Series on Quantitative Applications in the Social Sciences, 07-054. Newbury Park, CA: Sage, 1985
54. Brekke N, Borgida E: Expert psychological testimony in rape trials: a social–cognitive analysis. J Pers Soc Psychol 55:372–86, 1988
55. Cooper J, Bennett EA, Sukel HL: Complex scientific testimony: how do jurors make decisions? Law & Hum Behav 20:379–94, 1996
56. Schuller R, Vidmar N: Battered wife syndrome evidence in the courtroom: a review of the literature. Law & Hum Behav 16:273–91, 1992
57. Aronson E: The Social Animal (ed 10). New York: Worth Publishers, 2008
58. Bales R: Task roles and social roles in problem solving groups, in Readings in Social Psychology (ed 3). Edited by Maccoby EE, Newcomb TM, Hartley EL. New York: Holt, 1958, pp 437–47
59. Aronson E, Willerman B, Floyd J: The effect of a pratfall on increasing interpersonal attractiveness. Psychonom Sci 4:227–8, 1966
60. Deaux K: To err is humanizing: but sex makes a difference. Represent Res Soc Psychol 3:20–8, 1972
61. Aronson E, Helmreich R, LeFan J: To err is humanizing—sometimes: effects of self-esteem, competence, and a pratfall on interpersonal attraction. J Pers Soc Psychol 16:259–64, 1970
62. Baldoni J: Never act like the smartest guy in the room. Forbes, July 26, 2012. Available at http://www.forbes.com/sites/johnbaldoni/2012/07/26/never-act-like-the-smartest-guy-in-the-room/. Accessed October 28, 2013
63. Chaiken S: Heuristic versus systematic information processing and the use of source versus message cues in persuasion. J Pers Soc Psychol 39:752–66, 1980
64. Petty R, Caccioppo J: Communication and Persuasion: Central and Peripheral Routes in Attitude Change. New York: Springer-Verlag, 1986
65. Greene E, Heilbrun K, Fortune WH: Wrightsman's Psychology and the Legal System (ed 6). Belmont, CA: Thompson-Wadsworth, 2007
66. Nietzel MT, McCarthy DM, Kerr MJ: Juries: the current state of the empirical literature, in Psychology and Law: The State of the Discipline. Edited by Roesch R, Hart SD, Ogloff JRP. New York: Kluwer Academic/Plenum, 1999, pp 23–52
67. Wiener RL, Krauss DA, Lieberman JD: Mock jury research: where do we go from here? Behav Sci & L 29:467–79, 2011
    OpenUrl
  68. 68.↵
    1. Bornstein B,
    2. Miller M
    1. Devine DJ
    : Jury decision making: the state of the science, in Psychology and Crime Series. Edited by Bornstein B, Miller M. New York: New York University Press, 2012
  69. 69.↵
    1. Bornstein BH
    : The ecological validity of jury simulations: is the jury still out? Law & Hum Behav 23:75–91, 1999
    OpenUrlCrossRef
  70. 70.↵
    1. Hosch HM,
    2. Culhane SE,
    3. Tubb VA,
    4. et al
    : Town vs. gown: a direct comparison of community residents and student mock jurors. Behav Sci & L 29:452–66, 2011
    OpenUrl
  71. 71.↵
    1. Keller SR,
    2. Wiener RL
    : What are we studying?—student jurors, community jurors, and construct validity. Behav Sci & L 29:376–94, 2011
    OpenUrl