Editor:
The article “Ethics Implications of the Use of Artificial Intelligence in Violence Risk Assessment” by Dr. Cockerill1 offers important and timely commentary on the application of technological advances, such as machine learning, to the field of forensic psychiatry. Dr. Cockerill is right to highlight the potential use of these algorithms in violence risk assessment. The relevance of artificial intelligence to forensic psychiatry is not only an important discussion, it is already attracting growing interest. Predictive algorithms have been applied to electronic health records with promising results in assessing future suicide risk.2,3 There are emerging efforts to predict inpatient violence using clinical notes.4 As predictive models are increasingly used to evaluate the risk of violence, computer scientists will develop algorithms yielding results that clinicians will be urged to consider, highlighting the critical role forensic experts should play in their development.5
Dr. Cockerill falls into the familiar trap of focusing on the sexy aspects of deep learning, a phrase likely never previously said. While a J.J. Abrams-produced show featuring an intelligent crime-fighting computer may have good ratings, today's forensic psychiatrists should not focus on these imaginative, futuristic possibilities. Facebook's suicide prediction algorithm is interesting, but we lack clarity about which variables the company evaluates in determining risk. It remains uncertain whether Netflix or Spotify will allow researchers to access users' viewing or listening trends, nor do we know whether watching “The Silence of the Lambs” or listening to Eminem is predictive of anything beyond personal preference. Studies incorporating this sort of data could be informative, but they should not be our immediate focus.
Our immediate focus should be on the data already available in electronic health records, where we can decide which data to evaluate and how to evaluate them, and can share results and methods publicly. This approach will ensure the integrity of the analysis and allow for transparent discussion of the ethics considerations raised by interventions stemming from risk predictions. Machine learning algorithms applied to electronic health record data offer the potential to rapidly analyze the vast amount of information contained in the medical record. This capability could produce screening mechanisms able to identify patients at potential risk of violence quickly, offering crucial information to clinical staff.
Artificial intelligence may improve the accuracy of violence risk prediction, in theory by virtue of the vast amount of data that can be readily assessed, as has already been seen in suicide prediction studies. We could better stratify individuals at high risk of recidivism and tailor interventions to those in need. There are opportunities to query both the quantitative and the qualitative data collected in health records, an approach likely to prove more fruitful than attempting to understand Facebook's algorithms. Forensic experts should be actively involved in the use of machine learning algorithms applied to electronic health record data to predict future violence. While the COVID-19 pandemic has spurred a great expansion of digital mental health, our focus should start with the vast data already available in medical records.
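For concreteness, the sketch below illustrates the kind of EHR-based screening pipeline discussed above, assuming a hypothetical de-identified extract containing both structured variables and free-text clinical notes. Every file name, column name, and outcome label is an illustrative assumption, not a validated or published model.

```python
# Illustrative sketch only (not a validated clinical model): a simple pipeline
# combining structured EHR variables with free-text clinical notes to screen
# for a hypothetical violence-risk outcome. All names below are assumptions.

import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical de-identified EHR extract.
df = pd.read_csv("ehr_extract.csv")

numeric_features = ["age", "prior_admissions", "prior_violent_incidents"]  # quantitative data
text_feature = "clinical_note"                                             # qualitative data
label = "violent_incident_within_6_months"                                 # hypothetical outcome

# Combine scaled numeric features with a bag-of-words representation of the notes.
features = ColumnTransformer(
    transformers=[
        ("numeric", StandardScaler(), numeric_features),
        ("notes", TfidfVectorizer(max_features=5000, stop_words="english"), text_feature),
    ]
)

model = Pipeline(
    steps=[
        ("features", features),
        ("classifier", LogisticRegression(max_iter=1000, class_weight="balanced")),
    ]
)

# Cross-validated discrimination (AUC) as a transparent first benchmark;
# real-world use would require prospective validation and ethics review.
X = df.drop(columns=[label])
y = df[label]
scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
print(f"Mean cross-validated AUC: {scores.mean():.2f}")
```

The value of such a sketch lies less in the particular classifier than in its transparency: the variables, the handling of note text, and the evaluation metric are all open to scrutiny by forensic experts and ethicists alike.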
Footnotes
Disclosures of financial or other potential conflicts of interest: None.