Editorial

Artificial Intelligence Use in Forensic Academic Writing Does Not Necessarily Compromise Integrity

Margarita Abi Zeid Daou, MD
Journal of the American Academy of Psychiatry and the Law Online June 2025, 53 (2) 136-139; DOI: https://doi.org/10.29158/JAAPL.250020-25
Dr. Daou is an Assistant Professor, Department of Psychiatry and Behavioral Sciences, University of Massachusetts Chan Medical School, Worcester, MA.
Keywords: artificial intelligence, forensic psychiatry, academic writing, ethics, professionalism, publication

The past few decades have propelled artificial intelligence (AI) from an abstract concept to a force capable of reshaping industries, including medicine and academia. About a year ago, Grabb and Angelotta wrote an editorial in The Journal about the “Emerging Forensic Implications of the Artificial Intelligence Revolution.”1 They addressed its use in clinical and forensic practice, biases and ethics considerations, and the absence of a legal framework to define the use of AI in medicine. An equally important and highly utilized aspect of AI, and one that warrants particular attention, is its role in academic writing. The prospect of relying on generative AI in manuscript preparation and academic writing has sparked debate in the scientific community regarding its ethics implications and its impact on scholarly integrity.

These questions are of particular significance for The Journal. At the intersection of forensic psychiatry and legal scholarship, The Journal is uniquely positioned both to benefit from and to critically evaluate AI’s applications: leveraging AI for innovation and efficiency while safeguarding the quality, rigor, and ethics standards of its publications and guiding authors, reviewers, and readers on the use of AI in them. This editorial explores how The Journal can responsibly and thoughtfully navigate the complexities of AI adoption.

The Evolution of AI in Academia

AI’s Historical Development

The term was coined by John McCarthy in 1956 at the Dartmouth Summer Research Project on Artificial Intelligence,2 and the field took shape over the mid-twentieth century. Early systems relied on rule-based logic, following preset instructions to simulate reasoning (e.g., in chess and checkers games). The following decades were transformative for the field. Machine learning (ML), which rose to prominence in the 1980s, allowed systems to make decisions and predict outcomes based on previously learned data without explicit programming. Deep learning, a subset of ML, and neural network models that mimic the human brain’s structure further advanced AI by allowing complex data analysis, such as image recognition (in the field of computer vision) and natural language processing (NLP). With its high scalability potential, thanks to the wide availability of data and high computational power, AI’s performance improved dramatically. Machines became capable of understanding context over long text spans, performing tasks for which they were not explicitly trained, and generating human-like text, as seen with tools like OpenAI’s GPT series.3 Furthermore, ML, NLP, and large language models (LLMs) work in feedback loops: LLMs push the boundaries of ML and NLP research, and new ML techniques improve LLM efficiency and effectiveness.
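
To make the distinction between rule-based systems and learned models concrete, here is a minimal sketch (my own illustration, not drawn from the editorial). The first function behaves exactly as its preset instructions dictate; the second derives its behavior entirely from labeled examples, in the spirit of ML as described above.

```python
from collections import Counter

# Rule-based system: behavior is fully specified by preset instructions.
def rule_based_sentiment(text: str) -> str:
    negative_words = {"harmful", "biased", "inaccurate"}
    return "negative" if set(text.lower().split()) & negative_words else "positive"

# Learned system: behavior emerges from training data, not hand-written rules.
def train_word_counts(examples: list[tuple[str, str]]) -> dict[str, Counter]:
    counts = {"positive": Counter(), "negative": Counter()}
    for text, label in examples:
        counts[label].update(text.lower().split())
    return counts

def learned_sentiment(text: str, counts: dict[str, Counter]) -> str:
    # Score each label by how often the text's words appeared in that label's examples.
    scores = {label: sum(c[w] for w in text.lower().split())
              for label, c in counts.items()}
    return max(scores, key=scores.get)

examples = [("the results were accurate and clear", "positive"),
            ("the citations were inaccurate and biased", "negative")]
counts = train_word_counts(examples)
print(rule_based_sentiment("the model gave biased output"))    # negative (rule fired)
print(learned_sentiment("the citations were biased", counts))  # negative (learned)
```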

AI’s Expanding Role in Academic Writing

For many scholars, adopting AI has significantly improved the efficiency of their academic writing, and they have used it in more ways than might be expected. Technological advances, including early AI, have facilitated scholarly work for decades, from literature searches and reference-and-citation managers to statistical and peer-review tools. Still, the author’s contributions and control remained the cornerstone of the work.

With further development, AI tools can now suggest topics; draft outlines or research proposals; write text with references; create charts, graphs, and infographics; clarify complex topics by providing explanations and analogies; summarize articles; provide translations; make edits; and identify trends and gaps in research to guide researchers toward unexplored areas, among other tasks. With these features and benefits, scholarly work could come within reach of a broader group of people. Some also argue that authors could shift their focus to higher-level analysis while saving time and potentially expanding the scope of their research.3

Table 1 offers a few examples of tools and programs that have been part of scholarly work for years and others that are more recent.

Table 1. Existing AI Tools and Their Functions

Ethics Considerations in the Use of AI

Accuracy of AI-Generated Information

Despite the advantages listed above, AI-generated content still poses challenges, particularly the risk of biased or skewed information and the concern for misleading outcomes. Because ML and NLP models depend on the data on which they are trained, generative tools will inadvertently propagate errors in their outputs (texts, responses, references) if the training data contain biases, inaccuracies, outdated or irrelevant information, or misinformation. The tools’ answers are also at risk of sampling or algorithmic biases: poorly weighted focus on specific data during training could amplify certain ideas, sources, and authors while silencing others, and could misrepresent demographic, historical, sociocultural, and other groups or factors in ways that do not reflect reality and could be harmful on many levels.4,5 Imagine the consequences of such skewed information in our field, where we interact, clinically or forensically, with some of the most vulnerable populations, entangled between the mental health and legal worlds.

Additionally, AI models like GPT generate text that may be syntactically correct but factually inaccurate, a phenomenon called “hallucination.” A famous example is the New York lawyer who relied uncritically on GPT to build the argument for his client’s injury claim. GPT supplied nonexistent citations and quotations and falsely asserted that they were available in major legal databases.6
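
One practical safeguard against hallucinated references, offered here as my own illustration rather than anything described in this editorial, is to verify every citation against an authoritative registry before relying on it. The sketch below checks a DOI against Crossref’s public REST API (a 404 response means Crossref has no record of that DOI) and retrieves the registered title for comparison with the cited one; it assumes the third-party requests package is available.

```python
import requests  # third-party HTTP client; assumed available

CROSSREF = "https://api.crossref.org/works/"  # Crossref's public REST API

def doi_exists(doi: str) -> bool:
    """Return True if Crossref has a record for this DOI (404 means it does not)."""
    return requests.get(CROSSREF + doi, timeout=10).status_code == 200

def registered_title(doi: str) -> str | None:
    """Return the title Crossref has on file, to compare against the cited title."""
    resp = requests.get(CROSSREF + doi, timeout=10)
    if resp.status_code != 200:
        return None
    titles = resp.json()["message"].get("title", [])
    return titles[0] if titles else None

# An AI-suggested citation should be checked, not trusted:
print(doi_exists("10.29158/JAAPL.250020-25"))  # the DOI of this editorial
```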

As AI tools grow more sophisticated, their ability to generate paraphrased content that traditional tools struggle to identify will continue to expand, inevitably raising the risk of plagiarism while lowering our ability to detect it. And as more authors resort to expansive use of AI-generated content, the literature’s quality, richness, and originality will diminish.
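
To see why paraphrased output evades traditional detection, consider a toy comparison (again my own sketch): conventional plagiarism checkers lean heavily on overlapping word sequences, and paraphrasing destroys exactly that signal.

```python
# Toy illustration: n-gram overlap catches near-copies but not paraphrases.
def trigrams(text: str) -> set[tuple[str, ...]]:
    words = text.lower().split()
    return {tuple(words[i:i + 3]) for i in range(len(words) - 2)}

def jaccard(a: set, b: set) -> float:
    return len(a & b) / len(a | b) if a | b else 0.0

source     = "heavy reliance on generative models diminishes the originality of the literature"
near_copy  = "heavy reliance on generative models diminishes the originality of published work"
paraphrase = "leaning hard on AI tools erodes how original scholarly writing is"

print(jaccard(trigrams(source), trigrams(near_copy)))   # ~0.64: likely flagged
print(jaccard(trigrams(source), trigrams(paraphrase)))  # 0.0: slips past n-gram matching
```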

The more concerning consequence of heavy reliance on generative AI models is their lack of the expertise and critical thinking necessary to examine complex data and results and to contextualize them. In specialized and professional milieux, AI falls short of experts’ opinions and input because its comprehension of particular topics is insufficiently nuanced; its lack of sophisticated expertise and contextual understanding of complex data often leads to misinterpretations, misleading writers who blindly rely on the misinformation without questioning or vetting it. Furthermore, unlike the human mind, AI lacks creativity. Its generative potential is limited to the datasets it pulls from and its ability to retrieve and repackage information in different forms. AI cannot generate new ideas and innovate, abilities essential to any field’s advancement.

AI Authorship Criteria and Usage Disclosure

The academic community continues to deliberate on AI’s role in authorship. Some argue that AI’s contributions are akin to those of human coauthors. In December 2022, an Elsevier journal (Nurse Education in Practice) and PLOS Digital Health allowed human authors to credit ChatGPT as a coauthor in their ahead-of-print publications.7,8 After the articles’ peer-reviewed publication, however, ChatGPT was removed from the author lists.9,10 The scientific community pushed back strongly, citing concerns about accountability and contributorship,11 consistent with the recommendations of the International Committee of Medical Journal Editors (ICMJE),12 which stipulate that authors must make substantial contributions to the work and be accountable for its content.

Because AI lacks consciousness and cannot assume responsibility, it does not meet authorship criteria. Consequently, journals like Nature and JAMA Network13 have policies that preclude listing nonhuman AI tools as authors. Furthermore, journals like JAMA and Science require transparent reporting of AI usage in manuscript preparation, including the specific systems and prompts employed in writing.14–16

The Journal’s Role in Managing AI’s Impact

The Journal must balance leveraging AI’s potential with preserving the integrity and rigor foundational to scholarly work. It is imperative to carefully develop editorial strategies and to proactively create clear ethics guidelines that reflect the current realities of technological integration in academic work, ensuring that AI use is ethical, transparent, and aligned with the highest standards of scholarly publishing.

Updating Editorial Policies

Given the growing reliance on AI tools, the editorial policy should require authors to disclose all AI technologies they use. A workgroup of editorial board members could be formed to discuss this matter and propose a Journal-specific disclosure form that holds authors accountable for specifying AI’s role in the research and manuscript writing process. This may include detailing whether they used AI tools to enhance clarity, assist in drafting sections of the manuscript, generate statistical models, or identify patterns in large datasets. In this form, authors could also be asked to assume responsibility for all published information obtained from programs such as ChatGPT; one hypothetical sketch of such a form follows.
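
As a concrete, entirely hypothetical illustration of what such a form could capture, the sketch below models the disclosure fields as a small data structure. The field names are my assumptions, not a form The Journal has adopted.

```python
from dataclasses import dataclass, field

@dataclass
class AIUseDisclosure:
    # All field names are illustrative assumptions, not an adopted JAAPL form.
    tool_name: str                        # e.g., "ChatGPT"
    version_or_date: str                  # model version or period of use
    purpose: str                          # e.g., "clarity edits", "drafting an outline"
    prompts_summary: str                  # brief description of the prompts employed
    output_vetted_by_author: bool         # author verified the output for accuracy
    author_accepts_responsibility: bool   # author is accountable for the content

@dataclass
class ManuscriptDisclosureForm:
    manuscript_title: str
    disclosures: list[AIUseDisclosure] = field(default_factory=list)

    def complete(self) -> bool:
        # Complete only if every disclosed use was vetted and owned by the author.
        return all(d.output_vetted_by_author and d.author_accepts_responsibility
                   for d in self.disclosures)

form = ManuscriptDisclosureForm(
    manuscript_title="Example manuscript",
    disclosures=[AIUseDisclosure(
        tool_name="ChatGPT", version_or_date="2024-11", purpose="clarity edits",
        prompts_summary="asked for simpler phrasing of technical terminology",
        output_vetted_by_author=True, author_accepts_responsibility=True)])
print(form.complete())  # True
```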

By requiring such disclosures, The Journal achieves multiple purposes: it builds trust with readers, reviewers, and contributors by promoting transparency, and it maintains an accurate and responsible record of how manuscripts are produced. Additionally, the editorial team and peer reviewers could better examine AI-generated content’s potential strengths and limitations and apply appropriate scrutiny to ensure a solid review process. This approach could help mitigate concerns about AI’s reliability and accountability, ensuring the editorial process remains rigorous and transparent.1

Moreover, The Journal should actively engage in ongoing training for authors and reviewers to ensure they are well versed in AI’s ethics and technical implications for academic writing. Authors would benefit from guidance on proper AI use in their research and writing processes without compromising their work’s intellectual integrity. For instance, The Journal could offer guidelines on how authors and reviewers might review AI-generated content for factual accuracy, relevance, and proper attribution. Such training would empower authors to harness AI to enhance their scholarly output rather than substitute for it. It would also help reviewers recognize the potential pitfalls of AI-generated content, including biases, inaccurate information, and hallucinations. In addition, they would learn to evaluate the originality of AI-sourced content and the accuracy of AI-suggested citations to ensure that accepted manuscripts meet high academic standards.2

Conclusion

As we enter the uncharted territory of an artificial and supposedly intelligent world, it behooves us to acknowledge the importance of genuineness and integrity in our work. Intelligent work is conscious, aware, and responsible. Forensic psychiatry is an ever-evolving field, constantly updating based on medical advances and legal developments. The Journal has always hosted remarkable scholarship spanning cultural, sociological, and medicolegal topics. It is not beyond The Journal to keep up with technological advancements without fearing the consequences of that reality or skimping on the quality of its publications. It is necessary, however, to develop clear and educated guidance and policies to properly manage AI’s use in proposed submissions. The biggest concern otherwise would be that the world has just created a Frankenstein’s monster that will soon be out of control.

Disclosure

In preparing this manuscript, I resorted to several AI tools. I used Google Scholar to search for articles and to vet articles later suggested by GPT. Grammarly and Microsoft Editor helped refine my sentences and correct stylistic and grammatical errors. GPT helped explain AI terminology in a simplified way, provided examples of AI tools, created the table after I fed it the information, and refined specific sentences in the text. Through ordinary Google searches (and Google’s integrated AI tool), I conducted independent research to find and confirm, through multiple sources, the information included in this text, including the references.

As I indicated in this editorial, GPT hallucinated several references. It sometimes provided inaccurate answers and replied with overly generic (and stiff) text when I asked it to elaborate on some ideas. As a result, vetting, correcting, and refining all the information included in this editorial was as time-consuming for me as writing it would have been in pre-AI days.

Nevertheless, when I ran the text through two separate plagiarism checks, I received two different results. Scribbr indicated that 100 percent of the text was human-generated, with zero percent AI-generated or AI-refined, but its free plagiarism checker indicated a “high risk” of plagiarism, which I pledge I did not engage in. Grammarly indicated that eight percent of the text resembled matched sources and 20 percent had patterns resembling AI text (numbers that changed to one percent and 23 percent, respectively, when I deleted all references from the checked text).

Footnotes

  • Disclosures of financial or other potential conflicts of interest: None.

  • Editor’s Note: Readers are invited to review the addition of AI-specific guidance in the Instructions for Authors and the Statement of Authorship Responsibility at jaapl.org and in the print version of The Journal.

  • © 2025 American Academy of Psychiatry and the Law

References

1. Grabb DJ, Angelotta C. Emerging forensic implications of the artificial intelligence revolution. J Am Acad Psychiatry Law. 2023 Dec; 51(4):475–9
2. McCarthy J, Minsky ML, Rochester N, Shannon CE. A proposal for the Dartmouth summer research project on artificial intelligence, August 31, 1955. AI Magazine. 2006; 27(4):12
3. Vaswani A, Shazeer N, Parmar N, et al. Attention is all you need [Internet]; 2017. Available from: https://arxiv.org/abs/1706.03762. Accessed November 26, 2024
4. Nicoletti L, Bass D. Humans are biased. Generative AI is even worse [Internet]; 2023. Available from: https://www.bloomberg.com/graphics/2023-generative-ai-bias. Accessed November 14, 2024
5. Germain T. ‘They’re all so dirty and smelly’: Study unlocks ChatGPT’s inner racist [Internet]; 2023. Available from: https://gizmodo.com/chatgpt-ai-openai-study-frees-chat-gpt-inner-racist-1850333646. Accessed November 14, 2024
6. Mata v. Avianca, Inc., 678 F. Supp. 3d 443 (S.D.N.Y. 2023)
7. Kung TH, Cheatham M, ChatGPT, et al. Performance of ChatGPT on USMLE: Potential for AI-assisted medical education using large language models [Internet]; 2022. Available from: https://www.medrxiv.org/content/10.1101/2022.12.19.22283643v2.article-info. Accessed November 15, 2024
8. Somasundaram R. Elsevier breaks new ground: ChatGPT listed as a journal author [Internet]; 2023. Available from: https://www.ilovephd.com/elsevier-breaks-new-ground-chatgpt-listed-as-a-journal-author/. Accessed November 15, 2024
9. Kung TH, Cheatham M, Medenilla A, et al. Performance of ChatGPT on USMLE: Potential for AI-assisted medical education using large language models. PLOS Digit Health. 2023; 2(2):e0000198
10. O’Connor S. Open artificial intelligence platforms in nursing education: Tools for academic progress or abuse? Nurse Educ Pract. 2023 Jan; 67:103572
11. Siegerink B, Pet LA, Rosendaal FR, Schoones JW. ChatGPT as an author of academic papers is wrong and highlights the concepts of accountability and contributorship. Nurse Educ Pract. 2023 Mar; 68:103599
12. Yadav S. Enhancing research integrity and publication ethics: An analysis of the latest International Committee of Medical Journal Editors recommendations. Cureus. 2024; 16(3):e56193
13. Flanagin A, Bibbins-Domingo K, Berkwits M, Christiansen SL. Nonhuman “authors” and implications for the integrity of scientific publication and medical knowledge. JAMA. 2023; 329(8):637–9
14. Flanagin A, Kendall-Taylor J, Bibbins-Domingo K. Guidance for authors, peer reviewers, and editors on use of AI, language models, and chatbots. JAMA. 2023; 330(8):702–3
15. Kwon D. AI is complicating plagiarism. How should scientists respond? Nature [Internet]; 2024 Jul 30. Available from: https://doi.org/10.1038/d41586-024-02371-z. Accessed November 15, 2024
16. International Committee of Medical Journal Editors. Updated ICMJE recommendations [Internet]; 2023. Available from: https://www.icmje.org/news-and-editorials/updated_recommendations_may2023.html. Accessed February 1, 2025