The past few decades propelled artificial intelligence (AI) from an abstract concept to a force capable of reshaping industries, including medicine and academia. About a year ago, Grabb and Angelotta wrote an editorial in The Journal about the “Emerging Forensic Implications of the Artificial Intelligence Revolution.”1 They addressed its use in clinical and forensic practice, biases and ethics considerations, and the absence of a legal framework to define the use of AI in medicine. An equally important and highly utilized aspect of AI, and one that warrants particular attention, is its role in academic writing. Relying on generative AI in manuscript preparation and academic writing has sparked debate in the scientific community regarding its ethics implications and its impact on scholarly integrity.
These questions are of particular significance for The Journal. At the intersection of forensic psychiatry and legal scholarship, The Journal is uniquely positioned both to benefit from and to critically evaluate AI’s applications: leveraging AI for innovation and efficiency while safeguarding the quality, rigor, and ethics standards of its publications and guiding authors, reviewers, and readers on the use of AI in The Journal’s pages. This editorial explores how The Journal can responsibly and thoughtfully navigate the complexities of AI adoption.
The Evolution of AI in Academia
AI’s Historical Development
The term was first coined by John McCarthy in 1956 at the Dartmouth Summer Research Project on Artificial Intelligence,2 and the field took shape in the mid-1900s. Early systems relied on rule-based logic, following preset instructions to simulate reasoning (e.g., in chess and checkers games). The following decades were transformative for the field. Machine learning (ML), which rose to prominence in the 1980s, allowed systems to make decisions and predict outcomes based on previously learned data without explicit programming. Deep learning, a subset of ML, and neural network models that mimic the human brain’s structure further advanced AI by allowing complex data analysis, such as image recognition (in the field of computer vision) and natural language processing (NLP). Thanks to the wide availability of data and of computational power, AI scaled readily and its performance improved dramatically. Machines became capable of understanding context over long text spans, performing tasks for which they were not explicitly trained, and generating human-like text, as seen with tools like OpenAI’s GPT series.3 Furthermore, ML, NLP, and large language models (LLMs) work in feedback loops, in which LLMs push the boundaries of ML and NLP research, and new ML techniques improve LLM efficiency and effectiveness.
AI’s Expanding Role in Academic Writing
For many scholars, adopting AI has significantly improved the efficiency of their academic writing, and they have used it in more ways than expected. Technological advances, AI among them, have facilitated decades of scholarly work, from literature searches and referencing and citation tools to statistical and peer review tools. Still, the author’s contributions and control have remained the cornerstone of the work.
With further development, AI tools can now suggest topics; draft outlines or research proposals; write texts with references; create charts, graphs, and infographics; clarify complex topics by providing explanations and analogies; summarize articles; provide translations; make edits; identify trends and gaps in research to guide researchers toward unexplored areas; and so forth. With the introduction of these AI features and benefits, scholarly work could be within the reach of a broader group of people. Some also argue that authors could shift their focus to higher-level analysis while saving time and potentially expanding the scope of their research.3
Table 1 offers a few examples of tools and programs that have been part of scholarly work for years and others that are more recent.
Existing AI Tools and Their Functions
Ethics Considerations in the Use of AI
Accuracy of AI-Generated Information
Despite the advantages listed above, AI-generated content still poses challenges, particularly the risk of biased or skewed information and the concern for misleading outcomes. Because ML and NLP models depend on the data on which they are trained, generative tools will inadvertently propagate errors in their outputs (texts, responses, references) if the training data contain biases, inaccuracies, outdated or irrelevant information, or misinformation. The answers given by these tools may also reflect sampling or algorithmic biases. Poorly weighted focus on specific data during the models’ training could amplify certain ideas, sources, and authors while silencing others. It could also misrepresent demographic, historical, sociocultural, and other factors or groups in ways that do not reflect reality and could be harmful on many levels.4,5 Imagine the consequences of such skewed information in our field, where we interact, clinically or forensically, with some of the most vulnerable populations, entangled between the mental health and legal worlds.
Additionally, AI models, like GPT, generate text that may be syntactically correct but factually inaccurate, a phenomenon called “hallucination.” A famous example is the New York lawyer who blindly relied on GPT in building the argument for his client’s injury claim. GPT provided nonexistent citations and quotes and falsely asserted that they were available in major legal databases.6
With the increasing sophistication of AI tools, their ability to generate paraphrased content that traditional tools struggle to identify will continue to expand. This will inevitably raise the risk of plagiarism while lowering our ability to detect it. If more authors resort to expansive use of AI-generated content, the literature’s quality, richness, and originality will diminish.
A more concerning consequence of heavy reliance on generative AI models is their lack of the expertise and critical thinking necessary to examine complex data and results and to contextualize them. In specialized and professional milieux, AI falls short of experts’ opinions and input because its comprehension of particular topics is insufficiently nuanced. AI’s lack of sophisticated expertise and contextual understanding of complex data often leads to misinterpretations, misleading writers who blindly rely on the misinformation without questioning or vetting it. Furthermore, unlike the human mind, AI lacks creativity. Its generative potential is limited to the datasets it pulls from and its ability to retrieve and repackage information in different forms. AI cannot generate new ideas or innovate, capacities that are essential to any field’s advancement.
AI Authorship Criteria and Usage Disclosure
The academic community continues to deliberate on AI’s role in authorship. Some argue that AI’s contributions are akin to those of human coauthors. In December 2022, an Elsevier journal (Nurse Education in Practice) and PLOS Digital Health allowed human authors to credit ChatGPT as a coauthor in their ahead-of-print publications.7,8 After the articles’ peer-reviewed publication, however, ChatGPT was removed from the author lists.9,10 There was significant backlash from the scientific community, highlighting concerns about accountability and contributorship,11 consistent with the International Committee of Medical Journal Editors (ICMJE) criteria,12 which stipulate that authors must make substantial contributions to the work and be accountable for its content.
Because AI lacks consciousness and cannot assume responsibility, it does not meet authorship criteria. Consequently, journals like Nature and JAMA Network13 have policies that preclude listing nonhuman AI tools as authors. Furthermore, journals like JAMA and Science require transparent reporting of AI usage in manuscript preparation, including the specific systems and prompts employed in writing.14–16
The Journal’s Role in Managing AI’s Impact
The Journal must balance leveraging AI’s potential and ensuring the integrity and rigor foundational to scholarly work. It is imperative to carefully develop editorial strategies and proactively create clear ethics guidelines that incorporate the current realities of technological integration in academic work to ensure the use of AI is ethical, transparent, and aligned with the highest standards of scholarly publishing.
Updating Editorial Policies
Given the growing reliance on AI tools, the editorial policy should require authors to disclose all AI technologies they use. A workgroup of editorial board members could be formed to discuss this matter and propose a Journal-specific disclosure form that holds authors accountable for specifying AI’s role in the research and manuscript writing process. This may include detailing whether they used AI tools to enhance clarity, assist in drafting sections of the manuscript, generate statistical models, or identify patterns in large datasets. In this form, authors could also be asked to assume responsibility for all published information obtained from programs such as ChatGPT.
By requiring such disclosures, The Journal would achieve multiple purposes: it would build trust with readers, reviewers, and contributors by promoting transparency, and it would maintain an accurate and responsible record of how manuscripts are produced. Additionally, the editorial team and peer reviewers could better examine the potential strengths and limitations of AI-generated content and apply appropriate scrutiny to ensure a solid review process. This approach could help mitigate concerns about AI’s reliability and accountability, ensuring the editorial process remains rigorous and transparent.1
Moreover, The Journal should actively engage in ongoing training for authors and reviewers to ensure they are well-versed in AI’s ethics and technical implications in academic writing. Authors would benefit from guidance on proper AI use in their research and writing processes without compromising their work or intellectual integrity. For instance, The Journal could offer guidelines on how authors and reviewers can review AI-generated content for factual accuracy, relevance, and proper attribution. Such training would empower authors to harness AI to enhance their scholarly output rather than substitute for it. It would also help reviewers recognize the potential pitfalls of AI-generated content, including biases, inaccurate information, and hallucinations. In addition, they would learn to evaluate the originality of AI-sourced content and the accuracy of AI-suggested citations to ensure that accepted manuscripts meet high academic standards.2
Conclusion
As we enter the uncharted territory of an artificial and supposedly intelligent world, it behooves us to acknowledge the importance of genuineness and integrity in our work. Intelligent work is conscious, aware, and responsible. Forensic psychiatry is an ever-evolving field, constantly updating based on medical advances and legal developments. The Journal has always hosted remarkable scholarship spanning cultural, sociological, and medicolegal topics. It is not beyond The Journal to keep up with technological advancements without fearing the consequences of that reality or skimping on the quality of its publications. It is necessary, however, to develop clear and educated guidance and policies to properly manage AI’s use in proposed submissions. Otherwise, the biggest concern would be that the world has just created a Frankenstein’s monster that will soon be out of control.
Disclosure
In preparing this manuscript, I resorted to several AI tools: I used Google Scholar to search for articles and to vet articles later suggested by GPT. Grammarly and Microsoft Editor helped refine my sentences and correct stylistic and grammatical errors. GPT helped explain AI terminology in a simplified way, provided examples of AI tools, created the table after I fed it the information, and refined specific sentences in the text. Using an ordinary Google search (and its integrated AI tool), I conducted independent research to find the information included in this text and to confirm it through multiple sources, including references.
As I indicated in this editorial, GPT hallucinated several references. It sometimes provided inaccurate answers and replied with overly generic (and stiff) text when I asked it to elaborate on some ideas. As a result, vetting, correcting, and refining all information included in this editorial was as time-consuming for me as writing it would have been in pre-AI days.
Nevertheless, when I ran the text through two separate plagiarism checks, I received two different results. Scribbr indicated that 100 percent of the text was human-generated, with zero percent being AI-generated or AI-refined, but its free version indicated a “high risk” of plagiarism, in which I pledge I did not engage. Grammarly indicated that eight percent of the text resembled matched sources and 20 percent had patterns resembling AI text (numbers that changed to one percent and 23 percent, respectively, when I deleted all references from the checked text).
Footnotes
Disclosures of financial or other potential conflicts of interest: None.
Editor’s Note: Readers are invited to review the addition of AI-specific guidance in the Instructions for Authors and the Statement of Authorship Responsibility at jaapl.org and in the print version of The Journal.
© 2025 American Academy of Psychiatry and the Law