
Image description: Venn diagram depicting the types of AI from the broadest category to the narrowest: artificial intelligence, machine learning, deep learning, and generative AI, respectively.
Many academic publishers have policies on acceptable use of generative AI in manuscript writing. Although most are focused on the manuscript and not the research process itself, it's still a good idea to review the publisher's policy if you have a target journal in mind.
Sample publisher policies on generative AI for authors:
If your journal's publisher isn't listed, try a web search for "[publisher] author AI policy" or "[journal title] author AI policy" or ask your librarian.
If you do use AI, you'll also need to decide how to report that use. Consider the following:
Publisher/journal policies on AI use (see Publisher Rules and Guidelines for Using AI above) often describe how authors should disclose their use of AI. Make sure to follow any detailed instructions provided.
Most publisher guidance suggests:
Disclosing and describing any use(s) of AI, often in the Methods section or Acknowledgements section.
*Not* including generative AI as an author.
Citation styles such as AMA and APA have guidelines for citing AI: see Citing Generative AI and Copyright.
The RAISE (Responsible AI in Evidence Synthesis) initiative, launched in June 2025 and led by the Cochrane Methods Artificial Intelligence Group, brought together the Cochrane Collaboration, JBI, the Campbell Collaboration, and the Collaboration for Environmental Evidence to identify and promote best practices for using AI in ways that uphold the principles of research integrity in evidence synthesis. The guidance addresses activities carried out by eight roles that participate collaboratively in the evidence synthesis ecosystem:
As of June 3, 2025, the guidance is published in three parts: