
Systematic Reviews

A guide that helps users understand what a systematic review entails and provides resources for conducting one.

Artificial Intelligence (AI) in Evidence Synthesis

Although the efficacy and validity of many Artificial Intelligence and Gen AI tools are still being studied and evaluated, some researchers are beginning to use these tools in their research. Different disciplines, organizations, publishers, and other stakeholders may be developing policies around the use of AI in research. Understanding the emerging guidance, best practices, and limitations on using AI in research is critical to being a good steward of research.

This page will further explore:

  • Defining AI
  • Roles for AI in Evidence Synthesis
  • Using VCU-licensed tools in the review process
  • Publisher policies on the use of AI
  • How to report use of AI
  • Guidance on using AI in evidence synthesis

Defining AI

Defining Types of Artificial Intelligence (AI)

For each type of AI below, a definition is provided along with an example of how that type of AI may be used in an evidence synthesis tool.

  • Artificial Intelligence (AI)
    • Definition: AI is technology that enables computers, machines, or algorithms to simulate intelligent human behavior, including learning, comprehension, problem solving, decision making, creativity, and autonomy.
    • Example: A duplicate detection tool in a systematic review program that doesn't learn from data, but rather executes the rules it has been programmed to follow to detect duplicate articles.
  • Machine Learning (ML)
    • Definition: ML is a subset of AI that learns from historical data, training an algorithm to build models that make predictions or decisions without being explicitly programmed.
    • Example: An abstract screening prioritization tool that learns from a subset of articles a researcher has manually marked "include" or "exclude" and suggests which of the remaining articles to include or exclude in the review.
  • Deep Learning
    • Definition: Deep learning is a subset of ML that uses multilayered neural networks (deep learning networks) to simulate the complex decision-making power of the human brain, teaching itself by performing a large number of iterative calculations on extremely large datasets.
    • Example: A risk of bias assessment tool that reads the full text of an article and assesses bias across different domains. The model may try to understand the semantic meaning of sentences, allowing it to interpret a study's methodology without relying on simple keywords.
  • Generative AI (Gen AI)
    • Definition: Gen AI is a subset of deep learning models that can create original content such as long-form text, high-quality images, realistic video, or audio, responding to a submitted prompt by drawing on what the model learned from very large training datasets.
    • Example: A tool that can produce original content in response to a prompt, such as generating a hypothesis for a research project after a user prompts it to consider several research questions.

(Note: Ideas for examples of each type of AI that could be used in an evidence synthesis project were generated by Gemini on 9/24/25)
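The distinction between rule-based AI and machine learning can be made concrete with a small sketch. The following is a hypothetical, simplified rule-based duplicate detector, in the spirit of the AI example above (it is not the algorithm any particular tool uses): it applies fixed, hand-written normalization rules rather than learning from data.

```python
# Illustrative rule-based duplicate detection: the program executes
# fixed rules it was given; nothing is learned from the data.
import re

def normalize(title):
    """Lowercase a title, strip punctuation, collapse whitespace."""
    title = re.sub(r"[^a-z0-9 ]", "", title.lower())
    return re.sub(r" +", " ", title).strip()

def find_duplicates(records):
    """Flag records whose normalized (title, year) pair repeats.

    `records` is a list of dicts with 'title' and 'year' keys,
    a simplified stand-in for exported citation data.
    """
    seen = set()
    duplicates = []
    for record in records:
        key = (normalize(record["title"]), record["year"])
        if key in seen:
            duplicates.append(record)
        else:
            seen.add(key)
    return duplicates

records = [
    {"title": "Exercise and Depression: A Review", "year": 2020},
    {"title": "exercise and depression - a review", "year": 2020},
    {"title": "Sleep Quality in Adolescents", "year": 2021},
]
# The second record normalizes to the same key as the first,
# so it is flagged as a duplicate.
dups = find_duplicates(records)
```

A real deduplication tool would also compare authors, DOIs, and fuzzy title similarity; the point here is only that every decision follows from rules a programmer wrote in advance.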

Roles for AI in Evidence Synthesis

Artificial intelligence may streamline some of the most time-intensive steps of the evidence synthesis process:

  • Exploration - Using AI tools (along with other resources) to identify a) potential gaps in the literature and b) a research question.
  • Literature search - Analyzing relevant articles to identify keywords for your search query.
  • Study selection - Assisting with screening studies for inclusion in the review.
  • Data extraction & risk of bias assessment - Supporting data extraction from included articles.

Using VCU-licensed tools in the review process

The following AI tools licensed by VCU can be used to supplement some stages of the review process. Using VCU-licensed AI tools includes the assurance that your inputs will:

  • Not be available to other customers
  • Not be used to train or improve any third-party products or services (such as OpenAI models)
  • Not be used to train or improve Google AI models

Covidence

  • Study selection

    • Automatic removal of non-randomized controlled trial studies ("Cochrane Classifier") that can be turned on/off by the user.
    • Sorting items by "Most Relevant" during screening places studies deemed most likely to be included higher in the list of studies to be screened. The decision to mark a study as relevant is based on a machine learning model that analyzes past screening behavior.
  • Data extraction
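Relevance sorting of the kind described above can be illustrated with a toy model. This hypothetical sketch (a simplified stand-in, not Covidence's actual model) scores unscreened abstracts by how often their words appeared in abstracts the reviewer manually marked "include" versus "exclude", then ranks likely includes first:

```python
# Toy screening prioritizer: learns word counts from manually
# screened abstracts and ranks the remaining ones. Real tools
# use far richer machine learning models than this.
from collections import Counter

def train(labeled):
    """Count word frequencies in included vs excluded abstracts.

    `labeled` is a list of (abstract_text, label) pairs where
    label is "include" or "exclude".
    """
    counts = {"include": Counter(), "exclude": Counter()}
    for text, label in labeled:
        counts[label].update(text.lower().split())
    return counts

def score(counts, abstract):
    """Higher score = more similar to previously included abstracts."""
    return sum(counts["include"][w] - counts["exclude"][w]
               for w in abstract.lower().split())

def prioritize(counts, unscreened):
    """Sort unscreened abstracts so likely includes come first."""
    return sorted(unscreened, key=lambda a: score(counts, a), reverse=True)

labeled = [
    ("randomized trial of exercise for depression", "include"),
    ("exercise intervention randomized controlled trial", "include"),
    ("editorial opinion on clinic funding", "exclude"),
]
ranked = prioritize(train(labeled), [
    "opinion piece on funding models",
    "randomized exercise trial in older adults",
])
```

The model improves as the reviewer screens more articles, which is why these tools ask for a manually screened subset before their suggestions become useful.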

CoPilot / Gemini

  • Generative AI tools like CoPilot (Microsoft) and Gemini (Google) can be used in the exploratory phases of a project. The results of these exploratory activities should be validated against peer-reviewed literature or other trusted resources to minimize the risk of inaccurate or incorrect information (i.e., hallucinations). Potential applications for these tools include:
    • Identifying gaps in research and research questions 
    • Crafting and refining research questions
    • Suggesting highly relevant articles
    • Brainstorming keywords to help develop search queries

NotebookLM

  • NotebookLM is a tool that summarizes sources uploaded by the user, such as PDFs, URLs, or YouTube videos. It can be used to summarize themes, identify patterns, and analyze keyword frequency in articles related to your review topic.
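Keyword-frequency analysis of the kind described can also be done directly in a few lines of Python. This hypothetical sketch counts the most common non-trivial words across a set of abstracts the user supplies (the stopword list here is a tiny hand-picked sample, not an exhaustive one):

```python
# Simple keyword-frequency analysis across a set of texts,
# ignoring a small hand-picked list of stopwords.
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "of", "in", "and", "for", "on", "to", "with"}

def keyword_frequencies(texts, top_n=5):
    """Return the top_n most frequent non-stopword terms."""
    counter = Counter()
    for text in texts:
        words = re.findall(r"[a-z]+", text.lower())
        counter.update(w for w in words if w not in STOPWORDS)
    return counter.most_common(top_n)

abstracts = [
    "Exercise improves sleep quality in adults.",
    "A trial of exercise and sleep outcomes.",
]
top = keyword_frequencies(abstracts, top_n=2)
```

Frequent terms surfaced this way can feed directly into the keyword-brainstorming step when developing search queries.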

Determine if the publisher you want to work with allows the use of AI

Many academic publishers have policies on acceptable use of generative AI in manuscript writing. Although most are focused on the manuscript and not the research process itself, it's still a good idea to review the publisher's policy if you have a target journal in mind.

Sample publisher policies on generative AI for authors:

If your journal's publisher isn't listed, try a web search for "[publisher] author AI policy" or "[journal title] author AI policy" or ask your librarian.

How to report use of AI

  • Publisher/journal policies on AI use (see "Determine if the publisher you want to work with allows the use of AI" box) often describe how authors should disclose their use of AI. Make sure to follow any detailed instructions provided.
  • Most publisher guidance suggests:
    • Disclosing and describing any use(s) of AI, often in the Methods section or Acknowledgements section.
    • *Not* including generative AI as an author.
  • Citation styles such as AMA and APA have guidelines for citing AI: see Citing Generative AI and Copyright

Guidance on using AI in evidence synthesis

Responsible AI in Evidence Synthesis (RAISE) recommendations and guidance - June 3, 2025

An initiative led by the Cochrane Methods Artificial Intelligence Group and developed jointly by the Cochrane Collaboration, JBI, the Campbell Collaboration, and the Collaboration for Environmental Evidence to identify and promote best practices for using AI in ways that support the principles of research integrity in evidence synthesis. The guidance addresses activities conducted by eight roles that participate collaboratively in the evidence synthesis ecosystem:

  • Evidence synthesists
  • Methodologists
  • AI tool development teams
  • Organizations that produce evidence synthesis
  • Publishers
  • Funders
  • Users
  • Trainers of evidence synthesis methods

As of June 3, 2025, the guidance is published in three parts:

  • Part 1 - Recommendations for practice for each of the eight roles
  • Part 2 - Guidance on building and evaluating AI synthesis tools
  • Part 3 - Guidance on selecting and using AI evidence synthesis tools