Although the efficacy and validity of many Artificial Intelligence and Gen AI tools are still being studied and evaluated, some researchers are beginning to use these tools in their research. Different disciplines, organizations, publishers, and other stakeholders may be developing policies around the use of AI in research. Understanding the emerging guidance, best practices, and limitations on using AI in research is critical to being a good steward of research.
This page will further explore:
| Type of AI | Definition | Example |
|---|---|---|
| Artificial Intelligence (AI) | AI is technology that enables computers, machines, or algorithms to simulate intelligent human behavior, including learning, comprehension, problem solving, decision making, creativity, and autonomy. | A duplicate detection tool in a systematic review program that doesn't learn from data, but rather executes the rules it has been programmed to follow to detect duplicate articles. |
| Machine Learning (ML) | ML is a subset of AI that learns from historical data, creating models by training an algorithm to make predictions or decisions based on that data without being explicitly programmed. | An abstract screening prioritization tool that learns from a researcher's manual screening of a subset of articles marked "include" or "exclude" and then suggests which remaining articles to include or exclude in the review. |
| Deep Learning | Deep learning is a subset of ML that uses multilayered neural networks (deep learning networks) to simulate the complex decision-making power of the human brain, teaching itself by performing a large number of iterative calculations on an extremely large dataset. | A risk of bias assessment tool that reads the full text of an article and assesses bias across different domains. The model may try to understand the semantic meaning of sentences, allowing it to interpret a study's methodology without relying on simple keywords. |
| Generative AI (Gen AI) | Gen AI is a subset of deep learning models that can create original content such as long-form text, high-quality images, realistic video, or audio. It responds to a submitted prompt by drawing on the large body of data it was trained on to produce a detailed response. | A tool that can produce original content in response to a prompt, such as generating a hypothesis for a research project after a user prompts it to consider several research questions. |
(Note: Ideas for examples of each type of AI that could be used in an evidence synthesis project were generated by Gemini on 9/24/25)
The following AI tools licensed by VCU can be used to supplement some stages of the review process. Using VCU-licensed AI tools comes with the assurance that your inputs will:
Study selection
NotebookLM is a tool that summarizes sources uploaded by the user, such as PDFs, URLs, or YouTube videos. It can be used to summarize themes, identify patterns, and analyze keyword frequency in articles related to your review topic.
Many academic publishers have policies on acceptable use of generative AI in manuscript writing. Although most are focused on the manuscript and not the research process itself, it's still a good idea to review the publisher's policy if you have a target journal in mind.
Sample publisher policies on generative AI for authors:
If your journal's publisher isn't listed, try a web search for "[publisher] author AI policy" or "[journal title] author AI policy" or ask your librarian.
An initiative led by the Cochrane Methods Artificial Intelligence Group in partnership with the Cochrane Collaboration, JBI, the Campbell Collaboration, and the Collaboration for Environmental Evidence to identify and promote best practices for using AI in ways that support the principles of research integrity in evidence synthesis. The guidance addresses activities conducted by eight roles that participate collaboratively in the evidence synthesis ecosystem:
The guidance is currently published in three parts as of June 3, 2025: