Generative Artificial Intelligence

This guide provides information for VCU students, faculty and staff on the topic of generative artificial intelligence tools so that they may assess practical and ethical issues relevant to their work within an academic setting.

Cabell Library and Health Sciences Library

Cabell Library - Monroe Park Campus

901 Park Ave., Box 842033

Richmond, VA 23284-2033

Phone: (804) 828-1111

Health Sciences Library - MCV Campus

509 N. 12th St., Box 980582

Richmond, VA 23298-0582

Phone: (804) 828-0636

Academic Integrity

Guidance specific to VCU is available in an evolving document entitled "Generative Artificial Intelligence (Gen AI) and Teaching & Learning Tool" from the VCU Office of the Provost - Faculty Affairs.


As mentioned already, generative AI tools are not search engines. There are a few different ways in which these tools can be used to generate misinformation, either intentionally or by accident. 

  1. False results: GenAI tools can sometimes "hallucinate" when they do not know the answer to something or have been given an illogical prompt. For example, a tool might generate fake citations to nonexistent studies to support a claim. Generative AI is also susceptible to model collapse, a degenerative learning process that occurs as more AI-developed content is scraped into training data, until outputs no longer reflect reality. 
  2. Deep fakes: GenAI tools are sometimes used intentionally to create false images, videos, and voice recordings that mislead the audience into believing they are real. These "deep fakes" can be especially dangerous when they misrepresent political leaders or historical events.

What can you do to protect yourself and others against this kind of misinformation and disinformation? 

The bad news is that there is no one-button solution for determining whether a piece of text or media is fake. The good news is that some of our oldest methods of information verification still hold true today.  

  • Always verify the source of a citation by confirming with the actual source material. 
    • ChatGPT may produce fake citations and false summaries of articles or books. 
    • You can ask ChatGPT to produce an ISBN (International Standard Book Number) or a DOI (Digital Object Identifier) for the reference in question. 
    • You can then check VCU Libraries search or WorldCat to see if there's a match. (Note: Google Scholar can sometimes collect fake articles, and therefore is not recommended for this specific task.)
    • Do not use a reference to a material that you have not confirmed yourself, as ChatGPT can sometimes give false attribution or summaries of a source. 
  • Always verify the source of an image, video, or soundbite.  
    • For still images, Google reverse image search can be an extremely helpful starting point. It can reveal whether the image came from a personal social media page (unreliable) or from a more reliable publication (a credible news story, peer-reviewed article, etc.).
    • For audio, search for part of the soundbite in quotation marks ("quote from the soundbite") to see if a full transcript can be found. Then verify the quality of the sources of that transcript and listen to the quote in its full context. A soundbite should not be trusted until you can verify it within the context of its full original recording. 
    • For video, verify the source of the video. Is it from a credible news source, or was it posted on social media? Are there multiple angles of the event, or multiple streams from different news stations? Has the video been verified by experts? 
    • For more information about the unique challenges related to deep fakes, please check out the RadioLab segment "Breaking News."
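
As a quick first pass when verifying a citation's ISBN or DOI, you can check whether the identifier is even structurally plausible before searching for it: fabricated ISBNs often fail the ISBN-13 check-digit test, and fabricated DOIs often deviate from the standard `10.prefix/suffix` pattern. Here is a minimal sketch in Python. Note that this only checks form, not existence; a well-formed identifier can still be fake.

```python
import re

def is_valid_isbn13(isbn: str) -> bool:
    """Verify the ISBN-13 check digit (digit weights alternate 1, 3)."""
    digits = [int(c) for c in isbn if c.isdigit()]
    if len(digits) != 13:
        return False
    total = sum(d * (1 if i % 2 == 0 else 3) for i, d in enumerate(digits[:12]))
    return (10 - total % 10) % 10 == digits[12]

def looks_like_doi(doi: str) -> bool:
    """Loose structural check: a DOI starts with '10.', a numeric
    registrant prefix, then '/' and a suffix."""
    return re.fullmatch(r"10\.\d{4,9}/\S+", doi.strip()) is not None

# A real ISBN-13 passes; altering its check digit makes it fail.
print(is_valid_isbn13("978-0-306-40615-7"))  # True
print(is_valid_isbn13("978-0-306-40615-8"))  # False
print(looks_like_doi("10.1000/182"))         # True
```

Treat this strictly as a sanity filter: identifiers that pass should still be searched in VCU Libraries or WorldCat to confirm the work actually exists and matches the claimed citation.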


When it comes to copyright and artificial intelligence (AI), there are still many open and evolving questions. Because copyright is a matter of federal law, the most authoritative information on current law comes from federal government sources like the U.S. Copyright Office.

If you are interested in keeping up with copyright lawsuits related to AI, follow the lawsuits filed against the makers of the Stable Diffusion AI image generation platform. These cases are anticipated to have major consequences for whether AI image generation platforms represent copyright infringement or are protected by fair use.

As copyright lawsuits related to AI make their way through the federal courts, keep in mind that a court decision reflects the specific context of the lawsuit in question. That means that any single court decision is not necessarily generalizable to other contexts, and you should not use the decision as the sole guidance for what is or is not allowable.


Privacy concerns surrounding the use of different generative AI tools include: 

  1. Data Privacy and Ownership: Many generative AI models require large datasets to be trained effectively. The use of personal or sensitive data in these datasets can raise questions about data privacy and ownership. Individuals might be uncomfortable with their data being used without their explicit consent, and there's a risk that personal information could be unintentionally included in generated content.

  2. Re-identification: Generative AI models have the potential to generate content that could inadvertently lead to the identification of individuals, even if their personal data is not explicitly present. This could result in the re-identification of anonymized data, undermining the privacy measures that were originally put in place.

  3. User Profiling and Manipulation: Generated content can be used to manipulate or deceive users, leading to privacy concerns related to personal experiences, opinions, and emotions. This is particularly relevant in social media, where AI-generated content could be used to manipulate public opinion.