
Generative Artificial Intelligence

This guide provides information for VCU students, faculty and staff on the topic of generative artificial intelligence tools so that they may assess practical and ethical issues relevant to their work within an academic setting.

Academic Integrity

Guidance specific to VCU is available in an evolving document entitled "Generative Artificial Intelligence (Gen AI) and Teaching & Learning Tool" from the VCU Office of the Provost-Faculty Affairs. Faculty should also be aware that emerging research on so-called AI "detectors" suggests extreme caution when considering their use, due to inaccuracy and bias.

Misinformation

As mentioned already, generative AI is designed to make things up. These tools can produce misinformation in a few different ways, whether intentionally or by accident.

  1. False results: GenAI tools can sometimes "hallucinate" when they do not know the answer to something or have been given an illogical prompt. For example, a tool might generate fake citations to nonexistent studies in order to prove a point. Generative AI is also susceptible to model collapse, a degenerative learning process that occurs as more AI-generated content is scraped into training data, until outputs no longer reflect reality.
     
  2. Deep fakes: GenAI tools are sometimes used intentionally to create false images, videos, and voice recordings that mislead the audience into believing they are real materials. These types of "deep fakes" can be especially dangerous when they are used to misrepresent political leaders or historical events.

What can you do to protect yourself and others against this kind of misinformation and disinformation? 

The bad news is that there is no one-button solution for identifying whether a piece of text or media is fake. The good news is that some of our oldest methods of information verification still hold true today.

  • Always verify the source of a citation by confirming with the actual source material. 
    • ChatGPT may produce fake citations and false summaries of articles or books. 
    • You can ask ChatGPT to produce an ISBN (International Standard Book Number) or a DOI (Digital Object Identifier) for the reference in question. 
    • You can then check VCU Libraries search or WorldCat to see if there's a match. (Note: Google Scholar can sometimes index fake articles, and is therefore not recommended for this specific task.) A short script can also check a DOI directly; see the sketch after this list. 
    • Do not use a reference to material that you have not confirmed yourself, as ChatGPT can sometimes give false attributions or summaries of a source. 
       
  • Always verify the source of an image, video, or soundbite.  
    • For still images, Google reverse image search can be an extremely helpful starting point. It can reveal whether the image came from a personal social media page (unreliable) or from a more reliable publication (credible news story, peer-reviewed article, etc.).
    • For audio, search part of the soundbite using quotation marks "quote from the soundbite" to see if a full transcript can be found. Then verify the quality of the sources of that transcript and listen to the quote in its full context. This soundbite should not be trusted until you can verify it within the context of its full original recording. 
    • For video, verify the source. Is it from a credible news source, or was it only posted on social media? Are there multiple angles of the event, or multiple streams across different news stations? Has the video been verified by experts? 
    • For more information about the unique challenges related to deep fakes, please check out the RadioLab segment "Breaking News."
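
To make the DOI check above concrete, here is a minimal Python sketch, assuming network access, that looks a DOI up in the public Crossref REST API (a real, free service at api.crossref.org; the helper name check_doi is just illustrative). A missing Crossref record does not prove a citation is fabricated, since some legitimate DOIs are registered with other agencies such as DataCite, but it is a strong signal to verify the source by hand.

```python
# Minimal sketch: does a DOI resolve to a registered record in Crossref?
# Uses only the Python standard library; assumes network access.
import json
import urllib.error
import urllib.parse
import urllib.request

def check_doi(doi: str) -> None:
    """Look up a DOI via https://api.crossref.org/works/<doi> and report the title."""
    url = "https://api.crossref.org/works/" + urllib.parse.quote(doi)
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            record = json.load(resp)["message"]
        titles = record.get("title") or ["(no title on record)"]
        print(f"DOI found: {doi} -> {titles[0]}")
    except urllib.error.HTTPError as err:
        if err.code == 404:
            # Not in Crossref: a red flag, though some legitimate DOIs are
            # registered with other agencies (e.g., DataCite). Verify manually.
            print(f"DOI not found in Crossref: {doi}")
        else:
            print(f"Lookup failed (HTTP {err.code}); verify manually.")

# Example with a real DOI (LeCun, Bengio & Hinton, "Deep learning," Nature, 2015):
check_doi("10.1038/nature14539")
```

Even when a DOI resolves, compare the returned title and authors against the citation you were given; a real DOI attached to the wrong paper is another common failure mode.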

Copyright

When it comes to copyright and artificial intelligence (AI), there are still many open and evolving questions. Because copyright is a matter of federal law, the most authoritative information on current law comes from federal government sources like the Copyright Office.

If you are interested in keeping up with copyright lawsuits related to AI, follow the lawsuits filed against the makers of the Stable Diffusion AI image generation platform, which are anticipated to have major consequences for whether AI image generation platforms represent copyright infringement or are protected by fair use.

As copyright lawsuits related to AI make their way through the federal courts, keep in mind that a court decision reflects the specific context of the lawsuit in question. That means that any single court decision is not necessarily generalizable to other contexts, and you should not use the decision as the sole guidance for what is or is not allowable.

Privacy

Privacy concerns surrounding the use of different generative AI tools include: 

  1. Data Privacy and Ownership: Many generative AI models require large datasets to be trained effectively. The use of personal or sensitive data in these datasets can raise questions about data privacy and ownership. Individuals might be uncomfortable with their data being used without their explicit consent, and there's a risk that personal information could be unintentionally included in generated content.

  2. Re-identification: Generative AI models have the potential to generate content that could inadvertently lead to the identification of individuals, even if their personal data is not explicitly present. This could result in the re-identification of anonymized data, undermining the privacy measures that were originally put in place.

  3. User Profiling and Manipulation: Generated content can be used to manipulate or deceive users, leading to privacy concerns related to personal experiences, opinions, and emotions. This is particularly relevant in social media, where AI-generated content could be used to manipulate public opinion.

Scholarly Publishing

Editorial boards and publishers have taken different stances on how generative AI should be used for manuscript development and peer review. Implications for a scholar's approach to writing for publication are being formalized and should be reviewed in the author guidelines wherever you submit manuscripts. Expectations for peer review are also likely to be found in these author guidelines.

The main points of contention revolve around what should be human-first work or human-only work.

An example of a human-first approach is requiring that first drafts of manuscripts be entirely human-generated, without the assistance of genAI tools, while revisions may then be aided by them. Some view peer review as something to be conducted only by people, where peer reviewers rely on their own expertise and do not use genAI at all in the process. This prohibition also mitigates concerns about the privacy and control of unpublished work. These examples fall on one side of a spectrum of possibilities that you may encounter, because many also recognize the functional benefits of these tools and the desire to use assistive technology.

At this time there is a lack of consensus on these issues and on expectations from publishers, as illustrated in the articles that follow.