Guidance specific to VCU is available in an evolving document entitled "Generative Artificial Intelligence (Gen AI) and Teaching & Learning Tool" from the VCU Office of the Provost, Faculty Affairs.
As mentioned earlier, generative AI tools are not search engines. These tools can generate misinformation in several ways, whether intentionally or by accident.
What can you do to protect yourself and others against this kind of misinformation and disinformation?
The bad news is that there is no one-button solution for determining whether a piece of text or media is fake. The good news is that some of our oldest methods of information verification still hold true today.
When it comes to copyright and artificial intelligence (AI), there are still many open and evolving questions. Because copyright is a matter of federal law, the most authoritative information on current law comes from federal government sources like the Copyright Office.
If you are interested in keeping up with copyright lawsuits related to AI, follow the lawsuits filed against the makers of the Stable Diffusion AI image generation platform. These cases are anticipated to have major consequences for whether AI image generation constitutes copyright infringement or is protected by fair use.
As copyright lawsuits related to AI make their way through the federal courts, keep in mind that a court decision reflects the specific context of the lawsuit in question. Any single decision is therefore not necessarily generalizable to other contexts, and it should not be used as the sole guidance for what is or is not allowable.
Privacy concerns surrounding the use of different generative AI tools include:
Data Privacy and Ownership: Many generative AI models require large datasets to be trained effectively. The use of personal or sensitive data in these datasets can raise questions about data privacy and ownership. Individuals might be uncomfortable with their data being used without their explicit consent, and there's a risk that personal information could be unintentionally included in generated content.
Re-identification: Generative AI models have the potential to generate content that could inadvertently lead to the identification of individuals, even if their personal data is not explicitly present. This could result in the re-identification of anonymized data, undermining the privacy measures that were originally put in place.
User Profiling and Manipulation: Generated content can be used to manipulate or deceive users, raising privacy concerns about how personal experiences, opinions, and emotions are exploited. This is particularly relevant on social media, where AI-generated content could be used to sway public opinion.