Generative AI creates material that resembles its training data, so errors and biases in the training data can surface in the output, along with information that superficially resembles correct information but is not actually true ("hallucinations").
People sometimes use generative AI for misrepresentation: to claim authorship of work that is not their own, or to create truthful-seeming information that is not true.
Deep fakes: GenAI tools are sometimes used intentionally to create false images, videos, and voice recordings that mislead the audience into believing they are real. These "deep fakes" can be especially dangerous when they are used to misrepresent political leaders or historical events.
For more information about the unique challenges posed by deep fakes, check out the RadioLab segment "Breaking News."