Generative artificial intelligence (GenAI) can do many things, but it is not the best tool for every task.
GenAI tools, like ChatGPT, can be incredibly powerful in the research process for tasks like brainstorming, drafting abstracts, or soliciting feedback on your writing, much like a form of peer review.
However, these GenAI tools are not search engines, and they should not be trusted to generate accurate, truthful information.
Artificial Intelligence (AI) can be tricky to define because of its position in the popular imagination, where it is often understood in science fiction as a type of machine that thinks like a human.
This explains why Merriam-Webster provides two definitions for AI:
a branch of computer science dealing with the simulation of intelligent behavior in computers (the technical one)
the capability of a machine to imitate intelligent human behavior (the popular imagination one)
Within computer science there are different ways of understanding the term “AI”:
General vs. Generative AI:
With regard to the popular idea of “AI,” computer scientists use the term “Artificial general intelligence (AGI),” a form of AI that does not yet exist and “would be when an AI system can learn, understand, and solve any problem that a human can.” [Glossary].
Generative AI, on the other hand, is a system, typically built on a neural network approach to AI, that produces new content (large language models, text-to-image generators, etc.).
How exactly does generative AI learn? There are three main phases:
Pre-training: Pre-training is the initial learning phase, in which a vast amount of data is used to build the entire foundation of the generative AI model.
Fine-tuning: Fine-tuning is the phase in which humans intervene to help refine the results generated by the neural networks.
Embeddings: In short, an embedding is a numerical representation of a piece of content (a word, sentence, or image) as a list of numbers, or vector, positioned so that items with similar meanings end up close together. Embeddings are what allow the model to compare and relate pieces of content to one another.
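To make the embeddings idea concrete, here is a minimal sketch in Python. The three-dimensional vectors below are invented purely for illustration (real models use hundreds or thousands of dimensions learned during training), but the comparison method, cosine similarity, is the standard way related items are identified as "close" in embedding space:

```python
from math import sqrt

# Hypothetical toy embeddings: each word is mapped to a small vector.
# These 3-D values are made up for illustration only.
embeddings = {
    "cat": [0.9, 0.1, 0.0],
    "kitten": [0.85, 0.15, 0.05],
    "car": [0.1, 0.9, 0.2],
}

def cosine_similarity(a, b):
    """Score how closely two vectors point in the same direction (1.0 = identical)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = sqrt(sum(x * x for x in a))
    norm_b = sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Semantically related words sit closer together in the vector space,
# so "cat" vs. "kitten" scores higher than "cat" vs. "car".
print(cosine_similarity(embeddings["cat"], embeddings["kitten"]))
print(cosine_similarity(embeddings["cat"], embeddings["car"]))
```

Running this shows a near-1.0 similarity for the related pair and a much lower score for the unrelated one, which is the basic mechanism behind embedding-based search and retrieval.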