Artificial general intelligence (AGI) is a hypothetical form of artificial intelligence in which a machine can learn and think like a human. An AGI could learn to accomplish any intellectual task that human beings or animals can perform. For AGI to be possible, a system would need the ability to solve problems, adapt to its surroundings, and perform a broad range of tasks; some definitions also require self-awareness and consciousness. An AGI would reason its way through a problem rather than simply applying a fixed algorithm or coded process.
AGI is often called strong artificial intelligence (AI), in contrast with weak or narrow AI, which applies artificial intelligence to specific tasks or problems. Existing forms of AI have not reached the level of AGI, but in theory an AGI could perform a far wider array of tasks than weak AI and carry out creative work that previously only humans could.
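To make the contrast concrete, the sketch below shows what a typical narrow-AI system looks like in practice. It is a minimal, illustrative example (assuming the widely used scikit-learn library is installed): a classifier trained for exactly one task, digit recognition, with no ability to transfer that competence to anything else.

```python
# A minimal sketch of narrow AI, assuming scikit-learn is installed.
# The model below does one thing only: classify 8x8 images of handwritten
# digits. However well it performs, it cannot translate text, plan a trip,
# or learn a new task on its own, which is the hallmark limitation of
# narrow AI.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression

digits = load_digits()
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

# Train a task-specific model: its entire "intelligence" is this single
# mapping from pixel values to digit labels.
model = LogisticRegression(max_iter=2000)
model.fit(X_train, y_train)

print(f"Digit-classification accuracy: {model.score(X_test, y_test):.2f}")
```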
The term "artificial general intelligence" was used as early as 1997, by Mark Gubrud in a discussion of the implications of fully automated military production and operations. The term was re-introduced and popularized by Shane Legg and Ben Goertzel around 2002. AGI research activity in 2006 was described as "producing publications and preliminary results".
AGI is difficult to define precisely, but the term evokes the humanlike or superintelligent AI familiar from science fiction. Despite the similarity of the phrases, generative AI and artificial general intelligence mean very different things. Generative AI refers to deep-learning models that can produce high-quality text, images, and other content based on the data they were trained on; the ability of a system to generate content, however, does not mean that its intelligence is general.
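As an illustration of that distinction, the hedged sketch below assumes the Hugging Face transformers library and the small GPT-2 model are available: the model fluently continues text drawn from its training distribution, yet that fluency alone is not evidence of general intelligence.

```python
# A minimal sketch of generative AI, assuming the Hugging Face
# `transformers` package (and the GPT-2 model weights) are available.
from transformers import pipeline

# GPT-2 is a small deep-learning model trained to predict the next token;
# everything it "knows" comes from the data it was trained on.
generator = pipeline("text-generation", model="gpt2")

result = generator(
    "Artificial general intelligence is",
    max_new_tokens=30,
    num_return_sequences=1,
)
print(result[0]["generated_text"])

# The output may read convincingly, but the system is still narrow: it
# generates text and nothing else, which does not make its intelligence
# general.
```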