Categories of AI: Narrow, General & Super AI
There are three boxes people sort AI into. Understanding the difference matters because it changes how you think about what AI can actually do right now versus what's theoretical - and why nearly every claim about AGI timelines should be read with scepticism.
Three categories. That's all there is. Narrow AI, General AI, and Super AI. Every discussion you'll read about AI timelines, AI risk, and AI capability maps onto one of these. Getting them straight changes how you read the news.
Narrow AI: The Only Kind That Exists
Narrow AI, sometimes called ANI (Artificial Narrow Intelligence), is every AI system in existence today. ChatGPT. Image recognition. Spam filters. Medical diagnostic tools. Self-driving cars. All of it.
The defining characteristic is that it solves one specific problem. ChatGPT generates text with impressive fluency. It can't drive. It can't actually see. It doesn't want anything. It doesn't learn from your conversation - each session starts from scratch with the same training it always had.
The word "narrow" sounds like a criticism. It isn't. Narrow AI systems are extraordinarily capable within their domain, often better than any human. The point is that capability doesn't transfer outside that domain.
Real Examples of Just How Narrow Narrow AI Is
This becomes clearer with specifics.
Spam filters: Trained to spot patterns in spam emails. Excellent at that task. They can't tell that your grandmother's health update is more important than a marketing newsletter, even if the newsletter matches more of the patterns they were trained on. Context is invisible to them.
Go-playing AI: DeepMind's AlphaGo beat Lee Sedol, one of the world's best Go players. It can't play chess. It can't play poker. It does nothing except Go. That was by design.
Face recognition: Works in controlled settings, degrades in the real world with different lighting and angles. Fails systematically on faces from demographics underrepresented in its training data. Can't recognise someone it's never seen before.
Content recommendation: Netflix, YouTube, and Spotify are good at "people who watched this also watched that." They're bad at understanding whether you actually want to watch something or just need background noise.
Each of these systems is narrow in a way that's easy to miss until it causes a problem. If you're building with AI or evaluating AI claims, the first question to ask is: what exactly was this trained on, and how far does the task stray from that training?
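The spam-filter point can be sketched in a few lines. Everything below is invented for illustration - real filters learn thousands of weighted features rather than four hand-picked words - but the blindness to context is the same: the system sums pattern scores and nothing else.

```python
# Toy bag-of-words spam scorer. The weights are hypothetical; a real
# filter would learn them from labelled spam/not-spam examples.
SPAM_WEIGHTS = {"free": 2.0, "offer": 1.5, "unsubscribe": 1.0, "deal": 1.2}

def spam_score(text: str) -> float:
    """Sum the weights of known spammy words; unknown words score zero."""
    return sum(SPAM_WEIGHTS.get(word, 0.0) for word in text.lower().split())

newsletter = "free offer inside unsubscribe any time for this deal"
grandma = "hi love the doctor says the test results look fine"

# The marketing newsletter matches more trained patterns, so it scores
# higher - the scorer has no notion of which message matters to you.
print(spam_score(newsletter), spam_score(grandma))
```

The newsletter scores 5.7 and the health update scores 0.0: the filter ranks by pattern match, and importance to the recipient never enters the calculation.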
See Lesson 1 for more on the basics of what AI actually is and how it differs from general intelligence.
General AI: Theoretical and Nowhere Close
General AI, or AGI (Artificial General Intelligence), is the thing that keeps researchers and venture capitalists up at night. It would be a system that understands any task a human can understand, and can learn and apply knowledge across domains the way humans do.
Ask it to write poetry, fix your car, solve a maths problem, or code an app - it would handle all of these differently but competently. It would transfer learning from one context to another without retraining.
We don't have this. Nobody has this. The confusion arises because ChatGPT looks general - you can ask it about almost anything and get a coherent response. But it's not general. It was trained on an enormous corpus of human-written text, so it can generate text about almost anything. Generating plausible text about a topic is very different from understanding it or reasoning about it.
The hard problems for AGI aren't solved by more data or bigger models:
Transfer learning: Humans learn something and apply it to new contexts automatically. A child who learns what a dog is immediately understands what a wolf is without being taught separately. AI doesn't do this. You have to retrain it.
Causal reasoning: Humans understand that X causes Y. AI finds correlations. A system that correlates "wet pavement" with "rain" can't distinguish between rain causing wet pavement and a burst pipe doing the same. Correlation and causation look identical to it.
Knowing what you don't know: When you don't know something, you know you don't know it. AI systems don't have this. They hallucinate confidently, stating false things as fact because there's no internal uncertainty signal.
Embodied understanding: We understand the world through having bodies. We know what "heavy" means because we've lifted things. AI pattern-matches descriptions of physical experience without inhabiting the world those descriptions came from.
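The causal-reasoning point can be made concrete with a small simulation. This is a pure-Python sketch with invented probabilities: two worlds with different causal stories generate statistically identical (rain, wet pavement) data, so any learner that only measures correlation cannot tell them apart.

```python
import random

def correlation(xs, ys):
    """Pearson correlation between two equal-length sequences of 0/1 values."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / (vx * vy) ** 0.5

rng = random.Random(42)
N = 10_000

# World A: rain directly causes wet pavement (plus a little unrelated wetness).
rain_a = [int(rng.random() < 0.3) for _ in range(N)]
wet_a = [int(r or rng.random() < 0.1) for r in rain_a]

# World B: a storm front is a hidden common cause. It brings rain AND
# overflows a drain that wets the pavement; the rain itself wets nothing.
storm = [int(rng.random() < 0.3) for _ in range(N)]
rain_b = list(storm)
wet_b = [int(s or rng.random() < 0.1) for s in storm]

# A correlation-based learner sees only (rain, wet) pairs. Both worlds
# produce the same joint distribution, so the measured correlations match
# and the causal structure is invisible.
print(round(correlation(rain_a, wet_a), 2), round(correlation(rain_b, wet_b), 2))
```

Both correlations come out nearly identical, yet intervening on rain (say, with a sprinkler) would wet the pavement in World A and do nothing in World B. Telling those apart requires a causal model, not more observational data.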
Super AI: Completely Speculative
Artificial Super Intelligence (ASI) would surpass human intelligence across all domains, not just match it. It doesn't exist. There are no working prototypes. It's philosophy more than engineering at this point.
Some researchers think ASI is an inevitable outcome of building better AGI - that once you have a system that can improve itself, it improves rapidly until it's beyond human control or comprehension. Others think it's not achievable. Others think the risks of getting there carelessly are serious enough to warrant significant caution now, even though we're not close.
This is where reasonable, informed people genuinely disagree. The disagreement isn't about facts - it's about which assumptions you hold about things we can't yet test.
How Far Away Is AGI Actually?
Honest answer: nobody knows. Anyone claiming certainty is selling something.
The optimists point to recent progress. Five years ago, GPT-3 seemed like a step change. Today it looks like a preview of something bigger. But the field has been "20 years away" from AGI for 70 years. Every capability that seemed essential for AGI becomes "well, that's not really intelligence" once we build something that does it. The goalposts move.
The pessimists point out diminishing returns on scaling. Bigger models with more data get better at their narrow task. But scaling hasn't clearly solved transfer learning, causal reasoning, or embodied understanding. It's not obvious those problems yield to more compute.
My read: narrow AI will get significantly more capable over the next five years. AGI is not arriving this decade. Anyone confident otherwise is guessing, and they have financial reasons to guess optimistically.
What matters for practical purposes: every AI system you'll interact with today is narrow. Treat it that way. Don't expect it to generalise. Check its outputs. Understand what it was trained for. That's not pessimism - it's how you actually use these tools well.
Check your understanding
Question 1 of 2
DeepMind's AlphaGo beat the world's best human Go players. Which category of AI does AlphaGo belong to?
Question 2 of 2
Which of the following is listed as a core unsolved problem that stands between current AI and AGI?
