What Is AI? History, Definitions and the Turing Test
The question sounds simple until you realise nobody in the field can fully agree on an answer. That's not because we're bad at definitions - it's because AI keeps becoming different things as it evolves. Here's what it actually is right now.
The Real History (Not the Textbook Version)
Most AI courses start in the 1950s at Dartmouth, when researchers got excited about the possibility of machine intelligence. But the actual origin story is messier. People had been trying to build thinking machines since the 1930s, and mathematicians like Alan Turing were already asking whether machines could think while breaking encrypted messages during World War II.
What matters is that from the 1950s through the 1980s, AI went through distinct boom-and-bust cycles. Early on, researchers thought human-level intelligence was maybe 20 years away. Every 20 years, the timeline reset to "another 20 years." This wasn't stupidity - it was optimism meeting hard problems.
Then the field went through "AI winters" where funding dried up because the technology didn't deliver what people promised. These winters happened because companies and governments invested billions expecting human-like robots that could reason about anything. What they got instead was narrow tools that were good at specific jobs. The disappointment was mutual.
What AI Actually Is (Right Now)
Here's what I think is honest: AI is software designed to handle tasks that typically require human judgment or pattern recognition. It learns from data rather than following step-by-step instructions you write.
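To make that contrast concrete, here's a toy sketch (all numbers invented for illustration, not from a real model or dataset). One classifier follows a rule a programmer wrote by hand; the other derives its decision boundary from labeled examples - which is "learning" in the most stripped-down sense:

```python
# Hand-written rule vs. a rule learned from data: a toy 1-D classifier.
# The data and the 180cm cutoff are made-up illustration values.

def rule_based(height_cm):
    # Step-by-step instructions a programmer wrote by hand.
    return "tall" if height_cm > 180 else "short"

def train_threshold(examples):
    # "Learning": derive the decision boundary from labeled examples
    # by taking the midpoint between the two class averages.
    tall = [x for x, label in examples if label == "tall"]
    short = [x for x, label in examples if label == "short"]
    return (sum(tall) / len(tall) + sum(short) / len(short)) / 2

data = [(150, "short"), (160, "short"), (185, "tall"), (195, "tall")]
threshold = train_threshold(data)  # 172.5, found from the data itself

def learned(height_cm):
    return "tall" if height_cm > threshold else "short"
```

Feed the trainer different examples and the boundary moves on its own - nobody edits the code. Real systems learn millions of parameters instead of one threshold, but the principle is the same.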
That's it. That's genuinely it.
The thing people get wrong is assuming AI is thinking or conscious or smart in some general sense. A chess engine that beats every human on Earth isn't smart about weather, cooking, or friendship. An image recognition system that identifies diseases in X-rays can't tell you why your code is broken.
Most AI today is "narrow" - it solves one specific problem. It's remarkably good at that one problem, often better than humans, but it's hopeless at anything else. See Unit 1 Lesson 2 for a full breakdown of narrow vs general AI.
The Turing Test: Still Overrated, But Not for the Reasons You Think
Alan Turing posed a test in 1950: if a machine can fool you into thinking you're talking to a human, does it matter whether it's "actually" thinking?
The test itself is elegant. A human judge holds text conversations with both a human and a machine, without knowing which is which, and has to guess. If the machine fools the judge often enough, Turing argued, we have no grounds for saying it isn't intelligent.
For decades, this was treated like the ultimate arbiter. Researchers chased it like a holy grail. In 2014, a chatbot (Eugene Goostman) kind of passed it - or at least, it fooled enough judges at one particular event. The internet proclaimed victory. Everyone else rolled their eyes.
Here's why I think the Turing Test is actually a bad measure: it's testing deception, not intelligence. A machine could pass the Turing Test by being incredibly good at mimicking human conversation patterns - which modern large language models kind of do - while knowing nothing about the world. It could tell you convincingly false information because it learned from text where humans said false things.
The real problem is we don't know what we're even testing for. When we say "think," do we mean reasoning? Consciousness? Adaptability? The Turing Test punts on this. It basically says "if you can't tell it's not human, it's intelligent." But Eliza, a 1960s chatbot, could partially fool people just by reflecting their statements back at them. Was that intelligence? Not really.
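To see how little machinery that takes, here's a minimal ELIZA-style reflector - a rough sketch of the reflection technique, not Weizenbaum's actual program or script:

```python
# A minimal ELIZA-style reflector: swap pronouns, turn the statement
# back into a question. No understanding anywhere - just substitution.

SWAPS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I"}

def reflect(statement):
    words = statement.lower().rstrip(".!?").split()
    mirrored = [SWAPS.get(w, w) for w in words]
    return "Why do you say " + " ".join(mirrored) + "?"

print(reflect("I am sad about my job"))
# → Why do you say you are sad about your job?
```

A dozen lines, a word-swap table, and people in the 1960s attributed empathy to it. That's the core problem with conversation as a test: it measures what the judge projects, not what the machine does.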
What matters more than the Turing Test is whether a system can do what you need it to do, whether it can explain itself, and whether it's reliable.
What AI Can and Can't Do
This is crucial because it separates reality from hype.
AI is genuinely good at: finding patterns in huge amounts of data, getting better at specific tasks through practice, doing things faster than humans, and sometimes doing things better than humans in narrow domains.
AI is genuinely bad at: reasoning about things it hasn't seen before, handling situations that require understanding context from the real world, explaining why it made a decision in a way that makes sense to humans, and knowing when it doesn't know something.
That last one matters. An AI system will confidently give you wrong information. It doesn't have doubt. It doesn't know when it's hallucinating. That's not a flaw we'll fix with more data - it's closer to a core feature of how these systems work. The lesson on hallucinations and bias covers this in depth.
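One way to see why: many classifiers end in a softmax layer, which converts raw scores into probabilities that always sum to 1. There is no "none of the above" output, so even an input the model has never seen anything like still gets a confident-looking answer. A small sketch with made-up scores:

```python
import math

def softmax(scores):
    # Convert raw scores into probabilities that sum to exactly 1.
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Scores for an input the model "recognises":
probs = softmax([4.0, 0.5, 0.1])   # top class gets ~95%

# Scores for pure noise unlike anything in the training data:
noise = softmax([2.1, 0.3, 0.2])   # top class STILL gets ~76%

# Both distributions sum to 1 - all the probability mass must go
# somewhere, so "I don't know" is structurally impossible here.
assert abs(sum(probs) - 1) < 1e-9
assert abs(sum(noise) - 1) < 1e-9
```

The output format itself forces an answer. Techniques like calibration and uncertainty estimation try to patch this, but the default behaviour is confidence all the way down.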
The Definition That Actually Sticks
AI is pattern matching at scale, trained on examples, applied to new situations. It works when the patterns it learned match the new situation. It fails when they don't, often without warning you it's failing.
Everything else - consciousness, understanding, genuine intelligence - is philosophy we layer on top. Some of that matters for ethics and safety. Some of it's just us trying to make sense of something that doesn't think the way we do.
Check your understanding
Question 1: According to Alan Turing's 1950 test, what is the key condition for determining machine intelligence?
Question 2: What is the main reason AI systems produce false information confidently - what the lesson calls a "core feature" rather than a bug?
