Animal is not artificial intelligence in the way people usually mean the term today. It does not parse language deeply, infer biological facts, train a statistical model, or discover patterns from large datasets.
But it absolutely belongs in the broader story of early AI-like computer experiences. It creates an illusion of learning by turning each failed guess into a new piece of stored knowledge. For many users, especially on early home computers, that alone was striking.
What Animal actually does
- stores yes/no questions
- stores animal names
- follows a decision tree
- patches the tree after a wrong guess
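The four steps above can be sketched in a few lines. This is a minimal illustration with hypothetical names (`Node`, `descend`, `patch`), not the original implementation; the classic BASIC versions stored the tree in flat arrays rather than node objects.

```python
class Node:
    """Either an internal yes/no question or an animal name at a leaf."""
    def __init__(self, text, yes=None, no=None):
        self.text = text   # question text, or animal name at a leaf
        self.yes = yes
        self.no = no

    def is_leaf(self):
        return self.yes is None and self.no is None


def descend(node, answer_yes):
    """Follow the decision tree one step on a yes/no answer."""
    return node.yes if answer_yes else node.no


def patch(leaf, new_animal, question, yes_means_new):
    """After a wrong guess, turn the losing leaf into a question node
    that separates the old animal from the newly supplied one.
    This in-place rewrite is the program's entire 'learning' step."""
    old = Node(leaf.text)
    new = Node(new_animal)
    leaf.text = question
    leaf.yes = new if yes_means_new else old
    leaf.no = old if yes_means_new else new


# The classic program boots with a single animal as its whole tree.
root = Node("fish")
# A wrong guess plus a user-supplied question grows the tree by one node.
patch(root, "bird", "Does it fly?", yes_means_new=True)
```

After the patch, answering yes at the root (`descend(root, True)`) reaches the leaf for "bird", and answering no falls back to the original "fish".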
What Animal does not do
- understand animals
- search the web
- learn from examples automatically
- reason outside its stored tree
The important word is “appears”
Animal appears to learn because the user can see its behaviour improve. The next time the same animal is chosen, the program may ask the newly supplied question and reach the correct answer. That is a simple but powerful feedback loop.
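That feedback loop can be shown end to end. The sketch below is a hypothetical two-round session (the dict-based tree and the `guess` helper are illustrative, not from any historical source): the same tree is played twice, and a single correction changes the outcome.

```python
def guess(tree, answers):
    """Walk the tree to a leaf; `answers` maps question text to True/False."""
    node = tree
    while "question" in node:
        node = node["yes"] if answers[node["question"]] else node["no"]
    return node["animal"]

tree = {"animal": "fish"}

# Round 1: the user is thinking of a penguin; the only guess is wrong.
assert guess(tree, {}) == "fish"

# The user supplies a distinguishing question; the leaf is patched in place.
tree.pop("animal")
tree.update({
    "question": "Is it a bird?",
    "yes": {"animal": "penguin"},
    "no": {"animal": "fish"},
})

# Round 2: the new question now routes straight to the right answer.
assert guess(tree, {"Is it a bird?": True}) == "penguin"
```

Nothing statistical happened between the rounds; the behaviour changed only because one leaf was rewritten into a question node.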
The illusion is not dishonest. It is educational. The program exposes the mechanism by which its future behaviour changes. That makes Animal a useful antidote to more mysterious forms of AI: it shows that a small amount of structure can produce surprisingly lively interaction.
A fair label
The fairest label is “early AI-style learning game”. It is not AI in the strong or modern machine-learning sense. It is a hand-built, user-trained decision-tree system that demonstrates one narrow form of adaptive behaviour.
That is enough to make it historically interesting. It also makes it a good companion to ELIZA: another small program whose effect on the user was much larger than the code alone might suggest.