Animal does not learn by statistics, neural networks or language modelling. It learns in a much simpler and more visible way: when it makes a wrong guess, it asks the player for the animal they had in mind and for a yes/no question that separates that new animal from the one it guessed incorrectly.

That new question is inserted exactly where the program failed. Over time, the tree grows from a tiny starter brain into a larger binary tree of questions and answers.

The starter tree

A traditional starter version can be as small as one question and two animals. For example:

Does it swim?
  Yes → fish
  No  → bird
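This starter tree is small enough to sketch directly. The following is a minimal sketch, assuming the tree is stored as nested objects; the field names (`question`, `yes`, `no`, `animal`) and the `guess` helper are illustrative, not taken from any particular implementation:

```javascript
// The starter tree: one question node with two animal leaves.
const starterTree = {
  question: "Does it swim?",
  yes: { animal: "fish" },
  no:  { animal: "bird" },
};

// Walk the tree, calling answerFn for each question, until an
// animal leaf is reached; that leaf is the program's guess.
function guess(node, answerFn) {
  while (node.question) {
    node = answerFn(node.question) ? node.yes : node.no;
  }
  return node.animal;
}

console.log(guess(starterTree, q => true));  // → fish
console.log(guess(starterTree, q => false)); // → bird
```

In a real session `answerFn` would prompt the player; here it is stubbed so the traversal can be seen in isolation.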

After one mistake

If the program guesses bird but the answer was dog, the player might teach it the question “Is it a mammal?”. The new question node takes the place of the old bird leaf:

Is it a mammal?
  Yes → dog
  No  → bird
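In the nested-object representation sketched earlier, the whole tree after this one mistake looks like this (the new subtree sits under the "No" branch, where the bird leaf used to be; field names remain illustrative):

```javascript
// The tree after learning dog: the bird leaf under "Does it swim? → No"
// has been replaced by a new question node holding both animals.
const treeAfter = {
  question: "Does it swim?",
  yes: { animal: "fish" },
  no: {
    question: "Is it a mammal?",
    yes: { animal: "dog" },
    no:  { animal: "bird" },
  },
};
```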

What happens after a wrong guess?

  1. The program reaches an animal leaf and guesses it.
  2. The player says the guess is wrong.
  3. The program asks for the animal the player had in mind.
  4. The program asks for a yes/no question that distinguishes the new animal from the old one, and for which answer applies to the new animal.
  5. The old animal leaf is replaced by a new question node, with the two animals on its yes and no branches.
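The patch step can be sketched as a single function, again assuming the nested-object tree shape; `parent` and `side` identify where the wrongly guessed leaf hangs, and all names here are illustrative:

```javascript
// Replace the wrongly guessed animal leaf with a new question node.
// answerForNew is the yes/no answer that applies to the new animal.
function learn(parent, side, newAnimal, newQuestion, answerForNew) {
  const oldLeaf = parent[side];           // the animal that was guessed wrongly
  const questionNode = { question: newQuestion };
  questionNode[answerForNew ? "yes" : "no"] = { animal: newAnimal };
  questionNode[answerForNew ? "no" : "yes"] = oldLeaf;
  parent[side] = questionNode;            // splice the question in place
}

const tree = {
  question: "Does it swim?",
  yes: { animal: "fish" },
  no:  { animal: "bird" },
};

// The dog example: "Is it a mammal?" is answered yes for dog.
learn(tree, "no", "dog", "Is it a mammal?", true);
console.log(tree.no.question);   // → Is it a mammal?
console.log(tree.no.yes.animal); // → dog
```

Because the function mutates the parent in place, the rest of the tree is untouched; only the failed leaf is patched.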

Why it feels like learning

The cleverness is not in the depth of any individual rule; each new rule is supplied by the human player. The cleverness is in the structure: the program knows exactly where it failed and can patch that exact point in the tree. That makes the next play-through feel more informed.

It is a good example of why early computer programs could feel lively even when their machinery was transparent. The user was not merely answering questions; they were training the machine’s future behaviour.

Why the browser version autosaves

This remake saves the tree in localStorage whenever a new animal is learned. It also has manual export/import controls so a user can keep a copy of the tree as a JSON file. That is a modern replacement for the older habit of saving the current state of the program to tape or disk.
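Because the tree is plain nested objects, persistence reduces to JSON serialisation. The following is a sketch of that idea, not the remake's actual code; the storage key, function names, and the in-memory stand-in for environments without localStorage are all assumptions:

```javascript
// Hypothetical storage key, not taken from the remake.
const KEY = "animal-tree";

// Use localStorage in the browser; fall back to an in-memory stand-in
// elsewhere so the sketch stays runnable.
const store = typeof localStorage !== "undefined" ? localStorage : (() => {
  const m = new Map();
  return {
    setItem: (k, v) => m.set(k, String(v)),
    getItem: k => (m.has(k) ? m.get(k) : null),
  };
})();

// Autosave: called whenever a new animal is learned.
function saveTree(tree) {
  store.setItem(KEY, JSON.stringify(tree));
}

// Load the saved tree, or fall back to the starter tree.
function loadTree(fallback) {
  const raw = store.getItem(KEY);
  return raw ? JSON.parse(raw) : fallback;
}

// Export is the same JSON, pretty-printed for saving as a file.
function exportTree(tree) {
  return JSON.stringify(tree, null, 2);
}
```

Import is the mirror image: parse the user's JSON file and hand the result to `saveTree`.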