Jenson Baker

The T-Rex dinosaur: an Easter egg in Google Chrome

A deep dive: an AI learns to play Chrome's dinosaur game. That rare case when you actually want to lose your internet connection. See https://dinorunner.com/

There is a dinosaur game hidden in the Google Chrome browser: when there is no internet connection, the browser shows it on the error page.

Chrome has since added the ability to play this game even while online: type chrome://dino in the address bar.

A programmer from Australia named Evan (known on YouTube as CodeBullet) wrote a neural network that plays this game on its own.

Spoiler: At the end, the AI just tears the game apart.

Let's take a step-by-step look at what he did and what he ended up with. The video itself is in English, so if you don't speak English, consider this article a loose translation of what happens in it.

Creating the game

You could teach an AI to play by just looking at the screen and analyzing everything that happens there. But then the AI's reactions would be limited by how fast the screen can be read, meaning it could no longer play at extreme speeds. And we do want it to play at extreme speeds, so it is more effective to embed the AI directly into the game.

The floor and a bouncing character. To try out the first version of the game as quickly as possible, Evan doesn't draw a dinosaur; he makes a jumping rectangle instead. The same goes for the ground: a simple line instead of a road with perspective and patches of sand. For now, the only thing the game can do is make the rectangle jump in place.
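The jumping rectangle boils down to very simple physics: gravity pulls the character down every frame, and a jump just gives it an upward velocity. Here is a minimal sketch of that idea; the names and constants are illustrative assumptions, not Evan's actual code.

```python
# A minimal sketch of the first prototype: a rectangle that can jump.
# Constants are made up for illustration; y grows downward, ground at 0.
GRAVITY = 1.0
JUMP_VELOCITY = -15.0
GROUND_Y = 0.0

class Player:
    def __init__(self):
        self.y = GROUND_Y      # vertical position of the rectangle
        self.vy = 0.0          # vertical velocity
        self.on_ground = True

    def jump(self):
        # Only allow jumping from the ground, as in the real game.
        if self.on_ground:
            self.vy = JUMP_VELOCITY
            self.on_ground = False

    def update(self):
        # Apply gravity each frame and clamp the rectangle to the floor line.
        self.vy += GRAVITY
        self.y += self.vy
        if self.y >= GROUND_Y:
            self.y = GROUND_Y
            self.vy = 0.0
            self.on_ground = True

p = Player()
p.jump()
for _ in range(40):   # enough frames for a full jump arc
    p.update()
```

After one jump and a few dozen frames, the rectangle is back on the floor line, ready to jump again.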

By the way, if you look closely at the game in Chrome, you'll notice that although the dinosaur appears to be running along the ground, its X coordinate on the screen never actually changes. You can think of it not as a dinosaur running, but as cacti flying at it at ever-increasing speed. An illusion!

Movement and obstacles. In the next step, Evan makes the cacti move toward the dinosaur. But cacti also take a long time to draw, so again we use rectangles. First we make them small and see what happens.

So far, everything works: the character jumps, the rectangles move. The next step is to add cacti of different heights and widths, as in the original game. For now, they are still rectangles.

Death by cactus. The last thing Evan adds is a rule that the character dies as soon as it touches a cactus. This is done simply by checking whether the bounding rectangles of the two objects intersect. Touch a cactus, and everything is over.
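The intersection check mentioned above is the classic axis-aligned bounding-box test: two rectangles overlap exactly when each one starts before the other one ends, on both axes. This is the standard technique, not necessarily Evan's exact code.

```python
# Axis-aligned bounding-box collision: the usual way to check
# "did the character touch the cactus" when everything is a rectangle.
def rects_intersect(ax, ay, aw, ah, bx, by, bw, bh):
    """True if rectangle A (position ax,ay, size aw,ah) overlaps rectangle B."""
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

# Dino at (50, 80), size 20x40; a cactus at (60, 100), size 15x30, overlaps it:
hit = rects_intersect(50, 80, 20, 40, 60, 100, 15, 30)    # True
# The same cactus far off to the right does not:
miss = rects_intersect(50, 80, 20, 40, 300, 100, 15, 30)  # False
```

As soon as this function returns True for any obstacle, the game ends.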

Evan didn't start by programming the whole game at once, with the dinosaur, graphics, and nicely drawn cacti. Instead, he built a mock-up of the game and its physics; made sure everything worked; and only then replaced the rectangles with a dinosaur and cacti, and the line on the floor with a road with sand. He simply cut these graphics out of the original game and pasted them into his project.

How Evan made the birds stayed behind the scenes: they can fly low, higher, or very high. But by now we understand that each one started as a rectangle above the line and was later replaced with a picture of a bird.

The dinosaur also had to learn to duck: a rectangle with its height reduced turned into a crouching dinosaur.

Neural network

When the game is ready, you can bolt artificial intelligence onto it. To do this, Evan writes a simple self-learning neural network that works on the principle of reinforcement learning. This means the AI initially knows nothing about the world it has been placed in, and its task is to work out for itself the rules that let it survive in the game as long as possible.
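At its core, such a network is just a function from a few game-state numbers to a single "jump or not" decision. Here is a toy version of what that could look like; the choice of inputs (distance to the next obstacle, its height, the game speed) and the layer sizes are assumptions for illustration, not details from the video.

```python
import random
import math

# A toy decision network: a few game-state inputs, one hidden layer,
# one "jump?" output. Weights start random; evolution tunes them later.
def make_network(n_in=3, n_hidden=4):
    """Create random weights for a tiny fully connected network."""
    return {
        "w1": [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hidden)],
        "b1": [random.uniform(-1, 1) for _ in range(n_hidden)],
        "w2": [random.uniform(-1, 1) for _ in range(n_hidden)],
        "b2": random.uniform(-1, 1),
    }

def decide(net, inputs):
    """Forward pass; returns True if the network says 'jump'."""
    hidden = [
        math.tanh(sum(w * x for w, x in zip(row, inputs)) + b)
        for row, b in zip(net["w1"], net["b1"])
    ]
    out = sum(w * h for w, h in zip(net["w2"], hidden)) + net["b2"]
    return out > 0

net = make_network()
# Assumed inputs: distance to next obstacle, its height, current game speed.
jump = decide(net, [120.0, 45.0, 6.0])
```

A freshly created network like this jumps essentially at random, which is exactly how the first generations behave.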

In short, it works like this:

  • create the first generation of networks;

  • launch them into the game and watch the results;

  • keep the versions of that generation that showed the best results or survived the longest, and remove the rest;

  • launch these successful versions into the game again and see which of them perform best;

  • keep the new winners, remove the rest, and repeat until the AI learns to beat the game completely.
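The generation loop above can be sketched in a few lines of code. Everything here is a stand-in: `fitness` would really be "how long this network survived in the game", and the population sizes and mutation rate are made-up numbers for illustration.

```python
import random

# Sketch of the evolutionary loop: evaluate everyone, keep the best,
# refill the population with mutated copies of the survivors.
POP_SIZE = 20
KEEP = 5  # survivors per generation

def random_genome():
    """A genome here is just a flat list of network weights."""
    return [random.uniform(-1, 1) for _ in range(10)]

def mutate(genome, rate=0.1):
    """Copy a genome with small random tweaks to each weight."""
    return [g + random.gauss(0, rate) for g in genome]

def fitness(genome):
    # Placeholder score: in the real project this would be the survival
    # time of a dinosaur controlled by this genome.
    return -sum(abs(g - 0.5) for g in genome)

population = [random_genome() for _ in range(POP_SIZE)]
for generation in range(30):
    ranked = sorted(population, key=fitness, reverse=True)
    survivors = ranked[:KEEP]
    # Refill the population with mutated copies of the survivors.
    population = survivors + [
        mutate(random.choice(survivors)) for _ in range(POP_SIZE - KEEP)
    ]
best = max(population, key=fitness)
```

Run for enough generations, the surviving genomes drift toward whatever behavior scores highest, which in the dinosaur game means jumping at exactly the right moments.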

The first version of the AI that Evan made simply jumped at random and, when it got lucky, cleared a cactus.

The first few generations of the AI had a primitive tactic: just keep jumping and hope that the rhythm of the jumps happens to match the gaps between the cacti. That didn't work, so by the seventh generation the neural network had found the relationship between the distance to an obstacle, the spacing between obstacles, and the moment when it needs to jump:

Now the AI is able to wait until the cacti are close enough to jump, instead of jumping over them randomly.

An interesting detail: since Evan uses a self-learning neural network, at some moments the dinosaur appears to split into many copies. These are the different versions of the network from the same generation, all playing at the same time.