
TensorFlow.js, Machine Learning and Flappy Bird: Frontend AI

Written by Daniel Capeletti
Published on June 27, 2018
TL;DR

What happens if we mix a well-known game with machine learning models? Here’s an experimental use of TensorFlow.js in the Flappy Bird game.


When I hear about machine learning, I automatically think of Python or Java implementations. But the question I’ve been asking myself lately is whether frontend developers could also benefit from implementing machine learning, and how.

In order to try machine learning in frontend development, I started to read about TensorFlow.js, a JavaScript library to build and train models. What follows is my experiment, based on an existing HTML5 Flappy Bird project that uses machine learning.

My goal was to replace Synaptic with TensorFlow.js. To better explain the concepts used in this experiment and to describe the exact steps I took, I divided the article into 7 parts, including one with some useful links:

1. Understanding Machine Learning, Deep Learning and Neural Networks

2. What’s TensorFlow.js and why the hype?

3. Creating my first Neural Network

4. I have the tools, what’s next?

5. Flappy Bird using TensorFlow.js

6. Resources & Ideas

7. Conclusion

Before we go any further we need to understand the concepts used in this experiment.

1. Understanding Machine Learning, Deep Learning and Neural Networks

In a nutshell, machine learning is a machine’s ability to learn a problem from examples, without us explicitly implementing an algorithm for it. Let’s take jumping over an obstacle as an example.

Just like people can learn to jump over an object after several tries, with machine learning we can do the same. A machine would run a number of attempts with different jump strengths and evaluate the results until it can confidently tell whether a jump is possible based on the speed and the obstacle’s size.

1.1 Machine Learning

Getting a bit more technical now, we can ask what Machine Learning can do for us or what tasks it can accomplish. Tasks are usually classified into two categories:

  • Supervised learning: In this task we can train the machine by providing it with examples of sample inputs and their desired outputs. The machine will try to find a function that maps the inputs to the outputs.
  • Unsupervised Learning: Here our inputs are not directly connected to expected outputs. This is useful when trying to discover hidden patterns in data.

How do we tell which one to use? It depends on whether we have a learning signal or not. The two categories above describe how we provide inputs to the machine, but when it comes to the outputs there are other types of tasks.

These are the ones usually brought up when asking “what type of problem is this?”, and the main categories are:

  • Classification: By providing the machine with inputs and outputs for training, we can then feed it new data and predict which category it belongs to. For example, we can create a machine that can tell cats from dogs by providing it with sample images of cats and dogs as inputs, and their labels as the outputs.
  • Regression: In regression tasks the output is a continuous value rather than a predefined category, and we aim to optimize the function that produces those outputs over time. In our jumping-over-an-obstacle example we can frame a regression task based on the distance between you and the obstacle and on its height. In other words, the machine will optimise the function that tells us whether to jump based on those inputs.

Clustering, density estimation and dimensionality reduction are other types of tasks we can apply, but they are less relevant to this article than the first two.

1.2 Deep Learning

In task-specific algorithms an input produces an output that tells us something directly. In Deep Learning these outputs aren’t simply immediate feedback for us; instead, they serve as inputs for another layer of functions.

One common architecture is to make use of Neural Networks, where each hidden layer will receive inputs from the previous one.

1.3 Artificial Neural Network

This fancy name and concept comes from the biological neural networks that make up the human brain. An artificial neural network consists of nodes (neurons) connected to each other, each applying some knowledge to its input values.

Neurons

Imagine a neuron as an entity that takes a number and multiplies it by another value. This value is called the weight and can initially be randomized. To get an output value from that neuron we need to activate it, meaning we apply an activation function that produces the output.

Coming back to our obstacle example, imagine we don’t know the best moment to jump over the obstacle. Our neural network will initially apply a random weight to our input value, so the output will tell us whether to jump or not, even though the network doesn’t know anything yet.

Another element of the activation function is the bias, an added value that shifts the function along its axis. For example, take a function that returns a value between 0 and 1.

We can expect our outputs to be in that range and design our conclusions accordingly. But our pattern might work better in another range, between 2 and 3 for example. For that reason, we can add a bias to shift the axis and accommodate the new output range.
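To make that concrete, here is a minimal sketch of a single neuron in plain JavaScript; the weight and bias values are made up purely for illustration:

[code]
// A single artificial neuron, sketched in plain JavaScript.
// The weight and bias values are arbitrary, for illustration only.
const sigmoid = x => 1 / (1 + Math.exp(-x));

function neuron(input, weight, bias) {
    // Weigh the input, shift it by the bias, then squash it with the activation
    return sigmoid(input * weight + bias);
}

console.log(neuron(0.7, 1.4, -0.5)); // a value between 0 and 1
[/code]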

Layers

A neuron alone won’t do the trick; we need more of them. A group of neurons is commonly called a hidden layer, where you determine the number of neurons and how they’re connected to the previous layer.

Layers can be connected as you please, but a common way to do it is to create a fully connected neural network, meaning every neuron in one layer is connected to every neuron in the next layer.

Other layers we will find are the input and output layers, also composed of any number of neurons, depending on our problem. If you want to learn more about Artificial Neural Networks, check this video from the 3Blue1Brown channel.

2. What’s TensorFlow.js and why the hype?

TensorFlow is a widely used open source machine learning framework, mainly distributed for Python, but it can also be installed for Java, Go and C. Backed by a large community, it has been improved over the years, earning a strong reputation in machine learning and being used even by NASA.

"Any application that can be written in JavaScript, will eventually be written in JavaScript" – Jeff Atwood

Greatly received by the community, TensorFlow.js was released with an API similar to the one found in the Python implementation, but it was completely rewritten for JavaScript! For more, read the TensorFlow.js release article.

Now we can use tensors and all of TensorFlow’s power entirely on the client side! It’s worth noting, however, that TensorFlow.js is not the only machine learning library for the web; Synaptic and Brain.js are also worth mentioning.

Tensor

A tensor is a mathematical structure similar to a matrix or a vector but more flexible, meaning you can have a multidimensional structure.

In TensorFlow.js the tensor API provides us with an easy way to create and manipulate a tensor, for example:

tf.tensor2d([[1, 2], [3, 4]]).print();
// output
// Tensor
//     [[1, 2],
//      [3, 4]]
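Beyond creating and printing tensors, the same API exposes mathematical operations on them; a couple of quick examples (the values are arbitrary):

[code]
const a = tf.tensor2d([[1, 2], [3, 4]]);
const b = tf.tensor2d([[10, 20], [30, 40]]);

a.add(b).print();            // element-wise addition
a.mul(tf.scalar(2)).print(); // multiply every element by 2
[/code]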

3. Creating my first Neural Network

So after going through the basics in the TensorFlow.js tutorials, I wanted to try and create my own simple neural network. Let’s start by creating a hidden layer:

[code]
const NEURONS = 8;

const hiddenLayer = tf.layers.dense({
    units: NEURONS,
    inputShape: [3],
    activation: 'sigmoid',
});
[/code]

The layers API is really handy here: we’re using dense, which describes a fully connected layer. Note that we didn’t create an input layer; instead we’re saying that the hidden layer has an input shape of 3, meaning we will pass 3 values to it.

For this particular layer I decided to use an activation function called sigmoid. This function is known for having an S shape, and follows the equation:

sigmoid(x) = 1 / (1 + e^(-x))

This tells us the outputs will be between 0 and 1. We’re still missing an output layer, so let’s add one:

[code]
const outputLayer = tf.layers.dense({
    units: 1,
});
[/code]

Now we have our neural network, right? Well, not quite. We do have our variables set, but they’re not connected to each other. We need a model!

[code]
const model = tf.sequential();
model.add(hiddenLayer);
model.add(outputLayer);
[/code]

Now we're about to compile our model, but we need to specify two things: a loss function and an optimizer function:

[code]model.compile({ loss: 'meanSquaredError', optimizer: 'sgd' });[/code]

The loss function tells us how far our output is from the desired one, and the optimizer takes that loss and updates our weights and biases. There we go! Our first model is ready to train and predict results.
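To get a feel for how that works, here is a minimal, hypothetical round of training and prediction for this model; the input and target values below are made up purely for illustration:

[code]
// Three made-up samples with 3 input values each (matching inputShape: [3])
const xs = tf.tensor2d([[0, 0.5, 1], [1, 0.2, 0], [0.3, 0.8, 0.1]]);
const ys = tf.tensor2d([[1], [0], [1]]); // made-up target values

model.fit(xs, ys, { epochs: 10 }).then(() => {
    // predict() returns a tensor; data() resolves with its values
    model.predict(tf.tensor2d([[0.1, 0.4, 0.9]])).data().then(output => {
        console.log(output[0]);
    });
});
[/code]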

4. I have the tools, what’s next?

When you’re coding for the frontend you always have to be aware of the browser’s performance limitations, and it’s no different with TensorFlow.js. The library is optimized to run its calculations on the GPU, which means that a mistake, such as leaking tensors in GPU memory, can lead to very low FPS.

Luckily the API provides us with enough tools to deal with that. Whenever you’re performing an operation manually, let’s say you want to add a value to a tensor, you will want to use the tidy function:

[code]
return tf.tidy(() => {
    return tensor.add(tf.randomUniform(tensor.shape, min, max));
});
[/code]

Why? Once the operations are done, tidy takes care of cleaning up all the tensors used inside the function, except for the one we return. Note that tidy only works with synchronous operations, so wrapping a Promise in a tidy won’t work.
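For asynchronous code, a common alternative is to dispose of tensors manually once you’re done with them; this is only a sketch (reusing the tensor variable from the snippet above), but tf.memory() is handy for checking whether anything is leaking:

[code]
const result = tensor.add(tf.scalar(1));

result.data().then(values => {
    // ... use the values ...
    result.dispose(); // free the memory backing this tensor
    console.log(tf.memory().numTensors); // check how many tensors are still alive
});
[/code]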

Ok! Now I think I have everything I need to play around on my own. Please note that I did go through the basic TensorFlow.js tutorials available here, and I recommend you do the same.

But… what should I build? I remembered that some time ago there was this very cool game called Flappy Bird, and that it had been solved using machine learning and a genetic algorithm, as described here.

That implementation makes use of Synaptic neural networks to power the prediction engine. So this raised a question: can we adapt this implementation to use TensorFlow.js?

5. Flappy Bird using TensorFlow.js

I didn’t want to rewrite the application from scratch, since it’s well described in the original repository, so I downloaded it and started playing around. The first thing I did was to remove Synaptic usage and implement TensorFlow.js models.

As described in the original repository, the author used a neural network with 2 inputs, a 6-neuron hidden layer and 1 output.

So I’m going to create this structure:

[code]
const NEURONS = 6;

const hiddenLayer = tf.layers.dense({
    units: NEURONS,
    inputShape: [2],
    activation: 'sigmoid',
    kernelInitializer: 'leCunNormal',
    useBias: true,
    biasInitializer: 'randomNormal',
});

const outputLayer = tf.layers.dense({
    units: 1,
});
[/code]

The logic that tells whether the bird should flap or not is:

[code]
if (output > 0.5) {
    bird.flap();
}
[/code]

That tells me the output will be somewhere between 0 and 1 (remember the sigmoid function?). Sounds like we have our activation function! I chose the kernel initializer and bias initializer by trial and error, checking which ones would leave my output somewhere around 0.5.

Ok, so now we have our model and we can start creating our population of birds using the genetic algorithm. But what is it exactly?

5.1 Genetic Algorithm

This algorithm uses natural selection on a population to generate the next one, based on the best individuals. We need a way to tell which individuals are the best ones, and for our problem we can say the best are the birds that fly the furthest.

The original implementation defines this as fitness calculated as follows:

fitness = total distance travelled - distance to the closest gap
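As a minimal sketch (the property names here are illustrative, not taken from the original code), that calculation could look like this:

[code]
// Illustrative only: assumes each bird tracks these two values while playing
function fitness(bird) {
    return bird.distanceTravelled - bird.distanceToClosestGap;
}
[/code]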

We’re going to choose the top 4 winners based on their fitness and then create some crossovers. This is where our implementation starts to differ slightly from the original one.

[code]
evolvePopulation: function() {
    const Winners = this.selection();
    const crossover1 = this.crossOver(Winners[0], Winners[1]);
    const crossover2 = this.crossOver(Winners[2], Winners[3]);
    const mutatedWinners = this.mutateBias(Winners);
    this.Population = [crossover1, ...Winners, crossover2, ...mutatedWinners];
}
[/code]

As you can see, the new population consists of the 4 previous winners, 2 crossovers and 4 mutated winners. To create a crossover, we’re using the following function:

[code]
crossOver: function(a, b) {
    const biasA = a.layers[0].bias.read();
    const biasB = b.layers[0].bias.read();
    return this.setBias(a, this.exchangeBias(biasA, biasB));
},
[/code]

This is the call that returns a tensor containing the bias values for the layer:

[code]const biasA = a.layers[0].bias.read();[/code]

Remember tidy? Here we’re operating on the tensors we got in the crossOver function:

[code]
exchangeBias: function(tensorA, tensorB) {
    const size = Math.ceil(tensorA.size / 2);
    return tf.tidy(() => {
        const a = tensorA.slice([0], [size]);
        const b = tensorB.slice([size], [size]);
        return a.concat(b);
    });
},
[/code]

Because I don’t want to change the original bias, I’m copying it. Note that TensorFlow.js objects are immutable, so the write function returns a new tensor rather than setting the value in place.

[code]
setBias: function(model, bias) {
    const newModel = Object.assign({}, model);
    newModel.layers[0].bias = newModel.layers[0].bias.write(bias);
    return newModel;
},
[/code]

I want to create mutated individuals, so my mutate function will return a new model with a random bias:

[code]
mutateBias: function(population) {
    return population.map(bird => {
        const hiddenLayer = tf.layers.dense({
            units: NEURONS,
            inputShape: [2],
            activation: 'sigmoid',
            kernelInitializer: 'leCunNormal',
            useBias: true,
            biasInitializer: tf.initializers.constant({
                value: this.random(-2, 2),
            }),
        });
        return this.createModel(bird.index, hiddenLayer);
    });
},
[/code]

We’re randomizing the bias here, but there might be a smarter way to do it, for example making the random value smaller the further the bird goes. Still, a random value between -2 and 2 worked well for us. There you go, this is the core of our genetic algorithm.
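One piece not shown above is the selection step that picks the top 4 winners; a minimal sketch, assuming each bird in the population carries a fitness value, could look like this:

[code]
selection: function() {
    // Illustrative sketch: sort a copy of the population by fitness,
    // best first, and keep the top 4 birds as winners.
    return [...this.Population]
        .sort((a, b) => b.fitness - a.fitness)
        .slice(0, 4);
},
[/code]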

5.2 How do we train the model?

Influenced by the first examples I found in the tutorials, I decided to train the model, but without really thinking about it.

In order to train a model we have to use the fit API:

[code]
trainPopulation: function(population) {
    return population.map(async model => {
        await model.fit(tf.tensor2d(model.history), tf.tensor1d(model.outputHistory), {
            shuffle: true,
        });
    });
},
[/code]

Note that training a model is an asynchronous process, so we’re using async/await in our example. Predicting the result from a model is not an async operation, but getting the output values is.

[code]
tf.tidy(() => {
    const outputs = this.Population[bird.index].predict(tf.tensor2d([inputs]));
    outputs.data().then(output => {
        if (output > 0.5) {
            bird.flap();
        }
    });
});
[/code]

So what data are we training on? As you can see, we’re using model.history as the first parameter and model.outputHistory as the second. I decided to collect the inputs and outputs from the model and see if that could speed up the population’s evolution, but I wasn’t sure whether it would help.
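A rough sketch of how that history could be collected during play (the property and function names here are illustrative, not the original code):

[code]
// Illustrative sketch: record every prediction so it can later be passed to fit()
recordDecision: function(model, inputs, output) {
    model.history.push(inputs);                      // e.g. the values fed to predict()
    model.outputHistory.push(output > 0.5 ? 1 : 0);  // the flap decision we acted on
},
[/code]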

What I noticed right away is how slow it is to train a model. And well, it’s not an easy task. But what about evolving the population faster? No, training the model didn’t help.

Training a model usually requires correct input data and the desired outputs, so that we can teach the model how to behave when it sees such inputs. Our particular problem, however, is about finding the best behaviour by evolving the population.

Since we didn’t have the proper inputs and outputs beforehand, training the model didn’t help.

5.3 Results

We did manage to replicate the results described in the original solution. Was it better? Thanks to the random bias we used to mutate the population and the way we did our crossovers, we saw a winning individual in generation 19, so we can objectively say yes.

This could also be achievable in the original algorithm by tweaking the learning rate. It was really interesting going through the algorithm, modifying it, adding more layers and neurons, and playing around.

My goal was not to improve the original algorithm; rather, I wanted to plug in a newer technology, see how it would behave and check whether I could achieve the same results.

6. Resources & Ideas

Here are some of the resources I used and the ones that inspired my experiment:

  • The TensorFlow.js release article and the official TensorFlow.js tutorials
  • 3Blue1Brown’s video series on neural networks
  • The original Flappy Bird machine learning implementation, built with Synaptic and a genetic algorithm

7. Conclusion

I started this because I wanted to achieve two things: to try machine learning in frontend development and to play with TensorFlow.js. I can confidently say these goals were achieved.

Machine learning isn’t a simple topic and I certainly had to go through many more articles, videos and books than I had expected… and that’s awesome. It’s great to see that we now have enough tools and computing power on the frontend to explore more complex tasks.

When it comes to the technology itself, TensorFlow.js brings the power of tensors to frontend development, offers great examples of how to use machine learning, and provides the tools to build more intelligent solutions for our tasks.

I’m looking forward to seeing where the community takes this technology and what advances it will bring to the frontend development field.
