
Under the hood of Pixling World

by Fredrik Norén, September 5th, 2018

This will be a look under the hood of Pixling World, an artificial life/evolution simulator/god simulator I’m building. As a player you take the role of an old-school deity who gets to create his/her own world, put some creatures (“Pixlings”) into it and then breathe life into them. You can’t control the behavior of the creatures directly, since it’s evolved through real Darwinian evolution over time, but as a god you can give them abilities to interact with each other and their environment. The goal is to create worlds that fascinate yourself and others.

When I’ve posted about it online (post on /r/javascript and post on /r/proceduralgeneration), people have asked me “how does it work?”, so I figured I’d give an overview in this blog post. First I’ll give a brief overview of the game, then I’ll talk about how neural network evolution works, and finally I’ll get into the technical implementation of all this.

The game

For anyone not familiar with the game, here’s what it looks like:

The entire state consists of three textures: Environments, Pixling state and Pixling neural network parameters. Environments are defined by the user, one slice per environment. The Pixling state is a combination of predefined properties such as Alive, Species and LastAbility, plus whatever properties the user defines for the Pixlings, such as Energy, DaysSinceBirthday, NumberOfFishInPocket, etc. The Pixling neural network parameters are, as discussed above, evolved.

Each texture is stored as an R32F 2D array texture in WebGL, i.e. a texture with a width, height and depth and a single floating point value per texel. So, for instance, the Magma environment at cell (x: 15, y: 23) has a value of 19.3, or the Alive slice (index 0) of the Pixling state texture at (x: 4, y: 19) has a value of 1, which means the Pixling in cell (x: 4, y: 19) is considered alive.
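For the curious, allocating such a texture in WebGL2 looks roughly like the sketch below (a minimal sketch; the function name and parameters are illustrative, not the game’s actual code):

// Hypothetical sketch: allocating an R32F 2D array texture in WebGL2.
function createStateTexture(gl, width, height, slices) {
  const texture = gl.createTexture();
  gl.bindTexture(gl.TEXTURE_2D_ARRAY, texture);
  // One mip level, one 32-bit float per texel, `slices` layers deep.
  gl.texStorage3D(gl.TEXTURE_2D_ARRAY, 1, gl.R32F, width, height, slices);
  gl.texParameteri(gl.TEXTURE_2D_ARRAY, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
  gl.texParameteri(gl.TEXTURE_2D_ARRAY, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
  return texture;
}

// A single slice can later be attached to a framebuffer so a fragment shader can
// write to it (rendering to R32F requires the EXT_color_buffer_float extension):
// gl.framebufferTextureLayer(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0, texture, 0, sliceIndex);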

Simulation

The overall simulation loop looks something like this:

Each step:

  • Run all user defined environment computations
  • Build inputs to neural networks
  • Run forward pass on neural networks
  • Run all user defined rules and abilities
  • Handle movement and reproduction

All of those steps involve what I call Computations. A Computation takes a number of textures and variables as inputs, runs a function over them and writes the result to one or more textures. Then another Computation uses that output as its inputs, and so on; the simulation is basically just a big graph of Computations feeding into Computations. The functions are implemented as GLSL fragment shaders (a function that runs per pixel on the GPU), so the whole Computation runs asynchronously on the GPU.
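In WebGL2 terms, running one Computation is more or less a fullscreen draw into a framebuffer whose color attachments are the output texture slices. Here’s a rough sketch of the idea (the helper and its shape are illustrative, and a fullscreen-triangle VAO, viewport and drawBuffers setup are assumed to already be in place; this is not the game’s actual API):

// Hypothetical sketch: a Computation is a compiled fragment shader plus a draw
// call that writes one value per cell into the output texture slice(s).
function runComputation(gl, computation, inputTextures, outputs) {
  gl.useProgram(computation.program);
  // Bind each input 2D array texture to a texture unit.
  inputTextures.forEach((texture, unit) => {
    gl.activeTexture(gl.TEXTURE0 + unit);
    gl.bindTexture(gl.TEXTURE_2D_ARRAY, texture);
    gl.uniform1i(computation.samplerLocations[unit], unit);
  });
  // Attach the output slices and draw a fullscreen triangle.
  gl.bindFramebuffer(gl.FRAMEBUFFER, computation.framebuffer);
  outputs.forEach((output, i) => {
    gl.framebufferTextureLayer(gl.FRAMEBUFFER, gl.COLOR_ATTACHMENT0 + i,
      output.texture, 0, output.slice);
  });
  gl.drawArrays(gl.TRIANGLES, 0, 3);
  // No readback here: the call returns immediately and the GPU works through
  // the queued Computations on its own.
}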

Many of the Computations use the game state as input and output, but there are also a number of secondary buffers, used for instance to hold intermediate results while performing the forward pass of the neural network or to keep track of how to move Pixlings in the next update. I put an example Computation at the end of this blog post for anyone interested.

One of the core reasons the simulation can be very fast is that at no point does it need to synchronize with the CPU. The Computations are sent to the GPU for processing, but we don’t need to wait for the result; we can just keep sending Computations to the GPU. Since it’s very cheap to schedule the computations, the game is GPU bound most of the time.

Computations are generated for each environment, rule and ability that the user defines (i.e. the game generates GLSL code from what you define in the UI). At the beginning of the game loop the computations for the environments are run. These are often things like combining the values of two environments, running a blur filter or adding noise to an environment.
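As an example of what such generated code could look like, here’s a minimal sketch of a box-blur environment Computation (not the game’s actual generated output; the uniform names and the fixed 3×3 kernel are just for illustration):

// Hypothetical sketch: blur one environment slice into an output slice.
in vec2 texcoord;
out vec4 out_value;

uniform sampler2DArray environments;
uniform int environmentSlice;
uniform vec2 texelSize; // 1.0 / world size

void main() {
  float sum = 0.0;
  for (int dx = -1; dx <= 1; dx++) {
    for (int dy = -1; dy <= 1; dy++) {
      vec2 offset = vec2(float(dx), float(dy)) * texelSize;
      sum += texture(environments, vec3(texcoord + offset, environmentSlice)).r;
    }
  }
  out_value = vec4(sum / 9.0, 0.0, 0.0, 1.0);
}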

Next the neural networks are run. First, inputs are gathered into one big vector. Then each layer of the network is run, and finally an argmax is run on the output of the network to decide what the next action of the Pixling will be. I put the code for the dense layer in the appendix for people who are interested.
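Sequencing-wise, the forward pass could look roughly like the sketch below, ping-ponging between two brain buffers; the helper, buffer names and flags are purely illustrative, not the game’s actual API:

// Hypothetical sketch: one forward pass per simulation step, entirely on the GPU.
function runForwardPass(scheduleComputation, computations, brainBuffers, brainParameters, layerCount) {
  scheduleComputation(computations.gatherInputs, { output: brainBuffers[0] });
  for (let layer = 0; layer < layerCount; layer++) {
    scheduleComputation(computations.denseLayer, {
      inputs: brainBuffers[layer % 2],       // previous layer's activations
      weights: brainParameters,              // the evolved per-Pixling weight texture
      output: brainBuffers[(layer + 1) % 2],
      activation: layer < layerCount - 1     // e.g. ReLU on hidden layers only
    });
  }
  // Finally pick the ability with the highest output value for each Pixling.
  scheduleComputation(computations.argmaxAbility, { inputs: brainBuffers[layerCount % 2] });
}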

After this I run all of the rules and abilities, though for each Pixling only the ability chosen by the network is actually invoked.

Finally, I move and clone the Pixlings that indicated during the ability and rule computations that they wanted to. Since it’s all done in parallel in shaders, there’s a fair amount of code to handle movement and reproduction. At a high level it works by first computing what I call “deltas”: for each cell, a vector that points at a unique neighboring cell. I also calculate the inverse of this. These can then be used to “move” or “copy” a Pixling from one cell to another, all in parallel. Currently the deltas are all random, but in the future it may be interesting to explore letting the neural network decide its delta. Right now a Pixling can decide where to go only by switching between “don’t walk” and “walk randomly”, which works since it gets information about where it’s about to walk (as seen in the Apple hunters example), but letting it pick the direction directly could be even faster.
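To make the delta idea a bit more concrete, here’s a minimal sketch of what a “move” pass could look like for a single state slice, assuming the inverse deltas have already been computed from the per-cell deltas (the uniform names, the encoding and the simplifications are mine, not the game’s actual shader):

// Hypothetical sketch: move Pixlings in parallel using precomputed inverse deltas.
in vec2 texcoord;
out vec4 out_value;

uniform sampler2D cellState;      // one slice of the Pixling state, for simplicity
uniform sampler2D inverseDeltas;  // offset (in cells) of the neighbor pointing here, (0,0) if none
uniform sampler2D wantsToMove;    // 1.0 if that cell's Pixling asked to move this step
uniform vec2 texelSize;           // 1.0 / world size

void main() {
  vec2 sourceOffset = texture(inverseDeltas, texcoord).xy;
  bool hasIncoming = sourceOffset != vec2(0.0);
  vec2 sourceCoord = texcoord + sourceOffset * texelSize;
  float here = texture(cellState, texcoord).r;
  float movingOut = texture(wantsToMove, texcoord).r;
  float movingIn = hasIncoming ? texture(wantsToMove, sourceCoord).r : 0.0;
  // If a neighbor is moving in, take its state; otherwise keep our own Pixling
  // unless it is moving out. (The real code also has to handle reproduction,
  // empty-target checks and every state slice; this only shows the pattern.)
  float next = movingIn > 0.5 ? texture(cellState, sourceCoord).r
                              : (movingOut > 0.5 ? 0.0 : here);
  out_value = vec4(next, 0.0, 0.0, 1.0);
}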

Rendering

Once the simulation has run a step (or a number of steps if you’re on the Extreme speed setting), I render the results to the screen. Rendering looks something like this:

Each render:

  • Draw each environment that is under the Pixlings
  • Draw the current selection rectangles
  • Draw the Pixlings
  • Draw each environment that is over the Pixlings

The rendering is more or less just single quads that cover the entire screen (one quad per environment, one for the Pixlings), and most of the work happens in shaders. This means each pixel on the screen is calculated in much the same way regardless of zoom level or position, so you get a really smooth zoom experience:

Zooming in and out on a fairly large map, on my MacBook Air.
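For a rough idea of what one of those quad shaders does, here’s a minimal sketch of a fragment shader that maps each screen pixel into world space and samples an environment slice (the camera uniforms and the color mapping are made up for illustration; the real shaders do more):

// Hypothetical sketch: per-pixel environment rendering on a fullscreen quad.
in vec2 texcoord;            // 0..1 across the screen
out vec4 out_color;

uniform sampler2DArray environments;
uniform int environmentSlice;
uniform vec2 cameraCenter;   // in world texture coordinates
uniform vec2 viewSize;       // how much of the world the screen covers

void main() {
  // Map the screen pixel into world texture coordinates for the current zoom/pan.
  vec2 worldCoord = cameraCenter + (texcoord - 0.5) * viewSize;
  float value = texture(environments, vec3(worldCoord, environmentSlice)).r;
  // Map the scalar value to a color; the real game does something fancier.
  out_color = vec4(vec3(value), 1.0);
}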

Sampling & Metrics

Another big area of the game is sampling and metrics. Sampling is what I use to keep track of species in the world; basically, every now and then I record the entire game state and figure out who is a descendant of whom in a web worker. Metrics are calculated by running a “reduce” Computation, which takes a texture and halves it in size, each output pixel being the sum of its four parent pixels. That’s repeated until the texture is small enough to be moved fairly quickly to the CPU (this is expensive, so it only happens every 100 steps). This is how I can count the population, for instance.
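A minimal sketch of one such reduce pass, assuming the output texture is half the size of the input (the uniform names are illustrative):

// Hypothetical sketch: each output texel sums a 2x2 block of the input texture.
in vec2 texcoord;
out vec4 out_value;

uniform sampler2DArray inputTexture;
uniform int slice;

void main() {
  ivec2 outCoord = ivec2(gl_FragCoord.xy);
  ivec2 inCoord = outCoord * 2;
  float sum =
      texelFetch(inputTexture, ivec3(inCoord,               slice), 0).r
    + texelFetch(inputTexture, ivec3(inCoord + ivec2(1, 0), slice), 0).r
    + texelFetch(inputTexture, ivec3(inCoord + ivec2(0, 1), slice), 0).r
    + texelFetch(inputTexture, ivec3(inCoord + ivec2(1, 1), slice), 0).r;
  out_value = vec4(sum, 0.0, 0.0, 1.0);
}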

That’s all!

There’s a lot more I could write about Pixling World, but perhaps this will be enough for a rough overview. There are also a lot of things I’m excited to try adding to Pixling World: LSTMs (Pixlings with memory would be so cool), a way for Pixlings to “see” more of the world and have better control of their movement, more game-like things such as items and traits, manual training of their neural networks (perhaps take control of one and use that for backprop) and much, much more. If anyone is interested in hearing more about any specific area of Pixling World, or has suggestions for ways to take it forward, I’m all ears; drop a comment here or on the reddit thread.

And finally, if you’re interested in trying the game you can do so at https://pixling.world (it’s in alpha, so expect some bugs and weirdness).


Thanks for reading!
/Fredrik

(To get news and updates about Pixling World you can follow the project on Twitter and Reddit)

Appendix

A challenge: Pixling Battlegrounds

In preparation for this article I was working on a world that I was hoping would exhibit some interesting behaviors, but rather than finishing it I realized it might be more fun to put it out as a “challenge”. So, for anyone inclined, here’s a world you can try to complete: https://pixling.world/4YUmyd5eZUgzPVHW73Uvk6 (fair warning: I have no idea whether it’s possible to complete or not).

Example of a Computation

This takes the output of the neural network (brainRes), computes the argmax of it and stores it in cellProperties (the Pixling state). Since it both reads from and writes to cellProperties, it’s double buffered and the result is automatically copied back to the back buffer.

// GLSL code for this computation:


in vec2 texcoord;
out vec4 out_value;

uniform sampler2DArray brainOutputs;
uniform sampler2DArray cellProperties;

void main() {
  float maxv = texture(brainOutputs, vec3(texcoord, 0)).r;
  int maxi = 0;
  for (int i = 1; i < ${abilitySlotsSize}; i++) {
    float val = texture(brainOutputs, vec3(texcoord, i)).r;
    if (val > maxv) {
      maxv = val;
      maxi = i;
    }
  }
  float ability = texture(cellProperties, vec3(texcoord, ${abilititySlotsStart} + maxi)).r;
  out_value = vec4(ability, 0.0, 0.0, 1.0);
}

// Using this computation in the app:

computeCopyBack(
  this.copyMultiLayer,
  this.computations.abilitiesArgmax,
  this.config.worldSize,
  outputs2darray(
    this.state.textures.cellProperties[0],
    FixedCellProperties.InvokingAbility, 1),
  this.state.textures.cellProperties[1],
  {
    texture2darrays: {
      brainOutputs: this.state.textures.brainRes[this.state.brainResRW.read],
      cellProperties: this.state.textures.cellProperties[1]
    }
  });

Dense layer code

This code generates a shader that can compute outputs for multiple nodes in the network at the same time (up to maxColorAttachments).

return multilayerShaderSource(maxColorAttachements,
  `uniform sampler2DArray inputs;
   uniform sampler2DArray weights;
   uniform int layerWeightsOffset;
   uniform bool activation;
   uniform int outputSize;
   #define inputSize ${inputSize}`,
  `float inputVals[inputSize];
   for (int i=0; i < inputSize; i++) {
     inputVals[i] = texture(inputs, vec3(texcoord, i)).r;
   }`,
  i => `
   float res = 0.0;
   // inputLayer is actually the layer in the output
   int weightsOffset = layerWeightsOffset + inputLayer * inputSize;
   for (int i=0; i < inputSize; i++) {
     float weightVal = texture(weights, vec3(texcoord, weightsOffset + i)).r;
     res += inputVals[i] * weightVal;
   }
   float biasVal = texture(weights, vec3(texcoord, layerWeightsOffset + outputSize * inputSize + inputLayer)).r;
   res += biasVal;
   if (activation) {
     res = max(res, 0.0);
   }
   out_values[${i}] = vec4(res, 0.0, 0.0, 1.0);`);