
A few weeks ago, I started a project. I wanted to do real-time lighting on the PICO-8, a fantasy console with purposefully limited horsepower. This turned out to not only work, but also look awesome (demo here), so I decided to write an article explaining how I did it to share the fun. To nobody’s surprise, it turned out that explaining technical topics takes a lot of space. My original article turned into a multi-parter, part 2 of which you are now reading.

I’m going to sue whoever is responsible for floors in this temple.

Part 1, which explained how to apply a palette effect to a single line, was pretty easy to write. All I had to do was crawl through the innards of PICO-8 with you, explain a little bit about its inner workings and try to make everybody excited about juicy tidbits like in-memory pixel order. This part will be a little tougher, as some of the fun retro details will have to be replaced by **gasp** actual math. On the plus side, by the end of it, we’ll have a working lighting effect.

Our starting point is a rectangle on the screen with our scene already in place, correctly clipped, but with no lighting applied yet. The goal is darkening each pixel in this rectangle by the right amount, which means we have to first figure out what the right amount is.

Before-after pictures — as useful in programming tutorials as in dieting

Many things in nature get weaker the further away you are. This works with sound, gravity and also, conveniently — light. The exact rules governing all these things are described by the venerable inverse-square law, whose four-page Wikipedia article boils down to this: the brightness of a light falls with the square of your distance from it. Get two times as far away, and you’ll get four times less light hitting your eyes — a handy fact to remember for the next major hangover.
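In code, the law is a one-liner. Here's a quick sanity check in plain Lua (`brightness` is a hypothetical helper, not something from the cart):

```lua
-- relative brightness at distance d from a light of intensity i,
-- according to the inverse-square law
local function brightness(i, d)
  return i / (d * d)
end

-- doubling the distance quarters the incoming light
assert(brightness(100, 20) == brightness(100, 10) / 4)
```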

In the real world, this change is continuous, with the light getting a little dimmer micron by micron. We, on the other hand, only have six distinct light levels to work with, one of which is actually pitch black.

The basic structure of our light with each level shown as a color.

So instead of using the brightness of the light directly, we’ll just make a table of cutoff distances — the furthest distances that each of our light levels is going to reach. For ease of use, we’ll store the squares of these distances, as that’s what our calculations will produce later.

```lua
-- the dimmest light level reaches 42 pixels out
light_rng = { 10*10, 18*18, 26*26, 34*34, 42*42 }
```
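To see how the table gets consulted, here's a minimal lookup that maps a distance-squared to a light level, with level 6 being pitch black (`light_level` is a name I made up; the table is repeated so the sketch is self-contained):

```lua
-- squared cutoff distances for each light level
light_rng = { 10*10, 18*18, 26*26, 34*34, 42*42 }

-- map a squared distance to a light level: 1 = brightest, 6 = pitch black
function light_level(dsq)
  for lv = 1, #light_rng do
    if dsq <= light_rng[lv] then
      return lv
    end
  end
  return 6 -- beyond the last cutoff: darkness
end
```

For example, `light_level(0)` is 1 (right on top of the light), while anything past 42 pixels out lands in level 6.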

Once we have the table, the most obvious (and, we suspect, completely useless) way to use it is:

**for each pixel we need to filter:**

- calculate the **distance-squared** from this pixel to the light
- pick a **light level** based on that
- **apply the palette** for that level, darkening the pixel

If this sounds like an awful lot of work per pixel, that’s because it is. This will definitely not work on the PICO-8 — unless we find a way to plug a GPU into a console that doesn’t physically exist.

Since going pixel by pixel is not going to work, let’s use a trick we already learned in part 1. Instead of thinking about the whole rectangle at once, we’ll focus on a single line at a time.

Doing this highlights a few things that we can exploit to improve our solution. When we go left-to-right, no matter which line we pick, there are three important things that always hold true:

- there are long stretches where the lighting stays the same
- when the lighting does change between these, it only goes one level brighter or darker
- the light level always increases at first, as we are getting closer to the light, and only starts decreasing once we’re past the halfway point

Having these uniform stretches is quite fortunate, as I just spent 2000 words in part 1 explaining how we can draw them efficiently. Reframing our problem from drawing pixel-by-pixel to segment-by-segment, our algorithm becomes:

**for each horizontal line in our rectangle:**

- **find all the breakpoints** where the lighting level changes
- **find the light level** for each segment between the breakpoints
- **draw each segment** using our fast palette routine

This way, we don’t have to do math for every pixel, but just for the few breakpoints found within our line. What’s more: since our light is symmetrical, we can calculate only half of the breakpoints and mirror them horizontally to get the rest. In total, that’s around five calculations per line instead of calculating something for each of the eighty-something pixels.

The moment we’ve all dreaded has come: I’m going to try explaining some moderately involved math without turning this write-up into a lecture. Fingers crossed.

To make our calculations easier in general, we will work within the light’s frame of reference. This is just a fancy way of saying our calculations will assume that `(0,0)` is wherever the light is. Screen-space coordinates have to be moved to light space first by simply subtracting the position of the light. Once we get to the actual drawing on-screen, we’ll do the reverse.
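As a sketch, the round trip between the two spaces is just a pair of subtractions and additions (the helper names are mine; `lx, ly` is the light's position on screen):

```lua
-- convert a screen-space point to the light's frame of reference
function to_light_space(x, y, lx, ly)
  return x - lx, y - ly
end

-- and back again, when it's time to draw on screen
function to_screen_space(x, y, lx, ly)
  return x + lx, y + ly
end
```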

The points in question — on both sides of the symmetry line.

The `light_rng` table we introduced at the very beginning stores the cutoff distances for each light level — and the breakpoints are simply points lying at exactly that distance from the light source. Since we’re working in light space, the square of the distance from it is easy to calculate:

D² = x² + y²

The `light_rng` table stores distance-squared directly, and we know the `y` coordinate of our line. Spending a few quality moments with the back of a napkin yields a formula for the `x` of each of our breakpoints:

```
L  = light_rng[light_lv]
x² = L - y²
x  = ±√(L - y²)
```

Like we expected, there are two symmetrical solutions — one negative, and one positive, corresponding to the breakpoints on each side of the light.

Since the formula features a square root, it’s only valid when `L - y² ≥ 0`. There is a good reason for that — if there is no solution, it means that our `y` is too far away from the light and this particular light level will never be reached. Knowing that, we calculate both the breakpoints and the brightest light level we need to draw in one fell swoop:

```lua
ysq = y*y
brightest_lv = 1 -- reached unless the loop below finds otherwise
for lv = 5, 1, -1 do
  lrng = light_rng[lv]
  xsq = lrng - ysq
  if xsq > 0 then
    brkpts[lv] = sqrt(xsq)
  else
    brightest_lv = lv + 1
    break
  end
end
```
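Wrapped up as a self-contained function, the same loop looks like this (a sketch in plain Lua, so `math.sqrt` stands in for PICO-8's `sqrt`; the function wrapper and names are mine):

```lua
light_rng = { 10*10, 18*18, 26*26, 34*34, 42*42 }

-- for a line at light-space height y, return the positive-x breakpoints
-- and the brightest level this line ever reaches (1 = brightest)
function find_breakpoints(y)
  local brkpts = {}
  local brightest_lv = 1
  local ysq = y * y
  for lv = 5, 1, -1 do
    local xsq = light_rng[lv] - ysq
    if xsq > 0 then
      brkpts[lv] = math.sqrt(xsq)
    else
      brightest_lv = lv + 1
      break
    end
  end
  return brkpts, brightest_lv
end
```

On the line through the center (`y = 0`) every level is reached, so `brightest_lv` is 1 and the outermost breakpoint sits exactly 42 pixels out; at `y = 20`, levels 1 and 2 are out of reach and the brightest level is 3.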

Once we have the positions for all the breakpoints, we can finally get to actually drawing some lines — and we’ll do them in three batches.

The first batch starts at the leftmost end of the line. Since the breakpoints are in the light’s frame of reference, we have to transform them back to screen space by adding `lx`, the x-coordinate of our light. In this part of the line, all breakpoints are the negative ones, so the actual coordinates for them become `lx - brkpts[lv]`. We proceed breakpoint by breakpoint, drawing progressively lower light levels, until we reach the brightest part of the line.

```lua
xs = x1
for lv = start_lv, brightest_lv+1, -1 do
  xe = lx - brkpts[lv-1]
  fills[lv](xs, xe-1, y)
  xs = xe
end
```

Once we’re done with that, we’re past the brightest point and the light starts getting darker again. Our light level will be increasing now, and all breakpoints will be at positive x coordinates with respect to the light source.

```lua
for lv = brightest_lv, end_lv-1 do
  xe = lx + brkpts[lv]
  fills[lv](xs, xe-1, y)
  xs = xe
end
```

The last segment is special, since it doesn’t end at a breakpoint — it just ends wherever our line does. We need one extra statement outside of the loops to finish it up:

```lua
fills[end_lv](xs, x2, y)
```
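Putting the three batches together, a full line routine might look like the sketch below, with `fills` replaced by a stub that just records the segments so we can see what would be drawn (`draw_lit_line` and the recording stub are mine, not the cart's exact code):

```lua
-- record the segments instead of drawing them, for illustration
segments = {}
fills = {}
for lv = 1, 6 do
  fills[lv] = function(xs, xe, y)
    segments[#segments + 1] = { lv = lv, xs = xs, xe = xe }
  end
end

-- draw one line from x1 to x2 at height y, lit by a light at x = lx;
-- brkpts / start_lv / end_lv / brightest_lv come from the earlier steps
function draw_lit_line(x1, x2, y, lx, brkpts, start_lv, end_lv, brightest_lv)
  local xs = x1
  -- left half: the light gets brighter as we approach it
  for lv = start_lv, brightest_lv + 1, -1 do
    local xe = lx - brkpts[lv - 1]
    fills[lv](xs, xe - 1, y)
    xs = xe
  end
  -- right half: past the light, it gets darker again
  for lv = brightest_lv, end_lv - 1 do
    local xe = lx + brkpts[lv]
    fills[lv](xs, xe - 1, y)
    xs = xe
  end
  -- the last segment is special: it ends wherever the line does
  fills[end_lv](xs, x2, y)
end
```

With the light at `lx = 64` on the line through its center, a full unclipped line produces eleven segments: five getting brighter, the fully-lit stretch in the middle, and five getting darker again.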

There is one strange thing about the actual line drawing in the snippets above. We might expect the call for each segment to be something like `apply_light(xs, xe-1, y, lv)`. That’s indeed the most straightforward way to do it, but if we’ve learned anything doing this, it’s that the straightforward can usually be improved upon.

There are two special light levels that can be handled a bit more efficiently than others. Level 1, the brightest, has a palette that simply does nothing, so we might just as well not draw it at all. Meanwhile, the darkest level (6) simply makes everything black — so we can get away with drawing a black line instead of a costlier palette blend.

These two levels together make up more than **25%** of the area of our filter, so it’s definitely worth it. Unfortunately, handling each level differently would mean a lot of conditionals — and we wouldn’t want these confusing the CPU and messing up our svelte loops. Enter the `fills` table:

```lua
fills = {
  fl_none,
  fl_blend(2), fl_blend(3), fl_blend(4), fl_blend(5),
  fl_color(0)
}
```

For every light level, this table contains a **function** that draws it in the most efficient way. The first level uses `fl_none` — an empty function that does nothing, but can be safely called. The darkest level uses `fl_color(0)`, which draws in solid black using the built-in `rectfill`. Levels 2 through 5 use the routine we wrote in part 1, as we can’t get away from doing some actual blending work for those.

Preparing these functions upfront means we can simply call `fills[lv](…)` and have it use the best approach for that light level, without any conditionals needed.
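The helpers themselves can be built as closures. Here is one way they could look, as a sketch in plain Lua with the actual drawing replaced by call counters (the counters are mine, for illustration; in the cart, `fl_color` would wrap `rectfill` and `fl_blend` would wrap the blending routine from part 1):

```lua
-- does nothing, safely: the brightest level needs no darkening at all
function fl_none(xs, xe, y) end

-- returns a fill function bound to one palette level
function fl_blend(lv)
  return function(xs, xe, y)
    -- placeholder: here the part-1 routine would blend pixels xs..xe
    blend_calls = (blend_calls or 0) + 1
  end
end

-- returns a fill function that draws a solid line of color c
function fl_color(c)
  return function(xs, xe, y)
    -- placeholder: in PICO-8 this would be rectfill(xs, y, xe, y, c)
    solid_calls = (solid_calls or 0) + 1
  end
end

fills = {
  fl_none,
  fl_blend(2), fl_blend(3), fl_blend(4), fl_blend(5),
  fl_color(0)
}
```

Because each entry is already specialized when the table is built, the inner drawing loops stay branch-free.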

There is one small thing left to explain in our drawing routine: the `start_lv` and `end_lv` variables, which hold the light levels at the leftmost and rightmost ends of our line, respectively.

In an ideal world, the rectangle we apply our effect to would always contain the whole area of the lit circle. That would mean that each line both starts and ends at pitch black. Both `start_lv` and `end_lv` would just be six, and we’d call it a day.

To our chagrin, this simple approach will be foiled by something all graphics programmers learn to hate: clipping.

When the light source approaches the screen boundaries, the lit circle no longer fits on the screen in its entirety — it gets clipped. And if we’re not drawing the whole region, we actually have to calculate the light level at the leftmost and rightmost point of our line. If we want to support off-screen light sources, there are even more annoying cases to handle, as we also have to properly calculate the brightest level reached (the mid-point where it usually lies might not even be on screen).

The alternative would be not clipping the rectangle and implementing clipping for each of the lines we draw instead. While conceptually simpler, it would also be much slower, as each of the line segments we draw would have to be checked and clipped individually. If we used built-in graphics routines, clipping would be done for us — but we’re blasting pixels directly into memory, so some extra care about not putting them in the wrong place is in order.
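One way to handle the clipped case, sketched below, is to evaluate the light level directly at the two endpoints of the clipped line, reusing the distance-to-level lookup from earlier (all names here are mine; coordinates are in light space):

```lua
light_rng = { 10*10, 18*18, 26*26, 34*34, 42*42 }

-- light level at light-space point (x, y): 1 = brightest, 6 = black
function level_at(x, y)
  local dsq = x * x + y * y
  for lv = 1, #light_rng do
    if dsq <= light_rng[lv] then return lv end
  end
  return 6
end

-- a clipped line from x1 to x2 (light space) at height y starts and
-- ends at whatever levels its endpoints happen to land in
function edge_levels(x1, x2, y)
  return level_at(x1, y), level_at(x2, y)
end
```

For an unclipped line both endpoints sit in darkness and this degenerates to `start_lv = end_lv = 6`, matching the ideal-world case above.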

When we put it all together, we have a function that can do a single properly-lit line. All we have to do is form these lines into a rectangle, and *voila!*

This already looks pretty good, but feels much less “alive” than the finished product. Part 3 will concentrate on the finishing touches needed to make the effect truly shine, and introduce more mad optimization science to push its performance to the limits.

As usual, if you have any questions, I’ll be happy to answer them on Twitter, where I’ll also let you know when part 3 is ready.

Until then, may programming bring a light to your life!

*Part 1* *|* *Part 2* *|* *Part 3* *|* *Part 4* *|* *Play the game*
