My experience writing Golang as a Node.js & React developer
Disclaimer: I’m writing this article as I begin learning the language, so any comments about Golang come with significant gaps in knowledge.
I’ve been toying with the thought of learning Go for a while now — my primary stack is Node.js and React, and I’ve been on the lookout for a more performant, concurrency-focused language, partly for a change and partly because Node isn’t always the answer. This article is a small documentation of how I ended up trying the language, and why I’m really enjoying it.
A while ago I had an idea for a side project that required a processed collection of data based on the contents of the English dictionary. Only I didn’t want a database; I wanted a file I could store in memory that I didn’t have to host — because costs. It was going to be the basis of an Express application with a simple React frontend, so I thought I’d write the script to build the data file in Node; partly because I knew it well, partly because it plays great with JSON and loosely nested data structures, and partly because it’s so quick to get going in.
After a few hours of npm installing, promise wrapping and some number crunching, I set my script running and logged to the console as it ran to track progress (tip: don’t console.log() if you want to squeeze all the perf out of Node that you can), expecting it to finish overnight.
It didn’t.
It got to the letter P and slowed to a halt. But why? Well, I’d miscalculated a few things before I wrote this script. First off, the size of the data: my data set was based on 129,000 words, and each word had a nested object with an array of values and scores — so I was storing 129,000 deeply nested objects in memory.
Did you know that the V8 engine’s default memory limit is 512MB on 32-bit systems and only 1GB on 64-bit? You can boost it (via Node’s --max-old-space-size flag) to around 1.7GB, but that’s it. So I was running out of memory for my giant in-memory object.
Shit. Not ideal. My solution?
Streams! What if I streamed the data to the JSON file so I didn’t have to hold it all in memory? Perfect. So I refactored my code over a few hours and got it streaming to my file. Memory usage was down and it was crunching along, so again I left it overnight. When I awoke it was finished; great! Not great. My newly created JSON file was 4.2GB. Not exactly usable, and actually impossible to load back in given Node’s memory limits — though I wouldn’t recommend loading a 4.2GB file into memory in any language.
So what do I do? I needed this data to be accessed and spat out by an API. What if I calculated the values I needed on the fly? I was under no illusions that this side project was going to explode and need to handle a crazy amount of traffic, so I figured that was fine. So again I refactored the code and integrated it straight into my API. Problem #3? Speed.
It took around 500ms to calculate a single value in real time in Node.js, which was a problem because I’d need to hit this API multiple times per request in my final product: ten lookups at 500ms each and you’re waiting 5 seconds for a result. That simply wasn’t an option (well, it was, but let’s pretend people are gonna use this and need it to be fast).
Why was it so slow? Well, remember that Node.js isn’t multi-threaded (even though people like to pretend it is); its concurrency model is based on the event loop, which allows it to perform non-blocking I/O operations by handing them off to the (usually multi-threaded) system kernel, which handles them in the background until it’s ready to pass the result back to Node.
Hence I needed something multi-threaded, performant, and fun to work with. I’ve done some Java, C++, C# and PHP in the past, but I figured this would be a good opportunity to work with something different — I considered the functional side of things with Scala, Erlang or Elixir, but I also wanted results and didn’t want to get bogged down in a paradigm I’d only dabbled with in the languages I already knew.
Though I do want to pick up a functional language. Suggestions?
So I went for Golang after some praise from a colleague. I did some of the online tour (impatience), and it took some getting used to — having to define types again (I haven’t tried Flow yet) was something I was no longer accustomed to. Regardless, within a few hours I had the basics I needed to recreate my real-time data lookup.
Let’s start with speed. My library’s real-time lookup takes 8ms on average to grab a value I need. Down from 500ms, that’s roughly a 62× speedup. Ridiculous!
I could definitely work with speed like this.
The Golang standard library is beautifully detailed and has so far contained every bit of functionality I’ve needed across two projects. The documentation is thorough, contains examples, and is easy to traverse — and besides that, it’s just really refreshing not having to npm install **everything**.
The Go language specification is tiny (seriously, compare it to Java’s). The result is that it takes comparatively little time to learn what the language is capable of, and then it just lets you get on with actually programming. There aren’t 50 ways of doing everything; there are usually one or two, which means you won’t be stumped reading other people’s code even if you’re new to the language.
Go’s dependency management being based on a strict file structure also means everything is always where you expect it to be. You don’t have to dig around the folder structure looking for the things you’re importing; it’s immediately obvious, as the sketch below shows.
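A rough illustration, using hypothetical package names under the classic GOPATH layout:

```
$GOPATH/
  src/
    github.com/yourname/wordscores/   ← import "github.com/yourname/wordscores"
      scores.go
    github.com/someone/jsonstream/    ← import "github.com/someone/jsonstream"
      stream.go
```

The import path is the path on disk, so there’s never any question of where a dependency lives.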
Gofmt is another gem that makes writing consistently styled code a dream: a built-in command that formats your code for you (and with gofmt -s it can even simplify it).
No more ESLint ❤.
Go handles concurrency using goroutines and channels: easy ways of spawning concurrent functions and receiving their results, respectively. I’m not going to go into detail about how they work; I’ll just say how easy I found them to use — a dream compared to the Java worker threads I tangled with a few years ago. Take the official Golang tour if you’re interested.
Read more about Go’s Concurrency here and here
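To give a flavour of the pattern, here’s a minimal sketch; the words and the scoring function are made up for illustration, not taken from my lib:

```go
package main

import "fmt"

// score stands in for the real number crunching: any CPU-bound
// calculation you might want to run concurrently.
func score(word string) int {
	total := 0
	for _, r := range word {
		total += int(r)
	}
	return total
}

func main() {
	words := []string{"gopher", "channel", "goroutine"}
	results := make(chan string)

	// Spawn one goroutine per word; each sends its result on the channel.
	for _, w := range words {
		go func(w string) {
			results <- fmt.Sprintf("%s: %d", w, score(w))
		}(w)
	}

	// Receive exactly as many results as goroutines were spawned.
	for range words {
		fmt.Println(<-results)
	}
}
```

Each go statement spawns a concurrent function, and the channel is how the results come back. No thread pools, no executors, no callback pyramids.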
```
go run myProgram.go
go build myProgram.go
```
Simple built-in commands that compile and run your main package, or package it into a single binary executable that can be deployed anywhere. Setting the GOOS and GOARCH environment variables at build time changes the target platform, so you can cross-compile for other systems from the same machine.
Go is compiled. It also won’t compile at all unless you use everything you’ve declared: every variable, package, and function has to be used somehow, or the compiler will yell at you until you remove it. At the beginning, I’ll admit, I hated this. I’m a JavaScript developer; sometimes I just write throwaway code to be removed later, prototype in a hasty manner, console.log() things to debug what’s going on, etc. You can’t do this with Golang, and it’s forced me to think more about what I’m doing and keeps my code tidier from the start.
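For instance, this toy file (nothing to do with my project) only builds because of the blank identifier:

```go
package main

import "fmt"

func main() {
	// Without the "_ =" line below, the build fails with a
	// "declared but not used" error for debug.
	debug := "throwaway value"
	_ = debug // the blank identifier is the escape hatch while prototyping

	fmt.Println("hello")
}
```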
Obviously Go isn’t going to be 62× faster every single time; it just happened to work very well for my use case due to the concurrent number crunching required. The other things I’m loving about the language, however, are baked right in and haven’t lost their shine for me yet. So far I’m a big fan, so I’m planning to continue using it for my side projects when applicable.
I haven’t mentioned some of the other nice things about it — testing, benchmarking, the incredible HTTP server baked right in (a taste of which is below), and more. I only wanted to talk about my findings and to stress that Node.js isn’t always the answer.
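As a taste of that last one, here’s a minimal sketch using nothing but the standard library’s net/http; the route and message are hypothetical, but note that nothing was npm installed:

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	// Routing, connection handling and HTTP parsing all come
	// straight from the standard library.
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, "Hello from Go!")
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```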
It’s quick to get started with, easy to get productive with, it’s fast as fuck when it’s running but it’s a little strict, and that’s okay.
As for the code mentioned in the article, I’ll post it if people are interested, but the API using my lib hasn’t been started yet (I’m busy, okay?) so I haven’t open-sourced it.
Enjoyed my ramblings? Follow me on Twitter to watch my Go journey unfold, or keep up with my side projects on my personal site. 💻