
A Frontender’s Guide To The Galaxy: Pt I & II

by Dario Gieselaar, March 6th, 2017

For the last few months, I’ve been working on creating a foundation for moving away from an outdated frontend stack (ColdFusion) towards something that will last us another 3 to 5 years. In this article I will outline the choices we made and how they solve (classical) frontend development problems. It is technical, but it doesn’t assume the reader is up to date with the developments in the frontend world.

I. Setting up shop

NodeJS: Probably the first thing anyone venturing into frontend development in 2017 should do, is install NodeJS. NodeJS is a JavaScript runtime which was originally designed to deliver fast and scalable network applications, but more importantly from our perspective, has grown to be the foundation of many tools used in frontend development today.

npm: NodeJS comes with a package manager, npm (short for node package manager), which allows developers to share JavaScript modules/packages in one central registry. That registry now holds over 400,000 separate packages, ranging from small utility modules like left-pad, to frameworks like angular, to… full-fledged package managers, like bower. You can simply install a dependency by navigating to your project root and running `npm install <package-name>`.

Other than letting us be smart and use other people’s code instead of rolling our own, npm also allows us to manage dependencies with a JSON file that lists the (versions of) packages your application needs, removing the need to check external dependencies into your repository. (You can also use the newer Yarn client for the npm registry; it’s much faster and more predictable.)
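A hypothetical package.json makes this concrete; the package names and version ranges below are purely illustrative. Running `npm install` in a fresh checkout restores everything listed:

```json
{
  "name": "my-app",
  "version": "1.0.0",
  "dependencies": {
    "left-pad": "^1.1.3"
  },
  "devDependencies": {
    "webpack": "^2.2.1"
  }
}
```

The caret (`^`) means “any compatible version”, which is exactly where Yarn’s lockfile helps: it pins the versions that were actually installed.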

Git Bash: If you’re on Windows, like I am, Command Prompt doesn’t get you very far. You need something that comes a little closer to what Bash gives you. Windows 10 has the incredible (well, for Windows) Bash on Ubuntu on Windows, but it’s a little more work to get up and running. Git Bash is a lightweight alternative that gets you there 98% of the time.

Visual Studio Code: Microsoft’s latest code editor packs a lot of power: the usual stuff like syntax highlighting, code completion, snippets, theming and extensions, but also an integrated terminal, excellent debugging support, and first-class (okay, maybe second-class) support for Git for those who avoid the command line. Added bonus: extensions are written in JavaScript (and executed via NodeJS), so creating or contributing to them is a breeze (and I think Microsoft has made extension development a delightful experience, so try it out!). (Tip: use these keybindings for quicker tab cycling).

II. Scaffolding

Now that we have a toolbox, we need to have a workplace. We need to set up a build system that allows us to:

  • Develop and build a single page application
  • Develop and build standalone components which can be embedded in both the single page application and any given website
  • Develop, build and publish separate packages with minimal overhead

Folder structure: The first thing we need to do is separate the packages from the applications we’re going to deploy. packages is where the packages live; apps is where we will build applications and components (let’s call them apps for the sake of brevity).


```
- apps
- packages
```

In apps, we define a package.json file, which contains the dependencies of the apps. We assume these dependencies are roughly the same for all apps, so we’ll share them rather than defining the dependencies for every app. We’ll then create two folders in apps.








```
apps
- entries
-- app
-- componentA
-- server
- shared
-- util
package.json
```

entries contains the entry points for our separate apps, one folder for each. For instance, we might have an entry point for our single-page application, or for componentA, or for the server-side rendered application shell. Then in shared, we have code which will be shared across apps, e.g. authentication services.

Lerna: To link the separated packages together, and use them in our application code, we use Lerna. It’s a great tool which basically creates a symlink in your node_modules folder (where your npm-installed dependencies live) to the defined package. This allows us to develop very quickly because we don’t have to build and publish every change — we can directly use the changed packages in our application code.
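A minimal lerna.json for a layout like ours might look like this (a sketch; the exact fields depend on your Lerna version):

```json
{
  "lerna": "2.0.0",
  "packages": ["packages/*"],
  "version": "independent"
}
```

Running `lerna bootstrap` then installs the dependencies of every package and symlinks the packages to each other wherever they depend on one another.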

Webpack: To build our apps, we use Webpack: a ridiculously powerful bundler for the modern age. It’s a little difficult to set up, but once everything is in place, here are some of the things this incredible piece of machinery does for us:

  • Seamlessly import and use not only JavaScript modules, but JSON, CSS, images, fonts, basically everything you can think of.
  • Use those imports to build a dependency graph, and a) only serve the code that we’re actually using, leading to smaller bundles and better performance, and b) easily find and delete dead code to keep our code base clean.
  • Enables a development mode which focuses on fast rebuilds and automatically refreshes the page on changes. This allows the developer to quickly see the result of their changes (usually within 1 second), preventing loss of focus, improving productivity. Even more amazing is the experience when developing with React and React Hot Loader: you don’t even have to refresh the page to see code changes. That’s not just CSS that is updated — even JavaScript functions are hot-swapped in place. An incredible experience for any developer.
  • Optimized production builds with an increased build time, but smaller size, and better performance (for instance, some libraries remove debugging features in production builds). Production-targeted bundles ship with source maps, allowing us to “beautify” minified, unreadable code for debugging live bugs.
  • Allows the developer to define “split points”, which split out part of your application’s code into a separate bundle that is then loaded only when needed (e.g. separate views). This again helps us keep the initial JavaScript bundle small so the page can boot up faster.
  • Errors out when one of our bundles exceeds 300KB, a recommended maximum for JavaScript files, making sure we keep focusing on performance.
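To make the list above concrete, here is a trimmed-down sketch of what such a configuration could look like; the entry paths, loaders and numbers are illustrative, not our exact setup:

```javascript
// webpack.config.js (webpack 2 syntax): one bundle per entry point.
const path = require('path');

module.exports = {
  entry: {
    app: './apps/entries/app/index.js',
    componentA: './apps/entries/componentA/index.js',
  },
  output: {
    path: path.resolve(__dirname, 'dist'),
    filename: '[name].[chunkhash].js',
  },
  module: {
    rules: [
      // Everything is a module: JS goes through Babel, CSS through PostCSS.
      { test: /\.js$/, exclude: /node_modules/, use: 'babel-loader' },
      { test: /\.css$/, use: ['style-loader', 'css-loader', 'postcss-loader'] },
      { test: /\.(png|woff2?)$/, use: 'file-loader' },
    ],
  },
  // Fail the build when a bundle crosses the 300KB budget.
  performance: {
    hints: 'error',
    maxAssetSize: 300000,
  },
};
```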

Babel: One of the issues with web development has always been cross-browser compatibility: we can only use (new) JavaScript features that are supported by the browsers we are targeting. It’s hard to remember what works in which browser, and feature detection is hard to get right, making most developers err on the side of caution and stay away from language features that are new-ish. That’s where Babel comes in: a tool that transpiles “modern” JavaScript code to a subset that is supported by every browser. Together with the amazing [babel-preset-env](http://babeljs.io/docs/plugins/preset-env/), we can simply supply a list of browsers to support, and Babel will transpile just enough to make sure It Works Everywhere ™. This way we can safely use nice things like arrow functions, object destructuring, spread operators, rest arguments, and more advanced stuff like async/await and generators.
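For a taste of what that buys us, the snippet below uses several of the features just listed; it runs natively in a recent NodeJS, and Babel makes the same code work in older browsers:

```javascript
// Modern JavaScript that Babel can transpile for older browsers:
// arrow functions, destructuring, spread/rest, async/await.
const user = { name: 'Ada', roles: ['admin', 'editor'] };

// Object destructuring, with a default for a missing property.
const { name, lang = 'en' } = user;

// Rest arguments collect values; the spread operator expands them.
const merge = (...lists) => [].concat(...lists);

// async/await reads like synchronous code but is non-blocking.
async function fetchRoles() {
  return Promise.resolve(user.roles);
}

fetchRoles().then((roles) => {
  console.log(name, lang, merge(roles, ['viewer']));
});
```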

PostCSS: The same thing goes for CSS as for JavaScript: cross-browser compatibility quickly becomes an issue when you’re on the bleeding edge of CSS specifications. That’s why we use PostCSS, a compiler which targets CSS and allows you to define a set of plugins used to transform the source code. Among the plugins we use are [autoprefixer](https://github.com/postcss/autoprefixer), which automatically adds vendor prefixes to properties that need them, and [postcss-custom-properties](https://github.com/postcss/postcss-custom-properties), which allows us to use CSS variables (for now, only at compile time), which we use for theming.
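With postcss-loader, the plugin chain can live in a postcss.config.js; the browser list and options below are illustrative, not our exact configuration:

```javascript
// postcss.config.js: every CSS file webpack processes runs through these.
module.exports = {
  plugins: [
    // Add vendor prefixes (-webkit-, -ms-, …) for the browsers we target.
    require('autoprefixer')({ browsers: ['last 2 versions', 'ie >= 11'] }),
    // Resolve var(--my-color) custom properties at compile time, for theming.
    require('postcss-custom-properties')(),
  ],
};
```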

Flow: One of the benefits of JavaScript, the fact that it’s incredibly dynamic, can quickly become a downside in larger code bases, increasingly leading to runtime errors. A stricter type system would allow us to catch errors at compile time instead of at runtime. Flow is a static type checker, developed by Facebook, which allows you to define types for variable declarations and function arguments. Flow can then warn you if a variable is not used correctly. For instance, if you pass a number to a function that expects a string, Flow will error out. But the great thing about Flow is that it does most of the work for you: it traverses your code (and external dependencies) and infers what type a variable is without you having to define it. Probably the only place where Flow needs a little help is with data coming out of the backend. That’s why we added types to our frontend code which reflect the object model of our platform, allowing us to confidently access and transform data coming out of the backend APIs, and to see where our code will break when the object model changes and we update the types. I use a Flow extension for Visual Studio Code, which offers excellent auto-completion and displays errors in the editor.
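A small sketch of what this looks like in practice (the type and function names are made up for illustration; the file is checked by the `flow` CLI rather than executed as-is):

```javascript
// @flow
// A type mirroring part of our backend's object model (illustrative).
type User = {
  id: number,
  name: string,
  roles: Array<string>,
};

function displayName(user: User): string {
  return user.name.toUpperCase();
}

// OK: matches the User type.
displayName({ id: 1, name: 'Ada', roles: ['admin'] });

// Flow errors at compile time: `name` has the wrong type,
// and `roles` is missing entirely.
// displayName({ id: 2, name: 42 });
```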

ESLint: Lastly, we use ESLint to check our code for patterns we (or others) think are a bad idea. ESLint is completely configurable, and allows you to do anything from enforcing single or double quotes to consistent spacing in function parentheses. I use an ESLint extension for Visual Studio Code as well, which displays errors inline and fixes “fixable” rules on save (for instance, convert braces-after-line to braces-on-line). Additionally, I have a precommit git hook which runs ESLint on my code every time I try to commit new changes.
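A minimal .eslintrc.js sketch; the rules shown are examples, not our exact set:

```javascript
// .eslintrc.js: ESLint picks this up from the project root.
module.exports = {
  parserOptions: { ecmaVersion: 2017, sourceType: 'module' },
  env: { browser: true, node: true },
  rules: {
    // Enforce single quotes; `eslint --fix` repairs violations automatically.
    quotes: ['error', 'single'],
    // No spaces just inside function parentheses.
    'space-in-parens': ['error', 'never'],
    'no-unused-vars': 'warn',
  },
};
```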

Stay tuned for the third part, in which I explain what libraries we’re using that actually end up in our application and how they’re helping us ship more quickly and reliably.

In the meantime, follow me on Twitter, or read some of my previous articles:


A sense of speed: Improving (perception of) performance in a single-page application (medium.com)


The offline experience (or, saying goodbye to imperative data fetching) (medium.com)


Structuring a modern web app, take two (medium.com)