The web disseminates ideas globally, instantly and at zero cost. Because JavaScript executes on the web it has an unfair advantage over every other language: the feedback loop for collaboration is very tight. We can see code executing without compiling or building. Great minds rapidly gather around promising ideas and help mold them. As the tools for collaboration get better and the community grows, the loop tightens.
A student once remarked to Pixar’s Ed Catmull that he must be surrounded by geniuses because his team had so many algorithms named after them. His reply was that at the time nothing existed to do the things they wanted, so to do anything they had to invent it. Innovation is easy on the cutting edge. It is no more difficult than creatively hacking legacy software to keep up with the status quo, but much more rewarding. The further back you are from the frontier, the harder it is to come up with anything new.
In the past I’d say things like “I don’t miss types in JS.” I’d seen many developers move from Java to JS, and they found the instant feedback made them more productive than rigorous static analysis with slow builds. One thing that began to change my mind was seeing Jonas Gebhardt’s talk, Evolving Visual Programming with React.
In it he showed that by mining Flow types from props he was able to visually wire up components to APIs and to other components. Every time he pulled an edge out from a node, everything would be grayed out except the props of nodes that could handle that type. In this way applications could be visually snapped together out of components and APIs. I saw that types are really like the modular connectors on the pipework under your sink: they tell you exactly what can connect to what without springing leaks. And it turns out that types and fast feedback are not mutually exclusive.
The three pillars of functional programming are first-class functions, immutable data and static types. The last two are not native to JS but can be bolted on through Immutable.js and Flow annotations. If we’re going to transpile JS anyway, why not transpile from a language that has types and immutability built in?
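To make the three pillars concrete, here is a minimal sketch. TypeScript stands in for Flow (the annotation syntax is nearly identical at this level) and the names are purely illustrative:

```typescript
import { Map } from "immutable"; // npm install immutable

// 1. Static types: the signature documents exactly what can connect to what.
type Track = { name: string; volume: number };

// 2. Immutable data: set() returns a new Map; the original is untouched.
const defaults = Map<string, number>({ volume: 1.0 });
const muted = defaults.set("volume", 0);
console.log(defaults.get("volume"), muted.get("volume")); // 1 0

// 3. First-class functions: transformations are just values we pass around.
const mute = (track: Track): Track => ({ ...track, volume: 0 });
const tracks: Track[] = [{ name: "drums", volume: 0.8 }];
const silenced = tracks.map(mute);
console.log(silenced[0].volume); // 0
```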
Enter Reason. Reason is an alternative syntax for OCaml that makes it look a bit like ES2016, without JavaScript’s bad parts. OCaml can be transpiled to readable JavaScript via BuckleScript.
Selling points:
MirageOS is like Docker but without Linux. It creates unikernels (the app and OS fused together) in which you include only the parts of the OS you need. For many web applications that is just enough OS to run an HTTP server. A unikernel might be just 5MB, whereas a Docker container is more likely 50–100MB. Mirage instances can be spun up and torn down so quickly that one could be started for every request (like Amazon’s Lambda but much faster to start and lighter weight), or you could have thousands running on a single machine. Because there is less OS code, and what there is is strongly typed, there is a smaller attack surface.
Community: Reason was created by Jordan Walke, the creator of React. Cheng Lou from the React core team has created bindings to JavaScript React. I believe a native Reason version of React, React on Reason (RoR), is also being written. We may see some fusion between the OCaml and React ecosystems.
Rust was commissioned by JavaScript’s creator Brendan Eich, while he was CTO of Mozilla, to solve memory leaks in Firefox. The Servo rendering engine is written in Rust and parts will be merged into Firefox in 2017. Rust makes it safer to write code that runs in parallel, so compositing render layers will now be able to make use of multiple processors. Hopefully this will provoke the next round of browser performance wars, focused on rendering and layout. It may also make it worthwhile to include more cores in desktop and mobile CPUs (this is perhaps why Samsung helped sponsor the project).
Rust has many of the features of functional languages: lambdas, data that is immutable by default, and strong types. Yet it has the raw performance of a systems language like C. It is not garbage collected, and this turns out to be important because Web Assembly does not have a garbage collector. Go, Haskell, OCaml and Reason cannot be compiled to Web Assembly because they need one. So Rust is probably the nicest language in which to write new performance-critical code for Web Assembly. The ability to compile Rust to Web Assembly will put pressure on spec writers to add support for concurrency to Web Assembly.
Some really interesting code is being written in Rust, such as the XI editor. It uses CRDTs (Google Docs-style syncing) and runs text-editing reflows across multiple processors. Having CRDTs at the foundation may make it easier to add multi-user concurrent editing in the future. If compiled to Web Assembly, this might present opportunities for applications like Atom to enable remote pairing (which was the original intent of the Atom editor). XI’s reflow code is already being used in Servo’s textareas.
One take-home is that multi-user and multiprocessor problems may have similar solutions. As more people write safe multiprocessor code, it may cross-pollinate the web ecosystem.
The web now supports Canvas, SVG, Web Audio, Video and WebGL. The browser has done loads of the heavy lifting when it comes to creating multimedia applications; we just need to add UI. Multitracks.com, for example, splits songs into their component tracks, allowing you to mute individual tracks.
Live bands can then use instrumental tracks for instruments they lack, or use individual tracks for practice (some songs have 21 tracks). It is an awesome application but surprisingly easy to write. You just need a component that turns an audio sample into a bitmap (sketched below) and to repeat it a few times on the page. The hard part, playing back multitrack audio cross-platform, is done by the browser.
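Here is a rough sketch of such a component’s core, assuming a canvas element and an audio file URL (the bucket-per-pixel approach is illustrative, not the actual application’s code):

```typescript
// Fetch and decode an audio file, then draw its mono waveform on a canvas.
async function drawWaveform(url: string, canvas: HTMLCanvasElement) {
  const audioCtx = new AudioContext();
  const buffer = await audioCtx.decodeAudioData(
    await (await fetch(url)).arrayBuffer()
  );
  const samples = buffer.getChannelData(0); // first channel only
  const g = canvas.getContext("2d")!;
  const step = Math.floor(samples.length / canvas.width);

  g.clearRect(0, 0, canvas.width, canvas.height);
  for (let x = 0; x < canvas.width; x++) {
    // Peak amplitude within this pixel's bucket of samples.
    let peak = 0;
    for (let i = x * step; i < (x + 1) * step; i++) {
      peak = Math.max(peak, Math.abs(samples[i]));
    }
    const h = peak * canvas.height;
    g.fillRect(x, (canvas.height - h) / 2, 1, h); // centred vertical bar
  }
}
```

Playback is similarly compact: one AudioBufferSourceNode per track, each routed through a GainNode so individual tracks can be muted.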
But there are few great creative tools on the web for working with bitmaps, vector graphics, audio, video and 3D graphics. However there are some pretty decent open source desktop apps: Gimp, Inkscape, Audacity and Blender.
Most of these hark back to the SourceForge era, and other than Blender their development pace has slowed. Their codebases do have working C and C++ code for image, 3D graphics, sound and vector manipulation, as well as support for plugin formats like VST for sound.
It would be great to see processing code extracted from these projects and compiled to Web Assembly. UI code could then be iterated on quickly in the form of React components, and applications could be quickly created by snapping editors together.
It would be great to see some of the contributors to these projects welcomed into the JS / Web Assembly community.
My open source break came through creating SubDivide, a Redux implementation of Blender’s subdivision UI. Similar UI can be seen in Atom and in Zeit’s Hyper application. The cool thing about this style of UI is that it allows a user to have one responsive component per pane and then snap together an application according to their needs. Individual panes can be used fullscreen on mobile, while a desktop with three screens can use all the available space.
You could mix and match editors and sequencers for audio, text, code, video, timelines, SVG, bitmaps and 3D models. This allows the same set of components to be recomposed into a 3D renderer, a digital audio workstation, a stop-motion animation sequencer, a video editor or a web page editor.
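The state behind such a layout can be surprisingly small. Here is a sketch of the kind of tree a Redux reducer might manage (the names are hypothetical, not SubDivide’s actual API):

```typescript
// A layout is a binary tree: leaves hold editors, internal nodes hold splits.
type EditorKind = "audio" | "text" | "video" | "svg" | "bitmap" | "3d";

type Pane =
  | { kind: "leaf"; editor: EditorKind }
  | { kind: "split"; direction: "row" | "column"; ratio: number;
      first: Pane; second: Pane };

// Splitting returns a new tree and leaves the old one untouched, so undo is
// just keeping a reference to the previous state. As in Blender, a split
// starts as two copies of the current pane.
function split(pane: Pane, direction: "row" | "column"): Pane {
  return { kind: "split", direction, ratio: 0.5, first: pane, second: pane };
}
```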
Blender uses this style of UI. Though it takes a bit of getting used to, bringing these tools to the web would lower the level of expertise needed to hack on the UI.
But the real payoff of moving these apps to the web would be if it allowed multiple users to collaborate on the same project. Blender attempted this with the Verse project a few years ago.
In 2016 the popularity of Redux exploded and GraphQL gained traction. Redux taught us to value immutable data, and to value a datum’s history, not just its current value. GraphQL and Relay gave us the ability to fetch the minimal data needed for a component without a custom REST endpoint. However, there are trade-offs in GraphQL’s design because it needs to work with any conceivable backend.
If we were more opinionated and data was strongly typed on both the frontend and backend, we could have guarantees that would allow subscriptions, mutations and shouldComponentUpdate to be handled seamlessly. Om Next has achieved some of these things in ClojureScript. I hope we see some of these ideas bear fruit in JS.
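The “value the history” idea is compact enough to sketch: a wrapper that upgrades any reducer into one that remembers every past state, in the spirit of Redux time travel (names illustrative):

```typescript
// Any ordinary reducer: (state, action) -> new state, never mutating.
type Reducer<S, A> = (state: S, action: A) => S;

interface History<S> {
  past: S[];   // every state we have ever been in
  present: S;  // the current value
}

// Wrap a reducer so each action archives the outgoing state. Because states
// are immutable, keeping them is cheap: they share structure.
function withHistory<S, A>(reduce: Reducer<S, A>): Reducer<History<S>, A> {
  return ({ past, present }, action) => ({
    past: [...past, present],
    present: reduce(present, action),
  });
}
```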
For many creative applications, like video editing and 3D rendering, you will still want to do heavy processing on the backend. For 3D rendering you may want to rent 1,000 cores for a few seconds and have each render one frame of your animation, which can be done for a similar price to renting 8 cores for a few hours.
Scaling servers previously required expert knowledge, but hopefully in 2017 we will see lots of repos shipping not just with dev servers but with scalable backends that can be deployed with a single command. Services like Now.sh take a lot of the pain out of writing and deploying scalable Node, Docker and static file servers.
In 2016 GitHub added the ability to treat a docs folder as a static site. Though this is a very tiny change it makes it trivial to include examples or an entire running application in the GitHub docs. This should tighten the loop when it comes to popularising new components as you will be able to try them in playgrounds in the docs.
Sadly, when it comes to writing applications the web still lacks some pretty basic features, like fine control of layout and decent font metrics. Measuring the size of DOM elements causes reflows, and there is no way to add responsive breakpoints based on the size of a parent rather than the size of the window.
Enter Houdini. I guess it is so called because it allows us to break out of the current box model. Houdini lets developers take fine control of rendering in a performant way.
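The CSS Paint API, one of the first Houdini specs (still a draft at the time of writing), gives a flavour of this. A worklet registers a painter that the engine calls whenever the element needs repainting, so custom drawing runs on the browser’s schedule instead of fighting it. A minimal sketch:

```typescript
// checkerboard.ts -- loaded with CSS.paintWorklet.addModule("checkerboard.js")
// and used from CSS as: background-image: paint(checkerboard);
declare function registerPaint(name: string, painter: { new (): any }): void;

registerPaint("checkerboard", class {
  // The engine supplies a drawing context and the element's current size.
  paint(ctx: any, size: { width: number; height: number }) {
    for (let y = 0; y * 16 < size.height; y++) {
      for (let x = 0; x * 16 < size.width; x++) {
        if ((x + y) % 2 === 0) ctx.fillRect(x * 16, y * 16, 16, 16);
      }
    }
  }
});
```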
WebRTC has been included in Firefox and Chrome for several years now. It first caught my attention after I wrote Fireflies, a multiplayer web game.
Players made an average of 4 key presses a second. I wanted games to have 10 players, and each data frame (generated by a key press) was roughly 64 bytes (because a header is needed for each frame). The engine used a reducer on each client to build the game state from key presses (it was kind of like Redux with time travel, back in 2012). Since every frame had to be relayed to all 10 players, the bandwidth required for each game each second was:
64 bytes × 4 presses/sec × 10 senders × 10 receivers = 25,600 bytes ≈ 25KB/sec per game
That quickly adds up: 25KB/sec is roughly 1GB every 11 hours, and that is just for one game; a thousand concurrent games would consume 1GB every 40 seconds and 1TB every 11 hours. If the game became remotely popular I would lose lots of money, so I never launched it. WebRTC, however, would allow me to scale to millions of games without paying anything for bandwidth (all I need now is some time to make the switch).
Data Channel is in Firefox and Chrome, Edge still sees it as a low priority, and Safari has yet to implement WebRTC (though work is at least underway). However, because there are now 2 billion Chrome users, there is a vast potential audience for great WebRTC applications.
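Wiring up a data channel is compact. A minimal sketch, with the signalling step (getting the offer and ICE candidates to the other player, which still needs a server or some other channel) elided:

```typescript
// Open a peer-to-peer channel and stream key presses over it. How the
// session description reaches the other player (sendSignal) is up to you.
async function connect(sendSignal: (desc: RTCSessionDescription | null) => void) {
  const pc = new RTCPeerConnection();
  const channel = pc.createDataChannel("keypresses", {
    ordered: true, // game state is a reduction over presses, so order matters
  });

  channel.onopen = () =>
    channel.send(JSON.stringify({ key: "ArrowLeft", t: Date.now() }));
  channel.onmessage = (e) => {
    const frame = JSON.parse(e.data);
    // feed the frame into the same reducer every client runs
  };

  await pc.setLocalDescription(await pc.createOffer());
  sendSignal(pc.localDescription); // deliver to the peer out of band
  return pc;
}
```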
The success of Snapchat shows there is a hunger to keep data private. With governments, Google and Facebook wanting to index every word we type, peer-to-peer provides a great way to keep data private.
IndexedDB can be used to persist distributed databases and content in the browser, so it is possible to create in-browser applications like Facebook and Twitter where only your friends hold your data. WebRTC can also run on the server, so if you wish to make sure data is always available you may want to seed it from a server running on a Raspberry Pi. But with 5G we can expect most users to have always-on IoT devices.
Service Worker enables applications to be cached in the browser and run offline. So one could imagine a Service Worker-based browser app shell that connects you to a peer-to-peer version of the web, with applications served from a remote peer’s IndexedDB and rehydrated by injecting script tags and content.
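For reference, the conventional app-shell skeleton such an idea would build on is only a few lines (cache name and file list illustrative):

```typescript
// sw.js -- cache the app shell at install time, then serve from the cache
// first so the shell keeps working offline.
const SHELL = ["/", "/app.js", "/app.css"];

self.addEventListener("install", (event: any) => {
  event.waitUntil(caches.open("shell-v1").then((cache) => cache.addAll(SHELL)));
});

self.addEventListener("fetch", (event: any) => {
  // Fall back to the network -- or, in the peer-to-peer version, to a peer.
  event.respondWith(
    caches.match(event.request).then((hit) => hit || fetch(event.request))
  );
});
```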
Mozilla is also experimenting with FlyWeb, a spec that allows web apps to serve content to other devices on the local network.
WebRTC is the route by which the latest open audio and video codecs will gain traction. Opus is now available in Chrome, Firefox and Edge. It boasts better performance than AAC, Vorbis and MP3, especially at low bitrates.
AV1, a new video codec backed by Google, Mozilla, Microsoft, Amazon and Netflix, will be released in early 2017. It should become the standard video codec for WebRTC, alongside the Opus audio codec.
Decentralized currencies should start to affect the web in 2017. There is currently too much friction on the web if you want to sell content: credit card transactions carry a high unit cost, and they require entering personal information, which is time-consuming and raises privacy concerns. While Bitcoin offers a distributed ledger, Ethereum offers a decentralized, Turing-complete way of scripting contracts, which could allow you to automate financial transactions. Transactions would have less friction because they have negligible unit cost and do not need to reveal the purchaser’s identity. This means you could sell articles, blog posts or videos without needing to sell subscriptions or rely on advertising.
At the beginning of the article I spoke about a site that decomposes music into tracks for individual instruments. However each of these tracks also represents an individual musician. If these musicians allowed their tracks to be remixed they could potentially receive micropayments each time the resulting tracks are streamed.
So while many are lamenting the accounting and admin jobs that will be lost through automation, there are creative jobs to be gained. Automating accounting lowers the bar for everyone who wants to start their own business. You no longer need songwriting and accounting, games programming and accounting, or flower arranging and accounting. You just need a core skill.
So for me the most exciting challenges in 2017 are making collaborative creative software more widely accessible and making creative work more financially rewarding.