Replacing the angular 1 router with Elm — Part 1

Written by julianjelfs_61852 | Published 2017/01/09
Tech Story Tags: javascript | elm | angularjs


If you are trying to migrate away from angular 1 (and you aren’t interested in angular 2) it seems that you have two options: you can either try to replace individual components and eat angular from the inside out, or you can try to eat it from the outside in by replacing the router first.

I am interested in trying to get out from underneath the angular 1 router and so I really want to go for the latter option. This enables me to start treating angular 1 as just one way to render components rather than a framework that’s in charge of everything.

The technology I’m most interested in migrating towards is Elm. I was inspired by Richard Feldman’s talk on Web Components in Elm, and it seemed to me that we ought to be able to achieve something similar with angular 1 components / directives.

It is straightforward to render an angular directive using the `node` function from Elm’s Html module; the problem is that it will not do anything unless the angular framework knows that it has been added to the DOM and gets a chance to compile it.

For example, let’s say that I have the following angular component:

angular.module('MyApp', []).component('pageOne', {
    template: '<div>Page One</div>',
    controller: function PageOneController() {
        console.log('we are in page one');
    }
});

I can render this from Elm like this:

node "page-one" [] []

But it does me no good because angular knows nothing about it. We need to tell angular about this.

Elm is emitting virtual DOM of course, so we have no obvious way to know when the real DOM has been updated.

Mutation Observers

So with a client-side routing solution we are effectively swapping content in and out of a single root node when the route changes. We can use a mutation observer to monitor that root node and then instruct angular to compile its contents whenever there is a change.

Something like this:

var root = document.getElementById('root');
var observer = new MutationObserver(triggerDigest);
observer.observe(root, { childList: true, subtree: true });

function triggerDigest() {
    var $body = angular.element(document.body);
    var $rootScope = $body.injector().get('$rootScope');
    var $compile = $body.injector().get('$compile');
    $rootScope.$apply(function() {
        $compile($body)($rootScope);
    });
}

So here we obtain a reference to the root element into which we will be embedding the Elm app. We create a mutation observer which will call the triggerDigest function whenever that subtree changes. That function obtains a reference to angular’s $rootScope and its $compile service, and then compiles the whole body against the root scope.
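
One thing worth calling out: `$body.injector()` only returns something if angular has already been bootstrapped on (or above) the body. A minimal sketch of the setup I am assuming here (manual bootstrap is just one option; an ng-app attribute works too):

// Assumed setup, not part of the article's code: bootstrap the angular
// module on the body so that angular.element(document.body).injector()
// has something to return by the time triggerDigest runs.
document.addEventListener('DOMContentLoaded', function() {
    angular.bootstrap(document.body, ['MyApp']);
});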

Infinite loops

But we have a problem. As it stands we are monitoring the whole subtree, and compiling the angular directive will cause more mutations and trigger another compile. This will result in an infinite spiral of compilation and mutation and kill the page.

One solution is to have Elm tell us when we should start observing, and then to stop observing as soon as we have compiled. This can be done using ports. We can create an outbound port in Elm (which requires the module to be declared as a `port module`) like this:

port watchDom : String -> Cmd msg

Then we need to send a message to this port when the URL changes:

update : Msg -> Model -> ( Model, Cmd Msg )
update msg model =
    case msg of
        UrlChange location ->
            ( { model | route = Url.parsePath route location }, watchDom "" )

We can then modify our javascript as follows:

var root = document.getElementById('root');
var app = Elm.Main.embed(root);
var observer = new MutationObserver(triggerDigest);

app.ports.watchDom.subscribe(function(msg) {
    observer.observe(root, { childList: true, subtree: true });
});

function triggerDigest() {
    var $body = angular.element(document.body);
    var $rootScope = $body.injector().get('$rootScope');
    var $compile = $body.injector().get('$compile');
    $rootScope.$apply(function() {
        $compile($body)($rootScope);
    });
    observer.disconnect();
}

And that will break the loop for us and leave us with the beginnings of a viable solution. I’m not sure how efficient it is, but it appears to work at least.
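
If the full-body compile does turn out to be expensive, one variation I have not measured would be to compile only the root element that the Elm app renders into, rather than the whole body. A rough sketch, reusing the `root` and `observer` variables from above:

// Hypothetical variation on triggerDigest: scope the compile to the Elm
// root element so angular only re-processes the content that just changed.
function triggerDigest() {
    var $injector = angular.element(document.body).injector();
    var $rootScope = $injector.get('$rootScope');
    var $compile = $injector.get('$compile');
    $rootScope.$apply(function() {
        $compile(angular.element(root))($rootScope);
    });
    observer.disconnect();
}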

One caveat with using ports in this way is that the 0.18 debugger is not going to work properly. It would be possible to avoid the use of ports by wrapping the `history.pushState` function and tracking the `popstate` event. One advantage of ports, however, is that they allow us to supply some metadata about the route we are navigating to, e.g. whether it contains content that needs compiling.
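
For reference, a rough sketch of that port-free alternative, again reusing the `root` and `observer` variables from earlier (untested, and it relies on Elm’s navigation going through `pushState` at call time):

// Sketch of the port-free alternative described above: start a one-shot
// observation whenever the URL changes, whether via pushState or the
// back/forward buttons.
function startObserving() {
    observer.observe(root, { childList: true, subtree: true });
}

var originalPushState = history.pushState;
history.pushState = function() {
    originalPushState.apply(history, arguments);
    startObserving();
};

window.addEventListener('popstate', startObserving);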

There is still a problem

In part two I will explore how to deal with internal angular links.

Full source code for this proof of concept can be found here.

