In the spring of 2015, Evercam, a construction timelapse and project-management camera software company, open-sourced evercam-server. evercam-server is a Phoenix app that communicates with connected cameras, delivers still images via email, shares live video streams, and more.
Open-sourcing evercam-server was a beautiful gift to the Elixir community. While it’s fairly easy to find small example Phoenix apps, it’s more difficult to find apps running in production with a significant code base. evercam-server checks those boxes: it’s a critical part of Evercam’s stack, has 10.4K lines of cleanly organized code, and sees a steady stream of commits.
I spent some time exploring evercam-server from the perspective of a Rubyist, looking for interesting patterns. Below is a dive into the app, starting with the easier bits and wrapping up with the more advanced parts.
One of the most interesting things about evercam-server: the robustness of Erlang, the platform Elixir builds on, eliminates many of the outside dependencies Ruby developers like myself are used to. From background tasks to Cron to caching, evercam-server leverages services already provided by Erlang.
evercam-server uses Intercom, a CRM for SaaS businesses, to track key customer events like a user signup or a cancellation. When a user signs up, evercam-server contacts the Intercom API. This is done via [Task.start/1](https://hexdocs.pm/elixir/Task.html#start/1), as there’s no need to wait on the result:
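A minimal sketch of the pattern (not Evercam’s exact code; the `Accounts` module and `Intercom.create_user/1` helper are assumptions for illustration):

```elixir
defmodule Accounts do
  # Fire-and-forget: the Intercom HTTP call runs in a separate process,
  # so the signup request never waits on (or crashes because of) it.
  # `Intercom.create_user/1` is a hypothetical helper wrapping the API call.
  def notify_intercom(user) do
    Task.start(fn -> Intercom.create_user(user) end)
  end
end
```

`Task.start/1` returns `{:ok, pid}` immediately; the spawned process is not linked to the caller, so a crash inside it won’t take the request down with it.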
Note that `Intercom.get_user/1` makes an HTTP call to the Intercom API. It makes sense to run that call inside the function passed to `Task.start/1` rather than making the HTTP call inline during the request.
I’m unclear how evercam-server handles connectivity issues that would prevent it from reaching the Intercom API. In the Ruby world, I’ve performed a similar Intercom integration via Sidekiq — which will retry creating a user — and a separate scheduled Cron task that walks through our customers and verifies they exist in Intercom.
Elixir’s pipeline operator is great for transforming data via a set of operations. However, chaining can fall down when functions return inconsistent data.
For example, take a look at the code below. `Intercom.get_user/1` attempts to find the associated Intercom user record for a local Evercam `User`:

```elixir
Intercom.get_user(user) |> (fn {_, json} -> json.intercom_id end).()
```
If the external HTTP call to Intercom fails, the `json` payload won’t contain an `intercom_id` and we’ll fail with an unclear error. Properly handling this with pipelines generates some particularly ugly code.
Thankfully, Elixir 1.2 introduced [with/1](https://hexdocs.pm/elixir/Kernel.SpecialForms.html#with/1), and evercam-server uses it extensively to handle errors clearly. Take a look at how a `Snapmail` record is updated from the controller:
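Here is a simplified, self-contained sketch of that pattern; the `find/1` and `validate/1` helpers are hypothetical stand-ins for Evercam’s lookup and changeset functions:

```elixir
defmodule SnapmailUpdate do
  # Hypothetical stand-ins for Evercam's lookup and changeset functions.
  def find(%{"snapmail" => snapmail}), do: {:ok, snapmail}
  def find(_params), do: {:error, :not_found}

  def validate(%{"recipients" => recipients} = snapmail)
      when recipients != [],
      do: {:ok, snapmail}

  def validate(_snapmail), do: {:error, :invalid}

  # Each `<-` must match for execution to continue; the first failed
  # match falls through to the matching `else` clause.
  def update(params) do
    with {:ok, snapmail} <- find(params),
         {:ok, snapmail} <- validate(snapmail) do
      {:ok, snapmail}
    else
      {:error, :not_found} -> {:error, "snapmail not found"}
      {:error, :invalid} -> {:error, "invalid snapmail"}
    end
  end
end
```

Each failure mode gets its own clear error instead of a `KeyError` deep inside a pipeline.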
As long as the left side of `<-` matches, execution continues. If a match fails, execution jumps to the `else` block.
If you are coming from Ruby on Rails, one of the main differences you’ll notice in Ecto model queries is the explicit call to preload associations:
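A sketch of what such a query looks like (the module, field, and association names here are assumptions, not Evercam’s actual schema):

```elixir
# Illustrative only: User, :username, and :cameras are assumed names.
import Ecto.Query

def by_username(username) do
  User
  |> where(username: ^username)
  |> preload(:cameras)  # associations must be requested explicitly
  |> Repo.one()
end
```

Without the `preload`, accessing `user.cameras` later returns an `Ecto.Association.NotLoaded` struct instead of silently firing another query.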
Unlike ActiveRecord, Ecto does not load associated records if they are accessed at runtime. While this is more up-front development work, it reduces the number of database-related performance issues (like N+1 queries) that reach production.
Sidenote: we’re building an Elixir app monitoring service at my day job to track the performance of Ecto queries, HTTP calls, and more. Sign up for our beta.
If you’re coming from Ruby, you’ve likely used Redis as a cache. With Erlang under Elixir’s hood, there are native caching options.
Evercam uses ConCache, which is built on Erlang Term Storage (ETS), for its caching needs. One advantage of ConCache: you can store any Elixir term, versus the limited types Redis supports. For example, the following stores a `User` struct in ConCache rather than just a `User` id:
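A sketch of that usage, assuming a cache named `:users` has been started in the supervision tree (the cache name and key are illustrative):

```elixir
# Store the whole User struct — no serialization to a string or id needed.
ConCache.put(:users, user.username, user)

# Later reads get the full struct back, straight out of ETS.
cached_user = ConCache.get(:users, user.username)
```

With Redis, the struct would have to be serialized to one of its supported types on the way in and deserialized on the way out.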
evercam-server uses SeaweedFS, a distributed file storage system, to store image captures (snapshots) from cameras, uploading and downloading the images via the SeaweedFS HTTP API.
Since creating and closing HTTP connections is expensive, connection pools are defined for uploads and downloads:
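With Hackney, that can look like the following child specs in the application’s supervision tree (the pool sizes and timeouts here are illustrative, not Evercam’s settings):

```elixir
# In the application's supervision tree.
children = [
  :hackney_pool.child_spec(:seaweedfs_upload_pool,
    timeout: 15_000, max_connections: 100),
  :hackney_pool.child_spec(:seaweedfs_download_pool,
    timeout: 15_000, max_connections: 100)
  # ...the app's other children...
]
```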
The HTTPoison Elixir client is used for HTTP calls. HTTPoison is powered by Hackney, an Erlang HTTP client, and Hackney-specific options (like connection pools) can be passed through HTTPoison. For example, the `seaweedfs_upload_pool` is used to save an image:
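A hedged sketch of such an upload (the URL, path, and function name are assumptions):

```elixir
# The `:hackney` option routes the request through the named pool,
# reusing an open connection instead of creating a fresh one.
def save_snapshot(path, image) do
  HTTPoison.put(
    "http://localhost:8888/" <> path,
    image,
    [{"Content-Type", "image/jpeg"}],
    hackney: [pool: :seaweedfs_upload_pool]
  )
end
```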
evercam-server leverages quantum-elixir to run Cron-like scheduled jobs for tasks like cleaning up short-term file storage and sending reminder emails:
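Quantum jobs are declared in the app’s config; a sketch (the app name, module names, and schedules are invented for illustration, and the exact shape depends on the Quantum version in use):

```elixir
# config/config.exs — illustrative job list, not Evercam's actual one.
config :evercam_server, EvercamServer.Scheduler,
  jobs: [
    {"@daily",    {Storage, :cleanup, []}},
    {"0 8 * * *", {Reminders, :send_emails, []}}
  ]
```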
Quantum provides a number of benefits over Cron: jobs live in the codebase and are versioned alongside it, they run inside the application without depending on an external scheduler, and they can be inspected and controlled at runtime.
Perhaps the most interesting flow in this Elixir app is scheduled, recurring still camera image captures called `Snapmail`. This really shows off some Elixir goodness and patterns I haven’t used in a language like Ruby.
When a Snapmail is inserted into the database, three new processes are created. That’s three processes for each **Snapmail** database row. If you’re coming from Ruby like me, creating processes for every database row sounds incredibly heavy.
Of course, we’re in Elixir land and processes are cheap. These processes — started via `SnapmailSupervisor.start_snapmailer/1` — begin with a `Snapmailer` GenServer, which starts `Poller` and `GenEvent` linked processes. The `Poller` wakes up at a configured time, takes snapshots from cameras, and sends an email with the resulting still images.
The `Snapmailer` and `Poller` are linked processes: if the `Poller` dies, it kills the associated `Snapmailer` process as well. `SnapmailSupervisor` would then respawn the `Snapmailer`, which in turn would make a new `Poller`. The same behavior exists between the `Snapmailer` and `GenEvent` processes.
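A stripped-down sketch of that linking idea (not Evercam’s actual implementation; the real `Snapmailer` and `Poller` do much more):

```elixir
defmodule Snapmailer do
  use GenServer

  def start_link(snapmail_id),
    do: GenServer.start_link(__MODULE__, snapmail_id)

  def init(snapmail_id) do
    # The poller is linked: if it crashes, this GenServer crashes too,
    # and the supervisor restarts the pair together.
    poller = spawn_link(fn -> poll_loop(snapmail_id) end)
    {:ok, %{snapmail_id: snapmail_id, poller: poller}}
  end

  defp poll_loop(snapmail_id) do
    receive do
      :wake ->
        # take snapshots and email them here
        poll_loop(snapmail_id)
    end
  end
end
```

Because the pair lives or dies together, a crash in either process leaves the system in a clean state for the supervisor to rebuild, rather than leaving an orphaned half of the flow running.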
If you know of — or are working on — a large open-source Phoenix app, add a response. It’s great seeing production-quality Phoenix apps.