The year was 2016. I was a hobbyist with ideas to burn. Naturally, I needed an inexpensive hosting provider for my latest web app. Where would I start? Who would prove to be the elusive “hostess with the mostest?” This is the story of my migration to and from DigitalOcean, AWS, and Heroku — the trial and error, the pros and the pain points.

You’ve seen the archetypal hero’s tale. Now cinch up your belt, oil your sword. Prepare to experience firsthand the epic tale of a simple village hobbyist — and his quest for just the right host.

## Background: The idea and the app

I like to run. I followed a training plan as I prepared for my next race, but all the training plans had one thing in common: they prescribed different paces for different types of runs. For example, I was expected to adjust my pacing depending on whether I was heading out for an “easy” run, a “distance” run, or a “long” run. But what is an “easy” run? Give me the numbers in minutes per mile — how do I know when I’m on target? To find out, I would consult pacing charts based on a runner’s latest 5K race time and write down my prescribed pace for a scheduled run.

Manually looking up paces in pacing charts was tedious and boring. So I wrote an app to do it for me: RunbyPace. No matter where I am, I can look up my prescribed pace for a given run type and hit the ground running.

But where would I host my app? I didn’t want to spend much money. I budgeted about $5–10 a month for it. My search began at DigitalOcean.

## Hosting with DigitalOcean: The savings (and the struggle) are real

DigitalOcean was a great start. They have droplets that come pre-installed with the requisite tooling for different types of apps. In my case, I had a Rails app, so Ruby, Postgres, Bundler, Unicorn, and Nginx were all pre-installed and ready to roll on the rails. The biggest pain point I experienced surrounded updates — keeping the OS patched, updating the Ruby ecosystem, and pushing new versions of the app itself.
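As an aside, the chart lookup that RunbyPace automates is, at heart, a table keyed by a runner’s latest 5K time and the run type. Here is a minimal Ruby sketch of the idea; the 5K times and paces below are made-up placeholders, not the app’s real chart.

```ruby
# Prescribed paces (minutes:seconds per mile) keyed by latest 5K race time
# and run type. All values here are illustrative placeholders.
PACE_CHART = {
  "21:00" => { easy: "8:30", distance: "7:45", long: "8:10" },
  "24:00" => { easy: "9:40", distance: "8:50", long: "9:15" },
}.freeze

# Look up the prescribed pace for a given 5K time and run type.
def prescribed_pace(five_k_time, run_type)
  row = PACE_CHART.fetch(five_k_time) do
    raise ArgumentError, "no chart row for a #{five_k_time} 5K"
  end
  row.fetch(run_type)
end

puts prescribed_pace("21:00", :easy)   # prints 8:30
```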
It was all manual and difficult because of the droplet’s limited memory.

### DigitalOcean Pros

- Inexpensive, at $5 per month.
- Easy to get started. Lots of well-written guides.

### DigitalOcean Cons

- **Manage everything myself.** Instead of working on new app features, I had to manage everything myself, including Nginx and SSL/TLS certs.
- **Manual updates.** OS updates were a manual process. I configured a Cron job to automatically install security updates, which helped somewhat, but still. Updates to the Ruby ecosystem were also manual.
- **Manual deploys.** Releasing a new version of the app was not a one-click affair. I had to ssh into the VM and run my scripts by hand.
- **Availability.** Only one droplet, so availability was so-so. Updates and new app versions meant the app was offline temporarily.
- **Scaling.** Only possibility for scaling was vertical, with a bigger droplet.
- **Limited RAM.** The basic droplet only had 512MB RAM, which was only enough headroom for basic operations. I had to kill Unicorn in order for Rake tasks to complete.
- **Environmental differences** between production and development made me afraid of releasing big changes to the app because I wasn’t sure if they would work in production. Often I put off releases.

DigitalOcean got me going, but the pain of releasing new features made me want something more automated and more robust. I got interested in trying AWS.

## Hosting with AWS: No one visits your site, but everyone could — if they wanted to

After my DigitalOcean experience, I decided to give AWS a go. I signed up and qualified for AWS’ free tier, which meant I could have a little fun at limited cost — but for only a year.

### Goals with AWS

With DigitalOcean fresh in mind, here were my goals with AWS:

- Parity between development and production environments.
- High availability, even during updates, releases, and deploy failures.

### Architecture on AWS

I went all in with the AWS ecosystem. I moved my runbypace.com domain to AWS Route 53 and used their DNS infrastructure.
DNS directed visitors to an Elastic Load Balancer (ELB), which then directed visitors to EC2 instances running dockerized versions of Nginx and my Rails app on top of the Elastic Container Service (ECS). On the backend we had Postgres running on AWS RDS. AWS also handled SSL certificates for me.

```
# AWS architecture diagram

          Route 53
             |
    Elastic Load Balancer
       |           |
    ec2/ecs     ec2/ecs
    (nginx)     (nginx)
    (rails)     (rails)
       |           |
       PostgreSQL
```

### AWS Pros

- **Cost.** On the free tier, the whole robust world-class architecture only cost me around $11 per month. Most of that was due to the second t2.micro instance.
- **Easy deployments.** Once the infrastructure was in place and I’d written my deployment scripts, I could deploy new versions of the app with a single command.
- **Dev/Prod parity via containers.** On my dev box I only had to run `docker compose up` and start writing code. Because the app was containerized I had the confidence that if it worked on my box, it would work in the cloud. It wasn’t 100% seamless though, as you’ll see in the “Cons” section. Dockerfiles made it pretty easy to package up and deploy assets and artifacts that I didn’t want open-sourced.
- **Availability.** The entire time I had the site on AWS, I never had one outage. When I pushed a new app version, ECS would handle stopping one of the web tasks to make room for the newly registered one. Once the new task passed the health check, ECS would stop the other web task(s) and replace them with the new version. The whole process was seamless. If I made a boneheaded move and broke something, the new version would fail the health check and ECS would keep running the old version of the site.
- **Less update management.** If you run standard AMIs (Amazon Machine Images), they handle the security updates for you.

### AWS Cons

- **Cost, post free tier.** After the free tier expired, I expected to pay around $50 to $65 per month for my existing architecture. Ouch! Way too rich for a hobby app that makes me $0.00.
- **Scatterbrained documentation.** AWS documentation is a little scatterbrained.
- **ECS is awkward.** At the time, Kubernetes wasn’t a thing on AWS. ECS was the only option. Wait, so you mean I have to manage my containers and the VMs they run on? I just wanted to push my containers to the cloud. And there’s no easy way to autoscale EC2 instances in response to pressure from the ECS containers? I have to manually increase the number of EC2 instances? Ugh. So I better have access to the AWS console if I ever experience a big traffic spike.
- **Still have to manage the EC2 instances.** Again, why do I have to manage EC2? And why do I have to manually update the ECS container agent when I’m using AMIs?
- **App config differences between dev/prod.** For local development, I kept configs in `.env` files consumed by `docker-compose`. For production I pushed configs along with the ECS task definitions. When I added or modified an `.env` config but forgot to update the task definitions, trouble ensued. If I’d given this more thought, I suppose I could have automated this.

AWS was incredibly robust, but not pain free, and the pain machine was about to be cranked up to 50: $50+ per month! I looked into Elastic Beanstalk, which is supposed to deploy code directly to the cloud without the need to manage EC2, but it is expensive too! The cost would have been about the same as my previous AWS architecture. So our quest for the right hosting provider continues — this time with Heroku.

## Hosting with Heroku: So far, so good

I just started using Heroku, so I’m still in the honeymoon phase, but so far things look pretty good. The price is good for my needs, and deployment is a cinch.

### Heroku Pros

- **Price.** Only $7 per month for a Hobby dyno.
- **Scalability.** Apparently it’s possible to scale both horizontally (additional dynos) and vertically (more powerful dynos).
- **Drop dead simple automatic deploys.** Now all I have to do is push to master and Heroku deploys to production automatically.
- **Build configuration via Heroku build packs.** With my AWS setup I used Dockerfiles to tweak my build setup. With Heroku I can easily create my own build packs. For some examples, see a build pack for deploying the current commit hash to an arbitrary path, and another for deploying a keybase proof to an arbitrary directory.
- **Free SSL certs.**

### Heroku Cons

- **SSL certificates on naked domains.** I had issues with Google Domains. After leaving Route 53, I took runbypace.com back to Google Domains. For my custom domain runbypace.com, Heroku requires me to configure my DNS provider with an ANAME record pointing to runbypace.herokudns.com. Unfortunately, Google Domains does not support ANAME records… ¬_¬ The workaround on Google Domains is to create a CNAME pointing to www.runbypace.herokudns.com and then a synthetic record to redirect @ to www. Unfortunately again, this breaks down somewhere with the SSL certs, causing https://runbypace.com to have an invalid certificate, even though https://www.runbypace.com works just fine. This was a big bummer, so I switched to PointDNS for now, as it does support ANAME records.

As I gain more life experience with Heroku, I’ll report back with new findings.

## A respite in my hosting quest

Fellow travelers, we have reached another respite in this quest for the best hosting provider. We’ve met DigitalOcean, which is inexpensive, but leaves the burden of setup and configuration to you. Then there’s the Hilton of hosting, AWS. If you have the need for massive scalability and a bazillion 9s of availability, it’s one of your only choices. If you just want to dream dreams and deploy code, Heroku seems like the best fit, for now…

Safe travels and happy coding!

Originally published at tygertec.com on January 27, 2018.