
The Challenges of Running Laravel on AWS

Marco Aurélio Deleu (@deleu)

After two years working with Laravel on a corporate-level SaaS, I've gathered valuable experience with trivial matters as well as hard challenges and their consequences. This article presents a high-level overview of the challenges of running Laravel on AWS. Follow-up articles will dive deeper into a few solutions.


Running software behind a Virtual Private Cloud means that everything is closed by default; we need to explicitly open only the strictly necessary services. Running a Laravel application on AWS Elastic Container Service (Fargate) inside a VPC brings one important challenge: you will never SSH into your server. If a bug accidentally slips into production, there's no terminal access to see what's going on inside the container. For a team used to having SSH access, this seems like a big challenge. However, when you develop an application with a closed-access mindset, logging and monitoring get the attention they deserve, so that you have as much information as you need when things go south.
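With no shell to tail log files, the application has to ship its logs somewhere the platform can collect them. A minimal sketch of pointing Laravel at stderr, which the container runtime can forward to CloudWatch Logs via the awslogs driver (the channel shown ships with Laravel's default config; treat the excerpt as illustrative, not our exact setup):

```php
<?php
// config/logging.php (excerpt)
// Writing to php://stderr lets the container runtime capture every log line.
return [
    'default' => env('LOG_CHANNEL', 'stderr'),

    'channels' => [
        'stderr' => [
            'driver' => 'monolog',
            'handler' => Monolog\Handler\StreamHandler::class,
            'with' => [
                'stream' => 'php://stderr',
            ],
        ],
    ],
];
```

With LOG_CHANNEL=stderr set in the task definition, every Log call ends up in CloudWatch without any SSH access.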


The database is also closed behind the VPC firewalls. In theory, we could use a bastion host to grant ourselves access to RDS, but much like with the containers, we keep this as an extreme last resort. Two weeks ago we acquired a license for Laravel Nova to try to mitigate this problem. The goal is to have at least read-only access to every Eloquent Model and forever avoid spinning up a bastion EC2 instance to connect to RDS and see what's going on.
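Nova authorizes its resources through standard Laravel policies, so read-only access can be sketched as a policy that allows viewing and denies everything else (the model and policy names here are hypothetical):

```php
<?php
// app/Policies/OrderPolicy.php - hedged sketch of a read-only Nova policy
namespace App\Policies;

use App\Models\Order;
use App\Models\User;

class OrderPolicy
{
    // Allow browsing and viewing records in Nova.
    public function viewAny(User $user): bool { return true; }
    public function view(User $user, Order $order): bool { return true; }

    // Deny every write operation, making the resource read-only.
    public function create(User $user): bool { return false; }
    public function update(User $user, Order $order): bool { return false; }
    public function delete(User $user, Order $order): bool { return false; }
}
```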

1 Container = 1 Service

Laravel has a great wrapper around cron: write a single cron entry that runs every minute and let Laravel's task scheduler run your tasks. But unless you want to pay for one container to run 24/7, this feature will no longer be your friend. One could argue that the web server container could use supervisord to start both the web server and the scheduler, but that goes against the purpose of a container. ECS interacts nicely with the concept of running a container for one service and one service only, especially when defining your health check. If your container has two purposes, how do you define its health state? Supervisord is likely never to fail, but your web server might. Having only one task per container means that if your web server fails, Amazon will promptly replace that container with a new one.
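For reference, the pattern being described is a single crontab entry that delegates to the scheduler, with the actual tasks defined in code (the command names below are hypothetical):

```php
<?php
// app/Console/Kernel.php - all scheduling lives in code, not in crontab
protected function schedule(Schedule $schedule)
{
    $schedule->command('reports:generate')->dailyAt('02:00');
    $schedule->command('invoices:remind')->hourly();
}
```

The one cron entry that drives it, straight from the Laravel docs, is `* * * * * cd /path-to-your-project && php artisan schedule:run >> /dev/null 2>&1`, which is exactly the part that needs a container awake every minute.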

The cost of a microservice

I work on a software solution that is provided in the United States, Europe and Australia. We use three AWS regions in production and one for development. There is ongoing work towards an extra account dedicated to QA, and potential customers would require a new region in Asia. In other words, one microservice currently means at least 4 containers, and soon 6. For the sake of stability, some services run 2 containers at all times instead of one (production-only), which takes the numbers to 7 containers today and 10 soon.

Besides that, a microservice is usually powered by one frontend application and one backend application. That doubles the numbers.


As mentioned before, there's no SSH access to production. Ever. That means no php artisan migrate or php artisan custom:command. This was an annoying challenge that we used to work around by exposing APIs instead. Want to run something on production? Write an API and call it after the release. It was a horrible design because we were exposing endpoints that were never meant to be APIs. Among the many consequences, one sticks out: HTTP context on non-HTTP tasks, such as request timeouts or PHP configuration that was supposed to differ between php-cli and the web server.
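One way around this, sketched here as an assumption rather than our actual setup, is to run such commands as short-lived Fargate tasks with a command override instead of HTTP endpoints (the cluster, task definition, network IDs and container name are placeholders):

```shell
# Launch a throwaway task that runs a single Artisan command and exits.
aws ecs run-task \
  --cluster my-cluster \
  --launch-type FARGATE \
  --task-definition my-laravel-app \
  --network-configuration 'awsvpcConfiguration={subnets=[subnet-0123456789],securityGroups=[sg-0123456789]}' \
  --overrides '{"containerOverrides":[{"name":"app","command":["php","artisan","custom:command"]}]}'
```

The command runs with php-cli semantics, inside the VPC, with no HTTP context attached.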

Continuous Deployment

Most of our APIs are consumed by the frontend team only, so we're a lot less strict about breaking changes. On the other hand, deploying a new backend container doesn't necessarily mean a new frontend container goes out. Moreover, at some point during a deployment, 50% of the running containers will be on the previous version of your software and the other half on the newer version. Avoiding breaking changes becomes extremely important to keep releases smooth. That does not necessarily mean maintaining multiple versions of a feature or API; it just means that any release should be compatible with the immediately previous version that is still running. You can deprecate things in one release and remove them in the next, once you're sure no older version is still running.
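The deprecate-then-remove rule applies to the database as well, since old and new containers share one RDS instance mid-deployment. A sketch of renaming a column across two releases (the table and column names are illustrative):

```php
<?php
// Release N: add the new column and start writing to both.
// Containers still on the previous version keep reading the old column.
Schema::table('orders', function (Blueprint $table) {
    $table->string('customer_email')->nullable();
});

// Release N+1: only once no running container touches the old column, drop it.
Schema::table('orders', function (Blueprint $table) {
    $table->dropColumn('email');
});
```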


Every service should be designed with the assumption that more than one container may be running at the same time. It's also important to know your service's bottleneck in order to choose the right metric for scaling. Memory usage rapidly increasing? Scale out before your users start to notice. CPU-bound service? Then CPU utilization is your scaling metric. There are even services that scale based on the number of messages in an AWS SQS queue: if there's too much information to be processed, run more containers.
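Queue-based scaling of the kind described can be sketched with ECS service auto scaling; the names and capacity limits below are placeholders, not a recommendation:

```shell
# Register the worker service as a scalable target (1 to 10 containers).
aws application-autoscaling register-scalable-target \
  --service-namespace ecs \
  --scalable-dimension ecs:service:DesiredCount \
  --resource-id service/my-cluster/my-queue-worker \
  --min-capacity 1 \
  --max-capacity 10

# Attach a target-tracking policy driven by a backlog metric defined in policy.json.
aws application-autoscaling put-scaling-policy \
  --policy-name sqs-backlog-target \
  --service-namespace ecs \
  --scalable-dimension ecs:service:DesiredCount \
  --resource-id service/my-cluster/my-queue-worker \
  --policy-type TargetTrackingScaling \
  --target-tracking-scaling-policy-configuration file://policy.json
```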

Future Follow Up

This is a quick high-level overview of the topics that impacted my development workflow the most while working with Laravel on AWS. I want to write some follow-up articles covering possible solutions to these issues. Laravel Telescope and Laravel Nova are incredibly powerful additions when working with corporate software inside a VPC and end up being no-brainers. More challenging topics, such as spinning up a new container to run a custom Artisan command or deploying a migration container, are also worth covering.

Follow me on Medium and stay tuned for tips and tricks on how to run your Laravel application smoothly on AWS.

