
Dev Log #1: Introducing Freighter

by Steven Johnson, October 24th, 2023

Too Long; Didn't Read

Introducing Freighter, a game server hosting platform with a usage-based payment plan.

For a while now, I’ve wanted to create “Dev Log” type content on a project I’ve been working on, on and off, for the last two years called Freighter. Freighter is a game server hosting platform that provides a usage-based payment plan.


My plan going forward is to put out a quick dev log every time I reach a milestone, sharing what I learned and the pitfalls I hit along the way.


There is no set interval in which I plan on writing these, but I figure they will most likely be out every week or so.

What Is the Purpose of Freighter?

The issue Freighter aims to solve is the cost of entry for server hosting. Take the top three results when searching “Minecraft server hosting platforms” (BisectHosting, Apex Hosting, and Shockbyte): all of their payment plans are monthly and get more expensive the more memory the plan includes.


For casual play (especially for parents paying for their children’s servers), paying $10-plus a month for a server that has 80% downtime seems like a bad investment. Of the three platforms listed, the cheapest option still costs $2.50 a month for a server with one gigabyte of RAM that won’t be able to handle more than a couple of players in a small world.


How does Freighter solve this issue? Freighter gives users the option to shut down their game server on demand, which frees up resources on the host machine and allows more game servers to be hosted on a single machine.


Since you only pay for the time the server is up, you’re incentivized to shut down your server when it’s not in use.


With the usage-based payment plan, you save money because you aren’t paying for all of that downtime.
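To make the savings concrete, here is a back-of-the-envelope comparison. The hourly rate is hypothetical, chosen purely for illustration; only the $2.50 flat plan comes from the hosting comparison above.

```python
# Compare a flat monthly plan with usage-based billing for a server
# that is online only 20% of the month (80% downtime).
# The hourly rate below is hypothetical, for illustration only.

FLAT_MONTHLY_RATE = 2.50   # cheapest flat plan mentioned above ($/month)
HOURLY_RATE = 0.01         # hypothetical usage-based rate ($/server-hour)

def usage_cost(hours_online: float) -> float:
    """Monthly cost under a usage-based plan."""
    return round(hours_online * HOURLY_RATE, 2)

# A 30-day month is 720 hours; 20% uptime is 144 server-hours.
hours = 720 * 0.20
print(usage_cost(hours))   # 1.44, cheaper than the $2.50 flat plan
```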

How Freighter Came to Be

Freighter has gone through many iterations in its lifetime, starting as a project a group of us did for a class in college.

The Origins

In my last semester of college, I took a course that was designed to simulate an Agile development environment. One member of the group was designated as the project manager and would break the work up into epics and sprints.


The project that we created was the ancestor of Freighter. We used Azure for the infrastructure and utilized virtual machines to host the servers.


By the end of the semester, we had a working minimum viable product, but it had its limitations. It could handle only one game server per virtual machine, which meant significant overhead and wasted resources. It also had to rely on PowerShell scripts to interact with the game server.


This proved to be very inefficient and difficult to manage.

Iteration 1

Soon after graduation, this project was abandoned, but it lived rent-free in my head for months afterward. I believed there was potential in this project. Eventually, I got a group of three friends together to start developing the next iteration.


This time we would use AWS to handle the infrastructure and EC2 to handle the game servers. This project made a lot of progress initially, but we eventually ran into complications. We could not programmatically create EC2 instances through Lambdas, which forced us to pre-provision an EC2 instance and build a script to create new game servers within it.


This is essentially where that iteration of the project ended.

Iteration 2

A few months later, I came up with another approach that I wanted to try. I started this iteration solo, and the plan consisted of at minimum two servers. The first server would handle the web application, and the second would be used to automate the Docker Engine.


The game servers would be hosted in Docker containers, which made programmatically starting and stopping them easy. The main issue with this approach was the design of the APIs. The web app API would send commands to the hosting server API in order to automate Docker.
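The “programmatically start and stop” part can be sketched with a thin wrapper around the Docker CLI. This is an illustration of the idea, not Freighter’s code; the container name is made up.

```python
import subprocess

def docker_args(action: str, container: str) -> list[str]:
    """Build the Docker CLI arguments for starting or stopping a
    game-server container. Keeping this pure makes it easy to test."""
    if action not in ("start", "stop"):
        raise ValueError(f"unsupported action: {action}")
    return ["docker", action, container]

def set_server_state(action: str, container: str) -> bool:
    """Run the command against the local Docker Engine; True on success."""
    result = subprocess.run(docker_args(action, container),
                            capture_output=True)
    return result.returncode == 0

# e.g. set_server_state("start", "mc-survival-01")
```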


This increased the complexity of the infrastructure, especially when attempting to scale. One of my main concerns going into this iteration was the memory consumption of the hosting server’s API. I wanted to reserve as much memory for the game servers as possible, so I couldn’t go with a framework like .NET.


I settled on creating the API using Rocket.rs. While I’m familiar with Rust and can build simple applications, this proved difficult. It would take hours to add endpoints or new functionality to the API, and I would often run into issues.


Many of these issues were framework-specific, but not having a deep understanding of Rust did not help. It goes without saying that this approach failed shortly after starting.

Iteration 3

The third iteration of the project was a success. Since the failure of the last iteration, I had learned of PocketBase. PocketBase is a backend as a service (BaaS) that ships a database (SQLite), an API, and authentication within a single executable.


If you don’t want the prebuilt version of PocketBase, you can also use it as a framework to extend its base functionality using either Go or JavaScript. To alter PocketBase to fit the needs of Freighter, I extended the functionality using Go.


Thanks to PocketBase shipping most of the web application functionality out of the box, I could focus more on the game hosting side of things.


This iteration would still use two servers, but instead of using APIs for the servers to talk to each other, it used SSH. When a user made a request to change a server’s online status in the database, I would first SSH into the hosting server and start or stop the container.


On success, I would then update the database and return the result.
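The important part of that flow is the ordering: touch the hosting server first, and record the change only if it succeeds. A minimal sketch of that ordering, where `run_remote` stands in for the real SSH call and the `db` dict stands in for the PocketBase record:

```python
# Sketch of the "SSH first, update the database second" ordering.
# `run_remote`, `set_online_status`, and the `db` layout are all
# placeholders for illustration, not Freighter's actual code.

def run_remote(host: str, command: str) -> bool:
    """Placeholder: SSH into `host`, run `command`, return success."""
    raise NotImplementedError

def set_online_status(db: dict, server_id: str, online: bool,
                      ssh=run_remote) -> bool:
    host = db[server_id]["host"]
    action = "docker start" if online else "docker stop"
    if not ssh(host, f"{action} {server_id}"):
        return False                      # host unreachable: leave the DB untouched
    db[server_id]["online"] = online      # record the change only on success
    return True
```

Injecting the SSH call as a parameter keeps the ordering logic testable without a real host.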


The next milestone in the project was to create a Docker image that, when given the SIGTERM signal, would gracefully shut down the game server to ensure no progress was lost. This was a learning experience for me because I had little prior knowledge of building my own images.


It took some tinkering, but I was eventually able to capture the SIGTERM signal and then run the stop command in the Minecraft server console.
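The capture-and-stop idea can be sketched as a small wrapper that runs the server as a child process and, on SIGTERM, types Minecraft’s `stop` command into its console so the world saves before the container exits. Inside the image this would wrap something like `java -Xmx1G -jar server.jar nogui`; the wrapper below is an illustration, not Freighter’s actual entrypoint.

```python
import signal
import subprocess

def run_with_graceful_stop(cmd: list[str],
                           stop_command: str = "stop") -> subprocess.Popen:
    """Run `cmd` as a child process; on SIGTERM, write `stop_command`
    to its stdin (the game console) and wait for it to exit cleanly."""
    proc = subprocess.Popen(cmd, stdin=subprocess.PIPE,
                            stdout=subprocess.PIPE, text=True)

    def on_sigterm(signum, frame):
        proc.stdin.write(stop_command + "\n")  # save-and-quit at the console
        proc.stdin.flush()
        proc.stdin.close()

    signal.signal(signal.SIGTERM, on_sigterm)
    proc.wait()                                # returns once the server exits
    return proc
```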


With that done, I had a fully functional product. PocketBase managed my web application, it communicated with the host servers through SSH, and the host servers could start and stop Minecraft servers gracefully. Now, the only thing left to figure out was cost.


As I mentioned earlier, this is a multi-server solution. That means I have to pay for the web application server as well as for the hosting servers themselves. This is more complicated than expected: because PocketBase is a monolithic structure, it does not scale well.


SQLite has no way to scale horizontally, and there is no way to shard or replicate the data without building some kind of sync job between the servers hosting the PocketBase application. Even with a sync job, all instances of PocketBase would grow in lockstep.


Eventually, the two terabytes of data that SQLite can handle would fill up, and every instance of PocketBase would go down. That is too big a problem to overlook.

The Current Iteration

That brings us to the present day. To solve this scaling issue, I’m currently working to migrate the PocketBase functionality over to AWS. From my experience using AWS for the first iteration of Freighter, as well as for other projects, I learned how to build SAM templates that let me define my infrastructure as code. These templates save me a lot of time when iterating on the design.
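For readers unfamiliar with SAM, a template in this spirit might look like the fragment below. Resource names, paths, and the code layout are hypothetical; only the general shape (DynamoDB plus Python Lambdas behind API Gateway) comes from the description here.

```yaml
# Hypothetical SAM template fragment: one DynamoDB table and one
# Python Lambda exposed through API Gateway. Names are illustrative.
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31

Resources:
  ServersTable:
    Type: AWS::Serverless::SimpleTable
    Properties:
      PrimaryKey:
        Name: serverId
        Type: String

  StartServerFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler
      Runtime: python3.12
      CodeUri: src/start_server/
      Events:
        StartApi:
          Type: Api
          Properties:
            Path: /servers/{serverId}/start
            Method: post
```

Once defined, the whole stack can be rebuilt or torn down with a single `sam deploy` or `sam delete`, which is where the iteration-time savings come from.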


The infrastructure is pretty standard. It uses DynamoDB for the database, two API Gateways backed by Python Lambdas, and Secrets Manager to hold sensitive information like JWT signing keys.


I’m currently building my own authentication system to reduce the cost and the number of dependencies required for the project.
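At the core of any such system is signing and verifying JWTs with a secret key (the key itself living in Secrets Manager). A minimal sketch of the HS256 mechanics using only the standard library; this illustrates the technique, not Freighter’s implementation, and omits claim checks like expiry:

```python
import base64
import hashlib
import hmac
import json

def _b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWT requires."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def sign_jwt(claims: dict, key: bytes) -> str:
    """Produce a header.payload.signature HS256 token."""
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64url(json.dumps(claims).encode())
    signing_input = f"{header}.{payload}".encode()
    sig = _b64url(hmac.new(key, signing_input, hashlib.sha256).digest())
    return f"{header}.{payload}.{sig}"

def verify_jwt(token: str, key: bytes) -> bool:
    """Recompute the signature and compare in constant time."""
    header, payload, sig = token.split(".")
    signing_input = f"{header}.{payload}".encode()
    expected = _b64url(hmac.new(key, signing_input, hashlib.sha256).digest())
    return hmac.compare_digest(sig, expected)
```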


By using the minimum number of required dependencies, I can save money and pass the savings on to the end user, making the service even more affordable.

What Now?

As I mentioned at the start, future dev logs will detail what I learned while developing this service. The most recent pitfall, and what I’ll be writing about next, was securing my APIs using a custom Lambda Authorizer, as well as creating Lambda Layers.


As of the time of this writing, I have completed this task. That dev log should be out within the next week or two.


Thanks for reading, and I’ll see you in the next one.