
My 4-Year Journey to Building a Social Network as a Solo-Developer

by Mario Stopfer, October 18th, 2022

Too Long; Didn't Read

Immersive Communities aims to give content creators the ability to engage with their community and have it contribute to the content creation process, while being able to monetize their effort, all in one place. The idea to build a social media platform came from the desire to not just create content as individuals, but to also engage with the community. To be available to as many people as possible, the platform would need to be a website, and with almost no web development experience, none of this seemed achievable. To say that this was not going to be easy would be the understatement of the century.


This is a story, a full exposé, of my solo-dev project, Immersive Communities — a Social Media Platform for Content Creators, which I took up in early 2018 and completed in 2022.


I hope it will serve as a guide to anyone who is starting a large project or is in the middle of building one and needs the motivation to keep going.

The Idea

Way back in early 2018, faint voices started emerging. People began to voice their opinion on the state of the web and social media in general.


Some people were dissatisfied with Facebook and its policies on user privacy, others were complaining about YouTube and its monetization practices, while still others turned their criticism against the very platforms where they were voicing their opinions, like Medium.com.


But what would the alternative to all of these look like?

The Solo-Dev Project – History in the Making

The idea to build a social media platform came from the desire to give content creators the ability not just to create content as individuals, but to also engage with their community and have it contribute to the content creation process, while being able to monetize their effort; all in one place.


The platform should be free to use and anyone should be able to join. It goes without saying that the creators, who invest time into their work, which educates and entertains people around the world, should also be able to make a living from what they love doing.


Would any of this solve all the problems we face today online? No, but it’s an alternative, and that’s what counts. It’s a seed which can, over time, turn into something beautiful.


As a technologist, I also wondered if all of this could be done by a single person.


Would it actually be possible for a single person to step up to the challenge and deliver a robust, enterprise-level platform that people would love using, and if yes, what would it take to deliver such a system?


To be completely honest, I didn’t have an answer to those questions, but I decided to take up the challenge.


In those early days of 2018, I had just around 6 years of work experience as a software developer, which was mostly back-end .NET and some WPF on the front end.


I had also been living in Tokyo, where I had moved less than 3 years earlier, and working overtime was the norm; the idea of taking on additional work was, for all intents and purposes, unreasonable, to say the least.


The platform itself, if it was to be available to as many people as possible, would need to be a website, and with almost no web development experience, none of this seemed achievable.


The fact that it seemed impossible was exactly why I decided to start sooner rather than later. Would all of this be a giant waste of time? Only time would tell, but the sooner I started, the sooner I would find out the answer.


Then, on February 1st of 2018, I started planning.

2018 — The Plan

The first thing I did was admit to myself that thinking this was not going to be easy would be the understatement of the century. I had never done anything like this before, so I literally had no idea what I was getting myself into.


This meant no hobbies or anything resembling a life until this thing was completed. Working overtime at my full-time job and then coming home to work on this project was going to become the new norm.


Would I take a vacation and travel at some point? Yes, but I would have to spend these vacations mostly working as well.


If I’ve learned anything from my previous experience, it’s that being organized is what makes or breaks your project and keeps you on track. The more you prepare and design upfront, the less trial-and-error type of work you need to do later on.


Thinking and planning are certainly easier than building and less time-consuming, so the more I do in the planning phase, the less I would have to discover during the actual development phase, which requires more resources time-wise.


Resources were something I didn’t have, so I had to make up for that with planning and design. I also knew that the web was the place to go when I got stuck, but I decided I would not be using Stack Overflow for this project to ask any new questions.


The reasoning behind this decision was quite simple. Seeing as there will be a lot to learn here, if I just go and ask someone to solve all my problems, I wouldn’t gain any experience.


The further the project progressed, the harder it would become, and I would not have gained any experience to tackle it on my own.


Therefore, I decided to only search the web for already existing answers, but not to ask new questions in order to solve my problems. Once the project would be done, I could engage with Stack Overflow again, but for this particular development goal, it would be off-limits.


I would utilize the OOAD (Object-Oriented Analysis and Design) approach to design the system I wanted to build. The resulting design would tell me how each part works and how it interacts with other parts of the system. Furthermore, I would extract business rules which I would later implement in code.


I then started taking notes and realized there were two main points I had to focus on:


  • Project Design
  • Tech Stack


Since I knew that there were only 24 hours in a day, and most of those I would be spending either at work or traveling, I had to optimize my time carefully.

Project Design

For the Project Design part, to maximize my time, I decided to look at which products already worked well and use those as a source of inspiration.


Clearly, the system needs to be fast and accessible to everyone and since I don’t have time to write code for multiple platforms, the web was the answer.


I then turned to design. Apple, being well known for its widely accepted design practices, was an immediate source of inspiration. Next, I turned to Pinterest and decided that this was the simplest possible design that worked well. The idea behind my decision was the old saying:


Content is king. — Bill Gates, 1996.


This gave me the idea to remove as many unnecessary design details as possible and focus on presenting the content. If you think about it, people will come to your website once because it has a nice design, but they won’t keep coming back unless you have good content.


This had the effect of reducing the time required to design the front end.


The system itself had to be simple. Every user should be able to create and own their own community. Other users should be able to join as members and write articles to contribute content to this community. This feature would be inspired by Wikipedia, where many users can edit the same article.


If there are many people, engaging together on a certain topic, then this tells us that what we have at hand is a community, and as such, should be separate from the rest.


As far as features go, users would need to be able to write not just regular articles and connect them like Wikipedia does, but also write reviews, which would require a different type of article.


Thus, regular articles would be called “Interests” and would be factual and informational in nature, with anyone being able to edit them.


On the other hand, “Reviews” would be based on each interest, and only the author of the review would be able to edit it.


In short, people could collaboratively write about a movie, let’s say “The Matrix”, and edit that article whenever they want. They could add any factual information to the article they wanted.


Then each user would be able to write their own review of that movie. Naturally, the original article would then show a list of all the reviews written for this movie.


It was clear to me, that I also had to add two more important features. The first would be a search option that would search articles in each community. The other feature was recommendations, which the users would be served based on what they liked once they scrolled to the end of the article.


Each user should also be able to post short updates or “Activities” to their profile, which would act like the Wall on Facebook. This was the basic outline I made before I started thinking about the technologies I could use to actually deliver the project.

The Tech Stack

The second thing I focused on was the Tech Stack. Since I had already decided I was going to build a web-based project, I wrote down all the popular and modern technologies which were most commonly used at the time. I would then choose those which were technically closest to what I had already used in my career, in order to spend as little time as possible learning new technologies.


Furthermore, the idea which led my thinking the most during this phase was to make such design decisions that would require me to write as little code as possible, thus saving time.



After extensive research, I settled on the following main Tech Stack:

  • Aurelia (with TypeScript) for the front end
  • WebPack, with Neutrino.JS, for bundling
  • Tailwind CSS for styling
  • AWS Serverless services (AppSync, DynamoDB, Lambda, S3, CloudFront) for the back end
  • Terraform for infrastructure as code



Furthermore, by choosing SaaS services, I would further save time because I would not need to implement these features by myself. The services I decided on were as follows.

  • Cloudinary for media management
  • Embed.ly for embedded content
  • Algolia for search
  • Recombee for recommendations
  • Locize for localization
  • Twilio for private chat
  • Stripe for payments



These were the main technologies I had to learn as quickly as possible to even begin working on the project. Everything else would have to be learned along the way.


At this point, I started learning new technologies. For the main and most important ones, I started reading the following books.



I decided I would work at my full-time job during the day, but evenings were fair game. When I came home, I could study until I went to sleep. As for sleep itself, it would be cut down to 6 hours to gain some more time to study.


I initially predicted I would only need a year to build this project. Then, after almost a year of learning new technologies, it was December of 2018 and I just finished the main reading material, all the while having written exactly 0 lines of code.

The Development Starts

In December 2018, I finally started developing. I set up Visual Studio Code and started building out my development environment.


It was clear to me from day one that I would not be able to maintain all the servers necessary for such a large project. Not just from a technical perspective, but from the budget side as well. Thus, I had to find a solution.


Luckily, I found the solution in the form of DevOps and the Serverless approach to back-end infrastructure.


It was immediately clear to me, even before these approaches became widespread as they are today, that if we can describe something succinctly with code, we can also automate it, therefore, saving time and resources.


With DevOps, I would unify and automate both the front-end and back-end development, while with Serverless, I would remove the need for server maintenance and lower the cost of operation as well.


This philosophy clearly went along the lines of my thinking and the first thing I decided to set up was a CI/CD pipeline with Terraform.


The design consisted of 3 branches, Development, UAT, and Production. I would work each day and when the work was done, I would simply commit the changes to the Development branch in AWS CodeCommit and this would trigger AWS CodeBuild to build my project.


I would keep a Terraform executable in the repository, both for macOS, for local testing, and a Linux-based one, for builds on CodeBuild.


Once the build process started, the Terraform executable would be invoked inside CodeBuild and it would pick up the Terraform code files, thus building out my infrastructure on AWS.


All of these parts would then have to be connected and automated, which I did using AWS CodePipeline, which would move the code from CodeCommit to CodeBuild every time I made a commit.


This would help me keep my code and my infrastructure in sync. Whenever I was ready to move forward, I would simply merge the Development branch to UAT, or the UAT to Production to sync my other environments.

2019 — COVID-19 Hits

Once I finished the CI/CD pipeline for the back end, I would turn to setting up the actual website locally in order to start developing the front end.


The initial step of setting up the front end was to create an Aurelia-based project. The decision behind using Aurelia and not React, which was and still is the most popular choice for a JavaScript framework, was because of the MVVM pattern.


The MVVM pattern is prominently used in WPF desktop apps which I had experience with. Thus, learning React would have taken more time than simply building on what I already knew.


On the other hand, the decision to use Aurelia and not Angular or Vue was based on the philosophy behind Aurelia which is – to have the framework get out of your way. In other words, there is no Aurelia in Aurelia.


What you are basically using while developing with Aurelia is HTML, JavaScript, and CSS, with some added features like data binding to attributes, which I was already familiar with.


Therefore, the decision was final. Next, coming from the C# world which is statically typed, I decided to go with TypeScript over JavaScript.


Next came WebPack. There was a clear need to split the application into chunks which would facilitate lazy loading. Since I already knew that there were going to be many features in the app, it was imperative to build it as a SPA that would be able to load parts on demand.


Otherwise, the user experience would be unacceptable.


To make it easier to handle the WebPack configuration, I decided to add Neutrino.JS to the mix and use its fluent interface to set up WebPack.


It was well known that mobile web browsing was on the rise even back in 2019. To be ready for the modern web, the development approach was defined as follows.

  • Mobile-first, responsive design
  • A native-like, installable experience that works offline



This was facilitated by two main technologies – Tailwind CSS and PWA. When it comes to styling, in order to simplify the system even more, I added Tailwind CSS, which turned out to be one of the best decisions I made, since it is mobile-first by default.


Also, it is utility-based, thus it has high re-usability, which was exactly what I was looking for.


Furthermore, since I was trying to offer a native-like experience, but had no time to build native apps, I decided to go for the next best thing – an app that can be directly installed from the browser and worked offline.


This is what Progressive Web Apps (PWA) aim to give to the user, but a manual setup would be too error-prone and time-consuming, thus I decided on using Google WorkBox which has the Service Worker installation, offline, and caching features built-in.
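Under the hood, a Workbox-powered service worker can stay very small. The following is a minimal sketch, not my actual configuration: the build step injects the precache manifest, and a runtime route keeps images cached.

```typescript
// Minimal Workbox service worker sketch (illustrative, not the production setup).
import { precacheAndRoute } from "workbox-precaching";
import { registerRoute } from "workbox-routing";
import { StaleWhileRevalidate } from "workbox-strategies";

declare const self: ServiceWorkerGlobalScope & {
  __WB_MANIFEST: Array<string | { url: string; revision: string | null }>;
};

// The build (e.g. workbox-webpack-plugin) injects the list of HTML/JS/CSS files here,
// so the app shell keeps working offline.
precacheAndRoute(self.__WB_MANIFEST);

// Serve images from the cache first and refresh them in the background.
registerRoute(
  ({ request }) => request.destination === "image",
  new StaleWhileRevalidate({ cacheName: "images" })
);
```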


Once the core was set up, it was time to take it for a ride online. It was clear that the website should be accessible to anyone at all times, outages do not belong in a modern system, so I decided to set up the system in the following way.


Firstly, the HTML and JS files would be served from an AWS S3 bucket. In order to make them available globally with low latency, they would be fronted by the AWS CloudFront CDN.


A slight setback, in this case, was when I also decided to use the Serverless Framework, because setting up AWS Lambda functions was easier than with Terraform; thus, I introduced yet another technology that I had to take care of.


Once the setup was done, I bought the domains in AWS Route 53, which I used for testing and the production domain – https://immersive.community.


The idea behind the name comes from a similarly named community-based network – Mighty Networks. I simply decided on the word “Immersive” since, at that time, it was trending highly on Google.


Since “immersive.networks” and “immersive.communities” were already taken, I settled on “immersive.community”.


Now that I had the front end launched, it was time to start working on the database. Even though I had used SQL-based, relational databases in the past, they were clearly too slow for this particular project, so I decided to go with a NoSQL database. I chose AWS DynamoDB because of its Serverless offering.


In order to access the database, I chose AWS AppSync, which is a managed GraphQL implementation and is also serverless.

A Multi-Tenant System

At this point, it was time to start solving one of the biggest problems I faced, namely:


How to allow users to join multiple communities but keep the private or restricted data, in each community, separate from each other?


The easiest way to solve this problem would be to create multiple databases, but this has clear limitations because, at some point, we would run out of databases we could create.


It turns out that each AWS account has limitations on how many resources you can create, so this was not a viable solution.


I would then finally solve this problem by assigning a type column to each entry in the DynamoDB database. Each user would have their type set to “user”, while each community would simply be set as “web”.


I would then indicate that a user has joined a community by adding a new row where the key of this row is designated as “user#web_user#web”. Since the user and the community name would be unique, this key would be unique as well, so the user couldn’t join the community multiple times.
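As a rough sketch of this idea (the table name and exact key format below are illustrative, not the real ones), a conditional write on the composite key is enough to make a duplicate join impossible:

```typescript
// Hypothetical sketch: record a membership as a single row whose key combines
// the user and community IDs, so a second join attempt is rejected.
import { DynamoDB } from "aws-sdk";

const db = new DynamoDB.DocumentClient();

export async function joinCommunity(userId: string, webId: string): Promise<void> {
  await db
    .put({
      TableName: "immersive-communities", // assumed table name
      Item: {
        id: `user#${userId}_web#${webId}`, // composite key: unique per (user, community) pair
        type: "membership",
        userId,
        webId,
        createdAt: new Date().toISOString(),
      },
      // Reject the write if a row with this key already exists,
      // i.e. the user has already joined this community.
      ConditionExpression: "attribute_not_exists(id)",
    })
    .promise();
}
```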


If I want to perform an action that can only be performed if a user has joined a community, I would simply use Pipeline Functions provided by AppSync which allow you to query for multiple rows in DynamoDB.


I would then query and check if the user is a member of a community and only if he is, allow the user to perform the action.


This solved the multi-tenancy problem, but one of the largest problems to solve was just around the corner.

Highly Available Architecture

It goes without saying that an enterprise-level system is built with fault tolerance and high availability in mind. This would allow users to continue using the system, even if there are failures in some of its components.


If we want to implement such a system, we should do it with redundancy in mind.


My research led me to the optimal solution in this case, which is an Active-Active Highly Available architecture. It turns out that most services on AWS are already highly available, but AppSync itself is not. Thus, I decided to create my own implementation.


The web didn’t offer a solution to this problem, so I had to build my own. I started thinking globally. Meaning, my visitors would be coming from different regions, and if I were to place my AppSync in the US, then the visitors in Asia would have higher latency.


I solved the latency and high availability problem in the following way. I decided to create 10 different AppSync APIs, one in each region that was available at the time. Currently, the APIs are located in the US, Asia, and Europe.


Furthermore, each API needs to be connected to the corresponding DynamoDB database, located in the same region. Thus, I further created 10 additional DynamoDB tables.


Luckily, DynamoDB offers a Global Table feature that copies the data between the connected DynamoDB tables and keeps them in sync.


Now, regardless of where a user writes to the database, a user in a different region would be able to read that same information, after the data gets synced.


The question which now arose was the following.


How would users be routed to the closest API? Not only that, but if it were the case that one API were to fail, how would we immediately route the call to the next available API?


The solution came about in the form of CloudFront and Lambda@Edge functions. It is an amazing feature of CloudFront which can trigger Lambda@Edge functions in the region where the caller is located.


It should be clear that if we know where the user is located, we can pick the API, inside the Lambda@Edge function, based on where the call is coming from.


Furthermore, we can also get the region of the executing Lambda@Edge function, thus, allowing us to pick the AppSync API in that same region. The first step to implement this solution was to proxy AppSync calls through CloudFront.


Therefore, now, the calls would be directly made to CloudFront instead of AppSync.


I then had to extract the HTTP call from the CloudFront parameters inside the Lambda@Edge function. Once I had the region and the AppSync query, which was extracted from the CloudFront parameters, I would make a new HTTP call to the corresponding AppSync API.


When the data would return, I would simply pass it back to CloudFront through the Lambda@Edge function. The user would then get the data that was requested.


But we did not solve the Active-Active requirement just yet. The goal was now to detect the point when an API is unavailable and then switch to a different one. I solved this problem by checking the result of the AppSync call. If it was not an HTTP 200 response, the call clearly failed.

I would then simply choose another region from a list of all available regions and then make a call to the next AppSync API in that region. If the call succeeds, we return the result, and if it fails, we try the next region until we succeed.


If the last region fails as well, then we simply return the failed result.
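The failover logic itself boils down to a loop over the regions. The sketch below is a simplified version of the idea, assuming one AppSync endpoint and API key per region and a runtime with a global fetch; the URLs and key handling are placeholders, not my actual configuration.

```typescript
// A minimal round-robin failover sketch across regional AppSync endpoints.
const APPSYNC_ENDPOINTS: Record<string, string> = {
  "us-east-1": "https://example-us.appsync-api.us-east-1.amazonaws.com/graphql",
  "eu-west-1": "https://example-eu.appsync-api.eu-west-1.amazonaws.com/graphql",
  "ap-northeast-1": "https://example-ap.appsync-api.ap-northeast-1.amazonaws.com/graphql",
};

export async function callAppSync(
  preferredRegion: string,
  query: string,
  variables: Record<string, unknown>,
  apiKey: string
): Promise<unknown> {
  // Start with the region closest to the caller, then fall back to the others.
  const regions = [
    preferredRegion,
    ...Object.keys(APPSYNC_ENDPOINTS).filter((r) => r !== preferredRegion),
  ];

  let lastError: unknown;
  for (const region of regions) {
    try {
      const response = await fetch(APPSYNC_ENDPOINTS[region], {
        method: "POST",
        headers: { "Content-Type": "application/json", "x-api-key": apiKey },
        body: JSON.stringify({ query, variables }),
      });
      // Anything other than HTTP 200 is treated as a failed region.
      if (response.status === 200) {
        return await response.json();
      }
      lastError = new Error(`Region ${region} returned ${response.status}`);
    } catch (err) {
      lastError = err; // network failure: try the next region
    }
  }
  // Every region failed: surface the last error to the caller.
  throw lastError;
}
```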


This is simply the Round Robin implementation of Active-Active Highly Available architecture. With this system now in place, we have actually implemented the following three features:


  • Global Low Latency
  • Region-based Load Balancing
  • Active-Active High Availability


We clearly have low latency, on average, for each global user, since each user will get routed to the closest region he is invoking the call to the API from. We also have region-based load balancing, because users will be routed to multiple APIs in their region.


Lastly, we have an Active-Active High Availability, since the system will stay functional even if some of its APIs or databases fail because users will be routed to the next available API.


It would actually not be enough to simply handle high availability for the APIs. I wanted to have it for all the resources, including the HTML and JavaScript files that were served from CloudFront.


This time, I used the same approach but created 16 AWS S3 buckets. Each bucket would serve the same files, but would be located in different regions.


In this case, when the user visits our website, the browser would be making multiple HTTP calls to either HTML, JS, JSON, or image files. The Lambda@Edge would, in this case, have to extract the URL which was currently being called.


Once I had the URL, I would determine the file type and make a new HTTP call to the corresponding S3 bucket in that region.


Needless to say, if the call succeeds, we would return the file, while if it fails, we would use the same routing system as previously, thus also providing an Active-Active Highly Available system.


With this system now in place, we have reached another milestone and placed another piece of the cornerstone for our enterprise-level infrastructure. This was by far the hardest system to develop and it took 3 months to complete.


As it turns out, we had more problems to solve and this system would prove useful again.

Dynamic PWA Manifest

PWA is an amazing web technology and will be used by more websites as time goes on, but back in 2019, things were only getting started.


Since I decided to serve each community on a separate subdomain, I also wanted to give users the ability to install their branded PWA, with an appropriate title and app icon as well.

As it turns out, the PWA Manifest file, which defines all these features, does not work based on subdomains. It can only define one set of values based on the domain it’s served from.


The fact that I could already proxy HTTP calls using CloudFront and Lambda@Edge came in handy here as well.


The goal was now to proxy each call to the manifest.json file. Then, depending on which subdomain the call came from, we would fetch the corresponding community data (app icon, title, etc.) and dynamically populate the manifest.json with these values.

The file would then be served to the browser and the community would then be installed as a new PWA app on the user's device.
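In outline, the edge function looks something like the sketch below. This is a simplified illustration: the getCommunity helper is a hypothetical stand-in for the lookup of community data, and the manifest fields are trimmed down.

```typescript
// Hypothetical Lambda@Edge handler: intercept requests for manifest.json,
// look up the community for the calling subdomain, and return a generated manifest.
import type { CloudFrontRequestEvent, CloudFrontRequestResult } from "aws-lambda";

declare function getCommunity(subdomain: string): Promise<{ title: string; iconUrl: string }>;

export async function handler(event: CloudFrontRequestEvent): Promise<CloudFrontRequestResult> {
  const request = event.Records[0].cf.request;
  if (!request.uri.endsWith("/manifest.json")) {
    return request; // not the manifest: let CloudFront continue to the origin
  }

  // e.g. "matrix.immersive.community" -> "matrix"
  const host = request.headers.host[0].value;
  const subdomain = host.split(".")[0];
  const community = await getCommunity(subdomain);

  const manifest = {
    name: community.title,
    short_name: community.title,
    start_url: "/",
    display: "standalone",
    icons: [{ src: community.iconUrl, sizes: "512x512", type: "image/png" }],
  };

  // Return the generated manifest directly from the edge, skipping the origin.
  return {
    status: "200",
    statusDescription: "OK",
    headers: { "content-type": [{ key: "Content-Type", value: "application/json" }] },
    body: JSON.stringify(manifest),
  };
}
```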

Moving to the Front End

Once I had these crucial steps figured out, it was time to start working on the front end. In line with the previous subdomain-based requirement, we also had to figure out how to load a different community and its data based on the subdomain.


This would also require loading different website layouts, which would be used in each community.


For example, the homepage would need to list all the available communities, while other subdomains would need to list the articles on each of those communities.


It goes without saying that in order to solve this problem, we cannot simply build multiple different websites from scratch. This would not scale, so instead, we would need to reuse as many of the controls and features as possible.


These features would be shared between these two community types and then would be loaded only if they are required. In order to maximize the re-usability of the code, I defined all controls as 4 different types.


  • Components
  • Controls
  • Pages
  • Communities


The smallest custom HTML elements like <button> and <input> were defined as Components. Then we could reuse these Components in Controls which are sets of these smaller elements, for example, the profile info Control would display the user’s profile image, username, followers, etc.


We would then again, be able to reuse these elements in higher-level elements, which in our case are — Pages.


Each page would basically represent a route, for example, the Trending page, where we could see all the activities or the Interest page where the actual article text would be displayed. I would then compose each Page from these smaller Controls.


Lastly, the highest level elements would be defined in Communities, based on their type. Each Community element would then simply define all the lower-level Pages it requires.


The Aurelia Router came in handy in this case seeing as how you can load the routes dynamically. The implementation was handled in the following way.


Regardless of the subdomain, when the website starts loading, we register the two main branches, which are implemented as Aurelia components. These represent two different community types. I then defined two different web types or layouts:


  • Main
  • Article


The “main” type simply represents the website layout which will be loaded when the user lands on the main https://immersive.community page. Here, we will display all the communities with all the corresponding controls.

On the other hand, once the user navigates to a subdomain, we would then need to load a different layout. In other words, instead of communities, we would load articles and corresponding features and routes, for example, the ability to publish and edit articles.


This would either enable or disable certain routes, based on the community type we were located on.


Our Aurelia and WebPack setup splits the JavaScript into appropriate chunks, so that routes and features which will not be needed, do not get loaded at all, thus improving speed and saving on bandwidth.

At this point, once we determine which subdomain we are located on, we would load the community and the user data for this specific community, thus having successfully implemented the solution.
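A rough sketch of this with Aurelia 1's configureRouter is shown below; the module names and the subdomain check are illustrative rather than the exact ones used on the site.

```typescript
// Sketch: register a different route table depending on whether we are on the
// apex domain (communities overview) or on a community subdomain (articles).
import { Router, RouterConfiguration } from "aurelia-router";

export class App {
  router!: Router;

  configureRouter(config: RouterConfiguration, router: Router): void {
    this.router = router;
    config.title = "Immersive Communities";

    // "immersive.community" has two parts; "matrix.immersive.community" has three.
    const isMainSite = window.location.hostname.split(".").length === 2;

    if (isMainSite) {
      config.map([
        { route: ["", "home"], name: "home", moduleId: "pages/communities", title: "Home" },
        { route: "trending", name: "trending", moduleId: "pages/trending", title: "Trending" },
      ]);
    } else {
      config.map([
        { route: ["", "home"], name: "home", moduleId: "pages/articles", title: "Home" },
        { route: "interest/:id", name: "interest", moduleId: "pages/interest", title: "Interest" },
        { route: "publish", name: "publish", moduleId: "pages/publish", title: "Publish" },
      ]);
    }
  }
}
```

Because WebPack splits each moduleId into its own chunk, the routes that a given community type never registers are simply never downloaded.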

The Masonry Layout

It was my reasoning that we should try to keep the design as simple as possible. So, since the users are coming to the website for the content, we will focus on displaying the content, as opposed to secondary features.


The articles would need to be displayed in lists, but they should not look stale, thus I’ve decided that each article would simply consist of the following.


  • A Cover Photo
  • Article Title
  • Article Category
  • Date when the article was posted or edited
  • Author’s Profile


The main way I made sure the list of articles wouldn’t look stale was to let the user choose the aspect ratio of each cover photo for their article. The inspiration came from the way Pinterest displays its pins, so each article would also have a different aspect ratio.


This required me to implement the masonry layout, which can’t be chosen out-of-the-box in either CSS Grid or FlexBox.

Luckily, there are several useful open-source implementations that I tried out and used for the layout. I had to add several improvements, like loading paginated data and scaling with the screen size.
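The core idea behind any masonry layout is simple enough to show in a few lines. This is only a sketch of the principle, not the open-source implementation I actually used: each card goes into whichever column is currently the shortest.

```typescript
// Minimal masonry sketch: balance cards across columns by running column height.
interface Card {
  id: string;
  aspectRatio: number; // height / width chosen by the author for the cover photo
}

export function distributeIntoColumns(cards: Card[], columnCount: number): Card[][] {
  const columns: Card[][] = Array.from({ length: columnCount }, () => []);
  const heights = new Array(columnCount).fill(0);

  for (const card of cards) {
    // Find the shortest column so far and place the card there.
    const shortest = heights.indexOf(Math.min(...heights));
    columns[shortest].push(card);
    heights[shortest] += card.aspectRatio; // widths are equal, so height ~ aspect ratio
  }
  return columns;
}
```

Pagination then just appends the next batch of cards and re-runs the distribution, while the column count changes with the screen breakpoint.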


And then…


In November 2019, the first signs of COVID-19 started to appear. The world was soon thrown into turmoil and nobody had a clue what was going on, but it would change the world and how we interacted with each other in ways nobody could imagine.

Soon after, we would start working from home. This would have a great impact on my development process since I didn’t have to travel to work anymore. Ironically, the world came crashing down, while I got the break that I needed!

Interests and Reviews

Back in the development world, the idea behind writing articles on Immersive Communities was based on collaboration. To this end, I went with Wikipedia as a basis for the collaborative effort.


Also, community websites like Amino Apps and Fandom.com and blogging website HubPages.com played their role as well.


Writing blog posts as a single person can be a good start, but we can go beyond that by having people write articles together.


Once we add hyperlinks to the text and connect these articles written by different people, we basically create a community where people come together to engage in topics they are interested in.


I’ve decided to define two types of articles, namely,


  • Interests
  • Reviews


Interests would be short articles, approximately 5,000 characters long, which would factually describe any particular interest a person might have. Then, each person could write a review, with a rating, of this particular interest.


The main interest page would then hold a reference to all the reviews written for this particular interest. The main difference would be that anyone can edit Interests, but only the person who authored a Review could edit it, thus adding a personal touch to each article.


Here, our earlier decision to go with CloudFront to proxy AppSync calls came back to bite us. It turns out that CloudFront only supports query string lengths up to 8,192 bytes, thus, we cannot save data that is longer than that.


As far as each article goes, each interest could be liked and commented on. Thus, the users could come together and discuss how each article will be written and edited. Furthermore, each Interest could be added to the user’s profile page, for quick access.


Once all these features were in place, the end of the year came. The situation looked good and I was certain the project would be completed next year. This assumption did not turn out to be accurate, to say the least.

2020 — Full Speed Ahead

The year more or less started off well. The economy still held up somewhat, but after a while it started to decline. The markets started responding to the pandemic and, likewise, prices started rising.


By early 2020, I had put in a lot of work but still didn’t have a truly working product. There was a lot left to be done, but I was confident in the outcome, so I continued to push forward.


At my day job, the work hours were extended as well and we had to reach our deadlines faster than usual. It goes without saying that I had to reorganize my schedule and the only way to save some more time would be to sleep for only 4 hours each night.


The idea was to come home by 6 or 7 PM and then go straight to work on the project. I could then work until 3 or 4 AM and then go to sleep. I would then have to wake up around 7 AM and quickly get to my day job.


This would of course, not be enough sleep each night, but I figured that I would make up for that time by sleeping for 12 hours during the weekends. I’ve also scheduled all vacation days and public holidays for work as well.


The new system was set up, and I proceeded as planned.

The Markdown Editor

It goes without saying that an article-writing website should have an easy-to-use text editor. During early 2020, Markdown emerged as a very popular way to write text. I decided that Immersive Communities would have to support it out of the box.


This would not only require me to write the Markdown but display it as HTML as well. The Markdown-It library would be used to transform Markdown into HTML. But there was an additional requirement, so the complete list of different media we should display is as follows.


  • Text
  • Image
  • Video
  • Embeds


Furthermore, images and videos should be displayed as a slider where users could swipe images like on Instagram. This would require a mix of Markdown and other HTML elements.


The editor would be split into several parts, with two types of input fields: the text field and the media field. Each field in the editor can be moved up or down, which was quite easy to implement using Sortable.js.


When it comes to input fields, the Markdown field was simple enough to create with a <textarea> element. The editor also loads the Inconsolata Google Font, which gives the typed text a typewriter look.

Furthermore, in order to actually style the text, a bar was implemented which would add Markdown to the text. The same could be done with keyboard shortcuts using Mousetrap.js. Now, we can easily add bold text in the form of ** Markdown tags using Control+B, etc.


While typing, it’s only natural to have the <textarea> element expand as the amount of text increases, so I used the Autosize.js library to implement this feature.


The media field would be able to display either images, videos, or iframes which would contain embedded websites. The type of media field would switch based on the type of media itself. I used Swiper.js to implement the swiping between images.


The video component was implemented using the Video.js library.


The issues started arising when it came time to actually upload the media. As far as images go, it was easy to use the browser’s File API to load photos and videos from your device. What I then had to do was first transform images, which might have been in HEIC format, into JPEG.


Then I would compress them, before uploading them to the back end. Luckily, Heic-Convert and Browser-Image-Compression libraries served this purpose well.
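The preprocessing step looks roughly like the sketch below. I am only showing the compression call from browser-image-compression, which I know well; the HEIC conversion is indicated by a hypothetical stand-in function rather than the real library call.

```typescript
// Sketch of upload preprocessing: HEIC -> JPEG, then compress before upload.
import imageCompression from "browser-image-compression";

declare function convertHeicToJpeg(file: File): Promise<File>; // stand-in for the HEIC conversion step

export async function prepareImageForUpload(file: File): Promise<File> {
  // iPhones often produce HEIC files, which most browsers cannot display,
  // so they are converted to JPEG first.
  const jpeg = file.type === "image/heic" ? await convertHeicToJpeg(file) : file;

  // Shrink the image before uploading to save bandwidth and storage.
  return imageCompression(jpeg, {
    maxSizeMB: 1,
    maxWidthOrHeight: 1920,
    useWebWorker: true,
  });
}
```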


Another issue came in when I had to choose the correct image aspect ratio and crop it before uploading. This was implemented using Cropper.JS but unfortunately, didn’t work out of the box on the Safari browser.


I spent quite a lot of time setting the appropriate CSS in order for the image to not overflow from the container. In the end, the user can easily load an image from his device, zoom in and out, and also crop the image before uploading.


Once everything was completed, the media would be uploaded to Cloudinary, which is a service for managing media files.


It was then time to put all of this together and display it to the user in the form of articles. I was fortunate enough that Aurelia has a <compose> element which can load HTML dynamically.


Therefore, depending on the input type, I would load either media elements or Markdown elements, which would be transformed into HTML.


This HTML would then have to be styled with CSS, especially the HTML tables which I would transform depending on the screen size. On larger screens, tables would be shown in their regular horizontal layout, while on smaller screens, they would be shown in a vertical layout.


This would require an event-driven approach which would tell us when and how the screen size is changing. The best library to use in this case was RxJS, which handled the “resize” events, and I was able to format the table accordingly.
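A minimal sketch of that resize handling is shown below; the breakpoint value and the CSS class name are assumptions for illustration.

```typescript
// Debounce window "resize" events with RxJS and switch the table layout class
// only when the viewport actually crosses the breakpoint.
import { fromEvent } from "rxjs";
import { map, debounceTime, distinctUntilChanged } from "rxjs/operators";

const BREAKPOINT = 768; // assumed tablet breakpoint in pixels

fromEvent(window, "resize")
  .pipe(
    debounceTime(100), // avoid re-layout on every single resize event
    map(() => window.innerWidth < BREAKPOINT),
    distinctUntilChanged() // only react when we cross the breakpoint
  )
  .subscribe((isNarrow) => {
    document.querySelectorAll("table").forEach((table) => {
      table.classList.toggle("vertical-layout", isNarrow); // hypothetical CSS class
    });
  });
```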

Improving the Data Input

I then came back to the articles. I had to change the way articles were being saved to the database since it was the case that multiple people could be modifying the article at the same time.


I would then save the new article as an initial article type, but the actual data of each article would be saved as a version. I would then be able to track which user changed each article and when.


This enabled me to prevent a new version from being saved if the user didn’t load the latest version first. Also, if a certain update was inappropriate, it could be disabled and a previous version would then be visible again. Drafts for each article would be saved in the same way.


As far as actual data input goes, I decided to implement it as a pop-up. The pop-ups themselves would not simply appear on the screen but would slide up from the bottom. Furthermore, it would be possible to swipe inside the pop-up.


For this purpose, I reused the Swiper.Js library, while all the other animations were done using Animate.CSS library.


The pop-up was not simple to implement because it required scaling with the screen size. Thus, on larger screens, it would take 50% of the screen width while on smaller screens it would take 100% of the width.


Furthermore, in certain cases, like with the list of followers, I implemented the scroll to be contained within the pop-up. This means that the list which we were scrolling did not stop at the top but would disappear when scrolling.


I also added further styling and dimmed the background and disabled scrolling or clicking outside the pop-up. On the other hand, the Preview pop-up for the article editing system moves with the screen.


This was inspired by Apple’s Shortcuts app and how its pop-ups appear, which also goes for the pill buttons and titles above the elements.

The Navigation Bar

One of the most important UI features which I implemented was further inspired by the iPhone, which is its navigation bar. I’ve noticed that almost all mobile apps have a fairly basic navigation bar, with simple and small icons which don’t really fit into the overall design of the application.


I’ve decided to simply replicate the iOS bar and use it throughout the website. Needless to say, it should not be always visible, but should instead disappear when we scroll down and appear when we scroll up.


When the user is scrolling down, we assume he is interested in the content and is not going to navigate away from the current page, thus we can hide the bar.


On the other hand, if the user is scrolling up, he might be looking for a way to leave the page, so we might as well show the bar again.


There are four buttons on the bar and they allow the user to navigate to the four main parts of the website. The Home button navigates to the homepage of each community. The Trending button navigates to the Trending page where the user can see all the recent activities which other users have posted.


The next button is the Engage button which navigates to the list of all the features and settings the community offers. Lastly, the Profile button leads us to our profile page.


It was also necessary to take into account larger screens, so the bar actually moves to the right side of the screen when displayed on a large screen. It becomes sticky and does not move anywhere at that point.

Real-Time Batch Processing

Once the most important work on the front end was done, it was time to visit the back end once more. This part of the system would prove to be one of the most complex to implement but ultimately, very important and would also make it quite easy to proceed with other features.


In Object Oriented Programming, there exists a concept of Separation of Concerns, where we keep our functions simple and make them do only one thing, which they are meant to do.


Furthermore, the idea of Aspect Oriented Programming is specifically about the separation of concerns, where we need to separate business logic from other cross-cutting concerns.


For example, saving a user to a database should naturally be accompanied by logging, while the saving of the user is being processed. But the code for these two features should be kept separate.


I’ve decided to apply this reasoning across the board and extract as many features from the UI, which are not important to the user and move them to the back end.


In our case, we are mostly concerned with saving data to the database, which relates to communities, articles, comments, likes, and so on.


If we want to keep track of how many likes an article gets, we could have a process that counts all the likes for each article and updates them periodically.


Since we are here dealing with a large amount of data that is stored in the database and potentially a large amount of data that is constantly flowing to the database, we will need to employ real-time data processing in order to handle this situation.

I have chosen AWS Kinesis for this task. Kinesis is able to ingest large amounts of real-time data and we can write SQL queries to query and batch this data in near real-time as well. By default, Kinesis will batch data for either 60 seconds or the batch reaches 5 MB, whichever comes first.


Thus in our case, we will query the incoming data, meaning, the creation of new communities, addition or deletion of articles, users, activities, etc., and update the database every minute with fresh data. The question which now arises, is how do we get the data into Kinesis in the first place?


Our database of choice, DynamoDB, is actually able to define triggers that are invoked, in the form of Lambda functions, whenever data is either added, removed, or modified. We would then catch this data and send it to Kinesis for processing.


It just so happens, that one of our earlier decisions would make this process slightly harder to implement because we are not dealing with 1 database, but are actually dealing with 10 databases.


Therefore, once the data is added, the Lambda functions will be invoked 10 times instead of once, yet we need to handle each case because the data could come from any database since they are located in different regions.


I solved this problem by filtering out data that is copied as opposed to the original data which was added to the database by the user.


The “aws:rep:updateregion” column gives us this information and we can determine if we are dealing with the data in the region where it was inserted or if it represents copied data.


Once this problem was out of the way, we would simply filter, either the addition of new data or its removal. Furthermore, we would filter the data based on its type, meaning either we are dealing with data that represents a community, article, comment, etc.


We then collect this data, mark it as either “INSERT” or “DELETE” and pass it on to Kinesis. These ideas from the Domain-Driven Design approach are called Domain Events and allow us to determine which action happened and update our database accordingly.
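In outline, the stream handler looks like the sketch below. This is a simplified version assuming AWS SDK v2; the stream name and the attribute names other than "aws:rep:updateregion" are illustrative.

```typescript
// Sketch: forward only original (non-replicated) inserts and deletes to Kinesis.
import { Kinesis } from "aws-sdk";
import type { DynamoDBStreamEvent } from "aws-lambda";

const kinesis = new Kinesis();

export async function handler(event: DynamoDBStreamEvent): Promise<void> {
  const records = event.Records
    // Keep only inserts and deletes, matching the Domain Events described above.
    .filter((r) => r.eventName === "INSERT" || r.eventName === "REMOVE")
    .filter((r) => {
      const image = r.dynamodb?.NewImage ?? r.dynamodb?.OldImage;
      // Global Tables stamps each item with the region it was written in;
      // skip items that were merely replicated into this region.
      return image?.["aws:rep:updateregion"]?.S === process.env.AWS_REGION;
    })
    .map((r) => {
      const image = r.dynamodb?.NewImage ?? r.dynamodb?.OldImage;
      const domainEvent = {
        action: r.eventName, // "INSERT" or "REMOVE"
        type: image?.type?.S, // "web", "interest", "like", ...
        id: image?.id?.S,
      };
      return {
        Data: JSON.stringify(domainEvent),
        PartitionKey: domainEvent.id ?? "unknown",
      };
    });

  if (records.length > 0) {
    await kinesis.putRecords({ StreamName: "domain-events", Records: records }).promise();
  }
}
```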




We then turn our attention to Kinesis. Here, we had to define three main parts of the system:


  • AWS Kinesis Data Streams
  • AWS Kinesis Firehose
  • AWS Kinesis Data Analytics


The Kinesis Streams allow us to ingest data in real-time in large amounts. Kinesis Analytics is a system that allows us to actually query this data in batches and aggregate it based on a rolling time window.


Once the data is aggregated, we would push each result further into Kinesis Firehose, which can handle large amounts of data and store it in a destination service, which in our case, is an S3 bucket in the JSON format.


Once the data reaches the S3 bucket, we trigger another Lambda function and handle this data in order to update the DynamoDB database.


For example, if 5 people liked an Interest in the last minute, we would find this data in our JSON file. We would then update the like count for this Interest and either increment or decrement the like count. In this case, we would simply increment it for 5 likes.
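The counter update itself can be done atomically with a DynamoDB ADD expression, so no read-modify-write cycle is needed. The sketch below assumes the aggregated rows have already been parsed from the S3 object; the table name and field names are placeholders.

```typescript
// Sketch: apply aggregated like deltas from a Kinesis/S3 batch to DynamoDB.
import { DynamoDB } from "aws-sdk";

const db = new DynamoDB.DocumentClient();

export async function applyLikeCounts(
  rows: Array<{ interestId: string; likeDelta: number }>
): Promise<void> {
  for (const row of rows) {
    await db
      .update({
        TableName: "immersive-communities", // assumed table name
        Key: { id: row.interestId },
        // ADD atomically increments (or decrements, for negative deltas)
        // the stored counter.
        UpdateExpression: "ADD likeCount :delta",
        ExpressionAttributeValues: { ":delta": row.likeDelta },
      })
      .promise();
  }
}
```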


Using this system, the statistics of every community would stay up to date within a minute.


Furthermore, we would not need to write and execute complex queries when we need to display aggregate data, since the exact result is stored in the fast DynamoDB database in each record, thus increasing the query speed for each record.


This improvement is based on the idea of data locality.

Cloudinary

It was now time to start implementing 3rd party services which would handle the features which I needed to have but was easier to buy a subscription for than build on my own. The first service I implemented was Cloudinary, which is a service for media management.


I’ve set up all the presets on Cloudinary to eagerly transform images for the following responsive screen breakpoints.


  • 576 px
  • 768 px
  • 992 px
  • 1200 px


These would also be breakpoints set in Tailwind CSS where our website would conform to different screen sizes for mobile phones, small tablets, large tablets, and computer monitors.


Then, depending on the current screen size, we would appropriately request the eagerly created images from Cloudinary using the srcset attribute on the <img> element.


This would help us to save on bandwidth and shorten the time to load images on mobile devices.
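As a rough sketch of what this looks like (the cloud name and public ID below are placeholders), the srcset string can be generated directly from the breakpoints and Cloudinary's URL-based width transformations:

```typescript
// Build a srcset string from the responsive breakpoints using Cloudinary's
// w_<width> URL transformation.
const CLOUD_NAME = "my-cloud"; // placeholder
const BREAKPOINTS = [576, 768, 992, 1200];

export function cloudinarySrcSet(publicId: string): string {
  return BREAKPOINTS.map(
    (width) =>
      `https://res.cloudinary.com/${CLOUD_NAME}/image/upload/w_${width},c_limit,q_auto,f_auto/${publicId} ${width}w`
  ).join(", ");
}

// Usage: img.srcset = cloudinarySrcSet("covers/matrix"); img.sizes = "100vw";
```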


As far as the video feature goes, after it had been implemented, I decided to drop it because the pricing for the videos at Cloudinary was too expensive. So, even though the code is there, the feature is currently not used but might become available later on.


This will require me to build a custom system on AWS in the future.

Embed.ly

I decided to use Embed.ly to embed content from popular websites such as Twitter, YouTube, etc.


Unfortunately, this did not work without issues so I had to use several techniques to manually remove Facebook and Twitter scripts from the website because they would interfere with the embedded content after it gets loaded multiple times.

Algolia

When it comes to search, I chose Algolia and implemented the search for communities, activities, articles, and users. The front-end implementation was simple enough.


I simply created a search bar which, when clicked, would hide the rest of the application and while we’re typing, would display the result for the specific subdomain we are currently browsing.


Once we press “Enter”, the masonry on the home page displays the articles which fit the query. It goes without saying that I also had to implement the pagination which would load the results incrementally, to mimic the look and feel of Pinterest.


The problem came in when I realized that there was no way to actually search the activities unless you stored the whole text in Algolia, which I wanted to avoid. I, therefore, decided to store only relevant tags for each activity, but the question was how to extract relevant tags from each activity.


The answer came in the form of AWS Translate and AWS Comprehend. Since the amount of items that would be added to the database would be large and we would like to add this data to Algolia, we might overload the API if we were to add each record separately.


We would instead like to handle them in real-time and in batches, therefore we would again employ Kinesis as a solution.


In this case, each addition of a new item to the database would trigger a Lambda function which would send that data to Kinesis Data Streams, which would in turn send the data to Kinesis Firehose (no need for Analytics this time) and further store them in an S3 bucket.


Once the data is safely stored, we would trigger a Lambda function which would send it to Algolia, but before that, we would need to process this data.


In particular, we would need to process activities, from which we would strip out Markdown text using the markdown-remover library. We would then be left with plaintext. Once we have the actual text, we can proceed with extracting the relevant tags which will be used for the search.


This can be easily done using the AWS Comprehend service, but the problem is that it supports only some languages. Thus, if a user is writing in a language that is not supported, we would not be able to extract the tags.


In this case, we simply use AWS Translate and translate the text to English. Then we proceed to extract the tags and then we translate them back into the original language.


Now, we simply store the tags in Algolia as intended.
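The tag-extraction step, in outline, looks like the sketch below. It assumes AWS SDK v2, and the supported-language list shown is only a partial example, not the full set Comprehend handles.

```typescript
// Sketch: extract key phrases as tags, translating to/from English when needed.
import { Comprehend, Translate } from "aws-sdk";

const comprehend = new Comprehend();
const translate = new Translate();
const SUPPORTED = ["en", "es", "fr", "de", "it", "pt", "ja"]; // partial list, for illustration

export async function extractTags(plainText: string): Promise<string[]> {
  const { Languages } = await comprehend.detectDominantLanguage({ Text: plainText }).promise();
  const language = Languages?.[0]?.LanguageCode ?? "en";

  // Translate to English if Comprehend can't extract key phrases in this language.
  const supported = SUPPORTED.includes(language);
  const text = supported
    ? plainText
    : (await translate
        .translateText({ Text: plainText, SourceLanguageCode: language, TargetLanguageCode: "en" })
        .promise()).TranslatedText;

  const { KeyPhrases } = await comprehend
    .detectKeyPhrases({ Text: text, LanguageCode: supported ? language : "en" })
    .promise();
  const tags = (KeyPhrases ?? []).map((p) => p.Text ?? "").filter(Boolean);

  if (supported) {
    return tags;
  }
  // Translate the extracted tags back into the original language.
  return Promise.all(
    tags.map(async (tag) =>
      (await translate
        .translateText({ Text: tag, SourceLanguageCode: "en", TargetLanguageCode: language })
        .promise()).TranslatedText
    )
  );
}
```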

Recombee

One of the most important features Pinterest has is its recommendation engine. Once the user clicks on a Pin, he is immediately shown the full-size image of the Pin, while under the image, we can see the recommendations that the user might like, based on the current Pin.


This is a very good way to increase user retention and make them keep browsing the website. In order to implement this feature, which in my case would have to show similar articles to users, I chose Recombee — which is a recommendation engine SaaS.


The implementation was easier this time as opposed to Algolia since I reused the same principles. Seeing as how we will need to recommend communities, articles, and activities, for each new item that is created by a user, I would use Kinesis to batch these items and send them to Recombee.


The recommendation process is based on Views, meaning every time a user sees an article, we would send this View for this specific user and the article, to Recombee.


We can also assign other actions to the items in Recombee, based on how the user interacts with them. For example, writing a new Interest would be mapped to a Cart Addition for that Interest. If a user likes an Interest, this would be an addition to the Rating.


If a user joins a community, this would be mapped to the Bookmark for this community.


Based on this data, Recombee would create recommendations for users.
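In code, the mapping described above might look like the sketch below, using the recombee-api-client package. The database ID, token, region, and item IDs are placeholders, and the exact requests I send differ in detail.

```typescript
// Sketch: map user actions to Recombee interactions and fetch related items.
import * as recombee from "recombee-api-client";

const rqs = recombee.requests;
const client = new recombee.ApiClient("immersive-db", "secret-token", { region: "eu-west" }); // placeholders

export async function trackInteractions(userId: string, interestId: string, webId: string) {
  // A user viewed an article: the basic signal recommendations are built on.
  await client.send(new rqs.AddDetailView(userId, interestId));

  // Writing a new Interest is mapped to a cart addition for that item.
  await client.send(new rqs.AddCartAddition(userId, interestId));

  // Liking an Interest is mapped to a rating (Recombee ratings range from -1 to 1).
  await client.send(new rqs.AddRating(userId, interestId, 1));

  // Joining a community is mapped to a bookmark of that community.
  await client.send(new rqs.AddBookmark(userId, webId));

  // Articles related to the one currently being read, for this user.
  return client.send(new rqs.RecommendItemsToItem(interestId, userId, 10));
}
```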


On the front end, I would simply get the article that the user is currently reading, and get the recommendation data for this specific article and the user. This would be displayed at the bottom of each article, as a paginated masonry list.


This would give the user a list of potential articles he might be interested in reading.

Locize

Seeing as how the website would be aiming for a global audience from the start, I had to implement localization as well. For the initial release, I decided to go with 10 languages and settled for a SaaS service — Locize, which is implemented based on the i18next localization framework.


We needed to localize words based on quantity, meaning either singular or plural forms, and we had to localize time as well, seeing as how we display the time when each article was created or last updated.


I’ve chosen English as the default language and translated all the words using Google Translate into other languages like German, Japanese, etc. It is again very convenient that Aurelia supports localization as well.


Once all the translations were done, I imported the translated JSON files into the application and had them split based on the community type, so that we don’t load unnecessary text that won’t be used.


Aurelia then allows us to simply use templates and binding which would automatically translate the text. But I also used Value Converters which would format the time, to show how long it has been since an article was written, as opposed to showing an actual date.


Furthermore, I had to format the numbers as well, thus instead of showing the number 1000, I would display 1K. All these features were handled by libraries such as Numbro and TimeAgo.
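Two of those value converters could look roughly like the sketch below, assuming the timeago.js and numbro packages; the converter names are illustrative.

```typescript
// Sketch of Aurelia value converters: relative time ("3 days ago") and compact numbers (1000 -> "1k").
import { format } from "timeago.js";
import numbro from "numbro";

export class TimeAgoValueConverter {
  toView(value: string | Date): string {
    return format(value); // e.g. "3 days ago" instead of an absolute date
  }
}

export class CompactNumberValueConverter {
  toView(value: number): string {
    return numbro(value).format({ average: true }); // e.g. 1000 -> "1k"
  }
}

// In a template: ${article.updatedAt | timeAgo} and ${community.memberCount | compactNumber}
```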

Twilio

A community website requires communication, but not just in public. It requires private communication as well. This meant that a real-time private chat should be something that I need to offer as well. This feature was implemented using the Twilio Programmable Chat service.


Every user can have a private chat with any other user in each specific community. The back-end implementation was easy enough to implement using Twilio libraries. When it came to the front end, I decided to style the chat based on Instagram because it had a clean and simple design.

Pre-Rendering the SPA

I’ve also chosen a service called Prerender to use to pre-render the website to make it available to search engine crawlers. After realizing the pricing might be a concern, I decided to actually build the pre-rendering system on my own.


To this end, I found a library called Puppeteer, which is a Headless Chrome API.


This library could be used to load websites programmatically and return a generated HTML with executed JavaScript, which the search crawlers at that time didn’t do. The implementation would load Puppeteer in a Lambda function, which would load a website, render it, and return the HTML.


I would use the Lambda@Edge to detect when my user was actually a crawler, and then pass it on to the pre-rendering Lambda. This was simple enough to do by detecting the „user-agent“ attribute in the CloudFront parameters. It actually turned out that the Lambda couldn’t load the Puppeteer library because it was too large.


This was not a show-stopper since I then found the chrome-aws-lambda library, which did all of this work out of the box and would be much smaller since it uses only the Puppeteer core, which was needed for my purposes.
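The pre-rendering Lambda itself follows the documented chrome-aws-lambda usage fairly closely; the sketch below simplifies the event shape to just a URL.

```typescript
// Sketch: render a URL in headless Chrome inside Lambda and return the final HTML.
import chromium from "chrome-aws-lambda";

export async function handler(event: { url: string }): Promise<{ statusCode: number; body: string }> {
  const browser = await chromium.puppeteer.launch({
    args: chromium.args,
    defaultViewport: chromium.defaultViewport,
    executablePath: await chromium.executablePath,
    headless: chromium.headless,
  });

  try {
    const page = await browser.newPage();
    // Wait until the SPA has finished its network activity so the HTML is complete.
    await page.goto(event.url, { waitUntil: "networkidle0" });
    const html = await page.content(); // fully rendered markup, JavaScript executed
    return { statusCode: 200, body: html };
  } finally {
    await browser.close();
  }
}
```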


Once the system was completed, the search engines were already powerful enough and started executing JavaScript as well. Thus, even though I completed this feature, I turned it off and I simply allow search engines to crawl my website on their own.

Stripe

One of the core features of Immersive Communities is its Revenue Sharing scheme, where the users share 50% of member subscriptions and ad revenue.


As stated previously, we need to enable our creators to not just create their content, but monetize it as well. The question was now how to implement this system. It goes without saying that the default choice was Stripe, so I proceeded as follows.


I’ve decided to design the Revenue Sharing system based on each community. This way, a user can create several communities and earn based on each community. The revenue for each community would come from two sources.


  • Member Subscriptions
  • Self-Service Ads


Member subscriptions were the easiest to implement. I would create three price points for member subscriptions, $5, $10, and $15 monthly. The members of each community can then support the owner of the community on a monthly basis, and in return, would not be shown any ads.


The ad system was based on the same monthly subscriptions but would range between $100 and $1000 monthly. The company which wants to advertise in a particular community can simply choose the monthly payment amount and set the ad banner.


Assuming there are several advertisers in a single community, the ads would be chosen at random with every page load or route change. The way an advertiser can increase the frequency of their ads being shown, compared to other advertisers, is by increasing the monthly payment amount.


We would also need to show the advertiser how their ads are performing, so I again employed a Kinesis setup to measure both views and clicks. This system would then update the statistics as usual and I then used the Brite Charts library to display the statistics.


The most important part was the actual revenue-sharing feature. This was simply implemented by the Stripe Connect feature. The user simply needs to add their bank account and connect to Stripe Express and the system would then have all the info needed to send payments.



I would then have a scheduled Lambda system that would get all the users on a daily basis and update transactions and make sure that 50% of each transaction (either member subscription or ad payment) is transferred to the owner of the community where the payment is made.
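The payout step, stripped to its essentials, looks like the sketch below. It assumes the Stripe Node library and that the community owner's connected (Express) account ID is stored with the community record; the function and variable names are illustrative.

```typescript
// Sketch: transfer 50% of a received payment to the community owner's connected account.
import Stripe from "stripe";

const stripe = new Stripe(process.env.STRIPE_SECRET_KEY as string, {
  apiVersion: "2022-08-01",
});

export async function shareRevenue(
  amountReceivedInCents: number,
  ownerConnectedAccountId: string
): Promise<Stripe.Transfer> {
  const ownerShare = Math.floor(amountReceivedInCents / 2); // the 50% revenue share
  return stripe.transfers.create({
    amount: ownerShare,
    currency: "usd",
    destination: ownerConnectedAccountId, // the owner's Stripe Express account
  });
}
```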

AWS Cognito

The last service which had to be implemented was Auth0, which would help with user authentication. After some research, I’ve decided on a passwordless setup, based on SMS messages.


Seeing as how we are now in a mobile-first world, it only made sense to forgo passwords and base the authentication on something everyone already has: their mobile phone.


It turned out that the Auth0 implementation of passwordless authentication was sub-optimal since it would redirect to their website every time and would be based on URL parameters, which I wanted to avoid.


Auth0's pricing also wouldn't scale for something like a social network, so I decided to build my own implementation using AWS Cognito.


Quite conveniently, Cognito has triggers that can be connected to Lambda functions, which is what I used to drive the authentication flow. The Lambda functions would also be used to collect user data during signup.


At this point, the user only needs to provide his phone number and his username to register.

During the login procedure, the Lambda function would collect the user's phone number and send an SMS message containing a verification code to the user, using AWS SNS.
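
A minimal sketch of the "Create Auth Challenge" trigger in this kind of passwordless flow, assuming the AWS SDK v3 SNS client; the message text and challenge metadata are illustrative.

```typescript
import { SNSClient, PublishCommand } from '@aws-sdk/client-sns';
import type { CreateAuthChallengeTriggerHandler } from 'aws-lambda';

const sns = new SNSClient({});

export const handler: CreateAuthChallengeTriggerHandler = async (event) => {
  // Generate a 6-digit one-time code.
  const code = Math.floor(100000 + Math.random() * 900000).toString();

  // Text the code to the user's registered phone number.
  await sns.send(new PublishCommand({
    PhoneNumber: event.request.userAttributes.phone_number,
    Message: `Your verification code is ${code}`,
  }));

  // The code stays server-side and is compared in the "Verify Auth Challenge Response" trigger.
  event.response.privateChallengeParameters = { code };
  event.response.challengeMetadata = 'SMS_CODE';

  return event;
};
```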


The user would then simply type in this code to get verified through Cognito and would be redirected to his profile page.


Of course, once the user gets authorized and the validation data is passed back to the front end, we have to encrypt it before we can store it. The same authorization data gets encrypted before it is stored on the back end.


Also, during each signup and login, we store the IP of the user.


It would later turn out that users would actually have an issue with giving out their mobile phone numbers, so I decided to replace the SMS with email messages.


There was a problem with duplicated messages when I wanted to use AWS SES, so I switched to Twilio’s SendGrid in order to send emails to users.


With this system completed, the year was out, and the project I had started 2 years earlier was nowhere near finished. There was no other choice but to keep working and try to complete it as soon as possible. Little did I know that the biggest challenges were yet to come.

2021 — No End in Sight

This is when everything had to fall into place, but working as a solo developer without any feedback for this long makes you question the direction the project is going in.


Any developer who is currently in the same place might ask himself the following question.


How can I keep myself motivated and able to keep going, even though I see no end in sight?


The answer is quite simple.


You simply shouldn't question your decisions, regardless of how you currently feel about the project. You can't allow your current emotional state to determine how you will act.


You might not feel like continuing right now, but you might feel like it later and you sure as hell will feel bad if you do actually quit.


So if you do quit, you will not have the project anymore and all the work will have been for nothing. So the only thing to do is to simply keep going forward, regardless of what happens.


The only thing to keep in mind at this time is that every delivered feature, every single keyboard press, is getting you closer to the goal.


During this project I actually changed jobs 3 times, each move being quite demanding, but even though I had to go to job interviews, I would still come back home, sit down at my desk, and continue working on my project.


What you have to ask yourself, if you are lacking motivation, is the following.


If you quit now, where will you go? The only way you can go, after quitting, is back to where you came from. But you already know what's back there. You already know what it's like, and you did not like it, which is why you set out on this journey in the first place. So you now know for a fact that there is nowhere to go back to. The only way you can go is forward. And the only way to go forward is to just keep working.


This is all the motivation I had available during this project. As I said already, it was either that or going back to where I already was, so I decided to keep moving forward.

The Admin System

It was now time to start bringing things together and to start, I decided to implement the Admin System which would be used to maintain each community. Each community owner would be able to make decisions about the removal of content in their community.


In practice, this means being able to disable ads, articles, and activities, and to ban users whose actions are not in line with the rules of conduct.


The owner of each community is also able to give admin rights to other users. But we also need to make it possible for the admins of the main community to administrate all other communities.


Furthermore, these admins would be able to completely disable other users from all communities and even disable the community as a whole.


In order to make it easier for admins to do their job, I introduced the flagging system, where each item can be reported to the admins. The users can now report anything on the website that they deem to be inappropriate.


The actual validation of permissions for each user would be decided on the back end. I would simply create a Lambda function, invoked inside each AppSync call, to validate each request.
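
A minimal sketch of such a check, assuming the ban records live in the same DynamoDB table; the table and key names are illustrative.

```typescript
import { DynamoDBClient } from '@aws-sdk/client-dynamodb';
import { DynamoDBDocumentClient, GetCommand } from '@aws-sdk/lib-dynamodb';

const ddb = DynamoDBDocumentClient.from(new DynamoDBClient({}));

// Called by the resolver before the actual query or mutation is executed.
export async function isAllowed(userId: string, communityId: string): Promise<boolean> {
  const { Item } = await ddb.send(new GetCommand({
    TableName: 'AppTable', // hypothetical table name
    Key: { pk: `COMMUNITY#${communityId}`, sk: `BAN#USER#${userId}` },
  }));

  // If a ban record exists for this user in this community, the request is rejected.
  return Item === undefined;
}
```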


Furthermore, the front end would use the routing-based authorization provided by Aurelia. I would simply define rules that either allow or prevent the current user from proceeding to a certain route.


For example, you would not be able to see your profile if you have been banned from a certain community. This system could also be used to prevent someone who is not logged in from navigating to a profile page; instead, they would be redirected to the login page.
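
In Aurelia v1, this kind of routing-based authorization is typically done with an authorize step in the router pipeline; below is a hedged sketch, where the route names, module ids, and the login check are assumptions.

```typescript
import { NavigationInstruction, Next, Redirect, Router, RouterConfiguration } from 'aurelia-router';

// Placeholder login check; the real one would look at the stored Cognito session.
const isLoggedIn = () => Boolean(localStorage.getItem('session'));

class AuthorizeStep {
  run(instruction: NavigationInstruction, next: Next) {
    const needsAuth = instruction
      .getAllInstructions()
      .some(i => i.config.settings?.auth);

    if (needsAuth && !isLoggedIn()) {
      // Not logged in: cancel navigation and redirect to the login route.
      return next.cancel(new Redirect('login'));
    }
    return next();
  }
}

export class App {
  configureRouter(config: RouterConfiguration, router: Router) {
    config.addAuthorizeStep(AuthorizeStep);
    config.map([
      { route: 'login',   name: 'login',   moduleId: 'pages/login' },
      { route: 'profile', name: 'profile', moduleId: 'pages/profile', settings: { auth: true } },
    ]);
  }
}
```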

The Analytics Dashboard

Another feature that would be useful for users is the Analytics Dashboard page. Each community owner can see charts that display exactly how much interaction is happening in his community.


For this particular case, I would re-use the data which was aggregated by Kinesis, and display it with charts using the Brite Charts library.


Furthermore, I would also take the Stripe data, and display the number of subscribers, advertisers, and total earnings this community has.


The only problem which had to be solved was the responsive design, meaning how to display charts on both small and large screens. Again, I used RxJS to detect the "resize" event and apply styling based on the screen breakpoints defined in Tailwind CSS.
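
The breakpoint detection can be sketched with RxJS like this; the debounce time and the chart re-rendering helper are assumptions, while the pixel values are Tailwind's default sm/md/lg breakpoints.

```typescript
import { fromEvent } from 'rxjs';
import { debounceTime, distinctUntilChanged, map, startWith } from 'rxjs/operators';

// Map the current window width onto Tailwind's default breakpoints.
const toBreakpoint = () => {
  const w = window.innerWidth;
  if (w >= 1024) return 'lg';
  if (w >= 768) return 'md';
  if (w >= 640) return 'sm';
  return 'xs';
};

const breakpoint$ = fromEvent(window, 'resize').pipe(
  debounceTime(200),          // don't redraw charts on every pixel of a drag
  map(toBreakpoint),
  startWith(toBreakpoint()),  // emit the initial breakpoint on load as well
  distinctUntilChanged(),
);

breakpoint$.subscribe(bp => {
  // Re-render the Brite Charts at a width appropriate for the new breakpoint.
  redrawCharts(bp);
});

// Placeholder for the actual chart re-rendering logic.
function redrawCharts(breakpoint: string) { /* ... */ }
```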

The Security

An additional level of security was also on the roadmap and I decided to implement a WAF in front of my CloudFront distributions.


I used the AWS Marketplace and subscribed to the Imperva WAF system which would proxy my traffic and make sure to allow only the traffic which has been validated as safe.


The solution was quite easy to implement, but once the first month was out, the bill was way too much to handle, so I disconnected the system and decided to simply rely on what CloudFront offers by default.

The Last Minute Redesign

At this point, I had to start looking at everything I had done and begin fixing the small issues which were still left. Many things still needed to be polished, but the largest thing that had to be changed was the DynamoDB database setup.


It turned out that my initial setup, which is not the one I'm using now, was not going to scale well. This is why I decided to completely redesign it and start using the "#" separator to indicate branching in each record's identifier.
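
To illustrate the idea (the entity names here are made up), a community and its related records now share a partition key and branch on the sort key:

```typescript
// Example items in the redesigned single-table layout.
const items = [
  { pk: 'COMMUNITY#travel', sk: 'PROFILE',           name: 'Travel' },
  { pk: 'COMMUNITY#travel', sk: 'ARTICLE#123',       title: 'Kyoto in a Day' },
  { pk: 'COMMUNITY#travel', sk: 'MEMBER#USER#mario', role: 'owner' },
];

// One Query now fetches a community together with all of its articles (or members)
// by key prefix, e.g.:
//   KeyConditionExpression: 'pk = :pk AND begins_with(sk, :prefix)'
```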


Previously, I was creating separate records and using AppSync Pipelines to locate each related record, which was clearly unsustainable. This also affected the Kinesis setup and third-party services like Algolia and Recombee.


In the end, it took 3 months to completely redesign the system to work properly. Once this was done, I could continue with new features again.

The Hottest Summer on Record

Summers in Tokyo are hot and humid. It is quite a challenge to stay on point with anything you're doing, especially in July and August.


During that time, the Olympics were being held in Tokyo, and on August 7th, the hottest temperature in the history of the Olympics was reported.


Needless to say, commuting to work by train no longer made sense because the heat was too exhausting, leaving me drained and unable to work in the evening. I realized that I had to save some time and energy by taking a taxi to work instead.


This gave me some more time to sleep and would keep me from being too tired to work once I got back home.

Real-Time Notifications

PWA is a great technology and it offers us a way to send notifications to users using Push Notifications. I decided that this would be a system that would also be needed and proceeded with the implementation.


The notification system would be implemented based on the user who is being followed. If you are following a user, then when he creates a new activity or an article, you would need to be notified.


The only issue with Push Notifications is that, at the time of this writing, they are still not supported by the Safari browser on iOS devices. So instead of native Push Notifications, I decided on the browser's Notification API.


On the back end, I would create a new instance of AWS API Gateway and set it up to work with real-time data.


On the front end, I would make a connection to the API Gateway using the WebSocket API. Once a user who is being followed publishes a new article, this data is sent to Kinesis. Again, using batch processing, we get all the users who follow the author and then use the API Gateway to send the notifications to the front end.
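
The fan-out on the back end might look roughly like this, assuming the followers' WebSocket connection ids have already been resolved by the Kinesis batch step; the endpoint variable and payload shape are illustrative.

```typescript
import {
  ApiGatewayManagementApiClient,
  PostToConnectionCommand,
} from '@aws-sdk/client-apigatewaymanagementapi';

const client = new ApiGatewayManagementApiClient({
  endpoint: process.env.WEBSOCKET_API_ENDPOINT, // the API Gateway stage URL
});

// Push a "new article" notification to every follower's open connection.
export async function notifyFollowers(connectionIds: string[], articleTitle: string) {
  const payload = Buffer.from(JSON.stringify({ type: 'NEW_ARTICLE', articleTitle }));

  await Promise.allSettled(
    connectionIds.map(ConnectionId =>
      client.send(new PostToConnectionCommand({ ConnectionId, Data: payload })),
    ),
  );
}
```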


On the front end, the WebSocket connection receives the message, which we then use to invoke the browser's Notification API and display the notification.
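
On the browser side, a minimal sketch (the WebSocket URL and message shape are assumptions) would be:

```typescript
const socket = new WebSocket('wss://example.execute-api.us-east-1.amazonaws.com/prod');

socket.onmessage = async (event: MessageEvent) => {
  const data = JSON.parse(event.data);

  // Ask for permission the first time; afterwards the stored decision is reused.
  if (Notification.permission === 'default') {
    await Notification.requestPermission();
  }

  if (Notification.permission === 'granted') {
    new Notification('Immersive Communities', {
      body: `New article: ${data.articleTitle}`,
    });
  }
};
```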


Furthermore, when it comes to the comments users can write on each article, we need to keep track of where the user is currently engaged in discussions and show that to him.


I also implemented an unread indicator that would show which comment section has new comments that the user has still not read.


This check happens when the user loads the application; the AppSync call is invoked without the await keyword, so execution does not wait for the call to complete and the more important data gets loaded first.


Once the call returns, we simply update the UI and show the notification to the user.

I would also use notifications in the form of pop-ups to signal to the user whether an action completed successfully or not.


For example, I would create a pop-up message which would tell the user if the article update has failed.

Front End Validation

Seeing as how the back-end validation was complete, we had to improve the user experience further by implementing validation on the front end as well, to give the user faster feedback.


Thankfully, Aurelia has a validation plug-in, which is appropriately implemented with a fluent interface. This made it quite easy to create business rules that would limit, for example, the number of characters the user can type into an <input> field for an article name.


I would use the Aurelia property binding system to collect the validation messages and then display them on the UI. I would further need to incorporate this with the localization system and make sure the messages are displayed in the correct language.
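
A minimal sketch of such a rule with aurelia-validation, assuming an article editor view-model; the field name and the 120-character limit are illustrative.

```typescript
import { autoinject } from 'aurelia-framework';
import { ValidationController, ValidationControllerFactory, ValidationRules } from 'aurelia-validation';

@autoinject
export class ArticleEditor {
  title = '';
  controller: ValidationController;

  constructor(factory: ValidationControllerFactory) {
    this.controller = factory.createForCurrentScope();

    ValidationRules
      .ensure((editor: ArticleEditor) => editor.title)
      .required()
      .maxLength(120) // keep the article name within bounds
      .on(this);
  }

  async save() {
    const result = await this.controller.validate();
    if (!result.valid) return; // the bound validation messages are shown in the UI
    // ...persist the article via AppSync
  }
}
```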

Finalizing the Work

The rest of the year consisted of working on smaller details, such as creating loading placeholders. I specifically decided that I did not want to display loading placeholders as separate screen elements.


Instead, I wanted to indicate to the user that an element is being loaded. That is why I took the outline of the element being loaded and gave it a transparent loading animation instead. This was inspired by the Netflix mobile app, which works in the same way.


By this point, the end of the year had come and I was working on the main home page. This page simply displays all the communities we currently have. Luckily, the component-based system I created earlier made it quite easy to reuse most of the code I had written, so the task was completed quickly.


The year finally ended and I was satisfied with the work that was done. Even though the project was not yet finished, I knew success was within my reach.

2022 — The Last Mile

This year was to be the final year. I did not know whether I would actually implement everything I wanted to, but I knew I had to do it regardless of what happened.


I did not want a repeat of last year's summer work, because it was likely to be even hotter than the year before.


My prediction came true: it turned out that Tokyo had its hottest summer in 2022, as measured over the last 147 years!

The Landing Page Design

I started off by designing the landing page. The question was the following.


How do I want my users to feel when they visit my landing page?


I didn't want users to feel that this would be too serious a website, but rather a friendly and collaborative community.


I noticed that lately, landing pages tend to use illustrations rather than photographs of real people, so it made sense to go down this path. That's why I decided on a set of illustrations I bought on Adobe Stock.


The landing page had to be simple and would also have to quickly describe everything the website is offering. This had to be localized as well, so I used the localization feature to translate all the landing page titles and subtitles which are on display.


The only technical issue which had to be overcome was how to introduce color inside the text. Luckily, I was able to use the styling feature inside the translation definitions and then, use Markdown to dynamically generate the HTML which would be displayed on the landing page.
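
For illustration only (the i18n resource shape and the use of the marked library are assumptions, not necessarily what runs in production), a colored title could be kept in the translation file as Markdown with an inline span and rendered to HTML at runtime:

```typescript
import { marked } from 'marked';

// Translation entry containing Markdown plus an inline, Tailwind-styled span.
const resources = {
  en: {
    heroTitle: 'Create, share and **<span class="text-emerald-500">earn</span>** together',
  },
};

// After the i18n lookup, the Markdown is turned into HTML and bound to the page.
const heroHtml = marked.parse(resources.en.heroTitle);
// roughly: <p>Create, share and <strong><span class="text-emerald-500">earn</span></strong> together</p>
```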


Required documents, like the "Privacy Policy" and "Terms of Use", were purchased online and translated into multiple languages using Google Translate.

Expect the Unexpected

It was now time to tie up all the loose ends, so I spent the rest of the time making sure logging was present in all Lambda functions on the back end. This would help me know exactly what was going on if issues were to happen.


By the time I was finishing up, the War in Ukraine had begun. This again increased the uncertainty of the global economy but I continued to work and kept myself focused on the final goal.

Having not kept the PWA implementation up to date, I had to make sure that all of its features were still working, so some further development was needed to improve JavaScript and image caching.


The offline feature was finally turned on and the application was now properly behaving as an offline app.


I also had to propagate the changes I had made on the back end, spreading the AppSync changes to the other regions. Since it would have been too cumbersome to do that during development, I had made no changes to the other regions since I started developing.


The same goes for the environments. It would have wasted too much time to constantly build all three of them, so I finally took some time to sync all the environments and move the code to UAT and Production.


Lastly, I had to set up the https://immersive.community domain, which had to work without the "www" subdomain and redirect to the homepage correctly.


At this point, we were in the early morning hours of April 25th, 2022. My 4-year-long project was finally over. I created the first post on the website and went to sleep. I knew I had finally succeeded. Not only did I finish what I set out to do, but I also did it before the summer came.

Final Words

Ironically, the closing words of my adventure are that this is not the end, but just the beginning. Now that the system is live, the content which needs to be created, and the promotion and advertising which will be needed for brand awareness, will be a completely new adventure.


But, what have I actually learned from this exercise?


Well, quite a lot. First of all, I can say with confidence I would never do it again.


Not that I'm not pleased with the outcome (quite the opposite, I'm very satisfied), but this is the kind of thing you do once in a lifetime, and it wouldn't make sense to try to outdo yourself simply to prove that you can do it even better.


I wanted to know whether it was possible to build an enterprise-level system as a solo developer and I have shown that it can be done with the tech stack that we have at our disposal.


More than anything, this is a statement to every developer who is working on a side project or thinking of starting one.


Would I recommend this approach to other developers out there? Absolutely. Not because it’s an optimal way of doing things, it most certainly isn’t.


Not asking for help when you’re stuck is certainly not the fastest way to solve a problem, but what it will do is help you to discover your limits.


Once you've decided to do something like this, and you succeed, you will know that everything else you decide to do after that will be easier to achieve.


I believe my story will motivate you to finish whatever you started, regardless of how you feel. Even if you don't see the end of the road you are now walking down, just remember that "back there" is not where you want to be.


If you found this story inspiring, subscribe to my YouTube channel, because I will start doing an advanced "Full Stack Dev" programming course where I will go into detail on all the tech I used to build Immersive Communities.


What I didn’t touch upon in this article are the philosophical underpinnings and justifications for the way I approached each problem and the techniques I used to analyze and design the solutions to each problem.


This was an even more important component than simply knowing the technologies and how to use them. How you approach a problem, and the thought process that brings you to the solution, is something I will go into in depth in my YouTube videos.


This will be a great way for developers to learn programming from someone who created a real-world system and is ready to share his knowledge. See you on YouTube!

