Hello everyone, dev.family here. We would like to tell you about an interesting project we have been working on for almost six months and are still continuing. A lot has happened in it during this time, and a lot has changed. We discovered some interesting things for ourselves and took our share of bumps along the way.
What will our story be like?
So, what are we actually working on? At some point this question became very relevant for us, just as it once did for the owner of the McDonald's corporation. We started the project as a crypto-loyalty program that rewards end users for certain actions, while customers receive analytics on those same users. Yes, that description is pretty superficial, but it doesn't matter here.
We needed to develop Shopify modules for connecting to Shopify stores, a portal for brands, a Google Chrome extension, a mobile application, plus a server with a database (no getting anywhere without those). In general, we settled on what we needed and got to work. Since the project was assumed to be large from the start, everyone understood that it could grow like magic beans, just with a delayed fuse.
It was decided to do everything “correctly” and “by all standards”. That is, everything is written in one language: TypeScript. So that everyone writes in the same style and there are no unnecessary changes in the files, linters (a lot of linters). So that everything is “easy” to reuse, EVERYTHING is put into separate modules, and so that nobody steals them, they are distributed as private packages behind a GitHub access token.
Repository for linters and ts config (style guide)
Repository for the mobile application (React Native) and the Chrome extension (React.js) (together, since they repeat the same functionality, only aimed at different users)
Another repository for the portal
Two repositories for the Shopify modules
Repository for blockchain stuff
API repository (Express.js)
Repository for infrastructure
Huh... I think I listed everything. It turned out to be a bit much, but okay, let's keep rolling. Oh yeah, why two repositories for the Shopify modules? Because the first repository is UI-modules: all the beauty of our babies and their settings lives there. And the second is integrations-Shopify: the actual implementation in Shopify itself, with all the liquid files. In total, we have 8 repositories, some of which have to communicate with each other.
Since we are developing in TypeScript, we also need package managers to install modules and libraries. But we all worked independently in our own repositories, and nobody cared what anyone else used. For example, while developing the mobile application on React Native, I did not think too long and kept Yarn 1. Someone else may be more used to the good old npm, while others love everything new and use the fresh Yarn 3. Thus, somewhere there was npm, somewhere Yarn 1, and somewhere Yarn 3.
So we all started building our applications. And almost immediately the fun began. Firstly, some did not think about what TypeScript was for and used “any” wherever they were too lazy, or where they “didn't understand” how to avoid it. Others did not realize the full power of TypeScript and that in some places everything could be written much more simply, so their types came out of cosmic dimensions. Oh yes, I forgot to say: we decided to use Hasura, a GraphQL engine, on top of our database. Manually typing all of its responses sometimes looked like something else. And in one case someone even wrote in good old JavaScript. So the situation turned out great: the first guy puts “any” yet again so as not to strain too much, the second writes canvases of types by hand, and the third does not write types at all.
Later it turned out that in cases where we repeated logic that, by rights, should have been extracted into a separate package, nobody was going to do that. Everyone just writes code for himself and couldn't care less about everything else.
What do we have? We have 8 repositories with different applications. Some are needed everywhere, others communicate with each other. So we all create .npmrc files, add credentials to them, create a GitHub token, and then install the packages through the package manager. All in all a slight hassle: unpleasant, but nothing unusual.
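For context, pulling private packages from GitHub Packages looks roughly like this in each consumer's .npmrc (a minimal sketch; the scope name is a placeholder, not our real one):

# .npmrc — install @our-scope packages from GitHub Packages
@our-scope:registry=https://npm.pkg.github.com
//npm.pkg.github.com/:_authToken=${GITHUB_TOKEN}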
Only now, when updating anything in a package, you need to bump its version, publish it, then update it in your application/module, and only then will you see what has changed. This is totally impractical! Especially if all you did was change a color somewhere. In addition, some code was not reused but simply quietly rewritten: in the mobile application and the browser extension, the redux store and all the work with the API were completely duplicated, with parts rewritten from scratch or slightly modified.
In total, what we are left with: a bunch of repositories, a rather long launch of applications/modules, a lot of identical code written by the same people, a lot of time spent on testing and onboarding new people, and other problems arising from the above.
In short, this led to tasks taking a very long time. Naturally, deadlines were missed, and introducing someone new to the project was quite difficult, which once again affected the speed of development. Everything was dreary and slow, and in some cases we have webpack to thank for that.
Then it became clear that we were moving away from where we were striving to go, toward who knows where. After analyzing all the mistakes, we made a number of decisions, which we will discuss now.
Probably the most important realization, which influenced a lot in the future, was that we are not building a specific application but a platform. We have several types of users and different applications for them, but they all operate within the same platform. That immediately closed the issue of the large number of repositories: if we are working on one platform, why split it into repositories when it is easier to work in one?
I want to say that working in a monorepo made our lives a hell of a lot easier. Some applications or modules had a direct relationship with each other, and now you can work on them with peace of mind on the same branch in the same repository. But this is far from the main advantage.
Let's continue. We moved everything into one repository. Cool! We kept working at the same pace until it came to reusability. Reusability is a “rule of good taste” in our work. Realizing that in some places we used the same algorithms, functions, and code, and in others separate packages installed via GitHub, we decided that all of this “didn't smell very good” and started extracting everything into separate packages within the monorepo using workspaces.
Workspaces are a set of features in the npm CLI that let you manage multiple packages from a single top-level root package.
In fact, these are packages within one package, linked through the package manager (Yarn / npm / pnpm all support them) and then used in other packages. To tell the truth, we did not rewrite everything onto workspaces immediately, but did it as needed.
From one file
{ "type": "module", "name": "package-name-1", ... "types": "./src/index.ts", "exports": { ".": "./src/index.ts" }, },
To another file
{ "type": "module", "name": "package-name-2", ... "dependencies": { "package-name-1": "workspace:*", }, },
An example using pnpm
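Once linked, the consuming package imports from the workspace package like from any other dependency (a sketch; the helper name is made up for illustration):

// somewhere inside package-name-2
import { formatReward } from "package-name-1"; // resolved via workspace:*

console.log(formatReward(42));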
Nothing complicated, actually, if you think about it: write a couple of commands and lines, and then use whatever you want wherever you want. But “there is one caveat, comrades”. Earlier I wrote that everyone used whatever package manager they wanted. In short, we had a repository with different managers. At times it was funny, when someone wrote that he could not link this or that package, because he used npm while the package was set up with Yarn.
I will add that the problem was not the different managers themselves, but that people used the wrong commands or configured something incorrectly. For example, some people on Yarn 3 just did yarn link and called it a day, but for Yarn 1 it did not work the way they wanted due to the lack of backward compatibility.
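With a single manager, linking becomes one explicit command. In pnpm it can look roughly like this (a sketch reusing the package names from the example above):

# add package-name-1 as a workspace dependency of package-name-2
pnpm add package-name-1 --filter package-name-2 --workspace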
By this point it became clear that it was better for everyone to use the same package manager. But we needed to choose which one, and at that time we considered only two options: Yarn and pnpm. We discarded npm right away, because it was slower and uglier than the others. That left a choice between pnpm and Yarn.
Yarn initially worked well: it was faster, simpler, and more understandable, which is why everyone used it back then. But the person who created Yarn left Facebook, and development of the next versions was handed over to others. This is how Yarn 2 and Yarn 3 appeared, without backward compatibility with the first version. Also, in addition to the yarn.lock file, they generate a .yarn folder, which sometimes weighs as much as node_modules and stores caches inside.
Therefore we, like many other developers, turned our attention to pnpm. It turned out to be as convenient as the first Yarn was in its time. Workspaces are easy to use here, and some commands look the same as in the first Yarn. In addition, shamefully-hoist turned out to be a nice extra option: it is more convenient to have node_modules hoisted for everything at once than to go into some folder every time and run pnpm install.
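Enabling it is a single line in the root .npmrc (a minimal sketch):

# .npmrc at the monorepo root
shamefully-hoist=true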
In addition, we decided to try Turborepo. Turborepo is a build system for monorepos with its own set of options, a CLI, and configuration via a turbo.json file. It is installed and configured as easily as possible. We installed a global copy of the turbo CLI with:
pnpm add turbo --global
Adding turbo.json to the project
turbo.json
{ "$schema": "https://turbo.build/schema.json", "pipeline": { "build": { "dependsOn": ["^build"] } } }
After that, we can use all the available functions of Turborepo. What attracted us most were the following features and the possibility of using them in a monorepo:
Incremental builds (building is quite painful; Turborepo remembers what has been built and skips anything already computed);
Content-aware hashing (Turborepo looks at the contents of files, not timestamps, to figure out what needs to be built);
Remote caching (share a remote build cache with your team and CI/CD for even faster builds);
Task pipelines (define relationships between your tasks so Turborepo can optimize what to build and when);
Parallel execution (builds use every core with maximum parallelism, without wasting idle CPUs).
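In day-to-day work this boils down to running tasks through turbo instead of calling each package's scripts by hand (a sketch; the portal filter is illustrative):

# build everything, reusing cached results where inputs have not changed
turbo run build

# build only the portal app and its workspace dependencies
turbo run build --filter=portal...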
We also took the recommendation for organizing a monorepo from the documentation and applied it to our platform: we split everything into apps and packages. To do this, we also create a pnpm-workspace.yaml file and write:
pnpm-workspace.yaml

packages:
  - 'apps/**/*'
  - 'packages/**/*'
Here you can see an example of our structure before and after:
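Very roughly, the eight repositories turned into a single tree of this shape (directory names are illustrative):

apps/
  mobile/            # React Native application
  portal/            # brand portal
  chrome-extension/
packages/
  ui/                # reusable UI modules
  api/               # shared API layer
  config/            # linters and ts config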
Now we have a monorepo with configured workspaces and convenient code reuse. I will add a few more things we did in parallel. I mentioned two points earlier: we had a Chrome extension, and we had decided that we were building a platform.
Since our platform worked with Shopify as a priority, we decided that instead of the Chrome extension, or rather in addition to it, it would be nice to make another Shopify module that could simply be installed on a site, so as not to force people to download a mobile application or a Chrome extension yet again. But it had to repeat the extension completely. Initially we built them in parallel, until we realized we were doing something wrong: we were simply duplicating code, writing the same thing in different places in every sense. But since we now had workspaces and reuse configured, we easily moved everything into one package, which we then plugged into both the Shopify module and the Chrome extension. Thus, we saved ourselves a lot of time.
The second thing that saved us a lot of time was getting rid of webpack and, in some places, of builds in general. What's wrong with webpack? In fact, two things are critical: complexity and speed. What we chose instead is Vite. Why? It is easier to set up, it is quickly gaining popularity, it already has a large number of working plugins, and the example from the docs is enough to get it installed. For comparison: the webpack build of our Chrome extension took about 15 seconds, while Vite took about 7 seconds (with d.ts file generation).
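For reference, a library-mode Vite config for such a package can look roughly like this (a sketch assuming the vite-plugin-dts plugin for the d.ts generation mentioned above; the entry path is illustrative):

// vite.config.ts
import { defineConfig } from "vite";
import dts from "vite-plugin-dts"; // generates d.ts files during the build

export default defineConfig({
  plugins: [dts()],
  build: {
    lib: {
      entry: "src/index.ts",
      formats: ["es"],
      fileName: "index",
    },
  },
});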
Feel the difference. And what about rejecting builds? Simple: as it turned out, we didn't really need them for reusable modules, since in package.json, in exports, you could simply replace dist/index.js with src/index.ts.
How it was
{... "exports": { "import": "./dist/built-index.js" }, ... }
How it is now
{ ... "types": "./src/index.ts", "exports": { ".": "./src/index.ts" }, ... }
Thus we got rid of the need to keep pnpm watch running to track updates in those modules and to run pnpm build to pull the updates in. I don't think it's worth explaining how much time that saved us.
In fact, one of the reasons we built the packages at all was TypeScript, or more precisely the index.d.ts files: so that when importing our modules/packages, we know what types certain functions expect and what types they return.
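Something like this (a sketch with a made-up helper):

// dist/index.d.ts — generated during the build
export declare function calculateReward(amount: number): number;

// the consumer sees the signature without opening the source:
import { calculateReward } from "rewards-package";

calculateReward(100);   // ok
calculateReward("100"); // compile-time error instead of a runtime surprise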
But given that you can simply export everything from index.tsx directly, this was one more reason to abandon builds.
But still, why TypeScript? I think it makes no sense now to describe all the advantages of TS: type safety, an easier development process thanks to typing, interfaces and classes, open source, errors introduced while modifying code showing up immediately rather than at runtime, and so on.
As I said at the very beginning, we decided to write everything in one language so that if someone stops working or leaves, the rest can support the code and cover for them. First we chose JS. But JS is not very safe, and without tests on large projects it is quite painful. So we decided in favor of TS. As practice has shown, it is very convenient in a monorepo: you can simply export *.ts files, and when using components, the expected data and their types are immediately clear.
But one of the main useful features was auto-generation of types for GraphQL queries and mutations. For anyone not familiar with it, GraphQL is a technology that lets you talk to the database through queries (to get data) and mutations (to change data), and it looks something like this:
query getShop {
  shop {
    shopName
    shopLocation
  }
}
Unlike a REST API, where you won't know what will come back until it arrives, here you yourself define the data you need.
Let's get back to the point. We used Hasura, a GraphQL wrapper on top of PostgreSQL. Since we work with TS, by rights we must type both the data coming back from requests and the payloads we send. With code like the example above there should be no problems. But in practice a query can reach a hundred lines, and some fields may or may not come back, or may have different data types. Typing such canvases by hand is a long and thankless task.
An alternative? Of course there is! Let the types be generated by commands. In our project we did the following.
We used the following libraries: graphql and graphql-request, plus GraphQL Code Generator with its plugins for the generation itself.
First, files with the *.graphql extension were created, in which the queries and mutations were written.
For example:
test.graphql
query getAllShops {
  test_shops {
    identifier
    name
    location
    owner_id
    url
    domain
    type
    owner {
      name
      owner_id
    }
  }
}
Then we described the code generation in codegen.yaml:

codegen.yaml

schema:
  - ${HASURA_URL}:
      headers:
        x-hasura-admin-secret: ${HASURA_SECRET}
emitLegacyCommonJSImports: false
config:
  gqlImport: graphql-tag#gql
  scalars:
    numeric: string
    uuid: string
    bigint: string
    timestamptz: string
    smallint: number
generates:
  src/infrastructure/api/graphQl/operations.ts:
    documents: 'src/**/*.graphql'
    plugins:
      - typescript
      - typescript-operations
      - typescript-graphql-request
There we indicated where to go for the schema and, at the end, where to save the file with the generated API (src/infrastructure/api/graphQl/operations.ts) and where to take our requests from (src/**/*.graphql).
After that, a script was added to package.json that generated the types for us:
package.json
{... "scripts": { "generate": "HASURA_URL=http://localhost:9696/v1/graphql HASURA_SECRET=secret graphql-codegen-esm --config codegen.yml", ... }, ... }
In it we indicated the URL the script calls to obtain the schema, the secret, and the command itself. Then we create a client that uses the generated SDK:
import { GraphQLClient } from "graphql-request";
import { getSdk } from "./operations.js";

// the shape of the factory's argument
interface CreateGraphQlClient {
  getToken: () => string;
}

export const createGraphQlClient = ({ getToken }: CreateGraphQlClient) => {
  const graphQLClient = new GraphQLClient("your url goes here...", {
    // attach the token to every request
    headers: { authorization: `Bearer ${getToken()}` },
  });
  return getSdk(graphQLClient);
};
Thus we get a function that generates a client with all the queries and mutations. As a bonus, operations.ts contains all our types, which we can export and use, and the entire request is fully typed: we know what needs to be passed in and what will come back. You don't need to think about anything else, just run the command and enjoy the beauty of typing.
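Usage then looks roughly like this (a sketch: the typescript-graphql-request plugin exposes one typed method per operation, so the getAllShops query from the example above becomes a function; the import path is illustrative):

import { createGraphQlClient } from "./createGraphQlClient.js";

const sdk = createGraphQlClient({ getToken: () => "token" });

// the result shape is generated from the getAllShops query
const { test_shops } = await sdk.getAllShops();
test_shops.forEach((shop) => console.log(shop.name, shop.location));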
Thus, we got rid of a large number of unnecessary repositories and of the need to constantly push the slightest change just to check how things work. Instead we ended up with a single repository where everything is structured, decomposed according to purpose, and easily reused. This made our life easier and reduced the time needed to onboard new people and to launch the platform and its modules/applications separately. Everything is typed, and there is no longer any need to go into each folder to see what this or that function/component wants. As a result, development time has decreased.
In conclusion, I want to say that you should never be in a hurry. It is better to understand what you are doing and how to do it more simply than to deliberately complicate your life. Problems are everywhere and always; sooner or later they will surface, and then the deliberate complexity will shoot you in the knee rather than help.
The dev.family team was with you, see you soon!