
No More Heavy RAM Memory Consumption: Apply These 3 Secret Techniques




It turns out that running ts-node-dev / ts-node constantly consumes hundreds of megabytes of RAM, even for small and simple applications.

In development this is usually not a big concern; however, it can be if your application runs inside a Docker container with limited resources (for example, with Docker Desktop on Mac, which by default allocates only 2GB of RAM to all containers in total).
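If you want to check the numbers on your own machine, the resident set size can be printed from inside the process itself; this is plain Node.js API, independent of any of the fixes below:

```javascript
// Print this process's resident set size (the total RAM the OS has
// allocated to it). Run the same app under ts-node and as precompiled
// JavaScript to compare the footprints.
const rssMb = Math.round(process.memoryUsage().rss / 1024 / 1024);
console.log(`RSS: ${rssMb} MB`);
```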

Typescript code has to be transpiled to Javascript, which can be done either before running the process (with tsc) or at runtime (with ts-node).

The most efficient way is transpiling before running, however, this isn’t as developer-friendly since it takes forever. That is what ts-node-dev is for: it loads everything into memory, then watches the changes the developer is making and re-transpiles the project fast on every change.
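Concretely, the two modes look like this on the command line (assuming the usual setup where tsc emits to dist and the entry point is src/index.ts):

```shell
# Ahead of time: compile once, then run plain JavaScript
tsc
node dist/index.js

# At runtime: ts-node transpiles in-process on startup
ts-node src/index.ts

# In development: ts-node-dev keeps the compiler in memory and
# re-transpiles quickly on every file change
ts-node-dev --respawn src/index.ts
```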

We encountered the issue while building a demo application to showcase our product at Aspecto.

We were running multiple typescript services with docker-compose and started seeing arbitrary yarn processes exiting without even running the application, displaying the message “Done in 79.06s”.

This was due to a lack of memory. Each typescript service was using ~600MB of RAM out of the total 2GB available for all containers.
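Per-container figures like these are easy to see with docker stats (a standard Docker CLI command), for example:

```shell
# One-shot snapshot of memory use per running container
docker stats --no-stream --format "table {{.Name}}\t{{.MemUsage}}"
```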

After digging a bit, we found a few possible solutions and wanted to share them.

Run ts-node-dev / ts-node with the --transpile-only option

In our case, adding the --transpile-only option to ts-node-dev reduced the consumed RAM from ~600MB to ~170MB.

The price was that the typescript code would only be transpiled, and typechecking would be skipped. Most modern IDEs (VS Code, WebStorm) have built-in typescript IntelliSense that highlights errors, so for us it was a fair price to pay.

If you use ts-node to run code in production that was already successfully compiled and tested in the CI, you can only benefit from setting this option.
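For reference, here is how the option can be passed, either as a flag or via ts-node's environment variable (adjust the entry-point path to your project):

```shell
# Development: ts-node-dev without typechecking
ts-node-dev --transpile-only --respawn src/index.ts

# Production: the same option on ts-node, as a flag...
ts-node --transpile-only src/index.ts

# ...or as an environment variable
TS_NODE_TRANSPILE_ONLY=true ts-node src/index.ts
```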

Compile the code with tsc and monitor file changes with nodemon

Instead of using ts-node-dev, which consumes a lot of memory, it is possible to compile the application directly with tsc and then run it from dist/build like this: node dist/index.js

For automatic reload on source file changes, nodemon / node-dev can be used.

This is our “start” script in package.json:

"scripts": {
  "start": "nodemon --watch src -e ts --exec \"(tsc && node dist/index.js) || exit 1\""
}
This approach reduced the RAM on our service from ~600MB to ~95MB (but there was still a spike in RAM to 600MB for a few seconds while tsc was compiling).

Unlike the previous option, this approach does check for typescript errors and warnings, and the service does not start if errors exist in the code.

The price to pay here is a longer compilation time. In our setup, it’s about 10 seconds from saving the file until the service restarts.
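If the ~10-second turnaround bothers you, TypeScript's incremental mode is worth trying (our suggestion, not part of the setup described above): it caches build information between runs so tsc only re-emits what changed. In tsconfig.json:

```json
{
  "compilerOptions": {
    "incremental": true,
    "tsBuildInfoFile": "./.tsbuildinfo"
  }
}
```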

Increase Docker Desktop's available RAM

This is the easiest fix: just allocate more memory to Docker Desktop by going to Preferences => Resources => Memory and increasing the value.

While it fixes the immediate problem, the containers still consume a lot of memory, and if you have plenty of them, it might be a problem soon enough.

In addition, this change to the default configuration must be made by every user who wants to run the system with docker-compose, which adds complexity to installation and usage.


If memory consumption is not an issue for you, just use ts-node in production and ts-node-dev in development.

However, if you do care about memory, you have a tradeoff: fast restart time after modifications but typechecking only in the IDE (set --transpile-only), or typechecking at compile time but a slower restart on each modification (use tsc directly with nodemon).

Also published on Medium

