Improving React App Performance with SSR and Rust: Rust vs Node.js [Part III]


by Olex Tkachuk, May 4th, 2020

Too Long; Didn't Read

In this article, I am going to compare web performance for three approaches: CDN (without SSR, Server-Side Rendering), Node.js + SSR, and Rust (actix-web) + SSR. We create a dummy React.js app with rich content and a lot of JavaScript code, then set up SSR for it. All deployments use the Linode Cloud Hosting Service with a single Frankfurt (Germany) location for the servers, and Envoy Proxy serves as a front proxy for load balancing multiple containers.

In theory, a new technology or a modern approach should have a lot of benefits, but the main question that matters is: what are its actual practical advantages, in numbers?

In this article, I am going to compare web performance for three approaches: CDN (without SSR, Server-Side Rendering), Node.js + SSR, and Rust + SSR.

Making a Star Wars website using React.js

First, we need to create a dummy React.js app with rich content and plenty of JavaScript code, and then set up SSR for it.
Let's grab our web app from the How To Improve React App Performance with SSR and Rust: Part I article and add heavier content there: JavaScript libraries and code, images, text, and CSS.

Deploying the React.js Web App

I am going to use the Linode Cloud Hosting Service for all deployments, with a single Frankfurt (Germany) location for the servers.
Linode Object Storage is suitable as a static CDN. The Node.js and Rust web servers will be deployed as Docker containers in a Linode VM with the following configuration: `Ubuntu 18.04 LTS, Nanode 1GB: 1 CPU, 1GB RAM`. In addition, we will use Envoy Proxy as a front proxy for load balancing across multiple containers.

Web Performance testing without scaling

First, we will test a single container without scaling. We need to monitor web performance test results in a real browser, so that we can measure with different settings from a particular region. A suitable tool for this is PageSpeed Green; its Free Plan allows up to 200 audits per month, which is more than enough for us.

CDN PageSpeed Score

As expected, a React app whose JavaScript has to render data, do some mapping, and parse JSON does not perform well with client-side rendering alone: a score of 31 (First Contentful Paint (FCP): 0.95s, Speed Index (SI): 5.84s, Time to Interactive (TTI): 6.05s, First Meaningful Paint (FMP): 0.95s, First CPU Idle (CPU): 6.04s, Max Potential First Input Delay (FID): 1.42s).

Node.js PageSpeed Score

Express.js has a simple API and all the features we need:

const express = require('express');
const compression = require('compression');
const path = require('path');

const app = express();
app.use(compression()); // gzip responses
app.get('/test', (req, res) => res.send('ok')); // health check
app.use('/static', express.static(path.join(__dirname, 'dist/web')));
app.get('/*', async (req, res) => {
  try {
    const content = await getServerHtmlByRoute(req.path || '/');
    res.send(content);
  } catch (e) {
    console.error(e);
    res.status(500).send('Internal Server Error'); // don't leave the request hanging
  }
});

The function getServerHtmlByRoute() contains the same implementation as in the How To Improve React App Performance with SSR and Rust: Part I article.
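The essence of that function is injecting pre-rendered app markup into an HTML shell at a placeholder. Here is a minimal sketch of that injection step (written in Rust to match the later examples; the shell, the placeholder token, and render_route() are illustrative assumptions, not the actual Part I implementation):

```rust
// Stand-in for ReactDOMServer.renderToString(<App route=... />);
// a real SSR setup would run the React app here.
fn render_route(path: &str) -> String {
    format!("<div id=\"root\"><h1>Route: {}</h1></div>", path)
}

// Splice the rendered markup into the HTML shell at a placeholder,
// which is the shape of getServerHtmlByRoute().
fn server_html_by_route(path: &str) -> String {
    let shell = "<!DOCTYPE html><html><body><!--APP--><script src=\"/static/bundle.js\"></script></body></html>";
    shell.replace("<!--APP-->", &render_route(path))
}

fn main() {
    println!("{}", server_html_by_route("/people"));
}
```

The browser then receives fully rendered HTML first and hydrates it once the bundle loads, which is where the FCP/FMP gains below come from.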

Using Server-Side Rendering improved the PageSpeed score significantly - from 31 to 79 (FCP: 0.41s, SI: 1.80s, TTI: 3.3s, FMP: 1.1s, CPU: 3.21s, FID: 1.35s).

Rust (actix-web) PageSpeed Score

The actix-web implementation is based on the Part II: Rust Web Server article, with one improvement: instead of reading static files from disk on every request, the web server reads all files at startup into an in-memory cache and then serves them from there.

#[macro_use]
extern crate lazy_static;

use std::collections::HashMap;
use actix_web::{web::Bytes, Error, HttpRequest, HttpResponse, Responder};
use futures::future::ok;
use futures::stream::once;

lazy_static! {
    // Read all static files once at startup and keep them in memory.
    static ref STATIC_FILES: HashMap<String, Bytes> =
        get_files().unwrap_or_default();
}

async fn index(req: HttpRequest) -> impl Responder {
    // Strip the leading slash from the matched tail of the URL.
    let path_req = req.match_info().query("tail").get(1..).unwrap_or_default().trim();
    let path = if path_req.is_empty() {
        "home_page"
    } else {
        // Fall back to the SPA index page for unknown routes.
        match ROUTES.get(path_req) {
            Some(r) => r,
            None => "index",
        }
    };

    match STATIC_FILES.get(&format!("static/{}.html", path)) {
        Some(file) => {
            // Stream the cached bytes; to_owned() is cheap because
            // Bytes is reference-counted.
            let body = once(ok::<_, Error>(file.to_owned()));

            HttpResponse::Ok()
                .content_type("text/html; charset=utf-8")
                .header("Cache-Control", "no-cache, no-store, max-age=0, must-revalidate")
                .header("pragma", "no-cache")
                .header("x-ua-compatible", "IE=edge, Chrome=1")
                .streaming(body)
        },
        None => {
            println!("static/{}.html is not found", path);

            HttpResponse::NotFound()
                .content_type("text/html; charset=utf-8")
                .header("Cache-Control", "no-cache, no-store, max-age=0, must-revalidate")
                .header("pragma", "no-cache")
                .header("x-ua-compatible", "IE=edge, Chrome=1")
                .body("Resource not found")
        }
    }
}
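The cache-at-startup and route-resolution logic above can be exercised in isolation with only the standard library. This sketch uses std's OnceLock in place of the lazy_static crate and an in-memory seed in place of get_files(); the route names and file contents are hypothetical:

```rust
use std::collections::HashMap;
use std::sync::OnceLock;

// Pre-rendered pages cached in memory at startup
// (mirrors the STATIC_FILES idea using std's OnceLock).
static STATIC_FILES: OnceLock<HashMap<String, String>> = OnceLock::new();

// Route table: URL path segment -> name of the pre-rendered HTML file.
// These routes are examples, not the article's actual ROUTES map.
fn routes() -> HashMap<&'static str, &'static str> {
    HashMap::from([("people", "people_page"), ("planets", "planets_page")])
}

// Resolve a request path to a file key, as the index() handler does:
// empty path -> home page, known route -> its page, anything else -> "index".
fn resolve(path: &str) -> &'static str {
    let trimmed = path.trim_start_matches('/').trim();
    if trimmed.is_empty() {
        "home_page"
    } else {
        routes().get(trimmed).copied().unwrap_or("index")
    }
}

fn main() {
    // In the real server these come from disk via get_files();
    // here we seed the cache in memory to keep the sketch self-contained.
    STATIC_FILES.get_or_init(|| {
        HashMap::from([
            ("static/home_page.html".to_string(), "<html>home</html>".to_string()),
            ("static/people_page.html".to_string(), "<html>people</html>".to_string()),
        ])
    });

    for req in ["/", "/people", "/droids"] {
        let key = format!("static/{}.html", resolve(req));
        match STATIC_FILES.get().unwrap().get(&key) {
            Some(body) => println!("{} -> 200 ({} bytes)", req, body.len()),
            None => println!("{} -> resource not found", req),
        }
    }
}
```

Because every response is a lookup in an in-memory map, the hot path involves no disk I/O at all, which is what the PageSpeed numbers below reflect.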

The Rust web server is faster: a score of 86 (FCP: 0.45s, SI: 1.26s, TTI: 3.21s, FMP: 0.45s, CPU: 3.19s, FID: 1.53s).

In real production we should scale our web servers, so let's see whether that improves web performance.

Scaling Node.js

We have a 1GB memory limit, so we will try scaling to 3, 5, 10, and 20 instances.

  • 3 instances: 82 score (FCP: 0.45s, SI: 1.32s, TTI: 3.56s, FMP: 0.45s, CPU: 3.54s, FID: 2.04s)
  • 5 instances: 84 score (FCP: 0.49s, SI: 1.62s, TTI: 3.06s, FMP: 0.49s, CPU: 3.03s, FID: 1.35s)
  • 10 instances: 78 score (FCP: 0.33s, SI: 1.95s, TTI: 3.39s, FMP: 0.33s, CPU: 3.37s, FID: 1.86s)
  • 20 instances: 73 score (FCP: 0.34s, SI: 2.56s, TTI: 3.13s, FMP: 0.34s, CPU: 3.06s, FID: 1.33s)

Node.js performs best with 5 instances. A look at network loading helps us understand the Node.js web server's performance when serving dynamic content and static files (*.js):

Scaling Rust Web Server

  • 3 instances: 87 score (FCP: 0.46s, SI: 1.27s, TTI: 3.11s, FMP: 0.46s, CPU: 3.06s, FID: 1.42s)
  • 5 instances: 88 score (FCP: 0.45s, SI: 1.31s, TTI: 2.95s, FMP: 0.45s, CPU: 2.93s, FID: 1.39s)
  • 10 instances: 89 score (FCP: 0.33s, SI: 1.16s, TTI: 3.07s, FMP: 0.33s, CPU: 3.02s, FID: 1.39s)
  • 20 instances: 87 score (FCP: 0.34s, SI: 1.18s, TTI: 3.13s, FMP: 0.34s, CPU: 3.10s, FID: 1.49s)

The Rust service uses less memory, so we can scale up to 10 instances. In addition, actix-web can handle web requests much faster:

Summary

Rust microservices require fewer resources, which means more scalability for the same amount of resources. In addition, actix-web handles requests much faster: from Germany to Australia, a browser downloads the 45k index.html file (including the SSL handshake) in 1.12s instead of 1.62s, and the 174k leia.jpg in 344ms versus 957ms from Node.js.

My investigation is not 100% accurate - there could be more or less optimal implementations of Node.js (Express.js) and Rust (actix-web), different measurements, etc. However, the big picture is pretty accurate: if you need maximum web performance (PageSpeed Score), use a Rust web server with Server-Side Rendering.

Originally Published at Page Speed Green