
Front-End Optimization: My Journey to Accelerate Load Times in Heavy Front-End

by Daniil Sitdikov, October 9th, 2023

Too Long; Didn't Read

The ideal solution would be to reduce the content and create a dedicated pure HTML/JavaScript page. However, I was unable to convince the business to do so. Therefore, I had to improve the app's performance by implementing these techniques:

  1. Import on visibility
  2. Selective hydration
  3. Image proxy
  4. Loading priority
  5. Proper work with SVG
  6. DNS prefetch

Recently, a client approached us with a complaint that our product loads slowly in regions where old mobile devices and slow internet prevail. As a result, users are opting to switch to our competitors. In this article, I will share how I was able to improve the app's load time by implementing techniques such as import and render on visibility, image proxy, proficient work with SVG images, and other methods. I hope you'll find something useful for your app as well.

Brief Introduction

On our page, we have a lot of components, including a banner, interactive dynamic content in the center, several sliders, and extensive navigation in the header and sidebar. However, this leads to a large number of DOM elements (6500) and a lengthy total blocking time (12 seconds) on mobile devices. Additionally, the fully rendered main page has a total height of 11000 pixels. Even React Profiler in DevTools can’t handle it and crashes.


We use React with Next.js. The majority of the content on the main page is dynamic and interactive, updating in real time.


Some obvious performance enhancements have already been made:


  1. Caching on the CDN side
  2. Gzip compression
  3. An HTTP/2 server
  4. Correctly set loading priorities for resources: defer and async (see the sketch after this list)
  5. Images are lazy-loaded
  6. All internal pages are lazy-loaded by default
  7. The bundle is split into chunks
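
For reference, items 4 and 5 correspond to markup roughly like this (a generic sketch, not our exact markup):

<!-- Independent third-party script: runs whenever it arrives -->
<script src="/analytics.js" async></script>
<!-- Needed script, but it can wait until the HTML is parsed -->
<script src="/app.js" defer></script>

<!-- Below-the-fold image: the browser defers it until it is near the viewport -->
<img src="/promo.jpg" alt="Promo" loading="lazy" />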

Shortcut

First, here's what I started with, and what could have quickly solved many problems: 🥁 1..2..3…

just reduce the amount of content on the main page.

or

create a special main page in pure HTML and vanilla JS for such regions.

However, in my case, I was unable to convince the business, and I had to solve the problem using an engineering approach.

Render on Visibility

The question I asked myself was: why spend so much time and resources rendering something the user doesn't even see? The entire page height is 11000px, but a mobile user typically sees only about 700px on initial load. That means 10300px of content doesn't need to be rendered at all. The situation is the same on desktop.


I created a basic component that only renders its content when it is about to become visible in the user's viewport. It starts rendering slightly in advance, with an extra 100px margin, to accommodate slower CPUs. This optimization also applies horizontally: for example, if a slide's content is not within the user's view, it won't be rendered. This approach has proven to be beneficial, especially considering that each slide contains logic and multiple components.


import { useEffect, useRef, useState } from 'react';
import classNames from 'classnames';
// useIntersection comes from the react-use library.
// However, it can be easily implemented manually with IntersectionObserver.
import { useIntersection } from 'react-use';

// The component receives two props:
//
// 1. children - the content that will be hidden until it is near the viewport
// 2. className - a special class with a fixed height and a skeleton loader
//    to avoid layout shifts after the content appears
function LazyRenderedComponent({ children, className }) {
  const intersectionRef = useRef(null);
  const intersection = useIntersection(intersectionRef, {
    root: null,
    rootMargin: '100px 0px',
    threshold: [0, 1.0],
  });

  const [isShown, setIsShown] = useState(false);

  useEffect(() => {
    // The content won't be removed after it has been rendered.
    // However, this behavior can be easily changed.
    if (intersection && intersection.intersectionRatio > 0 && !isShown) {
      setIsShown(true);
    }
  }, [intersection, isShown]);

  return isShown
    ? children
    : <div ref={intersectionRef} className={classNames('lazy-rendered-component', className)} />;
}

How it looks:

const FooterNav = () => 'complicated component';

...

// It's important not to forget to include loader styles to avoid layout shifts.
<LazyRenderedComponent className="footer-nav-loader loader">
  <FooterNav />
</LazyRenderedComponent>

My goal was to keep only the content needed for the earliest visuals, i.e., what the user sees first, in the main bundle. Modal windows and collapsing sidebars are downloaded and displayed only when needed.
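
For example, a modal can be kept out of the main bundle and fetched only on the first interaction. Here is a rough sketch, not our exact code; SettingsModal and its path are hypothetical names:

import { useState } from 'react';
import dynamic from 'next/dynamic';

// The modal's chunk is excluded from the main bundle and is only
// downloaded when the component renders for the first time.
const SettingsModal = dynamic(() => import('./settings-modal'), { ssr: false });

function Header() {
  const [isModalOpen, setIsModalOpen] = useState(false);

  return (
    <>
      <button onClick={() => setIsModalOpen(true)}>Settings</button>
      {/* The chunk is requested only after the first click */}
      {isModalOpen && <SettingsModal onClose={() => setIsModalOpen(false)} />}
    </>
  );
}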


Sometimes, it wasn't as obvious as it seemed. We have a list of 20-30 dynamic items with nested components and logic inside. To prevent layout shifts and maintain smoothness, I implemented lazy rendering specifically for the content of each item on this list. This means that the application reserves an empty space for each item and renders it only when it is near the viewport.
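
In code, the wrapping looked roughly like this (a simplified sketch; items and ListItemContent are hypothetical names):

function ItemsList({ items }) {
  return (
    <ul>
      {items.map((item) => (
        <li key={item.id}>
          {/* The loader class reserves the item's height, so nothing jumps
              when the real content renders */}
          <LazyRenderedComponent className="list-item-loader loader">
            <ListItemContent item={item} />
          </LazyRenderedComponent>
        </li>
      ))}
    </ul>
  );
}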

Eventually, after applying the lazy load component to many items, we ended up with approximately 700 DOM elements.

Import on Visibility

This allows us to go even further. If we don't render content outside the viewport, maybe we don't even need to download it? The user may never scroll to this content at all. All that remains is to pass the dynamically imported content to our LazyRenderedComponent (with Next.js dynamic, or with React's lazy and Suspense).


import dynamic from 'next/dynamic';

const FooterNav = dynamic(
  () => import('./footer-nav'),
  { loading: () => <div className="footer-nav-loader loader" /> }
);

...

<LazyRenderedComponent>
  <FooterNav />
</LazyRenderedComponent>


The alternative with plain React:


import { lazy, Suspense } from 'react';

const FooterNav = lazy(() => import('./footer-nav'));

...

<LazyRenderedComponent>
  <Suspense fallback={<div className="footer-nav-loader loader" />}>
    <FooterNav />
  </Suspense>
</LazyRenderedComponent>


Now, when the component is about to enter the viewport, the browser starts downloading its chunk.


⚠️ As in the previous section, it is very important not to forget about the loading state in order to avoid layout shifts. Here, we pass the fallback property to the Suspense component.

Selective Hydration

During hydration, React recreates on the client the state it had on the server, essentially rebuilding the virtual DOM. This process can be time-consuming, as React needs to walk the existing DOM tree. Initially, there may be content in the viewport that should be visible but doesn't need to be interactive right away. To improve performance, I deferred its hydration by wrapping these components in Suspense, which temporarily delays hydration and keeps them inactive. I initially believed this would completely transform the game and cut everything down to nearly zero. In reality, it only shaved about two seconds off TBT.
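
As a rough sketch of the idea (this relies on React 18 streaming SSR, and the component names here are hypothetical): content inside a Suspense boundary is hydrated lazily and gets prioritized when the user interacts with it, so visible but not-yet-critical blocks don't compete with the critical ones for the main thread.

import { Suspense } from 'react';

function HomePage() {
  return (
    <>
      {/* Critical, interactive content: hydrate as early as possible */}
      <Banner />

      {/* Visible, but its interactivity can wait: React hydrates
          Suspense boundaries lazily and prioritizes them on interaction */}
      <Suspense fallback={<PromoSkeleton />}>
        <PromoSection />
      </Suspense>
    </>
  );
}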

Image Proxy

In our application, the LCP (Largest Contentful Paint) refers to a banner. Typically, these banners are high-resolution PNG files with an average size of around 3 MB each. However, the challenge lies in the fact that we have no control over the sizes or file formats uploaded by the operator. Furthermore, imposing restrictions on the operators would not be a favorable design approach.

We agreed to convert, compress, and resize images on the fly at request time. This means we only need to build the right URL, for example:


https://IMG_PROXY_URL/resize:fit::300/quality:75/plain/image.jpg@avif
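
For convenience, such URLs can be generated by a small helper. This is a hypothetical sketch modeled on the URL format above; IMG_PROXY_URL, the option order, and the @format suffix follow our proxy's conventions and will differ for other image proxies.

const IMG_PROXY_URL = 'https://IMG_PROXY_URL';

// Hypothetical helper modeled on the URL format above.
function buildImageUrl(originalImage, { size = 300, quality = 75, format = 'avif' } = {}) {
  return `${IMG_PROXY_URL}/resize:fit::${size}/quality:${quality}/plain/${originalImage}@${format}`;
}

// One URL per format for the <picture> element shown below
const avifUrl = buildImageUrl('image.jpg', { format: 'avif' });
const webpUrl = buildImageUrl('image.jpg', { format: 'webp' });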


Our solution was to use image-proxy. It perfectly fits our needs in terms of performance, functionality, and scalability.


We use three formats: avif, webp, and jpg/png. The browser automatically loads the best format based on its capabilities. For example, if a browser does not support avif format, it will use webp. If no supported formats are available, the default img tag will be used.


<picture>
  <source srcSet="URL_TO_AVIF" type="image/avif" />
  <source srcSet="URL_TO_WEBP" type="image/webp" />
  <img src="URL_TO_DEFAULT_FALLBACK" />
</picture>


The resize and compression operation is performed only once. Immediately after that, the result of each image request is cached on the CDN. This ensures we don't sacrifice speed.


Architecture Diagram

Now, each image is 120-200 KB instead of 1-2 MB. They are in the optimal format and have an optimal size based on the device.

Alternative Solutions

  1. If you are using Next.js, perhaps next/image will be sufficient. However, it is less flexible, and you lose the opportunity to reuse the solution outside of Next.js.
  2. There is also the library that Next.js itself uses: sharp. It can be set up as a Node.js service (see the sketch after this list). I even played around a little: image-proxy-service
  3. Cloudflare and other providers offer their own services for resizing and converting images.
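
To give an idea of the second option, here is a minimal sketch of sharp running as a small Node.js service. The endpoint shape, parameter names, and defaults are illustrative assumptions, not the author's image-proxy-service, and it assumes Node 18+ for the global fetch.

import express from 'express';
import sharp from 'sharp';

const app = express();

app.get('/resize', async (req, res) => {
  const { url, width = '300', format = 'webp', quality = '75' } = req.query;

  // Download the original image (global fetch requires Node 18+)
  const response = await fetch(url);
  const input = Buffer.from(await response.arrayBuffer());

  // Resize and convert it on the fly
  const output = await sharp(input)
    .resize({ width: Number(width), withoutEnlargement: true })
    .toFormat(format, { quality: Number(quality) })
    .toBuffer();

  res.set('Content-Type', `image/${format}`);
  // Let the CDN in front of the service cache the result
  res.set('Cache-Control', 'public, max-age=31536000, immutable');
  res.send(output);
});

app.listen(3000);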

Loading Priority

Since only one image of the main LCP banner is visible at a time, we explicitly set it to loading="eager", and for the other slides we used loading="lazy". They load as soon as the user is about to see them.


Also, for the first slide, I added the attribute fetchPriority="high", which tells the browser to load that image with high priority. Since we had up to 100 different requests in the first seconds, prioritizing the LCP image was crucial.
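
Put together, the banner slider looked roughly like this (a simplified sketch with hypothetical names, not our exact component):

function BannerSlider({ slides }) {
  return (
    <div className="banner-slider">
      {slides.map((slide, index) => {
        // Only the first (LCP) slide is eager and high priority
        const isFirstSlide = index === 0;
        return (
          <img
            key={slide.id}
            src={slide.src}
            alt={slide.alt}
            loading={isFirstSlide ? 'eager' : 'lazy'}
            fetchPriority={isFirstSlide ? 'high' : 'auto'}
          />
        );
      })}
    </div>
  );
}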

SVG images

We had around 150 SVG icons and several images stored directly in React components. This added about 60 KB to the bundle, and rendering them clogged the main thread during the initial paint.


We consolidated all icons into one large SVG sprite and placed it on a CDN. The icons are loaded just once to avoid clogging network connections and to save time on connection establishment. We use the <use> element to display the required icon from the sprite.

{/* Icon component: references one symbol from the sprite by id.
    If the sprite is a separate file on the CDN, prepend its URL to the "#" fragment. */}
<svg
  height={size}
  viewBox="0 0 512 512"
  width={size}
>
  <use href={`#${id}`} />
</svg>

{/* The sprite itself: a single hidden <svg> that contains one <symbol> per icon */}
<svg xmlns="http://www.w3.org/2000/svg" style={{ display: 'none' }}>
  <symbol id="icon-1" width="10" height="10" viewBox="0 0 2 2">
    <circle cx="1" cy="1" r="1" />
  </symbol>
  <symbol id="icon-2" width="10" height="10" viewBox="0 0 2 2">
    <circle cx="1" cy="1" r="1" />
  </symbol>
  ...
</svg>

Another solution could be to move them into an icon font, although it may not offer the same level of flexibility.


Regarding images, we just relocated them to a CDN as separate files.

DNS Prefetch

Before starting to load a resource from an external URL, the browser spends 20-120 ms just on DNS resolution. I marked two <link> tags with the attribute rel="preconnect" so the browser could start the handshake processes in advance: DNS, TCP, and TLS.


<link 
  crossOrigin="anonymous" href="https://fonts.gstatic.com" 
  rel="preconnect" 
/>
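
As a small companion (not shown in our markup above), rel="dns-prefetch" can serve as a fallback for browsers that don't support preconnect:

<link href="https://fonts.gstatic.com" rel="dns-prefetch" />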

Conclusion

Undoubtedly, this was a significant step in performance improvement. Here are some of the key metrics on low-end mobile devices:


  1. LCP decreased from 12s to 2.5s.
  2. The number of DOM elements decreased from 6500 to 700.
  3. The Lighthouse score increased from 3 to 52.
  4. TBT decreased from 12s to 5.6s.

These numbers have not yet reached the green zone, which means there is still a lot of work to do. See you in the second part, where I'll share new results!

Future Plans

  1. Consider replacing the default virtual DOM with an alternative solution. For instance, Million.js
  2. Experiment with service workers.
  3. Remove SSR from components where it causes rehydration and negatively affects performance.

Let's Learn Together

Each of us has our own unique experiences, challenges faced, and solutions discovered. I'd love to hear about your own adventures in front-end optimization. What worked for you? What would you recommend doing? Please share your experiences in the comments below!