Pixel Streaming vs WebGPU: Which One is Best for Your Immersive Experience?

by Johan Polhem, January 11th, 2024

Too Long; Didn't Read

When it comes to crafting immersive experiences that scale, technologists must make critical architecture decisions before the project enters production. Here's how to consider whether WebGPU or Pixel Streaming is right for your next immersive media project.


Before building Bzar, we debated whether to base our stack on pixel streaming, which renders content in the cloud, or on a locally rendered approach that uses a device’s own computational power. Since this choice drastically impacts a project’s cost, scalability, and user experience, it’s important to get it right from the outset, before writing a line of code.


Until now, we have had to choose between quality and scale. WebGL is cheap, but the graphics leave a lot to be desired. Cloud rendering, on the other hand, enables high-end graphics but is far too expensive to scale. Even recent developments in cloud rendering, where up to four users can share one GPU, won’t solve the core problem.


But that’s recently changed.


After a lengthy period of development, the Google Chrome team released something called WebGPU. Unlike the previous iteration, WebGL, this rollout enables your browser to access more of the computational power of your device’s GPU (Graphics Processing Unit).


In other words, real-time computation of high-quality 3D graphics can now happen locally on your device – a fantastic feat that will drive mainstream adoption of immersive virtual spaces at scale.


So how does WebGPU stack up against pixel streaming?


Let’s dive into both technologies, consider their pros and cons based on real-world testing, and explain how to decide which is best for a specific immersive application.

WebGPU vs Pixel Streaming: Head-to-Head

To gain a comprehensive understanding of user experiences, we conducted a test, pitting the two technologies against each other head-to-head.


The objective was to construct an identical environment using Pixel Streaming (leveraging Unreal Engine 5.3 and a MetaHuman avatar) and WebGPU (employing Babylon.js and a Ready Player Me avatar). We gained a nuanced perspective through extensive hands-on work with both solutions and careful analysis of the different outcomes.
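
To make the comparison concrete, here is a minimal sketch of how the WebGPU side of such a test can be bootstrapped with Babylon.js. It is illustrative rather than our production setup: the canvas id and scene contents are placeholders, and it assumes the @babylonjs/core package with its WebGPU backend.

```typescript
// Minimal Babylon.js scene running on the WebGPU backend (illustrative sketch).
import { WebGPUEngine, Scene, ArcRotateCamera, HemisphericLight, MeshBuilder, Vector3 } from "@babylonjs/core";

async function createWebGPUScene(): Promise<void> {
  const canvas = document.getElementById("renderCanvas") as HTMLCanvasElement; // placeholder canvas id

  // Unlike the classic WebGL Engine, WebGPUEngine must be initialized asynchronously.
  const engine = new WebGPUEngine(canvas);
  await engine.initAsync();

  const scene = new Scene(engine);
  const camera = new ArcRotateCamera("camera", Math.PI / 2, Math.PI / 3, 5, Vector3.Zero(), scene);
  camera.attachControl(canvas, true);
  new HemisphericLight("light", new Vector3(0, 1, 0), scene);
  MeshBuilder.CreateSphere("sphere", { diameter: 1 }, scene); // stand-in for the real environment

  engine.runRenderLoop(() => scene.render());
}

void createWebGPUScene();
```

In a real project, the environment and a Ready Player Me avatar (exported as a glTF/GLB asset) would be loaded into this scene in place of the placeholder sphere.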


A quick note: our exploration will concentrate on graphics on mobile devices, tablets, and PCs, excluding VR, which comes with different challenges and solutions.


Our tests show a few core differences:


  1. The fidelity contrast between the two environments is perceptible but not monumental. Particularly on mobile devices, the average user might not discern a substantial difference.
  2. The quality of avatars is the most striking divergence. A MetaHuman avatar, for instance, is considerably more realistic than a Ready Player Me avatar. However, achieving this realism with a MetaHuman avatar demands pricier hardware, making it a consideration for specific use cases constrained by cost and availability.
  3. The remarkable responsiveness of the WebGPU experience stands out, showcasing negligible latency even on an average internet connection. While it might not attain the same photorealistic heights as Pixel Streaming, WebGPU represents a substantial improvement from its predecessor, WebGL.
  4. WebGPU may struggle to render larger virtual experiences in real-time. Transitions between areas inside the experience require brief loading intervals.
  5. Even so, dynamic light and reflections rendered by WebGPU contribute to the overall realism, adding depth to the visual experience.


While there’s no clear winner, as both Pixel Streaming and WebGPU have their own use cases, we’ve identified the respective strengths and weaknesses of each technology.

Pixel Streaming: Pros and Cons

Pixel Streaming was built to render graphics in real time on cloud servers and then deliver that experience to an end-user’s device. That means it can support high-quality experiences without requiring users to download large files, install software, or use high-powered devices.
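
Architecturally, the browser’s job in a pixel-streamed experience is small: it forwards input upstream and plays back a WebRTC video stream that was rendered in the cloud. The sketch below shows that client side in heavily simplified form; the signalling endpoint and message shapes are hypothetical, and a real deployment would normally use the streaming platform’s own frontend library rather than hand-rolled signalling.

```typescript
// Simplified pixel-streaming client: the heavy rendering happens server-side,
// and the browser only negotiates a WebRTC session and plays the video track.
async function connectToStream(videoElement: HTMLVideoElement): Promise<void> {
  const signalling = new WebSocket("wss://example.com/signalling"); // hypothetical endpoint
  const peer = new RTCPeerConnection();

  // Whatever the cloud GPU renders arrives here as an ordinary media track.
  peer.ontrack = (event) => {
    videoElement.srcObject = event.streams[0];
    void videoElement.play();
  };

  // Exchange ICE candidates through the signalling channel.
  peer.onicecandidate = (event) => {
    if (event.candidate) {
      signalling.send(JSON.stringify({ type: "ice", candidate: event.candidate }));
    }
  };

  // The server initiates with an SDP offer describing the video stream.
  signalling.onmessage = async (message) => {
    const data = JSON.parse(message.data);
    if (data.type === "offer") {
      await peer.setRemoteDescription(data.sdp);
      const answer = await peer.createAnswer();
      await peer.setLocalDescription(answer);
      signalling.send(JSON.stringify({ type: "answer", sdp: answer }));
    } else if (data.type === "ice") {
      await peer.addIceCandidate(data.candidate);
    }
  };
}
```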

The Pros of Pixel Streaming

Pixel streaming has several advantages:


High-Quality Graphics: End-users benefit from reduced local device demands, as pixel streaming renders high-quality graphics on robust server-grade hardware. This elevates the overall visual quality and creates a compelling, visually engaging immersive environment.


Unreal Engine: Commonly associated with high-end gaming experiences, Unreal Engine makes it easy to create high-quality graphics. Offering both fidelity and user-friendly features, it comes with tools for automating optimization and simulations, simplifying and speeding up the process for developers.


High-Quality Avatars: In our tests, pixel-streamed MetaHuman avatars implemented through Unreal 5.3 proved to be highly realistic and customizable. This flexibility allows users and enterprises to modify avatars according to their preferences, enhancing the customization aspect of the immersive experience.


Device Independence: Pixel streaming enables users to access graphics-intensive applications on various devices, including low-powered ones. The offloading of rendering tasks to servers liberates users from the constraints of local hardware, promoting inclusivity across different devices.

The Cons of Pixel Streaming

However, there are challenges:


Bandwidth Dependency: Pixel streaming's efficiency relies on a robust internet connection, making it bandwidth-intensive. This dependency may pose limitations, particularly for users with slower connections or those concerned about data usage costs.

Latency Challenges: The issue of latency can affect the seamless flow of the experience. Delays in data transmission, often due to the geographical distance between the user and the cloud hardware, may negatively impact the experience.


Server Infrastructure Costs: Implementing pixel streaming necessitates a robust server infrastructure, leading to high operational costs for maintaining and scaling server farms. The financial aspect becomes a crucial consideration in balancing the benefits of the technology with its maintenance expenses.


Cost & Scalability: The economic factor plays a pivotal role in evaluating the suitability of pixel streaming for specific use cases. Rendering graphics in the cloud can be extremely costly at scale, and as user numbers and engagement time increase, so do the associated costs, requiring a careful cost-benefit analysis.

WebGPU: Pros and Cons

WebGPU was created to unlock the potential of modern graphics hardware, allowing developers to harness the full power of the GPU for rendering and computation tasks within the web browser – no fancy gaming console or revved-up computer required. The previous iteration, WebGL, can also render 3D graphics in the browser, but only at lower fidelity.
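
For a sense of what that direct access looks like in practice, here is a minimal sketch of the raw WebGPU setup a page performs before any rendering happens; engines like Babylon.js wrap these calls for you. It assumes the WebGPU type definitions are available and omits the actual render pipeline.

```typescript
// Request a GPU adapter and device, then bind a canvas to the device (illustrative).
async function initWebGPU(canvas: HTMLCanvasElement): Promise<GPUDevice> {
  if (!navigator.gpu) {
    throw new Error("WebGPU is not supported in this browser.");
  }

  const adapter = await navigator.gpu.requestAdapter();
  if (!adapter) {
    throw new Error("No suitable GPU adapter found.");
  }
  const device = await adapter.requestDevice();

  // Configure the canvas so render passes can draw straight to it.
  const context = canvas.getContext("webgpu") as GPUCanvasContext;
  context.configure({
    device,
    format: navigator.gpu.getPreferredCanvasFormat(),
    alphaMode: "premultiplied",
  });

  return device;
}
```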

The Pros of WebGPU

WebGPU stands out in a few core areas:


Enhanced Performance: WebGPU enables web applications to tap directly into GPU capabilities, fostering more efficient graphics rendering and computation. This direct access significantly elevates performance compared to WebGL, translating into a smoother and more responsive user experience, a crucial aspect of engaging online interactions.


Low Latency: Rendering graphics locally means lower bandwidth requirements. WebGPU delivers a remarkably smooth and responsive experience even on average internet connections.


Cost & Scalability: Unlike Pixel Streaming, WebGPU doesn't demand expensive cloud hardware or robust internet connections. This cost-effective approach facilitates scalability, allowing the experience to reach a large user base without significant additional cost.


Visual Fidelity: WebGPU supports advanced rendering techniques, including compute-driven effects and high-dynamic-range lighting, enabling significantly improved visuals through dynamic lighting, reflections, and shadows. High visual fidelity allows brands to create truly immersive experiences that captivate users and eliminate any lingering metaverse skepticism.

The Cons of WebGPU

Even so, WebGPU has a few limitations, namely:


Limited Browser Support: The widespread adoption of WebGPU faces a hurdle in the form of limited browser support. As of today, not all web browsers fully embrace WebGPU, posing a compatibility challenge for developers, at least in the short term.
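
In practice, this usually means probing for WebGPU at startup and keeping a WebGL path as a fallback, along the lines of the sketch below (the function name and fallback strategy are illustrative).

```typescript
// Probe for WebGPU support and fall back to WebGL when it is unavailable.
async function pickRenderer(): Promise<"webgpu" | "webgl"> {
  if ("gpu" in navigator) {
    const adapter = await navigator.gpu.requestAdapter();
    if (adapter) {
      return "webgpu"; // e.g. initialize a Babylon.js WebGPUEngine here
    }
  }
  return "webgl"; // e.g. fall back to the classic WebGL engine path
}
```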


Learning Curve: WebGPU's low-level nature necessitates a deeper understanding of GPU programming. This characteristic potentially increases the learning curve for developers, demanding a higher level of technical proficiency.


Development Time: The absence of tooling on the level of Unreal Engine prolongs development time for immersive experiences on WebGPU. Developers often find themselves working with lower-level 3D frameworks such as PlayCanvas and Babylon.js, contributing to extended project timelines.


Security Concerns: Direct GPU access raises security concerns, emphasizing the need for meticulous implementation. Poorly developed applications could expose vulnerabilities that might be exploited for malicious purposes.


Avatars: WebGPU applications currently lack top-end avatar options comparable to MetaHuman. The local rendering capacity falls short of handling the same complex 3D graphics as Pixel Streaming, resulting in less photorealistic avatars. This limitation may impact the overall visual richness of WebGPU-powered applications.

So which is better: Pixel streaming or WebGPU?

The choice between Pixel Streaming and WebGPU comes down to the specific use case: every application is different, and its ideal architecture should follow from its unique needs and end-user expectations.


Pixel Streaming excels in delivering remote, high-quality graphics but contends with challenges related to bandwidth, latency, and substantial running costs.


Conversely, WebGPU empowers web applications with direct GPU access, elevating performance, although at the expense of somewhat lower graphical fidelity and additional security considerations.


The decision between these technologies hinges on the intent of the experience and the intended audience. For enterprises developing high-security, high-fidelity experiences with a limited user base, Pixel Streaming emerges as the logical choice. Such solutions, often built on shorter timelines, are well suited to applications like staff training or technical simulations.


On the flip side, if the goal is to construct a scalable, consumer-facing experience for a large user base, WebGPU is the better fit. Mobile use cases in particular favor WebGPU: fidelity differences are less pronounced on smaller screens, and local rendering eases concerns about bandwidth, connection speed, and the cost of mobile data. The path chosen ultimately aligns with the needs of the customer and the specifics of the envisioned use case.


As leading experts in this realm with a wide range of customers, we apply both technologies with equal enthusiasm – we want our clients to be successful and that requires immersive experiences that consumers love. All tech decisions must emanate from that core directive!