Rendering “External Texture”: A Flutter Optimization Story, by @alibabatech
Optimizing the way open-source SDK Flutter deals with external texture for Android and iOS apps

This article is part of Alibaba’s Utilizing Flutter series.

In computing, as in much of life, any given method can see a lot of use before its latent flaws reach a decisive impasse. For Alibaba, discovering one such flaw in software development kit Flutter meant the difference between success and failure in the group’s recent work on a mobile app for its Xianyu(闲鱼) second-hand trading platform.

Now the Alibaba team has successfully optimized Flutter for a new range of uses unique to Xianyu’s marketplace, implementing OpenGL across the entire UI rendering pipeline to reduce CPU and GPU resource overhead.

In today’s article, we look at the group’s optimization efforts alongside a detailed view of the inner workings and “external texture” of Flutter that technical audiences can explore in their own work.

Flutter Rendering Framework

The Flutter rendering framework is organized as a series of layers, each building on the one below it. Its architecture is shown below:


The Flutter rendering framework, by layer

1. Layer Tree: A tree structure output by the Dart runtime through the rendering pipeline. Each leaf node on the tree represents an interface element, such as a button, an image, and so on.

2. Skia: A cross-platform rendering framework sponsored and managed by Google, serving as the graphics engine for iOS/Android applications. The bottom layer of Skia draws via OpenGL; at the time of writing, Vulkan support is very limited and Metal is not supported.

3. Shell: The platform-specific layer, which includes the iOS/Android platform implementations, EAGLContext management, on-screen presentation, and the external texture implementation.

The layout process executes in the Dart runtime and outputs a Layer tree. The pipeline then traverses each leaf node of the Layer tree, calling the Skia engine to draw the corresponding interface element.

After the drawing above completes:

1. iOS: run glPresentRenderBuffer to display the render buffer’s contents on the screen. Android: run glSwapBuffer to swap the buffers of the layer in use for the current window.

2. This completes the on-screen display step of the pipeline.

Based on this principle, Flutter achieves UI isolation between Native and the Flutter Engine: UI code does not need to concern itself with platform implementation details, which is how the cross-platform solution is realized.

Implementation Problems

There are pros and cons to this implementation. Flutter is isolated from Native, which can sometimes make it feel like there is a mountain separating the Flutter Engine and Native. This poses issues when Flutter needs to display memory-heavy image data that lives on the Native side, such as camera frames, video frames, album images, and so on.

Traditional cross-platform frameworks (RN, Weex, and so on) can obtain this data directly by bridging native APIs. Flutter, however, cannot access it directly: the data must be copied across via its defined channel message mechanism, inevitably leading to huge memory usage and CPU utilization while transmitting the data.

Bridging the Gap with External Texture

Flutter provides a special mechanism known as the external texture. A texture here is an image that can be applied to an area of the Flutter view. Textures are created, managed, and updated using a platform-specific texture registry, typically by a plugin that integrates with the host platform’s video player, camera, OpenGL APIs, or similar image sources.

The architecture diagram of LayerTree is shown below:


LayerTree architecture

Each leaf node represents a control laid out by Dart code. The TextureLayer node at the end corresponds to the Texture control in Flutter. When a Texture control is created in Flutter, it means the data displayed by this control will be provided by the Native side. Note that this texture is not a GPU texture; it is the name of a Flutter control.

The following is the final drawing code of the TextureLayer node on the iOS platform. The Android code is similar, differing only in how the texture is obtained.

The code performs three steps:

1. Call the external texture’s copyPixelBuffer method to get a CVPixelBuffer.

2. Create an OpenGL ES texture from it with CVOpenGLESTextureCacheCreateTextureFromImage.

3. Wrap the OpenGL ES texture in an SkImage and call Skia’s drawImage to complete the drawing.

void IOSExternalTextureGL::Paint(SkCanvas& canvas, const SkRect& bounds) {
  if (!cache_ref_) {
    CVOpenGLESTextureCacheRef cache;
    CVReturn err = CVOpenGLESTextureCacheCreate(kCFAllocatorDefault, NULL,
                                                [EAGLContext currentContext], NULL, &cache);
    if (err == noErr) {
      cache_ref_.Reset(cache);
    } else {
      FXL_LOG(WARNING) << "Failed to create GLES texture cache: " << err;
      return;
    }
  }
  fml::CFRef<CVPixelBufferRef> bufferRef;
  bufferRef.Reset([external_texture_ copyPixelBuffer]);
  if (bufferRef != nullptr) {
    CVOpenGLESTextureRef texture;
    CVReturn err = CVOpenGLESTextureCacheCreateTextureFromImage(
        kCFAllocatorDefault, cache_ref_, bufferRef, nullptr, GL_TEXTURE_2D, GL_RGBA,
        static_cast<int>(CVPixelBufferGetWidth(bufferRef)),
        static_cast<int>(CVPixelBufferGetHeight(bufferRef)), GL_BGRA, GL_UNSIGNED_BYTE, 0,
        &texture);
    texture_ref_.Reset(texture);
    if (err != noErr) {
      FXL_LOG(WARNING) << "Could not create texture from pixel buffer: " << err;
      return;
    }
  }
  if (!texture_ref_) {
    return;
  }
  GrGLTextureInfo textureInfo = {CVOpenGLESTextureGetTarget(texture_ref_),
                                 CVOpenGLESTextureGetName(texture_ref_), GL_RGBA8_OES};
  GrBackendTexture backendTexture(bounds.width(), bounds.height(), GrMipMapped::kNo, textureInfo);
  sk_sp<SkImage> image =
      SkImage::MakeFromTexture(canvas.getGrContext(), backendTexture, kTopLeft_GrSurfaceOrigin,
                               kRGBA_8888_SkColorType, kPremul_SkAlphaType, nullptr);
  if (image) {
    canvas.drawImage(image, bounds.x(), bounds.y());
  }
}

Where does the external_texture_ object come from? When RegisterExternalTexture is called on the Native side, an object implementing the FlutterTexture protocol is passed in and assigned to external_texture_. The external texture is thus a bridge between Flutter and Native, through which Flutter continuously obtains the image data to be displayed.


As shown in the figure, the PixelBuffer is the carrier for the data exchanged between Flutter and Native through the external texture. The Native-side data source (camera, player, and so on) writes its data into the PixelBuffer; Flutter takes the PixelBuffer and converts it into an OpenGL ES texture for Skia to draw.

At this point, Flutter can easily draw whatever data the Native side wants to display. Beyond dynamic image data (camera and player frames), this also opens a route for displaying still images outside the Image control, which matters especially when the Native side already has a large-scale image loading library like SDWebImage: rewriting an equivalent in Dart on the Flutter side would be very time consuming and laborious.

Optimizing Processing Speed

The entire process described above seems to solve Flutter’s problem of displaying native-side big data, but it has the following limitations:


As shown in the above diagram, Native-side video image processing generally runs on the GPU to improve performance, while the copyPixelBuffer interface defined by Flutter works on CPU memory. The entire data flow therefore follows a GPU > CPU > GPU path. Memory swapping between CPU and GPU is the most time-consuming of all these operations; one round trip often takes longer than the processing time of the entire pipeline.

The Skia rendering engine requires a GPU texture, and the Native data processing already outputs a GPU texture. That texture could be used directly by sharing resources through the EAGLContext. An EAGLContext object manages an OpenGL ES rendering context: the state information, commands, and resources needed to draw using OpenGL ES.

Flutter’s thread structure is as follows:


Flutter usually creates four Runners. A TaskRunner is similar to iOS Grand Central Dispatch (GCD): a mechanism for executing tasks in a queue. Generally, each TaskRunner corresponds to one thread, and the Platform Runner runs on the main thread.

The following 3 TaskRunners are relevant to this article:

1. GPU TaskRunner: Responsible for GPU rendering related operations.

2. IO TaskRunner: Responsible for the loading of resources.

3. Platform TaskRunner: Responsible for all interaction between Native and the Flutter Engine; runs on the main thread.

Usually, an app that uses OpenGL has a two-thread design: one thread loads resources (from image to texture) and another thread renders. However, to let textures created by the load thread be used on the render thread, the two threads often share one EAGLContext. This is not standard practice and is unsafe: multi-threaded access to the same context requires locking, which inevitably hurts performance and can even cause deadlock when the code is not written carefully.

To avoid this problem, Flutter takes a different approach to EAGLContext use: each thread uses its own EAGLContext, and the contexts share texture data via a ShareGroup on iOS and a shareContext on Android.

(Although the users of the two Contexts are the GPU and the IO Runner respectively, the two Contexts of the existing Flutter logic are created under the Platform Runner. This aspect of Flutter’s design is rather puzzling and creates all sorts of issues, but these fall outside the scope of this article.)

A Native-side module that uses OpenGL likewise creates its own Context on its own thread. To deliver a texture created under that Context to Flutter so that Skia can draw it, the ShareGroup is disclosed: when Flutter creates its two internal Contexts, the ShareGroup is saved on the Native side, and Native then uses that same ShareGroup when creating its own Context. In this way, Native and Flutter can share textures.

Using the external texture this way brings two benefits:

1. Reduced compute time. Alibaba’s tests showed that a frame of 720P RGBA-format video on an Android device takes about 5 ms to read from GPU to CPU, and another 5 ms from CPU to GPU. Even with PBO introduced, about 5 ms of cost remains, which is clearly unacceptable for high-frame-rate scenarios.

2. Reduced CPU memory consumption. As one might intuit, the data now stays on the GPU, which matters especially in picture-heavy scenes where many images are displayed at the same time.

Some Side Notes

Now that the basic principles of Flutter external textures and optimization strategies have been introduced, this section deals with some drawbacks and exceptions to these principles.

Integrity of Flutter’s Texture vs. Pixelbuffer

A question many people ask at this point is: if using a texture directly as the external texture is as good as claimed, why does Google use a PixelBuffer?

Using a texture directly requires disclosing the ShareGroup, which opens up Flutter’s GL environment. That is risky if external OpenGL code misbehaves: to the CPU, an OpenGL object is just a number, and at a breakpoint a Texture or a FrameBuffer appears to the user as a bare GLuint (an unsigned integer handle). If the environments are isolated, a user can call deleteTexture or deleteFrameBuffer at will without affecting objects in other environments; otherwise, those calls may destroy objects belonging to Flutter’s own Context. For a framework designer, the biggest priority is keeping the framework a closed environment so as to maintain its integrity.

Troubleshooting: Crashes

During the development process, the team encountered a strange problem: Flutter would routinely crash during rendering, but nobody could figure out why.

After much searching, the cause was finally identified: glDeleteFramebuffer was being called on the main thread without a prior setCurrentContext, so Flutter’s FrameBuffer was accidentally deleted. If you do choose to use this approach, the Native side’s GL operations should follow at least one of the rules below:

1. Try not to perform GL operations on the main thread;

2. Always call setCurrentContext before any function that performs GL operations.

iOS vs. Android

Most of the logic in this article is based on iOS examples. The overall principles are the same for Android, but the implementation is slightly different.

Flutter’s external texture on the Android side is implemented with SurfaceTexture, whose mechanism is in fact also a copy from CPU to GPU memory. Android OpenGL has no ShareGroup concept; instead it uses shareContext, meaning the Context itself is passed out directly.

Additionally, Android’s GL implementation in the Shell layer is written in C++, so the Context is a C++ object. To share it with a Java EGLContext object on the Android Native side, a call must be made in the JNI layer as follows:

static jobject GetContext(JNIEnv* env,
                          jobject jcaller,
                          jlong shell_holder) {
    jclass eglcontextClassLocal = env->FindClass("android/opengl/EGLContext");
    jmethodID eglcontextConstructor =
        env->GetMethodID(eglcontextClassLocal, "<init>", "(J)V");

    void* cxt = ANDROID_SHELL_HOLDER->GetPlatformView()->GetContext();

    if ((EGLContext)cxt == EGL_NO_CONTEXT) {
        return env->NewObject(eglcontextClassLocal, eglcontextConstructor,
                              reinterpret_cast<jlong>(EGL_NO_CONTEXT));
    }

    return env->NewObject(eglcontextClassLocal, eglcontextConstructor,
                          reinterpret_cast<jlong>(cxt));
}

(Original article by Chen Lujun陈炉军)

Alibaba Tech

First-hand and in-depth information about Alibaba’s latest technology → Facebook: “Alibaba Tech”. Twitter: “AlibabaTech”.

