As an open source low-code / no-code application platform, Joget Workflow allows both non-coders and coders alike to visually build apps rapidly. Beneath this simplicity, though, there is a lot going on, and performance has always been a priority. The focus is to provide a fast runtime for apps, and we diligently try to ensure that there is as little overhead as possible at the platform level.

The Joget team has been hard at work on Joget Workflow v6, currently in public beta. We have recently been looking into ways to streamline and optimize the performance of the platform even further.

This technical article details how this was done for the latest beta release, and could provide helpful tips for your own projects or products. The article gets quite technical and is targeted at developers interested in the inner workings of the platform. Let's take a look under the hood.

Server-Side Code Profiling

The Joget platform is built on Java, so numerous code profiling tools are available just a Google search away. In our development environment, we used the NetBeans Profiler, a fully featured Java profiling tool integrated into the NetBeans IDE.

Back in 2014, we had already performed performance profiling for v4. Here is a screenshot of the hot spots uncovered during sampling back then.

v4 Snapshot Hot Spots

All those prominent hot spots and bottlenecks have been removed since v4 through code optimization and appropriate caching. In the latest v6 snapshot, we uncovered some additional hot spots in several controller methods that could be possible candidates for optimization.

v6 Snapshot Pre-Optimization Hot Spots

The method calls are actually extremely fast, but there are a great many invocations, so we decided to refactor the code and introduce caching using the Ehcache library to reduce the number of calls needed.

The post-optimization results show great promise, with all those invocations avoided, which should reduce CPU cycles under high load.

v6 Snapshot Post-Optimization Hot Spots

All avoidable server-side hot spots look to have been eliminated, so next we moved on to the client-side browser rendering portion of performance.

Client-Side Browser Critical Rendering Path and Perceptual Speed Measurement

Using the new Google Chrome Audits panel powered by Lighthouse, introduced in Chrome 60, we ran a set of tests to measure the quality of Joget apps using the latest Material design-inspired v6 Universal Theme.

Here is the audit result before we started the optimization:

Pre-Optimization Performance Audit

The performance score was a lowly 36, with slow perceptual speed for the user, who only sees the first meaningful paint of the UI after 4 seconds. Ouch! Why was this happening?

Using the Chrome DevTools Performance Analysis, we discovered that the critical rendering path was unoptimized. This simply means that the browser has to do a lot of work, such as loading and parsing the HTML, scripts and CSS, before it is able to render anything to the user.

Pre-Optimization Performance Analysis

In this particular case:

1. There is a client-side AJAX request to process a LESS CSS file.
2. There was blocking while the browser needed to load all resource files (scripts, CSS, etc.) before being able to perform a first meaningful paint.
3. There were some blocking JavaScript functions during the onload and document ready events.

All these factors collectively delayed the rendering of the page, hence affecting the user's perception of the page load speed.

Having identified these issues, we got to work addressing them:

1. The LESS CSS processing was moved server-side using the LESS Engine, and the results are cached.
2. We removed unnecessary blocking of resource loading by making use of asynchronous loading of scripts as well as CSS.
3. Non-critical JavaScript functions called during the onload and document ready events were modified to be called asynchronously using setTimeout.
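To make the client-side techniques concrete, here is a minimal, generic sketch of asynchronous resource loading and deferred initialization. This is illustrative browser code only; the file paths and init functions are hypothetical placeholders, not Joget's actual implementation.

```typescript
// Minimal sketch: load non-critical scripts and CSS without blocking the
// first paint, and defer non-critical work with setTimeout.
// All names below (initCriticalUi, initWidgets, the file paths) are
// hypothetical placeholders.

function loadScriptAsync(src: string): void {
  const script = document.createElement('script');
  script.src = src;
  script.async = true;               // does not block HTML parsing
  document.head.appendChild(script);
}

function loadCssAsync(href: string): void {
  const link = document.createElement('link');
  link.rel = 'stylesheet';
  link.href = href;
  link.media = 'print';              // non-matching media => not render-blocking
  link.onload = () => { link.media = 'all'; };  // apply once downloaded
  document.head.appendChild(link);
}

document.addEventListener('DOMContentLoaded', () => {
  initCriticalUi();                  // only what is needed for the first paint

  // Push everything else behind the first paint instead of running it
  // synchronously inside the onload / document ready handlers.
  setTimeout(() => {
    initWidgets();
    loadScriptAsync('/js/non-critical.js');
    loadCssAsync('/css/non-critical.css');
  }, 0);
});

function initCriticalUi(): void { /* critical, must run before render */ }
function initWidgets(): void { /* non-critical, can run after first paint */ }
```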
With the changes made, running the Chrome DevTools Performance Analysis again gave the following results:

Post-Optimization Performance Analysis

There was a great difference in rendering speed, with the first meaningful paint time dropping tremendously. The performance audit also shows a significant improvement.

Post-Optimization Performance Audit

So, What Were The Optimization Results?

We ran a load test using Apache JMeter to compare the performance of the current stable version of Joget Workflow v5 against the latest build of the optimized v6 code. Running on Apache Tomcat 8.5.16 against a mixed use case test app, there was a 26.2% improvement in throughput (requests per second), so it looks like the optimizations paid off. If that does not sound like much, it is probably because the previous versions were already pretty well optimized, and this latest effort squeezes out the remaining ounces of inefficiency.

Throughput Comparison

What's Next

With the emphasis on performance optimization at the platform level, Joget Workflow incurs low overhead when running apps. This has been the case since v4, and has been improved upon further for the upcoming v6. If there are any specific bottlenecks, they would usually be at the application level.

At the application level, there are various guidelines and best practices available in the Performance Optimization and Scalability Tips article in the Joget Workflow Knowledge Base. v6 also provides the Performance Analyzer, and introduces easy-to-use caching capabilities as described in Performance Improvement with Userview Caching.

To learn more and get started with Joget Workflow, visit https://www.joget.org
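As a closing note for developers who want to gauge the perceptual speed of their own pages in a similar way, the browser's standard Performance APIs expose paint timings and long main-thread tasks. The sketch below uses only standard web APIs and nothing Joget-specific; note that Chrome reports first-paint and first-contentful-paint entries rather than first meaningful paint itself.

```typescript
// Log paint timings and long main-thread tasks using only standard
// browser APIs. Register the observers as early as possible in the page.

const paintObserver = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    // Chrome reports "first-paint" and "first-contentful-paint" entries.
    console.log(`${entry.name}: ${entry.startTime.toFixed(0)} ms`);
  }
});
paintObserver.observe({ entryTypes: ['paint'] });

// Long tasks (over 50 ms) on the main thread, such as heavy onload or
// document-ready handlers, are a common cause of a late first paint.
const longTaskObserver = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    console.log(`long task: ${entry.duration.toFixed(0)} ms (started at ${entry.startTime.toFixed(0)} ms)`);
  }
});
longTaskObserver.observe({ entryTypes: ['longtask'] });
```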