Why Visual Testing Is Essential for Applications With 3D Engines

When testing applications with embedded 3D engines, or any other software that generates and processes computer graphics, verifying regular functionality is not sufficient. It is also necessary to verify the visual appearance of the application, providing a thorough evaluation of its design and layout. For instance, when working on a game written in Unity 3D, Quality Assurance is expected to assess the logic of all game levels to ensure the absence of contradictions and unexpected behavior. However, none of this touches how 3D characters and environments are displayed, whether lights are reflected as expected, or whether scaled objects retain their values when the level is restarted. Indeed, there are lots of scenarios that cannot be covered by regular testing.

Everything described above is an example of what visual testing is capable of handling. But why can't we simply keep an eye on the visual aspects of such applications instead of spending much more time covering all these scenarios? Well, it's a fair point. Considering factors like the high costs of development, maintenance, and additional runs, it is possible to assume it's not worth the trouble. However, putting it all on humans can also invoke a number of problems:

- Human error: Depending on the configured failure threshold, an automation script compares the actual image with the reference one pixel by pixel – a task hardly feasible for a human.
- Inconsistent results: Since the application is tested by multiple individuals, the visual testing results can vary due to the peculiarities of each tester's eyesight and visual perception of one or another asset.
- Repetitive regression scenarios: Just like in regular manual testing, automation scenarios allow us to reduce the time spent on regression checks and focus on other potential scenarios.
- Costs of manual testing: These costs can often surpass the price of automation and subsequent maintenance.
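The pixel-by-pixel comparison described above can be sketched in plain JavaScript. This is a minimal illustration of the idea, not the code of Cypress or of any specific plugin; the function names and the flat RGBA-array representation are my own choices:

```javascript
// Minimal sketch of threshold-based pixel comparison. Real tools (e.g. pixelmatch)
// additionally apply per-channel color tolerance and anti-aliasing detection.
// Images are flat RGBA byte arrays of equal dimensions.
function diffRatio(actual, reference) {
  if (actual.length !== reference.length) {
    throw new Error('Images must have the same dimensions');
  }
  let differing = 0;
  const totalPixels = actual.length / 4; // RGBA: 4 bytes per pixel
  for (let i = 0; i < actual.length; i += 4) {
    // A pixel counts as different if any of its R, G, B, or A bytes differ.
    if (
      actual[i] !== reference[i] ||
      actual[i + 1] !== reference[i + 1] ||
      actual[i + 2] !== reference[i + 2] ||
      actual[i + 3] !== reference[i + 3]
    ) {
      differing += 1;
    }
  }
  return differing / totalPixels;
}

// The run passes if the share of differing pixels stays within the threshold.
function snapshotPasses(actual, reference, failureThreshold) {
  return diffRatio(actual, reference) <= failureThreshold;
}
```

For two 2-pixel images that differ by one byte in one pixel, `diffRatio` returns 0.5 – a difference no human eye would quantify reliably.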
What Visual Testing Ensures

- The entire interface page (all UI elements, their position, scale, color, etc.)
- Visual 2D/3D assets, in terms of geometry: polygonal structure for 3D and vector structure for 2D
- Lights and shadows (the effect that different sources of light give to an object)
- Materials (rendering of materials on objects in the current scene)
- Specific shaders used for rendering

All these checks are better not fully replaced, but at least complemented, by exact calculations of difference percentages.

How to Test Web Applications

First, you will need to configure a Cypress project. Cypress is a multi-purpose automation framework that provides all the necessary tools to cover automation scenarios, including visual testing, which is our agenda for today. Undoubtedly, there are many other tools that support visual testing, but covering all of them is beyond the scope of this article.

Installing Cypress

To install Cypress and configure your first project, I recommend looking through this article: https://learn.cypress.io/testing-your-first-application/installing-cypress-and-writing-your-first-test

Now, we are good to go.

Installing a Cypress Plugin for Comparing Snapshots

Install the cypress-image-snapshot plugin according to these instructions: https://www.npmjs.com/package/cypress-image-snapshot

Adding a Custom Command

In addition to what the article says, let's add a custom command to the commands.js file, or you can do it the same way I did – create a separate file dedicated only to your commands.

commands.js:

Cypress.Commands.add('compareSnapshotWithBaseImage', (options) => {
  const nameDir = options?.nameDir;
  const element = options?.element;
  const fullName = nameDir.replace('cypress/e2e/', '');
  return cy
    .get(element)
    .matchImageSnapshot(fullName);
});

Here, I've added a custom command, compareSnapshotWithBaseImage. When executed for the first time, it creates a reference image with a set name.
During subsequent runs, it compares the actual image with the previously captured one, which is treated as the reference, and outputs a difference image if they differ by more than the allowed failure threshold. This command operates on a particular object passed with the options parameter.

Picking Locators

It's important to pass a locator of the object, not the object itself. In my case, canvas.getCanvasSelector() returns the following string: 'canvas#3DCanvas'. Just to remind you, the locator can be pulled out from the Inspector (Firefox) or Elements (Chrome) tab of the Dev console (F12).

When you click on the mouse cursor icon in the upper left corner, you enter element-picking mode. Move the cursor over the targeted element (in my case, it is a 3D canvas), and click on it.

Here is an example of the locator found in the dev console. To let Cypress know about this element, there are two options:

- Identify the type of the element (canvas) and use the hash sign (#) to concatenate it with the element's id (3DCanvas). The resulting string is 'canvas#3DCanvas'.
- Alternatively, you can simply use the element's id preceded by a hash: '#3DCanvas'.

Calling a Comparison Function

Therefore, I can invoke the comparison function from any part of the code using the following syntax:

And('I compare 3D scene with the reference image', () => {
  const specName = Cypress.spec.relative + '/';
  cy.wait(2000);
  cy.compareSnapshotWithBaseImage({
    nameDir: specName + 'canvas_img',
    element: canvas.getCanvasSelector(),
  });
});

Testing Results

Once this function is called, the first run of the test always passes, as Cypress cannot find a file with the specified name and path and creates a new one. Here is what I have after my first run: an actual image of a 3D scene, with all its controls and a model of a motorcycle, is captured as the reference.
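For reference, the allowed failure threshold is set when the snapshot command is registered in the Cypress support file. Below is a sketch of such a configuration based on the cypress-image-snapshot README; the 5% and 0.1 values are arbitrary examples, not recommendations, and this fragment assumes the plugin is installed:

```javascript
// cypress/support/commands.js — configuration fragment (sketch, not a drop-in file)
import { addMatchImageSnapshotCommand } from 'cypress-image-snapshot/command';

addMatchImageSnapshotCommand({
  failureThreshold: 0.05,          // fail only if more than 5% of pixels differ...
  failureThresholdType: 'percent', // ...measured relative to the whole image, not in pixels
  customDiffConfig: { threshold: 0.1 }, // per-pixel color sensitivity passed to the differ
  capture: 'viewport',             // capture the viewport rather than the full page
});
```

With this in place, every cy.matchImageSnapshot() call – including the custom command above – uses these defaults, and individual calls can still override them.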
This is a scenario where I am checking how changing directional light intensity affects the appearance of the 3D motorcycle. As you can see, I have multiple sources of light in the scene, so the next run will ensure that changing the directional light of the entire scene does not affect these point lights.

A combination of these verifications is also possible and saves build running time. For example, you can combine changing lights with setting the object's transformations (rotation, position, scale). But do not combine too many features into one test: it makes debugging more difficult and failed runs less informative. If a test fails, say, on the light-change verification, you'll never know how setting transformations works until the lights are fixed.

Now, let's emulate a situation where setting the light intensity does not work properly. To demonstrate this 'bug', I changed the value of light intensity from 1.2 to 0.6 to get a darker image of the motorcycle. However, remember that Cypress will compare it with the reference image captured above – a lighter image. Let's see what we have after this test run.

The left, lighter image is the reference one, and the right image is the actual image captured in the latest run. The middle image is a diff image: it shows exactly which pixels differ between the two.

If you manually compare the left and right images, you can certainly see the difference. However, this difference is quite subtle. It is likely that during manual testing such broken lights would be overlooked, whereas automated tests will catch them.

How to Test Desktop Applications

General Information About the Squish Framework

For desktop applications, I prefer to use the Squish framework by the Froglogic company. According to the official documentation, one fundamental aspect of Squish's approach is that the AUT (application under test) and the test script that exercises it are always executed in two separate processes. This ensures that even if the AUT crashes, it should not crash Squish.
Squish runs a small server, squishserver, that handles the communication between the AUT and the test script. The test script is executed by the squishrunner tool, which in turn connects to squishserver. squishserver starts the instrumented AUT on the device, which starts the Squish hook.

With the hook in place, squishserver can query AUT objects regarding their state and can execute commands on behalf of squishrunner, while squishrunner directs the AUT to perform whatever actions the test script specifies. All the communication takes place over network sockets, which means that everything can be done on a single machine, or the test script can be executed on one machine and the AUT can be tested over the network on another machine. The following diagram illustrates how the individual Squish tools work together.

A step-by-step guide on how to create screenshot verifications can be found here: https://doc.qt.io/squish/how-to-create-and-use-verification-points.html#how-to-do-screenshot-verifications, while I would like to focus more on the interface of screenshot comparison, as it looks very informative.

Working With the Verification Point Viewer

I ran a test on 3D desktop software in which my 3D model was moved along the Y-axis, so my screenshot verification failed. Let's open the Verification point and see the difference. To do this, find the line about the Verification point failure on the Test Result tab, right-click on it, and select the View Differences option.

The first tab, Differences, comprises all the available options for analyzing screenshots:

- Flicker: the actual and reference images are shown alternately, displaying the difference with a red outline.
- Subtract: all common pixels are removed, and only the different ones are shown. The way the subtraction works can be influenced by changing the color settings and by checking or unchecking the Invert checkbox.
- Gray Diff: works pretty much the same as the Subtract mode, but instead of highlighting all pixel differences in color, it focuses on differences in grayscale (the Value component of HSV – Hue, Saturation, Value). This means that it considers variations in shades of gray rather than color differences.
- Red/Green Diff: absolutely identical pixels are green; the areas that differ between the actual and reference images are red.
- Split View: features a draggable slider that displays the reference image when dragged to the right and the actual image when dragged to the left (as shown in the image above).

Besides the visual representation of screenshot differences, there is one more tab of the Verification point viewer, called Comparison Mode. The Comparison mode is crucial for specifying how the actual application behavior should be compared to the expected behavior. It defines the criteria for determining whether the verification point passes or fails, and what the acceptable failure threshold is.

Several other options and settings are available for the Comparison mode, but I would like to talk more about color histograms. Here you can see the Histogram for my 3D test model. It is pretty useful for cases when the color profile didn't change significantly, but the object was rotated, scaled, or transformed – exactly what happened to my object when I moved it along the Y-axis.

How it works:

1. Divide the color range (0-255) of each color component (RGB) of every pixel by the number of bins (or baskets), and count the number of pixels that fall into each bin.
2. Divide the number of pixels in each bin by the total number of pixels in the image to get normalized values; these values are put back into the corresponding bins.
3. Subtract the values of all corresponding bins from one another and sum up the resulting absolute values. This value represents the difference between the reference and actual images.
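The steps above can be sketched in a few lines of JavaScript. This is my own simplified, single-channel illustration of the binning idea, not Froglogic's implementation (which works per RGB component):

```javascript
// Simplified sketch of histogram-based image comparison for one color channel
// (values 0-255). Illustrative only; Squish applies this per RGB component.
function histogramDifference(pixelsA, pixelsB, numBins) {
  const binWidth = 256 / numBins;
  const histogram = (pixels) => {
    const bins = new Array(numBins).fill(0);
    for (const value of pixels) {
      // Step 1: count how many pixels fall into each bin of the color range.
      bins[Math.min(numBins - 1, Math.floor(value / binWidth))] += 1;
    }
    // Step 2: normalize — each bin becomes the fraction of all pixels it holds.
    return bins.map((count) => count / pixels.length);
  };
  const binsA = histogram(pixelsA);
  const binsB = histogram(pixelsB);
  // Step 3: sum the per-bin differences to get the overall distance.
  let difference = 0;
  for (let i = 0; i < numBins; i += 1) {
    difference += Math.abs(binsA[i] - binsB[i]);
  }
  return difference;
}
```

Note why this suits moved or rotated objects: rearranging the same pixels leaves the histogram unchanged, so the difference stays at zero even though a pixel-by-pixel diff would fail.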
This mode lets you configure the number of Bins, as well as the Allowed failures value, which represents the maximum difference between two images for which they are still considered to be "equal." The interface for setting the number of Bins and Allowed failures is shown in the image.

Conclusion

Long story short, I hope I helped you understand that visual testing plays a significant role in testing applications with visual assets, such as design platforms, games, game development engines, 3D modeling, and engineering software. Automating the detection of most visual defects can mitigate the impact of the 'human factor' and facilitate further analysis.

Resources:

https://testsigma.com/guides/visual-testing/#What_is_Visual_Testing
https://www.coderskitchen.com/visual-testing-of-a-unity-based-3d-world/
https://learn.cypress.io/testing-your-first-application/installing-cypress-and-writing-your-first-test
https://dev.to/bornfightcompany/cypress-tests-in-bdd-style-52n5
https://doc.qt.io/squish/screenshot-verification-point-dialog.html

Header image: Image by pikisuperstar on Freepik