In 2020, the mobile gaming industry saw a massive spike in engagement and revenue, growing more than 13% year over year. With game quality and performance critically important to stakeholders in the space, improving them has become a top priority for mobile game development and QA teams globally.
At Apptim, we’ve been collaborating with several teams within the mobile game industry to better understand how they approach game performance testing.
With that aim in mind, we invited Aliaksandr Kavaliou, Performance Test Manager at Playtika, and Robson Siebel, QA Lead at Infinity Games to join our CEO, Sofia Palamarchuk, and Melissa Eaden, Quality Engineering Manager at Unity Technologies, for a panel discussion and Q&A session about mobile game testing!
Below is a recap of the webinar (see video here), including questions by the audience:
Melissa shared that one of the biggest challenges teams find themselves tackling first is testing across the myriad of devices and operating systems that result directly from growing fragmentation within the Android ecosystem. The emergence of newer devices every year, for example, makes testing more complex than ever before.
Mobile game testing and quality isn’t only focused on functionality and performance, Eaden shared. She and her colleagues focus on helping Unity developers make games that are engaging and interactive enough to keep players coming back again and again.
She added that several other areas of game quality present challenges as well.
Robson shared that at Infinity Games, they create games designed to be immersive and relaxing, so there’s nothing worse than a user finding a bug and having that experience ruined. Worse still, if a user who experienced a buggy game leaves a negative review on the app store, it could have a negative financial impact on the business.
Another challenge for his team in particular is that they don’t have a dedicated tester, which in the past resulted in developers receiving bug reports that were imprecise or less helpful than desired.
For Aliaksandr, the greatest challenges at Playtika revolve around capacity testing and capacity planning on a huge setup.
He explained, “Once you have more than one data center, hundreds of virtual machines, a lot of Kubernetes setups and so on… data warehouses, databases, and hundreds of thousands or millions of users, and you’re awaiting a new feature or a marketing campaign that is expected to generate a user traffic increase… Planning for this enormous setup is a headache, really.”
The second biggest challenge he mentioned in mobile game performance testing is the high cost of running end-to-end performance tests. The closer to production, the more expensive these tests are for any team to run. He elaborated that the cheaper the tests are, the less realistic the numbers you obtain, which makes it hard to extrapolate from those results and understand exactly what you need and how much to scale.
For Robson, he has found ways to help his team overcome the challenge of not having dedicated testers. He assumed the role of the QA lead and also adopted Apptim, which has helped enable anyone on the team to run functional and performance tests on their games.
He shared that, before Apptim, “The bug reports were either vague or they were sometimes incorrect and that led to a lot of wasted time for the developer trying to hunt or to reproduce a bug.” He continued,
“Being able to have a tool that recorded the play session of the person and automatically got all of the logs and generated a performance report in the end, is something that for us, as a small company, was pretty huge. We’ve improved the quality of the games a lot because of the crashes or exceptions that we couldn’t catch before, and now we have that access.”
Aliaksandr shared that, to deal with the high cost of end-to-end mobile game performance tests, he found you can reuse existing automation tests. He commented, “It will also save you time in such cases because almost every time you have automation already created for your functionality, why not add some markers or some [performance] measurements, and it will help you to also understand what’s going on where you are.”
But, he warns, although that helps, it can’t protect you from bad code and bad configuration, so a lot of effort and attention should go into the configuration of your performance test so that it doesn’t saturate earlier than expected due to conflicting numbers somewhere. “That’s why it’s hard, step-by-step analytical work,” he asserted.
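The marker idea Aliaksandr describes, instrumenting existing automation with lightweight timing measurements, can be sketched as follows. This is a minimal illustration, not Playtika’s actual setup; the step names and the `perf_marker` helper are hypothetical, and the `time.sleep` calls stand in for real UI interactions in an automated test.

```python
import time
from contextlib import contextmanager

# Collected durations, keyed by marker name.
timings = {}

@contextmanager
def perf_marker(name):
    """Record how long the wrapped test step takes (hypothetical helper)."""
    start = time.perf_counter()
    try:
        yield
    finally:
        timings[name] = time.perf_counter() - start

def test_login_flow():
    # An existing functional automation test, now doubling as a
    # performance probe by wrapping its steps in markers.
    with perf_marker("login"):
        time.sleep(0.05)   # stands in for the real login interaction
    with perf_marker("load_lobby"):
        time.sleep(0.10)   # stands in for loading the game lobby

test_login_flow()
for name, seconds in timings.items():
    print(f"{name}: {seconds * 1000:.0f} ms")
```

Because the functional test already exists, the only added cost is the wrapping, which is what makes this approach cheap compared to dedicated end-to-end performance runs.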
For Melissa, the short answer may seem controversial to testers, but she shared that the best thing she’s learned to do is to get out of the developers’ way. She said,
“Removing myself has been the biggest, reducing that bottleneck, putting automation in place, letting the developers really collaborate on those ideas and moving the production cycles and everything faster because I am not stationed on a team, I’m not embedded, but I’m there as somebody that can coach them and help them.”
She explained that she’s a big advocate of modern testing principles, which give a lot of credence and leverage to the teams themselves. “Developers know the code the best, the more they interact with the code and the more they understand the code, the more they can test it, and they can test it on a level that oftentimes as someone who was traditionally in the role of a tester might not be able to get to.”
She continued that this is the collaborative part: helping developers understand where their knowledge gaps are, and that’s where she comes in. She’ll often be the person who asks questions like, “Hey, you’ve developed this wonderful piece of functionality, but did you think about how to scale it?”
She remarked that there are a lot of moving pieces and developers are really well positioned to handle that. It’s about helping developers uncover their blind spots.
Melissa added, “So I’m not perfect, they’re not perfect, but getting out of the collective folks’ way that deal with the code on a day-to-day basis has really improved removing that bottleneck from the development process, along with adding in DevOps and CI/CD practices, which have just made things a lot easier.”
“Automation always makes things easier as much as it makes it harder, but you have to balance the positive with the negative, right?”
The first answer that came to Robson’s mind about reducing testing as a bottleneck was, “You can always ask the developer to stop creating bugs. That works very well.”
Kidding aside, he shared that in his experience, having a well-defined test pipeline in reliable software that can automate some of the steps goes a really long way.
Another key area of improvement has been reducing the friction between developers and whoever is testing the games:
“Using a tool like Apptim for that can also help catch some underlying bugs and reduce the friction between the tester and the developer because sometimes there is a lot of back and forth.”
To illustrate the point, he said, “Today, I had an issue where someone reported a bug. ‘Hey, this is happening at the end of level six of the game.’ I checked the video from Apptim because he was using it for that session, and it was actually at level 10. So if I didn’t have the video to back it up, I might have lost quite a lot of time chasing a bug that didn’t really exist.”
The consensus among the panel was that what sets mobile game testing apart is the added complexity games have at different levels and the need to understand how those levels interact. Melissa stated,
“Mobile apps are complex as it is, but you have multiple platforms you’re looking at, and then you multiply that by the game complexity and the algorithms and the performance you have to maintain.”
Melissa shared that several other factors add even more complexity to mobile game testing.
For Aliaksandr, when choosing where to prioritize performance, areas like the game lobby and anything a brand-new player sees first are top priority.
He explained that a fresh player should see the functionality of your game as soon as possible and be able to enter the game quickly. This should take top priority because players won’t want to wait long to get started.
But, he added, it depends on what experience the user has during the waiting; “If it’s something interesting, he interacts with something, even during the game, during the waiting lobby or whatever inside the game, it’s maybe a good thing. But if he sees just a blank page and a spinner, it’ll make him crazy. Of course, for fresh users, I suppose getting to the lobby as fast as possible is a critical thing, but then other features should work also fine.”
Aliaksandr volunteered to expand on this follow-up to the previous question, saying that it can be measured from the number of users. But first, it’s important to understand the business-critical transactions. Then, step by step, you can do benchmarking, load testing, and so on. He also shared, “Use such a tool as Apptim to set markers and see what’s wrong with these events, with this transaction, this feature, deep dive, and use code profiling… and so on.”
He stated that self-written solutions are the most accurate, because you inject something into your code and that code shows the real duration, but they take time to build. Therefore, if you have a good profiler on hand that can easily show you the bottlenecks, it’s better to use that first.
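The “good profiler on hand” route can be sketched with Python’s built-in `cProfile`, as a stand-in for whatever profiler a given game stack provides. The `slow_feature` and `fast_feature` functions below are hypothetical placeholders for game code suspected of hiding a bottleneck.

```python
import cProfile
import io
import pstats
import time

def slow_feature():
    # Hypothetical stand-in for a feature suspected of being a bottleneck.
    total = 0
    for i in range(200_000):
        total += i * i
    return total

def fast_feature():
    # Hypothetical stand-in for a cheap feature, for contrast.
    time.sleep(0.01)
    return 42

def run_session():
    slow_feature()
    fast_feature()

# Profile one session and print the most expensive calls,
# without injecting any timing code into the features themselves.
profiler = cProfile.Profile()
profiler.enable()
run_session()
profiler.disable()

buf = io.StringIO()
pstats.Stats(profiler, stream=buf).sort_stats("cumulative").print_stats(5)
print(buf.getvalue())
```

This is the trade-off he describes: the profiler surfaces bottlenecks with no code changes, while hand-injected timing (as in the marker example earlier in the panel) gives more precise durations at the cost of extra work.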
This article was originally published here: https://blog.apptim.com/mobile-game-testing-qa-panel/