DevOps should be the replacement of process with automation: the out-of-date wiki pages, the one person on the team who knows how to configure the servers correctly, that person you always go to in order to fix the database issues.
Everything-as-code. Infrastructure-as-code, networking-as-code and database-upgrades-as-code.
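To make that concrete, here is a minimal sketch of what “as-code” looks like in practice: a Chef recipe that defines a web tier declaratively. The package, template and service names are illustrative, not taken from any real cookbook.

```ruby
# recipes/default.rb -- the web tier as code, not tribal knowledge.
# All names here are hypothetical examples.
package 'nginx'

# Configuration lives in version control and is rendered from a template,
# so a change is a reviewed commit rather than a hand-edit on a server.
template '/etc/nginx/nginx.conf' do
  source 'nginx.conf.erb'
  owner  'root'
  group  'root'
  mode   '0644'
  notifies :reload, 'service[nginx]'
end

service 'nginx' do
  action [:enable, :start]
end
```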
But in this world of everything-as-code, whose responsibility is it to test it? Does the role of the tester still have to be limited to the core application? Who tests all the other “apps” and why don’t people think of them as mission-critical?
Maybe in a DevOps world you can use tools, applications and frameworks to build your perfect deployment and automation pipeline. Articles so far on “TestOps” or “DevQAOps” point testers toward these tools so that they can run their integration tests and iterate at the same pace as the team. But it’s far broader than that: I think DevOps has changed the volume, complexity and types of applications being built, and testing teams are missing out by sticking to the same processes and tools.
DevOps also means that people who were focused on operations now have to understand software, in the form of DSLs and scripts. Take Chef, for example: I work closely with a team that uses Chef for configuration management of custom applications and OS environments. They develop C#-based microservices, and yet the second-biggest language in their GitHub organisation is Ruby.
Why? Because as soon as you stray outside the lines with Chef, Puppet, SaltStack or Ansible, you’re extending their product to work with yours, and that means coding in Python or Ruby. In many teams, it is quite common for these libraries to go into production completely untested.
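This is what that extension code often looks like: a helper library mixed into recipes. The module below is a hypothetical example of the kind of Ruby that accumulates around a Chef cookbook and frequently ships with no tests at all.

```ruby
# libraries/db_helpers.rb -- hypothetical helper code of the sort that grows
# around a cookbook once you stray outside the built-in resources.
module MyCompany
  module DbHelpers
    # Builds a connection string from node attributes. A typo in an
    # attribute name here only fails at converge time, in production,
    # unless a unit test (e.g. ChefSpec) exercises it first.
    def db_connection_string(node)
      host = node['mycompany']['db']['host']
      port = node['mycompany']['db']['port'] || 5432
      name = node['mycompany']['db']['name']
      "postgres://#{host}:#{port}/#{name}"
    end
  end
end

# Make the helper available inside recipes (standard Chef library pattern).
Chef::DSL::Recipe.include(MyCompany::DbHelpers)
```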
Let’s look at this typical DevOps pipeline (developed by IBM). In this case, there are three environments.
But if our environments are software-defined, built using repeatable and tested DevOps tools, why have a separate staging environment?
When cloud servers can be deployed in seconds, why have long-running environments other than production at all? Surely you want each environment to be clean, so that you know exactly the scenario and configuration under which you’re testing. CI/CD makes this possible, but only as far as the “build” or “test” environment; after that, it gets handed over to a testing team that may work on a totally different cadence.
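One way a pipeline can earn that confidence on a freshly built environment is an automated compliance check. Below is an InSpec sketch; the control name and the service and port it checks are illustrative assumptions, not from a real profile.

```ruby
# verify/environment_spec.rb -- an InSpec control a CI/CD pipeline could run
# against each clean, software-defined environment before handing it over.
# The service and port checked here are hypothetical examples.
control 'baseline-01' do
  title 'Web service is healthy on a freshly built environment'

  describe service('nginx') do
    it { should be_enabled }
    it { should be_running }
  end

  describe port(443) do
    it { should be_listening }
  end
end
```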
One of the more dangerous things I’ve seen in DevOps scenarios is developers running diagnostics and making environment-specific changes without documenting them. The next time the package gets deployed, that configuration might still be hanging around; firewall rules are a classic example. That should be in the Chef recipe or Ansible playbook, not an “oops, I forgot to do that”.
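Capturing that manual change as code is usually a few lines. The sketch below assumes the community firewall cookbook from Chef Supermarket; the port and rule name are illustrative.

```ruby
# recipes/firewall.rb -- the "I'll just open that port" change, captured
# as code. Assumes the community 'firewall' cookbook; values are examples.
firewall 'default' do
  action :install
end

firewall_rule 'allow application traffic' do
  port     8080
  protocol :tcp
  command  :allow
end
```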
Why does this matter? Because a mature testing team or capability is about deploying with confidence each and every time. And here, the testers aren’t involved at all.
There are all sorts of packages, scripts, tools and libraries that teams build that aren’t in the scope of a typical testing team:

- Deployment
- Monitoring
For a complex application, these aren’t just five-line Perl scripts here and there; they can be huge, monolithic, badly written and completely untested tools.
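They don’t have to stay untested. As a sketch of how a testing team could pull this tooling into scope, here is a minimal ChefSpec unit test; it assumes the illustrative nginx recipe from earlier, and the cookbook name and platform are examples.

```ruby
# spec/default_spec.rb -- ChefSpec sketch: the infrastructure code gets the
# same unit-level feedback loop as the core application. Names are examples.
require 'chefspec'

describe 'mycookbook::default' do
  let(:chef_run) do
    ChefSpec::SoloRunner.new(platform: 'ubuntu', version: '16.04')
                        .converge(described_recipe)
  end

  it 'installs nginx' do
    expect(chef_run).to install_package('nginx')
  end

  it 'reloads nginx when the config template changes' do
    template = chef_run.template('/etc/nginx/nginx.conf')
    expect(template).to notify('service[nginx]').to(:reload)
  end
end
```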
If we put those code-bases into a test pyramid and declare them out of scope for testing professionals, I would ask the reader to question: what is the purpose of the testing team?
To validate that the delivered application is functioning to spec? To validate quality within the bounds of a budget and scope?
Here are some of the things that a tester could be doing in a mature DevOps environment.