
How to Write Tests for Free

by Sergiy Kukunin, August 1st, 2024

Too Long; Didn't Read

This article takes a deeper look at whether to write tests, weighs the pros and cons, and shows a technique that could save you a lot of time and effort on writing tests.

Today, I'd like to cover the topic of tests. I assume you already know a lot about them and have a strong opinion. There aren't many arguments against writing tests, but in practice, we still see a lot of projects skip that part. It's like flossing your teeth: every dentist repeats the mantra that you need to floss every day. But... we live in the real world, you know.


I understand your concern: tests take time to write, are often hard, and are often not worth the hassle, especially under pressure from the business side. We need to be efficient, we need to deliver faster, and we need to optimize the process by removing its redundant parts. Tests often fall into this category.


But give me a chance to show another perspective on tests: they are not just a luxury you can afford only in some cases, but an efficient technique that actually saves time. You just need to cook it right.

Project Movement

Let me start from afar. Isn't our everyday work comparable to growing a long-lived organism from scratch? Indeed, most of the software we use has years of development and big teams behind it. There is a reason it's called "development" rather than "implementation": implementation is only one part necessary for success. There are also requirements gathering, productizing, integration, etc.


Every day, we bring new features to the project. Every feature brings a little value to the project but at the same time makes it more complicated and, thus, harder to maintain. That's why it's so easy to start a project, but over time, it becomes harder and harder to make changes to the growing codebase.


We need to tackle the growing complexity and keep big projects easy to maintain. The bigger the project is, the more radical approaches we need to apply to keep it maintainable.


The implementation of every feature remains in the project codebase. Often, in order to implement a feature, we need to edit tens of files: a little here, a little there. I know, we're good boys and follow the Open-Closed Principle, but we still modify common files: routers, navigation, common entities, associations, etc. Every feature becomes spread over the codebase and loses its original shape.


When working with a mature codebase, it's really hard, or impossible, to reproduce separate requirements, reconstruct the history of their development, etc. Git helps a little, but we don't check the history every day, only when something is already broken.


With a feature implemented and merged into the codebase, we lose the original requirements. Requirements, especially non-trivial ones, don't come out of one's head immediately; there was product work behind them.


When requirements aren't persisted, they are forgotten, and this leads to bugs, regressions, or even security issues that are hard to notice. It's also much harder to onboard new people to such projects.

Tests Are Much Better Than You Think

Tests are actually a way to persist the original requirements. We can say that tests are formal requirements. Such a "baked-in" specification has advantages over classic documentation, such as Confluence, JIRA tickets, Miro boards, etc.:


  • it's always up-to-date
  • it's always true
  • you can grep it from your favorite IDE
  • it's trackable in Git: you see the evolution of requirements


This is the reason why the best testing framework ever is called RSpec (hey Rubyists), not RTest.
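To make this concrete, here is a minimal sketch of a spec that doubles as a requirements document. The `Discount` class and its rules are entirely made up for illustration; I use Ruby's built-in Minitest spec DSL so the snippet runs with the standard library alone, but the same shape applies to RSpec.

```ruby
require "minitest/autorun"

# Hypothetical domain logic: discount rules for an order total.
class Discount
  # Returns the discount amount for a given order total.
  def self.for(total)
    return 0 if total < 100          # small orders get no discount
    return total * 0.05 if total < 500
    total * 0.10                     # big orders get 10%
  end
end

# The spec reads like a requirements document: each `it` is one rule.
describe Discount do
  it "gives no discount below 100" do
    _(Discount.for(99)).must_equal 0
  end

  it "gives 5% between 100 and 499" do
    _(Discount.for(200)).must_equal 10.0
  end

  it "gives 10% from 500 upward" do
    _(Discount.for(500)).must_equal 50.0
  end
end
```

Grep for "discount" in the spec directory, and you land on the full, always-up-to-date rule set.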


A project with a comprehensive test suite is well protected against regressions and bugs, and it makes refactoring easy and safe. Of course, it doesn't eliminate all bugs, but the remaining bugs are often gaps in the requirements rather than problems with the implementation (the "oh, I never thought about this scenario" moment).


Refactoring is another big topic, but in short, I believe refactoring is an integral part of development that we can't ignore. And a comprehensive test suite is a necessary prerequisite for easy and safe refactoring. I plan to cover this topic in another article soon.


A comprehensive test suite also serves as a set of entry points for all of your functionality. Imagine you need to change the logic of reminders that run once a day. Or you're working on the "Thank you" page that appears after a three-step wizard form is submitted. It's hard to reach that code, and you'd typically create some hacky API endpoint or hack the code to trigger it manually.


Imagine a new developer who recently joined the team; how much time would it take them to figure out how to reach the desired function? This problem doesn't exist with tests, because tests are a plain, simple shortcut to every function in the codebase. Just open a test file, and add your new expectations there.
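For example, suppose the daily reminder logic lives in a plain service object (the `DailyReminder` class and its data shape here are hypothetical). A test file gives you an instant entry point into it, instead of waiting a day or wiring up a temporary endpoint; the sketch uses stdlib Minitest, but an RSpec spec would look much the same.

```ruby
require "minitest/autorun"
require "date"

# Hypothetical service: picks the users whose reminder is due.
class DailyReminder
  def initialize(users)
    @users = users
  end

  # Returns the users whose reminder falls on the given date.
  def due_on(date)
    @users.select { |u| u[:remind_on] == date }
  end
end

describe DailyReminder do
  it "selects only users due on the given day" do
    users = [
      { name: "Ann", remind_on: Date.new(2024, 8, 1) },
      { name: "Bob", remind_on: Date.new(2024, 8, 2) },
    ]
    due = DailyReminder.new(users).due_on(Date.new(2024, 8, 1))
    _(due.map { |u| u[:name] }).must_equal ["Ann"]
  end
end
```

Running this one file exercises the once-a-day code path on demand, with any date you like.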

Tests Are Much Simpler Than You Think

Ok, I hope we agree that tests are more important than they appear, but are they still worth the overhead? What if I told you that with some discipline and habit, tests come for free, or at least at a negligible price? Even in time-pressured environments (hackathons and contests), I've written some tests (far from 99% coverage) that saved me time on debugging.


One time, in a contest I eventually won, a task was based on a previous stage, and having some tests written let me adapt my solution to the new requirements much faster, which led me to victory.


Yes, I write some tests even in time-constrained situations, like hackathons. I truly believe they make me faster.


I can identify two primary ways of writing tests in real life: tests after implementation and tests during implementation. Here's how it typically happens, and why we hate writing tests: I've just finished my implementation, and now I face a decision: do I write some tests now, or maybe next time? Should I stop, or should I cover more edge cases? Does it sound familiar?


In the approach where we write tests after the implementation, we obviously write the implementation without tests. But we still do a lot of mini-runs while writing code. We run a local server, open the application in the browser, and click through pages until we reach the freshly written code.


We write a function, and we don't move on until we ensure it works as expected. We repeat this cycle over and over until the full implementation is done. And it feels so natural to us.


The secret is to drop this manual testing with the local server, the browser, and the clicking, and to use test files instead. When we write a function that we want to check, instead of switching to the browser, we just add another it expectation to our test file. We don't think about the whole test suite at this moment; we just need to pass code execution to our new function.
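In practice, that first check can be as small as this (the `slugify` function is a made-up example, and the spec uses stdlib Minitest rather than RSpec): you've just written a function, so instead of reloading the browser, you drop a quick expectation into the spec file and run it.

```ruby
require "minitest/autorun"

# A freshly written function we want to sanity-check right away.
def slugify(title)
  title.downcase.strip.gsub(/[^a-z0-9]+/, "-").gsub(/\A-|-\z/, "")
end

describe "slugify" do
  # A quick, imperfect first expectation -- the point is to have an
  # entry point, not a polished suite. You'll refine it later.
  it "turns a title into a URL slug" do
    _(slugify("  How to Write Tests for Free! ")).must_equal "how-to-write-tests-for-free"
  end
end
```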


The test case becomes our entry point that we can run any time we want. When you test another edge case manually in the browser, you leave no traces of it after the feature is merged into the upstream. But when you do write a test case for it, you persist it into the codebase. You can be sure that this edge case is covered not just for the given revision but for future revisions too. Future you and your colleagues will thank you for doing that.


So, the main idea is to intercept every switch to the local server and browser for checking new code and replace it with a simple, dummy test case. It doesn't have to be perfect; you'll refactor everything later. It's true that the initial test case takes more time to write than just clicking in the browser, but the subsequent runs are much simpler. Remember the time you filled in that form for the 20th time while trying to catch a bug.


Moreover, you can save even more time by using advanced testing techniques: factories, fakers, mocks, shared examples, VCRs, etc. I love the Ruby ecosystem for its variety of such libraries that make tests slim, expressive, and easy to read.
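As one example of such tooling, a mock lets you test code that talks to a slow or external dependency without actually calling it. Here's a sketch using Ruby's built-in `Minitest::Mock` (the mailer and its interface are hypothetical); RSpec doubles, FactoryBot, and VCR follow the same spirit.

```ruby
require "minitest/autorun"
require "minitest/mock"

# Hypothetical code under test: greets a user via an injected mailer.
def send_welcome(mailer, email)
  mailer.deliver(email, "Welcome!")
end

describe "send_welcome" do
  it "delivers a welcome mail without a real SMTP server" do
    mailer = Minitest::Mock.new
    # Expect exactly one call to #deliver with these arguments.
    mailer.expect(:deliver, true, ["ann@example.com", "Welcome!"])

    send_welcome(mailer, "ann@example.com")

    mailer.verify # raises if the expectation wasn't met
  end
end
```

The test stays fast and deterministic, and it documents exactly what the collaborator is expected to receive.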


If you write test cases during implementation, you can be sure that your final tests are comprehensive, because you checked everything that came to mind during implementation (remember, we didn't click anything in the browser). This makes the test suite more trustworthy. If the tests are green, it means, with high confidence, that there are no bugs in the new revision.


You can refactor the codebase more easily without fear of breaking production (it still might happen, but it wouldn't be your fault).

Wait, Are You About TDD?

Notice that I haven't said anything about TDD yet. TDD became a buzzword that everybody has heard of and everybody knows. At least, they think they know it. TDD is about writing tests, isn't it? What I described above actually matches what TDD means: you don't write an implementation without having a test case for it, without having this entry point. TDD is just defined in a stricter way (one test, one implementation), which creates a barrier for people to actually try and follow it.


I believe the strictness of TDD is unnecessary. I often draft the whole test file first (3-5 test cases for non-existent code) and then try to make them green at once. I often start with a more complex implementation, bypassing the primitive steps. I often do refactorings in advance, etc. My experience gives me some insight, so I can bypass or reorganize TDD steps, and that's totally fine.
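Concretely, the loosened loop can look like this (a made-up FizzBuzz-style example, sketched with stdlib Minitest): draft several expectations for code that doesn't exist yet, then write one implementation that turns them all green at once.

```ruby
require "minitest/autorun"

# Step 2: a single implementation written after all cases were drafted.
def label(n)
  return "fizzbuzz" if (n % 15).zero?
  return "fizz"     if (n % 3).zero?
  return "buzz"     if (n % 5).zero?
  n.to_s
end

# Step 1 (written first): several cases drafted in one sitting,
# before `label` existed -- no one-test-one-change ping-pong.
describe "label" do
  it("handles plain numbers") { _(label(2)).must_equal "2" }
  it("handles threes")        { _(label(9)).must_equal "fizz" }
  it("handles fives")         { _(label(10)).must_equal "buzz" }
  it("handles both")          { _(label(30)).must_equal "fizzbuzz" }
end
```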


The main thing is that I don't spend my time twice, first on manual testing and then on writing tests afterward. So, I get my tests for free. Actually, it would take me more time to implement something without tests, because it would feel unnatural.

TL;DR

Let's summarize:

  • Bigger projects require more radical approaches to keep themselves maintainable.


  • Tests are more than just a set of expectations. They are formalized specifications, and they serve as documentation.


  • Tests serve as a set of entry points.


  • Tests unlock fear-free refactorings.


  • Test overhead is not so big if you drop manual testing in the browser.


  • TDD is a discipline; you have to make an effort to get used to it. After some time, it becomes natural, and you can't go without it.


  • You don't have to follow TDD strictly. Once you get some experience, it's ok to adapt the process to your pace and style.


Thank you for reading to the end; I appreciate it. Share this with your colleagues if you found it useful. I'm happy to read your thoughts and feedback in the comments.


Feel free to follow me to see my future articles on fundamental software topics. I also do consulting and workshops for organizations. I have a Telegram channel where I post insights about software from time to time. I'm also building Platoform, a cost-efficient Kubernetes cluster hosting.


Have a great one!