How To Get More Out of Writing Tests in Your Development Routine

Written by regquerlyvalueex | Published 2019/10/26
Tech Story Tags: python-tips | python | unit-testing | software-engineering | coding | programming | latest-tech-stories | software-development

TL;DR
Using your testing framework in your everyday routine can drastically reduce the number of repeated actions you perform during debugging, investigation, or just playing with your code.
DRY
You know the principle: DRY - don't repeat yourself. It applies not only to code. Over the years, working side by side with many other developers, I have noticed that in most cases we all fail to follow it.
Imagine the development process of a simple API: the initial code is already written and you want to test it. The natural impulse is to open your favorite browser with the autogenerated Swagger documentation, Postman, or any other tool and send a request to the API.
Sometimes you want to put a breakpoint somewhere in the code to explore the behaviour of a third-party library with the debugger, or to examine some unclear points. You will do something similar when you need to find and fix a bug.
You simply send a request to the broken API, then investigate the problem with the help of the debugger.
That is the simplest approach and it works perfectly fine, so why would I complain about it? Well, the main reason is the amount of time and manual work needed to start the debugging process. The time to reproduce a bug or test some behaviour varies a lot and can reach a few minutes in complex cases.
When a project is large, complex, or involves many people, this becomes even more relevant, because it makes you take the same steps again and again.
AUTOMATE EVERYTHING
That's when tests come to the rescue. Let's look at a small, simple, and naive example.
Here's a small web API that receives a shopping list and calculates per-product totals and a total price for the cart. I took Flask and pytest to keep the example small, but the idea applies to the development process in general.
app.py
import json

from flask import Flask, Blueprint, request
from flask_restplus import Resource, fields, Api


api = Api(version='1.0', title='Test API', doc='/doc')
namespace = api.namespace('carts')


def calculate(products):
    result = {
        'products': list(products),
    }

    for product in result['products']:
        product['total'] = product['quantity'] * product['price']

    result['total'] = sum(product['total'] for product in products)
    result['average'] = result['total'] / sum(product['quantity'] for product in products)

    return result


# serializers
Product = api.model('Product', {
    'product': fields.String(required=True),
    'price': fields.Float(required=True),
    'quantity': fields.Integer(required=True),
})


Cart = api.model('Cart', {
    'products': fields.List(fields.Nested(Product)),
})


@namespace.route('/cart/')
class CartResource(Resource):
    @api.expect(Cart, validate=True)
    def post(self):
        cart = json.loads(request.data)
        processed_cart = calculate(cart['products'])
        return processed_cart


def create_app():
    flask_app = Flask(__name__)

    blueprint = Blueprint('api', __name__, url_prefix='/api/v1')
    api.init_app(blueprint)
    api.add_namespace(namespace)

    flask_app.register_blueprint(blueprint)

    return flask_app


app = create_app()

if __name__ == '__main__':
    app.run(host='0.0.0.0')
test_cart.py
import pytest

from app import app  # assumes test_cart.py sits next to app.py


@pytest.fixture
def client():

    app.config['TESTING'] = True
    client = app.test_client()

    yield client


def test_valid_cart(client):
    data = {
        'products': [
            {
                'product': 'milk',
                'price': 10,
                'quantity': 1
            },
            {
                'product': 'bread',
                'price': 6,
                'quantity': 2
            }
        ]
    }
    res = client.post('/api/v1/carts/cart/', json=data)
    assert res.status_code == 200
    assert res.json['total'] == 22
Command to run the application:
python app.py
Command to run the tests:
pytest
The application only works with correct input data: if I send an empty list of products, I'll get a 500 error.
Here's where I use curl the first time.
curl -X POST \
  http://localhost:5000/api/v1/carts/cart/ \
  -H 'Content-Type: application/json' \
  -d '{
  "products": [
  ]
}'
I'm not going to provide the full traceback, only the valuable part - the last lines are below.
  #.......
  File "/app/app.py", line 43, in post
    processed_cart = calculate(cart['products'])
  File "/app/app.py", line 20, in calculate
    result['average'] = result['total'] / sum(product['quantity'] for product in products)
ZeroDivisionError: division by zero
To fix this we can add validation for the input data. Let's start by changing the Cart API model, adding min_items=1 to products:
# app.py 

Cart = api.model('Cart', {
    'products': fields.List(fields.Nested(Product), min_items=1),
})
After receiving the same request, the API now returns a proper validation error.
Here's where I use curl the second time.
curl -v \
  http://localhost:5000/api/v1/carts/cart/ \
  -H 'Content-Type: application/json' \
  -d '{
  "products": [
  ]
}'
*   Trying 127.0.0.1...
* TCP_NODELAY set
* Connected to localhost (127.0.0.1) port 5000 (#0)
> POST /api/v1/carts/cart/ HTTP/1.1
> Host: localhost:5000
> User-Agent: curl/7.58.0
> Accept: */*
> Content-Type: application/json
> Content-Length: 23
> 
* upload completely sent off: 23 out of 23 bytes
* HTTP 1.0, assume close after body
< HTTP/1.0 400 BAD REQUEST
< Content-Type: application/json
< Content-Length: 114
< Server: Werkzeug/0.16.0 Python/3.6.8
< Date: Mon, 30 Sep 2019 11:13:59 GMT
< 
{
    "errors": {
        "products": "[] is too short"
    },
    "message": "Input payload validation failed"
}
* Closing connection 0
So, I used curl twice - once to find the issue and once to verify the fix. However, not everything is fixed yet: I can still force a ZeroDivisionError if the sum of quantity over all products is 0, for example:
{
    "products": [
        {
            "product": "milk",
            "price": 10,
            "quantity": 0
        }
    ]
}
So I need to fix this issue and check it again, which means I will use curl two more times. That doesn't look like a big deal, but imagine a long workflow where setting up the check for an issue takes 5, 10, or 15 minutes.
That means you can sometimes get stuck for hours doing the same things over and over again.
Instead of that, you can write a test. Just take the corrupted data and put it into a test function. Add the following code to test_cart.py:
#test_cart.py

def test_empty_cart(client):
    data = {
        'products': []
    }
    res = client.post('/api/v1/carts/cart/', json=data)
    assert res.status_code == 400
Now you can run this with a single command:
pytest -s test_cart.py::test_empty_cart
instead of configuring requests in Postman or curl, or going through a complex flow via your project's web interface.
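As the list of corrupted payloads grows, you can keep them all in one test. Here's a minimal sketch (an optional extension, not part of the flow above) using pytest's parametrize together with the same client fixture; the second payload, a product missing its required fields, is an assumed extra case:
#test_cart.py

import pytest


@pytest.mark.parametrize('data', [
    {'products': []},                     # empty cart, rejected by min_items=1
    {'products': [{'product': 'milk'}]},  # missing required price and quantity
])
def test_invalid_carts(client, data):
    # every invalid payload must produce a 400, never a 500
    res = client.post('/api/v1/carts/cart/', json=data)
    assert res.status_code == 400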
But this is not the only advantage. Let's create a dummy test without any assertions to debug the API with data that causes the ZeroDivisionError, and add an ipdb.set_trace() statement to the calculate function:
#app.py

def calculate(products):
    import ipdb; ipdb.set_trace()
    result = {
        'products': list(products),
    }

    for product in result['products']:
        product['total'] = product['quantity'] * product['price']

    result['total'] = sum(product['total'] for product in products)
    result['average'] = result['total'] / sum(product['quantity'] for product in products)

    return result
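The dummy test is just the zero-quantity payload from above wrapped in a test function, with no assertions yet - a minimal sketch:
#test_cart.py

def test_zero_quantity_cart(client):
    data = {
        'products': [
            {
                'product': 'milk',
                'price': 10,
                'quantity': 0
            }
        ]
    }
    # no assertions yet - the test only exists to drive the debugger
    client.post('/api/v1/carts/cart/', json=data)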
Now you can run:
pytest -s test_cart.py::test_zero_quantity_cart
Thanks to the ipdb.set_trace() statement, code execution stops at the beginning of the calculate function, and you now have access to the debugger.
Important note about pytest and debugging: don't forget the -s flag, which prevents pytest from capturing stdin and stdout.
pytest -s test_cart.py::test_zero_quantity_cart
========================================================================================================= test session starts =========================================================================================================
platform linux -- Python 3.6.8, pytest-4.4.1, py-1.8.0, pluggy-0.13.0
rootdir: /app
collected 1 item                                                                                                                                                                                                                      

test_cart.py > /app/app.py(14)calculate()
     13     result = {
---> 14         'products': list(products),
     15     }

ipdb> products                                                                                                                                                                                                                         
[{'price': 10, 'product': 'milk', 'quantity': 0}]
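From here you can evaluate anything in scope. For example (a hypothetical continuation of this session), checking the denominator that causes the crash:
ipdb> sum(product['quantity'] for product in products)
0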
Now you can play around in the debugger, and if you need to run this code again, you just rerun the test. Once the issue is fixed, you can add some assertions to the test and leave it in your code, since it covers a case that was not covered before - otherwise the problem would not have appeared. In this case, to fix the bug it's enough to add min=1 to the quantity field in the Product model:
#app.py

Product = api.model('Product', {
    'product': fields.String(required=True),
    'price': fields.Float(required=True),
    'quantity': fields.Integer(required=True, min=1),
})
#test_cart.py

def test_zero_quantity_cart(client):
    data = {
        'products': [
            {
                'product': 'milk',
                'price': 10,
                'quantity': 0
            }
        ]
    }
    res = client.post('/api/v1/carts/cart/', json=data)
    assert res.status_code == 400
You can do the same not only with ipdb but with any debugger; for example, in PyCharm you can run tests in debug mode.
Everything that can be executed can also be debugged.
This approach also has the following advantages:
  • it always gives the same results
  • you always have access to tests written in the past
  • you don't need any other software, like Postman, to test your code
  • you have access to the mocks provided by your testing framework (see the sketch after this list)
  • you'll have your specific case covered with tests when you finish your work
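To illustrate the point about mocks - a standalone sketch using the standard library's unittest.mock; fetch_rate and its URL are hypothetical, standing in for any code that hits a real network service:
import requests  # assumed dependency of the code under test
from unittest import mock


def fetch_rate(currency):
    # hypothetical project code that calls a real external API
    return requests.get(f'https://example.com/rates/{currency}').json()['rate']


def test_fetch_rate_without_network():
    fake_response = mock.Mock()
    fake_response.json.return_value = {'rate': 1.25}
    # patch requests.get so the test never touches the network
    with mock.patch('requests.get', return_value=fake_response):
        assert fetch_rate('usd') == 1.25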
WHEN THIS CAN BE USEFUL
Here are some real scenarios when I found this approach useful.
  1. While developing new functionality, I tend to write some basic code to prove the concept, then write some tests and continue development with their help. That can save a lot of time. Sometimes I don't open anything except my IDE for a week.
  2. When chasing a bug, you can put a breakpoint in a suspicious place, look into the local context, and create a test with the data you found in the debugger. That can also save a lot of time.
  3. While developing an integration with a third-party API, you can use tests to simplify and automate authorization, data preparation, and so on. Also, after the integration is done, you can use mocking, so your tests won't send any real requests.
  4. I tend to use a REPL (e.g. IPython) a lot while playing around with new libraries or functionality. If I need to perform some actions many times, it's a good idea to wrap them into a test (see the sketch below).
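Such a wrapper can be as small as copying the REPL lines into a test function; a sketch, reusing the calculate function from this article:
#test_scratch.py - a throwaway test wrapping a repeated REPL experiment

from app import calculate


def test_calculate_playground():
    # the same payload shape used throughout this article
    products = [{'product': 'milk', 'price': 10, 'quantity': 2}]
    result = calculate(products)
    assert result['total'] == 20    # 2 * 10
    assert result['average'] == 10  # 20 / 2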
In conclusion, I would like to say that this little technique can not only save a lot of time, but also help you concentrate on your code and avoid the frustration of constantly repeating the same actions. Coding feels much more enjoyable when you actually do coding.

Written by regquerlyvalueex | Python developer
Published by HackerNoon on 2019/10/26