How To Write Integration Tests in Go Apps

by mbv (@mbvlabs), April 17th, 2024

Too Long; Didn't Read

This article guides you through setting up and running effective integration tests in Go, using Docker and Makefile commands for seamless testing. Gain confidence in your code with best practices and essential testing strategies.

Picture this: you've just left Node for the promised land of Go. You've learned about composition, interfaces, simplicity, domain-driven design (and understood nothing), and unit tests. You feel all grown-up and ready to take on concurrency and ride off into the sunset. You've built a sweet CRUD application that will revolutionize how we do X. Since this is a sure-fire success, you've of course already contacted a real estate agent to view that penthouse apartment you've always wanted and tweeted at multiple VC funds and investors. You're basically ready to go live, but before you release your code into the wild, you whip out your favorite terminal and type in go run main.go. It runs, glory days!

You set up multiple demos and investor meetings. You add "CEO and Founder" to your LinkedIn profile (and Tinder, of course) and tell your mom that you're going to vacate the basement any day now. You show up to your first meeting, do a couple of power poses, and start demoing that really-crucial-endpoint-that-is-the-backbone-of-the-app. With your right thumb on the ENTER key (and your left thumb mentally ready to pop the champagne), you go ahead. Aaaaaand, it completely crashes. What the fuck?! Not only do you get laughed at in your super important meeting, but you also have to tell the real estate agent that you probably can't afford that penthouse apartment, and you have to stay in your mom's basement (and continue trying to convince your Tinder dates that it's definitely a temporary solution).

Hopefully, you haven't experienced the above scenario and are just here because you typed some words into Google. I'm going to show you how I set up and run integration tests in Go, using Docker and some nifty Makefile commands.

Aim of the article

A lot has been written on this subject (check out the resources list at the end of the article). This is not an attempt to improve on what is currently available. It's an attempt to show how you can set up and configure integration tests, both locally and externally (in a CI environment), in a way that is extendable and portable. I'm sure there are ways to improve upon what I'm going to show you here; however, the main goal is to provide you with the tools to build out a great test suite. Once you've understood the concepts and configurations, it's much easier to customize them to your needs. Furthermore, you get a better idea of what to Google (unknown unknowns are a tricky subject), so you can solve issues that might come up later on.


Alright, first things first. You'll need some tools, so please install and set up the following: Go, Docker, and Docker Compose (plus make, which we'll use to run commands).

There is more than one way to skin a cat

As with everything in software, there are many ways to do the same thing and even more opinions about how to do that thing. With integration testing, we might discuss how to set up/configure the test suite, what library (if any) to use, and even when something classifies as an integration vs. unit test, E2E test, and so forth. This can make concepts more complex than they need to be, especially when you're just starting out.

I think one of the best ways to approach such situations is to agree on some sort of guiding/first principles that become the basis for what we do.

I've found one of the best principles for this is:

test behaviour, not implementation details

To expand on this, we want our tests to cover cases/scenarios of how a user might use our software. I use the terms user and software very broadly here, as it's very context-specific. A user might be a co-worker who calls some method in the codebase that we have created, or it might be someone consuming our API. Our tests shouldn't worry about how we implemented the code, only that given X input we see Y response. We are, in a sense, testing that the "contract" we have made as developers by exposing our code/functions/methods to the world is actually kept.

If we want a more concrete definition of an integration test we can dust off the big ol' google book and see how they define it:

...medium-scoped tests (commonly called integration tests) are designed to verify interactions between a small number of components; for example, between a server and a database

Source: Software Engineering at Google, chap. 11

If we borrow some terminology from clean architecture, we can think of it like this: when we include code from the infrastructure layer in our tests, we're in integration testing territory.

What are we doing?

When I started writing integration tests in Go, what I struggled the most with was how to configure and set them up for real-world usage. This, again, will differ based upon the developer/use-case/reality, as this approach might not work well for a big multinational company. My criteria for my integration tests are that they should run across different operating systems, be easy to run, and play nicely with CI/CD workflows (basically, play nice within a dockerized environment).

I'm a strong believer in YAGNI (you aren't gonna need it), so I'm going to show you two ways of setting up integration tests:

  • the vanilla only-using-the-standard-library approach
  • using a library

This should hopefully illustrate how you can start out relatively simple (we could skip the Docker part, but honestly, that would make it a little trickier to set up our CI/CD flow) and then add on as needed.

What we are testing

I'm going to re-use some of the code from my article on how to structure Go apps (which can be found here). If you haven't read it: it basically builds a small app that lets you track your weight gain during the lockdown. The article is due for an update (I would suggest checking this repo for a great example of how to structure Go apps using DDD and clean architecture), so we're just going to focus on adding integration tests based on the existing code (a good example of testing behavior vs. implementation details). We want to make sure that calls to our services behave as expected.

You can find the repo here.

"Infrastructure" Setup

Much of modern web development uses Docker, and this tutorial is no exception. This is not a tutorial on Docker, so I won't be going into much detail about the setup, but I'll provide some foundation for getting started. There are ways to improve upon this setup by extending the docker-compose file, using tags inside our Dockerfile, etc., but that often gives me more headaches than gains. We're violating DRY to some extent, but it allows us to have completely separate dev and test environments. After reading this, you could try to shorten the Docker setup on your own.

Docker test environment setup

FROM golang:1-alpine

RUN apk add --no-cache git gcc musl-dev

RUN go get -u

RUN go install -tags 'postgres'

And the docker-compose.yaml file:

version: "3.8"

services:
    database:
        image: postgres:13
        container_name: test_weight_tracker_db_psql
        environment:
            - POSTGRES_PASSWORD=password
            - POSTGRES_USER=admin
            - POSTGRES_DB=test_weight_tracker_database
        ports:
            - "5436:5432" # we map port 5436 on the local machine
                          # to port 5432 inside the container
        volumes:
            - test-pgdata:/var/lib/postgresql/data

    app:
        container_name: test_weight_tracker_app
        build:
            context: .
            dockerfile: Dockerfile.test
        ports:
            - "8080:8080"
        working_dir: /app
        volumes:
            - ./:/app
            - test-go-modules:/go
        environment:
            - DB_NAME=test_weight_tracker_database
            - DB_PASSWORD=${DB_PASSWORD}
            - DB_HOST=test_weight_tracker_db_psql
            - DB_PORT=${DB_PORT}
            - DB_USERNAME=${DB_USERNAME}
            - ENVIRONMENT=test
        depends_on:
            - database

volumes:
    test-pgdata:
    test-go-modules:

With that in place, we can run our integration tests with:

make run-integration-tests
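The Makefile itself isn't shown in this excerpt; a minimal sketch of what such a target could look like (the target name is taken from the command above, file names are assumed from the compose setup) is:

```makefile
# hypothetical Makefile target; file names assumed from the compose setup
run-integration-tests:
	docker compose -f docker-compose.yaml up -d database
	docker compose -f docker-compose.yaml run --rm app \
		go test -v -run Integration ./...
	docker compose -f docker-compose.yaml down
```

The nice part is that CI only needs Docker and make installed to run the exact same command as your local machine.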

Approach 1: Vanilla setup

Side note: if you want to see the code separated from the rest, check out the branch vanilla-approach/running-integration-tests-using-std-library

There is a tendency in the Go community to lean more towards the standard library than to pull in external libraries, and for good reason - you can do a lot with just the standard library. I'm by no means a purist in this regard, and we are also using external libraries for routing, migrations, etc. here, but I think starting with the standard library and then reaching for other libraries as you go gives a great understanding of what is happening.

In the earlier segment, we got our infrastructure up and running, so we have an active database and a running application, or, in this case, the ability to trigger a test run against our app.

To maximize our confidence in our code, we want our integration tests to mimic our production environment as much as possible. This means that we need to have some setup and teardown functions that can run migrations, populate the database with seed data, and tear everything down after the test so we have a clean environment each time. The last part is important as we want to have a reliable test environment so we don't want previously run tests to affect the current one running.

It's important to mention that this setup requires our integration tests to run sequentially, not in parallel like our unit tests typically do. As a result, running the test suite will take longer. However, I suggest not worrying too much about this initially and focusing on creating integration tests with decent coverage. Once the integration test runs become unmanageably long, then it's time to explore other setups or improve the current one. For instance, one option could be creating a new database for each test, although this would add complexity to the setup. Another option might be using an SQLite database for integration tests, but it's best to address these issues as they arise.

It would be a great exercise to change this code to try out different strategies to speed up the test runs.


I'm a big fan of the golang-migrate library, so that is what we are going to use to write our migrations. In short, it generates up/down migration pairs and treats each migration as a new version, so you can roll back to the last working version if you need to.
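For reference, a golang-migrate up/down pair is just two SQL files sharing a version prefix; the schema below is illustrative, not the repo's actual migration:

```sql
-- migrations/000001_create_users.up.sql
CREATE TABLE users (
    id    SERIAL PRIMARY KEY,
    name  TEXT NOT NULL,
    email TEXT NOT NULL UNIQUE
);

-- migrations/000001_create_users.down.sql (reverses the up migration)
DROP TABLE users;
```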

I'm not going to touch upon migration strategies here, so our tests will be written with the assumption that we have a database with all the latest migrations. To achieve this in our tests we will run the up version before each test is run.

For that, we need the following function:

func RunUpMigrations(cfg config.Config) error {
	_, b, _, _ := runtime.Caller(0)
	basePath := filepath.Join(filepath.Dir(b), "../migrations")
	// note: don't run the URL through filepath.Join/Clean, as that
	// collapses the "//" in "file://" and breaks the source URL
	migrationDir := "file://" + basePath

	db, err := sql.Open("postgres", cfg.GetDatabaseConnString())
	if err != nil {
		return errors.WithStack(err)
	}
	defer db.Close()

	driver, err := postgres.WithInstance(db, &postgres.Config{})
	if err != nil {
		return errors.WithStack(err)
	}
	defer driver.Close()

	m, err := migrate.NewWithDatabaseInstance(migrationDir, "postgres", driver)
	if err != nil {
		return errors.WithStack(err)
	}

	// ErrNoNewMigrations (migrate.ErrNoChange) just means the database is
	// already up to date; any other error should be propagated
	if err := m.Up(); err != nil && !errors.Is(err, ErrNoNewMigrations) {
		return errors.WithStack(err)
	}
	m.Close()

	return nil
}

After the test is done, we would like our environment to be clean, so we also need this function:

func RunDownMigrations(cfg config.Config) error {
	_, b, _, _ := runtime.Caller(0)
	basePath := filepath.Join(filepath.Dir(b), "../migrations")
	migrationDir := "file://" + basePath

	db, err := sql.Open("postgres", cfg.GetDatabaseConnString())
	if err != nil {
		return errors.WithStack(err)
	}
	defer db.Close()

	driver, err := postgres.WithInstance(db, &postgres.Config{})
	if err != nil {
		return errors.WithStack(err)
	}
	defer driver.Close()

	m, err := migrate.NewWithDatabaseInstance(migrationDir, "postgres", driver)
	if err != nil {
		return errors.WithStack(err)
	}

	if err := m.Down(); err != nil {
		return errors.WithStack(err)
	}

	return nil
}

Basically, we open a new connection to the database and create a new migrate instance, passing it the path to the migrations folder, the database name, and the driver. We then run the migrations and close the connection again; pretty straightforward.

Next up, we need some data in our database to run our tests against. I'm a pretty big fan of just keeping it in a SQL file and then having a helper function run the SQL script against the database. To do that, we just need a function similar to the two above:

func LoadFixtures(cfg config.Config) error {
	pathToFile := "/app/fixtures.sql"
	q, err := os.ReadFile(pathToFile)
	if err != nil {
		return errors.WithStack(err)
	}

	db, err := sql.Open("postgres", cfg.GetDatabaseConnString())
	if err != nil {
		return errors.WithStack(err)
	}

	if _, err := db.Exec(string(q)); err != nil {
		return errors.WithStack(err)
	}

	if err := db.Close(); err != nil {
		return errors.WithStack(err)
	}

	return nil
}

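For illustration, fixtures.sql could be as simple as a few INSERT statements (the table and column names here are assumed, not taken from the repo):

```sql
-- hypothetical seed data matching a users table
INSERT INTO users (name, sex, weight_goal, email, age, height, activity_level)
VALUES ('Seed User', 'female', '70', 'seed@example.com', 30, 170, 2);
```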
With that setup, we are ready to write our first tests.

Our first test

Technically, we could test the code interacting with the database by providing a mocked version of the methods in the database/sql package. But that doesn't really give us much, as it would be tricky to mock a situation where you, for example, miss a variable in your .Scan call or have some syntax issue. Therefore, I tend to write integration tests for all my database functionality. Let's add a test for the CreateUser function.

We need the following:

// testing the happy path only - to improve upon these tests, we could consider
// using a table test
func TestIntegration_CreateUser(t *testing.T) {
	// create a NewStorage instance and run migrations
	cfg := config.NewConfig()
	storage := psql.NewStorage()

	err := psql.RunUpMigrations(*cfg)
	if err != nil {
		t.Errorf("test setup failed for: CreateUser, with err: %v", err)
		return
	}

	// run the test
	t.Run("should create a new user", func(t *testing.T) {
		newUser, err := entity.NewUser(
			"Jon Snow", "male", "90", "[email protected]", 16, 182, 1)
		if err != nil {
			t.Errorf("failed to run CreateUser with error: %v", err)
			return
		}

		// to ensure consistency we could consider adding in a static date
		// i.e. time.Date(insert-fixed-date-here)
		// creationTime := time.Now()
		err = storage.CreateUser(*newUser)
		// assert there is no err
		if err != nil {
			t.Errorf("failed to create new user with err: %v", err)
			return
		}

		// now let's verify that the user is actually created, using a
		// separate connection to the DB and pure SQL
		db, err := sql.Open("postgres", cfg.GetDatabaseConnString())
		if err != nil {
			t.Errorf("failed to connect to database with err: %v", err)
			return
		}
		defer db.Close()

		queryResult := entity.User{}
		err = db.QueryRow("SELECT id, name, email FROM users WHERE email=$1",
			"[email protected]").Scan(
			&queryResult.ID, &queryResult.Name, &queryResult.Email,
		)
		if err != nil {
			t.Errorf("this was query err: %v", err)
			return
		}

		if queryResult.Name != newUser.Name {
			t.Error(`failed 'should create a new user': wanted name did not match
				returned value`)
			return
		}
		if queryResult.Email != newUser.Email {
			t.Error(`failed 'should create a new user': wanted email did not match
				returned value`)
			return
		}
		if int64(queryResult.ID) != int64(1) {
			t.Error(`failed 'should create a new user': wanted id did not match
				returned value`)
			return
		}
	})

	// run some clean up, i.e. clean the database so we have a clean env
	// when we run the next test
	t.Cleanup(func() {
		err := psql.RunDownMigrations(*cfg)
		if err != nil {
			if errors.Is(err, migrate.ErrNoChange) {
				return
			}
			t.Errorf("test cleanup failed for: CreateUser, with err: %v", err)
		}
	})
}

We start by creating a new instance of config and storage (just like we would in main.go when running the entire application) and then run the up migrations function. If nothing goes wrong, we should have something similar to what we would have in production.

We then use the storage instance that we just set up to create a new user, open a new connection to query for the user we just created, and verify that said user was created with the expected values. Afterward, we use the Cleanup function provided by the testing package to call the down migrations. This clears the database.

One more thing you might notice is that we have a psql_test.go file.

Open it, and you will find the following function:

// TestMain gets run before any other _test.go files in each package;
// here, we use it to make sure we start from a clean slate
func TestMain(m *testing.M) {
	cfg := config.NewConfig()

	// make sure we start from a clean slate
	err := psql.DropEverythingInDatabase(*cfg)
	if err != nil {
		panic(err)
	}

	os.Exit(m.Run())
}

TestMain is a special function that gets called before all other tests in the package it's located in. Here, we're being (justifiably, I would say) paranoid and call a function that drops everything in the database, so we are sure we are starting from a clean slate. You can find the function in repository/psql.go if you want to take a closer look.

And that's basically it for running integration tests against our database functions. We could use table-driven tests here, and probably should, but this will do for illustration purposes. See here for an explanation of table-driven tests if you don't know them. Next up, let's do a "proper" integration test and ensure that our endpoints are working as expected!

Testing our endpoints

Now we get into the meat of this post.

We've arrived at (or at least closer to) the definition found in the ol' Google book. We're testing that multiple parts of our code work together, mostly in the infrastructure layer, in the way we expect. Here, we want to ensure that whenever our API receives a request that fulfills the contract we as developers put out, it does what we want. That is, we want to test the happy path. Ideally, we would also want to test the sad path (not sure if that's the word for it, but this is my article, so now it is), but integration tests are more "expensive", so it's a delicate balance. You could choose to mock out database responses and test the sad path in a more unit-test kind of way, or you could add integration tests until the time it takes to run the test suite becomes unbearable. I would probably err on adding one integration test too many, and deal with the "cost" when it becomes too big.

Alright, enough rambling. Let's get started.

A side note here: I'm using gofiber, which is inspired by express, the Node web framework. The way I'm setting up the POST request sort of depends on how gofiber does things. I say sort of because the underlying mechanism when sending a POST request from Go is JSON marshaling. I will point it out when we get to it, but just be aware that if you like gorilla or gin, you might have to Google a bit.

A quick rundown of router setup

We won’t spend much time on this, as you can find the code in the repo.

Basically, we have this:

type serverConfig interface {
	GetServerReadTimeOut() time.Duration
	GetServerWriteTimeOut() time.Duration
	GetServerPort() int64
}

type Http struct {
	router        *fiber.App
	serverPort    int64
	userHandler   userHandler
	weightHandler weightHandler
}

func NewHttp(
	cfg serverConfig, userHandler userHandler, weightHandler weightHandler) *Http {
	r := fiber.New(fiber.Config{
		ReadTimeout:  cfg.GetServerReadTimeOut(),
		WriteTimeout: cfg.GetServerWriteTimeOut(),
		AppName:      "Weight Tracking App",
	})

	return &Http{
		router:        r,
		serverPort:    cfg.GetServerPort(),
		userHandler:   userHandler,
		weightHandler: weightHandler,
	}
}

We set up an HTTP struct that has some dependencies to get our server up and running with a router. On that struct, we define some server-specific methods. It's pretty straightforward.

Testing our endpoint to create a new user

Our endpoint is pretty simple. There is no middleware or authentication; everybody can just spam our server with requests and create a ton of users. That's not ideal, but also not really what we care about right now. We just want to make sure our API does what it's supposed to do.

func TestIntegration_UserHandler_New(t *testing.T) {
	cfg := config.NewConfig()
	storage := psql.NewStorage()

	err := psql.RunUpMigrations(*cfg)
	if err != nil {
		t.Errorf("test setup failed for: CreateUser, with err: %v", err)
		return
	}

	userService := service.NewUser(storage)
	weightService := service.NewWeight(storage)

	userHandler := http.NewUserHandler(userService)
	weightHandler := http.NewWeightHandler(weightService)

	srv := http.NewHttp(cfg, *userHandler, *weightHandler)

	srv.SetupRoutes()
	r := srv.GetRouter()

	req := http.NewUserRequest{
		Name:          "Test user",
		Sex:           "male",
		WeightGoal:    "80",
		Email:         "[email protected]",
		Age:           99,
		Height:        185,
		ActivityLevel: 1,
	}

	var buf bytes.Buffer
	err = json.NewEncoder(&buf).Encode(req)
	if err != nil {
		t.Fatal(err)
	}

	// h aliases the standard library's net/http package, since http is
	// taken by our own package
	rq, err := h.NewRequest(h.MethodPost, "/api/user", &buf)
	if err != nil {
		t.Error(err)
	}
	rq.Header.Add("Content-Type", "application/json")

	res, err := r.Test(rq, -1)
	if err != nil {
		t.Error(err)
	}

	if res.StatusCode != 200 {
		t.Error(errors.New("create user endpoint did not return 200"))
	}

	// query the database to verify that a user was created based on the
	// request we sent
	newUser, err := storage.GetUserFromEmail(req.Email)
	if err != nil {
		t.Error(err)
	}

	if newUser.Height != req.Height {
		t.Error(errors.New("create user endpoint did not create user with correct details"))
	}

	t.Cleanup(func() {
		err := psql.RunDownMigrations(*cfg)
		if err != nil {
			if errors.Is(err, migrate.ErrNoChange) {
				return
			}
			t.Errorf("test cleanup failed for: CreateUser endpoint, with err: %v", err)
		}
	})
}

Most of this looks similar to what we had in the repository tests. We set up the database, the services, and lastly, the server. We create a request, encode it, send it to our endpoint, and check the response. An important thing to note here is that we don't know what is happening under the hood of this beast. We just know that we sent a request with some data and that returns OK and creates a user in the database with the expected data. This is also known as black-box testing. We don't care how this is done, we care that the expected behavior occurs.

One thing about the above code is that there is quite a bit of repetition in how we set up the tests and tear them down after each run. It would be nice if we didn't have to copy-paste all of this and take a long hot bath after each test run because we violated DRY. We could fix this ourselves, of course, or we could use Approach 2 - using test suites with Testify.

Approach 2 - Using Testify to run our integration tests

For this, we are going to use the testify package, which I have used for quite some time now. The main thing it does for us is save some configuration lines and ensure consistency in our test suites. It's easy enough to keep the entire codebase in your head when it's only this size, but as it grows, having the setup and configuration done in one place makes things so much easier.

Let's see how the setup is done for our handler integration tests:

type HttpTestSuite struct {
	suite.Suite
	TestStorage *psql.Storage
	TestDb      *sql.DB
	TestRouter  *fiber.App
	Cfg         *config.Config
}

func (s *HttpTestSuite) SetupSuite() {
	log.SetFlags(log.LstdFlags | log.Lshortfile)

	cfg := config.NewConfig()

	db, err := sql.Open("postgres", cfg.GetDatabaseConnString())
	if err != nil {
		panic(errors.WithStack(err))
	}

	err = db.Ping()
	if err != nil {
		panic(errors.WithStack(err))
	}

	storage := psql.NewStorage()

	userService := service.NewUser(storage)
	weightService := service.NewWeight(storage)

	userHandler := http.NewUserHandler(userService)
	weightHandler := http.NewWeightHandler(weightService)

	srv := http.NewHttp(cfg, *userHandler, *weightHandler)

	srv.SetupRoutes()
	r := srv.GetRouter()

	s.Cfg = cfg
	s.TestDb = db
	s.TestStorage = storage
	s.TestRouter = r
}

We take the entire setup step and automate it for each test suite. If we check the documentation for the SetupSuite method, we see that it runs before the tests in a suite are run. So the whole setup we did with the standard library, like so:

func TestIntegration_UserHandler_CreateUser(t *testing.T) {
	cfg := config.NewConfig()
	storage := psql.NewStorage()

	// ..... irrelevant code removed

	userService := service.NewUser(storage)
	weightService := service.NewWeight(storage)

	userHandler := http.NewUserHandler(userService)
	weightHandler := http.NewWeightHandler(weightService)

	srv := http.NewHttp(cfg, *userHandler, *weightHandler)

	srv.SetupRoutes()
	r := srv.GetRouter()

	// ..... irrelevant code removed
}

is automated for us, nice!

Now, we also had some other requirements, namely a "fresh" environment for each test run. This means that we need to run up/down migrations to ensure our database is clean. Before, this was done in the setup and teardown portion of each test, but with testify, we can just define the per-test hooks (SetupTest and TearDownTest, or BeforeTest/AfterTest if you need the test name), where we run the same methods as before, without having to copy-paste them into each test.

One thing you will notice if you check out the repo is that we have almost entirely the same code in the repository test suite as we do here, except for the TestRouter in the struct. I don't mind the duplication here, as my needs for the endpoint tests could change in the future, and keeping my dependencies as few as possible is desirable. You could, if you wanted, make one large integration test suite; I just prefer to split things up, each to their own.

In conclusion

Will the above steps prevent the disaster we went through at the beginning of the article? Maybe, it depends (every senior developer's favorite reply). But it will increase the amount of confidence you can have in your code. How many integration tests to have is always a balance, since they do take longer to run, but there are ways to fix that, so testing the happy path until things become unbearably slow is a good rule of thumb. Cross that bridge when you get to it.

As mentioned in the introduction, this is not an attempt to add something new and revolutionize the way we do integration tests in the Go community, but to give you another perspective and some boilerplate code to get going on your integration testing adventures. Should you want to continue and learn from people way smarter than me (you definitely should), check out the resource section.


Resources

  • Learn Go with tests: basically, read this entire thing. Chris does an amazing job showing you how to get started with Go and test-driven development. Worth the read.
  • HTML forms, databases, integration tests: though it's in Rust rather than Go, Luca does a great job explaining integration testing. Always try to look for which concepts transcend programming languages and which don't. It's always beneficial to have a nuanced view.