For Terrastruct, which has a little over 50K lines of code as of now, I've only written functional end-to-end tests. This blog post describes a successful setup that took some iterating to get to, and it's one I wish existed when I started with a Go API backend.

Context

Functional tests mimic how the client would interact with the server, and have intuitive names for what the client is doing, like `TestDeleteMiddleFrame`.

The reason I don't write other types of tests is that I want to ship quickly as a startup. Functional end-to-end tests can catch regressions early on with a relatively small number of tests that at least execute the majority of the application code.

End-to-end means that my tests purely give input to the API router as a client would, and compare the given responses to the expected responses. Some API calls produce side effects that aren't visible in the response, and I'll check the results of those as well when the response comes back, like confirming that the Redis cache was written to.

Since database reads and writes are involved in most API calls, a first-class consideration should be how the database works for tests. I want to mimic real calls as much as possible, so I don't use mocks or stubs for database calls; instead, I spin up a new database for testing every single time.

API setup

Let's first take a look at how the server is set up. I'm using a web framework called Gin, but since the Go ecosystem encourages lightweight frameworks (Gin is mostly just a router), this guide should be applicable to other variants of servers written in Go.

When the server starts, it initializes the routes defined in the router and registers the middleware used for that router.

```go
func SetupRouter() *gin.Engine {
	router := gin.New()
	injectMiddleware(router)
	initializeRoutes(router)
	return router
}

func injectMiddleware(router *gin.Engine) {
	router.Use(middleware.DBConnectionPool())
	// ...
}

func initializeRoutes(router *gin.Engine) {
	router.POST("/login", handlers.Login)
	// ...
}
```

Database setup

What does `middleware.DBConnectionPool()` do?

```go
func DBConnectionPool() gin.HandlerFunc {
	pool := db.Init()
	return func(c *gin.Context) {
		c.Set("DB", pool)
		c.Next()
	}
}
```

The initialization of this middleware does a one-time initialization of the database itself, and returns the function that every request/response will pass through. Here, it's just attaching the same database pool instance to the request context so that the handlers in the API have access to it.

As a quick aside, if you're not familiar with the concept of connection "pool"s, the tl;dr is this: applications that interface frequently with a database open multiple connections so that requests can be processed concurrently. Since the cost of opening and closing a connection is nontrivial, connections are reused. The first API call to a server that just booted up opens a connection, and if the second call comes after the first one is done, it can reuse the connection the first one opened by checking whether any are sitting in the "pool".
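To make the middleware concrete, here's a minimal sketch of how a handler might pull the pool back out of the request context. The handler name and the `diagrams` table are hypothetical, not taken from the actual codebase; only the `"DB"` context key comes from the middleware above.

```go
package handlers

import (
	"database/sql"
	"net/http"

	"github.com/gin-gonic/gin"
)

// CountDiagrams is a hypothetical handler: it retrieves the *sql.DB pool that
// DBConnectionPool attached to the context and runs a query with it.
func CountDiagrams(c *gin.Context) {
	pool := c.MustGet("DB").(*sql.DB)

	var count int
	if err := pool.QueryRow("SELECT count(*) FROM diagrams").Scan(&count); err != nil {
		c.JSON(http.StatusInternalServerError, gin.H{"error": err.Error()})
		return
	}
	c.JSON(http.StatusOK, gin.H{"count": count})
}
```

Because the pool is shared through the context rather than a global, the tests later on can swap out which database it points at just by changing the environment before the middleware initializes.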
Okay, so what does `db.Init()` do?

```go
var db *sql.DB

func Init() (conn *sql.DB) {
	cred := config.DbCredentials()
	connString := fmt.Sprintf(
		"postgres://%s:%s@%s:%s/%s?sslmode=disable",
		cred["user"], cred["password"], cred["host"], cred["port"], cred["dbName"],
	)
	db = connectPool("main", connString)
	return db
}

func connectPool(applicationName string, connString string) (conn *sql.DB) {
	db, err := sql.Open("postgres", connString) // errcheck
	err = db.Ping()                             // errcheck
	_ = err                                     // error handling elided in this post
	log.Info("Successfully connected")
	return db
}

func DbCredentials() map[string]string {
	m := map[string]string{
		"host":     os.Getenv("CASE_DB_HOST"),
		"user":     os.Getenv("CASE_DB_USER"),
		"password": os.Getenv("CASE_DB_PASSWORD"),
		"dbName":   os.Getenv("CASE_DB_DBNAME"),
		"port":     os.Getenv("PORT"),
	}
	if m["host"] == "" {
		m["host"] = "localhost"
	}
	if m["user"] == "" {
		m["user"] = "me"
	}
	if m["password"] == "" {
		m["password"] = "password"
	}
	if m["dbName"] == "" {
		m["dbName"] = "qa_real"
	}
	if m["port"] == "" {
		m["port"] = "5432"
	}
	return m
}
```

It initializes the database by building a connection string and calling the standard library's `sql.Open`. It also starts with a ping, to fail early if something went wrong.

Testing setup

The relevant parts here are the credentials. I'm using environment variables to determine the connection string. When I run each test, I spin up a new database and override the environment variable to use that database.

```go
func TestAddAndGetDiagram(t *testing.T) {
	database := utils.InitDB()
	defer utils.Cleanup(database)
	// ...
}
```

I put these two lines at the top of every test, so that each run uses a fresh DB. This is what `InitDB` looks like.

```go
var testDBName = "functional_tester"
var cred = config.DbCredentials()

func InitDB() *sql.DB {
	connString := fmt.Sprintf(
		"postgres://%s:%s@%s:%s/%s?sslmode=disable",
		cred["user"], cred["password"], cred["host"], cred["port"], cred["dbName"],
	)
	db, err := sql.Open("postgres", connString)
	if err != nil {
		panic(err)
	}

	// Delete the database if it exists
	_, err = db.Exec("DROP DATABASE IF EXISTS " + testDBName)
	if err != nil {
		panic(err)
	}
	_, err = db.Exec("CREATE DATABASE " + testDBName + " WITH TEMPLATE " + cred["dbName"])
	if err != nil {
		panic(err)
	}
	db.Close()

	connString = fmt.Sprintf(
		"postgres://%s:%s@%s:%s/%s?sslmode=disable",
		cred["user"], cred["password"], cred["host"], cred["port"], testDBName,
	)
	db, err = sql.Open("postgres", connString)
	if err != nil {
		panic(err)
	}

	// So that the server will use the right db
	os.Setenv("CASE_DB_DBNAME", testDBName)
	return db
}
```

I first connect to my development database to execute some commands within Postgres. You can probably do this without an existing database, but this is more convenient. Once inside, I drop any existing databases from other test runs. This is necessary because previous runs might've failed and panicked before the `Cleanup` function, so this extra safeguard ensures that other test runs aren't brought down by a single test failure.

Next, I create the database using the `WITH TEMPLATE` operation. This is basically the database equivalent of copy and paste, and it ensures that the database we use mirrors the actual one.
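If you want to convince yourself that the template copy actually carried the schema over, a quick sanity check is to count the tables in the fresh database. This helper is a hypothetical sketch, not part of the setup described in this post:

```go
package utils

import (
	"database/sql"
	"testing"
)

// assertSchemaCopied is a hypothetical helper: it fails the test if the
// database created WITH TEMPLATE came up empty, i.e. the copy didn't work.
func assertSchemaCopied(t *testing.T, db *sql.DB) {
	t.Helper()
	var tables int
	err := db.QueryRow(
		"SELECT count(*) FROM information_schema.tables WHERE table_schema = 'public'",
	).Scan(&tables)
	if err != nil {
		t.Fatal(err)
	}
	if tables == 0 {
		t.Fatal("test database has no tables; the WITH TEMPLATE copy failed")
	}
}
```

You'd call it right after `utils.InitDB()` in a test if you suspected the copy wasn't working.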
I then connect to the new test database to get a reference for the `Cleanup`.

```go
func Cleanup(db *sql.DB) {
	// Close the current connection (since we can't drop a database while connected to it)
	err := db.Close()
	if err != nil {
		panic(err)
	}

	// Reconnect without being in the current database
	connString := fmt.Sprintf(
		"postgres://%s:%s@%s:%s/%s?sslmode=disable",
		cred["user"], cred["password"], cred["host"], cred["port"], cred["dbName"],
	)
	db, err = sql.Open("postgres", connString)
	if err != nil {
		panic(err)
	}
	defer db.Close()

	// Delete the used database
	_, err = db.Exec("DROP DATABASE " + testDBName)
	if err != nil {
		panic(err)
	}
}
```

Since you can't drop a database with the connection that you're on, I once again reconnect to my dev database to drop it. This cleanup actually tests something extra: a database will not drop if there are active connections. So it also verifies that the connections used in a request don't linger, which can happen if you forget to close them; otherwise, the panic on the last line is triggered.

So this gives us a brand new database that mirrors the one used in development (which is hopefully synced to what is used in production!) and tears it down at the end of each test. Let's look at the rest of the test function.

```go
func TestAddAndGetDiagram(t *testing.T) {
	database := utils.InitDB()
	defer utils.Cleanup(database)

	testRouter := routes.SetupRouter()
	utils.CreateUser(testRouter, tests.TestEmail, tests.TestPassword)
	cookie := utils.Login(t, testRouter, tests.TestEmail, tests.TestPassword)

	newDiagramName := "new diagram"
	addDiagramRawRequest := []byte(fmt.Sprintf(`
		{
			"name": "%v",
			"baseWidth": %v,
			"baseHeight": %v
		}
	`, newDiagramName, 1080, 720))

	addDiagramRequest, err := http.NewRequest("POST", "/api/v1/diagrams", bytes.NewBuffer(addDiagramRawRequest))
	if err != nil {
		t.Fatal(err)
	}
	addDiagramRequest.Header.Set("Content-Type", "application/json")
	addDiagramRequest.AddCookie(cookie)

	resp := httptest.NewRecorder()
	testRouter.ServeHTTP(resp, addDiagramRequest)

	body, err := ioutil.ReadAll(resp.Body)
	if err != nil {
		t.Fatal(err)
	}

	var expectedResponse struct {
		Diagram models.Diagram `json:"diagram"`
	}
	err = json.Unmarshal(body, &expectedResponse)
	if err != nil {
		t.Fatal(err)
	}

	addedDiagramID := expectedResponse.Diagram.ID
	assert.Equal(t, expectedResponse.Diagram.Name, newDiagramName)
	assert.Equal(t, resp.Code, 200)
	// ...
}

// utils
func CreateUser(testRouter *gin.Engine, email, password string) {
	request := []byte(fmt.Sprintf(`
		{
			"username": "%v",
			"password": "%v",
			"baseWidth": %v,
			"baseHeight": %v
		}
	`, email, password, 1080, 720))
	req, _ := http.NewRequest("POST", "/api/v1/register", bytes.NewBuffer(request))
	req.Header.Set("Content-Type", "application/json")

	resp := httptest.NewRecorder()
	testRouter.ServeHTTP(resp, req)
}
```

After the database initialization, I initialize the router fresh and call helper functions that set up certain state in the database. This test is an authenticated request, since only authenticated users can create diagrams, so I create a user. This is done by just sending an API request the way a client would, maintaining the end-to-end principle.
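`CreateUser` is shown above, but `Login` isn't included in the post. Here's a rough sketch of what such a helper could look like; the `/api/v1/login` path, the request fields, and the assumption that the session cookie is the first cookie in the response are all guesses, not taken from the actual code:

```go
package utils

import (
	"bytes"
	"fmt"
	"net/http"
	"net/http/httptest"
	"testing"

	"github.com/gin-gonic/gin"
)

// Login is a hypothetical sketch: it logs the test user in through the router
// and returns the session cookie so later requests can authenticate with it.
func Login(t *testing.T, testRouter *gin.Engine, email, password string) *http.Cookie {
	t.Helper()
	request := []byte(fmt.Sprintf(`{"username": "%v", "password": "%v"}`, email, password))
	req, err := http.NewRequest("POST", "/api/v1/login", bytes.NewBuffer(request))
	if err != nil {
		t.Fatal(err)
	}
	req.Header.Set("Content-Type", "application/json")

	resp := httptest.NewRecorder()
	testRouter.ServeHTTP(resp, req)
	if resp.Code != 200 {
		t.Fatalf("login failed with status %d", resp.Code)
	}

	cookies := resp.Result().Cookies()
	if len(cookies) == 0 {
		t.Fatal("no session cookie returned by login")
	}
	return cookies[0]
}
```

Like everything else here, it goes through the router as a client would rather than touching session storage directly.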
Conclusion

The test constructs the JSON request, feeds it to the router, records the response, unmarshals it, and does assertions on the response. Every test is a variation of this pattern. I have a git hook to run the suite before I push. I've found this to work very well for me so far, and it's caught a number of bugs early on. There's no stubbing, mocks, or anything "simulated" in these tests.

All the operations are isolated from each other: I don't serially test `DeleteDiagram` after `AddDiagram`, because even though it'd save lines of code, it can also lead to false positive tests that only work when run in conjunction with another. Instead, I just create another test called `AddAndDeleteDiagram`.

It's the closest I can get to end-to-end for the entire backend. The downside is that it's very time-consuming to build and tear down the database for every test. On my 2019 MacBook Pro, it's well over a minute for all the tests, and a lot slower on some other machines I've run it on. The tradeoff is worth it for now, but as the application grows, I may have to rely on other types of tests.

Are you a software engineer who hasn't found the perfect diagramming tool for mapping out your architecture? Check out Terrastruct, a diagram maker specifically made for software engineers to get work done.

Previously published at https://terrastruct.com/blog/functional-testing-database-go/