How To Create Golang REST API: Project Layout Configuration [Part 2] by@danstenger


In a previous post I explained the basics of setting up a Go application for a REST API. Now I'll go into detail by first creating a configurable server, adding an HTTP router (mux) and some DB interaction. Let's get (the indoors party) started!

The application is now running in docker, responds to code changes and reloads for instant feedback. For handling HTTP requests I will add another dependency, an HTTP router (mux). You can read more about it here.

This is a lightweight, high-performance HTTP request router that is easy to use and has everything most APIs will need.

$ go get -u github.com/julienschmidt/httprouter

Time to create the server. I'll place it in the pkg/ directory as it could potentially be reused:

// pkg/server/server.go

package server

import (
	"errors"
	"log"
	"net/http"

	"github.com/julienschmidt/httprouter"
)

type Server struct {
	srv *http.Server
}

func Get() *Server {
	return &Server{
		srv: &http.Server{},
	}
}

func (s *Server) WithAddr(addr string) *Server {
	s.srv.Addr = addr
	return s
}

func (s *Server) WithErrLogger(l *log.Logger) *Server {
	s.srv.ErrorLog = l
	return s
}

func (s *Server) WithRouter(router *httprouter.Router) *Server {
	s.srv.Handler = router
	return s
}

func (s *Server) Start() error {
	if len(s.srv.Addr) == 0 {
		return errors.New("server missing address")
	}

	if s.srv.Handler == nil {
		return errors.New("server missing handler")
	}

	return s.srv.ListenAndServe()
}

func (s *Server) Close() error {
	return s.srv.Close()
}

As usual, the Get function returns a pointer to our server instance, which exposes some public and fairly self-explanatory methods. It will become more obvious when I put this server to work in the main program.

Server will need routes and handlers to communicate with outside world. I'll add that next:

// cmd/api/router/router.go

package router

import (
	"github.com/boilerplate/cmd/api/handlers/getuser"
	"github.com/boilerplate/pkg/application"
	"github.com/julienschmidt/httprouter"
)

func Get(app *application.Application) *httprouter.Router {
	mux := httprouter.New()
	mux.GET("/users", getuser.Do(app))
	return mux
}

// cmd/api/handlers/getuser/getuser.go

package getuser

import (
	"fmt"
	"net/http"

	"github.com/boilerplate/pkg/application"
	"github.com/julienschmidt/httprouter"
)

func Do(app *application.Application) httprouter.Handle {
	return func(w http.ResponseWriter, r *http.Request, ps httprouter.Params) {
		fmt.Fprintf(w, "hello")
	}
}

I define all my routes in router.go and call handlers by explicitly passing the application configuration, so that each handler has access to things like the database, configuration with env vars and more.

I keep my handlers separate from the router and group them under subfolders cmd/api/handlers/{handlerName}. One reason is that each handler will have a corresponding test file. It will also have multiple middleware files, each with their own tests, and there can be a lot of handlers. If not grouped carefully, it can get out of hand very fast.

Now there are a few more building blocks: server, router, logger. Let's assemble it all in the main program:

// cmd/api/main.go

package main

import (
	"github.com/boilerplate/cmd/api/router"
	"github.com/boilerplate/pkg/application"
	"github.com/boilerplate/pkg/exithandler"
	"github.com/boilerplate/pkg/logger"
	"github.com/boilerplate/pkg/server"
	"github.com/joho/godotenv"
)

func main() {
	if err := godotenv.Load(); err != nil {
		logger.Info.Println("failed to load env vars")
	}

	app, err := application.Get()
	if err != nil {
		logger.Error.Fatal(err.Error())
	}

	srv := server.
		Get().
		WithAddr(app.Cfg.GetAPIPort()).
		WithRouter(router.Get(app)).
		WithErrLogger(logger.Error)

	go func() {
		logger.Info.Printf("starting server at %s", app.Cfg.GetAPIPort())
		if err := srv.Start(); err != nil {
			logger.Error.Fatal(err.Error())
		}
	}()

	exithandler.Init(func() {
		if err := srv.Close(); err != nil {
			logger.Error.Println(err.Error())
		}

		if err := app.DB.Close(); err != nil {
			logger.Error.Println(err.Error())
		}
	})
}

A new thing to mention is that I assemble the server by chaining method calls and setting properties on the instance. One interesting detail is WithErrLogger(logger.Error), which simply instructs the server to use my custom logger for consistency.

I start the server in a separate goroutine so that exithandler can still run and gracefully handle program shutdowns.
pkg/logger contains two instances of the standard library Logger: Info prints messages to os.Stdout and Error to os.Stderr. I could have used a fancy logger like logrus or any other, but I wanted to keep it simple.

Next, let's take care of the databases. I use a migration tool written in Go that can be used as a CLI or as a library. You can read more about it and find installation instructions here. After installing it, let's create a few migration files. As seen above, I'll be operating on the /users resource, so naturally I'll have a users table:

$ migrate create -ext sql -dir ./db/migrations create_user

This will generate 2 migration files in db/migrations, up and down, for the users table. Both files are empty, so let's add some SQL.

Up:

-- db/migrations/${timestamp}_create_user.up.sql 
CREATE TABLE IF NOT EXISTS public.users
(
    id SERIAL PRIMARY KEY,
    username VARCHAR(100) NOT NULL UNIQUE
);

And down:

-- db/migrations/${timestamp}_create_user.down.sql
DROP TABLE public.users;

Pretty simple, but that's how it should be, right? Before running migrations, let's use the golang-migrate library and create a program to simplify this process. This will also work nicely in CI/CD pipelines, as it lets us skip installing the golang-migrate CLI as a separate build step. For that to happen I'll add yet another dependency:

$ go get -u github.com/golang-migrate/migrate/v4

I'll name my program dbmigrate:

// cmd/dbmigrate/main.go

package main

import (
	"log"

	"github.com/boilerplate/pkg/config"
	"github.com/golang-migrate/migrate/v4"
	_ "github.com/golang-migrate/migrate/v4/database/postgres"
	_ "github.com/golang-migrate/migrate/v4/source/file"
	"github.com/joho/godotenv"
)

func main() {
	godotenv.Load()
	cfg := config.Get()

	direction := cfg.GetMigration()
	if direction != "down" && direction != "up" {
		log.Println("-migrate accepts [up, down] values only")
		return
	}

	m, err := migrate.New("file://db/migrations", cfg.GetDBConnStr())
	if err != nil {
		log.Printf("%s", err)
		return
	}

	if direction == "up" {
		if err := m.Up(); err != nil {
			log.Printf("failed migrate up: %s", err)
			return
		}
	}

	if direction == "down" {
		if err := m.Down(); err != nil {
			log.Printf("failed migrate down: %s", err)
			return
		}
	}
}

A quick overview of what's happening here. First of all I load the env vars. I then get a pointer to a config instance that gives me easy access to all the vars I need through some helper methods. You might have noticed that there's a new GetMigration method. It simply returns the string "up" or "down" to instruct my program whether it should migrate the database up or down. You can see the latest changes here.

Now that I have this tool in place I can put it to work. The best place I found for it is scripts/entrypoint.dev.sh. By running it there I'm avoiding the common "oh, I forgot to run migrations" issue. Updated version of entrypoint.dev.sh:

#!/bin/bash
set -e

go run cmd/dbmigrate/main.go

go run cmd/dbmigrate/main.go -dbname=boilerplatetest

GO111MODULE=off go get github.com/githubnemo/CompileDaemon

CompileDaemon --build="go build -o main cmd/api/main.go" --command=./main

What's happening here? The first run of dbmigrate uses all the default values from the .env file, so it runs the migrations up against the boilerplate DB. In the second run I pass -dbname=boilerplatetest so that it does the same but against the boilerplatetest DB. Next I'll start my app with a clean state:

# remove all containers
docker container rm -f $(docker container ps -a -q)

# clear volumes
docker volume prune -f

# start app
docker-compose up --build

If all of the above has worked, we should see a users table in both the boilerplate and boilerplatetest databases. Let's check that:

# connect to pg docker container
docker exec -it $(docker ps --filter name=pg --format "{{.Names}}") /bin/bash

# launch psql cli
psql -U postgres -W

# ensure both DBs still present
\l

# connect to boilerplate database and list tables
\c boilerplate
\dt

# do same for boilerplatetest
\c boilerplatetest
\dt

# in both databases you should see users table


And sure enough, it's all as expected. Now what if we add new migrations while the application is running in docker? It's not very convenient to stop docker-compose and rerun the command for the changes to take effect. Well, the dbmigrate program is capable of handling this scenario. In a new terminal tab:

# migrate boilerplatetest db down
go run cmd/dbmigrate/main.go \
  -migrate=down \
  -dbname=boilerplatetest \
  -dbhost=localhost

# you can now repeat steps from above to connect to pg container
# and ensure that users table is missing from boilerplatetest DB.

# now bring it back up
go run cmd/dbmigrate/main.go \
  -migrate=up \
  -dbname=boilerplatetest \
  -dbhost=localhost

One thing to mention here is -dbhost=localhost. This is needed because we connect to the pg container from our host machine. Within docker-compose we can refer to the container by its service name, pg, but we can't do the same from the host.
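For context, the relevant part of the docker-compose file presumably looks something like this sketch (image tag and credentials are assumptions); publishing port 5432 is what makes -dbhost=localhost reachable from the host:

```yaml
services:
  pg:
    image: postgres:alpine   # assumed image tag
    ports:
      - "5432:5432"          # published port: host connects via localhost:5432
```

Inside the compose network, other services reach the database at pg:5432; from the host, only the published localhost:5432 works.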

I hope you have learned something useful. In part 3 I'll go through simple CRUD operations for our users resource, including middleware usage, validations and more. You can also see the whole project and follow the progress here. Stay safe!
