
How To Create Golang REST API: Project Layout Configuration [Part 1]

by @danstenger, GO and functional programming enthusiast

During the past couple of years I have worked on a few projects written in Go. I noticed that the biggest challenge developers face is the lack of constraints or standards when it comes to project layout. I'd like to share some findings and patterns that have worked best for me and my team. For better understanding, I'll go through the steps of creating a simple REST API.

I'll start with something that I hope one day will become a standard. You can read more about it here. Let's name it boilerplate:

# replace <project-path> with your own, e.g. github.com/<user>/boilerplate
mkdir -p \
  $GOPATH/src/<project-path>/pkg \
  $GOPATH/src/<project-path>/cmd \
  $GOPATH/src/<project-path>/db/scripts \
  $GOPATH/src/<project-path>/scripts

pkg/ will contain common/reusable packages, cmd/ the programs, db/scripts database-related scripts, and scripts/ general-purpose scripts.

No application is built without Docker these days. It makes everything much easier, so I'll use it too. I'll try not to over-complicate things for starters and only add what's necessary: a persistence layer in the form of a PostgreSQL database, and a simple program that establishes a connection to the database, runs in the Docker environment and recompiles on each source code change. Oh, almost forgot, I'll also be using Go modules! Let's get started:

$ cd $GOPATH/src/<project-path> && \
go mod init

Let's create a Dockerfile for the local development environment:

# Start from the golang v1.13.4 base image to have access to go modules
FROM golang:1.13.4

# create a working directory
WORKDIR /app

# Fetch dependencies in a separate layer as they are less likely to
# change on every build and will therefore be cached, speeding
# up the next build
COPY ./go.mod ./go.sum ./
RUN go mod download

# copy source from the host to the working directory inside
# the container
COPY . .

# This container exposes port 7777 to the outside world
EXPOSE 7777

I don't want to install and set up a PostgreSQL database, nor do I want any other project contributor to have to do so. Let's automate this step with docker-compose. The content of the docker-compose.yml file:

version: "3.7"

volumes:
  boilerplatevolume:
    name: boilerplate-volume

networks:
  boilerplatenetwork:
    name: boilerplate-network

services:
  pg:
    image: postgres:12.0
    restart: on-failure
    env_file:
      - .env
    volumes:
      - boilerplatevolume:/var/lib/postgresql/data
      - ./db/scripts:/docker-entrypoint-initdb.d/
    networks:
      - boilerplatenetwork

  api:
    build:
      context: .
    depends_on:
      - pg
    volumes:
      - ./:/app
    ports:
      - 7777:7777
    networks:
      - boilerplatenetwork
    env_file:
      - .env
    entrypoint: ["/bin/bash", "./scripts/"]

I will not explain how docker-compose works here, as it should be pretty much self-explanatory, but there are two interesting things to point out. The first is the volumes entry in the pg service: when I run docker-compose, the pg service takes bash scripts from the host's ./db/scripts folder, places them in the container's /docker-entrypoint-initdb.d/ directory and runs them on startup. Currently there will be only one script, which ensures that the test database is created. Let's create that script file:

$ touch ./db/scripts/

Let's see what that script looks like:

#!/bin/bash
set -e

psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" <<-EOSQL
    DROP DATABASE IF EXISTS boilerplatetest;
    CREATE DATABASE boilerplatetest;
EOSQL
The second interesting thing is the entrypoint of the api service:

entrypoint: ["/bin/bash", "./scripts/"]

It installs CompileDaemon in such a way that go.mod is not affected, so the same package is not picked up and installed in production later. It also builds our application, starts listening for any changes made to the source code and recompiles on each change. It looks like this:

#!/bin/bash
set -e

# install CompileDaemon outside of module mode so it is not added to go.mod
GO111MODULE=off go get github.com/githubnemo/CompileDaemon

CompileDaemon --build="go build -o main cmd/api/main.go" --command=./main

Next, I'll create a .env file in the root of our project which will hold all environment variables for local development:
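The variable names follow from what the config package reads, and several values appear later in the article (user postgres, password "password", databases boilerplate and boilerplatetest). A plausible version, with the host and port values being assumptions (pg is the docker-compose service name, 5432 the default Postgres port):

```
POSTGRES_USER=postgres
POSTGRES_PASSWORD=password
POSTGRES_HOST=pg
POSTGRES_PORT=5432
POSTGRES_DB=boilerplate
TEST_DB_HOST=pg
TEST_DB_NAME=boilerplatetest
```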


All variables with the POSTGRES_ prefix will be picked up by our pg service in docker-compose.yml and used to create the database with the relevant details.

In the next step I'll create a config package that will load, persist and operate with environment variables:

// pkg/config/config.go

package config

import (
	"flag"
	"fmt"
	"os"
)

type Config struct {
	dbUser     string
	dbPswd     string
	dbHost     string
	dbPort     string
	dbName     string
	testDBHost string
	testDBName string
}

func Get() *Config {
	conf := &Config{}

	flag.StringVar(&conf.dbUser, "dbuser", os.Getenv("POSTGRES_USER"), "DB user name")
	flag.StringVar(&conf.dbPswd, "dbpswd", os.Getenv("POSTGRES_PASSWORD"), "DB pass")
	flag.StringVar(&conf.dbPort, "dbport", os.Getenv("POSTGRES_PORT"), "DB port")
	flag.StringVar(&conf.dbHost, "dbhost", os.Getenv("POSTGRES_HOST"), "DB host")
	flag.StringVar(&conf.dbName, "dbname", os.Getenv("POSTGRES_DB"), "DB name")
	flag.StringVar(&conf.testDBHost, "testdbhost", os.Getenv("TEST_DB_HOST"), "test database host")
	flag.StringVar(&conf.testDBName, "testdbname", os.Getenv("TEST_DB_NAME"), "test database name")

	flag.Parse()

	return conf
}

func (c *Config) GetDBConnStr() string {
	return c.getDBConnStr(c.dbHost, c.dbName)
}

func (c *Config) GetTestDBConnStr() string {
	return c.getDBConnStr(c.testDBHost, c.testDBName)
}

func (c *Config) getDBConnStr(dbhost, dbname string) string {
	// URL format is an assumption; lib/pq accepts postgres:// connection URLs
	return fmt.Sprintf(
		"postgres://%s:%s@%s:%s/%s?sslmode=disable",
		c.dbUser, c.dbPswd, dbhost, c.dbPort, dbname,
	)
}
So what's happening here? The config package has one public Get function. It creates a pointer to a Config instance, tries to get each value as a command line argument and uses the corresponding env var as the default value. It's the best of both worlds, as it makes our config very flexible. The Config instance has 2 methods to get the dev and test DB connection strings.

Next, let's create a db package that will establish and persist the connection to the database:

// pkg/db/db.go

package db

import (
	"database/sql"

	// postgres driver, registered via its init function
	_ "github.com/lib/pq"
)

type DB struct {
	Client *sql.DB
}

func Get(connStr string) (*DB, error) {
	db, err := get(connStr)
	if err != nil {
		return nil, err
	}

	return &DB{
		Client: db,
	}, nil
}

func (d *DB) Close() error {
	return d.Client.Close()
}

func get(connStr string) (*sql.DB, error) {
	db, err := sql.Open("postgres", connStr)
	if err != nil {
		return nil, err
	}

	if err := db.Ping(); err != nil {
		return nil, err
	}

	return db, nil
}

Here I introduce another 3rd party package, the lib/pq Postgres driver, which you can read more about here. Again, there's a public Get function that accepts a connection string, establishes a connection to the database and returns a pointer to a DB instance.

Access to the database and the program configuration is needed all the time across the whole application. For easy dependency injection I'll create another package that assembles all our mandatory building blocks.

// pkg/application/application.go

package application

import (
	// import paths assume the module was initialised as "boilerplate";
	// adjust to your own module path
	"boilerplate/pkg/config"
	"boilerplate/pkg/db"
)

type Application struct {
	DB  *db.DB
	Cfg *config.Config
}

func Get() (*Application, error) {
	cfg := config.Get()
	db, err := db.Get(cfg.GetDBConnStr())

	if err != nil {
		return nil, err
	}

	return &Application{
		DB:  db,
		Cfg: cfg,
	}, nil
}

There's a public Get function again; remember, consistency is key! :) It returns a pointer to our Application instance, which holds our configuration and access to the database.

I'd like to add another package that will guard the application, listen for any program termination signals and perform cleanup such as closing the database connection:

// pkg/exithandler/exithandler.go

package exithandler

import (
	"log"
	"os"
	"os/signal"
	"syscall"
)

func Init(cb func()) {
	sigs := make(chan os.Signal, 1)
	terminate := make(chan bool, 1)
	signal.Notify(sigs, syscall.SIGINT, syscall.SIGTERM)

	go func() {
		sig := <-sigs
		log.Println("exit reason: ", sig)
		terminate <- true
	}()

	// block until a signal arrives, then run the cleanup callback
	<-terminate
	cb()
	log.Print("exiting program")
}

So exithandler has a public Init function that accepts a callback function, which will be invoked when the program exits unexpectedly or is terminated by the user.

Now that all basic building blocks are in place, I can finally put them to work:

// cmd/api/main.go

package main

import (
	"log"

	// import paths assume the module was initialised as "boilerplate";
	// adjust to your own module path
	"boilerplate/pkg/application"
	"boilerplate/pkg/exithandler"
	"github.com/joho/godotenv"
)

func main() {
	if err := godotenv.Load(); err != nil {
		log.Println("failed to load env vars")
	}

	app, err := application.Get()
	if err != nil {
		log.Fatal(err.Error())
	}

	exithandler.Init(func() {
		if err := app.DB.Close(); err != nil {
			log.Println(err.Error())
		}
	})
}
There's a new 3rd party package introduced here, godotenv, which loads env vars from the .env file created earlier. main gets a pointer to the application that holds the config and db connection, and listens for any interruptions to perform a graceful shutdown.

Time for action:

$ docker-compose up --build

OK, now that the app is running, I want to ensure I have 2 databases at my disposal. I'll list all running docker containers by typing:

$ docker container ls

I can locate the pg service name in the NAMES column. In my case docker has named it boilerplate_pg_1. I'll connect to it by typing:

$ docker exec -it boilerplate_pg_1 /bin/bash

Now that I'm inside the pg container, I'll run the psql client to list all databases:

$ psql -U postgres -W

The password, as per the .env file, is just "password". The .env file is also used by the pg service to create the boilerplate database, and the custom script from the /db/scripts folder was responsible for creating the boilerplatetest database. Let's make sure it all went according to plan: type \l to list the databases.

And sure enough, I have both the boilerplate and boilerplatetest databases ready to work with.

I hope you have learned something useful. In the next post I'll go through creating the actual server, with routes, middleware and handlers in place. You can also see the whole project here.
