During the past couple of years I have worked on a few projects written in Go. I noticed that the biggest challenge developers face is the lack of constraints or standards when it comes to project layout. I'd like to share some findings and patterns that have worked best for me and my team. For better understanding, I'll go through the steps of creating a simple REST API.

I'll start with a layout that I hope will one day become a standard. You can read more about it here. Let's name the project boilerplate:

```
mkdir -p \
  $GOPATH/src/github.com/boilerplate/pkg \
  $GOPATH/src/github.com/boilerplate/cmd \
  $GOPATH/src/github.com/boilerplate/db/scripts \
  $GOPATH/src/github.com/boilerplate/scripts
```

`pkg/` will contain common/reusable packages, `cmd/` will contain programs, `db/scripts` will hold db-related scripts, and `scripts/` will contain general purpose scripts.

No application is built without Docker these days. It makes everything much easier, so I'll use it too. I'll try not to over-complicate things for starters and only add what's necessary: a persistence layer in the form of a PostgreSQL database, and a simple program that will establish a connection to the database, run in a Docker environment, and recompile on each source code change. Oh, almost forgot, I'll also be using Go modules! Let's get started:

```
$ cd $GOPATH/src/github.com/boilerplate && \
  go mod init github.com/boilerplate
```

Let's create a `Dockerfile.dev` for the local development environment:

```dockerfile
# Start from the golang v1.13.4 base image to have access to go modules
FROM golang:1.13.4

# create a working directory
WORKDIR /app

# Fetch dependencies on a separate layer as they are less likely to
# change on every build and will therefore be cached, speeding
# up the next build
COPY ./go.mod ./go.sum ./
RUN go mod download

# copy source from the host to the working directory inside
# the container
COPY . .

# This container exposes port 7777 to the outside world
EXPOSE 7777
```

I don't want to install and set up a database manually, and I don't want any other project contributor to have to do so either.
Let's automate this step with docker-compose. The content of the docker-compose.yml file:

```yaml
version: "3.7"

volumes:
  boilerplatevolume:
    name: boilerplate-volume

networks:
  boilerplatenetwork:
    name: boilerplate-network

services:
  pg:
    image: postgres:12.0
    restart: on-failure
    env_file:
      - .env
    ports:
      - "${POSTGRES_PORT}:${POSTGRES_PORT}"
    volumes:
      - boilerplatevolume:/var/lib/postgresql/data
      - ./db/scripts:/docker-entrypoint-initdb.d/
    networks:
      - boilerplatenetwork

  boilerplate_api:
    build:
      context: .
      dockerfile: Dockerfile.dev
    depends_on:
      - pg
    volumes:
      - ./:/app
    ports:
      - 7777:7777
    networks:
      - boilerplatenetwork
    env_file:
      - .env
    entrypoint: ["/bin/bash", "./scripts/entrypoint.dev.sh"]
```

I will not explain how docker-compose works here; the file should be pretty much self-explanatory. There are two interesting things to point out, though.

The first is `./db/scripts:/docker-entrypoint-initdb.d/` in the `pg` service. When I run docker-compose, the `pg` service will take the bash scripts from the host's `./db/scripts` folder, place them in the pg container, and run them there. Currently there will be only one script, which ensures that the test database gets created. Let's create that script file:

```
$ touch ./db/scripts/1_create_test_db.sh
```

Here is what the script looks like:

```bash
#!/bin/bash
set -e

psql -v ON_ERROR_STOP=1 --username "$POSTGRES_USER" --dbname "$POSTGRES_DB" <<-EOSQL
  DROP DATABASE IF EXISTS boilerplatetest;
  CREATE DATABASE boilerplatetest;
EOSQL
```

The second interesting thing is the entrypoint script. It installs CompileDaemon in a way that does not affect our go.mod, so the same package is not picked up and installed in production later. It also builds our application, starts listening for any changes made to the source code, and recompiles it.
It looks like this:

```bash
#!/bin/bash
set -e

GO111MODULE=off go get github.com/githubnemo/CompileDaemon

CompileDaemon --build="go build -o main cmd/api/main.go" --command=./main
```

Next, I'll create a `.env` file in the root of our project, which will hold all environment variables for local development:

```
POSTGRES_PASSWORD=password
POSTGRES_USER=postgres
POSTGRES_PORT=5432
POSTGRES_HOST=pg
POSTGRES_DB=boilerplate
TEST_DB_HOST=localhost
TEST_DB_NAME=boilerplatetest
```

All variables with the `POSTGRES_` prefix will be picked up by our pg service in docker-compose.yml, which will create the database with the relevant details.

In the next step I'll create a `config` package that will load, persist, and operate on the environment variables:

```go
// pkg/config/config.go
package config

import (
	"flag"
	"fmt"
	"os"
)

// Config holds all configuration for the application.
type Config struct {
	dbUser     string
	dbPswd     string
	dbHost     string
	dbPort     string
	dbName     string
	testDBHost string
	testDBName string
}

// Get reads configuration from command-line flags,
// using environment variables as default values.
func Get() *Config {
	conf := &Config{}

	flag.StringVar(&conf.dbUser, "dbuser", os.Getenv("POSTGRES_USER"), "DB user name")
	flag.StringVar(&conf.dbPswd, "dbpswd", os.Getenv("POSTGRES_PASSWORD"), "DB pass")
	flag.StringVar(&conf.dbPort, "dbport", os.Getenv("POSTGRES_PORT"), "DB port")
	flag.StringVar(&conf.dbHost, "dbhost", os.Getenv("POSTGRES_HOST"), "DB host")
	flag.StringVar(&conf.dbName, "dbname", os.Getenv("POSTGRES_DB"), "DB name")
	flag.StringVar(&conf.testDBHost, "testdbhost", os.Getenv("TEST_DB_HOST"), "test database host")
	flag.StringVar(&conf.testDBName, "testdbname", os.Getenv("TEST_DB_NAME"), "test database name")

	flag.Parse()

	return conf
}

// GetDBConnStr returns the connection string for the dev database.
func (c *Config) GetDBConnStr() string {
	return c.getDBConnStr(c.dbHost, c.dbName)
}

// GetTestDBConnStr returns the connection string for the test database.
func (c *Config) GetTestDBConnStr() string {
	return c.getDBConnStr(c.testDBHost, c.testDBName)
}

func (c *Config) getDBConnStr(dbhost, dbname string) string {
	return fmt.Sprintf(
		"postgres://%s:%s@%s:%s/%s?sslmode=disable",
		c.dbUser,
		c.dbPswd,
		dbhost,
		c.dbPort,
		dbname,
	)
}
```
So what's happening here? The config package has one public function, `Get`. It creates a pointer to a Config instance, tries to read the variables as command-line arguments, and uses the env vars as default values. It's the best of both worlds, as it makes our config very flexible. The Config instance also has two methods for getting the dev and test DB connection strings.

Next, let's create a `db` package that will establish and persist the connection to the database:

```go
// pkg/db/db.go
package db

import (
	"database/sql"

	// register the postgres driver with database/sql
	_ "github.com/lib/pq"
)

// DB wraps the sql.DB client.
type DB struct {
	Client *sql.DB
}

// Get establishes a database connection and returns
// a pointer to a DB instance wrapping it.
func Get(connStr string) (*DB, error) {
	db, err := get(connStr)
	if err != nil {
		return nil, err
	}

	return &DB{
		Client: db,
	}, nil
}

// Close closes the underlying database connection.
func (d *DB) Close() error {
	return d.Client.Close()
}

func get(connStr string) (*sql.DB, error) {
	db, err := sql.Open("postgres", connStr)
	if err != nil {
		return nil, err
	}

	if err := db.Ping(); err != nil {
		return nil, err
	}

	return db, nil
}
```

Here I introduce another 3rd party package, github.com/lib/pq, which you can read more about here. Again, there's a public `Get` function: it accepts a connection string, establishes the connection to the database, and returns a pointer to a DB instance.

Access to the database and the program configuration is needed all the time, across the whole application. For easy dependency injection I'll create another package, `application`, that will assemble all our mandatory building blocks:

```go
// pkg/application/application.go
package application

import (
	"github.com/boilerplate/pkg/config"
	"github.com/boilerplate/pkg/db"
)

// Application holds the app-wide building blocks.
type Application struct {
	DB  *db.DB
	Cfg *config.Config
}

// Get assembles the configuration and the database connection.
func Get() (*Application, error) {
	cfg := config.Get()

	db, err := db.Get(cfg.GetDBConnStr())
	if err != nil {
		return nil, err
	}

	return &Application{
		DB:  db,
		Cfg: cfg,
	}, nil
}
```

There's a public `Get` function again (remember, consistency is key!). It returns a pointer to our Application instance, which holds our configuration and the access to the database.
I'd like to add another service that will guard the application: it listens for program termination signals and performs cleanup, such as closing the database connection:

```go
// pkg/exithandler/exithandler.go
package exithandler

import (
	"log"
	"os"
	"os/signal"
	"syscall"
)

// Init registers a callback that is invoked when the program
// exits unexpectedly or is terminated by the user.
func Init(cb func()) {
	sigs := make(chan os.Signal, 1)
	terminate := make(chan bool, 1)

	signal.Notify(sigs, syscall.SIGINT, syscall.SIGTERM)

	go func() {
		sig := <-sigs
		log.Println("exit reason: ", sig)
		terminate <- true
	}()

	<-terminate
	cb()
	log.Print("exiting program")
}
```

So exithandler has a public `Init` function that accepts a callback, which will be invoked when the program exits unexpectedly or is terminated by the user.

Now that all the basic building blocks are in place, I can finally put them to work:

```go
// cmd/api/main.go
package main

import (
	"log"

	"github.com/boilerplate/pkg/application"
	"github.com/boilerplate/pkg/exithandler"
	"github.com/joho/godotenv"
)

func main() {
	if err := godotenv.Load(); err != nil {
		log.Println("failed to load env vars")
	}

	app, err := application.Get()
	if err != nil {
		log.Fatal(err.Error())
	}

	exithandler.Init(func() {
		if err := app.DB.Close(); err != nil {
			log.Println(err.Error())
		}
	})
}
```

There's one more 3rd party package introduced here, github.com/joho/godotenv, which loads the env vars from the .env file created earlier. The program gets a pointer to the application, which holds the config and the db connection, and listens for any interruptions in order to perform a graceful shutdown.

Time for action:

```
$ docker-compose up --build
```

OK, now that the app is running, I want to ensure I have two databases at my disposal. I'll list all running docker containers by typing:

```
$ docker container ls
```

I can locate the pg service name in the name column. In my case docker has named it boilerplate_pg_1. I'll connect to it by typing:

```
$ docker exec -it boilerplate_pg_1 /bin/bash
```

Now that I'm inside the pg container, I'll run the psql client to list all databases:

```
$ psql -U postgres -W
```

The password, as per the .env file, is just "password".
The .env file is also used by the pg service to create the boilerplate database, and the custom script from the db/scripts folder was responsible for creating the boilerplatetest database. Let's make sure it all went according to plan. Type:

```
\l
```

And sure enough, I have the boilerplate and boilerplatetest databases ready to work with.

I hope you have learned something useful. In the next post I'll go through creating the actual server, with some routes, middleware, and handlers in place. You can also see the whole project here.