After spending many hours debugging Go code in production, I started noticing a pattern: the same bugs kept showing up in code reviews. They weren’t obvious syntax errors, but subtle details specific to Go. Unless you’re already very familiar with the language, these bugs can be difficult to spot at first glance. In this article, I’ll walk through the most common Go pitfalls I’ve seen repeatedly during code reviews.
1- Goroutine Leak
A goroutine leak happens when a goroutine starts but never finishes: it stays alive for the lifetime of the process and keeps consuming resources. It is often caused by improper channel handling, such as:
- Receiving from a channel that no one ever sends to, or that is never closed.
- Sending to a channel that has no receiver (unbuffered) or whose buffer is full, with no way for receivers to drain it
This is the typical scenario for a goroutine leak with a channel. In the following example:
- worker runs an infinite loop.
- It blocks forever waiting on <-ch.
- main exits without closing the channel.
In a long-running service, this worker would never terminate, leading to a goroutine leak.
package main

import (
	"fmt"
	"time"
)

func worker(ch chan int) {
	for {
		v := <-ch
		fmt.Println("Received:", v)
	}
}

func main() {
	ch := make(chan int)
	go worker(ch)
	time.Sleep(2 * time.Second)
	fmt.Println("Main exiting")
}
How to fix it: close the channel in the main function once you've finished sending items, and read from it with range ch; the loop exits automatically when the channel is closed.
func worker(ch chan int) {
	for v := range ch { // exits automatically when channel is closed
		fmt.Println("Received:", v)
	}
	fmt.Println("Worker exiting")
}

func main() {
	ch := make(chan int)
	go worker(ch)
	ch <- 1
	ch <- 2
	close(ch) // signal completion
	time.Sleep(time.Second)
	fmt.Println("Main exiting")
}
A goroutine must always have:
- Proper channel-closing logic.
- A timeout or cancellation mechanism (covered in the next section).
2- No Synchronization Between main and Goroutines
When you start goroutines but don’t synchronize them with main, your program can exit before the goroutines finish executing.
In Go, the program terminates as soon as the main function returns — even if other goroutines are still running.
If the program exits while goroutines are still running:
- Database writes may not finish
- HTTP requests may not complete
- Logs may not flush
- Background jobs may be interrupted
In the example below, the main function has no visibility into the goroutines' status; there is no synchronization between main and the goroutines.
func processItems(items []string) {
	for _, item := range items {
		go func(i string) {
			process(i)
		}(item)
	}
}
How to fix it:
- Add sync.WaitGroup to synchronize the main function with the goroutines.
- Add context for proper lifecycle management.
context prevents leaks by giving goroutines a cancellation signal and a lifecycle boundary:
- Cancellation propagation: when the parent cancels, all child goroutines stop.
- Timeouts/deadlines: with context.WithTimeout, stuck I/O stops automatically.
Now main waits until all goroutines finish before it exits.
func processItems(ctx context.Context, items []string) error {
	var wg sync.WaitGroup
	wg.Add(len(items))
	for _, item := range items {
		go func(i string) {
			defer wg.Done()
			select {
			case <-ctx.Done():
				return
			default:
				process(i)
			}
		}(item)
	}
	wg.Wait()
	return ctx.Err()
}
3- Connection Leaks
Closing an HTTP response body with defer is good practice, except inside a loop: deferred calls don't run at the end of each iteration, they run when the surrounding function returns. Until then, every response body (and its underlying connection) stays open, leaking connections.
for _, url := range urls {
	resp, _ := http.Get(url)
	defer resp.Body.Close() // BAD in loop
}
How to fix it: wrap the body of the loop in a closure and defer the close inside it. The deferred call then runs at the end of each iteration, guaranteeing every response body is closed promptly.
for _, url := range urls {
	func() {
		resp, err := http.Get(url)
		if err != nil {
			return // resp is nil on error; nothing to close
		}
		defer resp.Body.Close()
	}()
}
4- HTTP Requests Without Context
By default, http.Get() has no timeout. The request can hang forever even if:
- Client disconnects
- Parent request is canceled
- Shutdown signal received
Your HTTP call keeps running anyway. This breaks graceful shutdown and request scoping, leading to:
- Exhausting goroutines
- Filling connection pools
func fetchData(url string) error {
	resp, err := http.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	_, err = io.ReadAll(resp.Body)
	return err
}
How to fix it: use http.NewRequestWithContext instead. It provides:
- Timeout control
- Cancellation propagation
- Automatic cleanup
func fetchData(ctx context.Context, url string) error {
	ctx, cancel := context.WithTimeout(ctx, 2*time.Second)
	defer cancel()

	req, err := http.NewRequestWithContext(ctx, "GET", url, nil)
	if err != nil {
		return err
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	_, err = io.ReadAll(resp.Body)
	return err
}
5- Copying Slices and Maps the Wrong Way
Copying maps and slices is not as straightforward as copying other types in Go. Assignment does not copy the underlying data: for slices it copies only the slice header, so both variables share the same backing array, and for maps it copies a reference to the same map.
a := []int{1, 2, 3}
b := a
b[0] = 99
fmt.Println(a) // [99 2 3]

m1 := map[string]int{"a": 1}
m2 := m1
m2["a"] = 42
fmt.Println(m1["a"]) // 42
This is how to properly copy a slice or a map:
src := []int{1, 2, 3}
dst := make([]int, len(src))
copy(dst, src)

m1 := map[string]int{"a": 1}
m2 := make(map[string]int, len(m1))
for k, v := range m1 {
	m2[k] = v
}
6- Capturing Loop Variables by Reference
Closures capture variables from the surrounding scope by reference. (Note: this particular bug was fixed in Go 1.22, where each loop iteration gets its own copy of the loop variable; on older versions the variable is shared across iterations.) In the following example:
for _, v := range servers {
	go func() {
		fmt.Println(v)
	}()
}
The goroutines don’t capture the value of v at each iteration, they capture the same variable v, which is reused by the loop.
Goroutines run concurrently, by the time a goroutine executes fmt.Println(v), the loop may have already advanced or even finished.
You may get:
cache
cache
cache
Instead of:
web
db
cache
How to fix it: freeze the value of v by passing it as a parameter to the closure. Each goroutine then receives its own copy.
for _, v := range servers {
	go func(s string) {
		fmt.Println(s)
	}(v)
}
7- Modifying a Slice of Structs by Copy
When you have a slice of structs and iterate over it to modify the elements, the variable s in the for range loop is a copy of each element (s := servers[i]), not a reference to it. Modifying the copy does not modify the element in the slice.
type Server struct {
	Name   string
	Status bool
}

servers := []Server{{"web", false}, {"db", false}}
for _, s := range servers {
	s.Status = true
}
After the loop, servers is still the same:
[{web false} {db false}]
How to fix it: use the loop index to access the slice element directly.

for i := range servers {
	servers[i].Status = true
}
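If you need to update several fields, taking a pointer to the element also works. A sketch reusing the Server type above (markUp is a hypothetical helper):

```go
package main

import "fmt"

type Server struct {
	Name   string
	Status bool
}

// markUp mutates the slice elements in place via pointers.
func markUp(servers []Server) {
	for i := range servers {
		s := &servers[i] // pointer to the element, not a copy
		s.Status = true
	}
}

func main() {
	servers := []Server{{"web", false}, {"db", false}}
	markUp(servers)
	fmt.Println(servers) // prints [{web true} {db true}]
}
```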
8- Unmarshaling JSON into Maps
When you unmarshal JSON into map[string]interface{}, the original types are not preserved:
- All numbers become float64
- Strings become string
- Booleans become bool
- Nested objects become map[string]interface{}
- Arrays become []interface{}
The following example illustrates the bug:
{
	"id": 123,
	"name": "web-server",
	"active": true
}

var data map[string]interface{}
jsonBytes := []byte(`{
	"id": 123,
	"name": "web-server",
	"active": true
}`)

err := json.Unmarshal(jsonBytes, &data)
if err != nil {
	panic(err)
}

id := data["id"].(int) // panic!
fmt.Println(id)

panic: interface conversion: interface {} is float64, not int
The best solution is to use a struct instead:
type Server struct {
	ID     int    `json:"id"`
	Name   string `json:"name"`
	Active bool   `json:"active"`
}

var s Server
err := json.Unmarshal(jsonBytes, &s)
if err != nil {
	panic(err)
}
fmt.Println(s.ID) // Safe, typed, clean
Try to use map[string]interface{} only when the JSON structure is dynamic or when you don't know the schema. Otherwise, stick to structs.
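When the schema really is dynamic, json.Decoder.UseNumber at least avoids the float64 trap by decoding numbers as json.Number. A sketch (decodeID is a hypothetical helper):

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
)

// decodeID extracts "id" as an exact int64 by decoding numbers as
// json.Number instead of float64.
func decodeID(data []byte) (int64, error) {
	dec := json.NewDecoder(bytes.NewReader(data))
	dec.UseNumber() // numbers arrive as json.Number, not float64

	var m map[string]interface{}
	if err := dec.Decode(&m); err != nil {
		return 0, err
	}
	return m["id"].(json.Number).Int64()
}

func main() {
	id, err := decodeID([]byte(`{"id": 123}`))
	fmt.Println(id, err) // prints 123 <nil>
}
```

This matters for large integers: 2^53+1 cannot be represented exactly as a float64, but survives a json.Number round-trip.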
9- Concurrent Access to Shared Variables Without Locks
When multiple goroutines access the same variable without synchronization, you create a data race. This can cause data inconsistency.
With maps specifically, Go detects unsafe concurrent map writes and panics: fatal error: concurrent map writes
count := 0
for i := 0; i < 1000; i++ {
	go func() {
		count++ // data race: unsynchronized write
	}()
}
time.Sleep(time.Second)
fmt.Println(count) // likely less than 1000
How to fix it: add a mutual-exclusion lock (sync.Mutex) to protect the shared variable from the race condition.
var mu sync.Mutex
count := 0
for i := 0; i < 1000; i++ {
	go func() {
		mu.Lock()
		count++
		mu.Unlock()
	}()
}
You can detect it with: go run -race main.go
Go will show warnings like: WARNING: DATA RACE
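For a plain counter, sync/atomic is a lighter-weight alternative to a mutex. A sketch using atomic.Int64 (Go 1.19+) and a WaitGroup instead of a sleep (countUp is a hypothetical helper):

```go
package main

import (
	"fmt"
	"sync"
	"sync/atomic"
)

// countUp increments an atomic counter from n goroutines and waits
// for all of them with a WaitGroup rather than a sleep.
func countUp(n int) int64 {
	var count atomic.Int64
	var wg sync.WaitGroup
	for i := 0; i < n; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			count.Add(1) // atomic increment, no lock needed
		}()
	}
	wg.Wait()
	return count.Load()
}

func main() {
	fmt.Println(countUp(1000)) // prints 1000
}
```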
10- Dereferencing a Nil Pointer
Dereferencing a nil pointer can cause a runtime panic:
var p *int
fmt.Println(*p)
panic: runtime error: invalid memory address or nil pointer dereference
The simple fix is to make sure the pointer holds a valid address before dereferencing it (or to check for nil first):
x := 10
p := &x
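A more defensive pattern is to check for nil before dereferencing; deref below is a hypothetical helper illustrating it:

```go
package main

import "fmt"

// deref returns the pointed-to value, or fallback when p is nil,
// so the caller can never hit a nil-pointer panic.
func deref(p *int, fallback int) int {
	if p == nil {
		return fallback
	}
	return *p
}

func main() {
	var p *int                // nil pointer
	fmt.Println(deref(p, -1)) // prints -1

	x := 10
	fmt.Println(deref(&x, -1)) // prints 10
}
```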
Conclusion:
None of these bugs is a hard "blocker": the code compiles, and most of the time it even works. The real issue arises when they surface later as performance bottlenecks or security vulnerabilities.
