Clean Code: Concurrency Patterns, Context Management, and Goroutine Safety [Part 5]

Written by yakovlef | Published 2025/12/09
Tech Story Tags: golang | go-concurrency | goroutines | go-channels | clean-code-in-go | go-race-conditions | go-worker-pool-pattern | go-production-debugging

TL;DR: This final installment in the Clean Code in Go series breaks down how to write safe, idiomatic concurrent Go code using context, goroutines, channels, and proven patterns, while avoiding leaks, race conditions, deadlocks, and the production outages they cause.

This is the last article in the "Clean Code in Go" series.

Introduction: Why Go Concurrency Is Special

I've debugged goroutine leaks at 3 AM, fixed race conditions that only appeared under load, and watched a single missing defer statement bring down a production service. "Don't communicate by sharing memory; share memory by communicating" — this Go mantra turned concurrent programming on its head. Instead of mutexes and semaphores — channels. Instead of threads — goroutines. Instead of callbacks — select. And all this with context for lifecycle management.

Common concurrency mistakes I've encountered:

  • Goroutine leaks: ~40% of production memory issues
  • Race conditions with shared state: ~35% of concurrent code
  • Missing context cancellation: ~50% of timeout bugs
  • Deadlocks from channel misuse: ~25% of hanging services
  • Wrong mutex usage (value receiver): ~30% of sync bugs

After 6 years working with Go and systems processing millions of requests, I can say: proper use of goroutines and context is the difference between an elegant solution and a production incident at 3 AM. Today we'll explore patterns that work and mistakes that hurt.
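
To make the first of those concrete: the classic leak is a goroutine blocked forever on a channel that nobody reads anymore. A minimal sketch (slowDownload is a hypothetical helper):

// LEAK: if the caller gives up and never receives, the goroutine
// blocks on the unbuffered send forever and is never collected
func leakyFetch(url string) <-chan string {
    ch := make(chan string) // unbuffered
    go func() {
        ch <- slowDownload(url) // blocks until someone receives
    }()
    return ch
}

// FIX: a buffer of 1 lets the goroutine deliver its result and
// exit even if nobody ever reads it
func fixedFetch(url string) <-chan string {
    ch := make(chan string, 1)
    go func() {
        ch <- slowDownload(url)
    }()
    return ch
}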

Context: Lifecycle Management

The First Rule of Context

// RULE: context.Context is ALWAYS the first parameter
func GetUser(ctx context.Context, userID string) (*User, error) {
    // correct
}

func GetUser(userID string, ctx context.Context) (*User, error) {
    // wrong - violates convention
}

Cancellation

// BAD: operation cannot be cancelled
func SlowOperation() (Result, error) {
    time.Sleep(10 * time.Second) // always waits 10 seconds
    return Result{}, nil
}

// GOOD: operation respects context
func SlowOperation(ctx context.Context) (Result, error) {
    select {
    case <-time.After(10 * time.Second):
        return Result{}, nil
    case <-ctx.Done():
        return Result{}, ctx.Err()
    }
}

// Usage with timeout
ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
defer cancel()

result, err := SlowOperation(ctx)
if errors.Is(err, context.DeadlineExceeded) { // survives error wrapping
    log.Println("Operation timed out")
}
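
Since Go 1.21, context.WithTimeoutCause can attach a more descriptive error to the deadline, and context.Cause surfaces it. A small sketch (the SLA message is illustrative):

// Go 1.21+: give the timeout a human-readable cause
ctx, cancel := context.WithTimeoutCause(context.Background(),
    5*time.Second, errors.New("user lookup exceeded SLA"))
defer cancel()

if _, err := SlowOperation(ctx); err != nil {
    log.Println(context.Cause(ctx)) // "user lookup exceeded SLA"
}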

Context Values: Use Carefully!

// BAD: using context for business logic
type key string

const userKey key = "user"

func WithUser(ctx context.Context, user *User) context.Context {
    return context.WithValue(ctx, userKey, user)
}

func GetUser(ctx context.Context) *User {
    return ctx.Value(userKey).(*User) // panic if no user!
}

// GOOD: context only for request metadata
type contextKey string

const (
    requestIDKey contextKey = "requestID"
    traceIDKey   contextKey = "traceID"
)

func WithRequestID(ctx context.Context, requestID string) context.Context {
    return context.WithValue(ctx, requestIDKey, requestID)
}

func GetRequestID(ctx context.Context) string {
    if id, ok := ctx.Value(requestIDKey).(string); ok {
        return id
    }
    return ""
}

// BETTER: explicit parameter passing
func ProcessOrder(ctx context.Context, user *User, order *Order) error {
    // user passed explicitly, not through context
    return nil
}
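
To show how the request-metadata pattern is usually wired in, here's a hypothetical HTTP middleware that stamps each request with an ID (newRequestID is an assumed helper built on crypto/rand):

// Middleware: attach a request ID so handlers and loggers
// downstream can retrieve it via GetRequestID
func RequestIDMiddleware(next http.Handler) http.Handler {
    return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
        ctx := WithRequestID(r.Context(), newRequestID())
        next.ServeHTTP(w, r.WithContext(ctx))
    })
}

func newRequestID() string {
    b := make([]byte, 8)
    rand.Read(b) // crypto/rand; error ignored for brevity in this sketch
    return hex.EncodeToString(b)
}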

Goroutines: Lightweight Concurrency

Pattern: Worker Pool

// Worker pool to limit concurrency
type WorkerPool struct {
    workers int
    jobs    chan Job
    results chan Result
    wg      sync.WaitGroup
}

type Job struct {
    ID   int
    Data []byte
}

type Result struct {
    JobID  int
    Output []byte
    Error  error
}

func NewWorkerPool(workers int) *WorkerPool {
    return &WorkerPool{
        workers: workers,
        jobs:    make(chan Job, workers*2),
        results: make(chan Result, workers*2),
    }
}

func (p *WorkerPool) Start(ctx context.Context) {
    for i := 0; i < p.workers; i++ {
        p.wg.Add(1)
        go p.worker(ctx, i)
    }
}

func (p *WorkerPool) worker(ctx context.Context, id int) {
    defer p.wg.Done()
    
    for {
        select {
        case job, ok := <-p.jobs:
            if !ok {
                return
            }
            
            result := p.processJob(job)
            
            select {
            case p.results <- result:
            case <-ctx.Done():
                return
            }
            
        case <-ctx.Done():
            return
        }
    }
}

func (p *WorkerPool) processJob(job Job) Result {
    // Process job
    output := bytes.ToUpper(job.Data)
    
    return Result{
        JobID:  job.ID,
        Output: output,
    }
}

// Submit blocks when the jobs buffer is full, so start consuming
// results before submitting large batches (see main below)
func (p *WorkerPool) Submit(job Job) {
    p.jobs <- job
}

func (p *WorkerPool) Shutdown() {
    close(p.jobs)
    p.wg.Wait()
    close(p.results)
}

// Usage
func main() {
    ctx, cancel := context.WithCancel(context.Background())
    defer cancel()
    
    pool := NewWorkerPool(10)
    pool.Start(ctx)
    
    // Collect results first: if nobody drains pool.results, workers
    // block once its buffer fills and Submit deadlocks
    done := make(chan struct{})
    go func() {
        defer close(done)
        for result := range pool.results {
            log.Printf("Result %d: %s", result.JobID, result.Output)
        }
    }()
    
    // Submit jobs
    for i := 0; i < 100; i++ {
        pool.Submit(Job{
            ID:   i,
            Data: []byte(fmt.Sprintf("job-%d", i)),
        })
    }
    
    // Graceful shutdown: close jobs, wait for workers, close results
    pool.Shutdown()
    <-done // wait until the last result is logged
}

Pattern: Fan-out/Fan-in

// Fan-out: distribute work among goroutines
func fanOut(ctx context.Context, in <-chan int, workers int) []<-chan int {
    outputs := make([]<-chan int, workers)
    
    for i := 0; i < workers; i++ {
        output := make(chan int)
        outputs[i] = output
        
        go func() {
            defer close(output)
            for {
                select {
                case n, ok := <-in:
                    if !ok {
                        return
                    }
                    
                    // Heavy work
                    result := n * n
                    
                    select {
                    case output <- result:
                    case <-ctx.Done():
                        return
                    }
                    
                case <-ctx.Done():
                    return
                }
            }
        }()
    }
    
    return outputs
}

// Fan-in: collect results from goroutines
func fanIn(ctx context.Context, inputs ...<-chan int) <-chan int {
    output := make(chan int)
    var wg sync.WaitGroup
    
    for _, input := range inputs {
        wg.Add(1)
        go func(ch <-chan int) {
            defer wg.Done()
            for {
                select {
                case n, ok := <-ch:
                    if !ok {
                        return
                    }
                    
                    select {
                    case output <- n:
                    case <-ctx.Done():
                        return
                    }
                    
                case <-ctx.Done():
                    return
                }
            }
        }(input)
    }
    
    go func() {
        wg.Wait()
        close(output)
    }()
    
    return output
}

// Usage
func pipeline(ctx context.Context) {
    // Number generator
    numbers := make(chan int)
    go func() {
        defer close(numbers)
        for i := 1; i <= 100; i++ {
            select {
            case numbers <- i:
            case <-ctx.Done():
                return
            }
        }
    }()
    
    // Fan-out to 5 workers
    workers := fanOut(ctx, numbers, 5)
    
    // Fan-in results
    results := fanIn(ctx, workers...)
    
    // Process results
    for result := range results {
        fmt.Printf("Result: %d\n", result)
    }
}
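
When each unit of work is independent and you mainly want bounded parallelism plus error propagation, golang.org/x/sync/errgroup is often simpler than hand-rolled fan-out/fan-in. A sketch assuming that package:

// Same squaring workload with errgroup: 5 workers, first error wins
func squareAll(ctx context.Context, nums []int) ([]int, error) {
    g, ctx := errgroup.WithContext(ctx)
    g.SetLimit(5) // at most 5 goroutines in flight
    
    results := make([]int, len(nums))
    for i, n := range nums {
        i, n := i, n // copy loop variables (needed before Go 1.22)
        g.Go(func() error {
            if err := ctx.Err(); err != nil {
                return err // another worker already failed
            }
            results[i] = n * n // each goroutine owns its slot: no race
            return nil
        })
    }
    return results, g.Wait()
}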

Channels: First-Class Citizens

Directional Channels

// BAD: bidirectional channel everywhere
func producer(ch chan int) {
    ch <- 42
}

func consumer(ch chan int) {
    fmt.Println(<-ch)
}

// GOOD: restrict direction
func producer(ch chan<- int) { // send-only
    ch <- 42
}

func consumer(ch <-chan int) { // receive-only
    fmt.Println(<-ch)
}

// Compiler will check correct usage
func main() {
    ch := make(chan int)
    
    go producer(ch)
    consumer(ch) // receive in main, so the program waits for the value
}

Select and Non-blocking Operations

// Pattern: timeout with select
func RequestWithTimeout(url string, timeout time.Duration) ([]byte, error) {
    result := make(chan []byte, 1)
    errCh := make(chan error, 1)
    
    go func() {
        resp, err := http.Get(url)
        if err != nil {
            errCh <- err
            return
        }
        defer resp.Body.Close()
        
        data, err := io.ReadAll(resp.Body)
        if err != nil {
            errCh <- err
            return
        }
        
        result <- data
    }()
    
    select {
    case data := <-result:
        return data, nil
    case err := <-errCh:
        return nil, err
    case <-time.After(timeout):
        return nil, fmt.Errorf("request timeout after %v", timeout)
    }
}
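
One caveat with the version above: when the timeout fires, the HTTP request keeps running in the background (the buffered channels only prevent a goroutine leak). Attaching the context to the request cancels the underlying work as well; a sketch using the standard http.NewRequestWithContext:

func RequestWithContext(ctx context.Context, url string) ([]byte, error) {
    req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
    if err != nil {
        return nil, err
    }
    
    resp, err := http.DefaultClient.Do(req)
    if err != nil {
        return nil, err // wraps context.DeadlineExceeded on timeout
    }
    defer resp.Body.Close()
    
    return io.ReadAll(resp.Body)
}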

// Non-blocking send
func TrySend(ch chan<- int, value int) bool {
    select {
    case ch <- value:
        return true
    default:
        return false // channel full
    }
}

// Non-blocking receive
func TryReceive(ch <-chan int) (int, bool) {
    select {
    case value := <-ch:
        return value, true
    default:
        return 0, false // channel empty
    }
}

Race Conditions and How to Avoid Them

Problem: Data Race

// DANGEROUS: data race
type Counter struct {
    value int
}

func (c *Counter) Inc() {
    c.value++ // NOT atomic!
}

func (c *Counter) Value() int {
    return c.value // race on read
}

// Check: go test -race

Solution 1: Mutex

type SafeCounter struct {
    mu    sync.RWMutex
    value int
}

func (c *SafeCounter) Inc() {
    c.mu.Lock()
    defer c.mu.Unlock()
    c.value++
}

func (c *SafeCounter) Value() int {
    c.mu.RLock()
    defer c.mu.RUnlock()
    return c.value
}

// Pattern: protecting invariants
var ErrInsufficientFunds = errors.New("insufficient funds")

type BankAccount struct {
    mu      sync.Mutex
    id      string          // stable identifier used for lock ordering
    balance decimal.Decimal // github.com/shopspring/decimal
}

func (a *BankAccount) ID() string { return a.id }

func (a *BankAccount) Transfer(to *BankAccount, amount decimal.Decimal) error {
    // Important: always lock in same order (by ID)
    // to avoid deadlock
    if a.ID() < to.ID() {
        a.mu.Lock()
        defer a.mu.Unlock()
        to.mu.Lock()
        defer to.mu.Unlock()
    } else {
        to.mu.Lock()
        defer to.mu.Unlock()
        a.mu.Lock()
        defer a.mu.Unlock()
    }
    
    if a.balance.LessThan(amount) {
        return ErrInsufficientFunds
    }
    
    a.balance = a.balance.Sub(amount)
    to.balance = to.balance.Add(amount)
    
    return nil
}

Solution 2: Channels for Synchronization

// Use channels instead of mutexes
type ChannelCounter struct {
    ch chan countOp
}

type countOp struct {
    delta int
    resp  chan int
}

func NewChannelCounter() *ChannelCounter {
    c := &ChannelCounter{
        ch: make(chan countOp),
    }
    
    go c.run()
    
    return c
}

func (c *ChannelCounter) run() {
    value := 0
    for op := range c.ch {
        value += op.delta
        if op.resp != nil {
            op.resp <- value
        }
    }
}

func (c *ChannelCounter) Inc() {
    c.ch <- countOp{delta: 1}
}

func (c *ChannelCounter) Value() int {
    resp := make(chan int)
    c.ch <- countOp{resp: resp}
    return <-resp
}
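
One caveat: the run goroutine lives until c.ch is closed, so a long-lived program should add a Close method (func (c *ChannelCounter) Close() { close(c.ch) }). And for a counter this simple, sync/atomic is usually the lighter tool. A minimal sketch (atomic.Int64 requires Go 1.19+):

// Solution 3: atomic counter, no mutex and no goroutine
type AtomicCounter struct {
    value atomic.Int64
}

func (c *AtomicCounter) Inc() {
    c.value.Add(1)
}

func (c *AtomicCounter) Value() int64 {
    return c.value.Load()
}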

Concurrency Patterns

Pattern: Graceful Shutdown

type Server struct {
    server   *http.Server
    shutdown chan struct{}
    done     chan struct{}
}

func NewServer(addr string) *Server {
    return &Server{
        server: &http.Server{
            Addr: addr,
        },
        shutdown: make(chan struct{}),
        done:     make(chan struct{}),
    }
}

func (s *Server) Start() {
    go func() {
        defer close(s.done)
        
        err := s.server.ListenAndServe()
        if err != nil && !errors.Is(err, http.ErrServerClosed) {
            log.Printf("Server error: %v", err)
        }
    }()
    
    // Wait for shutdown signal
    go func() {
        sigCh := make(chan os.Signal, 1)
        signal.Notify(sigCh, os.Interrupt, syscall.SIGTERM)
        
        select {
        case <-sigCh:
        case <-s.shutdown:
        }
        
        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
        defer cancel()
        
        if err := s.server.Shutdown(ctx); err != nil {
            log.Printf("Shutdown error: %v", err)
        }
    }()
}

func (s *Server) Stop() {
    close(s.shutdown)
    <-s.done
}
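
As written, the only way to block until the server exits is Stop, which also triggers the shutdown. A small Wait helper (a hypothetical addition, not part of the pattern above) lets main park until a signal-driven shutdown completes:

// Wait blocks until the serve goroutine has exited, whether shutdown
// was triggered by an OS signal or by Stop
func (s *Server) Wait() {
    <-s.done
}

func main() {
    srv := NewServer(":8080")
    srv.Start()
    srv.Wait() // returns after SIGINT/SIGTERM finishes graceful shutdown
}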

Pattern: Rate Limiting

type RateLimiter struct {
    rate   int
    bucket chan struct{}
    stop   chan struct{}
}

func NewRateLimiter(rate int) *RateLimiter {
    rl := &RateLimiter{
        rate:   rate,
        bucket: make(chan struct{}, rate),
        stop:   make(chan struct{}),
    }
    
    // Fill bucket
    for i := 0; i < rate; i++ {
        rl.bucket <- struct{}{}
    }
    
    // Refill bucket at given rate
    go func() {
        ticker := time.NewTicker(time.Second / time.Duration(rate))
        defer ticker.Stop()
        
        for {
            select {
            case <-ticker.C:
                select {
                case rl.bucket <- struct{}{}:
                default: // bucket full
                }
            case <-rl.stop:
                return
            }
        }
    }()
    
    return rl
}

func (rl *RateLimiter) Allow() bool {
    select {
    case <-rl.bucket:
        return true
    default:
        return false
    }
}

func (rl *RateLimiter) Wait(ctx context.Context) error {
    select {
    case <-rl.bucket:
        return nil
    case <-ctx.Done():
        return ctx.Err()
    }
}
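
Two follow-ups. Nothing ever closes stop, so the refill goroutine leaks unless the limiter gets a Stop method. And for production code, golang.org/x/time/rate implements the same token-bucket idea with fewer moving parts; the second snippet assumes that package:

// Stop terminates the refill goroutine (otherwise it leaks)
func (rl *RateLimiter) Stop() {
    close(rl.stop)
}

// x/time/rate equivalent: 100 events/second with a burst of 10
func callWithLimit(ctx context.Context) error {
    limiter := rate.NewLimiter(rate.Limit(100), 10)
    
    if err := limiter.Wait(ctx); err != nil {
        return err // context cancelled or deadline passed
    }
    // ... do the rate-limited work ...
    return nil
}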

Pattern: Pipeline with Error Handling

// Pipeline stage with error handling
type Stage func(context.Context, <-chan int) (<-chan int, <-chan error)

// Compose stages
func Pipeline(ctx context.Context, stages ...Stage) (<-chan int, <-chan error) {
    var (
        dataOut = make(chan int)
        errOut  = make(chan error)
        
        dataIn <-chan int
        errIn  <-chan error
    )
    
    // Start generator
    start := make(chan int)
    go func() {
        defer close(start)
        for i := 1; i <= 100; i++ {
            select {
            case start <- i:
            case <-ctx.Done():
                return
            }
        }
    }()
    
    dataIn = start
    
    // Apply stages
    for _, stage := range stages {
        dataIn, errIn = stage(ctx, dataIn)
        
        // Collect errors
        go func(errors <-chan error) {
            for err := range errors {
                select {
                case errOut <- err:
                case <-ctx.Done():
                    return
                }
            }
        }(errIn)
    }
    
    // Final output
    go func() {
        defer close(dataOut)
        for val := range dataIn {
            select {
            case dataOut <- val:
            case <-ctx.Done():
                return
            }
        }
    }()
    
    return dataOut, errOut
}
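
The Stage type leaves implementations open, so here's one hypothetical conforming stage for completeness. Note also that Pipeline never closes errOut, so callers should drain errors with a select loop rather than a bare range:

// Square is a sample Stage: squares each input, never fails
func Square(ctx context.Context, in <-chan int) (<-chan int, <-chan error) {
    out := make(chan int)
    errs := make(chan error)
    
    go func() {
        defer close(out)
        defer close(errs)
        for n := range in {
            select {
            case out <- n * n:
            case <-ctx.Done():
                return
            }
        }
    }()
    
    return out, errs
}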

Practical Tips

  1. Always use context for long operations
  2. Don't spawn unbounded goroutines; use worker pools
  3. Prefer channels to mutexes for coordination
  4. Use sync/atomic for simple counters
  5. Run tests with -race flag
  6. Restrict channel direction
  7. Always think about graceful shutdown

Concurrency Checklist

  • Context passed as first parameter
  • Goroutines can be stopped via context
  • No orphaned goroutines (leaks)
  • Channels closed by sender
  • Mutexes locked in same order
  • Tests pass with -race flag
  • Graceful shutdown implemented
  • Worker pool for bulk operations

Conclusion

Concurrency in Go isn't just a feature, it's the philosophy of the language. Proper use of goroutines, channels, and context allows writing elegant concurrent code without traditional multithreading problems.

This article concludes the "Clean Code in Go" series. We've covered the journey from functions to concurrency, touching all key aspects of writing idiomatic Go code. Remember: Go is about simplicity, and clean code in Go is code that follows the language's idioms.

What's your worst production incident caused by race conditions? How do you test concurrent code? What patterns have saved you from goroutine leaks? Share your war stories in the comments!


Written by yakovlef | Team Lead | Software Engineer
Published by HackerNoon on 2025/12/09