Chapter 5 — Concurrency in Go: Goroutines, Channels, and the Art of Doing Many Things Well
- 05 Apr, 2025
Go’s concurrency model is one of the language’s defining features. It is simple, elegant, and powerful enough to build everything from high‑throughput servers to distributed systems. Unlike languages that rely heavily on threads, locks, and shared memory, Go encourages a different mental model: don’t fight over memory; communicate instead.
This chapter explores goroutines, channels, synchronization primitives, and the patterns that make Go’s concurrency model both approachable and production‑ready.
Why Concurrency Matters
Modern software rarely does just one thing at a time. Servers handle thousands of requests. Applications stream data, process events, and coordinate background tasks. Concurrency is no longer optional — it is foundational.
Go’s designers wanted a model that:
- is easy to reason about
- avoids the pitfalls of shared memory
- scales naturally with modern CPUs
- encourages safe communication between tasks
The result is a concurrency system built around two core ideas:
- goroutines — lightweight concurrent functions
- channels — typed conduits for communication
Together, they form the backbone of Go’s approach to concurrency.
Goroutines: Lightweight Concurrent Execution
A goroutine is a function running concurrently with other goroutines. It is created with the go keyword:
go doWork()
Goroutines are extremely lightweight. Unlike OS threads, they:
- start quickly
- use small initial stacks
- grow and shrink dynamically
- are multiplexed onto system threads by the Go runtime
This makes it feasible to run thousands — even millions — of goroutines in a single program.
Goroutine Example
func fetch(url string) {
    // perform network request
}

func main() {
    go fetch("https://example.com")
    go fetch("https://golang.org")
}
Each call to fetch runs concurrently. There is a catch, though: main returns immediately, and when main returns the program exits, so the goroutines may never finish. main needs a way to wait for them, which leads naturally to channels.
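As a preview, one way to make main wait is to have each goroutine report back on a channel. This is a minimal sketch, not a real HTTP client: the fetch below is a stand-in that only echoes its URL.

```go
package main

import "fmt"

// fetch stands in for a real network request; here it just
// reports the URL it handled on the done channel.
func fetch(url string, done chan<- string) {
    // perform network request (omitted)
    done <- url
}

func main() {
    done := make(chan string)
    go fetch("https://example.com", done)
    go fetch("https://golang.org", done)

    // Receive once per goroutine, so main cannot exit
    // before both have finished.
    for i := 0; i < 2; i++ {
        fmt.Println("finished:", <-done)
    }
}
```

The receive loop doubles as synchronization: main blocks until both sends have happened.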
Channels: Communication and Synchronization
Channels are typed conduits that allow goroutines to communicate safely:
ch := make(chan int)
Sending a value:
ch <- 42
Receiving a value:
value := <-ch
Channels enforce synchronization. On an unbuffered channel, a send blocks until a receiver is ready, and a receive blocks until a value is available. This eliminates many race conditions without explicit locks.
Example: Worker Reporting Results
func worker(ch chan string) {
    ch <- "done"
}

func main() {
    ch := make(chan string)
    go worker(ch)
    msg := <-ch
    fmt.Println(msg)
}
The main goroutine waits until the worker sends a message.
Buffered Channels
Buffered channels allow sending without an immediate receiver:
ch := make(chan int, 3)
This creates a channel with capacity 3. Sends block only when the buffer is full.
Buffered channels are useful for:
- rate limiting
- batching
- decoupling producers and consumers
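A common way to use a buffered channel for rate limiting is as a counting semaphore. The sketch below assumes a hypothetical sumSquares helper; the buffer capacity caps how many goroutines run the guarded section at once.

```go
package main

import "fmt"

// sumSquares squares each number in its own goroutine, but uses a
// buffered channel as a counting semaphore so that at most `limit`
// squares are computed concurrently.
func sumSquares(nums []int, limit int) int {
    sem := make(chan struct{}, limit) // capacity = max concurrent workers
    results := make(chan int)

    for _, n := range nums {
        go func(n int) {
            sem <- struct{}{}        // acquire a slot; blocks when limit is reached
            defer func() { <-sem }() // release the slot
            results <- n * n
        }(n)
    }

    sum := 0
    for range nums {
        sum += <-results
    }
    return sum
}

func main() {
    fmt.Println(sumSquares([]int{1, 2, 3, 4, 5}, 2)) // 1+4+9+16+25 = 55
}
```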
The select Statement
select allows waiting on multiple channel operations:
select {
case msg := <-ch1:
    fmt.Println("received:", msg)
case ch2 <- "ping":
    fmt.Println("sent ping")
default:
    fmt.Println("no activity")
}
select is essential for:
- timeouts
- fan‑in / fan‑out patterns
- multiplexing
- cancellation
Concurrency Patterns
Go’s concurrency model encourages a set of idiomatic patterns that appear across real‑world systems.
Fan‑Out
Start multiple workers to process tasks concurrently:
for i := 0; i < 5; i++ {
    go worker(tasks)
}
Fan‑In
Combine results from multiple goroutines into a single results channel, then consume them all in one place:
for result := range results {
    fmt.Println(result)
}
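Ranging over results only works once the output of several goroutines has been funneled into one channel. A sketch of that funneling step, using a hypothetical merge helper built on sync.WaitGroup:

```go
package main

import (
    "fmt"
    "sync"
)

// merge fans in any number of input channels onto one output channel,
// closing the output once every input has been drained.
func merge(inputs ...<-chan int) <-chan int {
    out := make(chan int)
    var wg sync.WaitGroup

    for _, in := range inputs {
        wg.Add(1)
        go func(ch <-chan int) {
            defer wg.Done()
            for v := range ch {
                out <- v
            }
        }(in)
    }

    // Close out only after every forwarding goroutine is done,
    // so the consumer's range loop terminates.
    go func() {
        wg.Wait()
        close(out)
    }()
    return out
}

func main() {
    a := make(chan int)
    b := make(chan int)
    go func() { a <- 1; a <- 2; close(a) }()
    go func() { b <- 3; close(b) }()

    sum := 0
    for v := range merge(a, b) {
        sum += v
    }
    fmt.Println("total:", sum) // 6
}
```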
Pipelines
Chain stages of processing:
stage1 -> stage2 -> stage3
Each stage is a goroutine connected by channels.
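A two-stage pipeline can be sketched as follows; gen and square are hypothetical stage names, and each stage owns and closes its output channel so the next stage's range loop terminates.

```go
package main

import "fmt"

// gen emits the given numbers on a channel, then closes it.
func gen(nums ...int) <-chan int {
    out := make(chan int)
    go func() {
        defer close(out)
        for _, n := range nums {
            out <- n
        }
    }()
    return out
}

// square reads from the previous stage and feeds the next.
func square(in <-chan int) <-chan int {
    out := make(chan int)
    go func() {
        defer close(out)
        for n := range in {
            out <- n * n
        }
    }()
    return out
}

func main() {
    for v := range square(gen(1, 2, 3)) {
        fmt.Println(v) // prints 1, 4, 9, one per line
    }
}
```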
Worker Pools
Limit concurrency while processing many tasks:
jobs := make(chan Job)
results := make(chan Result)

for i := 0; i < 4; i++ {
    go worker(i, jobs, results)
}
Worker pools are essential for CPU‑bound tasks or rate‑limited APIs.
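The Job and Result types above are left abstract. A runnable sketch of the same pool, with ints standing in for both; closing jobs is what tells the workers to exit their range loops.

```go
package main

import "fmt"

// worker squares jobs; four of these run concurrently below.
func worker(id int, jobs <-chan int, results chan<- int) {
    for j := range jobs {
        results <- j * j
    }
}

func main() {
    jobs := make(chan int, 8)
    results := make(chan int, 8)

    for i := 0; i < 4; i++ {
        go worker(i, jobs, results)
    }

    for j := 1; j <= 8; j++ {
        jobs <- j
    }
    close(jobs) // no more work: each worker's range loop ends

    sum := 0
    for i := 0; i < 8; i++ {
        sum += <-results
    }
    fmt.Println("sum:", sum) // 1+4+...+64 = 204
}
```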
Synchronization Primitives
Although channels are the preferred communication mechanism, Go provides additional tools when needed.
WaitGroups
Wait for a collection of goroutines to finish:
var wg sync.WaitGroup

wg.Add(1)
go func() {
    defer wg.Done()
    doWork()
}()

wg.Wait()
Mutexes
Protect shared state:
var mu sync.Mutex
mu.Lock()
count++
mu.Unlock()
Mutexes are appropriate when:
- shared memory is unavoidable
- performance is critical
- channels would complicate the design
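As a sketch, the counter above can be wrapped in a small type so that every access goes through the same mutex; the Counter name and methods here are illustrative, not a standard API.

```go
package main

import (
    "fmt"
    "sync"
)

// Counter guards its count with a mutex so concurrent
// increments never race.
type Counter struct {
    mu    sync.Mutex
    count int
}

func (c *Counter) Inc() {
    c.mu.Lock()
    defer c.mu.Unlock()
    c.count++
}

func (c *Counter) Value() int {
    c.mu.Lock()
    defer c.mu.Unlock()
    return c.count
}

func main() {
    var c Counter
    var wg sync.WaitGroup
    for i := 0; i < 100; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            c.Inc()
        }()
    }
    wg.Wait()
    fmt.Println(c.Value()) // always 100, even under the race detector
}
```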
Context: Cancellation and Deadlines
The context package provides cancellation, timeouts, and request scoping:
ctx, cancel := context.WithTimeout(context.Background(), time.Second)
defer cancel()
select {
case <-ctx.Done():
    fmt.Println("timeout")
}
Context is essential for:
- HTTP servers
- background tasks
- distributed systems
- graceful shutdown
Avoiding Common Concurrency Pitfalls
Even with Go’s clean model, concurrency can go wrong. Common issues include:
- goroutine leaks
- unbuffered channels blocking unexpectedly
- forgetting to close channels
- race conditions on shared memory
- deadlocks from circular waits
Tools like go vet and the race detector help catch these issues early.
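Goroutine leaks in particular often come from a sender blocked on a channel nobody reads. A sketch of the standard fix, using a hypothetical lookup function: give the result channel a buffer of one, so the goroutine can always deliver and exit even if the caller has given up.

```go
package main

import "fmt"

// lookup returns its result via a buffered channel. With an
// unbuffered channel, the goroutine would block forever on the
// send whenever the caller stops listening; the buffer of one
// lets it finish and be garbage collected instead.
func lookup(q string) <-chan string {
    ch := make(chan string, 1) // buffer prevents the leak
    go func() {
        ch <- "result for " + q // never blocks
    }()
    return ch
}

func main() {
    fmt.Println(<-lookup("go")) // result for go
}
```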
The Go Way of Concurrency
Go’s concurrency model is built on a simple philosophy:
- Start many goroutines.
- Communicate through channels.
- Avoid shared memory unless necessary.
- Use context for cancellation.
- Keep patterns simple and composable.
This approach scales from small scripts to massive distributed systems.
The next chapter explores Go’s standard library — the batteries‑included toolkit that makes Go productive for everything from networking to file I/O to testing.