Concurrency in Go
Go’s concurrency model is built around goroutines and channels. A goroutine is a lightweight thread of execution managed by the Go runtime, rather than the operating system. Because goroutines are extremely cheap, Go makes it practical to design programs that spin up thousands or even millions of goroutines when needed.
Goroutines and Processors
By default, Go schedules all active goroutines across as many processors (CPUs) as it sees available. However, you can customize the number of CPUs the Go runtime is allowed to use with `runtime.GOMAXPROCS`. This function sets the maximum number of OS threads that can execute user-level Go code simultaneously, effectively controlling parallelism.
Basic Goroutine Example
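A minimal sketch of the example, consistent with the explanation below (the body of `sayHello` is assumed to just print a greeting):

```go
package main

import (
	"fmt"
	"time"
)

// sayHello prints a greeting; it will run in its own goroutine.
func sayHello() {
	fmt.Println("Hello from a goroutine!")
}

func main() {
	go sayHello() // spawn a new goroutine running sayHello concurrently with main

	// Rudimentary wait so the goroutine can finish before main exits.
	time.Sleep(time.Second)
}
```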
Explanation:
- `go sayHello()` spawns a new goroutine running `sayHello()` concurrently with `main`.
- `time.Sleep(time.Second)` is a rudimentary wait to let the goroutine complete before the program exits.
Using Multiple CPUs with Goroutines
In Go, you can utilize goroutines to run code concurrently on multiple processors. The `runtime` package provides functionality to control how many processors (CPUs) Go uses.
Steps:
1. Use goroutines: to run tasks concurrently.
2. Use `runtime.GOMAXPROCS`: to set the maximum number of CPUs that Go should utilize.
3. Use `sync.WaitGroup` or channels: to synchronize or wait for all goroutines to finish.
Example Code:
Explanation:
- `runtime.NumCPU()` finds the number of logical CPUs available on the system.
- `runtime.GOMAXPROCS(numCPUs)` tells Go to use all those CPUs for parallel execution (you can adjust this to any integer up to `numCPUs`).
- A `sync.WaitGroup` is used to wait for all goroutines to finish before the program exits.
- In this example, each goroutine runs the `task(id)` function, simulating concurrent tasks.
Key Points:
- Goroutines are lightweight, making it practical to launch many of them for concurrent or parallel operations.
- `runtime.GOMAXPROCS` lets you tune how many CPU cores the Go scheduler may utilize. By default, Go uses all available cores, but you can limit or expand usage based on your needs.
- Channels or `sync.WaitGroup` can be used for synchronization. Here, `WaitGroup` is the simpler choice to just block until all goroutines complete.
Channels for Communication
While `sync.WaitGroup` helps wait for goroutines to finish, channels provide a way to pass data between goroutines safely and synchronize their execution. For instance:
Producer-Consumer with Channels
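A sketch of the producer-consumer pattern described below (the function bodies are illustrative):

```go
package main

import "fmt"

// producer sends the values 0..4 into ch, then closes it so the
// consumer's range loop terminates.
func producer(ch chan<- int) {
	for i := 0; i < 5; i++ {
		ch <- i
	}
	close(ch)
}

// consumer reads from ch until it is closed, then signals done.
func consumer(ch <-chan int, done chan<- bool) {
	for v := range ch {
		fmt.Println("received:", v)
	}
	done <- true
}

func main() {
	ch := make(chan int)    // unbuffered channel for integers
	done := make(chan bool) // signals that the consumer has finished

	go producer(ch)
	go consumer(ch, done)

	<-done // wait for the consumer, avoiding premature program exit
}
```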
Explanation:
- `ch` is an unbuffered channel for integers.
- `producer` sends values `0..4` into `ch` and then closes it.
- `consumer` reads from `ch` until it's closed, printing the received values.
- The `done` channel signals the main goroutine that the consumer is finished, avoiding premature program exit.
Synchronization Primitives
Besides channels, the `sync` package provides additional concurrency tools:

- `sync.Mutex` and `sync.RWMutex`: for protecting shared data with locks.
- `sync.Cond`: for advanced conditional synchronization.
- `sync.Map`: a concurrent map implementation with optimized operations under certain workloads.
Here’s a quick example with a `sync.Mutex`:
Explanation:
- The `SafeCounter` uses a `sync.Mutex` to ensure that increments do not race each other.
- Without the lock, multiple goroutines could increment the value at the same time, leading to race conditions and incorrect counts.
Concurrency vs. Parallelism
- Concurrency: structuring a program as multiple independent tasks that can run out of order or in partial order, without necessarily running in parallel.
- Parallelism: physically executing multiple tasks at the same time on different CPU cores.
Go provides concurrency primitives via goroutines and channels, and you can achieve parallel execution by running goroutines on multiple cores (as showcased by adjusting `runtime.GOMAXPROCS`).
Common Pitfalls
- Goroutine leaks: always ensure goroutines have a clear exit path, or they may leak and remain blocked indefinitely (e.g., stuck on a channel read with no writes).
- Data races: occur when two or more goroutines access the same variable simultaneously, and at least one modifies it. Use channels or locks to protect shared data.
- Deadlocks: two or more goroutines waiting on each other's locks or channels indefinitely.
- Excessive goroutines: while goroutines are cheap, creating millions of them without structure can degrade performance or consume memory.
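As an illustration of the first pitfall, a goroutine blocked on a channel read that no one will ever write to stays alive for the rest of the program (the `leaky` helper here is hypothetical):

```go
package main

import "fmt"

// leaky starts a goroutine that blocks forever on a channel
// nobody ever writes to -- a classic goroutine leak.
func leaky() {
	ch := make(chan int)
	go func() {
		v := <-ch // blocks forever: no writer, no exit path
		fmt.Println(v)
	}()
	// leaky returns, but the goroutine above remains blocked
	// for the life of the program.
}

func main() {
	leaky()
	fmt.Println("main continues; the leaked goroutine never runs")
}
```

Running such code under `go vet` won't catch the leak; giving each goroutine a cancellation signal (a `done` channel or `context.Context`) is the usual cure.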
Conclusion
Go’s concurrency model, centered on goroutines and channels, offers a simple yet powerful way to write highly concurrent (and potentially parallel) programs. By using `runtime.GOMAXPROCS`, you can fine-tune how many CPU cores the scheduler may employ, allowing tasks to run in parallel for improved performance on multi-core systems. Tools like `sync.WaitGroup`, `sync.Mutex`, and channels help coordinate goroutines safely. With these constructs, Go encourages writing clear, robust concurrent code that scales effectively across many cores and workloads.