Go, Containers, and the Linux Scheduler

Many Go developers, myself included, deploy their applications in containers. When running under a container orchestrator, it is common to set CPU limits so that a single container cannot consume all of the CPU available on the host. The issue is that the Go runtime is not aware of these limits: it sizes itself for every CPU it can see and keeps trying to use them all. This can lead to the container being throttled by the Linux scheduler and, in turn, to high tail latency. To understand why this happens and how to fix it, let’s take a look at how the Go Garbage Collector (GC) works.
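A quick way to see the mismatch is to print what the runtime believes it has. The sketch below is my own illustration, not code from the original post; inside a container limited to 4 CPUs on, say, a 16-core host, both values still report 16 on Go versions that do not read the cgroup quota.

```go
// Minimal sketch: what the Go runtime believes about available CPUs.
package main

import (
	"fmt"
	"runtime"
)

func main() {
	// NumCPU reports the logical CPUs visible to the process -- the host's
	// core count, regardless of any container CPU limit.
	fmt.Println("NumCPU:    ", runtime.NumCPU())

	// GOMAXPROCS defaults to NumCPU, so the scheduler runs goroutines and
	// GC workers on more threads than the CPU quota actually allows.
	fmt.Println("GOMAXPROCS:", runtime.GOMAXPROCS(0))
}
```

Setting the GOMAXPROCS environment variable (or using a library such as go.uber.org/automaxprocs) brings the runtime back in line with the container's quota.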

The Go GC is predominantly performed concurrently with the execution of your program. However, there are two points during the GC process where the Go runtime needs to stop every Goroutine. This is necessary to maintain data integrity. These stop-the-world phases, known as Sweep Termination and Mark Termination, usually take a few tens of microseconds.
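To put rough numbers on those pauses, a small sketch (again my own, not from the post) can read the pause history the runtime keeps in MemStats; running a program with GODEBUG=gctrace=1 prints similar per-cycle timing.

```go
// Minimal sketch: force a collection and report the most recent
// stop-the-world pause recorded by the runtime.
package main

import (
	"fmt"
	"runtime"
	"time"
)

func main() {
	var ms runtime.MemStats

	runtime.GC() // force a collection so at least one pause is recorded

	runtime.ReadMemStats(&ms)
	// PauseNs is a circular buffer of recent GC pause durations;
	// the most recent entry sits at (NumGC+255)%256.
	last := ms.PauseNs[(ms.NumGC+255)%256]
	fmt.Printf("last GC pause: %v (total: %v)\n",
		time.Duration(last), time.Duration(ms.PauseTotalNs))
}
```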

To demonstrate this issue, I created a web application that consumes a lot of memory and ran it in a container with a limit of 4 CPU cores. I used the docker command “docker run --cpus=4 -p 8080:8080” to apply the limit. It is worth noting that the docker CPU limit is a soft limit, meaning it is only enforced when the host's CPU is constrained.
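The demo application was, in rough shape, a web server whose handler allocates heavily so the GC runs often under load. The sketch below is my own placeholder, not the original code; the handler and the 10 MiB allocation size are assumptions for illustration.

```go
// Rough sketch of a memory-hungry demo server (placeholder, not the
// original application from the post).
package main

import (
	"log"
	"net/http"
)

func handler(w http.ResponseWriter, r *http.Request) {
	// Allocate a sizeable short-lived buffer on every request so the heap
	// grows quickly and the GC runs frequently under load.
	buf := make([]byte, 10<<20) // ~10 MiB of garbage per request (placeholder size)
	w.Write(buf[:16])
}

func main() {
	http.HandleFunc("/", handler)
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

Load-testing this endpoint (for example with hey or wrk) while the container is CPU-limited surfaces the latency spikes described above.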

https://www.riverphillips.dev/blog/go-cfs/
