Using Kubernetes to Generate Load

Our development team recently needed to load test an API server.  Of course our first thought was to build a minimal container with a client, run it in Kubernetes, and scale it up to the desired number of clients.

For a moment we considered running this in the cloud, but we realized that would get expensive very quickly, so we immediately pivoted to running it locally. 

Running the client in a local Kubernetes cluster provided by Rancher Desktop on an M1 Mac worked until we tried to scale it up. It turns out the overhead of virtualization and CPU emulation quickly becomes a bottleneck for this kind of workload.

Next we moved our testing to a Linux laptop, building a local Kubernetes "cluster" using kind and Docker. That eliminated the virtualization overhead, but we still ran into a number of limits:

  1. Max pods per node
  2. Number of open files
  3. Pod IP block size
  4. ARP table size

Our solution ended up being a mix of Kubernetes configuration and sysctl settings; with all of these limits raised, our load generator works!  A script that captures all of this logic (and that you can use for your own load testing) can be found on GitHub.  See the script for the exact commands and implementation, but here are some additional technical details:

  1. kubelet has a default limit of 110 pods per node. Since the cluster itself needs a handful of pods to operate, when we ran kubectl --context kind-loadtest get deployment scaleit, the number in the AVAILABLE column stalled out at 100.  To solve that, when we build the kind cluster we pass in kubeletExtraArgs to set max-pods to a number larger than the number of clients we require (see the kind configuration sketch after this list).
  2. Our client application was crashing with the message "too many open files" at around 120 pods.  We found that the default limits for inotify are low enough to cause problems (with all credit to this entry in the Known Issues page for kind), so we increase the sysctl values fs.inotify.max_user_watches and fs.inotify.max_user_instances to multiples of the maximum number of pods (see the sysctl sketch after this list).
  3. Each node's default pod IP address block is a /24, which caps you at roughly 250 pods (with behavior similar to the max-pods limit we hit earlier). To raise that limit, we also set podSubnet and pass in extraArgs to set node-cidr-mask-size (a controller-manager flag) to values sized for the maximum number of pods when building the kind cluster; this is also shown in the configuration sketch below.
  4. At a little over 500 pods, the ARP table overflows and networking breaks.  We found the message neighbour: arp_cache: neighbor table overflow! in the dmesg output.  To increase the size of the ARP table, we modify the sysctl values net.ipv4.neigh.default.gc_thresh1, net.ipv4.neigh.default.gc_thresh2, and net.ipv4.neigh.default.gc_thresh3 to values large enough to easily accommodate the maximum number of pods (also shown in the sysctl sketch below).
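
For reference, here is a minimal sketch of a kind cluster configuration that addresses the pod-count and IP-block limits (items 1 and 3). The cluster name, subnet, and numbers here are illustrative placeholders rather than the values from our script, and the exact kubeadm patch syntax can vary with the kubeadm API version your node image uses.

```sh
cat <<EOF | kind create cluster --name loadtest --config=-
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  # A /16 pod subnet leaves room for far more pod IPs than the default.
  podSubnet: "10.128.0.0/16"
kubeadmConfigPatches:
# Raise kubelet's 110-pods-per-node default above the number of clients we need.
- |
  kind: InitConfiguration
  nodeRegistration:
    kubeletExtraArgs:
      max-pods: "1000"
# Give the node a larger pod CIDR than the default /24 so it can hold those pods.
- |
  kind: ClusterConfiguration
  controllerManager:
    extraArgs:
      node-cidr-mask-size: "20"
nodes:
- role: control-plane
EOF
```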
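
Along the same lines, here is a sketch of the host-level sysctl tuning for items 2 and 4. The specific numbers are illustrative; scale them to a comfortable multiple of your maximum pod count (the GitHub script contains the values we actually use).

```sh
# Raise the inotify limits behind the "too many open files" crashes.
sudo sysctl -w fs.inotify.max_user_watches=1048576
sudo sysctl -w fs.inotify.max_user_instances=8192

# Enlarge the kernel's ARP (neighbour) table so it can hold an entry per pod.
sudo sysctl -w net.ipv4.neigh.default.gc_thresh1=4096
sudo sysctl -w net.ipv4.neigh.default.gc_thresh2=8192
sudo sysctl -w net.ipv4.neigh.default.gc_thresh3=16384
```

Note that sysctl -w changes do not survive a reboot; drop the same settings into a file under /etc/sysctl.d/ if you want them to persist.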