Unix Sockets are much faster than TCP loopbacks; use them for Kubernetes sidecars.
So you’ve decided to run a sidecar alongside a service in your pod. Maybe it’s Nginx, ProxySQL, or a custom microservice you’ve written. You’ve likely done this for performance reasons; if so, shouldn’t you squeeze as much speed out of this colocation as possible? I’m going to show you how to really speed up your network communications by using a Unix Socket instead of a TCP loopback.
Testing the Premise
I’ve written some example Go server and client code that we can execute anywhere to see how much faster a Unix Socket is than a TCP loopback.
Here’s our basic TCP Server:
The only change we need to make for this to use a Unix Socket is here:
But now we’ve got a pesky socket file hanging around after execution, so let’s also add:
Here’s our TCP client:
And just change this:
For our Unix Socket client.
And the results:
% go run tcpclient.go
2021/07/21 15:20:08 Time taken 1.681643416s
% go run socketclient.go
2021/07/21 15:20:19 Time taken 656.821252ms
That’s about 60% faster. So what’s going on here? Why is the Unix Socket so much faster?
Let’s add this profiling code to our clients to see what’s going on under the hood:
Let’s look at the profile now:
Opening a connection and reading from it are significantly faster with a Unix Socket, while closing the connection is faster over TCP. Let’s take a look at the top functions in each profile:
We can see, when you use a Unix Socket, you’re just asking your system to do a lot less work.
A side note: when I first wrote the TCP client above, I used ‘localhost’ instead of ‘127.0.0.1’ without thinking. Of course that’s slower, because it has to do a name lookup on every iteration, but how much slower shocked me: the same test took over 50 seconds instead of 1.6. A properly written client would resolve localhost once before the loop, but how easy is that to overlook?
So what does this have to do with Kubernetes sidecars again? The answer is simple: mount an empty volume shared between the two containers you’ve colocated in the same pod, and communicate over a Unix Socket instead of TCP. Most services, Nginx included, can listen on or talk to a Unix Socket, and updating your own code to do the same will likely be easy.
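A sketch of what that pod spec might look like, using an `emptyDir` volume mounted into both containers (the names, images, and mount path are all placeholder choices):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar        # hypothetical name
spec:
  containers:
  - name: app
    image: my-app:latest        # placeholder image
    volumeMounts:
    - name: sockets
      mountPath: /sockets       # the app listens on /sockets/app.sock
  - name: nginx
    image: nginx:latest
    volumeMounts:
    - name: sockets
      mountPath: /sockets       # nginx proxies to unix:/sockets/app.sock
  volumes:
  - name: sockets
    emptyDir: {}                # shared, pod-lifetime scratch volume
```

Both containers see the same `/sockets` directory, so a socket file created by one is immediately dialable by the other.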
Do keep in mind that the number of open socket files is governed by your open file descriptor limit. By default, for a non-root user, that’s going to be 1024 on most Linux distros, so you may need to raise it depending on what you’re doing.
So we’ve seen that using a Unix Socket can be about 60% faster than a TCP loopback, and that it’s faster because the system does less work per connection, which also means less system load. We’ve seen that converting a service to use a Unix Socket is pretty easy. And one thing we already knew, but had underlined for us: DNS resolution is really slow. Now go forth and speed up your Kubernetes sidecars.