I’m a retired Unix admin; it was my job from the early '90s until the mid '10s, and I’ve kept somewhat current ever since by running various machines at home. So far I’ve managed to avoid using Docker at home even though I have a decent understanding of how it works: after I stopped being a sysadmin I still worked for a technology company and did plenty of “interesting” reading and training.
It seems that more and more stuff that I want to run at home is being delivered as Docker-first and I have to really go out of my way to find a non-Docker install.
I’m thinking it’s no longer a fad and I should invest some time getting comfortable with it?
Can you expand on that? I use docker-compose and have probably around 10 services on the same box. I don’t foresee any limitations beyond hardware if I wanted to just keep adding more, but maybe I’m missing something.
If one service needs to connect to another service, I have to add a shared network between them. In that case the services essentially share a common DNS namespace, and name resolution would routinely leak from one service to another and cause outages. E.g. if I connected the Gitlab, Redmine, and OpenLDAP containers over one shared network, sometimes Redmine’s nginx container would reach the Gitlab container instead of the Redmine container, and the Gitlab container would connect to Redmine’s DB instead of its own.
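For anyone who hasn’t hit this: Compose registers each service’s name as a DNS alias on every network the container joins, so if two stacks both define a service with the same name and both attach it to a shared external network, that one name points at two different containers. A minimal sketch of the failure mode (service and network names here are made up):

    # gitlab/docker-compose.yml
    services:
      postgres:               # Gitlab's database
        image: postgres:16
        networks: [shared]
    networks:
      shared:
        external: true        # also joined by the Redmine stack below

    # redmine/docker-compose.yml
    services:
      postgres:               # Redmine's database, same service name
        image: postgres:16
        networks: [shared]
    networks:
      shared:
        external: true        # "postgres" now has two answers on this network

On the shared network a lookup of “postgres” can return either container, which is exactly the wrong-DB symptom above.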
I maintained some workarounds: starting Gitlab after starting Redmine would work fine, but starting them the other way round would trigger the issue. Switching to Kubernetes and replacing the cross-service connections with network policies solved it for me.
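Two things in Kubernetes help here. First, service DNS names are scoped per namespace (db.redmine.svc.cluster.local vs db.gitlab.svc.cluster.local), so identical names can’t shadow each other. Second, NetworkPolicies state explicitly who may talk to whom. A rough sketch of the latter, assuming hypothetical gitlab/redmine/ldap namespaces and an app: openldap pod label:

    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: allow-ldap-clients
      namespace: ldap                  # hypothetical namespace for OpenLDAP
    spec:
      podSelector:
        matchLabels:
          app: openldap                # policy applies to the OpenLDAP pods
      policyTypes: [Ingress]
      ingress:
        - from:                        # only these namespaces may connect
            - namespaceSelector:
                matchLabels:
                  kubernetes.io/metadata.name: gitlab
            - namespaceSelector:
                matchLabels:
                  kubernetes.io/metadata.name: redmine
          ports:
            - protocol: TCP
              port: 389                # plain LDAP; use 636 for LDAPS

One caveat: a NetworkPolicy is only enforced if the cluster’s network plugin supports it.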
An easy fix for this is to create an individual network per connection. I.e. don’t create one network with Gitlab, Redmine, and OpenLDAP; create two: one with Gitlab and OpenLDAP, and one with Redmine and OpenLDAP.
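In compose terms that could look roughly like this (a sketch; the image choices and network names are illustrative): OpenLDAP joins both per-connection networks, each app stack joins only its own, and everything else stays on each project’s default network.

    # openldap/docker-compose.yml
    services:
      openldap:
        image: osixia/openldap        # illustrative image choice
        networks: [gitlab-ldap, redmine-ldap]
    networks:
      gitlab-ldap:
        name: gitlab-ldap             # fixed names so other stacks can join
      redmine-ldap:
        name: redmine-ldap

    # gitlab/docker-compose.yml
    services:
      gitlab:
        image: gitlab/gitlab-ce
        networks: [default, gitlab-ldap]
    networks:
      gitlab-ldap:
        external: true                # created by the openldap stack

Since the Gitlab and Redmine internals stay on their own project-default networks, their duplicate service names never share a DNS scope; only OpenLDAP is reachable from both.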
This was the setup I had, but I’m on Kubernetes now with no intention of switching back.
Very understandable :)