I recently moved from the Docker runtime to the Kubernetes runtime (loving the Helm charts). I saw that it creates a pod for each pipeline (very clever), and since all containers in a pod share the same network, I expected the service to be available at 127.0.0.1.
Now I understand it’s a DinD thing, but I have no idea how to overcome it. I know other people have the same issue; if you solved it somehow, please share your ideas.
This is not possible with the docker-in-docker plugin. The docker plugin uses docker-in-docker and runs in an isolated container and network that is intentionally locked down. If the docker plugin does not meet your needs, you may consider alternatives, including creating your own plugin that uses an alternate daemon or networking configuration [1], or avoiding plugins and interacting directly with docker [2][3].
Thanks for the explanation, now I can stop trying to figure out how to make it work.
My next option is creating a Kubernetes Service that maps to the pipeline pod, so that my docker build can access it as an external host.
I can see a few concurrency problems, but it’s better than three days of pipelines not working.
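For anyone curious, the Service workaround could be sketched roughly like this. The `pipeline-svc` name, the `app: pipeline` label, and the port are all assumptions for illustration; a real setup would need a selector that matches whatever labels the runner actually puts on its pipeline pods (which is also where the concurrency problems come in, since multiple concurrent pipeline pods would match the same selector):

```yaml
# Hypothetical Service exposing a pipeline pod to a docker build.
# Assumption: pipeline pods carry the label app=pipeline -- adjust the
# selector to match the labels your runner really sets.
apiVersion: v1
kind: Service
metadata:
  name: pipeline-svc
spec:
  selector:
    app: pipeline
  ports:
    - name: http
      port: 8080        # port the Service exposes to other pods
      targetPort: 8080  # container port inside the pipeline pod
```

The docker build could then reach the service at `pipeline-svc.<namespace>.svc.cluster.local:8080` instead of 127.0.0.1, assuming the build container can resolve cluster DNS.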
Do you think someday we will be able to define the pod name based on Kubernetes metadata?
That would simplify a lot of things; the random name makes it harder to debug.