Drone build without IPv6 interface error

I set this in Docker's daemon.json, which means IPv6 is enabled in Docker:

    {
        "ipv6": true,
        "fixed-cidr-v6": "2001:db8:1::/64"
    }
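To verify the daemon change, a quick check from a plain container might look like this (`2001:4860:4860::8888` is Google's public DNS resolver, used here only as a known reachable IPv6 destination; any IPv6 host works):

```shell
# Run a throwaway Alpine container on the default bridge and
# ping an IPv6 address; BusyBox ping accepts -6 to force IPv6.
docker run --rm alpine ping -6 -c 1 2001:4860:4860::8888
```

If the daemon config took effect, the container gets an address from the `fixed-cidr-v6` range and the ping succeeds.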

I can also run Alpine on the host with IPv6 successfully.
But when the Drone Docker runner does it, I don't know how it works, because the host's Docker bridge network does not have any 172.19.x.x IP. `docker network inspect` shows the runner container itself has only an IPv6 address:

            "b86e3bed93598cf767afd89fd3de8cc500d6630c952656ee44a6ac8228396619": {
                "Name": "drone-runner",
                "EndpointID": "37feadaa4f6dfff551d8c5f84eaca478498755c4b2a00642c935f9ca052e434e",
                "MacAddress": "02:42:ac:11:00:05",
                "IPv4Address": "",
                "IPv6Address": "2001:db8:1::242:ac11:5/64"

So I'm trying to figure out what it does with /var/run/docker.sock, and it looks like it creates a completely separate Docker network, without IPv6:

latest: Pulling from library/alpine
Digest: sha256:bc41182d7ef5ffc53a40b044e725193bc10142a1243f395ee852a8d9730fc2ad
Status: Image is up to date for alpine:latest
+ ip addr
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet scope host lo
       valid_lft forever preferred_lft forever
28: eth0@if29: <BROADCAST,MULTICAST,UP,LOWER_UP,M-DOWN> mtu 1500 qdisc noqueue state UP 
    link/ether 02:42:ac:13:00:02 brd ff:ff:ff:ff:ff:ff
    inet brd scope global eth0
       valid_lft forever preferred_lft forever
+ ping *
PING * (*): 56 data bytes
ping: sendto: Address not available

I also set `--env DRONE_RUNNER_NETWORKS=bridge`, but it's not working.
I used dind to test my guess and found that the runner creates a random Docker network for each build, so can I control this behavior?
Forgive my English; it's machine translated.
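The per-build network guess can be checked from the host while a pipeline is running (the network name Drone generates is random, so the name below is a placeholder; `EnableIPv6` is a standard field in `docker network inspect` output):

```shell
# While a build is running, list networks; a randomly named
# bridge network appears for the duration of the build.
docker network ls

# Inspect that network and print only its IPv6 setting; for the
# runner-created networks this reports "false".
docker network inspect <network-name> --format '{{ .EnableIPv6 }}'
```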

I solved this problem by creating another network:

    docker network create --subnet="2001:db8:1::/64" --gateway="2001:db8:1::1" --ipv6 drone

and then pointing the runner at it with `DRONE_RUNNER_NETWORKS=drone`.
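Putting the workaround together, a sketch (the image name and `DRONE_RPC_*` variables are the standard drone-runner-docker install settings; the host and secret values are placeholders, not from this thread):

```shell
# Create a dual-stack-capable network for pipeline containers.
docker network create \
  --subnet="2001:db8:1::/64" \
  --gateway="2001:db8:1::1" \
  --ipv6 drone

# Start the runner; DRONE_RUNNER_NETWORKS attaches each pipeline
# container to the named network in addition to the per-build one.
docker run --detach \
  --volume=/var/run/docker.sock:/var/run/docker.sock \
  --env=DRONE_RPC_HOST=drone.example.com \
  --env=DRONE_RPC_SECRET=<your-secret> \
  --env=DRONE_RUNNER_NETWORKS=drone \
  --name=drone-runner \
  drone/drone-runner-docker:1
```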
Is it possible to add such a feature to the Drone runner to enable dual-stack networks?


@Cciradih glad that you were able to resolve the issue. Feel free to ping us if you have any further queries.