Autoscale and privileged containers

I’ve managed to get the autoscaler running on Hetzner Cloud, and for some projects it works just fine (tested e.g. with the docker plugin).

But I’m also using it for molecule (the Ansible testing framework), which requires running containers in privileged mode. One example here: GitHub - VeselaHouba/ansible-role-bareos
The containers start, but something prevents them from running systemd tasks: they just time out. This happens only on runners created by the autoscaler; on runners created with a simple docker-compose setup everything works fine.
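For context, the molecule scenario for such a role runs the test container privileged with systemd as PID 1. A typical platform definition looks roughly like this (instance name and image are illustrative, not the role’s exact config):

```yaml
platforms:
  - name: instance                             # illustrative name
    image: geerlingguy/docker-centos7-ansible  # any systemd-capable image
    privileged: true                           # needed for systemd in the container
    volumes:
      - /sys/fs/cgroup:/sys/fs/cgroup:ro
    command: /usr/sbin/init                    # boot systemd as PID 1
```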

```yaml
version: '3'
services:
  drone-runner:
    image: drone/drone-runner-docker:1
    restart: unless-stopped
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ""
    environment:
      - DRONE_RPC_PROTO=https
      - DRONE_RPC_HOST=my.drone.master
      - DRONE_RPC_SECRET={{ vault_drone_rpc_secret }}
      - DRONE_RUNNER_CAPACITY=2
      - DRONE_RUNNER_NAME={{ inventory_hostname }}
```

I’ve compared `docker inspect` output from both runners and found no significant differences, so I suspect the VM or the docker installation created by the autoscaler differs somehow from my own installation (which is a more or less default docker install). Maybe the docker daemon.json config, which allows remote calls but also requires valid SSL. ¯\_(ツ)_/¯
Anyone got any idea where and how to start debugging?
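One way to start, assuming shell access to both runner VMs: dump the daemon config and a few `docker info` fields on each host, then diff the two reports. This is only a sketch; the daemon.json path is the usual default and may not exist on a stock install:

```shell
# Run on each runner, then diff the resulting files.
{
  echo "kernel: $(uname -r)"
  echo "--- /etc/docker/daemon.json ---"
  cat /etc/docker/daemon.json 2>/dev/null || echo "(no daemon.json)"
  echo "--- docker info ---"
  docker info --format '{{.CgroupDriver}} {{.SecurityOptions}}' 2>/dev/null || true
  echo "--- memory ---"
  grep MemTotal /proc/meminfo
} > runner-report.txt
```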

Looks like the only issue was that I was creating cx11-type VMs, which don’t have enough memory to handle the services. Running with cx21 seems to solve the issue.
So, false alarm: this is completely unrelated to the autoscaler, which works perfectly.
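For reference, cx11 instances come with 2 GB of RAM and cx21 with 4 GB (check Hetzner’s current specs, they may change). A quick way to see what an autoscaled runner actually got:

```shell
# Print total memory in MB from /proc/meminfo (Linux only)
mem_mb=$(awk '/^MemTotal:/ {printf "%d", $2/1024}' /proc/meminfo)
echo "Total memory: ${mem_mb} MB"
```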