Communication between Docker containers on different hosts in the same local network

I have two Docker containers running on two different hosts:

  • Computer A (desktop PC, wired Ethernet), IP 192.168.0.11 [Docker container running inside with IP 172.17.0.2], OS Windows 7
  • Computer B (laptop, WLAN), IP 192.168.0.12 [Docker container running inside with IP 172.17.0.2], OS Ubuntu Linux

I would like to connect them somehow, so that they can communicate with each other (for example via ssh).

Ho cercato di "colbind" i miei contenitori eseguendoli con:

    docker run --name master_node --rm -it branislava/ubuntu1504java8:v1
    docker run --rm -it --link master_node:master_node1 --name slave_node branislava/ubuntu1504java8:v1

as suggested in the comments. This works; the containers can communicate – but only when they run on the same host machine.

How can this be achieved for containers running on different machines in the same local network?

Show me the docker command.

I think your command is missing the link.

Here is an example:

    docker run --name=redmine -it --rm --link=postgresql-redmine:postgresql \
      --volume=/srv/docker/redmine/redmine:/home/redmine/data \
      sameersbn/redmine:3.3.1
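For context, what `--link` does here (a sketch, using the container names from the example above): Docker adds a hosts entry for the alias inside the client container, and legacy links also inject environment variables for the linked container's exposed ports.

    # Inside the "redmine" container, the alias resolves to the linked container's IP:
    ping postgresql
    # Legacy links also inject variables such as POSTGRESQL_PORT_5432_TCP_ADDR
    # (assuming the linked image exposes port 5432):
    env | grep -i postgresql

Note that `--link` only wires up containers on the same Docker daemon, so it cannot by itself connect containers on two different hosts.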

I have never tried it, but on the other PC you could do:

    docker run --name=Client -it --rm \
      --env='MEMCACHE_HOST=192.168.1.12' --env='MEMCACHE_PORT=22' \
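Without a swarm, the simplest way to let two containers on different hosts talk is to publish the relevant port on each host and address the other *host's* LAN IP, which is what the `MEMCACHE_HOST` variable above is hinting at. A minimal sketch for the ssh case, assuming an image that runs sshd on port 22 (`my-ssh-image` is a placeholder name):

    # On Computer B (192.168.0.12): map host port 2222 to the container's sshd.
    docker run -d --name server_node -p 2222:22 my-ssh-image

    # On Computer A: connect through host B's LAN address and the published port.
    ssh -p 2222 root@192.168.0.12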

Below is a bash script that establishes communication between Docker containers of the same image across different hosts in the same local network:

    # First, we have to create a swarm.
    # We have to discover the IP address of the swarm leader (i.e. the manager, or master node):
    ifconfig
    # My master computer has IP 192.168.0.12 in the local network.
    sudo docker swarm init --advertise-addr 192.168.0.12

    # Info about the swarm:
    sudo docker info
    # Info about the nodes (currently only one); each node is named automatically
    # after its host's hostname:
    sudo docker node ls

    # Adding new nodes to the swarm:
    # ssh to the slave node (i.e. worker node), or type in physically.
    # This command was generated by `sudo docker swarm init --advertise-addr 192.168.0.12`
    # on the manager computer:
    docker swarm join --token SWMTKN-1-55b12pdctfnvr1wd4idsuzwx34vcjwv9589azdgi0srgr3626q-01zjw639dyoy1ccgpiwcouqlk 192.168.0.12:2377
    # If one cannot remember or find this command, type on the manager host:
    docker swarm join-token worker

    # ssh to the manager, or type in directly (lists the existing nodes in the swarm):
    sudo docker node ls

    # Add a docker image as a process that will run in these containers.
    # ssh to the manager, or type in directly.
    # `--replicas` is the number of nodes; the manager is also a worker node.
    sudo docker service create --replicas 2 --name master_node image-name sleep infinity
    # I have to enter the containers and set some things up before running the
    # application, so I use `sleep infinity`; otherwise this is not necessary.

    # What's up with the running process:
    sudo docker service inspect --pretty etdo0z8o8timbsdmn3qdv381i
    # or:
    sudo docker service inspect master_node
    # Also (but only from the manager):
    sudo docker service ps master_node
    # See the running containers (from a worker or from the manager):
    sudo docker ps

    # Promote a node from worker to manager (`default` is the name of my worker node):
    sudo docker node promote default
    # Demote a node from manager to worker:
    sudo docker node demote default

    # Entering a container, if needed (get the container id with `sudo docker ps`):
    sudo docker exec -it bb923e379cbd bash

    # Retrieving the IP of a container
    # (I need this since my containers communicate via ssh):
    sudo docker ps
    sudo docker inspect --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' 5395cdd22c44

    # Removing the processes from all nodes:
    sudo docker service rm master_node
    # This command should now say `no process`:
    sudo docker service inspect master_node
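Once the swarm is up, an alternative to `docker service create` is an attachable overlay network (available since Docker 1.13): containers started on either host and attached to that network can reach each other directly by container name. A minimal sketch, reusing the image from the question (`my-overlay` is a name I made up):

    # On the manager: create an attachable overlay network.
    sudo docker network create --driver overlay --attachable my-overlay

    # On Computer B (manager), start one container attached to it:
    sudo docker run -it --rm --network my-overlay --name master_node branislava/ubuntu1504java8:v1

    # On Computer A (worker), start the other:
    sudo docker run -it --rm --network my-overlay --name slave_node branislava/ubuntu1504java8:v1

    # Inside slave_node, the other container now resolves by name:
    ping master_node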

Hopefully someone will find this useful.