No default network in Azure Docker

I followed this tutorial: https://learn.microsoft.com/en-us/azure/container-instances/tutorial-docker-compose
But when I set up my project, no default network is created.
Anybody know how to solve this?

The default network should be bridge, as can be seen with
$ docker network ls
Whenever you start a container without providing the --network flag, it is connected to the default bridge network. You can also pass the --network bridge flag explicitly in your docker run command.
Alternatively, you can connect a container to a network using
$ docker network connect <network> <container>
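To check which networks a given container is attached to, you can inspect it (a small sketch; mycontainer is a placeholder name):
$ docker inspect --format '{{json .NetworkSettings.Networks}}' mycontainer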

Related

How to ssh into a docker container running on a different network?

I have a docker container running on a local host with private ip 172.17.0.3.
I want this to be publicly accessible over the internet so that anyone in the world can SSH into this Docker container. Is this possible, and if so, how? I am trying to create a small public cloud of Docker container instances in my local network which people from all over the world can access, and I am sitting behind a NAT, which might cause issues.
Any help will be appreciated.
Forward a port from inside the container to the host, for example:
docker run -it --name mydocker -p 8080:80 tensorflow/tensorflow:latest
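For SSH specifically you would publish the container's SSH port instead (a hedged sketch; my-ssh-image is a placeholder for an image that actually runs sshd):
$ docker run -d --name mysshbox -p 2222:22 my-ssh-image
$ ssh -p 2222 user@<public IP of the host>
Since you are behind a NAT, you would also need to forward port 2222 on your router to the Docker host.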

Sharing virtual network with docker container

I am working on a project that requires me to create a virtual CAN network on my host machine
$ sudo modprobe vcan
$ sudo ip link add dev vcan0 type vcan
$ sudo ip link set up vcan0
My question is: how can I share this interface with my Docker container?
If it's of any use, I ran the following command on my host machine: find / -name "vcan0" -print 2>/dev/null
/sys/class/net/vcan0
/sys/devices/virtual/net/vcan0
/proc/sys/net/ipv4/conf/vcan0
/proc/sys/net/ipv4/neigh/vcan0
I can run the Docker container using docker run --rm -it --network=host ... . The only problem is that there is no network isolation between the Docker host and the containers anymore. Is there a way to achieve the above without sharing the host network?
I haven't found a way to share a CAN network interface with a Docker container without --network=host, but there is a possible workaround. You can use a CAN-UDP bridge, like cannelloni or can2udp, to send CAN frames over UDP.
I've used this in the past to connect a physical CAN bus on a remote device to a virtual CAN interface on my laptop. But it should work just as well for a Docker container.
One drawback is that you do have to create a vcan interface in the container, which requires you to run the container in privileged mode.
You can use --cap-add=NET_ADMIN when you run the Docker image. This will allow you to create the interface inside the container:
$ sudo ip link add dev vcan0 type vcan
$ sudo ip link set up vcan0
This assumes, of course, that the vcan driver is loaded on the host.
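Putting it together, a minimal sketch (my-can-image is a placeholder and is assumed to contain the iproute2 tools):
$ docker run --rm -it --cap-add=NET_ADMIN my-can-image bash
Then, inside the container:
$ ip link add dev vcan0 type vcan
$ ip link set up vcan0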
I've written up a blog post that should get you most of the way there. At a high level, you need to create a vxcan link and move one end of it into your Docker container. Then you can forward traffic from your vcan interface to one end of the vxcan pair, and it will be transmitted to the vxcan end inside the container. You'll just need to use the correct kernel headers package, and in the final cangw step you'll need to specify vcan0 instead of can0.
https://www.lagerdata.com/blog/forwarding-can-bus-traffic-to-a-docker-container-using-vxcan-on-raspberry-pi
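In outline, the steps from that post look roughly like this (a hedged sketch, not the post's exact commands; it assumes the vxcan and can-gw kernel modules are available and <container> is your container's name):
$ sudo modprobe vxcan
$ sudo modprobe can-gw
$ DOCKERPID=$(docker inspect -f '{{.State.Pid}}' <container>)
$ sudo ip link add vxcan0 type vxcan peer name vxcan1
$ sudo ip link set vxcan1 netns $DOCKERPID
$ sudo ip link set up vxcan0
$ sudo nsenter -t $DOCKERPID -n ip link set up vxcan1
Then forward frames between vcan0 and the vxcan pair (one cangw rule per direction):
$ sudo cangw -A -s vcan0 -d vxcan0 -e
$ sudo cangw -A -s vxcan0 -d vcan0 -e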

Ports not exposed in Docker swarm mode

I'm trying to set up a very simple service using Docker swarm mode and I have a problem with exposing ports.
I have two machines; let's name them xxx and yyy. When I do a simple
docker run -d -p 9200:9200 -p 9300:9300 elasticsearch:7.4.0
both of them work correctly, and I can go to xxx:9200 to reach an instance of Elasticsearch.
I tried to do the same in swarm mode, so on the xxx machine I did:
docker swarm init --advertise-addr [external IP of xxx machine]
I got the correct token and successfully joined the yyy machine to the swarm.
Then I created a new overlay network using
docker network create -d overlay dockerdemo
and created a service in this swarm using
docker service create --name swarmelasticsearch --network dockerdemo --replicas 2 -p 9200:9200 -p 9300:9300 elasticsearch:7.4.0
The service is created successfully and both machines have running containers with Elasticsearch, but I cannot reach them from outside. When I go to xxx:9200 or yyy:9200 or the public IP of xxx on port 9200, nothing happens. I cannot reach my site. Why? Do I need to do anything more? Both of my machines are Azure VMs with Ubuntu + the latest Docker.
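One thing worth checking first (a hedged diagnostic sketch, not a confirmed fix): verify that the service really published its ports, and remember that the swarm routing mesh needs its own ports (2377/tcp, 7946/tcp+udp, 4789/udp) open between the nodes, and in this case in the Azure network security group as well:
$ docker service ls
$ docker service inspect --format '{{json .Endpoint.Ports}}' swarmelasticsearch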

Multiple IPs on same interface in Docker container

Is it possible to have multiple IPs on eth0 in a Docker container?
I would like to have 5 IPs on the eth0 interface of a Docker container. I am using the "ip" utility. Executing ip address add 172.20.0.200/16 dev eth0 in the container gives "Operation not permitted".
I tried manually logging in to the container as the root user using "sudo exec -u root ..".
I have even tried installing sudo in the container. The result is the same: "Operation not permitted".
I have found the answer. The --cap-add option must be added:
docker run --cap-add=NET_ADMIN image
But as I understand it, Docker knows nothing about these static IPs, since docker network inspect shows only one IP. Hence, think about using a custom network.
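Putting the two together, a minimal sketch (it assumes the BusyBox ip applet in the alpine image, and reuses the address from the question):
$ docker run -d --cap-add=NET_ADMIN --name multi-ip alpine sleep infinity
$ docker exec multi-ip ip address add 172.20.0.200/16 dev eth0
$ docker exec multi-ip ip address add 172.20.0.201/16 dev eth0
$ docker exec multi-ip ip address show eth0
The last command should list the extra addresses, even though docker network inspect will not.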

How to reach docker containers by name instead of IP address?

Is there a way I can reach my Docker containers using names instead of IP addresses?
I've heard of pipework and I've seen some dns and hostname type options for docker, but I still am unable to piece everything together.
Thank you for your time.
I'm not sure if this is helpful, but this is what I've done so far:
installed a Docker container host using docker-machine and the vmwarevsphere driver
started up all the services with docker-compose
I can reach all of the services from any other machine on the network using IP and port
I've added a DNS alias entry to my private network DNS server and it matches the machine name that's used by docker-machine. But the machine always picks up a different IP address when it boots and connects to the network.
I'm just lost as to where to tackle this:
network DNS server
docker-machine hostname
docker container hostname
probably some combination of all of them
I'm probably looking for something similar to this question:
How can let docker use my network router to assign dhcp ip to containers easily instead of pipework?
Any general direction will be awesome...thanks again!
Docker 1.10 has a built-in DNS. If your containers are connected to the same user-defined network (create a network with docker network create my-network and run your container with --net my-network), they can reference each other using the container name. (Docs).
One caveat: if you are using Docker Compose, you know that it adds a prefix to your container names, i.e. <project name>_<service name>_<index>. This makes your container names somewhat more difficult to control, but it might be OK for your use case. You can override the Docker Compose naming functionality by manually setting the container name in your compose template (as sketched below), but then you won't be able to scale with Compose.
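For example, a minimal sketch in today's Compose format (web is a placeholder service):
services:
  web:
    build: .
    container_name: web   # fixed name, reachable as "web", but this service can no longer be scaled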
Create a new bridge network other than docker0, run your containers inside it and you can reference the containers inside that network by their names.
The Docker daemon runs an embedded DNS server to provide automatic service discovery for containers connected to user-defined networks. Name resolution requests from the containers are handled first by the embedded DNS server.
Try this:
docker network create <network name>
docker run --net <network name> --name test busybox nc -l -p 7000
docker run --net <network name> busybox ping test
First, we create a new network. Then, we run a busybox container named test listening on port 7000 (just to keep it running). Finally, we ping the test container by its name and it should work.
EDIT 2018-02-17: Docker may eventually remove the links key from docker-compose, so they suggest using user-defined networks instead, as stated here => https://docs.docker.com/compose/compose-file/#links
Assuming you want to reach the mysql container from the web container of your docker-compose.yml file, such as:
web:
  build: .
  links:
    - mysqlservice
mysqlservice:
  image: mysql
You'll be pleased to know that Docker Compose already adds a mysqlservice domain name (in the web container's /etc/hosts) which points to the mysql container.
Instead of looking up the mysql container's IP address, you can just use the mysqlservice domain name.
If you want to add custom domain names, it's also possible with the extra_hosts parameter.
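For example, a small sketch (the hostname and address here are made up):
web:
  build: .
  extra_hosts:
    - "db.internal:172.20.0.5"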
You might want to try out dnsdock. It looks straightforward and easy(!) to set up. Have a look at http://blog.brunopaz.net/easy-discover-your-docker-containers-with-dnsdock/ and https://github.com/tonistiigi/dnsdock .
If you want an out-of-the-box solution, you might want to check out, for example, Kontena. It comes with network overlay technology from Weave, and this technology is used to create virtual private LAN networks between services. Thanks to that, every service/container can be reached by service_name.kontena.local.
I replaced the --net parameter with the --network parameter and it runs as expected:
docker network create <network name>
docker run --network <network name> --name <container name> <other container options>
docker run --network <network name> --name <container name> <other container options>
If you are using Docker Compose, and your docker-compose.yml file has a top-level services: block (you are not using the obsolete "version 1" file format), then Compose does all of the required setup automatically. The names underneath services: can be directly used as host names.
version: '3.8'
services:
  database:        # <-- "database" is a usable hostname
    image: postgres
  application:     # <-- "application" is a usable hostname
    build: .
    environment:
      PGHOST: database  # <-- use the "database" hostname
Networking in Compose in the Docker documentation describes this setup further.
These host names only work for connections between containers in the same Compose file. If you manually declare networks: then the two containers must have some network in common, but the easiest setup is to not declare networks: at all. These connections will only use the "standard" port (for PostgreSQL, for example, always connect to port 5432); a ports: declaration is not required and is ignored if present.
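A quick way to verify the name resolution (a small sketch; it assumes the application image includes getent, which most glibc-based images do):
$ docker compose up -d
$ docker compose exec application getent hosts database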
