When I run a Docker host in swarm mode, an ingress network is created automatically. My question is: can I create a second ingress network with some command, i.e. can I create multiple ingress networks?
The ingress network is used internally by Docker Swarm to route external traffic on a published port to your services running inside the cluster. You do not configure your services to use this network directly; you just publish a port and Docker handles the rest.
Communication between your services needs to be done with a user-defined overlay network. With a compose file and a docker stack deploy command, this is done by default. Without a compose file, you would create a network with the overlay driver, e.g. docker network create -d overlay appnet, and then connect your services to that network, as shown in the sketch below. To isolate services from each other, you would add them to separate overlay networks (note that the ingress network does not allow communication between services; it is strictly for north/south traffic, in networking terms).
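For example, a minimal sketch of that pattern (the network and service names here are made up for illustration):
# user-defined overlay network for service-to-service traffic
docker network create -d overlay appnet
# the published port 8080 still goes through the ingress network
docker service create --name api --network appnet --publish 8080:80 nginx
docker service create --name worker --network appnet alpine sleep infinity
The two services can now reach each other by name (api, worker) over appnet, while the ingress network only handles the published port.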
I have an OPC UA server in a Docker container. The server exposes a TCP endpoint with the binary opc.tcp protocol. What are possible methods I can use to expose non-HTTP endpoints in Azure? Thank you.
The question How can I host a TCP Listener in Azure? suggested a WCF workaround, but the server is not a WCF application.
If it is Docker based but not HTTP, then Microsoft suggests two possible solutions:
Azure Container Instances - deploy a single Docker instance via the Azure portal, or deploy a multi-container instance as a container group via the Azure CLI (a sketch follows this list). For multi-container instances you have limits on CPU and memory, as everything runs on the same "server", so scaling could be an issue. Adding a static IP is possible and is described in Configure a single public IP address for outbound and inbound traffic to a container group.
Use an AKS/Kubernetes cluster in Azure.
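A minimal ACI sketch for exposing a raw TCP port (the resource group, container name, image and port 4840 are placeholders; the flags are those of az container create, so verify them against your CLI version):
# single container instance with a public IP and a plain TCP port
az container create \
  --resource-group my-rg \
  --name opcua-server \
  --image myregistry.azurecr.io/opcua:latest \
  --os-type Linux \
  --ports 4840 \
  --protocol TCP \
  --ip-address Public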
I have a Windows container which should access an external database on a VM (that is, not in a container; let's say VM1), so I would define the l2bridge network driver for it in order to use the same virtual network.
docker network create -d "l2bridge" --subnet 10.244.0.0/24 --gateway 10.244.0.1 \
  -o com.docker.network.windowsshim.vlanid=7 \
  -o com.docker.network.windowsshim.dnsservers="10.244.0.7" my_transparent
So I suppose we need to stick with this.
But now I also need to make my container accessible from outside on port 9000, from other containers as well as from other VMs. I suppose this has to be done based on its name (hostname), since its IP will change after each restart.
How should I make my container accessible from some other virtual machine VM2 - should I make any modifications to the network configuration, or do I just need to make sure they are both using the same DNS server?
Of course I will expose the port, but should I do any additional network configuration to allow traffic on that specific port? I've read that network traffic is not allowed by default and that Windows may block some things.
I would appreciate help on this. Thanks.
It seems you are not using Azure Container Instances; you are just running the container on a Windows VM. If that is right, the best way to make the container accessible from outside is to run the container without setting a network and just map a container port to a host port. The container is then accessible from outside on the exposed port. Here is the example command:
docker run --rm -d -p host_port:container_port --name container_name image
Then you can access the container from outside through the public IP of the VM and the host port.
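For the port 9000 case from the question, a concrete (hypothetical image/name) version of that command would be:
# map host port 9000 to the container's port 9000
docker run --rm -d -p 9000:9000 --name myapp myimage
Other containers and VMs can then reach it at <VM IP>:9000, provided the host firewall and any Azure NSG allow inbound traffic on that port.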
Update:
OK, if you use ACI to deploy your containers via docker-compose, then you need to know that ACI does not support Windows containers in a VNet. This means Windows containers cannot be created with a private IP in the VNet.
And if you use Linux containers, then you can deploy the containers in a VNet, but you need to follow the steps here. The containers then have private IPs and are accessible within the VNet through those private IPs.
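A hedged CLI sketch of a VNet-integrated Linux container (the resource group, VNet and subnet names are placeholders; az container create accepts --vnet/--subnet for this, but check the linked doc for the prerequisites):
az container create \
  --resource-group my-rg \
  --name my-linux-app \
  --image nginx \
  --vnet my-vnet \
  --subnet aci-subnet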
I'm starting with K8s. I installed Debian 10 VMs on Azure (1 master node & 2 workers).
I installed the master node with this doc:
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/install-kubeadm/
I installed Calico with this one :
https://docs.projectcalico.org/getting-started/kubernetes/installation/calico#installing-with-the-kubernetes-api-datastore50-nodes-or-less
I created a simple nginx deployment:
kubectl run nginx --replicas=2 --image=nginx
I have the following pods (sazultk8s1/2 are the worker nodes):
root#itf-infra-sazultk8s0-vm:~# kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE
nginx-6db489d4b7-mzmnq 1/1 Running 0 12s 192.168.47.18 itf-infra-sazultk8s2-vm
nginx-6db489d4b7-sgdz7 1/1 Running 0 12s 192.168.247.115 itf-infra-sazultk8s1-vm
From the master node I can't curl these nginx pods:
root#itf-infra-sazultk8s0-vm:~# curl 192.168.47.18 --connect-timeout 5
curl: (28) Connection timed out after 5001 milliseconds
root#itf-infra-sazultk8s0-vm:~# curl 192.168.247.115 --connect-timeout 5
curl: (28) Connection timed out after 5000 milliseconds
I tried from a simple busybox image:
kubectl run access --rm -ti --image busybox /bin/sh
/ # ifconfig eth0 | grep -i inet
inet addr:192.168.247.116 Bcast:0.0.0.0 Mask:255.255.255.255
/ # wget --timeout 5 192.168.247.115
Connecting to 192.168.247.115 (192.168.247.115:80)
saving to 'index.html'
index.html 100% |********************************************************************************************************| 612 0:00:00 ETA
'index.html' saved
/ # wget --timeout 5 192.168.47.18
Connecting to 192.168.47.18 (192.168.47.18:80)
wget: download timed out
From a scratch install:
can a pod ping a pod on another host?
is it possible to curl from the master node to a pod on a worker node?
does Azure apply restrictions that prevent k8s from working properly?
It took me 1 week to solve it.
From the master node, you want to ping/curl Pods located on worker nodes. These Pods are part of a Deployment, itself exposed through a Service.
There are some subtleties in Azure networking which keep this from working out of the box with a default Calico installation.
Steps to make Calico work on Azure
In Kubernetes, install Calico without a networking backend.
In Azure, enable IP forwarding on each host.
In Azure, create UDRs (User Defined Routes).
1. Kubernetes, Install Calico without a networking backend
A) Disable Bird
By default, calico.yaml is configured to use bird as the networking backend; you have to set it to none.
Official installation step: https://docs.projectcalico.org/getting-started/kubernetes/self-managed-onprem/onpremises
Before running kubectl apply -f calico.yaml, edit the file.
Search for the variable CALICO_NETWORKING_BACKEND.
We see that the value is taken from a ConfigMap.
Edit the value in the ConfigMap (located at the top of the file) and set it to none instead of the default bird.
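One way to do this from the shell (the ConfigMap key is calico_backend in recent calico.yaml manifests; the key name and manifest URL may differ between Calico versions):
# download the manifest and switch the backend from bird to none
curl -O https://docs.projectcalico.org/manifests/calico.yaml
sed -i 's/calico_backend: "bird"/calico_backend: "none"/' calico.yaml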
B) Remove Bird from the readiness & liveness probes
Given that we have disabled Bird, it should also be removed from the readiness & liveness probes, otherwise the calico-node DaemonSet pods won't start. In the Calico manifest, comment out "- -bird-live" and "- -bird-ready".
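For example, again as a sed sketch against the downloaded calico.yaml (verify the flag names in your manifest version):
# comment out the bird checks in the calico-node readiness/liveness probes
sed -i 's/^\(\s*\)- -bird-live/\1# - -bird-live/' calico.yaml
sed -i 's/^\(\s*\)- -bird-ready/\1# - -bird-ready/' calico.yaml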
You are done here; you can apply the file: kubectl apply -f calico.yaml
2. Azure, Enable IP forwarding on each host
For each VM in Azure:
Click on it > Networking > click on the Network Interface you have.
Click on IP Configurations
Set IP forwarding to Enabled.
Repeat for each VM, and you are done.
Note: as per the Azure doc, IP forwarding enables the virtual machine a network interface is attached to:
Receive network traffic not destined for one of the IP addresses assigned to any of the IP configurations assigned to the network interface.
Send network traffic with a different source IP address than the one assigned to one of a network interface's IP configurations.
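The same setting can also be applied from the CLI, roughly like this (resource group and NIC names are placeholders; repeat for each node's NIC):
az network nic update \
  --resource-group my-rg \
  --name sazultk8s0-nic \
  --ip-forwarding true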
3. Azure, Create UDR (User Defined Routes)
Next, you have to create UDRs on your Azure subnet, so that Azure routes traffic destined for the Pod subnet Calico created on a target host to the IP of that host itself. Without these routes, Azure does not know what to do with traffic aimed at the Calico subnets.
Once the traffic reaches the target node, the node knows how to route it to its underlying Pods.
First, identify the subnet created by Calico on each node.
kubectl get ipamblocks.crd.projectcalico.org \
  -o jsonpath="{range .items[*]}{'podNetwork: '}{.spec.cidr}{'\t NodeIP: '}{.spec.affinity}{'\n'}{end}"
On Azure, follow the documentation on how to 'Create a route table', 'Add routes to the table', and 'Associate the route table to a subnet' (just scroll the doc; the sections are one below the other).
The final result is a route table associated with the node subnet, with one route per node mapping that node's Pod CIDR to the node's private IP (the original answer showed this in a screenshot, not reproduced here).
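A CLI sketch of the same result (resource names, the Pod CIDR and the node IP below are placeholders following the pattern above; repeat the route creation once per node):
# create the route table
az network route-table create --resource-group my-rg --name calico-routes
# one route per node: that node's Pod CIDR -> that node's private IP
az network route-table route create \
  --resource-group my-rg --route-table-name calico-routes \
  --name to-sazultk8s1 \
  --address-prefix 192.168.247.64/26 \
  --next-hop-type VirtualAppliance \
  --next-hop-ip-address 10.0.0.5
# associate the route table with the subnet the VMs live in
az network vnet subnet update \
  --resource-group my-rg --vnet-name my-vnet --name my-subnet \
  --route-table calico-routes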
You are done! You should now be able to ping/curl your Pods located on other nodes.
Reference Links
All the reference links explaining the subtleties of Azure networking, and the different ways to use Calico with Azure (Network+NetworkPolicy, or NetworkPolicy only).
In particular, there are 3 ways to make Calico work on Azure:
The one we just saw, where the routes are managed by the user. This could be called "user-managed networking".
Using the Azure CNI IPAM plugin. Here we could say "Azure-managed networking": Azure allocates each Pod an IP inside the Azure subnet, so Azure knows how to route the traffic.
Calico in VXLAN mode. Here Calico wraps each packet in another packet; the wrapper only contains host IPs, so Azure knows how to route them. When it reaches the target node, Calico unwraps the packet to discover the real target IP, which is a Pod IP located in the Calico subnet.
The documentation below explains the trade-offs of each setup, in particular the YouTube video.
Youtube (9 min), Kubernetes networking on Azure
Calico-Azure: official site and Git
Customizing the Calico manifest
Vocabulary:
CNI = Container network interface
IPAM = IP address management (to allocate IP addresses)
can a pod ping a pod on another host?
As per the Kubernetes networking model, yes, as long as you have a CNI provider installed.
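A quick way to check, reusing the throwaway busybox pattern from the question (the Pod IP is one of those listed above):
kubectl run ping-test --rm -ti --image busybox -- ping -c 3 192.168.47.18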
is it possible to curl from the master node to a pod on a worker node?
You need to create either a NodePort or LoadBalancer type Service to access your pods from outside the cluster and to access pods from the nodes.
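For example, exposing the nginx deployment from the question as a NodePort Service (Kubernetes picks the node port from its default range unless you specify one):
kubectl expose deployment nginx --type=NodePort --port=80
# look up the allocated node port, then curl <any node IP>:<nodePort>
kubectl get service nginx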
does Azure apply restrictions that prevent k8s from working properly?
There may be firewalls restricting traffic between VMs.
I am using Docker Swarm to deploy an on-premise third-party application. The machine I am deploying on is running RHEL 7.6 and has two network interfaces. The users will interact with the application from eth0, but internal communication with their system must use eth1 or the connection will be blocked by their firewalls. My application requires some of my services to establish connections to internal services in their network.
I created my swarm using:
$ docker swarm init --advertise-addr x.x.x.x
where x.x.x.x is the eth0 inet address. This works for incoming user traffic to the service. However, when I try to establish connections to another service, the connection times out, blocked by the firewall.
Outside of docker, on the machine, I can run:
ssh -b y.y.y.y user@server
where y.y.y.y is the eth1 inet address, and it works. When I run the same command in my Docker Swarm container I get this error:
bind: y.y.y.y: Cannot assign requested address
Is there some way I can use multiple network interfaces with docker swarm and specify which one is used within containers? I couldn't find much documentation on this. Do I need to set up some sort of proxy?
By default, Docker creates a bridged network from the network interfaces on your machine and then attaches each container instance to it via a separate virtual interface internally. So you would not be able to bind to eth1 directly as your Docker container has a different interface.
See https://docs.docker.com/v17.09/engine/userguide/networking/ for a general guide and https://docs.docker.com/v17.09/engine/userguide/networking/default_network/custom-docker0/ for custom bridged networks with multiple physical interfaces.
If I have a network setup in Docker as follows, where MQTT and Node.js are two separate Docker containers (the original question included a diagram; channel A is the link between the two containers):
Do I also have to use TLS to secure channel A? Do I even have to secure channel A? I guess not, as it is just a container-to-container channel. Am I correct?
I am a newbie as far as Docker is concerned, but I've read that docker0 allows containers to intercommunicate but prevents the outside world from connecting to containers until a port is mapped from the host to a container.
It's not clear from the question whether the Node.js app and the MQTT broker are in the same Docker container or in 2 separate containers, but...
The interlink between 2 Docker containers on a port not mapped to the host runs over the internal virtual network; the only way to sniff that traffic is from the host machine, so as long as the host machine is secure that traffic should be secure without the need to run an SSL/TLS listener.
If both the Node.js app and the broker are within the same Docker container, then they can communicate over localhost, so the traffic does not even cross the virtual network interconnect; again, no need to add SSL/TLS.
I wouldn't worry about the traffic unless the host system has potential hostile containers or users... but sometimes my imagination fails me.
You can use the deprecated --link flag to keep the port from being mapped to the host (IIUC):
docker run --name mqtt mqtt:latest & docker run --name nodejs --link mqtt nodejs:latest
Again, IIUC, this creates a private network between the two containers...
Evidence of that: netstat -an | grep EST does not show connections between containers that are --link'd this way, even if the target port is open to the world.
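Since --link is deprecated, the usual alternative is a user-defined bridge network; a sketch using the image names from the example above (which are themselves placeholders):
docker network create mqtt-net
docker run -d --name mqtt --network mqtt-net mqtt:latest
docker run -d --name nodejs --network mqtt-net nodejs:latest
# the nodejs container reaches the broker at the hostname "mqtt" (e.g. mqtt:1883); no port is published to the host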