I have two containers: a webapp and a database.
The webapp is on 10.10.1.10/24, on a user-defined network named mnet that uses the macvlan driver:
link/ether 02:42:0a:84:01:05
inet 10.10.1.10/24 scope global eth0
The DB container is on inet 10.10.11.2/24 scope global eth0, on a user-defined network named int that uses the bridge driver.
From the db container I can ping the gateway of mnet (10.10.10.1) and all other machines on the local network, but not the webapp (10.10.10.5) hosted in Docker.
I want to link the webapp with the db over this network, but I have not been able to establish a connection to the db, or vice versa.
I can link both if I put the db container on the same network as the webapp.
The reason I don't want to put them on the same network is that mnet is integrated with our LAN, so I want to prevent LAN users from accessing the db container.
Environment: Production
Docker version: 17.03.1-ce
OS: RancherOS
Is there any workaround, reason, or solution for this problem?
A container can be on multiple networks. Create a new user-defined network and attach both the DB and the webapp to it. When they talk to each other they will use this new network, while other traffic continues to use your existing networks.
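A minimal sketch of that setup, assuming the containers are named webapp and db and that the network name backend and the subnet 172.28.0.0/24 are free to use in your environment:

$ docker network create --driver bridge --subnet 172.28.0.0/24 backend
$ docker network connect backend webapp
$ docker network connect backend db
$ docker exec db ping -c 3 webapp   # containers on a user-defined network resolve each other by name (requires ping in the image)

The webapp stays on mnet for LAN traffic and the db stays off mnet, so LAN users still cannot reach the db directly.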
Currently we have the following scenario:
We have established our network connection from on-premise to the Azure Kubernetes cluster (a private cluster!) without any problems.
Ports which are routed and allowed:
TCP 80
TCP 443
So far, we are in a development environment and are testing different configurations.
To set up our AKS cluster, we need to set the virtual network (via CNI) and the Service CIDR. We have set the following configuration (just an example):
Virtual Network: 10.2.0.0/21
Service CIDR: 10.2.8.0/23
So our pods get IPs from our virtual network subnet and the services get their IPs from the Service CIDR. So far so good. A route table for the virtual network (the subnet has been associated with the route table) forwards all traffic to our firewall and vice versa; interacting with the virtual network works without any issue. The network team (which is new to Azure cloud stuff as well) has said that connecting to and accessing the Service CIDR should work.
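For reference, this is roughly how that addressing is supplied at cluster creation time with Azure CNI (a sketch only; the resource group, cluster name, DNS service IP, and subnet ID are placeholders, not values from the original setup):

$ az aks create \
    --resource-group aks-rg \
    --name aks-cluster \
    --network-plugin azure \
    --vnet-subnet-id /subscriptions/<sub>/resourceGroups/aks-rg/providers/Microsoft.Network/virtualNetworks/<vnet>/subnets/<subnet> \
    --service-cidr 10.2.8.0/23 \
    --dns-service-ip 10.2.8.10 \
    --enable-private-cluster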
Sadly, we are unable to access the Service CIDR.
For example, let's say we want to deploy the Kubernetes dashboard (web UI) as described at https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/. After applying the YAML, the Kubernetes dashboard pod and service are created successfully. The pod can be pinged and "accessed", but the service, which should expose the dashboard on port 443, cannot be accessed, for example via https://10.2.8.42/.
My workaround so far is to give the Kubernetes dashboard Service (type: ClusterIP) an external IP from the virtual network. This works, but I am not really fond of it, since I then interact with the virtual network rather than the Service CIDR.
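Roughly, that workaround looks like the following (a sketch only; the kubernetes-dashboard service name/namespace and the external IP 10.2.0.50 are assumptions, substitute values from your environment):

$ kubectl -n kubernetes-dashboard patch svc kubernetes-dashboard \
    -p '{"spec":{"externalIPs":["10.2.0.50"]}}'
$ curl -k https://10.2.0.50/   # the dashboard is then reached via the virtual-network IP instead of its ClusterIP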
Is this really the correct way? Any hints on how to make the Service CIDR accessible? What am I missing?
Any help would be appreciated.
I am using Docker Swarm to deploy an on-premise third-party application. The machine I am deploying on runs RHEL 7.6 and has two network interfaces. Users interact with the application through eth0, but internal communication with their systems must use eth1 or the connection will be blocked by their firewalls. My application requires some of my services to establish connections to internal services in their network.
I created my swarm using:
$ docker swarm init --advertise-addr x.x.x.x
Where x.x.x.x is the eth0 inet address. This works for incoming user traffic to the service. However, when I try to establish connections to another service the connection times out, blocked by the firewall.
Outside of docker, on the machine, I can run:
ssh -b y.y.y.y user@server
Where y.y.y.y is the eth1 inet address, and it works. When I run the same command in my Docker Swarm container I get this error:
bind: y.y.y.y: Cannot assign requested address
Is there some way I can use multiple network interfaces with docker swarm and specify which one is used within containers? I couldn't find much documentation on this. Do I need to set up some sort of proxy?
By default, Docker creates a bridged network from the network interfaces on your machine and then attaches each container instance to it via a separate virtual interface internally. So you would not be able to bind to eth1 directly, as your Docker container has a different interface.
See https://docs.docker.com/v17.09/engine/userguide/networking/ for a general guide and https://docs.docker.com/v17.09/engine/userguide/networking/default_network/custom-docker0/ for custom bridged networks with multiple physical interfaces.
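A quick way to see this for yourself (a sketch; <container> is a placeholder for one of your running task containers, and the ip commands assume iproute2 is available in the image):

$ docker exec <container> ip addr show   # inside the container: only lo and a virtual eth interface, not the host's eth0/eth1
$ ip addr show                           # on the host: eth0, eth1, plus Docker's bridge interfaces (docker0, docker_gwbridge)
$ docker network inspect bridge          # shows the default bridge subnet and any attached containers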
I was trying to assign IP addresses to Docker containers from an Azure VNet using the CNI plugin. I was able to give the containers IP addresses, but they were not communicating with each other.
I followed this blog, which is based on this Microsoft document.
I created two containers using commands:
sudo ./docker-run.sh alpine1 default alpine
and
sudo ./docker-run.sh alpine2 default alpine
I checked that the IP of alpine1 is 10.10.3.59 and the IP of alpine2 is 10.10.3.61, which are the IPs I created inside a network interface as described in the docs above. So they did receive IPs from a subnet inside the VNet, but when I ping alpine1 from alpine2 with ping 10.10.3.59, it doesn't work. Am I missing something here, or do I have to do some other configuration after this?
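A few checks that can help narrow this down (just a diagnostic sketch; the container names are the ones created above, and the exact network name depends on how the CNI plugin was set up):

$ docker network ls                       # confirm both containers are attached to the same CNI-managed network
$ docker exec alpine2 ip addr show eth0   # confirm the assigned VNet IP is actually on the interface
$ docker exec alpine2 ip route            # confirm there is a route covering the 10.10.3.x subnet
$ docker exec alpine2 ping -c 3 10.10.3.59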
I would like to understand the purpose of the Docker bridge address in a Kubernetes cluster. Is it related to container-to-container communication, or is it something else?
The Docker bridge network address represents the default docker0 bridge network address present in all Docker installations.
While docker0 bridge is not used by AKS clusters or the pods themselves, you must set this address to continue to support scenarios such as docker build within the AKS cluster.
It is required to select a CIDR for the Docker bridge network address because otherwise Docker will pick a subnet automatically which could conflict with other CIDRs.
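In AKS this value is supplied at cluster creation time via the --docker-bridge-address parameter, for example (a sketch; the resource group and cluster name are placeholders):

$ az aks create --resource-group aks-rg --name aks-cluster \
    --network-plugin azure \
    --docker-bridge-address 172.17.0.1/16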
The 172.17.0.0/16 network is the default when you install Docker. Docker picks the first available network and most of the time ends up choosing 172.17.0.0/16, hence a local route towards the docker0 bridge is added for it. Please refer to the link below for more explanation: https://github.com/docker/for-aws/issues/117
Regards,
Vinoth
I have 3 NICs deployed on an Azure VM, say eth0, eth1, and eth2, but I configured only eth1 and eth2, not eth0. My network configuration is marked as failed.
Is it necessary to configure eth0 on Azure VM? If yes, why?
It's necessary to configure the primary network interface because the primary NIC is used for communicating with resources over a network connection, and on an Azure Linux virtual machine (VM) the primary interface is eth0. If eth0 isn't configured, the VM is not accessible over a network connection even when other tools indicate the VM is up.
When you attach multiple NICs to an Azure VM, one of them must be the primary.
Azure assigns a default gateway to the first (primary) network interface attached to the virtual machine. Azure does not assign a default gateway to additional (secondary) network interfaces attached to a virtual machine. Therefore, you are unable to communicate with resources outside the subnet that a secondary network interface is in, by default.
Reference: https://learn.microsoft.com/en-us/azure/virtual-machines/windows/multiple-nics#configure-guest-os-for-multiple-nics
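If a secondary NIC does need to reach beyond its own subnet, the guest OS can be given a per-interface routing table, as the referenced document describes. A rough sketch for eth1 (the gateway 10.0.2.1, the address 10.0.2.4, and the table name/number eth1rt/200 are placeholders for your environment):

$ echo "200 eth1rt" | sudo tee -a /etc/iproute2/rt_tables   # register a dedicated routing table for eth1
$ sudo ip route add default via 10.0.2.1 dev eth1 table eth1rt
$ sudo ip rule add from 10.0.2.4/32 table eth1rt             # traffic sourced from eth1's address uses that table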