Docker: Linking containers on different host machines - Node.js

How can I connect two containers on different host machines in Docker? I need a Node.js application on one host to use data from MongoDB on another host. Can anyone give me an example of this?

You could use the ambassador pattern for container linking:
http://docs.docker.com/articles/ambassador_pattern_linking/
Flocker is also addressing this issue, but needs more time for infrastructure setup:
https://docs.clusterhq.com/en/0.3.2/gettingstarted/

You might also want to check out Kontena (http://www.kontena.io). Kontena supports multicast (provided by Weave) and DNS service discovery. Because of DNS discovery you can predict, before the deploy, what addresses each container will get.
Like Flocker, Kontena also needs some time for infrastructure setup: https://github.com/kontena/kontena/tree/master/docs#getting-started
But you get service scaling and deploy automation as a bonus.

You can connect containers on different hosts by creating an overlay network. Docker Engine supports multi-host networking out of the box through the overlay network driver.
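For illustration, a minimal sketch under swarm mode (the Node.js image name and the join token are placeholders, not from the original answer): an attachable overlay network lets standalone containers on different hosts reach each other by container name.

# On host 1 (manager): initialize swarm mode and create an attachable overlay network
docker swarm init
docker network create --driver overlay --attachable app-net
docker run -d --name mongo --network app-net mongo

# On host 2 (worker): join the swarm with the token printed by "docker swarm init",
# then attach the Node.js app to the same network; "mongo" resolves by name
docker swarm join --token <token> <manager-ip>:2377
docker run -d --name web --network app-net \
  -e MONGO_URL=mongodb://mongo:27017/mydb my-node-app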

It doesn't matter what machine the other container is on; all you need to do is ensure that the port is exposed on that machine, and then point the container on the other machine at that host's IP.
Machine 1: Postgres on port 5432, IP 172.25.8.10 (from ifconfig)
Machine 2: Web server on port 80, IP 172.25.8.11 -> point the DB connection to 172.25.8.10:5432
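A rough sketch of that setup (the web image name and the password are made up): publish the database port on machine 1, then hand machine 1's address to the app on machine 2.

# Machine 1 (172.25.8.10): publish Postgres on the host's port 5432
docker run -d --name db -p 5432:5432 -e POSTGRES_PASSWORD=secret postgres

# Machine 2 (172.25.8.11): point the web container at machine 1's IP
docker run -d --name web -p 80:80 \
  -e DATABASE_URL=postgres://postgres:secret@172.25.8.10:5432/postgres my-web-app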

Related

Kubernetes: Is it possible to mount the host's entire root filesystem into a container and execute its commands?

I have a Kubernetes cluster and need to install the WireGuard kernel module as a DaemonSet-like job on each and every node in the cluster, since the kernel version I have to deal with is pre-5.16.
My question is: is it possible to mount the entire host root filesystem into the container (replacing its own; if so, the container image doesn't really matter, let's choose Ubuntu) and use the host's commands to install the WireGuard kernel module (or generally install anything) from the container?
Why would you mount the root filesystem when you can use the Docker image for WireGuard, or build your own image based on the WireGuard Dockerfile?
There is also a project, kubewg, which helps you manage WireGuard:
kubewg is a Kubernetes controller that allows you to configure and manage WireGuard VPN configuration using a Kubernetes API server.
It introduces the following CustomResourceDefinition resources:
Network: Represents a WireGuard VPN network.
Peer: Represents a single peer in a Network. Each peer will be allocated an address in the network's subnet.
RouteBinding: Represents additional route configuration that should be used by all members of the VPN network.
There is also Wormhole, a WireGuard-based overlay network CNI plugin for Kubernetes.
Wormhole is a simple CNI plugin designed to create an encrypted overlay network for Kubernetes clusters.
WireGuard is a fascinating fast, modern, secure VPN tunnel that has been gaining significant praise from security experts and is currently proposed for inclusion within the Linux kernel.
Wormhole uses WireGuard to create a simple and secure high-performance encrypted overlay network for Kubernetes clusters that is easy to manage and troubleshoot.
Wormhole does not implement network policy; instead we recommend using Calico or kube-router as network policy controllers.
Although it's quite dangerous and you really have to know what you're doing, it is possible with a privileged container. You need to add the following to your manifest:
securityContext:
  privileged: true
For plain Docker, the equivalent is the --privileged flag.
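A hedged sketch of both (all names are made up, and a Debian-based host is assumed so that the host's apt works inside the chroot). The plain Docker variant:

docker run --privileged -v /:/host ubuntu \
  chroot /host sh -c "apt-get update && apt-get install -y wireguard"

And a DaemonSet that does the same on every node:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: wireguard-installer
spec:
  selector:
    matchLabels:
      name: wireguard-installer
  template:
    metadata:
      labels:
        name: wireguard-installer
    spec:
      hostPID: true
      containers:
      - name: installer
        image: ubuntu:22.04
        securityContext:
          privileged: true
        # chroot into the host's root filesystem and run the host's own package manager
        command: ["chroot", "/host", "sh", "-c",
                  "apt-get update && apt-get install -y wireguard && sleep infinity"]
        volumeMounts:
        - name: host-root
          mountPath: /host
      volumes:
      - name: host-root
        hostPath:
          path: /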

Connecting CentOS VMs to each other for K8s

I'm trying to set up Kubernetes on my CentOS VMs using VirtualBox. I prefer to use the kubeadm method, so that I can join slave nodes with a join token.
My issue is that I think I'm lacking understanding of how to connect my VMs to one another beforehand. This is the resource I am using for the Kubernetes installation:
https://www.profiq.com/kubernetes-cluster-setup-using-virtual-machines/
When I create VMs and run ifconfig, they all have the same IPs listed, even if they are new VMs and not just a copy of the original. I must be doing something wrong.
Anyway, I'm just wondering if anyone would be so kind as to give me some steps to get my VMs talking to each other, just to be sure I'm doing it correctly. I'm following the article I posted and can ping each VM from the other, but then I ran ifconfig and, since each machine has the same 10.0.2.15 IP, I feel like it's just pinging itself and not the master from the slave, etc.
Did you perform the step after the cloning, and before you load Kubernetes, to change the IP addresses of the 2nd and 3rd VMs? (The 10.0.2.15 address you see on every VM belongs to the VirtualBox NAT adapter, which is identical on each VM; it's the host-only network that gives each VM a distinct address.)
From the instructions you are following, I see:
Now create linked clone machines from the kubemaster machine created before. Once you're done, boot into each machine and change the following things to match the infrastructure:
Set IP address 192.168.99.21 (or .22 for the second slave) for the host-only network.
Set hostname: hostnamectl set-hostname kubeslave1 (or kubeslave2 for the second slave). Everything else is already configured.
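A sketch of those two steps on the first slave (the connection name enp0s8 is an assumption; check yours with nmcli connection show):

sudo hostnamectl set-hostname kubeslave1
sudo nmcli connection modify enp0s8 ipv4.method manual ipv4.addresses 192.168.99.21/24
sudo nmcli connection up enp0s8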

Making multiple Docker Machines accessible across the local network (Linux & Mac)

I know there are several questions similar to this, but as far as I can see there isn't an answer for a setup that I can get to work, and as far as documentation goes I'm a bit lost.
My goal is to set up a Linux development server on the local network which I can run multiple Docker machines/containers on for each of our projects.
Ideally, I would create a docker-machine on the dev box and then be able to access it from any of my local network machines. I can run Docker on the Linux box directly and access it by publishing the ports, but I want to run multiple machines with different IP addresses so that we can have multiple VMs running (multiple projects).
I've looked at Docker Swarm and overlay networks and just haven't been able to find a single tutorial or set of instructions to get this sort of setup running.
So I have a dev box at 192.168.0.101 with docker-machine on it. I want to create a new machine, run nginx on it, and then access nginx from another machine on the local network, i.e. http://192.168.99.1/, then set up another and access that too at, say, http://192.168.99.2/.
If anyone has managed to do this I'd be interested to know how.
One way I've been thinking about doing it is running nginx on the local host on the dev box and setting up config rules to proxy to the local machines; I'm unsure how well this would work (it works for web servers, but what if I want to SSH or bash into one of those machines, or if one has a MySQL container I want to connect to?).
Have you considered running your Docker machines inside LXD containers?
Stéphane Graber's site has a lot of relevant information:
https://stgraber.org/category/lxd/
The way I resolved this is by using a NAT on the Linux box and then assigning a different IP to each machine. I followed the instructions here: http://blog.oddbit.com/2014/08/11/four-ways-to-connect-a-docker/ which finally got me to a point where I could share multiple Docker machines using the same ports (80) on different IPs.
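For the record, a hedged sketch of that NAT approach (the addresses, interface name, and container IPs are examples): add one extra address on the LAN interface per "machine" and DNAT it to the corresponding container.

# Give the dev box a second LAN address for the first project
sudo ip addr add 192.168.0.201/24 dev eth0

# Forward HTTP traffic arriving on that address to a container (e.g. 172.17.0.2)
sudo iptables -t nat -A PREROUTING -d 192.168.0.201 -p tcp --dport 80 \
  -j DNAT --to-destination 172.17.0.2:80

# Repeat with 192.168.0.202 -> 172.17.0.3 for the next project; the same works
# for port 22 or 3306 if you need SSH or MySQL access to a specific container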

Internal infrastructure with docker

I have a small company network with the following services/servers:
Jenkins
Stash (Atlassian)
Confluence (Atlassian)
LDAP
Owncloud
Zabbix (monitoring)
Puppet
and some Java web apps
all running in separate KVM (libvirt) VMs on separate virtual subnets on 2 machines (1 internal, 1 Hetzner root server) with Shorewall in between. I'm thinking about switching to Docker.
But I have two questions:
How can I achieve network security between Docker containers (i.e. I want to prevent OwnCloud from accessing any host in the network except the LDAP host's SSL port)?
Just by using Docker linking? If yes: does Docker really allow access only to linked containers, and no others?
By using Kubernetes?
By adding multiple bridging network interfaces for each container?
Would you switch all my infra services/servers to Docker, or use a hybrid solution with just OwnCloud and the Java web apps on Docker?
Regarding the multi-host networking: you're right that Docker links won't work across hosts. With Docker 1.9+ you can use "Docker Networking" as described in their blog post: http://blog.docker.com/2015/11/docker-multi-host-networking-ga/
They don't explain how to secure the connections, though. I strongly suggest enabling TLS on your Docker daemons, which should also secure your multi-host network (that's an assumption, I haven't tried it).
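For reference, enabling TLS verification on the daemon looks roughly like this (the certificate files have to be generated beforehand; ca.pem, server-cert.pem, and server-key.pem are the conventional names from the Docker documentation):

dockerd --tlsverify \
  --tlscacert=ca.pem \
  --tlscert=server-cert.pem \
  --tlskey=server-key.pem \
  -H tcp://0.0.0.0:2376 \
  -H unix:///var/run/docker.sock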
With Kubernetes you're going to add another layer of abstraction, so you'll need to learn to work with the pods and services concepts. That's fine, but it might be a bit too much. Keep in mind that you can still decide to use Kubernetes (or alternatives) later, so the first step should be to learn how to wrap your services in Docker containers.
You won't necessarily have to switch everything to Docker. You should start with Jenkins, the Java apps, or OwnCloud, and then get a bit more used to the Docker universe. Jenkins and OwnCloud will give you enough challenges to gain some experience in maintaining containers. Then you can evaluate much better whether Docker makes sense in your setup and with your needs for the other services.
I personally tend to wrap everything in Docker, for one reason: keeping the host clean. If you get to the point where everything runs in Docker, you'll have much more freedom to choose where a service can run, and you can move containers to other hosts much more easily.
You should also explore the Docker Hub, where you can find ready-to-run solutions, e.g. Atlassian Stash: https://hub.docker.com/r/atlassian/stash/
If you need inspiration for special applications and how to wrap them in Docker, I recommend having a look at https://github.com/jfrazelle/dockerfiles - you'll find a bunch of good examples there.
You can give containers their own IP from your subnet by creating a network like so:
docker network create \
  --driver=bridge \
  --subnet=135.181.x.y/28 \
  --gateway=135.181.x.y+1 \
  network
Your gateway is the subnet's base address + 1, so if my subnet started at 123.123.123.123 then my gateway should be 123.123.123.124.
Unfortunately I have not yet figured out how to make the containers appear to the public under their own IPs; at the moment they appear under the dedicated server's IP. Let me know if you know how to fix that. I am able to access the containers by their IPs, though.
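To make the example concrete (with made-up documentation addresses instead of the 135.181.x.y placeholders), you can also pin a container to a fixed address on such a network with --ip:

docker network create \
  --driver=bridge \
  --subnet=203.0.113.16/28 \
  --gateway=203.0.113.17 \
  pubnet

docker run -d --name owncloud --network pubnet --ip 203.0.113.18 owncloud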

How do I provide dynamic host names for all docker containers?

Currently, I have a single machine on which I'm starting to run Docker images. What I want to do is have all containers accessible through a hostname based upon the container name. So, if I have containers C1 and C2, and the host for the Docker server is mydocker.local, then c1.mydocker.local will point to container C1, and if I were to run C3, it would become available as c3.mydocker.local.
docker-dns seems similar to what I'm trying to do, but the project hasn't been updated for 7 months and the documentation was not enough for me to get it running.
This seems like a very common use case, but I have not been able to come up with the right Google query magic to find anything.
You can try docker-gen to automatically update your host's hosts file, or create a DNS service such as dnsmasq.
I also found a Docker image called dns-gen, which combines docker-gen and dnsmasq and might solve your problem.
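A minimal sketch of the dnsmasq half (the host IP and config path are assumptions): one wildcard rule resolves every *.mydocker.local name to the Docker host, and something like docker-gen can then route each request to the right container.

# Resolve every *.mydocker.local name to the Docker host
echo 'address=/.mydocker.local/192.168.0.50' | sudo tee /etc/dnsmasq.d/10-docker.conf
sudo systemctl restart dnsmasq

# Verify that any container-name subdomain now resolves
dig +short c1.mydocker.local @127.0.0.1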
You should use explicit linking if containers are going to depend on each other. This will update the /etc/hosts file inside the container to add IPs for each of the linked containers.
If you want something more dynamic, something like Rancher will give you a complete virtual private network in which each of the containers has its own IP. Rancher also supports services, which will maintain DNS for all running containers. A bonus of using Rancher is that it will work in a multi-host scenario as well.
