Bridge to Kubernetes doesn't add entries in /etc/hosts - Linux

I need help with the Bridge to Kubernetes setup in my Linux (WSL) environment.
The debug session starts as expected, but it doesn't change my /etc/hosts, so I can't connect to the other services in the cluster.
I believe the issue may be related to not having enough permissions, and I can't find EndpointManager running in Linux.
https://learn.microsoft.com/en-us/visualstudio/bridge/overview-bridge-to-kubernetes#additional-configuration
Any idea what this could be related to?
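A couple of generic checks from the WSL shell that narrow down the two suspicions above (standard Linux commands, nothing Bridge to Kubernetes-specific; the service name and IP in the last line are purely hypothetical):
ps aux | grep -i endpoint       # is an EndpointManager process running at all?
ls -l /etc/hosts                # is the file writable only by root (the default)?
# a temporary manual workaround needs sudo, e.g.:
# echo "10.0.0.12 my-service" | sudo tee -a /etc/hosts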

Related

Kube-Proxy with multiple external IPs

I would like to run multiple Kubernetes services and use the externalIPs field for those services to bind a specific service to a specific IP.
I have one VM which has three interfaces:
Internal interface (eth0)
External interface (eth1)
External interface (eth2)
I've already added iproute2 tables/routes/rules for interfaces 2 and 3, which ensure that return traffic is routed back via the correct interface.
As long as kubelet/kube-proxy is not running, everything works as expected (e.g. running nc to serve some data).
As soon as kubelet/kube-proxy is started, some iptables configuration (I don't know which) is created, which drops the packets.
(At least this is what it looks like in tcpdump.)
If I run only one IP on the node, everything works as expected - so I'm assuming the issue is the second IP and some kind of routing.
Here is the iptables config before and after starting the kubelet service.
I've anonymised the file and removed stuff which is clearly unrelated - if I've removed too much, please let me know.
https://gist.github.com/Thubo/7421d30288ef72ad480ac830dc19ec47
Does anybody run a similar setup?
How does one need to configure kube-proxy and/or the OS to setup this kind of network?
Any ideas where to proceed for debugging?
I'm running Kubernetes 1.6.4 on CentOS 7.
Kube-proxy tries to manage all interfaces it finds on the host and, of course, enforces some rules (including filtering) to provide each service.
If you really want to use multiple interfaces on your servers and keep custom forwarding rules between interfaces at the same time, you can bind all Kubernetes components to the internal interface (eth0 in your case) and manage all other interfaces manually as you like.
To pin the components to one interface, use these CLI arguments:
For the kubelet daemon - --address
For the kube-proxy daemon - --bind-address
For the kube-apiserver daemon - --bind-address
But please keep in mind that you will then need to use that interface for all intercommunication inside the cluster, and some settings, like HostNetwork, will also expose only that interface.
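A minimal sketch of applying those flags, assuming the internal interface eth0 carries 10.0.0.1 (an example address; the trailing dots stand for whatever other flags your units already pass):
kubelet --address=10.0.0.1 ...
kube-proxy --bind-address=10.0.0.1 ...
kube-apiserver --bind-address=10.0.0.1 ...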

Connecting CentOS VMs to each other for K8s

I'm trying to set up Kubernetes on my CentOS VMs using VirtualBox. I prefer to use the kubeadm method, so that I can join slave nodes with a join token.
My issue is that I think I am lacking understanding of how to connect my VMs to one another beforehand. This is the resource I am using for the Kubernetes installation:
https://www.profiq.com/kubernetes-cluster-setup-using-virtual-machines/
When I create VMs and run ifconfig, they all have the same IPs listed, even if they are new VMs and not just a copy of the original. I must be doing something wrong.
Anyway, I'm just wondering if anyone would be so kind as to give me some steps to get my VMs talking to each other, just to be sure I'm doing it correctly. I'm following the article I posted and can ping each VM from the other, but then I ran ifconfig and, since each machine lists the same 10.0.2.15 IP, I feel like it's just pinging itself and not the master from the slave, etc.
Did you perform the step, after cloning and before installing Kubernetes, to change the IP addresses of the 2nd and 3rd VMs?
From the instructions you are following, I see:
Now create linked clone machines from the kubemaster machine created before. Once you're done, boot into each machine and change the following things to match the infrastructure:
Set IP address 192.168.99.21 (or .22 for the second slave) for the host-only network.
Set the hostname: hostnamectl set-hostname kubeslave1 (or kubeslave2 for the second slave).
Everything else is already configured.
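For what it's worth, the 10.0.2.15 you see on every VM is the VirtualBox NAT adapter, which is the same (and isolated) on each VM; the distinct 192.168.99.x addresses live on the host-only adapter. A rough sketch of the per-clone changes on CentOS 7, assuming the host-only interface and its NetworkManager connection are both named enp0s8 (adjust to your setup):
nmcli con mod enp0s8 ipv4.method manual ipv4.addresses 192.168.99.21/24
nmcli con up enp0s8
hostnamectl set-hostname kubeslave1
ping 192.168.99.21   # run this from the master to verify the host-only link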

Making multiple Docker Machines accessible across local network. Linux & Mac

I know there are several questions similar to this, but as far as I can see there's no answer for a setup that I can get to work, and as far as documentation goes I'm a bit lost.
My goal is to set up a Linux development server on the local network which I can run multiple docker machines/containers on for each of our projects.
Ideally, I would create a docker-machine on the dev box and then be able to access it from any of my local network machines. I can run Docker on the Linux box directly and access containers by publishing their ports, but I want to run multiple machines with different IP addresses so that we can have multiple VMs running (multiple projects).
I've looked at Docker Swarm and overlay networks and just haven't been able to find a single tutorial or set of instructions to get this sort of setup running.
So I have a dev box at 192.168.0.101 with docker-machine on it. I want to create a new machine, run nginx on it, and then access nginx from another machine on the local network, i.e. http://192.168.99.1/, then set up another and access that too at, say, http://192.168.99.2/.
If anyone has managed to do this I'd be interested to know how.
One way I've been thinking about doing it is running nginx on the dev box itself and setting up config rules to proxy to the local machines. I'm unsure how well this would work (it works for web servers, but what if I want to ssh or bash into one of those machines, or if one has a MySQL container I want to connect to).
Have you considered running your docker machines inside LXD containers?
Stéphane Graber's site has a lot of relevant information:
https://stgraber.org/category/lxd/
The way that I resolved this is by using NAT on the Linux box and then assigning a different IP to each machine. I followed the instructions here: http://blog.oddbit.com/2014/08/11/four-ways-to-connect-a-docker/ which finally got me to the point of being able to share multiple docker machines on the same ports (80) but different IPs.
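A rough sketch of that NAT idea, in case it saves someone reading the whole article (all addresses and interface names here are examples, not taken from the post):
sudo ip addr add 192.168.0.102/24 dev eth0                  # extra LAN address on the dev box
sudo sysctl -w net.ipv4.ip_forward=1                        # let the box forward traffic
sudo iptables -t nat -A PREROUTING -d 192.168.0.102 -j DNAT --to-destination 192.168.99.101
sudo iptables -t nat -A POSTROUTING -d 192.168.99.101 -j MASQUERADE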

Internal infrastructure with Docker

I have a small company network with the following services/servers:
Jenkins
Stash (Atlassian)
Confluence (Atlassian)
LDAP
Owncloud
zabbix (monitoring)
puppet
and some Java web apps
all running in separate KVM (libvirt) VMs in separate virtual subnets on 2 machines (1 internal, 1 Hetzner root server) with Shorewall in between. I'm thinking about switching to Docker.
But I have two questions:
How can I achieve network security between Docker containers (i.e. I want to prevent OwnCloud from accessing any host in the network except the LDAP host's SSL port)?
Just by using Docker linking? If so, does Docker really only allow access to linked containers and no others?
By using Kubernetes?
By adding multiple bridging network interfaces for each container?
Would you switch all my infrastructure services/servers to Docker, or go for a hybrid solution with just OwnCloud and the Java web apps on Docker?
Regarding the multi-host networking: you're right that Docker links won't work across hosts. With Docker 1.9+ you can use "Docker Networking" as described in their blog post http://blog.docker.com/2015/11/docker-multi-host-networking-ga/
They don't explain how to secure the connections, though. I strongly suggest enabling TLS on your Docker daemons, which should also secure your multi-host network (that's an assumption, I haven't tried it).
With Kubernetes you're going to add another layer of abstraction, so you'll need to learn to work with the pod and service concepts. That's fine, but it might be a bit much. Keep in mind that you can still decide to use Kubernetes (or an alternative) later, so the first step should be to learn how to wrap your services in Docker containers.
You won't necessarily have to switch everything to Docker. You should start with Jenkins, the Java apps, or OwnCloud and then get a bit more used to the Docker universe. Jenkins and OwnCloud will give you enough challenges to gain some experience in maintaining containers. Then you can evaluate much better whether Docker makes sense in your setup and for your needs before applying it to the other services.
I personally tend to wrap everything in Docker, but only for one reason: keeping the host clean. If you get to the point where everything runs in Docker, you'll have much more freedom to choose where a service runs, and you can move containers to other hosts much more easily.
You should also explore Docker Hub, where you can find ready-to-run solutions, e.g. Atlassian Stash: https://hub.docker.com/r/atlassian/stash/
If you need inspiration for special applications and how to wrap them in Docker, I recommend having a look at https://github.com/jfrazelle/dockerfiles - you'll find a bunch of good examples there.
You can give containers their own IP from your subnet by creating a network like so:
docker network create \
--driver=bridge \
--subnet=135.181.x.y/28 \
--gateway=135.181.x.y+1 \
network
Your gateway is the IP of your subnet + 1, so if my subnet was 123.123.123.123 then my gateway should be 123.123.123.124.
Unfortunately I have not yet figured out how to make the containers appear to the public from their own IP; at the moment they appear as the dedicated server's IP. Let me know if you know how I can fix that. I am able to access the containers using their IPs, though.
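To put a container onto that network with a fixed address, something like the following should work (nginx is just an example image, and the address is a placeholder in the same style as above; "network" is the name used in the create command):
docker run -d --name web --network network --ip 135.181.x.y+2 nginx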

Docker: Linking containers on different host machines

How can I connect two containers on different host machines in Docker? I need to use data from MongoDB on one host in a Node.js application on another host. Can anyone give me an example of this?
You could use the ambassador pattern for container linking:
http://docs.docker.com/articles/ambassador_pattern_linking/
Flocker also addresses this issue, but it needs more time for infrastructure setup:
https://docs.clusterhq.com/en/0.3.2/gettingstarted/
You might also want to check out Kontena (http://www.kontena.io). Kontena supports multicast (provided by Weave) and DNS-based service discovery. Thanks to DNS discovery, you can predict before deploying what addresses each container will get.
Like Flocker, Kontena also needs some time for infrastructure setup: https://github.com/kontena/kontena/tree/master/docs#getting-started
But you will get service scaling and deploy automation as a bonus.
You can connect containers from different hosts by creating an overlay network.
Docker Engine supports multi-host networking out-of-the-box through the overlay network driver.
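As a rough sketch with a current Docker Engine, swarm mode makes the overlay setup simpler than the original key-value-store approach; the network name, the application image and the MONGO_URL variable below are illustrative only:
# on host A
docker swarm init
docker network create -d overlay --attachable appnet
docker run -d --name mongo --network appnet mongo

# on host B, after joining the swarm with the token printed by "docker swarm init"
docker swarm join --token <token> <host-A-ip>:2377
docker run -d --name api --network appnet -e MONGO_URL=mongodb://mongo:27017 my-node-app
Inside the attachable overlay, the Node.js container can reach MongoDB simply by the container name "mongo".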
It doesn't matter what machine the other container is on; all you need to do is ensure that the port is exposed on that machine and then point the container on one machine at the IP of the other machine.
Machine 1: Postgres on port 5432, IP 172.25.8.10 (found via ifconfig)
Machine 2: Web server on port 80, IP 172.25.8.11 -> point the DB connection at 172.25.8.10:5432
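A minimal sketch of that example (mywebapp and the DB_HOST/DB_PORT variables are placeholders for whatever your web image actually reads):
# machine 1 (172.25.8.10): publish Postgres on the host
docker run -d --name db -e POSTGRES_PASSWORD=<secret> -p 5432:5432 postgres

# machine 2 (172.25.8.11): point the web container at machine 1's address
docker run -d --name web -p 80:80 -e DB_HOST=172.25.8.10 -e DB_PORT=5432 mywebapp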
