I would like to run multiple Kubernetes services and use the externalIPs field for those services to bind a specific service to a specific IP.
I have one VM which has three interfaces:
Internal interface (eth0)
External interface (eth1)
External interface (eth2)
I've already added iproute2 tables/routes/rules for the second and third interfaces (eth1 and eth2), which ensure that return traffic is routed back via the correct interface.
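Roughly, the rules look like this (the table ID and addresses below are placeholders, not my real config; eth1 has an analogous set):
# Return traffic from eth2's address goes out through its own routing table
ip route add default via 198.51.100.1 dev eth2 table 200
ip rule add from 198.51.100.10/32 table 200
ip route flush cache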
As long as kubelet/kube-proxy is not running, everything works as expected (e.g. serving some data with nc).
As soon as kubelet/kube-proxy is started, some iptables configuration (I don't know which part) is created that drops the packets.
(At least this is what it looks like in tcpdump.)
If I run only one IP on the node, everything works as expected - so I'm assuming the issue is the second IP and some kind of routing.
Here is the iptables config pre and post starting the kubelet service.
I've anonymised the file and removed stuff which is clearly unrelated - if I've removed too much, please let me know.
https://gist.github.com/Thubo/7421d30288ef72ad480ac830dc19ec47
Does anybody run a similar setup?
How does one need to configure kube-proxy and/or the OS to set up this kind of network?
Any ideas where to proceed for debugging?
I'm running Kubernetes 1.6.4 on CentOS7.
Kube-proxy tries to manage all interfaces it has and, of course, forces some rules (including filtering) in order to provide a service.
If you really want to use multiple interfaces on your servers and keep custom forwarding rules between interfaces at the same time, you can bind all components to the internal interface (eth0 in your case) and manage all other interfaces manually as you like.
To pin the components to one interface, use these CLI args:
For kubelet daemon - --address
For kube-proxy daemon - --bind-address
For kube-api daemon - --bind-address.
But please keep in mind that you will need to use that interface for all intercommunication inside the cluster, and some features, like HostNetwork, will also expose only that interface.
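For example, to bind everything to the internal interface's address (10.0.0.5 below is just a placeholder for eth0's IP; the binary names assume the standard daemons):
kubelet --address=10.0.0.5 ...
kube-proxy --bind-address=10.0.0.5 ...
kube-apiserver --bind-address=10.0.0.5 ...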
Related
I need help with the Bridge to Kubernetes setup in my Linux (WSL) environment.
The debug starts as expected but it doesn't change my /etc/hosts, hence I can't connect to the other services in the cluster.
I believe the issue can be related to not having enough permissions, and I can't find endpointManager running in Linux.
https://learn.microsoft.com/en-us/visualstudio/bridge/overview-bridge-to-kubernetes#additional-configuration
Any idea what this could be related to?
I need to write a bash script that:
-- takes an IP address and a list of ports as standard input,
-- checks whether each port is up or down,
-- if a port is down, restarts the corresponding service via SSH
I got the first two working; however, I am stuck on the last part: determining what service was running on the down port, as I may not know what services the machine is supposed to be running. lsof and netstat are not useful because the service is down.
The assumption is that this script will run on the user's machine to check server status and restart any downed services automagically. It is known that some services may use ports listed in /etc/services for other services (for example, the cPanel customer portal uses 2083, which /etc/services lists as radsec).
Any help is most appreciated, thank you!!
There is no way to determine what nonstandard ports a non-running application may have used. All that you can do is check for services which are not running, and (perhaps) restart those that are not running.
Even doing that runs into problems:
some services may not be running for reasons other than loss of connectivity
some services may not give a useful status when asked if they are running (Apache Tomcat, for instance, seems to come with service scripts which never do more than half the job).
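If you can maintain your own port-to-service mapping, a rough sketch along these lines might work (the host, ports, and service names below are placeholders):
#!/bin/bash
# Rough sketch: check each port and restart the mapped service over ssh if it is down.
# The mapping has to be maintained by hand, since /etc/services cannot be trusted here.
host="$1"
declare -A port_service=( [2083]="cpanel" [80]="httpd" [3306]="mysqld" )  # placeholder mapping
for port in "${!port_service[@]}"; do
  if ! nc -z -w 5 "$host" "$port"; then
    echo "Port $port is down; restarting ${port_service[$port]} on $host"
    ssh "root@$host" "service ${port_service[$port]} restart"
  fi
done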
I have a small company network with the following services/servers:
Jenkins
Stash (Atlassian)
Confluence (Atlassian)
LDAP
Owncloud
zabbix (monitoring)
puppet
and some Java web apps
all running in separate KVM (libvirt) VMs in separate virtual subnets on 2 machines (1 internal, 1 Hetzner root server) with Shorewall in between. I'm thinking about switching to Docker.
But I have two questions:
How can I achieve network security between Docker containers (i.e. I want to prevent OwnCloud from accessing any host in the network except the LDAP host's SSL port)?
Just by using Docker linking? If yes: does Docker really allow access only to linked containers, and no others?
By using kubernetes?
By adding multiple bridging-network-interfaces for each container?
Would you switch all my infrastructure services/servers to Docker, or go with a hybrid solution with just OwnCloud and the Java web apps on Docker?
Regarding the multi-host networking: you're right that Docker links won't work across hosts. With Docker 1.9+ you can use "Docker Networking" as described in their blog post http://blog.docker.com/2015/11/docker-multi-host-networking-ga/
They don't explain how to secure the connections, though. I strongly suggest enabling TLS on your Docker daemons, which should also secure your multi-host network (that's an assumption, I haven't tried it).
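A minimal sketch of what that could look like (the certificate paths are placeholders; on releases before 1.12 the daemon is started as "docker daemon" instead of "dockerd"):
dockerd --tlsverify \
  --tlscacert=/etc/docker/ca.pem \
  --tlscert=/etc/docker/server-cert.pem \
  --tlskey=/etc/docker/server-key.pem \
  -H tcp://0.0.0.0:2376 \
  -H unix:///var/run/docker.sock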
With Kubernetes you're going to add another layer of abstraction, so you'll need to learn to work with the pod and service concepts. That's fine, but might be a bit too much. Keep in mind that you can still decide to use Kubernetes (or alternatives) later, so the first step should be to learn how you can wrap your services in Docker containers.
You won't necessarily have to switch everything to Docker. You should start with Jenkins, the Java apps, or OwnCloud and then get a bit more used to the Docker universe. Jenkins and OwnCloud will give you enough challenges to gain some experience in maintaining containers. Then you can evaluate much better if Docker makes sense in your setup and with your needs to be applied to the other services.
I personally tend to wrap everything in Docker, but only due to one reason: keeping the host clean. If you get to the point where everything runs in Docker you'll have much more freedom to choose where a service can run and you can move containers to other hosts much more easily.
You should also explore the Docker Hub, where you can find ready-to-run solutions, e.g. Atlassian Stash: https://hub.docker.com/r/atlassian/stash/
If you need inspiration for specific applications and how to wrap them in Docker, I recommend having a look at https://github.com/jfrazelle/dockerfiles - you'll find a bunch of good examples there.
You can give containers their own IP from your subnet by creating a network like so:
docker network create \
--driver=bridge \
--subnet=135.181.x.y/28 \
--gateway=135.181.x.y+1 \
network
The gateway is typically the first usable address in the subnet (the network address + 1): for example, if your subnet was 123.123.123.112/28, your gateway would be 123.123.123.113.
Unfortunately I have not yet figured out how to make the containers appear to the public from their own IP; at the moment they appear under the dedicated server's IP. Let me know if you know how I can fix that. I am able to access the container using its IP, though.
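To attach a container to that network with a fixed address, something like this should work (the image and the address are placeholders):
docker run -d --net=network --ip=135.181.x.z nginx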
I am developing an embedded system accessed through a node server running express.js. One of the functions that I'm trying to provide our users is the ability to configure the network interfaces via a web UI/REST call, without the need to drop down to a SSH session.
Here's my question: Is there a programmatic way of setting an interface as DHCP or static? Short of editing /etc/network/interfaces, I haven't been able to google or stackoverflow search a programmatic method. Can anyone recommend a direction and/or best practices for doing this?
p.s., I should mention that as part of my change, I would have the necessary configuration parameters (e.g., address, netmask, gateway) and, of course, I would preface any changes with ifconfig down.
Not really. If you want to modify the network configuration, you'll need to edit the config file and invoke the /etc/init.d/networking script to apply the changes.
If you want to change the active network configuration, you need to exec() the appropriate tools, e.g. ifconfig or dhcpcd.
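For example, the kind of commands you would exec() (the interface name and addresses are placeholders):
# Static configuration (placeholder values)
ifconfig eth0 down
ifconfig eth0 192.168.1.50 netmask 255.255.255.0 up
route add default gw 192.168.1.1 eth0
# Or hand the interface over to DHCP instead
dhcpcd eth0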
How can I connect two containers on different host machines in Docker? I need to use data from MongoDB on one host in a Node.js application on another host. Can anyone give me an example of this?
You could use the ambassador pattern for container linking:
http://docs.docker.com/articles/ambassador_pattern_linking/
Flocker is also addressing this issue, but needs more time for infrastructure setup:
https://docs.clusterhq.com/en/0.3.2/gettingstarted/
You might also want to check out Kontena (http://www.kontena.io). Kontena supports multicast (provided by Weave) and DNS service discovery. Because of DNS discovery you can predict, before the deployment, what addresses each container will get.
Like Flocker, Kontena also needs some time for infrastructure setup: https://github.com/kontena/kontena/tree/master/docs#getting-started
But you will get service scaling and deploy automation as a bonus.
You can connect containers on different hosts by creating an overlay network.
Docker Engine supports multi-host networking out-of-the-box through the overlay network driver.
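A minimal sketch, assuming the hosts are already joined in a swarm so an attachable overlay network can be used (the network and app image names are placeholders):
# On a swarm manager: create an attachable overlay network
docker network create -d overlay --attachable app-net
# On host 1: run MongoDB attached to the overlay network
docker run -d --net=app-net --name mongo mongo
# On host 2: the Node.js app can reach MongoDB via the DNS name "mongo"
docker run -d --net=app-net -e MONGO_URL=mongodb://mongo:27017/mydb my-node-app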
It doesn't matter what machine the other container is on; all you need to do is ensure that the port is exposed on that machine, and then point the container on the other machine at that machine's IP.
Machine 1: Postgres on port 5432, IP 172.25.8.10 (from ifconfig)
Machine 2: Web server on port 80, IP 172.25.8.11 -> point the DB connection to 172.25.8.10:5432
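As a rough sketch of that setup (the image names, credentials, and the DATABASE_URL variable are placeholders):
# Machine 1 (172.25.8.10): publish Postgres on the host
docker run -d -p 5432:5432 -e POSTGRES_PASSWORD=secret postgres
# Machine 2 (172.25.8.11): point the web app at machine 1's IP
docker run -d -p 80:80 -e DATABASE_URL=postgres://user:secret@172.25.8.10:5432/db my-web-app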