Connecting CentOS VMs to each other for K8s - linux

I'm trying to set up Kubernetes on my CentOS VMs using VirtualBox. I prefer to use the kubeadm method, so that I can join slave nodes with a join token.
My issue is that I think I lack an understanding of how to connect my VMs to one another beforehand. This is the resource I am using for the Kubernetes installation:
https://www.profiq.com/kubernetes-cluster-setup-using-virtual-machines/
When I create VMs and run ifconfig, they all list the same IP, even if they are new VMs and not just copies of the original. I must be doing something wrong.
Anyway, I'm just wondering if anyone would be so kind as to give me some steps to get my VMs talking to each other, just to be sure I'm doing it correctly. I'm following the article I posted and can ping each VM from the other, but when I run ifconfig each machine shows the same 10.0.2.15 IP, so I feel like each VM is just pinging itself rather than the slave reaching the master, etc.

Did you perform the step, after cloning and before you install Kubernetes, of changing the IP addresses of the 2nd and 3rd VMs?
From the instructions you are following, I see:
Now create linked clone machines from the kubemaster machine created before. Once you're done, boot into each machine and change the following to match the infrastructure:
Set IP address 192.168.99.21 (or 22 for the second slave) for the host-only network.
Set the hostname: hostnamectl set-hostname kubeslave1 (or kubeslave2 for the second slave). Everything else is already configured.
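For reference, 10.0.2.15 is the address VirtualBox's NAT adapter hands to every VM, so seeing it on all machines is expected; it's the host-only adapter that must get a unique address per VM. On CentOS the per-clone changes might look like this (a minimal sketch, assuming NetworkManager is in use and the host-only adapter is enp0s8; verify the names with ip addr and nmcli con show):

    # Inside the first slave clone; adjust the address for each VM.
    # The connection name may differ from the device name; check: nmcli con show
    nmcli con mod enp0s8 ipv4.method manual ipv4.addresses 192.168.99.21/24
    nmcli con up enp0s8

    # Give each clone a unique hostname
    hostnamectl set-hostname kubeslave1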

Related

Bridge to Kubernetes doesn't add entries in /etc/hosts

I need help with the Bridge to Kubernetes setup in my Linux (WSL) environment.
The debug session starts as expected, but it doesn't change my /etc/hosts, hence I can't connect to the other services in the cluster.
I believe the issue may be related to not having enough permissions, and I can't find EndpointManager running in Linux.
https://learn.microsoft.com/en-us/visualstudio/bridge/overview-bridge-to-kubernetes#additional-configuration
Any idea what this could be related to?
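A couple of quick checks might narrow it down (hypothetical diagnostics based on the hunches above, not steps from the Bridge to Kubernetes docs):

    # Can the user running the debug session write to /etc/hosts?
    ls -l /etc/hosts

    # Is the EndpointManager helper running at all?
    ps aux | grep -i '[e]ndpointmanager'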

vSphere Cloud-Init not keeping the static IP

I'm getting odd behavior that I don't understand in dealing with vSphere and Terraform. My terraform code provisions a host in vSphere with a static network configuration (network, IP, mask, gateway, etc.), and on the first boot, it seems to be correct. The IP and relevant network settings are applied.
However, on reboot, the network configuration falls back to DHCP, which fails.
I see in the /var/log/cloud-init.log file that on the first boot, it is able to successfully apply the config:
However, after that, on reboot, it reverts to DHCP.
I noticed this issue when trying to bring up Consul on the host, and Consul complained that there wasn't an IPv4 address to bind to. When I rebooted, the IP was gone, so I don't think the address is persisting correctly.
The terraform code works for DHCP-based networks, but for some reason when I apply static configurations it applies them once and then doesn't keep them.
Anyone ever hit a similar issue?
Edit:
Troubleshooting further by reading the cloud-init debug logs:
This is the first boot, which should be the base template.
The second boot, which should be the deployed host receiving the config from terraform -> cloud-init:
...it's a bug in vSphere.
https://kb.vmware.com/s/article/71264
Symptoms
Virtual Machine has cloud-init customization enabled and a static IP
After reboot the virtual machine is configured with DHCP
Cause
This issue occurs because cloud-init considers the virtual machine a new instance. The new instance cannot retrieve data from the datasource when there is no customization, so DatasourceNone is applied, defaulting to DHCP.
Resolution
Currently there is no resolution.
Workaround
To work around this issue, apply one of the following workarounds:
Add the setting manual_cache_clean: True to /etc/cloud/cloud.cfg in the customized virtual machine.
Uninstall cloud-init from the virtual machine.
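Applied from a shell, the workarounds look like this (a sketch; the removal command assumes a yum-based guest such as CentOS/RHEL):

    # Workaround 1: keep cloud-init's instance cache across reboots
    echo 'manual_cache_clean: True' | sudo tee -a /etc/cloud/cloud.cfg

    # Workaround 2: remove cloud-init from the VM entirely (yum-based guest assumed)
    sudo yum remove -y cloud-init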

Making multiple Docker Machines accessible across the local network. Linux & Mac

I know there are several questions similar to this, but as far as I can see there isn't an answer for a setup that I can get to work, and as far as documentation goes I'm a bit lost.
My goal is to set up a Linux development server on the local network on which I can run multiple docker machines / containers, one for each of our projects.
Ideally, I would create a docker-machine on the dev box and then be able to access it from any of my local network machines. I can run docker on the Linux box directly and access it by publishing the ports, but I want to run multiple machines with different IP addresses so that we can have multiple VMs running (multiple projects).
I've looked at Docker Swarm and overlay networks and just not been able to find a single tutorial or set of instructions to get this sort of set up running.
So I have a dev box at 192.168.0.101 with docker-machine on it. I want to create a new machine, run nginx on it, and then access nginx from another machine on the local network, i.e. http://192.168.99.1/, then set up another and access that too at, say, http://192.168.99.2/.
If anyone has managed to do this i'd be interested to know how.
One way I've been thinking about doing it is running nginx on the dev box itself and setting up config rules to proxy to the local machines; I'm unsure how well this would work (it works for web servers, but what if I want to ssh or bash into one of those machines, or connect to a mysql container on one of them?). A rough sketch of this idea follows below.
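For what it's worth, the reverse-proxy idea might look like this (a rough sketch: project1.conf, project1.dev.local, and port 8081 are all made-up names, and as noted it only covers HTTP, not ssh or mysql):

    # Write a per-project nginx server block (names and ports are hypothetical)
    sudo tee /etc/nginx/conf.d/project1.conf >/dev/null <<'EOF'
    server {
        listen 80;
        server_name project1.dev.local;       # hypothetical hostname
        location / {
            proxy_pass http://127.0.0.1:8081; # the container's published port
        }
    }
    EOF
    sudo nginx -s reload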
Have you considered running your docker machines inside LXD containers?
Stéphane Graber's site has a lot of relevant information:
https://stgraber.org/category/lxd/
The way that I resolved this is by using NAT on the Linux box and then assigning a different IP to each machine. I followed the instructions here: http://blog.oddbit.com/2014/08/11/four-ways-to-connect-a-docker/ which finally got me to the point of sharing multiple docker machines on the same port (80) but different IPs.
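In outline, that NAT approach is (a sketch under assumptions: the LAN NIC is eth0, 192.168.99.1 is the extra address picked for one project, and its container sits at 172.17.0.2):

    # Add a secondary LAN address for this project (NIC name assumed)
    sudo ip addr add 192.168.99.1/24 dev eth0

    # Forward port 80 on that address to the project's container
    sudo iptables -t nat -A PREROUTING -d 192.168.99.1 -p tcp --dport 80 \
        -j DNAT --to-destination 172.17.0.2:80

    # Masquerade the container's outbound replies (Docker often adds this already)
    sudo iptables -t nat -A POSTROUTING -s 172.17.0.2 -j MASQUERADE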

Docker: Linking containers on different host machines

How can I connect two containers on different host machines in Docker? I need a Node.js application on one host to use data from MongoDB on another host. Can anyone give me an example of this?
You could use the ambassador pattern for container linking:
http://docs.docker.com/articles/ambassador_pattern_linking/
Flocker is also addressing this issue, but it needs more time for infrastructure setup:
https://docs.clusterhq.com/en/0.3.2/gettingstarted/
You might also want to check out Kontena (http://www.kontena.io). Kontena supports multicast (provided by Weave) and DNS service discovery. Thanks to DNS discovery, you can predict before the deploy what address each container will get.
Like Flocker, Kontena also needs some time for infrastructure setup: https://github.com/kontena/kontena/tree/master/docs#getting-started
But you will get service scaling and deploy automation as a bonus.
You can connect containers on different hosts by creating an overlay network:
Docker Engine supports multi-host networking out-of-the-box through the overlay network driver.
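A minimal sketch of that approach, assuming the two hosts already run a recent Docker Engine and form a Swarm (the app image my-node-app is hypothetical):

    # On host A: initialize the swarm and create an attachable overlay network
    docker swarm init
    docker network create --driver overlay --attachable app-net

    # On host B: join the swarm with the token printed by `swarm init`
    # docker swarm join --token <token> <hostA-ip>:2377

    # On host A: run MongoDB attached to the overlay network
    docker run -d --name mongo --network app-net mongo

    # On host B: the Node.js app reaches the database by container name
    docker run -d --name app --network app-net \
        -e MONGO_URL=mongodb://mongo:27017/mydb my-node-app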
It doesn't matter which machine the other container is on; all you need to do is ensure the port is published on that machine and then point the container on the first machine at the second machine's IP, as sketched below.
Machine 1: Postgres on port 5432, IP 172.25.8.10 (from ifconfig)
Machine 2: web server on port 80, IP 172.25.8.11, with its DB connection pointed at 172.25.8.10:5432
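As a concrete sketch of that two-host layout (image names and the password are illustrative):

    # Machine 1 (172.25.8.10): publish Postgres on the host port
    docker run -d --name db -p 5432:5432 \
        -e POSTGRES_PASSWORD=secret postgres

    # Machine 2 (172.25.8.11): point the web app at machine 1's address
    docker run -d --name web -p 80:80 \
        -e DATABASE_URL=postgres://postgres:secret@172.25.8.10:5432/postgres \
        my-web-app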

How to clone a Git repo from a VM?

I am currently developing inside a virtual Ubuntu box with Git, and I need to clone this repo to another CentOS VM. I don't know how to describe the git repo's location using the user@server:/path.git syntax.
Can anyone point me in the right direction? Thanks!
Can you ping one VM from the other? If so, you should be able to ssh to the IP you can ping.
If you cannot ping, then perhaps you have a host which is reachable from both VMs; you could create a server repo there. For instance, github.com, bitbucket.org, or the many other hosting services might be a suitable third-party host. Alternatively, you could install a proxy (squid or dante-socks or something similar) to allow the VMs to talk to each other.
If you have email connectivity, perhaps you could mail git bundles back and forth instead of using normal live git connections. There are many ways to do this, but we really need to know more about the networking and communications environment of these VMs to say more.
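If ssh does work between the VMs, the clone itself is one line (a sketch; the user, address, and repo path are hypothetical):

    # From the CentOS VM, assuming the Ubuntu box is 192.168.99.10, the user
    # is "dev", and the working repo lives at /home/dev/myproject
    git clone dev@192.168.99.10:/home/dev/myproject

    # Equivalent explicit ssh URL form
    git clone ssh://dev@192.168.99.10/home/dev/myproject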
