CoreOS access from other instances

I have a CoreOS cluster with 3 instances. I need to init a service on the 3 instances, but I don't want to hard-code IPs to connect. Is there a dynamic way to discover the instances, get their IPs, and then use them?

You can use the following command to get a list of the cluster instances for further processing:
fleetctl list-machines | awk '{print $2}' | tail -n +2
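If you need to act on each address from a script, a minimal sketch could look like this (the echo is just a placeholder for whatever you actually do with each IP):
#!/bin/bash
# Iterate over the IPs of all fleet cluster members
# (second column of `fleetctl list-machines`, header line skipped).
for ip in $(fleetctl list-machines | tail -n +2 | awk '{print $2}'); do
  echo "cluster member: ${ip}"
  # do something with ${ip} here
done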

If you want to use the IP in one of your containers/services, and you're running CoreOS on a cloud provider, you can source an environment file with the allocated IPs:
[Service]
EnvironmentFile=/etc/environment
and then use those values as environment variables.
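On most cloud providers CoreOS writes the allocated addresses into /etc/environment as COREOS_PUBLIC_IPV4 and COREOS_PRIVATE_IPV4, so a unit could use them roughly like this (a sketch; the image name and port are placeholders):
[Unit]
Description=Example service bound to the host's private IP
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=/etc/environment
ExecStart=/usr/bin/docker run --rm -p ${COREOS_PRIVATE_IPV4}:8080:8080 example/image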

CoreOS has a built-in scheduler called fleet that runs on all hosts. You can simply create the unit and start it with:
fleetctl start myunit.service
To tail the output of the service you can run fleetctl journal -f myunit.service, which will automatically ssh into the host running the container and show you the output.
"I need to init a service in the 3 instances"
It seems like you're trying to run a service that should be active on all hosts in the CoreOS cluster. Take a look at the X-Fleet variable Global. You can set up a service to run on all hosts by adding this to the bottom of your service/unit file:
[X-Fleet]
Global=true
This way you will only have to start the service once, to initiate it on all hosts in the CoreOS cluster.
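A complete global unit might look roughly like this (a sketch; the container and image names are placeholders):
[Unit]
Description=Example service that runs on every host
After=docker.service
Requires=docker.service

[Service]
ExecStartPre=-/usr/bin/docker rm -f myunit
ExecStart=/usr/bin/docker run --name myunit example/image
ExecStop=/usr/bin/docker stop myunit

[X-Fleet]
Global=true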

Related

Access host shell from inside the docker container

I have a Docker-based application written in Java which calls a shell script to collect data. I want to add a few commands to this script to collect host machine/VM data, like below:
firewall-cmd --list-all >> firewall.txt
journalctl >> journal.log
hostnamectl >> hostname-config.txt
iptables-save >> iptables.txt
As these commands/resources are not directly accessible to the container, is there any way I can achieve this? Basically, what I am looking for is a way to access/run commands on the host from inside the container. If yes, please answer with examples for any of the above commands.
A principal design goal of Docker is that processes in containers can't directly run commands on the host and can't directly access the host's filesystem, network configuration, init system, or other details.
If you want to run a detailed low-level diagnostic tool on this system, it needs to run directly on the host system, and probably as root. It can't run in a container, virtual machine, or other isolation system.
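If the container only needs the collected results rather than the ability to run the commands itself, one common workaround (a sketch; paths and the image name are assumptions) is to run the collection on the host, e.g. from cron or a systemd timer, and bind-mount the output directory into the container read-only:
#!/bin/bash
# Runs on the HOST as root, not inside the container.
OUT=/var/lib/host-diagnostics        # placeholder output directory
mkdir -p "$OUT"
firewall-cmd --list-all > "$OUT/firewall.txt"
journalctl --no-pager   > "$OUT/journal.log"
hostnamectl             > "$OUT/hostname-config.txt"
iptables-save           > "$OUT/iptables.txt"
The container then reads the files via the bind mount:
docker run -v /var/lib/host-diagnostics:/host-data:ro my-java-app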

Start docker-compose automatically on EC2 startup

I have an Amazon Linux 2 AWS instance with some services orchestrated via docker-compose, and I am using the docker-compose up or docker-compose start commands to start them all. Now I am in the process of starting/stopping my EC2 instance automatically every day, but once it is started, I want to run something over ssh that changes to the directory where the docker-compose.yml file is and then starts it.
something like:
#!/bin/bash
cd /mydirectory
docker-compose start
How can I achieve that?
Thanks
I would recommend using cron for this as it is easy. Most cron implementations support non-standard instructions like @daily, @weekly, @monthly and @reboot.
You can either put this in a shell script and schedule that in crontab as @reboot /path/to/shell/script
or
you can specify the docker-compose file with an absolute path and schedule it directly in crontab as @reboot docker-compose -f /path/to/docker-compose.yml start
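For example (a sketch; the docker-compose binary location and project path are placeholders, and absolute paths are used because cron runs with a minimal PATH):
#!/bin/bash
# /usr/local/bin/start-services.sh -- run once per boot via cron's @reboot
cd /mydirectory || exit 1
/usr/local/bin/docker-compose start   # or `up -d` if the containers may not exist yet
and the corresponding crontab entries (edit with crontab -e):
@reboot /usr/local/bin/start-services.sh
# or, without a wrapper script:
@reboot /usr/local/bin/docker-compose -f /mydirectory/docker-compose.yml start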
Other possibilities:
Create a systemd service and enable it; all enabled systemd services are started at boot (difficulty: medium; see the sketch after this list).
Put a script under /etc/init.d and link it into the rc*.d directories; these scripts are started at boot according to their priority (difficulty: medium).
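As a sketch of the systemd variant (the unit name, project directory, and docker-compose location are placeholders), e.g. /etc/systemd/system/my-compose.service:
[Unit]
Description=Start docker-compose project at boot
Requires=docker.service
After=docker.service

[Service]
Type=oneshot
RemainAfterExit=yes
WorkingDirectory=/mydirectory
ExecStart=/usr/local/bin/docker-compose up -d
ExecStop=/usr/local/bin/docker-compose stop

[Install]
WantedBy=multi-user.target
Enable it once with sudo systemctl enable my-compose.service and it will run on every boot.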
Bonus:
If you specify a restart policy in the docker-compose file for a container, it will autostart when you reboot or switch on the server.
Consider using Amazon Elastic Container Service (Amazon ECS) which can orchestrate docker containers and take care of your underlying OSes.
Simply run the following command once on the host:
sudo systemctl enable docker
Afterwards the restart: always inside your service definition in docker-compose.yml should take effect, starting the containers on boot.
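As a sketch, the relevant part of docker-compose.yml would look like this (the service and image names are placeholders):
version: "3"
services:
  web:
    image: example/image
    restart: always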

Configure Kubernetes for an Azure cluster

I followed the guide to getting Kubernetes running in Azure here:
http://kubernetes.io/docs/getting-started-guides/coreos/azure/
In order to create pods, etc., the guide has you ssh into the master node kube-00 in the cloud service and run kubectl commands there:
ssh -F ./output/kube_randomid_ssh_conf kube-00
Once in you can run the following:
kubectl get nodes
kubectl create -f ~/guestbook-example/
Is it possible to run these kubectl commands without logging into the master node? In other words, how can I set up kubectl to connect to the cluster hosted in Azure from my development machine instead of ssh'ing into the node this way?
I tried creating a context, user and cluster in the config but the values I tried using did not work.
Edit
For some more background: the tutorial creates the Azure cluster with a script that uses the Azure CLI. It ends up looking like this:
Resource Group: kube-randomid
- Cloud Service: kube-randomid
- VM: etcd-00
- VM: etcd-01
- VM: etcd-02
- VM: kube-00
- VM: kube-01
- VM: kube-02
It creates a Virtual Network that all of these VMs live in. As far as I can tell, all of the machines in the cloud service share a single virtual IP.
The kubectl command-line tool is just a wrapper that issues remote REST calls over HTTPS against the Kubernetes API server. If you want to do so from your own machine, you need to open the correct port (443) on your master node and pass the connection parameters to kubectl as described in this tutorial:
https://coreos.com/kubernetes/docs/latest/configure-kubectl.html
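Roughly, the setup on your development machine looks like this (a sketch; the server address, certificate paths, and names are placeholders, and the exact credentials depend on how the API server was configured):
kubectl config set-cluster azure-kube --server=https://<master-address>:443 --certificate-authority=/path/to/ca.pem
kubectl config set-credentials azure-admin --client-certificate=/path/to/admin.pem --client-key=/path/to/admin-key.pem
kubectl config set-context azure --cluster=azure-kube --user=azure-admin
kubectl config use-context azure
kubectl get nodes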

Cannot setup multi-host Docker overlay network with etcd

I am trying to connect two Docker hosts with an overlay network and am using etcd as the KV store. etcd is running directly on the first host (not in a container). I finally managed to connect the Docker daemon of the first host to etcd, but I cannot manage to connect the Docker daemon on the second host.
I downloaded etcd from the Github releases page and followed the instructions under the "Linux" section.
After starting etcd, it is listening to the following ports:
etcdmain: listening for peers on http://localhost:2380
etcdmain: listening for peers on http://localhost:7001
etcdmain: listening for client requests on http://localhost:2379
etcdmain: listening for client requests on http://localhost:4001
And I started the Docker daemon on the first host (on which etcd is running as well) like this:
docker daemon --cluster-advertise eth1:2379 --cluster-store etcd://127.0.0.1:2379
After that, I could also create an overlay network with:
docker network create -d overlay <network name>
But I can't figure out how to start the daemon on the second host. No matter which values I tried for --cluster-advertise and --cluster-store, I keep getting the following error message:
discovery error: client: etcd cluster is unavailable or misconfigured
Both my hosts are using the eth1 interface. The IP of host1 is 10.10.10.10 and the IP of host2 is 10.10.10.20. I already ran iperf to make sure they can connect to each other.
Any ideas?
So I finally figured out how to connect the two hosts and to be honest, I don't understand why it took me so long to solve the problem. But in case other people run into the same problem I will post my solution here. As mentioned earlier, I downloaded etcd from the Github release page and extracted the tar file.
I followed the instructions from the etcd documentation and applied them to my situation. Instead of running etcd with all the options directly from the command line, I created a simple bash script. This makes it a lot easier to adjust the options and rerun the command. Once you have figured out the right options, it is handy to place them separately in a config file and run etcd as a service, as explained in this tutorial. So here is my bash script:
#!/bin/bash
./etcd --name infra0 \
--initial-advertise-peer-urls http://10.10.10.10:2380 \
--listen-peer-urls http://10.10.10.10:2380 \
--listen-client-urls http://10.10.10.10:2379,http://127.0.0.1:2379 \
--advertise-client-urls http://10.10.10.10:2379 \
--initial-cluster-token etcd-cluster-1 \
--initial-cluster infra0=http://10.10.10.10:2380,infra1=http://10.10.10.20:2380 \
--initial-cluster-state new
I placed this file in the etcd-vX.X.X-linux-amd64 directory (that I just downloaded and extracted), which also contains the etcd binary. On the second host I did the same thing but changed the --name from infra0 to infra1 and adjusted the IPs to those of the second host (10.10.10.20). The --initial-cluster option is not modified.
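For completeness, the script on the second host only differs in the --name and the listen/advertise addresses:
#!/bin/bash
./etcd --name infra1 \
--initial-advertise-peer-urls http://10.10.10.20:2380 \
--listen-peer-urls http://10.10.10.20:2380 \
--listen-client-urls http://10.10.10.20:2379,http://127.0.0.1:2379 \
--advertise-client-urls http://10.10.10.20:2379 \
--initial-cluster-token etcd-cluster-1 \
--initial-cluster infra0=http://10.10.10.10:2380,infra1=http://10.10.10.20:2380 \
--initial-cluster-state new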
Then I executed the script on host1 first and then on host2. I'm not sure if the order matters, but in my case I got an error message when I did it the other way round.
To make sure your cluster is set up correctly you can run:
./etcdctl cluster-health
If the output looks similar to this (listing the two members) it should work.
member 357e60d488ae5ab3 is healthy: got healthy result from http://10.10.10.10:2379
member 590f234979b9a5ee is healthy: got healthy result from http://10.10.10.20:2379
If you want to be really sure, add a value to your store on host1 and retrieve it on host2:
host1$ ./etcdctl set myKey myValue
host2$ ./etcdctl get myKey
Setting up docker overlay network
In order to set up a Docker overlay network I had to restart the Docker daemon with the --cluster-store and --cluster-advertise options. My solution is probably not the cleanest one, but it works. So on both hosts I first stopped the Docker service and then restarted the daemon with these options:
sudo service docker stop
sudo /usr/bin/docker daemon --cluster-store=etcd://10.10.10.10:2379 --cluster-advertise=10.10.10.10:2379
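A somewhat cleaner alternative is a systemd drop-in (a sketch, assuming your distribution starts the daemon via systemd with -H fd://), e.g. /etc/systemd/system/docker.service.d/overlay.conf:
[Service]
ExecStart=
ExecStart=/usr/bin/docker daemon -H fd:// --cluster-store=etcd://10.10.10.10:2379 --cluster-advertise=10.10.10.10:2379
followed by sudo systemctl daemon-reload and sudo systemctl restart docker.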
Note that on host2 the IP addresses need to be adjusted. Then I created the overlay network like this on one of the hosts:
sudo docker network create -d overlay <network name>
If everything worked correctly, the overlay network can now be seen on the other host. Check with this command:
sudo docker network ls
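As a quick end-to-end check (the container names and image are placeholders), start a container on each host attached to the overlay network and ping one from the other:
host1$ sudo docker run -d --name c1 --net <network name> busybox sleep 3600
host2$ sudo docker run -d --name c2 --net <network name> busybox sleep 3600
host2$ sudo docker exec c2 ping -c 3 c1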

Issues trying to deploy MongoDB to EC2

I created a micro instance on EC2 that runs my Node.js based web application along with nginx (I created a reverse front-end proxy so that my app can be on port 3000, with nginx routing requests to it on localhost).
I also installed MongoDB on this same (micro) instance. However, I was reading last night in the MongoDB docs about the way to deploy MongoDB on EC2 here. The difference between this method and my initial method is:
This method uses the ec2 command line tools to create new instances
When I use the ec2 command line tools to replicate the instructions, it tells me that it's ignoring one of the flags, so I think that the following command is outdated:
$ ec2-run-instances ami-05355a6c -t m1.large -g [SECURITY-GROUP] -k [KEY-PAIR] -b "/dev/sdf=:200:false:io1:1000" -b "/dev/sdg=:25:false:io1:250" -b "/dev/sdh=:10:false:io1:100" --ebs-optimized true
After using the above command and proceeding to run sudo mkfs.ext4 /dev/sdf, the device name had changed on my AMI image, since it doesn't live there anymore.
After running ec2-run-instances and refreshing my Amazon EC2 dashboard, it doesn't show up in my instances, but if I do sudo fdisk -l it'll show 2 mounts.
As you can see, the guide is probably a little outdated, and I'm just wondering how in the world to deploy my MongoDB to EC2 on its own instance. From there, how do I get them to talk to each other, e.g. my new MongoDB instance talking to my Node.js micro instance with nginx on it?
Try adding the volumes from the EC2 panel and then attaching them to an existing instance. That works for me.
The command line is
-b "/dev/xvdf=:200:false:io1:1000" -b "/dev/xvdg=:25:false:io1:250" -b "/dev/xvdh=:10:false:io1:100"
and
"/dev/xvdf=:200:false:io1:1000"
means that:
1. you have to add a Provisioned IOPS (PIOPS) EBS volume,
2. the volume size should be 200 GiB with an IOPS value of 1000, and the availability zone should be the same as your EC2 instance's,
3. xvdf is the device name you specify when you attach the volume to the instance.
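Once the volume is attached, formatting and mounting it looks roughly like this (a sketch; the device name and mount point are assumptions, so use whatever sudo fdisk -l reports):
sudo mkfs.ext4 /dev/xvdf
sudo mkdir -p /data
sudo mount /dev/xvdf /data
# persist the mount across reboots (referencing the UUID instead of the device name is more robust):
echo "/dev/xvdf /data ext4 defaults,nofail 0 2" | sudo tee -a /etc/fstab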
