Shared storage with CoreOS

I have a test cluster of 4 CoreOS machines. I want to have shared storage between them, for example to put my Docker images there and not have to pull them to each machine.
It seems, however, that CoreOS does not support NFS. What are my options for creating shared storage on CoreOS?

CoreOS does in fact support NFS; we configured an NFS mount for EFS shared storage in AWS.
This is an example cloud-config to mount AWS EFS on /mnt:
#cloud-config
write_files:
  - path: /etc/conf.d/nfs
    permissions: '0644'
    content: |
      OPTS_RPC_MOUNTD=""
coreos:
  units:
    - name: rpc-statd.service
      command: start
      enable: true
    - name: mnt.mount
      content: |
        [Mount]
        What=AZ_ZONE.fs-xxxxxxxx.efs.us-west-2.amazonaws.com:/
        Where=/mnt
        Type=nfs
    - name: runcmd.service
      command: start
      content: |
        [Unit]
        Description=command
        [Service]
        Type=oneshot
        ExecStart=/bin/sh -c "AZ_ZONE=$(curl -L http://169.254.169.254/latest/meta-data/placement/availability-zone); sed -i \"s/AZ_ZONE/$AZ_ZONE/\" /etc/systemd/system/mnt.mount; systemctl daemon-reload; systemctl restart mnt.mount"
  update:
    group: stable
    reboot-strategy: off
Replace fs-xxxxxxxx with the unique ID of your EFS share.
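Once the instance is up, you can sanity-check the mount; a minimal check, assuming the unit names above:
systemctl status mnt.mount   # should report active (mounted)
mount | grep nfs             # shows the EFS endpoint mounted on /mnt
df -h /mnt                   # EFS advertises a very large available size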
Before EFS was available, we used BitTorrent Sync as an alternative.
I'm quite curious, though, why you would want to share your image layers. The layers themselves are a deployment strength of Docker: if your images have enough in common, very little gets re-pulled between applications. Say two different applications share the ubuntu:latest base, which is the largest layer; you won't have to re-pull it when you spin up the second application on that host. You can't use any old storage backend for images either; CoreOS uses OverlayFS, which might be interesting to read up on.
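You can see this layer sharing in action from the pull output (the image names here are hypothetical):
docker pull app-a:latest      # pulls the ubuntu base layers plus app A's own layers
docker pull app-b:latest      # base layers report "Already exists"; only app B's layers download
docker history app-a:latest   # lists the layer stack making up an image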

I have recently solved this exact problem using Deis. Amongst other useful functionality, it sets up a Ceph storage cluster and a private Docker registry right out of the box. The storage volume is fault-tolerant and spans all machines in the cluster.

I use EFS (Amazon Elastic File System) for shared storage in a CoreOS cluster.
Another option would be Flocker from ClusterHQ, but it's not yet available on CoreOS.

You could run your own Docker registry server that starts only on the master machine and pulls images from S3. I haven't tried this one, but it looks good. The service files below are meant to work with the official registry repo. This is not a full solution, but it should point you in the direction you are looking for, I think.
https://registry.hub.docker.com/u/blalor/docker-s3-registry/
You would have a service file for the Docker registry and a service-discovery file to notify your cluster where the registry lives. An alternative would be to use CoreOS's hosted private registry, https://quay.io.
registry@.service file:
[Unit]
Description=Docker Registry
After=docker.service
Requires=docker.service
[Service]
Restart=always
RestartSec=10s
TimeoutStartSec=0
ExecStartPre=-/usr/bin/docker kill registry
ExecStartPre=-/usr/bin/docker rm registry
ExecStartPre=/usr/bin/docker pull registry
ExecStart=/usr/bin/docker run -p 5000:5000 --name registry -e SETTINGS_FLAVOR=s3 -e AWS_BUCKET=my-registry-bucket -e STORAGE_PATH=/registry -e AWS_KEY=... -e AWS_SECRET="..." registry
ExecStartPost=/usr/bin/etcdctl set /domains/domain.com/registry `ifconfig|grep 'broadcast 10'|awk '{print $2}'`
ExecStop=/usr/bin/docker stop registry
[Install]
WantedBy=multi-user.target
[X-Fleet]
Global=true
Conflicts=registry@*.service
registry-discovery@.service file:
[Unit]
Description=Announce registry
BindsTo=registry@*.service
[Service]
ExecStart=/bin/sh -c "while true; do etcdctl set /services/registry/%H:%i '{ \"host\": \"%H\", \"port\": \"%i\", \"version\": \"52c7248a14\" }' --ttl 60;sleep 45;done"
ExecStop=/usr/bin/etcdctl rm /services/registry/%H:%i
[X-Fleet]
X-ConditionMachineOf=registry@*.service
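A sketch of loading these template units with fleet (the 5000 instance name is arbitrary; since the registry unit is marked Global, fleet schedules it on every machine):
fleetctl submit registry@.service registry-discovery@.service
fleetctl start registry@5000.service
fleetctl start registry-discovery@5000.service
fleetctl list-units   # verify where the instances are running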

Related

Start docker containers on linux system startup from user directory

I've downloaded two Docker containers and already configured them.
So now all I want is to start them on system startup.
They live in paths like
/home/user/docker-mailserver
/home/user/docker-webserver
hosted on Ubuntu 18.04.01 (x64).
On boot, those Docker containers are not running; they only start once I log in.
I already tried something like
docker run -it --restart unless-stopped fancydockercontainer:latest
docker run -dit --restart unless-stopped fancydockercontainer:latest
But then when I ran docker ps, new containers had been added to the pool.
Is there a way to "re-route" the start process of those containers to system start without completely deleting/removing them?
Addition: I started them with docker-compose up -d mailserver.
After @KamilCuk gave a hint to solve this with a service, this was a possible solution.
Looks like this:
Create the service file with:
nano /etc/systemd/system/docker-mail.service
and put something like this in it:
[Unit]
Description=Docker Mailserver
Requires=docker.service
After=docker.service
[Service]
Restart=always
RemainAfterExit=yes
WorkingDirectory=/home/user/docker-mailserver
ExecStart=/usr/bin/docker-compose up -d mail
ExecStop=/usr/bin/docker-compose stop mail
[Install]
WantedBy=default.target
Enable the new service with systemctl enable docker-mail.service.
After rebooting the server, this mailserver is available.
At this point, I was able to see the startup log with journalctl -u docker-mail.service -b (-b limits the output to the current boot).
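The same pattern should work for the second container; a sketch, assuming the compose project in /home/user/docker-webserver defines a service named web:
[Unit]
Description=Docker Webserver
Requires=docker.service
After=docker.service
[Service]
Restart=always
RemainAfterExit=yes
WorkingDirectory=/home/user/docker-webserver
ExecStart=/usr/bin/docker-compose up -d web
ExecStop=/usr/bin/docker-compose stop web
[Install]
WantedBy=default.target
Save it as /etc/systemd/system/docker-web.service and enable it with systemctl enable docker-web.service.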

How to set the dns option in Azure web app for containers

Below is what happens when Azure runs the container. I wonder if there is a way to start a Web App for Containers with a custom DNS server.
I have 5 microservices in my ILB ASE.
They need to be able to call each other using my custom DNS server in the VNet. When I check resolv.conf, I see 127.0.0.11; I need that to point to my own custom DNS server.
How can we inject my custom DNS value here?
Should we use the app settings? If so, what are the values in Web App for Containers, so I can use the --dns option?
The mystery part is that Azure runs the docker run command itself; some values come from the app settings.
2018-08-23 14:12:56.100 INFO - docker run -d -p 13940:5001 --name xxx
-e DOCKER_CUSTOM_IMAGE_NAME=xxx.azurecr.io/xxx:558 -e WEBSITES_ENABLE_APP_SERVICE_STORAGE=false -e WEBSITES_PORT=5001 -e
WEBSITE_SITE_NAME=xxx -e WEBSITE_AUTH_ENABLED=False -e
WEBSITE_ROLE_INSTANCE_ID=0 -e WEBSITE_INSTANCE_ID=xxx -e
HTTP_LOGGING_ENABLED=1 xxx.azurecr.io/xxx:558
=====DOCKER LOG=========
2018_08_23_RD0003FF2D0408_default_docker.log:
2018-08-23T14:12:49.755843301Z warn:
Microsoft.AspNetCore.DataProtection.KeyManagement.XmlKeyManager[35]
2018-08-23T14:12:49.755897801Z No XML encryptor configured. Key
{xxx-xxx-xxx-xxx-xxx} may be persisted to storage in unencrypted form.
2018-08-23T14:12:54.761216323Z warn:
Microsoft.AspNetCore.Server.Kestrel[0]
2018-08-23T14:12:54.761251623Z Overriding address(es) 'http://+:80'.
Binding to endpoints defined in UseKestrel() instead.
2018-08-23T14:12:54.908189021Z Hosting environment: Production
2018-08-23T14:12:54.908386123Z Content root path: /app
2018-08-23T14:12:54.908961927Z Now listening on: http://0.0.0.0:5001
2018-08-23T14:12:54.909256229Z Application started. Press Ctrl+C to
shut down.
2018_08_23_RD0003FF2D0408_docker.log:
2018-08-23 14:12:44.125 INFO - Recycling container because of
AppFrameworkVersionChange and appFrameworkVersion = xxx.xxx.io/xxx:558
2018-08-23 14:12:45.900 INFO - Starting container for site
2018-08-23 14:12:45.900 INFO - docker run -d -p 30464:5001 --name xxx
-e DOCKER_CUSTOM_IMAGE_NAME=xxx.azurecr.io/xxx:549 -e WEBSITES_ENABLE_APP_SERVICE_STORAGE=false -e WEBSITES_PORT=5001 -e
WEBSITE_SITE_NAME=xxx -e WEBSITE_AUTH_ENABLED=False -e
WEBSITE_ROLE_INSTANCE_ID=0 -e WEBSITE_INSTANCE_ID=xxx -e
HTTP_LOGGING_ENABLED=1 xxx.xxx.io/xxx:558
2018-08-23 14:12:55.972 INFO - Container xxx for site xxx initialized
successfully.
2018-08-23 14:12:55.976 INFO - Recycling container because of
AppSettingsChange and isMainSite = True
2018-08-23 14:12:56.099 INFO - Starting container for site
2018-08-23 14:12:56.100 INFO - docker run -d -p 13940:5001 --name xxx
-e DOCKER_CUSTOM_IMAGE_NAME=xxx.azurecr.io/xxx:558 -e WEBSITES_ENABLE_APP_SERVICE_STORAGE=false -e WEBSITES_PORT=5001 -e
WEBSITE_SITE_NAME=xxx -e WEBSITE_AUTH_ENABLED=False -e
WEBSITE_ROLE_INSTANCE_ID=0 -e WEBSITE_INSTANCE_ID=xxx -e
HTTP_LOGGING_ENABLED=1 xxx.azurecr.io/xxx:558
2018-08-23 14:13:05.385 INFO - Container xxx for site xxx initialized
successfully.
We responded to your question on GitHub and Reddit; re-posting our response here for visibility.
"Currently, there is a workaround for this: you should modify the default resolv.conf to the custom DNS IP and then add your custom resolv.conf on docker build by adding a COPY command in your entrypoint script and pointing a custom resolv.conf to /etc.
However, we are investigating a better solution for this, so that manually updating the resolv.conf wouldn’t be necessary, so stay tuned."
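A sketch of that workaround; the nameserver address and file paths here are hypothetical. Ship a custom resolv.conf (containing e.g. nameserver 10.0.0.4) into the image with a COPY instruction, then have the entrypoint copy it into place:
#!/bin/sh
# entrypoint.sh: replace Docker's embedded 127.0.0.11 resolver with the custom one
cp /config/resolv.conf /etc/resolv.conf
exec "$@"   # hand off to the container's main command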
You shouldn't use DNS to communicate with microservices; instead, you should make use of a service registry.
Check this Microsoft paper discussing this:
Each microservice has a unique name (URL) that is used to resolve its
location. Your microservice needs to be addressable wherever it is
running. If you have to think about which computer is running a
particular microservice, things can go bad quickly. In the same way
that DNS resolves a URL to a particular computer, your microservice
needs to have a unique name so that its current location is
discoverable. Microservices need addressable names that make them
independent from the infrastructure that they are running on. This
implies that there is an interaction between how your service is
deployed and how it is discovered, because there needs to be a service
registry. In the same vein, when a computer fails, the registry
service must be able to indicate where the service is now running.
As you can see, the best solution will depend on your deployment model. Check this note about containers:
In some microservice deployment environments (called clusters, to be
covered in a later section), service discovery is built-in. For
example, within an Azure Container Service environment, Kubernetes and
DC/OS with Marathon can handle service instance registration and
deregistration. They also run a proxy on each cluster host that plays
the role of server-side discovery router. Another example is Azure
Service Fabric, which also provides a service registry through its
out-of-the-box Naming Service.
Hope it helps!

How to start a Docker container with cloud-config file for CoreOS?

I'm trying to configure my CoreOS server with Terraform, using a cloud-config file for CoreOS. I am currently trying to set up a Mongo database in a Docker container.
Here is my config file:
write_files:
  - path: "/home/core/keyfile"
    permissions: "0600"
    owner: "999"
    content: |
      hUoQVrERB0*** <here is my key for MongoDB>
coreos:
  units:
    - name: "dockerstart.service"
      command: "start"
      content: |
        [Unit]
        Description=Start
        Author=Me
        [Service]
        Restart=always
        ExecStart=/usr/bin/docker run --name mongo -v /home/core:/opt --add-host node1.example.com:127.0.0.1 -p 27017:27017 -d mongo:2.6.5 --smallfiles --keyFile /opt/keyfile --replSet "rs0"
        ExecStop=/usr/bin/docker rm -f mongo
I am not sure how to use CoreOS units (when I SSH into the server, the Docker container is not running, so the config file must be wrong). According to the CoreOS cloud-config validator, my file is valid. Also, I am not sure whether this is the simplest way to deploy a MongoDB server.
How do I properly use CoreOS units? Any thoughts on a better way to deploy a Mongo database?
Any help, comments, or suggestions are appreciated!
I finally found the solution.
Running docker run with the -d option daemonizes the command: the client exits immediately while the container keeps running in the background. When systemd sees the ExecStart process exit, it concludes that the service has crashed.
Here is the journalctl -u dockerstart.service output on the server:
docker[1237]: ace3978442a729420ecb87af224bd146ec6ac7912c5cc452570735f4a3be3a79
docker[1297]: mongo
systemd[1]: dockerstart.service: Service hold-off time over, scheduling restart.
systemd[1]: Stopped Start.
systemd[1]: Started Start.
Here you can clearly see that systemd stops and restarts the Start service.
So the solution is to remove -d from the docker run command, so that the process stays in the foreground and systemd can supervise it.
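For reference, a sketch of the corrected [Service] section (the same flags as above, minus -d, plus a pre-start cleanup in case a stale container is left over):
[Service]
Restart=always
ExecStartPre=-/usr/bin/docker rm -f mongo
ExecStart=/usr/bin/docker run --name mongo -v /home/core:/opt --add-host node1.example.com:127.0.0.1 -p 27017:27017 mongo:2.6.5 --smallfiles --keyFile /opt/keyfile --replSet "rs0"
ExecStop=/usr/bin/docker rm -f mongo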
In case it helps you in the future, you can use the Container Linux Config file format to set up the initial configuration for CoreOS.
I published an example that creates an Ignition config from a Container Linux Config file, applied with Terraform: https://github.com/joariasl/terraform-ansible-docker-swarm-coreos-aws/tree/feature/coreos-etcd
More about this: https://coreos.com/os/docs/latest/provisioning.html
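For illustration, the unit above might look roughly like this in Container Linux Config format (a sketch; the YAML is transpiled to Ignition JSON with the ct tool):
systemd:
  units:
    - name: dockerstart.service
      enabled: true
      contents: |
        [Unit]
        Description=Mongo container
        Requires=docker.service
        After=docker.service
        [Service]
        Restart=always
        ExecStart=/usr/bin/docker run --name mongo -p 27017:27017 mongo:2.6.5
        ExecStop=/usr/bin/docker rm -f mongo
        [Install]
        WantedBy=multi-user.target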

How to make an Azure VM & configure containers to use Azure File Storage via docker CLI / quickstart terminal?

I'm using the latest Docker Toolbox and I would like to launch Docker containers on Azure that connect to an Azure File Store. What should one run from the Docker Quickstart Terminal to achieve this?
The easiest way to do this is to create an Ubuntu VM with Docker preinstalled on Azure:
https://azure.microsoft.com/en-us/blog/introducing-docker-in-microsoft-azure-marketplace/
Then follow the Azure File System Docker Volume Driver install instructions here:
https://github.com/Azure/azurefile-dockervolumedriver/blob/master/contrib/init/systemd/README.md
Once you can successfully create volumes on that VM, you can make them shared volumes or Data Volume Containers to share them between your Docker containers:
https://docs.docker.com/engine/tutorials/dockervolumes/
For more generic instructions, please see @rbj325's answer.
Create docker-machine
First things first, we need an Azure VM to work with. We can use the docker-machine CLI to create it. This set of instructions creates the VM with Ubuntu 16.04 LTS to simplify (somewhat) the installation steps.
docker-machine create --driver azure --azure-subscription-id XXXX \
--azure-location westeurope --azure-resource-group XXX \
--azure-image canonical:UbuntuServer:16.04.0-LTS:latest XXXXXX
This sets up everything we need on Azure.
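Optionally, point your local Docker client at the new machine so that docker commands target the Azure VM (XXXXXX is the machine name chosen above):
eval $(docker-machine env XXXXXX)
docker info   # now reports the Azure VM's daemon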
Install azure file storage docker plugin
We then need to SSH into the docker-machine to install the plugin:
docker-machine ssh XXXXXX
Once in, the following steps can be taken to install the plugin:
sudo -s
wget -qO /usr/bin/azurefile-dockervolumedriver https://github.com/Azure/azurefile-dockervolumedriver/releases/download/[VERSION]/azurefile-dockervolumedriver
chmod +x /usr/bin/azurefile-dockervolumedriver
wget -qO /etc/systemd/system/azurefile-dockervolumedriver.service https://raw.githubusercontent.com/Azure/azurefile-dockervolumedriver/master/contrib/init/systemd/azurefile-dockervolumedriver.service
cp [myconfigfile] /etc/default/
systemctl daemon-reload
systemctl enable azurefile-dockervolumedriver
systemctl start azurefile-dockervolumedriver
systemctl status azurefile-dockervolumedriver
Note that there are two things required here:
the latest version number for the driver from GitHub
a file containing some Azure storage credentials
For my installation process, I made a script I could reuse and put my config file in a secure store that could be retrieved at install time. Please note that it fetches driver version 0.2.1.
Once this has completed, exit the ssh connection.
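For reference, a sketch of what that credentials file (copied to /etc/default/ above) might contain; the variable names are taken from my reading of the driver's systemd README, and the values are placeholders:
AF_ACCOUNT_NAME=mystorageaccount
AF_ACCOUNT_KEY=your-base64-storage-account-key==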
Create volumes
You should now be able to create docker volumes
docker volume create --name filestore -d azurefile -o share=filestore
Create docker containers
You can now use this volume with docker containers
docker run -it --name=example -v filestore:/filestore ubuntu /bin/bash
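A quick check that the volume really is shared storage (both containers mount the same Azure file share):
docker run --rm -v filestore:/filestore ubuntu sh -c 'echo hello > /filestore/test.txt'
docker run --rm -v filestore:/filestore ubuntu cat /filestore/test.txt   # prints: hello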

Is there a production ready UI for managing CoreOS

I've played around with Panamax as a solution for managing groups of containers on a single-server CoreOS installation, but it lacks several features, notably fleet management and user-based access restriction.
Furthermore, the project does not seem to be maintained anymore.
Are there any active, production-ready alternatives that make managing multiple CoreOS servers possible via a UI (web or desktop)?
I have not tried this yet, but Mist.io seems like a very promising option. It comes in both an open-source and a SaaS version; the open-source version is actively maintained, and their service is production-ready for an impressive list of cloud providers. The UI gives options for managing and monitoring CoreOS clusters at both the OS and container levels: you can spin up new hosts and new containers from the same UI. It might be what you are looking for.
The container manager for fleet is fleet-browser:
https://github.com/cloudwalkio/fleet-browser
Here is the unit file I use to deploy it; just change the lines where I have put "YOURENDPOINT", etc.
[Unit]
Description=Expose Fleet API in a nice GUI
Requires=docker.service
After=docker.service
[Service]
EnvironmentFile=/etc/environment
KillMode=none
TimeoutStartSec=0
Restart=always
RestartSec=10s
ExecStartPre=-/usr/bin/docker kill fleet-browser
ExecStartPre=-/usr/bin/docker rm fleet-browser
ExecStartPre=/usr/bin/docker pull cloudwalk/fleet-browser:latest
ExecStart=/usr/bin/docker run --net=host --rm --name fleet-browser \
-e FLEET_ENDPOINT=YOURENDPOINT:8081 \
-e ETCD_ENDPOINT=YOURENDPOINT:2379 \
-p 5000:5000 cloudwalk/fleet-browser:latest
ExecStop=/usr/bin/docker stop fleet-browser
[X-Fleet]
MachineMetadata="server-type=YOURSERVERTYPE" "node=YOURNODENAME"
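A sketch of deploying it with fleet, assuming the unit file is saved as fleet-browser.service:
fleetctl submit fleet-browser.service
fleetctl start fleet-browser.service
fleetctl list-units   # confirm it landed on a machine matching the metadata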
