Heketi operations not reflecting in glusterfs server - glusterfs

I started two glusterfs servers and am able to create and mount volumes on both.
I have built heketi version 5.0.1 from here.
This is how I started the heketi server after the build:
cd $GOPATH/src/github.com/heketi/heketi/
cp etc/heketi.json heketi.json
./heketi --config=heketi.json
The server started running on port 8080.
Now I am using the heketi client to interact with the heketi server as follows:
export HEKETI_CLI_SERVER=http://localhost:8080
cd $GOPATH/src/github.com/heketi/heketi/client/cli/go
I added topology.json with the glusterfs server details and was able to run the following commands:
./heketi-cli topology load --json=./topology.json
./heketi-cli volume create --size=1 --replica=2
./heketi-cli volume create --name=testvol --size=40 --durability="replicate" --replica=2
I am able to see the volumes created using the heketi client, and I can also fetch all the volume details. But when I check the glusterfs servers, I don't see any volumes created. Does anyone have an idea what's going on?

Answer from the community:
The example config file etc/heketi.json configures the mock executor which does not send commands to gluster at all and is used for testing purposes.
Please change this setting to ssh or kubernetes, whichever applies to you (I assume ssh). You also have to fill in the details of how to reach the gluster servers over ssh (the settings under sshexec).
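As a rough sketch (key names come from the sample config; the key file path and user are placeholders you need to adapt to your environment), the relevant part of heketi.json would look something like this:
"executor": "ssh",
"sshexec": {
    "keyfile": "/path/to/private_key",
    "user": "root",
    "port": "22",
    "fstab": "/etc/fstab"
},
After switching the executor, restart the heketi server (with a fresh database if it still contains the mock-created volumes) and reload the topology so that subsequent volume operations are actually executed on the gluster nodes over SSH.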

Related

Persisting content across docker restart within an Azure Web App

I'm trying to run a Ghost Docker image on Azure within a Linux Docker container. This is incredibly easy to get up and running using a custom Docker image for Azure Web App on Linux and pointing it at the official Docker Hub image for Ghost.
Unfortunately the official Docker image stores all data under the /var/lib/ghost path, which isn't persisted across restarts, so whenever the container is restarted all my content gets deleted and I end up back at a default Ghost install.
Azure won't let me execute arbitrary commands; you basically point it at a Docker image and it fires off from there, so I can't use the -v command-line param to map a volume. The Docker image does have an entry point configured, if that would help.
Any suggestions would be great. Thanks!
Set WEBSITES_ENABLE_APP_SERVICE_STORAGE to true in the app settings and the home directory will be mapped from your outer Kudu instance:
https://learn.microsoft.com/en-us/azure/app-service/containers/app-service-linux-faq
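If you use the Azure CLI, the app setting can be applied like this (the app and resource group names are placeholders):
az webapp config appsettings set --resource-group <resource-group> --name <app-name> --settings WEBSITES_ENABLE_APP_SERVICE_STORAGE=true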
You have a few options:
You could mount a file share inside the Docker container by creating a custom image, then storing data there. See these docs for more details.
You could switch to the new container instances, as they provide volume support; a rough sketch is shown after this list.
You could switch to the Azure Container Service. This requires an orchestrator, like Kubernetes, and might be more work than you're looking for, but it also offers more flexibility, provides better reliability and scaling, and other benefits.
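For the container-instances option, a rough Azure CLI sketch, assuming you already have an Azure Files share to hold the content (the resource group, storage account, key and share names are placeholders):
az container create \
    --resource-group <resource-group> \
    --name ghost-blog \
    --image ghost:1-alpine \
    --ports 2368 \
    --azure-file-volume-account-name <storage-account> \
    --azure-file-volume-account-key <storage-key> \
    --azure-file-volume-share-name ghost-content \
    --azure-file-volume-mount-path /var/lib/ghost/content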
You have to use a shared volume that maps the container's /var/lib/ghost directory to a host directory. This way, your data will persist in the host directory.
To do that, use the following command.
$ docker run -d --name some-ghost -p 3001:2368 -v /path/to/ghost/blog:/var/lib/ghost/content ghost:1-alpine
I have never worked with Azure, so I'm not 100 percent sure the following applies, but if you interface with Docker via the CLI there is a good chance it does.
Persistence in Docker is handled with volumes. They are basically mounts inside the container's file system tree that point to a directory on the outside. From your text I understand that you want to store the content of the inside /var/lib/ghost path in /home/site/wwwroot on the outside. To do this you would call docker like this:
$ docker run [...] -v /home/site/wwwroot:/var/lib/ghost ghost
Unfortunately, setting persistent storage (or bring-your-own storage) to a specific path is currently not supported in Azure Web Apps on Linux.
That said, you can play with SSH and try to configure Ghost to point to /home/ instead of /var/lib/.
I have prepared a Docker image here: https://hub.docker.com/r/elnably/ghost-on-azure that adds SSH capability; the Dockerfile and code can be found here: https://github.com/ahmedelnably/ghost-on-azure/tree/master/1/alpine.
Try it out by configuring your web app to use elnably/ghost-on-azure:latest, browse to the site (to start the container), and go to the SSH page at .scm.azurewebsites.net. To learn more about SSH check this link: https://aka.ms/linux-ssh.
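If you prefer the CLI over the portal, pointing the web app at that image looks roughly like this (app and resource group names are placeholders; newer Azure CLI versions use --container-image-name instead of --docker-custom-image-name):
az webapp config container set --resource-group <resource-group> --name <app-name> --docker-custom-image-name elnably/ghost-on-azure:latest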

How to scale application with multiple exposed ports and multiple volume mounted by using docker swarm?

I have one Java-based application (JBoss 6.1 Community) with heavy traffic on it. Now I want to migrate this application's deployment to Docker, using Docker Swarm for clustering.
Scenario
My application needs two ports exposed from the Docker container: one is the web port (i.e. 9080) and the other is the database connection port (i.e. 1521). There are also a few things like a logs directory for each container that are mounted on the host system.
Simple Docker example
docker run -it -d --name web1 -h "My Hostname" -p 9080:9080 -p 1521:1521 -v /home/web1/log:/opt/web1/jboss/server/log/ -v /home/web1/license:/opt/web1/jboss/server/license/ MYIMAGE
Docker with Swarm example
docker service create --name jboss_service --mount type=bind,source=/home/web1/license,destination=/opt/web1/jboss/server/license/ --mount type=bind,source=/home/web1/log,destination=/opt/web1/jboss/server/log/ MYIMAGE
Now if I scale/replicate the above service to 2 or 3, which host port will it bind to, and which mount directory will it bind for the newly created containers?
Can anyone help me understand how scaling and service replication will work in this type of scenario?
I have also gone through --publish and --name global, but nothing helped in my case.
Thank you!
Supporting stateful containers is still immature in the Docker universe.
I'm not sure this is possible with Docker Swarm (if it is I'd like to know) and it's not a simple problem to solve.
I would suggest you review the StatefulSet feature that comes in the latest version of Kubernetes:
https://kubernetes.io/docs/concepts/abstractions/controllers/statefulsets/
https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/
It supports the creation of a unique volume for each container in a scale-up event. As for port handling, that is part of Kubernetes' normal Service feature, which implements container load balancing.
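A minimal StatefulSet sketch for the setup from the question (MYIMAGE and the two ports come from the question; the name, labels and storage size are placeholder assumptions), showing how volumeClaimTemplates give each replica its own volume:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: jboss
spec:
  serviceName: jboss
  replicas: 3
  selector:
    matchLabels:
      app: jboss
  template:
    metadata:
      labels:
        app: jboss
    spec:
      containers:
      - name: jboss
        image: MYIMAGE
        ports:
        - containerPort: 9080
        - containerPort: 1521
        volumeMounts:
        - name: logs
          mountPath: /opt/web1/jboss/server/log/
  volumeClaimTemplates:
  - metadata:
      name: logs
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
Each replica (jboss-0, jboss-1, ...) gets its own PersistentVolumeClaim created from the template, so the log directories of the replicas never collide.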
I would suggest building your stack into a docker-compose v3 file, which can be run on a swarm cluster.
Instead of publishing those ports, you should expose them. That means the ports are NOT available on the host system directly, but only inside the Docker network. Every Compose file gets its own network by default, e.g. 172.18.0.0/24. Each container gets its own IP and makes its service available on the specified port.
If you scale up to 3 containers you will get:
172.18.0.1:9080,1521
172.18.0.2:9080,1521
172.18.0.3:9080,1521
You would need a load balancer to access those services. I use jwilder/nginx-proxy if you prefer a container approach. I can also recommend Rancher, which comes with an internal load balancer.
In swarm mode you have to use the overlay network driver and create the network yourself; otherwise it will only be accessible from the local host.
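To make that concrete, a rough docker-compose v3 sketch (MYIMAGE and the ports come from the question; the network name and replica count are just example assumptions):
version: "3"
services:
  web1:
    image: MYIMAGE
    expose:
      - "9080"
      - "1521"
    deploy:
      replicas: 3
    networks:
      - appnet
networks:
  appnet:
    driver: overlay
Deploy it with docker stack deploy -c docker-compose.yml mystack; the three replicas are then reachable inside the appnet overlay network on ports 9080 and 1521, without binding anything on the host.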
Regarding logging, you should redirect your log files to stdout and catch them with a logging driver (fluentd, syslog, graylog2).
For persistent storage you should have a look at Flocker! However, databases might not support those storage implementations. E.g. MySQL does not support them, while MongoDB does work with a Flocker volume.
It seems like you have a lot to read. :)
https://docs.docker.com/

Restricting access to mounted /var/run/docker.sock

I am currently developing a webapp using docker-compose and Docker. Currently, there is a front-end Nginx reverse proxy-server in one container and a Rails app in another container.
Sometimes, the Rails app needs to make changes to the Nginx configuration files. I've implemented this by mounting the configuration directory as a shared volume in both containers.
However, to force Nginx to reload its configuration files after the Rails app modifies it, it needs to send a HUP signal to the Nginx process. At the moment, I am implementing this by mounting the host's /var/run/docker.sock into the Rails app container and using a gem to ask the host Docker to send the signal to the right container.
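For context, the current setup amounts to roughly this in docker-compose terms (service names and the shared config volume are placeholders for illustration):
version: "3"
services:
  nginx:
    image: nginx
    volumes:
      - nginx-conf:/etc/nginx/conf.d
  rails:
    build: .
    volumes:
      - nginx-conf:/rails/nginx-conf
      - /var/run/docker.sock:/var/run/docker.sock   # the part I'm worried about
volumes:
  nginx-conf: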
This works fine but now I'm worried about security. If the Rails container is compromised, then the attacker will have root access to the host.
I thought about creating another container whose sole job is to broker access to the socket, exposing a limited API to the main Rails app. But then we run into the same problem of what happens when the broker is also compromised. Not only that, but surely there's an easier way?
I searched for some solutions to limit which APIs can be called on /var/run/docker.sock but I wasn't able to find any solutions.
Does anyone have any ideas? Perhaps there is some other way I can reload the Nginx configuration files without having to go through the Docker API?

Sharing /etc/passwd between two Docker containers

I'm trying to containerize an application which consists of two services. One is a basic server running on a specific port and the second is an SSH server. The first service creates users by executing standard Unix commands (managing users and their SSH keys), and those users can then log in to the second, SSH service. Each service runs in a separate container, and I use Docker Compose to run everything together.
The issue is that I'm unable to define Docker volumes in a way that lets a user created by the first service use the second service. I need to share files like /etc/passwd, /etc/shadow, and /etc/group between both services.
What I tried:
1) Define a volume for each file
Not supported in Docker Compose (it can only use directories as volumes)
2) Replace /etc/passwd with a symlink to a copy in a volume
as described here
3) Make the whole /etc a volume
Doesn't work (only some files make it from the image into the volume)
It's ugly
I would be thrilled with any suggestion or workaround which wouldn't require putting both services in one container.

Docker - Cassandra with Authentication

I want to set up a Cassandra container with configured authentication on Docker. Currently I'm using the official Cassandra Docker image, but it doesn't seem to provide an option (via the ENV thingies) for enabling auth mode.
One possibility would be to set up my own repository, clone the Cassandra Docker GitHub repo, and modify this file so it also accepts the auth-related options, but this seems a bit too complex for my quite simple task. Does anybody know of a simpler solution, or have any hints?
The only option that I can think of (other than making your own version of the image and updating that docker-entrypoint.sh, as you suggested) is to provide your own cassandra.yaml in a bind mount. For example:
$ docker run -v /path/to/config:/etc/cassandra cassandra
Where /path/to/config is a directory containing your cassandra.yaml. Make any adjustments you like to the copy of cassandra.yaml on the host, including your auth changes. To ensure consistency in the configuration, be sure your copy of cassandra.yaml matches the version embedded in the docker image.
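For the auth part itself, the usual change in cassandra.yaml is to switch the authenticator (and optionally the authorizer) away from the defaults:
authenticator: PasswordAuthenticator
authorizer: CassandraAuthorizer
After the container restarts you can log in with the default cassandra/cassandra superuser and create your own roles.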
