Docker - Cassandra with Authentication

I want to set up a Cassandra container with authentication configured on Docker. Currently I'm using the official Cassandra Docker image, but it doesn't seem to provide an option (via environment variables) for enabling authentication.
One possibility would be to set up my own repository, clone the Cassandra Docker GitHub repo, and modify the docker-entrypoint.sh so it also accepts the auth-related options, but this seems a bit too complex for my quite simple task. Does anybody know of a simpler solution or have any hints?

The only option that I can think of (other than making your own version of the image and updating that docker-entrypoint.sh, as you suggested) is to provide your own cassandra.yaml in a bind mount. For example:
$ docker run -v /path/to/config:/etc/cassandra cassandra
Where /path/to/config is a directory containing your cassandra.yaml. Make any adjustments you like to the copy of cassandra.yaml on the host, including your auth changes. To ensure consistency in the configuration, be sure your copy of cassandra.yaml matches the version embedded in the docker image.
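A minimal sketch of that workflow (the host paths are illustrative; authenticator and authorizer are the standard cassandra.yaml keys):
# copy the image's whole config directory so host and image versions match
$ docker create --name cass-tmp cassandra
$ docker cp cass-tmp:/etc/cassandra /path/to/config
$ docker rm cass-tmp
# switch on password authentication (and, optionally, authorization)
$ sed -i 's/^authenticator:.*/authenticator: PasswordAuthenticator/' /path/to/config/cassandra.yaml
$ sed -i 's/^authorizer:.*/authorizer: CassandraAuthorizer/' /path/to/config/cassandra.yaml
# run with the edited config bind-mounted over /etc/cassandra
$ docker run --name auth-cassandra -d -v /path/to/config:/etc/cassandra cassandra
Once PasswordAuthenticator is active, you can log in with the default superuser cassandra/cassandra and create your own roles.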

Related

How do I create a custom Docker image?

I have to deploy my application software, which is a Linux-based package (.bin) file, on a VM instance. As per the system requirements, it needs a minimum of 8 vCPUs and 32 GB of RAM.
Now, I was wondering if it is possible to deploy this software over multiple containers that share the CPU and RAM load in a Kubernetes cluster, rather than installing the software on a single VM instance.
Is that possible?
Yes, it's possible to achieve that.
You can start by using Docker Compose to build your custom Docker images and then bring your applications up quickly.
First, I'll show you my GitHub docker-compose repo. You can inspect the folders; they are separated by application or server, so one docker-compose.yml builds each app, and you only have to run docker-compose up -d.
If you need to create a custom image with Docker, use the following command: docker build -t <user_docker>/<image_name> <path_of_files>
<user_docker> = your Docker Hub username
<image_name> = the image name that you choose
<path_of_files> = some local path; if you need to build from the current folder, use . (dot)
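As a hedged, minimal illustration (the image and file names here are made up):
# a throwaway Dockerfile that serves static files with nginx
$ cat > Dockerfile <<'EOF'
FROM nginx:alpine
COPY site/ /usr/share/nginx/html/
EOF
# build from the current folder (the trailing dot) and tag the result
$ docker build -t myuser/mysite:1.0 .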
So, after that, you can upload the image to Docker Hub using the following commands.
You must log in with your credentials:
docker login
You can check your images using the following command
docker images
Upload the image to the Docker Hub registry:
docker push <user_docker>/<image_name>
Once the image is uploaded, you can use it in different projects; make sure to keep the image lightweight and useful.
Second, I'll show you a similar repo, but this one has a Kubernetes configuration in the folder called k8s. This configuration was made for Google Cloud, but I think you can analyze it and learn how to get started on your own project.
The Nginx service was replaced by an ingress service (the ingress-service.yml file), and an HTTPS certificate was added (the certificate.yml and issuer.yml files).
If you need to dockerize databases, make sure the database is lightweight and create a persistent volume using a PersistentVolumeClaim (the database-persistent-volume-claim.yml file); if you store larger amounts of data, you should use a dedicated database server or a managed database service in the cloud.
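For reference, a minimal PersistentVolumeClaim sketch, assuming the cluster has a default StorageClass (the name matches the file mentioned above; the size is illustrative):
$ cat <<'EOF' | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: database-persistent-volume-claim
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
EOF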
I hope this information will be useful to you.
There are two ways to achieve what you want to do. The first is to write a Dockerfile and build the image from it. More information about how to write a Dockerfile can be found here. Apart from that, you can create a container from a base image, install all the software and packages inside it, and commit the container as a new image. Then you can upload the image to a Docker image repository like Docker Registry or Amazon ECR.
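A rough sketch of that second approach, with made-up image and container names (docker commit snapshots a container's filesystem as a new image):
# start a container from a base image and install your software inside it
$ docker run -it --name builder ubuntu:22.04 bash
#   ...inside the container: install your packages, then exit...
# snapshot the container as a reusable image and push it to a registry
$ docker commit builder myuser/custom-app:1.0
$ docker push myuser/custom-app:1.0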

Using existing Ansible roles to create a custom Docker image

I currently use Ansible to manage and deploy a fleet of servers.
I wish to start using Docker for some applications and would like to build Docker images using the same scripts we use to configure non-Dockerized hosts.
For example, we have an Ansible role that builds Nginx with third-party modules; I would like to use the same role to build a Docker image with the custom Nginx.
Any ideas how I would get this done?
There is the "Ansible Container" project, https://www.ansible.com/integrations/containers/ansible-container. That page also points to the GitHub repo.
It is not clear how well maintained it is, but their reasoning and approach make sense.
Consider that you might have some adjustments to do regarding two aspects:
a container should do only one thing (microservice)
how to pass configuration to the container at runtime (Docker has some guidelines, such as using environment variables where possible or mounting a volume with the configuration files; see the sketch after this list)
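For instance, a generic sketch of both options on the Docker CLI (all names are placeholders):
# pass configuration as environment variables...
$ docker run -e APP_ENV=production -e DB_HOST=db.internal myimage
# ...or mount a host directory containing the configuration files (read-only)
$ docker run -v /srv/app/config:/etc/app:ro myimage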
That's a perfect example of where the docker-systemctl-replacement script can be used.
It was developed to allow Ansible scripts to target both virtual machines and Docker containers, back when distros were switching to systemd, which is hard to enable inside containers. When /usr/bin/systemctl is overwritten with the script, the Docker container looks enough like a regular host to Ansible that all the old scripts continue to run, installing rpm/deb packages and starting and enabling 'service:' entries.
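As an illustrative sketch (role, playbook, and image names are hypothetical), one common pattern is to run Ansible against a plain container via its docker connection plugin and then commit the result:
# start a long-running container to act as the "host"
$ docker run -d --name nginx-build centos:7 sleep infinity
# point an inventory at it using Ansible's docker connection plugin
$ cat > inventory.ini <<'EOF'
[containers]
nginx-build ansible_connection=docker
EOF
# apply the same role you use on real servers, then snapshot the container
$ ansible-playbook -i inventory.ini site.yml   # site.yml applies your custom-nginx role
$ docker commit nginx-build myuser/nginx-custom:1.0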

Heketi operations not reflecting in GlusterFS server

I started two GlusterFS servers and am able to create and mount volumes on both.
I have built Heketi version 5.0.1 from the source repository.
This is how I started the Heketi server after the build:
cd $GOPATH/src/github.com/heketi/heketi/
cp etc/heketi.json heketi.json
./heketi --config=heketi.json
The server started running on port 8080.
Now I am using the Heketi client to interact with the Heketi server as follows:
export HEKETI_CLI_SERVER=http://localhost:8080
cd $GOPATH/src/github.com/heketi/heketi/client/cli/go
I added a topology.json with the GlusterFS server details and am able to run the following commands:
./heketi-cli topology load --json=./topology.json
./heketi-cli volume create --size=1 --replica=2
./heketi-cli volume create --name=testvol --size=40 --durability="replicate" --replica=2
I am able to see the created volumes using the Heketi client and can also fetch all the details of a volume. But when I check the GlusterFS servers, I don't see any volumes created. Does anyone have an idea what's going on?
The example config file etc/heketi.json configures the mock executor which does not send commands to gluster at all and is used for testing purposes.
Please change this setting to ssh or kubernetes, whichever applies to you (I assume ssh). You also have to fill in the details of how to reach the Gluster server over SSH (the settings under sshexec).
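For reference, the relevant fragment of heketi.json would look roughly like this (key names follow the stock example config; the keyfile path is illustrative):
"glusterfs": {
  "executor": "ssh",
  "sshexec": {
    "keyfile": "/etc/heketi/heketi_key",
    "user": "root",
    "port": "22"
  },
  ...
}
After changing the executor, restart the Heketi server so the commands actually reach the GlusterFS nodes.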

Create an LXC container from a local CentOS template (tar.gz)

I have a custom CentOS where we made some changes, and I would like to create an LXC container from that template. Is it possible?
Possible? Yes.
Depending on the custom CentOS and whether you need to run it privileged or not, it can get really complicated. The basics are easy, though.
For LXC v1, you just need to create a config file for the container in /var/lib/lxc (or whatever your LXC path is), untar the rootfs somewhere LXC can access, set its path in the config, and try to run it.
It would be best to first create a normal CentOS container and look at where it keeps its config and rootfs, what file permissions it uses, how networking is configured, and so on.
Then you just try to mirror its config for your own image.
If it doesn't start right away, try starting it in the foreground (-F) and/or look in the log file.
For LXD it is mostly the same, but you need to create the image and put it in your local image store. It is mostly covered in https://osso.nl/blog/lxc-create-image-debian-squeeze/
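A hedged sketch for classic LXC (config key names vary between LXC versions, e.g. lxc.rootfs vs lxc.rootfs.path; paths and names are illustrative):
# unpack the custom rootfs where LXC expects it
$ mkdir -p /var/lib/lxc/centos-custom/rootfs
$ tar -xzf centos-custom.tar.gz -C /var/lib/lxc/centos-custom/rootfs
# write a minimal config, mirroring one taken from a working CentOS container
$ cat > /var/lib/lxc/centos-custom/config <<'EOF'
lxc.uts.name = centos-custom
lxc.rootfs.path = dir:/var/lib/lxc/centos-custom/rootfs
lxc.net.0.type = veth
lxc.net.0.link = lxcbr0
lxc.net.0.flags = up
EOF
# start in the foreground first so errors are visible
$ lxc-start -n centos-custom -F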

Persisting content across docker restart within an Azure Web App

I'm trying to run the Ghost Docker image on Azure within a Linux Docker container. This is incredibly easy to get up and running using a custom Docker image for Azure Web App on Linux and pointing it at the official Docker Hub image for Ghost.
Unfortunately, the official Docker image stores all data under the /var/lib/ghost path, which isn't persisted across restarts, so whenever the container is restarted all my content gets deleted and I end up back at a default Ghost install.
Azure won't let me execute arbitrary commands; you basically point it at a Docker image and it fires off from there, so I can't use the -v command-line parameter to map a volume. The Docker image does have an entrypoint configured, if that would help.
Any suggestions would be great. Thanks!
Set WEBSITES_ENABLE_APP_SERVICE_STORAGE to true in the app settings, and the /home directory will be mapped from your outer Kudu instance:
https://learn.microsoft.com/en-us/azure/app-service/containers/app-service-linux-faq
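For example, via the Azure CLI (the app and resource-group names are placeholders):
$ az webapp config appsettings set \
    --name <app-name> --resource-group <resource-group> \
    --settings WEBSITES_ENABLE_APP_SERVICE_STORAGE=true
With that setting enabled, anything the container writes under /home survives restarts, so the trick is to get Ghost's content directory onto that path.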
You have a few options:
You could mount a file share inside the Docker container by creating a custom image, then storing data there. See these docs for more details.
You could switch to the new Azure Container Instances, as they provide volume support (see the sketch after this list).
You could switch to the Azure Container Service. This requires an orchestrator, like Kubernetes, and might be more work than you're looking for, but it also offers more flexibility, provides better reliability and scaling, and other benefits.
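For illustration, mounting an Azure Files share into a container instance might look like this (all names and the key are placeholders):
$ az container create \
    --resource-group <resource-group> --name my-ghost \
    --image ghost:1-alpine --ports 2368 \
    --azure-file-volume-account-name <storage-account> \
    --azure-file-volume-account-key <storage-key> \
    --azure-file-volume-share-name ghost-content \
    --azure-file-volume-mount-path /var/lib/ghost/content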
You have to use a shared volume that maps the content of the container's /var/lib/ghost directory to a host directory. This way, your data will persist in the host directory.
To do that, use the following command.
$ docker run -d --name some-ghost -p 3001:2368 -v /path/to/ghost/blog:/var/lib/ghost/content ghost:1-alpine
I have never worked with Azure, so I'm not 100 percent sure the following applies, but if you interface with Docker via the CLI, there is a good chance it does.
Persistence in Docker is handled with volumes. They are basically mounts inside the container's file system tree that point to a directory on the outside. From your text I understand that you want to store the content of the container's /var/lib/ghost path in /home/site/wwwroot on the outside. To do this you would call Docker like this (note that -v takes the host path first and the container path second):
$ docker run [...] -v /home/site/wwwroot:/var/lib/ghost ghost
Unfortunately, setting the persistent storage (or bring-your-own storage) to a specific path is currently not supported in Azure Web Apps on Linux.
That said, you can play with SSH and try to configure Ghost to point to /home/ instead of /var/lib/.
I have prepared a Docker image here: https://hub.docker.com/r/elnably/ghost-on-azure that adds the SSH capability; the Dockerfile and code can be found here: https://github.com/ahmedelnably/ghost-on-azure/tree/master/1/alpine.
Try it out by configuring your web app to use elnably/ghost-on-azure:latest, browse to the site (to start the container), and then go to the SSH page at <your-app>.scm.azurewebsites.net. To learn more about SSH, check this link: https://aka.ms/linux-ssh.
