How to use Amazon EBS for persistent storage in ArangoDB?

We are working on setting up an ArangoDB cluster on DC/OS. For data storage we've mounted a 100 GB EBS volume at /dcos/volume1 as mentioned, and the disk space is reflected in the DC/OS dashboard.
However, the ArangoDB DB server process gets launched with the following docker command:
I0110 13:04:25.273998 21115 docker.cpp:815] Running docker -H unix:///var/run/docker.sock run --cpu-shares 1024 --memory 16106127360 -e CLUSTER_ROLE=primary -e CLUSTER_ID=DBServer005 -e ADDITIONAL_ARGS= -e AGENCY_ENDPOINTS=tcp://172.22.0.75:1025 tcp://172.22.0.198:1025 tcp://172.22.0.171:1025 -e HOST=172.22.0.128 -e PORT0=1025 -e LIBPROCESS_IP=172.22.0.128 -e MESOS_SANDBOX=/mnt/mesos/sandbox -e MESOS_CONTAINER_NAME=mesos-085398be-0bc9-4b23-9a13-4a7379530ea9-S3.1db76e46-20c1-48ba-ad2f-978874118930 -v /var/lib/mesos/slave/slaves/085398be-0bc9-4b23-9a13-4a7379530ea9-S3/frameworks/085398be-0bc9-4b23-9a13-4a7379530ea9-0045/executors/6f9b1b94-5dee-4454-bbe8-48f82e65d4d3/runs/1db76e46-20c1-48ba-ad2f-978874118930/myPersistentVolume:/var/lib/arangodb3:rw -v /var/lib/mesos/slave/slaves/085398be-0bc9-4b23-9a13-4a7379530ea9-S3/frameworks/085398be-0bc9-4b23-9a13-4a7379530ea9-0045/executors/6f9b1b94-5dee-4454-bbe8-48f82e65d4d3/runs/1db76e46-20c1-48ba-ad2f-978874118930:/mnt/mesos/sandbox --net bridge -p 1025:8529/tcp --name mesos-085398be-0bc9-4b23-9a13-4a7379530ea9-S3.1db76e46-20c1-48ba-ad2f-978874118930 arangodb/arangodb-mesos:3.1
Does this mean that the persistent data is stored under /var/lib/ on the host? If so, is there any option to make the data get stored on a different volume, such as /dcos/volume1?
In the ArangoDB DC/OS install config, I couldn't find any options for attaching persistent volumes.
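Reading the -v flags in that log line: the container path /var/lib/arangodb3 (ArangoDB's data directory) is bind-mounted from the Mesos persistent volume myPersistentVolume under the agent work directory /var/lib/mesos/slave/..., so the data lands on whatever host disk backs that path rather than on /dcos/volume1. A quick, non-destructive way to confirm where a running container's data actually lives (the container ID here is a placeholder):
docker inspect --format '{{ json .Mounts }}' <container-id>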

Related

Starting Cassandra with Docker gives an error

Starting Cassandra with the command below gives the error:
docker: invalid reference format: repository name must be lowercase. See 'docker run --help'.
docker run -e DS_LICENSE=accept --memory 4g -e CASSANDRA_ENDPOINT_SNITCH="GossipingPropertyFileSnitch" CASSANDRA_DC="testDC" CASSANDRA_RACK="testRack" DS_LICENSE=accept --memory 4g --name cassandra -d datastax/dse-server -g -s -k -v /Users/test/cassandranode01:/var/lib/cassandra
(screenshot of the repository omitted)
Please assist me on this.
Could you try using a specific version tag of DSE, like 6.7.8? The latest tag is not working anymore. Note that the "invalid reference format" error itself comes from the missing -e flags: Docker treats the first token that is not an option, CASSANDRA_DC="testDC", as the image name, and repository names must be lowercase.
Like this, with -e added before each environment variable, the duplicated DS_LICENSE/--memory options removed, and the -v mount moved in front of the image name (otherwise Docker passes it to dse-server as an argument):
docker run -e DS_LICENSE=accept -e CASSANDRA_ENDPOINT_SNITCH="GossipingPropertyFileSnitch" -e CASSANDRA_DC="testDC" -e CASSANDRA_RACK="testRack" --memory 4g --name cassandra -v /Users/test/cassandranode01:/var/lib/cassandra -d datastax/dse-server:6.7.8 -g -s -k

Can I run mayactl from my nodes (OpenEBS)?

I don't have access to the openebs namespace or the maya-apiserver. Can I run mayactl on my nodes to get the same information? If so, how does mayactl know which PVCs/PVs I have access to? How does it protect other volumes from accidental deletion via mayactl volume delete?
You can run it from the maya-apiserver pod. Exec into the pod from the master node with the command below:
kubectl exec -it <pod name> -n openebs -- bash
Once you are inside the pod, you can run the required mayactl command.
Alternatively, you can run the command directly in the following form:
kubectl exec -it <pod name> -n openebs -- <required mayactl command>
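For example, to list the volumes the API server manages (the pod name is a placeholder, and this assumes the usual mayactl volume subcommands, such as the mayactl volume delete mentioned above):
kubectl exec -it <maya-apiserver pod> -n openebs -- mayactl volume list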

How to set a size limit on a folder mounted in a Docker container

I am running a script in a Docker container which creates some files and logs information in them.
Command is
docker run -t --name a6f97966d3a2552283df -v "/temp/a6f97966d3a2552283df":/usercode ubuntu_16_04:firsttry /usercode/script.sh
I want to limit the size of the folder I have mounted with this command, because the log size may grow very large.
One possible solution is to mount a virtual filesystem into the container using the following commands:
mkdir -p /quota
mkdir -p /var/virtual_disks
touch /var/virtual_disks/directory_with_size_limit.ext3
dd if=/dev/zero of=/var/virtual_disks/directory_with_size_limit.ext3 count=51200
mkfs.ext3 /var/virtual_disks/directory_with_size_limit.ext3
mount -o loop,rw,usrquota,grpquota /var/virtual_disks/directory_with_size_limit.ext3 /quota
It's working fine on my local system but not in the container.
Is there any other way of achieving this?
It is now working fine. The host path given to -v should be the mount point of the virtual filesystem itself.
So the modified command is:
docker run -t --name a6f97966d3a2552283df -v "/quota":/usercode ubuntu_16_04:firsttry /usercode/script.sh
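Putting it together, a minimal end-to-end sketch of the loop-device approach described above (paths and sizes taken from the question; dd with its default 512-byte block size and count=51200 yields a 25 MB backing file):
# create the mount point and a fixed-size backing file
mkdir -p /quota /var/virtual_disks
dd if=/dev/zero of=/var/virtual_disks/directory_with_size_limit.ext3 count=51200
# format the backing file as ext3 (-F skips the "not a block device" prompt)
mkfs.ext3 -F /var/virtual_disks/directory_with_size_limit.ext3
# mount it via a loop device; /quota is now a 25 MB filesystem
mount -o loop,rw,usrquota,grpquota /var/virtual_disks/directory_with_size_limit.ext3 /quota
# bind-mount the size-limited filesystem into the container
docker run -t --name a6f97966d3a2552283df -v "/quota":/usercode ubuntu_16_04:firsttry /usercode/script.sh
Once the filesystem fills up, writes inside /usercode fail with "No space left on device", which caps the log growth.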

Should using a temporary docker container remove a volume?

Running a docker container with the --rm option deletes a mounted named volume after exit. I'm wondering whether this is intended behavior?
Here is the exact sequence.
ole@MKI:~$ docker volume create --name a-volume-test
ole@MKI:~$ sudo ls /var/lib/docker/volumes/ | grep a-
a-volume-test
ole@MKI:~$ docker run --rm -it -v a-volume-test:/data alpine /bin/ash
/ # touch /data/test
/ # ls /data
test
/ # exit
ole@MKI:~$ sudo ls /var/lib/docker/volumes/ | grep a-
After I exit, the volume is gone.
This was a bug that will be fixed in Docker 1.11: https://github.com/docker/docker/pull/19568
According to the docs, no, that is not intended; because you are mounting a named volume, it should not be deleted. Maybe submit a GitHub issue?
Note: When you set the --rm flag, Docker also removes the volumes associated with the container when the container is removed. This is similar to running docker rm -v my-container. Only volumes that are specified without a name are removed. For example, with docker run --rm -v /foo -v awesome:/bar busybox top, the volume for /foo will be removed, but the volume for /bar will not. Volumes inherited via --volumes-from will be removed with the same logic: if the original volume was specified with a name, it will not be removed.
Source: Docker Docs
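On a Docker version that includes that fix, the documented behavior is easy to verify (the names here are arbitrary):
docker volume create named-vol
docker run --rm -v named-vol:/data -v /scratch alpine true
docker volume ls
The named volume named-vol should still be listed after the container exits; only the anonymous volume created for /scratch is removed.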

How to run the official mongodb docker image storing data to a separate host system drive?

I want to run the official MongoDB docker image while storing data on a separate host system drive with an XFS filesystem, mounted as /data, on Red Hat Linux. I want to achieve that with:
docker run --name mongodb -p 27017:27017 -v /data:/data/db -d mongo --master --storageEngine wiredTiger
It is my understanding that this should create a new volume mounting the host system's /data as /data/db inside the docker container, thus redirecting all access to /data/db to the host's /data.
Doing a docker inspect --type=container ce03...73 shows:
"Mounts": [
{
Source: "/data",
Destination: "/data/db",
Mode: "",
RW: true
},
...
However, doing df shows that /dev/xvda1, mounted as /, is the only filesystem growing, while /dev/xvdb, mounted as /data, remains empty. This is confirmed by du /data and by ls -a /data, both showing nothing.
Now if I do sudo du -L / | grep -E "^[0-9]{6,20}", this shows all directories on my system with a relevant byte size. Of these, the only ones growing are /proc/3071/task/3071/*. Doing top -p 3071 shows mongod. Thus, instead of saving anything to /data, mongod has created a virtual volume and is storing data there. Doing sudo du -L / | grep -E "^[0-9]{6,20}" inside the container confirms that only /data/db is growing. Doing lsblk inside the container, however, does not show anything mounted as /data/db; only two docker loops are shown mounted as /. Thus /data/db is just a normal directory on the / docker image.
Why? How can I make it store data on /data?
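A few diagnostics that can narrow down where the writes are landing (the container name mongodb comes from the run command above; the rest is standard Docker/Linux tooling):
docker inspect --format '{{ json .Mounts }}' mongodb
docker exec mongodb ls -la /data/db
findmnt /data
One common cause of these symptoms is mount ordering: a bind mount is resolved when the container starts, so if /dev/xvdb was mounted at /data after the container was created, the container keeps writing to the old /data directory on the root filesystem, which would explain why /dev/xvda1 grows while /dev/xvdb stays empty.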
