How do I use an EBS volume with an ECS container (Linux)?

I created an EBS volume, attached it to my container instance, and mounted it. In the task definition's volumes, I set the volume Source Path to the mounted directory.
The container data is not being created in the mounted directory; all other directories outside the mounted EBS volume work properly.
The purpose is to keep the data outside the container so that another volume can back it up.
Is there a way to use this attached volume with my container, or is there a better way to work with volumes and backups?
EDIT: I tested this with a random Docker image, running it with the volume specified, and hit the same problem. I managed to make it work by restarting the Docker service, but I'm still looking for a solution that doesn't require restarting Docker.
Inspecting a container whose volume directory is the mounted EBS volume:
"HostConfig": {
"Binds": [
"/mnt/data:/data"
],
...
"Mounts": [
{
"Source": "/mnt/data",
"Destination": "/data",
"Mode": "",
"RW": true,
"Propagation": "rprivate"
}
],
The directory shows:
$ ls /mnt/data/
lost+found
Inspecting a container whose volume directory is not the mounted EBS volume:
"HostConfig": {
"Binds": [
"/home/ec2-user/data:/data"
],
...
"Mounts": [
{
"Source": "/home/ec2-user/data",
"Destination": "/data",
"Mode": "",
"RW": true,
"Propagation": "rprivate"
}
]
The directory shows:
$ ls /home/ec2-user/data
databases dbms

It sounds like what you want to do is make use of AWS EC2 Launch Configurations. Using a Launch Configuration, you can specify that EBS volumes be created and attached to your instance at launch. This happens before the Docker daemon and the subsequent tasks are started.
As part of your Launch Configuration, you'll also want to update the User data under Configure details with something along the lines of:
#!/bin/bash
mkdir /data
mkfs -t ext4 /dev/xvdb                                         # format the freshly attached EBS volume
mount /dev/xvdb /data
echo '/dev/xvdb /data ext4 defaults,nofail 0 2' >> /etc/fstab  # remount automatically after a reboot
Then, as long as your container is set up to access /data on the host, everything will work on the first go.
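For completeness, here is a minimal sketch of the task definition wiring for such a host path (the volume and container names are illustrative assumptions, not from the question):
{
    "volumes": [
        {
            "name": "ebs-data",
            "host": { "sourcePath": "/data" }
        }
    ],
    "containerDefinitions": [
        {
            "name": "app",
            "mountPoints": [
                {
                    "sourceVolume": "ebs-data",
                    "containerPath": "/data"
                }
            ]
        }
    ]
}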
Bonus: If you're using ECS clusters, I presume you're already making use of Launch Configurations to get your instances joined to the cluster. If not, you can add new instances automatically as well, using something like:
#!/bin/bash
docker pull amazon/amazon-ecs-agent
docker run --name ecs-agent \
    --detach=true \
    --restart=on-failure:10 \
    --volume=/var/run/docker.sock:/var/run/docker.sock \
    --volume=/var/log/ecs/:/log \
    --volume=/var/lib/ecs/data:/data \
    --volume=/sys/fs/cgroup:/sys/fs/cgroup:ro \
    --volume=/var/run/docker/execdriver/native:/var/lib/docker/execdriver/native:ro \
    --publish=127.0.0.1:51678:51678 \
    --env=ECS_LOGFILE=/log/ecs-agent.log \
    --env=ECS_AVAILABLE_LOGGING_DRIVERS=[\"json-file\",\"syslog\",\"gelf\"] \
    --env=ECS_LOGLEVEL=info \
    --env=ECS_DATADIR=/data \
    --env=ECS_CLUSTER=your-cluster-here \
    amazon/amazon-ecs-agent:latest
Specifically in that bit, you'll want to edit this part: --env=ECS_CLUSTER=your-cluster-here
Hope this helps.

The current documentation on Using Data Volumes in Tasks seems to address this problem:
Prior to the release of the Amazon ECS-optimized AMI version 2017.03.a, only file systems that were available when the Docker daemon was started are available to Docker containers. You can use the latest Amazon ECS-optimized AMI to avoid this limitation, or you can upgrade the docker package to the latest version and restart Docker.
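If you'd rather patch an already-running instance than rebuild it, the workaround implied there is roughly the following (commands assume the Amazon Linux ECS-optimized AMI; the agent restart syntax varies by AMI generation):
sudo service docker restart   # let the daemon pick up the newly mounted filesystem
sudo start ecs                # restart the ECS agent (upstart; use systemctl on newer AMIs)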

Related

Is it possible to copy data from a bind mount destination to the local source folder?

I have a Jenkins server running inside a Docker container. It has a mount section like this:
"Mounts": [
{
"Type": "bind",
"Source": "/mnt/data",
"Destination": "/var/jenkins_home",
"Mode": "",
"RW": true,
"Propagation": "rprivate"
}
]
I can see all the Jenkins job configuration in /var/jenkins_home, but even though the mount has source /mnt/data, I don't see the data there. It seems that the local source folder has been formatted. Now I want to get the data from /var/jenkins_home into the source directory /mnt/data.
Could you please explain the commands to do this, if it's possible?
I don't know why that bind mount does not work; like you, I would expect /mnt/data to be bound to/from the container's /var/jenkins_home. If the host directory was empty, though, you would expect the container's mount to be empty initially as well.
You can use docker cp to copy files and folders between the host and a Docker container.
So for example docker cp jenkinscontainername:/var/jenkins_home ./local_dir.
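A minimal recovery sequence along those lines might be (the container name jenkins and the staging path /tmp/jenkins_home are assumptions):
docker cp jenkins:/var/jenkins_home /tmp/jenkins_home   # copy the data out while the container can still see it
docker stop jenkins                                     # stop Jenkins before touching the bind-mount source
sudo cp -a /tmp/jenkins_home/. /mnt/data/               # seed the host-side source directory
docker start jenkins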

Create two containers with the same volume name

I am learning Docker. When I run two MySQL containers with -v options whose volume names are the same, only one of those two volumes is created on the host file system. Does the second one override the first, or does the system keep the first one? I don't see any command output warning about a volume name conflict. Here are my commands:
docker container run -d --name mysql_1 -e MYSQL_ALLOW_EMPTY_PASSWORD=True -v mysql_db:/var/lib/mysql mysql
docker container run -d --name mysql_2 -e MYSQL_ALLOW_EMPTY_PASSWORD=True -v mysql_db:/var/lib/mysql mysql
I checked with the command docker volume inspect [name], and it seems the second volume overrides the first one:
[
    {
        "CreatedAt": "2020-07-24T09:34:05Z",
        "Driver": "local",
        "Labels": null,
        "Mountpoint": "/var/lib/docker/volumes/mysql_db78/_data",
        "Name": "mysql_db78",
        "Options": null,
        "Scope": "local"
    }
]
The CreatedAt is the time at which I typed the last docker container run -v .. command. But it is strange that Docker didn't notify me about a volume name conflict.
You're allowed to mount the same volume into different containers. Files read and written by one can be read and written by the other. There's no "conflict" here.
In both docker run commands you're telling Docker to mount a volume named mysql_db onto the path /var/lib/mysql. In the first command, Docker automatically creates the named volume, as though you had run docker volume create mysql_db, since it doesn't exist yet. The second docker run command then reuses that same volume.
(Operationally, you can't have multiple MySQL servers running off the same data store, so you should see a startup-time error referring to a lock file in the mysql_2 container. At a design level, try to avoid file sharing and prefer cross-container API calls instead, since coordinating file sharing can be tricky and it doesn't scale well to more advanced environments like Kubernetes.)
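A quick way to watch the reuse behavior without MySQL's lock-file issue is a throwaway image (the volume name shared_db is just an example):
docker run --rm -v shared_db:/data alpine sh -c 'echo hello > /data/file'   # first run implicitly creates the volume
docker run --rm -v shared_db:/data alpine cat /data/file                    # second run reuses it and prints "hello"
docker volume ls --filter name=shared_db                                    # exactly one volume, no conflict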

How to limit memory for edgeAgent on an Edge device

I have found out from the Azure team that the memory limit for Edge modules, including edgeHub, can be controlled by specifying createOptions -> HostConfig -> Memory. How do I control the memory limit for the edgeAgent Docker container itself, given that it is edgeAgent that creates the module containers? Is this documented? Currently it shows as 1.88GB on a 2GB VM.
Below is an extract from docker stats
CONTAINER ID   NAME        CPU %   MEM USAGE / LIMIT     MEM %   NET I/O           BLOCK I/O        PIDS
fd66aaa4dbe1   edgeAgent   0.01%   40.59MiB / 1.885GiB   2.10%   2.18MB / 1.13MB   55.6MB / 705kB   15
Setting the memory limit is not specific to the edgeHub module; it works the same for any other Docker module used with IoT Edge. You can add the Memory setting to the HostConfig section in the createOptions of any module.
For the edgeAgent, the deployment setting would look like this (536870912 bytes = 512 MB):
"systemModules": {
"edgeAgent": {
"type": "docker",
"settings": {
"image": "mcr.microsoft.com/azureiotedge-agent:1.0",
"createOptions": "{\"HostConfig\":{\"Memory\":536870912}}"
}
},
"edgeHub": {
...
}
}
With this change, do a new deployment.
For the changes to take effect on your machine, you have to remove the edgeAgent image so that a new container is created according to your changed deployment rules.
You can do so with the following command:
sudo docker rmi mcr.microsoft.com/azureiotedge-agent:1.0 -f
After that, restart the IoT Edge daemon with:
sudo systemctl restart iotedge
After that, the memory of the edgeAgent module will be limited.
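You can confirm that the limit took effect with another look at docker stats, e.g.:
sudo docker stats --no-stream edgeAgent   # MEM USAGE / LIMIT should now show a 512MiB cap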
Note:
If you want to limit memory on a Raspberry Pi, be aware that memory limit support is turned off by default. You can find a HowTo for enabling it here: https://blog.raveland.org/post/docker_raspian/
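For reference, that HowTo essentially comes down to enabling the memory cgroup via kernel boot parameters; this is my summary of the usual Raspbian procedure, so treat it as an assumption and check the linked post:
# append to the single line in /boot/cmdline.txt, then reboot
cgroup_enable=memory cgroup_memory=1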

Security profiles in Docker (docker build --security-opt)

I'm trying to build a Docker image based on centos:7 that restricts which system calls any user (including root) can execute inside the container. My intention is to build a Docker image with the security profile I need and then use that as my base image for other application images, thereby inheriting the security profile from the base image. Is this doable? Am I missing something?
Here is a sample security profile I'm testing:
{
    "defaultAction": "SCMP_ACT_ALLOW",
    "syscalls": [
        {
            "name": "mkdir",
            "action": "SCMP_ACT_ERRNO"
        },
        {
            "name": "chown",
            "action": "SCMP_ACT_ERRNO"
        }
    ]
}
When I run:
docker build -t test . --security-opt seccomp:policy.json
It throws an error:
Error response from daemon: The daemon on this platform does not support setting security options on build
Thoughts on how to get past this or other approaches I could use?
From GitHub:
"The Docker engine does not support the parameter --security-opt seccomp= when executing docker build."
@cason: you can supply a custom default profile to the daemon:
--seccomp-profile /path/to/profile.json
https://github.com/moby/moby/issues/34454#issuecomment-321135510
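In other words, you can't scope a seccomp profile to docker build, but you can make your policy the daemon-wide default. A minimal sketch, assuming the default daemon config location /etc/docker/daemon.json and your policy stored at /etc/docker/policy.json:
# point the daemon at the custom default seccomp profile and restart it
sudo tee /etc/docker/daemon.json <<'EOF'
{
    "seccomp-profile": "/etc/docker/policy.json"
}
EOF
sudo systemctl restart docker
Note that this changes the default for every container on the host, not just your builds.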

How can I run a Docker container in AWS Elastic Beanstalk with non-default run parameters?

I have a Docker container that runs great on my local development machine. I would like to move it to AWS Elastic Beanstalk, but I am running into a bit of trouble.
I am trying to mount an S3 bucket to my container by using s3fs. I have the Dockerfile:
FROM tomcat:7.0
MAINTAINER me@example.com
RUN apt-get update
RUN DEBIAN_FRONTEND=noninteractive apt-get install -y build-essential libfuse-dev libcurl4-openssl-dev libxml++2.6-dev libssl-dev mime-support automake libtool wget tar
# Add the java source
ADD . /path/to/tomcat/webapps/
ADD run_docker.sh /root/run_docker.sh
WORKDIR $CATALINA_HOME
EXPOSE 8080
CMD ["/root/run_docker.sh"]
And I install s3fs, mount an S3 bucket, and run the Tomcat server after the image has been created, by running run_docker.sh:
#!/bin/bash
# run_docker.sh: build s3fs from source, mount the bucket, then start Tomcat
wget https://github.com/s3fs-fuse/s3fs-fuse/archive/master.zip -O /usr/src/master.zip;
cd /usr/src/;
unzip /usr/src/master.zip;
cd /usr/src/s3fs-fuse-master;
autoreconf --install;
CPPFLAGS=-I/usr/include/libxml2/ /usr/src/s3fs-fuse-master/configure;
make;
make install;
cd $CATALINA_HOME;
mkdir /opt/s3-files;
s3fs my-bucket /opt/s3-files;   # this is the step that needs FUSE privileges
catalina.sh run
When I build and run this Docker container using the command:
docker run --cap-add mknod --cap-add sys_admin --device=/dev/fuse -p 80:8080 -d username/mycontainer:latest
it works well. Yet when I remove --cap-add mknod --cap-add sys_admin --device=/dev/fuse, s3fs fails to mount my S3 bucket.
Now, I would like to run this on AWS Elastic Beanstalk, and when I deploy the container (and run run_docker.sh), all the steps execute fine, except the step s3fs my-bucket /opt/s3-files in run_docker.sh fails to mount the bucket.
Presumably, this is because however Elastic Beanstalk runs a Docker container, it doesn't add extra flags like --cap-add mknod --cap-add sys_admin --device=/dev/fuse.
My Dockerrun.aws.json file looks like:
{
    "AWSEBDockerrunVersion": "1",
    "Image": {
        "Name": "tomcat:7.0"
    },
    "Ports": [
        {
            "ContainerPort": "8080"
        }
    ]
}
Is it possible to add additional docker run flags to an AWS EB Docker deployment?
An alternative option is to find another way to mount an S3 bucket, but I suspect I'd run into similar permission errors regardless. Has anyone seen any way to accomplish this?
UPDATE:
For people trying to use @Egor's answer below: it works when the EB configuration is set to use v1.4.0 running Docker 1.6.0. Anything past v1.4.0 fails. So to make it work, build your environment as normal (which should give you a failed build), then rebuild it with a v1.4.0 running Docker 1.6.0 configuration. That should do it!
If you are using the latest version of the AWS Docker stack (Docker 1.7.1, for example), you'll need to modify the above answer slightly. Try this:
commands:
  00001_add_privileged:
    cwd: /tmp
    command: 'sed -i "s/docker run -d/docker run --privileged -d/" /opt/elasticbeanstalk/hooks/appdeploy/enact/00run.sh'
Notice the changed location and name of the run script.
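A quick sanity check that the hook was actually patched (path taken from the command above):
grep -n "docker run --privileged -d" /opt/elasticbeanstalk/hooks/appdeploy/enact/00run.sh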
Add the file .ebextensions/01-commands.config:
container_commands:
  00001-docker-privileged:
    command: 'sed -i "s/docker run -d/docker run --privileged -d/" /opt/elasticbeanstalk/hooks/appdeploy/pre/04run.sh'
I am also using s3fs.
Thanks elijahchancey for the answer, it was very helpful. I would just like to add a small comment:
Elastic Beanstalk now uses ECS tasks to deploy and manage the application cluster. There is a very important paragraph in the Multicontainer Docker Configuration docs (which I originally missed):
The following examples show a subset of parameters that are commonly used. More optional parameters are available. For more information on the task definition format and a full list of task definition parameters, see Amazon ECS Task Definitions in the Amazon ECS Developer Guide.
So the document is not a complete reference; it just shows typical entries, and you are supposed to look up the rest elsewhere. This has quite a major impact, because now (2018) you can specify many more options and you don't need to hack ebextensions any more. The only thing you need to do is use the task definition parameters in the containerDefinitions of your multicontainer Dockerrun.aws.json.
This is not mentioned for single-container Docker environments, but one can try and verify...
Example of a multicontainer Dockerrun.aws.json with an extra capability:
{
    "AWSEBDockerrunVersion": 2,
    "containerDefinitions": [
        {
            "name": "service1",
            "image": "myapp/service1:latest",
            "essential": true,
            "memoryReservation": 128,
            "portMappings": [
                {
                    "hostPort": 8080,
                    "containerPort": 8080
                }
            ],
            "linuxParameters": {
                "capabilities": {
                    "add": [
                        "SYS_PTRACE"
                    ]
                }
            }
        }
    ]
}
You can now add capabilities using the task definition. Here are the docs:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html
This is specifically what you would add to your task definition:
"linuxParameters": {
"capabilities": {
"add": [
"SYS_PTRACE"
]
}
},
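If you want to confirm on the instance that the capability was applied, one hedged check (the container ID is whatever ECS assigned):
docker inspect --format '{{.HostConfig.CapAdd}}' <container-id>   # should list SYS_PTRACE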
