How do I grant the mlock syscall to a container invoked via "sudo rkt run" on CoreOS

Running my app as below:
sudo rkt run --insecure-options=image --interactive --net=host ./myapp.aci
I get the message:
Failed to lock memory: cannot allocate memory
After some digging, this seems to indicate that the container does not have the CAP_IPC_LOCK capability passed to it. I have dug into some of the documentation, but cannot find where I need to add configuration or any option to enable this. How do I do this?

ACIs can specify which caps they need in their manifest with an isolator of type os/linux/capabilities-retain-set.
To check if the manifest contains such an isolator, you can use actool:
$ actool cat-manifest --pretty-print ./myapp.aci
You might see the following:
"isolators": [
{
"name": "os/linux/capabilities-retain-set",
"value": {
"set": [
"CAP_IPC_LOCK"
]
}
}
]
To add CAP_IPC_LOCK, you can use:
$ actool patch-manifest --capability=CAP_IPC_LOCK --replace ./myapp.aci
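After patching, you can confirm the isolator landed in the manifest and re-run the image with the same command as before (a quick sanity check using the paths from above):
$ actool cat-manifest --pretty-print ./myapp.aci | grep -A 6 capabilities-retain-set
$ sudo rkt run --insecure-options=image --interactive --net=host ./myapp.aci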
It is currently not possible to add a capability directly on the rkt run command line. I filed an issue on GitHub for this feature request: coreos/rkt#2371

You can use acbuild to give your container the right capabilities.
If you're already using acbuild to make your ACI, just add this line to the build script:
echo '{ "set": ["CAP_IPC_LOCK"] }' | acbuild isolator add "os/linux/capabilities-retain-set" -
Or if you're not already using acbuild to make your ACI, you can modify an existing ACI by using the --modify flag. So the command would be:
echo '{ "set": ["CAP_IPC_LOCK"] }' | acbuild --modify path/to/your/app.aci isolator add "os/linux/capabilities-retain-set" -

Related

Docker - unable to run script

What I'm doing
I am using AWS Batch to run a Docker container for a large compute job. I have configured ECR/ECS successfully, to the best of my knowledge, but am having issues running the required commands for reasons that are beyond my level of understanding of Docker (newbie).
What I need to do is pass the below commands into my application and start my application to perform some heavy computing tasks; all commands listed below must be present.
The Issue(s)
The issue arises when I send the submit job to AWS Batch; this service pulls the image from ECR (Amazon Elastic Container Registry) and spins up a compute environment. The issue comes from when I try to run the command I pass in; below I will go through it.
"command": [
"mkdir -p logging",
"chmod 777 logging/",
"docker run -t -i -e my-application", # container name
"-e APIKEY",
"-e BASEURI",
"-e APIUSER",
"-v WORKSPACE /logging:/src/log",
"DOCKERIMAGE",
"python my_app.py",
"-t APP_USER",
"-e APP_ENVIRONMENT",
"-u APP_USERNAME",
"-p APP_PASSWORD",
"-i IN_PATH",
"-o OUT_PATH",
"-b tmp/"
]
The command above generates the following error(s)
container_linux.go:370: starting container process caused: exec: "mkdir -p log": executable file not found in $PATH
I tried to pass in a command to echo the env var $PATH but was unsuccessful in getting a response, and it resulted in a similar error.
I have successfully run "ls" and was able to see the directory contents of my application inside.
I am not, however, able to run any of the commands I have included in the command [] section. I have tried just running python and such in hopes of getting a more detailed error, but was unsuccessful.
Logic in plain English
Create a path called logging if it doesn't exist
set the permissions for logging
run the docker container and pass in the environment variables while doing so
Tell docker to run the python file my_app.py and pass in the expected runtime args
Execute and perform the required logic delegated in the python3 application
Questions
Why can I not create a directory here called "logging"? Where am I?
Am I running these properly as defined by AWS Batch, or by Docker?
What am I missing or where am I going wrong?
AWS Batch high-level doc
AWS Batch link specific to what I'm doing
Assuming that you're following the syntax described in the Container Properties section of the AWS docs, you have several problems with the syntax of your command directive.
First
The command directive can only run a single command. You can't mash together a bunch of commands as you're trying to do in your example. If you need to run multiple commands you would need to embed them as an argument to a shell. For example, something like:
command: ["/bin/sh", "-c", "mkdir -p logging; chmod 777 logging; ..."]
Second
You must properly tokenize your command lines -- that is, when you type mkdir -p logging at the command prompt, the shell splits this into three parts (or "tokens"): ['mkdir', '-p', 'logging']. You need to do the same thing when building up the list of arguments to command.
This is invalid:
command: ["mkdir -p logging"]
That would look for a command named mkdir -p logging, and of course no such command exists. It would properly be written as:
command: ["mkdir", "-p", "logging"]
Third
I'm not very familiar with the AWS Batch environment, but it's unlikely you can run a docker command inside a Docker container as you're trying to do. It's unclear why you're doing this, though: why not just configure your AWS Batch job with the appropriate image, environment variables, etc.?
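For reference, here is a sketch of how those docker run -e flags could move into the job definition itself, so the container only has to start your application. This example registers the definition via the AWS CLI; the variable names are taken from your command, while the image name and values are placeholders:
aws batch register-job-definition \
    --job-definition-name my-application \
    --type container \
    --container-properties '{
        "image": "<account>.dkr.ecr.<region>.amazonaws.com/my-application:latest",
        "vcpus": 4,
        "memory": 8192,
        "environment": [
            {"name": "APIKEY",  "value": "..."},
            {"name": "BASEURI", "value": "..."},
            {"name": "APIUSER", "value": "..."}
        ],
        "command": ["/bin/sh", "-c", "mkdir -p logging && chmod 777 logging && python my_app.py -i IN_PATH -o OUT_PATH -b tmp/"]
    }'
The environment block replaces the -e flags, and the single /bin/sh -c string replaces the docker run invocation entirely.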
Take a look at some of these example job definitions.

containerd error "failed to find user by uid" when creating ejbca docker container on azure

When I try to create an Azure container instance for EJBCA-ce I get an error and cannot see any logs.
I expect the deployment to succeed, but instead I get the following error:
Failed to start container my-azure-container-resource-name, Error response: to create containerd task: failed to create container e9e48a_________ffba97: guest RPC failure: failed to find user by uid: 10001: expected exactly 1 user matched '0': unknown
Some context:
I run the container on Azure Container Instances
I tried:
from an ARM template
from the Azure Portal
with a file share mounted
with database env variables
without any env variables
It runs fine locally using the same env variable (database configuration).
It used to run with the same configuration a couple of weeks ago.
Here are some logs I get when I attach the container group from az cli.
(count: 1) (last timestamp: 2020-11-03 16:04:32+00:00) pulling image "primekey/ejbca-ce:6.15.2.3"
(count: 1) (last timestamp: 2020-11-03 16:04:37+00:00) Successfully pulled image "primekey/ejbca-ce:6.15.2.3"
(count: 28) (last timestamp: 2020-11-03 16:27:52+00:00) Error: Failed to start container aci-pulsy-ccm-ejbca-snd, Error response: to create containerd task: failed to create container e9e48a06807fba124dc29633dab10f6229fdc5583a95eb2b79467fe7cdffba97: guest RPC failure: failed to find user by uid: 10001: expected exactly 1 user matched '0': unknown
An extract of the Dockerfile from Docker Hub
I suspect the issue might be related to the commands USER 0 and USER 10001 that appear several times in the Dockerfile.
COPY dir:89ead00b20d79e0110fefa4ac30a827722309baa7d7d74bf99910b35c665d200 in /
/bin/sh -c rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
CMD ["/bin/bash"]
USER 0
COPY dir:893e424bc63d1872ee580dfed4125a0bef1fa452b8ae89aa267d83063ce36025 in /opt/primekey
COPY dir:756f0fe274b13cf418a2e3222e3f6c2e676b174f747ac059a95711db0097f283 in /licenses
USER 10001
CMD ["/opt/primekey/wildfly-14.0.1.Final/bin/standalone.sh" "-b" "0.0.0.0"
MAINTAINER PrimeKey Solutions AB
ARG releaseTag
ARG releaseEdition
ARM template
{
  "type": "Microsoft.ContainerInstance/containerGroups",
  "apiVersion": "2019-12-01",
  "name": "[variables('ejbcaContainerGroupName')]",
  "location": "[parameters('location')]",
  "tags": "[variables('tags')]",
  "dependsOn": [
    "[resourceId('Microsoft.DBforMariaDB/servers', variables('ejbcaMariadbServerName'))]",
    "[resourceId('Microsoft.DBforMariaDB/servers/databases', variables('ejbcaMariadbServerName'), variables('ejbcaMariadbDatabaseName'))]"
  ],
  "properties": {
    "sku": "Standard",
    "containers": [
      {
        "name": "[variables('ejbcaContainerName')]",
        "properties": {
          "image": "primekey/ejbca-ce:6.15.2.3",
          "ports": [
            {
              "protocol": "TCP",
              "port": 443
            },
            {
              "protocol": "TCP",
              "port": 8443
            }
          ],
          "environmentVariables": [
            {
              "name": "DATABASE_USER",
              "value": "[concat(parameters('mariadbUser'), '@', variables('ejbcaMariadbServerName'))]"
            },
            {
              "name": "DATABASE_JDBC_URL",
              "value": "[variables('ejbcaEnvVariableJdbcUrl')]"
            },
            {
              "name": "DATABASE_PASSWORD",
              "secureValue": "[parameters('mariadbAdminPassword')]"
            }
          ],
          "resources": {
            "requests": {
              "memoryInGB": 1.5,
              "cpu": 2
            }
          },
          "volumeMounts": [
            {
              "name": "certificates",
              "mountPath": "/mnt/external/secrets"
            }
          ]
        }
      }
    ],
    "initContainers": [],
    "restartPolicy": "OnFailure",
    "ipAddress": {
      "ports": [
        {
          "protocol": "TCP",
          "port": 443
        },
        {
          "protocol": "TCP",
          "port": 8443
        }
      ],
      "type": "Public",
      "dnsNameLabel": "[parameters('ejbcaContainerGroupDNSLabel')]"
    },
    "osType": "Linux",
    "volumes": [
      {
        "name": "certificates",
        "azureFile": {
          "shareName": "[parameters('ejbcaCertsFileShareName')]",
          "storageAccountName": "[parameters('ejbcaStorageAccountName')]",
          "storageAccountKey": "[parameters('ejbcaStorageAccountKey')]"
        }
      }
    ]
  }
}
It runs fine on my local machine on Linux (Ubuntu 20.04)
docker run -it --rm -p 8080:8080 -p 8443:8443 -h localhost -e DATABASE_USER="mymaridbuser@my-db" -e DATABASE_JDBC_URL="jdbc:mariadb://my-azure-domain.mariadb.database.azure.com:3306/ejbca?useSSL=true" -e DATABASE_PASSWORD="my-pwd" primekey/ejbca-ce:6.15.2.3
In the EJBCA-ce container image, I think they are trying to provide a user other than root to run the EJBCA server. According to the Docker documentation:
The USER instruction sets the user name (or UID) and optionally the user group (or GID) to use when running the image and for any RUN, CMD and ENTRYPOINT instructions that follow it in the Dockerfile
In the Dockerfile they reference two users, root, corresponding to UID 0, and another one, with UID 10001.
Typically, in Linux and UNIX systems, UIDs are organized in different ranges: it is largely dependent on the concrete operating system and user management practice, but it is very likely that the first regular user account created in a Linux system will be assigned a UID such as 1000, 1001, or, as in this case, 10001. Please see, for instance, the UID entry in Wikipedia or this article.
AFAIK, the user indicated by USER does not need to exist in your container for it to run correctly: in fact, if you run it locally, it will start without further problems.
The user with UID 10001 will actually be set up in your container by the script run by the CMD defined in the Dockerfile, /opt/primekey/bin/start.sh, through this code fragment:
if ! whoami &> /dev/null; then
    if [ -w /etc/passwd ]; then
        echo "${APPLICATION_NAME}:x:$(id -u):0:${APPLICATION_NAME} user:/opt:/sbin/nologin" >> /etc/passwd
    fi
fi
Please be aware that APPLICATION_NAME in this context takes the value ejbca, and that the user who runs this script, as indicated in the Dockerfile, is 10001. That is the value the command id -u provides in this code.
You can verify it if you run your container locally:
docker run -it -p 8080:8080 -p 8443:8443 -h localhost primekey/ejbca-ce:6.15.2.3
And start a bash session inside it:
docker exec -it container_name /bin/bash
If you run whoami, it will tell you ejbca.
If you run id it will give you the following output:
uid=10001(ejbca) gid=0(root) groups=0(root)
You can verify the user's existence in /etc/passwd as well:
bash-4.2$ cat /etc/passwd
root:x:0:0:root:/root:/bin/bash
bin:x:1:1:bin:/bin:/sbin/nologin
daemon:x:2:2:daemon:/sbin:/sbin/nologin
adm:x:3:4:adm:/var/adm:/sbin/nologin
lp:x:4:7:lp:/var/spool/lpd:/sbin/nologin
sync:x:5:0:sync:/sbin:/bin/sync
shutdown:x:6:0:shutdown:/sbin:/sbin/shutdown
halt:x:7:0:halt:/sbin:/sbin/halt
mail:x:8:12:mail:/var/spool/mail:/sbin/nologin
operator:x:11:0:operator:/root:/sbin/nologin
games:x:12:100:games:/usr/games:/sbin/nologin
ftp:x:14:50:FTP User:/var/ftp:/sbin/nologin
nobody:x:99:99:Nobody:/:/sbin/nologin
systemd-network:x:192:192:systemd Network Management:/:/sbin/nologin
dbus:x:81:81:System message bus:/:/sbin/nologin
ejbca:x:10001:0:ejbca user:/opt:/sbin/nologin
The reason why Pierre did not get this output is that he ran the container overriding the provided CMD and, as a consequence, did not execute the start.sh script responsible for the user creation, as mentioned above.
For some reason, and this is where my knowledge fails me, when Azure tries to run your container, it fails because the user with UID 10001 identified in the Dockerfile does not exist.
I think it could be related to the use of containerd instead of docker.
The error reported by Azure seems related to the Microsoft project opengcs.
They say about the project:
Open Guest Compute Service is a Linux open source project to further the development of a production quality implementation of Linux Hyper-V container on Windows (LCOW). It's designed to run inside a custom Linux OS for supporting Linux container payload.
And:
The focus of LCOW v2 as a replacement of LCOW v1 is through the coordination and work that has gone into containerd/containerd and its Runtime V2 interface. To see our containerd hostside shim please look here Microsoft/hcsshim/cmd/containerd-shim-runhcs-v1.
The error you see in the console is raised by the spec.go file that you can find in their code base, when it tries to establish the user on whose behalf the container process should be run:
func setUserID(spec *oci.Spec, uid int) error {
    u, err := getUser(spec, func(u user.User) bool {
        return u.Uid == uid
    })
    if err != nil {
        return errors.Wrapf(err, "failed to find user by uid: %d", uid)
    }
    spec.Process.User.UID, spec.Process.User.GID = uint32(u.Uid), uint32(u.Gid)
    return nil
}
This code is executed by this other code fragment - you can see the full function code here:
parts := strings.Split(userstr, ":")
switch len(parts) {
case 1:
    v, err := strconv.Atoi(parts[0])
    if err != nil {
        // evaluate username to uid/gid
        return setUsername(spec, userstr)
    }
    return setUserID(spec, int(v))
And the getUser function:
func getUser(spec *oci.Spec, filter func(user.User) bool) (user.User, error) {
    users, err := user.ParsePasswdFileFilter(filepath.Join(spec.Root.Path, "/etc/passwd"), filter)
    if err != nil {
        return user.User{}, err
    }
    if len(users) != 1 {
        return user.User{}, errors.Errorf("expected exactly 1 user matched '%d'", len(users))
    }
    return users[0], nil
}
As you can see, these are exactly the errors that Azure is reporting to you.
In summary, I think they are providing a Windows LCOW solution that conforms to the OCI Image Format Specification and is suitable for running containers with containerd.
As you indicated, it used to run with the same configuration a couple of weeks ago, so my best guess is that, perhaps, they switched your containers from a pure Linux containerd runtime implementation to one based on Windows and the above-mentioned software, and this is why your containers are now failing.
A possible workaround could be to create a custom image based on the official one provided by PrimeKey and create the user with UID 10001, as Pierre also pointed out.
To accomplish this task, first, create a new custom Dockerfile. You can try, for instance:
FROM primekey/ejbca-ce:6.15.2.3
USER 0
RUN echo "ejbca:x:10001:0:ejbca user:/opt:/sbin/nologin" >> /etc/passwd
USER 10001
Please, note that you may need to define some of the environment variables from the official EJBCA image.
With this Dockerfile you can build your image with docker or docker compose with an appropriate docker-compose.yaml file, something like:
version: "3"
services:
ejbca:
image: <your repository>/ejbca
build: .
ports:
- "8080:8080"
- "8443:8443"
Please customize it as you consider appropriate.
With this setup the new container will still run properly in a local environment, in the same way as the original one; I hope that will also be the case in Azure.
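If you want to check the fix locally before deploying to Azure, a quick verification might look like this (the image tag is a placeholder):
docker build -t myrepo/ejbca-fixed .
# whoami now resolves even without start.sh, since the uid exists in /etc/passwd
docker run --rm myrepo/ejbca-fixed whoami
# expected output: ejbca
docker run --rm myrepo/ejbca-fixed grep 10001 /etc/passwd
# expected output: ejbca:x:10001:0:ejbca user:/opt:/sbin/nologin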
The user with UID 10001 does not exist in your image. This does not prevent the USER command in your Dockerfile from working or make the image itself invalid, but it seems to cause issues with Azure containers.
I cannot find docs or any reference on why it doesn't work on Azure (will update if I do), but adding the user to the image should solve the issue. Try adding something like this in your Dockerfile to create a user with UID 10001 (this must be done as root, i.e. with user 0):
useradd -u 10001 myuser
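In Dockerfile form, mirroring the previous answer (a sketch; it assumes useradd is available in the CentOS-based image, and picks a home directory and shell matching the entry start.sh would have created):
FROM primekey/ejbca-ce:6.15.2.3
USER 0
RUN useradd -u 10001 -g 0 -d /opt -s /sbin/nologin myuser
USER 10001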
Additional notes showing that user 10001 does not exist:
# When running container, not recognized by system
$ docker run docker.io/primekey/ejbca-ce:6.15.2.3 whoami
whoami: cannot find name for user ID 10001
# Not present in /etc/passwd
$ docker run docker.io/primekey/ejbca-ce:6.15.2.3 cat /etc/passwd
root:x:0:0:root:/root:/bin/bash
bin:x:1:1:bin:/bin:/sbin/nologin
daemon:x:2:2:daemon:/sbin:/sbin/nologin
adm:x:3:4:adm:/var/adm:/sbin/nologin
lp:x:4:7:lp:/var/spool/lpd:/sbin/nologin
sync:x:5:0:sync:/sbin:/bin/sync
shutdown:x:6:0:shutdown:/sbin:/sbin/shutdown
halt:x:7:0:halt:/sbin:/sbin/halt
mail:x:8:12:mail:/var/spool/mail:/sbin/nologin
operator:x:11:0:operator:/root:/sbin/nologin
games:x:12:100:games:/usr/games:/sbin/nologin
ftp:x:14:50:FTP User:/var/ftp:/sbin/nologin
nobody:x:99:99:Nobody:/:/sbin/nologin
systemd-network:x:192:192:systemd Network Management:/:/sbin/nologin
dbus:x:81:81:System message bus:/:/sbin/nologin

Proper way for keep process running on a container

I'm not sure whether this could be considered a duplicate, since it's a problem for a specific case.
Currently, I have created a Docker-outside-of-Docker image to handle my Jenkins agent, which performs auto restarts without using supervisor as a solution (due to its lack of Python 3.7 support). Since I'm using openjdk:slim as the base image and I don't want to install any additional dependencies, I opted to compensate for the lack of tools like lsof and ps (or others for checking whether a process is running) by writing the started process's PID to a file, which is then used to check whether the process exists under /proc/<pid>/status. Currently this works, and it is the main reason for creating this solution for handling the auto start of the agents.
But my question is: is this the best or most appropriate approach?
Please find the following code with the implementation:
#!/bin/bash
set -e

agent_runner() {
    while :
    do
        if [ ! -f "/proc/$(cat /tmp/agent.pid)/status" ]
        then
            curl $JNLP_AGENT_DOWNLOAD_URL -o agent.jar
            java \
                -Dorg.jenkinsci.plugins.durabletask.BourneShellScript.HEARTBEAT_CHECK_INTERVAL=300 \
                -Dhttps.protocols=TLSv1.2 \
                -jar agent.jar \
                -jnlpUrl $JNLP_AGENT_URL \
                -secret $JENKINS_SECRET \
                -workDir "$JENKINS_WORKDIR" &
            echo $! > /tmp/agent.pid
        else
            :
        fi
        sleep 10
    done
}

while :
do
    if [ cat < /dev/tcp/"$TARGET" ]; then
        echo "Starting Agent"
        agent_runner
    else
        echo "Jenkins master is offline, waiting...."
    fi
    sleep 10
done
Link for the repository: https://github.com/thcp/jenkins-agent-dod
If the main process in the container dies, you should let the container die with it.
Docker and the various layers above it have functionality to restart whole containers. There is a docker run --restart option for the basic Docker CLI, and equivalent Docker Compose option, and restarting dying containers after some backoff is the default behavior for Kubernetes pods.
So, if you just let a container die on its own, you’ll have out-of-the-box support for the container engine to restart itself, without adding any special support into your image; just set the CMD to the thing you actually need the container to do and go. This approach also has the benefit that if you detect your environment has become unstable (“I depend on a database and it’s unreachable”) the process can choose to abort itself and let it be restarted later when hopefully the environment has improved.
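As a sketch of what that looks like for this agent (reusing the environment variables from the script above), the entrypoint shrinks to a single foreground process and the restart loop moves to Docker:
#!/bin/bash
# Run the agent in the foreground; if it dies, the container dies,
# and the restart policy brings it back up.
set -e
curl "$JNLP_AGENT_DOWNLOAD_URL" -o agent.jar
exec java \
    -Dorg.jenkinsci.plugins.durabletask.BourneShellScript.HEARTBEAT_CHECK_INTERVAL=300 \
    -Dhttps.protocols=TLSv1.2 \
    -jar agent.jar \
    -jnlpUrl "$JNLP_AGENT_URL" \
    -secret "$JENKINS_SECRET" \
    -workDir "$JENKINS_WORKDIR"
You would then start it with something like docker run -d --restart unless-stopped <your-agent-image>, and Docker handles the backoff and restarts, including the "Jenkins master is offline" case.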

Check_MK - configuring legacy check

I am running OMD 1.20 (latest according to the official website) with Check_MK 1.2.4p5 community edition on an Ubuntu 14.04.3 LTS machine.
I need to configure an FTP check that also checks the credentials and reading/writing a file. The standard plugins do not offer such a feature as far as I know, so I am trying to use a custom plugin, specifically: https://exchange.nagios.org/directory/Plugins/Network-Protocols/FTP/check_ftp_rw/details
So the monitoring server is supposed to test an external FTP server that does not have the agent installed.
I have the plugin in /usr/lib/nagios/plugins and have run it manually and it works OK.
Now I am trying to configure it as a check in Check_MK, so I did the following
in /opt/omd/sites/monitoring/etc/check_mk/main.mk
# Put your host names here
# all_hosts = [ 'localhost' ]
all_hosts = [ ]

extra_nagios_conf += r"""
define command {
    command_name check_ftprw
    command_line /usr/lib/nagios/plugins/check_ftp_rw --host ftp.test.com --user test --password 'test123' --dir pub
}
"""

legacy_checks = [
    ( ( "check_ftprw", "FTP", True ), [ "localhost" ] ),
]
I restart the OMD site and check the inventory, but it never picks up this check.
I have solved this problem: the config is OK, but the changes must be activated by reloading Check_MK with cmk -O. (Legacy checks are not picked up by inventory; they are written directly into the generated Nagios configuration.)
Then we need to check that the command is registered in check_mk_objects.cfg:
# extra_nagios_conf
define command {
    command_name check_ftprw
    command_line /usr/lib/nagios/plugins/check_ftp_rw --host ftp.xxx --user test --password 'test' --dir pub --file test.txt --write
}
We can check the agent output using: check_mk_agent

How can I run a Docker container in AWS Elastic Beanstalk with non-default run parameters?

I have a Docker container that runs great on my local development machine. I would like to move this to AWS Elastic Beanstalk, but I am running into a small bit of trouble.
I am trying to mount an S3 bucket to my container by using s3fs. I have the Dockerfile:
FROM tomcat:7.0
MAINTAINER me@example.com
RUN apt-get update
RUN DEBIAN_FRONTEND=noninteractive apt-get install -y build-essential libfuse-dev libcurl4-openssl-dev libxml++2.6-dev libssl-dev mime-support automake libtool wget tar
# Add the java source
ADD . /path/to/tomcat/webapps/
ADD run_docker.sh /root/run_docker.sh
WORKDIR $CATALINA_HOME
EXPOSE 8080
CMD ["/root/run_docker.sh"]
And I install s3fs, mount an S3 bucket, and run the Tomcat server after the image has been created, by running run_docker.sh:
#!/bin/bash
#run_docker.sh
wget https://github.com/s3fs-fuse/s3fs-fuse/archive/master.zip -O /usr/src/master.zip;
cd /usr/src/;
unzip /usr/src/master.zip;
cd /usr/src/s3fs-fuse-master;
autoreconf --install;
CPPFLAGS=-I/usr/include/libxml2/ /usr/src/s3fs-fuse-master/configure;
make;
make install;
cd $CATALINA_HOME;
mkdir /opt/s3-files;
s3fs my-bucket /opt/s3-files;
catalina.sh run
When I build and run this Docker container using the command:
docker run --cap-add mknod --cap-add sys_admin --device=/dev/fuse -p 80:8080 -d username/mycontainer:latest
it works well. Yet, when I remove the --cap-add mknod --cap-add sys_admin --device=/dev/fuse, then s3fs fails to mount my S3 bucket.
Now, I would like to run this on AWS Elastic Beanstalk, and when I deploy the container (and run run_docker.sh), all the steps execute fine, except the step s3fs my-bucket /opt/s3-files in run_docker.sh fails to mount the bucket.
Presumably, this is because whatever Elastic Beanstalk does to run a Docker container, it doesn't add any additional flags like --cap-add mknod --cap-add sys_admin --device=/dev/fuse.
My Dockerrun.aws.json file looks like:
{
  "AWSEBDockerrunVersion": "1",
  "Image": {
    "Name": "tomcat:7.0"
  },
  "Ports": [
    {
      "ContainerPort": "8080"
    }
  ]
}
Is it possible to add additional docker run flags to an AWS EB Docker deployment?
An alternative option is to find another way to mount an S3 bucket, but I suspect I'd run into similar permission errors regardless. Has anyone seen any way to accomplish this?
UPDATE:
For people trying to use @Egor's answer below: it works when the EB configuration is set to use v1.4.0 running Docker 1.6.0. Anything past the v1.4.0 version fails. So to make it work, build your environment as normal (which should give you a failed build), then rebuild it with a v1.4.0 running Docker 1.6.0 configuration. That should do it!
If you are using the latest version of aws docker stack (docker 1.7.1 for example), you'll need to slightly modify the above answer. Try this:
commands:
  00001_add_privileged:
    cwd: /tmp
    command: 'sed -i "s/docker run -d/docker run --privileged -d/" /opt/elasticbeanstalk/hooks/appdeploy/enact/00run.sh'
Notice the change of the location and name of the run script.
Add file .ebextensions/01-commands.config
container_commands:
  00001-docker-privileged:
    command: 'sed -i "s/docker run -d/docker run --privileged -d/" /opt/elasticbeanstalk/hooks/appdeploy/pre/04run.sh'
I am also using s3fs
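(Note that both variants of this sed hack patch Elastic Beanstalk's internal hook scripts, whose location and name differ between platform versions, as the two answers show: enact/00run.sh vs. pre/04run.sh. They may therefore break on platform upgrades.)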
Thanks elijahchancey for the answer, it was very helpful. I would just like to add a small comment:
Elastic Beanstalk now uses ECS tasks to deploy and manage the application cluster. There is a very important paragraph in the Multicontainer Docker Configuration docs (which I originally missed).
The following examples show a subset of parameters that are commonly used. More optional parameters are available. For more information on the task definition format and a full list of task definition parameters, see Amazon ECS Task Definitions in the Amazon ECS Developer Guide.
So the document is not a complete reference; it just shows typical entries, and you are supposed to find more elsewhere. This has quite a major impact, because now (2018) you are able to specify more options and you don't need to hack ebextensions any more. The only thing you need to do is use the task parameters in the containerDefinitions of your multi-container Dockerrun.aws.json.
This is not mentioned for single Docker containers, but one can try and verify...
Example of multi docker Dockerrun.aws.json with extra cap:
{
  "AWSEBDockerrunVersion": 2,
  "containerDefinitions": [
    {
      "name": "service1",
      "image": "myapp/service1:latest",
      "essential": true,
      "memoryReservation": 128,
      "portMappings": [
        {
          "hostPort": 8080,
          "containerPort": 8080
        }
      ],
      "linuxParameters": {
        "capabilities": {
          "add": [
            "SYS_PTRACE"
          ]
        }
      }
    }
  ]
}
You can now add capabilities using the task definition. Here are the docs:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html
This is specifically what you would add to your task definition:
"linuxParameters": {
"capabilities": {
"add": [
"SYS_PTRACE"
]
}
},
