Best way to check if a bucket is ready in node.js

I'm using gcsfuse to mount a volume in a container, and I need it to be mounted before my Node.js application starts.
To mount the volume I'm using Kubernetes lifecycle hooks, but they don't guarantee that the hook runs before the container's entrypoint.
I've been thinking about how I should check when the volume is mounted, and whether it goes down afterwards.
To detect when it is mounted and unmounted, I read /proc/mounts, search for the volume there, and add a watcher to that file for changes.
Is there a simpler way to ensure that the volume is mounted in Node.js, Docker, or Kubernetes?
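For reference, a minimal sketch of the /proc/mounts approach described above (the mount path and polling interval are placeholders; note that fs.watch may not fire reliably for files under /proc, so this sketch polls instead):
const fs = require('fs');

const MOUNT_POINT = '/mnt/tmp'; // placeholder: the gcsfuse mount path

// Returns true if the mount point currently appears in /proc/mounts.
function isMounted() {
  const mounts = fs.readFileSync('/proc/mounts', 'utf8');
  return mounts.split('\n').some(line => line.split(' ')[1] === MOUNT_POINT);
}

// Poll until the volume shows up, then start the app.
function waitForMount(cb) {
  if (isMounted()) return cb();
  setTimeout(() => waitForMount(cb), 1000);
}

waitForMount(() => {
  console.log(`${MOUNT_POINT} is mounted, starting application...`);
  // start the application here
});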

You can run this Dockerfile in privileged mode:
FROM ubuntu
RUN apt-get update && apt-get install -y curl gnupg
RUN echo "deb http://packages.cloud.google.com/apt gcsfuse-stretch main" | tee /etc/apt/sources.list.d/gcsfuse.list
RUN curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
RUN apt-get update && apt-get install -y gcsfuse fuse
RUN mkdir -p /mnt/tmp
CMD gcsfuse [BUCKET NAME] /mnt/tmp && /bin/bash
This way you are sure that the bucket is mounted when the pod initializes.
On the other hand, I do not recommend this approach, as there is a Node.js client library for Google Cloud Storage [1].
Here is an example of listing buckets:
// Imports the Google Cloud client library
const { Storage } = require('@google-cloud/storage');

// Creates a client
const storage = new Storage();

// Lists all buckets in the current project
storage
  .getBuckets()
  .then(results => {
    const buckets = results[0];
    console.log('Buckets:');
    buckets.forEach(bucket => {
      console.log(bucket.name);
    });
  })
  .catch(err => {
    console.error('ERROR:', err);
  });
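If the goal is specifically to check that one bucket is reachable before starting the app, a small sketch using the same client library could look like this (the bucket name is a placeholder; bucket.exists() resolves with a boolean):
const { Storage } = require('@google-cloud/storage');
const storage = new Storage();

const bucketName = 'my-bucket'; // placeholder

storage
  .bucket(bucketName)
  .exists()
  .then(([exists]) => {
    if (exists) {
      console.log(`Bucket ${bucketName} is ready, starting application...`);
      // start the application here
    } else {
      console.error(`Bucket ${bucketName} does not exist or is not accessible.`);
    }
  })
  .catch(err => console.error('ERROR:', err));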
[1] https://github.com/googleapis/nodejs-storage/tree/master/samples

Related

Docker Chrome Memory Leak When Using --Disable-Dev-Shm-Usage

This is a follow-up to my previous question: Azure Container Instance Container Memory Consumption.
I am creating a NodeJS-based image on which I install the latest Chrome and Chromedriver, then run a NodeJS-based cron job that uses Selenium WebDriver for testing on a one-minute interval.
This runs in an Azure Container Instance, which is the simplest way to run containers in Azure.
My challenge is that Docker containers in ACI run with 64 MB of /dev/shm by default, which causes Chrome failures due to the relatively low amount of memory. Chrome provides a --disable-dev-shm-usage flag, but running with it creates a memory leak that I can't seem to figure out how to prevent. How can I best address this for my container in ACI?
Dockerfile
# 1) Build from this Dockerfile's directory:
# docker build -t "<some tag>" -f Dockerfile .
# 2) Start the image (e.g. in Docker)
# 3) Observe that the button's value is printed.
# ---------------------------------------------------------------------------------------------
# 1) Use the official NodeJS base image
FROM node:latest
# 2) Install latest stable Chrome
# https://gerg.dev/2021/06/making-chromedriver-and-chrome-versions-match-in-a-docker-image/
RUN echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" | \
tee -a /etc/apt/sources.list.d/google.list && \
wget -q -O - https://dl.google.com/linux/linux_signing_key.pub | \
apt-key add - && \
apt-get update && \
apt-get install -y google-chrome-stable libxss1
# 3) Install the Chromedriver version that corresponds to the installed major Chrome version
# https://blogs.sap.com/2020/12/01/ui5-testing-how-to-handle-chromedriver-update-in-docker-image/
RUN google-chrome --version | grep -oE "[0-9]{1,10}.[0-9]{1,10}.[0-9]{1,10}" > /tmp/chromebrowser-main-version.txt
RUN wget --no-verbose -O /tmp/latest_chromedriver_version.txt https://chromedriver.storage.googleapis.com/LATEST_RELEASE_$(cat /tmp/chromebrowser-main-version.txt)
RUN wget --no-verbose -O /tmp/chromedriver_linux64.zip https://chromedriver.storage.googleapis.com/$(cat /tmp/latest_chromedriver_version.txt)/chromedriver_linux64.zip && \
    rm -rf /opt/selenium/chromedriver && \
    unzip /tmp/chromedriver_linux64.zip -d /opt/selenium && \
    rm /tmp/chromedriver_linux64.zip && \
    mv /opt/selenium/chromedriver /opt/selenium/chromedriver-$(cat /tmp/latest_chromedriver_version.txt) && \
    chmod 755 /opt/selenium/chromedriver-$(cat /tmp/latest_chromedriver_version.txt) && \
    ln -fs /opt/selenium/chromedriver-$(cat /tmp/latest_chromedriver_version.txt) /usr/bin/chromedriver
# 4) Set the variable for the container working directory, create and set the working directory
ARG WORK_DIRECTORY=/program
RUN mkdir -p $WORK_DIRECTORY
WORKDIR $WORK_DIRECTORY
# 5) Install npm packages (do this AFTER setting the working directory)
COPY package.json .
RUN npm config set unsafe-perm true
RUN npm i
ENV NODE_ENV=development NODE_PATH=$WORK_DIRECTORY
# 6) Copy script to execute to working directory
COPY runtest.js .
EXPOSE 8080
# 7) Execute the script in NodeJS
CMD ["node", "runtest.js"]
runtest.js
const { Builder, By } = require('selenium-webdriver');
const { Options } = require('selenium-webdriver/chrome');
const cron = require('node-cron');

cron.schedule('*/1 * * * *', async () => await main());

async function main() {
  let driver;
  try {
    // Browser setup
    let options = new Options()
      .headless() // run headless Chrome
      .excludeSwitches(['enable-logging']) // disable 'DevTools listening on...'
      .addArguments([
        // no-sandbox is not an advised flag due to security but eliminates "DevToolsActivePort file doesn't exist" error
        'no-sandbox',
        // Docker containers run with 64 MB of dev/shm by default, which causes Chrome failures
        // Disabling dev/shm uses tmp, which solves the problem but appears to result in memory leaks
        'disable-dev-shm-usage'
      ]);
    driver = await new Builder().forBrowser('chrome').setChromeOptions(options).build();

    // Navigate to Google and get the "Google Search" button text.
    await driver.get('https://www.google.com');
    let btnText = await driver.findElement(By.name('btnK')).getAttribute('value');
    log(`Google button text: ${btnText}`);
  } catch (e) {
    log(e);
  } finally {
    if (driver) {
      await driver.close(); // helps chromedriver shut down cleanly and delete the "scoped_dir" temp directories that eventually fill up the harddrive.
      await driver.quit();
      driver = null;
      log(' Closed and quit the driver, then set to null.');
    } else {
      log(' *** No driver to close and quit ***');
    }
  }
}

function log(msg) {
  console.log(`${new Date()}: ${msg}`);
}
UPDATE
Interestingly, memory consumption seems to stabilize once it reaches a certain level. The container is allocated 2 GB of memory. I don't see crashes in my app logs, so this seems functional overall.

Azure IoT Edge Module - Backoff State

I am trying to push a custom docker image (not C/C#) to an Azure IoT Edge device from Azure IoT Hub. The docker image runs without exiting when run manually, e.g. docker run -itd works perfectly fine. When the module is published via IoT Hub, it continually shows a status of backoff and is always restarting. The full Dockerfile is as follows:
FROM alpine
RUN apk -U -u add sqlite && \
mkdir -p /db && \
rm -rf /var/lib/apt/lists/*
#RUN /usr/bin/sqlite3 /db/arf.sqlite
CMD /bin/sh
The custom create options are as follows:
{
    "Env": [],
    "HostConfig": {
        "Binds": [
            "/work:/db"
        ]
    }
}
There are no specific module twin settings, and hence I am passing them as
{}
I am attaching a screen shot that (hopefully) explains this better.
I figured this out. When running manually, I was using the -itd flags to run in daemonized mode. When publishing via Azure IoT Hub, it ran the /bin/sh specified in CMD and exited.
Cruel Workaround:
Add a run.sh that just does nothing. I hate this solution - but it works.
#!/bin/sh
while :; do
    sleep 1000
done
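For completeness, a sketch of how this workaround might be wired into the Dockerfile above (the COPY source path is an assumption):
FROM alpine
RUN apk -U -u add sqlite && \
    mkdir -p /db && \
    rm -rf /var/lib/apt/lists/*
# Copy the keep-alive script (assumed to sit next to the Dockerfile)
COPY run.sh /run.sh
# Run the loop so the container does not exit immediately
CMD ["/bin/sh", "/run.sh"]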
What would be nice
Is it possible to specify anywhere in the IoT module metadata to run in daemon mode so that the edge device can pass -d when starting the module?

NPM package `pem` doesn't seem to work in AWS lambda NodeJS 10.x (results in OpenSSL error)

When I run the function locally on NodeJS 11.7.0 it works, and when I run it in AWS Lambda on NodeJS 8.10 it works, but I've recently tried to run it in AWS Lambda on NodeJS 10.x and get this response and this error in CloudWatch.
Any thoughts on how to correct this?
Response
{
    "success": false,
    "error": "Error: Could not find openssl on your system on this path: openssl"
}
CloudWatch Error
ERROR (node:8) [DEP0005] DeprecationWarning: Buffer() is deprecated due to security and usability issues. Please use the Buffer.alloc(), Buffer.allocUnsafe(), or Buffer.from() methods instead.
Function
...
const util = require('util');
const pem = require('pem');
...
return new Promise((fulfill) => {
    require('./certs').get(req, res, () => {
        return fulfill();
    });
}).then(() => {
    const createCSR = util.promisify(pem.createCSR);
    // This seems to be where the issue is coming from
    return createCSR({
        keyBitsize: 1024,
        hash: HASH,
        commonName: id.toString(),
        country: 'US',
        state: 'Maryland',
        organization: 'ABC', // Obfuscated
        organizationUnit: 'XYZ', // Obfuscated
    });
}).then(({ csr, clientKey }) => {
    ...
}).then(async ({ certificate, clientKey }) => {
    ...
}, (err) => {
    return res.status(404).json({
        success: false,
        error: err,
    });
});
...
I've tried with
"pem": "^1.14.3", and "pem": "^1.14.2",
I tried the answer documented by @Kris White, but I was not able to get it to work. Each execution resulted in the error Could not find openssl on your system on this path: /opt/openssl. I tried several different paths and approaches, but none worked well. It's entirely possible that I simply didn't copy the OpenSSL executable correctly.
Since I needed a working solution, I used the answer provided by @Wilfred Dittmer. I modified it slightly since I wasn't using Docker. I launched an Amazon Linux 2 server, built OpenSSL on it, transferred the package to my local machine, and deployed it via Serverless.
Create a file named create-openssl-zip.sh with the following contents. The script will create the Lambda Layer OpenSSL package.
#!/bin/bash -x
# This file should be copied to and run inside the /tmp folder
yum update -y
yum install autoconf bison gcc gcc-c++ libcurl-devel libxml2-devel -y
curl -sL http://www.openssl.org/source/openssl-1.1.1d.tar.gz | tar -xvz
cd openssl-1.1.1d
./config --prefix=/tmp/nodejs/openssl --openssldir=/tmp/nodejs/openssl && make && make install
cd /tmp
rm -rf nodejs/openssl/share nodejs/openssl/include
zip -r lambda-layer-openssl.zip nodejs
rm -rf nodejs openssl-1.1.1d
Then, follow these steps:
1. Open a terminal session in this project's root folder.
2. Run the following command to upload the Linux bash script: curl -F "file=@create-openssl-zip.sh" https://file.io
   Note: The command above uses the popular tool File.io to copy the script to the cloud temporarily so it can be securely retrieved from the build server.
   Note: If curl is not installed on your dev machine, you can also upload the script manually using the File.io website.
3. Copy the URL for the uploaded file from either the terminal session or the File.io website.
   Note: The URL will look similar to this example: https://file.io/a1B2c3
4. Open the AWS Console to the EC2 Instances list.
5. Launch a new instance with these attributes:
   AMI: Amazon Linux 2 AMI (HVM), SSD Volume Type (id: ami-0a887e401f7654935)
   Instance Type: t2.micro
   Instance Details: (use all defaults)
   Storage: (use all defaults)
   Tags: Name - 'build-lambda-layer-openssl'
   Security Group: 'Create new security group' (use all defaults to ensure the instance will be publicly accessible via SSH over the internet)
6. When launching the instance and selecting a key pair, be sure to choose a key pair from the list to which you have access.
7. Launch the instance and wait for it to be accessible.
8. Once the instance is running, use an SSH client to connect to the instance.
   Note: More details on how to open an SSH connection can be found here.
9. In the SSH terminal session, navigate to the tmp directory by running cd /tmp.
10. Download the bash script uploaded earlier by running curl {FILE_IO_URL} --output create-openssl-zip.sh.
    Note: In the command above, replace FILE_IO_URL with the URL returned from File.io and copied in step 3.
11. Execute the bash script by running sudo bash ./create-openssl-zip.sh. The script may take a while to complete. You may need to confirm one or more package install prompts.
12. When the script completes, run the following command to upload the package to File.io: curl -F "file=@lambda-layer-openssl.zip" https://file.io
13. Copy the URL for the uploaded file from the terminal session.
14. In the terminal session on the local development machine, run the following command to download the file: curl {FILE_IO_URL} --output lambda-layer-openssl.zip
    Note: In the command above, replace FILE_IO_URL with the URL returned from File.io and copied in step 13.
    Note: If curl is not installed on your dev machine, you can also download the file manually by pasting the copied URL in the address bar of your favorite browser.
15. Close the SSH session.
16. In the EC2 Instances list, terminate the build-lambda-layer-openssl EC2 instance since it is not needed any longer.
The OpenSSL Lambda Layer is now ready to be deployed.
For completeness, here is a portion of my serverless.yml file:
functions:
  functionName:
    # ...
    layers:
      - { Ref: OpensslLambdaLayer }

layers:
  openssl:
    name: ${self:provider.stage}-openssl
    description: Contains openssl command line utility for lambdas that need it
    package:
      artifact: 'path\to\lambda-layer-openssl.zip'
    compatibleRuntimes:
      - nodejs10.x
      - nodejs12.x
    retain: false
...and here is how I configured PEM in the code file:
import * as pem from 'pem';

process.env.LD_LIBRARY_PATH = '/opt/nodejs/openssl/lib';
pem.config({
    pathOpenSSL: '/opt/nodejs/openssl/bin/openssl',
});
// other code...
I contacted AWS Support about this and it turns out that the openssl library is still on the Node10x image, just not the command line utility. However, it's pretty easy to just grab it off a standard AMI and use it as a Lambda layer.
Steps:
Launch an Amazon Linux 2 AMI as an EC2
SSH into the box, or use an SFTP utility to connect to the box
Copy the command line utility for openssl at /usr/bin/openssl somewhere you can work with it locally. In my case I downloaded it to my Mac even though it is a Linux file.
Verify that it's still marked as executable (chmod a+x openssl if necessary if you've downloaded it elsewhere)
Zip up the file
Optional: Upload it to an S3 bucket you can get to
Go to Lambda Layers in the AWS console
Create a new lambda layer. I named mine openssl and used the S3 pointer to the file on S3. You can also upload the zip directly if you have it on a local file system.
Attach the arn provided for the layer to your Lambda function. I use serverless so it was defined in the function setup per their documentation.
In your code, reference openssl as /opt/openssl, or you can avoid pathing it in your code (or may not have an option if it's a package you don't control) by adding /opt to your path, i.e.
process.env['PATH'] = process.env['PATH'] + ':' + process.env['LAMBDA_TASK_ROOT'] + ':/opt';
The layer will have been unzipped for you and because you set it to be executable beforehand, it should just work. The underlying openssl libraries are there, so just copying the cli works just fine.
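If you follow this approach, configuring pem to use the binary from the layer might look like this (a sketch; it assumes the layer unzips the binary directly to /opt/openssl as described above):
const pem = require('pem');

// Assumption: the layer places the openssl binary at /opt/openssl (see the steps above)
pem.config({
    pathOpenSSL: '/opt/openssl'
});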
What you can do is to create a lambda layer with the openssl library.
Using the lambci/lambda:build-nodejs10.x image you can compile the openssl library and create a zip file from the install. You can then use the zip file as a layer for your lambda.
Create a file called create-openssl-zip.sh and make sure to chmod u+x it.
#!/bin/bash -x
# This file should be run inside the lambci/lambda:build-nodejs10.x container
yum update -y
yum install autoconf bison gcc gcc-c++ libcurl-devel libxml2-devel -y
curl -sL http://www.openssl.org/source/openssl-1.1.1d.tar.gz | tar -xvz
cd openssl-1.1.1d
./config --prefix=/var/task/nodejs/openssl --openssldir=/var/task/nodejs/openssl && make && make install
cd /var/task/
rm -rf nodejs/openssl/share
rm -rf nodejs/openssl/include
zip -r lambda-openssl-layer.zip nodejs
cp lambda-openssl-layer.zip /opt/layer/
Then run:
docker run -it -v `pwd`:/opt/layer lambci/lambda:build-nodejs10.x /opt/layer/create-openssl-zip.sh
This will run the script inside the docker container and when it is done you have a file called lambda-openssl-layer.zip in your current directory.
Upload this zip file to an S3 bucket and create a lambda layer.
On your original lambda, add this layer and modify your code so that the PEM library knows where to look for the OpenSSL library as follows:
PEM.config({
    pathOpenSSL: '/opt/nodejs/openssl/bin/openssl'
})
And finally add an extra environment variable to your lambda called LD_LIBRARY_PATH with value /opt/nodejs/openssl/lib
Otherwise it will fail with:
/opt/nodejs/openssl/bin/openssl: error while loading shared libraries: libssl.so.1.1: cannot open shared object file: No such file or directory
The PEM NPM docs say:
Setting openssl location
In some systems the openssl executable might not be available by the default name or it is not included in $PATH. In this case you can define the location of the executable yourself as a one time action after you have loaded the pem module:
So I think it is not able to find the OpenSSL path on the system; you can try configuring it programmatically:
var pem = require('pem')
pem.config({
    pathOpenSSL: '/usr/local/bin/openssl'
})
As you are using AWS Lambda, try printing process.env.PATH; that will give you an idea of whether OpenSSL is included in the PATH environment variable or not.
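For example, a quick check at the top of the handler might be:
// Log the PATH visible to the Lambda function to see whether an openssl directory is on it
console.log('PATH =', process.env.PATH);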
You can also check for openssl by running the code below:
const exec = require('child_process').exec;
exec('which openssl', function (err, stdout, stderr) {
    console.log(err ? err : stdout);
});
UPDATE
As @hoangdv mentioned in his answer, openssl seems to have been removed from the node10.x runtime, and I think he is right. Also, we have read-only access to the file system, so we can't do much.
@Seth McClaine, you can give the node-forge npm module a try. One of the modules built on top of it is https://github.com/jfromaniello/selfsigned, which will make your task easier.
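For reference, a rough sketch of generating a CSR with node-forge instead of pem (the subject fields mirror the createCSR call in the question; treat the API usage as an assumption and verify it against the node-forge docs):
const forge = require('node-forge');

// Generate a key pair (1024 bits to mirror the question; 2048+ is generally recommended)
const keys = forge.pki.rsa.generateKeyPair(1024);

// Build the certification request
const csr = forge.pki.createCertificationRequest();
csr.publicKey = keys.publicKey;
csr.setSubject([
    { name: 'commonName', value: 'some-id' },   // placeholder for id.toString()
    { name: 'countryName', value: 'US' },
    { shortName: 'ST', value: 'Maryland' },
    { name: 'organizationName', value: 'ABC' }, // obfuscated, as in the question
    { shortName: 'OU', value: 'XYZ' }           // obfuscated, as in the question
]);

// Sign the request and export it (and the key) as PEM strings
csr.sign(keys.privateKey);
const csrPem = forge.pki.certificationRequestToPem(csr);
const clientKeyPem = forge.pki.privateKeyToPem(keys.privateKey);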
https://github.com/lambci/git-lambda-layer/issues/13#issue-444697784 (announcement email)
It seems openssl has been removed from the nodejs10.x runtime.
I have checked again on the lambci/lambda:build-nodejs10.x docker image and confirmed that. Maybe you need to change your runtime version or find another way to createCSR.
which: no openssl in (/var/lang/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/bin)

Correct syntax for bash script as userdata for AWS runinstance

I am trying to launch a new EC2 instance through Lambda every time a particular event occurs. I have a bash script that I want to run every time a new EC2 instance launches, and I would like it to be attached using the UserData parameter of runInstances.
I have tested the script and it works well when I launch the instance through the AWS console.
I suspect that my syntax is wrong. I realise that the question is very basic, but I have tried various permutations multiple times and am unable to get it up and running.
function(next) {
    console.log("INITIALIZING EC2");
    var params = {
        ImageId: 'ami-b2c934d2',
        InstanceType: 't2.micro', //'c4.4xlarge',
        MinCount: 1, MaxCount: 1,
        KeyName: 'malpem2102'
        UserData : console.log(new Buffer('#!/bin/bash \n
            sudo apt-get install awscli -y \n
            echo alarm \n
            aws configure set default.region us-west-2 \n
            aws configure set aws_access_key_id AKIAIXXXXXXXXX \n
            aws configure set aws_secret_access_key U2fyRtyakG1kAXXXXXXXXXX \n
            instance=`curl -s http://169.254.169.254/latest/meta-data/instance-id/` \n
            aws cloudwatch put-metric-alarm --alarm-name $instance --alarm-description
                "Terminate the instance when it is idle for 10mins" --namespace "AWS/EC2"
                --dimensions Name=InstanceId,Value=$instance --statistic Average
                --metric-name CPUUtilization --comparison-operator LessThanThreshold
                --threshold 5 --period 120 --evaluation-periods 5 --alarm-actions
                arn:aws:automate:us-west-2:ec2:terminate \n').toString('base64'));
};
When using API calls, you need to send your commands as a base64-encoded text string.
So you need to take the raw commands and base64-encode them before setting them as the UserData param.
For example, if you have the following commands:
#!/bin/bash
curl --silent --location https://rpm.nodesource.com/setup_10.x | sudo bash -
sudo yum install -y nodejs
sudo yum install -y git
git clone https://github.com/user/repo
cd repo
npm i
npm run start
After you encode it to base64 with a tool like Base64 Decode and Encode, you can use it as your UserData param like this:
const params = {
    ImageId: 'ami-b2c934d2',
    InstanceType: 't2.micro', //'c4.4xlarge',
    MinCount: 1, MaxCount: 1,
    KeyName: 'malpem2102',
    UserData: 'IyEvYmluL2Jhc2gNCmN1cmwgLS1zaWxlbnQgLS1sb2NhdGlvbiBodHRwczovL3JwbS5ub2Rlc291cmNlLmNvbS9zZXR1cF8xMC54IHwgc3VkbyBiYXNoIC0NCnN1ZG8geXVtIGluc3RhbGwgLXkgbm9kZWpzDQpzdWRvIHl1bSBpbnN0YWxsIC15IGdpdA0KZ2l0IGNsb25lIGh0dHBzOi8vZ2l0aHViLmNvbS9jb2RlcmFkZS9hd3MtZWMyLWV4YW1wbGVzDQpjZCBoYmZsDQpucG0gaQ0KbnBtIHJ1biBzdGFydA=='
};
You can also encode using a native Javascript approach with the following code:
let commandsString = `#!/bin/bash
curl --silent --location https://rpm.nodesource.com/setup_10.x | sudo bash -
sudo yum install -y nodejs
sudo yum install -y git
git clone https://github.com/user/repo
cd repo
npm i
npm run start`;
// ...then set it in the params (Buffer.from avoids the deprecated Buffer() constructor):
UserData: Buffer.from(commandsString).toString('base64')
For further information, read the documentation of Running Commands on Your Linux Instance at Launch.
I think you just need to remove the console.log instruction from your code; runInstances expects UserData to be the base64 string itself, not the return value of console.log (which is undefined).
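A sketch of the corrected params from the question, with the console.log wrapper removed and the deprecated Buffer() constructor replaced (assumes the AWS SDK for JavaScript v2; the script body is abbreviated and the access keys stay obfuscated as in the question):
const AWS = require('aws-sdk');
const ec2 = new AWS.EC2({ region: 'us-west-2' });

const userDataScript = `#!/bin/bash
sudo apt-get install awscli -y
echo alarm
# ...remaining commands from the question...`;

const params = {
    ImageId: 'ami-b2c934d2',
    InstanceType: 't2.micro',
    MinCount: 1,
    MaxCount: 1,
    KeyName: 'malpem2102',
    // Pass the base64-encoded script itself; do not wrap it in console.log
    UserData: Buffer.from(userDataScript).toString('base64')
};

ec2.runInstances(params, (err, data) => {
    if (err) console.error(err);
    else console.log('Launched instance:', data.Instances[0].InstanceId);
});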

How can I run a Docker container in AWS Elastic Beanstalk with non-default run parameters?

I have a Docker container that runs great on my local development machine. I would like to move this to AWS Elastic Beanstalk, but I am running into a small bit of trouble.
I am trying to mount an S3 bucket to my container by using s3fs. I have the Dockerfile:
FROM tomcat:7.0
MAINTAINER me@example.com
RUN apt-get update
RUN DEBIAN_FRONTEND=noninteractive apt-get install -y build-essential libfuse-dev libcurl4-openssl-dev libxml++2.6-dev libssl-dev mime-support automake libtool wget tar
# Add the java source
ADD . /path/to/tomcat/webapps/
ADD run_docker.sh /root/run_docker.sh
WORKDIR $CATALINA_HOME
EXPOSE 8080
CMD ["/root/run_docker.sh"]
And I install s3fs, mount an S3 bucket, and run the Tomcat server after the image has been created, by running run_docker.sh:
#!/bin/bash
#run_docker.sh
wget https://github.com/s3fs-fuse/s3fs-fuse/archive/master.zip -O /usr/src/master.zip;
cd /usr/src/;
unzip /usr/src/master.zip;
cd /usr/src/s3fs-fuse-master;
autoreconf --install;
CPPFLAGS=-I/usr/include/libxml2/ /usr/src/s3fs-fuse-master/configure;
make;
make install;
cd $CATALINA_HOME;
mkdir /opt/s3-files;
s3fs my-bucket /opt/s3-files;
catalina.sh run
When I build and run this Docker container using the command:
docker run --cap-add mknod --cap-add sys_admin --device=/dev/fuse -p 80:8080 -d username/mycontainer:latest
it works well. Yet, when I remove the --cap-add mknod --cap-add sys_admin --device=/dev/fuse, then s3fs fails to mount my S3 bucket.
Now, I would like to run this on AWS Elastic Beanstalk, and when I deploy the container (and run run_docker.sh), all the steps execute fine, except the step s3fs my-bucket /opt/s3-files in run_docker.sh fails to mount the bucket.
Presumably, this is because whatever Elastic Beanstalk does to run a Docker container, it doesn't add any additional flags like, --cap-add mknod --cap-add sys_admin --device=/dev/fuse.
My Dockerrun.aws.json file looks like:
{
"AWSEBDockerrunVersion": "1",
"Image": {
"Name": "tomcat:7.0"
},
"Ports": [
{
"ContainerPort": "8080"
}
]
}
Is it possible to add additional docker run flags to an AWS EB Docker deployment?
An alternative option is to find another way to mount an S3 bucket, but I suspect I'd run into similar permission errors regardless. Has anyone seen a way to accomplish this?
UPDATE:
For people trying to use @Egor's answer below, it works when the EB configuration is set to use v1.4.0 running Docker 1.6.0. Anything past the v1.4.0 version fails. So to make it work, build your environment as normal (which should give you a failed build), then rebuild it with a v1.4.0 running Docker 1.6.0 configuration. That should do it!
If you are using the latest version of aws docker stack (docker 1.7.1 for example), you'll need to slightly modify the above answer. Try this:
commands:
    00001_add_privileged:
        cwd: /tmp
        command: 'sed -i "s/docker run -d/docker run --privileged -d/" /opt/elasticbeanstalk/hooks/appdeploy/enact/00run.sh'
Notice the change of location and name of the run script.
Add file .ebextensions/01-commands.config
container_commands:
    00001-docker-privileged:
        command: 'sed -i "s/docker run -d/docker run --privileged -d/" /opt/elasticbeanstalk/hooks/appdeploy/pre/04run.sh'
I am also using s3fs
Thanks elijahchancey for the answer, it was very helpful. I would just like to add a small comment:
Elastic Beanstalk now uses ECS tasks to deploy and manage the application cluster. There is a very important paragraph in the Multicontainer Docker Configuration docs (which I originally missed).
The following examples show a subset of parameters that are commonly used. More optional parameters are available. For more information on the task definition format and a full list of task definition parameters, see Amazon ECS Task Definitions in the Amazon ECS Developer Guide.
So the document is not a complete reference; it just shows typical entries, and you are supposed to find the rest elsewhere. This has quite a major impact, because now (2018) you are able to specify more options and you don't need to hack ebextensions any more. The only thing you need to do is use the task parameters in the containerDefinitions of your multi-docker Dockerrun.aws.json.
This is not mentioned for single Docker containers, but one can try and verify...
Example of multi docker Dockerrun.aws.json with extra cap:
{
    "AWSEBDockerrunVersion": 2,
    "containerDefinitions": [
        {
            "name": "service1",
            "image": "myapp/service1:latest",
            "essential": true,
            "memoryReservation": 128,
            "portMappings": [
                {
                    "hostPort": 8080,
                    "containerPort": 8080
                }
            ],
            "linuxParameters": {
                "capabilities": {
                    "add": [
                        "SYS_PTRACE"
                    ]
                }
            }
        }
    ]
}
You can now add capabilities using the task definition. Here are the docs:
https://docs.aws.amazon.com/AmazonECS/latest/developerguide/task_definition_parameters.html
This is specifically what you would add to your task definition:
"linuxParameters": {
"capabilities": {
"add": [
"SYS_PTRACE"
]
}
},
