How to pull signed docker images using dockerode NodeJS API - node.js

I have a Node.js application that performs basic Docker operations such as pulling images and creating, running, starting, and stopping containers. I am using the dockerode library.
I want to enforce that only trusted, signed images can be pulled.
According to the Docker documentation, content trust is enabled by setting the environment variable DOCKER_CONTENT_TRUST=1. This is not feasible for me because I am invoking Docker remotely.
Observation on the command line: without setting DOCKER_CONTENT_TRUST=1, passing the flag --disable-content-trust=false forces only trusted images to be downloaded.
[root@vm ~]# echo $DOCKER_CONTENT_TRUST
[root@vm ~]# docker pull --disable-content-trust=false docker/trusttest
Using default tag: latest
no trust data available
[root@vm ~]#
But this has no effect when called from Node.js using the dockerode API.
Here is the Node.js code:
const Docker = require('dockerode');
const docker = new Docker();

function pullImage(imageId) {
  return new Promise((resolve, reject) => {
    docker.pull(imageId, { "disable-content-trust": "false" }, (err, stream) => {
      if (err) {
        console.error("Docker pull failed for: " + imageId + " error: " + err);
        reject(err);
      } else {
        console.log("Docker image installed: " + imageId);
        resolve(true);
      }
    });
  });
}

pullImage('docker/trusttest').then((v) => {
  console.log("pull image successful", v);
}).catch((ex) => {
  console.error("exception in pull image", ex);
});
This code downloads the image even though disable-content-trust=false.
The question is: am I passing the option parameters to docker.pull correctly?
I can't find the documentation for option parameter values for dockerode.
Any help is much appreciated.
Links:
https://docs.docker.com/engine/security/trust/content_trust/
https://github.com/apocas/dockerode
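For reference, here is a minimal sketch of how pull options are usually passed to dockerode (the registry credentials below are placeholders, not taken from the question). dockerode forwards this object as query parameters to the Engine API's POST /images/create call, and as far as I can tell that API exposes no content-trust switch; --disable-content-trust is enforced by the docker CLI client rather than by the daemon, which would explain why the option has no effect here.

const Docker = require('dockerode');
const docker = new Docker({ socketPath: '/var/run/docker.sock' });

// Sketch of the options dockerode understands for pull, e.g. registry auth.
// There is no documented "disable-content-trust" key at this level.
docker.pull('docker/trusttest:latest', {
  authconfig: {                        // placeholder credentials
    username: 'user',
    password: 'secret',
    serveraddress: 'https://index.docker.io/v1'
  }
}, (err, stream) => {
  if (err) return console.error('pull failed:', err);
  // Wait for the pull to actually finish before using the image.
  docker.modem.followProgress(stream, (err2, output) => {
    if (err2) return console.error('pull failed:', err2);
    console.log('pull complete after', output.length, 'progress events');
  });
});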

Related

Is it possible to install and run docker inside node container in Jenkins?

This is a somewhat complicated situation: I have Jenkins installed inside a Docker container. I'm trying to run some tests for a Node.js app, but the test environment requires docker and docker-compose to be available. At the moment, the Jenkins configuration is done through pipeline code.
So far, I've tried pulling docker inside a stage, as follows:
pipeline {
  agent {
    docker {
      image 'node'
    }
  }
  stages {
    stage("Checkout") {
      steps {
        git url: ....
      }
    }
    stage("Docker") {
      steps {
        script {
          def image = docker.image('docker')
          image.pull()
          image.inside() {
            sh 'docker --version'
            sh 'docker-compose --version'
          }
        }
      }
    }
  }
}
This fails with 'docker: not found'. I was expecting the script to succeed because I've tried exactly the same thing with 'agent any' and had no problem, but inside the node image it doesn't seem to work.
I'm also not sure this is the right approach, because as I understand it, running Docker inside Docker is not recommended. One method I have found is to mount the host's Docker socket when running the container: docker run -v /var/run/docker.sock:/var/run/docker.sock ... However, I am currently running through docker-compose, following the installation steps from https://www.jenkins.io/doc/book/installing/docker/ (instead of individual docker commands, I've combined both jenkins and jenkins-blueocean into a docker-compose file), and that did not work.
At this point I'm out of ideas, and any solutions or suggestions for running both Node.js and Docker in the same environment would be greatly appreciated.
You can try to use the docker-in-docker image: https://hub.docker.com/_/docker

How to build docker image without having to use the sudo keyword

I'm building a node.js app which allows people to run code on my server and I'm using Docker to containerise the user's code so that it can't steal data or in general do something they shouldn't. I have a Docker image template that is copied into the user's personal app directory and I want to build the image using this function I've written:
const util = require("util");
const exec = util.promisify(require("child_process").exec);
async function buildContainer(path, dockerUser) {
  return await exec(`sudo docker build -t user_app_${dockerUser} ${path}`);
}
However, when I go to use it, it requires me to enter my sudo password, as if I were executing it manually in a terminal window.
Is there any way I can run this function without having to include the sudo keyword?
Thanks in advance.
You can use Podman instead of Docker; with Podman you don't need sudo.
Most of the commands are the same as Docker's.
Example:
podman build
podman run
and so on...
Hope that helps :)
Regards
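A different angle (not the Podman answer above): if the Node.js process itself can reach the Docker socket, for example because its user has been added to the docker group (sudo usermod -aG docker $USER), you could build through dockerode's Engine API client instead of shelling out with sudo. A rough sketch, where the file list in src is an assumption about the template directory:

const Docker = require('dockerode');
const docker = new Docker({ socketPath: '/var/run/docker.sock' });

// Builds user_app_<dockerUser> from the template directory at `path`.
// `src` must list the files to send in the build context; adjust to your template.
async function buildContainer(path, dockerUser) {
  const stream = await docker.buildImage(
    { context: path, src: ['Dockerfile'] },
    { t: `user_app_${dockerUser}` }
  );
  return new Promise((resolve, reject) => {
    docker.modem.followProgress(stream, (err, output) => (err ? reject(err) : resolve(output)));
  });
}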

Error: Status Code is 403 (MongoDB's 404) This means that the requested version-platform combination dosnt exist

beforeAll(async () => {
  mongo = new MongoMemoryServer();
  const mongoURI = await mongo.getConnectionString();
  await mongoose.connect(mongoURI, {
    useNewUrlParser: true,
    useUnifiedTopology: true
  });
});
For some reason mongodb-memory-server doesn't work, and it seems to be because it's downloading MongoDB. Wasn't MongoDB supposed to be included with the package? What is the package downloading? How do I prevent mongodb-memory-server from downloading it every time I use it? Is there a way to make it work as intended?
$ npm run test
> auth@1.0.0 test C:\Users\admin\Desktop\projects\react-node-docker-kubernetes-app-two\auth
> jest --watchAll --no-cache
2020-06-06T03:12:45.207Z MongoMS:MongoMemoryServer Called MongoMemoryServer.ensureInstance() method:
2020-06-06T03:12:45.207Z MongoMS:MongoMemoryServer - no running instance, call `start()` command
2020-06-06T03:12:45.207Z MongoMS:MongoMemoryServer Called MongoMemoryServer.start() method
2020-06-06T03:12:45.214Z MongoMS:MongoMemoryServer Starting MongoDB instance with following options: {"port":51830,"dbName":"b67a9bfd-d8af-4d7f-85c7-c2fd37832f59","ip":"127.0.0.1","storageEngine":"ephemeralForTest","dbPath":"C:\\Users\\admin\\AppData\\Local\\Temp\\mongo-mem-205304KB93HW36L9ZD","tmpDir":{"name":"C:\\Users\\admin\\AppData\\Local\\Temp\\mongo-mem-205304KB93HW36L9ZD"},"uri":"mongodb://127.0.0.1:51830/b67a9bfd-d8af-4d7f-85c7-c2fd37832f59?"}
2020-06-06T03:12:45.217Z MongoMS:MongoBinary MongoBinary options: {"downloadDir":"C:\\Users\\admin\\Desktop\\projects\\react-node-docker-kubernetes-app-two\\auth\\node_modules\\.cache\\mongodb-memory-server\\mongodb-binaries","platform":"win32","arch":"ia32","version":"4.0.14"}
2020-06-06T03:12:45.233Z MongoMS:MongoBinaryDownloadUrl Using "mongodb-win32-i386-2008plus-ssl-4.0.14.zip" as the Archive String
2020-06-06T03:12:45.233Z MongoMS:MongoBinaryDownloadUrl Using "https://fastdl.mongodb.org" as the mirror
2020-06-06T03:12:45.235Z MongoMS:MongoBinaryDownload Downloading: "https://fastdl.mongodb.org/win32/mongodb-win32-i386-2008plus-ssl-4.0.14.zip"
2020-06-06T03:14:45.508Z MongoMS:MongoMemoryServer Called MongoMemoryServer.stop() method
2020-06-06T03:14:45.508Z MongoMS:MongoMemoryServer Called MongoMemoryServer.ensureInstance() method:
FAIL src/test/__test___/Routes.test.ts
● Test suite failed to run
Error: Status Code is 403 (MongoDB's 404)
This means that the requested version-platform combination dosnt exist
at ClientRequest.<anonymous> (node_modules/mongodb-memory-server-core/src/util/MongoBinaryDownload.ts:321:17)
Test Suites: 1 failed, 1 total
Tests: 0 total
Snapshots: 0 total
Time: 127.136s
Ran all test suites.
It seems you have the same issue that I had:
https://github.com/nodkz/mongodb-memory-server/issues/316
Specify the binary version in package.json.
E.g.:
"config": {
  "mongodbMemoryServer": {
    "version": "latest"
  }
},
I hope it helps.
For me, "latest" (as in accepted answer) did not work, the latest current version "4.4.1" worked:
"config": {
"mongodbMemoryServer": {
"version": "4.4.1"
}
}
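If you would rather not touch package.json, the 6.x API used in the question also appears to accept the same binary options programmatically; the version and arch below are examples, not required values:

const mongoose = require('mongoose');
const { MongoMemoryServer } = require('mongodb-memory-server');

let mongo;
beforeAll(async () => {
  // binary.version / binary.arch mirror the package.json "config" keys
  mongo = new MongoMemoryServer({ binary: { version: '4.4.1', arch: 'x64' } });
  const mongoURI = await mongo.getConnectionString();
  await mongoose.connect(mongoURI, { useNewUrlParser: true, useUnifiedTopology: true });
});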
For anyone getting the dreaded
Error: Status Code is 403 (MongoDB's 404)
This means that the requested version-platform combination doesn't exist
I found an easy fix.
In the package.json file, we need to add an "arch" to the mongodb-memory-server config:
"config": {
  "mongodbMemoryServer": {
    "debug": "1",
    "arch": "x64"
  }
},
The error occurs because the URL that mongodb-memory-server builds to download a MongoDB binary is wrong or inaccessible.
By adding debug we can now get a console log of the mongodb-memory-server process, and the download should succeed because we changed the arch to one that worked for me. Note: you might need to change the arch depending on your system.
Before adding the arch, I was able to see why it was crashing in the console log here:
MongoMS:MongoBinaryDownloadUrl Using "mongodb-win32-i386-2008plus-ssl-latest.zip" as the Archive String +0ms
MongoMS:MongoBinaryDownloadUrl Using "https://fastdl.mongodb.org" as the mirror +1ms
MongoMS:MongoBinaryDownload Downloading: "https://fastdl.mongodb.org/win32/mongodb-win32-i386-2008plus-ssl-latest.zip" +0ms
MongoMS:MongoMemoryServer Called MongoMemoryServer.stop() method +2s
MongoMS:MongoMemoryServer Called MongoMemoryServer.ensureInstance() method: +0ms
If you notice, it is trying to download "https://fastdl.mongodb.org/win32/mongodb-win32-i386-2008plus-ssl-latest.zip"; if you visit the link you will see it is a broken link, and that is the reason mongodb-memory-server fails to download.
For some reason mongodb-memory-server was defaulting to the i386 arch, which didn't work in my case because the link was broken / inaccessible when I visited it (normally a download should start right away when visiting a link like that).
I was able to configure the correct arch manually in the package.json file. Once I did that, it started downloading the MongoDB binary and ran all my tests with no problem. You will even see a console log of the download showing the correct download link.
You can find your system arch by going to the command prompt and typing
WINDOWS
SET Processor
MAC
uname -a
** EDIT **
The reason I was running into this was that I was running a 32-bit version of Node.js on a 64-bit Windows machine. After installing a 64-bit version of Node.js, I no longer have to specify the arch type in the package.json file.
You can find the architecture of your Node.js by typing in your terminal:
node -p "process.arch"
Status 403 usually means that your IP is restricted by the server (for example, your country may be on a sanctions list, like Iran, Syria, ...).
The best solution for this is to change your DNS to the DNS of a VPN provider.
On Linux, just type:
sudo nano /etc/resolv.conf
and then put your DNS in the nameserver entry.
Try this version: mongodb-memory-server@6.5.1
I found a solution to this problem that worked for me.
I set execute permission on the mongod binary that mongodb-memory-server uses, which is saved under your computer's .cache path or in the node_modules folder.
Just locate the mongod file and make it executable with chmod +x mongod.

pass filePath to dockerfile as variable - nodeJS dockerode Docker

In my case, I am creating a config.json that I need to copy from the host to my container.
I figured out that there is an option to pass args to my Dockerfile.
So the first step is:
1. Create the Dockerfile:
FROM golang
WORKDIR /go/src/app
# here we have the /foo directory
COPY . .
COPY $CONFIG_PATH ./foo/
EXPOSE $PORT
CMD ["./foo/run", "-config", "./foo/config.json"]
As you can see, I have two variables: "$CONFIG_PATH" and "$PORT".
These two variables are dynamic and come from my docker run command.
I need to copy my config file from my host into my container, and I need to run my project with that config.json file.
After building the image, the second step is:
Get the config file path from the user and run the Docker image with these variables.
let configFilePath = '/home/baazz/baaaf/config.json'
let port = "8080"
docker.run('my_image', null, process.stdout, { Env: [`$CONFIG_PATH=${configFilePath}`, `$PORT=${port}`] }).then(data => {
}).catch(err => { console.log(err) })
I am getting this error message when I try to execute my code:
Error opening JSON configuration (./foo/config.json): open
./foo/config.json: no such file or directory . Terminating.
You generally don’t want to COPY configuration files like this in your Docker image. You should be able to docker run the same image in multiple environments without modification.
Instead, you can use the docker run -v option to inject the correct config file when you run the image:
docker run -v $PWD/config-dev.json:/go/src/app/foo/config.json my_image
(The Dockerode home page shows an equivalent Binds option. In Docker Compose, this goes into the per-container volumes:. There’s no requirement that the two paths or file names match.)
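For example, a dockerode sketch of that same bind mount (the host path comes from the question, and the container path is where this Dockerfile expects config.json):

docker.run('my_image', null, process.stdout, {
  HostConfig: {
    // host path : container path, same idea as `docker run -v`
    Binds: ['/home/baazz/baaaf/config.json:/go/src/app/foo/config.json']
  }
}).then(data => {
  console.log('container exited with status', data[0].StatusCode);
}).catch(err => console.log(err));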
Since file paths like this become part of the external interface to how people run your container, you generally don’t want to make them configurable at build time. Pick a fixed path and document that that’s the place to mount your custom config file.

NPM package `pem` doesn't seem to work in AWS lambda NodeJS 10.x (results in OpenSSL error)

When I run the function locally on Node.js 11.7.0 it works, and when I run it in AWS Lambda on Node.js 8.10 it works, but I recently tried to run it in AWS Lambda on Node.js 10.x and get the response below and this error in CloudWatch.
Any thoughts on how to correct this?
Response
{
  "success": false,
  "error": "Error: Could not find openssl on your system on this path: openssl"
}
Cloudwatch Error
ERROR (node:8) [DEP0005] DeprecationWarning: Buffer() is deprecated due to security and usability issues. Please use the Buffer.alloc(), Buffer.allocUnsafe(), or Buffer.from() methods instead.
Function
...
const util = require('util');
const pem = require('pem');
...
return new Promise((fulfill) => {
  require('./certs').get(req, res, () => {
    return fulfill();
  });
}).then(() => {
  const createCSR = util.promisify(pem.createCSR);
  // This seems to be where the issue is coming from
  return createCSR({
    keyBitsize: 1024,
    hash: HASH,
    commonName: id.toString(),
    country: 'US',
    state: 'Maryland',
    organization: 'ABC', // Obfuscated
    organizationUnit: 'XYZ', // Obfuscated
  });
}).then(({ csr, clientKey }) => {
  ...
}).then(async ({ certificate, clientKey }) => {
  ...
}, (err) => {
  return res.status(404).json({
    success: false,
    error: err,
  });
});
...
I've tried with "pem": "^1.14.3" and "pem": "^1.14.2".
I tried the answer documented by @Kris White, but I was not able to get it to work. Each execution resulted in the error Could not find openssl on your system on this path: /opt/openssl. I tried several different paths and approaches, but none worked well. It's entirely possible that I simply didn't copy the OpenSSL executable correctly.
Since I needed a working solution, I used the answer provided by @Wilfred Dittmer. I modified it slightly since I wasn't using Docker. I launched an Amazon Linux 2 server, built OpenSSL on it, transferred the package to my local machine, and deployed it via Serverless.
Create a file named create-openssl-zip.sh with the following contents. The script will create the Lambda Layer OpenSSL package.
#!/bin/bash -x
# This file should be copied to and run inside the /tmp folder
yum update -y
yum install autoconf bison gcc gcc-c++ libcurl-devel libxml2-devel -y
curl -sL http://www.openssl.org/source/openssl-1.1.1d.tar.gz | tar -xvz
cd openssl-1.1.1d
./config --prefix=/tmp/nodejs/openssl --openssldir=/tmp/nodejs/openssl && make && make install
cd /tmp
rm -rf nodejs/openssl/share nodejs/openssl/include
zip -r lambda-layer-openssl.zip nodejs
rm -rf nodejs openssl-1.1.1d
Then, follow these steps:
Open a terminal session in this project's root folder.
Run the following command to upload the Linux bash script.
curl -F "file=#create-openssl-zip.sh" https://file.io
Note: The command above uses the popular tool File.io to copy the script to the cloud temporarily so it can be securely retrieved from the build server.
Note: If curl is not installed on your dev machine, you can also upload the script manually using the File.io website.
Copy the URL for the uploaded file from either the terminal session or the File.io website.
Note: The url will look similar to this example: https://file.io/a1B2c3
Open the AWS Console to the EC2 Instances list.
Launch a new instance with these attributes:
AMI: Amazon Linux 2 AMI (HVM), SSD Volume Type (id: ami-0a887e401f7654935)
Instance Type: t2.micro
Instance Details: (use all defaults)
Storage: (use all defaults)
Tags: Name - 'build-lambda-layer-openssl'
Security Group: 'Create new security group' (use all defaults to ensure Instance will be publicly accessible via SSH over the internet)
When launching the instance and selecting a key pair, be sure to choose a Key Pair from the list to which you have access.
Launch the instance and wait for it to be accessible.
Once the instance is running, use an SSH Client to connect to the instance.
More details on how to open an SSH connection can be found here.
In the SSH terminal session, navigate to the tmp directory by running cd /tmp.
Download the bash script uploaded earlier by running curl {FILE_IO_URL} --output create-openssl-zip.sh.
Note: In the script above, replace FILE_IO_URL with the URL returned from File.io and copied in step 3.
Execute the bash script by running sudo bash ./create-openssl-zip.sh. The script may take a while to complete. You may need to confirm one or more package install prompts.
When the script completes, run the following command to upload the package to File.io: curl -F "file=@lambda-layer-openssl.zip" https://file.io.
Copy the URL for the uploaded file from the terminal session.
In the terminal session on the local development machine, run the following command to download the file: curl {FILE_IO_URL} --output lambda-layer-openssl.zip.
Note: In the script above, replace FILE_IO_URL with the URL returned from File.io and copied in step 13.
Note: If curl is not installed on your dev machine, you can also download the file manually by pasting the copied URL in the address bar of your favorite browser.
Close the SSH session.
In the EC2 Instances list, terminate the build-lambda-layer-openssl EC2 instance since it is not needed any longer.
The OpenSSL Lambda Layer is now ready to be deployed.
For completeness, here is a portion of my serverless.yml file:
functions:
  functionName:
    # ...
    layers:
      - { Ref: OpensslLambdaLayer }

layers:
  openssl:
    name: ${self:provider.stage}-openssl
    description: Contains openssl command line utility for lambdas that need it
    package:
      artifact: 'path\to\lambda-layer-openssl.zip'
    compatibleRuntimes:
      - nodejs10.x
      - nodejs12.x
    retain: false
...and here is how I configured PEM in the code file:
import * as pem from 'pem';
process.env.LD_LIBRARY_PATH = '/opt/nodejs/openssl/lib';
pem.config({
pathOpenSSL: '/opt/nodejs/openssl/bin/openssl',
});
// other code...
I contacted AWS Support about this, and it turns out that the openssl library is still on the Node 10.x image, just not the command line utility. However, it's pretty easy to grab it off a standard AMI and use it as a Lambda layer.
Steps:
Launch an Amazon Linux 2 AMI as an EC2
SSH into the box, or use an SFTP utility to connect to the box
Copy the command line utility for openssl at /usr/bin/openssl somewhere you can work with it locally. In my case I downloaded it to my Mac even though it is a Linux file.
Verify that it's still marked as executable (chmod a+x openssl if necessary if you've downloaded it elsewhere)
Zip up the file
Optional: Upload it to an S3 bucket you can get to
Go to Lambda Layers in the AWS console
Create a new lambda layer. I named mine openssl and used the S3 pointer to the file on S3. You can also upload the zip directly if you have it on a local file system.
Attach the arn provided for the layer to your Lambda function. I use serverless so it was defined in the function setup per their documentation.
In your code, reference openssl as /opt/openssl, or you can avoid pathing it in your code (you may not have an option if it's a package you don't control) by adding /opt to your path, i.e.
process.env['PATH'] = process.env['PATH'] + ':' + process.env['LAMBDA_TASK_ROOT'] + ':/opt';
The layer will have been unzipped for you and because you set it to be executable beforehand, it should just work. The underlying openssl libraries are there, so just copying the cli works just fine.
What you can do is to create a lambda layer with the openssl library.
Using the lambci/lambda:build-nodejs10.x image, you can compile the openssl library and create a zip file from the install. You can then use that zip file as a layer for your lambda.
Create a file called create-openssl-zip.sh and make sure to chmod u+x it.
#!/bin/bash -x
# This file should be run inside the lambci/lambda:build-nodejs10.x container
yum update -y
yum install autoconf bison gcc gcc-c++ libcurl-devel libxml2-devel -y
curl -sL http://www.openssl.org/source/openssl-1.1.1d.tar.gz | tar -xvz
cd openssl-1.1.1d
./config --prefix=/var/task/nodejs/openssl --openssldir=/var/task/nodejs/openssl && make && make install
cd /var/task/
rm -rf nodejs/openssl/share
rm -rf nodejs/openssl/include
zip -r lambda-openssl-layer.zip nodejs
cp lambda-openssl-layer.zip /opt/layer/
Then run:
docker run -it -v `pwd`:/opt/layer lambci/lambda:build-nodejs10.x /opt/layer/create-openssl-zip.sh
This will run the script inside the docker container and when it is done you have a file called lambda-openssl-layer.zip in your current directory.
Upload this zip to an S3 bucket and create a Lambda layer.
On your original lambda, add this layer and modify your code so that the PEM library knows where to look for the OpenSSL library as follows:
PEM.config({
pathOpenSSL: '/opt/nodejs/openssl/bin/openssl'
})
And finally add an extra environment variable to your lambda called LD_LIBRARY_PATH with value /opt/nodejs/openssl/lib
Otherwise it will fail with:
/opt/nodejs/openssl/bin/openssl: error while loading shared libraries: libssl.so.1.1: cannot open shared object file: No such file or directory
The pem npm docs say:
Setting openssl location
In some systems the openssl executable might not be available by the default name or it is not included in $PATH. In this case you can define the location of the executable yourself as a one time action after you have loaded the pem module:
So I think it is not able to find the OpenSSL path on the system; you can try configuring it programmatically:
var pem = require('pem')
pem.config({
pathOpenSSL: '/usr/local/bin/openssl'
})
Since you are using AWS Lambda, just try printing process.env.PATH; you will get an idea of whether openssl is included in the PATH environment variable or not.
You can also check for openssl by running the code below:
const exec = require('child_process').exec;
exec('which openssl', function (err, stdout, stderr) {
  console.log(err ? err : stdout);
});
UPDATE
As @hoangdv mentioned in his answer, openssl seems to have been removed from the node10.x runtime, and I think he is right. Also, we have read-only access to the file system, so we can't do much.
@Seth McClaine, you can give the node-forge npm module a try. One of the modules built on top of it is https://github.com/jfromaniello/selfsigned, which will make your task easier.
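For reference, a rough sketch with the selfsigned package linked above (the attributes and options are illustrative). It produces a self-signed certificate rather than a CSR, but it shows the pure-JavaScript approach, which needs no openssl binary on the Lambda:

const selfsigned = require('selfsigned');

// Pure-JS generation (node-forge under the hood), no external openssl required.
const attrs = [{ name: 'commonName', value: 'example.com' }];
const pems = selfsigned.generate(attrs, { days: 365, keySize: 2048 });
console.log(pems.private);  // PEM-encoded private key
console.log(pems.cert);     // PEM-encoded certificate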
https://github.com/lambci/git-lambda-layer/issues/13#issue-444697784 (announcement email)
It seems openssl has been removed in the nodejs10.x runtime.
I have checked again on the lambci/lambda:build-nodejs10.x docker image and confirmed it. Maybe you need to change your runtime version or find another way to create the CSR.
which: no openssl in (/var/lang/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/opt/bin)
