This is a follow-up to my previous question: Azure Container Instance Container Memory Consumption.
I am creating a Node.js-based image on which I install the latest Chrome and ChromeDriver, then run a Node.js cron job that uses Selenium WebDriver for testing on a one-minute interval. This runs in an Azure Container Instance (ACI), which is the simplest way to run containers in Azure.
My challenge is that Docker containers in ACI run with 64 MB of /dev/shm by default, which causes Chrome failures due to the relatively low amount of memory. Chrome provides a --disable-dev-shm-usage flag, but running with it creates a memory leak that I can't seem to figure out how to prevent. How can I best address this for my container in ACI, please?
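For reference, on a plain Docker host the usual fix is to enlarge /dev/shm at run time rather than disable it; whether ACI exposes an equivalent setting is exactly what's in question here:
# standard Docker workaround (outside ACI): give Chrome a bigger /dev/shm
docker run --shm-size=1g "<some tag>"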
Dockerfile
# 1) Build from this Dockerfile's directory:
# docker build -t "<some tag>" -f Dockerfile .
# 2) Start the image (e.g. in Docker)
# 3) Observe that the button's value is printed.
# ---------------------------------------------------------------------------------------------
# 1) Use the official NodeJS base image
FROM node:latest
# 2) Install latest stable Chrome
# https://gerg.dev/2021/06/making-chromedriver-and-chrome-versions-match-in-a-docker-image/
RUN echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" | \
tee -a /etc/apt/sources.list.d/google.list && \
wget -q -O - https://dl.google.com/linux/linux_signing_key.pub | \
apt-key add - && \
apt-get update && \
apt-get install -y google-chrome-stable libxss1
# 3) Install the Chromedriver version that corresponds to the installed major Chrome version
# https://blogs.sap.com/2020/12/01/ui5-testing-how-to-handle-chromedriver-update-in-docker-image/
RUN google-chrome --version | grep -oE "[0-9]+\.[0-9]+\.[0-9]+" > /tmp/chromebrowser-main-version.txt
RUN wget --no-verbose -O /tmp/latest_chromedriver_version.txt https://chromedriver.storage.googleapis.com/LATEST_RELEASE_$(cat /tmp/chromebrowser-main-version.txt)
RUN wget --no-verbose -O /tmp/chromedriver_linux64.zip https://chromedriver.storage.googleapis.com/$(cat /tmp/latest_chromedriver_version.txt)/chromedriver_linux64.zip && \
    rm -rf /opt/selenium/chromedriver && \
    unzip /tmp/chromedriver_linux64.zip -d /opt/selenium && \
    rm /tmp/chromedriver_linux64.zip && \
    mv /opt/selenium/chromedriver /opt/selenium/chromedriver-$(cat /tmp/latest_chromedriver_version.txt) && \
    chmod 755 /opt/selenium/chromedriver-$(cat /tmp/latest_chromedriver_version.txt) && \
    ln -fs /opt/selenium/chromedriver-$(cat /tmp/latest_chromedriver_version.txt) /usr/bin/chromedriver
# 4) Set the variable for the container working directory, create and set the working directory
ARG WORK_DIRECTORY=/program
RUN mkdir -p $WORK_DIRECTORY
WORKDIR $WORK_DIRECTORY
# 5) Install npm packages (do this AFTER setting the working directory)
COPY package.json .
RUN npm config set unsafe-perm true
RUN npm i
ENV NODE_ENV=development NODE_PATH=$WORK_DIRECTORY
# 6) Copy script to execute to working directory
COPY runtest.js .
EXPOSE 8080
# 7) Execute the script in NodeJS
CMD ["node", "runtest.js"]
runtest.js
const { Builder, By } = require('selenium-webdriver');
const { Options } = require('selenium-webdriver/chrome');
const cron = require('node-cron');
cron.schedule('*/1 * * * *', async () => await main());
async function main() {
let driver;
try {
//Browser Setup
let options = new Options()
.headless() // run headless Chrome
.excludeSwitches(['enable-logging']) // disable 'DevTools listening on...'
.addArguments([
// no-sandbox is not an advised flag due to security but eliminates "DevToolsActivePort file doesn't exist" error
'no-sandbox',
// Docker containers run with 64 MB of dev/shm by default, which causes Chrome failures
// Disabling dev/shm uses tmp, which solves the problem but appears to result in memory leaks
'disable-dev-shm-usage'
]);
driver = await new Builder().forBrowser('chrome').setChromeOptions(options).build();
// Navigate to Google and get the "Google Search" button text.
await driver.get('https://www.google.com');
let btnText = await driver.findElement(By.name('btnK')).getAttribute('value');
log(`Google button text: ${btnText}`);
} catch (e) {
log(e);
} finally {
if (driver) {
await driver.close(); // helps chromedriver shut down cleanly and delete the "scoped_dir" temp directories that eventually fill up the hard drive.
await driver.quit();
driver = null;
log(' Closed and quit the driver, then set to null.');
} else {
log(' *** No driver to close and quit ***');
}
}
}
function log(msg) {
console.log(`${new Date()}: ${msg}`);
}
UPDATE
Interestingly, memory consumption seems to stabilize once it reaches a certain level. The container is allocated 2 GB of memory, and I don't see crashes in my app logs, so this seems functional overall.
Related
I am trying to run Python code from VS Code (Visual Studio Code) using Remote Containers for Docker. I have installed the Remote - Containers, Docker, and Remote - SSH extensions and created a Dockerfile. Below is the Dockerfile:
FROM opt_image:latest
ARG USERNAME=abc
ARG USER_UID=444
ARG USER_GID=$USER_UID
RUN groupadd --gid $USER_GID $USERNAME \
&& useradd --uid $USER_UID --gid $USER_GID -m $USERNAME \
&& apt-get update \
&& apt-get install -y sudo \
&& echo $USERNAME ALL=\(root\) NOPASSWD:ALL > /etc/sudoers.d/$USERNAME \
&& chmod 0440 /etc/sudoers.d/$USERNAME
USER ${USERNAME}
Below my devcontainer.json file:
{
"name": "Python 3",
"build": {
"dockerfile": "Dockerfile",
"context": "..",
"args": {
// Update 'VARIANT' to pick a Python version: 3, 3.6, 3.7, 3.8, 3.9
"VARIANT": "3.7",
// Options
"NODE_VERSION": "14"
}
},
// Set *default* container specific settings.json values on container create.
"settings": {
"python.pythonPath": "/usr/local/bin/python",
"python.languageServer": "Pylance",
"python.linting.enabled": true,
"python.linting.pylintEnabled": true,
"python.formatting.autopep8Path": "/usr/local/py-utils/bin/autopep8",
"python.formatting.blackPath": "/usr/local/py-utils/bin/black",
"python.formatting.yapfPath": "/usr/local/py-utils/bin/yapf",
"python.linting.banditPath": "/usr/local/py-utils/bin/bandit",
"python.linting.flake8Path": "/usr/local/py-utils/bin/flake8",
"python.linting.mypyPath": "/usr/local/py-utils/bin/mypy",
"python.linting.pycodestylePath": "/usr/local/py-utils/bin/pycodestyle",
"python.linting.pydocstylePath": "/usr/local/py-utils/bin/pydocstyle",
"python.linting.pylintPath": "/usr/local/py-utils/bin/pylint"
},
// Add the IDs of extensions you want installed when the container is created.
"extensions": [
"ms-python.python",
"ms-python.vscode-pylance"
],
// Use 'forwardPorts' to make a list of ports inside the container available locally.
"forwardPorts": [8080],
// Use 'postCreateCommand' to run commands after the container is created.
// "postCreateCommand": "pip3 install --user -r requirements.txt",
"containerEnv": {"CHOKIDAR_USEPOLLING":"true"},
// Comment out to connect as root instead. More info: https://aka.ms/vscode-remote/containers/non-root.
"remoteUser": "abc"
}
When I run the Remote-Containers: Rebuild and Reopen in Container command from Visual Studio Code, I get the error below:
Stop (1032 ms): Downloading VS Code Server [2021-08-26T10:55:41.031Z]
Error: certificate signature failure
    at TLSSocket.onConnectSecure (_tls_wrap.js:1497:34)
    at TLSSocket.emit (events.js:315:20)
    at TLSSocket._finishInit (_tls_wrap.js:932:8)
    at TLSWrap.ssl.onhandshakedone (_tls_wrap.js:706:12)
I have installed VS Code Server manually and started it from systemd using the commands below:
systemctl enable code-server
systemctl status code-server
It's enabled properly; below is the status:
Loaded: loaded (/etc/systemd/system/code-server.service; enabled; vendor preset: enabled)
Active: active (running) since Thu 2021-08-26 05:52:58 CDT; 27s ago
But I still get the same issue. Is there something I am doing wrong? Could you please help me with this?
I am trying to push a custom Docker image (not C/C#) to an Azure IoT Edge device from Azure IoT Hub. The Docker image runs without exiting when run manually, e.g. docker run -itd is perfectly fine. When the module is published via IoT Hub, it continually shows a status of "backoff" and is always restarting. The full code of the Dockerfile is as follows:
FROM alpine
RUN apk -U -u add sqlite && \
mkdir -p /db && \
rm -rf /var/lib/apt/lists/*
#RUN /usr/bin/sqlite3 /db/arf.sqlite
CMD /bin/sh
The custom create options are as follows:
{
"Env": [],
"HostConfig": {
"Binds": [
"/work:/db"
]
}
}
There are no specific module twin settings, hence I am passing:
{}
I am attaching a screen shot that (hopefully) explains this better.
I figured this out. When running manually, I was running with the -itd flags to run in daemonized mode. When publishing via IoT Hub, it ran the /bin/sh specified in CMD and exited.
Cruel Workaround:
Add a run.sh that just does nothing, as shown below. I hate this solution, but it works.
#!/bin/sh
while :; do
  sleep 1000
done
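The Dockerfile's CMD then has to invoke the script instead of the bare shell; a minimal sketch, assuming run.sh sits next to the Dockerfile:
COPY run.sh /run.sh
RUN chmod +x /run.sh
CMD ["/bin/sh", "/run.sh"]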
What would be nice:
Is it possible to specify anywhere in the IoT module metadata to run in daemon mode, so that the edge device can pass -d when starting the module?
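For what it's worth, the createOptions blob is passed through to the Docker container-create API, so one hedged alternative to the sleep loop is to allocate a TTY so /bin/sh doesn't exit (untested here; Tty and OpenStdin are standard Docker create options):
{
  "Env": [],
  "HostConfig": {
    "Binds": [
      "/work:/db"
    ]
  },
  "Tty": true,
  "OpenStdin": true
}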
While running terraform init using Terraform 0.11.3, we are getting the following error:
Initializing provider plugins...
- Checking for available provider plugins on https://releases.hashicorp.com...
Error installing provider "template": Get
https://releases.hashicorp.com/terraform-provider-template/: read tcp
172.25.77.25:53742->151.101.13.183:443: read: connection reset by peer.
Terraform analyses the configuration and state and automatically
downloads plugins for the providers used. However, when attempting to
download this plugin an unexpected error occured.
This may be caused if for some reason Terraform is unable to reach the
plugin repository. The repository may be unreachable if access is
blocked by a firewall.
If automatic installation is not possible or desirable in your
environment, you may alternatively manually install plugins by
downloading a suitable distribution package and placing the plugin's
executable file in the following directory:
terraform.d/plugins/linux_amd64
I realized it's because of connectivity issues with the https://releases.hashicorp.com domain. For obvious reasons, we will have to live with this connectivity issue, as there are SSL and firewall issues between the control server and HashiCorp's servers.
Is there any way we could bypass this by downloading the plugins from Hashicorp's servers and copying them onto the control server? Or any other alternative to avoid trying to download things from Hashicorp's servers?
You can use pre-installed plugins by either putting the plugins in the same directory as the terraform binary or by setting the -plugin-dir flag.
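For the -plugin-dir route, usage looks like this (the directory path is illustrative):
terraform init -plugin-dir=/usr/local/share/terraform/plugins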
It's also possible to build a bundle of every provider you need automatically using the terraform-bundle tool.
I run Terraform in our CI pipeline in a Docker container so have a Dockerfile that looks something like this:
FROM golang:alpine AS terraform-bundler-build
RUN apk --no-cache add git unzip && \
go get -d -v github.com/hashicorp/terraform && \
go install ./src/github.com/hashicorp/terraform/tools/terraform-bundle
COPY terraform-bundle.hcl .
RUN terraform-bundle package terraform-bundle.hcl && \
mkdir -p terraform-bundle && \
unzip -d terraform-bundle terraform_*.zip
####################
FROM python:alpine
RUN apk add --no-cache git make && \
pip install awscli
COPY --from=terraform-bundler-build /go/terraform-bundle/* /usr/local/bin/
Note that the finished container image also adds git, make and the AWS CLI as I also require those tools in the CI jobs that uses this container.
The terraform-bundle.hcl then looks something like this (taken from the terraform-bundle README):
terraform {
# Version of Terraform to include in the bundle. An exact version number
# is required.
version = "0.10.0"
}
# Define which provider plugins are to be included
providers {
# Include the newest "aws" provider version in the 1.0 series.
aws = ["~> 1.0"]
# Include both the newest 1.0 and 2.0 versions of the "google" provider.
# Each item in these lists allows a distinct version to be added. If the
# two expressions match different versions then _both_ are included in
# the bundle archive.
google = ["~> 1.0", "~> 2.0"]
# Include a custom plugin to the bundle. Will search for the plugin in the
# plugins directory, and package it with the bundle archive. Plugin must have
# a name of the form: terraform-provider-*, and must be built for the operating
# system and architecture that Terraform Enterprise is running on, e.g. linux and amd64
customplugin = ["0.1"]
}
Configure plugin_cache_dir in .terraformrc:
plugin_cache_dir = "$HOME/.terraform.d/plugin-cache"
Then move the pre-installed providers into the plugin_cache_dir; Terraform will not download them anymore.
By the way, using the ~/.terraform.d/plugin directory doesn't work:
~/.terraform.d/plugin/linux_amd64$ terraform -v
Terraform v0.12.15
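A minimal sketch of that move, assuming a provider binary was already fetched on a machine with access (the file name and linux_amd64 sub-layout are illustrative and may vary by Terraform version):
mkdir -p "$HOME/.terraform.d/plugin-cache/linux_amd64"
cp terraform-provider-template_v2.1.2_x4 "$HOME/.terraform.d/plugin-cache/linux_amd64/"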
The proper way to handle this since Terraform 0.14, as also discussed on the terraform-bundle page mentioned in the currently accepted answer, is to use terraform providers mirror as described at https://www.terraform.io/cli/commands/providers/mirror. This command creates all the necessary index files etc., so the folder can be used for plugins. E.g.:
$ cd your-tf-root-module
$ terraform providers mirror path/to/tf-plugins
...
$ terraform init --plugin-dir path/to/tf-plugins
...
You can cd to each of your root modules (i.e. those that have Terraform state) and run the mirror command; multiple versions of a plugin may be installed there, and that's OK. When you run the terraform init command, it will fetch the proper one, the same as without the --plugin-dir arg.
So the only difference is that the internet is not used to acquire the plugins; terraform init gets them from the plugin folder.
This is also very useful for creating a cache that can then be used by Terraform in CI/CD. E.g. in CircleCI you would have a manual job that calls mirror and does a save-cache; your automated terraform init job would restore-cache and use the --plugin-dir arg; then the automated terraform apply job would behave as usual.
Starting with the 0.13.2 release of Terraform, you can download plugins from a local web server/HTTP server via the network mirror protocol.
For more details, check this link.
It expects a .terraformrc file in $HOME, pointing to the provider path of the plugins as shown below. If the file is in a different directory, you can provide the path with the TERRAFORM_CONFIG env var.
provider_installation {
network_mirror {
url = "https://terraform-plugins.example.net/providers/"
}
}
Then you define the providers in a custom .tf file like below.
providers.tf:
terraform {
required_providers {
azurerm = {
source = "registry.terraform.io/example/azurerm"
}
openstack = {
source = "registry.terraform.io/example/openstack"
}
null = {
source = "registry.terraform.io/example/null"
}
random = {
source = "registry.terraform.io/example/random"
}
local = {
source = "registry.terraform.io/example/local"
}
}
}
However, you have to upload the plugin file in .zip format, along with index.json and the <version>.json files, for Terraform to discover the version of the plugin to download.
Example index.json containing the version of the plugin:
{
"versions": {
"2.3.0": {}
}
}
Next, 2.3.0.json (the <version>.json in this case) contains hashes of the plugin file:
{
"archives": {
"linux_amd64": {
"hashes": [
"h1:nFL6uiwsQFLiP8QCr35sPfWe9LpXI3/c7gP9tYnih+k="
],
"url": "terraform-provider-random_2.3.0_linux_amd64.zip"
}
}
}
How do you get the details for the index.json and <version>.json files?
By running terraform providers in the directory containing the .tf files. Note that the machine running this command needs to connect to the public Terraform registry; Terraform will download the information for these files. If you have many different Terraform configurations, it makes sense to automate these steps; otherwise, you can do it manually :)
Upon terraform init, Terraform downloads the plugins from the above web server rather than from the Terraform registry. Make sure you don't use the -plugin-dir argument with terraform init, as it will override all the changes you made.
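If you have many root modules, one way to automate producing the index.json and <version>.json files is the terraform providers mirror subcommand (available since Terraform 0.13), run once per root module; the stacks/ layout below is hypothetical:
# mirror every root module's providers into one shared web root
for module in stacks/*/; do
  (cd "$module" && terraform providers mirror /srv/terraform-plugins)
done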
Updated Dockerfile for @ydaetskcoR's solution, because terraform-bundle currently doesn't work with 0.12.x (the problem was fixed in 0.12.2 but reappeared in 0.12.18):
FROM hashicorp/terraform:0.12.18 as terraform-provider
COPY provider.tf .
RUN terraform init && \
mv .terraform/plugins/linux_amd64/terraform-provider* /bin/
FROM hashicorp/terraform:0.12.18
# Install terraform pre-installed plugins
COPY --from=terraform-provider /bin/terraform-provider* /bin/
And here is the content of provider.tf
provider "template" { version = "~>2.1.2" }
provider "aws" { version = "~>2.15.0" }
...
This took me a while; I had the same problem. I ended up having to download from source and use the image that this spits out. It's nasty, but it does what I need it to do to work with the Google provider.
FROM golang:alpine AS terraform-bundler-build
ENV TERRAFORM_VERSION=0.12.20
ENV GOOGLE_PROVIDER=3.5.0
RUN apk add --update --no-cache git make tree bash curl
ENV GOPATH=/go
RUN mkdir -p $GOPATH/src/github.com/terraform-providers
RUN cd $GOPATH/src/github.com/terraform-providers && curl -sLO https://github.com/terraform-providers/terraform-provider-google-beta/archive/v$GOOGLE_PROVIDER.tar.gz
RUN cd $GOPATH/src/github.com/terraform-providers && tar xvzf v$GOOGLE_PROVIDER.tar.gz && mv terraform-provider-google-beta-$GOOGLE_PROVIDER terraform-provider-google-beta
RUN cd $GOPATH/src/github.com/terraform-providers/terraform-provider-google-beta && pwd && make build
RUN cd $GOPATH/src/github.com/terraform-providers && curl -sLO https://github.com/terraform-providers/terraform-provider-google/archive/v$GOOGLE_PROVIDER.tar.gz
RUN cd $GOPATH/src/github.com/terraform-providers && tar xvzf v$GOOGLE_PROVIDER.tar.gz && mv terraform-provider-google-$GOOGLE_PROVIDER terraform-provider-google
RUN cd $GOPATH/src/github.com/terraform-providers/terraform-provider-google && pwd && make build
RUN mkdir -p $GOPATH/src/github.com/hashicorp
RUN cd $GOPATH/src/github.com/hashicorp && curl -sLO https://github.com/hashicorp/terraform/archive/v$TERRAFORM_VERSION.tar.gz
RUN cd $GOPATH/src/github.com/hashicorp && tar xvzf v$TERRAFORM_VERSION.tar.gz && mv terraform-$TERRAFORM_VERSION terraform
RUN cd $GOPATH/src/github.com/hashicorp/terraform && go install ./tools/terraform-bundle
ENV TF_DEV=false
ENV TF_RELEASE=true
COPY my-build.sh $GOPATH/src/github.com/hashicorp/terraform/scripts/
RUN cd $GOPATH/src/github.com/hashicorp/terraform && /bin/bash scripts/my-build.sh
ENV HOME=/root
COPY terraformrc $HOME/.terraformrc
RUN mkdir -p $HOME/.terraform.d/plugin-cache
########################################
FROM alpine:3
ENV HOME=/root
RUN ["/bin/sh", "-c", "apk add --update --no-cache bash ca-certificates curl git jq openssh"]
RUN ["bin/sh", "-c", "mkdir -p /src"]
COPY --from=terraform-bundler-build /go/bin/terraform* /bin/
RUN mkdir -p /root/.terraform.d/plugins/linux_amd64
COPY --from=terraform-bundler-build /root/.terraform.d/ $HOME/.terraform.d/
RUN cp /bin/terraform-provider-google $HOME/.terraform.d/plugin-cache/linux_amd64
RUN cp /bin/terraform-provider-google-beta $HOME/.terraform.d/plugin-cache/linux_amd64
COPY terraformrc $HOME/.terraformrc
COPY provider.tf $HOME/
COPY backend.tf $HOME/
# For Testing (This should be echoed or taken care of in the CI pipeline)
#COPY google.json $HOME/.google.json
WORKDIR $HOME
ENTRYPOINT ["/bin/bash"]
.terraformrc:
plugin_cache_dir = "$HOME/.terraform.d/plugins/linux_amd64"
disable_checkpoint = true
provider.tf
# Define which provider plugins are to be included
provider "google" {
credentials = ".google.json"
}
provider "google-beta" {
credentials = ".google.json"
}
my-build.sh
#!/usr/bin/env bash
#
# This script builds the application from source for multiple platforms.
# Get the parent directory of where this script is.
SOURCE="${BASH_SOURCE[0]}"
while [ -h "$SOURCE" ] ; do SOURCE="$(readlink "$SOURCE")"; done
DIR="$( cd -P "$( dirname "$SOURCE" )/.." && pwd )"
# Change into that directory
cd "$DIR"
echo "DIR=$DIR"
# Get the git commit
GIT_COMMIT=$(git rev-parse HEAD)
GIT_DIRTY=$(test -n "`git status --porcelain`" && echo "+CHANGES" || true)
# Determine the arch/os combos we're building for
XC_ARCH=${XC_ARCH:-"amd64 arm"}
XC_OS=${XC_OS:-linux}
XC_EXCLUDE_OSARCH="!darwin/arm !darwin/386"
mkdir -p bin/
# If it's dev mode, only build for ourselves
if [[ -n "${TF_DEV}" ]]; then
XC_OS=$(go env GOOS)
XC_ARCH=$(go env GOARCH)
# Allow LD_FLAGS to be appended during development compilations
LD_FLAGS="-X main.GitCommit=${GIT_COMMIT}${GIT_DIRTY} $LD_FLAGS"
fi
if ! which gox > /dev/null; then
echo "==> Installing gox..."
go get -u github.com/mitchellh/gox
fi
# Instruct gox to build statically linked binaries
export CGO_ENABLED=0
# In release mode we don't want debug information in the binary
if [[ -n "${TF_RELEASE}" ]]; then
LD_FLAGS="-s -w"
fi
# Ensure all remote modules are downloaded and cached before build so that
# the concurrent builds launched by gox won't race to redundantly download them.
go mod download
# Build!
echo "==> Building..."
gox \
-os="${XC_OS}" \
-arch="${XC_ARCH}" \
-osarch="${XC_EXCLUDE_OSARCH}" \
-ldflags "${LD_FLAGS}" \
-output "pkg/{{.OS}}_{{.Arch}}/${PWD##*/}" \
.
## Move all the compiled things to the $GOPATH/bin
GOPATH=${GOPATH:-$(go env GOPATH)}
case $(uname) in
CYGWIN*)
GOPATH="$(cygpath $GOPATH)"
;;
esac
OLDIFS=$IFS
IFS=: MAIN_GOPATH=($GOPATH)
IFS=$OLDIFS
#
# Create GOPATH/bin if it doesn't exist
if [ ! -d $MAIN_GOPATH/bin ]; then
echo "==> Creating GOPATH/bin directory..."
mkdir -p $MAIN_GOPATH/bin
fi
# Copy our OS/Arch to the bin/ directory
DEV_PLATFORM="./pkg/$(go env GOOS)_$(go env GOARCH)"
if [[ -d "${DEV_PLATFORM}" ]]; then
for F in $(find ${DEV_PLATFORM} -mindepth 1 -maxdepth 1 -type f); do
cp ${F} bin/
cp ${F} ${MAIN_GOPATH}/bin/
ls -alrt ${MAIN_GOPATH}/bin/
echo "MAIN_GOPATH=${MAIN_GOPATH}"
done
fi
backend.tf
terraform {
backend "gcs" {
bucket = "my-terraform-bucket"
prefix = "terraform/state"
credentials = ".google.json"
}
required_version = "v0.12.20"
}
You can use pre-installed plugins either by putting the plugin binaries in the same directory as the Terraform binary or by setting the -plugin-dir flag.
By default, all plugins are downloaded into the .terraform folder. For example, the null resource plugin will be available at the location below:
.terraform\providers\registry.terraform.io\hashicorp\null\3.0.0\windows_amd64
Create a new folder like "terraform-plugins" inside the Terraform directory and copy all content, including the registry.terraform.io folder mentioned in the above example, into the created folder.
Now run the terraform init command with the -plugin-dir flag, specifying the complete directory path:
terraform init -plugin-dir="/terraform-plugins"
I'm using gcsfuse to mount a volume in a container, and I need it to be mounted before my Node.js application starts.
To mount the volume I'm using the lifecycle hooks of Kubernetes, but they don't ensure that the hook runs before the entrypoint of my container.
I've been thinking about how I should check when the volume is mounted, and whether it goes down. To detect mounts and unmounts, I read /proc/mounts, search for the volume's entry, and add a watcher to the file for changes.
Is there a simpler way to ensure that the volume is mounted, in Node.js, Docker, or Kubernetes?
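For what it's worth, here is a minimal sketch of the /proc/mounts polling approach described above; the mount point path is an assumption:
// check-mount.js: poll /proc/mounts for a gcsfuse mount point
const fs = require('fs');

const MOUNT_POINT = '/mnt/tmp'; // assumed: wherever gcsfuse mounts the bucket

function isMounted() {
  // /proc/mounts has one mount per line: "<device> <mountpoint> <fstype> ..."
  return fs.readFileSync('/proc/mounts', 'utf8')
    .split('\n')
    .some(line => line.split(' ')[1] === MOUNT_POINT);
}

// Poll on an interval; stat-based file watching is unreliable on procfs
setInterval(() => {
  console.log(isMounted() ? 'volume mounted' : 'volume NOT mounted');
}, 1000);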
You can run this Dockerfile in privileged mode:
FROM ubuntu
RUN echo "deb http://packages.cloud.google.com/apt gcsfuse-stretch main" | tee /etc/apt/sources.list.d/gcsfuse.list
RUN apt-get update && apt install curl -y
RUN curl https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
RUN apt-get update
RUN apt-get install gcsfuse fuse -y
RUN mkdir -p /mnt/tmp
CMD gcsfuse [BUCKET NAME] /mnt/tmp && /bin/bash
This way you are sure that the bucket is mounted when the pod initializes.
On the other hand, I do not recommend this approach, as there is a Node.js library for Google Cloud Storage [1].
Here is an example of bucket listing:
// Imports the Google Cloud client library
const Storage = require('@google-cloud/storage');
// Creates a client
const storage = new Storage();
// Lists all buckets in the current project
storage
.getBuckets()
.then(results => {
const buckets = results[0];
console.log('Buckets:');
buckets.forEach(bucket => {
console.log(bucket.name);
});
})
.catch(err => {
console.error('ERROR:', err);
});
[1] https://github.com/googleapis/nodejs-storage/tree/master/samples
I want to be able to run node inside a docker container, and then be able to run docker stop <container>. This should stop the container on SIGTERM rather than timing out and doing a SIGKILL. Unfortunately, I seem to be missing something, and the information I have found seems to contradict other bits.
Here is a test Dockerfile:
FROM ubuntu:14.04
RUN apt-get update && apt-get install -y curl
RUN curl -sSL http://nodejs.org/dist/v0.11.14/node-v0.11.14-linux-x64.tar.gz | tar -xzf -
ADD test.js /
ENTRYPOINT ["/node-v0.11.14-linux-x64/bin/node", "/test.js"]
Here is the test.js referred to in the Dockerfile:
var http = require('http');
var server = http.createServer(function (req, res) {
console.log('exiting');
process.exit(0);
}).listen(3333, function (err) {
console.log('pid is ' + process.pid)
});
I build it like so:
$ docker build -t test .
I run it like so:
$ docker run --name test -p 3333:3333 -d test
Then I run:
$ docker stop test
Whereupon the SIGTERM apparently doesn't work, causing it to time out 10 seconds later and then die.
I've found that if I start the node task through sh -c then I can kill it with ^C from an interactive (-it) container, but I still can't get docker stop to work. This contradicts comments I've read saying sh doesn't pass the signal on, but might agree with other comments saying that PID 1 doesn't get SIGTERM (since node is started via sh, it'll be PID 2).
The end goal is to be able to run docker start -a ... in an upstart job and be able to stop the service and have it actually exit the container.
My way to do this is to catch SIGINT (interrupt signal) in my JavaScript.
process.on('SIGINT', () => {
console.info("Interrupted");
process.exit(0);
})
This should do the trick when you press Ctrl+C.
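Note that docker stop sends SIGTERM, not SIGINT, and a process running as PID 1 does not get the kernel's default signal dispositions, so for the docker stop scenario you will likely also want an explicit SIGTERM handler:
process.on('SIGTERM', () => {
  console.info('Received SIGTERM, shutting down');
  process.exit(0);
});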
Ok, I figured out a workaround myself, which I'll venture as an answer in the hope it helps others. It doesn't completely answer why the signals weren't working before, but it does give me the behaviour I want.
Using baseimage-docker seems to solve the issue. Here's what I did to get this working with the minimal test example above:
Keep test.js as is.
Modify Dockerfile to look like the following:
FROM phusion/baseimage:0.9.15
# disable SSH
RUN rm -rf /etc/service/sshd /etc/my_init.d/00_regen_ssh_host_keys.sh
# install curl and node as before
RUN apt-get update && apt-get install -y curl
RUN curl -sSL http://nodejs.org/dist/v0.11.14/node-v0.11.14-linux-x64.tar.gz | tar -xzf -
# the baseimage init process
CMD ["/sbin/my_init"]
# create a directory for the runit script and add it
RUN mkdir /etc/service/app
ADD run.sh /etc/service/app/run
# install the application
ADD test.js /
baseimage-docker includes an init process (/sbin/my_init) which handles starting other processes and reaping zombie processes. It uses runit for service supervision. The Dockerfile therefore sets the my_init process as the command to run on boot, and adds a script under /etc/service for runit to pick up.
The run.sh script is simple:
#!/bin/sh
exec /node-v0.11.14-linux-x64/bin/node /test.js
Don't forget to chmod +x run.sh!
By default, runit will automatically restart the service if it goes down.
Following these steps (and building, running, and stopping as before), the container properly responds to requests to shut down, in a timely fashion.