gitlab.com CI - build a NodeJS app using docker in docker - node.js

I'm currently facing a problem with the gitlab.com shared runners. What I'm trying to achieve in my pipeline is:
- NPM install and using grunt to make some uncss, minimize and compress tasks
- Cleaning up
- Building a docker container with the app included
- Moving the container to gitlab registry
Unfortunately I haven't been able to get it running for a long time! I've tried a lot of different gitlab-ci configs, without success. The problem is that I have to use "image: docker:latest" to have all the docker tools available, but then node and grunt aren't installed in the container.
The other way around doesn't work either. I tried using image: centos:latest and installing docker manually, but that fails too: I always just get Failed to get D-Bus connection: Operation not permitted.
Does anyone have more experience with gitlab-ci running docker build commands on a docker shared runner?
Any help is highly appreciated!!
Thank you
Jannik

GitLab can be a bit tricky :) I don't have an example based on CentOS, but I have one based on Ubuntu, if that helps. Here is a copy-paste of a working GitLab pipeline of mine which uses gulp (you should easily be able to adjust it to work with grunt).
The .gitlab-ci.yml looks like this (adjust the CONTAINER... variables at the beginning):
variables:
  CONTAINER_TEST_IMAGE: registry.gitlab.com/psono/psono-client:$CI_BUILD_REF_NAME
  CONTAINER_RELEASE_IMAGE: registry.gitlab.com/psono/psono-client:latest
stages:
  - build
docker-image:
  stage: build
  image: ubuntu:16.04
  services:
    - docker:dind
  variables:
    DOCKER_HOST: 'tcp://docker:2375'
  script:
    - sh ./var/build-ubuntu.sh
    - docker info
    - docker login -u gitlab-ci-token -p "$CI_BUILD_TOKEN" registry.gitlab.com
    - docker build -t $CONTAINER_TEST_IMAGE .
    - docker push $CONTAINER_TEST_IMAGE
In addition I have this "./var/build-ubuntu.sh", which you can adjust according to your needs, replacing some Ubuntu dependencies or switching gulp for grunt as needed:
#!/usr/bin/env bash
apt-get update && \
apt-get install -y libfontconfig zip nodejs npm git apt-transport-https ca-certificates curl openssl software-properties-common && \
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add - && \
add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable" && \
apt-get update && \
apt-get install -y docker-ce && \
ln -s /usr/bin/nodejs /usr/bin/node && \
npm install && \
node --version && \
npm --version
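For comparison, here is a leaner sketch of the same idea that stays on the official docker image and pulls node from Alpine's package manager instead of installing docker into Ubuntu. The package names, the grunt-cli install and the registry image path are assumptions you would need to adapt to your project:

build:
  stage: build
  image: docker:latest
  services:
    - docker:dind
  script:
    # nodejs/npm come from the Alpine repos of the docker image
    - apk add --no-cache nodejs npm
    - npm install
    - npm install -g grunt-cli
    - grunt            # replace with your uncss/minify/compress tasks
    - docker login -u gitlab-ci-token -p "$CI_BUILD_TOKEN" registry.gitlab.com
    - docker build -t registry.gitlab.com/<group>/<project>:latest .
    - docker push registry.gitlab.com/<group>/<project>:latest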

Related

Unable to run aliyun-cli in Docker:stable container after installing it. Errors as command not found

I am unsure if Stack Overflow or Server Fault is the right Stack Exchange site, but I'm going with Stack Overflow because the Alicloud site said to add a tag and ask the question here.
So: I'm currently building an image based on docker:stable, which is an Alpine distro, that will have aliyun-cli installed and available for use. However I am getting a weird "command not found" error when I run it. I have followed the guide here https://partners-intl.aliyun.com/help/doc-detail/139508.htm and moved the aliyun binary to /usr/sbin.
Here is my Dockerfile for example
FROM docker:stable
RUN apk update && apk add curl
#Install python 3
RUN apk update && apk add python3 py3-pip
#Install AWS Cli
RUN pip3 install awscli --upgrade
# Install Aliyun CLI
RUN curl -L -o aliyun-cli.tgz https://aliyuncli.alicdn.com/aliyun-cli-linux-3.0.30-amd64.tgz
RUN tar -xzvf aliyun-cli.tgz
RUN mv aliyun /usr/bin
RUN chmod +x /usr/bin/aliyun
RUN rm aliyun-cli.tgz
However, when I run aliyun (which can be tab-completed) I get this:
/ # aliyun
sh: aliyun: not found
I've tried moving it to other bin directories, and cd'ing into the folder and calling it explicitly, but I still always get "command not found". Any suggestions would be welcome.
Did you check this Dockerfile?
Also, why do you need to install aws-cli in the same image, and why maintain it yourself when AWS provides a managed aws-cli image?
docker run --rm -it amazon/aws-cli --version
That's it for the aws-cli image, but if you want it in an existing image you can try
RUN pip install awscli --upgrade
Dockerfile
FROM python:2-alpine3.8
LABEL com.frapsoft.maintainer="Maik Ellerbrock" \
      com.frapsoft.version="0.1.0"
ARG SERVICE_USER
ENV SERVICE_USER ${SERVICE_USER:-aliyun}
RUN apk add --no-cache curl
RUN curl https://raw.githubusercontent.com/ellerbrock/docker-collection/master/dockerfiles/alpine-aliyuncli/requirements.txt > /tmp/requirements.txt
RUN \
    adduser -s /sbin/nologin -u 1000 -H -D ${SERVICE_USER} && \
    apk add --no-cache build-base && \
    pip install aliyuncli && \
    pip install --no-cache-dir -r /tmp/requirements.txt && \
    apk del build-base && \
    rm -rf /tmp/*
USER ${SERVICE_USER}
WORKDIR /usr/local/bin
ENTRYPOINT [ "aliyuncli" ]
CMD [ "--help" ]
build and run
docker build -t aliyuncli .
docker run -it --rm aliyuncli
output
docker run -it --rm abc aliyuncli
usage: aliyuncli <command> <operation> [options and parameters]
<aliyuncli> the valid command as follows:
batchcompute | bsn
bss | cms
crm | drds
ecs | ess
ft | ocs
oms | ossadmin
ram | rds
risk | slb
ubsms | yundun
After a lot of searching I found a GitHub issue on the official aliyun-cli repo that describes how it is not compatible with Alpine Linux because it is not musl-libc compatible.
Link here: https://github.com/aliyun/aliyun-cli/issues/54
Following the workarounds there, I built a multi-stage Dockerfile with the following, which fixed my issue.
Dockerfile
#Build aliyun-cli binary ourselves because of issue
#in alpine https://github.com/aliyun/aliyun-cli/issues/54
FROM golang:1.13-alpine3.11 as cli_builder
RUN apk update && apk add curl git make
RUN mkdir /srv/aliyun
WORKDIR /srv/aliyun
RUN git clone https://github.com/aliyun/aliyun-cli.git
RUN git clone https://github.com/aliyun/aliyun-openapi-meta.git
ENV GOPROXY=https://goproxy.cn
WORKDIR aliyun-cli
RUN make deps; \
    make testdeps; \
    make build;
FROM docker:19
#Install python 3 & jq
RUN apk update && apk add python3 py3-pip python3-dev jq
#Install AWS Cli
RUN pip3 install awscli --upgrade
# Install Aliyun CLI from builder
COPY --from=cli_builder /srv/aliyun/aliyun-cli/out/aliyun /usr/bin
RUN aliyun configure set --profile default --mode EcsRamRole --ram-role-name build --region cn-shanghai
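As a side note, the misleading "not found" message is what the shell prints on Alpine when a dynamically linked glibc binary is missing its ELF interpreter, which matches the musl incompatibility described in that issue. A quick way to check a suspect binary (a sketch; the path assumes the binary sits in /usr/bin, and file/ldd may first need apk add file musl-utils):

file /usr/bin/aliyun    # shows the ELF interpreter the binary expects (e.g. a glibc loader)
ldd /usr/bin/aliyun     # lists the loader and shared libraries it needs; missing ones explain the error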

Install nodejs and npm in Dockerfile

The context
I have a Dockerfile to create an image that contains an Apache webserver. However, I also want to build my website using the Dockerfile, so that the build process isn't dependent on the developer's local environment. Note that the docker container is only going to be used for local development, not for production.
The problem
I have this Dockerfile:
FROM httpd
RUN apt-get update -yq
RUN apt-get -yq install curl gnupg
RUN curl -sL https://deb.nodesource.com/setup_12.x | bash
RUN apt-get update -yq
RUN apt-get install -yq \
    dh-autoreconf=19 \
    ruby=1:2.5.* \
    ruby-dev=1:2.5.* \
    nodejs
I build it:
sudo docker build --no-cache .
The build completes successfully, here is part of the output:
Step 9/15 : RUN curl -sL https://deb.nodesource.com/setup_12.x | bash
---> Running in e6c747221ac0
......
......
......
Removing intermediate container 5a07dd0b1e01
---> 6279003c1e80
Successfully built 6279003c1e80
However, when I run the image in a container using this:
sudo docker container run --rm -it --name=debug 6279003c1e80 /bin/bash
Then, when running apt-cache policy inside the container, it doesn't show the repository that should have been added by the curl command. Also, apt-cache policy nodejs shows that the old version is installed.
However when I then run the following inside the container:
curl -sL https://deb.nodesource.com/setup_12.x | bash
apt-cache policy
apt-cache policy nodejs
It shows me the repository is added and it shows the newer nodejs version is available.
So why is it that when using the curl command using RUN inside the docker file it doesn't seem to work, but when doing it manually in the container from a shell then it does work? And how can I get around this problem?
Updates
Note that to prevent caching issues I am using the --no-cache flag.
I also removed all containers and did sudo docker system prune and rebuild the image but without success.
I tried bundling everything in one RUN command as user "hmm" suggested (as this is best practice for apt commands):
RUN apt-get update -yq \
    && apt-get -yq install curl gnupg \
    && curl -sL https://deb.nodesource.com/setup_12.x | bash \
    && apt-get update -yq \
    && apt-get install -yq \
        dh-autoreconf=19 \
        ruby=1:2.5.* \
        ruby-dev=1:2.5.* \
        nodejs \
    && rm -rf /var/lib/apt/lists/*
You're likely running into issues with cached layers. There's a long section in the Dockerfile best practices documentation on using apt-get. Probably worth a read.
The gist is that Docker doesn't recognize any difference between the first and second RUN apt-get update, nor does it know that apt-get install depends on a fresh apt-get update layer.
The solution is to combine all of that into a single RUN command (recommended) or disable the cache during the build process (docker build --no-cache).
RUN apt-get update -yq \
    && apt-get -yq install curl gnupg ca-certificates \
    && curl -L https://deb.nodesource.com/setup_12.x | bash \
    && apt-get update -yq \
    && apt-get install -yq \
        dh-autoreconf=19 \
        ruby=1:2.5.* \
        ruby-dev=1:2.5.* \
        nodejs
Edit: Running your Dockerfile locally, I noticed no output from the curl command. After removing the -s flag (silent mode), you can see it's failing because it can't verify the server's SSL certificate:
curl: (60) SSL certificate problem: unable to get local issuer certificate
More details here: https://curl.haxx.se/docs/sslcerts.html
curl failed to verify the legitimacy of the server and therefore could not
establish a secure connection to it. To learn more about this situation and
how to fix it, please visit the web page mentioned above.
The solution to that issue is to install ca-certificates before running curl. I've updated the RUN command above.
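Once the image builds with the combined RUN, you can confirm the right Node version landed without opening a shell. The website-build tag below is just a hypothetical name for the built image:

docker build --no-cache -t website-build .
docker run --rm website-build node -v                   # should print a v12.x version
docker run --rm website-build apt-cache policy nodejs   # should list the nodesource repository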

Can't launch chrome in docker linux container

I have an asp.net core application that uses the jsreport nuget packages to run reports. I am attempting to deploy it with a linux docker container. I seem to be having trouble getting chrome to launch when I run a report. I am getting the error:
Failed to launch chrome! Running as root without --no-sandbox is not supported.
I have followed the directions on the .net local reporting page (https://jsreport.net/learn/dotnet-local) regarding docker, but I am still getting the error.
Here is my full docker file:
#use the .net core 2.1 runtime default image
FROM microsoft/dotnet:2.1-aspnetcore-runtime
#set the working directory to the server
WORKDIR /server
#copy all contents in the current directory to the container server directory
COPY . /server
#install node
RUN apt-get update -yq \
    && apt-get install curl gnupg -yq \
    && curl -sL https://deb.nodesource.com/setup_8.x | bash \
    && apt-get install nodejs -yq
#install jsreport-cli
RUN npm install jsreport-cli -g
#install chrome for jsreport linux
RUN apt-get update && \
    apt-get install -y gnupg libgconf-2-4 wget && \
    wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add - && \
    sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list' && \
    apt-get update && \
    apt-get install -y google-chrome-unstable --no-install-recommends
ENV chrome:launchOptions:executablePath google-chrome-unstable
ENV chrome:launchOptions:args --no-sandbox
#expose port 80
EXPOSE 80
CMD dotnet Server.dll
Is there another step that I am missing somewhere?
It's a little late, but maybe it can help someone else.
For me, the only thing needed to fix this issue in the docker container was to run Chrome in headless mode (so the cause was in the tests, not in the Dockerfile).
ChromeOptions options = new ChromeOptions().setHeadless(true);
WebDriver driver = new ChromeDriver(options);
Results: Now tests run successfully, without any errors.
Expanding on Pramod's answer, my own issues were only solved by running with both the --headless and --no-sandbox flags.
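In terms of the ChromeOptions snippet above, the two flags combined look roughly like this (a sketch against the same Selenium Java setup):

// headless because the container has no display; --no-sandbox because Chrome
// refuses to start its sandbox when running as root inside the container
ChromeOptions options = new ChromeOptions();
options.addArguments("--headless", "--no-sandbox");
WebDriver driver = new ChromeDriver(options);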

Installing specific version of node.js and npm in ubuntu image with Dockerfile

I would like to know how can update my custom Dockerfile to install Node v6.3.1 and NPM v3.10.6 without breaking what is already in there.
Currently this is my custom file:
FROM ubuntu:16.10
MAINTAINER Fátima Alves
COPY . /my-software
WORKDIR /my-software
RUN apt-get update \
    && \
    apt-get install -y \
        python-dev \
        tesseract-ocr
Thanks!
Update
Currently my dockerfile is like this:
FROM ubuntu:16.10
MAINTAINER Fátima Alves
COPY ./dist /my-software
COPY ./s3-config.json /my-software
COPY ./_* /my-software
COPY ./node_modules /my-software
WORKDIR /dataextractor
RUN apt-get update \
    && \
    apt-get install -y \
        curl
RUN curl -sL https://deb.nodesource.com/setup_6.x | bash - \
    && apt-get install -y nodejs
And is returning:
The command '/bin/sh -c curl -sL https://deb.nodesource.com/setup_6.x | bash - && apt-get install -y nodejs' returned a non-zero code: 1
Perhaps I'm missing something?
You can just follow the usual Ubuntu install instructions, only inside the RUN statement of your Dockerfile:
RUN curl -sL https://deb.nodesource.com/setup_6.x | bash - \
    && apt-get install -y nodejs
Docs
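Since the question asks for Node v6.3.1 and npm v3.10.6 specifically, note that you can pin an exact package version from the NodeSource repo with apt. This is a sketch; the placeholder has to be replaced with a real version string reported by the first command, and npm ships bundled with the NodeSource nodejs package:

apt-cache madison nodejs                            # list the exact versions the repo provides
apt-get install -y nodejs=<version-from-madison>    # pin one of them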
WHY
Because https://nodejs.org/en/download/package-manager/#debian-and-ubuntu-based-linux-distributions suggests doing the "curl pipe bash" anti-pattern, let's try to make that cleaner.
WHAT
Since containers are built from a known OS and version, we don't need the universality of that bash script.
HOW
If we examine https://deb.nodesource.com/setup_6.x closely, we see that it really only does two things for Debian:
1. Add their public key to apt's keychain via apt-key add
2. Add their deb repo to a newly created file /etc/apt/sources.list.d/nodesource.list
Adding sources
The 2nd thing we can do really easily: simply put this in your Dockerfile:
COPY nodesource.list /etc/apt/sources.list.d/nodesource.list
Of course you'll need to create nodesource.list with content like:
deb https://deb.nodesource.com/node_6.x trusty main
deb-src https://deb.nodesource.com/node_6.x trusty main
Adding a trusted key
The 1st thing is a bit trickier to do "cleanly". I would rather add a keychain file to /etc/apt/trusted.gpg.d/ than modify the existing /etc/apt/trusted.gpg file (which is what apt-key add would do).
What they have at https://deb.nodesource.com/gpgkey/nodesource.gpg.key is a public key, not a keychain. To get a keychain file, we can pipe it (not to apt-key, but to gpg) like so:
curl -s https://deb.nodesource.com/gpgkey/nodesource.gpg.key | \
    gpg --import --no-default-keyring --keyring ./nodesource.gpg
That creates nodesource.gpg which we can utilize by putting this in your Dockerfile:
COPY nodesource.gpg /etc/apt/trusted.gpg.d/nodesource.gpg
Install as usual
The odd spacing and \-terminated lines are what I use because I tend to have a lot of additional packages to install.
# Install software packages
ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update -qq && apt-get clean
RUN apt-get install -qqy \
        nodejs \
    && \
    apt-get clean
You can see the complete Dockerfile at https://gist.github.com/RichardBronosky/f748563dc328b12b39cd864973fcb138#file-dockerfile
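Condensed, the pieces fit together roughly like this (a sketch; it assumes nodesource.list and nodesource.gpg were generated next to the Dockerfile as described above):

FROM ubuntu:16.10
# repo definition and trusted key prepared outside the image (see the steps above)
COPY nodesource.list /etc/apt/sources.list.d/nodesource.list
COPY nodesource.gpg /etc/apt/trusted.gpg.d/nodesource.gpg
ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update -qq \
    && apt-get install -qqy nodejs \
    && apt-get clean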

docker - cannot find aws credentials in container although they exist

Running the following docker command on Mac works, but on Linux (running Ubuntu) it cannot find the AWS CLI credentials. It returns the following message: Unable to locate credentials
Completed 1 part(s) with ... file(s) remaining
The command runs an image, mounts a data volume, copies a file from an S3 bucket, and then starts a bash shell in the docker container:
sudo docker run -it --rm -v ~/.aws:/root/.aws username/docker-image sh -c 'aws s3 cp s3://bucketname/filename.tar.gz /home/emailer && cd /home/emailer && tar zxvf filename.tar.gz && /bin/bash'
What am I missing here?
This is my Dockerfile:
FROM ubuntu:latest
#install node and npm
RUN apt-get update && \
    apt-get -y install curl && \
    curl -sL https://deb.nodesource.com/setup | sudo bash - && \
    apt-get -y install python build-essential nodejs
#install and set-up aws-cli
RUN sudo apt-get -y install \
    git \
    nano \
    unzip && \
    curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip" && \
    unzip awscli-bundle.zip
RUN sudo ./awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws
# Provides cached layer for node_modules
ADD package.json /tmp/package.json
RUN cd /tmp && npm install
RUN mkdir -p /home/emailer && cp -a /tmp/node_modules /home/emailer/
Mounting $HOME/.aws/ into the container should work. Make sure to mount it as read-only.
It is also worth mentioning that if you have several profiles in your ~/.aws/config, you must also provide the AWS_PROFILE=somethingsomething environment variable, e.g. via docker run -e AWS_PROFILE=xxx ..., otherwise you'll get the same error message (unable to locate credentials).
Update: Added example of the mount command
docker run -v ~/.aws:/root/.aws …
You can use environment variables instead of copying the ~/.aws/credentials and config files into the container for aws-cli:
docker run \
    -e AWS_ACCESS_KEY_ID=AXXXXXXXXXXXXE \
    -e AWS_SECRET_ACCESS_KEY=wXXXXXXXXXXXXY \
    -e AWS_DEFAULT_REGION=us-west-2 \
    <image>
Ref: AWS CLI Doc
What do you see if you run
ls -l ~/.aws/config
within your docker instance?
The only solution that worked for me in this case is:
volumes:
  - ${USERPROFILE}/.aws:/root/.aws:ro
There are a few things that could be wrong. One, as mentioned previously you should check if your ~/.aws/config file is set accordingly. If not, you can follow this link to set it up. Once you have done that you can map the ~/.aws folder using the -v flag on docker run.
If your ~/.aws folder is mapped correctly, make sure to check the permissions on the files under ~/.aws so that they are able to be accessed safely by whatever process is trying to access them. If you are running as the user process, simply running chmod 444 ~/.aws/* should do the trick. This will give full read permissions to the file. Of course, if you want write permissions you can add whatever other modifiers you need. Just make sure the read octal is flipped for your corresponding user and/or group.
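As a quick check of both the permissions and the mount, something like this works (a sketch; your-image is a placeholder, and sts get-caller-identity only succeeds if the credentials are readable inside the container):

chmod 444 ~/.aws/*    # read-only for everyone, so the container user can read them
docker run --rm -it -v ~/.aws:/root/.aws:ro your-image aws sts get-caller-identity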
The issue I had was that I was running Docker as root. When running as root it was unable to locate my credentials at ~/.aws/credentials, even though they were valid.
Directions for running Docker without root on Ubuntu are here: https://askubuntu.com/a/477554/85384
You just have to pass the profile name via AWS_PROFILE; if you do not pass anything it will use the default, but if you want you can copy the default and add your desired credentials.
In your credentials file:
[profile_dev]
aws_access_key_id = xxxxxxxxxxxxxxxxxxxxxxxx
aws_secret_access_key = xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
output = json
region = eu-west-1
In your docker-compose.yml:
version: "3.8"
services:
  cenas:
    container_name: cenas_app
    build: .
    ports:
      - "8080:8080"
    environment:
      - AWS_PROFILE=profile_dev
    volumes:
      - ~/.aws:/app/home/.aws:ro
