Access private git repos via npm install in a Docker container - node.js

I am setting up a Docker container that pulls private repos from GitHub as part of its build. At the moment I am using an access token that I pass from the command line (this will change once the build is triggered via Jenkins).
docker build -t my-container --build-arg GITHUB_API_TOKEN=123456 .
# Dockerfile
# Env Vars
ARG GITHUB_API_TOKEN
ENV GITHUB_API_TOKEN=${GITHUB_API_TOKEN}
RUN git clone https://${GITHUB_API_TOKEN}@github.com/org/my-repo
This works fine and seems like a secure way of doing it? (Though I still need to verify that GITHUB_API_TOKEN is only available at build time.)
I am looking to find out how people deal with SSH keys or access tokens when running npm install and dependencies are pulled from GitHub:
"devDependencies": {
"my-repo": "git#github.com:org/my-repo.git",
"electron": "^1.7.4"
}
At the moment I cannot pull this repo, as I get the error Please make sure you have the correct access rights because I have no SSH keys set up in this container.

Use the multi-stage build approach.
Your Dockerfile should look something like this:
FROM alpine/git as base_clone
ARG GITHUB_API_TOKEN
WORKDIR /opt
RUN git clone https://${GITHUB_API_TOKEN}@github.com/org/my-repo
FROM <whatever>
COPY --from=base_clone /opt/my-repo /opt
...
...
...
Build:
docker build -t my-container --build-arg GITHUB_API_TOKEN=123456 .
The GitHub API token won't be present in the final image.
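As a quick sanity check (using the image tag from the build command above), the layer history of the final image should show no trace of the token, since the clone happened in the discarded base_clone stage:
docker history my-container
None of the listed layer commands should mention GITHUB_API_TOKEN.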

Docker secrets are a thing, but they're only available to containers that are part of a Docker swarm. They are meant for handling things like SSH keys. You could do as the documentation suggests and create a swarm of one to use this feature.
docker-compose also supports secrets, though I haven't used them with Compose.
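For reference, a minimal sketch of what file-based secrets look like in a Compose file (service name, image, and key file path are placeholders, not from the question):
version: "3.1"
secrets:
  ssh_key:
    file: ./id_rsa
services:
  app:
    image: my-app
    secrets:
      - ssh_key
The secret is then mounted at /run/secrets/ssh_key inside the container.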

Related

npm login via a node docker container

I'm trying to dockerize a project. I've got a few services as containers, i.e. Redis, Postgres, RabbitMQ, and Node. I have a docker-compose.yml that has all the services needed.
In my node build Dockerfile:
FROM node:16
ARG PAT
WORKDIR /app
COPY package.json .
COPY .npmrc /root/.npmrc
RUN npm install
COPY . .
WORKDIR /app/project1
RUN npm install
WORKDIR /app/project2
RUN npm install
The above fails because, within project2, I have a private GitHub package that needs authentication. I have generated a PAT, and I can run npm login --scope=@OWNER --registry=https://npm.pkg.github.com, enter the correct credentials, and then run npm install, which successfully fetches the package that needed authenticating.
Is there a way to automate this via docker-compose/Dockerfile? Somehow add the token, owner, username, etc. to the .yml file and use that to log in?
My node services in my docker-compose.yml:
node:
  container_name: node
  build:
    context: ..
    dockerfile: ./docker/build/node/Dockerfile
    args:
      PAT: TOKEN
  ports:
    - 3150:3150
As I see it, you need your credentials during the build phase of your image. You can do it as follows.
Create a .npmrc in your docker context
@OWNER:registry=https://npm.pkg.github.com/
//npm.pkg.github.com/:_authToken=${PAT}
user.email=email@example.com
user.name=foo bar
and copy that file in the Dockerfile
FROM node:16-alpine
ARG PAT
COPY --chown=node .npmrc /home/node/.npmrc
and then, during the image build, set the value of the PAT build argument from the GITHUB_PAT environment variable of the host.
docker build --build-arg PAT=${GITHUB_PAT} .
i.e. --build-arg sets the variable at build time of the image. Be aware that anything set via --build-arg is only available while the image is being built; it is not available when the container is running. But then again, you don't seem to need it at the runtime of the container, as the installation of your npm packages happens during the build of the image.
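For completeness, a rough sketch of how the rest of such a Dockerfile could continue; the user switch, paths, and final cleanup are my assumptions, so adjust them to your project layout:
FROM node:16-alpine
ARG PAT
COPY --chown=node .npmrc /home/node/.npmrc
WORKDIR /home/node/app
RUN chown node:node /home/node/app
# npm reads ~/.npmrc of the user running the install, so switch to the node user
USER node
COPY --chown=node package.json .
# ${PAT} in .npmrc is expanded from the build arg while npm runs
RUN npm install
COPY --chown=node . .
# optional: don't keep the token reference around in the final image
RUN rm -f /home/node/.npmrc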

Absolute path gitlab project

I have a self-managed GitLab instance, and one of my projects has a folder containing 3 sub-directories; each of these sub-directories has a Dockerfile.
All my Dockerfiles have a grep command to get the latest version from the CHANGELOG.md, which is located in the root directory.
I tried something like this to go up two levels, but it doesn't work (grep: ../../CHANGELOG.md: No such file or directory).
Dockerfile:
RUN grep -m 1 '^## v.*$' "../../CHANGELOG.md"
Example:
Link: https://mygitlab/project/images/myproject
repo content:
.
├── build
│   ├── image1
│   ├── image2
│   └── image3
└── CHANGELOG.md
gitlab-ci.yaml
script:
- docker build --network host -t $VAL_IM ./build/image1
- docker push $VAL_IM
The issue is happening when I build the images.
docker build --network host -t $VAL_IM ./build/image1
Here, you have set the build context to ./build/image1. Builds cannot access directories or files outside of the build context. Also keep in mind that a RUN in a docker build can only access files that have already been copied into the container (and, as stated, you can't copy files from outside the build context!), so this doesn't quite make sense as stated.
If you're committed to this versioning strategy, what you probably want to do is perform your grep command as part of your GitLab job before calling docker build and pass in the version as a build arg.
In your Dockerfile, add an ARG:
FROM <base-image>
ARG version
# now you can use the version in the build... eg:
LABEL com.example.version="$version"
RUN echo version is "$version"
Then your GitLab job might be like:
script:
- version=$(grep -m 1 '^## v.*$' "./CHANGELOG.md")
- docker build --build-arg version="${version}" --network host -t $VAL_IM ./build/image1
- docker push $VAL_IM

Does GCP Cloud Build Docker remove files created during the Dockerfile execution?

I have a build step in the Dockerfile that generates some files. Since I also need those files locally (when testing), the generation happens not in Cloud Build itself but in the Dockerfile (a simple Node script executed via npx). Locally this works perfectly fine and my Docker image does contain those generated files. But whenever I throw this Dockerfile into Cloud Build, it executes the script but does not keep the generated files in the resulting image. I also scanned the logs and so on but found no error (such as a permission error or something similar).
Is there any flag or something I am missing here that prevents my Dockerfile from generating those files and storing them into the image?
Edit:
The deployment pipeline is a trigger on a GitHub pull request that runs the cloudbuild.yaml in which the docker build command is located. Afterwards the image is pushed to Artifact Registry and to Cloud Run. On Cloud Run itself the files are gone. I can't inspect the steps in between, but when building locally the files are generated and persist in the image.
Dockerfile
FROM node:16
ARG ENVIRONMENT
ARG GOOGLE_APPLICATION_CREDENTIALS
ARG DISABLE_CLOUD_LOGGING
ARG DISABLE_CONSOLE_LOGGING
ARG GIT_ACCESS_TOKEN
WORKDIR /usr/src/app
COPY ./*.json ./
COPY ./src ./src
COPY ./build ./build
ENV ENVIRONMENT="${ENVIRONMENT}"
ENV GOOGLE_APPLICATION_CREDENTIALS="${GOOGLE_APPLICATION_CREDENTIALS}"
ENV DISABLE_CLOUD_LOGGING="${DISABLE_CLOUD_LOGGING}"
ENV DISABLE_CONSOLE_LOGGING="${DISABLE_CONSOLE_LOGGING}"
ENV PORT=8080
RUN git config --global url."https://${GIT_ACCESS_TOKEN}#github.com".insteadOf "ssh://git#github.com"
RUN npm install
RUN node ./build/generate-files.js
RUN rm -rf ./build
EXPOSE 8080
ENTRYPOINT [ "node", "./src/index.js" ]
Cloud Build config (the steps before and after are just the normal deploy-to-Cloud-Run steps)
...
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  args: [ '-c', 'docker build --build-arg ENVIRONMENT=${_ENVIRONMENT} --build-arg DISABLE_CONSOLE_LOGGING=true --build-arg GIT_ACCESS_TOKEN=$$GIT_ACCESS_TOKEN -t location-docker.pkg.dev/$PROJECT_ID/atrifact-registry/docker-image:${_ENVIRONMENT} ./' ]
  secretEnv: ['GIT_ACCESS_TOKEN']
...
I figured it out. Somehow the build process does not fail when a RUN statement crashes. This led me to think there was no problem, when in fact it could not authorize my generation script. Adding --network=cloudbuild to the docker build command fixed the authorization problem.
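For reference, the fix amounts to adding the flag inside the docker build command of the Cloud Build step, roughly like this (the other --build-arg flags from the original step are omitted for brevity):
- name: 'gcr.io/cloud-builders/docker'
  entrypoint: 'bash'
  args: [ '-c', 'docker build --network=cloudbuild --build-arg GIT_ACCESS_TOKEN=$$GIT_ACCESS_TOKEN -t location-docker.pkg.dev/$PROJECT_ID/atrifact-registry/docker-image:${_ENVIRONMENT} ./' ]
  secretEnv: ['GIT_ACCESS_TOKEN']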

How to publish .Net core docker application from Windows to Linux machine?

I created a .Net core application with Linux docker support using Visual Studio 2017 on a Windows 10 PC with Docker for Windows installed. I can use the following command to run it (a console application)
docker run MyApp
I have another Linux machine with Docker installed. How to publish the .Net core application to the Linux machine? I need to publish and run the dockerized application on the Linux machine.
The Linux machine has the following Docker packages installed.
$ sudo yum list installed "*docker*"
Loaded plugins: amazon-id, rhui-lb, search-disabled-repos
Installed Packages
docker-engine.x86_64 17.05.0.ce-1.el7.centos @dockerrepo
docker-engine-selinux.noarch 17.05.0.ce-1.el7.centos @dockerrepo
There are many ways to do this; just search for any CI/CD tool.
The easiest way is to do it manually: connect to your Linux server, do a git pull of the code, and then run the same commands that you run locally.
Another option is to push your Docker image to a container registry, then do a pull on your Docker server and you are ready to go.
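A minimal sketch of that registry route, with placeholder registry and image names (adjust to your own registry and tags):
# on the Windows machine
docker tag MyApp myregistry/myapp:latest
docker push myregistry/myapp:latest
# on the Linux server
docker pull myregistry/myapp:latest
docker run myregistry/myapp:latest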
Edit:
You should really take a look at some CI service. For example, in our environment we use GitLab: when we push to master, a .gitlab-ci.yml builds the project and then pushes the image:
image: docker:latest
services:
  - docker:dind
stages:
  - build
api:
  variables:
    IMAGE_NAME: git.lagersoft.com:4567/gumbo/vtae/api:${CI_BUILD_REF}
  stage: build
  only:
    - master
  script:
    - docker build -t ${IMAGE_NAME} -f vtae.api/Dockerfile .
    - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN ${IMAGE_NAME}
    - docker push ${IMAGE_NAME}
With this, we only need to do a pull on our server to get the latest version.
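On the server that pull is then something like the following (the tag is whatever ${CI_BUILD_REF} was for the build you want; credentials are placeholders):
docker login git.lagersoft.com:4567
docker pull git.lagersoft.com:4567/gumbo/vtae/api:<ci_build_ref>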
It's worth noting that Docker by itself does not handle the publication part, so you need to do it manually or with some tool (any CI tool like GitLab, Jenkins, CircleCI, AWS CodePipeline...). If you are just starting out, I would recommend starting manually and then integrating some CI tool.
Edit 2
About the Visual Studio tooling, I would not recommend using it for anything other than local development, since it only works on Windows and only in Visual Studio (Rider added integration only very recently). So, to deploy in a Linux environment we use our own Docker and docker-compose files; they are based on the defaults anyway and look something like this:
FROM microsoft/aspnetcore:2.0 AS base
WORKDIR /app
EXPOSE 80
FROM microsoft/aspnetcore-build:2.0 AS build
WORKDIR /src
COPY lagersoft.common/lagersoft.common.csproj lagersoft.common/
COPY vtae.redirect/vtae.redirect.csproj vtae.redirect/
COPY vtae.data/vtae.data.csproj vtae.data/
COPY vtae.common/vtae.common.csproj vtae.common/
RUN dotnet restore vtae.redirect/vtae.redirect.csproj
COPY . .
WORKDIR /src/vtae.redirect
RUN dotnet build vtae.redirect.csproj -c Release -o /app
FROM build AS publish
RUN dotnet publish vtae.redirect.csproj -c Release -o /app
FROM base AS final
WORKDIR /app
COPY --from=publish /app .
ENTRYPOINT ["dotnet", "vtae.redirect.dll"]
This Dockerfile copies all the related projects (I hate the copying part, but it's the same thing Microsoft does in their default file), then builds and publishes the app. On top of that we have a docker-compose file to add some services (these files must be in the solution folder so they can access all the related projects):
version: '3.4'
services:
  vtae.redirect.redis:
    image: redis
    volumes:
      - "./volumes/redirect/redis/data:/data"
    container_name: vtae.redirect.redis
  vtae.redirect:
    image: vtae.redirect
    depends_on:
      - vtae.redirect.redis
    build:
      context: .
      dockerfile: vtae.redirect/Dockerfile
    ports:
      - "8080:80"
    volumes:
      - "./volumes/redirect/data:/data"
    container_name: vtae.redirect
    entrypoint: dotnet /app/vtae.redirect.dll
With these parts in place, all that is left is to do a commit, then a pull on the server, and run docker-compose up to start our app (you could do it from the Dockerfile directly, but it is easier and more manageable with docker-compose).
Edit 3
To make the deployment to the server we use two tools.
First, GitLab CI runs after the commit is done.
It makes the build specified in the Dockerfile and pushes it to our GitLab container registry; it would be the same with the container registry of Amazon, Google, Azure, etc.
Then it makes a POST request to the production server, which is running a special tool on a separate port.
The server receives the POST request and validates it; for this we use this tool (a friend is the repo owner).
The script receives the request, checks the login, and if it is valid, it simply pulls from our GitLab container registry and runs docker-compose up.
Notes
The tool is not perfect. We are moving from plain Docker to Kubernetes, where you can connect to your cluster directly from your machine or from some CI integration and do the deploys directly. No matter what solution you choose, I recommend you start looking at how Kubernetes can help you; sadly it is one more layer to learn, but it is very promising: you will be able to publish to almost any cloud or bare metal painlessly, with fallbacks, scaling, and other goodies.
Also
If you do not want to or cannot use a container registry (I strongly recommend that route), you can use the same tool; in the .sh that it executes, just do a git pull and then a docker build or docker-compose.
The simplest scenario would be to create a script yourself where you SSH to the server, upload the files as a zip, and then run it on the server, roughly as sketched below. Remember, Ubuntu is in the Microsoft Store and could run this script, but the other solutions are more independent and scalable, so make your choice!
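A bare-bones version of such a script could look like this (host, paths, and archive name are placeholders):
# run from the Windows machine (e.g. from the Ubuntu shell mentioned above)
scp myapp.zip user@my-linux-server:/opt/myapp/
ssh user@my-linux-server "cd /opt/myapp && unzip -o myapp.zip && docker-compose up -d --build"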

How to specify Docker private registry credentials in Docker configuration File?

I am creating a Docker container for a Node.js application. Below is a sample of my Docker configuration file:
FROM node:6.11
WORKDIR /usr/src/app
COPY package.json .
RUN npm install
COPY . /usr/src/app
EXPOSE 80
CMD [ "npm", "start" ]
This will download the node image from Docker Hub and then create the Docker image as per the configuration.
For security reasons I don't want to download the Node.js image from Docker Hub; instead I want to use my private repository to download the Node.js image.
As I have set up a private repository, I am not sure how to specify the registry credentials in the Dockerfile.
Can anyone help me with this?
By default, docker pulls all images from Docker Hub. If you want to pull an image from another registry, you have to prefix the image name with the registry URL. Check the official docker pull documentation.
In your case, you have 2 options:
The first is to specify explicitly the registry inside the Dockerfile as such:
FROM <registry>:<port>/node:6.11
WORKDIR /usr/src/app
Once you build, the image will be downloaded from the private registry. Make sure that you are logged in to the registry before building using the docker login command.
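For example (the registry address and image tag are placeholders):
docker login <registry>:<port>
docker build -t my-node-app .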
Alternatively, if you don't want to change the Dockerfile, pull the image from the private registry using docker pull <registry>:<port>/node:6.11 and then force docker build to use this image by tagging it as just node:6.11:
docker tag <registry>:<port>/node:6.11 node:6.11
Before you build the Docker image you'll have to do a docker login to your private registry. Then pulls, whether explicit or implicit through FROM, will use that registry (and while I can't find any documentation to back that up, I suspect it also falls back to Docker Hub if it can't find the image there, but that may depend on the registry settings).
I guess you already have the Node.js image in your local Docker registry.
If you want to pull the Node.js image from the local Docker registry:
Make sure your Docker daemon is pointing to the local Docker registry; use --insecure-registry <registry_address>:<port> as mentioned here: https://docs.docker.com/engine/reference/commandline/dockerd/
Change the Dockerfile to point to the image in the registry: FROM <registry_address>:<port>/node:6.11 (this will be the complete name of your Node.js image in the local Docker registry).
The registry credentials can be set using the docker login command (https://docs.docker.com/engine/reference/commandline/login/), or you can manually set the credentials in the ~/.docker/config.json file (see the sketch below).
Now you can build the image; it should pull the base image from the registry.
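If you go the manual route, the relevant part of ~/.docker/config.json looks roughly like this (the auth value is the base64 of username:password; the registry address is a placeholder):
{
  "auths": {
    "<registry_address>:<port>": {
      "auth": "dXNlcm5hbWU6cGFzc3dvcmQ="
    }
  }
}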
