Building a Docker image for a Node.js app in GitLab CI

I'm working on a Node.js application for which my current Dockerfile looks like this:
# Stage 0
# =======
FROM node:10-alpine as build-stage
WORKDIR /app
COPY package.json yarn.lock ./
RUN yarn install
COPY . ./
RUN yarn build
# Stage 1
# =======
FROM nginx:mainline-alpine
COPY --from=build-stage /app/build /usr/share/nginx/html
I'd like to integrate this into a GitLab CI pipeline but I'm not sure if I got the basic idea. So far I know that I need to create a .gitlab-ci.yml file which will be later picked up by GitLab.
My basic idea is:
I push my code changes to GitLab.
GitLab builds a new Docker image based on my Dockerfile.
GitLab pushes this newly created image to a "production" server (later).
So, my question is:
My .gitlab-ci.yml should then contain something like a build job which triggers... what? The docker build command? Or do I need to "copy" the Dockerfile content to the CI file?

GitLab CI executes the pipeline on runners, which need to be registered with the project using generated tokens (Settings > CI/CD > Runners). You can also use shared runners for multiple projects. The pipeline is configured with the .gitlab-ci.yml file, and from that YAML file you can build, test, push and deploy Docker images whenever something happens in the repo (a push to a branch, a merge request, etc.).
It’s also useful when your application already has the Dockerfile that
can be used to create and test an image
So basically you need to install the runner, register it with the token of your project (or use shared runners), and configure your CI YAML file. The recommended approach is Docker-in-Docker, but it is up to you. You can also check this basic example. Finally, you can deploy your container directly to Kubernetes, Heroku or Rancher. Remember to configure your credentials and secrets safely in Settings > CI/CD > Variables.
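For illustration, a minimal sketch of what such a .gitlab-ci.yml could look like with the Docker-in-Docker approach, assuming the runner is set up for the docker:dind service; the job name and tag scheme are just examples built from GitLab's predefined CI variables:

build_image:
  stage: build
  image: docker:latest
  services:
    - docker:dind
  script:
    # log in to the project's container registry with the predefined job token
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_JOB_TOKEN" "$CI_REGISTRY"
    # build the image from the Dockerfile in the repository root
    - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA" .
    # push it to the GitLab container registry
    - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHORT_SHA"

A later deploy job (or your production server) can then pull that tag from the registry.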
Conclusion
GitLab CI is awesome, but I recommend you first think about the git workflow you want to use, in order to set the stages in the .gitlab-ci.yml file. This will allow you to configure your Node project as a pipeline, and then it would be easy to port it to other tools such as Jenkins pipelines or Travis, for example.

Build job trigger:
Option 1:
Add when: manual to the job and you can then start the job manually under CI/CD > Pipelines.
Option 2:
only:
  - <branchname>
In this case the job starts when you push to the defined branch (this is my personal suggestion).
Option 3:
Add nothing and the job will run every time you push code.
Of course you can combine the options above.
In addition, you can start the job with a web request by using the job token.
The docker build command will work in the pipeline, in the script section of a job. The requirement is a Docker engine on the gitlab-runner that picks up the job.
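Putting those pieces together, a rough sketch of such a build job, assuming the runner that picks it up provides a Docker engine and is tagged accordingly (the docker tag and image name here are just examples):

build_image:
  stage: build
  only:
    - master                            # option 2: start only when you push to this branch
  script:
    - docker build -t my-app:latest .   # uses the Dockerfile from the repository root
  tags:
    - docker                            # route the job to a runner that has a Docker engine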
Or do I need to "copy" the Dockerfile content to the CI file?
No.

Related

How to deploy a gitlab project to a specific directory on the server?

I have a project in GitLab and want to deploy it to a specific directory on a Linux server using .gitlab-ci.yml, but I am facing an issue.
I have set up a gitlab-runner for "cms-project" and added .gitlab-ci.yml to the root directory.
When I push to the repository, the runner fetches the commits to the following directory
home/gitlab-runner/builds/ziwwUK3Jz/0/project/cms_project
Now I want the runner to fetch the commit to the dev server, which is located at
/var/www/project-cms.com/html
I have tried the following changes, and below is the .gitlab-ci.yml file:
job_main:
  type: deploy
  script: cd /var/www/project-cms.com/html && git pull

deploy:
  stage: deploy
  only:
    - master
  variables:
    BRANCH: master
  script:
    - composer install
but I am getting the following error
Removing modules/contrib/
Removing vendor/
Skipping Git submodules setup
Executing "step_script" stage of the job script
$ cd /var/www/project-cms.com/html && git pull
error: cannot open .git/FETCH_HEAD: Permission denied
The user has root permissions on the directory.
I have already gone through this link "https://stackoverflow.com/questions/37564681/gitlab-ci-how-to-deploy-the-latest-to-a-specific-directory" but that did not help
Can anyone please help me deploy the project to the "/var/www/project-cms.com/html" directory?
It is because of the difference between the gitlab-runner and your server.
This code runs inside a "machine" called the GitLab runner, which is responsible for executing anything you want. But it is not your web server.
To be able to deploy your code, you need to build a script that will run inside this runner, and you need to understand that your entire code is inside this runner (this is the reason you found your code inside the /builds folder).
So you need to tell your runner to copy your code to your web server, using any protocol/binary like ssh, ftp, sftp, etc.
Edit: if your runner is running inside your web server, you just need to copy the code to your folder; you do not need to pull with git anymore.
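If the runner and the web server are different machines, a rough sketch of such a deploy job could look like this, assuming an SSH private key stored in a CI/CD variable named SSH_PRIVATE_KEY and a deploy-user account that may write to the target directory (both placeholders):

deploy_to_server:
  stage: deploy
  only:
    - master
  before_script:
    # load the deploy key from Settings > CI/CD > Variables
    - eval $(ssh-agent -s)
    - echo "$SSH_PRIVATE_KEY" | ssh-add -
  script:
    # copy the checked-out code from the runner to the web server
    - scp -o StrictHostKeyChecking=no -r . deploy-user@your-server.example.com:/var/www/project-cms.com/html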
Some examples here: https://docs.gitlab.com/ee/ci/examples/
Docs: https://docs.gitlab.com/ee/ci/
Runner docs: https://docs.gitlab.com/runner/

Why does GitLab Ci not find my cached folder?

I have a list of CI jobs running in my GitLab and the caching does not work as expected:
This is how my docu-generation job ends:
[09:19:33] Documentation generated in ./documentation/ in 4.397 seconds using gitbook theme
Creating cache angular...
00:02
WARNING: frontend/node_modules: no matching files
frontend/documentation: found 136 matching files
No URL provided, cache will be not uploaded to shared cache server. Cache will be stored only locally.
Created cache
Job succeeded
I then start a deployment job (to GitLab Pages), but it fails because it doesn't find the documentation folder:
$ cp -r frontend/documentation .public/frontend
cp: cannot stat 'frontend/documentation': No such file or directory
This is the cache config of the generation job:
generate_docu_frontend:
  image: node:12.19.0
  stage: build
  cache:
    key: angular
    paths:
      - frontend/node_modules
      - frontend/documentation
  needs: ["download_angular"]
and this is for deployment:
deploy_documentation:
  stage: deploy
  cache:
    - key: angular
      paths:
        - frontend/node_modules
        - frontend/documentation
      policy: pull
    - key: laravel
      paths:
        - backend/vendor
        - backend/public/docs
      policy: pull
Does anyone know why my documentation folder is missing?
The message in your job output No URL provided, cache will be not uploaded to shared cache server. Cache will be stored only locally. just means that your runners are not using Amazon S3 to store your cache, or something similar like Minio.
Without S3/Minio, the cache only lives on the runner that first ran the job and cached the resources. This means that the next time the job runs and it happens to be picked up by a different runner, it won't have the cache. In that case, you'd run into an error like this.
There are a couple of ways around this:
Configure your runners to use S3/Minio (Minio has an open source, free-to-use license if you're interested in hosting it yourself).
Only use one runner (not a great solution since generally more runners means faster pipelines and this would slow things down considerably, though it would solve the cache problem).
Use tags. Tags are used to ensure that a job runs on a specific runner(s). Let's say, for example, that 1 out of your 10 runners has access to your production servers, but all of them have access to your lower environment servers. Your lower-env jobs can run on any runner, but your production deployment job has to run on the one runner with prod access. You can do this by putting a tag on the runner, say prod-access, and putting the same tag on the prod deploy job. This will ensure that the job runs on the runner with prod access. The same mechanism can be used here to ensure the cache is available.
Use artifacts instead of cache. I'll explain this option below as it's really what you should be using for this use case.
Let's briefly explain the difference between Cache and Artifacts:
Cache is generally best used for dependency installation like npm or composer (for PHP projects). When you have a job that runs npm ci or composer install, you don't want it to run every single time your pipeline runs when you haven't necessarily changed the dependencies, as that wastes time. Use the cache keyword to cache the dependencies so that subsequent pipelines don't have to install them again.
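For example, a minimal sketch of caching npm's download cache between pipelines (the job name, cache key and paths are placeholders matching the layout above):

install_dependencies:
  image: node:12.19.0
  stage: build
  cache:
    key: npm-cache          # reuse the same cache across pipelines
    paths:
      - frontend/.npm       # npm's download cache; only re-fetched when packages change
  script:
    - cd frontend
    - npm ci --cache .npm --prefer-offline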
Artifacts are best used when you need to share files or directories between jobs in the same pipeline. For example, after installing npm dependencies, you might need to use the node_modules directory in another job in the pipeline. Artifacts are also uploaded to the GitLab server by the runner at the end of the job, as opposed to being stored locally on the runner that ran the job. All previous artifacts will be downloaded for all subsequent jobs, unless controlled with either dependencies or needs.
Artifacts are the better choice for your use case.
Let's update your .gitlab-ci.yml file to use artifacts instead of cache:
stages:
  - build
  - deploy

generate_docu_frontend:
  image: node:12.19.0
  stage: build
  script:
    - ./generate_docs.sh # this is just a representation of whatever steps you run to generate the docs
  artifacts:
    paths:
      - frontend/node_modules
      - frontend/documentation
    expire_in: 6 hours # your GitLab instance will have a default, you can override it like this
    when: on_success # don't attempt to upload the docs if generating them failed

deploy_documentation:
  stage: deploy
  script:
    - ls # just an example showing that frontend/node_modules and frontend/documentation are present
    - deploy.sh # whatever else you need to run this job

Call specific runner for Git script

How can I call a specific runner for a CI job? Problem: I have default runners and now I have to install my local runner and run tests on it. But I have no way to turn off the runners in GitLab, because the whole project builds on the remote runners.
You can use tags to tag a runner when you register it, and you can specify that your job will only run on runners with these tags.
E.g. if you tagged your runner with 'fancy-example', then you can use it like this:
build:
  tags:
    - fancy-example
  script:
    - echo Hello
See the GitLab docs for more detailed examples and explanations:
https://docs.gitlab.com/ce/ci/yaml/README.html#tags

Is there a way to upload GitLab CI artifacts to an Openshift container?

I have a GitLab CI pipeline which builds a few artifacts. For example:
train:job:
  stage: train
  script: python script.py
  artifacts:
    paths:
      - artifact.csv
    expire_in: 1 week
Now I deploy the repository to OpenShift using the following step in my GitLab pipeline. This will pull my GitLab repo inside OpenShift. It does not include the artifacts from the earlier stages.
deploy:app:
  stage: deploy
  image: ayufan/openshift-cli
  before_script:
    - oc login $OPENSHIFT_DOMAIN --token=$OPENSHIFT_TOKEN
  script:
    - oc start-build my_app
How can I let OpenShift use this repository, plus the artifacts created in my pipeline?
In general OpenShift build pipelines rely on the s2i build process to build applications.
The best practice for reusing artifacts between s2i builds would either be through using incremental builds or chaining multiple BuildConfig definitions (the output image of one BuildConfig being fed as source image into another BuildConfig) together via the spec.source.images or spec.source.git configuration in the BuildConfig definition.
In your case, since you are using a GitLab CI pipeline to generate your artifacts instead of the OpenShift build process, you really only need to combine your artifacts with your source code and the runtime container image.
To do this you might create a builder container image that pulls those artifacts down from an external source during the assemble phase (via curl, wget, etc) of the s2i workflow. You could then configure your BuildConfig to point at your source repository. At build time the BuildConfig will pull down your source code and the assemble script will pull down your artifacts.
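As a rough sketch, the artifact download inside such an assemble script could use GitLab's job artifacts API; the project ID, branch, job name and token below are placeholders, and the token would typically be injected as a build secret:

# excerpt of a custom s2i assemble script (sketch)
curl --fail --location \
  --header "PRIVATE-TOKEN: ${GITLAB_TOKEN}" \
  --output /tmp/artifacts.zip \
  "https://gitlab.example.com/api/v4/projects/1234/jobs/artifacts/master/download?job=train:job"
unzip /tmp/artifacts.zip -d ./artifacts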

Custom GitLab Container Registry Image Creation & Reuse

I want to build and add a custom image (with ruby, node.js, bower, grunt, jekyll, etc.) and tag it as 'myimage:1.0'. This image needs to be stored in the GitLab container registry and then used in .gitlab-ci.yml as image: sachin.1.0.0, so that my build via GitLab CI will have everything like node.js preinstalled.
I have tried enough. How can this be done?
Before you do this, you need to configure a GitLab runner which allows you to use docker build. You can configure this using the instructions here, depending on your use case.
Next, create a new repo in gitlab, let's call it sachin-image.
Inside the root of the git repo, add a Dockerfile with installation of everything you need.
Now, into this repo, add a .gitlab-ci.yml file like so:
---
before_script:
  - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN <my-docker-gitlab-registry-url>

stages:
  - build

build_image:
  stage: build
  script:
    - docker build -t gitlab.example.com/my/dockerimage/repo:latest .
    - docker push gitlab.example.com/my/dockerimage/repo:latest
  tags:
    - docker_engine
At this point, you now have automated Docker builds working in GitLab. In order to use this image in future GitLab builds, all you need is to reference the following image URL:
gitlab.example.com/my/dockerimage/repo:latest
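In another project's .gitlab-ci.yml, that could look like this sketch (using the example registry URL from above; the job is just an illustration):

image: gitlab.example.com/my/dockerimage/repo:latest

build_site:
  stage: build
  script:
    - node --version    # node.js, ruby, bower, grunt etc. are already preinstalled in the image
    - ruby --version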
