How to install a package in a GitLab runner container? - linux

I am trying to implement some CI/CD using GitLab Runner. I am very new to containers and am trying to install the zip package in the container. I was able to install awscli using pip, but I am not able to install the zip package, which is required for my shell script.
Following is the .gitlab-ci.yml file -
stages:
  - build

build:
  image: python:latest
  stage: build
  script:
    - pip install awscli
    - yum install zip
    - bash cicdScript.sh
I'm using the python container because my script requires awscli, but it also needs the zip package. I tried the following -
1)
script:
  - pip install awscli
  - yum install zip
  - bash cicdScript.sh
gives -
/bin/bash: line 82: yum: command not found
2)
script:
  - pip install awscli
  - apt-get install zip unzip
  - bash cicdScript.sh
gives -
Reading package lists...
Building dependency tree...
Reading state information...
Package zip is not available, but is referred to by another package.
This may mean that the package is missing, has been obsoleted, or
is only available from another source
E: Package 'zip' has no installation candidate

Try running apt-get update first and adding -y. The python:latest image is Debian-based, which is why yum (the Red Hat package manager) is not found; with apt-get you need to refresh the package lists before installing, and -y skips the interactive confirmation:

apt-get update
apt-get install -y zip unzip
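
Put together, a minimal sketch of the corrected job (assuming the same Debian-based python:latest image and that cicdScript.sh sits in the repository root):

stages:
  - build

build:
  image: python:latest
  stage: build
  script:
    - pip install awscli
    - apt-get update
    - apt-get install -y zip unzip
    - bash cicdScript.sh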

Related

Alpine and docker:20.10.22 unable to install pip3

I am upgrading some images from docker:19.03.5 to docker:20.10.22, but am hitting issues when my Push Dev job tries to install pip3 on the image being used by .gitlab-ci.yml.
Initially I was not explicitly installing pip3, as this worked fine; after an error saying pip3 was not installed, I tried installing it using apk as below.
before_script:
  - apk add python3
  - apk add python3-pip
  - pip3 install awscli==1.18.8
  - docker load --input data/image.tar
  - $(aws ecr get-login --no-include-email --region us-east-1)

Push Dev:
  stage: Push
  script:
    - docker tag proxy:latest $ECR_REPO:dev
    - docker push $ECR_REPO:dev
  rules:
    - if: "$CI_COMMIT_BRANCH == 'main'"
However, I still get this issue. Any idea what is wrong here?
$ apk add python3
fetch https://dl-cdn.alpinelinux.org/alpine/v3.17/main/x86_64/APKINDEX.tar.gz
fetch https://dl-cdn.alpinelinux.org/alpine/v3.17/community/x86_64/APKINDEX.tar.gz
(1/11) Installing libbz2 (1.0.8-r4)
(2/11) Installing libexpat (2.5.0-r0)
(3/11) Installing libffi (3.4.4-r0)
(4/11) Installing gdbm (1.23-r0)
(5/11) Installing xz-libs (5.2.9-r0)
(6/11) Installing libgcc (12.2.1_git20220924-r4)
(7/11) Installing libstdc++ (12.2.1_git20220924-r4)
(8/11) Installing mpdecimal (2.5.1-r1)
(9/11) Installing readline (8.2.0-r0)
(10/11) Installing sqlite-libs (3.40.1-r0)
(11/11) Installing python3 (3.10.9-r1)
Executing busybox-1.35.0-r29.trigger
OK: 65 MiB in 34 packages
$ apk add python3-pip
ERROR: unable to select packages:
python3-pip (no such package):
required by: world[python3-pip]
Cleaning up project directory and file based variables
ERROR: Job failed: exit code 1
As per the comment made by β.εηοιτ.βε, I changed the name of the pip package from python3-pip, which is the Debian name, to py3-pip, the expected package name on Alpine.
before_script:
  - apk add python3
  - apk add py3-pip
  - pip3 install awscli==1.18.8
  - docker load --input data/image.tar
  - $(aws ecr get-login --no-include-email --region us-east-1)
https://pkgs.alpinelinux.org/package/v3.17/community/armv7/py3-pip
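As a small aside, both packages can be installed in a single apk invocation; --no-cache is a common Alpine idiom (not something the original answer used) that avoids keeping the package index around:

before_script:
  - apk add --no-cache python3 py3-pip
  - pip3 install awscli==1.18.8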

How to install nodejs to install a npm package in a gitlab job?

.deploy: &deploy
before_script:
- apt-get update -y
script:
- cd source/
- npm install multi-file-swagger
- multi-file-swagger -o yaml temp.yml > swagger.yml
I want to install the multi-file-swagger package to compile temp.yml (which has been split into multiple files) into swagger.yml. So before using npm I need to install nodejs. How can I do that?
As the image is Debian-based, you should be able to add Node's package repository (via the NodeSource setup script) and install nodejs from there. The relevant section of your GitLab file would look like this:
.deploy: &deploy
  before_script:
    - apt-get update -y
  script:
    - curl -sL https://deb.nodesource.com/setup_17.x | bash
    - apt-get install nodejs -yq
    - cd source/
    - npm install multi-file-swagger
    - multi-file-swagger -o yaml temp.yml > swagger.yml
Please note that these additional steps will add a significant amount of time to your build process. If you run them frequently, consider creating your own build image derived from the one you're using now, with these steps baked into the image itself, as sketched below.
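
A hedged sketch of such a Dockerfile (debian:bullseye is only a stand-in for whatever Debian-based image the job currently uses; adjust the base image and Node version to match your pipeline):

FROM debian:bullseye
# Bake the Node.js installation into the image so CI jobs can skip these steps
RUN apt-get update \
    && apt-get install -y curl \
    && curl -sL https://deb.nodesource.com/setup_17.x | bash \
    && apt-get install -yq nodejs \
    && rm -rf /var/lib/apt/lists/*

Build and push this image to a registry, then reference it via image: in .gitlab-ci.yml.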

Travis CI: What is the directory to cache dependencies in case of pip3 --user?

I use their node_js builds in trusty containers with Python 3 addons:
sudo: false
dist: "trusty"
language: "node_js"
addons:
  apt:
    packages:
      - "python3"
      - "python3-pip"
To install the dependencies of my Sphinx docs, which live alongside my Node.js project, I do the following:
pip3 install --user -r docs/requirements.txt
These dependencies change quite rarely in my project. If I wanted to cache them between builds to save some seconds, which directory should I cache?
The docs suggest something for pip, but my hunch is that this only works for Python builds and only for pip. It is possible to cache arbitrary directories, though, so I only need to figure out the right one.
I inspected the build output, but could not figure out the correct directory just from what pip3 printed.
Did you try adding a before_install line and searching for the dependency's location? e.g.
before_install:
  - pip3 show <your_dependency> | grep -i location
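
Given that pip3 install --user typically places packages under $HOME/.local, a plausible cache entry (an assumption worth confirming against the pip3 show output above) would be:

cache:
  directories:
    - "$HOME/.local"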

Using easy_install3 with ansible

It seems easy_install3 is not currently available as an ansible module. So instead of doing this:
---
- name: Install apt dependencies
  apt: name={{ item }} state=installed
  with_items:
    - python3-setuptools

- name: install pip3
  easy_install3: name=pip
I'm using this:
---
- name: Install apt dependencies
  apt: name={{ item }} state=installed
  with_items:
    - python3-setuptools

- name: install pip3
  shell: easy_install3 pip
Is there a better alternative?
As mentioned in the comments, you can just use apt to install python3-pip rather than going via easy_install like in the dark old days:
---
- name: Install apt dependencies
  apt: name=python3-pip state=installed
If you did need to force a specific executable for easy_install, you could use the executable option:
- name: install pip3
  easy_install: name=pip executable=easy_install-3.3
However, it might also help to know that pip has been bundled with Python since 3.4 so you might not need to do anything at all.
The executable option can be used to select the version of easy_install. However (as suggested in the comments), it is recommended to use the pip module, which you can first install using easy_install.
See: http://docs.ansible.com/ansible/easy_install_module.html for more information.
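
For completeness, a hedged sketch of that recommended route: bootstrap pip once via easy_install, then manage Python packages with the pip module (the requests package here is only an example):

- name: install pip3
  easy_install: name=pip executable=easy_install-3.3

- name: install a Python package with the pip module
  pip: name=requests executable=pip3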

test after build would run in new environment on gitlab-ci

I have the following configuration as my .gitlab-ci.yml, but I found that after successfully passing the build stage (which creates a virtualenv called venv), the test stage gets a brand new environment (there is no venv directory at all). So I wonder: should I put the setup script in before_script, so that it runs in each phase (build/test/deploy)? Is that the right way to do it?
before_script:
  - uname -r

types:
  - build
  - test
  - deploy

job_install:
  type: build
  script:
    - apt-get update
    - apt-get install -y libncurses5-dev
    - apt-get install -y libxml2-dev libxslt1-dev
    - apt-get install -y python-dev libffi-dev libssl-dev
    - apt-get install -y python-virtualenv
    - apt-get install -y python-pip
    - virtualenv --no-site-packages venv
    - source venv/bin/activate
    - pip install -q -r requirements.txt
    - ls -al
  only:
    - master

job_test:
  type: test
  script:
    - ls -al
    - source venv/bin/activate
    - cp crawler/settings.sample.py crawler/settings.py
    - cd crawler
    - py.test -s -v
  only:
    - master
GitLab CI jobs are supposed to be independent, because they could run on different runners; this is not a bug. There are two ways to pass files between stages:
The right way: using artifacts.
The wrong way: using cache with a cache-key "hack"; this still needs the same runner.
So yes, the way intended by GitLab is to have everything your job depends on in before_script.
Artifacts example:
artifacts:
  when: on_success
  expire_in: 1 mos
  paths:
    - some_project_files/
Cache example:
cache:
  key: "$CI_BUILD_REF_NAME"
  untracked: true
  paths:
    - node_modules/
    - src/bower_components/
For a correct running environment, I suggest using Docker with an image that already contains your apt-get dependencies, and using artifacts to pass job results between jobs. Note that artifacts are also uploaded to the GitLab web interface and can be downloaded from there, so if they are heavy, use a small expire_in time to remove them after all jobs are done. A sketch applying artifacts to the jobs above follows.
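
A hedged sketch of the artifacts approach applied to the jobs from the question (note that virtualenvs are not always relocatable between runners, so reinstalling in before_script can be the more robust choice):

job_install:
  type: build
  script:
    - virtualenv --no-site-packages venv
    - source venv/bin/activate
    - pip install -q -r requirements.txt
  artifacts:
    expire_in: 1 hour
    paths:
      - venv/

job_test:
  type: test
  dependencies:
    - job_install
  script:
    - source venv/bin/activate
    - cd crawler
    - py.test -s -v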
