I have the following Azure task that copies my .env file from secure files (Library) in Azure to the working directory from which I execute unit tests:
steps:
- task: DownloadSecureFile@1
  inputs:
    secureFile: '.env'
  displayName: "Download .env file"
- task: CopyFiles@2
  inputs:
    sourceFolder: "$(Agent.TempDirectory)"
    contents: ".env"
    targetFolder: "$(System.DefaultWorkingDirectory)"
  displayName: "Copy .env"
- script: |
    cd /$(System.DefaultWorkingDirectory)
    sudo apt-get update
    sudo apt-get install -y python3-dev default-libmysqlclient-dev build-essential unixodbc-dev
    pip3 install -r requirements.txt
    pip3 install pytest pytest-azurepipelines
    pip3 install pytest-cov
    python3 -m pytest --doctest-modules --junitxml=junit/test-results.xml --cov=. --cov-report=xml
However, when running pytest, it fails because Python cannot find the environment variables from the file. Is there something I am missing that I need to do so that pytest can successfully run the tests with all the environment variables that come from the .env file?
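Copying the file only puts it on disk; nothing exports its contents into the process environment, so unless the code loads the file itself (for example with python-dotenv), pytest will not see the variables. A minimal sketch of one way to export them in the test step, assuming the .env contains plain KEY=value lines:

- script: |
    cd $(System.DefaultWorkingDirectory)
    set -a          # export every variable defined while this is on
    source .env     # read KEY=value pairs from the copied file
    set +a
    python3 -m pytest --doctest-modules --junitxml=junit/test-results.xml --cov=. --cov-report=xml
  displayName: "Run pytest with .env variables exported"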
Related
I built a simple YAML file for an Azure pipeline.
Tasks:
Build wheel file during Azure pipeline run
Copy wheel file just created in the pipeline run to databricks' dbfs
It does write a .whl file somewhere, as I can see from the script output in the pipeline, but it does not end up in the repo, and I do not know where the created file is.
stages:
- stage: Build
  jobs:
  - job:
    steps:
    - task: Bash@3
      inputs:
        targetType: 'inline'
        script: |
          echo 'install/update pip and wheel'
          python -m pip install --upgrade pip wheel
          echo 'building wheel file'
          python src/databricks/setup.py bdist_wheel
      displayName: 'Build updated wheel file'
- stage: Deploy
  jobs:
  - job:
    steps:
    - script: |
        python src/databricks/setup.py bdist_wheel
        echo 'installing databricks-cli'
        python -m pip install --upgrade databricks-cli
      displayName: 'install databricks cli'
    - script: |
        echo 'copying wheel file'
        databricks fs cp --overwrite src/databricks/dist/library-0.0.2-py3-none-any.whl dbfs:/FileStore/whl
        echo 'copying main.py'
        databricks fs cp --overwrite src/databricks/main.py dbfs:/FileStore/whl
      env:
        DATABRICKS_HOST: $(DATABRICKS_HOST)
        DATABRICKS_TOKEN: $(DATABRICKS_TOKEN)
      displayName: 'Copy wheel file and main.py to dbfs'
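On the question of where the wheel ends up: setup.py bdist_wheel writes into a dist/ folder relative to the directory the command is run from on the build agent, and a pipeline run never commits files back into the repository. A hedged sketch of extra Deploy-stage steps to confirm the location and keep the file as a pipeline artifact (the targetPath is an assumption; adjust it to whatever the find output shows):

    - script: |
        echo 'wheel files produced in this run:'
        find "$(System.DefaultWorkingDirectory)" -name '*.whl' -print
      displayName: 'Show where the wheel landed'
    - task: PublishPipelineArtifact@1
      inputs:
        targetPath: 'src/databricks/dist'   # assumed location; adjust per the find output
        artifact: 'wheel'
      displayName: 'Publish the wheel as a pipeline artifact'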
I have a self-hosted agent running in an Ubuntu container.
FROM ubuntu:18.04
# To make it easier for build and release pipelines to run apt-get,
# configure apt to not require confirmation (assume the -y argument by default)
ENV DEBIAN_FRONTEND=noninteractive
RUN echo "APT::Get::Assume-Yes \"true\";" > /etc/apt/apt.conf.d/90assumeyes
RUN apt-get update && apt-get install -y --no-install-recommends \
ca-certificates \
curl \
jq \
git \
iputils-ping \
libcurl4 \
libicu60 \
libunwind8 \
netcat \
libssl1.0 \
zip \
unzip \
&& rm -rf /var/lib/apt/lists/*
RUN curl -LsS https://aka.ms/InstallAzureCLIDeb | bash \
&& rm -rf /var/lib/apt/lists/*
# Can be 'linux-x64', 'linux-arm64', 'linux-arm', 'rhel.6-x64'.
ENV TARGETARCH=linux-x64
WORKDIR /azp
COPY ./start.sh .
RUN chmod +x start.sh
ENTRYPOINT ["./start.sh"]
This agent runs and is active on the Azure DevOps platform. I am running an Angular build pipeline on this agent and it runs successfully.
But when I create a .NET Core project and build it on this agent, it throws exceptions.
If I use the following:
trigger:
- main

pool:
  name: PCDOCKER

variables:
  solution: '**/*.sln'
  buildPlatform: 'Any CPU'
  buildConfiguration: 'Release'
  projectName: vitrin-api

steps:
- task: NuGetToolInstaller@1
- task: DotNetCoreCLI@2
  inputs:
    command: 'restore'
    feedsToUse: 'select'
    vstsFeed: 'my-vsts-feed' # A series of numbers and letters
- task: DotNetCoreCLI@2
  inputs:
    command: 'build'
    arguments: '--configuration $(buildConfiguration)'
  displayName: 'dotnet build $(buildConfiguration)'
The error is at the DotNetCoreCLI@2 step:
##[error]Error: Unable to locate executable file: 'dotnet'. Please verify either the file path exists or the file can be found within a directory specified by the PATH environment variable. Also check the file mode to verify the file is executable.
##[error]Packages failed to restore
If I use

- task: NuGetCommand@2
  inputs:
    restoreSolution: '$(solution)'

instead of DotNetCoreCLI@2, it throws an exception saying that "mono" was not found.
How can I build my .NET application on my self-hosted container?
To build a .NET Core application, the .NET SDK needs to be present on the system; it includes all the components required to build and run the application.
To fix this issue, install the .NET SDK in your Ubuntu container.
Before you install .NET, run the following commands to add the Microsoft package signing key to your list of trusted keys and add the package repository.
Open a terminal and run the following commands:
wget https://packages.microsoft.com/config/ubuntu/21.04/packages-microsoft-prod.deb -O packages-microsoft-prod.deb
sudo dpkg -i packages-microsoft-prod.deb
rm packages-microsoft-prod.deb
Now, install .NET SDK
sudo apt-get update; \
sudo apt-get install -y apt-transport-https && \
sudo apt-get update && \
sudo apt-get install -y dotnet-sdk-6.0
Check out this page for more detail: https://learn.microsoft.com/en-us/dotnet/core/install/linux-ubuntu#2110-
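Because the agent is a Docker container, the SDK only stays available across container rebuilds if it is baked into the image rather than installed at pipeline time. A hedged sketch of the extra Dockerfile lines, assuming the base image stays ubuntu:18.04 (use the packages.microsoft.com config that matches your Ubuntu version):

# Register the Microsoft package feed for Ubuntu 18.04, then install the .NET SDK
RUN curl -fsSL https://packages.microsoft.com/config/ubuntu/18.04/packages-microsoft-prod.deb -o packages-microsoft-prod.deb \
    && dpkg -i packages-microsoft-prod.deb \
    && rm packages-microsoft-prod.deb \
    && apt-get update \
    && apt-get install -y apt-transport-https dotnet-sdk-6.0 \
    && rm -rf /var/lib/apt/lists/*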
My vmImage is 'ubuntu-latest'.
When my build gets to nlohmann_json, I get the following error:
-- Up-to-date: /usr/local/include
CMake Error at libraries/nlohmann_json/cmake_install.cmake:41 (file):
file INSTALL cannot set permissions on "/usr/local/include": Operation not
permitted.
Call Stack (most recent call first):
libraries/cmake_install.cmake:42 (include)
cmake_install.cmake:42 (include)
It works fine locally and also on the Windows and MacOS vmImage pipelines, so I am assuming this is some type of permissions issue/setting with DevOps?
The yaml file is as follows:
# Starter pipeline
trigger:
- master

pool:
  vmImage: 'ubuntu-latest'

steps:
- task: UsePythonVersion@0
  displayName: 'install python 3.x'
  inputs:
    versionSpec: '3.x'
    addToPath: true
    architecture: 'x64'
- task: CmdLine@2
  displayName: 'install opengl'
  inputs:
    script: 'sudo apt-get -y install freeglut3 freeglut3-dev libglew1.5 libglew1.5-dev libglu1-mesa libglu1-mesa-dev libgl1-mesa-glx libgl1-mesa-dev'
    failOnStderr: true
- task: CmdLine@2
  displayName: 'install sdl'
  inputs:
    script: 'sudo apt-get install -y libsdl2-dev libsdl2-image-dev libsdl2-ttf-dev libsdl2-net-dev'
    failOnStderr: true
- task: CmdLine@2
  displayName: 'update google gpg key'
  inputs:
    script: 'wget -q -O - curl https://bazel.build/bazel-release.pub.gpg | sudo apt-key add -'
    failOnStderr: false
- task: CmdLine@2
  displayName: 'install gcc 10'
  inputs:
    script: 'sudo apt-get update && sudo add-apt-repository ppa:ubuntu-toolchain-r/test && sudo apt-get update && sudo apt-get -y install gcc-10 g++-10'
    failOnStderr: true
- task: PythonScript@0
  displayName: 'run build.py'
  inputs:
    scriptSource: 'filePath'
    scriptPath: '$(Build.SourcesDirectory)/build.py'
    failOnStderr: false
In the end, I disabled the install step of nlohmann json by doing the following before I added the subdirectory:
set(JSON_Install OFF CACHE INTERNAL "")
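For context, a minimal sketch of where that line sits in the top-level CMakeLists.txt, assuming the library is added from libraries/nlohmann_json as in the error output:

# Turn off nlohmann_json's install rules before the subdirectory is added,
# so its cmake_install.cmake never tries to write into /usr/local/include
set(JSON_Install OFF CACHE INTERNAL "")
add_subdirectory(libraries/nlohmann_json)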
I am trying to write a gitlab CI file as follows:
image: ubuntu:latest

variables:
  GIT_SUBMODULE_STRATEGY: recursive
  AWS_DEFAULT_REGION: eu-central-1
  S3_BUCKET: $BUCKET_TRIAL

stages:
  - deploy

.before_script_template: &before_script_definition
  stage: deploy
  before_script:
    - apt-get -y update
    - apt-get -y install python3-pip python3.7 zip
    - python3.7 -m pip install --upgrade pip
    - python3.7 -V
    - pip3.7 install virtualenv

.after_script_template: &after_script_definition
  after_script:
    # Upload package to S3
    # Install AWS CLI
    - pip install awscli --upgrade # --user
    - export PATH=$PATH:~/.local/bin # Add to PATH
    # Configure AWS connection
    - aws configure set aws_access_key_id $AWS_ACCESS_KEY_ID
    - aws configure set aws_secret_access_key $AWS_SECRET_ACCESS_KEY
    - aws configure set default.region $AWS_DEFAULT_REGION
    - aws sts get-caller-identity --output text --query 'Account' # current account
    - aws s3 cp ~/forlambda/archive.zip $BUCKET_TRIAL/${LAMBDA_NAME}-deployment.zip

monatliche_strom:
  variables:
    LAMBDA_NAME: monthly_strom
  before_script: *before_script_definition
  script:
    - mv some.py ~
    - mv requirements.txt ~
    # Move submodules
    - mv submodule1/submodule1 ~
    - mv submodule1/submodule2/submodule2 ~
    # Setup virtual environment
    - mkdir ~/forlambda
    - cd ~/forlambda
    - virtualenv -p python3 venv
    - source venv/bin/activate
    - pip3.7 install -r ~/requirements.txt -t ~/forlambda/venv/lib/python3.7/site-packages/
    # Package environment and dependencies
    - cd ~/forlambda/venv/lib/python3.7/site-packages/
    - zip -r9 ~/forlambda/archive.zip .
    - cd ~
    - zip -g ~/forlambda/archive.zip some.py
    - zip -r ~/forlambda/archive.zip submodule1/*
    - zip -r ~/forlambda/archive.zip submodule2/*
  after_script: *after_script_definition
When I run it in the gitlab CI lint, it gives me the following error:
jobs:monatliche_strom:before_script config should be an array
containing strings and arrays of strings
jobs:monatliche_strom:after_script config should be an array
containing strings and arrays of strings
I am fairly new to GitLab CI, so can someone please tell me what mistake I am making?
Try this:
image: ubuntu:latest

variables:
  GIT_SUBMODULE_STRATEGY: recursive
  AWS_DEFAULT_REGION: eu-central-1
  S3_BUCKET: $BUCKET_TRIAL

stages:
  - deploy

.before_script_template: &before_script_definition
  stage: deploy
  before_script:
    - apt-get -y update
    - apt-get -y install python3-pip python3.7 zip
    - python3.7 -m pip install --upgrade pip
    - python3.7 -V
    - pip3.7 install virtualenv

.after_script_template: &after_script_definition
  after_script:
    # Upload package to S3
    # Install AWS CLI
    - pip install awscli --upgrade # --user
    - export PATH=$PATH:~/.local/bin # Add to PATH
    # Configure AWS connection
    - aws configure set aws_access_key_id $AWS_ACCESS_KEY_ID
    - aws configure set aws_secret_access_key $AWS_SECRET_ACCESS_KEY
    - aws configure set default.region $AWS_DEFAULT_REGION
    - aws sts get-caller-identity --output text --query 'Account' # current account
    - aws s3 cp ~/forlambda/archive.zip $BUCKET_TRIAL/${LAMBDA_NAME}-deployment.zip

monatliche_strom:
  variables:
    LAMBDA_NAME: monthly_strom
  <<: *before_script_definition
  script:
    - mv some.py ~
    - mv requirements.txt ~
    # Move submodules
    - mv submodule1/submodule1 ~
    - mv submodule1/submodule2/submodule2 ~
    # Setup virtual environment
    - mkdir ~/forlambda
    - cd ~/forlambda
    - virtualenv -p python3 venv
    - source venv/bin/activate
    - pip3.7 install -r ~/requirements.txt -t ~/forlambda/venv/lib/python3.7/site-packages/
    # Package environment and dependencies
    - cd ~/forlambda/venv/lib/python3.7/site-packages/
    - zip -r9 ~/forlambda/archive.zip .
    - cd ~
    - zip -g ~/forlambda/archive.zip some.py
    - zip -r ~/forlambda/archive.zip submodule1/*
    - zip -r ~/forlambda/archive.zip submodule2/*
  <<: *after_script_definition
Since before_script and after_script are already defined inside the anchors, you have to use << to merge the anchored hash into the job's own mapping.
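In other words, the merge key pastes the anchored mapping's keys into the job, so after merging, monatliche_strom effectively looks like this (a sketch of the expanded form, details elided):

monatliche_strom:
  stage: deploy              # merged in from the before_script anchor
  before_script:
    - apt-get -y update
    # ...rest of the anchored before_script list
  variables:
    LAMBDA_NAME: monthly_strom
  script:
    # ...the job's own script
  after_script:
    # ...contents of the anchored after_script list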
I have the following configuration as my .gitlab-ci.yml, but I found that after the build stage passes successfully (which creates a virtualenv called venv), the test stage starts with a brand-new environment (there is no venv directory at all). So I wonder: should I put the setup script in before_script so that it runs in each phase (build/test/deploy)? Is that the right way to do it?
before_script:
  - uname -r

types:
  - build
  - test
  - deploy

job_install:
  type: build
  script:
    - apt-get update
    - apt-get install -y libncurses5-dev
    - apt-get install -y libxml2-dev libxslt1-dev
    - apt-get install -y python-dev libffi-dev libssl-dev
    - apt-get install -y python-virtualenv
    - apt-get install -y python-pip
    - virtualenv --no-site-packages venv
    - source venv/bin/activate
    - pip install -q -r requirements.txt
    - ls -al
  only:
    - master

job_test:
  type: test
  script:
    - ls -al
    - source venv/bin/activate
    - cp crawler/settings.sample.py crawler/settings.py
    - cd crawler
    - py.test -s -v
  only:
    - master
GitLab CI jobs are supposed to be independent, because they can run on different runners; this is not a bug. There are two ways to pass files between stages:
The right way: using artifacts.
The wrong way: using the cache with a cache-key "hack"; this still requires the same runner.
So yes, the way GitLab intends it, everything your job depends on should be set up in before_script.
Artifacts example:
artifacts:
  when: on_success
  expire_in: 1 mos
  paths:
    - some_project_files/
Cache example:
cache:
  key: "$CI_BUILD_REF_NAME"
  untracked: true
  paths:
    - node_modules/
    - src/bower_components/
For a correct running environment I suggest using Docker with an image that already contains the apt-get dependencies, and using artifacts to pass job results between jobs. Note that artifacts are also uploaded to the GitLab web interface and can be downloaded from there, so if they are quite heavy, use a small expire_in time so they are removed after all jobs are done.
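Applied to the pipeline above, a minimal sketch of the artifacts approach could look like this (assuming the virtualenv still works when re-downloaded into the same path on the same image, which is not always the case for packages with compiled components):

job_install:
  type: build
  script:
    - virtualenv --no-site-packages venv
    - source venv/bin/activate
    - pip install -q -r requirements.txt
  artifacts:
    expire_in: 1 day
    paths:
      - venv/            # hand the built environment to later stages
  only:
    - master

job_test:
  type: test
  dependencies:
    - job_install        # fetch the venv artifact produced above
  script:
    - source venv/bin/activate
    - cp crawler/settings.sample.py crawler/settings.py
    - cd crawler
    - py.test -s -v
  only:
    - master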