nlohmann/json not installing to /usr/local/include - Azure

My vmImage is 'ubuntu-latest'.
When my build comes to nlohmann I get the following error:
-- Up-to-date: /usr/local/include
CMake Error at libraries/nlohmann_json/cmake_install.cmake:41 (file):
file INSTALL cannot set permissions on "/usr/local/include": Operation not
permitted.
Call Stack (most recent call first):
libraries/cmake_install.cmake:42 (include)
cmake_install.cmake:42 (include)
It works fine locally and also on the Windows and macOS vmImage pipelines, so I assume this is some kind of permissions issue or setting in DevOps?
The yaml file is as follows:
# Starter pipeline
trigger:
- master

pool:
  vmImage: 'ubuntu-latest'

steps:
- task: UsePythonVersion@0
  displayName: 'install python 3.x'
  inputs:
    versionSpec: '3.x'
    addToPath: true
    architecture: 'x64'
- task: CmdLine@2
  displayName: 'install opengl'
  inputs:
    script: 'sudo apt-get -y install freeglut3 freeglut3-dev libglew1.5 libglew1.5-dev libglu1-mesa libglu1-mesa-dev libgl1-mesa-glx libgl1-mesa-dev'
    failOnStderr: true
- task: CmdLine@2
  displayName: 'install sdl'
  inputs:
    script: 'sudo apt-get install -y libsdl2-dev libsdl2-image-dev libsdl2-ttf-dev libsdl2-net-dev'
    failOnStderr: true
- task: CmdLine@2
  displayName: 'update google gpg key'
  inputs:
    script: 'wget -q -O - https://bazel.build/bazel-release.pub.gpg | sudo apt-key add -'
    failOnStderr: false
- task: CmdLine@2
  displayName: 'install gcc 10'
  inputs:
    script: 'sudo apt-get update && sudo add-apt-repository ppa:ubuntu-toolchain-r/test && sudo apt-get update && sudo apt-get -y install gcc-10 g++-10'
    failOnStderr: true
- task: PythonScript@0
  displayName: 'run build.py'
  inputs:
    scriptSource: 'filePath'
    scriptPath: '$(Build.SourcesDirectory)/build.py'
    failOnStderr: false

In the end, I disabled the install step of nlohmann/json by setting the following before adding the subdirectory:
set(JSON_Install OFF CACHE INTERNAL "")
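
For context, a minimal sketch of where that line sits in the top-level CMakeLists.txt (the libraries/nlohmann_json path is taken from the error output; everything else here is illustrative):

# Turn off nlohmann_json's install rules before pulling it in, so the
# generated cmake_install.cmake no longer tries to touch /usr/local/include.
set(JSON_Install OFF CACHE INTERNAL "")
add_subdirectory(libraries/nlohmann_json)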

Related

How to use env variables in Azure Pipelines?

I have the following Azure task that copies my .env file from secure files (library) in Azure to the working directory from which I execute unit tests:
steps:
- task: DownloadSecureFile@1
  inputs:
    secureFile: '.env'
  displayName: "Download .env file"
- task: CopyFiles@2
  inputs:
    sourceFolder: "$(Agent.TempDirectory)"
    contents: ".env"
    targetFolder: "$(System.DefaultWorkingDirectory)"
  displayName: "Copy .env"
- script: |
    cd $(System.DefaultWorkingDirectory)
    sudo apt-get update
    sudo apt-get install -y python3-dev default-libmysqlclient-dev build-essential unixodbc-dev
    pip3 install -r requirements.txt
    pip3 install pytest pytest-azurepipelines
    pip3 install pytest-cov
    python3 -m pytest --doctest-modules --junitxml=junit/test-results.xml --cov=. --cov-report=xml
However, when running pytest it fails because Python cannot find the environment variables from the file. Is there something I am missing that I need to do so that pytest can successfully run the tests with all the environment variables coming from the .env file?
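
Pytest does not read .env files on its own, so the two tasks above only place the file on disk. One possible fix (an assumption on my part, not from the original post) is to load the file explicitly at test startup with the python-dotenv package in a conftest.py:

# conftest.py -- picked up automatically by pytest before the tests run.
# Assumes python-dotenv is installed (e.g. listed in requirements.txt).
import os
from dotenv import load_dotenv

# Export the .env entries into os.environ so the code under test sees them.
load_dotenv(os.path.join(os.path.dirname(__file__), ".env"))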

Where is the wheel (.whl) file created inside an Azure pipeline run?

I built a simple YAML file for an Azure pipeline.
Tasks:
Build the wheel file during the Azure pipeline run
Copy the wheel file just created in the pipeline run to Databricks' dbfs
It does write a .whl file somewhere, as I can see in the script output in the pipeline, but it doesn't write it back into the repo, and I don't know where the created file is.
stages:
- stage: Build
  jobs:
  - job:
    steps:
    - task: Bash@3
      inputs:
        targetType: 'inline'
        script: |
          echo 'install/update pip and wheel'
          python -m pip install --upgrade pip wheel
          echo 'building wheel file'
          python src/databricks/setup.py bdist_wheel
      displayName: 'Build updated wheel file'
- stage: Deploy
  jobs:
  - job:
    steps:
    - script: |
        python src/databricks/setup.py bdist_wheel
        echo 'installing databricks-cli'
        python -m pip install --upgrade databricks-cli
      displayName: 'install databricks cli'
    - script: |
        echo 'copying wheel file'
        databricks fs cp --overwrite src/databricks/dist/library-0.0.2-py3-none-any.whl dbfs:/FileStore/whl
        echo 'copying main.py'
        databricks fs cp --overwrite src/databricks/main.py dbfs:/FileStore/whl
      env:
        DATABRICKS_HOST: $(DATABRICKS_HOST)
        DATABRICKS_TOKEN: $(DATABRICKS_TOKEN)
      displayName: 'Copy wheel file and main.py to dbfs'
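
A note on where the wheel lands: bdist_wheel writes to a dist/ folder relative to the directory the command runs in (the job's working directory on the agent), and a pipeline run never pushes files back into the repository. A diagnostic step you could add after the build to print the actual location (the displayName is illustrative; Azure expands $(Build.SourcesDirectory) before the script runs):

    - script: |
        # List every wheel produced during this run; bdist_wheel writes to a
        # dist/ directory under the current working directory, not the repo.
        find "$(Build.SourcesDirectory)" -name '*.whl' -print
      displayName: 'Locate built wheel files'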

How to build .NET Core on a self-hosted Ubuntu container agent?

I have a self-hosted container agent using Ubuntu.
FROM ubuntu:18.04
# To make it easier for build and release pipelines to run apt-get,
# configure apt to not require confirmation (assume the -y argument by default)
ENV DEBIAN_FRONTEND=noninteractive
RUN echo "APT::Get::Assume-Yes \"true\";" > /etc/apt/apt.conf.d/90assumeyes
RUN apt-get update && apt-get install -y --no-install-recommends \
    ca-certificates \
    curl \
    jq \
    git \
    iputils-ping \
    libcurl4 \
    libicu60 \
    libunwind8 \
    netcat \
    libssl1.0 \
    zip \
    unzip \
    && rm -rf /var/lib/apt/lists/*
RUN curl -LsS https://aka.ms/InstallAzureCLIDeb | bash \
    && rm -rf /var/lib/apt/lists/*
# Can be 'linux-x64', 'linux-arm64', 'linux-arm', 'rhel.6-x64'.
ENV TARGETARCH=linux-x64
WORKDIR /azp
COPY ./start.sh .
RUN chmod +x start.sh
ENTRYPOINT ["./start.sh"]
This runs and is active on the Azure platform. An Angular build pipeline runs successfully on this agent.
But when I create a .NET Core project and build it on this agent, it throws exceptions.
If I use the following:
trigger:
- main

pool:
  name: PCDOCKER

variables:
  solution: '**/*.sln'
  buildPlatform: 'Any CPU'
  buildConfiguration: 'Release'
  projectName: vitrin-api

steps:
- task: NuGetToolInstaller@1
- task: DotNetCoreCLI@2
  inputs:
    command: 'restore'
    feedsToUse: 'select'
    vstsFeed: 'my-vsts-feed' # A series of numbers and letters
- task: DotNetCoreCLI@2
  inputs:
    command: 'build'
    arguments: '--configuration $(buildConfiguration)'
  displayName: 'dotnet build $(buildConfiguration)'
The error is at the DotNetCoreCLI@2 step:
##[error]Error: Unable to locate executable file: 'dotnet'. Please verify either the file path exists or the file can be found within a directory specified by the PATH environment variable. Also check the file mode to verify the file is executable.
##[error]Packages failed to restore
If I use
- task: NuGetCommand@2
  inputs:
    restoreSolution: '$(solution)'
instead of DotNetCoreCLI@2, it throws an exception saying "mono" could not be found.
How can I build my .NET application on my self-hosted container?
To build a .NET Core application, the .NET SDK needs to be present on the system; it includes all the components required to build and run the application.
To fix this issue, install the .NET SDK in your Ubuntu container.
Before you install .NET, run the following commands to add the Microsoft package signing key to your list of trusted keys and register the package repository.
Open a terminal and run the following commands:
wget https://packages.microsoft.com/config/ubuntu/21.04/packages-microsoft-prod.deb -O packages-microsoft-prod.deb
sudo dpkg -i packages-microsoft-prod.deb
rm packages-microsoft-prod.deb
Now, install the .NET SDK:
sudo apt-get update; \
sudo apt-get install -y apt-transport-https && \
sudo apt-get update && \
sudo apt-get install -y dotnet-sdk-6.0
Check out this page for more detail: https://learn.microsoft.com/en-us/dotnet/core/install/linux-ubuntu#2110-
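
Since the agent here is a container image rather than an interactive machine, the same steps can be baked into the Dockerfile instead (no sudo inside a Docker build). A sketch of the extra layer, assuming the Ubuntu 21.04 feed from the commands above:

# Register the Microsoft package feed and install the .NET 6 SDK so the
# DotNetCoreCLI@2 task can find `dotnet` on the PATH.
RUN curl -LsS -o packages-microsoft-prod.deb \
        https://packages.microsoft.com/config/ubuntu/21.04/packages-microsoft-prod.deb \
    && dpkg -i packages-microsoft-prod.deb \
    && rm packages-microsoft-prod.deb \
    && apt-get update \
    && apt-get install -y apt-transport-https dotnet-sdk-6.0 \
    && rm -rf /var/lib/apt/lists/*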

GitLab CI - Specifying stages in before_script

I want to run a script that is needed for my test_integration and build stages. Is there a way to specify this in the before_script so I don't have to write it out twice?
before_script:
  stage: ['test_integration', 'build']
This does not seem to work; I get the following error from the GitLab CI linter:
Status: syntax is incorrect
Error: before_script config should be an array of strings
.gitlab-ci.yml
stages:
  - security
  - quality
  - test
  - build
  - deploy

image: node:10.15.0

before_script:
  stage: ['test_integration', 'build']
  script:
    - apt-get update
    - apt-get -y install apt-transport-https ca-certificates curl gnupg2 software-properties-common
    - curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add -
    - add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable"
    - apt-get update
    - apt-get -y install docker-ce
    - curl -L "https://github.com/docker/compose/releases/download/1.23.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
    - chmod +x /usr/local/bin/docker-compose
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY

services:
  - mongo
  - docker:dind

security:
  stage: security
  script:
    - npm audit

quality:
  stage: quality
  script:
    - npm install
    - npm run-script lint

test_unit:
  stage: test
  script:
    - npm install
    - npm run-script unit-test

test_integration:
  stage: test
  script:
    - docker-compose -f CI/backend-service/docker-compose.yml up -d
    - npm install
    - npm run-script integration-test

build:
  stage: build
  script:
    - npm install
    - export VERSION=`git describe --tags --always`
    - docker build -t $CI_REGISTRY_IMAGE:$VERSION .
    - docker push $CI_REGISTRY_IMAGE

deploy:
  stage: deploy
  script: echo 'deploy'
The before_script syntax does not support a stage section. You could use before_script without the stage section, as you have done, but then the before_script commands would run for every single job in the pipeline.
Instead, you could use GitLab's anchors feature, which allows you to reuse content across the .gitlab-ci.yml file.
In your scenario, it would look something like this:
stages:
  - security
  - quality
  - test
  - build
  - deploy

image: node:10.15.0

.before_script_template: &build_test-integration
  before_script:
    - apt-get update
    - apt-get -y install apt-transport-https ca-certificates curl gnupg2 software-properties-common
    - curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add -
    - add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable"
    - apt-get update
    - apt-get -y install docker-ce
    - curl -L "https://github.com/docker/compose/releases/download/1.23.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
    - chmod +x /usr/local/bin/docker-compose
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY

services:
  - mongo
  - docker:dind

security:
  stage: security
  script:
    - npm audit

quality:
  stage: quality
  script:
    - npm install
    - npm run-script lint

test_unit:
  stage: test
  script:
    - npm install
    - npm run-script unit-test

test_integration:
  stage: test
  <<: *build_test-integration
  script:
    - docker-compose -f CI/backend-service/docker-compose.yml up -d
    - npm install
    - npm run-script integration-test

build:
  stage: build
  <<: *build_test-integration
  script:
    - npm install
    - export VERSION=`git describe --tags --always`
    - docker build -t $CI_REGISTRY_IMAGE:$VERSION .
    - docker push $CI_REGISTRY_IMAGE

deploy:
  stage: deploy
  script: echo 'deploy'
Edit: there is another way. Instead of using anchors, you could also use the extends syntax:
.before_script_template:
  before_script:
    - apt-get update
    - apt-get -y install apt-transport-https ca-certificates curl gnupg2 software-properties-common
    - curl -fsSL https://download.docker.com/linux/debian/gpg | apt-key add -
    - add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable"
    - apt-get update
    - apt-get -y install docker-ce
    - curl -L "https://github.com/docker/compose/releases/download/1.23.2/docker-compose-$(uname -s)-$(uname -m)" -o /usr/local/bin/docker-compose
    - chmod +x /usr/local/bin/docker-compose
    - docker login -u gitlab-ci-token -p $CI_JOB_TOKEN $CI_REGISTRY

test_integration:
  extends: .before_script_template
  stage: test
  script:
    - docker-compose -f CI/backend-service/docker-compose.yml up -d
    - npm install
    - npm run-script integration-test

build:
  extends: .before_script_template
  stage: build
  script:
    - npm install
    - export VERSION=`git describe --tags --always`
    - docker build -t $CI_REGISTRY_IMAGE:$VERSION .
    - docker push $CI_REGISTRY_IMAGE
and so on.

GitLab runner error "Build failed: exit code 1"

I'm trying to build a Jekyll blog using a GitLab runner (for GitLab Pages). I get the following error: ERROR: Build failed: exit code 1. Until this point, everything worked. Link to project: https://gitlab.com/dash.plus/dashBlog
Just add
- apt-get update && apt-get install -y nodejs
and of course
- bundle install
inside .gitlab-ci.yml:
image: ruby:2.3

test:
  stage: test
  script:
    - gem install jekyll
    - bundle install
    - apt-get update && apt-get install -y nodejs
    - bundle exec jekyll build -d test/
  artifacts:
    paths:
      - test
  except:
    - master

pages:
  stage: deploy
  script:
    - gem install jekyll
    - bundle install
    - apt-get update && apt-get install -y nodejs
    - bundle exec jekyll build -d public/
  artifacts:
    paths:
      - public
  only:
    - master
