I have a multi-job deployment to an environment in GitLab CI. When I roll back the environment, only the last job is executed, but I need all the jobs to run.
My stages are: 1) Build, 2) Test, 3) Build Docker image, 4) Deploy Docker image. On a rollback only the 4th stage is run, but that is not what I want: I want to re-build the app so the previous version is deployed. Is there a solution to this? Can I re-run the whole pipeline, or can I deploy a previous version of my Docker image (I have all the previous versions stored) with the environment feature in GitLab?
I tried specifying the same environment in all the jobs, but 4 environments are created, not 1 environment with 3 jobs, so I can only roll back one job.
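For reference, a stripped-down sketch of the layout I'm describing (the job names, scripts and image names are placeholders, not my real config):
stages:
  - build
  - test
  - docker
  - deploy

build_app:
  stage: build
  script: make build               # placeholder
  environment: production

run_tests:
  stage: test
  script: make test                # placeholder

build_image:
  stage: docker
  script: docker build -t app:$CI_COMMIT_SHA .
  environment: production

deploy_image:
  stage: deploy
  script: ./deploy.sh app:$CI_COMMIT_SHA   # placeholder
  environment: production          # on rollback, only this job gets re-run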
I'm trying to set up a pipeline in Azure DevOps that creates an Azure Resource Group. The configuration is kept in a .tf file in the DevOps repository.
The pipeline was created with the classic editor. The following tasks were added to the agent job, in this order: terraform init (Terraform CLI) and Build Project (.NET Core).
Terraform is already installed (its file path was added to the environment variables).
I'm really new to this and am taking my first steps, so any help would be appreciated. Please also tell me if any important information is missing.
This is the agent job's log for the terraform init task:
2023-01-19T15:15:19.3880770Z ##[section]Starting: terraform init
2023-01-19T15:15:19.3895740Z ==============================================================================
2023-01-19T15:15:19.3896080Z Task : Terraform CLI
2023-01-19T15:15:19.3896240Z Description : Execute terraform cli commands
2023-01-19T15:15:19.3896440Z Version : 0.7.8
2023-01-19T15:15:19.3896590Z Author : Charles Zipp
2023-01-19T15:15:19.3896770Z Help :
2023-01-19T15:15:19.3896890Z ==============================================================================
2023-01-19T15:15:21.3070520Z ##[error]Error: Unable to locate executable file: 'terraform'. Please verify either the file path exists or the file can be found within a directory specified by the PATH environment variable. Also check the file mode to verify the file is executable.
2023-01-19T15:15:21.3186320Z ##[error]Error: Unable to locate executable file: 'terraform'. Please verify either the file path exists or the file can be found within a directory specified by the PATH environment variable. Also check the file mode to verify the file is executable.
2023-01-19T15:15:21.5201550Z ##[section]Finishing: terraform init
I have tried running this with various Backend Type settings and have changed the Agent specification multiple times.
I have also tried running it after putting terraform.exe in the repository root.
My expectation was that the pipeline would create a new resource group, but the task isn't even executed.
It looks like you're using the Azure Pipelines Terraform Tasks extension for Azure DevOps, which also includes an installer task for Terraform called "TerraformInstaller". Try adding that to your pipeline before "terraform init" to ensure that Terraform is installed (by default it will install the latest version unless you give it a specific one) and is correct for the agent's OS.
As to why the existing terraform binary isn't being found: my guess is that you've been trying to use the Windows version of Terraform (terraform.exe) instead of the Linux version (which is just terraform), and that the pipeline is running on a Linux agent. I'm guessing this because the task output says it can't find a file named terraform, without the .exe extension.
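In case it helps, a rough YAML sketch of an installer step followed by the init step; the task names come from that extension, and the exact major versions are an assumption, so check what your organization has installed:
steps:
  # Install Terraform on the agent before any terraform command runs
  - task: TerraformInstaller@0
    inputs:
      terraformVersion: 'latest'

  # Run "terraform init" against the .tf files in the repository
  - task: TerraformCLI@0
    inputs:
      command: 'init'
      workingDirectory: '$(System.DefaultWorkingDirectory)'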
Problem solved. Two solutions:
Add a Terraform install task to the pipeline before any other Terraform tasks. You can find useful links for this in the accepted answer above.
Run the pipeline with an Ubuntu agent (any version should work): Pipeline -> Agent Specification -> ubuntu-latest.
I prefer the second solution because the Ubuntu agent finished the tasks significantly faster than the Windows agent (and not because of the added install task, which only took 7 seconds).
For comparison:
Ubuntu agent: 1 min 16 sec (first execution; the executions afterwards ~30 sec total)
Windows agent: 4 min 31 sec
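If you later move from the classic editor to a YAML pipeline, the same agent choice is expressed like this (a sketch, not part of my classic-editor setup):
pool:
  vmImage: 'ubuntu-latest'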
I am running a test job on Jenkins, using Jenkinsfile pipeline. The job targets the node ubuntu-node.
After the job is done, when I select the "Workspace" link, I get 2 entries, for example:
Workspaces for validation_my_proj #105
/home/jenkins/workspace/validation_my_proj#script on master
/home/jenkins/workspace/validation_my_proj on ubuntu-node
Could someone explain why I have 2 workspaces? What does the first entry, validation_my_proj#script on master, mean?
I am having some problems with linking the executable produced by the build against a shared library (using the Meson build system), and I am wondering whether this workspace setup has anything to do with it, because everything works locally; it only fails on Jenkins.
I have a working build pipeline in Azure DevOps that essentially installs Python 3.6, sets up a virtual environment (.env) and then executes all unit tests. As its final step, it copies all files, including the virtual environment, to a drop folder.
My problem arises when creating the release pipeline. I run a bash script for the release pipeline that essentially installs the Azure Functions Core Tools and then activates the Python virtual environment before calling func azure functionapp publish.
The error I get states that settings are encrypted and that I need to call func settings add to add settings. However, when run locally, the script executes without any error whatsoever.
Does anyone have a working release pipeline in Azure DevOps for a Python-based Azure Function that they'd be able to share with me, so I can perhaps see what I am doing wrong?
Here is the relevant bit of script that executes:
#!/usr/bin/env bash
FUNCTION_APP_NAME="secret"
FUNCTION_APP_FOLDER="evenMoreSecret"
# Install Azure Functions Core Tools
echo "--> Install Azure Functions Core Tools"
wget -q https://packages.microsoft.com/config/ubuntu/16.04/packages-microsoft-prod.deb
sudo dpkg -i packages-microsoft-prod.deb
sudo apt-get update
sudo apt-get install azure-functions-core-tools -y
echo ">>>>>>>> Initialize Python Virtual Environment"
source .env/bin/activate
echo "--> Publish the function app to Azure Functions"
cd $FUNCTION_APP_FOLDER
func azure functionapp publish $FUNCTION_APP_NAME --build-native-deps
The script is executed using the Azure CLI, with a service principal that is tied to the Azure account it is targeting.
Usually with Azure DevOps you create several build steps that result in build artifacts; these are defined in the azure-pipelines.yml file. You then have a release step, created within the UI, to release the artifacts you have built. This can involve deploying to a test server and then to production, or however you want to configure it. What you are describing is doing the build and the release all in one YAML file, since func publish is essentially doing a release, and it all seems to be in the one script.
In the next release of the az CLI there is a new command, az functionapp devops-build, that will set up the DevOps pipeline with separate build and release steps. In the meantime, we have created a series of beta YAML files that we hope you can just drag and drop to do the build and release steps within the build part alone (as you are doing).
The beta yaml files are here:
https://github.com/Azure/azure-functions-devops-build/wiki/Yaml-Samples
I must disclaim that they are not fully tested, nor are they supported yet.
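As a rough illustration of just the build half, based on what you describe (Python 3.6, a virtual environment, unit tests, then a drop artifact), the steps section of an azure-pipelines.yml might look something like this; the requirements file, paths and test command are assumptions rather than anything taken from your pipeline:
steps:
  - task: UsePythonVersion@0
    inputs:
      versionSpec: '3.6'

  # Recreate the virtual environment and run the unit tests
  - script: |
      python -m venv .env
      source .env/bin/activate
      pip install -r requirements.txt
      python -m pytest
    displayName: 'Set up venv and run unit tests'

  # Publish everything, including the virtual environment, as the drop artifact
  - task: PublishBuildArtifacts@1
    inputs:
      PathtoPublish: '$(System.DefaultWorkingDirectory)'
      ArtifactName: 'drop'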
I will answer myself, as I've solved the problem.
To @Oliver Dolk: we do NOT want to publish as part of the build pipeline. The only thing I'm interested in there is setting up a virtual environment and running the unit tests.
The RELEASE stage is where we deploy the scripts copied over from the build step. Those artifacts are then the basis for releasing into the dev, test and production environments.
I was missing a very important step in my script: creating a local.settings.json file, which contains the encrypted settings for the function app.
To solve the problem, I only had to call the following:
func azure functionapp fetch-app-settings $FUNCTION_APP_NAME
This calls the Azure function app and retrieves its settings into an encrypted local.settings.json, which is then used during publishing.
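In the context of the script above, the fix is just one extra line before publishing; the variable names are the same placeholders as before:
cd $FUNCTION_APP_FOLDER
# Pull the function app's settings into an encrypted local.settings.json
func azure functionapp fetch-app-settings $FUNCTION_APP_NAME
# Publish now that the settings are available locally
func azure functionapp publish $FUNCTION_APP_NAME --build-native-deps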
For a complete reference of both the build YAML and the bash script that does the deployment, I've put them in an anonymized GitHub repo:
https://github.com/digitaldias/Python-Examples
When we are on site without a network, we can still commit to the local git repo, but we can't have gitlab-ci compile the project and catch problems early.
How can we have a local gitlab-ci and gitlab-runner that can compile commits offline (or by some alternative means)?
The gitlab runner has an exec command which allows you to run the gitlab runner on your local machine with your local .gitlab-ci.yml configuration file.
This command allows you to run builds locally, trying to replicate the CI
environment as much as possible. It doesn't need to connect to GitLab, instead
it reads the local .gitlab-ci.yml and creates a new build environment in
which all the build steps are executed.
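For example, if your .gitlab-ci.yml defines a job named build, something along these lines should run it locally (the job name is an assumption; pick the executor your jobs expect):
# Run the "build" job from the local .gitlab-ci.yml with the shell executor
gitlab-runner exec shell build

# Or run it inside a Docker container instead
gitlab-runner exec docker build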
If local network trouble is frequent, though, you may consider installing GitLab on premises and connecting your own local GitLab Runner to it so the work stays automated.
We recently started to use GitLab-CI on the gitlab.com free service.
At first everything went fine, but now it seems we can't build our project anymore. The builds are shown as pending and don't do anything.
Here's what we have in our builds list:
And if we check the details of a build:
As you might notice, in the list each build is assigned to a runner id, but on the details page the runner section is blank.
At first we thought it was just latency caused by the gitlab.com infrastructure, but it's really just stuck there...
EDIT
It's been more than a year, but I keep getting notifications about this question. If I recall correctly, the problem was on GitLab's side. Follow the GitLab docs, make sure your setup is valid, and hope for the best!
If you are working with a local gitlab-runner, for example on macOS or a custom runner that you have created, you have to start it manually so it picks up jobs.
Based on this topic in the GitLab documentation, you should start it manually in user mode or system mode, depending on where you execute the command.
Run in a terminal.
If you have not started gitlab-runner yet:
gitlab-runner start
system-mode execution
sudo gitlab-runner run
user-mode execution
gitlab-runner run
I was stuck on the same issue on my Windows machine. I went to Event Viewer to get the service logs and found the error "listen_address not defined".
I followed the steps below to fix it.
Go to the GitLab repository and edit the runner settings.
You will find a checkbox labelled "Indicates whether this runner can pick jobs without tags".
Make sure that option is checked.
It works for me now.
My problem was solved after doing the following steps:
Go to your project repository, click on CI/CD and then select Pipelines. Try clearing the runner cache by clicking on Clear runner caches.
Verify, start and run your local runners by doing the following on the server where you registered your runners:
sudo gitlab-runner verify
sudo gitlab-runner start
sudo gitlab-runner run
GitLab maxed out their shared runners but they have just finished adding more of them. Now GitLab has 12 shared runners. Take a look at this issue: https://gitlab.com/gitlab-org/gitlab-foss/issues/5543#note_3130561
Update
GitLab has moved to auto scaling Runners. If you're still hitting any issues it might be due to a different cause.
Try to clear the Runner cache if you have set it up.
Go to CI/CD >> Pipelines >> at the top, click Clear Runner Caches.
For me, this workaround worked: Pausing and unpausing the runner triggers the pending job to run.
Reference: https://gitlab.com/gitlab-org/gitlab/-/issues/23401
I had the same issue because there were no active runners.
Go to Settings > CI/CD > enable "Shared runners" for this project.