I have a CI/CD pipeline that executes a set of Terraform scripts managing the IPSets for multiple environments. The runners are deployed into each environment's Kubernetes cluster in AWS.
The pipeline is set up so that the runner dedicated to each AWS environment, running in its respective EKS cluster, runs terraform plan and apply on the folder containing that environment's Terraform scripts.
The CI/CD pipeline contains a terraform plan job followed by a terraform apply job.
My problem is that when one or more environments are scaled down (i.e. the runner deployment is scaled to 0), the pipeline gets stuck and eventually fails, since the jobs for those runners remain in the pending state.
Is there a way to structure the CI/CD pipeline so that when a job has been stuck for X amount of time, it stops/fails (and therefore does not trigger the terraform apply job), while jobs for the other environments continue to run?
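For reference, this is roughly the shape of the pipeline per environment (a sketch; stage names, job names, tags, and the timeout value are placeholders). As far as I understand, needs keeps the environments independent of each other and timeout only applies once a job is running, which is why the pending case is the part I'm stuck on:

stages:
  - plan
  - apply

plan-dev:
  stage: plan
  tags: [dev-eks-runner]     # hypothetical tag of the dev environment's runner
  timeout: 30 minutes        # only limits the job once it has started running
  script:
    - terraform -chdir=environments/dev plan

apply-dev:
  stage: apply
  tags: [dev-eks-runner]
  needs: [plan-dev]          # apply runs only if the dev plan succeeded
  script:
    - terraform -chdir=environments/dev apply -auto-approve

The other environments follow the same plan/apply pair with their own tags and folders.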
I am using GitLab Runner with the VM executor (so that each of my pipelines runs in a VM).
The thing is that GitLab deletes my VM at the end of the pipeline, but I don't want it to.
I want to be able to configure the Runner not to delete it, so that I can access my environment after the pipeline has run.
I think I saw something about a setting in the GitLab Runner's config.toml, but I can't find it anymore...
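For context, I mean the executor settings in the runner's config.toml, something like the sketch below (the runner name and base VM name are made up, and whether disable_snapshots is the option I half-remember is exactly what I'm unsure about):

[[runners]]
  name = "vm-runner"              # hypothetical runner name
  executor = "virtualbox"         # assuming the "VM executor" means the VirtualBox executor
  [runners.virtualbox]
    base_name = "ci-base-vm"      # hypothetical base VM that jobs run from
    disable_snapshots = false     # controls snapshot/restore behaviour; not confirmed to keep the VM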
Thanks!
I have a GitLab pipeline which deploys my app.
The app consists of two parts: front and back.
Both deployment jobs have
environment:
  name: staging
So if one job fails, the other one may succeed, which still sets the environment's deployment status to success.
Is there a way to overcome this?
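One direction I've considered (an untested sketch; the deploy scripts are placeholders): attach the environment keyword only to a final job that needs both deployments, so the environment status is only recorded when both parts succeed:

stages:
  - deploy
  - record

deploy-front:
  stage: deploy
  script:
    - ./deploy-front.sh          # hypothetical front-end deploy script

deploy-back:
  stage: deploy
  script:
    - ./deploy-back.sh           # hypothetical back-end deploy script

record-staging:
  stage: record
  needs: [deploy-front, deploy-back]   # runs only when both deploy jobs succeed
  environment:
    name: staging
  script:
    - echo "front and back both deployed to staging"

But I'm not sure whether this is the intended way to handle it.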
I am deploying the DEV environment from two pipelines, where each run consists of a deployment (10 mins) plus tests run against that deployment (10 mins):
1> Master Pipeline - triggers when changes are merged to master
2> Release Pipeline - manual deployment
During a deployment from the Release pipeline, if some code is merged to master, the Master pipeline triggers, hits an error saying some other installation/upgrade is in progress, and ends up in a failed state.
If the deployments were in the same pipeline, the second deployment would wait for the previous one to complete before starting, but since my tests are still running against the previous deployment, this also leads to an error-prone scenario.
Is there any way to achieve a dependency in Azure DevOps between two CD pipelines, so that while a deployment is running against an environment, a release from the other pipeline waits?
Since you want to invoke a deployment pipeline after the execution of a testing pipeline, what you can do is this: inside the test pipeline, before the testing starts, use a task to invoke a function that stores (in a storage solution of your choice) the status of the test pipeline as "under execution".
Then, after the testing completes, have the test pipeline trigger a function again to reset its status to "completed".
Then, inside the deployment pipeline, create a gate at the start of the pipeline. This gate invokes a function that checks the test pipeline's status and returns a response to the gate. The gate will not start the deployment until the preferred response (i.e. the test pipeline has completed execution) is received from the function.
Use an agentless job to trigger the functions in the pipelines.
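For illustration, the agentless part of the test pipeline could look roughly like this in YAML (a sketch; the function URL, the functionKey variable, and the request body are placeholders to replace with your own):

- job: mark_tests_running
  pool: server                    # agentless (server) job
  steps:
    - task: AzureFunction@1
      inputs:
        function: 'https://myfuncapp.azurewebsites.net/api/set-pipeline-status'   # hypothetical function URL
        key: '$(functionKey)'     # function key stored as a secret pipeline variable
        method: 'POST'
        body: '{"pipeline": "tests", "status": "running"}'
        waitForCompletion: 'false'   # return as soon as the function responds

A second job of the same shape at the end of the test pipeline would call the function again to mark the status as completed, and the gate in the deployment pipeline polls the same status.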
Reference:
Refer to this article by Marcus Felling on triggering functions using agentless jobs.
Azure Pipeline Gates
I'm using terraform in a CI/CD pipeline.
When I open a pull request, I create a workspace named after the feature branch.
Terraform creates all the resources with the workspace name attached to them so as to be unique.
Once this is deployed to prod there is a cleanup step that destroys everything created by that workspace.
This works fine, but recreating all the resources for every pull request will soon become unfeasible. Is there a way in Terraform to defer to the prod tfstate file so as to plan only for the delta from it and its dependencies?
A simple example:
My prod tfstate has these resources:
1 database (dbA)
2 schemas (dbA.schemaA, dbA.schemaB)
1 table (dbA.schemaA.tableA).
If I add one table (dbA.schemaB.tableB) in dbA.schemaB, I'd want Terraform to plan for
dbA
dbA.schemaB
dbA.schemaB.tableB
and not for
dbA.schemaA
dbA.schemaA.tableA
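One idea for referring to what prod already has, rather than recreating it (a sketch only, assuming the prod state sits in an S3 backend; the bucket, key, and region values are placeholders):

data "terraform_remote_state" "prod" {
  backend = "s3"                        # assumed backend type
  config = {
    bucket = "my-prod-terraform-state"  # hypothetical bucket
    key    = "prod/terraform.tfstate"   # hypothetical state key
    region = "us-east-1"                # hypothetical region
  }
}

The feature workspace could then reference existing prod objects through outputs exposed by the prod state (e.g. data.terraform_remote_state.prod.outputs.schema_b_name) and declare only the new table, although that only works for resources the prod configuration exports as outputs.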
I set up Gitlab and Gitlab-CI on a k8s cluster in AWS. I have jobs that use a lot of resources. I want to run these jobs on specific instances in AWS. How can this be done?
Kubernetes configuration
You need to add a node selector, which enables you to assign pods to specific nodes. First, label the node:
kubectl label nodes <node-name> gitlab=true
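To check which nodes carry the label (optional):
kubectl get nodes -l gitlab=true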
Gitlab Runner configuration
Specify a tag associated with the runner. In your case, uncheck the Run untagged jobs option.
Specify a node selector using the node_selector keyword:
[runners.kubernetes.node_selector]
gitlab = "true"
See a more complete example of config.toml on the GitLab website.
Gitlab CI configuration
Reference the tag of your runner in your .gitlab-ci.yml:
job:
  tags:
    - big_server