Persistent Volume required in one of the GitLab CI/CD pipelines - gitlab

I have a requirement where a customer wants to put some Java-based dependencies in a persistent volume so that their app containers can refer to them, and we have to create CI/CD pipelines for this use case.
We are using GitLab for CI and CodeStream for CD. How can this be done with regard to the persistent volume? Please suggest.
Does the persistent volume need to be created at the CodeStream level so that we can refer to it in GitLab, or is it something we can deal with in GitLab itself?
Thanks.
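If the GitLab runners execute jobs on the same Kubernetes cluster, one option is to mount an existing PersistentVolumeClaim into the job pods via the runner's Kubernetes executor configuration, so the CI jobs can populate the dependencies that the app containers later read. A minimal sketch of the runner's config.toml, assuming a PVC named java-deps-pvc already exists in the runner's namespace (the PVC name and mount path are placeholders, not from the question):

```toml
# GitLab Runner config.toml fragment (Kubernetes executor).
# Mounts the assumed PVC "java-deps-pvc" into every job pod at /opt/java-deps,
# where a CI job could copy the Java dependencies.
[[runners]]
  executor = "kubernetes"
  [runners.kubernetes]
    [[runners.kubernetes.volumes.pvc]]
      name = "java-deps-pvc"
      mount_path = "/opt/java-deps"
```

With this in place, the pipeline itself needs no special volume handling; a job can simply copy artifacts into /opt/java-deps, and the PVC is managed on the cluster side rather than in CodeStream.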

Related

How to update security patches in Azure Container Registry automatically?

Scenario:
There is an Azure Container Registry (ACR) with many repositories (around 20).
Each repository has different tags, like image:dev0.1, dev0.2, prod0.1, prod0.2.
There are more than 100 images in total across all tags in all repositories.
I know about ACR tasks, which can be used to rebuild an image automatically when its base image is updated.
In my scenario, I would need to create more than 100 ACR tasks, one for each tag, and also maintain a backup of old images for recovery in case of production failures.
I would like to know: is there a standard and/or simple way to deal with this case?
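One way to avoid a task per tag is a task per repository: an ACR task's base-image-update trigger is enabled by default, and tagging each rebuild with the unique run ID keeps older images around as a recovery history. A rough sketch, assuming the registry name, a Git context URL and a Dockerfile path (all placeholders, not from the question):

```shell
#!/bin/sh
# Sketch: create one ACR task per repository (not per tag), relying on the
# task's built-in base-image-update trigger to rebuild dependent images.
# ACR_NAME and GIT_REPO are assumed environment variables for illustration.

# Derive a valid task name from a repository name
# (ACR task names cannot contain slashes or dots).
task_name_for() {
  printf 'rebuild-%s' "$1" | tr './' '--'
}

if [ -n "${ACR_NAME:-}" ]; then
  # List every repository in the registry and create a rebuild task for each.
  # {{.Run.ID}} tags each rebuild uniquely, so old images remain as backups.
  for repo in $(az acr repository list --name "$ACR_NAME" --output tsv); do
    az acr task create \
      --registry "$ACR_NAME" \
      --name "$(task_name_for "$repo")" \
      --image "$repo:{{.Run.ID}}" \
      --context "$GIT_REPO" \
      --file Dockerfile
  done
fi
```

Because every run produces a new, uniquely tagged image instead of overwriting an old one, rolling back after a production failure is a matter of re-pointing the prod tag at an earlier run's image.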

Is it possible to set only one branch for a Databricks shared Git folder (highlighted in the screenshot)?

I would like to pin a shared folder in the Databricks workspace to a single branch. I am attaching a screenshot to give more clarity.
All of our Data Factory pipelines use the shared folder location for running notebooks, and if someone switches it to a branch other than production, everything starts breaking. So I would like to understand whether this branch can be locked and, if yes, how?
Please help!
Right now you can do that by setting the correct permissions on that checkout, allowing only specific users or a system account (a service principal on Azure; recommended) to perform the Pull operation; everyone else should have read-only permissions (see the docs on permissions).
There is also new functionality to set a Git reference (Git URL and branch/tag/commit) directly in the job configuration, so you won't even need to have a repository checked out. This functionality was just released (as of 6th May 2022).
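For illustration, the job-level Git reference from the answer looks roughly like this in a Jobs API payload; the job name, repository URL and notebook path are placeholders, and a real job would also need a cluster specification:

```json
{
  "name": "nightly-etl",
  "git_source": {
    "git_url": "https://github.com/my-org/notebooks",
    "git_provider": "gitHub",
    "git_branch": "production"
  },
  "tasks": [
    {
      "task_key": "run-notebook",
      "notebook_task": {
        "notebook_path": "etl/main",
        "source": "GIT"
      }
    }
  ]
}
```

Because the branch is fixed in the job definition itself, nobody switching the shared checkout to another branch can affect what the job runs.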

Get the name of the cluster a release pipeline is targeting in a manual intervention job

I'm currently attempting to create a release pipeline to a Service Fabric cluster.
The goal of the pipeline is to take a built artefact and publish it to a Service Fabric cluster, which it does successfully.
I am looking to add a manual intervention step which will notify the user of the name of the SF cluster they are attempting to deploy to.
How can I do this? There does not seem to be a way to access the name of the cluster. Using the predefined variable
$(Parameters.serviceConnectionName)
will print the ID of the connection, rather than its actual name.
I do not see any predefined variable named "Parameters.serviceConnectionName" in the following documentation about Azure DevOps. Where did you find this variable?
Use predefined variables
Classic release and artifacts variables
To get the name of the Service Fabric cluster, you could check whether there is a specific command line or API for Service Fabric. If there is, you can run the related command line or call the API and get the cluster name from the output.
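Another angle, since the ID of the service connection is already available: the Azure DevOps Service Endpoints REST API can look up a connection by ID, and its response includes the connection's display name and URL (for a Service Fabric connection, the cluster endpoint). A sketch, where the organization, project and PAT variable are placeholder assumptions:

```shell
#!/bin/sh
# Sketch: resolve an Azure DevOps service connection ID to its display name
# via the Service Endpoints REST API.
# AZDO_PAT and CONNECTION_ID are assumed environment variables.

# Build the Service Endpoints REST URL for a given org/project/connection ID.
endpoint_url() {
  printf 'https://dev.azure.com/%s/%s/_apis/serviceendpoint/endpoints/%s?api-version=6.0-preview.4' \
    "$1" "$2" "$3"
}

if [ -n "${AZDO_PAT:-}" ]; then
  # The "name" field of the JSON response is the connection's display name;
  # for Service Fabric connections, "url" is the cluster endpoint.
  curl -s -u ":$AZDO_PAT" "$(endpoint_url myorg myproject "$CONNECTION_ID")"
fi
```

A preceding pipeline script step could run this lookup and write the name into a pipeline variable for the manual intervention instructions to display.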

Versioning Azure static websites

As part of our release pipeline we deploy to Azure blob store (static websites). So every time the release pipeline runs, it overwrites the contents of the blob store with the new build artifact created and we see the latest changes.
For debugging and internal testing we have a requirement where each deployment instead of overwriting the existing contents of the blob store, creates a version.
So if a dev checks in their changes to master and a new artifact is generated, it gets deployed to https://abc.z22.web.core.windows.net/1. The next time a new change is checked in to master, it creates a new version at - https://abc.z22.web.core.windows.net/2.
There is versioning in blob storage that was added recently, but you have to go into the blob store manually and mark a version as current.
Is there a way to achieve this? Is there any other Azure offering that can help with this?
OK, it looks like you want all the versions to be active and available at different URLs. I don't think that's possible with Azure Web Apps either. Potentially you could spin up a new container on each code push and run it on a different port, but you would have to build logic to limit the number of containers, as you cannot grow infinitely. A rather unusual requirement. Alternatively, you could use deployment slots in a Web App to serve multiple versions at the same time, but the number of slots is limited by the tier you opt for.
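That said, the per-folder idea from the question can be done with blob storage alone: instead of overwriting the $web container, upload each build into its own numbered subfolder, so every version stays reachable under its own path. A sketch of the release step, assuming the storage account name and the Azure DevOps Build.BuildId variable (both placeholders/assumptions):

```shell
#!/bin/sh
# Sketch: publish each build into its own versioned folder under the $web
# container, e.g. https://abc.z22.web.core.windows.net/v42/, instead of
# overwriting the previous deployment.
# STORAGE_ACCOUNT and BUILD_BUILDID are assumed environment variables.

# Compute the destination folder name for a given build number.
version_path() {
  printf 'v%s' "$1"
}

if [ -n "${STORAGE_ACCOUNT:-}" ]; then
  az storage blob upload-batch \
    --account-name "$STORAGE_ACCOUNT" \
    --destination '$web' \
    --destination-path "$(version_path "$BUILD_BUILDID")" \
    --source ./dist
fi
```

The trade-off is that old folders accumulate, so a cleanup policy (or a lifecycle management rule on the container) is needed to cap the number of retained versions.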

How can I interface with Terraform from an external app?

I'm using a PHP app, BoxBilling. It takes orders from end users; these orders need to be turned into actual nodes and containers.
I was planning on using Terraform as the provisioner for both: containers whenever there is room available on existing nodes, or new nodes whenever the existing ones are full.
Terraform would interface with my provider for creating new nodes and with Vagrant for configuring containers.
Vagrant would interface with Kubernetes to provision the pods/containers.
The question is: is there an inbound Terraform API that I can use to send orders to Terraform from the BoxBilling app?
I've searched the documentation, examples and case studies, but it's eluding me...
Thank you!
You could orchestrate the provisioning of infrastructure and/or the configuration of nodes using an orchestration/CI tool such as Jenkins.
Jenkins has a Remote Access API which can be called to trigger a set of steps, which could include terraform plan, terraform apply, creation of new workspaces, etc., and then downstream configuration, testing and anything else in your toolchain.
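For illustration, triggering such a parameterized Jenkins job over the Remote Access API is a single authenticated POST; the job name, credentials and parameters below are placeholder assumptions, and the PHP app would issue the equivalent HTTP request:

```shell
#!/bin/sh
# Sketch: trigger a parameterized Jenkins job that runs terraform plan/apply,
# as the billing app would after an order is accepted.
# JENKINS_URL, JENKINS_API_TOKEN, the user and the job name are assumptions.

# Build the remote-trigger URL for a named Jenkins job.
build_url() {
  printf '%s/job/%s/buildWithParameters' "$1" "$2"
}

if [ -n "${JENKINS_URL:-}" ]; then
  # Jenkins' Remote Access API accepts a POST authenticated with a user/API
  # token pair; the job itself would run terraform init/plan/apply.
  curl -s -X POST -u "deploy-bot:$JENKINS_API_TOKEN" \
    --data-urlencode "ORDER_ID=12345" \
    --data-urlencode "NODE_COUNT=2" \
    "$(build_url "$JENKINS_URL" provision-order)"
fi
```

The job's parameters (here ORDER_ID and NODE_COUNT) can then be passed to Terraform as -var arguments, keeping the billing app decoupled from Terraform itself.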