Scenario:
There is an Azure Container Registry (ACR) with around 20 repositories.
Each repository has several tags, e.g. image:dev0.1, dev0.2, prod0.1, prod0.2.
Across all repositories and tags there are more than 100 images in total.
I know about ACR Tasks, which can be used to rebuild an image automatically when its base image is updated.
In my scenario I would need to create more than 100 ACR tasks (one per tag), and I also need to keep a backup of old images for recovery in case of a production failure.
Is there a standard and/or simple way to handle this case?
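For the task-creation side, rather than defining 100+ tasks by hand, the commands can be generated with a small script over your repositories and tags. A minimal sketch, assuming placeholder registry, repository, tag, and Git context names; it only prints the `az acr task create` commands so they can be reviewed before running:

```shell
#!/bin/sh
# Print one "az acr task create" command per repo:tag combination.
# All names below are placeholders; in practice the repo and tag lists
# would come from "az acr repository list" / "az acr repository show-tags".
gen_task_cmds() {
  registry=$1
  for repo in app1 app2; do
    for tag in dev0.1 prod0.1; do
      # ACR task names cannot contain dots, so sanitize the tag first.
      safe_tag=$(printf '%s' "$tag" | tr '.' '-')
      printf 'az acr task create --registry %s --name %s-%s-rebuild --image %s:%s --context https://github.com/org/%s.git --file Dockerfile --base-image-trigger-enabled true\n' \
        "$registry" "$repo" "$safe_tag" "$repo" "$tag" "$repo"
    done
  done
}

gen_task_cmds myregistry
```

For the backup side of the requirement, `az acr import` can copy an existing tag into a backup repository (or a second registry) before a rebuild overwrites it.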
I have a requirement where the customer wants to put some Java-based dependencies in a persistent volume so that their app containers can refer to them, and we have to create CI/CD pipelines for this use case.
We are using GitLab for CI and CodeStream for CD. How can this be done with regard to the persistent volume? Should the persistent volume be created at the CodeStream level and then referenced in GitLab, or can it be handled in GitLab itself?
Please suggest. Thanks.
We are using the Azure cloud platform and Terraform to provision our resources via Azure DevOps pipelines. When we provisioned the resources we kept the state files per resource type (e.g. APIM, App Service, AKS, Storage Accounts, etc.), and the state files were in sync with the actual resources.
However, we have other ADO pipelines, run as part of our application releases, that modify the previously Terraform-built resources (API creation and updates, tag updates on resources, additional components added to the base resource, etc.). Those changes put our Terraform state out of sync with the actual resources, so when we trigger a terraform plan for those resources it now shows a number of changes, and some resources are marked for replacement.
So we need to bring the state files for our existing resources back in sync with any pipeline or manual changes made through the portal, and going forward we want to follow the practice of updating the state files incrementally.
Searching the internet, we found that this might be achievable with Terraformer, and we are planning to add a pipeline for a Terraformer task that will apply those changes to the existing state files for each resource (we plan to schedule this pipeline weekly).
Is it possible to use Terraformer to make these incremental changes while keeping both the state file and the existing Terraform manifests in sync?
Your use of Terraform for IaC seems wrong. If you deploy resources through Terraform, those resources should not be modified by external factors; if they are, you lose one of Terraform's key features, namely maintaining a state and updating the resources from it.
Terraformer is a completely different tool that is not suited to your use case. It is used to generate .tf files and state files for existing resources that were created by means other than Terraform (e.g. the console/portal).
My recommendation is to go through the basics of Terraform and IaC and restructure your pipelines/architecture.
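To make the contrast concrete: reconciling drift in resources Terraform already manages is done with Terraform itself, not Terraformer. A hedged runbook sketch to adapt rather than run as-is; the resource address and Azure resource ID below are hypothetical:

```shell
# Pull portal/pipeline drift into the existing state without changing
# real infrastructure (available in Terraform >= 0.15.4):
terraform plan -refresh-only    # review what drifted
terraform apply -refresh-only   # accept the drift into the state file

# For a resource created entirely outside Terraform (e.g. an API added by
# a release pipeline), first write a matching resource block in the
# manifests, then import it into the state (address and ID hypothetical):
terraform import azurerm_api_management_api.example \
  "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.ApiManagement/service/<apim-name>/apis/example"
```

Note that `-refresh-only` only absorbs changes to resources already in the state; resources created outside Terraform still need an explicit `terraform import`, or, better, should be created through Terraform in the first place.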
Below are some links that I found helpful:
https://developer.hashicorp.com/terraform/language/state
https://www.terraform-best-practices.com/examples
https://developer.hashicorp.com/terraform/language/modules
Context: We manage a central Azure Container Registry holding around 350 repositories, each with a good number of images. Because of the Log4j issue, we are trying to ask every image owner to take care of their repositories. However, there is no owner name associated with the images/repositories, so we cannot find out who the owners are or contact them.
I am trying to find a way to set an owner name on images in Azure Container Registry, so that I can extract it and send communications to the owners.
I tested this in my environment, where multiple images had been built by many users in an ACR, and ACR does not record who created an image.
As a workaround, there are two ways you can track who created an image in ACR:
You can look at the activity log to see the last user who pushed an image to a particular repository. Keep in mind that activity logs are kept for 90 days by default.
Alternatively, introduce a process in your team whereby images are created with a tag containing the creator's name or a specific ID.
For more information, see: https://azure.microsoft.com/en-in/blog/azure-container-registry-preview-of-diagnostics-and-audit-logs/
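If you control the build pipelines, one more option is to bake the owner into each image as an OCI label at build time, since labels travel with the image and can be read back later. A sketch with placeholder names throughout:

```shell
# Placeholders: adjust registry, repository, tag, and owner to your setup.
OWNER="alice@example.com"
IMAGE="myregistry.azurecr.io/app1:dev0.1"

# Bake the owner into the image metadata using the standard OCI label.
docker build --label "org.opencontainers.image.authors=${OWNER}" -t "${IMAGE}" .
docker push "${IMAGE}"

# Later, after pulling the image, the owner can be read back:
docker inspect \
  --format '{{ index .Config.Labels "org.opencontainers.image.authors" }}' \
  "${IMAGE}"
```

Unlike a naming convention in the tag, a label cannot be forgotten when retagging; the trade-off is that you have to pull the image (or fetch its config) to read it.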
I have three questions that I couldn't find a clear answer to in the documentation I visited.
1- Suppose I deploy a VM scale set with auto-scaling, and a VM is removed when the set scales in (according to the configured policy); later, when utilization returns to normal, the set scales out again. What happens to the data generated by that VM (e.g. logs) if I am using managed storage? The aim here is to persist important data such as application logs.
2- From what I understood from the documentation, to update your app code (e.g. from Git) on all the scale set nodes you either need the help of an automation tool (e.g. Ansible), or you need to update the custom image and redeploy it to the scale set. Is there a more centralized way that I missed?
3- Is there a way to add an existing VM to a new scale set other than converting it to a base image?
Thanks in advance.
A1. If you do not set up persistent storage, a scale-in will simply delete the instance without persisting any data stored on the VM.
A2. There is no other way to update your code; the best approach is to change your VM image, or to use external storage to hold your code and mount that storage into the VMSS, for example an Azure file share.
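As a sketch of the Azure file share approach from A2 (storage account name, share name, and key are all placeholders): every instance mounts the same SMB share, so updated code is visible to all nodes at once.

```shell
# Placeholders: substitute your storage account, share name, and key.
STORAGE_ACCOUNT="mystorageacct"
SHARE_NAME="appcode"
STORAGE_KEY="<storage-account-key>"

# Mount the Azure file share over SMB 3.0. In a VMSS this would typically
# run from cloud-init or a Custom Script Extension so that every new
# instance created by a scale-out mounts it automatically.
sudo mkdir -p /mnt/appcode
sudo mount -t cifs "//${STORAGE_ACCOUNT}.file.core.windows.net/${SHARE_NAME}" /mnt/appcode \
  -o "vers=3.0,username=${STORAGE_ACCOUNT},password=${STORAGE_KEY},serverino"
```

Because the share lives outside the scale set, it also addresses A1's concern about data written by instances that are later removed by a scale-in.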
A3. No, you cannot add an existing VM to a VMSS.
As part of our release pipeline we deploy to Azure blob storage (static websites). Every time the release pipeline runs, it overwrites the contents of the blob store with the newly built artifact, and we see the latest changes.
For debugging and internal testing, we have a requirement that each deployment, instead of overwriting the existing contents of the blob store, creates a new version.
So if a dev checks in their changes to master and a new artifact is generated, it gets deployed to https://abc.z22.web.core.windows.net/1. The next time a new change is checked in to master, it creates a new version at - https://abc.z22.web.core.windows.net/2.
Blob storage does have versioning, which was added recently, but you have to go into the blob store manually and mark a version as current.
Is there a way to achieve this? Is there any other Azure offering that can help?
OK. It looks like you want all versions to be active and available at different URLs. I don't think that is possible with Azure Web Apps either. Potentially you could spin up a new container on each code push and run it on a different port, but you would have to build logic to limit the number of containers, since you cannot grow infinitely; it is a rather unusual requirement. Alternatively, you could use deployment slots in a Web App to serve multiple versions at the same time, but the number of slots is limited by the tier you choose.
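Staying with the original blob-store setup may be simpler: the release pipeline can upload each build under a per-build folder inside the `$web` container instead of overwriting the root. A sketch, where the account name comes from the question's URL, while the `./dist` source path and the use of the build ID are assumptions; the function only prints the command:

```shell
# Print the upload command for a given build number. In Azure DevOps the
# number would come from $(Build.BuildId); source path ./dist and account
# name "abc" are placeholders.
gen_upload_cmd() {
  build_id=$1
  printf "az storage blob upload-batch --account-name abc --destination '\$web' --destination-path %s --source ./dist\n" "$build_id"
}

gen_upload_cmd 2
```

Build N should then be reachable at https://abc.z22.web.core.windows.net/N/, assuming each build's output contains its own index.html, since the static-website index document also applies to subdirectories.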