Updating virtual machine scale set with a new image - Azure

I have been working on this straight for the past two days and have read all the documentation I can find.
I have a scale set (Windows machines that will host an ASP.NET Core app). Here's what the setup looks like.
I have a pipeline that gets the code and publishes the artifact (this is working as expected).
Now I am trying to use that artifact to build a new image and push it to the scale set.
Here are screenshots of what the release pipeline looks like:
[Screenshot: pipeline tasks]
[Screenshot: scale set release error]
So the first step builds an image using packer and the second step is supposed to push that image to the scale sets.
The error states: "Error: VMSS myapp can not be updated as it uses a platform image. Only a VMSS which is currently using a custom image can be updated."
I have tried this release pipeline with two different scale sets: one using the default Windows 2016 platform image, and one using a custom image that had been built. In both cases I run into the same error.
I have spent a LOT of hours trying to figure this out to no avail, and I have asked around in all the communities I am part of. Is anyone out there aware of how to update these damned scale sets with a new image of your application?
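For reference, the "push to scale set" step corresponds roughly to the Azure CLI call below. This is a sketch with hypothetical resource and image names; the echo prefix just prints the command so the script runs without the Azure CLI installed (drop it to run for real). As the error message says, this update only succeeds if the scale set was originally created from a custom image rather than a platform image.

```shell
#!/bin/sh
# Hypothetical names -- substitute your own resource group, VMSS, and image.
RG="my-rg"
VMSS="myapp"
IMAGE_ID="/subscriptions/<sub-id>/resourceGroups/$RG/providers/Microsoft.Compute/images/myapp-image-v2"

# Point the scale set's model at the new custom image. Per the error,
# this fails if the VMSS currently uses a platform image.
echo az vmss update \
  --resource-group "$RG" \
  --name "$VMSS" \
  --set "virtualMachineProfile.storageProfile.imageReference.id=$IMAGE_ID"
```

New instances (or manually upgraded existing ones) then come up from the new image.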

Related

Display build number in page footer, application deployed using Azure DevOps

I have deployed a website with a React front end and a Node back end to an Azure VM using Azure DevOps, and it's working perfectly. Now I want to display the build number and release number in the page footer of the website. Can anyone help? I'm quite new to Azure DevOps, so I have not been able to find a solution.
Edit:
As suggested, I have added the variables to my CI pipeline, and in my code I have tried to add them to the footer, but the output is not what I expected. I can't see where I'm going wrong. In the code I used an Azure DevOps link to get the build number value, but I'm not sure how to obtain that link; I followed a blog post on this. Can someone help?
In the build pipeline, you can use the predefined variable Build.BuildNumber to get the build number of the current build.
In the release pipeline, you can use the predefined variable Release.ReleaseName to get the release name (release number) of the current release. Both are covered in the Azure DevOps documentation on predefined variables.
In the pipelines, you can read these predefined variables and write their values into the code of your web app to set the page footer.
One approach is to develop a small script that writes the build number and release name into the footer source, and call that script from the build pipeline and release pipeline.
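A minimal sketch of such a script, assuming the footer source contains a literal placeholder like __BUILD_NUMBER__ (the file name footer.js and the placeholder are assumptions for illustration; in a pipeline step, Build.BuildNumber is exposed as the BUILD_BUILDNUMBER environment variable):

```shell
#!/bin/sh
# Fall back to a dummy value so the sketch also runs outside a pipeline.
BUILD_BUILDNUMBER="${BUILD_BUILDNUMBER:-20240101.1}"

# Create a sample footer source for illustration; in a real pipeline the
# file already exists in your checked-out repo.
printf 'export const footerText = "Build __BUILD_NUMBER__";\n' > footer.js

# Replace the placeholder with the actual build number before the app
# is bundled and deployed.
sed -i "s/__BUILD_NUMBER__/$BUILD_BUILDNUMBER/" footer.js

cat footer.js
```

The same pattern works for Release.ReleaseName (exposed as RELEASE_RELEASENAME) in a release pipeline step.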

IBM Cloud Code Engine: How can I deploy an app from GitLab source without CLI

I created a project and saved it in GitLab.
I tried to install the IBM Cloud CLI on my Windows 10 system and failed, even when running it as administrator as mentioned in the CLI docs.
Now I want to deploy this project from source code, without the CLI, and I could not find any docs about it.
I have read that I should add a Dockerfile to my project, but I know nothing about Dockerfiles.
Please help me with these two things:
Deploy my project from source code (GitLab connected to IBM Cloud Code Engine).
Deploy my project using the CLI on the Windows 10 system.
I just did the same thing as part 1 of your question yesterday. As a prerequisite, you will need a container registry to put things into, such as a free account on Docker Hub.
Start on the Code Engine console.
Choose Start with Source Code, and paste in your own GitLab URL. (The default is a sample repo which may be useful, especially https://github.com/IBM/hello.)
On the next page, you can accept most of the defaults but you will need to create a project. That's free, but it needs a credit card so you can use a Pay As You Go account.
You'll also need to Specify Build Details
Here you tell it about your source repo and where your Dockerfile lives. If you don't have a Dockerfile, you can probably find a sample one for your chosen runtime (C#? Node.js? Java?), or you can try the Cloud Native buildpack, which will try to work out how to run your code by inspecting the files you have.
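As an illustration, a minimal Dockerfile for a Node.js app might look like the sketch below (the base image, port, and server.js entry point are assumptions; adjust them to your project):

```dockerfile
# Small base image with Node.js preinstalled.
FROM node:18-alpine
WORKDIR /app

# Install dependencies first so this layer is cached between builds.
COPY package*.json ./
RUN npm ci --omit=dev

# Copy the rest of the source and start the app.
COPY . .
EXPOSE 8080
CMD ["node", "server.js"]
```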
Finally, on the last page of the build details, you will need to tell it where your container registry lives. This is a repository used to store the built images. If you set up a personal account on docker hub, you can just enter the credentials.
Once you've done that, choose Done in the sidebar, and then Create.
You should get a page which shows your image building, and once the build is done, a link in the top right takes you to your app's web page.
If you get stuck, there's a good set of documentation.

Missing images in gke private images registry from gitlab ci/cd build

The GKE private image registry is missing images. No changes have been made to the environment; this process was working fine until about two weeks ago. Here's the process.
(This environment was handed to me; it is my first time with the CI/CD process, and I am a newbie in the GKE environment as well.)
I have a GitLab pipeline that builds and deploys my app to a GKE dev environment when triggered. There are no errors reported in this process, and it completes on gitlab.com in 4-5 minutes.
The issue is that many of the images in the Google private registry are no longer there; the current version is gone. The pod is trying to pull that image and failing with the ImagePullBackOff error, which makes sense given the missing images. (Most of them have disappeared: over 40 past versions are no longer in the registry, though some older images are still there.)
First, I cannot tell how the images, from the CI/CD process, get placed into the private registry. There is only a reference to pull the registry.gitlab.com and no corresponding push to eu.gcr.io references at all (in the ci/cd files) which is the location of the gke image registry.
There are 3 files related to the ci/cd process:
gitlab-ci.yaml
kube-init.sh
migration.sh
All the secrets are in place and none have been changed. It seems there is a missing piece that moves/saves the images to the private Google image registry...where would that be defined?
I can post the files in this process but since there are no errors there, I am not sure that would help. (Let me know if they are needed.)
Thanks in advance...I can't wait to get a DevOps engineer:)
-glen
As a summary of the conclusion reached in the comments:
The images are hosted on GitLab's registry (registry.gitlab.com) and are never pushed to the GKE registry (eu.gcr.io).
The issue the OP had was with the token created for the pipeline from Google Cloud Platform to GitLab: it was linked to a previous account that is no longer associated with the project. A new token was issued, and the images can now be pulled from GitLab.
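In setups like this, the cluster typically authenticates to registry.gitlab.com through an image pull secret referenced by the deployment. A sketch of that wiring, with hypothetical names throughout (the gitlab-registry secret would be created from a GitLab deploy token with kubectl create secret docker-registry):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp               # hypothetical
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      # Secret holding the GitLab deploy-token credentials.
      imagePullSecrets:
        - name: gitlab-registry
      containers:
        - name: myapp
          # Pulled directly from GitLab, not from eu.gcr.io.
          image: registry.gitlab.com/mygroup/myapp:latest
```

If that token expires or its account is removed, pulls start failing with ImagePullBackOff exactly as described.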

Manage Docker Images in Azure App Services for Staging and Production

I am setting up two Django web apps on Azure using App Services - one for staging and one for production.
I have dockerized my Django app and stored the image in Azure Container Registry.
Now, I have noticed that when I create the web app, it asks for the specific tag from the registry repo that I want to use, and I can't seem to change that tag after the web app is created.
My plan is to tag the images with their versions (e.g. :090920201), promote a version (tag) first to staging, test it there, and if it works as expected, promote the same version (tag) to prod.
For now, I am tagging the images as :staging for the staging environment and :prod for the production environment. When I am happy with a specific version locally, I push the image with the respective tag.
Now the problem is that, since I cannot change the tag of the registry repo after the app is created, I have to push the same image twice: once with the :staging tag, and if that image works as expected, again with the :prod tag.
This could work for the time being, until the dev and staging are in sync.
So what's the problem?
Is there a way to change the tag of the image after the web app was created in Azure?
Is there a way to use a consistent tag (let's say :latest) but only deploy to staging first, test there and then promote it to production?
If we completely ignore what I am doing, how else do you suggest I utilize the same image and manage the promotion of the image first on the staging, and then to the production environment?
Apologies for the world tour for a straightforward question.
Not sure if you have looked at registry best practices, but you can't retag an image after it's been deployed.
You can consider including the slot in the image namespace, e.g. <registry-name>.azurecr.io/<app-svc-slot>/<image-name>:<version>. If you're going to use the latest moniker, tag the build twice: once with the actual release version and once with latest. You can then push both tags to your registry, or just the one tagged latest, as explained in https://learn.microsoft.com/en-us/azure/container-registry/container-registry-image-tag-version.
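A sketch of that dual-tag flow, with hypothetical registry and image names; the echo prefix only prints the docker commands so the script can run without Docker installed (drop it to execute for real):

```shell
#!/bin/sh
# Hypothetical names -- replace with your own registry, app, and version.
REGISTRY="myregistry.azurecr.io"
APP="django-app"
VERSION="2020.09.09-1"
IMAGE="$REGISTRY/$APP"

# Build once, then attach both tags to the same image ID, so the bits
# promoted to production are exactly the bits tested in staging.
echo docker build -t "$IMAGE:$VERSION" .
echo docker tag "$IMAGE:$VERSION" "$IMAGE:latest"

# Push both tags (or only :latest, if that is all the web app tracks).
echo docker push "$IMAGE:$VERSION"
echo docker push "$IMAGE:latest"
```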

Self hosted azure agent - how to configure pipelines to share the same build folder

We have a self-hosted build agent on an on-prem server.
We typically have a large codebase, and in the past followed this mechanism with TFS2013 build agents:
Daily check-ins were built to c:\work\tfs\ (taking about 5 minutes)
Each night a batch file would run the same build to those folders, using the same sources (they were already 'latest' from the CI build), build the installers, copy files to a network location, and send an email to the team detailing the build successes/failures. (Taking about 40 minutes.)
The key thing there is that for the nightly build there would be no need to get the latest sources, and the disk space required wouldn't grow much. Just by the installer sizes.
To replicate this with Azure Devops, I created two pipelines.
One pipeline that did the CI using MSBuild tasks in the classic editor- works great
Another pipeline in the classic editor that runs our existing powershell script, scheduled at 9pm - works great
However, even though my agent doesn't support parallel builds, what's happening is that:
The CI pipeline's folder is c:\work\1\
The Nightly build folder is c:\work\2\
This doubles the amount of disk space we need (10gb to 20gb)
They are the same code files, just built differently.
I have struggled to find a way to tell the agent "please use the same sources folder for all pipelines".
What setting controls this? Otherwise we have to pay our service provider for extra GB of storage.
Or do I need to change my classic pipelines into Yaml and somehow conditionally branch the build so it knows it's being scheduled and do something different?
Or maybe, stop using a Pipeline for the scheduled build, and use task scheduler in Windows as before?
(I did try looking for the same question - I'm sure I can't be the only one).
There is a "workingDirectory" directive available for running scripts in a pipeline; see https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/utility/command-line?view=azure-devops&tabs=yaml for details.
The numbers '1', '2', ... '6' in the work folders c:\work\1\, c:\work\2\, ... c:\work\6\ on your build agent each stand for a particular pipeline.
Agent.BuildDirectory
The local path on the agent where all folders for a given build
pipeline are created. This variable has the same value as
Pipeline.Workspace. For example: /home/vsts/work/1
If you have two pipelines, there will also be two corresponding work folders. This is expected behavior; pipelines cannot be configured to share the same build folder. It is by design.
If you need to use less disk space to save cost, I'm afraid that stopping using a pipeline for the scheduled build and going back to the Windows Task Scheduler, as before, is the better way.
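That said, as the question itself suggests, a single YAML pipeline can serve both purposes and therefore reuse one work folder: give it both a CI trigger and a nightly schedule, and gate the extra steps on Build.Reason. A sketch, where build-installers.ps1 stands in for the existing nightly PowerShell script:

```yaml
trigger:
  branches:
    include: [ main ]

schedules:
  - cron: "0 21 * * *"      # cron times are UTC in Azure DevOps
    displayName: Nightly installer build
    branches:
      include: [ main ]
    always: true            # run even if nothing changed since the CI build

steps:
  - script: echo Running the normal CI build here
    displayName: CI build

  # Extra nightly work, skipped for ordinary CI-triggered runs.
  - powershell: .\build-installers.ps1
    displayName: Build installers
    condition: eq(variables['Build.Reason'], 'Schedule')
```

Because it is one pipeline, the agent keeps a single numbered work folder for it.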