Versioning Azure static websites

As part of our release pipeline we deploy to an Azure Storage static website (blob storage). Every time the pipeline runs, it overwrites the contents of the blob container with the newly created build artifact, so we always see the latest changes.
For debugging and internal testing we have a requirement that each deployment create a new version instead of overwriting the existing contents of the blob store.
So if a dev checks their changes in to master and a new artifact is generated, it gets deployed to https://abc.z22.web.core.windows.net/1. The next time a change is checked in to master, it creates a new version at https://abc.z22.web.core.windows.net/2.
Blob storage versioning was added recently, but it requires going into the blob store manually and marking a version as current.
Is there a way to achieve this? Is there any other Azure offering that could help?

OK, it looks like you want all the versions to be active and available on different URLs. I don't think that's possible with Azure Web Apps either. Potentially you could spin up a new container on each code push and run it on a different port, but you would have to build the logic to limit the number of containers, since you cannot grow them indefinitely. It's a rather unusual requirement. Alternatively, you can use deployment slots in a Web App to serve multiple versions at the same time, but the number of slots is limited by the tier you opt for.
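If keeping everything in the static website is acceptable, the simplest route is to have the release pipeline upload each build under a new numbered prefix in the $web container rather than the root. Below is a minimal sketch in Python with the azure-storage-blob SDK; the connection string variable, the BUILD_BUILDID version number and the dist artifact folder are assumptions standing in for whatever your pipeline actually provides.

```python
import mimetypes
import os
from azure.storage.blob import BlobServiceClient, ContentSettings

# Hypothetical pipeline-supplied values; substitute your own.
CONNECTION_STRING = os.environ["STORAGE_CONNECTION_STRING"]
VERSION = os.environ["BUILD_BUILDID"]   # e.g. "1", then "2", ...
ARTIFACT_DIR = "dist"                   # local build output

service = BlobServiceClient.from_connection_string(CONNECTION_STRING)
container = service.get_container_client("$web")   # static-website container

for root, _, files in os.walk(ARTIFACT_DIR):
    for name in files:
        local_path = os.path.join(root, name)
        relative = os.path.relpath(local_path, ARTIFACT_DIR).replace(os.sep, "/")
        blob_name = f"{VERSION}/{relative}"         # versioned prefix
        content_type, _ = mimetypes.guess_type(name)
        with open(local_path, "rb") as data:
            container.upload_blob(
                name=blob_name,
                data=data,
                overwrite=True,
                content_settings=ContentSettings(
                    content_type=content_type or "application/octet-stream"
                ),
            )
        print(f"uploaded {blob_name}")
```

Each build should then be reachable under https://abc.z22.web.core.windows.net/&lt;version&gt;/, and you would probably want a companion cleanup step that prunes old prefixes so the container does not grow without bound.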

Related

Uploading data to Azure App Service's persistent storage (%HOME%)

We have a Windows-based App Service that requires a large dataset to run (files stored in Azure Blob Storage, roughly 30 GB in total). This data is static per app version, and therefore should be accessible to all instances in a given slot (a slot, in our case, represents a version).
Based on our initial research, it seems like Persistent Storage (%HOME%) would be the ideal place for this, since data stored there is shared across instances, but not across slots.
The next step is to load the required data as part of our DevOps deployment pipeline, since the App Service cannot operate without the underlying data. However, it seems that the %HOME% directory is only accessible by the App Service itself, even though the underlying implementation uses Azure Storage.
At this point we're considering having the App Service download the data during its startup, but then we hit a snag: we have two instances. We could implement a mutex (using a blob lease), but that seems too complicated a solution for a simple need.
Any thoughts about how to best implement this?
The problems I see with loading the file on container startup are the following:
It's going to be really slow, and you might hit one of the built-in App Service timeouts.
Every time your container restarts, or you add another instance, it will re-download all the data, and it may cause issues with blocked writes because of file handle locks, which can make files or directories on %HOME% completely inaccessible for reading and modifying (this just happened to me).
Instead, I would suggest connecting the app to Azure Files over SMB and having, for example, a directory per version. That way you can connect to Azure Files and write the data during your build pipeline, and save an environment variable or file that tells each slot which directory to read the current version's data from.
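As a rough sketch of the pipeline-side upload, here is what writing a per-version directory could look like with the Python azure-storage-file-share SDK. The share name appdata, the BUILD_BUILDID version variable and the local dataset folder are hypothetical placeholders, not anything prescribed by Azure.

```python
import os
from azure.storage.fileshare import ShareClient

# Hypothetical pipeline-supplied values; substitute your own.
CONNECTION_STRING = os.environ["STORAGE_CONNECTION_STRING"]
SHARE_NAME = "appdata"                  # the share the App Service mounts over SMB
VERSION = os.environ["BUILD_BUILDID"]   # one directory per app version
DATA_DIR = "dataset"                    # local copy of the large dataset

share = ShareClient.from_connection_string(CONNECTION_STRING, share_name=SHARE_NAME)

# Create a directory named after the version and upload the files into it.
version_dir = share.get_directory_client(VERSION)
version_dir.create_directory()

for name in os.listdir(DATA_DIR):
    local_path = os.path.join(DATA_DIR, name)
    if os.path.isfile(local_path):
        with open(local_path, "rb") as fh:
            version_dir.upload_file(file_name=name, data=fh)
        print(f"uploaded {VERSION}/{name}")
```

A slot-level app setting (for example DATA_VERSION, again a made-up name) can then point each slot at the directory it should read, so swapping slots never touches the data itself.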

Application code update on Azure Virtual Machine Scale Set (VMSS)

Currently we are hosting five websites on a Linux VM. The websites reside in separate directories and are served by Nginx. SSL is terminated at an Azure Application Gateway, which sends the traffic to the VM. If a file is updated in a remote repository, the local copy is updated by a cron task, which is a simple Bash script running git pull plus a few additional lines. Not all five websites need to be updated at the same time.
We created an image of the VM and provisioned a VMSS from it.
What would be the easiest or most standard way of deploying the code to the VMSS? The code also needs some manual changes each time due to client requirements.
Have a look into Azure Durable Functions as an active scripted deployment manager.
You can configure your Durable Function to be triggered on a cron schedule; it can then orchestrate a series of tasks, monitoring the responses from the deployment targets for an acceptable result before continuing with each step, or even waiting for user input to proceed.
By authoring your workflow in C#, JavaScript, Python, or PowerShell, you are only limited by your ability to turn your manual process into a scripted one.
Azure Functions is just one option of many; it really comes down to the complexity of your workflow and of the individual tasks. Octopus Deploy is a common product used to automate Azure application deployments and may have templates that match your current process. I go straight to Durable Functions when I find it too hard to configure complex steps that involve waiting for specific responses from targets before proceeding to the next step, or when I want to use C# to evaluate those responses or reuse some of my application logic as part of the workflow.
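As a rough illustration of that orchestration idea (in Python here, though any of the supported languages works the same way), a Durable Functions orchestrator for a site-by-site rollout might look like the sketch below. The site list and the PullRepo and CheckHealth activity names are hypothetical placeholders for your own deployment and verification steps.

```python
import azure.durable_functions as df

def orchestrator_function(context: df.DurableOrchestrationContext):
    # Hypothetical list of the five sites hosted on the scale set.
    sites = ["site1", "site2", "site3", "site4", "site5"]
    completed = []
    for site in sites:
        # Activity that runs the scripted deployment for one site
        # (e.g. git pull plus whatever manual tweaks you have scripted).
        yield context.call_activity("PullRepo", site)

        # Activity that checks the site responds acceptably before moving on.
        healthy = yield context.call_activity("CheckHealth", site)
        if not healthy:
            # Stop the rollout and report which site failed.
            return {"status": "failed", "site": site, "completed": completed}
        completed.append(site)
    return {"status": "succeeded", "completed": completed}

main = df.Orchestrator.create(orchestrator_function)
```

Each activity function would run against the VMSS instances (for example via the Run Command API or an SSH task), which is exactly the part you would tailor to your current Bash script.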

Push notification from azure blobstore to arbitrary number of webapps

I use data stored in a blob as configuration for some Azure web apps, and I'd like to react to changes to it in near real time. Currently I just set a timed event and periodically check whether the etag of the blob has changed; if it has, I download the new blob.
This is OK, but I don't want to poll the blob too often, and I also want to be reactive. The devs changing the values in the blob want to be able to test the new values quickly.
The web app scales up and down, and each instance of the web app needs to download the config file. So, as far as I can tell, I can't just use the event system that Azure Storage has, as that would only send a notification to one instance.
Is there a recommended way to do this?
As I understand it, you want to manage the configuration of your Azure web apps centrally: once some configuration value has changed, your app services should reload their configuration automatically. Azure App Configuration provides exactly this kind of functionality.
You can also configure in code the condition under which all settings are reloaded. There is a .NET Core sample here, and you can find other samples under Enable dynamic configuration in the documentation.
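For instances that are not on .NET, a lightweight equivalent of that pattern is to watch a single sentinel key in App Configuration and only re-download the full blob when the sentinel's etag changes. A sketch with the Python azure-appconfiguration SDK follows; the key name config:sentinel and the 30-second interval are just assumed conventions.

```python
import os
import time
from azure.appconfiguration import AzureAppConfigurationClient

client = AzureAppConfigurationClient.from_connection_string(
    os.environ["APP_CONFIG_CONNECTION_STRING"]
)

SENTINEL_KEY = "config:sentinel"   # hypothetical key the devs bump on every change
last_etag = None

while True:
    setting = client.get_configuration_setting(key=SENTINEL_KEY)
    if setting.etag != last_etag:
        last_etag = setting.etag
        # The sentinel changed: reload the full configuration blob here.
        print(f"configuration changed (sentinel={setting.value}), reloading")
    time.sleep(30)   # every instance polls one tiny key instead of the whole blob
```

Every scaled-out instance runs this loop independently, so each one picks up the change without any fan-out of storage events.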

Azure LogicApp for migration of millions of files

I have the following requirements, for which I am considering Azure Logic Apps:
Files placed in Azure Blob Storage must be migrated to a custom destination (which can differ from case to case)
The number of files is around 1,000,000
When the process is over, we should have a report saying how many records (files) failed
If the process stops somewhere in the middle, the next run must pick up only the files that have not yet been migrated
The process must be as fast as possible, and the files must be migrated within N hours
What worries me is that I cannot find any examples or articles (including the official Azure documentation) where the same thing is achieved with Azure Logic Apps.
I have some ideas about my requirements and Azure Logic Apps:
I think I must use pagination to deal with this number of files, because a Logic App will not be able to read millions of file names in one go - https://learn.microsoft.com/en-us/azure/logic-apps/logic-apps-exceed-default-page-size-with-pagination
I can add a record to Azure Table Storage to track failed migrations (for example, create a record saying the process has started for a file and update it when the file has been moved to the destination); a sketch of this tracking idea follows below
I have no idea how to restart the Logic App without a custom tracking mechanism (for instance, the same Azure Table Storage instance)
And the question of splitting the work across several units is still open
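Here is a minimal sketch of that Table Storage tracking idea, using the Python azure-data-tables SDK; the table name MigrationStatus and the partition key used as a batch identifier are hypothetical choices, not anything the service requires.

```python
import os
from azure.data.tables import TableServiceClient

service = TableServiceClient.from_connection_string(
    os.environ["STORAGE_CONNECTION_STRING"]
)
# Hypothetical tracking table; created on first use.
table = service.create_table_if_not_exists("MigrationStatus")

BATCH = "migration-batch-1"   # hypothetical identifier for this migration run

def _row_key(blob_name: str) -> str:
    # RowKey may not contain '/', so encode the blob path.
    return blob_name.replace("/", "|")

def mark_started(blob_name: str) -> None:
    table.upsert_entity({"PartitionKey": BATCH, "RowKey": _row_key(blob_name),
                         "Status": "started"})

def mark_finished(blob_name: str, succeeded: bool) -> None:
    table.upsert_entity({"PartitionKey": BATCH, "RowKey": _row_key(blob_name),
                         "Status": "done" if succeeded else "failed"})

def already_migrated(blob_name: str) -> bool:
    # On a restart, skip anything already marked done.
    row_key = _row_key(blob_name)
    entities = table.query_entities(
        f"PartitionKey eq '{BATCH}' and RowKey eq '{row_key}'"
    )
    return any(e["Status"] == "done" for e in entities)
```

The failure report then becomes a single query for Status eq 'failed' within the batch's partition, and a restart simply skips everything for which already_migrated returns True.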
Do you think Azure Logic Apps is the right choice for my needs, or should I consider something else? If Logic Apps can work for me, could you please share your thoughts and ideas on how I can meet the given requirements?
I don't think a Logic App is a good fit for this requirement, because at roughly 1,000,000 files the volume is simply too large. For this requirement I suggest you use Azure Data Factory.
To migrate data from Azure Blob Storage with Data Factory, you can refer to this document.
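If you do go the Data Factory route, the copy pipeline itself is authored in the Data Factory studio or via ARM templates, and the migration run can then be started and monitored from code. A sketch with the Python azure-mgmt-datafactory SDK follows; the subscription, resource group, factory and pipeline names are all hypothetical.

```python
import time
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

SUBSCRIPTION_ID = "<subscription-id>"   # left as a placeholder
RESOURCE_GROUP = "my-rg"                # hypothetical resource group
FACTORY_NAME = "my-data-factory"        # hypothetical Data Factory
PIPELINE_NAME = "MigrateBlobs"          # hypothetical copy pipeline authored in ADF

adf = DataFactoryManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Start a run of the copy pipeline.
run = adf.pipelines.create_run(RESOURCE_GROUP, FACTORY_NAME, PIPELINE_NAME)
print(f"started run {run.run_id}")

# Poll until the run finishes, then report the outcome for the migration report.
while True:
    status = adf.pipeline_runs.get(RESOURCE_GROUP, FACTORY_NAME, run.run_id)
    if status.status in ("Succeeded", "Failed", "Cancelled"):
        print(f"run {run.run_id} finished with status {status.status}")
        break
    time.sleep(60)
```

Data Factory's monitoring also records per-activity copy metrics (such as files read and written), which helps cover the reporting requirement without the Table Storage bookkeeping described above.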

Difference between update and create&delete deployment option for Azure Cloud Service

What is the difference between updating a deployment and deleting and then creating a new deployment for a cloud service?
We have a cloud service set up that, during deployment, first deletes the existing deployment in staging and then creates a new one. Because of this, the VIP for staging keeps changing. We have a requirement that both the production and staging VIPs always remain the same.
Before changing the deployment option I would like to know what the real difference is and why these two options exist.
I tried to search but found nothing on this.
EDIT: In the Azure publish XML there is a node named 'AzureDeploymentReplacementMethod', and the possible values for this field are 'createanddelete', 'automaticupgrade' and 'blastupgrade'.
Right now we are using 'createanddelete', and we are interested in using 'blastupgrade'.
Any help would be much appreciated.
Thanks,
Javed
When you use Create&Delete deployment, the process simply deletes the existing deployment and then creates a new one.
The other two options perform an upgrade of the deployment. The difference between automaticupgrade and blastupgrade lies in the value of the Mode element of the Upgrade Deployment operation. As the names suggest, automaticupgrade sends Auto for that element, while blastupgrade sends Simultaneous. As per the documentation:
Mode: Required. Specifies the type of update to initiate. Role instances are allocated to update domains when the service is deployed. Updates can be initiated manually in each update domain or initiated automatically in all update domains. Possible values are:
Auto
Manual
Simultaneous
If not specified, the default value is Auto. If set to Manual, WalkUpgradeDomain must be called to apply the update. If set to Auto, the update is automatically applied to each update domain in sequence. The Simultaneous setting is only available in version 2012-12-01 or higher.
You can read more on Update Cloud Service here.
However, if you really want to keep the same VIP in all situations, I would suggest that you:
do not use staging for the cloud services at all; instead use two separate cloud services (one for production and one for staging)
use the Reserved IP Address feature of the Azure platform
