When should I repackage my Azure compute role?

When setting up an Azure Web / Worker Role for the first time I need to 'Package' the project and upload it via the Azure portal. After doing this I can 'Publish' the application from Visual Studio.
Under which circumstances do I need to 'Package' the project again and update it via the Azure portal?
In other words - which changes require the project to be re-packaged?
Note: I need to 'Package' the project in order to upload it via the Azure portal. When I create a Compute Role in Azure, I must upload a package in order to make the Compute Role operational.
From Azure portal:
You have nothing deployed to the production environment.
UPLOAD A NEW PRODUCTION DEPLOYMENT.

The Cloud Service package contains the role definitions, configuration settings, runtime bits, and other static content bundled with your app. Visual Studio (or PowerShell) creates an encrypted package (actually a zip file you can look into when building for the emulator) for upload to the named slot you created via the portal.
In the future, there are certain things you can do without rebuilding the package, such as changing instance count and other configuration settings. Also: If you move your static content (such as your CSS, images, etc.) to blob storage, you can then update those directly without ever needing to recreate / redeploy the package (you may need to send some type of signal to your running app, to reload some resources, but that's going to be app-specific). If you have specific exe's or MSI's that get installed as part of your startup scripts, you can move these to blob storage as well, since they can easily be downloaded as your role startup code executes (and this cuts down on package size).
If you change anything defined exclusively in the service definition file (e.g. if you add a role or change a role size), you will have to repackage/redeploy (but you can deploy as an update, which won't take your service down [assuming you have 2 or more instances] or replace your assigned IP address).
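To make the split concrete: an instance count or other configuration-only change can be pushed without touching the package at all, while a definition change means building a new .cspkg. A rough sketch of the configuration-only case using the classic (ASM) Azure PowerShell cmdlets - the service name, role name, and file path are placeholders:

    # Change the instance count of a role in the running deployment (no repackage)
    Set-AzureRole -ServiceName "mycloudservice" -Slot "Production" `
        -RoleName "MyWebRole" -Count 4

    # Push an updated .cscfg to the running deployment (no repackage)
    Set-AzureDeployment -Config -ServiceName "mycloudservice" -Slot "Production" `
        -Configuration ".\ServiceConfiguration.Cloud.cscfg"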

I don't think you have to package your project manually the first time; you can publish your Azure project the first time as well. I'm not sure what is preventing you from publishing - could you explain a bit more?
In fact, publishing is very similar to packaging: Visual Studio just packages the project and uploads it to Azure on your behalf.

Related

I don't have an 'azure-webjobs-hosts' container in Azure Storage Explorer, therefore my Azure Function App will not run

I've pulled in our development project and other developers have no issue.
I am running the Azure Storage Emulator.
In my case, when I run it, I get a 404 error saying it can't find the container.
Drilling into it, the container is 'azure-webjobs-hosts', and googling this shows it to be a standard container name that stores WebJob information.
I cannot find how this is first created though, and the code I have pulled in, which is based on a default new project, does not appear to create it.
I would like to know how the 'azure-webjobs-hosts' container is usually created, as I can't find anything online. Perhaps I need to install some kind of tool, library or SDK?
I would assume it is supposed to be created automatically if it is missing, but it would appear that I've missed a step somehow.
If I manually create the container, it then complains about a missing blob, and rather than try to patch this together myself I thought it would be better if I found out the root issue and resolved it.
Any suggestions?
As per my experience,
Perhaps I need to install some kind of tool, library or SDK?
The Azure WebJobs SDK is a NuGet package and can be installed through the commands given on the official site.
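For example, from the NuGet Package Manager Console in Visual Studio (or any shell with the .NET CLI) the base package can be added to a project - extension packages for specific triggers are separate:

    # NuGet Package Manager Console (PowerShell) inside Visual Studio
    Install-Package Microsoft.Azure.WebJobs

    # Or, from the .NET CLI
    dotnet add package Microsoft.Azure.WebJobs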
I would assume it is supposed to be created automatically if it is missing, but it would appear that I've missed a step somehow.
Yes, it is created/added automatically: the package reference (and namespace) is added when the Azure Functions project is created, because this package is tied to a storage account and needs it to store the data it processes in the background. You can get more info about its usage from the official GitHub article on Azure WebJobs SDK integration.
I would like to know how the 'azure-webjobs-hosts' container is usually created, as I can't find anything online.
In Local System:
azure-webjobs-hosts is a container/folder created in the local blob storage as soon as any application that uses the storage account moves to the running state.
A folder named locks is created inside the azure-webjobs-hosts container/folder.
A timers folder is also created during a Timer Trigger function run, and log files are created as block blobs inside both the locks and timers folders.
A few more folders are created in the blob container automatically, depending on the type of trigger/application integrated with the Azure WebJobs SDK. Those folders are part of that local storage account; they can be deleted manually and are recreated when the application starts running.
In Azure Portal (cloud):
When you create an Azure Function App in the portal, a storage account is required. After creation, the Functions will be in the running state, so containers such as azure-webjobs-hosts and azure-webjobs-secrets are created to store data such as the host.json file (which contains authorization keys), available in azure-webjobs-secrets.
You can also host multiple function apps on the same storage account, in which case a folder named after each Function App is created inside the containers to hold the logs related to that specific application.
After publishing a local function project (.NET 6) with HTTP and Timer triggers from the VS 2022 IDE to an Azure Function App, the corresponding folders are created in the associated storage account containers.
These are the roles of azure-webjobs-hosts and the Azure WebJobs SDK in an Azure Function App; more information on their usage can be found in the references mentioned above.
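If you want to check what the host has actually created, you can list the containers and blobs yourself. A small sketch with the Az.Storage PowerShell module against the local storage emulator (swap the connection string for your real account to check the cloud side):

    Import-Module Az.Storage

    # Context for the local storage emulator / Azurite
    $ctx = New-AzStorageContext -ConnectionString "UseDevelopmentStorage=true"

    # azure-webjobs-hosts (and azure-webjobs-secrets in Azure) should show up
    # once a Functions/WebJobs host has run against this account
    Get-AzStorageContainer -Context $ctx | Select-Object Name

    # Peek at what the host wrote under azure-webjobs-hosts (locks, timers, ...)
    Get-AzStorageBlob -Container "azure-webjobs-hosts" -Context $ctx |
        Select-Object Name, BlobType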
I've solved this problem and it was very simple. I had break on all exceptions turned on.
For some reason they decided that even though having no container is expected the first time, they would throw an exception. This exception is rethrown a few times, and eventually handled by something that creates the container.
IMO this is bad design: given that the program expects there to be no container the first time you run it and will create it if needed, it shouldn't be an exception.
Anyway, this was the reason. Hammering F5 or setting your exception settings to default so it doesn't catch runtime errors will fix the problem.

How to update a folder in an Azure AppService upon Checkin into a VSO TFS repository

I have a website hosted as an Azure web app. It's an ASP.NET website that's in a VS solution. One folder of that website is my product's documentation, all of it static resources (HTML and images). These static resources are located in a folder in another VS solution (the actual product's solution). Both solutions are TFS-based in VSO.
As of now, I have a WebJob running in the context of the website that is basically doing a "tfs get" on the documentation folder and placing its contents into the documentation folder on the website. This is working; however, the VMs the website is running on change quite frequently, and the mechanism to create a workspace is bound to the machine, not to the disk drive. Thus, I cannot get only the changes but must always get all the content, which right now takes about 20 minutes and creates unnecessary load on the website. (This is why I'm only running this WebJob once a week.)
Now I'm looking for a better way to do this. I would like to get only the files that have changed, making this a lot faster and less CPU/drive costly.
I did not find a way to create a workspace on the web server that doesn't vanish each time the web server's VM changes. (If it were possible to somehow attach the workspace to the drive instead of to the machine name, that would solve the problem.)
I was also looking at the continuous build definition I have running for the product's solution. As part of that, I think it's possible to create a deployment where the documentation folder is copied to the app service's documentation folder. This way I could get rid of the "special" WebJob, but I'd still copy all the docs files each time. (Also, the build agent for that is running on premises, so I'd have to copy those files from on premises up to the cloud when they're actually already there inside VSO.) So basically, I don't think this option is of much use for my case.
Obviously, if I moved the static docs resources from the product's solution to the website's solution, I could simply use the automatic deployment that is available for website projects from VSO to an Azure web app. Unfortunately, for various other reasons (one of which being that the static resources are partially generated automatically from the .cs sources in the product's solution), I simply cannot move the docs folder from the product's solution to the website's solution.
So does anybody have a suggestion for a method by which I could update the documentation folder in the web app based on changes in the corresponding VSO folder?
You can upload the updated files to the Azure app service by using the Kudu API.
Simple steps:
Create a Continuous Integration build
Check the Allow Scripts to Access OAuth Token option in the Options tab
Add a PowerShell step/task to check the changes with the REST API (refer to Calling VSTS APIs with PowerShell)
Add an Azure PowerShell step/task to upload files to the app service by using the Kudu API (refer to Remove files and folders on Azure before a new deploy from VSTS), as sketched below
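For the last step, the upload itself can be a single Invoke-RestMethod call against the Kudu VFS endpoint. A rough sketch, assuming the site's deployment (publishing profile) credentials and a single changed documentation file - the site name, credentials, and paths are placeholders:

    # Kudu VFS API: PUT creates/overwrites one file under the site root
    $user = '$mysite'             # deployment user from the publish profile
    $pass = 'publish-password'    # deployment password
    $pair = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes("$($user):$($pass)"))

    $headers = @{
        Authorization = "Basic $pair"
        "If-Match"    = "*"       # overwrite regardless of the current ETag
    }

    Invoke-RestMethod -Method Put `
        -Uri "https://mysite.scm.azurewebsites.net/api/vfs/site/wwwroot/docs/index.html" `
        -Headers $headers `
        -InFile ".\docs\index.html"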
Here is what I ended up doing:
Created a ServiceHook in VSO that is wired to "Web Hooks". The hook is called upon each check-in and filtered based on the directory I want. (All of this can be done using the existing functionality in VSO.)
The hook calls an Azure "function app" (which is easy to do, because function apps have an "HttpTrigger" mechanism that fits in nicely here).
The hook passes the id of the check-in's ChangeSet to the function app.
The function app puts that id into an Azure (Storage) Queue.
This triggers an Azure WebJob which listens on that queue. That WebJob uses the ChangeSet id to get the changes from VSO and acts on the change type for each change (e.g. downloads or deletes a file), as sketched below.
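The core of that WebJob - turning a ChangeSet id into the list of changed items - is one REST call against the TFVC API. A sketch, assuming a personal access token with read access to code; the account name, changeset id, and API version are placeholders to verify against your own collection:

    $account     = "myaccount"               # {account}.visualstudio.com
    $changesetId = 12345                     # taken from the queue message
    $pat         = "personal-access-token"

    $auth = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$pat"))
    $uri  = "https://$account.visualstudio.com/DefaultCollection/_apis/tfvc/changesets/$changesetId/changes?api-version=1.0"

    $result = Invoke-RestMethod -Uri $uri -Headers @{ Authorization = "Basic $auth" }

    # Act on each change: download added/edited files, delete removed ones, etc.
    foreach ($change in $result.value) {
        Write-Output "$($change.changeType) -> $($change.item.path)"
    }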

Patching a website on Azure web roles

Sometimes in our website, which is deployed on Azure web roles, issues come up related to small bugs in JavaScript and HTML. We go to all instances of the web roles and fix these JS and HTML files on the machines.
But I was looking for some automated way of doing this: downloading the files to patch from some central location and replacing the files in all Azure web roles. I am using ASP.NET MVC for the website.
It is possible to redeploy the website with the patch in the package, but we don't want to wait for the long deployment time. Please let me know if it is possible via some internal web API which replaces the content on all Azure web roles.
There are 2 ways to deploy a new webrole:
redeploy
inplace update
The first one is the slower of the two, since new VMs are booted.
With an in-place upgrade (https://azure.microsoft.com/en-us/documentation/articles/cloud-services-update-azure-service/), the new application package is mounted on a new drive (usually F: instead of E:) and the IIS website is swapped to the new drive.
You can try this by going to the old portal and uploading a new application package. In just a few seconds/minutes the update is done.
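The same in-place upgrade can be scripted instead of going through the portal. A sketch with the classic (ASM) Azure PowerShell cmdlets - the service name, label, and paths are placeholders:

    # Walks the upgrade domains, moving each set of instances to the new
    # package without tearing the deployment down
    Set-AzureDeployment -Upgrade `
        -ServiceName "mywebrole-service" `
        -Slot "Production" `
        -Package ".\MyCloudService.cspkg" `
        -Configuration ".\ServiceConfiguration.Cloud.cscfg" `
        -Mode Auto `
        -Label "JS/HTML hotfix"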
After digging through many things on Stack Overflow, I crafted my own solution: create a topic and subscribe to it in code when the website starts. When I want to patch the web app, I send a message to the topic to start patching; each machine in the web role then gets a notification from the topic and patches itself. Patching itself is very easy: go to web storage, download the files from there, and replace the files in the approot.
Since this patching may be lost when Azure maintenance happens, I also made the patching run at website startup.
Cloud service deployments tend to be slow since the package is basically a recipe for how to build and configure your deployment. The deployment not only puts the recipe out in Azure (so it can be used again if Azure needs to move your machine), but also follows the recipe to build out a VM for your Cloud Service (WebRoles/WorkerRoles are platform as a service, so you don't have to worry about the OS and infrastructure level like you would if you were using the Virtual Machine Azure product, but they do still run in VMs on physical hardware).
What you are looking to do is something that will update the recipe (your cloud service package) and your deployment after it is out and running already ... there is no simple way to do that in Cloud Services.
However, yes, you could create a startup script that pulls the site files from blob storage or some other centralized location - this is comparable to how applications (Fiddler, for example) look for updates and then know how to update and replace themselves. For that sort of feature you will likely need to run code as an elevated user - one nice thing about startup scripts is that they can run as an elevated user, so they can do just about anything you need done on a machine (but they will require you to restart the instance for them to run). Basically you would need to write some code that allows your site to update itself. This link may help: https://azure.microsoft.com/en-us/documentation/articles/cloud-services-startup-tasks/
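As a sketch of the "pull the site files from a central location" idea, here is roughly what such an update script could look like with the classic Azure PowerShell storage cmdlets - the storage account, the "site-patches" container name, and the approot path are assumptions, not anything Azure provides by default:

    # Download everything in the patch container over the deployed site files
    $ctx = New-AzureStorageContext `
        -StorageAccountName "mystorageaccount" `
        -StorageAccountKey  "account-key-here"

    # Typical role app root; verify the correct path on the instance
    $approot = "E:\approot"

    Get-AzureStorageBlob -Container "site-patches" -Context $ctx |
        Get-AzureStorageBlobContent -Destination $approot -Context $ctx -Force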
If you have the ability to migrate to WebApps and WebJobs, I would recommend looking into that since that compute product solves your problem really well.
Here is a useful answer of the differences between WebApps and Cloud Services: What is the difference between an Azure Web Site and an Azure Web Role

Deploy the same cloud service to different VMs

I got a cloud service (worker role) which I want to deploy to a beta and a production environment.
It seems a waste to have to create three projects (one with the actual implementation and two for deployment).
Is it possible to create two deployment profiles which links to different Azure destinations but uses the same worker role project?
This is very simple to do. Just build your Azure package without deploying, and keep your dev/beta/prod settings in the Service Configuration, not embedded anywhere like web.config/app.config. Then store both the deployment package and configuration in blob storage (speeding up deployment). You'll want multiple configuration files: one for each environment, each stored separately in blob storage.
Once this is done, you can just deploy the package to multiple cloud services, each with a different configuration file. This can be done either through the portal or through PowerShell / CLI.
If you've been deploying directly from Visual Studio, it might not seem quite as obvious. But from VS, you can build a package without actually deploying.
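A sketch of that flow with msbuild and the classic (ASM) cmdlets: build the package once, then create one deployment per environment from the same .cspkg with a different .cscfg - the service names, paths, and publish output folder are placeholders:

    # Build the package without deploying (Publish target on the cloud project)
    msbuild .\MyCloudService.ccproj /t:Publish /p:Configuration=Release

    # Same package, different configuration per environment
    $pkg = ".\bin\Release\app.publish\MyCloudService.cspkg"

    New-AzureDeployment -ServiceName "myservice-beta" -Slot Production `
        -Package $pkg -Configuration ".\ServiceConfiguration.Beta.cscfg" -Label "beta"

    New-AzureDeployment -ServiceName "myservice-prod" -Slot Production `
        -Package $pkg -Configuration ".\ServiceConfiguration.Prod.cscfg" -Label "prod"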

How to back up an Azure cloud app?

I have created a cloud app with a distributed package that makes Azure download the content from a web URL (Composite C1).
Now I would like to take a backup of my application, possibly make some changes to the source code, and upload it again.
How can I get a backup of the files on the cloud app?
You aren't very clear about whether you need a backup of your application code or of the files that are being downloaded (though I think you somewhat clarify this in your follow-up comment).
When you deploy a Cloud Services app the code is packaged into the cspkg file and sent up to be deployed. Some of the deployment tools (like Visual Studio, the PowerShell Cmdlets, etc.) will use a storage account to upload the package file to BLOB storage. This means that you'd have a copy of the packages you've deployed. If you don't use a tool that uses the storage account to deploy from I highly recommend you also keep a copy of the packages you deploy just in case you need to roll back to one.
Now, if you are changing code in your application, you make the change locally, test it out, and then you can redeploy. You have several options for this. You can delete the previous deployment and redeploy the new one (which will cause downtime for your app). Another option is to do an in-place upgrade, where your deployment is updated with the new package a few machines at a time (if you are running multiple instances). The third option is a VIP swap, where you load up your new code in the staging slot and then swap the bits between staging and production. I'd suggest researching these options for deploying new code on MSDN to understand them (they all have benefits and drawbacks, and some of them can't be done depending on the changes you are making to your code).
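If you go the VIP swap route, the swap itself is a single call with the classic (ASM) cmdlets once the new build is already deployed to the Staging slot - the service name is a placeholder:

    # Staging and Production trade places almost instantly
    Move-AzureDeployment -ServiceName "mycloudapp"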
In your comment it seems that you are more interested in getting a backup of the files that are in your BLOB storage account after your application pulls them down. You have a couple of options here as well:
Manually download them using an Azure storage explorer tool like Cerebrata Cloud Storage Studio, AzureXplorer, etc. (I use the Cerebrata tool)
Create some code that pulls down the data. You can do this using any of the Client libraries (.NET, PHP, etc.), the PowerShell cmdlets, or even use a tool by a vendor like Red Gate who has Cmdlets designed for backup/restore.
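For the second option, a minimal sketch that mirrors every container in the account down to a local backup folder, using the classic Azure PowerShell storage cmdlets - the account name/key and backup path are placeholders:

    $ctx    = New-AzureStorageContext -StorageAccountName "myaccount" -StorageAccountKey "key"
    $target = "C:\backups\$(Get-Date -Format 'yyyy-MM-dd')"

    # One subfolder per container, each filled with that container's blobs
    Get-AzureStorageContainer -Context $ctx | ForEach-Object {
        $dest = Join-Path $target $_.Name
        New-Item -ItemType Directory -Path $dest -Force | Out-Null

        Get-AzureStorageBlob -Container $_.Name -Context $ctx |
            Get-AzureStorageBlobContent -Destination $dest -Context $ctx -Force
    }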
