Are virtual machine extensions re-provisioned for an existing VM? - azure

We have a VM ARM template which runs a custom extension when we provision it. The extension runs fine and installs the framework correctly.
Looking at the verbose log, it seems the extension runs every time we run the deployment script, even if the VM is already present. Is this correct?
Also, if it does run the extension every time, is there any way to avoid it?

Basically, when the VM is rebooted or the same template is deployed again, the extension is only re-run if the new extension configuration differs from the one already applied. You can force a rerun by changing the extension's forceUpdateTag property (the force-rerun flag).
Hope this helps! :)
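For reference, a minimal sketch of what the extension resource might look like with that flag; forceUpdateTag and the Windows CustomScriptExtension publisher/type are the standard names, but the parameter names, script name, and API version below are illustrative assumptions, not taken from your template:

  {
    "type": "Microsoft.Compute/virtualMachines/extensions",
    "apiVersion": "2021-03-01",
    "name": "[concat(parameters('vmName'), '/InstallFramework')]",
    "location": "[resourceGroup().location]",
    "dependsOn": [
      "[resourceId('Microsoft.Compute/virtualMachines', parameters('vmName'))]"
    ],
    "properties": {
      "publisher": "Microsoft.Compute",
      "type": "CustomScriptExtension",
      "typeHandlerVersion": "1.10",
      "autoUpgradeMinorVersion": true,
      "forceUpdateTag": "[parameters('extensionRunTag')]",
      "settings": {
        "commandToExecute": "powershell -ExecutionPolicy Unrestricted -File install-framework.ps1"
      }
    }
  }

Keep extensionRunTag unchanged and redeploying the same template should leave the extension alone as long as the settings haven't changed; bump the tag only when you actually want the script to run again.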

Related

Post-build actions in Python/Linux webapp do not run

I have a Django-based web app deployed from Github, running in Python 3.9. The app deploys and starts successfully.
I need to add post-build actions to complete the deployment: the exceedingly common Django task of running "manage.py". Following the general and Python-specific docs, I have added an app setting of
POST_BUILD_SCRIPT_PATH=postbuild.sh
There is a shell script, postbuild.sh, in the root of my app, which runs fine if I SSH into the running container. The expected behaviour is that after deployment this script should run and its output should appear in the deployment log. Neither of these things happens.
I have tested the app setting POST_BUILD_COMMAND with a very simple echo, and that does nothing either.
Can you tell me either what I need to do to make these app settings work, or suggest an alternative method of running the post-build script?
Please note that this is a Linux app using Oryx, so answers concerning Windows/Kudu like this one aren't related.
I noticed you asked your question over here as well. Your setting needs to be set to a relative path, /postbuild.sh.
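If it helps, the setting can also be applied from the Azure CLI instead of the portal; a rough sketch, where the resource group and app names are placeholders and the leading slash follows the suggestion above:

  # set the Oryx post-build script path (placeholder names)
  az webapp config appsettings set \
    --resource-group my-rg \
    --name my-django-app \
    --settings POST_BUILD_SCRIPT_PATH=/postbuild.sh

Since the script only runs as part of the Oryx build, you need to push a new deployment after changing the setting before anything will show up in the deployment log.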

Azure Function: Old code still running after a deployment

I have once again hit the issue where old code is still running on an Azure Function App even after a zip deployment through Kudu returns success.
Of course, this is after waiting some 30 minutes for the new code to be loaded; I don't expect it to happen immediately.
The issue is marked as closed.
What is considered to be the best practice in this case:
Programmatically force the Function App to restart, say, through Azure CLI or Powershell Az modules?
Or is there another way to mitigate the issue?
While restarting should fix it, my suggestion would be to enable "Run from package": https://learn.microsoft.com/en-us/azure/azure-functions/run-functions-from-deployment-package. That removes the chance of having old files running as the deployment is atomic.
You'd set an app setting of WEBSITE_RUN_FROM_PACKAGE to 1 and continue deploying the same way you are today. The site will be run directly from that package (wwwroot will appear as read-only in Kudu), so there's no unzipping and copying, which may be what is causing the issue you're having.
Note: it looks like we're still tracking the issue here: https://github.com/Azure/azure-functions-host/issues/2636.
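If you want to script either of those suggestions, an Azure CLI sketch along these lines should work (the resource group and app names are placeholders):

  # run the site directly from the deployed package so the swap is atomic
  az functionapp config appsettings set \
    --resource-group my-rg \
    --name my-function-app \
    --settings WEBSITE_RUN_FROM_PACKAGE=1

  # or force a restart when a deployment doesn't seem to take
  az functionapp restart --resource-group my-rg --name my-function-app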
In my case the issue lay in the CI/CD pipeline, where an out-of-date artifact was being deployed - hence the successful deployment but the old code.

Capture button missing from Azure Portal.

I am new to the Azure portal but have good knowledge of VMware ESX and vCenter. I created some VMs in the Azure portal (not classic) and need to clone them. However, all the online docs point to a "Capture" button which, for some reason, is missing for me. I found some docs about using the AzCopy tool, which is fairly complex.
Also, because I have some customized automation in the VM, running sysprep might break the tooling/automation, so I would prefer to avoid sysprep.
So the bottom line is that I need a way to clone a single VM into multiple VMs without running sysprep. Yes, I know about the conflicts, and I am hoping I can do the required modifications manually. But even if it is impossible without sysprep, fine; as long as I can clone them, I'll live with sysprep somehow.
This is possible using PowerShell and sysprep. This blog post has a step-by-step walkthrough of how it's done.
http://www.codeisahighway.com/how-to-capture-your-own-custom-virtual-machine-image-under-azure-resource-manager-api/
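That post predates the current Az PowerShell module, but the flow is roughly the same today; a minimal sketch, assuming you have already run sysprep inside the VM and that the resource group, VM, and image names below are placeholders:

  # deallocate the (sysprepped) VM and mark it as generalized
  Stop-AzVM -ResourceGroupName "my-rg" -Name "my-vm" -Force
  Set-AzVM -ResourceGroupName "my-rg" -Name "my-vm" -Generalized

  # capture a managed image that new VMs can be created from
  $vm = Get-AzVM -ResourceGroupName "my-rg" -Name "my-vm"
  $imageConfig = New-AzImageConfig -Location $vm.Location -SourceVirtualMachineId $vm.Id
  New-AzImage -ResourceGroupName "my-rg" -ImageName "my-vm-image" -Image $imageConfig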
If the Capture button is missing, check whether your machine uses managed disks.
If it doesn't, you can follow this tutorial to convert it, but I believe there are costs associated with it.
https://learn.microsoft.com/en-us/azure/virtual-machines/windows/convert-unmanaged-to-managed-disks
PS: last time I hit this I was on unmanaged disks, and I spent about three hours trying to understand why the Capture button was gone.

Updating the code of a managed VM on Google Compute Engine

I understand this might be an easy solution, but I am very new to this so any help would be appreciated.
I have been running through the Hello World application for Node.js with managed VMs on Google Compute Engine, and I have just done this stage:
gcloud preview app deploy app.yaml --promote
This has allowed me to put up the app, and it works.
But HOW do I now update that code? If I run that command again, it starts up new instances and essentially treats it like a new upload.
You can deploy the updated version of your app by running the same command you used to deploy the app the first time, as indicated in this article:
If you update your app, you can deploy the updated version by entering the same command you used to deploy the app the first time. The new deployment creates a new version of your app and promotes it to the default version. The older versions of your app remain, as do their associated VM instances. Be aware that all of these app versions and VM instances are billable resources. For information about deleting or stopping your VM instances, see Cleaning up.
Just in case anyone finds this question looking for the same information: I finally seem to have worked out how to do it.
You need to attach the --version flag when you are deploying, instead of using --promote.
You can find the default version in the Google Cloud console by going to the menu (burger icon) -> App Engine -> Versions; in that list you will see one item marked (default).
Then, when deploying, put that version string after --version and it will deploy without needlessly creating new versions and instances.
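So the redeploy ends up looking something like this, where the version id is a placeholder for whatever your console shows as (default):

  # redeploy onto the existing default version instead of spinning up a new one
  gcloud preview app deploy app.yaml --version my-default-version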

Azure Worker Role not working properly

I have a Worker Role which isn't working in the cloud environment.
When I run it locally, it runs perfectly, but when I deploy it to Azure I'm having some trouble. The deployment itself completes seamlessly, but after the VM starts, my app doesn't run. There is nothing in the Event Log, and even after I set the app up to flush all trace messages to an Azure table, nothing is written there either.
How can I check whether my app is really running on the VM? Why isn't my app working there when it works locally?
Have you tried implementing diagnostics in your role? This is the best way to find any errors in your code. Another solution is to install Sysinternals during startup; Patriek van Dorp has made a NuGet package that adds the Sysinternals suite as a plugin for your cloud project.
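For the trace messages specifically, one thing worth checking is that the Azure diagnostics trace listener is actually registered in the worker role's app.config; without it, Trace calls never reach the diagnostics pipeline. A rough sketch (the assembly version and public key token must match the Azure SDK you build against, so treat the ones below as assumptions):

  <system.diagnostics>
    <trace>
      <listeners>
        <!-- routes System.Diagnostics.Trace output to Windows Azure Diagnostics -->
        <add name="AzureDiagnostics"
             type="Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener, Microsoft.WindowsAzure.Diagnostics, Version=2.8.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
      </listeners>
    </trace>
  </system.diagnostics>

On top of that, the diagnostics configuration still has to transfer the logs to table storage on a schedule, otherwise nothing shows up even when tracing itself works.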
The best way is to enable RDP and remote into the machine. Then you can look at the processes running and ensure that things are running as you expect. It is odd that there is nothing in the event log if it is failing to run. Does the portal show the deployment as Ready?
Maybe this is too late, but I had a similar issue: it ran locally, but after deployment nothing ran. The problem was that I was deploying to a Website, where worker roles are not available. You should deploy to a Cloud Service (both role types are available there), or set up a scheduler that makes requests, at whatever interval you need, to a page of yours that processes the job.
BTW sorry about my english ...
Regards
