Right now I have once again run into the issue where old code keeps running on an Azure Function App even after a zip deployment through Kudu returns success.
And that is after the 30 minutes or so in which I would expect the new code to get loaded, not immediately after the deployment.
The issue is marked as closed.
What is considered best practice in this case:
Programmatically force the Function App to restart, say through the Azure CLI or the PowerShell Az modules?
Or is there another way to mitigate the issue?
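For reference, the kind of programmatic restart I have in mind would be something like this (app and resource group names are placeholders):

```powershell
# Hypothetical names; requires the Az.Functions module and a signed-in Az session
Restart-AzFunctionApp -Name "MyFuncApp" -ResourceGroupName "MyRg" -Force

# Azure CLI equivalent:
# az functionapp restart --name MyFuncApp --resource-group MyRg
```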
While restarting should fix it, my suggestion would be to enable "Run from package": https://learn.microsoft.com/en-us/azure/azure-functions/run-functions-from-deployment-package. That removes the chance of old files running, since the deployment is atomic.
You'd set the app setting WEBSITE_RUN_FROM_PACKAGE to 1 and continue deploying the same way you are today. The site will run directly from that package (wwwroot will appear as read-only in Kudu), so there's no unzipping and copying, which may be what is causing the issue you're seeing.
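If you want to flip that setting from a script, a minimal sketch with the Az.Functions module might look like this (names are placeholders):

```powershell
# Hypothetical app/resource group names; requires the Az.Functions module
Update-AzFunctionAppSetting -Name "MyFuncApp" -ResourceGroupName "MyRg" `
    -AppSetting @{ WEBSITE_RUN_FROM_PACKAGE = "1" }
```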
Note: it looks like we're still tracking the issue here: https://github.com/Azure/azure-functions-host/issues/2636.
In my case the issue lay in the CI/CD pipeline, where an out-of-date artifact was being deployed, hence the successful deploy but the old code.
Related
I have an Azure App Service app that I'm trying to get deployed.
Today I ran into an issue where .NET informed me (via the yellow screen of death when I browsed to the URL of my app) that I had a missing DLL (for the purposes of this question I don't think it really matters which one).
I used FileZilla to publish my changes in an attempt to do a manual deployment first and then work my way to automate it.
After many attempts to fix it, I realized that the error message never changed. As a more drastic test, I renamed my bin folder to something completely different, and the exact same error message still appeared.
I've stopped the service, restarted it, and, as mentioned, renamed folders, etc., and still the exact same error message persisted.
I also opened up the Azure Portal console for my App Service app to browse around, and to my amazement nothing seemed to have been reflected at all. FTP shows one thing and the console shows another.
Would anyone have any idea as to why this is happening?
I eventually got it to work and I will share what I tried.
I deleted the web app and created it again (this seemed to matter the first time around). It was quite time-consuming and did help, but it wasn't long before the same problem happened again.
Then I finally found a solution that seems to give me consistent results:
I kept editing the Web.config, which seems to force a recompile and clear some sort of cache. Each time the web app stopped updating, I would make a slight change to the Web.config, upload it via FTP, and the app would finally update.
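To save retyping, the workaround can be scripted. A rough sketch, with the local path, FTP endpoint, and credentials all as placeholders:

```powershell
# Make a trivial change to Web.config so ASP.NET recompiles, then re-upload it.
# XML allows comments after the root element, so appending one is harmless.
$path = "C:\MyApp\Web.config"
Add-Content -Path $path -Value ("<!-- touched " + (Get-Date) + " -->")

# Upload over FTP; host, user, and password below are placeholders
$client = New-Object System.Net.WebClient
$client.Credentials = New-Object System.Net.NetworkCredential("ftpUser", "ftpPassword")
$client.UploadFile("ftp://waws-prod-xyz.ftp.azurewebsites.windows.net/site/wwwroot/Web.config", $path)
```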
If anyone has any more details on this, it would be greatly appreciated.
My Azure Functions were running fine, and all of a sudden I am getting several "Assembly changes detected. Restarting host..." messages that are preventing my functions from completing.
I am not deploying new code, so I am not sure what is triggering the assembly-change event. I was running on the latest version of the runtime and have since reverted to version 1.0.10947, thinking that maybe the underlying runtime had been updated, but I'm still getting that line in the logs.
Update
Now that @Alexey has helped me track down what is causing the assembly changes to be detected, I would like to ask if anyone can tell me WHY an assembly change is being detected even though I have not changed or redeployed my application.
After looking at your logs we opened an issue: https://github.com/Azure/azure-webjobs-sdk-script/issues/1533#issuecomment-303595960.
Your functions had multiple package restores, but the issue is gone now. Restores can be initiated by changing project.json.
If you are stuck with repeated
Assembly changes detected. Restarting host
messages, I fixed my issue by deleting the log file in the Kudu service:
https://[FunctionAppName].scm.azurewebsites.net/
Then follow the top menu:
Debug Console >> PowerShell
The log file is at:
LogFiles >> Application >> Functions >> function >> [Function name]
You can remove that log file.
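If you prefer to do it from the Kudu PowerShell console itself, something like this should work (the path follows the menu trail above; "MyFunction" is a placeholder):

```powershell
# Run inside the Kudu Debug Console (PowerShell); "MyFunction" is a placeholder
Remove-Item "D:\home\LogFiles\Application\Functions\function\MyFunction" -Recurse -Force
```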
my 2c.
I was struggling with this issue for ages, not sure what was causing it. I believe I may have the answer.
Our solution had been toying with Consumption plans, but we pulled back to full App Service plans because the cold-start times were too long for our rather unique usage patterns.
But two of the app settings were still in place: WEBSITE_CONTENTSHARE and WEBSITE_CONTENTAZUREFILECONNECTIONSTRING.
Per
https://learn.microsoft.com/en-us/azure/azure-functions/functions-app-settings#websitecontentazurefileconnectionstring
these are ONLY for Consumption plans.
I removed them and... touch wood, the issue seems to be resolved.
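If it helps anyone, removing those two settings can be scripted too. A minimal sketch with the Az.Functions module (app and resource group names are placeholders):

```powershell
# Hypothetical names; requires the Az.Functions module
Remove-AzFunctionAppSetting -Name "MyFuncApp" -ResourceGroupName "MyRg" `
    -AppSettingName "WEBSITE_CONTENTSHARE", "WEBSITE_CONTENTAZUREFILECONNECTIONSTRING"
```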
I created a cloud service and tested it successfully locally. I added service configurations for stage and production. Here is a snippet of my staging configuration:
and here my configuration-settings:
Then when I publish I set up the deployment as follows:
All this worked fine about two weeks ago. But now, when it deploys from VS and I look into the Azure service configuration area, it looks like this:
I played around a little with the "Update development..." checkbox on the second screen, but the result is the same.
So it ignores all the settings I made and just won't transition my configuration to the one I named "CloudStage". My current Web PI tells me that I am using Windows Azure SDK for .NET (VS 2013) 2.3. I don't get it.
Edit
Some more things I observed:
No WADLogsTable or WADWindowsEventLogsTable is generated automatically in the staging storage.
I deactivated Remote Desktop; it was one of the changes I had made in order to monitor the event log (which wasn't useful here).
I manually changed the connection strings in the Azure Portal, but it seems as if the worker is totally unaware of the storage (I rebooted it with no success).
Edit
I noticed another thing. Here you can see a running deployment of my service:
See the warning-mark on the left? If I go to my Error list this is shown:
This warning is senseless, since it tells me that I did everything the right way: my *.Local.cscfg files are pointing to the local storage. So?!?
This seems weird. Please check the settings in your ServiceConfiguration.CloudStage.cscfg to verify they contain the expected values.
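A quick way to eyeball those values, sketched here assuming the standard .cscfg layout:

```powershell
# Dump every role's configuration settings from the staging .cscfg
[xml]$cfg = Get-Content .\ServiceConfiguration.CloudStage.cscfg
foreach ($role in $cfg.ServiceConfiguration.Role) {
    "Role: " + $role.name
    $role.ConfigurationSettings.Setting | Format-Table name, value -AutoSize
}
```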
Have you tried updating any other property, like enabling Remote Desktop? Does that get updated on your deployment? You should select the "Deployment Update" checkbox in the publish dialog. Then, when deploying to an existing cloud service, it should ask you whether you want to replace it.
If you get the object reference error every time you right-click the project, there might be some issue with the Azure SDK setup.
I'm a little bit further now. What I did was:
Deleted all Services in Azure.
Deleted all Storage Accounts in Azure
Removed my Service-Project completely from solution (not the library containing the worker-logic).
Re-added storage-accounts in Azure.
Re-added services in Azure.
Re-added a project in the solution and added the worker-logic inside it.
Built up all the publishing stuff again.
Published it.
The first publish ended like the one described in my question. After I checked the "Update development..." option in the properties of my worker, it finally took my transformations into the stage!
Then I noticed that WADLogsTable was still empty. I right-clicked the instance in Server Explorer and chose "Update diagnostics settings...". There, the "Transfer period" option was suddenly set to "None". That explained why my table was empty, and after I set it back to "1" the table started filling again!
Another funny thing on the side: when I right-click my cloud project in the solution, I get "Object reference not set to an instance...". When I just left-click it and choose Build -> Publish, it works.
I just hope I can help somebody with this. Let's see if it's stable now.
Edit: Yesterday it worked; today it is the same issue again :-(.
When you get "Object reference not set to an instance..." for a cloud service project, you usually have some kind of mismatch. It could be that a setting in the ServiceConfiguration is not defined in the ServiceDefinition. It could also be that the .ccproj file for the cloud service references a publish profile that doesn't exist. This might also be what is causing your problems with the different configurations.
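To hunt for the first kind of mismatch, here is a rough sketch comparing the setting names declared in the .csdef against those in one .cscfg (the file names and the single-worker-role layout are assumptions):

```powershell
# Compare setting names between the service definition and one configuration
[xml]$def = Get-Content .\ServiceDefinition.csdef
[xml]$cfg = Get-Content .\ServiceConfiguration.CloudStage.cscfg
$defNames = $def.ServiceDefinition.WorkerRole.ConfigurationSettings.Setting | ForEach-Object { $_.name }
$cfgNames = $cfg.ServiceConfiguration.Role.ConfigurationSettings.Setting | ForEach-Object { $_.name }
Compare-Object $defNames $cfgNames   # any output means the two files disagree
```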
So it turns out that the problem is completely on the client side: my Visual Studio (now with SDK 2.4) is doing something wrong. I set up a fresh installation with everything needed :-( and there it works perfectly. I'll try to determine whether one of my extensions is causing the strange "Object reference not set..." bug.
A repair installation of VS does not solve the problem, by the way.
I've been having a generally wretched time deploying my worker roles to Azure. I'll publish my worker role once from Visual Studio and everything will work fine. I'll publish the worker role again later and the deployment fails: the instance goes into a "recycling loop". I spend hours trying to figure out what I broke. I tried IntelliTrace, but it always fails with a "cannot download IntelliTrace logs" error message. Eventually I'll delete the deployment from inside the Azure Management Portal and try again, and the same code that has been failing to deploy for hours will magically work.
This doesn't happen all the time, and some projects seem to "fix" themselves and stop demonstrating this behavior altogether. But what seems to be happening is that a publish from Visual Studio will fail unless I manually delete the existing deployment first.
I know this may be a little vague, but I've really got nothing to go on here. IntelliTrace never works, and I can't Remote Desktop into the role to poke around because it recycles so fast (which may also be why IntelliTrace isn't working).
Does anyone have any idea what might be going on here?
I did more research and I think I may know what's going on. Apparently Visual Studio tries to upgrade your worker roles in place when you deploy. If that fails, for some reason such as changing the service configuration between deployments, it just complains that there's something wrong with your role and that your instance is recycling.
In the deployment options there is a setting called "If deployment can't be updated, do a full deployment", which deletes the existing deployment and deploys from scratch whenever the in-place update can't be done. I'm not sure why this isn't checked by default instead of "fail mysteriously".
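For anyone scripting this outside Visual Studio, the same delete-then-redeploy fallback can be approximated with the classic Azure Service Management PowerShell module. A sketch, with the service name and file paths as placeholders:

```powershell
# Classic (ASM) cmdlets; service name, package, and config paths are placeholders
Remove-AzureDeployment -ServiceName "MyCloudService" -Slot Production -Force
New-AzureDeployment -ServiceName "MyCloudService" -Slot Production `
    -Package ".\MyService.cspkg" -Configuration ".\ServiceConfiguration.Cloud.cscfg" `
    -Label "Full redeploy $(Get-Date -Format s)"
```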
Deploying to SharePoint using the object model or STSADM commands sometimes results in one or more packages being in the "error" state in the web control. A redeploy usually fixes this instantly. Even stranger, if I create two apps, one which adds and one which deploys, I get no problems, but putting a delay inside a single program does not have a similar effect.
If I run the deploy twice for packages which did not deploy successfully, it works fine, as long as I do not try to do that programmatically, in which case it makes no difference.
It affects different files each time, and sometimes none at all.
I do run stsadm -o execadmsvcjobs between the add and the deploy, and even between two of the deploy batches; roughly the sequence sketched below.
(I'm deploying around 10 WSP files programmatically.)
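For context, the per-solution sequence looks roughly like this (solution names are placeholders):

```powershell
# Rough shape of the add / flush / deploy sequence; names are placeholders
& stsadm -o addsolution -filename ".\MySolution.wsp"
& stsadm -o execadmsvcjobs
& stsadm -o deploysolution -name "MySolution.wsp" -immediate -allowgacdeployment
& stsadm -o execadmsvcjobs
```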
Does anyone have any idea why this happens, or how to solve it? It causes problems when I get to real implementations.
The problem lies in the fact that SharePoint performs app pool recycles and/or full IIS resets, as well as restarts of the SharePoint Timer Service (although I'm not completely sure about that last one). When you try to actually deploy the just-installed package, SharePoint is still busy getting up and running again; the timer job created to install/deploy is basically waiting for the Central Administration app pool to be fully running again.
The same thing happens (somewhat reproducibly) while retracting a solution. Hit F5 a lot of times on the solution management page while the retract process is underway, and if you refresh fast enough it will hang and display "error" in red.
My solution was to create a WebRequest to at least the Central Administration site (or just do an SPSite = new SPSite("centraladminurl")) in your deployment app or in PowerShell. Do this after every deploy action as well.
This SHOULD fix the timing issue (basically a kind of race condition).
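A minimal warm-up sketch along those lines, with the Central Administration URL as a placeholder:

```powershell
# Poke Central Administration so its app pool spins up before the next timer job
$req = [System.Net.WebRequest]::Create("http://centraladmin:2010/")
$req.UseDefaultCredentials = $true
$res = $req.GetResponse()
$res.Close()

# Object-model alternative (requires the SharePoint assemblies on this machine):
# $site = New-Object Microsoft.SharePoint.SPSite("http://centraladmin:2010/")
# $site.Dispose()
```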