I have an Azure Service Bus Trigger that has been working both locally and in our Azure environment. Unfortunately, I had been deploying this to a Consumption service plan, and I need it on a regular service plan. All deployments have been via Visual Studio.
No problem, right? I deployed the exact same code to a different service plan (in the same resource group), and it does not work. Messages are not being taken off the queue.
At first I thought it was a connectivity or configuration issue, but then I noticed that the function isn't listed under "Functions."
I realize that the different service plan itself isn't likely to be the cause, but I can't figure out what else has changed. I have also redeployed this code using the Consumption plan, and the trigger works perfectly.
Thanks for the advice.
I've just checked the debug console for both versions, and there's obviously a lot missing from the one that isn't working. A bummer that VS said the deployment was successful. By the way, the bad version shows as "Running." I guess I'll have to deploy again with some verbose output to see if that helps.
Some of my web apps occasionally fail to deploy or break, and I end up getting a 503 error. I removed WEBSITE_RUN_FROM_PACKAGE and the app ran fine, but I have a lot of apps, both web apps and function apps, and I want to know why this happened. Should I still use the setting or not? Is WEBSITE_RUN_FROM_PACKAGE causing these apps to break, or is there another way to fix them? Does something need to be updated on my part through settings, etc.? I deploy all of my apps through Azure Pipelines builds. This is bothering me a lot, as I don't want my pipelines to break, especially when I send out releases. Any insight is much appreciated.
WEBSITE_RUN_FROM_PACKAGE is the recommended way to deploy Function Apps if you want to deploy using ZIP deploy.
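If you removed the setting and want it back, it can be restored with the Azure CLI (a minimal sketch; the app and resource group names are placeholders):

    az functionapp config appsettings set \
        --name <function-app-name> \
        --resource-group <resource-group> \
        --settings WEBSITE_RUN_FROM_PACKAGE=1

A value of 1 means the app runs from the package uploaded by the last ZIP deploy; a URL value instead points the app at a package sitting in blob storage.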
If your function returns 503 Service Unavailable while it is running, check the request-response time: there is a time limit for HTTP request processing in Azure Functions, i.e., 230 seconds, as described in this MS doc. In that case, either increase the timeout value, move to a higher hosting plan, or use the async pattern in Azure Durable Functions for long-running tasks.
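The function timeout itself is configured in host.json, separately from the 230-second limit on the synchronous HTTP response. A minimal sketch (the ten-minute value is only an illustration; it is also the Consumption plan maximum):

    {
      "version": "2.0",
      "functionTimeout": "00:10:00"
    }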
If you are getting the 503 Service Unavailable error post-deployment, there are two main causes to check:
1. Check whether your function host is down or stuck in a restarting state (see the CLI sketch after this list).
2. Check whether memory consumption has exceeded your hosting plan's limit (Consumption Plan - Functions), and also check the troubleshooting steps given in this SO 71658591.
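For the first check, a quick way to see the app's state from the Azure CLI (a sketch; names are placeholders):

    az functionapp show --name <function-app-name> --resource-group <resource-group> --query state

If it reports Stopped, a restart sometimes clears a wedged host:

    az functionapp restart --name <function-app-name> --resource-group <resource-group>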
Updated Answer
I can see a bug raised earlier in the GitHub Azure Functions repo with a similar scenario: the user #qJake was getting a 503 when deploying Azure Functions through Azure DevOps with the WEBSITE_RUN_FROM_PACKAGE setting. The issue and its solutions are given in ticket #11444.
I am using Terraform to deploy Azure Functions infrastructure through an Azure DevOps release pipeline. Everything worked fine, but starting last Friday I got an error saying the function runtime is unable to start. After some investigation, I found that the host.json file is not generated in the Azure Function. Any ideas on a potential fix?
(The only change was updating the Functions version from v2 to v3, but even after downgrading back to v2, the same problem persists.)
Environment:
Azure Function V2/V3
Resources created by Terraform, using the App Service Plan tier
Seems this is an internal issue. Raising a support ticket from the Azure portal would be a good way to find the reason.
We have a bug in a WebJob running in our live environment. I have identified the bug and fixed it, and I can verify this in our dev environment. I published my WebJob as an "Azure WebJob" to our live environment, but the bug is still present. To add to the confusion, the bug now occurs only sometimes. So, for some reason, the old code is still running somewhere, sometimes.
Can someone please help me understand this?
I had a similar problem. We deploy using a staging environment in Azure, and it turned out that the "old" WebJobs (running code with an old version of the Entity Framework model) were still running against the queue. These jobs were fetching messages and consuming them. To add to the problem, the exception was swallowed in a try/catch, so the status of the WebJob was success.
Check whether you have a staging environment (the web app name with "-stage" appended), and if so, go into the Azure management portal and stop its WebJobs.
Note that it is not enough to stop the web app; you must stop the WebJobs directly. In the new portal this is done under Settings -> WebJobs, then right-clicking the WebJob's name and selecting Stop.
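If you prefer the command line, a continuous WebJob can also be stopped with the Azure CLI (a sketch; the app, resource group, and job names are placeholders):

    az webapp webjob continuous stop \
        --name <webapp-name> \
        --resource-group <resource-group> \
        --webjob-name <job-name>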
I spent ages looking into this problem. Turns out I had the web job project running in a console on my PC at work! No matter what I did on Azure the presence of this exe running and using the same storage for the web jobs meant that the old code running on my work PC picked up the jobs before Azure did. Easy fix: just make sure no exes are running outside of Azure!
In our case the web app was published to the physical path /site/www instead of the default /site/wwwroot. Because of this, the Azure portal adds the WebJobs to the folder /site/jobs, while Web Deploy via VS or Azure still tries to publish the WebJobs inside /site/www.
More details at Publishing WebJobs with Azure Pipelines
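For reference, when the site uses the default /site/wwwroot path, the portal and Web Deploy agree on where WebJobs live (job names below are placeholders):

    site/wwwroot/App_Data/jobs/continuous/<job-name>/
    site/wwwroot/App_Data/jobs/triggered/<job-name>/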
I am deploying to the local Azure compute emulator. It goes into a loop, displaying role states Unknown and Destroyed. I can deploy the exact same code to the Azure cloud with no issues, but it fails in the local deployment. I checked the fusion log to see if there were any assembly load issues, to no avail. How can I chase down such an issue?
OK, it turns out I copied the web.config from an earlier version, and it brought along the old version of DiagnosticMonitorTraceListener and the old targetFramework. I guess this is a pretty common kind of mistake. Once I fixed them by copying them from a healthy project, it worked like a charm.
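For anyone hitting the same thing, these are the two web.config spots to compare against a healthy project (a sketch; the framework and assembly versions shown are illustrations and must match what your project actually references):

    <configuration>
      <system.web>
        <!-- must match the framework the project targets -->
        <compilation debug="true" targetFramework="4.5" />
        <httpRuntime targetFramework="4.5" />
      </system.web>
      <system.diagnostics>
        <trace>
          <listeners>
            <!-- version must match the referenced Microsoft.WindowsAzure.Diagnostics assembly -->
            <add name="AzureDiagnostics"
                 type="Microsoft.WindowsAzure.Diagnostics.DiagnosticMonitorTraceListener, Microsoft.WindowsAzure.Diagnostics, Version=2.8.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35" />
          </listeners>
        </trace>
      </system.diagnostics>
    </configuration>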
I have several web and worker roles in my solution, but I also have a non-Azure application running on an Azure-hosted VM. That application connects to Azure storage for various things like reading and writing blobs and queues, and that works fine.
I'd like to use Azure diagnostics from within that same application (a .NET app running on a VM hosted in Azure). However, if I try to initialize diagnostics, I get this exception:
System.InvalidOperationException: Not running in a hosted service or the Development Fabric.
This makes sense, but I'm wondering if it's possible to use the diagnostics in some way without being a hosted service. In particular, I'm using Azure diagnostics to gather logging information written out by System.Diagnostics.Trace, and that's all hidden away from the application code, so if there is some other API, I have a place where I can probably slot it in.
Any ideas?
Thanks,
JC
Unfortunately, no. At least not today. The agent has some hard-coded checks for the RoleEnvironment, and when it is not there, it fails. This is also the reason you cannot use the agent on IaaS today either.
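If the goal is only to capture the System.Diagnostics.Trace output, one workaround is to skip the diagnostics agent entirely and write the traces to table storage yourself. A minimal sketch, assuming the classic WindowsAzure.Storage SDK; the table name and entity shape are made up for illustration:

    using System;
    using System.Diagnostics;
    using Microsoft.WindowsAzure.Storage;
    using Microsoft.WindowsAzure.Storage.Table;

    // Hypothetical entity: one row per trace line.
    public class TraceEntity : TableEntity
    {
        public TraceEntity() { }

        public TraceEntity(string message)
        {
            PartitionKey = DateTime.UtcNow.ToString("yyyyMMdd");
            RowKey = Guid.NewGuid().ToString();
            Message = message;
        }

        public string Message { get; set; }
    }

    // Custom listener that sends Trace output to an Azure table
    // instead of relying on the diagnostics agent / RoleEnvironment.
    public class TableTraceListener : TraceListener
    {
        private readonly CloudTable _table;

        public TableTraceListener(string connectionString)
        {
            var account = CloudStorageAccount.Parse(connectionString);
            _table = account.CreateCloudTableClient().GetTableReference("ApplicationTrace");
            _table.CreateIfNotExists();
        }

        public override void Write(string message)
        {
            WriteLine(message);
        }

        public override void WriteLine(string message)
        {
            _table.Execute(TableOperation.Insert(new TraceEntity(message)));
        }
    }

Registered once at startup, everything already written through System.Diagnostics.Trace then flows into the table without touching the application code:

    Trace.Listeners.Add(new TableTraceListener("<storage-connection-string>"));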