Azure Emulator Won't Accept D-Series Role

If I try to debug an Azure project using the new D-Series sizes, I get an error saying "Specified argument was out of the range of valid values." I'm using the latest 2.4 tools. Is there a way around this, or do I have to wait for 2.5?
Another odd issue is that I have to manually edit the csdef and re-open the role properties to even get the new sizes as choices. I verified that my projects have been upgraded to 2.4 as well.
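For reference, the manual edit I mean is setting the vmsize attribute on the role in ServiceDefinition.csdef (the role name here is just a placeholder):

<WebRole name="WebRole1" vmsize="Standard_D1">
  ...
</WebRole>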

This was resolved with SDK v2.5

Related

Newly deployed Azure function returns 404 Not Found error

Hey, I have deployed a new Azure function using Azure DevOps CI/CD. The function app was deployed successfully, and when I go to the main URL it says the function app is running. I tried to test the endpoint ("/save") from the Azure portal and the output is 404 Not Found. I get the same result when I use Postman. Any help would be appreciated.
2020-11-21T11:30:45.769 [Error] The following 2 functions are in error:
Get: The function type name 'DocumentContextFunction.Functions.GetDocument'
is invalid.
Save: The function type name
'DocumentContextFunction.Functions.SaveDocument' is invalid.
I fixed this by updating the value of "FUNCTIONS_EXTENSION_VERSION" from 1 to 3. For some reason, every time I deployed using Azure CI/CD its value was set back to 1, so I had to manually change it to 3.
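If you want to script the fix instead of editing it in the portal after every deployment, something like this Azure CLI call should pin the setting (the app and resource group names are placeholders):

az functionapp config appsettings set --name MyFunctionApp --resource-group MyResourceGroup --settings FUNCTIONS_EXTENSION_VERSION=~3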
I encountered this error when my build targeted win-x64 whereas the Azure Functions platform setting was configured as 32-bit.
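A sketch of the two ways to align them, with placeholder names — either publish for the 32-bit runtime, or flip the platform setting to 64-bit:

# Option 1: publish for the 32-bit platform the app is configured for
dotnet publish -c Release -r win-x86

# Option 2: switch the Function App platform to 64-bit instead
az functionapp config set --name MyFunctionApp --resource-group MyResourceGroup --use-32bit-worker-process false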
In my case I had a function created in the portal. I then published a function via Visual Studio. After publishing, the portal-created function returned 404 Not Found and I could not even delete it from the portal.
The exact same code in a new function worked as expected.
This is not intuitive, and there were no indications in the portal that previously created functions would break.
This may be buried somewhere in the documentation, but I would have expected a warning in Azure before allowing other functions to break without code changes.
The author's question helped me understand where the problem was coming from. In my case, it was not the CI/CD pipeline doing anything funny.
It was my IaC code, which was not setting up the function app properly. It picks version ~1 by default, but I had to set it to ~3, as in the sketch below.
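In an ARM template the relevant fragment looks roughly like this (a minimal sketch, not a complete site resource):

{
  "type": "Microsoft.Web/sites",
  "kind": "functionapp",
  "properties": {
    "siteConfig": {
      "appSettings": [
        { "name": "FUNCTIONS_EXTENSION_VERSION", "value": "~3" }
      ]
    }
  }
}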
My function apps were working until I included a new custom package with a later version of Microsoft.Extensions.Logging. My functions were using Microsoft.Extensions.Logging 2.1.1. The package pulled in version 7.0.0, which is for .NET 7 and incompatible with my .NET 6 projects.
When the package was added, the Microsoft.Extensions.Logging version in my project was updated to 7.0.0. There were no compile errors, but debugging showed that the assembly could not be loaded. This was causing the 404.
Changing the package version back to 2.1.1 corrected the problem.
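In csproj terms, the fix was pinning the dependency back to the version the runtime could load, roughly:

<ItemGroup>
  <!-- Pin to the version compatible with the .NET 6 Functions runtime -->
  <PackageReference Include="Microsoft.Extensions.Logging" Version="2.1.1" />
</ItemGroup>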
This is what worked for me...
Note: I was getting a 404 on my function, which is Node.js and edited inline in the browser.
Open your function.json and take a backup, as we are going to change it.
Check whether your function.json contains two different bindings with "direction": "in".
For me there were two. I tried deleting one, but it kept coming back.
Next I went to 'Integrations' (on the left menu), opened my trigger, and deleted it. This recycles your trigger. I hope this works for you too.
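For comparison, a healthy HTTP-triggered function.json contains exactly one trigger binding with "direction": "in" — roughly like this sketch (the binding names, auth level, and methods will differ for your function):

{
  "bindings": [
    {
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "authLevel": "function",
      "methods": [ "get", "post" ]
    },
    {
      "type": "http",
      "direction": "out",
      "name": "res"
    }
  ]
}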

How can I run node v10.x on Azure Functions on a Linux host?

I've been working on a small function to automate my certificate renewal in Azure Functions.
The function works in my local emulator (in vscode), running under node v10.15.3.
However, when running it online, an exception is raised on the syntax of an async iterator when the file containing it is loaded:
Stack: /home/site/wwwroot/node_modules/acme-dns-01-cloudflare/index.js:125
for await(const zone of consumePages(pagination =>
It's my understanding that this syntax was adopted in Node 10.x. I therefore added the console output line console.log(process.versions); and the response shows that the function is running Node 8.16.1. I then checked the WEBSITE_NODE_DEFAULT_VERSION application setting and confirmed it is set to 10.14.1. I have also tried the other recommended value of ~10 and got the same result.
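For illustration, a minimal script with the same construct runs on Node 10+ but fails with a SyntaxError on Node 8, since for await...of shipped with Node 10's async-iteration support:

// Hypothetical reproduction of the syntax gate
async function* pages() { yield 1; yield 2; }

(async () => {
  for await (const page of pages()) {
    console.log(page, process.version);
  }
})();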
Unfortunately the documentation is difficult to search for such a specific issue, but I have not yet come across anything stating that Linux functions are limited to Node 8.x.
As extra information, FUNCTIONS_WORKER_RUNTIME is set to "node", and the runtime version is 2.0.12733.0 (~2).
At the time of writing, this issue on GitHub highlights the problem: https://github.com/Azure/azure-functions-host/issues/4948. Different Node versions are simply not available on Linux consumption plans, regardless of the WEBSITE_NODE_DEFAULT_VERSION setting.
Hopefully their new arrangements will be in place soon for anyone else who hits this.
For now you can switch to a Windows consumption plan, or potentially to an App Service plan (I haven't checked this, as it somewhat defeats the point of Functions).
Refer to issue1 and issue2; it looks like WEBSITE_NODE_DEFAULT_VERSION won't work for Linux Functions, so you have to set the LinuxFxVersion property to select the Node version.
The following is my flow to change it.
1. Go to your Function App in the portal and open the Resource Explorer. You will find that LinuxFxVersion is node:2.0-node8-appservice.
2. Select web under the config node, then choose the Edit button. Find linuxFxVersion and change the value to NODE|10.14, then click the PUT button to update the setting. Restart your Function and check the Node version; you will find it is 10.14.
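If you prefer scripting it over using the Resource Explorer, the same change should be possible with the Azure CLI (app and group names are placeholders):

az functionapp config set --name MyFunctionApp --resource-group MyResourceGroup --linux-fx-version "NODE|10.14"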

"Object reference not set to an instance of an object" error when deploying to cloud

Locally the project works perfectly.
It executes, and there are proper entries in the database.
But when I deploy it to the cloud staging environment, it gives me the error "object reference not set to an instance of an object".
My project requires 2 instances each of a web and a worker role, but since I am limited to free instances I am using only 2, one of each. Can that be a problem?
I am using SDK 1.8.
It is an ASP.NET project.
Can anyone suggest how to resolve this issue?
I had a similar issue where I was getting the following when publishing from Visual Studio 2013 to my Azure Cloud Service with a web role and a worker role. This occurred after upgrading the project from Azure SDK v2.4 to v2.5:
2:41:59 PM - Applying Diagnostics extension.
2:42:01 PM - Object reference not set to an instance of an object.
Even after working with a Microsoft developer, I could not get it to show any kind of stack trace indicating where the null was coming from. He just happened to notice something in my cloud configuration under the "Roles" folder that looked strange.
The problem was that a diagnostics.wadcfgx file is needed for an SDK 2.5 project, whereas earlier versions used diagnostics.wadcfg. (Note the "x" in the file extension.) Apparently this file was not automatically created for the worker role when the project was upgraded.
To fix this particular issue, right-click the role (in the Roles folder under the cloud project), select "Add Diagnostic Configuration", then build the solution and attempt to publish. These diagnostics files are ONLY needed when publishing directly from Visual Studio.
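For reference, the generated .wadcfgx ends up looking roughly like this (a trimmed sketch of the SDK 2.5 schema, not the full default contents):

<?xml version="1.0" encoding="utf-8"?>
<DiagnosticsConfiguration xmlns="http://schemas.microsoft.com/ServiceHosting/2010/10/DiagnosticsConfiguration">
  <PublicConfig>
    <WadCfg>
      <DiagnosticMonitorConfiguration overallQuotaInMB="4096">
        <!-- log, performance counter, and event log collectors go here -->
      </DiagnosticMonitorConfiguration>
    </WadCfg>
  </PublicConfig>
  <PrivateConfig>
    <StorageAccount endpoint="" />
  </PrivateConfig>
</DiagnosticsConfiguration>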
I had the same error today, after upgrading the Azure SDK to version 2.5.
I added a diagnostics configuration to each role in the service and the error disappeared.
Had this with MS Azure Tools 2.9. It turned out that it needed the following in diagnostics.wadcfgx:
<PrivateConfig xmlns="http://schemas.microsoft.com/ServiceHosting/2010/10/DiagnosticsConfiguration">
  <StorageAccount endpoint="" />
</PrivateConfig>
Hope it helps someone!
I ran into the same issue when upgrading to 2.5. It created the .wadcfgx files and they looked sensible but deployment via Visual Studio would still fail with the same error.
Applying Diagnostics extension.
Object reference not set to an instance of an object.
Package deployment would still work. I was able to get past it via the following steps:
1. Delete the generated .wadcfgx files.
2. Have Visual Studio regenerate them via the context menu option on the role.
3. Do a clean build and redeploy.
I'd love to know what the actual error was and how to get more details. The messaging is pretty terrible.

Azure deployment to stage ignores service configuration

I created a cloud service and tested it successfully locally. I added service configurations for stage and production. Here is a snippet of my staging configuration:
and here are my configuration settings:
Then, when I publish, I set up the deployment as follows:
All this worked fine two weeks ago. But now Visual Studio deploys, and when I look into the Azure service Configure area it looks like this:
I played a little with the "Update development ..." checkbox on the second screen, but the result is the same.
So it ignores all the settings I made and just won't transition my configuration to the one I named "CloudStage". My current Web PI tells me that I am using Windows Azure SDK for .NET (VS 2013) 2.3. I don't get the point.
Edit
Some more things I observed:
No WADLogsTable or WADWindowsEventLogsTable is generated automatically in the staging storage.
I deactivated Remote Desktop because it was one of the changes I had made to monitor the event log (which wasn't useful here).
I manually changed the connection strings in the Azure portal, but it seems as if the worker is totally unaware of the storage (rebooted it with no success).
Edit
I noticed another thing. Here you can see a running deployment of my service:
See the warning mark on the left? If I go to my Error List, this is shown:
This warning is senseless, since it tells me that I did everything the right way. My *.Local.cscfg files are pointing to the local storage. So?!?
This seems weird. Please check the settings in your ServiceConfiguration.CloudStage.cscfg to verify the expected values.
Have you tried updating any other property, like enabling Remote Desktop? Does that get updated on your deployment? You should select the "Deployment Update" checkbox in the publish dialog. Then, when deploying to an existing cloud service, it should ask you whether you want to replace it.
If you get the object-reference error every time you right-click on the project, there might be some issue with the Azure SDK setup.
I'm a little bit further now. What I did was:
Deleted all Services in Azure.
Deleted all Storage Accounts in Azure
Removed my Service-Project completely from solution (not the library containing the worker-logic).
Re-added storage-accounts in Azure.
Re-added services in Azure.
Re-added a project in the solution and added the worker-logic inside it.
Built up all the publishing stuff again.
Published it.
The first publish ended like the one described in my question. After I checked the "Update development..." option in the properties of my worker, it finally took my transitions into the stage!
Then I noticed that WADLogsTable was still empty. I right-clicked the instance in Server Explorer and chose "Update diagnostic settings...". There was an option "Transfer period" suddenly set to "None". This explained why my table was empty, and after I set it back to "1" my table is filling again!
Another funny thing besides: when I right-click my cloud project in the solution, I get "Object reference not set to an instance...". When I just left-click it and choose Build → Publish, it works.
I just hope that I can help somebody with this. Lets see if it's stable now.
Edit: Yesterday it worked - today is still the same issue :-(.
When you get "Object reference not set to an instance..." for a cloud service project, you usually have some kind of mismatch. It could be that a setting in the ServiceConfiguration is not defined in the ServiceDefinition. It could also be that a publish profile is referenced in the .ccproj file for the cloud service but doesn't exist. This might also be what is causing your problems with the different configurations.
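For the first kind of mismatch: every <Setting> used in a .cscfg must be declared in the .csdef. A minimal matched pair looks like this (the setting name is a placeholder):

<!-- ServiceDefinition.csdef -->
<ConfigurationSettings>
  <Setting name="StorageConnectionString" />
</ConfigurationSettings>

<!-- ServiceConfiguration.CloudStage.cscfg -->
<ConfigurationSettings>
  <Setting name="StorageConnectionString" value="UseDevelopmentStorage=true" />
</ConfigurationSettings>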
So it turns out that the problem is completely on the client side. My Visual Studio (now with SDK 2.4) is doing something wrong. I set up a fresh installation with all the required tooling :-( and there it works perfectly. I'll try to determine whether one of my extensions is causing the strange "Object reference not set..." bug.
A repair installation of VS does not solve the problem, by the way.

Your role instances have recycled a number of times during an update or upgrade operation

I am trying to deploy a Cloud Service with 1 Web Role to Azure.
When I do so, I get this message:
Your role instances have recycled a number of times during an update or upgrade operation. This indicates that the new version of your service or the configuration settings you provided when configuring the service prevent the role instances from running. Verify your code does not throw unhandled exceptions and that your configuration settings are correct and then start another update or upgrade operation.
The project runs just fine locally, and I'm having a hard time figuring out how to start debugging this issue. Are there any common problems that cause this message or steps to figure out what is causing it?
See https://learn.microsoft.com/en-us/archive/blogs/kwill/windows-azure-paas-compute-diagnostics-data. It walks through all of the diagnostic data available, as well as how to troubleshoot the most common issues.
We also had this annoying problem; in our case:
1. We use local storage, but it wasn't defined in the service definition (or in the worker role's properties).
2. Our worker role project has a reference to a service project, which in turn references a data layer project, but the worker role project did not reference the data layer project itself. As soon as we added a reference to the data layer project in the worker role project, it deployed successfully.
Problem #1 is easy to notice if you first run the project on your local machine: an exception will be thrown.
Problem #2, however, is more difficult, mainly because everything runs just fine on the local machine. After 5 days of troubleshooting we finally found the problem. So check all references and try to add sub-referenced projects, i.e. those that are referenced by your other references (see the sketch below).
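In csproj terms the fix looks like this (project names are hypothetical): the worker role references the data layer directly, not only the service layer that uses it:

<ItemGroup>
  <ProjectReference Include="..\ServiceLayer\ServiceLayer.csproj" />
  <!-- Also reference the project your references depend on -->
  <ProjectReference Include="..\DataLayer\DataLayer.csproj" />
</ItemGroup>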
We had a similar problem, and in our case it was due to some DLLs failing to load (because their versions differed from the ones MS had deployed to the VM).
Try setting CopyLocal to "true" for all the references in the project, and re-deploy.
I would either remote-desktop to the cloud instance and review the Windows event logs for exceptions, or redeploy with IntelliTrace enabled. If you choose the latter, you can download the IntelliTrace logs from Visual Studio and debug:
http://msdn.microsoft.com/en-us/library/windowsazure/ff683671.aspx
One way to find the actual error is to click the "1 instance" link at the top of the dashboard after trying to deploy your web role. It will tell you the status of the role instance, which should include more information about the type of error blocking your deployment.
What it is depends on your case. For me, the status claimed that I had an unhandled SecurityException. After some investigation, it turned out that in my role's OnStart() I tried to create an event source, but the Azure service doesn't have permission to create one.
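A sketch of the pattern that caused it, with a guard that avoids recycling (the source name is hypothetical; another option is running the role elevated via <Runtime executionContext="elevated" /> in the .csdef):

using System.Diagnostics;
using System.Security;

public override bool OnStart()
{
    try
    {
        // Creating (or even probing for) an event source requires
        // elevated rights, which a role process lacks by default.
        if (!EventLog.SourceExists("MyWebRoleSource"))
            EventLog.CreateEventSource("MyWebRoleSource", "Application");
    }
    catch (SecurityException)
    {
        // Don't let OnStart throw; log elsewhere instead.
    }
    return base.OnStart();
}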
For more possible issues, check http://blogs.msdn.com/b/kwill/archive/2013/09/06/troubleshooting-scenario-3-role-stuck-in-busy.aspx
For me, the issue was with my Azure SQL DB firewall rules. My Azure SQL Database servers are not set to "Allow access to Azure services", so I have to explicitly list the IPs that are allowed.
I discovered this after wrapping my code in a try/catch that swallowed all exceptions, refactoring my OnStart() and RunAsync() methods, and setting all my references to Copy Local = True. None of that worked; then I saw that I had this line in my RunAsync() method:
log4net.Config.XmlConfigurator.Configure();
I am using the AdoNetAppender for log4net, connecting to an Azure SQL DB for logging, so that led me to check the firewall rules.
For me, I had differing versions of NuGet packages across my various projects. Once I consolidated everything to the same version(s), it worked fine.
With the release of Windows Azure SDK version 2.2 for Visual Studio 2012 and 2013, you can now remote-debug cloud resources within Visual Studio.
Once your cloud service is published and running live in the cloud, you can simply set a breakpoint in your local source code. This may help you dig into what's going wrong!
