Are there any limitations on creating separate processes from an Azure Web Site (specifically, from a continuous Web Job)? I have an executable that often (about 20% of the time) stalls and eventually fails with exit code -1073741819 (0xC0000005, an access violation), but only when run as a separate process. If this work is retried later, it eventually succeeds (usually on the first retry).
When instead I call this logic directly via a .NET method call (so within the same process and app domain), the code succeeds 100% of the time. The same code also always succeeds when run locally, even when it creates a separate process.
Is there anything going on at the Azure Web Sites/Web Jobs level that I should be aware of, such as using Windows job objects or other security mechanisms to limit the creation or runtime of spawned processes? If not, any suggestions on how to diagnose further what might be going wrong? (I believe remote desktop to a web site isn't possible; anything else that would help "see" what's failing, such as whether there's a WER dialog appearing?)
In case it matters, the logic (in both cases) includes P/Invoking custom native code, and the web site I'm using is Always On, x64, Basic pricing tier.
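To illustrate, here is a minimal sketch of the kind of launch-and-capture code in question (the executable name and arguments are placeholders, not the real ones); logging stderr and the raw exit code at least makes the failure visible:

```csharp
using System;
using System.Diagnostics;

class SpawnDiagnostics
{
    static void Main()
    {
        // "worker.exe" and its arguments are placeholders for the real executable.
        var psi = new ProcessStartInfo("worker.exe", "--some-args")
        {
            UseShellExecute = false,      // required for stream redirection
            RedirectStandardError = true, // capture native error output
            CreateNoWindow = true
        };

        using (var process = Process.Start(psi))
        {
            // Read stderr before waiting so the pipe can't fill up and block the child.
            string stderr = process.StandardError.ReadToEnd();
            process.WaitForExit();

            // -1073741819 is 0xC0000005 (STATUS_ACCESS_VIOLATION) as a signed int.
            Console.WriteLine($"Exit code: {process.ExitCode} (0x{process.ExitCode:X8})");
            if (stderr.Length > 0)
                Console.WriteLine($"stderr: {stderr}");
        }
    }
}
```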
@David Ebbo, thanks for the suggestion. I used it to help isolate the problem, and I ultimately found this was non-determinism in my code that the Azure Web Sites environment made more likely, but it was not strictly limited to that context.
Related
I'm trying to move some computations to Azure cloud services. One of the steps of the workflow I'm trying to implement includes running a Win32 desktop application that generates a file. Obviously, we cannot have user interaction for cloud calculations, so the application is launched with command-line arguments. The process starts, generates a file, and then exits. At the moment I cannot refactor the code and move this functionality to a windowless command-line utility.
First, I chose Azure Functions because they are intended for event-driven, short calculations, which is exactly what I need. They are also cheap. But I ran into a problem: processes in Azure Functions execute inside a sandbox that blocks User32/GDI32 system calls, which prevents me from launching desktop applications.
Another solution I came up with is mounting a virtual machine drive with all the needed Visual C++ redistributables installed and then using Azure Batch with nodes based on the pre-configured drive. But this solution has other drawbacks, since it takes minutes to bring up a new node. Of course, I could keep some nodes always active, but scaling further is still slow, and keeping nodes active is not cheap. I also have a feeling that Azure Batch is overkill, because there is no need for HPC in my case; Azure Functions' computation capabilities are enough for me.
Is there some kind of compromise solution, so that I get fast scaling and quick responses, but without having to set up Azure Batch on top of Azure Virtual Machines?
Many GDI32 calls are available now, but only in a containerized form.
So you can deploy your function together with the desktop application inside a Docker container.
Refer to the following article for more explanation.
Refer to the following documentation on how to deploy a containerized function.
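As a rough sketch of what such a function can look like, assuming an HTTP-triggered .NET function (the executable path, arguments, and output location below are placeholders):

```csharp
using System.Diagnostics;
using System.IO;
using Microsoft.AspNetCore.Http;
using Microsoft.AspNetCore.Mvc;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Extensions.Http;

public static class GenerateFileFunction
{
    // Launches the bundled desktop app inside the container and returns its output file.
    [FunctionName("GenerateFile")]
    public static IActionResult Run(
        [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req)
    {
        var psi = new ProcessStartInfo
        {
            FileName = @"C:\app\LegacyGenerator.exe",    // placeholder: exe baked into the image
            Arguments = @"/output C:\temp\result.dat",   // placeholder arguments
            UseShellExecute = false,
            CreateNoWindow = true
        };

        using (var process = Process.Start(psi))
        {
            process.WaitForExit();
            if (process.ExitCode != 0)
                return new StatusCodeResult(500); // surface the failure to the caller
        }

        return new FileStreamResult(
            File.OpenRead(@"C:\temp\result.dat"), "application/octet-stream");
    }
}
```

The key point is that the exe is baked into the container image next to the function app, so the sandbox restrictions of the default Functions hosting do not apply.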
I am building a provider-hosted app for SharePoint (O365) which is hosted in Azure. I do all of my logic through CSOM, more specifically using an MVC web project. At the moment, I have some branding logic being executed by the application after an AJAX call to a controller action.
If I have a lot of subsites in my hierarchy, this can take a very long time to execute. That is bad because, while the app will still process my request, leaving the page from which I called the action leaves me with no feedback about the completion of the task. This is of course because the state of the request is tied directly to the callback of that request in the calling page. It also means that someone could very well launch the request, refresh the page, and then launch it again, since I have no way to tell whether a previous request is still executing. Furthermore, two different users could launch the same request, resulting in two simultaneous executions of that request's logic. Both situations can result in some nasty concurrent modification errors on server-side artifacts.
So, what I need is a way to check whether a certain request is already running and, if it is not, launch one that is stateful and asynchronous. The best example I can think of is SharePoint O365's own long-running task mechanics: time-intensive tasks (such as installing an app or creating a new site collection) can be launched from a page, and any subsequent refresh or access to that page will display the task as currently running, and even sometimes provide the option to cancel it (such as in an app install). The state will also get updated on its own (such as when the site collection creation finishes), though I am not sure whether that is the result of client-side polling or some other mechanism I do not know about.
I have seen some solutions that seemed promising, like using Windows Services directly on Azure or this poor man's timer job, although none seem to fulfill all the requirements I listed above and/or seem complicated to implement for what I want to do. I have a feeling that Timer Jobs could potentially help, but I wanted to have your advice on the situation.
Thanks for your input
Try using an Azure Worker Role. Use CSOM and side-loading of a SharePoint provider-hosted app with Tenant Full Control permissions. The side-loading part enables your worker to read from and write to SharePoint Online.
Side-loading is done via /_layouts/appregnew.aspx and /_layouts/appinv.aspx.
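To cover the "is it already running?" requirement, the worker can gate each job behind a status record. Below is a minimal sketch of the idea in C# (the class and status values are illustrative, and the store is in-memory for brevity; in a real worker role you would persist the status to a SharePoint list or an Azure table so it survives restarts and is visible to all users):

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;

public class BrandingJobManager
{
    // Maps a job key (e.g. the site collection URL) to its current status.
    private readonly ConcurrentDictionary<string, string> _status =
        new ConcurrentDictionary<string, string>();

    // Returns false if a job with this key is already running.
    public bool TryStart(string jobKey, Func<Task> work)
    {
        if (!_status.TryAdd(jobKey, "Running"))
            return false; // someone else already launched it

        Task.Run(async () =>
        {
            try
            {
                await work();
                _status[jobKey] = "Completed";
            }
            catch (Exception ex)
            {
                _status[jobKey] = "Failed: " + ex.Message;
            }
        });
        return true;
    }

    // Pages poll this to show progress even after a refresh.
    public string GetStatus(string jobKey) =>
        _status.TryGetValue(jobKey, out var s) ? s : "NotStarted";
}
```

A page can then call TryStart when the user clicks the button and poll GetStatus on load, which shows the "currently running" state after a refresh and mirrors the long-running task behavior you describe.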
I have a Windows Azure Web Site that recently got temporarily suspended due to CPU usage quotas. That's fine, I'd normally let it reset and continue on. But CPU usage has been extremely high for the last 12 hours now and doesn't seem to be stopping despite the site being suspended. Because it is suspended, there's been no data out, no incoming requests to process, nothing. So what the heck is suddenly using all of this CPU power? This site has been running just fine, well under quota, for months without any deployments/code changes.
Rather than determining the cause of the high CPU usage, I'm more concerned with simply getting it to stop for now. As it is, every time the CPU quota resets, it immediately gets suspended again, since usage is still so ridiculously high.
Is there some way I can kill the process/site? The Stop/Restart buttons are missing from the Azure Management portal (I'm guessing because it is suspended), but despite being suspended for so long now it is still consistently eating up CPU. (On the activity graph, CPU usage spiked sometime yesterday afternoon and has been plateaued since.)
This is very odd, and not what I would expect to see happen. You have a few options to try:
Attempt to use PowerShell to stop the web site. You can download the Windows Azure PowerShell cmdlets at http://www.windowsazure.com/en-us/downloads/. The Stop-AzureWebsite cmdlet is what you'll want to use (for example, Stop-AzureWebsite -Name "yoursitename"). If you are unfamiliar with the PowerShell cmdlets, there are some examples and info on Microsoft's website.
Put in a support ticket if you have a support plan. If you don't have a support plan, you can also post on the MSDN Forums, which I think support folks watch more closely than they do here. There is no SLA or such with the free accounts, but they do see these and investigate. It just might take some time.
If you definitely want to stop it, one option is to delete it, if that option is still available to you in the portal. If you don't see it in the portal either, there is a PowerShell cmdlet, Remove-AzureWebsite, which performs this operation. This will also remove the code and data for the site unless that data is persisted outside of the web site environment (such as in a database). This might be your last resort, so hopefully you have the content of the site backed up or in source control somewhere. If not, attempt to get to it using FTP.
Add an app_offline.htm file to the root of the site; this will cause IIS to stop the application.
http://weblogs.asp.net/scottgu/archive/2005/10/06/426755.aspx
It does sound like you have a piece of code stuck in a loop, though. Are you using threads in the site?
Edit
As I have never run into this situation, I don't know whether your FTP actions are also blocked.
Behavior:
Application is loaded and being used as expected.
Suddenly, a particular DLL can no longer be loaded. The error message is:
ActiveX component cannot create object.
In each case, the object had been created successfully many times before failure. All objects are marked for "retain in memory".
This error is cleared when the application pool is recycled. It may be hours or months before it is seen again.
The issue has happened within two hours of a refresh, and has also gone months of uptime without appearing.
The issue has happened with hundreds of simultaneous users (heavy usage) and also with 1-3 users.
While the issue is occurring, the process running that application pool cannot create the object that is failing. However, it can create any other object. Memory, CPU, and other resources all remain at normal usage. In addition, other processes (such as a stand-alone exe) can successfully create the object.
The first instance of the issue appeared in mid-2008. There have been fewer than fifty instances since then, despite a pool of hundreds of servers on which it could occur. All instances except one have failed on the same DLL.
DLL Failure Info:
Most common: a generic data structure implementing a B-tree; it has no references other than to its interface. The code consists of arrays and one use of the VB6 Event functionality. The object has not been changed in any way since 2005.
One-time: interop to a .NET module. The failure occurs when trying to create the interop object, not the .NET object. This object is updated a few times each year.
Application Environment:
IIS hosted application
VB6, classic ASP, some interop to minor .NET components
Windows Server 2003 / Windows Server 2008 (both have independently had the problem)
Attempts to Reproduce:
Using scripts (and real-life humans) to run the same end-user workflows that our logs reported in the days before the issue occurred.
Using scripts to create/destroy suspected objects as fast as possible from multiple simultaneous sessions.
Wild speculation.
No intentional success, but it does manifest randomly on the servers on its own.
Troubleshooting:
Code reviews
Test harnesses to investigate upper limits of object creation / destruction
Verification of ability to create object outside of the process experiencing the issue
Monitoring of resources over time on servers under load
Review of IIS, error, and event logs to determine events leading up to issue
Questions:
Any ideas on how to reproduce the issue?
What could cause this behavior?
Ideas for bypassing the first two questions in favor of a fast solution?
The DLL isn't on a network drive, is it? You can get "glitches" where the drive is momentarily unavailable, which means COM can't do what it needs, and it can then fail to notice that the drive is available again.
I used Process Monitor to debug a similar problem when accessing the ADO/OLEDB stack. It turned out the environment got corrupted at some point; the ADO classes are registered with an InprocServer32 value of type REG_EXPAND_SZ pointing to %CommonProgramFiles%\System\ado\msado15.dll or similar on x64 OSes.
Also, when you register an application with Restart Manager, on failure the process gets restarted by the winlogon process, whose environment is different from explorer's and unfortunately is missing %CommonProgramFiles% -- ouch!
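One quick way to check for this class of problem is to read the raw InprocServer32 value and compare it with what it expands to in the suspect process's environment. A minimal C# sketch, with a placeholder CLSID (substitute the CLSID of the failing class):

```csharp
using System;
using Microsoft.Win32;

class ComRegistrationCheck
{
    static void Main()
    {
        // Placeholder CLSID; substitute the CLSID of the class that fails to create.
        const string keyPath =
            @"CLSID\{00000000-0000-0000-0000-000000000000}\InprocServer32";

        using (var key = Registry.ClassesRoot.OpenSubKey(keyPath))
        {
            if (key == null) { Console.WriteLine("Class not registered."); return; }

            // Read the raw registry data without expanding environment variables...
            string raw = key.GetValue(null, null,
                RegistryValueOptions.DoNotExpandEnvironmentNames) as string;
            if (raw == null) { Console.WriteLine("No default value."); return; }

            // ...then expand it using *this* process's environment.
            string expanded = Environment.ExpandEnvironmentVariables(raw);

            Console.WriteLine("Raw:      " + raw);
            Console.WriteLine("Expanded: " + expanded);
            Console.WriteLine("%CommonProgramFiles% = " +
                Environment.GetEnvironmentVariable("CommonProgramFiles"));
        }
    }
}
```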
This seems like a random failure; some race condition.
Try VMware to record the state of the machine you run this DLL on. When the error happens, you can then replay the recording and inspect the memory contents. That way you won't have to play try-and-catch with the error. At least you will have a solid record of it.
While I can't provide a solution, try catching the error and retrying the DLL load when this happens, after a refresh of the environment.
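For illustration, a retry wrapper might look like the following. This is sketched in C# for brevity (in the classic ASP layer the equivalent would wrap CreateObject in On Error Resume Next); the ProgID, attempt count, and backoff are all illustrative:

```csharp
using System;
using System.Runtime.InteropServices;
using System.Threading;

static class ComRetry
{
    // Retries late-bound COM creation a few times before giving up.
    // The ProgID is a placeholder; substitute the failing component's ProgID.
    public static object CreateWithRetry(string progId, int attempts = 3)
    {
        for (int i = 1; ; i++)
        {
            try
            {
                var type = Type.GetTypeFromProgID(progId, throwOnError: true);
                return Activator.CreateInstance(type);
            }
            catch (COMException) when (i < attempts)
            {
                Thread.Sleep(500 * i); // brief backoff before the next attempt
            }
            // On the final attempt the filter is false, so the exception propagates.
        }
    }
}
```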
I haven't found a definitive list out there, but hopefully someone's got one going, or we can come up with one ourselves. What causes disruptions for .NET applications, or general service disruption, running on IIS? For instance, web.config changes recycle the application domain and force JIT recompilation (while just deploying a single page doesn't affect the whole app), and iisresets halt everything (natch, but you see where I'm going). How about things like creating a new virtual directory under a current web app?
It's helpful to know all the cases so you know if you can affect a change to a server without causing issues with the whole thing.
EDIT: I had IIS 6 in mind when I asked, but of course a list of anything different in other versions would be helpful as well to people.
It depends on what exactly you are talking about with disruptions. IISReset can cause a Service Unavailable message to display for a short time as IIS is shut down and restarted.
Changes to the web.config, or adding a .dll file to the bin directory of an application, cause a recycle of the application domain, but that is not a disruption exactly; it is more of a "delay" in responding. The user will NOT see an error, just a delayed response from the server. You can also get that from changing any files in App_Code or .vb files on non-WAP developed sites.
You can also get IIS worker process shutdowns due to inactivity; the default setting is 20 minutes. Again, this is a delay, not a lack of service.
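If you want hard evidence of which of these events recycled a given application domain, ASP.NET will tell you at shutdown. A minimal sketch for Global.asax (the log path is a placeholder):

```csharp
using System;
using System.IO;
using System.Web;
using System.Web.Hosting;

public class Global : HttpApplication
{
    protected void Application_End()
    {
        // HostingEnvironment reports why ASP.NET tore the app domain down:
        // ConfigurationChange, BinDirChangeOrDirectoryRename, IdleTimeout,
        // MaxRecompilationsReached, and so on.
        var reason = HostingEnvironment.ShutdownReason;

        // Log somewhere durable; the path here is a placeholder.
        File.AppendAllText(@"C:\logs\recycles.log",
            $"{DateTime.UtcNow:o} app domain shutdown: {reason}{Environment.NewLine}");
    }
}
```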