Azure Worker Role not working properly

I have a Worker Role which isn't working in the cloud environment.
When I run it locally, it runs perfectly, but when I deploy it to Azure I'm having some trouble. The deployment itself occurs seamlessly, but after the VM starts, my app doesn't run. There is nothing in the Event Log, and even after I set up the app to flush all Trace messages to an Azure Table, nothing is written there either.
How can I check whether my app is really running on the VM? Why doesn't my app work there when it works locally?

Have you tried implementing diagnostics on your worker role? This is the best way to find any errors in your code. Another solution is to install Sysinternals during startup; Patriek van Dorp has made a NuGet package that adds the Sysinternals suite as a plugin for your cloud project.
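For a classic Worker Role, that looks roughly like the sketch below: a minimal OnStart that turns on Windows Azure Diagnostics and ships trace logs to table storage. This assumes the SDK 1.x-era DiagnosticMonitor API and its standard connection-string setting; later SDKs configure this through diagnostics.wadcfg instead.

using System;
using Microsoft.WindowsAzure.Diagnostics;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WorkerRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // Start from the default configuration and transfer verbose
        // trace output to storage (the WADLogsTable) every minute.
        var config = DiagnosticMonitor.GetDefaultInitialConfiguration();
        config.Logs.ScheduledTransferLogLevelFilter = LogLevel.Verbose;
        config.Logs.ScheduledTransferPeriod = TimeSpan.FromMinutes(1);

        DiagnosticMonitor.Start(
            "Microsoft.WindowsAzure.Plugins.Diagnostics.ConnectionString",
            config);

        return base.OnStart();
    }
}

If traces still never reach the table, the role is likely failing before OnStart completes, which is a reason to RDP in as suggested below.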

The best way is to enable RDP and remote into the machine. Then you can look at the processes running and ensure that things are running as you expect. It is odd that there is nothing in the event log if it is failing to run. Does the portal show the deployment as Ready?

Maybe too late, but I had a similar issue. When I ran it locally, it worked; after deployment, nothing ran. The problem was that I was deploying to a Web Site, where worker roles are not available. You should deploy to a Cloud Service (both roles are available there), or make a scheduler which sends requests to a page of yours that processes the job you need, at whatever interval you choose.
Regards

Related

Post-build actions in Python/Linux webapp do not run

I have a Django-based web app deployed from Github, running in Python 3.9. The app deploys and starts successfully.
I need to add post-build actions to complete the deployment: the exceedingly common Django task of running "manage.py". Following the general and Python-specific docs, I have added an app setting of
POST_BUILD_SCRIPT_PATH=postbuild.sh
There is a shell script, postbuild.sh, in the root of my app, which runs fine if I SSH into the running container. The expected behaviour is that it runs after deployment and writes its output to the deployment log. Neither of these things happens.
I have tested the app setting POST_BUILD_COMMAND with a very simple echo, and that does nothing either.
Can you tell me either what I need to do to make these app settings work, or suggest an alternative method of running the post-build script?
Please note that this is a Linux app using Oryx, so answers concerning Windows/Kudu like this one aren't related.
I noticed you asked your question over here as well. Your setting needs to be set to a relative path, /postbuild.sh.

After deploying a WebJob, blob trigger execution stopped working

I have deployed my WebJob to the production environment and suddenly the blob trigger stopped working (looking into App Insights, I can see that the blob trigger is not called).
If I debug the same code from my local machine, the blob trigger fires, but it stopped working in the production environment.
Always On is enabled.
Also, these containers are present:
azure-jobs-host-archive,
azure-jobs-host-output,
azure-webjobs-dashboard,
Microsoft.Azure.WebJobs.Host.
Installed packages:
Microsoft.Azure.WebJobs, version 2.0.0,
Microsoft.Azure.WebJobs.Host, version 2.0.0.
According to your description, we can't directly work out why the blob trigger doesn't work. I suggest you use the approaches below to troubleshoot it yourself.
"If I debug the same code from my local machine, the blob trigger fires, but it stopped working in the production environment."
Since your WebJob works well locally, I guess there may be something wrong with the connection string configuration in your web app. I suggest you change the production environment's app settings to set the storage connection string there:
1. Open the portal and edit the app settings.
2. Create another web app, publish the WebJob to it, and run it.
If neither of these approaches helps you solve the error, I suggest you post the log/error messages and the detailed code of the WebJob.
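For reference, with WebJobs SDK 2.x a blob trigger only fires if the host can read the storage connection strings. A minimal sketch of where those come from (the container name is a placeholder; AzureWebJobsStorage and AzureWebJobsDashboard are the SDK's conventional setting names):

using System.IO;
using Microsoft.Azure.WebJobs;

class Program
{
    static void Main()
    {
        // JobHostConfiguration reads the AzureWebJobsStorage and
        // AzureWebJobsDashboard connection strings from app settings;
        // if these point at the wrong storage account in production,
        // blob triggers silently never fire.
        var config = new JobHostConfiguration();
        new JobHost(config).RunAndBlock();
    }
}

public class Functions
{
    // "input-container" is a placeholder; use your real container name.
    public static void ProcessBlob(
        [BlobTrigger("input-container/{name}")] Stream blob, string name)
    {
        // ... process the blob here ...
    }
}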

Azure: Worker role looping through "recycling"

I'm currently working on an Azure project that works 100% locally with emulator resources. I'm now trying to deploy a worker role, but I'm running into an issue that I'm not sure how to troubleshoot.
Upon deploying the worker role in my Azure portal, the two instances continually loop through "recycling".
I can try to RDP into the role, but I only have about a minute to look around before the connection closes, I'm assuming due to the recycling.
After some searching it doesn't seem like this is a super common problem. Is there something trivial I'm overlooking that could be causing this issue? How would you go about troubleshooting this? Thank you for your time :)
In case of a missing reference, you can troubleshoot this issue by:
Unzipping your CSPKG file and then unzipping the .CSSX file inside it (just rename the CSSX to .zip), and checking that all the references and static content are there. This way you can match it against what is on the VM. Also, in the two-minute window when you RDP in, look in the Application event log for the exception and capture it, because that will be the key to finding the root cause.
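A minimal sketch of that inspection, assuming .NET's System.IO.Compression (the path is made up; both the .cspkg and the .cssx inside it are just ZIP archives):

using System;
using System.IO;
using System.IO.Compression;

class PackageInspector
{
    static void Main(string[] args)
    {
        // Hypothetical path; pass your own .cspkg as the first argument.
        string package = args.Length > 0 ? args[0] : @"C:\deploy\MyService.cspkg";

        using (var zip = ZipFile.OpenRead(package))
        {
            foreach (var entry in zip.Entries)
            {
                Console.WriteLine(entry.FullName);

                // The role payload is the .cssx entry; it is itself a ZIP
                // with a different extension, so extract and list it too.
                if (entry.FullName.EndsWith(".cssx", StringComparison.OrdinalIgnoreCase))
                {
                    string tmp = Path.Combine(Path.GetTempPath(), "role.zip");
                    entry.ExtractToFile(tmp, overwrite: true);
                    using (var inner = ZipFile.OpenRead(tmp))
                        foreach (var e in inner.Entries)
                            Console.WriteLine("  " + e.FullName);
                }
            }
        }
    }
}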
If you can see the exception in the event log, you can find where it was generated. You can also use IntelliTrace, which might require you to redeploy the app.
There are also ways to debug it by copying WinDbg to the VM and attaching to the specific process. I am not sure how much you will want to try, but just copying WinDbg to the VM would be enough (depending on how much experience you have with WinDbg and how much time you want to spend).
I've also been pestered by this role-recycle issue numerous times. Here is the sequence of steps to debug persistent role recycles:
Debugging Azure Role Recycles
1. Enable Remote Access to your role (RDP login).
2. Check eventvwr.msc (Windows Logs -> Application, Applications & Services Logs -> Windows Azure).
3. Review the Azure text-file logs across both C:\logs and C:\resources.
4. Review custom logs on volume E: or F: for any custom role-startup logging.
5. Run AzureTools and attach to the startup processes (download WinDBG, use Utils -> Attach Debugger, select the process - WaWorkerHost/WaIISHost, etc.), press g to continue, and watch the debugger output for assemblies failing to load.
Installing Azure Debugging Tools via Powershell
PS> md c:\tools; Import-Module bitstransfer; Start-BitsTransfer http://dsazure.blob.core.windows.net/azuretools/AzureTools.exe c:\tools\AzureTools.exe; c:\tools\AzureTools.exe
If all the items above fail, try the other tools in the AzureTools treasure trove, such as Fusion logging. The approach above will work!
WinDBG Sample Output - Failing to Locate Assembly (WaIISHost)
The most likely cause is that you have a missing assembly. One tactic to catch this is to wrap any startup processing in a master try/catch that manually logs the error to Azure storage.
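A minimal sketch of that tactic, assuming the classic storage client and service runtime libraries (the setting name and container name here are made up):

using System;
using Microsoft.WindowsAzure.ServiceRuntime;
using Microsoft.WindowsAzure.Storage;

public class WorkerRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        try
        {
            // ... your real startup work goes here ...
            return base.OnStart();
        }
        catch (Exception ex)
        {
            try
            {
                // "StorageConnectionString" and "startup-errors" are
                // hypothetical names; substitute your own.
                var account = CloudStorageAccount.Parse(
                    RoleEnvironment.GetConfigurationSettingValue("StorageConnectionString"));
                var container = account.CreateCloudBlobClient()
                                       .GetContainerReference("startup-errors");
                container.CreateIfNotExists();
                container.GetBlockBlobReference(DateTime.UtcNow.Ticks + ".txt")
                         .UploadText(ex.ToString());
            }
            catch { /* last resort: never mask the original failure */ }
            throw; // still let the role recycle, but with a persisted stack trace
        }
    }
}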
If you added any references, check to make sure they're set to Copy Local = true, and that any external assets included in your service package were also set to be included.
From Avkash above:
Yes, this means some issue in your worker role code is causing your worker role host process to crash. If you look at your fault stack, you should see the function or the line from your code which generated this fault. If you need help, open a free Azure support incident with the Windows Azure support team and they will help you.
Just a suggestion: also check that any installables and other references you use are 64-bit. Azure VMs run a 64-bit OS. I was once stuck with this kind of problem due to 32/64-bit issues.
Are your worker roles exiting their work loop? A local recycle is very fast and you might not notice it, but spin-up time in the cloud can be long.
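For reference, a worker role instance is recycled as soon as Run() returns, so the canonical shape is an infinite loop; a minimal sketch (the sleep interval is arbitrary):

using System;
using System.Diagnostics;
using System.Threading;
using Microsoft.WindowsAzure.ServiceRuntime;

public class WorkerRole : RoleEntryPoint
{
    public override void Run()
    {
        // If this method ever returns, the fabric treats the role as
        // failed and recycles the instance - which locally can look
        // like a blip, but in the cloud means a long respin.
        while (true)
        {
            Trace.WriteLine("Working...");
            // ... do one unit of work here ...
            Thread.Sleep(TimeSpan.FromSeconds(10));
        }
    }
}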
If the issue is caused by a startup batch file, I have stopped the loop by editing the batch file on the instance to include "exit /b 0" at the beginning. This will tell Azure that the startup was successful and you then have all the time you need to diagnose issues without the VM getting killed.

Windows service status

I have written a Windows service in C#/.NET and installed it successfully. When I run the service manually, it runs, but the status of the service doesn't change to Stopped when the work finishes. How can I make it change its status to Stopped once the operation is over?
Thanks,
Well, the service normally registers itself with the SCM and reports its status to it. Although if the service isn't running at all, the SCM will simply mark it as stopped.
I would suggest reading Microsoft's introduction to services to get a better idea of how they work, and the best practices to use.
https://msdn.microsoft.com/en-us/library/d56de412.aspx
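One common pattern for a one-shot service (a sketch, not necessarily what your service does): run the work on a background thread and call ServiceBase.Stop() when it finishes, so the SCM sees the transition to Stopped:

using System.ServiceProcess;
using System.Threading;

public class OneShotService : ServiceBase
{
    protected override void OnStart(string[] args)
    {
        // Return from OnStart quickly; do the real work elsewhere.
        new Thread(() =>
        {
            DoWork();
            Stop(); // reports Stopped to the SCM once the operation is over
        }) { IsBackground = true }.Start();
    }

    private void DoWork()
    {
        // ... the actual one-shot operation (hypothetical) ...
    }

    public static void Main()
    {
        ServiceBase.Run(new OneShotService());
    }
}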

Deploying to SharePoint using the object model doesn't work reliably

Deploying to SharePoint using the object model or STSADM commands sometimes results in one or more packages ending up in the "Error" state in the web control; a redeploy usually fixes this instantly. Even stranger, if I create two apps, one which adds and one which deploys, I get no problems, but putting a delay between the steps in a single program does not have a similar effect.
If I rerun the deploy for packages which did not deploy successfully, it works fine, as long as I do not do it programmatically, in which case it makes no difference.
It is different files each time, and sometimes none at all.
I do use stsadm -execadmsvcjobs between add and deploy, and even between batches of deploys.
(I'm deploying around 10 WSP files programmatically.)
Does anyone have any ideas on why this happens, or how to solve it? It causes problems when I get to real implementations.
The problem lies in the fact that SharePoint performs app pool recycles and/or full IIS resets, as well as restarts of the SharePoint Timer Service (although I'm not completely sure about that last one). When you try to actually deploy the just-installed package, SharePoint is still busy getting up and running again; the timer job created to install/deploy is basically waiting for the central admin app pool to be fully running again.
The same thing happens (somewhat reproducibly) while retracting a solution: hit F5 repeatedly on the solution management page while the retraction is underway, and if you refresh fast enough it will hang and display "Error" in red.
My solution was to make a WebRequest to at least the central admin (or just do an SPSite site = new SPSite("centraladminurl")) in your deployment app or in PowerShell, after every deploy action as well.
This should fix the timing issue (basically a kind of race condition).
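A minimal sketch of that warm-up ping (the URL, retry count, and timeouts are placeholders):

using System;
using System.Net;
using System.Threading;

static class CentralAdminWarmup
{
    // Poll central admin until it answers, so the deployment timer job
    // isn't racing a recycling app pool.
    public static void WaitUntilResponsive(string centralAdminUrl)
    {
        for (int attempt = 0; attempt < 30; attempt++)
        {
            try
            {
                var request = (HttpWebRequest)WebRequest.Create(centralAdminUrl);
                request.UseDefaultCredentials = true; // farm admin context
                request.Timeout = 10000;
                using (var response = (HttpWebResponse)request.GetResponse())
                {
                    if (response.StatusCode == HttpStatusCode.OK)
                        return; // app pool is up again
                }
            }
            catch (WebException)
            {
                // Still recycling; back off and retry.
            }
            Thread.Sleep(2000);
        }
        throw new TimeoutException("Central admin did not come back up.");
    }
}

Calling this after every deploy (and retract) before issuing the next operation gives the central admin app pool time to come back before the next timer job needs it.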
