Automatically Restarting Website after IIS AppPool recycle - iis

I'm using a website which interacts with SQL Server Agent in order to schedule the automatic processing and emailing of reports. I recently noticed that when the AppPool recycles, I stop getting reports afterwards, until someone logs into the website again. It's possible for the website not to get hit for hours or days, during which all the scheduled tasks are lost.
I'd like to set up a Windows task to either run periodically or trigger off the AppPool recycle event, but I'm not sure what the task should be. One suggestion was to set up a Windows task that runs a .js script to hit the website, but this only works with Windows Authentication (which isn't being used):
// JScript for Windows Script Host: issue a synchronous POST to the site's default page
var xmlhttp = new ActiveXObject("MSXML2.XMLHTTP");
xmlhttp.open("POST", "http://localhost/website/default.aspx", false);
xmlhttp.send();
Looking for some suggestions on how to "wake up" a website after an IIS AppPool recycle.
Thanks.
ab.

Why not run GNU Wget from a scheduled task:
wget -O - http://mysite.com/default.aspx
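If installing Wget isn't an option, a scheduled PowerShell request does the same job. A minimal sketch, assuming Windows Server 2012 or later; the URL, task name and interval are placeholders:
# Hit the site every 15 minutes so the application gets re-warmed after a recycle.
$action  = New-ScheduledTaskAction -Execute "powershell.exe" `
    -Argument '-NoProfile -Command "Invoke-WebRequest -Uri http://mysite.com/default.aspx -UseBasicParsing | Out-Null"'
$trigger = New-ScheduledTaskTrigger -Once -At (Get-Date) `
    -RepetitionInterval (New-TimeSpan -Minutes 15) -RepetitionDuration ([TimeSpan]::MaxValue)
Register-ScheduledTask -TaskName "WakeUpWebsite" -Action $action -Trigger $trigger -User "SYSTEM"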

I ran across this today which seems like it might help you out. It's the IIS Warm-up module. It depends on whether you can convince your customer to install it, although it is an official IIS module, so hopefully that's no problem.
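For what it's worth, on IIS 8 and later that warm-up behaviour ships in the box as the Application Initialization feature, so it can be turned on with a few PowerShell commands. A sketch, assuming the WebAdministration module is available; the app pool and site names are placeholders:
# Install the built-in Application Initialization feature (IIS 8+ / Server 2012+).
Install-WindowsFeature Web-AppInit
Import-Module WebAdministration
# Start the app pool eagerly and keep it alive instead of waiting for the first request.
Set-ItemProperty "IIS:\AppPools\MyAppPool" -Name startMode -Value AlwaysRunning
# Preload the site so a warm-up request is issued whenever the pool starts or recycles.
Set-ItemProperty "IIS:\Sites\Default Web Site" -Name applicationDefaults.preloadEnabled -Value $true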

Related

Azure Recovery services scheduled tasks keep going disabled

I have installed Azure Recovery Services (MARS) onto a 2019 server. I can fully configure it using the GUI, but the scheduled backups just don't run.
I can run the backup manually and it runs perfectly and completes quickly; however, when I try to use the scheduler, it doesn't run.
I have checked the Task Scheduler and the job keeps switching to disabled with the notification:
User "System" disabled Task Scheduler task "\Microsoft\OnlineBackup\Microsoft-OnlineBackup"
When I installed the application, I changed the default path to C:\Domain Services to keep them separate. Is this where it went wrong?
I have other servers on the backup platform which are not having any issues at all. I have also tried the steps in:
https://learn.microsoft.com/en-us/azure/backup/backup-azure-mars-troubleshoot#backups-dont-run-according-to-schedule
And also
https://dirteam.com/bas/2019/01/09/the-mysterious-case-of-azure-backup-agent-not-running-its-schedule/
But neither has fixed the issue.
I am completely out of ideas, hoping that somebody can help me!
Change the settings in Task Scheduler for the Online Backup task so that it stays enabled.
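For illustration only (this is not the answer's original snippet), the task's state can be checked and re-enabled from PowerShell, using the task path quoted in the question:
# Inspect the MARS backup task and turn it back on if it has been disabled.
Get-ScheduledTask -TaskPath "\Microsoft\OnlineBackup\" -TaskName "Microsoft-OnlineBackup" |
    Select-Object TaskName, State
Enable-ScheduledTask -TaskPath "\Microsoft\OnlineBackup\" -TaskName "Microsoft-OnlineBackup"
# Confirm the schedule is live again.
Get-ScheduledTaskInfo -TaskPath "\Microsoft\OnlineBackup\" -TaskName "Microsoft-OnlineBackup" |
    Select-Object LastRunTime, LastTaskResult, NextRunTime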
I have no idea how, but the system is now working correctly and is no longer being disabled. I removed all the MARS software from the machine and re-installed it; it now works correctly and has been backing up for a few weeks.
Thank you for all your assistance.

IIS is serving but not executing a classic ASP script

I wrote a classic ASP script (.asp) for a customer a while back. It was running on IIS v6.1 on Windows 2003. The customer contacted me and said they had a catastrophic server failure and restored from backup, but my script isn't running now. I logged onto their server to check it out, and IIS is serving the file (I am prompted to save it when I browse to the script) but not executing the script.
Several people's hands were in the server before they called me. I think this is probably a simple config setting someone tried before they figured out how to enable the "ASP" web server role feature, but for the life of me I can't figure out what they did; this is obviously not the default behavior. If I were trying to get this behavior, I would add the .asp extension to the MIME types, but I checked and it isn't there.
What could cause IIS to serve the source of the ASP script without executing it?
Based on your question I am assuming your restored server is also Windows Server 2003. In that case, go to the file/folder permissions and enable the script/execute permission so that a server-side script processor handles the request. It's been almost a decade since I've touched a 2003 server, so I can't give you the exact steps, but you want to enable script permissions on that folder (I don't remember whether it's granular enough to drill down to a single file). Also, why on earth are they still running Server 2003? Is that version even still supported?
If it's IIS 7, first make sure your app pool is set to Classic mode, then go to the site, open the Handler Mappings section, click Edit, and configure the .asp handler there.
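As a rough check on IIS 7+, the usual reason ASP source gets served as a plain file is that the ASP role service, and with it the *.asp handler mapping, is missing. A sketch in PowerShell; the site name is a placeholder:
# Install the classic ASP role service if it is missing (Server 2012+ cmdlet; use Add-WindowsFeature on 2008 R2).
Install-WindowsFeature Web-ASP
Import-Module WebAdministration
# If this returns nothing, .asp requests fall through to the static file handler
# and the script source is served instead of being executed.
Get-WebHandler -PSPath "IIS:\Sites\Default Web Site" | Where-Object { $_.Path -eq "*.asp" }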

SharePoint 2013 Custom Timer Job running on development server but not on production server

I have developed a custom timer job for SharePoint 2013 in Visual Studio 2012 which sends email notifications. It works fine on the development server but not on the production server.
I followed these steps to debug it on the development server:
1. Deploy the timer job to the respective site.
2. Restart the timer service in services.msc.
3. Attach to the OWSTIMER process in Visual Studio.
4. Go to SharePoint 2013 Central Administration -> Monitoring -> Review Job Definitions, click on the respective timer job and select Run Now.
After doing this, the breakpoint in the Execute() method is hit in Visual Studio, so the job runs on the development server.
On the production server I cannot debug using Visual Studio, so I have deployed the packaged solution (.wsp).
I can see the feature is activated in Site Collection Administration-> Site Collection Features.
On the production server I follow these steps:
1. Restart the timer service in services.msc.
2. Go to SharePoint 2013 Central Administration -> Monitoring -> Review Job Definitions, click on the respective timer job and select Run Now.
To test whether the timer job is working on the production server, I added PortalLog.LogString("Flow test1"); at the start of the Execute() method. On the development server I see the message in the SharePoint logs, but on the production server I can't see "Flow test1" in the logs after I click Run Now in Central Administration.
Can anyone suggest what is the issue and a possible solution?
It seems to me that there are two issues:
Logging: you should use a different way of logging. A LoggingService is the preferred approach; use WriteEvent to write to the event log or WriteTrace to write to the ULS log.
Running the job: be sure that the OWSTIMER.EXE service on all web servers is restarted (this can be scripted in PowerShell; see the sketch below). I expect that you have correctly scheduled your job either in your PowerShell script or in your feature receiver.
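A sketch of that restart, assuming PowerShell remoting is enabled between the farm servers (SPTimerV4 is the SharePoint 2013 timer service name):
Add-PSSnapin Microsoft.SharePoint.PowerShell -ErrorAction SilentlyContinue
# Restart the SharePoint Timer Service (OWSTIMER.EXE) on every server in the farm.
Get-SPServer | Where-Object { $_.Role -ne "Invalid" } | ForEach-Object {
    Invoke-Command -ComputerName $_.Address -ScriptBlock {
        Restart-Service -Name SPTimerV4
    }
}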
Here are a few things to try:
Go to Central Administration and run the timer job from there, then go to the job history page and check whether it finished successfully. If there was an error, you should see the error message there; that will give you a clue on what's happening.
As Mazin said, restart the timer service on all servers. After deployment, the DLLs are cached by the process and you don't see your changes reflected.
Browse the SharePoint logs and search for an exception or error. You can narrow your search by selecting the timeframe in which your job ran. You can use the following PS script:
Get-SPLogEvent -StartTime "02/02/2014 11:00" -EndTime "02/02/2014 13:00" | Out-GridView
As stated here, it seems your job assembly is not deployed to the GAC. Verify that the assembly is present there; a quick check is sketched below.
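One quick way to check; the assembly name below is a placeholder for your timer job assembly:
# SharePoint 2013 (.NET 4) assemblies live under C:\Windows\Microsoft.NET\assembly,
# older .NET 2.0/3.5 assemblies under C:\Windows\assembly.
Get-ChildItem "C:\Windows\Microsoft.NET\assembly", "C:\Windows\assembly" -Recurse `
    -Filter "MyTimerJob*.dll" -ErrorAction SilentlyContinue |
    Select-Object FullName, LastWriteTime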

SharePoint 2007 WSP deployment process, restarting SPTimerV3 service

I've got a WF feature which I've been deploying into my development/test environment fairly frequently, and as such have run into an issue where the assembly seems to be cached by the SharePoint Timer service (SPTimerV3), and then an out-of-date version is used after the workflow rehydrates following a Delay Activity.
To fix this, I've tried adding "NET STOP SPTimerV3" and "NET START SPTimerV3" to my batch file after the STSADM commands that install the .WSP. This works to restart the timer service, and I no longer have the caching problem; however, restarting the timer this way seems to kill my SharePoint app pools in IIS fairly regularly.
Has anyone found a good way to restart the timer in a WSP deployment batch file without adverse effects? Do I need to restart another dependent service, or restart the app pools each time as well?
You need to restart IIS as well; IISRESET /noforce should do the trick.
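If the batch file is ever moved to PowerShell, the equivalent sequence is short; a sketch (otherwise just append iisreset /noforce after the existing NET START line):
# After the STSADM install/deploy commands, bounce the timer service and IIS together.
Restart-Service -Name SPTimerV3    # SharePoint 2007 timer service (OWSTIMER.EXE)
iisreset /noforce                  # restart IIS so the SharePoint app pools come back cleanly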

Deploying to SharePoint using the object model doesn't work reliably

Deploying to SharePoint using the object model or STSADM commands sometimes results in one or more packages being left in the "Error" state in the web control; a redeploy usually fixes this instantly. Even stranger, if I create two apps, one which adds and one which deploys, I get no problems, but putting a delay inside a single program does not have a similar effect.
If I run the deploy twice for packages which did not deploy successfully, it works fine, as long as I do not try to do it programmatically, in which case it makes no difference.
It is different files each time, and sometimes none at all.
I do use stsadm -o execadmsvcjobs between the add and the deploy, and even between two of the deploy batches.
(I'm deploying around 10 WSP files programmatically.)
Does anyone have any ideas on why this happens, or how to solve it? It causes problems when I get to real implementations.
The problem lies in the fact that SharePoint will perform app pool recycles and/or full IIS resets, as well as restarts of the SharePoint Timer Service (although I'm not completely sure about that). When you try to actually deploy the just-installed package, SharePoint is still busy getting up and running again; the timer job created to install/deploy is basically waiting for the Central Administration app pool to be fully running again.
The same thing happens (somewhat reproducibly) while retracting a solution: hit F5 a lot of times on the solution management page while the retract process is underway, and if you refresh fast enough it will hang and display "Error" in red.
My solution was to make a web request to at least Central Administration (or just do a new SPSite("centraladminurl")) in your deployment app or in PowerShell. Do this after every deploy action as well.
This SHOULD fix the timing issue (basically a kind of "race condition").
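A sketch of that wake-up step in PowerShell; the Central Administration URL is a placeholder, and WebClient is used so it also works on the older PowerShell versions typically found alongside stsadm:
# After each add/deploy, flush the pending admin timer jobs and poke Central Administration
# so its app pool is warm before the next deployment step runs.
& stsadm -o execadmsvcjobs
$client = New-Object System.Net.WebClient
$client.UseDefaultCredentials = $true
try {
    $client.DownloadString("http://centraladmin:2010/") | Out-Null
} catch {
    Write-Warning "Central Administration is not responding yet: $_"
}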
