How to get IIS site to start up automatically after deployment? - iis-7.5

I have a Web API service that I'm deploying to my various environments (using Octopus Deploy). It is supposed to do various tasks on startup, e.g. run any migration scripts required by Entity Framework Code First, and some background tasks managed by Hangfire. Trouble is, the web site only wakes up the first time someone makes a call to it. I want it to run as soon as I've deployed it. I could do it manually just by pointing a web browser at the API's home page, but that requires me to remember to do that, and if I'm deploying to multiple tentacles, that's a major PITA.
How can I force the web site to start up automatically, right after it's been deployed?

In the Control Panel, under "Turn Windows features on or off",
under "Web Server (IIS) | Web Server | Application Development",
select "Application Initialization".
In IIS, on the advanced settings for the Application Pool,
"Start Mode" should be set to "AlwaysRunning".
In IIS, on the advanced settings for the site,
"Preload Enabled" should be set to "true".
A new deployment will cause it to start again (possibly after a short delay).
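If you'd rather script those settings as part of the deployment, here is a minimal sketch; the pool and site names are placeholders, and note that on IIS 7.5 Application Initialization is a separate downloadable module rather than a Windows feature, so the feature install below applies to IIS 8+ on Windows Server:
# Install the Application Initialization feature (Windows Server / IIS 8+).
Install-WindowsFeature Web-AppInit
Import-Module WebAdministration
# "MyAppPool" and "MySite" are placeholders for your own names.
Set-ItemProperty "IIS:\AppPools\MyAppPool" -Name startMode -Value AlwaysRunning
Set-ItemProperty "IIS:\Sites\MySite" -Name applicationDefaults.preloadEnabled -Value $true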

You can use the Application Initialization Module for this, as many answers have mentioned, but there are a few benefits to dropping in a PowerShell step to your deployment...
It can warm up your site
It can act like a very basic smoke test
If it fails, the deployment is marked as failed in Octopus
I use this PowerShell script to warm up and check websites after deployment.
Write-Output "Starting"
If (![string]::IsNullOrWhiteSpace($TestUrl)) {
Write-Output "Making request to $TestUrl"
$stopwatch = [Diagnostics.Stopwatch]::StartNew()
$response = Invoke-WebRequest -UseBasicParsing $TestUrl -MaximumRedirection 0
$stopwatch.Stop()
$statusCode = [int]$response.StatusCode
If ($statusCode -ge 200 -And $statusCode -lt 400) {
Write-Output "$statusCode Warmed Up Site $TestUrl in $($stopwatch.ElapsedMilliseconds)s ms"
$stopwatch = [Diagnostics.Stopwatch]::StartNew()
$response = Invoke-WebRequest -UseBasicParsing $TestUrl -MaximumRedirection 0
$stopwatch.Stop()
$statusCode = [int]$response.StatusCode
Write-Output "$statusCode Second request took $($stopwatch.ElapsedMilliseconds)s ms"
} Else {
throw "Warm up failed for " + $TestUrl
}
} Else {
Write-Output "No TestUrl configured for this machine."
}
Write-Output "Done"

Use a PowerShell script to make a call to localhost, or to the specific machine being deployed to, post-deployment. The other option would be to use the Application Initialization Module.
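A minimal version of that post-deploy call might look like this; the URL and path are placeholders, and on a box hosting several sites you may need the site's real host name rather than localhost:
# Hypothetical health-check URL; any page that forces app startup will do.
Invoke-WebRequest -Uri "http://localhost/api/health" -UseBasicParsing | Out-Null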

You will need to perform the following steps:
1. Enable automatic start-up for the Windows Process Activation (WAS) and World Wide Web Publishing (W3SVC) services (enabled by default).
2. Configure automatic startup for the application pool (enabled by default).
3. Enable Always Running mode for the application pool and configure the auto-start feature.
For step 3, you'll need a special class that implements the IProcessHostPreloadClient interface. It will be called automatically by the Windows Process Activation service during its start-up and after each application pool recycle.
public class ApplicationPreload : System.Web.Hosting.IProcessHostPreloadClient
{
    public void Preload(string[] parameters)
    {
        // Write code here to kick off the things at startup.
    }
}
The complete process is documented in detail in the Hangfire docs:
http://docs.hangfire.io/en/latest/deployment-to-production/making-aspnet-app-always-running.html
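For reference, the wiring that guide describes lives in applicationHost.config; roughly (the pool, site, and type names here are placeholders for your own):
<!-- names below are placeholders -->
<applicationPools>
    <add name="MyAppPool" startMode="AlwaysRunning" />
</applicationPools>
<serviceAutoStartProviders>
    <add name="ApplicationPreload" type="MyApp.ApplicationPreload, MyApp" />
</serviceAutoStartProviders>
<sites>
    <site name="MySite">
        <application path="/" serviceAutoStartEnabled="true" serviceAutoStartProvider="ApplicationPreload" />
    </site>
</sites>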

Related

Azure Functions - PowerShell - "The pwsh executable cannot be found at ..."

When my PowerShell Azure Function runs using the Test/Run feature in the portal, I get this error in the connected console output.
The pwsh executable cannot be found at "C:\Program Files (x86)\SiteExtensions\Functions\3.3.1\workers\powershell\7\runtimes\win\lib\netcoreapp3.1\pwsh.exe"
Note that 'Start-Job' is not supported by design in scenarios where PowerShell is being hosted in other applications. Instead, usage of the 'ThreadJob' module is recommended in such scenarios
My script looks something like below.
Note that the invocation of the web request does indeed fail with an HTTP 500, triggering, I assume, the catch block and the if.
try {
    Invoke-WebRequest ...
}
catch {
    $exc = $_
}
if ($null -ne $exc) {
    Write-Warning "This failed when something blah."
    throw $exc
}
This is the gist. The real script actually makes a few web requests, any of which could fail. I want to ensure they all get executed, so I catch and store each exception, and only at the end does the script throw and fail; my hope is that at least one of the problems makes it out into logging or somewhere in the portal.
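A sketch of that pattern, collecting every failure instead of stopping at the first; the URLs are placeholders:
$urls = @("https://example.invalid/one", "https://example.invalid/two")  # placeholders
$failures = @()
foreach ($url in $urls) {
    try {
        Invoke-WebRequest -Uri $url -UseBasicParsing | Out-Null
    }
    catch {
        Write-Warning "Request to $url failed: $($_.Exception.Message)"
        $failures += $_
    }
}
# Throw once at the end, so every request still ran and every failure was logged.
if ($failures.Count -gt 0) {
    throw "$($failures.Count) of $($urls.Count) requests failed."
}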
The actual message looks like this (screenshot omitted). It smells like an Azure problem to me.
I just ran it again and it fixed itself. Thanks for wasting my time, Azure.

Moving away from Source-Safe but having problems installing SourceGear Vault on IIS 10

In order to keep my scripts I used to use Microsoft SourceSafe, but after many issues I migrated to SourceGear Vault, which stores all the data in a few SQL Server databases so that you can back them up, etc.
This question is specific to this version control system, SourceGear Vault.
In the past I had problems with the SourceGear Vault installation and they were fixed.
Now, again, I am finding it not straightforward to install the SourceGear Vault client.
What I have done so far
I have used the following PowerShell commands to install the server and client:
msiexec /i VaultProServer64_10_0_0_30736.msi
msiexec /i VaultProClient_10_0_0_30736.msi
The server installation went on without major problems, other than that you need to make sure you run the PowerShell above as Administrator. The same is valid for the client install.
The client install is OK too; the bit I have a problem with is IIS.
To find the version of IIS in PowerShell:
powershell "get-itemproperty HKLM:\SOFTWARE\Microsoft\InetStp\ | select setupstring,versionstring"
About the .NET version(s) I have installed
Running the PowerShell script below, I get:
Get-ChildItem 'HKLM:\SOFTWARE\Microsoft\NET Framework Setup\NDP' -Recurse |
Get-ItemProperty -Name Version,Release -EA 0 |
Where { $_.PSChildName -match '^(?!S)\p{L}'} |
Select PSChildName, Version, Release, @{
    name = "Product"
    expression = {
        switch -regex ($_.Release) {
            "378389" { [Version]"4.5" }
            "378675|378758" { [Version]"4.5.1" }
            "379893" { [Version]"4.5.2" }
            "393295|393297" { [Version]"4.6" }
            "394254|394271" { [Version]"4.6.1" }
            "394802|394806" { [Version]"4.6.2" }
            "460798" { [Version]"4.7" }
            {$_ -gt 460798} { "Undocumented 4.7 or higher, please update script" }
        }
    }
}
This is my current IIS SourceGear environment:
The Application Pools
In the IIS Manager, click on Application Pools. There are multiple pools for Vault. Check the Advanced Settings for each and look for "Enable 32-bit Apps." That should be set to False.
I have disabled "Enable 32-Bit Applications" on all of them, as you can see in the pictures below. I show only one of the application pools, but they are all set the same.
I had the following error, but it is fixed now (see below for more info):
When I go to http://localhost/vaultservice/index.html using Google Chrome,
I get the following error message:
HTTP Error 500.19 - Internal Server Error. The requested page cannot be accessed because the related configuration data for the page is invalid.
Error Code: 0x80070021
Config Error: This configuration section cannot be used at this path. This happens when the section is locked at a parent level. Locking is either by default (overrideModeDefault="Deny"), or set explicitly by a location tag with overrideMode="Deny" or the legacy allowOverride="false".
This "locked at a parent level" error was fixed by doing the following:
I needed to change some of the features (handler mappings and modules) from read-only to read/write.
(Before and after screenshots omitted.)
The error message when using the application
This is the error message I am currently getting when connecting using the Vault client:
Unable to connect to http://mathura/VaultService. No server was found at the specified URL. Please verify your network settings using the Options dialog under the Tools menu in the Vault GUI Client. Web Exception: The request failed with HTTP status 405: Method Not Allowed.
How can I troubleshoot this and get to a healthy installation?
I fixed the problem.
When going to http://mathura/VaultService/VaultService.asmx I was getting the following error:
The page you are requesting cannot be served because of the extension configuration. If the page is a script, add a handler. If the file should be downloaded, add a MIME map.
Then, from the question below:
"The page you are requesting cannot be served because of the extension configuration." error message
I had to check .NET Framework 4.5 Advanced Services > WCF Services > HTTP Activation, and that solved my problem.
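If you prefer to script it rather than use the Windows Features dialog, something like this should work (the feature name is as on Windows 8/10 client SKUs; Server uses Install-WindowsFeature with different names):
# Enables .NET 4.x WCF HTTP Activation without the GUI.
Enable-WindowsOptionalFeature -Online -FeatureName WCF-HTTP-Activation45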

Azure Service Fabric Application stuck in Deleting state

I had a deployment on my Service Fabric cluster go wrong: I attempted to delete an application and, for some reason, the deletion never seemed to complete. Now the application is stuck in the deleting state while all my deployments remain. I can't delete or upgrade the application since I get a status of "Deleting".
Is there a way to update the status of the application so I can then proceed to delete it (for real this time)?
You'll most likely need to use PowerShell and execute the application delete that way; I had this issue as well when starting out with Service Fabric.
For instructions on how to connect to the cluster using PowerShell, see the Service Fabric documentation.
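Note that the script below assumes you've already connected to the cluster, e.g. (the endpoint is a placeholder; secured clusters need certificate or Azure AD parameters):
Connect-ServiceFabricCluster -ConnectionEndpoint "mycluster.example.com:19000"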
$nodes = Get-ServiceFabricNode
foreach ($node in $nodes)
{
    $replicas = Get-ServiceFabricDeployedReplica -NodeName $node.NodeName -ApplicationName "fabric:/AppNameHere"
    foreach ($replica in $replicas)
    {
        # Force-removing each replica lets the stuck application finish deleting.
        Remove-ServiceFabricReplica -ForceRemove -NodeName $node.NodeName -PartitionId $replica.PartitionId -ReplicaOrInstanceId $replica.ReplicaOrInstanceId
    }
}
Deletions that get stuck, in my experience, are often due to the application not honoring cancellation tokens. What kind of application did you deploy?

Access Denied - get-wmiobject win32_service (Powershell)

11/13/2013 11:35:37 TRCW1 using local computer
11/13/2013 11:35:37 TRCE1 System.Management.ManagementException: Access denied
at System.Management.ManagementException.ThrowWithExtendedInfo(ManagementStatus errorCode)
at System.Management.ManagementObjectCollection.ManagementObjectEnumerator.MoveNext()
at Microsoft.PowerShell.Commands.GetWmiObjectCommand.BeginProcessing()
Code (inside a loop of server names):
$error.Clear() # clear any prior errors, otherwise same error may repeat over-and-over in trace
if ($LocalServerName -eq $line.ServerName)
{
    # see if not using -ComputerName on local computer avoids the "service not found" error
    Add-Content $TraceFilename "$myDate TRCW1 using local computer "
    $Service = (Get-WmiObject win32_service -Filter "name = '$($line.ServiceName)'")
}
else
{
    Add-Content $TraceFilename "$myDate TRCW2 using remote computer $($line.ServerName) not eq $LocalServerName"
    $Service = (Get-WmiObject win32_service -ComputerName $line.ServerName -Filter "name = '$($line.ServiceName)'")
}
if ($error -ne $null)
{
    Write-Host "----> $($error[0].Exception) "
    Add-Content $TraceFilename "$myDate TRCE1 $($error[0].Exception)"
}
I'm reading a CSV of server names. I finally added the exception logic, only to find I'm getting an "Access Denied". This was only happening on the local server. It seems almost backwards: the local server fails, whereas the remote servers work fine. I even changed the logic to test whether it was the local server, then tried leaving off the -ComputerName parameter on the WMI call (as shown in the code above), and I am still getting the error.
So far, my research shows the answer may lie with
set-item trustedhosts
But my main question is whether trustedhosts is applicable to local servers, or only remote servers. Wouldn't a computer always trust itself? Does it still use remoting to talk to itself?
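For reference, the usual form of that command is the following (the server name is a placeholder; -Concatenate appends to the existing list instead of replacing it):
Set-Item WSMan:\localhost\Client\TrustedHosts -Value "SERVER01" -Concatenate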
This server apparently was part of a cluster a long time before I got here, and now it's not. I'm also suspicious of that.
When I run interactively the script works fine, it's only when I schedule it and run it under a service account that it fails with the access denied. The Service Account is local Admin on that box.
I'm using get-wmiobject win32_service instead of get-service because it returns extra info I need to lookup the process, and date/time the service was started using another WMI call.
Running on Win 2008/R2.
Update 11/13/2013 5:27 PM:
I have just verified that the problem happens on more than one server (I took the scripts and ran them on another server). My CSV input includes a list of servers to monitor. The ones outside of my own server always return results; the ones for my own server, which omit the -ComputerName, fail. (I have tried with and without the -ComputerName parameter for the local server.)
Are you running the script "as administrator" (UAC)? When your credentials are calculated for your local instance, if you have UAC enabled and you didn't run it "as administrator", it removes the local administrator security token. Connecting to a different machine over the network (a) completely bypasses UAC, and (b) when the target evaluates your token, your group memberships are fully evaluated and thus you get "administrator" access.
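One quick way to check which token the scheduled task actually receives is to log whether the session is elevated; this is a standard check, not specific to this script:
# Returns $true only when the process holds an elevated (administrator) token.
$identity  = [Security.Principal.WindowsIdentity]::GetCurrent()
$principal = New-Object Security.Principal.WindowsPrincipal($identity)
$principal.IsInRole([Security.Principal.WindowsBuiltInRole]::Administrator)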
Probably unrelated, but I've just run across two 2008 R2 servers out of 10 on my system that reject THE FIRST performance counter that I'm collecting, but only when the script runs as a scheduled task. If I run it interactively it works at least 95% of the time. I'm collecting Disk Seconds/Read and Seconds/Write, so it's the reads that don't show, for these two servers only. I flipped the order and, what do you know, the writes don't report. I just added one drive's Seconds/Transfer as a sacrificial lamb at the start of my counter list, and voilà, now I don't get ACCESS DENIED on the reads and writes.
$counterlist = @("\\$server\PhysicalDisk(0*)\Avg. Disk sec/Transfer",
                 "\\$server\PhysicalDisk(*)\Avg. Disk sec/Read",
                 "\\$server\PhysicalDisk(*)\Avg. Disk sec/Write")
$counters = $counterlist | Get-Counter

SharePoint 2013 :- Web Application taking forever to create

I have a SharePoint 2013 installation on a Windows 8 machine.
I am trying to create a web application and it is taking forever. The creation process never stops. I checked in application event logs and found this error:
Machine 'SHAREPOINT2013C (SharePoint - 43000(_LM_W3SVC_1458308317_ROOT))' failed ping validation and has been unavailable since '1/22/2013 3:56:48 AM'.
Searched the web but could not find anything that works for me.
Can anyone suggest a way to resolve the issue? Thanks a lot in advance.
Below are my findings:
In order to recognize routing targets, IIS has to be able to process the SPPING HTTP method.
To test, run this code in PowerShell:
$url = "http://your-Routing-Target-Server-Name"
$myReq = [System.Net.HttpWebRequest]::Create($url)
$myReq.Method = "SPPING";
$response = $myReq.GetResponse();
$response.StatusCode
If you get the following error message:
Exception calling "GetResponse" with "0" argument(s): "The remote server returned an error: (405) Method Not Allowed."
that means the web front end is not set up to process the SPPING HTTP method.
To resolve the issue run the following commands on each routing target server:
Import-Module WebAdministration
Add-WebConfiguration /system.webserver/handlers "IIS:\" -Value @{
    name = "SPPINGVerbHandler"
    verb = "SPPING"
    path = "*"
    modules = "ProtocolSupportModule"
    requireAccess = "None"
}
This will add a handler for SPPING verb to IIS configuration.
Run the test script again to make sure this works.
So this has to do with the Request Management Service that runs on the WFE servers in SharePoint 2013. The Request Management Service is of no value since you only have one server. If you disable this service on your single-server farm (a sketch follows below), these messages will go away and your web application creation performance will greatly increase.
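A sketch of disabling it from the SharePoint 2013 Management Shell; the TypeName filter is my assumption, so verify the service's display name on your farm first:
# Stops the Request Management service instance on each server where it runs.
Get-SPServiceInstance | Where-Object { $_.TypeName -eq "Request Management" } |
    Stop-SPServiceInstance -Confirm:$false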
I recently faced this issue. I created a new web application and it was showing an "It shouldn't take long" popup, then after some time it showed a connection failure page. I browsed to the virtual directory folder for the new web application and found that the folder was totally empty.
Then what I did to solve this problem:
1. Open IIS.
2. Go to Application Pools.
3. Select the Central Admin application pool, right-click, and select "Advanced Settings".
4. There was a property named "Shutdown Time Limit"; it was set to "90" (seconds) by default. I changed it to 400 and clicked OK (a PowerShell equivalent is sketched below).
It restarted the application pool automatically. Then I created the new web application from Central Admin again, and it worked for me.
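A PowerShell equivalent, assuming the default Central Administration pool name (verify yours in IIS):
Import-Module WebAdministration
# shutdownTimeLimit is a TimeSpan; 400 seconds is 00:06:40.
Set-ItemProperty "IIS:\AppPools\SharePoint Central Administration v4" -Name processModel.shutdownTimeLimit -Value "00:06:40"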
I've found that these events correlate to when the specified application pools are recycled (mine recycle at a specific time in the morning). It's unfortunate that they're logged in the Event Viewer and I can't really clean them up.
