11/13/2013 11:35:37 TRCW1 using local computer
11/13/2013 11:35:37 TRCE1 System.Management.ManagementException: Access denied
   at System.Management.ManagementException.ThrowWithExtendedInfo(ManagementStatus errorCode)
   at System.Management.ManagementObjectCollection.ManagementObjectEnumerator.MoveNext()
   at Microsoft.PowerShell.Commands.GetWmiObjectCommand.BeginProcessing()
Code (inside a loop of server names):
$error.Clear()  # clear any prior errors, otherwise the same error may repeat over and over in the trace
if ($LocalServerName -eq $line.ServerName)
{
    # see if omitting -ComputerName on the local computer avoids the "service not found" error
    Add-Content $TraceFilename "$myDate TRCW1 using local computer "
    $Service = (Get-WmiObject Win32_Service -Filter "name = '$($line.ServiceName)'")
}
else
{
    Add-Content $TraceFilename "$myDate TRCW2 using remote computer $($line.ServerName) not eq $LocalServerName"
    $Service = (Get-WmiObject Win32_Service -ComputerName $line.ServerName -Filter "name = '$($line.ServiceName)'")
}
if ($error.Count -gt 0)  # safer than comparing the $error collection to $null
{
    Write-Host "----> $($error[0].Exception) "
    Add-Content $TraceFilename "$myDate TRCE1 $($error[0].Exception)"
}
I'm reading a CSV of server names. I finally added the exception logic, only to find I'm getting an "Access Denied", and only on the local server. It seems almost backwards: the local server fails, whereas the remote servers work fine. I even changed the logic to detect the local server and leave off the -ComputerName parameter on the WMI call (as shown in the code above), and I still get the error.
So far, my research shows the answer may lie with
set-item trustedhosts
But my main question is whether trustedhosts is applicable to local servers, or only remote servers. Wouldn't a computer always trust itself? Does it still use remoting to talk to itself?
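For reference, TrustedHosts is a WinRM client setting, so whether it even applies to WMI/DCOM calls like Get-WmiObject is part of what I'm asking here. The usual form of that command targets the WSMan: drive; a minimal sketch (the server name is only a placeholder):

# Typical way TrustedHosts is set; "RemoteServer1" is just a placeholder name.
Set-Item WSMan:\localhost\Client\TrustedHosts -Value "RemoteServer1" -Concatenate -Force
Get-Item WSMan:\localhost\Client\TrustedHosts   # verify the current list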
This server was apparently part of a cluster a long time ago, before I got here, and now it's not. I'm also suspicious of that.
When I run the script interactively it works fine; it's only when I schedule it and run it under a service account that it fails with the access denied. The service account is a local Admin on that box.
I'm using Get-WmiObject Win32_Service instead of Get-Service because it returns extra info I need to look up the process, and the date/time the service was started, using another WMI call.
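For context, a minimal sketch of that second WMI call, assuming $Service holds the Win32_Service object retrieved in the code above:

# Sketch only: join the service to its process via ProcessId (a standard Win32_Service property).
$proc = Get-WmiObject Win32_Process -Filter "ProcessId = $($Service.ProcessId)"
# Win32_Process.CreationDate is a DMTF datetime string; convert it to a .NET DateTime.
$started = [System.Management.ManagementDateTimeConverter]::ToDateTime($proc.CreationDate)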
Running on Win 2008/R2.
Update 11/13/2013 5:27 PM:
I have just verified that the problem happens on more than one server. (I took the scripts and ran them on another server.) My CSV input includes a list of servers to monitor. Queries against servers other than my own always return results; queries against my own server fail. (I have tried the local server both with and without the -ComputerName parameter.)
Are you running the script "as administrator" (UAC)? When your credentials are calculated for your local instance, if you have UAC enabled and you didn't run it "as administrator", the local administrator security token is removed. Connecting to a different machine over the network A) completely bypasses UAC, and B) when the target evaluates your token, your group memberships are fully evaluated, so you get "administrator" access.
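One way to confirm this from inside the scheduled task is to log whether the token is actually elevated; a minimal sketch, reusing the $TraceFilename from the question's code:

# Log the identity the task runs under and whether its token holds the local Administrators role.
$identity  = [Security.Principal.WindowsIdentity]::GetCurrent()
$principal = New-Object Security.Principal.WindowsPrincipal($identity)
$isAdmin   = $principal.IsInRole([Security.Principal.WindowsBuiltInRole]::Administrator)
Add-Content $TraceFilename "Running as $($identity.Name); elevated: $isAdmin"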
Probably unrelated, but I've just run across two 2008 R2 servers (out of 10 on my system) that reject the first performance counter I collect, but only when the script runs as a scheduled task; run interactively it works at least 95% of the time. I'm collecting Disk Seconds/Read and Seconds/Write, so it's the reads that don't show, for these two servers only. I flipped the order and, what do you know, the writes don't report. I added one drive's Seconds/Transfer as a sacrificial lamb at the start of my counter list, and voila, now I don't get ACCESS DENIED on the reads and writes.
$counterlist = @("\\$server\PhysicalDisk(0*)\Avg. Disk sec/Transfer",
                 "\\$server\PhysicalDisk(*)\Avg. Disk sec/Read",
                 "\\$server\PhysicalDisk(*)\Avg. Disk sec/Write")
$counters = $counterlist | Get-Counter
My current task is to set up an automatic configuration for Microsoft Azure Backup.
What I've done so far:
I wrote scripts and tasks that copy the installer to a remote server, execute it, make sure it's installed, register the server with Azure, and set up the schedule, the file specs, and everything around it.
And it all works.
The problem though: I have now received the task to also include "System State" within the backup.
I'm aware that this is about a 60-second task if you do it using the Azure console to schedule the backup. However, I'm required to build the script in a way that not a single finger has to be moved to complete the whole thing.
Question: Has anyone figured out whether it's possible to activate and include the System State backup (which ends up under %MARSDIR%\Scratch\SSBS) within the OBPolicy using only PowerShell?
If I activate it with the console and then run Get-OBPolicy, I find the System State listed alongside the other file specs.
However, I can't figure out how I would set it using New-OBFileSpec or anything similar.
Thanks in advance :)
Edit: To clarify
Assume I'm in the config window seeing this:
I can "check" C: by doing
New-OBFileSpec -FileSpec @("C:\")
What command should I use in PS to "check" System State?
Edit 2:
Below is the relevant part of the code.
How do I add System State to the $inclusions?
## Register Server with Azure
$credsfile = ## Path to Vault credential file
Start-OBRegistration -VaultCredentials $credsfile -Confirm:$false
# Create Policy
$newpolicy = New-OBPolicy
$sched = New-OBSchedule -DaysofWeek Monday,Tuesday,Wednesday,Thursday,Friday,Saturday,Sunday -TimesofDay 22:00
Set-OBSchedule -Policy $newpolicy -Schedule $sched
# File Spec
$inclusions = New-OBFileSpec -FileSpec @("E:\")
Add-OBFileSpec -Policy $newpolicy -FileSpec $inclusions
# Retention
$retentionpolicy = New-OBRetentionPolicy -RetentionDays 30
Set-OBRetentionPolicy -Policy $newpolicy -RetentionPolicy $retentionpolicy
## Set the Policy
Set-OBPolicy -Policy $newpolicy -Confirm:$false
# Set Machine Encryption Key
$PassPhrase = ConvertTo-SecureString -String "...." -AsPlainText -Force
Set-OBMachineSetting -EncryptionPassPhrase $PassPhrase
I have a Web API service that I'm deploying to my various environments (using Octopus Deploy). It is supposed to do various tasks on startup, e.g. run any migration scripts required by Entity Framework Code First, and some background tasks managed by Hangfire. Trouble is, the web site only wakes up the first time someone makes a call to it. I want it to run as soon as I've deployed it. I could do it manually just by pointing a web browser at the API's home page, but that requires me to remember to do that, and if I'm deploying to multiple tentacles, that's a major PITA.
How can I force the web site to start up automatically, right after it's been deployed?
In the Control Panel, under "Turn Windows features on or off",
under "Web Server (IIS) | Web Server | Application Development",
select "Application Initialization".
In IIS, on the advanced settings for the Application Pool,
"Start Mode" should be set to "AlwaysRunning".
In IIS, on the advanced settings for the site,
"Preload Enabled" should be set to "true".
A new deployment will cause it to start again (possibly after a short delay).
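Since the deployment is already scripted, these same settings can be applied from PowerShell; a rough sketch, assuming the WebAdministration module is available and using placeholder pool/site names:

# Enable the Application Initialization feature (Windows Server 2012+; use Add-WindowsFeature or the GUI on older servers).
Install-WindowsFeature Web-AppInit

Import-Module WebAdministration
# "MyAppPool" and "MySite" are placeholders for your actual app pool and site names.
Set-ItemProperty "IIS:\AppPools\MyAppPool" -Name startMode -Value AlwaysRunning
Set-ItemProperty "IIS:\Sites\MySite" -Name applicationDefaults.preloadEnabled -Value $true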
You can use the Application Initialization Module for this, as many answers have mentioned, but there are a couple of benefits to dropping in a PowerShell step to your deployment...
It can warm up your site
It can act like a very basic smoke test
If it fails, the deployment is marked as failed in Octopus
I use this PowerShell script to warm up and check websites after deployment.
Write-Output "Starting"
If (![string]::IsNullOrWhiteSpace($TestUrl)) {
Write-Output "Making request to $TestUrl"
$stopwatch = [Diagnostics.Stopwatch]::StartNew()
$response = Invoke-WebRequest -UseBasicParsing $TestUrl -MaximumRedirection 0
$stopwatch.Stop()
$statusCode = [int]$response.StatusCode
If ($statusCode -ge 200 -And $statusCode -lt 400) {
Write-Output "$statusCode Warmed Up Site $TestUrl in $($stopwatch.ElapsedMilliseconds)s ms"
$stopwatch = [Diagnostics.Stopwatch]::StartNew()
$response = Invoke-WebRequest -UseBasicParsing $TestUrl -MaximumRedirection 0
$stopwatch.Stop()
$statusCode = [int]$response.StatusCode
Write-Output "$statusCode Second request took $($stopwatch.ElapsedMilliseconds)s ms"
} Else {
throw "Warm up failed for " + $TestUrl
}
} Else {
Write-Output "No TestUrl configured for this machine."
}
Write-Output "Done"
Use a PowerShell script to make a call to localhost (or the specific machine being deployed to) post-deploy. The other option would be to use the Application Initialization Module.
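At its simplest, that post-deploy step can be a single request to the freshly deployed site (the URL here is just a placeholder):

# Hit the site once so the app pool spins up and the startup code runs.
Invoke-WebRequest -UseBasicParsing "http://localhost/" | Out-Null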
You will need to perform the following steps:
Enable automatic start-up for Windows Process Activation (WAS) and World Wide Web Publishing (W3SVC) services (enabled by default).
Configure Automatic Startup for an Application pool (enabled by default).
Enable "Always Running" mode for the application pool and configure the auto-start feature.
For step 3, you'll need a special class that implements the IProcessHostPreloadClient interface. It will be called automatically by the Windows Process Activation service during its start-up and after each application pool recycle (a sketch of the IIS auto-start wiring follows the class below).
public class ApplicationPreload : System.Web.Hosting.IProcessHostPreloadClient
{
public void Preload(string[] parameters)
{
//Write code here to kick off the things at startup.
}
}
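The class then has to be registered as an auto-start provider in IIS configuration; a hedged PowerShell sketch of that wiring (the provider name, type, and site name are placeholders, and the Hangfire link below shows the full applicationHost.config version):

Import-Module WebAdministration
# Register the preload class as a serviceAutoStartProvider (type is the assembly-qualified name of your class).
Add-WebConfiguration -Filter /system.applicationHost/serviceAutoStartProviders -PSPath "MACHINE/WEBROOT/APPHOST" `
    -Value @{ name = "ApplicationPreload"; type = "MyWebApp.ApplicationPreload, MyWebApp" }
# Point the site at the provider and enable auto-start for its applications.
Set-ItemProperty "IIS:\Sites\MySite" -Name applicationDefaults.serviceAutoStartEnabled -Value $true
Set-ItemProperty "IIS:\Sites\MySite" -Name applicationDefaults.serviceAutoStartProvider -Value "ApplicationPreload"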
The complete process is documented in detail in the Hangfire docs:
http://docs.hangfire.io/en/latest/deployment-to-production/making-aspnet-app-always-running.html
I can't start or stop Distributed Cache on one of my SharePoint 2013 WFE servers. It says "Starting" for the status in Central Admin. I'm getting server errors, and I think Distributed Cache may be corrupting data because of this. It got stuck in the "Starting" status when I first ran the command below.
Add-SPDistributedCacheServiceInstance
When I try to stop it using the command below, I get a "cacheHostInfo is null" error.
Stop-SPDistributedCacheServiceInstance -Graceful
I get the same "cacheHostInfo is null" error when I run the script below to delete the instance.
$instanceName ="SPDistributedCacheService Name=AppFabricCachingService"
$serviceInstance = Get-SPServiceInstance | ? {($_.Service.ToString()) -eq $instanceName -and ($_.Server.Name) -eq $env:COMPUTERNAME}
$serviceInstance.Unprovision()
$serviceInstance.Delete()
Any ideas how to resolve this?
I have a SharePoint 2013 installation on a Windows 8 machine.
I am trying to create a web application and it is taking forever; the creation process never finishes. I checked the application event logs and found this error:
Machine 'SHAREPOINT2013C (SharePoint - 43000(_LM_W3SVC_1458308317_ROOT))' failed ping validation and has been unavailable since '1/22/2013 3:56:48 AM'.
Searched the web but could not find anything that works for me.
Can anyone suggest a way to resolve the issue? Thanks a lot in advance.
Below are my findings:
In order to recognize routing targets, IIS has to be able to process the SPPING HTTP method.
To test, run this code in PowerShell:
$url = "http://your-Routing-Target-Server-Name"
$myReq = [System.Net.HttpWebRequest]::Create($url)
$myReq.Method = "SPPING";
$response = $myReq.GetResponse();
$response.StatusCode
If you get the following error message:
Exception calling "GetResponse" with "0" argument(s): "The remote server returned an error: (405) Method Not Allowed."
that means the web front end is not set up to process the SPPING HTTP method.
To resolve the issue, run the following commands on each routing target server:
Import-Module WebAdministration
Add-WebConfiguration /system.webServer/handlers "IIS:\" -Value @{
    name          = "SPPINGVerbHandler"
    verb          = "SPPING"
    path          = "*"
    modules       = "ProtocolSupportModule"
    requireAccess = "None"
}
This will add a handler for SPPING verb to IIS configuration.
Run the test script again to make sure this works.
So this has to do with the Request Management Service that runs on the WFE servers in SharePoint 2013. The Request Management Service is of no value when you only have one server. If you disable this service on your single-server farm, these messages will go away and your web application creation performance will greatly increase.
Mark Ringo
I recently faced this issue. I created a new web application and it showed the "It shouldn't take long" popup, then after some time it showed a connection failure page. I browsed to the virtual directory folder for the new web application and found that the folder was totally empty.
Here's what I did to solve the problem:
1. Open IIS
2. Go to Application Pools
3. Select the Central Admin application pool, right-click, and select "Advanced Settings".
4. There is a property named "Shutdown Time Limit"; it was set to "90" by default. I changed it to 400 and clicked OK.
It restarted the application pool automatically. Then I created a new web application from Central Admin again, and it worked for me.
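The same change can be made from PowerShell if you prefer scripting it; a sketch, assuming the default Central Administration app pool name (check yours in IIS):

Import-Module WebAdministration
# Shutdown Time Limit is stored as a timespan; 400 seconds = 00:06:40.
Set-ItemProperty "IIS:\AppPools\SharePoint Central Administration v4" -Name processModel.shutdownTimeLimit -Value "00:06:40"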
I've found that these events correlate to when the specified application pools are recycled (mine recycle at a specific time in the morning). It's unfortunate that they're logged in the Event Viewer and can't really be cleaned up.
I'm building a script to read the Security log from several computers. I can read the Security log from my local machine with no problem using the Get-EventLog command, but the problem is that I can't run it against a remote machine (the script is for PowerShell v1). The command below never returns any results, although with any other LogFile it works perfectly:
gwmi -Class Win32_NTLogEvent | where {$_.LogFile -eq "Security"}
I've done some research, and it seems to be an impersonation issue, but the -Impersonation option for Get-WmiObject does not seem to be implemented. Is there any way around this problem? The solution could be running Get-EventLog on a remote machine somehow, or dealing with the impersonation issue so that the Security log can be accessed.
Thanks
You could use .NET directly instead of going through WMI. The script block below will give you the first entry in the Security log:
# Get all event logs from the remote machine, pick out the Security log, and read its first entry.
$logs = [System.Diagnostics.EventLog]::GetEventLogs('computername')
$security = $logs | ? { $_.Log -like 'Security' }
$security.Entries[0]
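As a follow-up usage example (same $security object), you can filter the entries once you have them; for instance, to look at events from today:

# Filter client-side once the log is loaded; EventLogEntry exposes TimeGenerated, EntryType, Message, etc.
$security.Entries | Where-Object { $_.TimeGenerated -gt (Get-Date).Date } | Select-Object -First 10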
Have you tried the -Credential parameter? Also, use the -Filter parameter instead of Where-Object; the filter retrieves just the Security events, whereas Where-Object pulls ALL events from all logs and only then performs the filtering:
gwmi Win32_NTLogEvent -filter "LogFile='Security'" -computer comp1,comp2 -credential domain\user