I can't start or stop Distributed Cache on one of my SharePoint 2013 WFE servers. Its status in Central Admin says "Starting". I'm getting server errors, and I suspect Distributed Cache may be corrupting data because of this. It got stuck in the "Starting" status when I first ran the command below.
Add-SPDistributedCacheServiceInstance
When I try to stop it with the command below, I get a "cacheHostInfo is null" error.
Stop-SPDistributedCacheServiceInstance -Graceful
I get the same "cacheHostInfo is null" error when I run the script below to delete the instance.
$instanceName = "SPDistributedCacheService Name=AppFabricCachingService"
$serviceInstance = Get-SPServiceInstance | ? {($_.Service.ToString()) -eq $instanceName -and ($_.Server.Name) -eq $env:COMPUTERNAME}
$serviceInstance.Unprovision()
$serviceInstance.Delete()
Any ideas how to resolve this?
I'm looking for a solution to this problem and haven't been able to find one; I've tried everything online.
I'm trying to disable our on-premises Azure AD Connect directory sync. I enabled it as a test, but it turns out our environment is not set up correctly for it to work and requires some restructuring.
I've followed the standard instructions:
Connect-MsolService and Set-MsolDirSyncEnabled -EnableDirSync $false
Connect works fine, but when I try to run the disable command it returns the error Set-MsolDirSyncEnabled : You cannot turn off Active Directory synchronization.
I've been told it could take a while, but I enabled it last week, and most resources I've found say "24 - 72 hours".
The command (Get-MSOLCompanyInformation).DirectorySynchronizationStatus shows Enabled, and nothing is actually syncing.
Can anyone assist me with this issue?
Thank you!
You try to enable (or disable) Directory synchronization in Office 365, and you are greeted by the following error message.
PS C:\> Set-MsolDirSyncEnabled -EnableDirSync $false
Set-MsolDirSyncEnabled : You cannot turn off Active Directory synchronization.
At line:1 char:1
+ Set-MsolDirSyncEnabled -EnableDirSync $false
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : OperationStopped: (:) [Set-MsolDirSyncEnabled], MicrosoftOnlineException
+ FullyQualifiedErrorId : Microsoft.Online.Administration.Automation.DirSyncStatusChangeNotAllowedException,Microsoft.Online.Administration.Automation.SetDirSyncEnabled
The DirSyncStatusChangeNotAllowedException error in particular means that you have changed the status recently, and the service is simply preventing you from changing it back too soon.
Note: This error occurs even if the DirSync status has already been updated. It is simply a block on Microsoft's side to prevent you from changing the status too often.
Check the status now, or wait 12 to 72 hours for the change to be reflected:
Get-MsolCompanyInformation | Select-Object DirectorySynchronizationStatus
NO FIX: Unfortunately, there is no way around this error. It simply means that your directory is still doing a full initial sync with Azure AD. The error message will clear once the initial sync is complete. The time will vary depending on the size of your on-premises AD, but should take no longer than 72 hours even for very large environments.
Reference : https://www.michev.info/Blog/Post/1797/you-cannot-turn-off-active-directory-synchronization
Note: If the problem still isn't resolved, I would suggest reaching out to Microsoft Support; they can track down where the exact issue is.
We just started getting this error yesterday, but we haven't changed anything in our app. If we restart the function app, it runs for a short time and then starts giving us this error again. The function app is written in PowerShell. Any ideas?
Host Error: Microsoft.Azure.WebJobs.Script: Host thresholds exceeded: [Connections]
This was recently added in the runtime to track running out of available connections on the VM:
https://github.com/Azure/azure-webjobs-sdk-script/pull/2063
It indicates that you are running out of available connections. The detection could be right or wrong, but I can't tell without looking at your functions themselves.
If you would like to discuss it, the repo above is probably the place.
I'm not sure if it's the above change that caused this, but it is a coding issue on my side that is now being caught correctly by Azure Functions. I created the small repro below, and after I commented out the Close call, I received the error. My real code is more complex, but clearly somewhere I'm not closing connections.
$Ports = @(21,22,23,53,69,71,80,98,110,139,111,389,443,445,1080,1433,2001,2049,3001,3128,5222,6667,6868,7777,7878,8080,1521,3306,3389,5801,5900,5555,5901)
for ($i = 1; $i -le $Ports.Count; $i++) {
    $port = $Ports[($i - 1)]
    $client = New-Object System.Net.Sockets.TcpClient
    $beginConnect = $client.BeginConnect("123.123.123.123", $port, $null, $null)
    #$client.Close();
}
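For comparison, here is a version that closes each client so sockets are released instead of accumulating. This is a minimal sketch against the same placeholder endpoint; the one-second timeout is an arbitrary choice for illustration.
$Ports = @(21, 22, 23, 80, 443)   # trimmed list for illustration
foreach ($port in $Ports) {
    $client = New-Object System.Net.Sockets.TcpClient
    try {
        $beginConnect = $client.BeginConnect("123.123.123.123", $port, $null, $null)
        # Give the async connect up to one second to finish.
        if ($beginConnect.AsyncWaitHandle.WaitOne(1000)) {
            $client.EndConnect($beginConnect)   # completes the connect; throws if it failed
            Write-Host "Port $port open"
        }
    }
    catch {
        # connection refused or failed - treat the port as closed
    }
    finally {
        $client.Close()   # releases the socket so connections don't pile up
    }
}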
11/13/2013 11:35:37 TRCW1 using local computer
11/13/2013 11:35:37 TRCE1 System.Management.ManagementException: Access denied
   at System.Management.ManagementException.ThrowWithExtendedInfo(ManagementStatus errorCode)
   at System.Management.ManagementObjectCollection.ManagementObjectEnumerator.MoveNext()
   at Microsoft.PowerShell.Commands.GetWmiObjectCommand.BeginProcessing()
Code (inside a loop of server names):
$error.Clear() # clear any prior errors, otherwise the same error may repeat over and over in the trace
if ($LocalServerName -eq $line.ServerName)
{
    # see if not using -ComputerName on the local computer avoids the "service not found" error
    Add-Content $TraceFilename "$myDate TRCW1 using local computer "
    $Service = (Get-WmiObject win32_service -Filter "name = '$($line.ServiceName)'")
}
else
{
    Add-Content $TraceFilename "$myDate TRCW2 using remote computer $($line.ServerName) not eq $LocalServerName"
    $Service = (Get-WmiObject win32_service -ComputerName $line.ServerName -Filter "name = '$($line.ServiceName)'")
}
if ($error -ne $null)
{
    Write-Host "----> $($error[0].Exception) "
    Add-Content $TraceFilename "$myDate TRCE1 $($error[0].Exception)"
}
I'm reading a CSV of server names. I finally added the exception logic, only to find I'm getting "Access denied". This only happens on the local server, which seems almost backwards: the local server fails, whereas the remote servers work fine. I even changed the logic to test whether it was the local server and tried leaving off the -ComputerName parameter on the WMI call (as shown in the code above), and I still get the error.
So far, my research shows the answer may lie with
set-item trustedhosts
But my main question is whether trustedhosts is applicable to local servers, or only remote servers. Wouldn't a computer always trust itself? Does it still use remoting to talk to itself?
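(For reference, the current value can be inspected with the line below, assuming the WinRM service is configured; Set-Item on the same path is what changes it.)
# Show the current TrustedHosts list for the WinRM client
Get-Item WSMan:\localhost\Client\TrustedHosts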
This server apparently was part of a cluster a long time before I got here, and now it's not. I'm also suspicious of that.
When I run it interactively, the script works fine; it's only when I schedule it and run it under a service account that it fails with access denied. The service account is a local admin on that box.
I'm using Get-WmiObject win32_service instead of Get-Service because it returns extra info I need to look up the process and, via another WMI call, the date/time the service was started.
Running on Win 2008/R2.
Update 11/13/2013 5:27 PM:
I have just verified that the problem happens on more than one server (I took the scripts and ran them on another server). My CSV input includes a list of servers to monitor. The queries against other servers always return results; the queries against the local server fail, with or without the -ComputerName parameter.
Are you running the script "as administrator" (UAC)? When your credentials are calculated for the local instance, if you have UAC enabled and didn't run it "as administrator", the local Administrator security token is removed. Connecting to a different machine over the network (a) completely bypasses UAC, and (b) when the target evaluates your token, your group memberships are fully evaluated, so you get administrator access.
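A quick way to check whether the session is actually elevated is the snippet below (a minimal sketch using the standard .NET principal classes); run it both interactively and from the scheduled task and compare the results.
# Returns True if the current session holds the Administrator token (elevated)
$identity  = [Security.Principal.WindowsIdentity]::GetCurrent()
$principal = New-Object Security.Principal.WindowsPrincipal($identity)
$principal.IsInRole([Security.Principal.WindowsBuiltInRole]::Administrator)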
Probably unrelated, but I've just run across two 2008 R2 servers (out of 10 on my system) that reject the first performance counter I collect, but only when the script runs as a scheduled task; interactively it works at least 95% of the time. I'm collecting Disk sec/Read and sec/Write, and it's the Reads that don't show, for these two servers only. I flipped the order, and sure enough, then the Writes don't report. I added one drive's sec/Transfer as a sacrificial lamb at the start of my counter list, and voila, now I don't get "Access denied" on the Reads and Writes.
$counterlist = @("\\$server\PhysicalDisk(0*)\Avg. Disk sec/Transfer",
                 "\\$server\PhysicalDisk(*)\Avg. Disk sec/Read",
                 "\\$server\PhysicalDisk(*)\Avg. Disk sec/Write")
$counters = $counterlist | Get-Counter
I have a SharePoint 2013 installation on a Windows 8 machine.
I am trying to create a web application, and it is taking forever; the creation process never finishes. I checked the application event log and found this error:
Machine 'SHAREPOINT2013C (SharePoint - 43000(_LM_W3SVC_1458308317_ROOT))' failed ping validation and has been unavailable since '1/22/2013 3:56:48 AM'.
Searched the web but could not find anything that works for me.
Can anyone suggest a way to resolve the issue? Thanks a lot in advance.
Below are my findings:
In order to recognize routing targets, IIS has to be able to process the SPPING HTTP method.
To test, run this code in PowerShell:
$url = "http://your-Routing-Target-Server-Name"
$myReq = [System.Net.HttpWebRequest]::Create($url)
$myReq.Method = "SPPING";
$response = $myReq.GetResponse();
$response.StatusCode
If you get the following error message:
Exception calling "GetResponse" with "0" argument(s): "The remote server returned an error: (405) Method Not Allowed."
that means the web front end is not set up to process the SPPING HTTP method.
To resolve the issue, run the following commands on each routing target server:
Import-Module WebAdministration
Add-WebConfiguration /system.webServer/handlers "IIS:\" -Value @{
    name          = "SPPINGVerbHandler"
    verb          = "SPPING"
    path          = "*"
    modules       = "ProtocolSupportModule"
    requireAccess = "None"
}
This will add a handler for SPPING verb to IIS configuration.
Run the test script again to make sure this works.
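You can also confirm the handler entry directly (a quick check, assuming the WebAdministration module is still loaded):
# List the SPPING handler entry just added to the IIS configuration
(Get-WebConfiguration /system.webServer/handlers "IIS:\").Collection |
    Where-Object { $_.name -eq "SPPINGVerbHandler" }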
So this has to do with the Request Management Service that runs on the WFE servers in SharePoint 2013. The Request Management Service is of no value when you only have one server. If you disable this service on your single-server farm, these messages will go away and your web application creation performance will greatly increase.
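If you prefer to stop it from PowerShell rather than Central Admin, something like the sketch below should work; the TypeName match is an assumption on my part, so verify it against your Get-SPServiceInstance output first.
# Stop the Request Management service instance on this server
Get-SPServiceInstance -Server $env:COMPUTERNAME |
    Where-Object { $_.TypeName -like "Request Management*" } |
    Stop-SPServiceInstance -Confirm:$false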
I recently faced this issue. I created a new web application and it showed the "It shouldn't take long" popup, then after some time it showed a connection failure page. I browsed to the virtual directory folder for the new web application and found that the folder was totally empty.
Then what I did to solve this problem:
1. Open IIS.
2. Go to Application Pools.
3. Select the Central Admin application pool, right-click, and select "Advanced Settings".
4. There is a property named "Shutdown Time Limit", set to 90 by default. I changed it to 400 and clicked OK.
It restarted the application pool automatically. Then I created a new web application from Central Admin again, and it worked for me.
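The same setting can be changed from PowerShell instead of the IIS GUI (a sketch; the app pool name below is the default for Central Administration and may differ in your farm):
Import-Module WebAdministration
# shutdownTimeLimit is a timespan: 400 seconds = 00:06:40
Set-ItemProperty "IIS:\AppPools\SharePoint Central Administration v4" `
    -Name processModel.shutdownTimeLimit -Value "00:06:40"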
I've found that these events correlate with when the specified application pools are recycled (mine recycle at a specific time in the morning). It's unfortunate that they're logged in the Event Viewer and can't really be cleaned up.
I'm building a script to read the Security log from several computers. I can read the Security log from my local machine with no problem using the Get-EventLog command, but I can't run it against a remote machine (the script is for PowerShell v1). The command below never returns any results, although with any other LogFile it works perfectly:
gwmi -Class Win32_NTLogEvent | where {$_.LogFile -eq "Security"}
I've done some research, and it seems to be an impersonation issue, but the -Impersonation option for Get-WmiObject does not seem to be implemented. Is there any way around this problem? The solution could be running Get-EventLog on a remote machine somehow, or dealing with the impersonation issue so that the Security log can be accessed.
Thanks
You could use .NET directly instead of going through WMI. The script block below will give you the first entry in the Security log:
$logs = [System.Diagnostics.EventLog]::GetEventLogs('computername')
$security = $logs | ? {$_.log -like 'Security'}
$security.entries[0]
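From there you can page through entries without WMI; for example, the newest few events (a small usage sketch building on the same $security object):
# Show the newest 10 security events (Entries is ordered oldest-first)
$security.Entries | Select-Object -Last 10 |
    Format-Table TimeGenerated, EntryType, Source, InstanceId -AutoSize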
Have you tried using the -Credential parameter? Also, use the -Filter parameter instead of Where-Object: it retrieves just the Security events, whereas Where-Object gets ALL events from all logs and only then filters.
gwmi Win32_NTLogEvent -filter "LogFile='Security'" -computer comp1,comp2 -credential domain\user