Groups taking so long... finally timed out? - sharepoint

I have around 300,000 users in the site collection and around 250 groups in the same site collection. When I try to open any group to see its users, it takes so long that it finally shows "Request timed out". Other than that, the site collection is working fine.
How can I open groups normally, without the delay?

I would advise trying to increase the default execution timeout in the shared web.config under \12\TEMPLATE\LAYOUTS.
By default it's 6 minutes (360 s); you can try increasing it to 20 minutes (1200 s):
<system.web>
  <httpRuntime executionTimeout="1200" />
</system.web>
That will give the worker process more time to finish its job. It won't solve your question of "opening without delay", but it will hopefully avoid the dreaded "request has timed out".
I suspect a quick peek through Reflector would reveal something inefficient in the way the UI retrieves the group's users... A custom administration page with improved code, replacing the built-in group management page, might therefore be a possible solution.

Related

Dialogflow fulfillments time out after 5 seconds. How can we overcome this limit?

Imagine the following scenario:
Our client starts a conversation with our bot regarding a problem with one of their websites.
We check the website in our backend through our fulfillment web service, and if there is no obvious problem we want to generate a screenshot of the website and present it to the client.
Taking the screenshot can take more than 5 seconds, since many websites take longer than 5 seconds to fully load.
As a result, the response times out.
We can't force our clients to redesign their websites to load in under 5 seconds just so our chatbot can handle their request.
I assume there are many other real-world examples in which fulfillments can take more than 5 seconds.
Example:
Client: My website www.example.com doesn't load.
Bot: I just checked the website and it loads fine for me. Here is a screenshot of your website for you to check:
{Image goes here}
The short answer is - you can't.
There are some tricks that work sometimes, but they won't work in all scenarios, and aren't good practices anyway.
Your best bet would be to respond to the user with a message saying you're doing some checks and diagnostics, and ask them to request an update in a moment (and, if possible, provide a Suggestion Chip prompting them to ask for it). After sending that message, launch a background task to run the checks and take the screenshot. When they ask for an update, report what you have (latency, the screenshot, etc.), or say that things are slow and you're still checking.
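The "acknowledge now, report later" pattern above can be sketched in plain Node.js. Everything here (function names, the session store) is a hypothetical sketch, not part of the Dialogflow API; a real fulfillment would persist state in a database, since webhook instances are stateless.

```javascript
// In-memory store of background diagnostics, keyed by session id.
const pendingChecks = new Map();

// Called by the first intent: kick off the slow work, but reply immediately.
function startCheck(sessionId, slowTask) {
  const entry = { done: false, result: null };
  pendingChecks.set(sessionId, entry);
  slowTask().then((result) => {
    entry.done = true;
    entry.result = result;
  });
  return "I'm checking your website now - ask me for an update in a moment.";
}

// Called by the follow-up "any update?" intent.
function getUpdate(sessionId) {
  const entry = pendingChecks.get(sessionId);
  if (!entry) return "I have nothing in progress for you.";
  if (!entry.done) return "Still checking - give me a few more seconds.";
  pendingChecks.delete(sessionId);
  return `Done! Here is what I found: ${entry.result}`;
}
```

Each webhook call still returns well under 5 seconds; only the conversation spans the slow work.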

Time based notifications service

I am dealing with subscriptions, where a user is subscribed to a plan that has an expiration.
So basically each user document stores an expiration field.
I want to get notified when a user's plan expires, as soon as it actually expires.
Right now I have a job that runs over all users once a day and checks if anyone has expired, but ideally I would like a server postback or some sort of event whenever a user expires, without running this job each day.
Can you think of any third party service / database / other tool that deals with these sort of things ?
A lot of services, Stripe for example, notify you with a webhook whenever a user's subscription is renewed / expired. Are they just running a job repeatedly like I am ?
Hope I made myself clear enough; I would also appreciate help on how to focus my search on Google.
My current stack is Mongodb, Node.js, AWS
Thanks
We don't know for sure how Stripe handles it.
There are two solutions coming to my mind. Let's start with the simple one:
Cronjob
As you mentioned, you already have a cronjob solution, but you could instead make it run each hour, or every 10 minutes. Just make sure you optimize the query as much as possible, so it isn't too heavy to run.
It is attractive, easy to implement, with very few edge cases, but as you might have thought, it can become a performance drag once you reach millions of clients.
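A minimal sketch of what the optimized periodic query might look like; the field names ("expiresAt", "expiryNotified") are assumptions about your user schema, not anything prescribed:

```javascript
// Build the MongoDB filter the periodic job would run.
// Field names are hypothetical - adapt to your actual schema.
function buildExpiryFilter(now) {
  return {
    expiresAt: { $lte: now },      // plan already expired...
    expiryNotified: { $ne: true }, // ...and we haven't notified yet
  };
}

// The job itself would then be roughly:
//   const expired = await db.collection('users')
//     .find(buildExpiryFilter(new Date())).toArray();
// A compound index on { expiresAt: 1, expiryNotified: 1 } keeps this
// query cheap even with many users.
```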
Timers
Implementation varies, and you need to worry about the edge cases, but the concept is:
At day start* (00:00), query for all clients who are set to expire today and save them into an in-memory array. Sort the array by time (preferably descending).
Create a timer set to fire at the time of the array's last element.
When it fires: if Client X expires now, query the database to make sure the subscription wasn't extended. Notify if it wasn't.
Remove Client X from the tracked array. Repeat from step 2.
*At day start - also run this on script launch.
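The timer steps above can be sketched as follows; the client shape ("expiresAt") and the recheckAndNotify callback are assumptions standing in for your schema and database re-check:

```javascript
// Sketch of the timer approach: chain one setTimeout per expiry.
function scheduleExpiries(clients, recheckAndNotify) {
  // Sort descending by expiry so the SOONEST expiry sits at the end
  // of the array and can be pop()ed cheaply.
  const queue = [...clients].sort((a, b) => b.expiresAt - a.expiresAt);

  function armNext() {
    const next = queue.pop(); // soonest remaining expiry
    if (!next) return;        // done for today
    const delay = Math.max(0, next.expiresAt - Date.now());
    setTimeout(() => {
      recheckAndNotify(next); // re-query DB: was the plan extended?
      armNext();              // arm the timer for the next client
    }, delay);
  }
  armNext();
}
```

The descending sort is what makes step 2 cheap: popping from the end of the array is O(1).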

Search Content web part (simple OOTB search) times out after 15 seconds

OOTB Search Content web part on an OOTB Publishing site. Intermittently it times out.
Farm: 2 WFE, 2 APP.
Search is running on APP; the Query service is on WFE.
Service Pack SP1, SharePoint 2013.
I know that if I move all search components to one server this problem goes away.
Been down the "certificate not being correct" route... it appeared to make some improvement.
Can often reproduce it by leaving IE9 open, waiting 20+ minutes, then pressing F5: the rest of the page reloads fine, but the search web part times out with a "15 seconds" message.
Pressing F5 again will probably then work, for a bit.
Nothing in the event logs on any of the servers.
VMware hosts, with a NetScaler set to round robin.
ULS Microsoft.Ceres.InteractionEngine.Component.FlowHandleRegistry : Exceptions occurred when evaluating the flow. Microsoft.Ceres.Evaluation.DataModel.EvaluationException: Query execution timed out. ---> System.TimeoutException: Query timeout: Microsoft.Ceres.SearchCore.Query.MarsLookupComponent.LookupService.QueryClient.QuerySession, timeout is: 00:00:15
Have you tried adjusting the XSLT transform timeout?
$myfarm = Get-SPFarm
$myfarm.XsltTransformTimeOut
You may have this set to 1 (second). Try setting it to 5 (or more), then persisting the change:
$myfarm.XsltTransformTimeOut = 5
$myfarm.Update()
I solved this. It's a latency issue. One of the web front ends was in a different data centre. Moved it to the same data centre, and the timeout error was resolved; the search web part was also instant.
http://sergeluca.wordpress.com/2014/01/21/stretched-farms-and-sharepoint-2013/
This is a very good article, plus a neat PowerShell script.

How do you prevent crawling from your web site?

I am running a website on IIS with more than 1000 page links in its pagination, and I want to prevent others from crawling/stealing these pages by running a crawler script that fetches the info page by page.
Is there any way to tell whether a request comes from a real user or from a script? Or maybe some filter for this at a higher level, before the request reaches the application?
You can't prevent automated crawling.
You can make it harder to crawl your content automatically, but if you allow users to see the content, access to it can be automated (automating browser navigation is not hard, and computers generally don't mind waiting a long time between requests).
One option is to require a single "user" (authenticated or not) to observe some minimal delay between requests (e.g. 1-5 seconds). This way generic crawling tools become useless (you require some "user id" in the request plus a delay between requests), and someone would have to write custom crawling code for your site, which is clearly more time-intensive.
Note that needing a special "crawler" for your site may be seen as a "noble" challenge and significantly increase the incentive to create one (see, for example, the "how to make Google Maps available offline" questions).
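The per-user minimum delay could be sketched like this; it's a minimal in-memory throttle, assuming you have some per-user id to key on (in production you'd store the timestamps somewhere shared, e.g. Redis, rather than process memory):

```javascript
// Per-"user" request throttle: reject requests arriving sooner than
// minDelayMs after the previous ALLOWED request from the same id.
function makeThrottle(minDelayMs) {
  const lastAllowed = new Map();
  return function allow(userId, now = Date.now()) {
    const prev = lastAllowed.get(userId);
    if (prev !== undefined && now - prev < minDelayMs) {
      return false; // too fast - likely a script; serve an error or captcha
    }
    lastAllowed.set(userId, now);
    return true;
  };
}
```

Note the timestamp is only updated on allowed requests, so a user who is rejected can retry once the original delay has elapsed rather than being locked out by their own retries.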

SharePoint 403 error for users that do not exist in "All People"

It is complex; I'll try to describe it here.
1. If the user and his group have no access rights to anything on the SP site, the user gets a proper "Error: Access Denied" SharePoint page upon logon.
2. If the user has some access to something through his group membership, then:
a. If the user is listed in the All People list, the user can log on and use the site with no problem.
b. If the user is not listed in the All People list, the user gets an IIS 403 error page. Back on the server, there will be an event of "A process serving application pool '[IIS app pool name]' suffered a fatal communication error with the World Wide Web Publishing Service", which indicates a crash in the IIS app pool. If the user is persistent and keeps trying, he can crash the app pool repeatedly and eventually cause the app pool to stop, taking the application down!
We are using forms authentication with an ASP.NET membership provider and role provider. It appears that when 2b happens, SP repeatedly (it should be only once) calls the membership provider's GetUser method (until the fatal communication error comes up, I guess). I believe this is for the initial user profile import. When 2a happens, the GetUser method is not called.
We can manually do things like adding the user to the Visitors group and then taking the user out of it, which adds the user to the All People list so he is able to log on. During this manual process the membership provider's GetUser is also called, but just once, and it works fine.
This problem only started occurring recently, and only in one environment (the PRODUCTION!). It was all fine before, and the other environments (UAT and training) don't have this issue. We've compared the environments, checked all the obvious things, and couldn't find any differences that could cause this. Production has around 110 users, which is more than the other environments, but still not a lot.
Anyone out there can help?
Based on the comment below, it looks like the error is occurring in the custom implementation of GetUser, after the call to the web service. It is also only occurring in the environment that has the most data.
The next thing to check, therefore, is the code between the call to the web service and the return of GetUser. Do you have any arrays where the max length is fixed? Do you make any assumptions about which data is contained in a specific item of an array? How do you check/log that the web service is returning a valid result?
Hope this helps
Shiraz
Cause of the problem found: the advanced settings on the All People list had Item-Level Edit permission set to None.