What is the maximum value for the IIS -> .NET Compilation -> Batch -> Time-out setting?
I need to set it to the maximum in order to avoid the situation where, if compilation cannot be completed within the time-out period, the compiler reverts to single-compilation mode for the current page.
The maximum value internally is 2147483647 (a 32-bit signed integer, interpreted as seconds). In the IIS UI you have to specify the value as hh:mm:ss, so the maximum comes out to:
596523:14:07, i.e. roughly 596,523 hours, or about 68 years.
I think that if your site even exceeds the default value of 15 minutes, you have a problem with your application.
By the way, to find this value, open:
%systemroot%\System32\inetsrv\config\schema\ASPNET_schema.xml
and search for batchTimeout
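For unattended or scripted deployments, the same value can also be set directly in web.config rather than through the UI. A minimal sketch, using the default 15-minute value as a placeholder:

<configuration>
  <system.web>
    <!-- batchTimeout takes an hh:mm:ss TimeSpan; raise it as needed -->
    <compilation batch="true" batchTimeout="00:15:00" />
  </system.web>
</configuration>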
I am currently tracing cannot fork() errors on my Ubuntu Server, and I was able to pinpoint them to the pids.max value of 700 under /sys/fs/cgroup/pids/.
However, I am only able to set the values /system.slice/pids.max and /user.slice/pids.max, not pids.max itself. Plus, these reset after reboot to the value max, which again enforces the global pids.max value.
Is it possible to simply change it from 700 to something higher? root + sudo were of no help.
Is there another way to override this value?
After a long back and forth with my hosting provider, they finally spilled the beans.
I was renting a V-Server with very beefy hardware for very little money. The catch? You can only run 700 concurrent tasks. That's where the pids.max value came from, and it overruled the TasksMax and numprocs values.
I think you're looking for DefaultTasksMax= directive from /etc/systemd/system.conf.
You can check the runtime value by issuing systemctl show -p DefaultTasksMax:
$ systemctl show -p DefaultTasksMax
DefaultTasksMax=19096
If you wish to change it, simply edit the respective directive line in /etc/systemd/system.conf. A similar directive (TasksMax=) exists to tweak this setting on a per-unit basis.
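As a sketch, assuming a target of 4096 tasks (substitute your own limit):

# /etc/systemd/system.conf -- raise the system-wide default
[Manager]
DefaultTasksMax=4096

# re-execute the manager so the edited system.conf is picked up
$ sudo systemctl daemon-reexec

# alternatively, override a single slice; set-property persists across reboots
$ sudo systemctl set-property system.slice TasksMax=4096

(As the answer above notes, if the cap is imposed by the hosting provider from outside the virtual server, no amount of in-guest configuration can raise it.)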
Relevant documentation snippets (from systemd.resource-control(5) and systemd-system.conf(5)):
TasksMax=N
Specify the maximum number of tasks that may be created in the unit. This ensures that the number of tasks accounted for the unit (see above) stays below a specific limit. This either takes an absolute number of tasks or a percentage value that is taken relative to the configured maximum number of tasks on the system. If assigned the special value "infinity", no tasks limit is applied. This controls the "pids.max" control group attribute. For details about this control group attribute, see Process Number Controller.
The system default for this setting may be controlled with DefaultTasksMax= in systemd-system.conf(5).
DefaultTasksMax=
Configure the default value for the per-unit TasksMax= setting. See systemd.resource-control(5) for details. This setting applies to all unit types that support resource control settings, with the exception of slice units. Defaults to 15%, which equals 4915 with the kernel's defaults on the host, but might be smaller in OS containers.
I have a few questions regarding a Lighthouse report (see the screenshot below).
The first is a culture thing: I assume the value 11.930 ms stands for 11 seconds and 930 ms. Is that the case?
The second is the delayed paint compared to the file size. The third entry (7.22 KB) delays the paint by 3,101 ms, while the fourth entry delays the paint by only 1,226 ms, although that JavaScript file is more than three times the size (24.03 KB versus 7.22 KB). Does anybody know what might be the cause?
Screenshot of Lighthouse
This is an extract of a Lighthouse report. In the screenshot of google-chrome-lighthouse you can see that a few metrics are written with a comma (11,222 ms) and others with a full stop (7.410 ms).
Thank you for discovering quite a bug! An issue has been filed in the Lighthouse GitHub repo.
To explain what's likely going on: it looks like this report was generated with the CLI (or at least in a locale different from the one in which it is being displayed). Some numbers (such as the ones in the table) are converted to strings ahead of time, while others are converted at display time in the browser. The browser-formatted numbers respect your OS/user-selected locale, while the pre-stringified numbers do not.
To answer your questions...
Yes, the value it's reporting is 11930 milliseconds, i.e. 11 seconds and 930 milliseconds (rendered as 11,930 ms in en-US or 11.930 ms in de-DE).
The delayed paint metric is reporting to you how many milliseconds after the load started the asset finished loading. There are multiple factors that influence this number including when the asset was discovered by the browser, queuing time, server response time, network variability, and payload size. The small script that delayed paint longer likely had a lower priority or was added to your page later than the larger script was and thus was pushed out farther.
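To see the locale difference from the first answer concretely, you can reproduce both renderings with JavaScript's locale-aware number formatting (this assumes a Node.js build with full ICU data, which official builds ship from v13 onward):

$ node -p "(11930).toLocaleString('en-US') + ' / ' + (11930).toLocaleString('de-DE')"
11,930 / 11.930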
AFAIK V8 has a known hard limit on the length of allowed strings. Trying to parse strings larger than ~500 MB will pop the error:
Invalid String Length
Using V8 flags to increase the heap size doesn't make any difference:
$ node --max_old_space_size=5000 process-large-string.js
I know that I should be using Streams instead. However is there any way to increase the maximum allowed String length anyway?
Update: the answer from @PaulIrish below indicates they upped it to ~1 GB, but it's still not user-configurable.
In summer 2017, V8 increased the maximum size of strings from ~256MB to ~1GB. Specifically, from 2^28 - 16 to 2^30 - 25 on 64-bit platforms. V8 ticket.
This change landed in:
V8: 6.2.100
Chromium: 62.0.3167.0
Node.js: 9.0.0
Sorry, no, there is no way to increase the maximum allowed String length.
It is hard-coded in the source, and a lot of code implicitly relies on it, so while allowing larger strings is known to be on people's wishlist, it is going to be a lot of work and won't happen in the near future.
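What you can do is check the exact limit for the build you are running: Node exposes it through the buffer module's constants. For example:

$ node -p "require('buffer').constants.MAX_STRING_LENGTH"
1073741799

On a 64-bit build from the Node 9 era this prints 1073741799, i.e. 2^30 - 25; the exact figure depends on your Node/V8 version.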
I'm using SharePoint 2013 on Windows Server 2012. I noticed that the SP search component has 5 processes in Task Manager, each using about 400-500 MB of memory. Is this normal? I also tried
Set-SPEnterpriseSearchService -PerformanceLevel Reduced
But it did not change anything. Should I restart the server?
I never noticed this on other SP servers I worked on before. Just curious: is it because of SP 2013 and some default settings?
Thanks.
user3211586's link worked for me. Basically, this article says:
Quick and Dirty
Kill the noderunner.exe (Microsoft Sharepoint Search Component) process via TaskManager
This will obviously break everything related to Search on the site
Production
Change the Search Service performance level with PowerShell:
Get-SPEnterpriseSearchService | Set-SPEnterpriseSearchService -PerformanceLevel "PartlyReduced"
Performance Level Explained:
Reduced: Total number of threads = number of processors, Max Threads/host = number of processors
PartlyReduced: Total number of threads = 4 times the number of processors, Max Threads/host = 16 times the number of processors
Maximum: Total number of threads = 4 times the number of processors, Max Threads/host = 16 times the number of processors (threads are created at HIGH priority)
For the setting to take effect, do an iisreset or restart the Search Service in Central Administration.
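Putting the production path together, a minimal sequence (run from the SharePoint Management Shell) might look like this:

# inspect the current level first
Get-SPEnterpriseSearchService | Select-Object PerformanceLevel
# lower it, then recycle IIS so the change takes effect
Set-SPEnterpriseSearchService -PerformanceLevel PartlyReduced
iisreset /noforce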
I had the same issue as the OP, and running Set-SPEnterpriseSearchService -PerformanceLevel "PartlyReduced" followed by IISRESET /noforce resolved it for me.
Please check the article below:
http://social.technet.microsoft.com/wiki/contents/articles/20413.sharepoint-2013-performance-issues-with-search-service-application.aspx
When I tried this method and changed the config setting from 0 to any value between 1 and 500, it did reduce the memory usage, but search stopped working. After I reverted the setting back to 0, memory usage went up again, but search started working.
In my Rails app I do an nslookup using the Ruby library resolv. If a site like dgdfgdfgdfg.com is entered, it takes too long to resolve, in some instances around 20 seconds (mostly for non-existent sites), and this causes the application to slow down.
So I thought of introducing a timeout period for the DNS lookup.
What would be an ideal timeout period for the DNS lookup so that resolution of real sites doesn't fail? Would something like 10 seconds be fine?
There's no IETF-mandated value, although §6.1.3.3 of RFC 1123 suggests a value not less than 5 seconds.
Perl's Net::DNS and the command-line dig utility default to 5 seconds between retries. Some versions of the Microsoft resolver appear to default to 3 seconds.
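In Ruby's resolv, the per-attempt timeout can be set explicitly (Resolv::DNS#timeouts= is available from Ruby 2.1 onward). A minimal sketch using the 5-second floor suggested above:

require 'resolv'

dns = Resolv::DNS.new
dns.timeouts = 5  # seconds per attempt; an array such as [3, 5] configures retries
begin
  puts dns.getaddress('example.com')
rescue Resolv::ResolvError
  puts 'could not resolve within the timeout'
end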
You can run some tests among your users to find the right number, trading off responsiveness against resolution reliability.
You can also adjust that timeout dynamically depending on network conditions.
For example, for every successful resolution, record how long it took. Every hour (for example), calculate the average and set double that value as the timeout (remember that the average is, roughly speaking, "the middle"). This way, if your latency rises at some point, the timeout adjusts itself upward automatically.
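A sketch of that self-adjusting idea in Ruby; the class and method names are hypothetical, and a production version would need thread safety and persistence:

require 'resolv'

class AdaptiveResolver
  def initialize(initial_timeout = 5)
    @timeout = initial_timeout
    @samples = []
  end

  # Resolve with the current timeout, recording the latency of successes.
  def resolve(host)
    dns = Resolv::DNS.new
    dns.timeouts = @timeout
    started = Time.now
    address = dns.getaddress(host)
    @samples << (Time.now - started)
    address
  end

  # Call periodically (e.g. hourly): the new timeout is double the
  # average latency observed since the last recalibration.
  def recalibrate!
    return if @samples.empty?
    @timeout = 2.0 * (@samples.sum / @samples.size)
    @samples.clear
  end
end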