Commerce Server: Custom counters file view is out of memory (Windows Server 2003)

Commerce Server seems to add a ton of counters for each catalog/site, and we are currently pushing 45 of them. I tried increasing the counter file size in machine.config, which helps, but not enough.
Does anyone know how to programmatically clear out the Commerce Server counters? Barring that, disable them?
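For reference, the machine.config knob mentioned above is the size of the memory-mapped file that .NET uses for performance counter data. A hedged sketch of the relevant fragment (the filemappingsize value is in bytes; the documented default is 524288):

```xml
<!-- machine.config fragment (illustrative): enlarge the shared memory
     region used for performance counter data. Value is in bytes. -->
<configuration>
  <system.diagnostics>
    <performanceCounters filemappingsize="1048576" />
  </system.diagnostics>
</configuration>
```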

So, after a great deal of research, I ended up putting together this .reg file that disables all the counters. Copy/paste it into a .reg file, run it, and reboot the server; Commerce Server will stop registering its counters.
Windows Registry Editor Version 5.00
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Commerce: Catalog\Performance]
"Disable Performance Counters"=dword:00000001
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Commerce: Catalog Pipeline\Performance]
"Disable Performance Counters"=dword:00000001
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Commerce: Marketing\Performance]
"Disable Performance Counters"=dword:00000001
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Commerce: Orders\Performance]
"Disable Performance Counters"=dword:00000001
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Commerce: Pipelines\Performance]
"Disable Performance Counters"=dword:00000001
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\Commerce: PromoCode Pipeline\Performance]
"Disable Performance Counters"=dword:00000001
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\csauthperf\Performance]
"Disable Performance Counters"=dword:00000001
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\cscustomperf\Performance]
"Disable Performance Counters"=dword:00000001
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\csdwperf\Performance]
"Disable Performance Counters"=dword:00000001
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\csmarketingperf\Performance]
"Disable Performance Counters"=dword:00000001
[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\csupmperf\Performance]
"Disable Performance Counters"=dword:00000001

Related

Application Performance Counter Causes Exception

We are having a weird problem with custom performance counters. On a colleague's machine I get an exception that a performance counter doesn't exist, yet on other machines I don't have that problem. These are custom performance counters for our application.
In a .NET 4.7.2 application, does writing to performance counters, or even creating them, require elevated privileges? Is there a workaround for an application that is already in the field?
TIA,
Doug

Should I disable or enable Transparent Huge Pages for performance?

I have varnish 6.2 and Redis 5 with magento 2.3 on same box running centos 7. Should I disable or enable Transparent Huge Pages for performance?
Disable it, please. It's known to impact Varnish performance.
(not sure about Redis and Magento)
Same recommendation for Redis: '...when a Linux kernel has transparent huge pages enabled, Redis incurs to a big latency penalty after the fork call is used in order to persist on disk' [1][2].
[1] https://redis.io/topics/latency
[2] When to turn off Transparent Huge Pages for redis
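For reference, on CentOS 7 the THP setting can be checked and turned off at runtime. A minimal sketch (needs root; the change is lost on reboot, so persist it via rc.local or a tuned profile):

```shell
# Show the current setting; the bracketed value is the active one,
# e.g. "[always] madvise never"
cat /sys/kernel/mm/transparent_hugepage/enabled

# Disable THP for Varnish/Redis (takes effect immediately)
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag
```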

How to improve read/write speed when using distributed file system?

If I browse the Distributed File System (DFS) shared folder I can create a file and watch it replicate almost immediately across to the other office DFS share. Accessing the shares is pretty instant even across the broadband links.
I would like to improve the read/write speed. Any tips much appreciated.
Improving hardware always helps, but keep in mind that in any distributed file system the performance of the parent host matters. Besides that, in many cases you can't touch the hardware, and you need to optimize the network or tune your systems to best fit your current provider's architecture.
An example of this, mainly in virtualized environments, is disabling TCP segmentation offload on the network cards (on FreeBSD, ifconfig_DEFAULT="SYNCDHCP -tso" in rc.conf). It will considerably improve the throughput, but at the cost of more CPU usage.
Depending on how far you want to go you can start all these optimizations from the very bottom:
creating your custom lean kernel/image
test/benchmark network settings (iperf)
fine-tune your FS; if using ZFS, here are some guides:
http://open-zfs.org/wiki/Performance_tuning
https://wiki.freebsd.org/ZFSTuningGuide
performance impact when using Solaris ZFS lz4 compression
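The network benchmark step above can be sketched with iperf3 (the hostname is a placeholder; start the server side on the remote office's host first):

```shell
# On the remote DFS host: run an iperf3 server
iperf3 -s

# On the local host: 10-second test with 4 parallel streams toward it,
# which gives a rough ceiling for what DFS replication can achieve
iperf3 -c remote.example.com -t 10 -P 4
```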
Regarding MooseFS, there are some threads about how the block size affects I/O performance and how, in many cases, disabling the cache allows blocks > 4k.
Mainly for FreeBSD we added special cache option for MooseFS client
called DIRECT.
This option is available in MooseFS client since version 3.0.49.
To disable local cache and enable DIRECT communication please use this
option during mount:
mfsmount -H mfsmaster.your.domain.com -o mfscachemode=DIRECT /mount/point
In most filesystems speed factors are: type of access (sequential or random) and block size. Hardware performance is also the factor on MooseFS. You can improve speed by improving hard drives performance (for example you can switch to SSD), network topology (network latency) and network capacity.
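The block-size effect mentioned above is easy to see locally with dd before blaming the network. A rough sketch: it writes the same 64 MB to a temp file twice, once with 4 KiB and once with 1 MiB blocks, and the MB/s figure in dd's summary usually differs noticeably.

```shell
FILE=$(mktemp)
# Small blocks: many more write() calls, more per-call overhead
dd if=/dev/zero of="$FILE" bs=4k count=16384 conv=fsync
# Large blocks: far fewer calls, usually higher throughput in dd's summary
dd if=/dev/zero of="$FILE" bs=1M count=64 conv=fsync
ls -l "$FILE"   # both runs produce the same 67108864-byte file
rm -f "$FILE"
```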

XPage Slowness when agent runs

I have seen some strange XPage slowness in a database that has a long-running export agent.
If you fire the export agent, all XPages in the application start to get slow. If I look at the server, the Agent Manager is using 25% CPU, so there's plenty of CPU power left. I don't have any agents that run from the XPages.
Anybody else seeing this?
Is there a way to prevent this from happening?
The cause could be one of many. You'll need to start to diagnose what is happening to discover where the contention is occurring. For example, if you are reading/writing a lot of documents then depending on your disk configuration there could be contention in the disk subsystem. Alternatively, if your memory is too low, you may be causing a lot of garbage collection to happen in the JVM, which can also cause slowness.
I would start with the XPages Toolbox to see if you can determine where the slowdown occurs and investigate from there. If you need to look deeper, look at yourkit java profiler (http://www.yourkit.com) which will give you a plethora of information to help identify the source.
... and perhaps you should try profiling your agent to see if there are any obvious places in the code where you could improve performance. Concurrent access to the same data can give bad response times (especially if one access is a write, which can force view rebuilds). Try to open an XPage in the database that does not access the same data as the export agent - still slow?
To profile the agent you open it in the Designer and on the Basics tab of the properties you can enable "Profile this agent" :-)
/John

TFS and SharePoint are slower after upgrading to TFS2010 & SharePoint2010 [closed]

We've recently upgraded (migration path) from TFS/Sharepoint to TFS2010/Sharepoint2010.
Most things went well, but there were a few issues that became immediately apparent.
TFS was noticeably slower (as pointed out by the entire dev team). Basically all "get latest" and query operations were more sluggish. Starting VS2010 SP1 is also really slow at loading all the projects (40+) on my machine; a refresh after that is not normally a problem. Even though other people may only have 3-4 projects open at a time, they too noticed the "working..." delay.
Sharepoint was definitely much slower. The "Show Portal" takes forever to load, and the basic editing is slower too.
Work items occasionally "time out" for no reason, and end up in a "connection lost" error. It's normally while creating a new work item, and a redo of the same command works fine. It happens even during bulk work item creation, but the timing is random.
The server runs on Windows 2008 with 12 GB of RAM and plenty of CPU power (quad-core). The IIS connectionTimeout is set to 2 minutes (the default). I've played with MinBytesPerSecond, which is 240 by default (I've set it to 42 as well, but no joy), and I understand that VS 2010 in general might be a bit slower than its 2008 counterpart, but even then. No processors are maxed out. There are lots of MSSQLSERVER info logs in the Event Viewer, though (I just noticed this - not sure if it's a problem). I've also changed the defaultProxy setting in the devenv.exe.config file - no joy there either.
It's too late for a downgrade. ;)
Has anyone experienced similar problems after the upgrade?
I would love to hear from ya! :o)
We experienced performance issues after upgrading from TFS 2008 to 2010, but it is much better now. We have learned that the antivirus and SQL Server configurations are critical. In a virtualized environment, storage performance is key too. We have about 100 TFS users in a two-tier server setup.
The SQL Server had its default memory settings, as follows:
1 - SQL Server max memory 2TB
2 - Analysis Services max memory 100%
With those settings, our 8GB SQL machine was unusable.
Now we have:
1 - SQL Server max memory 4GB
2 - Analysis Server Max memory 15%
Now the performance is ok but not great.
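For reference, the SQL Server side of that change can be scripted. A sketch assuming the sqlcmd utility and a default local instance (the Analysis Services limit lives elsewhere):

```shell
# 'max server memory (MB)' is an advanced option, so expose it first,
# then cap the database engine at 4 GB
sqlcmd -S localhost -Q "EXEC sp_configure 'show advanced options', 1; RECONFIGURE;"
sqlcmd -S localhost -Q "EXEC sp_configure 'max server memory (MB)', 4096; RECONFIGURE;"
# Analysis Services is configured separately: lower TotalMemoryLimit
# (a percentage) in msmdsrv.ini or via the server properties dialog
# in Management Studio.
```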
The antivirus exclusions have to be configured too. Basically, exclude all the data and temp directories.
As our TFS setup is virtualized we are in the process of adding more storage hardware to have better disk performance. That should definitely solve our performance issues.
Simon
Are all components installed on one machine? Is the SQL layer also installed on that machine? Is the machine virtualized?
It's always better to install the SQL layer on physical hardware than to virtualize it. SharePoint 2010 requires 4 GB of RAM; to ensure that SharePoint is usable you should size the WFE with at least 8 GB.
Our TFS was also slow with 4 GB, so I added another 4 GB. With this setup the entire environment is really fast now.
Let's summarize:
SQL: physical installation w/ 12GB RAM, Quad Core (duplicated for failover)
SharePoint: virtualized w/ 8GB RAM, Quad Core
TFS: virtualized w/ 8GB RAM, Quad Core
Both SharePoint and TFS generate heavy load on the database. I have a showcase machine running on my EliteBook as a Hyper-V image; the image has about 12 GB of RAM and runs on an external SSD, but it is a lot slower than our production environment.
I hope my thoughts and setups are helpful.
Thorsten
