Hi there!
I use MS SQL Server 8.0 (SQL Server 2000).
I have a big problem with a memory leak in it.
The physical memory used by MS SQL Server can go up to 900 MB. My question is: could this be caused by a bug in MS SQL Server, or is that impossible?
The fact that SQL Server is using 900 MB is no indication of a bug. Databases aggressively cache to improve performance. I'm actually surprised that 900MB is the highest you've seen.
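If you want a rough idea of where that memory is going (how much is buffer cache versus other components), one thing you can run even on SQL Server 2000 is the informally documented memory report below; it simply prints a breakdown to the results pane:
-- Breakdown of SQL Server's current memory usage
DBCC MEMORYSTATUS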
To add onto recursive's answer: if you are on a development box where you want to limit it, you can run something like this:
use master
GO
-- Enable the advanced configuration options so 'max server memory' is visible
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
GO
-- Cap SQL Server's memory usage; the value is in megabytes
EXEC sp_configure 'max server memory', 512
RECONFIGURE
GO
This will limit it to 512 MB. I would not limit your SQL Server's memory usage in a production environment without carefully understanding the effects. The minimum, I believe, is 4 MB. Depending on what queries you run, how much data there is, and how it is organized, anything below 256 MB might starve SQL Server of memory.
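If you want to confirm that the change took effect, a quick check (assuming 'show advanced options' is still enabled from the script above) is to run sp_configure again; the run_value column should now show 512:
EXEC sp_configure 'max server memory'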
I am running eight Solr (version 3.5) server instances behind a load balancer. All servers are identical and the LB is weighted by number of connections. The servers hold around 4M documents and receive a constant flow of queries. When a Solr server starts, it works fine. But after some time running, it starts to take longer to respond to queries, and the server's I/O goes up to 100%. Look at the New Relic graphic:
If the server behaves well in the beginning, why does it start to fail after some time? And if I restart the server, it goes back to low I/O for some time, and this repeats over and over.
The answer to this question is related to the content in this blog post.
What happens in this case is that queries are highly dependent on reading the Solr indexes. These indexes are on disk, so I/O is high. To optimize disk access, the Linux OS caches the most frequently accessed disk areas in memory (the page cache), using memory that is not occupied by applications. When there is no free memory left for this cache, the server has to read from disk again. That is why things improve when Solr restarts: the JVM occupies less memory, so there is more free space for the disk cache.
(The problem is happening on a server with 15 GB of RAM and a 20 GB Solr index.)
The solution is simply to add more RAM to the server, so the whole index fits into the page cache and almost no disk I/O is required.
When adding a VHD data disk to a VM I am asked for a "Host Cache Preference" (None, Read Only, Read/write).
Can someone tell me the effect of choosing one over the other?
Specifically, I am using a VM as a Build Server so the disks are used for compiling .Net source code. Which setting would be best in this scenario?
Just as the name suggests, this setting turns host caching on for the disk's I/O. The effect of changing it is that reads, writes, or both can be cached for performance. For example, if you have a read-only database, a Lucene index, or other read-only files, it would be optimal to turn on the read cache for that drive.
I have not seen dramatic performance changes from this setting on the drives (until I used SQL Server/Lucene). High I/O is better addressed by striping disks... in your case, if you have millions of lines of code across tens of thousands of files, then you could see a performance improvement in reading/writing. The default maximum for a single drive is 500 IOPS (roughly equivalent to two 15k SAS drives or a high-end SSD). If you need more than that, add more disks and stripe them...
For example, on an extra large VM you can attach 16 drives at 500 IOPS each (~8,000 IOPS total):
http://msdn.microsoft.com/en-us/library/windowsazure/dn197896.aspx
(There are some good write-ups/whitepapers from people who did this and got optimal performance by adding the maximum number of smaller drives rather than just one massive one.)
Short summary: leave the caching defaults. Test with an I/O tool to get specific performance numbers. Single-drive performance will likely not matter; if I/O is your bottleneck, striping drives will help MUCH more than the caching setting on the VHD.
I'm working with MongoDB on a 32-bit CentOS VPS with 1 GB of memory.
It works fine most of the time, but every now and then its memory usage spikes and crashes my server.
Is there a way to prevent this, for example by limiting the memory and CPU that the MongoDB daemon uses?
I was thinking of running the Mongo daemon with ionice and giving it a low priority, but I'm not sure if that would work.
Any help or pointers are welcome!
It is not currently possible to limit the amount of memory MongoDB uses. MongoDB accesses its data files through memory-mapped files, so the amount of memory used is governed by the operating system: the more data you touch, the more RAM you need.
I'm guessing you're also running everything else on that same server?
Really, the best way to run mongo is to put it on its own server, where things like apache, mysql, etc. won't jump up and interfere with the RAM it wants to use. I had the same problem myself--the server would go into swap and choke itself every once in a while, with heavy use.
You'd probably be better off with two 512MB servers which is hopefully comparable in price (one running mongo, and one running the rest). I also thought about trying to run a VM with mongo on it within the VPS, but that fell into the "too much effort" category, for me.
And yeah, as dcrosta says, use 64-bit, unless you want to limit your data size to under 2GB (and why would you want to do that?)
I did have similar problems when I was using lots of map/reduce; the memory leaks and crashes were frequent. I don't use map/reduce anymore, and there have been no memory leaks or crashes for many months now.
I am working on a MOSS 2007 project and have customized many parts of it. There is a problem on the production server where it takes a very long time to create a subsite, even with the built-in site templates (more than 15 minutes, and it sometimes fails due to timeouts). On the development server, it only takes 1 to 2 minutes.
Both servers have the same configuration: 8 CPU cores and 8 GB of RAM. Both use separate database servers with the same configuration. The content DB size is around 100 GB, and there are more than a hundred subsites.
What could be the reason the production server takes so much more time? Is there any configuration or anything else I need to take care of?
Update:
So today I had the chance to check the environment with my clients. This time site creation was fast, even though they said they hadn't changed any configuration on the server.
I also used that chance to examine the database. The disk fragmentation was quite high at 49%, so I suggested that they run a defrag. I also asked for the database file growth to be increased to 100 MB, up from the default 1 MB.
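(For reference, the file growth change can be applied with something like the statement below; the database and logical file names are placeholders, so check the real ones with EXEC sp_helpfile in that database first.)
ALTER DATABASE WSS_Content
MODIFY FILE (NAME = 'WSS_Content', FILEGROWTH = 100MB)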
So my suspicion is that some processes were previously running heavily on the server, and that's why it took so much time.
Update 2:
Yesterday my client reported that site creation was slow again, so I went to check. When I looked at the DB, I found that instead of the reported 100 GB, the content DB size is only around 30 GB, so it's still far below the recommended limit.
One thing that got my attention is that the site collection recycle bin was holding almost 5 million items. Whenever I tried to browse it, it took a long time to open and the whole site collection became inaccessible.
Since the web application is set to the defaults (30 days before cleanup, and 50% size for the second-stage recycle bin), is this normal or is this also a potential problem?
There is also another web application using the same database server with a 100 GB content DB, and it's always fast, while the one with 30 GB is slow. Both have the same setup, just different data.
What should I check next?
Yes, it's normal out of the box if you haven't turned the second-stage recycle bin off or set a site quota. The second-stage recycle bin is by default limited to 50% of the site quota; in other words, if you have a site quota of 100 GB, you would have a second-stage recycle bin of up to 50 GB. If a site quota has not been set, there are no growth limitations at all...
I second everything Nat has said and emphasize splitting the content database. There are instructions on how to do this, provided you have multiple site collections and not a single massive one.
Also check that your SharePoint databases are in good shape. Have you tried DBCC CHECKDB? Do you have SQL Server maintenance plans configured to rebuild indexes and reduce fragmentation? Read these resources on TechNet (particularly the database maintenance article) for details.
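For example, something along these lines will surface both corruption and index fragmentation (run it during a quiet period; WSS_Content is a placeholder, so substitute your real content database name):
USE WSS_Content
GO
-- Integrity check; this can be heavy on a large content database
DBCC CHECKDB WITH NO_INFOMSGS
GO
-- Fragmentation per index (SQL Server 2005 and later)
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id
 AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 30
ORDER BY ips.avg_fragmentation_in_percent DESC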
Finally, see if there is anything more you can do to isolate the SQL Server as the problem. Are there any other applications with databases on the same SQL Server and are they having problems? Are you running performance monitoring on the SQL Server or SharePoint servers that show any bottlenecks?
Back up the production database, restore it on your dev SQL Server, and attach it to your dev SharePoint server.
Then try to create a site. If it also takes forever there, you can assume there is a problem with the production database; if it is fast, the problem is more likely in the production environment itself.
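Roughly along these lines (the database name, logical file names, and paths are placeholders; adjust them to your environment):
-- On production
BACKUP DATABASE WSS_Content
TO DISK = 'D:\Backups\WSS_Content.bak'
WITH INIT
-- On the dev SQL Server, after copying the .bak file across
RESTORE DATABASE WSS_Content
FROM DISK = 'D:\Backups\WSS_Content.bak'
WITH MOVE 'WSS_Content' TO 'D:\Data\WSS_Content.mdf',
     MOVE 'WSS_Content_log' TO 'D:\Data\WSS_Content_log.ldf',
     REPLACE
Then attach the restored database to a dev web application, for example with stsadm -o addcontentdb (or through Central Administration).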
Regardless, at 100 GB you are running up against the recommended limit for a content database and should be planning to put content into more than one. You will know why when you try to back up the database. Searching should also be starting to take a good long time by now.
So, long term, you are going to have to plan on splitting your sites out into different content databases.
--Responses--
Yeah, the database size limit is really just about SQL Server being able to handle it. 100 GB is just the "any more than this and it starts to be a pain" rule of thumb. Full search crawls will also start to take a while.
Given that you do not have access to the production database and that creating a sub-site is primarily a database operation, there is nothing you can really do to figure out what the issue is.
You could try creating a subsite while doing a trace of the Dev database and look at the tables those commands reference to see if there is a smoking gun, but without production access you are really hampered.
Does the production system serve pages and documents at a reasonable speed?
See if you can start getting some stats from the database during site creation to find out what work is being done; SQL Server has some great tools for that now.
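For example, a rough starting point on SQL Server 2005 and later (this is just a sketch, not SharePoint-specific) is to watch the active requests and their waits while the site is being created:
SELECT r.session_id,
       r.status,
       r.wait_type,
       r.wait_time,
       r.blocking_session_id,
       t.text AS current_sql
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t
WHERE r.session_id > 50  -- skip system sessions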
I wanted to know what people use as a best practice for limiting memory on IIS [5/6/7]. I'm running 32-bit web servers with 4 GB of physical memory and no /3GB switch. I'm currently limiting my app pools to 1 GB of used memory. Is this too low? Any thoughts?
All the limits in the application pool are there for badly behaving apps. More specifically:
To prevent the bad app from disturbing the good apps.
To try and keep the bad app running as much as possible.
In that light, the answer is of course: It depends.
If your application is leaking, then without a limit it will crash at around 1.2 to 1.6 GB (if memory serves), so 1 GB is sensible. If during normal operation your application consumes no more than 100 MB and you have many app pools on the server, then you should set the limit lower to prevent one app from damaging the others.
To conclude: 1 GB is sensible. Hitting the limit should be treated as an application crash and should be debugged and fixed.
David Wang's blog is a good resource on these issues.
There's a great write-up from an MS Field Engineer on this subject.