IIS Memory Leak in 64-bit Only Mode

I am having an issue where w3wp.exe gains around 100 MB per load of one specific page (not the entire site). The page is not memory intensive and should not require that much memory.
I changed a single setting, "Enable 32-bit Applications", to true, and now the leak is gone. However, I need to understand why this might be happening. It happens on only one server; the other servers we test on do not show this issue. The results from the ANTS memory profiler with Enable 32-bit Applications disabled (false) are attached below. Does anyone have any idea what's going on? Please note the only thing growing is "Unused Memory" / "Free Space".
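For anyone who wants to script that change instead of using the IIS Manager UI, it can be done with appcmd; a minimal sketch, assuming a pool named "DefaultAppPool" (substitute your own application pool name):
%windir%\system32\inetsrv\appcmd set apppool "DefaultAppPool" /enable32BitAppOnWin64:true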

Can you go into the list of classes and sort by the memory occupied?

I was too quick to judge this. After I disabled most of the work that one page was doing, the growth stopped, but looking at other pages I saw a similar pattern that stopped at a lower memory ceiling, say 450 MB. I then raised our private memory limit from 1 GB to 2 GB and re-enabled the "leaking" code. The memory shot up to 1.05 GB within 3 refreshes; 20 refreshes later, it is not changing significantly.
This is a case of the IIS 64-bit app pool allocating far more memory than necessary. Since it isn't actually leaking, this question was invalid.
Anyway, if you notice the same behavior when Enable 32-bit Applications is disabled, I hope this helps you.
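For reference, the private memory recycling limit mentioned above can be raised the same way; a sketch with the same assumed pool name (the value is in kilobytes, so 2 GB is 2097152):
%windir%\system32\inetsrv\appcmd set apppool "DefaultAppPool" /recycling.periodicRestart.privateMemory:2097152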

Related

Should I be concerned if my swap usage is 100%?

Recently, I've been playing around with Flutter. Between running an emulator, using the browser, and using VS Code, my system memory has been getting fairly close to maxed out. My laptop crashed twice before I started paying attention to memory usage.
Looking at Ubuntu's System Monitor, I noticed that my swap frequently goes up to 100%, even though I still have some free RAM. Is this expected behavior, or should I be concerned?
Here's a picture of memory usage in System Monitor.
Swap usage becomes an issue only when there is not enough RAM available. You can reduce swap usage by configuring /etc/sysctl.conf as root: change vm.swappiness to any value lower than 60 (the default).
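A minimal sketch of what that looks like in practice (10 is just an example value, not a universal recommendation):
sudo sysctl vm.swappiness=10   # apply immediately, without a reboot
vm.swappiness=10               # add this line to /etc/sysctl.conf to persist it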
In short, no. Swap is less efficient than RAM, which is why you don't want to maximize swap usage.

Why would free be reporting a freakishly high swap used value?

I have recently seen two RHEL 6 boxes showing a swap-used value, as reported by free, somewhere in the 10^15-byte range. This is, of course, far in excess of what is allocated. Further, the '-/+ buffers/cache' line shows around 3 GB free. Both machines subsequently became unstable and had to be rebooted.
Does anyone have any ideas as to what might cause this? Someone told me this could be indicative of a memory leak, but I cannot find any supporting information online.
Are the systems running kernel-2.6.32-573.1.1.el6.x86_64?
Could be this bug:
access.redhat.com/solutions/1571043
Linux moves unused memory pages to swap (see Linux swappiness), so if you still have free memory (counting buffers/cache) you are good to go.
You probably have a process that is not used much, and it was swapped out.
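If you want to see which processes have been swapped out, one approach (assuming a kernel that exposes the VmSwap field in /proc) is:
grep VmSwap /proc/[0-9]*/status | sort -t: -k3 -rn | head   # per-process swap usage, largest first, in kB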

Swappiness in JVM

I stumbled upon an interesting problem last week when I noticed that one of our production servers, which runs Apache HTTP Server in front of Tomcat, stopped responding, reporting an HTTP outage.
On further investigation, it seemed this was due to the JVM's memory pages being swapped out quickly. This resulted in the swap space becoming completely full, causing a memory issue the next time a page was moved to swap.
Investigating further, it seems there is a JVM swappiness factor that is set to 60 by default on some of our Linux distributions. Based on some research, this can be a high value for a web service with heavy traffic. Our swap space was set to 2 GB.
The swap details are:
Filename     Type       Size     Used     Priority
/dev/sda3    partition  2096472  1261420  -1
From /proc/meminfo:
SwapCached:  944668 kB
Our JVM properties are as follows:
-Xmx6g -Xms4g -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:PermSize=512M -XX:MaxPermSize=1024M -XX:NewSize=2g -XX:MaxNewSize=2g -XX:ParallelGCThreads=8
The server runs with 12 GB RAM.
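As a rough worked budget based on the flags above: the heap can grow to 6 GB and PermGen to 1 GB, and thread stacks plus other JVM native overhead typically add several hundred MB more, so the JVM alone can approach 7-8 GB, leaving only about 4-5 GB of the 12 GB for the OS, Apache, and the page cache.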
Swappiness does not play well with the JVM's GC process, so I tried reducing the swappiness to 0, but that did not change anything. We still see cases where the entire swap space is consumed, resulting in an OutOfMemory error.
How can I tune the JVM performance?
The swappiness factor is actually a system-wide setting in Linux, not JVM-specific.
My personal experience with some kinds of applications is that they need swap turned off in order to work without hiccups, period. While I can't say for sure whether your application belongs in this category, I have seen apps being swapped out despite enough free RAM being available. As you pointed out, GC and swap don't mix well. Swappiness is just a hint to the OS, so its impact on how much gets swapped out may be larger or smaller depending on circumstances. My suggestion would be to try turning swap off completely.
To do this, you need to be root or have sudo access. Comment out the line describing swap in /etc/fstab, which will prevent swap from being enabled after a reboot. Then, to turn swap off for the current server run, run swapoff -a. This may take up to a few minutes if there is data in swap that needs to be pulled back into RAM. Then check the last line of the output of free to ensure that the total size of available swap is 0. Afterwards, observe your app to determine whether turning swap off solved your issue.
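Spelled out as commands, the procedure looks roughly like this (the device name in fstab is only an illustration; yours will differ):
sudo swapoff -a        # turn off all swap for the current boot
# then comment out the swap line in /etc/fstab, e.g.:
#   /dev/sda3  none  swap  sw  0  0
free | tail -n 1       # the Swap line should now show a total of 0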
If your physical RAM is 12 GB and your heap is set to only 6 GB (max), then you have plenty of RAM. Is this server dedicated to the Tomcat instance?
Take a look at the verbose GC log files to see what the memory usage is over time. Check your access log files to confirm whether the access requests have increased over time.
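If verbose GC logging is not yet enabled, a minimal sketch of the flags for a JVM of that era (the log path is an assumption; adjust to your environment):
-verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:/var/log/tomcat/gc.log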

How do I fix leaking SSLSessionImpl in Glassfish?

So the basics are: I have Glassfish 2.1 and Java 1.6.0_15, and it will work for a few days, but it eats up all the memory it can, seemingly no matter how high the max memory is set. It's a 32-bit JVM with the max memory now at 4 GB; it uses it all up quickly, then thrashes with the garbage collector, bringing throughput to a crawl. So after a few tries I got a 3 GB heap dump and opened it with YourKit.
The usage on this server is a Swing client doing a few RMI calls and some REST HTTPS calls, plus a PHP web site calling a lot of REST HTTPS services.
It shows:
Name                                          Objects    Shallow Size   Retained Size
java.lang.Class                               22,422     1,435,872      1,680,800,240
java.lang.ref.Finalizer                       3,086,366  197,527,424    1,628,846,552
com.sun.net.ssl.internal.ssl.SSLSessionImpl   3,082,887  443,935,728    1,430,892,816
byte[]                                        7,901,167  666,548,672    666,548,672
...and so on. Gee, where did the memory go? Oh, 3 million SSLSessionImpl instances, that's all.
It seems like all the HTTPS calls are causing these SSLSessionImpl objects to accumulate, but they are never GC'ed. Looking at them in YourKit, the finalizer is the GC root. Poking around the web, this looks very much like http://forums.sun.com/thread.jspa?threadID=5266266 and http://bugs.sun.com/bugdatabase/view_bug.do;jsessionid=80df6098575e8599df9ba5c9edc1?bug_id=6386530
Where do I go next? How do I get to the bottom of this?
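One mitigation often suggested for unbounded SSL session accumulation is capping the JSSE session cache. A hedged sketch in plain Java (the cache size and timeout are arbitrary example values, and whether Glassfish 2.1 applies them to its own listeners is an assumption you would need to verify):

import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSessionContext;

public class SslSessionCacheTuning {
    public static void main(String[] args) throws Exception {
        // Cap the default client-side SSL session cache so sessions can be
        // evicted instead of accumulating without bound.
        SSLContext ctx = SSLContext.getDefault();
        SSLSessionContext sessions = ctx.getClientSessionContext();
        sessions.setSessionCacheSize(1000); // max cached sessions (example value)
        sessions.setSessionTimeout(3600);   // seconds before a session expires (example)
    }
}

The javax.net.ssl.sessionCacheSize system property is documented to serve a similar purpose for the default cache size.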
This seems to be fixed now by an upgrade to the latest JVM. 1.6.0_18 fixes bug 4918870, which is related to this. Prior to upgrading the JVM, I had several heap dumps with 100,000-4,000,000 SSLSessionImpl instances; now there are usually fewer than 5,000 instances.

Do Small Memory Leaks Matter Anymore?

With RAM typically in the gigabytes on all PCs now, should I be spending time hunting down all the small (non-growing) memory leaks that may be in my program? I'm talking about those holes that may be less than 64 bytes, or even a bunch that are just 4 bytes.
Some of these are very difficult to identify because they are not in my own code, but may be in third party code or in the development tool's code, and I may not even have direct access to the source. In those cases, it would involve lengthy communication with the vendors of these products.
I have seen the number one memory leak question here at SO: Are memory leaks ever ok? and the number one answer to that, as of now voted up 85 times, is: No.
But here I'm talking about small leaks that may take an inordinate amount of debugging, research and communication to track down.
And I'm only talking about a simple desktop app. I understand that apps running on servers must be as tight as possible.
So the question I am really asking is, if I know I have a program that leaks, say 40 bytes every time it is run, does that matter?
Also see my followup question: What Operating Systems Will Free The Memory Leaks?
Postscript: I just purchased EurekaLog for my program development.
I found an excellent article by Alexander, the author of EurekaLog (who should know these things), about catching memory leaks. In that article, Alexander states the answer to my question very well and succinctly:
While any error in your application is always bad, there are types of errors which may not be visible in certain environments. For example, memory or resource leak errors are relatively harmless on client machines and can be deadly on servers.
This is completely a personal decision.
However, consider the specific question you asked:
So the question I am really asking is, if I know I have a program that leaks, say 40 bytes every time it is run, does that matter?
In this case, I'd say no. The memory will be reclaimed when the program terminates, so if it's only leaking 40 bytes one time during the operation of an executable, that's practically meaningless.
If, however, it's leaking 40 bytes repeatedly, each time you do some operation, that might be more meaningful. The longer running the application, the more significant that becomes.
I would say, though, that fixing memory leaks often is worthwhile, even if the leak is a "meaningless" leak. Memory leaks are typically indicators of some underlying problem, so understanding and correcting the leak will often make your program more reliable over time.
Leaks are bugs.
You probably have other bugs too.
When you ship a product, you ship it with known bugs. When you choose which (of the known) bugs to "fix" versus "ship with", you do so based on the cost and risk to fix versus the customer benefit.
Leaks are no different. If it's a small leak that happens during an infrequent operation in a non-server app (e.g. an app that runs for minutes or hours and then shuts down), it might be "ok" in the same way any other bug is ok.
Actually, leaks can be kind of different in one important way: if you are shipping a library/API, you really should fix them, because the customer benefit is enormous (otherwise all your customers 'inherit' your leak and will be phoning you, just as you now have to phone your 3rd-party vendor).
While I agree that every little leak adds up, I don't agree that it's always the best business decision to fix it.
What if you have a stateless legacy system and no coders who understand it? Now you are using it in a situation that has to scale... and it's 100X cheaper to spawn a new instance and swap them out before memory goes overboard.
Or let's say you have a batch processing system that runs 24x7 but for which there is no real user. If it's cheaper to monitor memory and tell the system to restart itself periodically, why hunt down the leak?
I think you should try real hard but be pragmatic about the business ramifications of the decision.
No, it does not matter, but only if, as you pointed out, the memory leak is not repetitive. Memory leaks that don't grow as a program progresses are usually okay; non-growing leaks are cleaned up when the process terminates.
However, it is difficult to prove that an observed memory leak is not growing unless you have sufficient empirical data. In reality, many huge programs (even ones written in Java/C#) have memory leaks, but most of them are non-growing.
Seriously, we can't live entirely without memory leaks, deadlocks, and data races. Having these bugs is, in itself, okay; they only matter when they kill your program.
But I have to disagree with the opinion that "memory is cheap": that cannot justify memory leaks, and it is a very dangerous attitude.
Yes. Leaks matter. If your app runs 24x7x365 and handles a few thousand transactions per second, a few bytes turn into gigabytes rapidly.
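To put a rough number on that (illustrative figures, not from any specific system): leaking 8 bytes per transaction at 2,000 transactions per second is 16 kB per second, which works out to roughly 1.4 GB per day.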
A memory leak really depends on several things:
How often the leak happens
How much memory is lost each time
How long is the program going to run
For example, if you lose 40 bytes every time a task happens, and that task happens only when the program starts, then nobody cares. If you lose 40 MB every time the program starts, then it should be investigated. If you lose 40 bytes every frame in your video or game engine, then you should look into it, because at 30 frames per second that is 1.2 kB each second, and after an hour you would have lost over 4 MB.
It also depends on how long the program is going to stick around. For example, I have a small calculator app that I open, run a calculation in, and then close again. If that app loses 4 MB in its run, it doesn't really matter, because the OS will reclaim that lost memory once I close it. If the hypothetical video/game engine mentioned earlier lost 4 MB an hour, and it ran a demo unit for several hours a day at a stand at a convention, then I'd look into it.
An example of a famous memory leak is Firefox, which lost a lot of memory in its earlier versions. If your browser leaked memory 10 years ago, you probably wouldn't have cared: you shut down the computer every day, and while running the browser you only had one page up at a time. Today I just let my laptop go to standby, and I never close Firefox. It is open for weeks at a time, with at least 10 tabs open at any given moment. If memory leaks every time a tab is closed, that builds up to a larger leak now than it did 10 years ago, and so it is more important.
Are memory leaks ever ok?
Sure, if it's a short-lived process.
Memory leaks over a long period of time are, as the 85-vote answer implies, problematic. Take a simple desktop app like Firefox, for example: prior to the 3.x versions, did you ever notice how you needed to restart it after a while to recover from sluggishness?
As for the short term, no, it doesn't matter. Take CGI or PHP scripts, for example, or the little Perl three-liner in your ~/bin directory. Nobody's going to call the memory police if you write a 30-line non-looping application in C with 5 calls to malloc() and not a single call to free().
I am in the same boat as you. I have small memory leaks that don't ever grow. Most of the leaks are caused by improperly tearing down COM objects. I have studied the leaks and come to realize that the time and money needed to fix them is disproportionate to the damage the leaks do. Windows cleans up most of the time, so the true damage is only realized if the user runs his computer for years without rebooting.
I think it's acceptable to leave in the leaks. It sounds so taboo, but if the leaks never ever ever grow and they are small, it's pretty insignificant in the larger scheme of things.
I agree with the earlier responses that leaks do matter.
People may have tons of memory, but they are also running more and more programs, and unless your application is completely hogging up the processor, it needs to play nice with other programs, which also means not hogging up resources it doesn't need.
So this small memory leak will add up and cause the user other problems, and if they decide they are having memory issues, and that running your app is what causes them, they will stop running it.
Besides, as has been pointed out, if you don't know what is causing the leak then you may have other problems you don't know about. It may be the tip of a bug iceberg.
It depends on the nature of your application. I work primarily with web sites and web applications, so by most definitions my application "runs" once per request. Code that leaks a few bytes per request on a high-volume site can be catastrophic. From experience, we had some code which leaked a few kB per request; added up, that caused our web server worker processes to restart so often that it caused minute-long outages throughout the day.
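As a rough illustration with assumed numbers: at 4 kB leaked per request and 50 requests per second, a worker process loses about 200 kB per second, roughly 700 MB per hour, which is enough to trip a typical recycling limit several times a day.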
But web applications (and many other kinds) have an indefinite lifespan - they run continuously, forever. The shorter-lived your application, the better. If each session of your application has a finite and reasonably predictable end point, there's of course a reasonable amount of leakage you can tolerate. It's all about the ROI.
It all depends. Reasons not to worry: the process is short-lived, the leaks are small and/or infrequent, or the cost of an out-of-memory exception is low (e.g., a web server instance in a cluster needs restarting and a few fetches need retrying). So I agree that some leaks don't really matter in practical terms.
But on the other hand, if you do have cause to worry, or even feel a nagging sense of doubt that maybe you're not taking quality seriously enough, it's a small matter (in most cases) to run your software with a memory leak detector and fix the problems. There are many good leak detectors out there. And you might find that the leak is part of a more serious problem, such as not releasing other resources (like open files). You may even find that the harmless leak would turn quite dangerous in usage scenarios you haven't tested yet.
Yes, it matters. Every little leak adds up.
For one, if your leaky code is used in a context where it is repeatedly used, and it leaks a little bit each time, those little bits add up. Even if the leak is small, and infrequent, those things can add up to significant quantities over long periods of time.
Secondarily... if you're writing code that has memory leaks, then that code has problems. I'm not saying that good code doesn't from time to time have memory leaks, but the fact of their existence means that there are some serious problems going on. Many, many security holes are due to just this sort of oversight (unbounded string copy, anyone?).
Bottom line is, if you know about it, and don't do all you can to track it down and fix it, then you're causing problems.
Memory leaks are never OK in any program, however small it may be.
Slowly they will add up to fill your entire memory. Suppose you have a call-handling system which leaks about 4 bytes of memory per call it handles. You can handle, say, 100 calls a second (a very small number), so you end up leaking 400 bytes a second, or 400 × 60 × 60 = 1,440,000 bytes (about 1.4 MB) an hour. So even a small leak is not acceptable.
And if you don't know the source of the leak, it may be a sign of some really big issue, and you end up having buggy software.
But essentially it boils down to questions like: for how long does the program run, and how many times does the leak happen? It may be OK if it leaks a very small amount and the leak is not repeated, but even then the leak may be a small part of a bigger problem.
So, I feel that memory leaks are never ok.
That's like asking whether a crack in a dam is OK. NO! NEVER! Avoid memory leaks as if your life depends on it, because when your application grows and gets more use, that leak is going to turn into a flood, and sooner or later someone or something is going to drown in it. Just because you can have lots of memory doesn't mean you can take shortcuts with your code. Once you find a leak, do what you can to resolve it, and if you can't solve it, make sure you keep coming back to it until it's fixed.
If you can't resolve the leak then try to see if you can clean up after it. The bigger issues come when the leak is repetitive.
Last note: if you ever hand the software to someone else and that leak is still there it may be a long time before someone else finds and/or fixes it.
I wouldn't be so worried about the quantity as about the frequency of the memory you leak. If you leak even just a few bytes very, very often, your malloc implementation's data structures will grow, and that can make it dramatically slower to traverse them, to allocate new memory, and to free it. Unless you hit the point where you have leaked more than a tiny fraction of your RAM, it is mainly your program that will suffer from those performance problems, not the whole system. This does not apply to systems whose allocators are even remotely dlmalloc-based (FreeBSD, Linux, etc.); there you don't need to care, as all you lose is memory (perhaps a few times more than the amount you think), not performance.
A single allocation which is never reclaimed by your program is not a leak at all. If you write a small command-line utility that takes a second to complete, you may not need to reclaim any memory in it at all. Upon termination, the OS reclaims RAM, file handles, and basically any other kind of system resource, although you cannot rely on some OSes as much as on others; as long as it's just memory, even Windows 95 will manage it just fine.
One more thing follows from that: if you leak memory, don't bother cleaning it up at the end of the program or after a long execution time, or you will just waste even more CPU time. Always fix leaks as close to the point where they are created as possible. Another reason: malloc implementations prefer to keep the RAM they got from the OS for future allocations instead of giving it back, and you may also suffer address-space fragmentation.
If someone says memory leaks are OK in small amounts, as long as they don't crash the application, it is like saying stealing is OK in small amounts, as long as you are not caught :)
Memory leaks matter a great deal in 32-bit applications, because the application is limited to 2^32 bytes (4 GB) of address space, and in practice usually to less than that. Once a 32-bit application attempts to allocate more memory than it can address, it may crash or hang.
Memory leaks are not as pressing in 64-bit applications, because the application is limited to 2^64 bytes of address space, which is approximately 16 EB; with a 64-bit application you are limited more by hardware, although the OS will still likely impose an artificial limit at some point.
Bottom line is that having a memory leak in your code is bad programming; so fix it and be a better programmer.
