How to clear garbage in PHP 4

I have written an application, but there is an issue with memory overflowing. Is there a way to clear all garbage values in PHP 4?

I think we need more information about your specific case and environment (I'm just guessing that you are running PHP from a web server and not the CLI). You should also look through your code yourself for places that can be optimized.
As you probably know, garbage collection is not a part of PHP 4. Check out unset and http://www.obdev.at/developers/articles/00002.html for some pointers.
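For example, you can free a large variable explicitly as soon as it is no longer needed (a minimal sketch; the variable and its contents are invented for illustration):
$rows = array();
for ($i = 0; $i < 100000; $i++) {
    $rows[] = str_repeat('x', 100); // simulate a large data set
}
// ... work with $rows ...
unset($rows); // drop the reference so PHP 4's reference counting can reclaim the memory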

If the problem is hitting the memory limit, you can use:
ini_set('memory_limit', '128M'); // or whatever amount of memory you need
This raises the default limit (configured in php.ini or httpd.conf) for that script.
Note: I'm not sure whether it works on PHP 4, but give it a try.
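To check whether the new limit actually took effect, and to watch consumption, you can try something like this (note: on PHP 4, memory_get_usage is only available if PHP was compiled with --enable-memory-limit):
ini_set('memory_limit', '128M');
echo ini_get('memory_limit');   // confirm the new limit
echo memory_get_usage();        // current consumption in bytes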

Related

node.js mongoose.js memory leak?

I'm creating a Bower package search site (everything is open sourced) and I've hit a wall. I have a memory leak (or I think I have) and I honestly don't know why it is there.
You can download it and run it on your own, but even a simple hint would help me greatly.
I have narrowed it down to this function call: https://github.com/kamilbiela/bowereggs-backend/blob/master/main.js#L14 ( nest.fetchAndSave() ), which is defined here: https://github.com/kamilbiela/bowereggs-backend/blob/master/lib/nest.js
Basically it downloads a package list from the internet, JSON.parses it, and inserts it into the database, plus some when.js promises.
Running this function a few times adds about 30 MB of memory per run that is not cleaned up by the garbage collector. Also note that this is my first "real" Node.js project, so I'll be really grateful for any tip.
For anyone having the same problem:
https://github.com/c4milo/node-webkit-agent
After making a few heap dumps I discovered that the objects are garbage collected and the real memory usage isn't tied to the heap. I think the real memory usage is bigger because of Mongo and other non-Node.js allocations. Also, real memory usage stabilizes at ~300 MB, while the heap dumps stay at ~35 MB.
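One cheap way to watch this yourself is a sketch like the following, using only the built-in process.memoryUsage (the interval length is arbitrary):
// Log V8 heap usage next to the process's resident memory every 10 seconds.
setInterval(function () {
    var mem = process.memoryUsage();
    console.log('rss: ' + (mem.rss / 1048576).toFixed(1) + ' MB, ' +
        'heapUsed: ' + (mem.heapUsed / 1048576).toFixed(1) + ' MB');
}, 10000);
If heapUsed stays flat while rss keeps growing, the retention is outside the V8 heap, which matches what the heap dumps showed here.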

Node JS, Highcharts Memory usage keeps climbing

I am looking after an app built with Node JS that's producing some interesting issues. It was originally running on Node JS v0.3.0 and I've since upgraded to v0.10.12. We're using Node JS to render charts on the server and we've noticed the memory usage keeps climbing chart after chart.
Q1: I've been monitoring the RES column in top for the Node JS process, is this correct or should I be monitoring something else?
I've been setting variables to null to try to release memory back to the system (I read this somewhere as a solution), and it makes only a slight difference.
I've pushed the app all the way to 1.5 GB, at which point it ceases to function, but the process doesn't appear to die. There are no error messages, which I found odd.
Q2: Is there anything else I can do?
Thanks
Steve
That is a massive jump in versions. You may want to share what code changes you made to get it working on the latest stable; the API is not the same as back in v0.3, so that may be part of the problem.
If not, then the issue you're seeing is more likely heap fragmentation than an actual leak. In later V8 versions garbage collection is more liberal about cleanup to improve performance. (See http://code.google.com/p/chromium/issues/detail?id=112386 for some discussion on this.)
You may try running the application with --max_old_space_size=32, which will limit the amount of memory V8 can use to around 32 MB. Note the docs say "max size of the old generation", so it won't be exactly 32 MB, just around it, for lack of a better technical explanation.
Also, you can track the amount of external memory usage with --trace_external_memory. This will let you know whether external memory (i.e. Buffers) is being retained in your application.
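For example, combining both flags (app.js is a placeholder for your entry script):
node --max_old_space_size=32 --trace_external_memory app.js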
Your note about the application hanging around 1.5 GB tells me you're probably on a 64-bit system. You only mentioned that it ceases to function, but didn't note whether the CPU is spinning during that time. Also, since I don't have example code, I'm not sure what might be causing this to happen.
I'd try running on the latest development release (v0.11.3 at the time of this writing) and see if the issue is fixed. A lot of performance/memory enhancements are being worked on that may help your issue.
I guess you have a memory leak somewhere (in the form of a closure?) that keeps the (no longer used?) charts somewhere in memory.
V8 sometimes needs a bit of tweaking when it comes to more than 1 GB of memory. Try out --noincremental_marking and/or --max_old_space_size=8192 (if you have 8 GB available; the flag's value is in MB).
Check for more options with node --v8-options and go through the --trace* parameters to find out what slows down or stops Node.
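For example, combining the two suggestions above (server.js is a placeholder for your entry script):
node --noincremental_marking --max_old_space_size=8192 server.js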

Why does Node.js serve a file with 80x more CPU usage than Nginx?

Take the same code that sits on the nodejs.org home page. Serve a static file that is 1.8 MB. Then do the same with Nginx, and watch the difference.
Code : http://pastie.org/3730760
Screencast : http://screencast.com/t/Or44Xie11Fnp
Please share if you know anything that'd prevent this from happening, so we don't need to deploy nginx servers and complicate our lives.
PS1: This test was done with Node 0.6.12. Out of curiosity, I downgraded to 0.4.12 just to check whether it's a regression; on the contrary, it was worse: serving the same file used 25% CPU, twice as much.
PS2: This post is not Node.js hate. We use Node.js and we love it, except for this glitch, which actually delayed our launch (and made us really sad) and seemed quite serious to me; it's something I've never read about, heard of, seen, or expected to come across.
The problem with your Node benchmark is that you store the static file in a variable inside the V8 heap. Due to the way V8 handles memory, it can't send data contained in JavaScript variables directly to the network, because the addresses of allocated objects may change during runtime. V8 therefore has to make a copy of your 1.8 MB string on every request, and that kills performance.
What you could do is use a Buffer:
replace: longAssString = fs.readFileSync(pathToABigFile, 'utf8');
with: longAssString = fs.readFileSync(pathToABigFile);
That way you have your static file in a Buffer. Buffers are stored outside of V8's heap and require no copy when sent to the network, so this should be much faster.
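Put together, a minimal sketch of the benchmarked server with that one-line change (the path and port are placeholders; the variable name is kept from the snippet above):
var http = require('http');
var fs = require('fs');

var pathToABigFile = '/path/to/big.file'; // placeholder

// No encoding argument: readFileSync returns a Buffer. Buffers live outside
// the V8 heap, so res.end() can hand the bytes to the socket without making
// a per-request copy of a 1.8 MB string.
var longAssString = fs.readFileSync(pathToABigFile);

http.createServer(function (req, res) {
    res.writeHead(200, { 'Content-Type': 'application/octet-stream' });
    res.end(longAssString);
}).listen(8000);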

What causes a memory leak in Java

I have a web application deployed on Oracle iPlanet Web Server 7. The website is actively used on the Internet.
After deployment, the heap size keeps growing, and after 2 or 3 weeks an OutOfMemory error is thrown.
So I began using a profiling tool. I am not familiar with heap dumps. All I noticed is that char[], HashMap, and String objects occupy too much of the heap. How can I tell what causes the memory leak from a heap dump? My assumptions about my memory leak:
I do a lot of logging in code using log4j, keeping it in a log.txt file. Is there a problem with that?
maybe an error in removing inactive sessions?
some static values like cities and gender types stored in static HashMaps?
I have a login mechanism but no logout mechanism; when the site is opened again, a new login is needed (silly, but not implemented yet)?
All of the above?
Do you have an idea about them or can you add another assumptions about memory leak?
Since Java has garbage collection, a "memory leak" is usually the result of keeping references to objects when they shouldn't be kept alive.
You might be able to see, just from the age of the objects, which ones are old and being kept around when they shouldn't be.
log4j shouldn't cause any problems.
The hashmap should be okay, since you actually want to keep these values around.
Inactive sessions might be the problem if they're stored in memory and if something keeps references to them.
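To illustrate the sessions/static-values points, here is a sketch of the classic pattern (all names are invented): a static map that entries are only ever added to keeps every value strongly reachable, so the heap climbs until an OutOfMemoryError:
import java.util.HashMap;
import java.util.Map;

public class SessionRegistry {
    // Grows without bound: nothing ever calls remove(), so every session
    // object stays strongly reachable for the lifetime of the application.
    private static final Map<String, Object> SESSIONS = new HashMap<String, Object>();

    public static void register(String id, Object session) {
        SESSIONS.put(id, session);
    }
}
The fix is to remove entries on logout or expiry, or to use a bounded or weak-reference cache. Fixed lookup tables like cities don't grow, which is why they're usually fine.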
There is one more thing you can try: a new project, Plumbr, which aims to find memory leaks in Java applications. It is in beta, but should be stable enough to give it a try.
As a side note, Strings and char[] are almost always at the top of a profiler's data. This rarely indicates any real problem.

Under Linux, how do I track down a memory leak in pre-built software?

I have a new 64-bit Ubuntu Linux Server 10.04 LTS.
A default install of MySQL with replication turned on appears to be leaking memory.
However, we've tried going back to an earlier version, and memory is still leaking, but I can't tell where.
What tools/techniques can I use to pinpoint where memory is leaking so that I can rectify the problem?
Valgrind, http://valgrind.org/, can be very useful in these situations. It runs on unmodified executables, but it helps tremendously if you can install the debugging symbols. Be sure to use the --show-reachable=yes flag, as the leaked memory may still be reachable in some way, just not the way you want it. Also use --trace-children in case of a fork. You'll likely have to track down where the executable is called in the start-up script and then add something like the following:
valgrind --show-reachable=yes --trace-children=yes --log-file=/path/to/log SQL-cmdline sqlargs
The man page has lots of other potentially useful options.
Have you tried the MySQL mailing list? Something like this would certainly be of interest to them if you can reproduce it in a straightforward manner.
You can use Valgrind as ninjalj suggests, but I doubt you'll get that close to anything useful. Even if you see a real leak (and they will be hard enough to validate), tracking down the root cause through the C call stacks will likely be very annoying (for example, if the leak is triggered by a particular SQL pattern or stored procedure, you'll be looking at the call stack from the resulting optimized query, not the original calls, which are likely in a different language).
Normally you might have no recourse and have to resort to tracking it down through call stacks and iterative testing, but you have the source code to MySQL (including the source for the exact default package install), so you can use more advanced tools like MemoryScape (or at least build with symbols in order to give Valgrind more food for thought).
Try using Valgrind.
A very good and powerful tool, which is installed or available on most distributions, is Valgrind.
It has a plethora of options and is pretty much (as far as I've seen) the default profiler on Linux systems.
