PHP - Plesk - Cron - Allowed memory size exhausted?

ini_set('max_execution_time',0);
ini_set('memory_limit','1000M');
These are the first two lines at the very top of my script.
I was under the impression that memory limits didn't apply when running something via cron, but I was wrong. Safe mode is off, and when I check whether these values are being set they are, yet I keep getting the good ol' "PHP Fatal error: Allowed memory size exhausted" error.
Any ideas what I may be doing wrong? And what's the more elegant way of writing "infinite" for the memory_limit value: is it -1 or something?

Is it possible that Suhosin is running on your server? If so, you have to set suhosin.memory_limit inside your php.ini. Suhosin does not allow allocating more memory otherwise, even if safe mode is off.

Changed memory_limit to -1 instead of '1000M' and now everything works perfectly.

You can't use non-numeric values ("M", "K") outside php.ini proper. Setting 10000000 would probably work.
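For reference, a minimal sketch of what the top of such a cron script could look like, assuming no Suhosin cap is in play (the file name and the echoed check are just for illustration):

<?php
// cron_job.php (hypothetical name) - run from cron, so no web-server request limits apply
ini_set('max_execution_time', 0);   // 0 = no execution time limit
ini_set('memory_limit', -1);        // -1 = no memory limit
// Or, if shorthand values are rejected, pass an explicit byte count (~1000 MB):
// ini_set('memory_limit', 1048576000);
echo ini_get('memory_limit'), "\n"; // verify the value actually took effect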

Related

disk usage increasing indefinitely with php script

I am using the following code to create backups of some PHP variables.
if(file_exists(old_backup.txt))
unlink('old_backup.txt');
copy('new_backup.txt', 'old_backup.txt');
$content = serialize($some_ar);
file_put_contents('new_backup.txt', $content);
new_backup.txt holds the current variables dump and old_backup.txt holds the dump from some time back.
The dump size is constant, around 300 MB, but every time the above code runs, disk usage keeps increasing. When the PHP script is killed, disk usage returns to normal.
I am not sure where a file handle is still being held open for the deleted files.
How do I make the above code work without disk usage growing so much?
I'm not sure what exactly is causing the disk usage increase, because you posted only a snippet and not the full script. However, there are a few things that are not correct for sure:
if(file_exists(old_backup.txt))
should be
if(file_exists('old_backup.txt'))
Also, the mere existence of the file does not mean you can unlink it; you should check permissions too.
That being said, those aren't good reasons to fill the disk, but we need to see where you get the $some_ar variable from to give better advice.
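In the meantime, a rough sketch only (keeping the file names from the question and assuming $some_ar is already populated) of a version that quotes the file names, checks writability, and uses rename() instead of unlink-plus-copy:

<?php
// Hypothetical rewrite of the rotation snippet from the question.
if (file_exists('old_backup.txt') && is_writable('old_backup.txt')) {
    unlink('old_backup.txt');
}
if (file_exists('new_backup.txt')) {
    rename('new_backup.txt', 'old_backup.txt'); // metadata-only move, no 300 MB copy
}
file_put_contents('new_backup.txt', serialize($some_ar));
clearstatcache(); // drop cached stat() results after touching the files

Because rename() is a metadata-only operation, it is both faster and avoids writing an extra ~300 MB to disk on every run.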

What is a soft variable?

I am trying to understand this fragment from the CentOS 6 hardening benchmark guide, but I haven't had any success yet... Can anybody explain to me what a soft variable is?
The context where it is mentioned is the following:
"Setting a hard limit on core dumps prevents users from overriding the soft variable. If core dumps are required, consider setting limits for user groups (see limits.conf(5)). In addition, setting the fs.suid_dumpable variable to 0 will prevent setuid programs from dumping core [...]"
Page 39 of the CentOS 6 Hardening Guide.
Thank you so much!
Here, read this: linux.die.net/man/5/limits.conf, it helps explain hard vs. soft limits. – Tom Myddeltyn
A "soft" limit is one that you may decrease or increase (up to the "hard" limit), and a "hard" limit is one that you can't increase.
Initial limits are imposed by the system configuration. See the limits.conf manual.
See also the entry for the ulimit builtin of your particular shell.
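A purely illustrative sketch of how the two limits interact for core dumps (the values here are examples, not the guide's):

# /etc/security/limits.conf -- example entry
*    hard    core    0        # hard limit: users cannot raise their core dump size above 0

# In a shell, a user may only adjust the soft value within the hard ceiling:
ulimit -H -c                   # show the hard core-dump limit
ulimit -S -c                   # show the current soft limit
ulimit -S -c 1024              # attempt to raise the soft limit; fails if it exceeds the hard limit

# Per the quoted text, setuid programs can also be prevented from dumping core:
sysctl -w fs.suid_dumpable=0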

How to clear garbage in PHP 4

I have written an application, but there is an issue with memory overflowing. Is there a way to clear all garbage values in PHP 4?
I think more information about your specific case and environment is needed (I am just guessing that you are running PHP from a web server and not the CLI). You should also look through your entire code yourself for places that can be optimized.
As you probably know, garbage collection is not a part of PHP 4. Check out unset and http://www.obdev.at/developers/articles/00002.html for some pointers.
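A small sketch of the unset() approach, assuming the work happens in a loop over large chunks of data (the file pattern and process_chunk() are invented placeholders; the functions used exist in PHP 4.3+):

<?php
$files = glob('data/*.txt');               // hypothetical list of large input files
for ($i = 0; $i < count($files); $i++) {
    $data = file_get_contents($files[$i]);
    process_chunk($data);                  // placeholder for your own processing
    unset($data);                          // free the chunk before loading the next one
}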
If the problem is memory overflow, you can use:
ini_set('memory_limit', '128M'); // or whatever amount of memory you need
This will raise, for that script, the default memory limit configured in php.ini or httpd.conf.
Note: I'm not sure if it works on PHP 4, but give it a try.

Clearing Large Apache Domain Logs

I am having an issue where Apache logs are growing out of proportion on several servers (Linux CentOS 5)... I will eventually disable logging completely, but for now I need a quick fix to reclaim the hard disk space.
I have tried using echo " " > /path/to/log.log or * > /path/to/log.log, but they take too long and almost crash the server, as the logs are as large as 100 GB.
Deleting the files works fast, but my question is: will it cause a problem when I restart Apache? My servers are live and full of users, so I can't crash them.
Your help is appreciated.
Use the truncate command
truncate -s 0 /path/to/log.log
In the longer term you should use logrotate to keep the logs from getting out of hand.
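A typical logrotate stanza for this, dropped into /etc/logrotate.d/, might look roughly like the following (paths and retention values are examples, not specific to your servers):

# /etc/logrotate.d/apache-example  (illustrative values only)
/var/log/httpd/*log {
    daily
    rotate 30            # keep 30 rotated files
    compress
    delaycompress
    missingok
    notifempty
    sharedscripts
    postrotate
        /usr/sbin/apachectl graceful > /dev/null 2>&1 || true
    endscript
}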
Try this:
cat /dev/null > /path/to/log.log
mv /path/to/log.log /path/to/log.log.1
Do this for your access and error logs and, if you are really doing it in prod, your rewrite logs.
This doesn't affect Apache on *nix, since the file handle is still open: Apache simply keeps writing to the renamed file. Then restart Apache. Yes, I know I said restart, but this usually takes a second or so, so I doubt that anyone will notice -- or they'll blame it on the network. The restarted Apache will be running with a new set of log files.
In terms of your current logs, IMO you need to keep at least the last 3 months of error logs and 1 month of access logs, but look at your volumetrics to decide your rough per-week volumes for error and access logs. Don't truncate the old files. If necessary, do a nice tail piped to gzip -c of these to create archives. If you want to split them, use a loop doing tail | head | gzip with the --bytes=nnG option. OK, you'll split across the odd line, but that's better than deleting the lot as you suggest.
Of course, you could just delete the lot as you and others propose, but what are you going to do if you've realised that the site has been hacked recently? "Sorry: too late; I've deleted the evidence!"
Then, for goodness' sake, implement a logrotate regime.
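To make the archiving idea above concrete, a rough sketch (the paths and the 10 GB figure are made-up examples):

# After the mv and restart above: keep the last ~10 GB of the old log, compressed, then drop the rest.
tail -c 10G /path/to/log.log.1 | gzip -c > /path/to/log.log.1.tail.gz
rm /path/to/log.log.1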

Linux #open-file limit

We are facing a situation where a process gets stuck because it runs out of its open-files limit. The global file-max setting was set extremely high (in sysctl.conf), and the per-user value was also set high in /etc/security/limits.conf. Even ulimit -n reflects the per-user value when run as that headless user (the process owner). So the question is: does this change require a system reboot (my understanding is that it doesn't)? Has anyone faced a similar problem? I am running Ubuntu Lucid and the application is a Java process. The ephemeral port range is also high enough, and when checked during the issue, the process had 1024 files open (note: the default value), as reported by lsof.
One problem you might run into is that the fd_set used by select() is limited to FD_SETSIZE, which is fixed at compile time (in this case, of the JRE) and is 1024.
#define FD_SETSIZE __FD_SETSIZE
/usr/include/bits/typesizes.h:#define __FD_SETSIZE 1024
Luckily, both the C library and the kernel can handle an arbitrarily sized fd_set, so, for a compiled C program, it is possible to raise that limit.
Assuming you have edited the file-max value in sysctl.conf and /etc/security/limits.conf correctly, then:
edit /etc/pam.d/login, adding the line:
session required /lib/security/pam_limits.so
and then do
#ulimit -n unlimited
Note that you may need to log out and back in again before the changes take effect.
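To confirm, without rebooting, what limit a running or newly started process actually sees, something along these lines usually works (the PID 12345 and the user name appuser are placeholders):

# Limits already applied to a running process (these do not change when limits.conf is edited):
grep 'open files' /proc/12345/limits
# Limit a fresh login shell for that user would get:
su - appuser -c 'ulimit -n'
# Number of descriptors the process currently has open:
ls /proc/12345/fd | wc -l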
