Maximum execution time of 300 seconds exceeded - Linux

I know this subject has many duplicates here and there, but trust me, I've spent time going through those posts and my question remains unanswered.
I'm running a PHP script on Debian Linux / Nginx / PHP-FPM / APC; I think that's all.
I'm executing the script from my SSH terminal (CLI) like this: php plzrunthisFscript.php &
It used to work flawlessly but now, it returns this famous error.
What I've already tried, and which failed (I mean it didn't change anything):
Checked which php.ini is used, with a phpinfo(); inside my script (it's /etc/php5/cli/php.ini).
max_execution_time is hardcoded to 0 (meaning no limit) for the CLI, if I'm not mistaken.
Tried adding set_time_limit(0); at the beginning of my script.
Tried adding ini_set('max_execution_time', -1);
Tried executing my php command with the parameter -d max_execution_time=0
Tried to execute from a web interface (served by Nginx)
Tried to execute from another web interface (served by Apache2 this time); it gives the same error on the page.
Tried setting max_execution_time = 15 in /etc/php5/cli/php.ini to check whether the php.ini that phpinfo() inside my script reports is actually used or ignored: it's ignored.
Every time, it throws this error, sometimes even after more than 300 seconds I think, which is really confusing.
If someone has any idea on how to fix this, or has some things I can try, please advise.
Thank you for your time.

God, I finally solved it!
I think that could be considered a Magento bug, in a way.
Here is how to reproduce this :
You have Magento (my version is CE 1.7) running with Apache 2.
One day, you decide to get rid of Apache and try to adopt Nginx.
You configure everything and it's working great, but you end up with this error one day, trying to rebuild your indexes as you often do.
Thing is, when you run (for example) : php indexer.php --reindex catalog_url &
This script includes another one called abstract.php, which contains this awesome function:
protected function _applyPhpVariables()
This function will look for a .htaccess file in your Magento root directory, parse every single configuration parameter in it, and apply those parameters to the indexing script's execution.
How clever ...
Anyway, to solve this you just need to (delete / rename / burn) this .htaccess file, and then everything will go back to normal.
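For anyone curious, the behaviour described above boils down to something like this rough sketch of the idea (this is not Magento's actual code; the path and the regex are assumptions for illustration):

// Rough sketch of what a function like _applyPhpVariables() does (not Magento's real code).
$htaccess = __DIR__ . '/.htaccess';   // assumed: .htaccess sitting in the Magento root
if (is_readable($htaccess)) {
    foreach (file($htaccess) as $line) {
        // A line such as "php_value max_execution_time 300" gets re-applied
        // to the running CLI process, overriding php.ini and set_time_limit(0).
        if (preg_match('/^\s*php_value\s+(\S+)\s+(\S+)/i', $line, $m)) {
            ini_set($m[1], $m[2]);
        }
    }
}

Which is why the 300-second limit from the web server's .htaccess ends up applied to a CLI run.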
Thanks for your help everyone.

Related

Golem Task responds back with runtime error 2, can't determine the cause

The repo for all the code I've been using is updated here. When I run the requestor script it exits with runtime error 2 (File not found). I am not sure how to further debug or fix this. So far I've converted my code over to a python-slim Docker image to better mirror the example. It also works for me when I spin up the Docker image: typing and running "/golem/work/imageclassifier.py --trainmodel" works from root. I switched all my code to use absolute paths. I also made sure the shebang (#!) uses Linux end-of-line characters rather than Windows ones, which was giving me errors before. I also fixed a bug where my script returned error code 2 when called with no args, so that it now passes.
import pickle  # needed for the dump below

clf.fit(trainDataGlobal, trainLabelsGlobal)  # train the classifier
pkl_file = "classifier.pkl"
with open(pkl_file, 'wb') as file:
    pickle.dump(clf, file)  # serialize the trained model to disk
is the only piece I could think of that causes the issue, but as far as I can tell this is the proper way to pickle something in Python. The requestor script is also heavily based on the simple service example, and I tried to mirror my design on that. I just need help getting more information while debugging, or guidance on how to move forward from here.

Run init.d script conditionally based on hostname

What would be the best way to conditionally run an init.d script on linux based on hostname? I'm working with New Relic and some of the servers simply don't need it installed, but they're all otherwise basic copies of one another. This is Ubuntu.
I've tried (and failed) to put in a host conditional, but for the life of me I can't get it working. I threw exits at the top of the file as well as in the start function, but it seems to fire up every time. Without knowing completely how those scripts are fired, I'm a little confused about how to alter it so it doesn't fire if the server name isn't something like production, etc.
Any guidance would be super helpful.
Put this at the top of the script you would like to disable:
# bail out early on hosts where this service should not run
if [ "$(hostname)" != "goodhost" ]
then
    exit 0
fi
replacing "goodhost" with the actual name of the host where the script is supposed to run.
Does that solve the problem?

Kohana 2 returns an empty page in the browser only

My app is built on Kohana 2. At first I built it on Windows with PHP 5.4 and it worked well. Now I've moved it to Fedora with PHP 5.5.9 and it always returns an empty page in the browser, but if I run it from the command line it works fine.
I noticed that in the bootstrap file, after the line that requires Kohana, it returns an empty page; before this line I can print something to the browser.
There's no error in the server logs. Can anyone help me? I've tried many things but it still returns an empty page.
Many things changed in PHP >= 5.4, and probably several Kohana lib files are causing an issue. One possible culprit is the MySQL connection type, because the old mysql extension is replaced by mysqli/PDO.
https://wiki.php.net/rfc/mysql_deprecation
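If the old ext/mysql calls turn out to be the culprit, the direction of the fix looks roughly like this (a minimal sketch with placeholder credentials and a throwaway query, not Kohana-specific code):

// Legacy style that newer PHP deprecates and later removes:
// $link = mysql_connect('localhost', 'user', 'pass');
// mysql_select_db('mydb', $link);
// $result = mysql_query('SELECT 1');

// mysqli equivalent (placeholder credentials):
$mysqli = new mysqli('localhost', 'user', 'pass', 'mydb');
if ($mysqli->connect_error) {
    die('Connect failed: ' . $mysqli->connect_error);
}
$result = $mysqli->query('SELECT 1');

In Kohana's case that usually means switching the database driver configuration rather than rewriting every call by hand.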

PHP-FPM fastcgi_finish_request() not working as expected

I read @Dmitri's original question about how to use fastcgi_finish_request() and tried to follow the example in the answer in my Kohana 3.1 setup, in index.php:
echo Request::factory()
->execute()
->send_headers()
->body();
Right after that, I added:
fastcgi_finish_request();
sleep(5);
Initially, I thought it worked. But then I realised it only worked for every other request. Example:
Navigate to localhost (works, no pause)
Click link to localhost/controller (pause 5 seconds)
Click another link to localhost/controller (works again, no pause)
And it continues on like that. Am I missing something? Like maybe a setting in the php5-fpm config file?
Running PHP 5.3.5-1ubuntu7.2 with Suhosin-Patch, Nginx
Call session_write_close() before you call fastcgi_finish_request() to resolve this issue:
session_write_close();
fastcgi_finish_request();
sleep(5);
Besides the server response itself (which you can control with the fastcgi_finish_request() function, and rest assured it works that way), there can be other resources that block the (next) script from starting right away.
These can be file locks (commonly used for sessions) and other things. As you have not shared much code and we do not see your Kohana configuration, you should take a look at which components you use and which resources they acquire.
Is it because your web server only handles one PHP instance at a time and it is still executing the previous script?

How to disable the "headers already sent" message on Linux with cPanel?

I build my sites on localhost (running WAMP on Windows), and when I upload them to my server, I always get
"Cannot modify header information - headers already sent"
I understand that there shouldn't be any blank lines and so on, and usually this works out. But now I need to redirect someone after the headers have been sent; how can I make my server act like my localhost?
I'm using cPanel and WHM:
cPanel 11.25.0-R42399 - WHM 11.25.0 - X 3.9
CentOS 5.4 x86_64, Virtuozzo, on a VPS
I will appreciate any help.
In short, you need to prevent PHP from outputting anything to the browser before you get to the point where you want to use the header() function.
This should be done by careful programming practices, of which your 'no blank lines' is one, or by storing PHP's output in an output buffer, and only outputting when you're ready for it.
See the ob_start() and ob_flush() functions. You call ob_start() at the start of your application. This suppresses output and stores it in a buffer. When you're ready to start outputting, call ob_flush() and PHP will send the buffer's contents to the browser, including the headers that have been set up to that point. If you don't call ob_flush(), the buffer is output (flushed) at the end of the script.
The reason it works in your WAMP development environment is most likely that output buffering is already enabled by default in the php.ini. Quite often these all-in-one packages enable a default buffer for the first 4k bytes or so. However, it is generally better to explicitly start and flush the buffer in your code, since that forces better coding practices.
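A minimal sketch of that pattern (the page content and redirect target are placeholders):

ob_start();                                // start buffering before any output
echo 'some page content';                  // held in the buffer, not sent yet
header('Location: some-other-page.php');   // still allowed: nothing has reached the browser
ob_flush();                                // now the headers and buffered output are sent together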
Well,
I guess with more thought and better programming you can manage to keep all redirects before any HTML is written.
This problem is solved by the old rules...
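In other words, make the redirect decision before any HTML goes out; a minimal sketch of that approach (the condition and target page are placeholders):

<?php
$needsRedirect = false;                      // placeholder: your real condition goes here
if ($needsRedirect) {
    header('Location: some-other-page.php'); // placeholder target page
    exit;                                    // stop so no HTML follows the redirect
}
?>
<html><body>Normal page output starts here.</body></html>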
@user31279: The quickest and dirtiest way I know of is to use @ to suppress the warning, e.g.
@header('Location: some-other-page.php');
