We noticed a strange problem / misbehavior of IIS 7.5 with the CGI module.
When POSTing long content (50k+ bytes) to a CGI exe, IIS does not send
the full content to our CGI exe via STDIN.
The length of the content that IIS sends to the exe is not of a fixed size, so it
does not look like a buffer size problem!
The CGI exe behaves as follows:
1. get the content length from the HTTP header
2. read as many bytes from STDIN as specified by Content-Length
3. do some magic work and exit
Because not all bytes are forwarded by IIS / the CGI module to our exe,
the process waits on STDIN "forever" and never returns
any content. The process is killed after some minutes (timeout).
Any suggestions / hints / etc. are welcome ... thanks in advance
regards,
mherderich
This can be the common Windows CR+LF problem. The content length may no longer match what actually arrives once CR+LF translation is done, because each CR+LF pair is translated to the single character \n, so fewer bytes than Content-Length ever reach STDIN. Opening STDIN in binary mode avoids the translation.
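A minimal sketch of the safe reading pattern, in Python for illustration (the original exe may well be in another language): take the byte count from CONTENT_LENGTH, use the binary stdin stream so no CR+LF translation happens, and loop, because a single read may return fewer bytes than requested.

```python
import os
import sys

def read_post_body():
    """Read exactly CONTENT_LENGTH bytes of the POST body from stdin."""
    length = int(os.environ.get("CONTENT_LENGTH", "0"))
    stdin = sys.stdin.buffer  # binary stream: no CR+LF -> \n translation
    chunks = []
    remaining = length
    while remaining > 0:
        chunk = stdin.read(remaining)
        if not chunk:
            break  # EOF before the full body arrived (the bug observed here)
        chunks.append(chunk)
        remaining -= len(chunk)
    return b"".join(chunks)
```

The EOF check also prevents the "runs forever" symptom: if the server really does deliver fewer bytes, the loop exits instead of blocking.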
Related
How do I load a certain line from a text document (for example line 546) into a website?
Do it with PHP, using the fopen(), fread() and fclose() functions. How you get to a specific line of a .txt file depends on how the file is formatted.
Start with the w3schools tutorials and the PHP manual. The best way to figure it out is to experiment and read the documentation; in my case that helped all the way.
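The answer above names PHP's file functions; the same idea sketched in Python (a hypothetical helper, assuming a plain newline-delimited .txt file):

```python
def get_line(path, line_number):
    """Return the 1-based line_number-th line of the file, or None."""
    with open(path, "r", encoding="utf-8") as f:
        for i, line in enumerate(f, start=1):
            if i == line_number:
                return line.rstrip("\n")
    return None  # fewer lines in the file than requested
```

Reading line by line avoids loading the whole file into memory, which matters once the file grows.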
I propose this solution using old technology because you asked to make this work for an old file format (HTML)! So here it goes! Don't be scared! What we are going to do is use Server Side Includes (SSI) to embed the results of a Common Gateway Interface (CGI) program into a static HTML document using the "#exec cgi" directive. It sounds complicated but it's pretty simple. We only have to create two files and then make sure the web server is properly configured. Three simple steps! I'll guide you step by step:
1. Create Your HTML File
Your HTML file content would be just this (simply copy and paste into an HTML file on your server using a simple text editor):
<HTML>
<HEAD><TITLE>Load a certain line from a .txt into HTML</TITLE></HEAD>
<BODY>
Here's the content of line 546 in filename.txt:
<!--#exec cgi="/cgi-bin/perl-print-line.pl 546 filename.txt"-->
</BODY>
</HTML>
2. Create Your Perl Script File
The above won't work yet. It depends on a small program to output your line number: perl-print-line.pl by Alvin Alexander. You can create this file by copying and pasting the Perl code below into a text editor too (be sure to place it inside your cgi-bin directory and make it executable):
#!/usr/bin/perl
# purpose: print a specific line from a text file
# usage: perl-print-line.pl line-number input-file

# verify the number of command-line arguments
@ARGV == 2 or die "Usage: perl-print-line.pl line-number input-file\n";

$desired_line_number = $ARGV[0];
$filename = $ARGV[1];

# open the file (or die trying)
open(FILE, $filename) or die "Could not read from $filename, program halting.";

# loop through the file, stopping when we reach the desired line
$count = 1;
while (<FILE>)
{
    if ($count == $desired_line_number)
    {
        # print the line, then get out of here
        print $_;
        close FILE;
        exit;
    }
    $count++;
}
close FILE;
print "Sorry, I think the line number you entered is greater than the number of lines in the file.\n";
3. Configure SSI On Your Web Server
Now, all of the above will not work unless your web server is first configured to process SSI. All popular web servers can still do it, but you must first make sure it is configured. Per the PerlScriptsJavaScripts.com site,
If your server is a unix/linux server, you can be pretty sure it
supports SSI. SSI requires the use of Apache server software. Most
unix/linux servers use Apache. Some Windows servers also use Apache, but
most of them use IIS (Internet Information Server). The server
software is not something you can change, and because it runs the
entire server (including other people's web sites) your host will also
refuse to change it for you. They may instead be kind enough to move
your site to another one of their servers which does use Apache.
Most people will tell you that to use SSI, you must name the page with
a "shtml" extension. While that may be the case initially, you can set
the server to also accept and parse regular "htm" or "html" files.
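For Apache with mod_include, that re-configuration is typically just a couple of directives; a sketch, assuming Apache 2.x and a host that lets you set these in the vhost or a .htaccess:

```apache
# Enable SSI processing and parse plain .html files as well as .shtml
Options +Includes
AddOutputFilter INCLUDES .shtml .html
```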
If you have a Windows server instead of a UNIX/Linux server, then you might find this article by Robert McMurray useful. Most importantly he states,
SSI is not enabled by default on IIS 6. To enable SSI on IIS 6, set
the status for the Server Side Includes web service extension to
Allowed in the Internet Information Services (IIS) Manager.
SSI is not installed by default on IIS 7. To install SSI on IIS 7, see
the Server Side Include topic.
The cmd directive for #exec is disabled by default in IIS 5 and IIS 6.
To enable the cmd directive, see Microsoft KB article 233969.
The cmd directive for #exec is now disabled for SSI files in IIS 7;
you can only use the cgi directive.
Finally, if you are dealing with IIS 7.5 and your SSI directives inside your *.html files aren't being automatically preprocessed, try reading this post.
I have a sandbox program which uses setrlimit() to limit the output file size of another program run under its control, which I run like so:
sandbox -max 2048 /usr/bin/mono --debug myprogram.exe <p1 >r1 2>r2
The "-max 2048" switch tells the sandbox to limit the output to a maximum of 2K bytes.
If an exception occurs inside "myprogram.exe", or if I deliberately write something to stderr from inside "myprogram.exe", it appears in r2 as expected. However, when the file size limit is exceeded, I get this error message:
File size limit exceeded (core dumped)
but instead of being written to the error log r2 as expected, it comes out on the console. Can anyone tell me why this is happening? Is there any way I can arrange for this message to be written to r2 along with everything else?
That message looks like an error reported by the shell, not by the program, and the shell's own stdout/stderr aren't redirected anywhere by your command line.
Most probably it opens another file descriptor (let's say 3) and assigns stderr to it (3>&2); then, although you redirected 2>r2, descriptor 3 is still attached to the console, so anything printed on 3 goes to the console. You can list all open descriptors under /proc/self/fd/ to see if this is the case.
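If the first explanation is right (the shell itself prints the message when the child is killed by SIGXFSZ), one workaround is to launch the whole pipeline from a wrapper whose own stderr is redirected. A sketch in Python; run_capturing_all is a hypothetical helper name, and the sandbox command line is the one from the question:

```python
import subprocess

def run_capturing_all(cmd, out_path, err_path):
    """Run cmd under a subshell with the shell's own stdout/stderr
    redirected, so messages the shell prints about a killed child
    (e.g. "File size limit exceeded") land in err_path too."""
    with open(out_path, "wb") as out, open(err_path, "wb") as err:
        return subprocess.call(cmd, shell=True, stdout=out, stderr=err)

# e.g. run_capturing_all(
#     "sandbox -max 2048 /usr/bin/mono --debug myprogram.exe <p1", "r1", "r2")
```

The equivalent shell-only fix is to wrap the command in a subshell: sh -c '...' >r1 2>r2.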
I know this subject has like ... many duplicates here and there, but trust me, I've spent time reading those posts and my question remains unanswered.
I'm running a PHP script on Debian Linux / Nginx / PHP-FPM / APC / I think that's all.
I'm executing the script from my SSH terminal (CLI) like: php plzrunthisFscript.php &
It used to work flawlessly but now, it returns this famous error.
What I've tried ALREADY, without success (I mean it didn't change anything):
Checked which php.ini is used with a phpinfo(); inside my script (it's /etc/php5/cli/php.ini)
max_execution_time is hardcoded to 0 (meaning no limit) for the CLI, if I'm not mistaken.
Added set_time_limit(0); at the beginning of my script
Added ini_set('max_execution_time', -1);
Executed my php command with the parameter -d max_execution_time=0
Executed from a web interface (served by Nginx)
Executed from another web interface (served by Apache2 this time); gives the same error on the page.
Set max_execution_time 15 in /etc/php5/cli/php.ini to check whether the php.ini reported by phpinfo() inside my script is actually used or ignored: it's ignored.
Every time, it raises this error, sometimes even after more than 300 seconds I think, which is really confusing.
If someone has any idea on how to fix this, or has some things I can try, please advise.
Thank you for your time.
God, I finally solved it!
I think this could be considered a Magento bug, in a way.
Here is how to reproduce it:
You have Magento (my version is CE 1.7) running with Apache 2.
One day, you decide to get rid of Apache and adopt Nginx.
You configure everything and it works great, but then one day you end up with this error while trying to rebuild your indexes, as you often do.
The thing is, when you run (for example): php indexer.php --reindex catalog_url &
this script includes another one called abstract.php, which contains this awesome function:
protected function _applyPhpVariables()
This function looks for a .htaccess file in your Magento root directory, parses every php_value / php_flag setting it finds there, and applies those settings to the CLI run of the index script.
How clever ...
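To illustrate what that function effectively does (a sketch for clarity only, not Magento's actual code), here is roughly how php_value settings get scraped out of a .htaccess:

```python
import re

def parse_php_values(htaccess_text):
    """Collect php_value settings from .htaccess text, the way
    Magento's _applyPhpVariables() effectively does (illustrative
    sketch, not the real implementation)."""
    settings = {}
    for line in htaccess_text.splitlines():
        m = re.match(r"\s*php_value\s+(\S+)\s+(.+?)\s*$", line)
        if m:
            settings[m.group(1)] = m.group(2)
    return settings
```

So a web-oriented max_execution_time in .htaccess silently overrides the CLI's unlimited setting, which explains the symptoms above.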
Anyway, to solve this you just need to (delete / rename / burn) this .htaccess file, and then everything will go back to normal.
Thanks for your help everyone.
I have a log file and want to create a webpage (possibly in Python, but not strictly) that works much like the unix "tail -f filename" command (show new log lines as they are written to the file).
So that the user continuously sees the log right in the browser.
How would you implement this?
Tailon is a Python webapp that, among other things, provides tail -f like functionality. In addition, wtee (a sister project of tailon) can make all of its stdin viewable in the browser; its use is identical to the unix tee command: tail -f filename | wtee
I implemented this using jquery (.ajax) and php (json).
The flow is essentially as follows:
user calls an html page on their browser
html page contains an initial jquery .ajax call to a remote php script on the server that performs the required function, in this case, retrieving a few of the last lines of the file being 'tailed'
if no new lines are available, the php script just loops (while the ajax caller waits, i.e. long polling), and can be configured to time out if necessary (returning an appropriate value back to the ajax calling function on the client)
when new lines are detected by the php script, they are wrapped in a json response and sent back to the ajax calling function on the browser, which then appends them to the existing content of the page.
The javascript function will then recursively make the same ajax call, effectively sitting in an infinite loop.
In my specific implementation, I did the following:
both the ajax call on the client AND the php script on the server have timeouts to handle, for example, broken connections gracefully; this also ensures the ajax call does not wait forever.
the ajax call passes a line number as a reference back to the server to tell it what the last line number was that it received, so the server knows which lines to return. Initially the value is zero, and the server will immediately return the last 10 lines of the file.
when the php script is called, it uses the client's last line number to do a quick check on the file; if new lines have already been added it returns them immediately, if not it sleeps in a loop (1 second) and checks the file's ctime (or mtime) to detect when new lines are written. This is more efficient than counting the lines in the file (which could be huge) every second.
See my longpolling/realtime tail implementation using jquery and php here:
https://github.com/richardvk/web_file_tail
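The server-side half of the flow above can be sketched as follows; this is a hypothetical Python helper for clarity, not the PHP code in the linked repository:

```python
import os
import time

def tail_since(path, last_line, timeout=30.0, poll=1.0):
    """Long-poll: return (new_last_line, new_lines) once lines beyond
    last_line exist, or (last_line, []) after timeout seconds."""
    deadline = time.monotonic() + timeout
    last_mtime = None
    while True:
        mtime = os.stat(path).st_mtime
        if mtime != last_mtime:  # only re-read when the file changed
            last_mtime = mtime
            with open(path, "r", encoding="utf-8", errors="replace") as f:
                lines = f.read().splitlines()
            if len(lines) > last_line:
                return len(lines), lines[last_line:]
        if time.monotonic() >= deadline:
            return last_line, []  # client simply polls again
        time.sleep(poll)
```

The mtime check mirrors the optimization described above: the file is only re-read when it has actually changed.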
Scullog can share a local drive with the browser and stream log files via Socket.IO. It runs on any platform (Windows/Linux/Mac), either as a service or in standalone mode.
You read the file and print the last lines to the page. You might also use a GET variable to let the user define the number of rows to output, using ?n=x where x is the number of lines.
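That "last n lines" approach can be sketched in Python (a hypothetical helper; the ?n=x parameter would be read by whatever serves the page):

```python
from collections import deque

def last_lines(path, n=10):
    """Return the final n lines of the file; the file is scanned once
    but only n lines are kept in memory at a time."""
    with open(path, "r", encoding="utf-8", errors="replace") as f:
        return [line.rstrip("\n") for line in deque(f, maxlen=n)]
```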
I build my sites on localhost (running WAMP on Windows), and when I upload them to my server, I always get
"Cannot modify header information - headers already sent"
I understand that there shouldn't be any blank lines before the headers and everything, and usually this works out. But now I need to redirect someone after output has already been sent; how can I make my server act like my localhost?
I'm using cPanel and WHM:
cPanel 11.25.0-R42399 - WHM 11.25.0 - X 3.9
CENTOS 5.4 x86_64 virtuozzo on vps
I'd appreciate any help.
In short, you need to prevent PHP from outputting anything to the browser before you get to the point where you want to use the header() function.
This should be done by careful programming practices, of which your 'no blank lines' is one, or by storing PHP's output in an output buffer, and only outputting when you're ready for it.
See the ob_start() and ob_flush() functions. You call ob_start() at the start of your application. This suppresses direct output and stores it in a buffer. When you're ready to start outputting, call ob_flush() and PHP will send the buffer's contents to the browser, including the headers that were set up to that point. If you don't call ob_flush(), the buffer is output (flushed) at the end of the script.
The reason it works in your WAMP development environment is most likely that output buffering is already enabled by default in the php.ini. Quite often these all-in-one packages enable a default buffer for the first 4k bytes or so. However, it is generally better to explicitly start and flush the buffer in your code, since that forces better coding practices.
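ob_start()/ob_flush() are PHP-specific, but the underlying idea — buffer the body so headers can still be decided later — can be sketched language-neutrally; here in Python with a hypothetical render_page():

```python
import contextlib
import io

def render_page():
    """Buffer all body output; headers can still be chosen afterwards,
    mirroring what ob_start() makes possible in PHP."""
    buf = io.StringIO()
    with contextlib.redirect_stdout(buf):
        print("<p>Hello</p>")  # body "output" goes into the buffer
        headers = [("Location", "/next")]  # still free to set headers
    # only now would headers be sent, followed by the buffered body
    return headers, buf.getvalue()
```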
Well,
I guess with more thought and better programming you can manage to keep all redirects before any HTML is written.
This problem is solved by the old rules...
@user31279: The quickest and dirtiest way I know of is to use the @ error-suppression operator to hide the warning (a leading # would simply comment the line out), e.g.
@header('Location: some-other-page.php');