It looks like I just found an undocumented PHP 7 backwards-compatibility break; before I run off and report it, I just want to make sure I'm not missing something.
Here's the code:
$proc = proc_open("pwd", [1 => ['pipe', 'w']], $pipes);
if (!is_resource($proc)) {
    exit("bad proc\n");
}
$node = stream_get_contents($pipes[1]);
fclose($pipes[1]);
$status = proc_close($proc);
var_dump($status);
This returns int(0) in PHP 5.6.11-1+deb.sury.org~utopic+1 (cli) and int(-1) in PHP 7.0.2 (cli) (built: Feb 29 2016 16:53:37) ( NTS ).
The changelog doesn't show any PHP 7 changes to proc_open, fclose, or proc_close, so in theory these functions shouldn't have changed their behaviour.
Is there something else I'm overlooking? Why is this failing in PHP7?
Amended code:
$proc = proc_open("pwd",[1=>['pipe','w'],2=>['pipe','w']], $pipes);
$stdout = stream_get_contents($pipes[1]);
var_dump($stdout);
$stderr = stream_get_contents($pipes[2]);
var_dump($stderr);
fclose($pipes[1]);
fclose($pipes[2]);
$status = proc_close($proc);
var_dump($status);
Both PHP 5.6 and PHP 7 print the correct working directory and an empty string for stderr. PHP 5.6 still returns exit code 0, and PHP 7 exit code -1. Adding pipe 0 (stdin) makes no difference.
Related
I encountered a problem where a ksh command in a legacy ksh script behaves strangely in one environment but not in another.
These two environments have:
the same version of ksh: Version AJM 93u+ 2012-08-01
almost the same contents in .profile
same output of cat /proc/version
Linux version 3.10.0-514.10.2.el7.x86_64 (mockbuild@x86-039.build.eng.bos.redhat.com) (gcc version 4.8.5 20150623 (Red Hat 4.8.5-11) (GCC) ) #1 SMP Mon Feb 20 02:37:52 EST 2017
the same output of locale :
LANG=C
LC_CTYPE="C"
LC_NUMERIC="C"
LC_TIME="C"
LC_COLLATE="C"
LC_MONETARY="C"
LC_MESSAGES="C"
LC_PAPER="C"
LC_NAME="C"
LC_ADDRESS="C"
LC_TELEPHONE="C"
LC_MEASUREMENT="C"
LC_IDENTIFICATION="C"
LC_ALL=
Inside this legacy ksh script, there's a function that
executes an SQL query,
stores the result in a temporary file,
removes the title (column name) and \r characters from the temporary file,
assigns the value to a variable,
returns the value of this variable as the function's result via echo.
Here's an example implementation of this function:
function get_value {
    # A temp file to store the sql query result
    TEMP_FILE="/tmp/get_value_tmp.txt"
    # Here's a block of code that executes a sql query
    # and stores the result in ${TEMP_FILE};
    # execute_sql_query is pseudocode for the program that sends the database request
    if execute_sql_query << EOF
# ...
# code to send the request to the database and
# write the result to the temp file ${TEMP_FILE}
# ...
EOF
    then
        # remove the title (column name) and \r characters from the temporary file,
        # assigning the value to the variable RET
        RET=`tail -n +2 $TEMP_FILE | tr -d '\r'`
        # the variable is returned as the result of this function
        echo $RET
    else
        # in case the sql query execution fails,
        # CR is a variable that gets the error code of execute_sql_query
        CR=$?
        # return the error code as the function result
        echo $CR
    fi
}
The sql query's result was exported to ${TEMP_FILE}; inside ${TEMP_FILE} we have:
val_str
?
so the variable RET gets the value ?.
The strange thing is:
in one environment the function incorrectly returns a 0, as if it interpreted $RET as $?;
in the other environment it correctly returns a ?.
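One mechanism worth ruling out (a hypothetical sketch, not necessarily the cause here): since the function echoes $RET without quotes and the value is a bare ?, the shell treats it as a glob pattern, so any single-character filename in the current working directory (for example a file named 0, which is invented for this illustration) silently replaces it:

```shell
# Hypothetical reproduction: the temp directory and the file named "0" are
# made up for illustration; the expansion rules are the same in ksh and bash.
workdir=$(mktemp -d)
cd "$workdir"
touch 0                    # any single-character filename in the cwd will do

RET='?'
unquoted=$(echo $RET)      # '?' undergoes pathname expansion and matches "0"
quoted=$(echo "$RET")      # quoting suppresses the expansion

echo "unquoted: $unquoted"
echo "quoted:   $quoted"

cd / && rm -rf "$workdir"
```

If this is what happens, the two environments would differ only in the contents of the working directory, and quoting the expansion (echo "$RET") would make both behave identically.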
the function get_value is in a ksh script, script_A.ksh;
script_A.ksh is called from another ksh script, script_B.ksh;
script_B.ksh is launched as a background job via nohup ksh script_B.ksh &.
Has anyone encountered the same problem, or does anyone have ideas on how to analyse this issue?
Thanks in advance.
I have a pretty long bash script that invokes quite a few external commands (git clone, wget, apt-get and others) that print a lot of stuff to the standard output.
I want the script to have a few verbosity options so it prints everything from the external commands, a summarized version of it (e.g. "Installing dependencies...", "Compiling...", etc.) or nothing at all. But how can I do it without cluttering up all my code?
I've thought about two possible solutions to this. One is to create a wrapper function that runs the external commands and prints what's needed to the standard output, depending on the options set at the start. This one seems easier to implement, but it means adding a lot of extra clutter to the code.
The other solution is to send all the output to a couple of external files and, when parsing the arguments at the start of the script, run tail -f on those files if verbosity is specified. This would be very easy to implement, but it seems pretty hacky to me, and I'm concerned about its performance impact.
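For reference, the second idea can be sketched in a few lines; the LOG_FILE path and the VERBOSE flag below are placeholders, not a finished design:

```shell
#!/bin/bash
# Sketch of the log-file approach; LOG_FILE and VERBOSE are illustrative.
LOG_FILE=$(mktemp)
VERBOSE=0                         # imagine --verbose set this to 1

if (( VERBOSE )); then
    tail -f "$LOG_FILE" &         # stream the log to the terminal live
    TAIL_PID=$!
fi

# every external command writes to the log instead of the terminal,
# e.g.: git clone ... >> "$LOG_FILE" 2>&1
echo "Cloning into 'repo'..." >> "$LOG_FILE"

captured=$(cat "$LOG_FILE")       # the full output remains available afterwards
(( VERBOSE )) && kill "$TAIL_PID"
rm -f "$LOG_FILE"
echo "$captured"
```

The performance cost is essentially one extra tail process and the disk writes; the bigger drawback is cleaning up the background tail reliably on every exit path.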
Which one is better? I'm also open to other solutions.
Improving on @Fred's idea a little bit more, we could build a small logging library this way:
declare -A _log_levels=([FATAL]=0 [ERROR]=1 [WARNING]=2 [INFO]=3 [DEBUG]=4 [VERBOSE]=5)
declare -i _log_level=3
set_log_level() {
    level="${1:-INFO}"
    _log_level="${_log_levels[$level]}"
}
log_execute() {
    level=${1:-INFO}
    if (( _log_level >= ${_log_levels[$level]} )); then
        "${@:2}"
    else
        "${@:2}" >/dev/null
    fi
}
log_fatal() { (( _log_level >= ${_log_levels[FATAL]} )) && echo "$(date) FATAL $*"; }
log_error() { (( _log_level >= ${_log_levels[ERROR]} )) && echo "$(date) ERROR $*"; }
log_warning() { (( _log_level >= ${_log_levels[WARNING]} )) && echo "$(date) WARNING $*"; }
log_info() { (( _log_level >= ${_log_levels[INFO]} )) && echo "$(date) INFO $*"; }
log_debug() { (( _log_level >= ${_log_levels[DEBUG]} )) && echo "$(date) DEBUG $*"; }
log_verbose() { (( _log_level >= ${_log_levels[VERBOSE]} )) && echo "$(date) VERBOSE $*"; }
# functions for logging command output
log_debug_file() { (( _log_level >= ${_log_levels[DEBUG]} )) && [[ -f $1 ]] && echo "=== command output start ===" && cat "$1" && echo "=== command output end ==="; }
log_verbose_file() { (( _log_level >= ${_log_levels[VERBOSE]} )) && [[ -f $1 ]] && echo "=== command output start ===" && cat "$1" && echo "=== command output end ==="; }
Let's say the above source is in a library file called logging_lib.sh, we could use it in a regular shell script this way:
#!/bin/bash
source /path/to/lib/logging_lib.sh
set_log_level DEBUG
log_info "Starting the script..."
# method 1 of controlling a command's output based on log level
log_execute INFO date
# method 2 of controlling the output based on log level
date &> date.out
log_debug_file date.out
log_debug "This is a debug statement"
...
log_error "This is an error"
...
log_warning "This is a warning"
...
log_fatal "This is a fatal error"
...
log_verbose "This is a verbose log!"
Will result in this output:
Fri Feb 24 06:48:18 UTC 2017 INFO Starting the script...
Fri Feb 24 06:48:18 UTC 2017
=== command output start ===
Fri Feb 24 06:48:18 UTC 2017
=== command output end ===
Fri Feb 24 06:48:18 UTC 2017 DEBUG This is a debug statement
Fri Feb 24 06:48:18 UTC 2017 ERROR This is an error
Fri Feb 24 06:48:18 UTC 2017 WARNING This is a warning
Fri Feb 24 06:48:18 UTC 2017 FATAL This is a fatal error
As we can see, log_verbose didn't produce any output since the log level is DEBUG, one level below VERBOSE. However, log_debug_file date.out did produce output, and so did log_execute INFO date, since the log level is set to DEBUG, which is >= INFO.
Using this as the base, we could also write command wrappers if we need even more fine tuning:
git_wrapper() {
    # run the git command, showing its output only when the log level is DEBUG or higher
    log_execute DEBUG git "$@"
}
With these in place, the script could be enhanced to take a --log-level argument that determines the log verbosity it should run with.
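A minimal sketch of that argument handling (the simulated command line and the extra options are made up for illustration; set_log_level is the library function from above):

```shell
#!/bin/bash
# Simulate a command line so the sketch is self-contained.
set -- --log-level DEBUG build --target all

LOG_LEVEL=INFO                        # default when --log-level is absent
while (( $# )); do
    case $1 in
        --log-level) LOG_LEVEL=$2; shift 2 ;;
        *)           shift ;;         # real options would be handled here
    esac
done

echo "selected level: $LOG_LEVEL"     # then: set_log_level "$LOG_LEVEL"
```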
Here is a complete implementation of logging for Bash, rich with multiple loggers:
https://github.com/codeforester/base/blob/master/lib/stdlib.sh
If anyone is curious about why some variables are named with a leading underscore in the code above, see this post:
Correct Bash and shell script variable capitalization
You already have what seems to be the cleanest idea in your question (a wrapper function), but you seem to think it would be messy. I would suggest you reconsider. It could look like the following (not necessarily a full-fledged solution, just to give you the basic idea) :
#!/bin/bash
# Argument 1 : Logging level for that command
# Arguments 2... : Command to execute
# Output suppressed if command level >= current logging level
log()
{
    if (( $1 >= logging_level )); then
        "${@:2}" >/dev/null 2>&1
    else
        "${@:2}"
    fi
}
logging_level=2
log 1 command1 and its args
log 2 command2 and its args
log 3 command3 and its args
You can arrange for any required redirection (with file descriptors if you want) to be handled in the wrapper function, so that the rest of the script remains readable and free from redirections and conditions depending on the selected logging level.
Solution 1.
Consider using additional file descriptors.
Redirect required file descriptors to STDOUT or /dev/null depending on selected verbosity.
Redirect output of every statement in your script to a file descriptor corresponding to its importance.
Have a look at https://unix.stackexchange.com/a/218355 .
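A rough sketch of that idea (the fd numbers 3 and 4 and the verbosity threshold are arbitrary choices for illustration):

```shell
#!/bin/bash
verbosity=1
out=$(
    exec 3>&1                         # fd 3: important messages, always visible
    if (( verbosity >= 2 )); then
        exec 4>&1                     # fd 4: chatty detail, shown when verbose...
    else
        exec 4>/dev/null              # ...or silently discarded otherwise
    fi
    echo "always shown"      >&3
    echo "only when verbose" >&4
)
echo "$out"
```

The body of the script then just writes each message to the fd matching its importance, with no per-statement conditionals.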
Solution 2.
Set $required_verbosity and pipe STDOUT of every statement in your script to a helper script with two parameters, something like this:
statement | logger actual_verbosity $required_verbosity
In the logger script, echo STDIN to STDOUT (or a log file, whatever) if $actual_verbosity >= $required_verbosity.
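The helper could look something like this; it is written here as a function rather than a separate script, and the name logger_filter is made up (avoid plain logger, which clashes with the system utility):

```shell
#!/bin/bash
# usage: some_command | logger_filter <actual_verbosity> <required_verbosity>
logger_filter() {
    local actual=$1 required=$2
    if (( actual >= required )); then
        cat                 # pass stdin through (stdout here; could be a log file)
    else
        cat >/dev/null      # swallow the output
    fi
}

shown=$(echo "hello" | logger_filter 3 2)    # 3 >= 2: the line passes through
hidden=$(echo "hello" | logger_filter 1 2)   # 1 <  2: the line is suppressed

echo "shown=$shown hidden=$hidden"
```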
I am trying to write a Perl script to do an SNMP get. It should work like the following command:
snmpget -v 3 -l authNoPriv -a MD5 -u V3User -A V3Password 10.0.1.203 sysUpTime.0
Returns:
SNMPv2-MIB::sysUpTime.0 = Timeticks: (492505406) 57 days, 0:04:14.06
But my Perl script returns the following:
ERROR: Received usmStatsUnknownUserNames.0 Report-PDU with value 1 during synchronization.
Last but not least, here is the Perl script:
use strict;
use warnings;
use Net::SNMP;
my $desc = 'sysUpTime.0';
my ($session, $error) = Net::SNMP->session(
-hostname => '10.0.1.202',
-version => 'snmpv3',
-username => 'V3User',
-authprotocol => 'md5',
-authpassword => 'V3Password'
);
if (!defined($session)) {
printf("ERROR: %s.\n", $error);
exit 1;
}
my $response = $session->get_request($desc);
my %pdesc = %{$response};
my $err = $session->error;
if ($err){
return 1;
}
print %pdesc;
exit 0;
I called the Perl script and snmpget on the same (Linux) machine. What could be causing this and how can I fix it?
As PrgmError points out, you're using a different IP address in your Perl script than in your snmpget command; I would double check that. The particular error you're getting indicates that your username is wrong; if the IP mismatch was simply a typo in your question, I would double check the username next.
A few other points about your Perl script:
Use die
You should use die instead of printf and exit since die will print the line number where it was invoked. This will make debugging your script much easier if there are multiple places it could fail:
die "Error: $error" if not defined $session;
will print something like
Error: foo bar at foo.pl line 17.
Also, using return outside of a subroutine doesn't make any sense (Perl dies with "Can't return outside a subroutine"); I think you meant to use
if ($err) {
exit 1;
}
but you should die with the specific error message you get instead of silently failing:
die $err if $err;
Fix arguments to get_request
Your invocation of the get_request method looks wrong. According to the docs, you should be calling it like this:
my $response = $session->get_request(-varbindlist => [ $oid ]);
Note that Net::SNMP only works with numeric OIDs, so you'll have to change sysUpTime.0 to 1.3.6.1.2.1.1.3.0.
Looking at your script, I noticed that the hostname value is 10.0.1.202,
but the snmpget command you're using has 10.0.1.203.
Wrong IP by any chance?
I have the following code running as CGI. It starts to run, returns an empty PDF file to the browser, and writes an error message to the error_log.
Does anybody have suggestions on how to solve this?
linux: Linux version 2.6.35.6-48.fc14.i686.PAE (...) (gcc version 4.5.1 20100924 (Red Hat 4.5.1-4) (GCC) ) #1 SMP Fri Oct 22 15:27:53 UTC 2010
wkhtmltopdf: wkhtmltopdf 0.10.0 rc2
perl: This is perl 5, version 12, subversion 2 (v5.12.2) built for i386-linux-thread-multi
Thank you in advance.
~Donavon
perl CODE:
#!/usr/bin/perl
#### takes string containing HTML and outputs PDF to browser to download
#### (otherwise would output to STDOUT)
print "Content-Disposition: attachment; filename='testPDF.pdf'\n";
print "Content-type: application/octet-stream\n\n";
my $htmlToPrint = "<html>a bunch of html</html>";
### open a filehandle and pipe it to wkhtmltopdf
### *the arguments "- -" tell wkhtmltopdf to get
### input from STDIN and send output to STDOUT*
open(my $makePDF, "|-", "/usr/local/bin/wkhtmltopdf", "-", "-") || die("$!");
print $makePDF $htmlToPrint; ## sends my HTML to wkhtmltopdf which streams immediately to STDOUT
error_log message:
Loading pages (1/6)
QPainter::begin(): Returned false============================] 100%
Error: Unable to write to destination
Here is my code that I got to work; hopefully some folks will find it useful.
Make sure the permissions are set up correctly on the server side. We have a sysadmin here who set the module up on the server, so I can't tell you exactly what they need to be, just that they can cause problems.
#!/usr/bin/perl
use warnings;
use strict;
use IPC::Open3;
use Symbol;
my $cmd = '/usr/local/bin/wkhtmltopdf - -';
my $err = gensym();
my $in = gensym();
my $out = gensym();
my $pdf = '';
my $pid = open3($in, $out, $err, $cmd) or die "could not run cmd : $cmd : $!\n";
my $string = '<html><head></head><body>Hello World!!!</body></html>';
print $in $string;
close($in);
while( <$out> ) {
$pdf .= $_
}
# for trouble shooting
while( <$err> ) {
# print "err-> $_<br />\n";
}
# for trouble shooting
waitpid($pid, 0 ) or die "$!\n";
my $retval = $?;
# print "retval-> $retval<br />\n";
print "Content-Disposition: attachment; filename='testPDF.pdf'\n";
print "Content-type: application/octet-stream\n\n";
print $pdf;
Looking into behavior in this question, I was surprised to see that perl lstat()s every path matching a glob pattern:
$ mkdir dir
$ touch dir/{foo,bar,baz}.txt
$ strace -e trace=lstat perl -E 'say $^V; <dir/b*>'
v5.10.1
lstat("dir/baz.txt", {st_mode=S_IFREG|0664, st_size=0, ...}) = 0
lstat("dir/bar.txt", {st_mode=S_IFREG|0664, st_size=0, ...}) = 0
I see the same behavior on my Linux system with glob(pattern) and <pattern>, and with later versions of perl.
My expectation was that the globbing would simply opendir/readdir under the hood, and that it would not need to inspect the actual pathnames it was searching.
What is the purpose of this lstat? Does it affect glob()'s return?
This strange behavior has been noticed before on PerlMonks. It turns out that glob calls lstat to support its GLOB_MARK flag, which has the effect that:
Each pathname that is a directory that matches the pattern has a slash appended.
To find out whether a directory entry refers to a subdir, you need to stat it. This is apparently done even when the flag is not given.
I was wondering the same thing: "What is the purpose of this lstat? Does it affect glob()'s return?"
Within bsd_glob.c's glob2(), I noticed a g_stat call inside an if branch that requires the GLOB_MARK flag to be set; just before it there is a call to g_lstat that is not guarded by any flag check. Both are inside an if branch taken when the end of the pattern is reached.
If I remove these two lines from the glob2 function in perl-5.12.4/ext/File-Glob/bsd_glob.c
- if (g_lstat(pathbuf, &sb, pglob))
- return(0);
the only perl test (make test) that fails is test 5 in ext/File-Glob/t/basic.t with:
not ok 5
# Failed test at ../ext/File-Glob/t/basic.t line 92.
# Structures begin differing at:
# $got->[0] = 'asdfasdf'
# $expected->[0] = Does not exist
Test 5 in t/basic.t is
# check nonexistent checks
# should return an empty list
# XXX since errfunc is NULL on win32, this test is not valid there
@a = bsd_glob("asdfasdf", 0);
SKIP: {
skip $^O, 1 if $^O eq 'MSWin32' || $^O eq 'NetWare';
is_deeply(\@a, []);
}
If I replace the 2 lines removed with:
+ if (!((pglob->gl_flags & GLOB_NOCHECK) ||
+ ((pglob->gl_flags & GLOB_NOMAGIC) &&
+ !(pglob->gl_flags & GLOB_MAGCHAR)))){
+ if (g_lstat(pathbuf, &sb, pglob))
+ return(0);
+ }
I don't see any failures from "make test" for perl-5.12.4 on linux x86_64 (RHEL6.3 2.6.32-358.11.1.el6.x86_64) and when using:
strace -fe trace=lstat perl -e 'use File::Glob q{:glob};
print scalar bsd_glob(q{/var/log/*},GLOB_NOCHECK)'
I no longer see the lstat calls for each file in the dir.
I don't mean to suggest that the perl tests for glob (File-Glob) are comprehensive (they are not), or that a change such as this would not break existing behaviour (that seems likely). As far as I can tell, the code with this (g_l)stat call existed in original-bsd/lib/libc/gen/glob.c 24 years ago, in 1990.
Also see:
Chapter 6, Benchmarking Perl, of "Mastering Perl" by brian d foy and Randal L. Schwartz, which contains a section comparing code that uses glob() with code that uses opendir().
"future globs (was "UNIX mindset...")" in comp.unix.wizards from Dick Dunn in 1991.
Usenet newsgroup mod.sources "'Globbing' library routine (glob)" from Guido van Rossum in July 1986 - I don't see a reference to "stat" in this code.