Show output of a Perl script in Linux

I am using a web interface written in PHP that runs a Perl script in a Linux environment. It passes parameters (username, password, ...) to the script. I want to view the output of the script without interfering with the process. Note that the script in turn also passes data to, and executes, another program.
The script contains print statements like:
if ($@) {
    print "Error :" . $@ . "\n";
    print "skip...\n";
}
else {
}
I just want to view these results from the shell; it would also work if I could save them into a text file.
Thanks a lot!

Run the Perl program from the shell to see the output from print.
$ perl theprogram
⋮
Error : blah blah
skip...
⋮
Redirect STDOUT to save it into a file.
$ perl theprogram > theprogram.log
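If the script also writes errors to STDERR, redirect that stream into the same file as well; this is standard shell syntax, not specific to Perl:
$ perl theprogram > theprogram.log 2>&1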
These are the very basics of shell usage; you should already know all this if you are a programmer. If not, read a Unix book for beginners.

Related

Pass an indicator from Bash back to Perl over SSH via STDIN

We have a Linux server which can run a diagnostic script, diag.pl, which coordinates reporting across other servers.
diag.pl iterates over the child servers and, for each of them, SSHes in and runs a bash script, which passes information back:
my $cmd = sprintf("ssh %s sudo /usr/lib/support/report.sh -e %s | uudecode -o \"%s-outfile.tgz\" 2>&1 |", $server, $specialparam, $servername);
The line of code in report.sh that sends the data back is:
uuencode --base64 ${REPORT}.tar.gz /dev/stdout
I would like to update report.sh to send back an additional line of information, something like:
echo "special-file-found=${SFF}" > /tmp/sff.cfg
uuencode --base64 /tmp/sff.cfg > /dev/stdout
Once the special file has been found, the Perl script will update so that it no longer sends the specialparam back to subsequent report.sh calls.
Is there a good way to send that input so that it will be easy for Perl to catch it?
What I have tried
Setting a user.comment xattr on the tar.gz using setfattr, but the comment does not survive the uuencoding.
Currently thinking that my best bet is to use the pseudocode above, creating a new file to encode and send along, and updating the Perl script to check it with each new transmission until it finds the special file.
I take it that the objective is to modify a shell script, which returns an encoded file to its caller, so that it sends yet more information, specifically a string to be used as a flag in the caller.
It is not clear how the shell script is run from the Perl script, but there are ways to do this so that the caller gets back separate "lines" that are printed, either as they are emitted or altogether after the run completes.
Then you can just add the needed extra print to STDOUT to the shell script, and in the caller check each line of shell output to see whether it conforms to some "protocol"; for example, whether it is, or starts with, the special-file-found string. Then you can set flags for further calls, write a control file for following runs, etc. Otherwise, the line is the encoded file.
A made-up basic example using pipe-open:
use warnings;
use strict;
use feature 'say';

my @cmd = qw(ls -l ./);

my $file_found = quotemeta 'special-file-found';

my ($flag, $binfile);

my $pid = open(my $out, '-|', @cmd) // die "Can't open @cmd: $!";

while (<$out>) {
    chomp;
    if (/^$file_found/) {
        $flag = 1;
    }
    else {
        $binfile = $_;
        # whatever else need be done, or perhaps last;
    }
}
close $out;
This example runs the command ls -l ./ but instead of it you can run any executable, like @cmd = ('report.sh', 'arg1', 'arg2', ...).
Another way is to use backticks (qx) and assign their return to an array, in which case each element receives a line of output.
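For example (a sketch, reusing the hypothetical report.sh invocation):
my @lines = qx(./report.sh arg1 arg2);   # in list context, one output line per element
chomp @lines;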
Yet another, better, way is to use a module which manages external commands. For example, from simple to more capable: IPC::System::Simple, Capture::Tiny, IPC::Run3, IPC::Run.
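For instance, here is a minimal sketch using Capture::Tiny (the command is a placeholder for the real report.sh invocation):
use strict;
use warnings;
use Capture::Tiny qw(capture);

my @cmd = ('./report.sh', '-e', 'param');    # hypothetical invocation

# capture returns the child's STDOUT, STDERR, and the value system returned
my ($stdout, $stderr, $exit) = capture { system(@cmd) };

for my $line (split /\n/, $stdout) {
    if ($line =~ /^special-file-found/) {
        # set the flag for subsequent calls
    }
    else {
        # otherwise the line belongs to the encoded file
    }
}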

Why is the command in /proc/XXX/cmdline truncated but not the arguments

I have a small bash script
#!/bin/bash
echo $(cat /proc/$PPID/cmdline | strings -1)
I call this script from a perl script which is run through nginx.
my $output_string = `/tmp/my_bash_script.sh`;
print $output_string;
When I load this in a browser, the result is something like:
/mnt/my_working_d -d /etc/my_httpd -f /etc/my_httpd/conf/httpd.conf
The location of the Perl script is indeed somewhere in /mnt/my_working_directory/..., but why is this truncated, and is there anything I can do to log the whole command? I don't think the cmdline limit of 4k characters (?), which seems hardcoded in the kernel, applies here.

Very weird redirection behavior

I execute a program which prints some text. I redirect the text to a file using >, but I cannot see any text in the file. For example, if the program prints "Hello" I can see the result on the shell:
$ ./a.out arg
Hello
But after I redirect, I get no "Hello" message on the shell, nor in the redirected file.
$ ./a.out arg > log.txt
(nothing printed)
$ cat log.txt
(nothing printed)
I have no idea what's going on. Does anyone know what's happening here, or has anyone run into a similar situation?
OS: Ubuntu 14.10, x86_64 arch; the program is really chromium-browser rather than ./a.out. I edited its JavaScript engine (V8, which is included in chromium-browser) and tried to print some logs with lots of text. I tried to save them by redirection but it doesn't work.
Of course I checked whether the > symbol works or not. It works as expected with other programs like echo, ls, and so on:
$ echo hello > hello.txt
$ cat hello.txt
hello
How can the messages just go away? I think they should be printed to stdout (or stderr), or to the file. But they just go away when I use the > symbol.
It is somewhat common for programs to check isatty(stdout) and display different output based on whether stdout is connected to a terminal or not. For example, ls will display file names in a tabular format if output is to a terminal, but display them strictly one per line otherwise. It does this to make it easy to parse its output when it's part of a pipeline.
Not having looked at Chrome's source code myself, this is speculation, but it's possible Chrome is performing this sort of check and changing its output based on where stdout is redirected to.
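You can reproduce this kind of check yourself. A minimal demonstration in Perl, whose -t filetest wraps isatty (an illustration, not Chrome's actual logic):
#!/usr/bin/perl
if (-t STDOUT) {
    print "stdout is a terminal\n";
}
else {
    print "stdout is redirected\n";
}
Run it plain and then with > out.txt; the second message only ever shows up inside the file.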
Try to use 2>, which should redirect stderr to the file.
Or you can also try &>, which should redirect everything (stderr and stdout).
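For example, with the question's placeholder program:
$ ./a.out arg 2> log.txt     # stderr only goes to the file
$ ./a.out arg &> log.txt     # both stdout and stderr go to the file (bash)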
See more at http://tldp.org/HOWTO/Bash-Prog-Intro-HOWTO-3.html

Create bash script that takes input

I want to create a bash script that is similar to a programming interpreter like mongo, node, redis-cli, mysql, etc.
I want to be able to run a command like test and have it behave like the examples above.
thomas@workstation:~$ test
>
How do I make a command that behaves like this? What is this called?
I want to be able to take the content and turn it into a variable.
thomas@workstation:~$ test
> hello world
hello world
thomas@workstation:~$
I only want to take one "entry": after enter is pressed once, I want to be able to process the string "hello world" in the code, e.g. echo it.
What is this called? How do I make one using BASH?
I think "read" is what you are looking for, isn't it?
here is a link with some examples: http://bash.cyberciti.biz/guide/Getting_User_Input_Via_Keyboard
so you can do stuff like this:
read -p "Enter your name : " name
echo "Hi, $name. Let us be friends!"
I'm sorry this doesn't answer you directly, but it might be worth it to look into using a more fully capable programming language such as Python, Ruby, or Perl for a task like this. In Python you can use the raw_input() function (renamed input() in Python 3).
user_command = raw_input('> ')
would yield your prompt.
First, do not name your script test: test is a standard shell builtin, so that name generates too much confusion. Whatever you call it, you can do many things:
#!/bin/sh
printf '> '
read line
echo "$line"
If your shell supports it:
#!/bin/sh
read -p '> ' line
echo "$line"
or
#!/bin/sh
printf '> '
sed 1q # This will print the input. To store it in a variable: a=$( sed 1q )
[spatel@tux ~]$ read a
Hello World!!!!!
[spatel@tux ~]$ echo $a
Hello World!!!!!
A key word that might be useful here is REPL (read-eval-print loop), used primarily for programming languages or coding environments. Your browser's console is a great example of a REPL.
Node allows you to use its REPL to build interactive apps.
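To stay in bash, a minimal REPL-style loop can be built from read in a loop (a sketch; the echo stands in for whatever evaluation you need):
#!/bin/bash
# Prompt, read one line, "evaluate" it (here: just echo), repeat until EOF (Ctrl-D).
while read -r -p '> ' line; do
    echo "$line"
done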

Bash script execution with and without shebang in Linux and BSD

How, and by whom, is it determined what executes when a Bash-like script is executed as a binary without a shebang?
I guess that running a normal script with a shebang is handled by the binfmt_script Linux module, which checks for the shebang, parses the command line, and runs the designated script interpreter.
But what happens when someone runs a script without a shebang? I've tested the direct execv approach and found out that there's no kernel magic in there, i.e. with a file like this:
$ cat target-script
echo Hello
echo "bash: $BASH_VERSION"
echo "zsh: $ZSH_VERSION"
Running a compiled C program that does just an execv call yields:
$ cat test-runner.c
#include <stdio.h>
#include <unistd.h>

int main(void) {
    char *const args[] = { "./target-script", NULL };
    if (execv("./target-script", args) == -1)
        perror("./target-script");
}
$ ./test-runner
./target-script: Exec format error
However, if I do the same thing from another shell script, it runs the target script using the same shell interpreter as the original one:
$ cat test-runner.bash
#!/bin/bash
./target-script
$ ./test-runner.bash
Hello
bash: 4.1.0(1)-release
zsh:
If I do the same trick with other shells (for example, Debian's default sh - /bin/dash), it also works:
$ cat test-runner.dash
#!/bin/dash
./target-script
$ ./test-runner.dash
Hello
bash:
zsh:
Mysteriously, it doesn't quite work as expected with zsh, which doesn't follow the general scheme. It looks like zsh executed /bin/sh on such files after all:
greycat@burrow-debian ~/z/test-runner $ cat test-runner.zsh
#!/bin/zsh
echo ZSH_VERSION=$ZSH_VERSION
./target-script
greycat@burrow-debian ~/z/test-runner $ ./test-runner.zsh
ZSH_VERSION=4.3.10
Hello
bash:
zsh:
Note that ZSH_VERSION in the parent script worked, while ZSH_VERSION in the child didn't!
How does a shell (Bash, dash) determine what gets executed when there's no shebang? I've tried to dig up that place in the Bash/dash sources, but, alas, it looks like I'm kind of lost in there. Can anyone shed some light on the magic that determines whether a target file without a shebang should be executed as a script or as a binary in Bash/dash? Or maybe there is some sort of interaction with the kernel/libc, in which case I'd welcome explanations of how this works in the Linux and FreeBSD kernels/libcs.
Since this happens in dash and dash is simpler, I looked there first.
It seems like exec.c is the place to look, and the relevant functions are tryexec, which is called from shellexec, which is called whenever the shell thinks a command needs to be executed. A (simplified) version of the tryexec function is as follows:
STATIC void
tryexec(char *cmd, char **argv, char **envp)
{
    char *const path_bshell = _PATH_BSHELL;

repeat:
    execve(cmd, argv, envp);
    if (cmd != path_bshell && errno == ENOEXEC) {
        *argv-- = cmd;
        *argv = cmd = path_bshell;
        goto repeat;
    }
}
So, whenever ENOEXEC occurs, it simply replaces the command to execute with the path to the shell (_PATH_BSHELL, which defaults to "/bin/sh"). There's really no magic here.
I find that FreeBSD exhibits identical behavior in bash and in its own sh.
The way bash handles this is similar but much more complicated. If you want to look into it further I recommend reading bash's execute_cmd.c, looking specifically at execute_shell_script and then shell_execve. The comments are quite descriptive.
(Looks like Sorpigal has covered it but I've already typed this up and it may be of interest.)
According to Section 3.16 of the Unix FAQ, the shell first looks at the magic number (first two bytes of the file). Some numbers indicate a binary executable; #! indicates that the rest of the line should be interpreted as a shebang. Otherwise, the shell tries to run it as a shell script.
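You can inspect those first bytes yourself with od; for example, on a typical Linux system (the script file here is the question's target-script):
$ head -c 4 /bin/ls | od -c
0000000 177   E   L   F
0000004
$ head -c 2 ./target-script | od -c
0000000   e   c
0000002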
Additionally, it seems that csh looks at the first byte, and if it's #, it'll try to run it as a csh script.
