system() call in perl [duplicate] - linux

This question already has answers here:
in perl, how do we detect a segmentation fault in an external command
(2 answers)
Closed 8 years ago.
I want to make a system() call in a perl script and, because I want to redirect stdin and stdout, I think I need to pass a single string to system() and let the shell interpret the metacharacters. However, I do not seem to be able to correctly detect when the program called via system() segfaults.
The perl system() man page at http://perldoc.perl.org/functions/system.html cautions "When system's arguments are executed indirectly by the shell, results and return codes are subject to its quirks." Should I be concerned about this?
My code for testing the return value of system() is pretty much identical to the example given on the same man page (just above the warning I mention) but in retrospect that appears to be for calling system() with a LIST.
So, I think my core issue is: how do I detect how a program terminated when it was called in a shell from Perl's system()? Apologies if this is a repeat question, but I cannot find it addressed anywhere before. FWIW I'm running the script on a Fedora Linux distribution.
Many thanks.

I would suggest you have a look at IPC::Open2 and IPC::Open3
What you are trying to do is a bit too complicated for system, which is more or less geared towards running a command and then capturing its output.
IPC::Open2 allows you to open an exec pipe to a process, and attach your own filehandles to STDIN and STDOUT, meaning you can do bidirectional communication. (Open3 also allows STDERR).
Catching signals and errors on your attached process is a bit more complicated - the only thing you're fairly sure to get is a return code. With system, $? should be set automatically, but with IPC::Open[23] you may need to use waitpid to catch the return code.

Related

Is there a Linux command that writes its command line arguments to a file, without using a shell?

I have a batch processing system that can execute a number of commands sequentially. These commands are specified as a list of words, which is executed by Python's subprocess.call() function, without using a shell. For various reasons I do not want to change the processing system.
I would like to write something to a file, so a subsequent command can use it. Unfortunately, all the ways I can think of to write something to the disk involve some sort of redirection, which is a shell concept.
So is there a way to write a Linux command line that will take its argument and write it to a file, in a context where it is executed outside a shell?
Well, one could write a generalised parser and process manager that could handle this for you, but, luckily, one already comes with Linux. All you have to do is tell it what command to run, and it will handle the redirection for you.
So, if you were to modify your commands a bit, you could easily do this. Just concatenate the words together with spaces, quoting any word that may contain spaces or other special characters, and then use a word list such as:
["/bin/sh", "-c", "{your new string here} > /some/file"]
Et voila, stuff written to disk. :)
Looking at the docs for subprocess.call, I see it has extra parameters:
subprocess.call(args, *, stdin=None, stdout=None, stderr=None, shell=False)
If you pass stdout= a file you have opened, then the output of your command will go to that file, which is basically the same behaviour.
I don't see your exact usage case, but this is certainly a way to synthesise the command-line pipe behaviours, with little coding change.
Note that the docs also say that you should not use the built-in stdout=PIPE support with call(), depending on your exact requirements: it is important to read data from a pipe regularly, or the writer will stall once the buffer is full.

Get triggered by GPIO state change in bash

I have a GPIO pin, the value of which is represented in the sysfs node /sys/class/gpio/gpioXXXX/value, and I want to detect a change to the value of this GPIO pin. According to the sysfs documentation you should use poll(2) or select(2) for this.
However, both poll and select only seem to be available as system calls and not from bash. Is there some way to get triggered by a state change of the GPIO pin from a bash script?
My intention is to not have (semi-)busy waiting or userland polling. I would also like to simply do this from bash without having to dip into another language. I don't plan to stick with bash throughout the project, but I do want to use it for this very first version. Writing a simple C program to be called from bash for just this is a possibility, but before doing that, I would like to know if I'm not missing something.
Yes, you'll need a C or Python helper -- and you might think about abandoning bash for this project entirely.
See this gist for an implementation of such a helper (named "wfi", "watch-for-interrupt", modified from a Raspberry Pi StackExchange question's answer).
That said:
If you want to (semi-)efficiently have a shell script monitor for a GPIO signal change, you'll want to have a C helper that uses poll() and writes to stdout whenever a noteworthy change occurs. Given that, you can then write a shell loop akin to the following:
while IFS= read -r event; do
echo "Processing $event"
done < <(wfi /sys/class/gpio/gpioXXXX/value)
Using process substitution in this way ensures that the startup cost for your monitor-gpio-signal helper is paid only once. Note some caveats:
Particularly if anything inside the body of your loop calls an external command (rather than relying on shell builtins alone), this is still going to be much slower than using a program written in C, Go or even an otherwise-relatively-slow language such as Python.
If the shell script isn't ready to receive a write, that write may block for an indefinite amount of time. A tool such as pv may be useful to add a buffer to your pipeline:
done < <(wfi "name" | pv -q -B 1M)
...for instance, will establish a 1MB buffer.

user command line not using stdin and stdout

My program is invoked by another process, and communicates with it via stdin and stdout. I want to interact with my program via a command line interface, but obviously the usual method of just running it in a terminal doesn't work. I'm looking for the simplest possible way of achieving this.
My program is currently written in Lua, but might become C or something else. Running under GNU/Linux, but something simple that works under Windows as well would be great.

view output of already running processes in linux

I have a process that is running in the background (sh script) and I wonder if it is possible to view the output of this process without having to interrupt it.
The process was started by some application, otherwise I would have attached it to a screen session for later viewing. It might take an hour to finish and I want to make sure it's running normally with no errors.
There is already a program that uses ptrace(2) on Linux to do this, retty:
http://pasky.or.cz/dev/retty/
It works if your running program is already attached to a tty; I do not know if it will work if you run your program in the background.
At least it may give some good hints. :)
You can probably retrieve the exit code from the program using ptrace(2); otherwise, just attach to the process using gdb -p <pid>, and it will be printed when the program dies.
You can also manipulate file descriptors using gdb:
(gdb) p close(1)
$1 = 0
(gdb) p creat("/tmp/stdout", 0600)
$2 = 1
http://etbe.coker.com.au/2008/02/27/redirecting-output-from-a-running-process/
You could try to hook into the /proc/[pid]/fd/[012] triple, but likely that won't work.
Next idea that pops to my mind is strace -p [pid], but you'll get "prettified" output. A possible solution is to strace yourself, by writing a tiny program using ptrace(2) to hook into write(2) and writing the data somewhere. It will work, but is not done in just a few seconds, especially if you're not used to C programming.
Unfortunately I can't think of a program that does precisely what you want, which is why I give you a hint of how to write it yourself. Good luck!

invoking less application from GNU readline

Bit of a support question. Apologies for that.
I have an application linked with GNU readline. The application can invoke shell commands (similar to invoking tclsh using readline wrapper). When I try to invoke the Linux less command, I get the following error:
Suspend (tty output)
I'm not an expert around issues of terminals. I've tried to google it but found no answer. Does any one know how to solve this issue?
Thanks.
You probably need to investigate the functions rl_prep_terminal() and rl_deprep_terminal() documented in the readline manual:
Function: void rl_prep_terminal(int meta_flag)
Modify the terminal settings for Readline's use, so readline() can read a single character at a time from the keyboard. The meta_flag argument should be non-zero if Readline should read eight-bit input.
Function: void rl_deprep_terminal(void)
Undo the effects of rl_prep_terminal(), leaving the terminal in the state in which it was before the most recent call to rl_prep_terminal().
The less program is likely to get confused if the terminal is already in the special mode used by the Readline library and it tries to tweak the terminal into an equivalent mode. This is a common problem for programs that work with the curses library, or other similar libraries that adjust the terminal status and run other programs that also do that.
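The shape of the fix is then to undo readline's terminal tweaks before spawning less, and reinstate them afterwards. A minimal, untested outline (your application's file-paging routine will differ; link with -lreadline):

```c
/* Hand the terminal back to normal modes before running less, then
 * return it to readline's character-at-a-time mode afterwards. */
#include <stdio.h>
#include <stdlib.h>
#include <readline/readline.h>

void page_file(const char *path)
{
    char cmd[1024];
    snprintf(cmd, sizeof cmd, "less %s", path);

    rl_deprep_terminal();   /* restore the terminal for less */
    system(cmd);
    rl_prep_terminal(1);    /* back to readline's special mode */
}
```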
Whilst counterintuitive, it may be stopped waiting for input (some OSs and shells give Stopped/Suspended (tty output) when you might expect it to refer to (tty input)). This would fit the usual behaviour of less when it stops at the end of (what it thinks is) the screen length.
Can you use cat or head instead? Or feed less some input? Or look at the less man/info pages to see what options to less might suit your requirement (e.g. w, z, F)?
Your readline application is making itself the controlling app for your tty.
When you invoke less from inside the application, it wants to be in control of the tty as well.
If you are trying to invoke less in your application to display a file for the user, you want to put the newly fork'd process into its own process group before calling exec. You can do this with setsid(). Then, when less calls tcsetpgrp(), it will not get thrown into the background with SIGTTOU.
When less finishes, you'll want to restore the foreground process group with tcsetpgrp(), as well.
