Reading /proc/PID/maps of short-lived process - linux

I have a binary that exits/segfaults shortly after executing. I am trying to read /proc/PID/maps of this binary before it exits but with no luck.
I've tried cat /proc/$(./binary [input] & echo $!)/maps, which should resolve to /proc/pidofprocess/maps, but I always get No such file or directory.
My goal is to see where things get loaded to on subsequent runs.

Your solution cat /proc/$(<pipeline>)/maps won't call cat until the whole <pipeline> has finished (even if you use & within it), so you will never get your maps.
On the other hand,
<pipeline> & cat /proc/${!}/maps
will return immediately. Of course, as your binary exits almost immediately, cat may still run too late to capture /proc/${!}/maps.
But you could try:
while true; do ./binary [input] & cat /proc/${!}/maps ; done
This restarts the race between your binary and cat all day long, and sometimes cat wins (it does, in my case, about 1 time in 30 with ls <nonexisting file> in place of ./binary).
This prints a lot of garbage to your terminal, but you can collect the successful maps by redirecting cat's output:
while true; do ./binary [input] & cat /proc/${!}/maps >> mymaps ; done
Elegant? No. Effective? I hope so!
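If you want the loop to stop as soon as one capture succeeds, here is a small variation on the same race, as a sketch (it keeps the question's [input] placeholder, and mymaps is just an output file name):

while true; do
    ./binary [input] &                                  # start the short-lived process
    if cat "/proc/$!/maps" > mymaps 2>/dev/null && [ -s mymaps ]; then
        break                                           # cat won the race; maps captured in mymaps
    fi
    wait                                                # reap the background job before retrying
done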

Related

Bash output happening after prompt, not before, meaning I have to manually press enter

I am having a problem getting bash to do exactly what I want, it's not a major issue, but annoying.
1.) I run a piece of third-party software that produces some output on stderr. Some of it is useful, some of it is stuff I regularly don't care about, and I don't want that dumped to the screen; however, I do want the useful parts of the stderr dumped to the screen. I figured the best way to achieve this was to pass stderr to a function, then use conditions in that function to either show the stderr or not.
2.) This works fine. However, the solution I have implemented dumps out my errors at the right time but then returns a bash prompt. I want to summarise the status of the errors at the end of the function, but echoing there prints the text after the prompt, meaning that I have to press Enter to get back to a clean prompt. It will become clear with the example below.
My error stream generator:
./TestErrorStream.sh
#!/bin/bash
echo "test1" >&2
My function to process this:
./Function.sh
#!/bin/bash
function ProcessErrors()
{
    while read data; do
        echo Line was:"$data"
    done
    sleep 5 # This is used simply to simulate the processing work I'm doing on the errors.
    echo "Completed"
}
I source the Function.sh file to make ProcessErrors() available, then I run:
2> >(ProcessErrors) ./TestErrorStream.sh
I expect (and want) to get:
user@user-desktop:~/path$ 2> >(ProcessErrors) ./TestErrorStream.sh
Line was:test1
Completed
user@user-desktop:~/path$
However, what I actually get is:
user@user-desktop:~/path$ 2> >(ProcessErrors) ./TestErrorStream.sh
Line was:test1
user@user-desktop:~/path$ Completed
And no clean prompt. Of course the prompt is there, but "Completed" is printed after it; I want it printed before, and then a clean prompt to appear.
NOTE: This is a minimal working example, and it's contrived. While other solutions to my error-stream problem are welcome, I also want to understand how to make bash run this script the way I want it to.
Thanks for your help
Joey
Your problem is that the while loop stays attached to stdin until the program exits.
stdin is released at the end of TestErrorStream.sh, so your prompt comes back almost immediately, while the function still has work left to do.
I suggest you wrap the command in a script so you can control how long to wait before your prompt comes back (I suggest one second more than the time you expect the function to need to finish processing the remaining lines).
I managed to do this as follows:
./Functions.sh
#!/bin/bash
function ProcessErrors()
{
    while read data; do
        echo Line was:"$data"
    done
    sleep 5 # simulate required time to process end of function (after TestErrorStream.sh is over and stdin is released)
    echo "Completed"
}
./TestErrorStream.sh
#!/bin/bash
echo "first"
echo "firsterr" >&2
sleep 20 # any number here
./WrapTestErrorStream.sh
#!/bin/bash
source ./Functions.sh
2> >(ProcessErrors) ./TestErrorStream.sh
sleep 6 # <= this one is important
With the above you'll get a nice "Completed" before your prompt after 26 seconds of processing. (It works fine with or without the additional "time" command.)
user@host:~/path$ time ./WrapTestErrorStream.sh
first
Line was:firsterr
Completed
real    0m26.014s
user    0m0.000s
sys     0m0.000s
user@host:~/path$
Note: the process substitution >(ProcessErrors) runs asynchronously, and neither ./TestErrorStream.sh nor the wrapper waits for it, so it can still be working after both scripts have finished. That's why we need that final "sleep 6".
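If your bash is recent enough to set $! for process substitutions (4.4 or later, if I remember the release notes correctly), you may be able to wait for ProcessErrors explicitly instead of sleeping; a sketch, not part of the answer above:

#!/bin/bash
source ./Functions.sh

2> >(ProcessErrors) ./TestErrorStream.sh
wait "$!"   # waits for the >(ProcessErrors) subprocess, where the shell supports it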
#!/bin/bash
function ProcessErrors {
    while read data; do
        echo Line was:"$data"
    done
    sleep 5
    echo "Completed"
}
# Open subprocess
exec 60> >(ProcessErrors)
P=$!
# Do the work
2>&60 ./TestErrorStream.sh
# Close connection or else subprocess would keep on reading
exec 60>&-
# Wait for process to exit (wait "$P" doesn't work). There are many ways
# to do this too like checking `/proc`. I prefer the `kill` method as
# it's more explicit. We'd never know if /proc updates itself quickly
# among all systems. And using an external tool is also a big NO.
while kill -s 0 "$P" &>/dev/null; do
    sleep 1s
done
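Another way to avoid both the fixed sleep and the kill polling is to skip process substitution entirely and use a named pipe, so the reader is an ordinary background job the shell can wait for. A sketch, assuming the same Functions.sh and TestErrorStream.sh as above; fifo and reader are just illustrative names:

#!/bin/bash
source ./Functions.sh

fifo=$(mktemp -u)                 # pathname for a temporary FIFO
mkfifo "$fifo"

ProcessErrors < "$fifo" &         # the reader runs as a normal background job
reader=$!

./TestErrorStream.sh 2> "$fifo"   # stderr flows into the FIFO

wait "$reader"                    # returns only after "Completed" has been printed
rm -f "$fifo"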
Off-topic side note: I'd love to see how the posturing bash veterans/authors would handle this; perhaps they already have, long ago.

Bash run a function in background

Have a relatively simple question here. I need to run a function in the background in bash. Normally I would do it just like so:
FUNCTION &
but things are a bit more complicated than that. I have the following line that runs the main function for each record in a text database. I can't really edit this code all that much without vastly changing the rest of the project, but I'm still open to new ideas.
cat databases/$WAN | grep -v \# | while read LINE; do MAIN; done
I want to spawn a new terminal in the background for each record to get a sort of parallel processing, making things go much faster; MAIN takes a minute to process each record. This, however, does not work:
cat databases/$WAN | grep -v \# | while read LINE; do MAIN &; done
Any suggestions?
* UPDATE *
Thanks for all the responses. Let me see if I can answer some of those questions.
gniourf_gniourf - Yes, I know using cat like this is wrong. This was early, critical code, so I have not updated it yet. I now read into the while loop for most things I do; I will fix it eventually. You may be right about the syntax. When I break it up like this, things seem to work now:
cat databases/$WAN | grep -v \# | while read LINE
do
MAIN & > /dev/null 2>&1
done
So that fixes the background problem. I wonder what was messed up in my single-line syntax. Thanks.
chepner - I don't believe LINE is a variable. I could be wrong, though; some things about Bash still confuse me. Maybe it is, and it's the variable the entire record from the database gets stored in prior to processing.
Bruce K - Waiting is exactly what I was trying to avoid. If I let it run in the same terminal one at a time, it will slowly process each record in order. If I push each record to a separate terminal for processing, all records will be processed simultaneously (at least in our eyes). The additional overhead is intentional, in order to speed up the loop over the database.
Radix - Yes you're right. I'll read up on that. Thanks for the link.
This worked for me:
$ function testt(){ echo "lineee is <$lineee>";}
$ grep 5432 /etc/services|while read lineee;do testt&done
lineee is <postgres 5432/udp # POSTGRES>
lineee is <postgres 5432/tcp # POSTGRES>
If, for some reason, your MAIN function is not seeing a LINE variable, you can try:
"export" the LINE variable beforehand:
$ export LINE
$ # do your thing
Or, pass the line read as an argument to the function:
$ function testt(){ LINE="$1"; echo "LINE is <$LINE>";}
$ grep 5432 /etc/services|while read LINE;do testt "$LINE"&done
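For reference, the syntax error in the original one-liner is the "&;": & already terminates a command, so the extra ; is what bash rejects. A sketch of that loop with the semicolon dropped (and without the unnecessary cat); LINE is inherited by each backgrounded MAIN because the loop runs in the current shell:

while read -r LINE; do
    MAIN &                                  # each MAIN sees the current $LINE
done < <(grep -v '#' "databases/$WAN")
wait                                        # wait for all the backgrounded MAIN jobs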

How to do multiple if, else statements in this case in bash script?

Right now I am running a few programs with bash scripts (with Cygwin).
Basically, what I am doing is: after the program starts, a loop runs that checks that the program is still running.
I was doing:
while true
do
    if [ "$(ps -W | grep -w name | gawk '{print $8,$9}' | gawk -F \\ '{print $4}')" == 'program' ]; then
        sleep 1
    else
        "start program" (whatever is needed here)
    fi
done
But I started to realize that having such a script running multiple times just wastes system resources.
I tried doing an if/then/elif, but it never gets past the first if.
I need it to go "alright, the first if is negative, try the next, go to the end, start over".
Here is a copy of my script. I forgot to say I am using Cygwin, but that doesn't really change anything, since it still uses normal bash scripting, just maybe different paths to start files: http://pastebin.com/s8ZdPQMn (and yes, the h. is not there; I just can't seem to edit the pastebin).
My overall plan is check that first SRPro is running, check the next, etc, only triggering if it's detected one is not running.
EDIT: I solved it. I'm not exactly sure why, but in my original one-file-per-program version, having gawk print $4 at the end gave me what I wanted; for some reason, when doing it this way, it became $5. Changing $4 to $5 made the script work.
EDIT: One really strange issue, though: it will work for minutes on end, then all of a sudden get confused and start 7 copies of one program or something. It is also random which one it starts.
You might find the wait command (try help wait from a bash prompt) useful. It's unclear exactly what you want, but as an example, here's a basic respawn function:
$ respawn () {
>     while true
>     do
>         "${@}" &
>         wait ${!}
>         echo "respawning ..."
>     done
> }
$ respawn some_program arg1 arg2 etc
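If the goal is to watch several programs from a single loop rather than running one script per program, here is a sketch along the lines of the question's own ps -W check; the program names and start commands below are placeholders for the ones in the pastebin:

programs=("SRPro" "OtherProgram")                                      # hypothetical process names
starters=("/cygdrive/c/apps/SRPro.exe" "/cygdrive/c/apps/Other.exe")   # hypothetical start commands

while true; do
    for i in "${!programs[@]}"; do
        if ! ps -W | grep -qw "${programs[$i]}"; then
            "${starters[$i]}" &                                        # restart whichever one is missing
        fi
    done
    sleep 1
done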

Shell script to call external program which has user-interface

I have an external program, say a.out, which while running asks for an input parameter, i.e.,
./a.out
Please select either 1 or 2:
1. this will do something
2. this will do something else
Then when I enter '1', it will do its job. I don't have the source code, only the binary, so I can't change it.
I want to write a shell script that runs a.out and also feeds the '1' to it.
I tried many things including silly things like:
./a.out 1
./a.out << 1
./a.out < 1
etc.
etc., but they don't work.
Could you please let me know if there is any way to write such a shell script?
Thanks,
dbm368
I think you just need a pipe. For example:
echo 1 | ./a.out
In general terms, a pipe takes whatever the program on the left writes to stdout and redirects it to the stdin of the program on the right.
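The ./a.out << 1 attempt was close to a here-document; the correct form needs a delimiter word, with the input on the lines in between. A quick sketch (the second form is equivalent, and handy if a.out asks several questions in a row, each answer on its own line):

# here-document: everything up to the EOF marker becomes a.out's stdin
./a.out <<EOF
1
EOF

# equivalent with printf
printf '1\n' | ./a.out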

redirecting stdin _and_ stdout to a pipe

I would like to run a program "A", have its output go to the input of another program "B", and also have stdin go to the input of "B". If program "A" closes, I'd like "B" to continue running.
I can redirect A output to B input easily:
./a | ./b
And I can combine stderr into the output if I'd like:
./a 2>&1 | ./b
But I can't figure out how to combine stdin into the output. My guess would be:
./a 0>&1 | ./b
but it doesn't work.
Here's a test that doesn't require us to write any test programs:
$ echo ls 0>&1 | /bin/sh -i
$ a b info.txt
$
/bin/sh: Cannot set tty process group (No such process)
If possible, I'd like to do this using only bash redirection on the command line (I don't want to write a C program to fork off child processes and do something complicated every time I want to do some redirection of stdin to a pipe).
This cannot be done without writing an auxiliary program.
In general, stdin could be a read-only file descriptor (heck, it might refer to a read-only file). So you cannot "insert" anything into it.
You will need to write a "helper" program that monitors two file descriptors (say, 0 and 3) in order to read from both and "merge" them. A simple select or poll loop would be sufficient, and you could write it in most scripting languages, but not the shell, I don't think.
Then you can use shell redirection to feed your program's output to descriptor 3 of the "helper".
Since what you want is basically the opposite of "tee", I might call it "eet"...
[edit]
If only you could launch "cat" in the background...
But that will fail because background processes with a controlling terminal cannot read from stdin. So if you could just detach "cat" from its controlling terminal and run it in the background...
On Linux, "setsid cat" should do it, roughly. But (a) I could not get it to work very well and (b) I really do not have time for this today and (c) it is non-standard anyway.
I would just write the helper program.
[edit 2]
OK, this seems to work:
{ seq 5 ; sleep 2 ; seq 5 ; } | /bin/bash -c 'set -m ; setsid cat ; echo HELLO'
The set -m thing forces bash to enable job control, which apparently is needed to prevent the shell from redirecting stdin from /dev/null.
Here, the echo HELLO represents your "program A". The seq commands (with the sleep in the middle) are just to provide some input. And yes, you can pipe this whole thing to process B.
About as ugly and non-portable a solution as you could ask for...
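For instance (a sketch, with cat -n standing in for process B, so both the piped input and HELLO come out numbered):

{ seq 5 ; sleep 2 ; seq 5 ; } | /bin/bash -c 'set -m ; setsid cat ; echo HELLO' | cat -n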
A pipe has two ends. One is for writing, and whatever gets written appears at the other end, which is for reading.
It's a pipe, not a T or Y junction.
I don't think your scenario is possible. Having "stdin going to the input of" anything doesn't make sense.
If I understand your requirements correctly, you want this set up (ASCII art to the fore):
o----+----->| A |----+---->| B |---->o
     |               ^
     |               |
     +---------------+
with the additional constraint that if process A closes up shop, process B should be able to continue with the input stream going to B.
This is a non-standard setup, as you realize, and can only be achieved by using an auxiliary program to drive the input to A and B. You end up with some interesting synchronization issues, but it will all work remarkably well as long as your messages are short enough.
The plumbing necessary to achieve this is notable - you'll need two pipes, one for the input to A and the other for the input to B, and the output of A will be connected to the input of B as well.
o---->| C |---------->| A |----+---->| B |---->o
        |                      ^
        |                      |
        +----------------------+
Note that C will be writing the data twice, once to A and once to B. Note, too, that the pipe from A to B is the same pipe as the pipe from C to B.
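In bash, the role of C can be approximated with tee plus a process substitution, since (in my experience) the process substitution inherits the pipeline's stdout, so A's output lands in the same pipe to B. A sketch, with the caveat that plain tee may exit once A goes away; GNU tee's -p option, where available, is meant to keep it running despite write errors on a pipe:

# stdin is copied by tee both to ./a (via the process substitution) and down
# the pipe to ./b; ./a's own stdout also ends up in the pipe to ./b.
tee -p >(./a) | ./b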
To make the given test case work, you have to while ... read from the controlling terminal device /dev/tty inside a sh -c '...' construct.
Note the use of eval (could it be avoided here?) and that multi-line commands at the input> prompt will fail.
echo 'ls; export var=myval' | (
    stdin="$(</dev/stdin)"
    /bin/sh -i -c '
        eval "$1";
        while IFS="" read -e -r -p "input> " line; do
            history -s "${line}"
            eval "${line}";
        done </dev/tty
    ' argv0 "${stdin}"
)
input> echo $var
For a similar problem and the use of named pipes see here:
BASH: Best architecture for reading from two input streams
This can't be done exactly as shown, but to reproduce your example you can make use of cat's ability to join files together:
cat <(echo ls) - | /bin/sh
(You can do -i, but then you'll have to have another process kill the /bin/sh, as your attempts to Ctrl-C and Ctrl-D out will fail.)
This assumes that you want to pass in your piped input and then accept from stdin. You can also make it so that it does something after stdin is done, or on both sides -- but it won't merge input character-by-character or line-by-line.
This seems to do what you want:
$ ( ./a <&-; cat ) | ./b
(It's not clear to me if you want a to get input...this solution sends all input to b)
Of course, in this case the inputs to b are strictly ordered: all of the output of a is sent to b first, then a terminates, then input goes to b. If you want things interleaved, try:
$ ( ./a <&- & cat ) | ./b
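A quick way to see the interleaved variant in action with stand-ins (a sketch, not from the answer above): seq plays the role of a, and cat -n plays b, numbering whatever reaches it.

# seq writes three lines into the pipe while cat forwards whatever you type;
# press Ctrl-D to end your input and let the pipeline finish.
$ ( seq 3 <&- & cat ) | cat -n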
