Bash: Split stdout from multiple concurrent commands into columns - linux

I am running multiple commands in a bash script using single ampersands, like so:
commandA & commandB & commandC
They each have their own stdout output, but it all gets mixed together and floods the console in an incoherent mess.
I'm wondering if there is an easy way to pipe their outputs into their own columns, using the column command or something similar, i.e. something like:
commandA | column -1 & commandB | column -2 & commandC | column -3
I'm new to this kind of thing, but from some initial digging it seems something like pr might be the ticket? Or the column command...?

Regrettably, I'm answering my own question.
None of the supplied solutions were exactly what I was looking for, so I developed my own command-line utility: multiview. Maybe others will benefit?
It works by piping processes' stdout/stderr to a command interface and then launching a "viewer" to see their outputs in columns:
fooProcess | multiview -s & \
barProcess | multiview -s & \
bazProcess | multiview -s & \
multiview
This will display a neatly organized column view of their outputs. You can name each process as well by adding a string after the -s flag:
fooProcess | multiview -s "foo" & \
barProcess | multiview -s "bar" & \
bazProcess | multiview -s "baz" & \
multiview
There are a few other options, but that's the gist of it.
Hope this helps!

pr is a solution, but not a perfect one. Consider this, which uses process substitution (<(command) syntax):
pr -m -t <(while true; do echo 12; sleep 1; done) \
<(while true; do echo 34; sleep 2; done)
This produces a marching column of the following:
12 34
12 34
12 34
12 34
Though this trivially provides the output you want, the columns do not advance individually—they advance together when all files have provided the same output. This is tricky, because in theory the first column should produce twice as much output as the second one.
You may want to investigate invoking tmux or screen in a tiled mode to allow the columns to scroll separately. A terminal multiplexer will provide the necessary machinery to buffer output and scroll it independently, which is important when showing output side-by-side without allowing excessive output from commandB to scroll commandA and commandC off-screen. Remember that scrolling each column separately will require a lot of screen redrawing, and the only way to avoid screen redraws is to have all three columns produce output simultaneously.
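For example, a minimal tmux sketch (the session name multi and the three commands are placeholders):
# start a detached session running the first command
tmux new-session -d -s multi 'commandA'
# add the other two commands in their own panes
tmux split-window -t multi 'commandB'
tmux split-window -t multi 'commandC'
# tile the panes side by side, then attach to watch all three
tmux select-layout -t multi even-horizontal
tmux attach -t multi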
As a last-ditch solution, consider piping each output to a command that indents each column by a different number of characters:
this is something that commandA outputs and is
    and here is something that commandB outputs
interleaved with the other output, but visually
you might have an easier time distinguishing one
        here is something that commandC outputs
        which is also interleaved with the others
from the other
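A sketch of that idea with sed (the indent widths are arbitrary):
commandA & \
commandB | sed 's/^/    /' & \
commandC | sed 's/^/        /' & \
wait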

This script prints out three vertical sections and a timer, each section containing the output from a single script.
Comment on anything you don't understand and I'll add explanations to my answer as needed.
Hope this helps :)
#!/bin/bash
# Script by jidder

count=0
Elapsed=0

control_c()
{
    # Restore the terminal and clean up on Ctrl+C
    tput rmcup
    rm -f tail.tmp tail2.tmp tail3.tmp
    stty sane
    exit 1
}

Draw()
{
    tput clear
    echo "SCRIPT 1   Elapsed time = $Elapsed seconds"
    echo "------------------------------------------------------------------------------------------------------------------------------------------------------"
    tail -n10 tail.tmp
    tput cup 25 0
    echo "Script 2"
    echo "------------------------------------------------------------------------------------------------------------------------------------------------------"
    tail -n10 tail2.tmp
    tput cup 50 0
    echo "Script 3"
    echo "------------------------------------------------------------------------------------------------------------------------------------------------------"
    tail -n10 tail3.tmp
}

Timer()
{
    if [[ $count -eq 10 ]]; then
        Draw
        ((Elapsed = Elapsed + 1))
        count=0
    fi
}

main()
{
    stty -icanon time 0 min 0   # make read non-blocking
    tput smcup                  # switch to the alternate screen
    Draw
    count=0
    keypress=''

    MYSCRIPT1.sh > tail.tmp &
    MYSCRIPT2.sh > tail2.tmp &
    MYSCRIPT3.sh > tail3.tmp &

    while [ "$keypress" != "q" ]; do
        sleep 0.1
        read keypress
        ((count = count + 2))
        Timer
    done

    stty sane
    tput rmcup
    rm tail.tmp tail2.tmp tail3.tmp
    echo "Thanks for using this script."
    exit 0
}

# The trap must be registered before main runs, or Ctrl+C won't be caught.
trap control_c SIGINT
main
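To try it out, you could stub the three scripts with something like this (hypothetical stand-ins; note the script above invokes MYSCRIPT1.sh without ./, so they must be on your PATH, or change the calls to ./MYSCRIPT1.sh):
for n in 1 2 3; do
    # each stub prints a random number once per second, forever
    printf '#!/bin/bash\nwhile true; do echo "script %s: $RANDOM"; sleep 1; done\n' "$n" > "MYSCRIPT$n.sh"
    chmod +x "MYSCRIPT$n.sh"
done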

Related

Retrieving individual element from xdotool search result with Bash script

Having trouble with a bash script. I can't figure out how to get the individual pids from the xdotool search function.
Code
google-chrome --app=https://google.com &
google-chrome --app=https://google.com &
google-chrome --app=https://google.com &
google-chrome --app=https://google.com &
sleep 5
pids=$(xdotool search --onlyvisible --name google)
echo $pids
width=1920
height=1080
for i in 0 1 2 3;
do
x=$((((i)/2)*$width))
y=$(((i%2)*$height))
echo $x
echo $y
echo "${pids[$i]}"
#xdotool windowmove ${pids[i]} $x $y
done
Output
46137345 46137352 46137355 46137358
0
0
46137345
46137352
46137355
46137358
0
1080
1920
0
1920
1080
I can't see a \n in the string, which is what I thought was causing it, so I don't know why it's making the new lines.
I'm very new to bash scripting, so I have no doubt it's something stupidly obvious.
I'm using bash version 5.0.3
The problem is that your pids variable is just a string, but you're trying to treat it like an array.
Use an outer set of parentheses to make pids an array, as in:
pids=($(xdotool search --onlyvisible --name google))
$ for i in "${pids[@]}"; do echo $i; done
46137345
46137352
46137355
46137358
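With pids as a real array, the original placement loop then works; here is a sketch of the corrected version (keeping the asker's variable names and the commented-out windowmove):
pids=($(xdotool search --onlyvisible --name google))
width=1920
height=1080
for i in 0 1 2 3; do
    # i/2 selects the column (0,0,1,1) and i%2 the row (0,1,0,1)
    x=$(( (i / 2) * width ))
    y=$(( (i % 2) * height ))
    xdotool windowmove "${pids[i]}" "$x" "$y"
done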

Write output of subprocess launched by a `screen` CLI Command to a log file?

I am launching a bunch of instances of the same script (generate_records.php) in screens. I am doing this to easily parallelize the processes. I would like to write the output of each of the PHP processes to a log file using something like &> log_$i (stdout and stderr).
My shell scripting is weak sauce, and I can't get the syntax correct. I keep getting the output of the screen, which is empty.
Example: launch_processes_in_screens.sh
max_record_id=300000000
# number of parallel processors to run
total_processors=10
# max staging companies per processor
(( num_records_per_processor = $max_record_id / $total_processors))
i=0
while [ $i -lt $total_processors ]
do
(( starting_id = $i * $num_records_per_processor + 1 ))
(( ending_id = $starting_id + $num_records_per_processor - 1 ))
printf "\n - Starting processor #%s starting at ID:%s and ending at ID: %s" "$i" "$starting_id" "$ending_id"
screen -d -m -S "process_$i" php generate_records.php "$starting_id" "$num_records_per_processor" "FALSE"
((i++))
done
If the only reason you're using screen is to launch many processes in parallel, you can avoid it entirely and use & to start them in the background:
php generate_records.php "$starting_id" "$num_records_per_processor" FALSE &
You may also be able to remove some code by using GNU parallel.
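For example, a sketch of the asker's loop with per-process logs (reusing the existing variables; log_$i is the naming from the question):
while [ "$i" -lt "$total_processors" ]; do
    (( starting_id = i * num_records_per_processor + 1 ))
    # &> redirects both stdout and stderr of the background job to its own log
    php generate_records.php "$starting_id" "$num_records_per_processor" FALSE &> "log_$i" &
    ((i++))
done
wait   # optionally block until all background jobs finish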

Two way communication with process

I have been given a compiled program. I want to communicate with it from my bash script via the program's stdin and stdout. I need two-way communication, and the program cannot be killed between exchanges of information. How can I do that?
Simple example:
Let that program be a compiled partial-summation program (C++), and let the script's results be the squares of those sums. Program:
#include <iostream>

int main() {
    int num, sum = 0;
    while (true) {
        std::cin >> num;
        sum += num;
        std::cout << sum << std::endl;
    }
}
My script should look something like this (pseudocode):
for i in 1 2 3 4; do
echo "$i" > program
read program to line;
echo $((line * line))
done
If the program instead had for(int i = 1; i <= 4; ++i) (so I only need to read from it), then I could do something like this:
exec 4< <(./program); # Just read from program
for i in 1 2 3 4; do
read <&4 line;
echo "sh: $((line * line))";
done
For more, look here. On the other hand, if the program had std::cout << sum * sum; (so I only need to write to it), then the solution could be:
exec 3> >(./program) # Just write to program
for i in 1 2 3 4; do
echo "$i" >&3
done
My problem is two-way communication with another process/program. I don't have to use exec. I cannot install third-party software; a bash-only solution, without temporary files, would be nice.
If I run another process, it would be nice to know its PID so I can kill it at the end of the script.
In the future I'm thinking about communication between two or maybe three processes, where the output of the first program may depend on the output of the second, and vice versa, like a communicator between processes.
However, I cannot recompile the programs or change anything in them. I only have stdin and stdout communication with the programs.
If you have bash 4.0 or newer, you can use coproc.
However, don't forget that the input/output of the command you want to communicate with might be buffered.
In that case you should wrap the command with something like stdbuf -i0 -o0.
Reference: How to make output of any shell command unbuffered?
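For instance, the coproc from the example below could be wrapped like this (a sketch, assuming stdbuf from GNU coreutils is available):
coproc mycoproc {
    stdbuf -i0 -o0 ./a.out
}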
Here's an example
#!/bin/bash
coproc mycoproc {
./a.out # your C++ code
}
# input to "std::cin >> num;"
echo "1" >&${mycoproc[1]}
# get output from "std::cout << sum << std::endl;"
# "-t 3" means that it waits for 3 seconds
read -t 3 -u ${mycoproc[0]} var
# print it
echo $var
echo "2" >&${mycoproc[1]}
read -t 3 -u ${mycoproc[0]} var
echo $var
echo "3" >&${mycoproc[1]}
read -t 3 -u ${mycoproc[0]} var
echo $var
# you can get PID
kill $mycoproc_PID
The output will be:
1
3
6
If your bash is older than 4.0, you can do the same thing with mkfifo:
#!/bin/bash
mkfifo f1 f2
exec 4<> f1
exec 5<> f2
./a.out < f1 > f2 &
echo "1" >&4
read -t 3 -u 5 var
echo $var
rm f1 f2
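The sketch above leaves ./a.out running; since knowing the PID for cleanup was part of the question, you could capture it with $! and close the descriptors at the end (same fifos assumed):
./a.out < f1 > f2 &
pid=$!
# ... the echo/read exchange goes here ...
exec 4>&- 5>&-   # close both file descriptors
kill "$pid"
rm f1 f2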
Considering that your C++ program reads from standard input and prints to standard output, it's easy to put it inside a simple chain of pipes:
command_that_writes_output | your_cpp_program | command_that_handles_output
In your specific case you probably need to modify the program to handle one single input and write one single output, removing the loop. Then you can do it very simply, like this:
for i in 1 2 3 4; do
result=`echo $i | ./program`
echo $((result * result))
done

How to delete older contents of file that is being continuously written to?

I have a simulation running and expect it to go on for at least 10 more hours. I have directed the console output to a .txt file using
(binary) > out.txt
This out.txt is becoming too huge. I do not need a lot of the contents in this file. How can I delete the older parts of this file without harming the writing process? The contents that will be written towards the end of the simulation are important to me.
As Carl mentioned in the comments, you cannot really do this on an actively written log file. However, if the initial data is not relevant to you, you can do the following (though beware that you will lose all the data collected so far):
> out.txt
For the future, you can use a utility called logrotate(8).
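A minimal logrotate configuration sketch (path and sizes are placeholders; copytruncate matters here because the writing process keeps the file open):
/path/to/out.txt {
    size 100M
    rotate 3
    copytruncate
}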
You could use tail to only store the end of the file:
# Say you want to save the last 100 lines
your_binary | tail -n 100 > out.txt
This assumes that the output ends at some point.
Saw your comments - the file is 10 GB now... Try using sed -i to reduce the size so that it will work with the other tools; if you want to completely erase it, then :> logfile.
Tools can only cope with a file as big as their buffer, otherwise the data has to be streamed. Something like split used to fail on a 4 GB file; I don't know if the code has since been adjusted for this, it's been a long time since I had to work with a file that big.
Two suggestions:
1. There were a few methods I could think of, like using split, but almost all of them involved creating a separate (reduced) file from the log and renaming that or redirecting to that. Use split to break the log into smaller logs (split -l 100 ...) and just redirect the program output to the most recent log found using ls -1. This seems to work fine.
2. I also tried a second method to edit/truncate the top 10 lines in the same file:
Kaizen ~/shell_prac
$ cat zcntr.sh
## test truncate a log file
##set -xv
:> zcntr.log

## fxn
cntr_log()
{
    limit=$1
    start=0
    while [ $start -lt $limit ]
    do
        echo "count is $start" >> zcntr.log   ## generate a continuous log
        start=$(($start + 1))
        sleep 1
        cnt=$(($start % 10))
        if [ $cnt -eq 0 ]   ## check to truncate the top 10 lines using sed
        then
            echo "truncate at $start" >> zcntr.log
            sed -i "1,10d" zcntr.log
        fi
    done
}

## main cntrlr
echo "enter a limit"
read lmt
cntr_log $lmt
This seems to work; I tested it with a counter printing up to 25. Output:
Kaizen ~/shell_prac
$ cat zcntr.log
count is 19
truncate at 20
count is 20
count is 21
count is 22
count is 23
count is 24
I think either of the two will help.
Let me know if there is something else on your mind!
Truncate the file with cat:
cat /dev/null > out.txt

Improve my password generation script

I have created a little password generation script. I'm curious about what improvements can be made to it, apart from input error handling, usage information, etc. It's the core functionality I'm interested in seeing improvements upon.
This is what it does (and what I like it to do):
Keep it easy to change which Lowercase characters (L), Uppercase characters (U), Numbers (N) and Symbols (S) are used in passwords.
I'd like it to find a new password of length 10 for me in at most two seconds.
It should take a variable length of the password string as an argument.
Only a password containing at least one L, U, N and S should be accepted.
Here is the code:
#!/bin/bash
PASSWORDLENGTH=$1
RNDSOURCE=/dev/urandom
L="acdefghjkmnpqrtuvwxy"
U="ABDEFGHJLQRTY"
N="012345679"
S="\-/\\)?=+.%#"
until [ $(echo $password | grep [$L] | grep [$U] | grep [$N] | grep -c [$S] ) == 1 ]; do
password=$(cat $RNDSOURCE | tr -cd "$L$U$N$S" | head -c $PASSWORDLENGTH)
echo In progress: $password # It's simply for debug purposes, ignore it
done
echo Final password: $password
My questions are:
Is there a nicer way of checking if the password is acceptable than the way I'm doing it?
What about the actual password generation?
Any coding style improvements? (The short variable names are temporary. Though I'm using uppercase names for "constants" [I know there formally are none] and lowercase for variables. Do you like it?)
Let's vote on the most improved version. :-)
For me it was just an exercise, mostly for fun and as a learning experience, though I will start using it instead of the generator in KeepassX, which I'm using now. It will be interesting to see which improvements and suggestions come from more experienced Bashistas (I made that word up).
I created a basic little script to measure performance (in case someone thinks it's fun):
#!/bin/bash
SAMPLES=100
SCALE=3
echo -e "PL\tMax\tMin\tAvg"
for p in $(seq 4 50); do
    bcstr=""; max=-98765; min=98765
    for s in $(seq 1 $SAMPLES); do
        gt=$(\time -f %e ./genpassw.sh $p 2>&1 1>/dev/null)
        bcstr="$gt + $bcstr"
        max=$(echo "if($max < $gt ) $gt else $max" | bc)
        min=$(echo "if($min > $gt ) $gt else $min" | bc)
    done
    bcstr="scale=$SCALE;($bcstr 0)/$SAMPLES"
    avg=$(echo $bcstr | bc)
    echo -e "$p\t$max\t$min\t$avg"
done
You're throwing away a bunch of randomness in your input stream. Keep those bytes around and translate them into your character set. Replace the password=... statement in your loop with the following:
ALL="$L$U$N$S"
password=$(tr "\000-\377" "$ALL$ALL$ALL$ALL$ALL" < $RNDSOURCE | head -c $PASSWORDLENGTH)
The repetition of $ALL is to ensure that there are at least 256 characters in the "map to" set, so every possible input byte is covered.
I also removed the gratuitous use of cat.
(Edited to clarify that what appears above is not intended to replace the full script, just the inner loop.)
Edit: Here's a much faster strategy that doesn't call out to external programs:
#!/bin/bash
PASSWORDLENGTH=$1
RNDSOURCE=/dev/urandom
L="acdefghjkmnpqrtuvwxy"
U="ABDEFGHJLQRTY"
N="012345679"
# (Use this with tr.)
#S='\-/\\)?=+.%#'
# (Use this for bash.)
S='-/\)?=+.%#'
ALL="$L$U$N$S"
# This function echoes a random index into its argument.
function rndindex() { echo $(($RANDOM % ${#1})); }
# Make sure the password contains at least one of each class.
password="${L:$(rndindex $L):1}${U:$(rndindex $U):1}${N:$(rndindex $N):1}${S:$(rndindex $S):1}"
# Add random other characters to the password until it is the desired length.
while [[ ${#password} -lt $PASSWORDLENGTH ]]
do
    password=$password${ALL:$(rndindex $ALL):1}
done
# Now shuffle it.
chars=$password
password=""
while [[ ${#password} -lt $PASSWORDLENGTH ]]
do
    n=$(rndindex $chars)
    ch=${chars:$n:1}
    password="$password$ch"
    # Remove the character we just used from chars so it can't be picked again.
    if [[ $n == $(( ${#chars} - 1 )) ]]; then
        chars="${chars:0:$n}"
    elif [[ $n == 0 ]]; then
        chars="${chars:1}"
    else
        chars="${chars:0:$n}${chars:$((n+1))}"
    fi
done
echo $password
Timing tests show this runs 5-20x faster than the original script, and the time is more predictable from one run to the next.
You could just use uuidgen or pwgen to generate your random passwords, maybe shuffling some letters around afterwards or something of the sort.
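A quick sketch of that idea (flags per the standard pwgen tool: -s for fully random passwords, -y to include symbols):
pwgen -sy 10 1                      # one random 10-character password with symbols
uuidgen | tr -d '-' | head -c 10    # hex only, so no symbol/uppercase guarantee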
secpwgen is very good (it can also generate easier-to-remember Diceware passwords), but it has almost disappeared from the net. I managed to track down a copy of the 1.3 source and put it on GitHub.
It is also now part of Alpine Linux.
