Insert input to an exe program with a Linux script

I have a program that asks 1000 times for input, cycling through prompts 1-5; it looks like this:
insert1:
insert2:
insert3:
insert4:
insert5:
//and again 1-5
insert 1:
...in total it will get 1000 inputs
I want to write a one-line script that runs the program described above and feeds it the required input each time.
This is what I tried:
#!/bin/bash
./my_script.exe -l | for i in {1..200}; do for j in {1..5}; do j; done; done

You are nearly there, but do it the other way around:
for ((i=1;i<=200;i++)); do
    for ((j=1;j<=5;j++)); do
        echo $j
    done
done | ./myscript.exe -l
You can put a # before the | to comment it out and see what the script sends to your program.
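Since the question asks for a one-liner, the same loops can be written on a single line (a sketch; adjust ./myscript.exe -l to your actual program and options):
for i in {1..200}; do for j in {1..5}; do echo $j; done; done | ./myscript.exe -l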
You need to differentiate between parameters which are specified after the program name like this:
program param1 param2 param3
and inputs, which a program gets by reading its stdin and are supplied like this:
printf "input1\ninput2\ninput3\n" | program
Alternative version of second command:
{ echo input1; echo input2; echo input3; } | program
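To see the difference in action, compare these two invocations of a standard tool (a sketch using wc, which counts lines):
printf 'foo\nbar\n' | wc -l    # wc reads two lines from its stdin and prints 2
wc -l somefile                 # wc treats somefile as a parameter naming a file to read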

Related

Pass output logs from a program into a function and store the return code in a variable at the same time

I have a shell script with a function that logs statements to a file. SomeProgram is another program run from my shell script, and its logs are piped into the function LogToFile.
#!/bin/sh
LogToFile() {
    [[ ! -t 0 ]] && while read line; do echo "$line" >> $MY_LOG_FILE; done
    for arg; do echo "$arg" >> $MY_LOG_FILE; done
}
SomeProgram | LogToFile
Question:
All is good until here, but I have been trying to get the return code from SomeProgram and store it in a variable. How can I do that without losing the functionality of SomeProgram's logs going into my LogToFile function? I tried the following options, but in vain.
RETVAL=SomeProgram | LogToFile
RETVAL=(SomeProgram) | LogToFile
RETVAL=(SomeProgram | LogToFile)
Is it possible to pass the output of a program to a function parameter and collect the return value in another variable at the same time?
I figured it out eventually. PIPESTATUS is the tool to use here.
Following is how I use it to get the return code of SomeProgram into RETVAL, for example.
SomeProgram | LogToFile
RETVAL=${PIPESTATUS[0]}
This is how you get the exit status of the program on the left of the pipe. PIPESTATUS is an array that contains the return codes of all the commands joined by the pipes. PIPESTATUS[1] would give the exit status of LogToFile, for example, if LogToFile were an external program.
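A quick way to see PIPESTATUS in action (a sketch using the shell built-ins false and true):
false | true
echo "${PIPESTATUS[@]}"    # prints "1 0": the exit codes of false and true
Note that PIPESTATUS is overwritten by every pipeline, so copy the value you need immediately after the pipeline finishes.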

How to redirect this Perl script's output to a file?

I don't have much experience with perl, and would appreciate any/all feedback....
[Before I start: I do not have access/authority to change the existing perl scripts.]
I run a couple perl scripts several times a day, but I would like to begin capturing their output in a file.
The first perl script does not take any arguments, and I'm able to "tee" its output without issue:
/asdf/loc1/rebuild-stuff.pl 2>&1 | tee $mytmpfile1
The second perl script hangs with this command:
/asdf/loc1/create-site.pl --record=${newsite} 2>&1 | tee $mytmpfile2
FYI, the following command does NOT hang:
/asdf/loc1/create-site.pl --record=${newsite} 2>&1
I'm wondering if /asdf/loc1/create-site.pl is trying to process the | tee $mytmpfile2 as additional command-line arguments? I'm not permitted to share the entire script, but here's the beginning of its main routine:
...
my $fullpath = $0;
$0 =~ s%.*/%%;
# Parse command-line options.
...
Getopt::Long::config ('no_ignore_case','bundling');
GetOptions ('h|help'               => \$help,
            'n|dry-run|just-print' => \$preview,
            'q|quiet|no-mail'      => \$quiet,
            'r|record=s'           => \$record,
            'V|noverify'           => \$skipverify,
            'v|version'            => \$version) or exit 1;
...
Does the above code provide any clues? Other than modifying the script, do you have any tips for allowing me to capture its output in a file?
It's not hanging; you are "suffering from buffering". Like most programs, Perl buffers STDOUT by default: it is line-buffered (flushed on every newline) when connected to a terminal, and block-buffered otherwise. So when STDOUT isn't connected to a terminal, you won't get any output until 4 KiB or 8 KiB of output has accumulated (depending on your version of Perl) or the program exits.
You could add $| = 1; to the script to disable buffering for STDOUT. If your program ends with a true value or exits using exit, you can do that without changing the .pl file. Simply use the following wrapper:
perl -e'
    $| = 1;
    $0 = shift;
    do($0);
    my $e = $@ || $! || "$0 didn\x27t return a true value\n";
    die($e) if $e;
' -- prog args | ...
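Applied to the command from the question, the wrapper would look like this (a sketch; only prog args is replaced):
perl -e'
    $| = 1;
    $0 = shift;
    do($0);
    my $e = $@ || $! || "$0 didn\x27t return a true value\n";
    die($e) if $e;
' -- /asdf/loc1/create-site.pl --record=${newsite} 2>&1 | tee $mytmpfile2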
Or you could fool the program into thinking it's connected to a terminal using unbuffer.
unbuffer prog args | ...

Two-way communication with a process

I have been given a compiled program. I want to communicate with it from my bash script through the program's stdin and stdout; I need two-way communication, and the program cannot be killed between exchanges of information. How can I do that?
A simple example: let the program be a compiled partial-summation program (C++), and let the script print the squares of those sums. The program:
#include <iostream>

int main() {
    int num, sum = 0;
    while (true) {
        std::cin >> num;
        sum += num;
        std::cout << sum << std::endl;
    }
}
My script should look something like this (pseudocode):
for i in 1 2 3 4; do
    echo "$i" > program
    read program to line;
    echo $((line * line))
done
If the program instead had for (int i = 1; i <= 4; ++i), then I could do something like this:
exec 4< <(./program)   # just read from the program
for i in 1 2 3 4; do
    read <&4 line
    echo "sh: $((line * line))"
done
For more, look here. On the other hand, if the program instead had std::cout << sum * sum;, then the solution could be:
exec 3> >(./program)   # just write to the program
for i in 1 2 3 4; do
    echo "$i" >&3
done
My problem is two-way communication with another process/program. I don't have to use exec, but I cannot install third-party software; a bash-only solution, without files, would be nice.
If I run another process, it would be nice to know its PID so I can kill it at the end of the script.
In the future I am thinking about communication between two or maybe three processes, where the output of the first program may depend on the output of the second and vice versa, like a broker between processes.
However, I cannot recompile the programs or change anything in them; I only have stdin and stdout communication with the programs.
If you have bash 4.0 or newer, you can use coproc.
However, don't forget that the input/output of the command you want to communicate with might be buffered; in that case you should wrap the command with something like stdbuf -i0 -o0.
Reference: How to make output of any shell command unbuffered?
Here's an example:
#!/bin/bash
coproc mycoproc {
    ./a.out   # your C++ code
}
# input to "std::cin >> num;"
echo "1" >&${mycoproc[1]}
# get output from "std::cout << sum << std::endl;"
# "-t 3" means that it waits for 3 seconds
read -t 3 -u ${mycoproc[0]} var
# print it
echo $var
echo "2" >&${mycoproc[1]}
read -t 3 -u ${mycoproc[0]} var
echo $var
echo "3" >&${mycoproc[1]}
read -t 3 -u ${mycoproc[0]} var
echo $var
# you can get the PID
kill $mycoproc_PID
The output will be:
1
3
6
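The three echo/read pairs could also be written as a loop (a sketch, using the same file descriptors as above):
for i in 1 2 3; do
    echo "$i" >&${mycoproc[1]}
    read -t 3 -u ${mycoproc[0]} var
    echo $var
done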
If your bash is older than 4.0, mkfifo can do the same thing:
#!/bin/bash
mkfifo f1 f2
exec 4<> f1
exec 5<> f2
./a.out < f1 > f2 &
echo "1" >&4
read -t 3 -u 5 var
echo $var
rm f1 f2
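If you also want the PID of the background program (as asked in the question), capture $! right after the & line (a sketch extending the example above):
./a.out < f1 > f2 &
prog_pid=$!
# ... exchange data over fds 4 and 5 as above ...
kill $prog_pid
rm f1 f2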
Considering that your C++ program reads from standard input and prints to standard output, it's easy to put it inside a simple chain of pipes:
command_that_writes_output | your_cpp_program | command_that_handle_output
In your specific case you probably need to modify the program to handle a single input and write a single output, removing the loop. Then you can do it very simply, like this:
for i in 1 2 3 4; do
    result=$(echo $i | ./program)
    echo $((result * result))
done

Whiptail Gauge: Variable in loop not being set

I am new to bash and whiptail, so excuse the ignorance.
When assigning a variable in the for loop, the new value of 20 is never set when using a whiptail dialog. Any suggestions why?
andy="10"
{
for ((i = 0 ; i <= 100 ; i+=50)); do
andy="20"
echo $i
sleep 1
done
} | whiptail --gauge "Please wait" 5 50 0
# }
echo "My val $andy
A command inside a pipeline (that is, a series of commands separated by |) is always executed in a subshell, which means that each command has its own variable environment. The same is true of the commands inside the compound command (…), but not the compound command {…}, which can normally be used for grouping without creating a subshell.
In bash or zsh, you can solve this problem using process substitution instead of a pipeline. For example:
andy="10"
for ((i=0 ; i <= 100 ; i+=50)); do
andy="20"
echo $i
sleep 1
done > >(whiptail --gauge "Please wait" 6 50 0)
echo "My val $andy
>(whiptail ...) will cause a subshell to be created to execute whiptail; the entire expression will be substituted by the name of this subshell's standard input (on Linux it will be something like /dev/fd/63, but it could be a FIFO on other OSes). > >(...) causes standard output to be redirected to the subshell's standard input; the first > is just a normal stdout redirect.
The statements inside {} are not ordinarily executed in a subshell. However, when you add a pipe (|) after the group, they are executed in a subshell.
If you remove the pipe to whiptail, you will see the updated value of andy.
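A minimal demonstration of the difference (a sketch; run it in bash):
x=1
{ x=2; } | cat       # the pipe forces the brace group into a subshell
echo $x              # prints 1: the assignment was lost
x=1
{ x=2; } > >(cat)    # process substitution keeps the group in the current shell
echo $x              # prints 2: the assignment survived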

How to delete older contents of file that is being continuously written to?

I have a simulation running and expect it to go on for at least 10 more hours. I have directed the console output to a .txt file using
(binary) > out.txt
This out.txt is becoming too huge, and I do not need most of its contents. How can I delete the older parts of this file without harming the writing process? The contents written towards the end of the simulation are what matter to me.
As Carl mentioned in the comments, you cannot really do this on an actively written log file. However, if the initial data is not relevant to you, you can do the following (though beware that you will lose all existing data):
> out.txt
For the future, you can use a utility called logrotate(8).
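A minimal logrotate configuration for this case might look like the following (a sketch; the path and sizes are examples, and copytruncate is the directive that lets logrotate handle a file that is still being written to):
/path/to/out.txt {
    size 100M
    rotate 3
    copytruncate
}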
You could use tail to only store the end of the file:
# Say you want to save the last 100 lines
your_binary | tail -n 100 > out.txt
This assumes that the output ends at some point.
I saw your comments - the file is 10 GB now... Try using sed -i to reduce its size so that it will work with the other tools; if you want to completely erase it, use :> logfile.
Tools can only cope with a file as big as their buffer; anything larger should be streamed. Something like split won't work on a 4 GB file - I don't know if the code has been adjusted for this; it's been a long time since I had to work with a file that big.
Two suggestions:
1. There were a few methods I could think of, like using split, but almost all involved creating a separate (reduced) file from the log and renaming it or redirecting to it: use split to break the log into smaller logs (split -l 100 ...) and just redirect the program output to the last log found using ls -1. This seems to work fine.
2. I also tried a second method, truncating the top 10 lines of the same file:
Kaizen ~/shell_prac
$ cat zcntr.sh
## test truncating a log file
## set -xv
:> zcntr.log

## fxn
cntr_log()
{
    limit=$1
    start=0
    while [ $start -lt $limit ]
    do
        echo "count is $start" >> zcntr.log   ## generate a continuous log
        start=$(($start + 1))
        sleep 1
        cnt=$(($start % 10))
        if [ $cnt -eq 0 ]   ## check to truncate the top 10 lines using sed
        then
            echo "truncate at $start" >> zcntr.log
            sed -i "1,10d" zcntr.log
        fi
    done
}

## main cntrlr
echo "enter a limit"
read lmt
cntr_log $lmt
This seems to work; I tested it with a counter printing up to the value 25. Output:
Kaizen ~/shell_prac
$ cat zcntr.log
count is 19
truncate at 20
count is 20
count is 21
count is 22
count is 23
count is 24
I think either of the two will help. Let me know if there is something else on your mind!
Truncate the file with cat:
cat /dev/null > out.txt
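If GNU coreutils is available, the truncate utility does the same thing (a sketch):
truncate -s 0 out.txt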
