Input loop with FIFO problems - linux

I'm having some trouble using FIFOs for stdin.
I have a script like this:
#!/usr/bin/env ruby
while true do
data = gets
puts "Got: #{data}"
end
Then I run it like:
$ ./script < input.fifo &
$ echo testdata > input.fifo
It will print something like:
Got: testdata
Got:
Got:
Got:
Got:
Got:
etc.
My suspicion is that something is wrong with the FIFO: that it is not getting cleared out after its contents are sent to the script.
I tried the same thing with a C program that has a similar input loop using scanf("%d", ...), and it behaved like this:
$ echo 1 > input.fifo
Got: 1
Got: 1
Got: 1
Got: 1
etc.
So it would seem that the last thing in the FIFO gets stuck there. In the Ruby example it is an empty line, because gets captures the \n; in the C example it is the 1 itself.
Can anyone offer any insight?
Thanks!

The situation is simple: after 'echo 1 > input.fifo', the file 'input.fifo' is opened, "1" is written to it, and it is closed.
The problem is that it is closed. When a FIFO is closed on the writing side, that counts as end of file on the reading side. So if you checked the return code of scanf in your C example, it would be equal to the EOF constant.
And after end of file, once you have read all the data from the FIFO, any further attempt to read will return immediately and report end of file again.

Related

Pass output logs from a program into a function and store the return code in a variable at the same time

I have a shell script which has a function to log statements. SomeProgram is another program which is run from my shell script, and its logs are passed into the function LogToFile.
#!/bin/sh
LogToFile() {
[[ ! -t 0 ]] && while read line; do echo "$line" >> $MY_LOG_FILE; done
for arg; do echo "$arg" >> $MY_LOG_FILE; done
}
SomeProgram | LogToFile
Question:
All is good until here, but I have been trying to get the return code from SomeProgram and store it in a variable. How can I do that without losing the logging of SomeProgram's output through my LogToFile function? I tried the following options, but in vain.
RETVAL=SomeProgram | LogToFile
RETVAL=(SomeProgram) | LogToFile
RETVAL=(SomeProgram | LogToFile)
Is it possible to pass the output of a program to a function parameter and collect the return value in another variable at the same time?
I figured it out eventually. PIPESTATUS is the tool to use here.
Following is the way I can use it to get the return code of SomeProgram into RETVAL for example.
SomeProgram | LogToFile
RETVAL=${PIPESTATUS[0]}
Above is the way to get the return code of the program on the left of the pipe. PIPESTATUS is an array which contains the return codes of all the commands joined by the pipes.
PIPESTATUS[1] would give the return code of LogToFile, for example, if LogToFile were a separate program.
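As a quick, hypothetical illustration: PIPESTATUS holds one exit status per pipeline stage, and it has to be read immediately after the pipeline because the next command overwrites it. Note that PIPESTATUS is a bash feature, so the script should run under bash rather than plain sh.
false | true
echo "${PIPESTATUS[@]}"   # prints "1 0": false exited with 1, true exited with 0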

Don't know how to fix Popen "Invalid file object" error

I am trying to get a file name and pass it to a command using popen. Then I want to print the output. This is my code:
filePath = tkinter.filedialog.askopenfilename(filetypes=[("All files", "*.*")])
fileNameStringForm = (basename(filePath ))
fileNameByteForm = fileNameStringForm.encode(encoding='utf-8')
process = subprocess.Popen(['gagner','-arg1'], shell = True, stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
process.communicate(fileNameByteForm)
stdout, stderr = process.communicate() <<------ERROR POINTS TO THIS LINE
stringOutput = stdout.decode('urf-8')
print(stringOutput)
I am getting the following error:
ValueError: Invalid file object: <_io.BufferedReader name=9>
I have looked at other similar questions but nothing seems to have solved my problem. Can some show me where I am going wrong in the code?
Edit:
If I were to run the command in a command line it would be:
gagner -arg1 < file1
What you are doing is not what that command line describes. You are actually executing this:
echo "file1" | gagner -arg1
You will need to make sure that you pass in the file contents yourself. Popen will not open and read the file for you.
According to the documentation, what communicate() does is
interact with process: Send data to stdin. Read data from stdout and stderr, until end-of-file is reached. Wait for process to terminate.
So, once you have run
process.communicate(fileNameByteForm)
your sub process has finished and the pipes have been closed. The second call will then fail as a result.
What you want to do instead is
stdout, stderr = process.communicate(input_data)
which will pipe your input data into the sub process and read stdout and stderr.
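A minimal sketch of what the two points above combine to, assuming gagner reads the file's contents from stdin (as gagner -arg1 < file1 implies); filePath is the path picked in the question's dialog, and shell=True is dropped since the arguments are passed as a list:
import subprocess

# read the file's contents yourself; Popen will not do it for you
with open(filePath, 'rb') as f:
    input_data = f.read()

process = subprocess.Popen(['gagner', '-arg1'],
                           stdin=subprocess.PIPE,
                           stdout=subprocess.PIPE,
                           stderr=subprocess.PIPE)
stdout, stderr = process.communicate(input_data)   # one call: feed stdin, collect stdout/stderr
print(stdout.decode('utf-8'))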

Perl anonymous pipe no output

Question
Why is nothing printed when using an anonymous pipe, unless I print the actual data from the pipe?
Example
use strict;
use warnings;
my $child_process_id = 0;
my $vmstat_command = 'vmstat 7|';
$child_process_id = open(VMSTAT, $vmstat_command) || die "Error when executing \"$vmstat_command\": $!";
while (<VMSTAT>) {
print "hi" ;
}
close VMSTAT or die "bad command: $! $?";
Appears to hang
use strict;
use warnings;
my $child_process_id = 0;
my $vmstat_command = 'vmstat 7|';
$child_process_id = open(VMSTAT, $vmstat_command) || die "Error when executing \"$vmstat_command\": $!";
while (<VMSTAT>) {
print "hi" . $_ ;
# ^^^ Added this
}
close VMSTAT or die "bad command: $! $?";
Prints
hiprocs -----------memory---------- ---swap-- -----io---- -system-- -----cpu------
hi r b swpd free buff cache si so bi bo in cs us sy id wa st
hi 1 0 0 7264836 144200 307076 0 0 0 1 0 14 0 0 100 0 0
etc...
Expected behaviour
Would be to print hi for every line of output of vmstat for the first example.
Versions
perl, v5.10.0
GNU bash, version 3.2.51
Misc
It also appears to hang when using chomp before printing the line (which I thought only removes newlines).
I feel like I'm missing something fundamental about how the pipe is read and processed, but I could not find a similar question. If there is one, mark this as a dupe and I'll have a look at it.
Any further information needed just ask.
Alter
print "hi";
to
print "hi\n";
and it also "works"
The reason it fails is that output to the terminal is line buffered by default, so nothing is written out until a newline is printed.
Setting $| will flush the buffer straight away:
If set to nonzero, forces a flush right away and after every write or print on the currently selected output channel. Default is 0 (regardless of whether the channel is really buffered by the system or not; "$|" tells you only whether you've asked Perl explicitly to flush after each write). STDOUT will typically be line buffered if output is to the terminal and block buffered otherwise. Setting this variable is useful primarily when you are outputting to a pipe or socket, such as when you are running a Perl program under rsh and want to see the output as it's happening. This has no effect on input buffering. See the getc entry in the perlfunc manpage for that. (Mnemonic: when you want your pipes to be piping hot.)
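For completeness, a minimal sketch of the $| route, keeping the question's loop otherwise unchanged:
use strict;
use warnings;

$| = 1;   # autoflush STDOUT: each print is written out immediately
open(VMSTAT, 'vmstat 7|') || die "Error when executing vmstat: $!";
while (<VMSTAT>) {
    print "hi";   # now appears once per vmstat line, even without a newline
}
close VMSTAT or die "bad command: $! $?";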

Until Loop not working as expected

I'm currently learning Linux, and as homework we have to create a few basic shell scripts. Nothing especially complicated, but this one is giving me headaches. Here's my code:
until [ "$toPrint" == 'fin' ]
do
echo "Enter file name to print out :" ; read toPrint
sh ./afficher.sh "$toPrint"
done
Basically, I have another script called afficher.sh (I'm French, so don't mind the French used) and it reads whatever file name it gets as a parameter. However, the moment I type "fin", everything is supposed to stop, except it still tries to print the file called "fin". I read a bit about the until loop on the Internet, and once the condition becomes true the loop should stop, which is not what happens in my case...
Personally, I'd implement this like so -- with a while loop, not an until loop, and checking for the exit condition separately and explicitly:
while true; do
echo "Enter file name to print out :" ; read toPrint
[ "$toPrint" = fin ] && break
sh ./afficher.sh "$toPrint"
done
If you really want to use the loop's condition, you can do that:
while echo "Enter file name to print out :";
read toPrint &&
[ "$toPrint" != fin ]; do
sh ./afficher.sh "$toPrint"
done
...but personally, I'm less fond of this on aesthetic grounds.
You check the condition at the top of the loop, but you read the value in the middle of the loop. The next thing you do after reading the value is always to pass it to afficher.sh, and only after that, on the next iteration, is its value checked to see if you should stop. If you don't want to run afficher.sh on the fin value, you'll need to arrange the control flow so the comparison happens before afficher.sh is invoked, as in the sketch below.
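A hypothetical rearrangement that keeps the until loop from the question but guards the call:
toPrint=''
until [ "$toPrint" = 'fin' ]; do
    echo "Enter file name to print out :" ; read toPrint
    [ "$toPrint" != 'fin' ] && sh ./afficher.sh "$toPrint"
done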

CSV Bash loop Issue with Variables

I have a CSV file which I'm trying to loop through in order to find out whether a user's input appears in the CSV data. I wrote the following code, which sometimes works and sometimes doesn't. It always stops working when I try to compare against a number with two or more digits: it works fine for numbers 1 through 9, but once you enter, say, 56, 99, or 100, it stops working.
The CSV data is comma delimited; I have about 300 lines, all just like this:
1,John Doe,Calculus I,5.0
1,John Doe,Calculus II,4.3
1,John Doe,Physics II,3.5
2,Mary Poppins,Calculus I,3.7
2,Mary Poppins,Calculus II,4.7
2,Mary Poppins,Physics I,3.7
The data continues like that, all the way down to ID #100, for a total of 300 lines. Both the sh file and the csv file are in the same folder. I'm using a fresh installation of Ubuntu 12.04.3, with gedit as the text editor.
I tried echoing the variables inside the IF conditional, but it doesn't behave the way it should when testing for the same value. Could someone point me in the right direction? Thanks.
Here's the code:
#s!/bin/bash
echo "enter your user ID";
read user;
INPUT_FILE=notas.csv
while IFS="," read r- ID name asignature final;
do
if [$ID = $user]; then
userType=1;
else
userType=2;
fi
done < notas.csv
Well, your code as written has a few issues.
You have r- instead of -r on the read line - I assume that's a typo not present in your actual code or you wouldn't get very far.
Similarly, you need space around the [...] brackets: [$ID is a syntax error.
You need to quote the parameter expansions in your if clause, and/or switch bracket types. You probably want to make it a numeric comparison, as #imp25 suggested, which I would do by using ((...)).
You probably don't want to set userType to 2 in an else clause, because that will set it to 2 for everyone except whoever is listed last in the file (ID 100, presumably). You want to set it to 2 first, outside the loop. Then, inside the loop when you find a match, set it to 1 and break out of the loop:
userType=2
while IFS=, read -r ID name asignature final; do
if (( $ID == $user )); then
userType=1;
break
fi
done < notas.csv
You could also just use shell tools like awk:
userType=$(awk -F, -vtype=2 '($1=="'"$user"'") {type=1}; END {print type}' notas.csv)
or grep:
grep -q "^$user," notas.csv
userType=$(( $? + 1 ))
etc.
You should quote your variables in the if test statement. You should also perform a numeric test -eq rather than a string comparison =. So your if statement should look like:
if [[ "$ID" -eq "$user" ]]
