python3 reset stdin after end of file - python-3.x

On Linux, I want to read from stdin, where stdin comes first from a pipe and then from user input.
So the command looks like:
cat my-file | ./my-prog.py
After reading all the lines from the pipe:
inf = open(0, "r")
inf.readlines()
I want to get further input from the user. I do it with input(), but I get EOFError: EOF when reading a line.
I need a way to reset the stdin before the call to input().
Trying sys.stdin.seek(0) gives
io.UnsupportedOperation: underlying stream is not seekable
I read that in C there is clearerr() that does this, but I was not able to find how to do the same in Python.
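A sketch of one common workaround (not from the original post): clearerr() would not actually help here, because the pipe really is at end of file; instead, reopen the controlling terminal /dev/tty and rebind stdin to it. This assumes the script is run from an interactive terminal session:

import sys

inf = open(0, "r")
lines = inf.readlines()           # consume everything from the pipe

# the pipe is now at EOF; rebind stdin to the controlling terminal
sys.stdin = open("/dev/tty", "r")
answer = input("Continue? ")      # reads from the keyboard again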

Related

isatty() always returning False?

I want to pipe data via stdin to a Python script for onwards processing. The command is:
tail -f /home/pi/ALL.TXT | python3 ./logcheck.py
And the Python code is:
import sys
while 1:
    if sys.stdin.isatty():
        for line in sys.stdin:
            print(line)
I want the code to continuously watch stdin and process each row as it is received. The tail command works when run on its own, but the Python script never outputs anything.
Checking isatty(), it appears to always return False?
Help!
A TTY is when you use your regular terminal, as in opening up a Python REPL in your shell and typing:
BASH> python
>>> from sys import stdin
>>> stdin.isatty()  # True
In your case the standard input is coming from something which is not a TTY. Just add a not in the if statement.
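Applied to the script above, a minimal corrected sketch:

import sys

# stdin is the tail pipe here, so isatty() is False; invert the test
if not sys.stdin.isatty():
    for line in sys.stdin:
        print(line, end="")   # each line already ends with a newline

Note that tail -f never closes the pipe, so the for loop simply blocks until new lines arrive; the surrounding while 1 from the original is unnecessary.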

Pipe Input on a timer

I have a program that asks for input, but it takes a while to load up.
I need a bash script that pipes the output into a named pipe.
I need a command that makes my echo insert my input only after the program prompts for input. This is my command right now, but it pipes in the input before the prompt:
echo "R" | nc localhost 123 > fifo
This will result in the following output:
username#name:
R
Please enter in an input (R, Q, T):
So my command needs to "wait" until the program prompts, then pipe in the input. Any ideas? This needs to be in a bash script.
You can use sleep:
(sleep 3; echo "R") | nc localhost 123 > fifo
Obviously this has a race condition, and so for industrial applications you should use expect instead.
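If you would rather avoid the race entirely without leaving Python, a minimal sketch with the pexpect library (the prompt text is assumed from the sample output above):

import pexpect

# spawn the program and wait for its actual prompt instead of sleeping
child = pexpect.spawn("nc localhost 123")
child.expect("Please enter in an input")   # blocks until the prompt appears
child.sendline("R")                        # send input only after the prompt
child.expect(pexpect.EOF)
print(child.before.decode())

Like expect, this keys off the prompt itself rather than a fixed delay, so it does not depend on how long the program takes to load.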

linux fifo, when read peer returns, write peer will also return immediately?

I'm on Linux, and I originally expected that for a FIFO file, when a writer dumps some content into it, the writer will wait for the read peer to read "all" of its content and then return. But it did not behave as I expected. I ran a simple experiment:
First of all, I created a fifo file:
$mkfifo hfifo.txt
Then I had a "my.txt" file, having several lines":
$cat my.txt
line1
line2
line3
line4
I open a "tty1" to write to hfifo.txt with my.txt:
cat my.txt >hfifo.txt
I open another terminal "tty2" to read one line from it:
$read l1<hfifo.txt
Well, to my surprise, as "read" finishes, the "cat" in my "tty1" also returns immediately. "echo $l1" prints "line1". This is quite weird to me, because I expected the reader peer to read all the content written to the FIFO before the writer peer (tty1) returned. But the actual result is that once the reader peer ends, the writer peer also ends.
I am just curious:
(1) How does the writer peer know that there is no more reader on the FIFO, so that it finishes? I could be in a loop calling the "read" command to print each line of the file.
(2) Besides the "cat" command as a reader to drain the FIFO, is there a way in shell programming to read this FIFO line by line?
Please kindly suggest, thanks!
strace comes in handy. You can see the following lines for a file with 3 characters plus a newline:
read(3, "qqq\n", 131072) = 4
write(1, "qqq\n", 4) = 4
read(3, "", 131072) = 0
As you can see, both read() and write() return the number of bytes transferred, and on the last iteration read() returns zero, which signals end of file. In your experiment, the writer does not wait for the data to be consumed: once a reader opens the FIFO, cat writes everything into the kernel pipe buffer (your four lines fit easily) and exits. The one line your read consumed was simply the first; the rest was discarded when both ends closed. A writer only gets an error (SIGPIPE/EPIPE) if it is still writing when no readers are left.
Regarding (2): there are commands that do other things, like sed, awk and egrep, which also read a file line by line; but to just read the file, AFAIK only cat.
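To read the FIFO line by line from a program, a minimal Python sketch (assuming the hfifo.txt from above):

import sys

# open() blocks until a writer opens the other end; iterating then
# yields one line at a time until the writer closes the FIFO
with open("hfifo.txt") as fifo:
    for line in fifo:
        sys.stdout.write(line)

In the shell, a while read line; do ...; done < hfifo.txt loop does the same thing, because the redirection opens the FIFO only once for the whole loop.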

How do I provide STDIN data to a J script run via jconsole.exe?

I want to run a J script, providing STDIN, and receive the output of the script with STDOUT.
I feel like I'm missing something blindingly obvious, but the help pages on using jconsole.exe are . . . terse.
My naive thought was that I could run the following in the cmd.exe shell to provide STDIN:
jconsole.exe script.ijs inputstring
Running a script that does not attempt to read STDIN works:
C:\..\bin>jconsole.exe "C:\path\no-input-script.ijs"
success
C:\..\bin>
The no-input-script.ijs file is the following:
stdout 'success'
exit ''
I have the following script-with-input.ijs file:
input =: stdin ''
stdout 'input was ' , input
exit ''
When I run the following, the system hangs:
C:\..\bin>jconsole.exe "C:\path\script-with-input.ijs" xyz
When I then hit Ctrl+C, the script exits and I am left with the following:
C:\..\bin>jconsole.exe "C:\path\script-with-input.ijs" xyz
input was
C:\..\bin>
stdin reads input from STDIN until EOF (usually ^D on *nix, ^Z on Windows). So your script-with-input.ijs waits for user input or a pipe.
c:>jconsole.exe "script-with-input.ijs" hello
this is user input
^D
input was this is user input
What you are actually trying to do is read the arguments of the command. Those are stored in ARGV:
NB. script-with-input.ijs
input =: ARGV
echo input
exit''
Then:
c:>jconsole.exe "script-with-input.ijs" hello
┌────────────┬─────────────────────┬─────┐
│jconsole.exe│script-with-input.ijs│hello│
└────────────┴─────────────────────┴─────┘
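For comparison (not part of the original J answer), the same distinction exists in Python: piped or typed data arrives on stdin, while the words after the program name arrive in sys.argv:

import sys

print("argv:", sys.argv)           # e.g. ['script.py', 'hello']
print("stdin:", sys.stdin.read())  # everything piped or typed until EOF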

Writing Block buffered data to a File without fflush(stdout)

From what I understand about buffers: a buffer is temporarily stored data.
For example, assume you wanted to implement an algorithm for determining whether something is speech or just noise. Doing this on a continuous stream of sound data would be very difficult, so by storing the data in an array you can perform analysis on it.
This array of data is called a buffer.
Now, I have a Linux command where the output is continuous:
stty -F /dev/ttyUSB0 ispeed 4800 && awk -F"," '/SUF/ {print $3,$4,$5,$6,$10,$11,substr($2,1,2),".",substr($2,3,2),".",substr($2,5,2)}' < /dev/ttyUSB0
If I write the output of this command to a file, nothing usable arrives: the output is probably block buffered, and only an empty text file is generated when I terminate the command (Ctrl+C).
Here is what I mean by block buffered:
The three types of buffering available are unbuffered, block
buffered, and line buffered. When an output stream is unbuffered,
information appears on the destination file or terminal as soon as
written; when it is block buffered many characters are saved up and
written as a block; when it is line buffered characters are saved
up until a newline is output or input is read from any stream
attached to a terminal device (typically stdin). The function
fflush(3) may be used to force the block out early. (See
fclose(3).) Normally all files are block buffered. When the first
I/O operation occurs on a file, malloc(3) is called, and a buffer
is obtained. If a stream refers to a terminal (as stdout normally
does) it is line buffered. The standard error stream stderr is
always unbuffered by default.
Now, executing this command,
stty -F /dev/ttyUSB0 ispeed 4800 && awk -F"," '/SUF/ {print $3,$4,$5,$6,$10,$11,substr($2,1,2),".",substr($2,3,2),".",substr($2,5,2)}' < /dev/ttyUSB0 > outputfile.txt
An empty file is generated, because the buffer block might not have been completed when I terminated the process; and since I don't know the block buffer size, there is no way to wait for the block to complete.
To write the output of this command to a file, I have to call fflush() inside awk, which successfully writes the output to the text file. I have already done this:
stty -F /dev/ttyUSB0 ispeed 4800 && awk -F"," '/GGA/ {print "Latitude:",$3,$4,"Longitude:",$5,$6,"Altitude:",$10,$11,"Time:",substr($2+50000,1,2),".",substr($2,3,2),".",substr($2,5,2); fflush(stdout) }' < /dev/ttyUSB0 | head -n 2 > GPS_data.txt
But my question is:
Is there any way to set the buffer block size, so that I know when a block is emitted, eliminating the need for fflush()?
OR
Is there any way to change the buffering from block buffered to unbuffered or line buffered?
You can use stdbuf to run a command with a modified buffer size.
For example, stdbuf -o 100 awk ... runs awk with a 100-byte standard output buffer. Likewise, stdbuf -oL awk ... makes standard output line buffered, and stdbuf -o0 awk ... makes it unbuffered.
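If the filter were a Python script instead of awk, the buffering can also be changed from inside the program; a minimal sketch:

import sys

# force line buffering on stdout (Python 3.7+), so each line is
# flushed as soon as it ends, even when stdout is a pipe or a file
sys.stdout.reconfigure(line_buffering=True)

for line in sys.stdin:
    print(line, end="")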
