How to capture escape sequences sent by terminal? - linux

How would one capture the escape sequences as they are sent by a terminal application (say Konsole, for example)? For example, if you hit PgDown, what is sent to the virtual console?
I would like to record the byte stream sent to the virtual console (e.g. when I hit Ctrl+C, what sequence it produces) to a file I can then read with hexdump.

I wrote a small Python script that does the trick:
#!/bin/env python
import curses
from pprint import pprint

buf = ''

def main(stdscr):
    global buf
    curses.noecho()
    curses.raw()
    curses.cbreak()
    stdscr.keypad(False)
    stop = stdscr.getkey()
    c = stdscr.getkey()
    buf = ''
    while c != stop:
        buf += c
        c = stdscr.getkey()

def run():
    curses.wrapper(main)
    pprint(buf)
    tmp = buf.encode('latin1')
    pprint([hex(x) for x in tmp])
    pprint([bin(x) for x in tmp])

run()
It clears the screen; then you type a key (e.g. a), type anything you want to record, and press the same key as the first one to stop. It then displays all the bytes received. For example, a [start recording], Alt+b, a [stop recording] produces the bytes ['0x1b', '0x62'] with my terminal.
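If you prefer to avoid curses, a rough alternative sketch is to put the terminal into raw mode directly (the Ctrl+D sentinel below is my own choice, not part of the original script):

#!/usr/bin/env python3
# Read raw bytes from the terminal until Ctrl+D, then dump them as hex.
import os
import sys
import termios
import tty

fd = sys.stdin.fileno()
old_settings = termios.tcgetattr(fd)          # save the current terminal modes
try:
    tty.setraw(fd)                            # raw mode: no echo, no line editing, no signals
    captured = bytearray()
    while True:
        ch = os.read(fd, 1)                   # one byte at a time
        if not ch or ch == b'\x04':           # Ctrl+D arrives as byte 0x04 in raw mode
            break
        captured.extend(ch)
finally:
    termios.tcsetattr(fd, termios.TCSADRAIN, old_settings)  # always restore the terminal

print([hex(b) for b in captured])

In raw mode even Ctrl+C arrives as the plain byte 0x03 instead of raising SIGINT, so it shows up in the capture.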

Related

Wireshark unable to open btsnoop file

I followed the instructions in the answer to Bluetooth HCI snoop log not generated to create a btsnoop file using btsnooz.py from my Android bugreport file. When I opened the resulting btsnoop.log file in Wireshark, I got the error: The file "btsnoop.log" isn't a capture file in a format Wireshark understands.
The adb bugreport was done against an S7 Edge running Android 8.0.0.
A copy of the btsnoop.log file can be found here https://drive.google.com/file/d/1Y3544DrhPbI9YxktL6rSWkpAe-YeoPn4/view?usp=sharing
How can I analyze this file?
While it doesn't exactly answer the question, perhaps it'll help find the answer (I'm not allowed to comment yet). Your trace starts with FFFE6200740073006E006F006F007000, where FFFE looks like a UTF-16LE BOM, followed by "btsnoop" interleaved with zero bytes. I got exactly that when running btsnooz.py under the Python 2.7 that had been installed on my W10 x64 laptop in the past, like this:
C:\python27-x64\python.exe btsnooz.py bugreport-2021-01-09-22-52-09.txt >bugreport.pcap
I then tried running it directly like this instead, hoping that W10 has some built-in python of its own:
.\btsnooz.py bugreport-2021-01-09-22-52-09.txt >bugreport.pcap
and got a file half the size that starts with "btsnoop" like valid ones do. Although I'm having a different issue with it (Wireshark can only decode the first few packets), it looks like a step forward, as method #1 of running btsnooz.py above appears to corrupt the output file somehow.
Here's what btsnooz.py should write at the beginning of the file, as follows from its code, so anything else there means the script was executed incorrectly:
sys.stdout.write('btsnoop\x00\x00\x00\x00\x01\x00\x00\x03\xea')
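A quick way to check a generated file against that expected header (just a sketch; the file name is an example):

EXPECTED = b'btsnoop\x00\x00\x00\x00\x01\x00\x00\x03\xea'

# Compare the first bytes of the generated log against the header btsnooz.py writes.
with open('btsnoop.log', 'rb') as f:
    header = f.read(len(EXPECTED))

print("header OK" if header == EXPECTED else "header corrupted: %r" % header)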
HTH...
Update: Apparently the btsnooz.py script is intended for use on Linux: I just verified it on Ubuntu with Python 2.7 and it produces correct traces that Wireshark parses without any such errors. When used on Windows it seems to write 0D0A instead of every 0A, causing the Wireshark parsing errors described above. Replacing those 0D0A sequences back with 0A in a hex editor fixed those errors in my case.
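The same repair can be scripted instead of done in a hex editor; a minimal sketch (the file names are just examples):

# Undo the CRLF translation that Windows applied to the binary btsnoop stream.
with open('bugreport.pcap', 'rb') as f:
    data = f.read()

with open('btsnoop_fixed.log', 'wb') as f:
    f.write(data.replace(b'\r\n', b'\n'))

Note that this blindly replaces every 0D0A pair; that was good enough in the case above, but in principle it could also touch legitimate packet bytes.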
I've just run into the same issue, and have managed to modify btsnooz.py to work correctly with Python 3 on Windows (and I expect Linux etc too, as I didn't do anything specifically with line endings, just encodings etc to match Python 2 behaviour). I've also made it output to a file called "btsnoop.log" in the current directory (overwriting any that might already exist) rather than trying to write to stdout.
I have included my modified code below - just save it as "btsnooz.py" and run python btsnooz.py <name of bugreport file>.txt, using Python 3
#!/usr/bin/env python
"""
This script extracts btsnooz content from bugreports and generates
a valid btsnoop log file which can be viewed using standard tools
like Wireshark.
btsnooz is a custom format designed to be included in bugreports.
It can be described as:
base64 {
  file_header
  deflate {
    repeated {
      record_header
      record_data
    }
  }
}
where the file_header and record_header are modified versions of
the btsnoop headers.
"""
import base64
import fileinput
import struct
import sys
import zlib

# Enumeration of the values the 'type' field can take in a btsnooz
# header. These values come from the Bluetooth stack's internal
# representation of packet types.
TYPE_IN_EVT = 0x10
TYPE_IN_ACL = 0x11
TYPE_IN_SCO = 0x12
TYPE_IN_ISO = 0x17
TYPE_OUT_CMD = 0x20
TYPE_OUT_ACL = 0x21
TYPE_OUT_SCO = 0x22
TYPE_OUT_ISO = 0x2d


def type_to_direction(type):
    """
    Returns the inbound/outbound direction of a packet given its type.
    0 = sent packet
    1 = received packet
    """
    if type in [TYPE_IN_EVT, TYPE_IN_ACL, TYPE_IN_SCO, TYPE_IN_ISO]:
        return 1
    return 0


def type_to_hci(type):
    """
    Returns the HCI type of a packet given its btsnooz type.
    """
    if type == TYPE_OUT_CMD:
        return '\x01'
    if type == TYPE_IN_ACL or type == TYPE_OUT_ACL:
        return '\x02'
    if type == TYPE_IN_SCO or type == TYPE_OUT_SCO:
        return '\x03'
    if type == TYPE_IN_EVT:
        return '\x04'
    if type == TYPE_IN_ISO or type == TYPE_OUT_ISO:
        return '\x05'
    raise RuntimeError("type_to_hci: unknown type (0x{:02x})".format(type))


def decode_snooz(snooz):
    """
    Decodes all known versions of a btsnooz file into a btsnoop file.
    """
    version, last_timestamp_ms = struct.unpack_from('=bQ', snooz)
    if version != 1 and version != 2:
        sys.stderr.write('Unsupported btsnooz version: %s\n' % version)
        exit(1)
    # Oddly, the file header (9 bytes) is not compressed, but the rest is.
    decompressed = zlib.decompress(snooz[9:])
    fp.write('btsnoop\x00\x00\x00\x00\x01\x00\x00\x03\xea'.encode("latin-1"))
    if version == 1:
        decode_snooz_v1(decompressed, last_timestamp_ms)
    elif version == 2:
        decode_snooz_v2(decompressed, last_timestamp_ms)


def decode_snooz_v1(decompressed, last_timestamp_ms):
    """
    Decodes btsnooz v1 files into a btsnoop file.
    """
    # An unfortunate consequence of the file format design: we have to do a
    # pass of the entire file to determine the timestamp of the first packet.
    first_timestamp_ms = last_timestamp_ms + 0x00dcddb30f2f8000
    offset = 0
    while offset < len(decompressed):
        length, delta_time_ms, type = struct.unpack_from('=HIb', decompressed, offset)
        offset += 7 + length - 1
        first_timestamp_ms -= delta_time_ms

    # Second pass does the actual writing out to stdout.
    offset = 0
    while offset < len(decompressed):
        length, delta_time_ms, type = struct.unpack_from('=HIb', decompressed, offset)
        first_timestamp_ms += delta_time_ms
        offset += 7
        fp.write(struct.pack('>II', length, length))
        fp.write(struct.pack('>II', type_to_direction(type), 0))
        fp.write(struct.pack('>II', (first_timestamp_ms >> 32), (first_timestamp_ms & 0xFFFFFFFF)))
        fp.write(type_to_hci(type).encode("latin-1"))
        fp.write(decompressed[offset:offset + length - 1])
        offset += length - 1


def decode_snooz_v2(decompressed, last_timestamp_ms):
    """
    Decodes btsnooz v2 files into a btsnoop file.
    """
    # An unfortunate consequence of the file format design: we have to do a
    # pass of the entire file to determine the timestamp of the first packet.
    first_timestamp_ms = last_timestamp_ms + 0x00dcddb30f2f8000
    offset = 0
    while offset < len(decompressed):
        length, packet_length, delta_time_ms, snooz_type = struct.unpack_from('=HHIb', decompressed, offset)
        offset += 9 + length - 1
        first_timestamp_ms -= delta_time_ms

    # Second pass does the actual writing out to stdout.
    offset = 0
    while offset < len(decompressed):
        length, packet_length, delta_time_ms, snooz_type = struct.unpack_from('=HHIb', decompressed, offset)
        first_timestamp_ms += delta_time_ms
        offset += 9
        fp.write(struct.pack('>II', packet_length, length))
        fp.write(struct.pack('>II', type_to_direction(snooz_type), 0))
        fp.write(struct.pack('>II', (first_timestamp_ms >> 32), (first_timestamp_ms & 0xFFFFFFFF)))
        fp.write(type_to_hci(snooz_type).encode("latin-1"))
        fp.write(decompressed[offset:offset + length - 1])
        offset += length - 1


fp = None


def main():
    if len(sys.argv) > 2:
        sys.stderr.write('Usage: %s [bugreport]\n' % sys.argv[0])
        exit(1)

    iterator = fileinput.input(openhook=fileinput.hook_encoded("latin-1"))
    found = False
    base64_string = ""
    for line in iterator:
        if found:
            if line.find('--- END:BTSNOOP_LOG_SUMMARY') != -1:
                global fp
                fp = open("btsnoop.log", "wb")
                decode_snooz(base64.standard_b64decode(base64_string))
                fp.close()
                sys.exit(0)
            base64_string += line.strip()
        if line.find('--- BEGIN:BTSNOOP_LOG_SUMMARY') != -1:
            found = True

    if not found:
        sys.stderr.write('No btsnooz section found in bugreport.\n')
        sys.exit(1)


if __name__ == '__main__':
    main()

C extensions - how to redirect printf to a python logger?

I have a simple C-extension(see example below) that sometimes prints using the printf function.
I'm looking for a way to wrap the calls to the function from that C-extensions so that all those printfs will be redirected to my python logger.
hello.c:
#include <Python.h>

static PyObject* hello(PyObject* self)
{
    printf("example print from a C code\n");
    return Py_BuildValue("");
}

static char helloworld_docs[] =
    "helloworld(): Any message you want to put here!!\n";

static PyMethodDef helloworld_funcs[] = {
    {"hello", (PyCFunction)hello,
     METH_NOARGS, helloworld_docs},
    {NULL}
};

static struct PyModuleDef cModPyDem =
{
    PyModuleDef_HEAD_INIT,
    "helloworld",
    "Extension module example!",
    -1,
    helloworld_funcs
};

PyMODINIT_FUNC PyInit_helloworld(void)
{
    return PyModule_Create(&cModPyDem);
};
setup.py:
from distutils.core import setup, Extension

setup(name = 'helloworld', version = '1.0', \
      ext_modules = [Extension('helloworld', ['hello.c'])])
To use it, first run
python3 setup.py install
and then:
import helloworld
helloworld.hello()
I want to be able to do something like this:
with redirect_to_logger(my_logger):
    helloworld.hello()
EDIT: I saw a number of posts showing how to silence the prints from C, but I wasn't able to figure out from them how I can capture the prints in Python instead.
Example of such post: Redirect stdout from python for C calls
I assume this question didn't get much traction because I may be asking too much, so I no longer care about the logging part... how can I capture the C prints in Python? Into a list or whatever.
EDIT
So I was able to get somewhat working code that does what I want, redirecting the C printf output to the Python logger:
import select
import threading
import time
import logging
import re
from contextlib import contextmanager

from wurlitzer import pipes

from helloworld import hello

logger = logging.getLogger()
logger.setLevel(logging.DEBUG)
ch = logging.StreamHandler()
ch.setLevel(logging.DEBUG)
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
ch.setFormatter(formatter)
logger.addHandler(ch)


class CPrintsHandler(threading.Thread):
    def __init__(self, std, poll_std, err, poll_err, logger):
        super(CPrintsHandler, self).__init__()
        self.std = std
        self.poll_std = poll_std
        self.err = err
        self.poll_err = poll_err
        self.logger = logger
        self.stop_event = threading.Event()

    def stop(self):
        self.stop_event.set()

    def run(self):
        while not self.stop_event.is_set():
            # How can I poll both std and err at the same time?
            if self.poll_std.poll(1):
                line = self.std.readline()
                if line:
                    self.logger.debug(line.strip())
            if self.poll_err.poll(1):
                line = self.err.readline()
                if line:
                    self.logger.debug(line.strip())


@contextmanager
def redirect_to_logger(some_logger):
    handler = None
    try:
        with pipes() as (std, err):
            poll_std = select.poll()
            poll_std.register(std, select.POLLIN)
            poll_err = select.poll()
            poll_err.register(err, select.POLLIN)
            handler = CPrintsHandler(std, poll_std, err, poll_err, some_logger)
            handler.start()
            yield
    finally:
        if handler:
            time.sleep(0.1)  # why do I have to sleep here for the foo prints to finish?
            handler.stop()
            handler.join()


def foo():
    logger.debug('logger print from foo()')
    hello()


def main():
    with redirect_to_logger(logger):
        # I don't want the logs from here to be redirected as well, only printf.
        logger.debug('logger print from main()')
        foo()


main()
But I have a couple of issues:
The python logs are also being redirected and caught by the CPrintsHandler. Is there a way to avoid that?
The prints are not exactly in the correct order:
python3 redirect_c_example_for_stackoverflow.py
2020-08-18 19:50:47,732 - root - DEBUG - example print from a C code
2020-08-18 19:50:47,733 - root - DEBUG - 2020-08-18 19:50:47,731 - root - DEBUG - logger print from main()
2020-08-18 19:50:47,733 - root - DEBUG - 2020-08-18 19:50:47,731 - root - DEBUG - logger print from foo()
Also, the logger prints all go to err; perhaps the way I poll them causes this order.
I'm not that familiar with select in Python and I'm not sure if there is a way to poll both std and err at the same time and print whichever has something first.
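As an aside on that last point: a single select.poll object can register more than one file descriptor, so both pipes can be watched with one call; a rough sketch (std, err, stop_event and logger are assumed to be the ones from the code above):

import select

poller = select.poll()
poller.register(std, select.POLLIN)    # std and err both come from wurlitzer's pipes()
poller.register(err, select.POLLIN)

fd_to_stream = {std.fileno(): std, err.fileno(): err}

while not stop_event.is_set():
    # poll() reports whichever descriptors are readable, so neither pipe
    # has to wait for the other's timeout
    for fd, _event in poller.poll(100):    # timeout in milliseconds
        line = fd_to_stream[fd].readline()
        if line:
            logger.debug(line.strip())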
On Linux you could use wurlitzer, which captures the output from printf, e.g.:
from wurlitzer import pipes

with pipes() as (out, err):
    helloworld.hello()

out.read()
# 'example print from a C code\n'
wurlitzer is based on this article by Eli Bendersky; you can use the code from it if you don't want to depend on third-party libraries.
Sadly, wurlitzer and the code from the article work only on Linux (and possibly macOS).
Here is a prototype for Windows (an improved version of the prototype can be installed from my GitHub) using Eli's approach as a Cython extension (which could probably be translated to ctypes if needed):
%%cython

import io
import os

cdef extern from *:
    """
    #include <windows.h>
    #include <io.h>
    #include <stdlib.h>
    #include <stdio.h>
    #include <fcntl.h>

    int open_temp_file() {
        TCHAR lpTempPathBuffer[MAX_PATH+1];//path+NULL

        // Gets the temp path env string (no guarantee it's a valid path).
        DWORD dwRetVal = GetTempPath(MAX_PATH,          // length of the buffer
                                     lpTempPathBuffer); // buffer for path
        if(dwRetVal > MAX_PATH || (dwRetVal == 0))
        {
            return -1;
        }

        // Generates a temporary file name.
        TCHAR szTempFileName[MAX_PATH + 1];//path+NULL
        DWORD uRetVal = GetTempFileName(lpTempPathBuffer, // directory for tmp files
                                        TEXT("tmp"),      // temp file name prefix
                                        0,                // create unique name
                                        szTempFileName);  // buffer for name
        if (uRetVal == 0)
        {
            return -1;
        }

        HANDLE tFile = CreateFile((LPTSTR)szTempFileName,       // file name
                                  GENERIC_READ | GENERIC_WRITE, // first we write, then we read
                                  0,                            // do not share
                                  NULL,                         // default security
                                  CREATE_ALWAYS,                // overwrite existing
                                  FILE_ATTRIBUTE_TEMPORARY | FILE_FLAG_DELETE_ON_CLOSE, // "temporary" temporary file, see https://learn.microsoft.com/en-us/archive/blogs/larryosterman/its-only-temporary
                                  NULL);                        // no template
        if (tFile == INVALID_HANDLE_VALUE) {
            return -1;
        }
        return _open_osfhandle((intptr_t)tFile, _O_APPEND | _O_TEXT);
    }

    int replace_stdout(int temp_fileno)
    {
        fflush(stdout);
        int old;
        int cstdout = _fileno(stdout);
        old = _dup(cstdout); // "old" now refers to "stdout"
        if (old == -1)
        {
            return -1;
        }
        if (-1 == _dup2(temp_fileno, cstdout))
        {
            return -1;
        }
        return old;
    }

    int restore_stdout(int old_stdout){
        fflush(stdout);
        // Restore original stdout
        int cstdout = _fileno(stdout);
        return _dup2(old_stdout, cstdout);
    }

    void rewind_fd(int fd) {
        _lseek(fd, 0L, SEEK_SET);
    }
    """
    int open_temp_file()
    int replace_stdout(int temp_fileno)
    int restore_stdout(int old_stdout)
    void rewind_fd(int fd)
    void close_fd "_close" (int fd)


cdef class CStdOutCapture():
    cdef int tmpfile_fd
    cdef int old_stdout_fd

    def start(self):  # start capturing
        self.tmpfile_fd = open_temp_file()
        self.old_stdout_fd = replace_stdout(self.tmpfile_fd)

    def stop(self):  # stops capturing, frees resources and returns the content
        restore_stdout(self.old_stdout_fd)
        rewind_fd(self.tmpfile_fd)  # need to read from the beginning
        buffer = io.TextIOWrapper(os.fdopen(self.tmpfile_fd, 'rb'))
        result = buffer.read()
        close_fd(self.tmpfile_fd)
        return result
And now:
b = CStdOutCapture()
b.start()
helloworld.hello()
out = b.stop()
print("HERE WE GO:", out)
# HERE WE GO: example print from a C code
This is what I would do if I were free to edit the C code. Open a memory map in C and write to its file descriptor using fprintf(). Expose the file descriptor to Python, either as an int (and then use the mmap module to open it, or os.fdopen() to wrap it in a simpler file-like object), or wrap it in a file-like object in C and let Python use that.
Then I would create a class that lets me write to sys.stdout through the usual interface, i.e. its write() method (for Python's side usage), and that uses the select module in a thread to poll the file from C that acts as its stdout. Then I would switch sys.stdout with an object of this class. So, when Python does sys.stdout.write(...), the string is forwarded to the original stdout, and when the loop in the thread detects output on the file from C, it writes that out through the same original write(). So everything will be written to the screen and be available to loggers as well.
In this model, the strictly C part will never actually be writing to the file descriptor connected to the terminal.
You can even do much of this in C itself and leave little for the Python side, but it's easier to influence the interpreter from the Python side, as the extension is a shared library, which brings some kind of, let's call it, IPC and the OS into the whole story. That's why stdout is not shared between the extension and Python in the first place.
If you want to keep using printf() on the C side, you can look into how to redirect it in C before programming this whole mess.
This answer is strictly theoretical because I have no time to test it, but it should be doable according to my knowledge. If you try it, please let me know in a comment how it went. Perhaps I missed something, but I am certain the theory is sane.
The beauty of this idea is that it will be OS independent, although the part with shared memory, or connecting a file descriptor to allocated space in RAM, can sometimes be a PITA on Windows.
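A much smaller variation on the same theme, which only captures what goes through Python's sys.stdout (it will not see the C-level printf(), which is the hard part the answer above addresses), is to swap sys.stdout for a wrapper that forwards each write to a logger; a minimal sketch:

import logging
import sys

class LoggerWriter:
    """File-like wrapper that forwards write() calls to a logger."""
    def __init__(self, logger, level=logging.DEBUG):
        self.logger = logger
        self.level = level

    def write(self, text):
        if text.strip():                   # skip the bare newlines print() emits
            self.logger.log(self.level, text.rstrip())

    def flush(self):                       # file-like objects are expected to have flush()
        pass

logging.basicConfig(level=logging.DEBUG)   # default handler writes to stderr, so no recursion
log = logging.getLogger("captured")

old_stdout = sys.stdout
sys.stdout = LoggerWriter(log)
try:
    print("hello from Python")             # ends up in the logger, not on the terminal
finally:
    sys.stdout = old_stdout                 # always restore the real stdout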
If you are not constrained to using printf in C, it would be easier to use the print equivalent from the Python C API and pass the object you want to redirect the message to as an argument.
For example, your hello.c would be:
#include <Python.h>

static PyObject* hello(PyObject* self, PyObject *args)
{
    PyObject *file = NULL;
    if (!PyArg_ParseTuple(args, "O", &file))
        return NULL;
    PyObject *pystr = PyUnicode_FromString("example print from a C code\n");
    PyFile_WriteObject(pystr, file, Py_PRINT_RAW);
    return Py_BuildValue("");
}

static char helloworld_docs[] =
    "helloworld(): Any message you want to put here!!\n";

static PyMethodDef helloworld_funcs[] = {
    {"hello", (PyCFunction)hello,
     METH_VARARGS, helloworld_docs},
    {NULL}
};

static struct PyModuleDef cModPyDem =
{
    PyModuleDef_HEAD_INIT,
    "helloworld",
    "Extension module example!",
    -1,
    helloworld_funcs
};

PyMODINIT_FUNC PyInit_helloworld(void)
{
    return PyModule_Create(&cModPyDem);
};
We can check if it is working with the program below:
import sys
import helloworld
helloworld.hello(sys.stdout)
helloworld.hello(sys.stdout)
helloworld.hello(sys.stderr)
In the command line we redirect each output separately:
python3 example.py 1> out.txt 2> err.txt
out.txt will have two print calls, while err.txt will have only one, as expected from our python script.
You can check python's print implementation to get some more ideas of what you can do.
cpython print source code
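Since PyFile_WriteObject simply calls the write() method of whatever object it is given, you are not limited to sys.stdout or sys.stderr; a minimal sketch of a logger-backed object (the LogWriter name is my own, not part of any API):

import logging
import helloworld

logging.basicConfig(level=logging.DEBUG)
log = logging.getLogger("c_extension")

class LogWriter:
    """Anything with a write() method can be handed to helloworld.hello()."""
    def write(self, text):
        if text.strip():                 # drop whitespace-only writes
            log.debug(text.rstrip())

helloworld.hello(LogWriter())            # the C code's message ends up in the logger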

Using Python and Arduino to change delay of light bulb via Serial

I am using an Arduino Mega and Python 3.7 on Windows 10 64-bit. I am trying to make a light bulb blink using Python and pySerial. I want the light bulb to stay on for x amount of time and turn off for y amount of time. I enter the values in a Python Tkinter program (full code: https://pastebin.com/zkRmcP60). After I've entered the values, I send them to the Arduino via this code:
import msgpack
import serial

arduionoData = serial.Serial('com3', 9600, timeout=1)

def sendlower(*args):
    try:
        global arduionoData
        arduionoData.write(b"1")
        while arduionoData.readline().decode() != "Send Next":
            pass
        first = int(firstdelay.get())
        arduionoData.write(msgpack.packb(first, use_bin_type=True))
    except ValueError:
        print("Only Positive Integers")

def senduppper(*args):
    try:
        global arduionoData
        arduionoData.write(b"2")
        while arduionoData.readline().decode() != "Send Next":
            pass
        second = int(seconddelay.get())
        arduionoData.write(msgpack.packb(second, use_bin_type=True))
    except ValueError:
        print("Only Positive Integers")
The Tkinter program executes the functions above (visit the Pastebin link for the entire code).
First I specify the mode, i.e. whether it's the on delay or the off delay that is being changed,
with this code (setup and other code omitted; please look in the Pastebin for it):
void readdelay(){
    mode = Serial.parseInt();
    if (mode == 1){
        delay(200);
        Serial.write("Send Next");
        delay1 = Serial.parseInt();
    }else if (mode == 2){
        delay(200);
        Serial.write("Send Next");
        delay2 = Serial.parseInt();
    }
}

void loop() {
    if (Serial.available() > 0){
        readdelay();
    }
}
Right now, if I send any positive number to the program, it either turns the light off completely (when I send a number for the on delay) or turns it on (when I send a number for the off delay). My guess is that whenever Serial.parseInt() gets the wrong type of input, it interprets it as a zero.
The documentation says:
If no valid digits were read when the time-out (see Serial.setTimeout()) occurs, 0 is returned;
It seems parseInt fails, therefore the delay is set to 0, which is why the light stays completely on or off.
Another possibility is that the Arduino only receives the first character, which means the light switches so fast you can't see it (issue described here).
Try printing out what value is received by the Arduino. It should tell you what is happening and what direction to go in to solve it.
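One thing worth noting: Serial.parseInt() only understands ASCII digits, while msgpack.packb() produces a binary encoding, so the Arduino will usually see no valid digits at all. A rough sketch of the Python side sending the delay as plain text instead (this is a suggested change, not the original code):

import serial

arduionoData = serial.Serial('com3', 9600, timeout=1)

def send_delay(mode, value):
    """Send the mode byte, wait for the handshake, then send the delay as ASCII digits."""
    arduionoData.write(mode)                          # b"1" for the on delay, b"2" for the off delay
    while arduionoData.readline().decode() != "Send Next":
        pass
    arduionoData.write(str(value).encode() + b"\n")   # e.g. b"500\n" - Serial.parseInt() can read this

send_delay(b"1", 500)   # e.g. keep the bulb on for 500 ms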

Circle Piping to and from 2 Python Subprocesses

I need help regarding the subprocess module. This question might sound like a repeat, and I have seen a number of articles related to it in a number of ways. But even so, I am unable to solve my problem. It goes as follows:
I have a C program 2.c; its contents are as follows:
#include<stdio.h>

int main()
{
    int a;
    scanf("%d",&a);
    while(1)
    {
        if(a==0) //Specific case for the first input
        {
            printf("%d\n",(a+1));
            break;
        }
        scanf("%d",&a);
        printf("%d\n",a);
    }
    return 0;
}
I need to write a Python script which first compiles the code using subprocess.call() and then opens two processes using Popen to execute the C program. Now the output of the first process must be the input of the second and vice versa. So essentially, if my initial input was 0, then the first process outputs 2, which is taken by the second process. It in turn outputs 3, and so on infinitely.
The script below is what I had in mind, but it is flawed. If someone could help me, I would very much appreciate it.
from subprocess import *

call(["gcc","2.c"])

a = Popen(["./a.out"],stdin=PIPE,stdout=PIPE)  #Initiating Process
a.stdin.write('0')
temp = a.communicate()[0]
print temp

b = Popen(["./a.out"],stdin=PIPE,stdout=PIPE)  #The 2 processes in question
c = Popen(["./a.out"],stdin=PIPE,stdout=PIPE)

while True:
    b.stdin.write(str(temp))
    temp = b.communicate()[0]
    print temp
    c.stdin.write(str(temp))
    temp = c.communicate()[0]
    print temp

a.wait()
b.wait()
c.wait()
If you want the output of the first command a to go as the input of the second command b and in turn b's output is a's input—in a circle like a snake eating its tail— then you can't use .communicate() in a loop: .communicate() doesn't return until the process is dead and all the output is consumed.
One solution is to use a named pipe (if open() doesn't block in this case on your system):
#!/usr/bin/env python3
import os
from subprocess import Popen, PIPE

path = 'fifo'
os.mkfifo(path)  # create named pipe
try:
    with open(path, 'r+b', 0) as pipe, \
         Popen(['./a.out'], stdin=PIPE, stdout=pipe) as b, \
         Popen(['./a.out'], stdout=b.stdin, stdin=pipe) as a:
        pipe.write(b'10\n')  # kick-start it
finally:
    os.remove(path)  # clean up
It emulates the a < fifo | b > fifo shell command from @alexander barakin's answer.
Here's a more complex solution that funnels the data via the Python parent process:
#!/usr/bin/env python3
import shutil
from subprocess import Popen, PIPE

with Popen(['./a.out'], stdin=PIPE, stdout=PIPE, bufsize=0) as b, \
     Popen(['./a.out'], stdout=b.stdin, stdin=PIPE, bufsize=0) as a:
    a.stdin.write(b'10\n')  # kick-start it
    shutil.copyfileobj(b.stdout, a.stdin)  # copy b's stdout to a's stdin
This code connects a's output to b's input using redirection via OS pipe (as a | b shell command does).
To complete the circle, b's output is copied to a's input in the parent Python code using shutil.copyfileobj().
This code may have buffering issues: there are multiple buffers in between the processes: C stdio buffers, buffers in Python file objects wrapping the pipes (controlled by bufsize).
bufsize=0 turns off the buffering on the Python side and the data is copied as soon as it is available. Beware, bufsize=0 may lead to partial writes—you might need to inline copyfileobj() and call write() again until all read data is written.
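For reference, inlining copyfileobj() with a loop that handles partial writes could look roughly like this (a sketch, not from the original answer; a and b are assumed to be the Popen objects from the snippet above):

CHUNK = 4096

while True:
    data = b.stdout.read(CHUNK)          # read whatever b has produced so far
    if not data:                         # empty read means EOF: b closed its end
        break
    view = memoryview(data)
    while view:                          # keep writing until the whole chunk is gone,
        written = a.stdin.write(view)    # since unbuffered writes may be partial
        view = view[written:]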
Call setvbuf(stdout, (char *) NULL, _IOLBF, 0) to make stdout line-buffered inside your C program:
#include <stdio.h>

int main(void)
{
    int a;
    setvbuf(stdout, (char *) NULL, _IOLBF, 0); /* make line buffered stdout */
    do {
        scanf("%d",&a);
        printf("%d\n",a-1);
        fprintf(stderr, "%d\n",a); /* for debugging */
    } while(a > 0);
    return 0;
}
Output
10
9
8
7
6
5
4
3
2
1
0
-1
The output is the same.
Due to the way the C child program is written and executed, you might also need to catch and ignore BrokenPipeError exception at the end on a.stdin.write() and/or a.stdin.close() (a process may be already dead while there is uncopied data from b).
The problem is here:
while True:
    b.stdin.write(str(temp))
    temp = b.communicate()[0]
    print temp
    c.stdin.write(str(temp))
    temp = c.communicate()[0]
    print temp
Once communicate has returned, it does nothing more. You have to run the process again. Plus, you don't need 2 processes open at the same time.
Also, the init phase is no different from the running phase, except that you provide the input data.
What you could do to simplify it and make it work:
from subprocess import *

call(["gcc","2.c"])

temp = str(0)
while True:
    b = Popen(["./a.out"],stdin=PIPE,stdout=PIPE)  #The 2 processes in question
    b.stdin.write(temp)
    temp = b.communicate()[0]
    print temp
    b.wait()
Otherwise, to see 2 processes running in parallel, proving that you can do that, just fix your loop as follows (by moving the Popen calls into the loop):
while True:
    b = Popen(["./a.out"],stdin=PIPE,stdout=PIPE)  #The 2 processes in question
    c = Popen(["./a.out"],stdin=PIPE,stdout=PIPE)
    b.stdin.write(str(temp))
    temp = b.communicate()[0]
    print temp
    c.stdin.write(str(temp))
    temp = c.communicate()[0]
    print temp
Better yet, with b's output feeding c's input:
while True:
    b = Popen(["./a.out"],stdin=PIPE,stdout=PIPE)  #The 2 processes in question
    c = Popen(["./a.out"],stdin=b.stdout,stdout=PIPE)
    b.stdin.write(str(temp))
    temp = c.communicate()[0]
    print temp

D (Tango) Read all standard input and assign it to a string

In the D language, how can I read all of standard input and assign it to a string (with the Tango library)?
Copied straight from http://www.dsource.org/projects/tango/wiki/ChapterIoConsole:
import tango.text.stream.LineIterator;

foreach (line; new LineIterator!(char)(Cin.stream))
    // do something with each line
If only 1 line is required, use
auto line = Cin.copyln();
Another, probably more efficient, way of dumping the contents of Stdin would be something like this:
module dumpstdin;

import tango.io.Console : Cin;
import tango.io.device.Array : Array;
import tango.io.model.IConduit : InputStream;

const BufferInitialSize = 4096u;
const BufferGrowingStep = 4096u;

ubyte[] dumpStream(InputStream ins)
{
    auto buffer = new Array(BufferInitialSize, BufferGrowingStep);
    buffer.copy(ins);
    return cast(ubyte[]) buffer.slice();
}

import tango.io.Stdout : Stdout;

void main()
{
    auto contentsOfStdin
        = cast(char[]) dumpStream(Cin.stream);

    Stdout
        ("Finished reading Stdin.").newline()
        ("Contents of Stdin was:").newline()
        ("<<")(contentsOfStdin)(">>").newline();
}
Some notes:
The second parameter to Array is necessary; if you omit it, Array will not grow in size.
I used 4096 since that's generally the size of a page of memory.
dumpStream returns a ubyte[] because char[] is defined as a UTF-8 string, which Stdin doesn't necessarily need to be. For example, if someone piped a binary file to your program, you would end up with an invalid char[] that could throw an exception if anything checks it for validity. If you only care about text, then casting the result to a char[] is fine.
copy is a method on the OutputStream interface that causes it to drain the provided InputStream of all input.