Problem with using CreatePseudoConsole to create a pty pipe in python - python-3.x

I am trying to make a pty pipe. For that I have to use the CreatePseudoConsole function from the Windows API. I am loosely copying an existing sample (written in C), porting it to Python.
I don't know if it's relevant but I am using Python 3.7.9 and Windows 10.
This is my code:
from ctypes.wintypes import DWORD, HANDLE, SHORT
from ctypes import POINTER, HRESULT
import ctypes
import msvcrt
import os

# The COORD structure used for the size of the console
class COORD(ctypes.Structure):
    _fields_ = [("X", SHORT),
                ("Y", SHORT)]

# HPCON is the same as HANDLE
HPCON = HANDLE

CreatePseudoConsole = ctypes.windll.kernel32.CreatePseudoConsole
CreatePseudoConsole.argtypes = [COORD, HANDLE, HANDLE, DWORD, POINTER(HPCON)]
CreatePseudoConsole.restype = HRESULT
def create_console(width: int, height: int) -> HPCON:
    read_pty_fd, write_fd = os.pipe()
    read_pty_handle = msvcrt.get_osfhandle(read_pty_fd)
    read_fd, write_pty_fd = os.pipe()
    write_pty_handle = msvcrt.get_osfhandle(write_pty_fd)
    # Create the console
    size = COORD(width, height)
    console = HPCON()
    result = CreatePseudoConsole(size, read_pty_handle, write_pty_handle,
                                 DWORD(0), ctypes.byref(console))
    # Check if any errors occurred
    if result != 0:
        raise ctypes.WinError(result)
    # Add references for the fds to the console
    console.read_fd = read_fd
    console.write_fd = write_fd
    # Return the console object
    return console

if __name__ == "__main__":
    consol = create_console(80, 80)
    print("Writing...")
    os.write(consol.write_fd, b"abc")
    print("Reading...")
    print(os.read(consol.read_fd, 1))
    print("Done")
The problem is that it isn't able to read from the pipe. I expected it to print "a", but it just gets stuck on os.read. Please note that this is the first time I have used the WinAPI, so the problem is most likely there.

There is nothing wrong with the code: what you got wrong is your expectations.
What you are doing is writing to the pipe meant to feed ‘keyboard’ input to the program, and reading from another pipe that returns ‘screen’ output from the program. But there is no actual program at the other end of either pipe, and so there is nothing the read call can ever return.
The HPCON handle returned by the CreatePseudoConsole API is supposed to be passed in a thread attribute list to a newly-spawned process via CreateProcess. Pretty much nothing can be done with it other than that. (You cannot even connect your own process to the pseudoconsole, as you would be able to on Unix.) After the handle is passed in this manner, you can communicate with the process using the read_fd and write_fd descriptors.
An article on MSDN provides a full sample in C that creates a pseudoconsole and passes it to a new process; the exact same thing is done by the very source you linked.
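For reference, below is a minimal, untested ctypes sketch of that hand-off, modeled on the MSDN sample: it packs the HPCON into a thread attribute list and passes it to CreateProcessW through a STARTUPINFOEX. The structures and constants are standard WinAPI, but the function name spawn_with_console and the default cmd.exe command line are illustrative choices of mine; verify the details against the official sample before relying on it.

import ctypes
from ctypes import wintypes

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
EXTENDED_STARTUPINFO_PRESENT = 0x00080000
PROC_THREAD_ATTRIBUTE_PSEUDOCONSOLE = 0x00020016

class STARTUPINFO(ctypes.Structure):
    _fields_ = [("cb", wintypes.DWORD), ("lpReserved", wintypes.LPWSTR),
                ("lpDesktop", wintypes.LPWSTR), ("lpTitle", wintypes.LPWSTR),
                ("dwX", wintypes.DWORD), ("dwY", wintypes.DWORD),
                ("dwXSize", wintypes.DWORD), ("dwYSize", wintypes.DWORD),
                ("dwXCountChars", wintypes.DWORD), ("dwYCountChars", wintypes.DWORD),
                ("dwFillAttribute", wintypes.DWORD), ("dwFlags", wintypes.DWORD),
                ("wShowWindow", wintypes.WORD), ("cbReserved2", wintypes.WORD),
                ("lpReserved2", ctypes.c_void_p), ("hStdInput", wintypes.HANDLE),
                ("hStdOutput", wintypes.HANDLE), ("hStdError", wintypes.HANDLE)]

class STARTUPINFOEX(ctypes.Structure):
    _fields_ = [("StartupInfo", STARTUPINFO),
                ("lpAttributeList", ctypes.c_void_p)]

class PROCESS_INFORMATION(ctypes.Structure):
    _fields_ = [("hProcess", wintypes.HANDLE), ("hThread", wintypes.HANDLE),
                ("dwProcessId", wintypes.DWORD), ("dwThreadId", wintypes.DWORD)]

def spawn_with_console(console, cmdline="cmd.exe"):
    # First call just reports the required buffer size, so it is expected to fail.
    size = ctypes.c_size_t(0)
    kernel32.InitializeProcThreadAttributeList(None, 1, 0, ctypes.byref(size))
    attr_list = ctypes.create_string_buffer(size.value)
    if not kernel32.InitializeProcThreadAttributeList(attr_list, 1, 0,
                                                      ctypes.byref(size)):
        raise ctypes.WinError(ctypes.get_last_error())
    # Attach the HPCON to the attribute list, as the MSDN sample does.
    if not kernel32.UpdateProcThreadAttribute(
            attr_list, 0, ctypes.c_size_t(PROC_THREAD_ATTRIBUTE_PSEUDOCONSOLE),
            console, ctypes.c_size_t(ctypes.sizeof(console)), None, None):
        raise ctypes.WinError(ctypes.get_last_error())
    si = STARTUPINFOEX()
    si.StartupInfo.cb = ctypes.sizeof(STARTUPINFOEX)
    si.lpAttributeList = ctypes.cast(attr_list, ctypes.c_void_p)
    pi = PROCESS_INFORMATION()
    cmd = ctypes.create_unicode_buffer(cmdline)  # CreateProcessW may modify it
    if not kernel32.CreateProcessW(None, cmd, None, None, False,
                                   EXTENDED_STARTUPINFO_PRESENT, None, None,
                                   ctypes.byref(si.StartupInfo), ctypes.byref(pi)):
        raise ctypes.WinError(ctypes.get_last_error())
    # Cleanup (DeleteProcThreadAttributeList, ClosePseudoConsole, CloseHandle)
    # is omitted for brevity.
    return pi

Once CreateProcessW succeeds, writing to write_fd feeds the child's keyboard input and reading from read_fd returns its screen output, which is exactly what the question's main block tried to do without a child process attached.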

Related

Python avoid partial writes with non-blocking write to named pipe

I am running python3.8 on linux.
In my script, I create a named pipe, and open it as follows:
import os
import posix
import time
file_name = 'fifo.txt'
os.mkfifo(file_name)
f = posix.open(file_name, os.O_RDWR | os.O_NONBLOCK)
os.set_blocking(f, False)
Without yet having opened the file for reading elsewhere (for instance, with cat), I start to write to the file in a loop.
base_line = 'abcdefghijklmnopqrstuvwxyz'
s = base_line * 10000 + '\n'
while True:
    try:
        posix.write(f, s.encode())
    except BlockingIOError as e:
        print("Exception occurred: {}".format(e))
        time.sleep(.5)
When I then go to read from the named pipe with cat, I find that a partial write took place.
I am confused about how I can know how many bytes were written in this instance. Since the exception was thrown, I do not have access to the return value (the number of bytes written). The documentation suggests that BlockingIOError has a property called characters_written; however, when I try to access this field, an AttributeError is raised.
In summary: How can I either avoid this partial write in the first place, or at least know how much was partially written in this instance?
os.write performs an unbuffered write. The docs state that BlockingIOError only has a characters_written attribute when a buffered write operation would block.
If any bytes were successfully written before the pipe became full, that number of bytes will be returned from os.write. Otherwise, you'll get an exception. Of course, something like a drive failure will also cause an exception, even if some bytes were written. This is no different from how POSIX write works, except instead of returning -1 on error, an exception is raised.
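In other words, you have to do the bookkeeping yourself. A minimal sketch of that with raw os.write might look like this (it assumes fd is the non-blocking pipe descriptor and data the bytes payload from the question):

import os
import time

written = 0
while written < len(data):
    try:
        # os.write returns how many bytes actually made it into the pipe.
        written += os.write(fd, data[written:])
    except BlockingIOError:
        time.sleep(.5)  # the pipe was completely full; wait and retry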
If you don't like dealing with the exception, you can use a wrapper around the file descriptor, such as a io.FileIO object. I've modified your code since it tries to write the entire buffer every time you looped back to the os.write call (if it failed once, it will fail every time):
import io
import os
import time

base_line = 'abcdefghijklmnopqrstuvwxyz'
data = (base_line * 10000 + '\n').encode()
file_name = 'fifo.txt'
os.mkfifo(file_name)
fd = os.open(file_name, os.O_RDWR | os.O_NONBLOCK)
# os.O_NONBLOCK makes os.set_blocking(fd, False) unnecessary.
with io.FileIO(fd, 'wb') as f:
    written = 0
    while written < len(data):
        n = f.write(data[written:])
        if n is None:
            time.sleep(.5)
        else:
            written += n
BTW, you might use the selectors module instead of time.sleep: with the sleep I noticed a slight delay when trying to read from the pipe, which shouldn't happen if you use selectors:
import selectors

with io.FileIO(fd, 'wb') as f:
    written = 0
    sel = selectors.DefaultSelector()
    sel.register(f, selectors.EVENT_WRITE)
    while written < len(data):
        n = f.write(data[written:])
        if n is None:
            # Wait here until we can start writing again.
            sel.select()
        else:
            written += n
    sel.unregister(f)
Some useful information can also be found in the answer to POSIX named pipe (fifo) drops record in nonblocking mode.

Access Violation when using Ctypes to Interface with Fortran DLL

I have a set of dlls created from Fortran that I am running from python. I've successfully created a wrapper class and have been running the dlls fine for weeks.
Today I noticed an error in my input and changed it, but to my surprise this caused the following:
OSError: exception: access violation reading 0x705206C8
It seems that certain input values somehow cause me to try to access illegal data. I created the following MCVE and it does repeat the issue. Specifically, an error is thrown when 338 < R_o < 361. Unfortunately, I cannot publish the raw Fortran code, nor create an MCVE which replicates the problem and is sufficiently abstracted that I could share it. All of the variables are declared as either integer or real(8) types in the Fortran code.
import ctypes
import os

DLL_PATH = r"C:\Repos\CASS_PFM\dlls"

class wrapper:
    def __init__(self, data):
        self.data = data
        self.DLL = ctypes.CDLL(os.path.join(DLL_PATH, "MyDLL.dll"))
        self.fortran_subroutine = getattr(self.DLL, "MyFunction_".lower())
        self.output = {}

    def run(self):
        out = (ctypes.c_longdouble * len(self.data))()
        in_data = []
        for item in self.data:
            item.convert_to_ctypes()
            in_data.append(ctypes.byref(item.c_val))
        self.fortran_subroutine(*in_data, out)
        for item in self.data:
            self.output[item.name] = item.convert_to_python()

class FortranData:
    def __init__(self, name, py_val, ctype, some_param=True):
        self.name = name
        self.py_val = py_val
        self.ctype = ctype
        self.some_param = some_param

    def convert_to_ctypes(self):
        ctype_converter = getattr(ctypes, self.ctype)
        self.c_val = ctype_converter(self.py_val)
        return self.c_val

    def convert_to_python(self):
        self.py_val = self.c_val.value
        return self.py_val

def main():
    R_o = 350
    data = [
        FortranData("R_o", R_o, 'c_double', False),
        FortranData("thick", 57.15, 'c_double', False),
        FortranData("axial_c", 100, 'c_double', False),
        FortranData("sigy", 235.81, 'c_double', False),
        FortranData("sigu", 619.17, 'c_double', False),
        FortranData("RO_alpha", 1.49707, 'c_double', False),
        FortranData("RO_sigo", 235.81, 'c_double', False),
        FortranData("RO_epso", 0.001336, 'c_double', False),
        FortranData("RO_n", 6.6, 'c_double', False),
        FortranData("Resist_Jic", 116, 'c_double', False),
        FortranData("Resist_C", 104.02, 'c_double', False),
        FortranData("Resist_m", 0.28, 'c_double', False),
        FortranData("pressure", 15.51375, 'c_double', False),
        FortranData("i_write", 0, 'c_int', False),
        FortranData("if_flag_twc", 0, 'c_int'),
        FortranData("i_twc_ll", 0, 'c_int'),
        FortranData("i_twc_epfm", 0, 'c_int'),
        FortranData("i_err_code", 0, 'c_int'),
        FortranData("Axial_TWC_ratio", 0, 'c_double'),
        FortranData("Axial_TWC_fail", 0, 'c_int'),
        FortranData("c_max_ll", 0, 'c_double'),
        FortranData("c_max_epfm", 0, 'c_double'),
    ]
    obj = wrapper(data)
    obj.run()
    print(obj.output)

if __name__ == "__main__":
    main()
It's not just the R_o value either; there are some combinations of values that cause the same error (seemingly without rhyme or reason). Is there anything within the above Python that might lead to an access violation depending on the values passed to the DLL?
Python version is 3.7.2, 32-bit
I see 2 problems with the code (and a potential 3rd one):
1. argtypes (and restype) not being specified. Check [SO]: C function called from Python via ctypes returns incorrect value (#CristiFati's answer) for more details.
2. This may be a consequence of (or at least closely related to) the previous one. I can only guess without the Fortran (or better: C) function prototype, but something is certainly wrong. I assume that the input data should be handled the same way as the output data, so the function would take 2 arrays (of the same size), and the input one's elements would be void*s (since their types are not consistent). Then, you'd need something like this (although I can't imagine how Fortran would know which element contains an int and which a double):
in_data = (ctypes.c_void_p * len(self.data))()
for idx, item in enumerate(self.data):
    item.convert_to_ctypes()
    in_data[idx] = ctypes.addressof(item.c_val)
3. Since you're on 32-bit, you should also take the calling convention into account (ctypes.CDLL vs ctypes.WinDLL).
But again, without the function prototype, everything is just speculation.
Also, why "MyFunction_".lower() instead of "myfunction_"?

Python Multiprocessing Pipe hang

I'm trying to build a program that sends a string to the tangki and tangki2 processes, which then each send a small array of data to the outdata process, but it doesn't work correctly. When I disable the gate to outdata, everything works flawlessly.
this is the example code:
import os
from multiprocessing import Process, Pipe
from time import sleep
import cv2
def outdata(input1, input2):
    while True:
        room = input1.recv()
        room2 = input2.recv()

def tangki(keran1, selang1):  ##============tangki1
    a = None
    x, y, degree, tinggi = 0, 0, 0, 0
    dout = []
    while True:
        frame = keran1.recv()
        dout.append([x, y, degree, tinggi])
        selang1.send(dout)
        print("received from: {}".format(frame))

def tangki2(keran3, selang2):  ##=================tangki2
    x, y, degree, tinggi = 0, 0, 0, 0
    dout2 = []
    while True:
        frame = keran3.recv()
        dout2.append([x, y, degree, tinggi])
        selang2.send(dout2)
        print("received from: {}".format(frame))

def pompa(gate1, gate2):
    count = 0
    while True:
        count += 1
        gate1.send("gate 1, val{}".format(count))
        gate2.send("gate 2, val{}".format(count))

if __name__ == '__main__':
    pipa1, pipa2 = Pipe()
    pipa3, pipa4 = Pipe()
    tx1, rx1 = Pipe()
    tx2, rx2 = Pipe()
    ptangki = Process(target=tangki, args=(pipa2, tx1))
    ptangki2 = Process(target=tangki2, args=(pipa4, tx2))
    ppompa = Process(target=pompa, args=(pipa1, pipa3))
    keran = Process(target=outdata, args=(rx1, rx2))
    ptangki.start()
    ptangki2.start()
    ppompa.start()
    keran.start()
    ptangki.join()
    ptangki2.join()
    ppompa.join()
    keran.join()
When count reaches exactly 108, the process hangs and stops responding. When I check with top, the python3 processes are gone. It seems that selang1 and selang2 are causing the problem. I've searched on Google and it looks like a pipe deadlock, so the question is how to prevent this from happening, given that I already drain all the data from the pipes via repeated reads on both input1 and input2.
Edit: it seems the only problem was the communication between tangki/tangki2 and outdata.
It was actually the pipe buffer size limit: because of the append calls, dout and dout2 grow with every iteration, so each send gets bigger until it no longer fits. Assigning dout = [x, y, degree, tinggi] and dout2 = [x, y, degree, tinggi] resets the payload to its minimal size, as does assigning dout = [0, 0, 0, 0] and dout2 = [0, 0, 0, 0] right after selang1.send(dout) and selang2.send(dout2).
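A sketch of tangki with that fix applied (tangki2 would change the same way); the payload is rebuilt each iteration, so it never grows:

def tangki(keran1, selang1):
    x, y, degree, tinggi = 0, 0, 0, 0
    while True:
        frame = keran1.recv()
        dout = [x, y, degree, tinggi]  # fresh, fixed-size payload every time
        selang1.send(dout)
        print("received from: {}".format(frame))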

Reopening a closed stringIO object in Python 3

So, I create a StringIO object to treat my string as a file:
>>> a = 'Me, you and them\n'
>>> import io
>>> f = io.StringIO(a)
>>> f.read(1)
'M'
And then I proceed to close the 'file':
>>> f.close()
>>> f.closed
True
Now, when I try to open the 'file' again, Python does not permit me to do so:
>>> p = open(f)
Traceback (most recent call last):
File "<pyshell#166>", line 1, in <module>
p = open(f)
TypeError: invalid file: <_io.StringIO object at 0x0325D4E0>
Is there a way to 'reopen' a closed StringIO object? Or should it be declared again using the io.StringIO() method?
Thanks!
I have a nice hack that I am currently using for testing (since my code performs I/O operations, and handing it a StringIO is a nice workaround).
If this problem is a one-time thing:
st = StringIO()
close = st.close
st.close = lambda: None
f(st) # Some function which can make I/O changes and finally close st
st.getvalue() # This is available now
close() # If you don't want to store the close function you can also:
StringIO.close(st)
If this is a recurring thing, you can also define a context manager:
import contextlib

@contextlib.contextmanager
def uncloseable(fd):
    """
    Context manager which turns the fd's close operation to no-op for the duration of the context.
    """
    close = fd.close
    fd.close = lambda: None
    yield fd
    fd.close = close
which can be used in the following way:
st = StringIO()
with uncloseable(st):
    f(st)
# Now st is still open!!!
I hope this helps you with your problem, and if not, I hope you will find the solution you are looking for.
Note: This should work exactly the same for other file-like objects.
No, there is no way to re-open an io.StringIO object. Instead, just create a new object with io.StringIO().
Calling close() on an io.StringIO object throws away the "file contents" data, so re-opening couldn't give access to that anyways.
If you need the data, call getvalue() before closing.
See also the StringIO documentation here:
The text buffer is discarded when the close() method is called.
and here:
getvalue()
Return a str containing the entire contents of the buffer.
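For example, a small sketch of the getvalue()-before-close approach:

import io

f = io.StringIO('Me, you and them\n')
f.read(1)                  # 'M'
contents = f.getvalue()    # grab the whole buffer before closing
f.close()
p = io.StringIO(contents)  # a fresh, open buffer with the same data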
The builtin open() creates a file object (i.e. a stream), but in your example, f is already a stream.
That's the reason why you get TypeError: invalid file
After the method close() has executed, any stream operation will raise ValueError.
And the documentation does not mention how to reopen a closed stream.
Maybe you should not close() the stream yet if you want to use (reopen) it again later.
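A quick illustration of the ValueError mentioned above:

import io

f = io.StringIO('Me, you and them\n')
f.close()
try:
    f.read(1)
except ValueError as e:
    print(e)  # I/O operation on closed file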
When you call f.close(), the buffer is discarded from memory. You're basically dereferencing x and then calling x: looking for data that no longer exists.
Here is what you could do instead:
import io

a = 'Me, you and them\n'
f = io.StringIO(a)
f.read(1)
f.close()
# Put the text from a, without the first char, into a new StringIO.
p = io.StringIO(a[1:])
# do some work with p.
I think your confusion comes from thinking of io.StringIO as a file on the block device. If you had used open() and not StringIO, then you would be correct in your example and could reopen the file. StringIO is not a file; it's the idea of a file object in memory. A file object has a buffer much like a StringIO, but it also exists physically on the block device. A StringIO is just a buffer, a staging area in memory for the data within it. When you call open(), a buffer is created too, but the data also remains on the block device.
Perhaps this is more what you want
fo = open('f.txt', 'w+')
fo.write('Me, you and them\n')
fo.seek(0)   # rewind so the read starts at the beginning
fo.read(1)
fo.close()
# reopen the now closed file 'f.txt'
p = open('f.txt', 'r')
# do stuff with p
p.close()
Here we write the string to the block device, so that when we close the file, the information written to it remains after it's closed. Because this creates a file in the directory the program is run from, it may be a good idea to give the file an extension. For example, you could name the file f.txt instead of f.

Linux - Named pipes - losing data

I am using named pipes for IPC. At times the data sent between the processes is large and frequent. During these times I see lots of data loss. Are there any obvious problems in the code below that could cause this?
Thanks
#!/usr/bin/env groovy
import java.io.FileOutputStream;
def bytes = new File('/etc/passwd').bytes
def pipe = new File('/home/mohadib/pipe')
1000.times{
    def fos = new FileOutputStream(pipe)
    fos.write(bytes)
    fos.flush()
    fos.close()
}
#!/usr/bin/env groovy
import java.io.FileInputStream;
import java.io.ByteArrayOutputStream;
def pipe = new File('/home/mohadib/pipe')
def bos = new ByteArrayOutputStream()
def len = -1
byte[] buff = new byte[8192]
def i = 0
while(true)
{
    def fis = new FileInputStream(pipe)
    while((len = fis.read(buff)) != -1) bos.write(buff, 0, len)
    fis.close()
    bos.reset()
    i++
    println i
}
Named pipes lose their contents when the last process closes them. In your example, this can happen if the writer process does another iteration while the reader process is about to do fis.close(). No error is reported in this case.
A possible fix is to arrange that the reader process never closes the fifo. To get rid of the EOF condition when the last writer disconnects, open the fifo for writing, close the read end, reopen the read end and close the temporary write end.
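One common variant of this idea, sketched here in Python for brevity (the trick is a property of the fifo, not of the language): the reader also holds a dummy write end open, so the fifo never reports EOF while waiting for the next writer. The path is the one from the question.

import os

path = '/home/mohadib/pipe'
# Open the read end non-blocking so it doesn't wait for a writer, then open
# a write end we never use. While it stays open, read() blocks for more data
# instead of returning EOF whenever the last real writer closes.
read_fd = os.open(path, os.O_RDONLY | os.O_NONBLOCK)
keep_alive_fd = os.open(path, os.O_WRONLY)
os.set_blocking(read_fd, True)

while True:
    chunk = os.read(read_fd, 8192)  # blocks until a writer sends data
    print("got {} bytes".format(len(chunk)))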
This section gives me worries:
1000.times{
    def fos = new FileOutputStream(pipe)
    fos.write(bytes)
    fos.flush()
    fos.close()
}
I know that the underlying Unix write() system call does not always write the requested number of bytes. You have to check the return value to see what number was actually written.
I checked the docs for Java, and it appears fos.write() has no return value; it just throws an IOException if anything goes wrong. What does Groovy do with exceptions? Are there any exceptions happening?
If you can, run this under strace and view the results of the read and write system calls. It's possible that the Java VM isn't doing the right thing with the write() system call. I know this can happen because I caught glibc's fwrite implementation doing that (ignoring the return value) two years ago.
