I am writing an IDAPython debugger script for logging runtime program information.
In particular, I want to log system calls such as read(0, buf, size) and, especially, record the data in buf after the call.
As illustrated in debughook.py, it is easy to trace each instruction:
from ida_dbg import DBG_Hooks

class MyDbgHook(DBG_Hooks):
    def dbg_trace(self, tid, ea):
        print("Trace tid=%d ea=0x%x" % (tid, ea))
        # return values:
        #   1 - do not log this trace event;
        #   0 - log it
        return 0

debughook = MyDbgHook()
debughook.hook()
Therefore, a simple solution is:
record the memory address of buf when entering read()
get the data in buf after read() returns
However, I found this complex and unstable to actually implement. Is there a more elegant solution for this purpose?
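For reference, here is a breakpoint-based sketch of that two-step idea (untested; the addresses, and the x86-64 Linux convention that buf is in rsi and the return value in rax, are assumptions for illustration). It avoids tracing every instruction by breaking only at the call to read() and at the instruction right after it:

import ida_dbg
import ida_bytes
from ida_dbg import DBG_Hooks

READ_CALL_EA = 0x401234        # hypothetical address of the "call read" instruction
RET_EA = READ_CALL_EA + 5      # hypothetical address of the instruction after the call

class SyscallHook(DBG_Hooks):
    def __init__(self):
        DBG_Hooks.__init__(self)
        self.buf_addr = None

    def dbg_bpt(self, tid, ea):
        if ea == READ_CALL_EA:
            # entering read(): the second argument (buf) is in rsi on x86-64
            self.buf_addr = ida_dbg.get_reg_val("rsi")
        elif ea == RET_EA and self.buf_addr is not None:
            # read() has returned: rax holds the number of bytes read
            n = ida_dbg.get_reg_val("rax")
            if 0 < n < 0x10000:
                print("read() data: %r" % ida_bytes.get_bytes(self.buf_addr, n))
            self.buf_addr = None
        ida_dbg.continue_process()
        return 0

ida_dbg.add_bpt(READ_CALL_EA)
ida_dbg.add_bpt(RET_EA)
hook = SyscallHook()
hook.hook()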
I'm implementing features of an ssh server, so given a shell request I open a pty-tty pair.
A snippet:
import (
    "github.com/creack/pty"
    ...
)

func attachPty(channel ssh.Channel, shell *exec.Cmd) {
    mypty, err := pty.Start(shell) // error handling elided in this snippet
    go func() {
        io.Copy(channel, mypty) // (1); could also be substituted with a read() syscall, same problem
    }()
    go func() {
        io.Copy(mypty, channel) // (2) - this returns on channel EOF, so let's close mypty
        if err := syscall.Close(int(mypty.Fd())); err != nil {
            fmt.Printf("error closing fd") // no error is printed; /proc/<pid>/fd shows the fd is successfully closed
        }
    }()
}
Once the ssh channel gets closed, I close the pty. My expected behavior is that it should send SIGHUP to the shell.
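(For reference, the hangup behavior I'm expecting can be reproduced in isolation; a minimal Python sketch, just because it's compact, assuming Linux:)

import os
import pty
import signal
import time

pid, master_fd = pty.fork()    # child becomes session leader; the slave is its controlling tty
if pid == 0:
    time.sleep(60)             # child just waits; the default SIGHUP action will kill it
    os._exit(0)

time.sleep(0.5)                # give the child a moment to start
os.close(master_fd)            # closing the master hangs up the line
_, status = os.waitpid(pid, 0)
print("child got SIGHUP:", os.WTERMSIG(status) == signal.SIGHUP)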
If I comment out the (1) copy (src: mypty, dst: channel), it works!
However, when it's not commented out:
the (1) copy doesn't return, meaning the read syscall on mypty is still blocking and doesn't return EOF => the master device doesn't get closed?
the shell doesn't get SIGHUP
I'm not sure why commenting out the (1) copy makes it work; maybe the kernel reference-counts the readers?
My leads:
pty.read is actually dispatched to the tty, as said in:
pty master missing read function
Walkthrough of SIGHUP flow
pty_close in drivers/tty/pty.c, which calls tty_vhangup(tty->link);, see here
Linux Device Drivers, 3rd edition, PTY chapter
Go notes:
I close the fd directly because otherwise the usual os.File.Close() doesn't actually close the fd for some reason; it stays open in /proc/<pid>/fd
substituting the (1) copy with a direct read syscall leads to the same outcome
Thank you!
I have multiple producer processes and one consumer process. Each process is launched by the MPIPoolExecutor class. First I launch the consumer process, and then I start launching producer processes using the starmap method. The consumer receives data and saves it to the hard drive. Each producer process creates a buffer the same size as the data to be sent and sends the data with the buffered method bsend. I expect each producer process to dump its data into the buffer and exit. However, I am noticing a delay where it looks like each producer process waits for the data to be consumed by the consumer process. What am I missing? My code goes like this:
from mpi4py import MPI
import tables as tb
import time

def consumer(*args):
    comm = MPI.COMM_WORLD
    file = tb.open_file(file_name, 'w')
    filters = tb.Filters(complevel=5, complib='blosc')
    array = file.create_carray(file.root, 'data', tb.Float32Atom(),
                               shape=(n_, n_), filters=filters)
    for i in range(num_tasks):
        t = time.time()
        # recv() must be called on a communicator instance, not on MPI.Comm
        idxs, data = comm.recv(source=MPI.ANY_SOURCE)
        print("time for waiting --consumer ", time.time() - t)
        array[idxs, :] = data
def producer(*args):
    comm = MPI.COMM_WORLD
    # adding 1000 just to be on the safe side
    mem = MPI.Alloc_mem(data.nbytes + idxs.nbytes + 1000)
    MPI.Attach_buffer(mem)
    # Since the consumer is launched first, it is guaranteed to get a rank of 1.
    comm.bsend([idxs, data], dest=1)
    MPI.Detach_buffer()
    ....

with MPIPoolExecutor() as executor:
    executor.starmap(consumer, [(args,)])  # one task: a single tuple of arguments
    executor.starmap(producer, list_of_args)
If the consumer is launched first, it gets rank zero, not 1. Also, you're misunderstanding buffered communication: if you want the producer to return immediately, use MPI_Isend. The buffer detach call blocks until all messages in the buffer have been completed.
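A minimal mpi4py sketch of that suggestion (illustrative only; it assumes, per the above, that the consumer holds rank 0, and reuses the question's idxs/data names):

from mpi4py import MPI

def producer(idxs, data):
    comm = MPI.COMM_WORLD
    # isend returns immediately; no attach/detach buffer bookkeeping is needed
    req = comm.isend([idxs, data], dest=0)
    # ... other work can overlap with the transfer here ...
    req.wait()  # complete the send before the producer exits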
I am working on a testing tool for nvme-cli (written in C, runs on Linux).
For SSD validation purposes, I was looking for a custom command (e.g. an I/O command that does a write, then reads the same location, and finally compares whether both sets of data are the same).
For the read, the ioctl() function is used as shown in the code below.
struct nvme_user_io io = {
    .opcode = opcode,
    .flags = 0,
    .control = control,
    .nblocks = nblocks,
    .rsvd = 0,
    .metadata = (__u64)(uintptr_t) metadata,
    .addr = (__u64)(uintptr_t) data,
    .slba = slba,
    .dsmgmt = dsmgmt,
    .reftag = reftag,
    .appmask = appmask,
    .apptag = apptag,
};
err = ioctl(fd, NVME_IOCTL_SUBMIT_IO, &io);
Can I trace where exactly the control of execution goes, in order to understand the read?
Also, I want to have another command that looks like
err = ioctl(fd, NVME_IOCTL_WRITE_AND_COMPARE_IO, &io);
so that I can internally do a write, then read the same location, and finally compare both sets of data to ensure that the disk contains only the data that I wanted to write.
Since I am new to nvme/ioctl(), please correct me if there are any mistakes.
nvme_io() is the main command handler; it accepts as a parameter the NVMe opcode that you want to send to your device. According to the standard, you have separate commands (opcodes) for read, write, and compare. You could either send those commands separately, or add a vendor-specific command to calculate what you need.
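As a rough sketch of the "send the commands separately" route, driven through the nvme-cli binary from Python rather than through the ioctl layer (the device path and LBA size below are assumptions for illustration):

import subprocess

DEV = "/dev/nvme0n1"   # hypothetical device
BLOCK = 512            # assumed LBA size

def write_read_compare(slba, payload):
    payload = payload.ljust(BLOCK, b"\x00")
    with open("/tmp/out.bin", "wb") as f:
        f.write(payload)
    # write one block, then read it back (--block-count is 0-based)
    for cmd, path in (("write", "/tmp/out.bin"), ("read", "/tmp/in.bin")):
        subprocess.run(["nvme", cmd, DEV,
                        "--start-block", str(slba),
                        "--block-count", "0",
                        "--data-size", str(BLOCK),
                        "--data", path], check=True)
    with open("/tmp/in.bin", "rb") as f:
        return f.read() == payload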
Is there any way for a writer to know that a reader has closed its end of a named pipe (or exited), without writing to it?
I need to know this because the initial data I write to the pipe is different; the reader is expecting an initial header before the rest of the data comes.
Currently, I detect this when my write() fails with EPIPE. I then set a flag that says "next time, send the header". However, it is possible for the reader to close and re-open the pipe before I've written anything. In this case, I never realize what happened, and I don't send the header the reader is expecting.
Is there any sort of async event type thing that might help here? I'm not seeing any signals being sent.
Note that I haven't included any language tags, because this question should be considered language-agnostic. My code is Python, but the answers should apply to C, or any other language with system call-level bindings.
If you are using an event loop based on the poll system call, you can register the pipe with an event mask that contains POLLERR. In Python, with select.poll,
import select

fd = open("pipe", "w")               # open() blocks until a reader opens the FIFO
poller = select.poll()
poller.register(fd, select.POLLERR)  # POLLERR fires when the read end closes
poller.poll()
will wait until the pipe is closed.
To test this, run mkfifo pipe, start the script, and in another terminal run, for example, cat pipe. As soon as you quit the cat process, the script will terminate.
Oddly enough, it appears that when the last reader closes the pipe, select indicates that the pipe is readable:
writer.py
#!/usr/bin/env python
import os
import select
import time

NAME = 'fifo2'
os.mkfifo(NAME)

def select_test(fd, r=True, w=True, x=True):
    rset = [fd] if r else []
    wset = [fd] if w else []
    xset = [fd] if x else []
    t0 = time.time()
    r, w, x = select.select(rset, wset, xset)
    print 'After {0} sec:'.format(time.time() - t0)
    if fd in r: print ' {0} is readable'.format(fd)
    if fd in w: print ' {0} is writable'.format(fd)
    if fd in x: print ' {0} is exceptional'.format(fd)

try:
    fd = os.open(NAME, os.O_WRONLY)
    print '{0} opened for writing'.format(NAME)
    print 'select 1'
    select_test(fd)
    os.write(fd, 'test')
    print 'wrote data'
    print 'select 2'
    select_test(fd)
    print 'select 3 (no write)'
    select_test(fd, w=False)
finally:
    os.unlink(NAME)
Demo:
Terminal 1:
$ ./pipe_example_simple.py
fifo2 opened for writing
select 1
After 1.59740447998e-05 sec:
3 is writable
wrote data
select 2
After 2.86102294922e-06 sec:
3 is writable
select 3 (no write)
After 2.15910816193 sec:
3 is readable
Terminal 2:
$ cat fifo2
test
# (wait a sec, then Ctrl+C)
There is no such mechanism. Generally, in keeping with the UNIX way, there are no signals for streams being opened or closed, on either end. This can only be detected by reading from or writing to them (accordingly).
I would say this is the wrong design. Currently you are trying to have the receiver signal its availability to receive by opening a pipe. Either implement this signaling in an appropriate way, or incorporate the "closing logic" into the sending side of the pipe.
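One way to make that signaling explicit (a sketch, assuming you can change both ends; the socket path and header bytes are illustrative): use a UNIX domain socket instead of a FIFO, so each new reader shows up as an accept() and always gets a fresh header.

import os
import socket

PATH = "/tmp/demo.sock"       # illustrative path
if os.path.exists(PATH):
    os.unlink(PATH)

srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
srv.bind(PATH)
srv.listen(1)

while True:
    conn, _ = srv.accept()    # explicit "a reader is here" event
    try:
        conn.sendall(b"HDR v1\n")      # the header always goes out first
        conn.sendall(b"payload...\n")  # then the rest of the data
    except BrokenPipeError:
        pass                  # reader disconnected; wait for the next one
    finally:
        conn.close()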
I am having some serious trouble getting a Python 2 based C++ engine to work in Python 3. I know the whole I/O stack has changed, but everything I try seems to end in failure. Below is the pre-code (Python 2) and post-code (Python 3). I am hoping someone can help me figure out what I'm doing wrong. I am also using boost::python to control the references.
The program is supposed to load a Python object into memory via a map and then, upon calling the run function, find the file loaded in memory and run it. I based my code on an example from the delta3d Python manager, where they load a file and run it immediately. I have not seen anything equivalent in Python 3.
Python 2 code begins here:
// What this does is first call the Python C API to load the file, then pass the
// returned PyObject* into handle<>, which takes the reference and wraps it as a
// boost::python::object. This takes care of all future referencing and
// dereferencing.
try
{
    bp::object file_object(bp::handle<>(PyFile_FromString(fullPath(filename), "r")));
    loaded_files_.insert(std::make_pair(std::string(fullPath(filename)), file_object));
}
catch(...)
{
    getExceptionFromPy();
}
Next I load the file from the std::map and attempt to execute it:
bp::object loaded_file = getLoadedFile(filename);
try
{
    PyRun_SimpleFile(PyFile_AsFile(loaded_file.ptr()), fullPath(filename));
}
catch(...)
{
    getExceptionFromPy();
}
Python 3 code begins here. This is what I have so far, based on some suggestions from this SO question:
Load:
PyObject *ioMod, *opened_file, *fd_obj;
ioMod = PyImport_ImportModule("io");
opened_file = PyObject_CallMethod(ioMod, "open", "ss", fullPath(filename), "r");
bp::handle<> h_open(opened_file);
bp::object file_obj(h_open);
loaded_files_.insert(std::make_pair(std::string(fullPath(filename)), file_obj));
Run:
bp::object loaded_file = getLoadedFile(filename);
int fd = PyObject_AsFileDescriptor(loaded_file.ptr());
PyObject* fileObj = PyFile_FromFd(fd,fullPath(filename),"r",-1,"", "\n","", 0);
FILE* f_open = _fdopen(fd,"r");
PyRun_SimpleFile( f_open, fullPath(filename) );
Lastly, the general state of the program at this point: the file gets loaded as a TextIOWrapper, and in the Run: section the fd that is returned is always 3; for some reason _fdopen can never open the FILE, which means I can't do something like PyRun_SimpleFile. The error itself is a debug ASSERTION in _fdopen. Is there a better way to do all this? I really appreciate any help.
If you want to see the full program of the Python 2 version, it's on GitHub.
So this question was pretty hard to understand, and I'm sorry, but I found out my old code wasn't quite working as I expected. Here's what I wanted the code to do: load the Python file into memory, store it in a map, and then at a later date execute that code from memory. I accomplished this a bit differently than I expected, but it makes a lot of sense now.
Open the file using ifstream; see the code below
Convert the char buffer into a boost::python::str
Execute the boost::python::str with boost::python::exec
Profit ???
Step 1)
vector<char> input;
ifstream file(fullPath(filename), ios::in);
if (!file.is_open())
{
    // set our error message here
    setCantFindFileError();
    input.push_back('\0');
    return input;
}
file >> std::noskipws;
copy(istream_iterator<char>(file), istream_iterator<char>(), back_inserter(input));
input.push_back('\n');
input.push_back('\0');
Step 2)
bp::str file_str(string(&input[0]));
loaded_files_.insert(std::make_pair(std::string(fullPath(filename)), file_str));
Step 3)
bp::str loaded_file = getLoadedFile(filename);
// Retrieve the main module
bp::object main = bp::import("__main__");
// Retrieve the main module's namespace
bp::object global(main.attr("__dict__"));
bp::exec(loaded_file, global, global);
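For reference, the same load-now-run-later flow expressed in plain Python (just a sketch of what the boost::python calls amount to, not part of the engine's code):

# cache of file contents, keyed by path (mirrors the C++ loaded_files_ map)
loaded_files = {}

def load(path):
    with open(path, "r") as f:
        loaded_files[path] = f.read()

def run(path):
    # compiling with the original filename keeps tracebacks pointing at it
    code = compile(loaded_files[path], path, "exec")
    exec(code, {"__name__": "__main__"})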
Full code is located on GitHub.