I am using the OpenSSL BIO_... functions to create a connection that is either secure or not. The basics work just fine whether the connection is encrypted or not (i.e. I can communicate using the BIO_read() and BIO_write() functions, just as expected).
However, when I create a child process, it is customary to close the sockets that are not going to be used in the parent or the child. I can solve the child side by marking the socket with SOCK_CLOEXEC. However, if I want to do the opposite, keep the socket open in the child and close it in the parent, I run into a problem: the BIO_free_all() function calls shutdown(s, SHUT_RDWR) before the close(s). The result is that the socket is dead (effectively shut down) in the child as well.
My question is: is there a way to properly share an OpenSSL BIO interface between parent and child by avoiding the shutdown() call?
If not in the BIO interface itself, I can always close the file descriptor before calling BIO_free_all(), but that sounds like an ugly hack!
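Concretely, the hack I have in mind would look roughly like this (a sketch only; it assumes the chain was built on a plain socket BIO that I still have a pointer to, here called sock_bio, and the function name is just for illustration):

#include <openssl/bio.h>
#include <unistd.h>

/* Parent side: free the BIO chain without shutting the socket down.
 * "bio" is the top of the chain used for BIO_read()/BIO_write();
 * "sock_bio" is the socket-level BIO the chain was built on
 * (e.g. from BIO_new_socket(fd, BIO_CLOSE)). */
void release_in_parent(BIO *bio, BIO *sock_bio)
{
    int fd = -1;
    BIO_get_fd(sock_bio, &fd);             /* remember the underlying descriptor   */
    BIO_set_close(sock_bio, BIO_NOCLOSE);  /* tell the BIO not to touch it on free */
    BIO_free_all(bio);                     /* frees the chain, leaves the fd alone */
    if (fd >= 0)
        close(fd);                         /* plain close(): no shutdown(), so the
                                              child's copy of the socket stays usable */
}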
I have a driver, which handles several TCP connections.
Is there a way to perform something similar to the user-space select()/poll()/epoll() APIs in the kernel, given a list of struct sock's?
Thanks
You may want to write your own custom sk_buff handler, which calls a kernel_select() that tries to lock the semaphore and does a blocking wait while the socket is open.
Not sure if you have already gone through this link: Simulate effect of select() and poll() in kernel socket programming.
On the kernel side it's easy to avoid using the sys_epoll() interface outright. After all, you have direct access to kernel objects; no need to jump through hoops.
Each file object, sockets included, "overrides" a poll method in its file_operations "vtable". You can simply loop around all your sockets, calling ->poll() on each of them and yielding periodically or when there's no data available.
If the sockets are fairly high traffic, you won't need anything more than this.
A note on the API: the poll() method requires a poll_table argument; however, if you do not intend to wait on it, the table can safely be initialized with a NULL callback:
poll_table pt;
init_poll_funcptr(&pt, NULL);   /* NULL callback: query readiness only, do not queue for wakeup */
...
// struct socket *sk;
...
unsigned int event_mask = sk->ops->poll(sk->file, sk, &pt);
If you do want to wait, just play around with the callback set into poll_table by init_poll_funcptr().
I have a simple C server that accepts connections, to which I try connecting using telnet or netcat. Each time I receive a connection, I print out the descriptor and close the connection in a child process.
I run an instance of netcat, connect to the server, disconnect (Ctrl-C), and repeat this a few times. The descriptor values printed on the server side are 4, 5, 6, 7, and they keep increasing.
I tried repeating this exercise after a period of time and the values still keep increasing. I'm concerned that my descriptors aren't closing (despite an explicit call to close).
Is there some signal I should be handling, setting the handler to close the connection?
After a fork the child process has a copied set of the parent's file descriptors. So the proper procedure is, after the fork, to (1) close the parent's listening socket in the child and to (2) close the new connection socket inherited by the child in the parent.
Open file descriptors are reference counted by the kernel, so when the child inherits the connection socket the reference count is 2. After the parent closes the connection socket, the count remains at 1 until the child is done and closes it. Once the reference count drops to 0, the connection is closed. (Some details omitted.)
The upshot is that after making this change you then see a lot of FDs equal to 4 in the parent because the same FD will continue to be opened/closed/reused even though multiple connections are being processed by the children.
after fork both parent and child have a copy of the socket file descriptor;
you should close the socket in the parent process after fork.
That alone does not close the connection; only when the child process closes the socket too does the connection actually close.
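Put together, the usual pattern looks roughly like this (a sketch; handle_client() and the other names are illustrative, and error handling is mostly omitted). The SIGCHLD line also answers the signal question: ignoring SIGCHLD lets the kernel reap the children so they do not linger as zombies.

#include <signal.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

void handle_client(int conn);              /* hypothetical per-connection handler */

void serve(int listen_fd)
{
    signal(SIGCHLD, SIG_IGN);              /* let the kernel reap children: no zombies */
    for (;;) {
        int conn = accept(listen_fd, NULL, NULL);
        if (conn < 0)
            continue;
        pid_t pid = fork();
        if (pid == 0) {                    /* child */
            close(listen_fd);              /* child does not accept new connections */
            handle_client(conn);
            close(conn);
            _exit(0);
        }
        close(conn);                       /* parent drops its reference; the child's
                                              copy keeps the connection alive */
    }
}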
I have a multithreaded server (it inherits QTcpServer). When a new connection arrives, I create a new task (inheriting QRunnable), passing the socket descriptor to its constructor, and push this task to a QThreadPool (which has 3 workers).
QThreadPool::globalInstance()->start(task);
In run() I dynamically create a QTcpSocket, set the socket descriptor, and read the first received byte. Based on the value of this byte I create a new specific task (also inheriting QRunnable), passing a pointer to the earlier created QTcpSocket object to its constructor, and also push this task to the QThreadPool.
This specific task does some routine work, and then the app crashes.
From the log file I can see that the destructor of this specific task was called.
Qt Creator also reports the following error messages:
QObject: Cannot create children for a parent that is in a different thread.
(Parent is QNativeSocketEngine(0x18c62290), parent's thread is QThread(0x18c603e0), current thread is QThread(0x18cc3b60)
QSocketNotifier: socket notifiers cannot be disabled from another thread
ASSERT failure in QCoreApplication::sendEvent: "Cannot send events to objects owned by a different thread. Current thread 18cc3b60. Receiver '' (of type 'QNativeSocketEngine') was created in thread 18c603e0", file kernel/qcoreapplication.cpp, line 420
I found similar posts but unfortunately I could not understand how to fix my problem.
Please, help me.
You cannot use QTcpSocket from two different threads, because QObjects are not thread-safe.
You've created your QTcpSocket in the first task, so it "lives" in the thread associated with that task. If you pass its pointer into another QRunnable, then a second thread will try to access it, which will break things.
You'll need to redesign your app in a way that doesn't share the same QTcpSocket between different threads. One possibility is to implement the different specific behaviours as functions in your original task and simply select the appropriate function based on the first received byte, as sketched below.
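A rough illustration of that idea (the class and function names are made up; the point is that the QTcpSocket is created, used, and destroyed inside a single run(), so it never crosses a thread boundary):

#include <QRunnable>
#include <QTcpSocket>

class ConnectionTask : public QRunnable
{
public:
    explicit ConnectionTask(qintptr descriptor) : m_descriptor(descriptor) {}

    void run() override
    {
        QTcpSocket socket;                          // lives entirely in this worker thread
        if (!socket.setSocketDescriptor(m_descriptor))
            return;
        if (!socket.waitForReadyRead(30000))        // wait for the first byte
            return;
        char type = 0;
        if (socket.read(&type, 1) != 1)
            return;
        switch (type) {                             // dispatch inside the same task
        case 'A': handleTypeA(socket); break;       // instead of handing the socket
        case 'B': handleTypeB(socket); break;       // to another QRunnable/thread
        default:  break;
        }
        socket.disconnectFromHost();
    }

private:
    void handleTypeA(QTcpSocket &socket) { /* ... */ }
    void handleTypeB(QTcpSocket &socket) { /* ... */ }
    qintptr m_descriptor;
};

It is started exactly as before, e.g. QThreadPool::globalInstance()->start(new ConnectionTask(descriptor));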
I'm working on a project to inject a shared library in a program with LD_PRELOAD.
My injected library creates a new thread when it is injected into the program. All logic happens in this thread (like analyzing network traffic and so on).
First, you need to know this about the program being preloaded: it is a client application that encrypts every packet it sends to the server, and each outgoing packet is written to a static buffer. I found the function in the client that encrypts and sends the packets, and I was able to detour it. So now I can just modify the static buffer and let the 'send' function encrypt the buffer and send it to the server.
But now I have a problem: what if I change the contents of the static buffer in my library's thread (so that I can send a fake packet) and, at the same time, the program's thread changes the static buffer too? That would cause a crash.
I need some kind of synchronization.
So I've been thinking of some solutions:
Find every function in the program that changes the buffer, detour them and add a mutex to that call or something like that. Would take like ages though...
Find a way to execute my piece of code that changes the buffer as one atomic block, so it actually runs to completion without POSIX threads switching to other threads. Is this even possible?
Make my application synchronous and cry.
Can anyone come up with a better solution? Or do you know how to make solution 2 possible?
Thanks in advance,
Gillis
If you detoured the 'send' function and the code of your 'detoured send' lives in your preloaded library, then when the main thread calls 'send', your 'detoured send' code is executed in the main thread's context; your injected thread is doing nothing at that moment. If more than one 'main thread' could potentially call 'send', then you need synchronization in your 'detoured send'.
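For the synchronization case, a minimal sketch with a process-wide mutex (real_send, send_fake_packet and the buffer handling are assumptions about your hook setup; this only serializes access around the send call itself, it does not help if other client threads write to the buffer outside of send):

#include <pthread.h>
#include <sys/types.h>

/* Set by the detour machinery to the client's original send routine
 * (name and setup are assumptions about your hook). */
static ssize_t (*real_send)(int fd, const void *buf, size_t len, int flags);

static pthread_mutex_t g_buffer_lock = PTHREAD_MUTEX_INITIALIZER;

/* Replacement installed by the detour; runs in the client's own thread. */
ssize_t detoured_send(int fd, const void *buf, size_t len, int flags)
{
    pthread_mutex_lock(&g_buffer_lock);
    ssize_t n = real_send(fd, buf, len, flags);   /* original encrypt-and-send */
    pthread_mutex_unlock(&g_buffer_lock);
    return n;
}

/* Runs in the injected library's thread. */
void send_fake_packet(void)
{
    pthread_mutex_lock(&g_buffer_lock);
    /* ... fill the static buffer, then call real_send() on it ... */
    pthread_mutex_unlock(&g_buffer_lock);
}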
Alternatively, if you really want to process something in your new 'injected' thread you can:
1) in your 'detoured send' (invoked from the main thread's context): pass the data to your thread and wait until it finishes processing the data (note: the main thread is waiting).
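A sketch of that hand-off using a condition variable (all names are illustrative; how the worker thread is actually woken up is left out):

#include <pthread.h>
#include <stddef.h>

static pthread_mutex_t g_lock = PTHREAD_MUTEX_INITIALIZER;
static pthread_cond_t  g_done = PTHREAD_COND_INITIALIZER;
static void *g_pending_data = NULL;
static int   g_processed    = 0;

/* Called from the main thread inside the detoured send(). */
void hand_off_and_wait(void *data)
{
    pthread_mutex_lock(&g_lock);
    g_pending_data = data;
    g_processed = 0;
    /* ... wake the injected thread here (another condvar, a pipe, etc.) ... */
    while (!g_processed)                 /* main thread blocks until done */
        pthread_cond_wait(&g_done, &g_lock);
    pthread_mutex_unlock(&g_lock);
}

/* Called from the injected thread once it has processed g_pending_data. */
void mark_processed(void)
{
    pthread_mutex_lock(&g_lock);
    g_processed = 1;
    pthread_cond_signal(&g_done);
    pthread_mutex_unlock(&g_lock);
}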
I have the following situation (pseudocode):
function f:
    pid = fork()
    if pid == 0:
        exec to another long-running executable (no communication needed to that process)
    else:
        return "something"
f is exposed over a XmlRpc++ server. When the function is called over XML-RPC, the parent process prints "done closing socket" after the function returned "something". But the XML-RPC client hangs as long as the child process is still running. When I kill the child process, the XML-RPC client correctly finishes the RPC call.
It seems to me that I'm having a problem with fork() copying socket descriptors to the child process (parent called closesocket but child still owns a reference -> connection still established). How can I circumvent this?
EDIT: I read about FD_CLOEXEC already, but can't I force all descriptors to be closed on exec?
No, you can't force all file descriptors to be closed on exec. You will need to loop over all unwanted file descriptors in the child after the fork() and close them. Unfortunately, there isn't an easy, portable, way to do that - the usual approach is to use getrlimit() to get the current value of RLIMIT_NOFILE and loop from 3 to that number, trying close() on each candidate.
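For example, a sketch of the portable approach, meant to run in the child between fork() and exec():

#include <sys/resource.h>
#include <unistd.h>

static void close_inherited_fds(void)
{
    struct rlimit rl;
    long max_fd = 1024;                      /* fallback if getrlimit() fails */

    if (getrlimit(RLIMIT_NOFILE, &rl) == 0 && rl.rlim_cur != RLIM_INFINITY)
        max_fd = (long)rl.rlim_cur;

    for (long fd = 3; fd < max_fd; fd++)     /* keep stdin, stdout, stderr */
        close((int)fd);                      /* close() on an unused fd just fails with EBADF */
}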
If you are happy to be Linux-only, you can read the /proc/self/fd/ directory to determine the open file descriptors and close them (except 0, 1 and 2 - which should either be left alone or reopened to /dev/null).
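And a sketch of the Linux-only variant; note that the descriptor backing the /proc/self/fd directory stream itself must not be closed while iterating:

#include <dirent.h>
#include <stdlib.h>
#include <unistd.h>

static void close_inherited_fds_linux(void)
{
    DIR *d = opendir("/proc/self/fd");
    if (d == NULL)
        return;                              /* fall back to the getrlimit() loop */

    int dir_fd = dirfd(d);
    struct dirent *e;
    while ((e = readdir(d)) != NULL) {
        int fd = atoi(e->d_name);            /* "." and ".." yield 0 and are skipped */
        if (fd > 2 && fd != dir_fd)          /* keep 0, 1, 2 and the directory stream */
            close(fd);
    }
    closedir(d);
}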