I have a simple application on an OpenWRT-style router. It's currently written in C++. The router (embedded Linux) has very limited disk space and RAM; for example, there is not enough space to install Python.
So I want to control this daemon app over the network. I have read some tutorials on creating sockets and listening on a port for activity, but I haven't been able to integrate that flow into a C++ class, and I haven't been able to figure out how to decode the information received or how to send a response.
All the tutorials I've read are dead ends: they show you how to make a server that basically just blocks until it receives something, then returns a message once it gets something.
Is there something a little higher level that can be used for this sort of thing?
Sounds like what you are asking is "how do I build a simple network service that will accept requests from clients and do something in response?" There are a bunch of parts to this -- how do you build a service framework, how do you encode and decode the requests, how do you process the requests and how do you tie it all together?
It sounds like you're having problems with the first and last parts. There are two basic ways of organizing a simple service like this -- the thread approach and the event approach.
In the thread approach, you create a thread for each incoming connection. That thread reads the messages (requests) from that connection (file descriptor), processes them, and writes back responses. When a connection goes away, the thread exits. You have a main 'listening' thread that accepts incoming connections and creates new threads to handle each one.
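Here's a minimal sketch of the thread approach using POSIX sockets and std::thread; the port number and the echo-style request handling are placeholders for your own protocol:

    #include <netinet/in.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <thread>

    static void handle_connection(int fd) {
        char buf[512];
        ssize_t n;
        while ((n = read(fd, buf, sizeof(buf))) > 0) {
            // Decode the request in buf[0..n) and write() back a response;
            // here we just echo the bytes as a stand-in.
            write(fd, buf, n);
        }
        close(fd);  // n == 0: peer disconnected; n < 0: error
    }

    int main() {
        int listener = socket(AF_INET, SOCK_STREAM, 0);
        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(9000);               // arbitrary port
        bind(listener, (sockaddr*)&addr, sizeof(addr));
        listen(listener, 8);
        for (;;) {
            int client = accept(listener, nullptr, nullptr);
            if (client >= 0)
                std::thread(handle_connection, client).detach();  // one thread per connection
        }
    }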
In the event approach, each incoming request becomes an event. You then have event handlers that process these events, sending back responses. It's important that the event handlers NOT block and that they complete promptly, otherwise the service may appear to lock up. Your program has a main event loop that waits for incoming events (generally blocking on a single poll or select call) and reads and dispatches each event as appropriate.
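And a sketch of the same service in the event style, single-threaded, with a poll() call as the main event loop (again, the port and the echo handling are placeholders). Given the RAM constraints you describe, this approach is probably the better fit on a router:

    #include <netinet/in.h>
    #include <poll.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <vector>

    int main() {
        int listener = socket(AF_INET, SOCK_STREAM, 0);
        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(9000);            // arbitrary port
        bind(listener, (sockaddr*)&addr, sizeof(addr));
        listen(listener, 8);

        std::vector<pollfd> fds{{listener, POLLIN, 0}};
        for (;;) {
            poll(fds.data(), fds.size(), -1);   // block until some fd has an event
            for (size_t i = 0; i < fds.size(); ++i) {
                if (!(fds[i].revents & POLLIN)) continue;
                if (fds[i].fd == listener) {    // incoming connection
                    int client = accept(listener, nullptr, nullptr);
                    if (client >= 0) fds.push_back({client, POLLIN, 0});
                } else {                        // request on an existing connection
                    char buf[512];
                    ssize_t n = read(fds[i].fd, buf, sizeof(buf));
                    if (n <= 0) {               // disconnect: drop this descriptor
                        close(fds[i].fd);
                        fds.erase(fds.begin() + i);
                        --i;                    // safe: the listener at index 0 is never erased
                    } else {
                        write(fds[i].fd, buf, n);  // placeholder: echo a response
                    }
                }
            }
        }
    }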
I installed the python-mini package with opkg, which has socket and thread support.
Works like a charm on a WRT160NL with backfire/10.03.1.
I have a program that uses ZMQ to send and receive between a C++ application and a Python GUI. The Python side sends all commands to the C++ app, which does the work; the C++ app periodically sends status back to update the GUI.
The C++ app is multi-threaded, but we never made the zmq_send call thread safe, so in 1 out of 100,000 runs we'd get an unhandled exception or segmentation fault if two threads tried to send status back to the GUI simultaneously. Figuring this out took longer than I care to admit, since it was so sporadic. It was easily solved with a mutex around zmq_send, because the socket is managed by a singleton.
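For reference, the fix looks roughly like this; a minimal sketch using the libzmq C API, where the singleton class name, endpoint, and setup details are stand-ins for the real application:

    #include <zmq.h>
    #include <mutex>
    #include <string>

    class StatusChannel {                       // hypothetical singleton that owns the socket
    public:
        static StatusChannel& instance() {
            static StatusChannel ch;
            return ch;
        }
        // ZMQ sockets are not thread safe, so every send goes through one mutex.
        int send(const std::string& msg) {
            std::lock_guard<std::mutex> lock(mutex_);
            return zmq_send(socket_, msg.data(), msg.size(), 0);
        }
    private:
        StatusChannel() {
            context_ = zmq_ctx_new();
            socket_ = zmq_socket(context_, ZMQ_PAIR);
            zmq_connect(socket_, "tcp://127.0.0.1:5555");  // placeholder endpoint
        }
        void* context_ = nullptr;
        void* socket_ = nullptr;
        std::mutex mutex_;
    };

Any processing thread then just calls StatusChannel::instance().send(status).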
In addition to the processing threads, there is one thread that just idly waits to receive and dispatch commands from the GUI, using zmq_poll and then zmq_msg_recv when something is available.
The question: can I safely poll the same socket while a send is happening? Most of the time the receive thread is sitting in zmq_poll with a timeout, and sends seem to be happening without issue. I can't find any good documentation about this. I assume a mutex needs to protect zmq_send and zmq_msg_recv from occurring simultaneously, but I am not sure about the safety of polling while sending.
Details about the setup: a PAIR socket with a single client and server. All messages are small (<1KB). There is only one socket, shared for sending and receiving.
This is a large, decade-old application I'd like to avoid redesigning if possible.
Situation:
A user sends an image, and after the image, a message. Until the second user has received the picture, the message will not be sent.
How can I send messages right away, like in a normal chat?
I have found that there is an "async" module for node.js, but how do I use it with Socket.IO?
You could simply put every message in a queue, so that each message waits for the previous one to be sent before being passed on.
In your case, though, I don't think waiting for an image to be sent is wise; it will make your chat unresponsive.
Rather, send a simple text message announcing the image. Once you receive it, put a placeholder in the chat where you'll load the image when it arrives (displaying a loader in the meantime). This will allow the chat to continue without being blocked while a long IO process finishes.
Socket.IO uses a single WebSocket connection which only allows for sending one item at a time. You should consider sending that image out-of-band on a separate WebSocket, or via another method.
I have a similar situation where I must stream continuous binary data and signaling messages. For this, I use BinaryJS to set up logical streams which are mirrored on both ends. One stream is used for binary streaming, and the other is used for RPC. Unfortunately, Socket.IO cannot use arbitrary streams. The only RPC library that seems to work is rpc-stream. The RPC functionality isn't nearly as powerful as Socket.IO (in particular when dealing with callbacks), but it does work well.
Assume we have an application that uses Winsock to implement TCP communication.
For each socket we create a thread that blocks on receive.
When data arrives, we would like to notify other threads (listening threads).
I was wondering what the best way to implement this is:
Move away from this design and use non-blocking sockets; the listening thread will then have to iterate constantly and call a non-blocking receive, making it thread safe (no extra threads for the sockets).
Use asynchronous procedure calls (APCs) to notify listening threads, which in turn will have to do an alertable wait for APCs queued to them.
Implement some thread-safe message queue that each socket thread posts messages to; the listener, again, will go over it at intervals and pull data from it.
Also, I read about WSAAsyncSelect, but saw that it is used to send messages to a window. Isn't there something similar for other threads? (Well, I guess APCs are...)
Thanks!
Use I/O completion ports. See the CreateIoCompletionPort() and the GetQueuedCompletionStatus() functions of the Win32 API (under File Management functions). In this instance, the socket descriptors are used in place of file handles.
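A rough sketch of what the receive side looks like; error handling is trimmed, socket setup (WSAStartup, listen/accept) is omitted, and the per-connection struct is a stand-in:

    #include <winsock2.h>
    #include <windows.h>

    struct Connection {              // per-socket state, passed as the completion key
        SOCKET     sock;
        WSABUF     wsabuf;
        char       data[4096];
        OVERLAPPED ov;
    };

    // Associate a connected socket with the completion port and post the first receive.
    void Attach(HANDLE iocp, Connection* c) {
        CreateIoCompletionPort(reinterpret_cast<HANDLE>(c->sock), iocp,
                               reinterpret_cast<ULONG_PTR>(c), 0);
        c->wsabuf.buf = c->data;
        c->wsabuf.len = sizeof(c->data);
        DWORD flags = 0;
        ZeroMemory(&c->ov, sizeof(c->ov));
        WSARecv(c->sock, &c->wsabuf, 1, nullptr, &flags, &c->ov, nullptr);
    }

    // Worker thread: one GetQueuedCompletionStatus() call serves every socket.
    DWORD WINAPI Worker(LPVOID param) {
        HANDLE iocp = static_cast<HANDLE>(param);
        DWORD bytes;
        ULONG_PTR key;
        OVERLAPPED* ov;
        while (GetQueuedCompletionStatus(iocp, &bytes, &key, &ov, INFINITE)) {
            Connection* c = reinterpret_cast<Connection*>(key);
            if (bytes == 0) { closesocket(c->sock); continue; }  // peer closed
            // ... process c->data[0..bytes) and notify whoever needs the data ...
            DWORD flags = 0;
            ZeroMemory(&c->ov, sizeof(c->ov));
            WSARecv(c->sock, &c->wsabuf, 1, nullptr, &flags, &c->ov, nullptr);
        }
        return 0;
    }

The port itself is created once with CreateIoCompletionPort(INVALID_HANDLE_VALUE, NULL, 0, 0), and you typically start a small fixed number of worker threads rather than one per socket.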
You'll always be better off abstracting the mechanics of the socket API (listening, accepting, reading and writing) into a layer separate from the application logic. Have an object that captures the state of a connection, created when an incoming connection arrives, and maintain buffers in this object for the incoming and outgoing traffic. This will allow your network interface layer to be independent of the application code, and it will also make the code cleaner by separating the application functionality from the underlying communication mechanism.
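For example, such a connection object can start out as little more than this (the names are illustrative):

    #include <winsock2.h>
    #include <vector>

    struct Connection {
        SOCKET sock;
        std::vector<char> inBuffer;   // received bytes not yet consumed by the application
        std::vector<char> outBuffer;  // bytes queued by the application, not yet sent
    };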
The blocking or non-blocking socket decision depends on the level of scalability that your application needs to achieve. If your application needs to support hundreds of incoming connections, adopting a thread-per-socket approach is not going to be very wise; you'll be better off going for an I/O completion port based implementation, which will make your app immensely scalable at the cost of added code complexity. However, if you only foresee a few tens of connections at any point in time, you can go for an asynchronous sockets model using Win32 events or messages. The Win32 events based approach doesn't scale very well beyond a certain limit, as you would have to manage multiple threads if the number of concurrent sockets exceeds 63 (WaitForMultipleObjects can only wait on a maximum of 64 handles). The Windows message based mechanism doesn't have this limitation. OTOH, the Win32 event based approach does not require a GUI window to work.
Check out the WSAEventSelect and WSAAsyncSelect API documentation on MSDN.
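A skeleton of the Win32-events approach; note how the handle limit mentioned above shows up directly in the array size (WSA_MAXIMUM_WAIT_EVENTS is 64):

    #include <winsock2.h>

    void EventLoop(SOCKET socks[], int count) {   // count must stay <= 64
        WSAEVENT events[WSA_MAXIMUM_WAIT_EVENTS];
        for (int i = 0; i < count; ++i) {
            events[i] = WSACreateEvent();
            WSAEventSelect(socks[i], events[i], FD_READ | FD_CLOSE);
        }
        for (;;) {
            DWORD rc = WSAWaitForMultipleEvents(count, events, FALSE,
                                                WSA_INFINITE, FALSE);
            int i = rc - WSA_WAIT_EVENT_0;        // index of a signaled event
            WSANETWORKEVENTS ne;
            WSAEnumNetworkEvents(socks[i], events[i], &ne);  // reports and resets the event
            if (ne.lNetworkEvents & FD_READ) {
                // recv() on socks[i] will not block here
            }
            if (ne.lNetworkEvents & FD_CLOSE) {
                // peer closed; remove socks[i]/events[i] from the arrays
            }
        }
    }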
You might want to take a look at boost::asio package as well. It provides a neat (though a little complex) C++ abstraction over sockets API.
I have a Delphi 6 application that uses the ICS component suite to do socket communications. I have my own server socket VCL component that creates client TWSocket sockets when a new session becomes available. The client sockets I create do have the Multithreaded property set to TRUE, but all that does is change the way the client socket handles socket messages to a manner that is safe from a background thread (non-main VCL thread). TWSocket does not spawn a thread to handle socket data traffic, which is what I need.
I need the receive calls to occur off the main VCL thread (the main user interface thread), because the incoming data to the client socket is audio data that needs to be processed rapidly, in 50-100 milliseconds or less. In other words, one hiccup on the main VCL thread and the audio stream is disrupted. This is why I want to push the OnDataAvailable() event, which fires whenever incoming data is available, onto a high-priority background thread. In other words, I want to force the message processing loop belonging to the client TWSocket object onto a background thread.
I believe I can do this by creating the client socket via a background thread, but I'm hoping to avoid that, since currently I use a VCL component I made that acts as a socket server. This is the entity that accepts the incoming connection and spawns the client sockets, and the socket server is created on the main VCL thread.
Therefore my question is: is there a (relatively) easy way to create the client sockets so they use an existing background thread to do their socket processing, especially the FD_RECV message handling? If there is no way to use an existing background thread, I will create one for each client socket, but then I need to know how to make sure the new TWSocket object uses that background thread when it runs the message loop that processes socket messages. How would I do that?
For other ICS/TWSocket users out there, the solution is in the ICS ThrdSrv demonstration project that comes with the package. Take a close look at that project, especially its use of the ThreadAttach() and ThreadDetach() methods. That sample project shows how to create client sockets that have message pumps that run in the context of a worker thread.
I should state that I'm not asking about specific implementation details (yet), but just a general overview of what's going on. I understand the basic concept behind a socket, and need clarification on the process as a whole. My (probably very wrong) understanding is currently this:
A socket is constantly listening for clients that want to connect (in its own thread). When a connection occurs, an event is raised that spawns another thread to perform the connection process. During the connection process the client is assigned its own socket with which to communicate with the server. The server then waits for data from the client, and when data arrives an event is raised that spawns a thread to read the data from a stream into a buffer.
My questions are:
How off is my understanding?
Does each client socket require its own thread to listen for data on?
How is data routed to the correct client socket? Is this something taken care of by the guts of TCP/UDP/kernel?
In this threaded environment, what kind of data is typically being shared, and what are the points of contention?
Any clarifications and additional explanation would be greatly appreciated.
EDIT:
Regarding the question about what data is typically shared and points of contention, I realize this is more of an implementation detail than it is a question regarding general process of accepting connections and sending/receiving data. I had looked at a couple implementations (SuperSocket and Kayak) and noticed some synchronization for things like session cache and reusable buffer pools. Feel free to ignore this question. I've appreciated all your feedback.
One thread per connection is bad design (not scalable, overly complex) but unfortunately way too common.
A socket server works more or less like this:
A listening socket is set up to accept connections and added to a socket set
The socket set is checked for events
If the listening socket has pending connections, new sockets are created by accepting the connections, and then added to the socket set
If a connected socket has events, the relevant IO functions are called
The socket set is checked for events again
This happens in one thread. You can easily handle thousands of connected sockets in a single thread, and there are few valid reasons for making this more complex by introducing threads.
while running
    select on socketset
    for each socket with events
        if socket is listener
            accept new connected socket
            add new socket to socketset
        else if socket is connection
            if event is readable
                read data
                process data
            else if event is writable
                write queued data
            else if event is closed connection
                remove socket from socketset
            end
        end
    done
done
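For reference, a minimal runnable version of that loop in C++ (the port, buffer size, and echo "processing" are arbitrary stand-ins). Note that with select(), a closed connection shows up as a readable socket on which read() returns 0:

    #include <netinet/in.h>
    #include <sys/select.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <set>

    int main() {
        int listener = socket(AF_INET, SOCK_STREAM, 0);
        sockaddr_in addr{};
        addr.sin_family = AF_INET;
        addr.sin_addr.s_addr = htonl(INADDR_ANY);
        addr.sin_port = htons(9000);                   // arbitrary port
        bind(listener, (sockaddr*)&addr, sizeof(addr));
        listen(listener, 16);

        std::set<int> socketset = {listener};
        for (;;) {
            // Rebuild the fd_set each pass, since select() modifies it in place.
            fd_set readable;
            FD_ZERO(&readable);
            int maxfd = 0;
            for (int fd : socketset) {
                FD_SET(fd, &readable);
                if (fd > maxfd) maxfd = fd;
            }
            select(maxfd + 1, &readable, nullptr, nullptr, nullptr);

            for (auto it = socketset.begin(); it != socketset.end(); ) {
                int fd = *it++;                        // advance before any erase
                if (!FD_ISSET(fd, &readable)) continue;
                if (fd == listener) {                  // pending connection
                    int client = accept(listener, nullptr, nullptr);
                    if (client >= 0) socketset.insert(client);
                } else {                               // data (or close) on a connection
                    char buf[1024];
                    ssize_t n = read(fd, buf, sizeof(buf));
                    if (n <= 0) {                      // closed connection
                        close(fd);
                        socketset.erase(fd);
                    } else {
                        write(fd, buf, n);             // "process data": echo it back
                    }
                }
            }
        }
    }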
The IP stack takes care of all the details of which packets go to which "socket" in which order. Seen from the application's point of view, a socket represents a reliable ordered byte stream (TCP) or an unreliable unordered sequence of packets (UDP).
EDIT: In response to updated question.
I don't know either of the libraries you mention, but on the concepts you mention:
A session cache typically keeps data associated with a client, and can reuse this data for multiple connections. This makes sense when your application logic requires state information, but it's a layer higher than the actual networking end. In the above sample, the session cache would be used by the "process data" part.
Buffer pools are also an easy and often effective optimization in a high-traffic server. The concept is very easy to implement: instead of allocating/deallocating space for storing the data you read/write, you fetch a preallocated buffer from a pool, use it, and then return it to the pool. This avoids the (sometimes relatively expensive) backend allocation/deallocation mechanisms. It is not directly related to networking; you could just as well use buffer pools for, e.g., something that reads chunks of files and processes them.
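As an illustration, a bare-bones buffer pool can be this simple (single-threaded; add a mutex around acquire/release if several threads share it, and note the class name and growth policy are arbitrary):

    #include <memory>
    #include <vector>

    class BufferPool {
    public:
        BufferPool(size_t buf_size, size_t count) : buf_size_(buf_size) {
            for (size_t i = 0; i < count; ++i)
                free_.push_back(std::make_unique<char[]>(buf_size_));
        }
        // Fetch a preallocated buffer instead of allocating a fresh one.
        std::unique_ptr<char[]> acquire() {
            if (free_.empty())
                return std::make_unique<char[]>(buf_size_);  // pool empty: grow on demand
            auto buf = std::move(free_.back());
            free_.pop_back();
            return buf;
        }
        // Return a buffer to the pool for reuse rather than deallocating it.
        void release(std::unique_ptr<char[]> buf) {
            free_.push_back(std::move(buf));
        }
    private:
        size_t buf_size_;
        std::vector<std::unique_ptr<char[]>> free_;
    };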
How off is my understanding?
Pretty far.
Does each client socket require its own thread to listen for data on?
No.
How is data routed to the correct client socket? Is this something taken care of by the guts of TCP/UDP/kernel?
TCP/IP is a number of layers of protocol. There's no "kernel" to it. It's pieces, each with a separate API to the other pieces.
The IP address is handled in one place.
The port # is handled in another place.
The IP addresses are matched up with MAC addresses to identify a particular host. The port # is what ties a TCP (or UDP) socket to a particular piece of application software.
In this threaded environment, what kind of data is typically being shared, and what are the points of contention?
What threaded environment?
Data sharing? What?
Contention? The physical channel is the number one point of contention. (Ethernet, for example, depends on collision detection.) After that, well, every part of the computer system is a scarce resource shared by multiple applications and is a point of contention.