I've got a basic understanding of epoll. I know that epoll can monitor multiple FDs and handle their events.
My question is: can a heavy event block the server, so that I must use multithreading?
For example, the epoll of a server is monitoring two sockets, A and B. Now A starts to send a lot of messages to the server, so the server starts to read them. One second later, B starts to send messages too, while A is still sending. In this case, do I need to create a thread for these reads? If I don't, does it mean the server has no chance to get the messages from B until A finishes its sending?
If you can process incoming messages fast enough (no blocking calls, no heavy computations), you don't need a separate thread. Otherwise, you would benefit from going multi-threaded.
In any case, it helps to understand what happens when you have only one thread and can't process messages fast enough. If you are working with the TCP protocol, the machines sending you the data will simply reduce their transmission rate (TCP's flow control takes care of this). When using UDP, some incoming packets will simply get dropped.
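To make that concrete, here is a minimal single-threaded sketch of such a loop (assuming the sockets have already been made non-blocking and registered with the epoll instance `epfd`; the function name and buffer size are only illustrative). Each readiness notification triggers one bounded read, so A cannot monopolize the server while B is waiting:

```c
/* Minimal single-threaded epoll loop (sketch).
 * Assumes the sockets were created elsewhere, set non-blocking,
 * and added to `epfd` with EPOLLIN (level-triggered). */
#include <errno.h>
#include <stdio.h>
#include <sys/epoll.h>
#include <unistd.h>

#define MAX_EVENTS 64

void serve(int epfd)
{
    struct epoll_event events[MAX_EVENTS];
    char buf[4096];

    for (;;) {
        int n = epoll_wait(epfd, events, MAX_EVENTS, -1);
        if (n < 0) {
            if (errno == EINTR)
                continue;
            perror("epoll_wait");
            break;
        }
        for (int i = 0; i < n; i++) {
            int fd = events[i].data.fd;
            /* One bounded read per notification: with level-triggered epoll,
             * leftover data just makes the fd show up again on the next pass,
             * so no single peer can starve the others. */
            ssize_t r = read(fd, buf, sizeof buf);
            if (r > 0) {
                /* process buf[0..r) quickly, without blocking calls */
            } else if (r == 0) {
                epoll_ctl(epfd, EPOLL_CTL_DEL, fd, NULL);
                close(fd);  /* peer closed the connection */
            } else if (errno != EAGAIN && errno != EWOULDBLOCK) {
                perror("read");
            }
        }
    }
}
```

As long as processing one chunk is fast, B is serviced within the same pass even while A keeps sending; only a slow or blocking handler would delay it.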
Related
Now I am building an application to send big data from a client to a server via UDP. My questions are:
Should I use one thread or multiple threads to send the data?
If I should use multiple threads, should I use one socket for all threads or one socket per thread?
Thanks,
Should I use one thread or multiple threads to send the data?
Either way can work, so it's mostly a matter of personal preference. If it were me, I would use a single thread rather than multiple threads, because multiple threads are much harder to implement correctly, and in this case they won't buy you any additional performance: your throughput bottleneck is almost certainly going to be either your hard disk or your network card, not the speed of your CPU core(s).
If I should use multiple threads, should I use one socket for all threads or one socket per thread?
Again, either way will work (for UDP), but if it were me, I would use one socket per thread, only because then you don't have to worry so much about race conditions during setup and shutdown: each thread simply creates and destroys its own separate/private socket, so there is no worrying about who does what to the socket when.
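For illustration, a rough sketch of the one-socket-per-thread variant (the destination address, port, chunk size and thread count below are all placeholders): each thread opens, uses and closes its own private UDP socket, so no locking or shared state is needed.

```c
/* One UDP socket per sender thread (sketch). The server address, port,
 * payload and thread count are placeholders. */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <pthread.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

static void *sender_thread(void *arg)
{
    (void)arg;
    int sock = socket(AF_INET, SOCK_DGRAM, 0);      /* private socket for this thread */
    if (sock < 0) { perror("socket"); return NULL; }

    struct sockaddr_in dst = {0};
    dst.sin_family = AF_INET;
    dst.sin_port = htons(9000);                      /* placeholder port */
    inet_pton(AF_INET, "192.0.2.1", &dst.sin_addr);  /* placeholder address */

    char chunk[1400];                                /* keep datagrams below a typical MTU */
    memset(chunk, 'x', sizeof chunk);
    for (int i = 0; i < 1000; i++) {
        if (sendto(sock, chunk, sizeof chunk, 0,
                   (struct sockaddr *)&dst, sizeof dst) < 0)
            perror("sendto");
    }
    close(sock);                                     /* each thread tears down its own socket */
    return NULL;
}

int main(void)
{
    pthread_t tids[4];
    for (int i = 0; i < 4; i++)
        pthread_create(&tids[i], NULL, sender_thread, NULL);
    for (int i = 0; i < 4; i++)
        pthread_join(tids[i], NULL);
    return 0;
}
```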
So I have this API endpoint called www.example.com/endpoint to which many devices post (I work at an IoT firm). We have implemented our whole backend in NodeJS and are stuck while scaling from 1 device to n devices. The devices post their packets to this API endpoint, from where I execute a complex bit of code (around 1000 lines) and save the state of the device in the database (MongoDB). Now the issue is: whenever I receive a packet from device 1 and I am processing it, and in the middle I get a packet from device 2, NodeJS leaves the device 1 execution as it is and starts serving packet 2 from device 2. I saw this when I put in extensive console.log() statements.
Now, in an ideal world, I would want Node to save the context of my current progress with packet 1, then put packet 2 in a queue to be processed later. Once I am done with packet 1, I would take up packet 2 and process it.
I know libraries like RabbitMQ and kue can store it in a queue and process it later, but how do I context-switch from one execution to another?
This is my way of thinking. There could be other solutions as well. Would like to hear your thoughts on the matter.
Q: How do I implement concurrency or context-switching in NodeJS?
A: Short answer: not possible, because JavaScript is single-threaded.
Q: Now the issue is: whenever I receive a packet from device 1 and I am processing it, and in the middle I get a packet from device 2, NodeJS leaves the device 1 execution as it is and starts serving packet 2 from device 2. I saw this when I put in extensive console.log() statements.
A: As you might have already read in numerous places, NodeJS is based on an event-driven model with non-blocking I/O.
The reason Node seems to have ditched device1 midway to serve device2 is that the code for device1 had already run up to a point where it was just waiting on an asynchronous function to call back, e.g. performing a database write. So, while it was free in the meantime, it went on to service device2.
It is a similar case for device2: once it hits an async function, an event gets pushed onto the event queue, pending a result. Node might then go back to device1 if a response has come back, or on to some other device, deviceN.
We say NodeJS is non-blocking because the node process does not lock the entire web application down for a sole response. Instead it moves on and picks the next event (essentially a block of code) from the queue to run. Hence it is constantly busy, unless there is really nothing left on the event queue.
Q: I know libraries like RabbitMQ and kue can store it in a queue and process it later, but how do I context-switch from one execution to another?
A:
As said earlier, as of 2016 it is still not possible for JavaScript to do threading. NodeJS is not designed for heavy computational work; it should be focused on serving requests, so the code should preferably be light and non-blocking. Basically, you will want to leave heavy I/O duties like writing to files or databases, or making HTTP requests over the network, to other processes, wrapping the calls in async functions.
NodeJS is not a silver-bullet technology. If your application is expected to do a lot of computational work on the event thread, then Node is probably not a good choice of technology, but it is not the end of the world: you can fork your own child process for the heavy computational jobs.
See:
https://nodejs.org/api/child_process.html
You might also want to consider alternatives like Java, which has NIO and threading capabilities.
I have the following situation:
(1) A daemon that does a privileged operation on data that is kept in memory.
(2) A multithreaded server, currently running on about 30 cores, handling user requests.
The daemon (1) would receive queries from the server (2), process them one by one, and return an answer. Each query to (1) would never block and would only take a fraction of a microsecond on (1) to process, so we are guaranteed to get responses back fast unless (1) gets overrun by too much load.
Essentially, I would like to set up a situation where (1) listens on a UNIX domain socket and (2) writes requests and reads responses. However, I would like each thread of (2) to be able to read and write concurrently. My idea is to have one UNIX socket per thread for communication between (1) and (2), and have (1) block in epoll_wait on these sockets, processing requests one by one. Each thread on (2) would then read and write independently to its own socket.
The problem I see with this approach is that I can't easily grow the number of threads on (2) dynamically. Is there a way to accomplish this that stays flexible with respect to runtime configuration? I guess one approach would be to have a large number of sockets; a thread on (2) would pick a socket at random, take a mutex on it, write a query and block waiting for the response, then release the mutex once it gets the response back from (1).
Anyone have better ideas?
I would suggest that a viable possibility is to go with your own proposal and have each thread create its own socket for communicating with the daemon. You can use streaming (TCP) sockets, which easily solve your problem of adding more threads dynamically:
The daemon listens on a particular port, using socket(), bind() and listen(). The socket being listened to is initially the only thing in its epoll_wait set.
The client threads connect to this port with connect().
The daemon server accepts (with accept()) the incoming connection to create a new socket, which is added to its epoll_wait set with epoll_ctl().
The above procedure can be used to arbitrarily add as many sockets as you need, all with a single epoll_wait loop on the daemon side.
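A minimal sketch of the daemon side of those steps (error handling trimmed; the loopback address and port are placeholders): the listening socket and every accepted per-thread connection sit in the same epoll_wait set, and each request is handled in-line since it only takes a fraction of a microsecond.

```c
/* Daemon side (sketch): one epoll set holding the listening socket
 * plus every accepted per-thread connection. */
#include <netinet/in.h>
#include <sys/epoll.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int lfd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);
    addr.sin_port = htons(5555);                   /* placeholder port */
    bind(lfd, (struct sockaddr *)&addr, sizeof addr);
    listen(lfd, SOMAXCONN);

    int epfd = epoll_create1(0);
    struct epoll_event ev = { .events = EPOLLIN, .data.fd = lfd };
    epoll_ctl(epfd, EPOLL_CTL_ADD, lfd, &ev);

    struct epoll_event events[64];
    char req[256];
    for (;;) {
        int n = epoll_wait(epfd, events, 64, -1);
        for (int i = 0; i < n; i++) {
            int fd = events[i].data.fd;
            if (fd == lfd) {
                /* A new client thread connected: add its socket to the set. */
                int cfd = accept(lfd, NULL, NULL);
                struct epoll_event cev = { .events = EPOLLIN, .data.fd = cfd };
                epoll_ctl(epfd, EPOLL_CTL_ADD, cfd, &cev);
            } else {
                /* A request from one of the server's threads: answer in-line. */
                ssize_t r = read(fd, req, sizeof req);
                if (r <= 0) {                       /* thread went away */
                    epoll_ctl(epfd, EPOLL_CTL_DEL, fd, NULL);
                    close(fd);
                } else {
                    write(fd, "ok", 2);             /* placeholder response */
                }
            }
        }
    }
}
```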
I'm looking into transferring an accept()ed socket between processes using sendmsg(). In short, I'm trying to build a simple load balancer that can deal with a large number of connections without having to buffer the stream data.
Is this a good idea when dealing with a large number (let's say hundreds) of concurrent TCP connections? If it matters, my system is Gentoo Linux.
You can share the file descriptor as per the previous answer here.
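For reference, the sending side of that descriptor-passing trick looks roughly like this (a sketch assuming an already-connected AF_UNIX socket; the receiving process mirrors it with recvmsg() and reads the new descriptor out of CMSG_DATA()):

```c
/* Pass an accept()ed socket to another process over an AF_UNIX socket
 * using SCM_RIGHTS ancillary data (sketch; error handling trimmed). */
#include <string.h>
#include <sys/socket.h>
#include <sys/uio.h>

int send_fd(int unix_sock, int fd_to_pass)
{
    char dummy = '*';                         /* at least one byte of real payload */
    struct iovec iov = { .iov_base = &dummy, .iov_len = 1 };

    union {                                   /* aligned buffer for the control message */
        char buf[CMSG_SPACE(sizeof(int))];
        struct cmsghdr align;
    } u;
    memset(&u, 0, sizeof u);

    struct msghdr msg = {0};
    msg.msg_iov = &iov;
    msg.msg_iovlen = 1;
    msg.msg_control = u.buf;
    msg.msg_controllen = sizeof u.buf;

    struct cmsghdr *cmsg = CMSG_FIRSTHDR(&msg);
    cmsg->cmsg_level = SOL_SOCKET;
    cmsg->cmsg_type = SCM_RIGHTS;             /* tells the kernel to duplicate the fd */
    cmsg->cmsg_len = CMSG_LEN(sizeof(int));
    memcpy(CMSG_DATA(cmsg), &fd_to_pass, sizeof(int));

    return sendmsg(unix_sock, &msg, 0) < 0 ? -1 : 0;
}
```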
Personally, I've always implemented servers using pre-fork. The parent sets up the listening socket, spawns (pre-forks) children, and each child does a blocking accept. I used pipes for parent <-> child communication.
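A rough skeleton of that pre-fork setup (port and pool size are placeholders, and the parent-child pipes are omitted): the parent binds and listens, then each forked child blocks in accept() on the inherited listening socket.

```c
/* Pre-forked server skeleton (sketch). The parent listens; the children
 * each block in accept() on the inherited listening socket. */
#include <netinet/in.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <unistd.h>

#define NUM_CHILDREN 4                       /* placeholder pool size */

static void child_loop(int lfd)
{
    for (;;) {
        int cfd = accept(lfd, NULL, NULL);   /* kernel hands each connection to one child */
        if (cfd < 0)
            continue;
        /* ... handle the connection ... */
        write(cfd, "hello\n", 6);            /* placeholder handling */
        close(cfd);
    }
}

int main(void)
{
    int lfd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080);             /* placeholder port */
    bind(lfd, (struct sockaddr *)&addr, sizeof addr);
    listen(lfd, SOMAXCONN);

    for (int i = 0; i < NUM_CHILDREN; i++) {
        if (fork() == 0) {                   /* child inherits lfd */
            child_loop(lfd);
            _exit(0);
        }
    }
    for (;;)
        wait(NULL);                          /* parent just reaps children */
}
```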
Until someone does a benchmark and establishes how "hard" it is to send a file descriptor, this remains speculation (someone might pop up: "Hey, sending the descriptor like that is dirt-cheap"). But here goes.
You will (likely, read above) be better off if you just use threads. You can have the following workflow:
Start a pool of threads that just wait around for work. Alternatively you can just spawn a new thread when a request arrives (it's cheaper than you think)
Use epoll(7) to wait for traffic (wait for connections + interesting traffic)
When interesting traffic arrives you can just dispatch a "job" to one of the threads.
Now, this does circumvent the whole descriptor-sending part. So what's the catch? The catch is that if one of the threads crashes, the whole process crashes. So it is up to you to benchmark and decide what's best for your server.
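For concreteness, a condensed sketch of that epoll-plus-threads workflow (assuming the listening socket is already in the epoll set; it uses the spawn-a-thread-per-request variant, and EPOLLONESHOT keeps two threads from servicing the same connection at once):

```c
/* epoll dispatcher with thread-per-job workers (condensed sketch).
 * EPOLLONESHOT ensures a ready socket is handed to exactly one thread;
 * the worker re-arms it when it is done. */
#include <pthread.h>
#include <stdlib.h>
#include <sys/epoll.h>
#include <sys/socket.h>
#include <unistd.h>

struct job { int epfd; int fd; };

static void *worker(void *arg)
{
    struct job *j = arg;
    char buf[4096];
    ssize_t r = read(j->fd, buf, sizeof buf);
    if (r <= 0) {
        close(j->fd);                          /* peer gone or error */
    } else {
        /* ... process the request here, possibly blocking, without
         * ever stalling the dispatcher thread ... */
        struct epoll_event ev = { .events = EPOLLIN | EPOLLONESHOT, .data.fd = j->fd };
        epoll_ctl(j->epfd, EPOLL_CTL_MOD, j->fd, &ev);   /* re-arm for the next request */
    }
    free(j);
    return NULL;
}

void dispatch_loop(int epfd, int listen_fd)
{
    struct epoll_event events[64];
    for (;;) {
        int n = epoll_wait(epfd, events, 64, -1);
        for (int i = 0; i < n; i++) {
            int fd = events[i].data.fd;
            if (fd == listen_fd) {             /* new connection: watch it one-shot */
                int cfd = accept(listen_fd, NULL, NULL);
                struct epoll_event ev = { .events = EPOLLIN | EPOLLONESHOT, .data.fd = cfd };
                epoll_ctl(epfd, EPOLL_CTL_ADD, cfd, &ev);
            } else {                           /* interesting traffic: hand it to a thread */
                struct job *j = malloc(sizeof *j);
                j->epfd = epfd;
                j->fd = fd;
                pthread_t t;
                pthread_create(&t, NULL, worker, j);
                pthread_detach(t);
            }
        }
    }
}
```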
Personally I would do it the way I outlined it above. Another point: if the workers are children of the process doing the accept, sending the descriptor is unnecessary.
I'm working on an application that is divided into a thin client and a server part, communicating over TCP. We frequently let the server make asynchronous calls (notifications) to the client to report state changes. This avoids the server losing too much time waiting for an acknowledgement from the client. More importantly, it avoids deadlocks.
Such a deadlock can happen as follows. Suppose the server sends the state-changed notification synchronously (please note that this is a somewhat constructed example). When the client handles the notification, the client needs to synchronously ask the server for information. However, the server cannot respond, because it is itself still waiting for the reply to its notification.
Now, this deadlock is avoided by sending the notification asynchronously, but this introduces another problem. When asynchronous calls are made more rapidly than they can be processed, the call queue keeps growing. If this situation is maintained long enough, the call queue will get totally full (flooded with messages). My question is: what can be done when that happens?
My problem can be summarized as follows. Do I really have to choose between sending notifications without blocking at the risk of flooding the message queue, or blocking when sending notifications at the risk of introducing a deadlock? Is there some trick to avoid flooding the message queue?
Note: To repeat, the server does not stall when sending notifications. They are sent asynchronously.
Note: In my example I used two communicating processes, but the same problem exists with two communicating threads.
If the server is sending informational messages to the client, which you yourself say are asynchronous, it should not have to wait for a reply from the client. If they are not informational, in other words they require an answer, I would say a server should never send such messages to a client, and their presence indicates a poor design.
If you have a constant congestion problem, there is little you can do other than gracefully fail and notify the client that no new messages can be posted; then it is up to the client to maintain a backlog of messages to be posted.
Introducing a priority queue and using message expiration/filtering could allow you to free up space in the queue, but that really just postpones the problem. If possible, you could also aggregate messages or ignore duplicate messages, but again the problem does not seem to be the queue itself. (Not to mention that the more complex queue logic could eat up valuable resources that would be better used actually processing messages.)
Depending on what the server side does, you could introduce result hashing for long computations, offload some types of messages to a dedicated device, check if the server waits unreasonably long for I/O operations, and a myriad of other techniques. Profile if possible, at least try to find out which message(s) causes congestion.
Oh, and the business solution: Compare cost of estimated development time to the cost of better hardware and conclude that you should just buy a more powerful server (or an additional one).
Depending on how important these messages are, you might want to look into Message Expiration, or perhaps a Message Filter, though it sounds like your architecture may be incorrect.
I would rather fix the logic on the server side. The message queue should not stall waiting for the answer. Rather, have a state machine that can also receive those info queries while it is waiting for the answer from the client.
Of course you can still flood your message queue, but with TCP you can handle it pretty easily.
The best way, I believe, would be to add another state to your client. This I borrowed from the SMPP protocol specs.
Add a congestion state to the client, whereby it keeps checking its queue length (assuming this is possible). Once a certain threshold is reached, say 1000 unprocessed messages, the client sends the server a message indicating that it is congested, and the server is then required to cease all messaging until it receives a notification indicating that the client is no longer congested.
Alternatively, on the server side, if there is a certain number of pending replies, the server could simply cease sending messages until the client has replied to a certain number of them.
These thresholds can be dynamically calculated or fixed, depending on your requirements.
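As a rough illustration of the server-side variant (the message framing and window size here are hypothetical): the sender counts unacknowledged notifications and simply stops handing new ones to the transport once the window is full, resuming as acknowledgements come in. The sender never blocks, so the deadlock-avoidance property is preserved while the queue stays bounded.

```c
/* Window-based flow control for asynchronous notifications (sketch).
 * Hypothetical framing: the client acknowledges every notification
 * it has finished processing. */
#include <stdbool.h>
#include <stddef.h>

#define MAX_OUTSTANDING 1000          /* arbitrary congestion threshold */

struct notif_channel {
    size_t outstanding;               /* notifications sent but not yet acknowledged */
};

/* Returns true if the notification was handed to the transport,
 * false if the window is full and the caller should queue or drop it. */
bool try_send_notification(struct notif_channel *ch,
                           const void *msg, size_t len,
                           void (*transport_send)(const void *, size_t))
{
    if (ch->outstanding >= MAX_OUTSTANDING)
        return false;                 /* congested: stop sending, never block */
    transport_send(msg, len);         /* asynchronous send, does not wait for the client */
    ch->outstanding++;
    return true;
}

/* Called whenever the client acknowledges one processed notification. */
void on_notification_ack(struct notif_channel *ch)
{
    if (ch->outstanding > 0)
        ch->outstanding--;            /* window reopens; queued notifications may go out */
}
```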