If Redis is single-threaded, how can it be so fast? - multithreading

I'm currently trying to understand some basic implementation details of Redis. I know that Redis is single-threaded, and I have already stumbled upon the following question: Redis is single-threaded, then how does it do concurrent I/O?
But I still think I didn't understand it right. AFAIK, Redis uses the reactor pattern with one single thread. So if I understood this right, there is a watcher (which handles FDs and incoming/outgoing connections) that delegates the work to its registered event handlers. They do the actual work and post, e.g., their responses as events back to the watcher, which transfers the responses back to the clients. But what happens if a request (R1) from one client takes, let's say, about 1 minute, and another client then makes a (fast) request (R2)? Since Redis is single-threaded, R2 cannot be delegated to the right handler until R1 is finished, right? In a multithreaded environment you could just run each handler on its own thread, so the "main" thread only accepts and responds to I/O connections and all other work is carried out on other threads.
If it really just queues the I/O handling and handler logic, it could never be as fast as it is. What am I missing here?

You're not missing anything, besides perhaps the fact that most operations in Redis complete in a couple of microseconds. Long-running operations indeed block the server during their execution.
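For illustration, here is a minimal sketch of that blocking behaviour (my own example, not from the answer), assuming a local Redis with the DEBUG command enabled and the node-redis v4 client: a deliberately slow command issued on one connection delays a normally sub-millisecond PING issued on another.

    const { createClient } = require('redis');

    async function main() {
      const slow = createClient();
      const fast = createClient(); // a second, independent connection
      await Promise.all([slow.connect(), fast.connect()]);

      // DEBUG SLEEP stops the whole server for 3 seconds.
      const blocker = slow.sendCommand(['DEBUG', 'SLEEP', '3']);

      const start = Date.now();
      await fast.ping(); // normally sub-millisecond, here held for ~3 s
      console.log(`PING took ${Date.now() - start} ms`);

      await blocker;
      await Promise.all([slow.quit(), fast.quit()]);
    }

    main();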

Let's say there were 10,000 users doing live data pulls, each issuing an HMGET that takes 10 seconds, while on the other side the server were broadcasting using HMSET; Redis could only execute that set from the last available place in the queue.
Redis is only good for queuing and limited processing, like lazily inserting last-login info, but not for live info broadcasting; in that case memcached would be the right choice. Redis is single-threaded and serves requests FIFO.

Related

Why is Redis single-threaded (event-driven)?

I am trying to understand the basics of Redis.
One thing that keeps coming up everywhere is that Redis is single-threaded, and that this makes things atomic. But I am unable to imagine how this works internally, and I have the following doubt.
Don't we design a server as single-threaded when it is an I/O-bound application (like Node.js), where the thread is freed for another request after initiating an I/O operation and returns data to the client once the I/O operation is finished (providing concurrency)? But in the case of Redis, all data is available in main memory, and we are not going to do I/O operations at all. So why is Redis single-threaded? And what happens if the first request takes too much time? Will the remaining requests have to keep waiting?
TL;DR: Single thread makes redis simpler, and redis is still IO bound.
Memory is I/O. Redis is still I/O bound. When redis is under heavy load and reaches maximum requests per second it is usually starved for network bandwidth or memory bandwidth, and is usually not using much of the CPU. There are certain commands for which this won't be true, but for most use cases redis will be severely I/O bound by network or memory.
Unless memory and network speeds suddenly get orders of magnitude faster, being single-threaded is usually not an issue. If you need to scale beyond one or a few threads (i.e., beyond a master<->slave<->slave setup), you are already looking at Redis Cluster. In that case you can set up a cluster instance per CPU core if you are somehow CPU-starved and want to maximize the number of threads.
I am not very familiar with the redis source or internals, but I can see how using a single thread makes it easy to implement lockless atomic actions. Threads would make this more complex and don't appear to offer large advantages, since redis is not CPU bound. Implementing concurrency at a level above a redis instance seems like a good solution, and is what Redis Sentinel and Redis Cluster help with.
What happens to other requests when Redis takes a long time?
Those other requests will block while Redis completes the long request. If needed, you can test this using the CLIENT PAUSE command.
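As a rough sketch of such a test (assuming a local Redis and the node-redis v4 client; the key name is made up): CLIENT PAUSE suspends command processing for every connection, so a GET issued elsewhere is held until the pause expires.

    const { createClient } = require('redis');

    async function demo() {
      const admin = createClient();
      const user = createClient();
      await Promise.all([admin.connect(), user.connect()]);

      await admin.sendCommand(['CLIENT', 'PAUSE', '2000']); // pause everyone for 2000 ms

      const t = Date.now();
      await user.get('anykey'); // held by the server until the pause ends
      console.log(`GET was held for ${Date.now() - t} ms`);

      await Promise.all([admin.quit(), user.quit()]);
    }

    demo();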
The correct answer is Carl's, of course. However.
In Redis v4 we're seeing the beginning of a shift from being mostly single-threaded to being selectively and carefully multithreaded. Modules and thread-safe contexts are one example of that. Another two are the new UNLINK command and the ASYNC mode for FLUSHDB/FLUSHALL. Future plans are to offload more work that's currently being done by the main event loop (e.g. I/O-bound tasks) to worker threads.
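To make the difference concrete, here is a small sketch (node-redis v4 assumed; the key names are placeholders): DEL reclaims memory synchronously on the main thread, while UNLINK and FLUSHDB ASYNC hand the actual reclamation to a background thread.

    const { createClient } = require('redis');

    async function main() {
      const client = createClient();
      await client.connect();

      await client.del('small:key');    // fine for tiny values: freeing is cheap
      await client.unlink('huge:hash'); // better for big values: O(1) for the caller,
                                        // memory is reclaimed by a worker thread

      // Likewise for flushing; note this empties the current database.
      await client.sendCommand(['FLUSHDB', 'ASYNC']);

      await client.quit();
    }

    main();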
From the Redis website:
Redis uses a mostly single threaded design. This means that a single process serves all the client requests, using a technique called multiplexing. This means that Redis can serve a single request in every given moment, so all the requests are served sequentially. This is very similar to how Node.js works as well. However, both products are not often perceived as being slow. This is caused in part by the small amount of time to complete a single request, but primarily because these products are designed to not block on system calls, such as reading data from or writing data to a socket.
I said that Redis is mostly single threaded since actually from Redis 2.4 we use threads in Redis in order to perform some slow I/O operations in the background, mainly related to disk I/O, but this does not change the fact that Redis serves all the requests using a single thread.
Memory access is not an I/O operation.

How to design a scalable rpc call listener?

I have to listen for RPC calls, stack them somewhere, process them, and answer. The thing is that they are not run as soon as they come in. The response is an ACK for each RPC call received.
The problem is that I want to design it in a way that lets many listening servers write to the same stack of calls, piling them up as they come.
My objective is to listen to as many calls as possible. How should I achieve this?
My main technologies are Perl and Node.js, but I would use any open-source software for this task.
It sounds like any kind of job queue will do what you need; I'm personally a big fan of using Redis for this kind of thing. Since Redis lists maintain insertion order, you can simply LPUSH your RPC call info onto the head of the list from any number of web servers listening for the RPC calls, and somewhere else (in another process/on another machine, I assume) RPOP (or BRPOP) them off the tail and process them.
Since Node.js uses fully asynchronous I/O, assuming you're not doing a lot of processing in your RPC listeners (that is, you're only listening for requests, sending an ACK, and pushing onto Redis), my guess is that Node would be exceedingly efficient at this.
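As a minimal sketch of that pattern (node-redis v4; the key name rpc:queue is made up): each listener LPUSHes incoming calls after ACKing them, and a separate worker BRPOPs them off for processing.

    const { createClient } = require('redis');

    // Producer side: call from each listening server after sending the ACK.
    async function enqueue(client, call) {
      await client.lPush('rpc:queue', JSON.stringify(call));
    }

    // Consumer side: run in a separate worker process/machine.
    async function work() {
      const client = createClient();
      await client.connect();
      for (;;) {
        // BRPOP blocks until an item arrives (timeout 0 = wait forever).
        const item = await client.brPop('rpc:queue', 0);
        const call = JSON.parse(item.element);
        console.log('processing', call); // replace with real processing
      }
    }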
An aside on using Redis for a queue: if you want to ensure that, in the event of a catastrophic failure, jobs are not lost, you'll need to implement a little more logic; from the RPOPLPUSH documentation:
Pattern: Reliable queue
Redis is often used as a messaging server to implement processing of background jobs or other kinds of messaging tasks. A simple form of queue is often obtained by pushing values into a list on the producer side, and waiting for these values on the consumer side using RPOP (with polling), or BRPOP if the client is better served by a blocking operation.
However, in this context the obtained queue is not reliable, as messages can be lost, for example if there is a network problem or if the consumer crashes just after the message is received but before it is processed.
RPOPLPUSH (or BRPOPLPUSH for the blocking variant) offers a way to avoid this problem: the consumer fetches the message and at the same time pushes it into a processing list. It will use the LREM command to remove the message from the processing list once the message has been processed.
An additional client may monitor the processing list for items that remain there for too long, and push those timed-out items into the queue again if needed.
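Here is a sketch of that reliable-queue pattern on the consumer side (node-redis v4 assumed, including its rPopLPush/lRem method names; 'jobs', 'jobs:processing' and handle() are placeholders of mine):

    async function reliableWorker(client) {
      for (;;) {
        // Atomically move the next job into the processing list.
        const job = await client.rPopLPush('jobs', 'jobs:processing');
        if (job === null) { // queue empty: back off briefly and retry
          await new Promise((resolve) => setTimeout(resolve, 100));
          continue;
        }
        try {
          await handle(job); // your actual processing (assumed to exist)
          await client.lRem('jobs:processing', 1, job); // success: forget the job
        } catch (err) {
          // Leave the job in jobs:processing; a monitor can re-queue stale items.
          console.error('job failed, left for recovery:', err);
        }
      }
    }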

nodejs - Why can Node.js handle a large number of simultaneous persistent connections?

I know Node.js is good at keeping a large number of simultaneous persistent connections, for example, a chat room with many, many chatters.
I am wondering how it achieves this. I mean, either way it is using TCP/IP, which is encapsulated by the underlying OS, so why can it handle persistent connections so well when others cannot?
What is the magic thing it has?
Node.js makes all I/O asynchronous. It only runs in a single thread, but will do other requests or operations while waiting on I/O.
In contrast, classical web servers will not serve another request until the previous one is fully done. For this reason, Apache runs several processes at the same time; let's say there are 10 httpd processes, which normally means 10 requests can be served at any one time (*). If the processes take more time to complete, you will serve fewer requests, or will have to spawn more processes, even if a process is doing nothing, like waiting for the database to chew up and return data.
A node.js process, faced with a request that will go to the database, leaves the database to work while it goes to serve another request.
*) MPM makes this not quite true, but true enough for all intents and purposes.
Well, the thing is that most web servers (like Apache etc.) work by thread spawning: they spawn a new thread for every incoming HTTP request. These threads are synchronous and blocking in nature, which means they will execute the code in the order it is written, and any further computation will be blocked by the current I/O or compute task. For example, if you want to listen for an event like a chat submission by a chatter, you need a dedicated thread per user (per user is necessary for maintaining the persistent connection; there are a few possible optimization techniques, but you can still assume threads to be per user) listening for this event, and that thread will be blocked waiting for the event to happen. So for any thread-spawning, blocking web server, the number of persistent connections is limited by the number of threads it can afford to keep alive.
JavaScript, on the other hand, is non-blocking (and conducive to asynchronous code) by nature: you register a callback for an event, and whenever it occurs, the callback function is executed. It will not block at any point waiting for this event.
You can find more about this by reading about non-blocking or asynchronous servers.
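To illustrate with a tiny sketch (plain Node, no external dependencies): a single-threaded process keeps serving new requests while a slow step, simulated here with setTimeout, is still pending for other connections.

    const http = require('http');

    http.createServer((req, res) => {
      // Pretend this is a slow database query: the callback fires 2 seconds
      // later, but meanwhile the event loop keeps accepting and answering
      // other requests on the same single thread.
      setTimeout(() => {
        res.end('done after 2s\n');
      }, 2000);
    }).listen(8080);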

How does node.js work?

I don't understand several things about nodejs. Every information source says that node.js is more scalable than standard threaded web servers due to the lack of thread locking and context switching, but I wonder: if node.js doesn't use threads, how does it handle concurrent requests in parallel? What does the event I/O model mean?
Your help is much appreciated.
Thanks
Node is completely event-driven. Basically the server consists of one thread processing one event after another.
A new request coming in is one kind of event. The server starts processing it and when there is a blocking IO operation, it does not wait until it completes and instead registers a callback function. The server then immediately starts to process another event (maybe another request). When the IO operation is finished, that is another kind of event, and the server will process it (i.e. continue working on the request) by executing the callback as soon as it has time.
So the server never needs to create additional threads or switch between threads, which means it has very little overhead. If you want to make full use of multiple hardware cores, you just start multiple instances of node.js.
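A minimal sketch of starting multiple instances, using Node's built-in cluster module to run one event loop per core, all sharing the same port:

    const cluster = require('cluster');
    const http = require('http');
    const os = require('os');

    if (cluster.isMaster) {
      // Fork one worker (one event loop) per hardware core.
      for (let i = 0; i < os.cpus().length; i++) cluster.fork();
    } else {
      http.createServer((req, res) => {
        res.end(`handled by worker ${process.pid}\n`);
      }).listen(8080);
    }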
Update
At the lowest level (C++ code, not Javascript), there actually are multiple threads in node.js: there is a pool of IO workers whose job it is to receive the IO interrupts and put the corresponding events into the queue to be processed by the main thread. This prevents the main thread from being interrupted.
Although the question was answered a long time ago, I'm putting down my thoughts on the same.
Node.js is a single-threaded JavaScript runtime environment. Basically, the concern of its creator, Ryan Dahl, was that parallel processing using multiple threads is not the right way, or is too complicated.
If Node.js doesn't use threads, how does it handle concurrent requests in parallel?
Ans: It's not right to say it doesn't use threads; Node.js does use threads, but in a smart way. It uses a single thread to serve all the HTTP requests, and multiple threads in a thread pool (in libuv) for handling any blocking operation.
Libuv: a library to handle asynchronous I/O.
What does the event I/O model mean?
Ans: The right term is non-blocking I/O. It almost never blocks, as the Node.js official site says. When a request comes to the Node server, it is never queued: Node takes the request and starts executing it. If it involves a blocking operation, that work is sent to the worker-thread area with a callback registered for it; as soon as the work finishes, the callback is triggered, its event goes to the event queue, and it is processed by the event loop, after which the response is created and sent to the respective client.
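That flow can be seen with a few lines of plain Node: the file read is handed off, the single thread moves on immediately, and the callback runs from the event queue once the I/O has finished.

    const fs = require('fs');

    console.log('1: request received');

    fs.readFile(__filename, 'utf8', (err, data) => {
      // Runs later, via the event loop, once the read completes.
      console.log('3: I/O finished, building the response');
    });

    console.log('2: single thread is already free for other work');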
Node JS is a JavaScript runtime environment. Both the browser and Node JS run on the V8 JavaScript engine. Node JS uses an event-driven, non-blocking I/O model that makes it lightweight and efficient. Node JS applications use a single-threaded event loop architecture to handle concurrent clients. Its main event loop is single-threaded, but most of the I/O work runs on separate threads, because the I/O APIs in Node JS are asynchronous/non-blocking by design, in order to accommodate the main event loop.
Consider a scenario where we request a backend database for the details of user1 and user2 and then print them on the screen/console. The response to this request takes time, but both of the user data requests can be carried out independently and at the same time.
When 100 people connect at once, rather than having different threads, Node will loop over those connections and fire off any events your code should know about. If a connection is new, it will tell you. If a connection has sent you data, it will tell you. If the connection isn't doing anything, it will skip over it rather than taking up precious CPU time on it. Everything in Node is based on responding to these events. So the CPU stays focused on that one process and doesn't have a bunch of threads competing for attention. There is no buffering in a Node.JS application; it simply outputs the data in chunks.
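The user1/user2 scenario above might look like this as a sketch (fetchUser is a hypothetical asynchronous database call): both requests are issued at once, and we print once both have independently completed.

    async function showUsers(fetchUser) {
      const [user1, user2] = await Promise.all([
        fetchUser('user1'),
        fetchUser('user2'), // runs concurrently with the first call
      ]);
      console.log(user1, user2);
    }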
Though it's been answered, I would like to just share my understanding in simple terms.
Node.js uses a library called libuv, which is written in the C language and uses the concept of threads. These threads are called workers, and they take care of the multiple requests from clients.
Parallel processing in Node.js is achieved with the help of two concepts (sketched below):
Asynchronous
Non-blocking IO
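One way to observe those libuv workers (plain Node): crypto.pbkdf2 runs on the libuv thread pool, so several expensive calls overlap even though our JavaScript stays on a single thread.

    const crypto = require('crypto');

    const start = Date.now();
    for (let i = 0; i < 4; i++) {
      crypto.pbkdf2('secret', 'salt', 1e6, 64, 'sha512', () => {
        console.log(`hash ${i} done after ${Date.now() - start} ms`);
      });
    }
    // With the default pool of 4 worker threads, all four calls finish at
    // roughly the same time instead of one after another.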

Redis and Node.js and Socket.io Questions

I have just been learning Redis and Node.js. There are two questions I have for which I couldn't find any satisfying answer.
My first question is about reusing Redis clients within Node.js. I have found this question and answer: How to reuse redis connection in socket.io?, but it didn't satisfy me enough.
Now, if I create the Redis client within the connection event, it will be spawned for each connection. So, if I have 20k concurrent users, there will be 20k Redis clients.
If I put it outside of the connection event, it will be spawned only once.
The answer says that he creates three clients, one for each function, outside of the connection event.
However, from what I know of MySQL, when writing an application which spawns child processes and runs in parallel, you need to create your MySQL client within the function in which you are creating the child instances. If you create it outside of it, MySQL will give a "MySQL server has gone away" error, as the child processes will try to use the same connection. It should be created for each child process separately.
So, even if you create three different Redis clients for each function, if you have 30k concurrent users who send 2k messages concurrently, you should run into the same problem, right? So, every "user" should have their own Redis client within the connection event. Am I right? If not, how do Node.js and Redis handle concurrent requests differently than MySQL? If they have their own mechanism and create something like child processes within the Redis client, why do we need to create three different Redis clients then? One should be enough.
I hope the question was clear.
-- UPDATE --
I have found an answer for the following question. http://howtonode.org/control-flow
No need to answer but my first question is still valid.
-- UPDATE --
My second question is this. I am also not that good at JS and Node.js. So, from what I know, if you need to wait for an event, you need to encapsulate the second function within the first function. (I don't know the terminology yet.) Let me give an example:
socket.on('startGame', function () {
    getUser();
    socket.get('game', function (gameErr, gameId) {
        socket.get('channel', function (channelErr, channel) {
            console.log(user);
            client.get('games:' + channel + '::' + gameId + ':owner', function (err, owner) { // games:channel.32:game.14
                if (owner === user.uid) {
                    // do something
                }
            });
        });
    });
});
So, if I am learning it correctly, I need to nest every function within the previous one whenever I need to wait for an I/O answer. Otherwise, Node.js's non-blocking mechanism will let the first function run and fetch its result in parallel, but the second function might run before that result has arrived if fetching takes time. So, if you are getting a result from Redis, for example, and you will use the result within a second function, you have to encapsulate that function within the Redis get callback; otherwise the second function will run without the result.
So, in this case, if I need to run 7 different functions and an 8th function needs the results of all of them, do I need to write them nested like this? Or am I missing something?
I hope this was clear too.
Thanks a lot,
So, every "user" should have their own redis client within the connection event.
Am I right?
Actually, you are not :)
The thing is that node.js is very unlike, for example, PHP. node.js does not spawn child processes on new connections, which is one of the main reasons it can easily handle large numbers of concurrent connections, including long-lived connections (Comet, WebSockets, etc.). node.js processes events sequentially using an event queue within one single process. If you want to use several processes to take advantage of multi-core servers or multiple servers, you will have to do it manually (how to do so is beyond the scope of this question, though).
Therefore, it is a perfectly valid strategy to use one single Redis (or MySQL) connection to serve a large quantity of clients. This avoids the overhead of instantiating and terminating a database connection for each client request.
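A sketch of that strategy (node-redis v4 and an existing socket.io server instance io are assumed; the key name is made up): the client is created once per process and shared by every connection handler.

    const { createClient } = require('redis');
    const redis = createClient(); // one client for the whole process

    async function start(io) {
      await redis.connect();

      io.on('connection', (socket) => {
        socket.on('startGame', async () => {
          // Every connected user goes through this one shared connection.
          const owner = await redis.get('games:someGame:owner');
          socket.emit('owner', owner);
        });
      });
    }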
So, every "user" should have their own redis client within the
connection event. Am I right?
You shouldn't make a new Redis client for each connected user; that's not the proper way to do it. Instead, just create 2-3 clients max and use them.
For more information checkout this question:
How to reuse redis connection in socket.io?
As for the first question:
The "right answer" might make you think you are fine with one connection.
In reality, whenever you are doing something that waits on I/O, a timer, etc., you are actually making Node put the waiting callback on the queue. Hence, if you use only one single connection, you will actually limit the performance of the thread you are working on (a single CPU) to the speed of Redis, which is probably a few hundred callbacks per second (callbacks not waiting on Redis will still go on). While this is not poor performance, there's no reason to create this kind of limitation. It is recommended to create a few (5-10) connections to avoid this issue entirely. This number goes up for slower databases, e.g. MySQL, but depends on the type of queries and the code specifics.
Do note that you should run a few workers on your server, per the number of CPUs you have, for best performance.
In regards to the 2nd question:
It is much better practice to name the functions, one after the other, and use the names in the code rather than defining each one inline as you go. In some situations, it will also reduce memory consumption.
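Applied to the code from the question, that advice might look like the sketch below (socket, client, getUser and user are assumed from the original snippet): each step becomes a named function instead of one more level of nesting.

    function onStartGame() {
      getUser();
      socket.get('game', onGame);
    }

    function onGame(gameErr, gameId) {
      socket.get('channel', function (channelErr, channel) {
        onChannel(channel, gameId); // pass along what the later steps need
      });
    }

    function onChannel(channel, gameId) {
      console.log(user);
      client.get('games:' + channel + '::' + gameId + ':owner', onOwner);
    }

    function onOwner(err, owner) {
      if (owner === user.uid) {
        // do something
      }
    }

    socket.on('startGame', onStartGame);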
