AWS ElastiCache creates a Redis cluster by default.
I'm using Node.js with ioredis.
My question is: if I call hgetall, will it automatically query all nodes in the cluster?
Or is there something else I need to do?
You don't need to query all nodes. Use Redis.Cluster to connect to the cluster, and it will send the command to the right node.
A decent client library for Redis Cluster should implement MOVED and ASK redirection. The end user of the client library should NOT have to worry about where a key is located.
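For example, with ioredis it looks roughly like this (the configuration endpoint and key name are placeholders):

    const Redis = require("ioredis");

    // Connect through Redis.Cluster so ioredis discovers all nodes and routes
    // key-based commands (like HGETALL) to the node that owns the key's slot.
    const cluster = new Redis.Cluster([
      { host: "my-cluster.xxxxxx.clustercfg.use1.cache.amazonaws.com", port: 6379 },
    ]);

    async function main() {
      // No manual fan-out needed: ioredis sends HGETALL to the owning node.
      const hash = await cluster.hgetall("user:42");
      console.log(hash);
    }

    main().catch(console.error);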
Update: I think what I said here is incorrect; see the other answer. For a command like HGETALL, the client library knows where to send it.
From the docs, emphasis mine:
Every command will be sent to exactly one node. For commands containing keys, (e.g. ... HGETALL), ioredis sends them to the node that [sic] serving the keys, and for other commands not containing keys, (e.g. INFO, KEYS and FLUSHDB), ioredis sends them to a random node.
No, not automatically. You would need to send hgetall to each node in the cluster yourself. There is a nodes() utility function that returns an array of nodes to facilitate this.
The documentation answers this explicitly and provides an example; see https://github.com/luin/ioredis#running-commands-to-multiple-nodes.
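Roughly, based on that documentation (the endpoint and key pattern are placeholders):

    const Redis = require("ioredis");

    // A Redis.Cluster connection; the endpoint is a placeholder.
    const cluster = new Redis.Cluster([{ host: "my-cluster.example.com", port: 6379 }]);

    async function keysAcrossCluster(pattern) {
      // nodes("master") returns one connection per master node; run the
      // command on each node and merge the results.
      const masters = cluster.nodes("master");
      const results = await Promise.all(masters.map((node) => node.keys(pattern)));
      return results.flat();
    }

    keysAcrossCluster("user:*").then(console.log).catch(console.error);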
I'm working with a Redis cache in a Node.js application and trying to use separate clients for read and write operations.
When using replicas, the primary node handles both writes and reads, while the replica nodes serve only reads by default. When using the primary endpoint, we can see that traffic is not shared equally among the nodes, so I would like to configure separate read and write clients in the application to make better use of the nodes.
I currently have two Redis nodes, one responsible for reads and the other for writes. I was trying to find out whether there is any option in the createClient method for passing separate read and write endpoints, but when searching through the configuration I could not find any such property.
Could anyone share whether there is a configuration that tells the Redis client to use different endpoints for reads and writes when it is created, or whether the same can be achieved with another approach?
I'm using the Redis (node-redis) package from npm.
So far I haven't found any proper way to handle this configuration. Most suggestions are to manually check the command type and choose the endpoint accordingly.
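For context, the manual approach I've seen suggested looks roughly like this with node-redis v4 (the endpoint URLs and key are placeholders):

    const { createClient } = require("redis");

    // One client pointed at the primary endpoint for writes, one at the
    // reader endpoint for reads.
    const writer = createClient({ url: "redis://my-primary.xxxxxx.cache.amazonaws.com:6379" });
    const reader = createClient({ url: "redis://my-reader.xxxxxx.cache.amazonaws.com:6379" });

    async function main() {
      await Promise.all([writer.connect(), reader.connect()]);

      // Route commands manually: writes go to the primary, reads to the replicas.
      await writer.set("page:home", "cached html");
      const cached = await reader.get("page:home");
      console.log(cached);
    }

    main().catch(console.error);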
I'm using Ruby on Rails as the base for an online shop, with a Redis client library gem. After an alert from my hosting provider I decided to secure Redis and flush the entire DB in order to rebuild the cache.
But something strange is happening, because after running:
127.0.0.1:6379> FLUSHALL
OK
and then checking for existing keys, I got:
127.0.0.1:6379> KEYS *
1) "processes"
2) "mydomain.com:5digitport:strangehash"
I'm not a Redis expert, but I think something is wrong with my Redis instance.
Has anyone faced this problem, and how should I solve it?
Your app (or another one) is still connecting to Redis and writing keys. Inspect CLIENT LIST or netstat for connections.
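For example, from redis-cli you can list the connected clients, or briefly watch incoming commands with MONITOR (it is verbose, so don't leave it running):

    127.0.0.1:6379> CLIENT LIST
    127.0.0.1:6379> MONITOR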
Perhaps you are using a hosting provider that deploys Redis for you, and they've stored some config details in your Redis instance. If so, you may not be able to delete these keys, and you can just ignore them.
I have an app that receives data from several sources in realtime using logins and passwords. After the data is received it's stored in an in-memory store and replaced when new data is available. I also use sessions backed by MongoDB to authenticate user requests. The problem is that I can't scale this app using pm2, since I can only use one connection to my data source per login/password pair.
Is there a way to use a different login/password for each cluster worker, or to get the cluster worker ID inside the app?
Are in-memory values/sessions shared between cluster workers, or are they separate? Thank you.
So if I understood this question, you have a Node.js app that connects to a third party using HTTP or another protocol, and since you only have a single credential, you cannot connect to that third party from more than one instance. To answer your question: yes, it is possible to set up your cluster workers to each use a unique user/password combination; the tricky part would be how to assign these credentials to each worker (assuming you don't want to hard-code them). You'd have to do this assignment when the servers start up, and perhaps use a data store to hold the credentials and introduce some sort of locking mechanism for each credential (so that each credential is unique to a particular instance).
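One way to do that assignment is to key off the instance index that pm2 exposes in cluster mode via the NODE_APP_INSTANCE environment variable; here's a rough sketch with a hypothetical credentials list:

    // Hypothetical list of credentials, one per pm2 instance.
    const credentials = [
      { user: "feed-user-1", pass: "secret-1" },
      { user: "feed-user-2", pass: "secret-2" },
    ];

    // pm2 sets NODE_APP_INSTANCE to 0, 1, 2, ... in cluster mode.
    const instanceId = Number(process.env.NODE_APP_INSTANCE || 0);
    const myCredential = credentials[instanceId % credentials.length];

    console.log(`Instance ${instanceId} connecting as ${myCredential.user}`);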
If I were in your shoes, however, I would create a new server whose sole job is to fetch this "realtime data" and store it somewhere available to the cluster, such as Redis or some other persistent store. It would be a standalone server that just fetches the data. You can also attach a RESTful API to it, so that if your other servers need to communicate with it, they can do so via HTTP or a message queue (again, Redis would work fine there as well).
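A minimal sketch of that split, assuming Redis via ioredis and a made-up key name:

    const Redis = require("ioredis");
    const redis = new Redis(); // defaults to localhost:6379

    // fetcher.js -- runs as a single process and owns the one upstream connection.
    async function storeLatest(data) {
      await redis.set("realtime:latest", JSON.stringify(data));
    }

    // worker.js -- runs in pm2 cluster mode; any instance can serve the data.
    async function getLatest() {
      const raw = await redis.get("realtime:latest");
      return raw ? JSON.parse(raw) : null;
    }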
'Realtime' is vague; are you using WebSockets? Frequent-enough HTTP requests could also be considered 'realtime'.
Possibly your problem is like something we encountered scaling SocketStream (WebSockets) apps, where the persistent connection requires that the same requests be routed to the same process. (There are other network topologies/architectures which don't require this, but that's another topic.)
You'll need to use fork mode with one process only, plus a solution to make sessions sticky, e.g.:
https://www.npmjs.com/package/sticky-session
I have some example code but need to find it (it's been over a year since I deployed it).
Basically you wind up just using pm2 for its 'always-on' feature; the sticky-session module handles the Node clustering.
I may post an example later.
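In the meantime, here's a rough sketch along the lines of the sticky-session README (the port and request handler are placeholders):

    const http = require("http");
    const sticky = require("sticky-session");

    const server = http.createServer((req, res) => {
      res.end("handled by worker " + process.pid);
    });

    // sticky.listen() forks the workers itself and routes each client's
    // connections to the same worker, so in-memory session state stays put.
    if (!sticky.listen(server, 3000)) {
      // Master process
      server.once("listening", () => console.log("server started on port 3000"));
    } else {
      // Worker process: the request handler above runs here.
    }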
I'm trying to create a Flash app with some real-time functionality, and would like to use Redis' pub/sub functionality, which is a perfect fit for what I need.
I know that connecting to a data store directly from the client is almost always a bad idea. What are the security implications of this (since I'm not an expert on Redis), and are there ways to work around them? From what I've read, there is a possible exploit where CONFIG SET is used to change the RDB file location, allowing arbitrary files to be overwritten. Is there anything else? (Assuming I don't use that particular Redis instance for anything at all, i.e. no data is stored in it.)
I understand the alternative is to write some custom socket server program and have it act as the mediating layer for connecting to Redis and issuing commands; that's the work I'd like to avoid writing, if possible.
Edit:
I just learned about the rename-command configuration directive for disabling commands. If I disable every single command on the Redis instance and leave only SUBSCRIBE and PUBLISH open, would this be good enough to run in production?
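For reference, disabling commands via rename-command in redis.conf looks like this (renaming to an empty string removes the command entirely; these particular commands are just an illustration):

    rename-command CONFIG ""
    rename-command FLUSHALL ""
    rename-command SET ""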
I think it would be a bad idea to connect your client directly to Redis. Redis offers an authentication system for a single user only; it expects that user to be your server app.
From my point of view, directly exposing Redis is always a bad idea: it would allow anybody to access all of your data. This is confirmed by the Redis docs.
So you won't be able to avoid developing the server side of your app.
In my Node.js app, I'm using Redis keys as channel names. I want a client to subscribe to a channel only if the corresponding key exists. The problem is that between an EXISTS command and a SUBSCRIBE command, another client may remove an existing key. I can't use WATCH/MULTI/EXEC to make it atomic because I can't use SUBSCRIBE within a MULTI-EXEC block. I can't use a Lua script either.
Is there any way to maintain atomicity in this case?
It seems impossible with the current version of Redis. I switched to a different approach that does not require an atomic subscribe.