I am using RabbitMQ (cluster) and connecting to it using a Node.js client (node-amqp - https://github.com/postwait/node-amqp).
The RabbitMQ docs state that a failover scenario (the failure of a cluster node) should be handled by the client, meaning the client should detect the failure and connect to another node in the cluster.
What is the simplest way to support this failover? Does the node-amqp client support it?
Any example or solution will be appreciated.
Thanks.
node-amqp supports multiple server hosts in the connection options, so pass host: as an array of hosts. (Unfortunately only the host option accepts an array, so other parameters like port and authentication have to match across your RabbitMQ servers.)
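A minimal sketch of such connection options (the hostnames and credentials below are placeholders; as I understand it, node-amqp moves on to the next host in the array when a connection attempt fails):

```javascript
// Connection options for node-amqp with `host` as an array.
// Port and credentials must be identical on every cluster node.
const options = {
  host: ['rabbit1.example.com', 'rabbit2.example.com'], // failover list
  port: 5672,
  login: 'guest',
  password: 'guest',
  reconnect: true // re-establish the connection after a node failure
};

// With node-amqp installed:
//   const amqp = require('amqp');
//   const connection = amqp.createConnection(options);
//   connection.on('ready', () => { /* declare queues/exchanges here */ });
```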
I have a situation involving redis-cluster. We want to move to redis-cluster for high availability. Currently we have one transaction server, and we are using Redis for managing mini-statements. We have a single instance of Redis running on the default port, bound to 0.0.0.0. In my transaction server I have a configuration file in which I put the Redis IP and port for the connection.
My Question:
1) Suppose I have two machines running a Redis server, and I want that if one machine dies, my transaction server automatically uses the second machine, with all the keys still available. What IP and port should I configure in my transaction server's config file, and what Redis setup is needed to achieve this goal?
A suggestion or a link would be helpful!
If you're looking for a high-availability solution for Redis, you should look into Redis Sentinel rather than Redis Cluster.
Redis Sentinel offers exactly what you need; see the official documentation for more information.
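For reference, a minimal sentinel.conf sketch (the IPs, the master name mymaster, and the timeouts are placeholders to adapt):

```conf
# One copy per Sentinel process. "mymaster" is an arbitrary name your
# client uses when asking Sentinel for the current master's address.
sentinel monitor mymaster 192.168.1.10 6379 2
sentinel down-after-milliseconds mymaster 5000
sentinel failover-timeout mymaster 60000
```

Run at least three Sentinel processes so they can reach quorum on a failover decision. Your transaction server's config file then lists the Sentinel host:port pairs instead of a fixed Redis IP; the client asks Sentinel for the current master's address and is redirected automatically after a failover (most Redis client libraries support this).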
I have Strimzi Kafka cluster set-up successfully on OpenShift. I can see the following services:
kafka-brokers
kafka-bootstrap
zookeeper-client
zookeeper-nodes
This is actually different from what is called out here, so I'm not sure if this is a Strimzi installation issue. I followed the installation steps from here.
I created routes for kafka-bootstrap and kafka-brokers on port 9092 (non-TLS clients). In both cases, I get an ECONNREFUSED error when I provide the route value (e.g. my-cluster-myproject.192.168.99.100.nip.io:9092) in the example from kafkajs.
How do I get the kafkajs package connected to the Strimzi cluster?
The Strimzi services that you are seeing are correct, but to access the Kafka brokers it's better to use the bootstrap service. It lets you specify only one "broker" in your client's bootstrap servers list; the client connects through it the first time and then fetches metadata. (This avoids connecting through the headless service, where pod IPs can change on restart.)
That said, if you need to access the brokers from outside OpenShift, you don't have to create a route for the bootstrap service manually; instead, configure external listeners (https://strimzi.io/docs/latest/#assembly-configuring-kafka-broker-listeners-deployment-configuration-kafka) with type route.
As already mentioned above, OpenShift routes only support TCP connections over TLS.
To provide your clients with the right certificate to use for TLS, you can follow this part of the documentation: https://strimzi.io/docs/latest/#kafka_client_connections
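Based on the linked docs, the external listener is declared in the Kafka custom resource; a sketch (the exact schema can differ between Strimzi versions):

```yaml
apiVersion: kafka.strimzi.io/v1beta1
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ... replicas, storage, etc.
    listeners:
      plain: {}
      tls: {}
      external:
        type: route   # Strimzi creates the bootstrap and per-broker routes for you
  # ... zookeeper, entityOperator, etc.
```

Strimzi then creates the routes itself; your kafkajs client connects to the bootstrap route's hostname on port 443 with TLS enabled, using the cluster CA certificate (found in the my-cluster-cluster-ca-cert secret, if I recall the naming correctly) in its ssl configuration.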
Have you checked out external listeners?
BTW, if you need to expose Strimzi through a route, TLS encryption is necessary: the OpenShift router does not support plain TCP, only TLS.
I'm making a Node.js application that will act as a server for other sites in different countries, and the data being transmitted will be business-related. I would like to know how I can safely/securely send this data.
I am currently using socket.io as my main server (master); at the other sites there are (slave) servers that handle the data from the master server.
I have got this working in a local environment but want to deploy it to the other sites.
I have tried to Google whether anyone else has done this, but only came across socket.io sessions, and I don't know if that fits (server->server) connections.
Any help or experience would be appreciated.
For server-to-server communication where you control both ends, you can use WebSocket over HTTPS, TCP over an SSH tunnel, or any other encrypted tunnel. You can use a pub/sub service, a queue service, etc. There are a lot of ways to do it. Just make sure that the communication is encrypted, either natively by the protocols you use or with a VPN or tunnels that connect your servers in remote locations.
Socket.io is usually used as a replacement for WebSocket where there is no native support in the browser. It is rarely used for server-to-server communication. See this answer for more details:
Differences between socket.io and websockets
If you want a higher level framework with focus on real-time data then see ActionHero:
https://www.actionherojs.com/
For other options for sending real-time data between servers, you can use a shared resource like a Redis database, a pub/sub service like Faye or Kafka, or a queue like ZeroMQ or RabbitMQ. This is what is usually done to make things like that work across multiple instances of the server or multiple locations. You could also use a CouchDB changes feed, or the similar changefeeds feature of RethinkDB, to make sure that all of your instances get all the data as soon as it is posted by any one of them. See:
http://docs.couchdb.org/en/2.0.0/api/database/changes.html
https://rethinkdb.com/docs/changefeeds/javascript/
https://redis.io/topics/pubsub
https://faye.jcoglan.com/
https://kafka.apache.org/
Everything that uses HTTP is easy to encrypt with HTTPS. Everything else can be encrypted with a tunnel or VPN.
Good tools that can add encryption for protocols that are not encrypted themselves (like e.g. the Redis protocol) are:
http://www.tarsnap.com/spiped.html
https://www.stunnel.org/index.html
https://openvpn.net/
https://forwardhq.com/help/ssh-tunneling-how-to
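As a concrete example of tunneling an unencrypted protocol, a stunnel sketch for Redis (addresses, ports, and the cert path are placeholders; each machine's stunnel.conf would contain only its own section):

```conf
# Server side: terminates TLS in front of the real Redis on localhost:6379.
[redis-server]
accept  = 0.0.0.0:6390
connect = 127.0.0.1:6379
cert    = /etc/stunnel/redis.pem

# Client side: apps connect to 127.0.0.1:6379 as if Redis were local;
# stunnel forwards the traffic over TLS to the remote server.
[redis-client]
client  = yes
accept  = 127.0.0.1:6379
connect = redis.example.com:6390
```

For real deployments you would also configure certificate verification on the client side (verify/CAfile), otherwise the tunnel is encrypted but not authenticated.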
See also:
https://en.wikipedia.org/wiki/Tunneling_protocol
Note that some hosting services may give you preconfigured tunnels or internal network interfaces that pass data encrypted between your servers located in different data centers of that provider. Some providers also give you tools and tutorials to do that easily.
I have spun up 3 Node instances using pm2. They are all running a WebSocket server on these ports: 9300, 9301, and 9302.
My main server acts as a nginx load balancer. The nginx upstream block:
upstream websocket {
least_conn;
server 127.0.0.1:9300;
server 127.0.0.1:9301;
server 127.0.0.1:9302;
}
After 10 players have connected, they are distributed in round-robin fashion. I am also using Redis Pub/Sub across all the Node instances.
I am curious if it's possible for a connected player that is on instance 9300 to switch to 9302 while not losing their connection?
The reasoning is that my game is instance-based. I have "games", if you will, that players can create or join. If I could get the connected players for a game onto the same Node instance, I would cut out all the extra Pub/Sub signalling and achieve better latency. (Or so I think, but I'm just curious if this is possible.)
I am curious if it's possible for a connected player that is on instance 9300 to switch to 9302 while not losing their connection?
No, it is not possible. A TCP socket is a connection between two specific endpoints, and it cannot be moved from one endpoint to another after it is established. There are very good security reasons why this is prohibited (so connections can't be hijacked).
The usual way around this is for the server to tell the client to reconnect, giving it instructions for how to connect to the particular server you want it on (e.g. a specific port, a specific hostname, or whatever other means your load balancer might use).
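A sketch of that pattern; the message shape and helper names here are made up for illustration, and the instruction travels over the player's existing connection:

```javascript
// Server side: ask a client to move to the instance hosting its game.
// (Sent over the current WebSocket before the move.)
function reconnectInstruction(host, port) {
  return JSON.stringify({ type: 'reconnect', host, port });
}

// Client side: on receiving such a message, compute the new endpoint,
// open a new socket to it, and close the old one only once the new
// connection is ready (so no messages are dropped during the switch).
function targetUrl(msg, fallbackHost) {
  const { host, port } = JSON.parse(msg);
  return `ws://${host || fallbackHost}:${port}`;
}
```

In the setup above, the instance on port 9300 would send reconnectInstruction('game.example.com', 9302) to every player joining a game hosted on the 9302 instance.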
I currently am creating a horizontally scalable socket.io server which looks like the following:
LoadBalancer (nginx)
Proxy1 Proxy2 Proxy3 Proxy{N}
BackEnd1 BackEnd2 BackEnd3 BackEnd4 BackEnd{N}
My question is: with the socket.io-redis module, can I send a message from one of the backend servers to a specific socket connected to one of the proxy servers, if they are all connected to the same Redis server? If so, how do I do that?
Since you want to scale the socket.io server and you are using nginx as the load balancer, don't forget to set up sticky load balancing; otherwise a single client's requests will be routed to different servers as the load balancer passes connections among the socket.io instances.
With the Redis socket.io adapter, you can send and receive messages across one or more socket.io servers, with the help of Redis's Pub/Sub implementation.
If you tell me which technology is used for the proxy and backend servers, I can give you more specific information.
Using the socket.io-redis module, all of your backend servers will share the same pool of connected users. You can emit from Backend1, and if a client is connected to Backend4 it will get the message.
The key to making this work with socket.io, though, is to use sticky sessions on nginx so that once a client connects, it stays on the same machine. This is because socket.io begins each connection with several long-polling requests before upgrading to a WebSocket, and they all need to reach the same backend server to work correctly.
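A sketch of sticky sessions in open-source nginx using the built-in ip_hash method (the hostnames and ports are placeholders; note that ip_hash replaces other balancing methods such as least_conn in the upstream block):

```nginx
# Each client IP is pinned to one backend, so all of a socket.io
# client's polling requests and its WebSocket hit the same instance.
upstream socketio_backend {
    ip_hash;
    server backend1.example.com:3000;
    server backend2.example.com:3000;
}

server {
    listen 80;

    location /socket.io/ {
        proxy_pass http://socketio_backend;
        proxy_http_version 1.1;
        # Required for the WebSocket upgrade to pass through the proxy:
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
    }
}
```

ip_hash is the simplest option but pins by IP, so clients behind one NAT all land on the same backend; cookie-based stickiness (available in nginx Plus or via third-party modules) distributes more evenly.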
Instead of sticky sessions, you can change your client connection options to use WebSocket ONLY. This removes the problem of multiple requests hitting multiple servers, as there will be only one connection, the single WebSocket. The trade-off is that your app loses the ability to fall back to long-polling when WebSockets are unavailable.
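With the socket.io client this is done via the transports option; a sketch (the server URL is a placeholder):

```javascript
// Force the socket.io client to use only the WebSocket transport,
// skipping the initial HTTP long-polling phase entirely. With a single
// persistent connection, sticky sessions are no longer required.
const clientOptions = {
  transports: ['websocket']
};

// With socket.io-client installed:
//   const socket = io('https://game.example.com', clientOptions);
```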