We use Hazelcast (2.3) in a Web backend running in a Java servlet container to distribute data in a cluster. Hazelcast maps are persisted in a MySQL database using a MapStore interface. Right now we are using the Java native client interface, and I wonder what the difference is between a "native" client and the embedded version when it comes to performance.
Is it correct that a "native" client might connect to any of the cluster nodes and that this decision is made again for every single request?
Is it correct that the overhead of sending all requests and responses through a TCP socket in a native client is avoided when the embedded version is used?
Is it fair to conclude that the embedded version is in general faster than the "native" client?
In the case of a "native" client: is it correct that the MapStore implementation is part of the Hazelcast server (as a class at runtime)? Or is it part of the "native" client, so that all data to be persisted has to be sent through the TCP socket first?
You give the native client a set of nodes to connect to. Once it connects to one of them, it uses that node for all communication with the cluster until that node dies. When it dies, the client connects to another node to continue communication.
With the native client there are two hops: one from the client to the node it is connected to, and one from that node to the target node (the target node being the node where the wanted data is located). With the embedded version there is a single hop, because the member already knows which node holds the wanted data.
Yes, generally, but see this note from the Hazelcast documentation:
LiteMember is a member of the cluster, it has socket connection to every member in the cluster and it knows where the data is so it will get to the data much faster. But LiteMember has the clustering overhead and it must be on the same data center even on the same RAC. However Native client is not member and relies on one of the cluster members. Native Clients can be anywhere in the LAN or WAN. It scales much better and overhead is quite less. So if your clients are less than Hazelcast nodes then LiteMember can be an option; otherwise definitely try Native Client. As a rule of thumb: Try Native client first, if it doesn't perform well enough for you, then consider LiteMember.
4 - Store operations are executed on the Hazelcast server side. The object sent from the client is persisted to the centralized datastore by the target node, which also stores the object in its memory.
I currently have a Node server running that works with MongoDB. It handles some HTTP requests, but it largely uses WebSockets. Basically, the server connects multiple users to rooms with WebSockets.
My server currently has around 12k WebSockets open, and that is almost crippling my single-threaded server; now I'm not sure how to convert it over.
The server holds HashMap variables for the connected users and rooms. When a user performs an action, the server often references those HashMap variables. So I'm not sure how to use clusters here. I thought about maybe creating a thread for every WebSocket message, but I'm not sure if this is the right approach, and it would not be able to access the HashMaps for the other users.
Does anyone have any ideas on what to do?
Thank you.
You can look at the socket.io-redis adapter for architectural ideas or you can just decide to use socket.io and the Redis adapter.
It moves the equivalent of your HashMaps into a separate-process Redis in-memory database so that all clustered processes can get access to them.
The socket.io-redis adapter also supports higher-level functions, so you can emit to every socket in a room with one call: the adapter finds where everyone in the room is connected, contacts each of those cluster servers, and has them deliver the message.
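If it helps, here is a minimal sketch of that setup, assuming the socket.io-redis package and a Redis instance on localhost (the port and room name are just examples):

    const http = require('http');
    const socketio = require('socket.io');
    const redisAdapter = require('socket.io-redis');

    const server = http.createServer().listen(3000);
    const io = socketio(server);

    // Point every clustered process at the same Redis instance so room
    // membership and emits are shared across processes.
    io.adapter(redisAdapter({ host: '127.0.0.1', port: 6379 }));

    io.on('connection', (socket) => {
      socket.join('room42');                 // room membership goes through the adapter
      io.to('room42').emit('news', 'hello'); // reaches members connected to other processes too
    });

With this in place, each clustered process runs the same code and the adapter takes care of routing emits to the process that actually holds a given socket.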
I thought maybe creating a thread for every WebSocket message, but I'm not sure if this is the right approach, and it would not be able to access the HashMaps for the other users
Threads in node.js are not lightweight (each has its own V8 instance), so you will not want a Node.js thread per WebSocket connection. You could group a certain number of WebSocket connections onto a web worker, but at that point it is likely easier to use clustering, because Node.js will handle the distribution across the cluster workers for you automatically, whereas you would have to do that yourself for your own web worker pool.
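If you go the clustering route, a minimal sketch with the built-in cluster module looks like this (for WebSockets you would still add sticky routing or a shared adapter such as the Redis one above, so that connections and shared state survive being spread across workers):

    const cluster = require('cluster');
    const http = require('http');
    const os = require('os');

    if (cluster.isMaster) {
      // Fork one worker per CPU core; the master only supervises.
      os.cpus().forEach(() => cluster.fork());
      cluster.on('exit', () => cluster.fork()); // replace crashed workers
    } else {
      // Each worker runs its own server; connections are distributed for you.
      http.createServer((req, res) => {
        res.end(`handled by worker ${cluster.worker.id}`);
      }).listen(3000);
    }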
If I use the JDBC approach, I am able to achieve connection pooling using a third-party library (Apache DBCP).
I am using the client-based approach, and VoltDB does not expose a connection object. How do I implement connection pooling?
Is there any such mechanism for the client-based approach?
The client-based approach is a lighter-weight yet more powerful API than JDBC.
The Client object should be connected to each of the servers in the cluster. Alternatively, you can set the "TopologyChangeAware" property to true on the ClientConfig object prior to creating the Client object; then you connect the client to any server in the cluster and it will create connections to all the others automatically.
The application will then interact with the database using this Client object, which has connections, rather than using a JDBC Connection object. Since the Client object is thread-safe and can support multiple simultaneous invocations of callProcedure() on multiple threads, there is no need to create a pool of Clients.
For more details on the Client interface, see Using VoltDB, Chapter 6: Designing VoltDB Client Applications.
Disclaimer: I work for VoltDB.
I have an app that receives data from several sources in real time using logins and passwords. After data is received it is stored in a memory store and replaced when new data is available. I also use sessions with mongo-db to authenticate user requests. The problem is that I can't scale this app using pm2, since I can use only one connection to my data source per login/password pair.
Is there a way to use a different login/password for each cluster worker, or to get the cluster/worker ID inside the app?
Are memory values and sessions shared between cluster workers, or are they separate? Thank you.
So if I understood this question, you have a Node.js app that connects to a third party using HTTP or another protocol, and since you only have a single credential, you cannot connect to said third party from more than one instance. To answer your question: yes, it is possible to set up your cluster workers so that each uses a unique user/password combination. The tricky part is how to assign these credentials to each worker (assuming you don't want to hard-code them). You'd have to do this assignment when the servers start up, and perhaps use a data store to hold the credentials and introduce some sort of locking mechanism for each credential, so that each credential is unique to a particular instance.
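As a rough sketch of that first option: pm2 exposes an instance index to each process (by default via the NODE_APP_INSTANCE environment variable, but check your pm2 version and config), so you could use it to pick a credential from a list. The credentials array here is obviously just a placeholder:

    // Pick a login/password per pm2 instance.
    // NODE_APP_INSTANCE is the per-instance index pm2 sets in cluster mode.
    const instanceId = parseInt(process.env.NODE_APP_INSTANCE || '0', 10);

    const credentials = [
      { login: 'user0', password: 'pass0' },
      { login: 'user1', password: 'pass1' },
      { login: 'user2', password: 'pass2' },
    ];

    const myCredential = credentials[instanceId % credentials.length];
    console.log(`worker ${instanceId} connects to the data source as ${myCredential.login}`);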
If I were in your shoes, however, I would create a new server whose sole job is to get this "realtime data" and store it somewhere available to the cluster, such as Redis or some other persistent store. It would be a standalone server, just getting this data. You can also attach a RESTful API to it, so that if your other servers need to communicate with it, they can do so via HTTP or a message queue (again, Redis would work fine there as well).
'Realtime' is vague; are you using WebSockets? If HTTP requests are being made often enough, that could also be considered 'realtime'.
Possibly your problem is like something we encountered scaling SocketStream (WebSockets) apps, where the persistent connection requires that a client's requests are routed to the same process. (There are other network topologies/architectures which don't require this, but that's another topic.)
You'll need to use fork mode with one process only, plus a solution to make sessions sticky, e.g.:
https://www.npmjs.com/package/sticky-session
I have some example code but need to find it (it's over a year since I deployed it).
Basically you wind up using pm2 just for its 'always-on' feature; the sticky-session module handles the Node clusterisation.
I may post an example later.
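In the meantime, a rough sketch of the sticky-session approach (from memory, so check the module's README for the exact current API):

    const http = require('http');
    const sticky = require('sticky-session');

    const server = http.createServer((req, res) => {
      res.end(`handled by pid ${process.pid}`);
    });

    // sticky.listen() returns false in the master (which just routes incoming
    // connections by client IP) and true in each worker, so only workers run
    // the actual server logic.
    if (!sticky.listen(server, 3000)) {
      server.once('listening', () => console.log('server started on port 3000'));
    } else {
      console.log(`worker ${process.pid} started`);
    }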
I am building a web app which has two parts. In one part it uses a real-time connection between the server and the client, and in the other part it does some CPU-intensive work to provide relevant data.
I am implementing the real-time communication in Node.js and the CPU-intensive part in Python/Java. What is the best way for the Node.js server to take part in duplex communication with the other server?
For a basic solution you can use Socket.IO if you are already using it and know how it works. It will get the job done, since it allows communication between a client and a server where the client can be a different server written in a different language.
If you want a more robust solution with additional options and controls, or one which can handle higher traffic throughput (though this shouldn't be an issue if you are ultimately just sending it over the relatively slow internet), you can look at something like ØMQ (ZeroMQ). It is a messaging library which gives you more control and many different communication patterns beyond just request-response.
Whichever you set up, I would recommend using your CPU-intensive server as the stable end (the server) and your web server(s) as the client, assuming you are using a single server for your CPU-intensive tasks and running several Node.js server instances to take advantage of multiple cores for your web server. This simplifies your communication, since you want a single point to connect to.
If you foresee needing multiple CPU servers, you will want to set up a routing server that can route between multiple web servers and multiple CPU servers; in that case I would recommend the extra work of learning ØMQ.
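For illustration, here is a rough sketch of the Node.js side using the zeromq npm package (v6-style API) as a REQ client; it assumes your Python/Java CPU server binds a matching REP socket, and the address and message format are placeholders:

    const zmq = require('zeromq');

    async function requestWork(payload) {
      const sock = new zmq.Request();
      // The CPU-intensive server is assumed to bind a REP socket at this address.
      sock.connect('tcp://cpu-server.internal:5555');

      await sock.send(JSON.stringify(payload));
      const [reply] = await sock.receive();
      sock.close();
      return JSON.parse(reply.toString());
    }

    // Usage from the web server, e.g. inside an HTTP handler:
    requestWork({ task: 'heavy-computation', params: [1, 2, 3] })
      .then((result) => console.log('result from CPU server:', result))
      .catch(console.error);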
You can use the http.request method to make a curl-style request from within Node's code.
The http.request method can also be used to call an authentication API.
You can put your callback in the request's response handler, and when you get the response data in Node, send it back to the user.
Meanwhile, in the background, the Java/Python server can carry out the CPU-intensive task requested by Node.
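A minimal sketch of that flow, assuming the Java/Python service exposes an HTTP endpoint (the host, port and path here are placeholders):

    const http = require('http');

    // Forward a job to the Java/Python service and hand its answer to a callback.
    function callComputeService(payload, onResult) {
      const body = JSON.stringify(payload);
      const req = http.request(
        {
          hostname: 'cpu-server.internal', // placeholder host
          port: 8080,                      // placeholder port
          path: '/compute',                // placeholder endpoint
          method: 'POST',
          headers: {
            'Content-Type': 'application/json',
            'Content-Length': Buffer.byteLength(body),
          },
        },
        (res) => {
          let data = '';
          res.on('data', (chunk) => { data += chunk; });
          res.on('end', () => onResult(null, JSON.parse(data)));
        }
      );
      req.on('error', onResult);
      req.write(body);
      req.end();
    }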
I maintain a node.js application that intercommunicates among 34 tasks spread across 2 servers.
In your case, for communication between the web server and the app server you might consider mqtt.
I use mqtt for this kind of communication. There are mqtt clients for most languages, including Node/JavaScript, Python and Java. In my case I publish JSON messages to mqtt 'topics', and any task that has subscribed to a 'topic' receives the data when it is published. If you google "pub sub", "mqtt" and "mosquitto" you'll find lots of references and examples. Mosquitto (now an Eclipse project) is only one of a number of mqtt brokers that are available. Another very good broker, written in Java, is called HiveMQ.
This is a very simple, reliable solution that scales well. In my case literally millions of messages reliably pass through mqtt every day.
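As a small sketch using the mqtt npm package against a local broker (the broker URL, topic names and payloads are just examples):

    const mqtt = require('mqtt');

    // Connect to the broker (Mosquitto, HiveMQ, ...).
    const client = mqtt.connect('mqtt://localhost:1883');

    client.on('connect', () => {
      // Any task subscribed to this topic receives the message when it is published.
      client.subscribe('app/cpu-results');

      // Publish a JSON message to a topic the CPU server subscribes to.
      client.publish('app/cpu-jobs', JSON.stringify({ task: 'heavy-computation', id: 42 }));
    });

    client.on('message', (topic, payload) => {
      console.log(`received on ${topic}:`, JSON.parse(payload.toString()));
    });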
You must be looking for Socket.IO.
Socket.IO enables real-time bidirectional event-based communication.
It works on every platform, browser or device, focusing equally on reliability and speed.
Sockets have traditionally been the solution around which most realtime systems are architected, providing a bi-directional communication channel between a client and a server.
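A bare-bones sketch of two processes talking over Socket.IO, with the Node web server acting as the client of the other service (the Python/Java side would use its own Socket.IO client library; the port and event names are just examples):

    // --- on the "server" side (e.g. the long-running CPU service) ---
    const io = require('socket.io')(4000);
    io.on('connection', (socket) => {
      socket.on('job', (payload, ack) => {
        // ... do the work, then answer via the acknowledgement callback
        ack({ ok: true, input: payload });
      });
    });

    // --- on the "client" side (e.g. the Node web server) ---
    const client = require('socket.io-client')('http://localhost:4000');
    client.emit('job', { task: 'demo' }, (reply) => {
      console.log('reply from the other server:', reply);
    });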
This is more of a design question than an implementation question, but I am wondering whether I can design something like this. I have an interactive app (similar to the Python shell). I want to host a server (let's say using either the Node.js HTTP server or socket.io, since I'm not sure which would be better) which spawns a new child_process for every client that connects to it and maintains a separate context for that particular client. I am a complete noob in terms of Node.js and socket.io. The most I have managed is to have one child process on a socket.io server and connect the client to it.
So the question is, would this work? If not, is there any other way in Node to get it to work, or am I better off with a local server?
Thanks
Node.js is a single-process web platform. Using clustering (child_process), you create independent executions of the same application, each in a separate process.
Each process costs memory, and this is generally why most traditional systems do not scale well: they require a thread per client. For Node, a process per client would be extremely inefficient from a hardware-resources point of view.
Node is event-based, and you don't need to worry much about per-client scope as long as your application logic does not depend on it.
The recommended number of workers is equal to the number of CPU cores on the hardware.
There is always a master application that creates the workers. Each worker creates http + socket.io listeners, which are technically bound to the master's socket and routed from there.
HTTP requests will be routed to different workers, while a socket is routed to a worker at connection time; that worker then handles the socket until it disconnects.
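A minimal sketch of that master/worker layout with the built-in cluster module (note that on its own this does not pin a given client's HTTP polling requests to one worker, which is why sticky routing or a shared adapter is usually added for Socket.IO):

    const cluster = require('cluster');
    const http = require('http');
    const os = require('os');

    if (cluster.isMaster) {
      // The master owns the listening socket and hands connections to workers.
      const workerCount = os.cpus().length; // recommended: one worker per core
      for (let i = 0; i < workerCount; i++) cluster.fork();
    } else {
      const server = http.createServer((req, res) => {
        res.end('http handled by worker ' + process.pid);
      });
      const io = require('socket.io')(server);

      io.on('connection', (socket) => {
        // This worker keeps handling this socket until it disconnects.
        socket.emit('hello', { worker: process.pid });
      });

      server.listen(3000);
    }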