I am building an application with socket.io and Node.js / Express 4.
In my opinion, socket.io takes more resources than REST APIs.
I want to know how I can compare REST APIs vs. socket.io in terms of memory and CPU usage, and which is better for large applications.
Thanks
To compare memory and CPU usage, I suggest executing a large number of queries against each one separately (say, a thousand) and watching the memory and CPU of the process.
Now, which is better is not that simple; it all depends. Socket.io is designed as a real-time tool, so in large applications with a lot of real-time operations it is the better fit. But in large applications where real-time is really not an issue, I believe (I never really tested the numbers) you can get lower memory usage using RESTful APIs, mainly because WebSocket is stateful, so the server needs to keep every connection in its memory. Another thing to keep in mind is that the HTTP protocol has a lot of goodies that WebSocket doesn't, like gzipping, caching, routing, SEO, proxying, and much more.
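If you want numbers from inside the process itself, here is a minimal sketch using Node's built-in process.memoryUsage() and process.cpuUsage() (Node 6.1+ for the latter). Drop it into the REST server and the socket.io server in turn, fire the same load at each (the thousand queries above), and compare the output; the one-second interval is an arbitrary choice.

```js
// monitor-sketch.js: run inside the server process while a separate
// client fires the load; all numbers are for this process only
var startCpu = process.cpuUsage();

setInterval(function () {
  var mem = process.memoryUsage();
  var cpu = process.cpuUsage(startCpu); // microseconds since startCpu
  console.log(
    'heapUsed MB:', (mem.heapUsed / 1048576).toFixed(1),
    '| rss MB:', (mem.rss / 1048576).toFixed(1),
    '| cpu user ms:', (cpu.user / 1000).toFixed(0)
  );
}, 1000).unref(); // unref() so the timer alone doesn't keep the process alive
```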
A good article about that: REST vs WebSocket
You might get help from the blogs mentioned below:
http://www.pubnub.com/blog/websockets-vs-rest-api-understanding-the-difference/
http://blog.arungupta.me/rest-vs-websocket-comparison-benchmarks/
These contain the complete details of the differences.
Related
I have deployed my Node.js backend on Heroku Hobby dynos. There are 1500+ active users, so the API response time is sometimes very slow. Please help me figure out which dyno type is better for the backend deployment.
It always depends on your application. What type of operations and workload are you handling in your API? Do you have any synchronous/blocking operations? Is there a lot of I/O involved? More information about what you are trying to achieve would help in giving a better recommendation.
One best practice for Node.js is to scale horizontally: have multiple small servers handle the traffic instead of one big server (vertical scaling). So a good recommendation is to scale using multiple dynos; try scaling to 2 and measure again to see if that meets your performance needs.
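Within a single dyno you can apply the same idea with Node's built-in cluster module. A minimal sketch (the port and the one-worker-per-core choice are assumptions, not requirements):

```js
// cluster-sketch.js: one worker per CPU core, all sharing port 3000
var cluster = require('cluster');
var http = require('http');
var numCPUs = require('os').cpus().length;

if (cluster.isMaster) {
  for (var i = 0; i < numCPUs; i++) cluster.fork();
  cluster.on('exit', function (worker) {
    console.log('worker ' + worker.process.pid + ' died, restarting');
    cluster.fork(); // simple self-healing: replace any dead worker
  });
} else {
  http.createServer(function (req, res) {
    res.end('handled by pid ' + process.pid + '\n');
  }).listen(3000);
}
```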
Some recommended readings:
Good practices for high-performance and scalable Node.js applications
Optimizing Node.js Application Concurrency
I know it is a subjective question, but the reason I ask is that:
Node.js is not good with heavy computational tasks.
Node.js has some issues with memory leaks.
Given the problems above, would Node be a good choice for building payment gateway software?
I'm very comfortable with Node, but many people have said that it's better to use another language, like Golang or Scala, for this type of system.
Let me know what you think: should I use Node or another language?
Yes, node.js would be perfectly fine for payment gateway software. An appropriate design that uses clustering, or off-loads computational work to child processes, can easily handle heavy computational tasks.
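As a minimal sketch of the child-process approach (the summation loop is a hypothetical stand-in for real CPU-bound work such as signing or hashing):

```js
// fork-sketch.js: the parent forks itself; the child grinds the CPU
// while the parent's event loop stays free for other requests
var cp = require('child_process');

if (process.send) {
  // child: process.send only exists when we were started via fork()
  process.on('message', function (n) {
    var sum = 0; // hypothetical heavy computation
    for (var i = 0; i < n; i++) sum += i;
    process.send(sum);
    process.exit(0);
  });
} else {
  // parent: hand the job off and carry on serving traffic
  var child = cp.fork(__filename);
  child.on('message', function (result) {
    console.log('heavy result:', result);
  });
  child.send(1e8);
}
```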
And node.js is being used by many heavy-traffic commercial sites without memory leak issues. Memory leaks are a problem of faulty software design, not of the platform.
Further, the very nature of payment gateway software (being the middleman in a transaction between two other networking endpoints) is very well set up for the node.js async design that handles lots of in-flight transactions very efficiently.
As with pretty much any major back-end system these days, you just have to design your app to work the way the platform performs best and you could probably use any of the systems you mention just fine.
I need to write a WebSocket server, and I am learning Node.js by reading some books I purchased. This server is for a very fast game, so I need to stream small messages to groups of clients as quickly as possible.
What is the difference between:
Autobahn | JS : http://autobahn.ws/js/
and
Einaros : https://github.com/einaros/ws
?
I have heard that Autobahn is very powerful and capable of dealing with 200k clients without a load balancer, so I was wondering if someone with more experience could advise me on whether there is any advantage in opting for one library or the other.
The functional difference is: Einaros is a WebSocket library, whereas Autobahn provides WebSocket implementations (e.g. AutobahnPython), plus WAMP on top of WebSocket.
WAMP provides higher-level communication for apps (RPC + PubSub; please see the WAMP website), and AutobahnJS is a WAMP implementation for browsers (and Node.js) on top of WebSocket.
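To make that concrete, a minimal AutobahnJS sketch (the router URL, realm, and the procedure/topic names here are hypothetical placeholders):

```js
// wamp-sketch.js: one RPC call plus one PubSub subscription on a session
var autobahn = require('autobahn');

var connection = new autobahn.Connection({
  url: 'ws://127.0.0.1:8080/ws', // hypothetical WAMP router endpoint
  realm: 'realm1'
});

connection.onopen = function (session) {
  // RPC: call a remote procedure as if it were a local async function
  session.call('com.example.add2', [2, 3]).then(function (result) {
    console.log('add2:', result);
  });

  // PubSub: react to events published on a topic
  session.subscribe('com.example.onhello', function (args) {
    console.log('event:', args[0]);
  });
};

connection.open();
```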
Now, say you don't care about WAMP, and hence only need a raw WebSocket server. Then you can compare AutobahnPython with Einaros primarily based on non-functional characteristics, like protocol compliance, security and performance.
Autobahn has best-in-class protocol compliance. I dare say that, since the Autobahn project also provides the quasi-industry-standard WebSocket test suite, which is used by most projects, including Einaros. Autobahn has 100% strict passes on all tests. Einaros probably does as well; I don't know.
Performance: yes, a single AutobahnPython-based WebSocket server (4GB RAM, 2 cores, PyPy, FreeBSD in a VirtualBox VM) can handle 200k connected clients. To give you some more data points, here is a post with performance benchmarks on the Raspberry Pi.
In particular, that post highlights the (IMO) most important metric: 95%/99% quantile messaging latency. You shouldn't look only at average latency, since there can be big skews and massive outliers. What you want is consistently low latency.
Achieving consistently low latency is non-trivial. E.g., one factor for languages/run-times like Node.js or PyPy (a JITted Python implementation) is the garbage collector. Every time the GC runs, it'll slow things down, potentially introducing large latencies into messaging. I have done extensive benchmarking (unpublished) which indicates that PyPy's incremental GC is very good in this regard; better than HotSpot (JVM) and Node.js (Google V8). When in doubt, and since I haven't (yet) published numbers, you shouldn't believe me, but measure yourself.
The one thing I'd strongly recommend: don't rely on average latency, measure quantiles, do histograms.
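If you are collecting your own numbers, here is a tiny sketch of pulling quantiles out of raw latency samples (the sample data below is made up, with two deliberate outliers):

```js
// quantile-sketch.js: nearest-rank percentile over sorted samples
function quantile(samples, q) {
  var sorted = samples.slice().sort(function (a, b) { return a - b; });
  var idx = Math.min(sorted.length - 1, Math.ceil(q * sorted.length) - 1);
  return sorted[idx];
}

// hypothetical per-message round-trip latencies in milliseconds
var latencies = [1.1, 0.9, 1.2, 1.0, 14.8, 1.05, 0.95, 22.5, 1.3, 1.1];

var mean = latencies.reduce(function (a, b) { return a + b; }) / latencies.length;
console.log('mean:', mean.toFixed(2));           // ~4.59, dominated by outliers
console.log('p50 :', quantile(latencies, 0.50)); // 1.1
console.log('p95 :', quantile(latencies, 0.95)); // 22.5
console.log('p99 :', quantile(latencies, 0.99)); // 22.5
```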
Disclosure: I am the original author of Autobahn and work for Tavendo.
Thinking of creating a real-time app where users can collaborate. Found node.js + socket.io to be one of the solutions for this type of problem.
I hear from other developers that there will be a bottleneck in the number of sockets my server can give to users. So if I have hundreds of users collaborating at the same time, the open sockets will run out and users will not be able to connect. Is this a valid concern?
Update: on a sort-of-related note, I'm looking at using SockJS instead of Socket.io. There is a thread that explains the pros and cons of these libraries. Also, this is a good read.
For hundreds of users I don't think it is a concern.
Sockets, as you know, keep a persistent connection between the client and the server, and both parties can start sending data at any time. Keeping them open is not a problem so much as handling the load in terms of messages sent per second.
Socket.io can easily handle 1000 concurrent connections, but it will fail if it is sending more than 8-10k messages per second. You will hit that load barrier before your sockets are exhausted; in most cases, handling more concurrent users translates into higher load. So don't worry about running low on sockets. Scaling beyond that barrier will require more server resources.
Helpful links :
Socket.IO - are the open connections a concern?
http://www.quora.com/How-do-I-scale-socket-io-servers-2
There are already solutions using this approach, like Cloud9, and it works well. There will still be a point where you need to scale out, so if you are planning something big, I would think about it; one common scale-out approach is sketched below.
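That approach is to run several socket.io processes behind a load balancer and bridge them over Redis, so a broadcast from one process reaches clients connected to all of them. A minimal sketch, assuming socket.io 1.x with the socket.io-redis adapter and a local Redis on the default port:

```js
// scale-sketch.js: start one of these per machine (or vary the port per process)
var io = require('socket.io')(3000);
var redisAdapter = require('socket.io-redis');

// every process pointed at the same Redis sees the others' broadcasts
io.adapter(redisAdapter({ host: 'localhost', port: 6379 }));

io.on('connection', function (socket) {
  socket.on('chat', function (msg) {
    io.emit('chat', msg); // reaches clients on every process, not just this one
  });
});
```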
Here are some tests on socket.io with 10,000 concurrent connections. It looks like a good solution, but not an easy one, because of the fallback mechanism.
I have built a little system that uses dnode, shoe and browserify on the client, and NodeJS and dnode/shoe on the server end. I'm wondering if it is a good idea to use dnode (RPC) as the sole protocol for a real-time web application.
Let's look at the benefits of DNode, or any other RPC interface. I like being able to call functions remotely (RPC). It definitely beats Ajax because you get a consistent interface for communicating from client to server and from server to client. I'd also bet you get a small measure of performance over Ajax because of the HTTP overhead involved with Ajax.
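For readers who haven't seen it, the style being described looks roughly like this (close to dnode's own README example; the port and method name are arbitrary):

```js
// server.js: expose a function that clients can call remotely
var dnode = require('dnode');

var server = dnode({
  transform: function (s, cb) {
    cb(s.replace(/[aeiou]{2,}/, 'oo').toUpperCase());
  }
});
server.listen(5004);
```

```js
// client.js: call the server's function as if it were local
var dnode = require('dnode');

var d = dnode.connect(5004);
d.on('remote', function (remote) {
  remote.transform('beep', function (s) {
    console.log('beep => ' + s); // "beep => BOOP"
    d.end();
  });
});
```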
However, with RPC you have to deal with load balancing and with the client connections on the server; but that goes for any WebSocket implementation. With other WebSocket implementations, though, you have a more traditional event-based system, where the client listens to events from the server and responds to those events. I tried replicating that sort of interface using EventEmitters, but it's awful, and I keep getting warnings about too many handlers. Ugh!
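As an aside, that warning is Node's listener-leak guard: by default an EventEmitter warns once more than 10 listeners are attached to a single event. If the design genuinely needs more, the limit can be raised:

```js
var EventEmitter = require('events').EventEmitter;

var bus = new EventEmitter();
bus.setMaxListeners(100); // default is 10; 0 means unlimited

// attaching many listeners to one event no longer triggers
// "possible EventEmitter memory leak detected"
for (var i = 0; i < 50; i++) {
  bus.on('tick', function () { /* handler body elided */ });
}
```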
I'm looking to achieve a lightweight, clean interface that I can use to develop my application. One that feels robust and is able to scale to many clients. It needs to feel solid.
I'm not really sure what my question is in writing this post. I'm tasked with updating this codebase I wrote so that connections aren't lost and it's overall more robust. I guess I'm just desperate for advice or consultation on my application. Is anyone willing to discuss this topic (RPC and real-time web applications) with me face to face?
Thanks for reading.
I have been investigating some of the same topics, as it seemed to me that some of the RPC libs were very cool, but not altogether practical for large-scale apps. I actually started with NowJS, realized it was a dead project, moved to DNode/Shoe/Browserify, and finally moved on to SocketStream in an attempt to offload some of the dirty work to a project with a unified goal. I really didn't want to rewrite what others had already done on this subject, and SocketStream makes that easy.
To get back to your question: as you can see on their page, SocketStream uses sticky sessions. That is a big assumption, but probably one that can't be worked around at the moment without further developments. The reason I mention it is that they talk about some of the things they are working on as far as scaling goes. Might be worth a read, or reaching out to the developer to see if you could talk things over with him. Good luck!