Can I use Node to build payment gateway software?

I know this is a subjective question, but the reason I ask is that:
Node.js is not good with heavy computational tasks.
Node.js has some issues with memory leaks.
Given the problems above, would Node be a good fit for building payment gateway software?
I'm very comfortable with Node, but many people have said it's better to use another language like Golang or Scala for this type of system.
Let me know what you think: should I use Node or another language?

Yes, node.js would be perfectly fine for payment gateway software. An appropriate design, using clustering or off-loading computational tasks to child processes, can easily take care of the heavy computational work.
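For example, here is a minimal sketch of the off-loading idea (the file names and the fake workload are just placeholders for illustration): the main process forks a child and hands the CPU-heavy part of a job to it, so the event loop stays free to accept new requests.

    // worker.js -- runs in a separate child process so heavy CPU work
    // never blocks the gateway's event loop
    process.on('message', (job) => {
      let total = 0;
      for (let i = 0; i < job.iterations; i++) {
        total += Math.sqrt(i); // stand-in for real CPU-heavy work (crypto, risk scoring, ...)
      }
      process.send({ id: job.id, result: total });
    });

    // main.js -- the main process stays responsive while the child crunches numbers
    const { fork } = require('child_process');
    const worker = fork('./worker.js');

    worker.on('message', (msg) => {
      console.log('job ' + msg.id + ' finished:', msg.result);
    });

    worker.send({ id: 1, iterations: 1e7 }); // hand off the expensive part of a transaction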
And, node.js is being used by many heavy traffic commercial sites without memory leak issues. Memory leaks are an issue with faulty software design, not with the platform.
Further, the very nature of payment gateway software (being the middleman in a transaction between two other networking endpoints) is very well set up for the node.js async design that handles lots of in-flight transactions very efficiently.
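That middleman shape maps almost directly onto Node's non-blocking I/O. As a toy sketch (the upstream host and path are placeholders, and a real gateway obviously needs TLS, validation, retries, and so on), proxying a request through looks roughly like this:

    // toy middleman: accept a request, forward it to an upstream processor,
    // and stream the response back -- without blocking other in-flight transactions
    const http = require('http');

    http.createServer((clientReq, clientRes) => {
      const upstream = http.request(
        { host: 'upstream-processor.example', port: 80, path: '/authorize', method: 'POST' },
        (upstreamRes) => upstreamRes.pipe(clientRes) // response flows back asynchronously
      );
      clientReq.pipe(upstream); // request body flows through as it arrives
    }).listen(8080);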
As with pretty much any major back-end system these days, you just have to design your app to work the way the platform performs best and you could probably use any of the systems you mention just fine.

Related

Sockets server made with Erlang vs others

I am learning Erlang and trying to understand how its sockets work as it is meant to be one of the strongest parts of the language and OTP.
I have experience with Node.js, and I wonder how applications built with Node.js and Erlang differ with regard to how multiple socket connections are managed.
As I understand it, while JavaScript is single-threaded, V8 and the Node.js event loop manage all the simultaneous connections for it, whereas Erlang can manage multiple connections itself.
So I wonder: if Erlang has excellent support for managing multiple connections at a time, how is that different, from a programmer's point of view, from other technologies? I mean, when I write an app in Node.js, it can have as many connections open and well managed as if I had written it in Erlang, can't it?
Please share your thoughts; links to articles about Erlang's strengths in this context are welcome too.
I am by no means an expert in Erlang, but I think I know Erlang and Node.js at about the same level.
The things you say are all correct. Both can handle multiple connections very efficiently and, as you say, keep them well managed.
But handling many concurrent connections is not the only problem. The problems Erlang tries to solve really well are fault tolerance and distribution, and I don't think Node.js, as it stands now, is as good at those.
Don't take this the wrong way; I'm not saying no one can write a distributed app in Node.js, but considering the tools Erlang gives you, it may be the better choice.
For fault tolerance, as an example, Erlang lets you link your processes, so when one fails, the other also fails or gets notified. That is not very useful by itself, but combined with supervisors and shared-nothing processes, it is a great tool.
For distribution, Erlang lets you connect nodes together. Connected nodes can talk to each other as if they were on the same machine, and they can spawn processes on the other side too. Combine that with the ability to restart an app from a failed node on another, healthy node, and you get great uptime.
And not to mention that these tools have years of experience behind them.
Just try to solve these problems in another ecosystem. I say ecosystem because Erlang as a language is not the whole story; the tools and frameworks (mostly OTP) have to be considered too. Then you can see that Erlang really shines in these areas.
But Erlang is not very good when it comes to sequential processing, number crunching, image/sound processing, and so on. Those are better implemented in another system.
I think the big difference between Node.js and Erlang here is their runtime model. Node.js has one process with one thread that works asynchronously on I/O-related tasks. Of course, you can run multiple processes, but that is the basic model. Erlang, on the other hand, has a VM called BEAM. Erlang uses very lightweight processes inside this VM, and BEAM schedules them itself because they are not OS processes. This lets BEAM run hundreds of thousands of processes at the same time, each doing a task, be it I/O or anything else.
You see the difference now, I think. Erlang is more battle-tested and better when fault tolerance or distribution is a must. Node.js may be better when you need faster development and deployment.
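For contrast, here is a rough sketch (my own illustration, not something from OTP or the answer above) of what the closest Node.js analogue to "link a process and restart it on failure" looks like when you hand-roll it with child_process; the worker script name is a placeholder:

    // supervisor.js -- a hand-rolled "restart on failure" loop in Node.js;
    // Erlang/OTP gives you links and supervision trees like this out of the box
    const { fork } = require('child_process');

    function supervise(script) {
      const child = fork(script);

      // roughly the "link": the parent is notified when the child dies...
      child.on('exit', (code, signal) => {
        console.error('worker died (code=' + code + ', signal=' + signal + '), restarting');
        // ...and the entire "supervision policy" here is simply: start it again
        supervise(script);
      });

      return child;
    }

    supervise('./worker.js'); // placeholder worker script

This covers the happy path, but everything OTP already handles (restart limits, restart strategies, ordered shutdown of sibling processes) still has to be written by hand.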

Synthetic multi-node Crossbar system implementation

I am implementing a system composed of a collection of small systems, i.e. Raspberry, Yun, Beaglebone, and the occasional PC. Crossbar.io has real promise ... but, as I understand it, it doesn't currently support multiple nodes. Am I correct? Does anyone know when that might happen?
In the meantime, it occurred to me that each individual node can offer an HTTP interface that I might be able to use for my purposes. My initial thought is to create workers that wrap access to the web interface on the subsidiary nodes. This fits the overall architecture of the system I want to create; does it have any merit? Is it tractable? I'm new to WebSockets, and any insight would be a great help.
Thanks for your time,
Al
In general that does sound like a fit for Crossbar.io.
There is no timeline on multi-node (i.e. multiple routers), but we hope to have at least hot-standby nodes for high availability ready in Q1. Other than for high availability, I think that a single instance should provide sufficient performance for most applications out there - on a single current (non-high-end) Xeon we're talking tens of thousands of events a second, and concurrent connections are mostly limited by RAM (and 100s of thousands on a single box are definitely not a problem). (If you need more than that then I'd be very interested in your specific use case - we want to learn more about our users.)
I don't completely understand the second part of your question: What precisely is the architecture you're planning here? If you're talking about the integrated Web server, then with recent optimizations (it can now use multiple cores) this should be enough for even moderately big sites, and with SPAs you're not likely to ever run into performance issues.
Hope this helps, and I'll be glad to answer in more detail once you've clarified the second part.
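To make the "worker per node" idea a bit more concrete, here is a rough sketch of the kind of WAMP component you could run on each subsidiary node using autobahn-js; the router URL, realm, procedure name, and the sensor stub are all placeholders, and this is only one possible reading of the idea, not an official Crossbar.io recipe:

    // node-worker.js -- runs on each Raspberry/Yun/Beaglebone and exposes local
    // functionality as a WAMP procedure via the central Crossbar.io router
    const autobahn = require('autobahn');

    function readLocalSensor() {
      return 42; // stub for the node's local hardware/HTTP access
    }

    const connection = new autobahn.Connection({
      url: 'ws://crossbar-router.local:8080/ws', // placeholder router address
      realm: 'realm1'                            // placeholder realm
    });

    connection.onopen = (session) => {
      // other nodes can then do: session.call('com.example.node7.read_sensor')
      session.register('com.example.node7.read_sensor', () => readLocalSensor());
    };

    connection.open();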

Monolithic (vs) Micro-services ==> Threads (vs) Process

I have a monolithic application with a single process running 5 threads. Each thread accomplishes a certain specific task. I am thinking of moving this application to microservices using Docker. If I look at the architecture, each worker thread would become its own Docker container (a separate process). So, in some way, monolithic vs. microservices becomes more like a threads vs. processes discussion in my case.
The original thinking behind the monolith was to use threads for performance and to share the same memory. Now, with a microservices architecture, I am pushed towards a process model that may not be suitable from a performance point of view.
I am kind of stuck on how to approach this problem.
What you are missing here is that microservices are not suitable for every software system in the world! Think about the drivers for migrating your current monolithic system to microservices before doing anything. Are you seeking high availability and scalability? Do you want the freedom to write each thread in a different programming language? Is your system so complicated that it cannot be comprehended in a monolithic style? And finally, are you ready to pay the costs of a microservices style?
Microservices bring many complexities into the system and may incur performance penalties, in exchange for higher scalability, due to the chattiness of services. If performance is an important concern, the system is not that large, and your answer to most of the questions above is "no", I strongly suggest that you do not go for the microservices style. Instead, try to modularize your current code base and refactor the code for better quality and comprehensibility.
Regarding Docker, you can use it even with the monolithic style to remove some of the deployment barriers and the inconsistencies between development and deployment environments. If those deployment issues do not bother you, do not go for Docker either, since it will just be another layer of overhead.
Microservices can give your application more power, but this depends on how big your project is, what degree of availability you need, whether you have a lot of teams, a lot of languages, and so on.
For some projects microservices will be overkill and the same needs can be handled with multithreading, so think about your long-term vision before migrating to this architecture.

Should my Google Cloud Endpoints API backend be thread-safe?

I want to be clear: I am asking about the case where I am not using any concurrency in my own implementation. I just want to know if the framework within which my backend will be invoked (i.e. Google App Engine) itself imposes thread-safety requirements on the code running on it.
Thank you!
P.S. As a related but separate question, is there any guidance on how to do multithreading in our own backend code (which then obviously needs to be appropriately thread-safe)? Specifically, can we use Java's standard executor services / thread pools, or is there some Google-approved API? Thanks.
Google App Engine, or any other platform, won't ensure thread safety for you, because the critical sections that your concurrency-control techniques are trying to protect are defined by the developer (you), and there is no way for the OS to know when your reads and writes should occur. For the JVM, two heavily used concurrency libraries are the Guava libraries (made by Google! ;-) ) and the Akka framework.
Both are great libraries; I've used both and enjoyed their learning curves. I would also recommend looking into the Play Framework: it supports Akka, and building reactive web apps is its main focus. If you're interested in learning some cool aspects of production frameworks, I would highly recommend learning and implementing Dropwizard (Google App Engine doesn't support it, but the takeaways you get from it definitely outweigh that con).
Please let me know if you have any questions!

Node.js event vs thread programming on server side

We are planning to start a fairly complex web portal which is expected to attract good local traffic, and I've been told by my boss to consider/analyse node.js for the server side.
I think scalability and multi-core support can be handled with an Nginx or Cherokee in front.
1) Is this node.js ready for some serious/big business?
2) Does this 'event/asynchronous' paradigm on the server side have the potential to support heavy traffic and data operations, considering that 'everything' is processed in a single thread and all live connections would be lost if it crashed (though it's easy to restart)?
3) What are the advantages of event-based programming compared to the thread-based style, or vice versa?
(I know of the higher cost associated with thread switching, but more can be squeezed out of the hardware with the event model.)
The following are interesting but, to some extent, contradictory papers:
1) http://www.usenix.org/events/hotos03/tech/full_papers/vonbehren/vonbehren_html
2) http://pdos.csail.mit.edu/~rtm/papers/dabek:event.pdf
Node.js is developing extremely rapidly, and most of its functionality is sturdy and ready for business. However, there are a lot of places where it's lacking, like database drivers, jQuery and DOM, multiple HTTP headers, etc. There are plenty of modules coming up to tackle every aspect, but for a production environment you'll have to be careful to pick ones that are stable.
It's actually much, MUCH more efficient to use a single thread than a thousand (or even fifty) from an operating-system perspective, and benchmarks I've read (sorry, I don't have them on hand; I'll try to find and link them later) show that it's able to support heavy traffic. I'm not sure about file-system access, though.
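As a tiny illustration of that single-thread efficiency (a toy sketch, not a benchmark): the one Node.js process below can keep a very large number of connections waiting at once, because each request only occupies the thread while its callback is actually running.

    // one thread, many concurrent connections: the 2-second "work" below is a
    // timer standing in for slow I/O, so thousands of clients can wait on it at once
    const http = require('http');

    http.createServer((req, res) => {
      setTimeout(() => {
        res.end('done\n');
      }, 2000);
    }).listen(8080);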
Event-based programming is:
Cleaner-looking code than threaded code (in JavaScript, that is).
The JavaScript engine is extremely efficient at processing events and handling callbacks, and it's easily one of the languages seeing the most runtime optimization right now.
Harder to fit when you are thinking in terms of control flow. With events, you can never be sure of the flow. However, you can also come to think of it as a more dynamic style of programming, treating each fired event as independent.
It forces you to be more security-conscious when programming, for the above reason. In that sense, it's better than linear systems, where you sometimes take sanitized input for granted.
As for the two papers, both are relatively old. The first benchmarks against this project (SEDA), which, as you can see, now carries a more recent note about these studies:
http://www.eecs.harvard.edu/~mdw/proj/seda/
It also cites the second paper you linked for what they have done, but declines to comment on its relevance to the comparison between event-based and thread-based systems :)
Try it yourself and discover the truth.
See What is Node.js? where we cover exactly that:
Node in production is definitely possible, but far from the "turn-key" deployment seemingly promised by the docs. With Node v0.6.x, "cluster" has been integrated into the platform, providing one of the essential building blocks, but my "production.js" script is still ~150 lines of logic to handle stuff like creating the log directory, recycling dead workers, etc. For a "serious" production service, you also need to be prepared to throttle incoming connections and do all the stuff that Apache does for PHP. To be fair, Rails has this exact problem, and it is solved via two complementary mechanisms:
1) Putting Rails/Node behind a dedicated web server (written in C and tested to hell and back) like Nginx (or Apache / Lighttpd). The web server can efficiently serve static content, handle access logging, rewrite URLs, terminate SSL, enforce access rules, and manage multiple sub-services. For requests that hit the actual Node service, it proxies the request through.
2) Using a framework like "Unicorn" that will manage the worker processes, recycle them periodically, etc. I've yet to find a Node serving framework that seems fully baked; it may exist, but I haven't found it yet and still use ~150 lines in my hand-rolled "production.js".
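For reference, here is a stripped-down sketch of the kind of worker-recycling logic a hand-rolled "production.js" tends to contain (my own minimal version for illustration, not the author's actual ~150-line script):

    // production.js (minimal sketch) -- fork one worker per CPU with the cluster
    // module and recycle any worker that dies
    const cluster = require('cluster');
    const os = require('os');

    if (cluster.isMaster) {
      os.cpus().forEach(() => cluster.fork());

      cluster.on('exit', (worker, code) => {
        console.error('worker ' + worker.process.pid + ' exited (' + code + '), forking a replacement');
        cluster.fork();
      });
    } else {
      // each worker runs its own copy of the app on a shared port
      require('http').createServer((req, res) => res.end('ok\n')).listen(3000);
    }

Log-directory creation, connection throttling, and graceful shutdown are the parts that grow this into the ~150 lines mentioned above.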
