Can transaction control be done across multiple distributed data destinations in Node.js? - node.js

I have a case where I need to write data to multiple data destinations in Node.js:
writeToMySql(data); //Step 1
postRestApi(data); //Step 2
writeToSqlServer(data); //Step 3
So the problem is that the data destinations are distributed. I want to do something like a "transaction": if any of the three steps fails, all of steps 1, 2, and 3 are rolled back.
But Step 2 is a REST API; once the data has been posted to the API, I cannot roll it back, because the data has gone to another server/service.
So is there some way to implement this concept? Thanks, everyone.

Generally, this is not something supported out of the box by Node.js or other "modern" software development platforms. You would need to rely on older techniques like sagas, which means the development overhead of detecting failures and rolling back the transactions programmatically.
But it is important to remember that this is not so much a language thing as a platform thing. For example, HTTP is not transactional, so if you implement a REST-style API on HTTP, you do not get the other enterprise distributed-computing features. If you implemented the REST style with SOAP or CORBA (which was quite common), for example, then yes, features like XA are part of those protocols.
While this is slowly changing, many of the distributed-computing features developed in the 1980s and 1990s, such as distributed transaction management and security/identity propagation, were not included in "modern" software development due to a lack of awareness. These traditional features are slowly being included in "modern" software development as their proponents gain more experience in the problem space.
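To make the saga idea concrete, here is a minimal sketch in Node.js, assuming each step can be paired with a compensating action (the delete* helpers are hypothetical). Note that the REST step cannot be truly rolled back; its compensation must be an explicit undo call (e.g. a DELETE) that the remote service has to support:
async function runSaga(data) {
  // Each step pairs a forward action with a compensating (undo) action.
  // The delete* functions are hypothetical and must be provided.
  const steps = [
    { run: () => writeToMySql(data),     undo: () => deleteFromMySql(data) },
    { run: () => postRestApi(data),      undo: () => deleteViaRestApi(data) },
    { run: () => writeToSqlServer(data), undo: () => deleteFromSqlServer(data) },
  ];
  const completed = [];
  try {
    for (const step of steps) {
      await step.run();
      completed.push(step);
    }
  } catch (err) {
    // Best-effort compensation, newest first; failures here need
    // retries or alerting, since a saga gives no atomicity guarantee.
    for (const step of completed.reverse()) {
      await step.undo().catch(console.error);
    }
    throw err;
  }
}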

Related

How to connect MetaTrader with Node.JS?

I'm building a system, based on Node.JS, to connect with MetaTrader and to process all actions such as linking accounts and opening and closing trade orders.
But I still have not found a way to connect to MetaTrader from Node.js. Can you give me a solution or example packages that can help me do that?
Thanks!
You can try the MetaApi cloud service, https://metaapi.cloud, which provides REST API and WebSocket API access to both MetaTrader 4 and MetaTrader 5 accounts.
Official REST API documentation: https://metaapi.cloud/docs/client
SDKs: https://metaapi.cloud/sdks (JavaScript, Python, and Java SDKs are provided as of April 2021)
It supports reading account information, positions, orders, and trade history, receiving quotes, and accessing market data.
The service also provides a copy trading API, https://metaapi.cloud/docs/copyfactory, and an API to calculate forex trading metrics on a MetaTrader account, https://metaapi.cloud/docs/metastats.
Update: as of mid-March 2021, MetaApi allows limited API testing without adding a payment method.
Observation:
The MetaTrader software suite has multiple parts, only one of which is customer-facing: the MetaTrader Terminal 4/5. This terminal software communicates with the MT4/5 Server, and there are many other, additional broker-side, server-cooperating systems in the MetaTrader suite.
Given your description above, you seem to be planning a Node.JS functional integration with the MetaTrader Terminal piece of software.
Limitations:
As clarified above, the MetaTrader Terminal 4/5 software platform is the subject of interest, and before technical steps are taken, a validation ought to take place to confirm whether the programmable features and services natively supported inside the MT4 Terminal cover all your needs.
While the MT4 Terminal has a programmable ecosystem for both automated processing and semi-automated back-testing, these two principal directions do not provide the same level of comfort for integration with external cooperating logic or event flows.
If your project needs are not met by the built-in native MQL4/MQL5 code-execution environment, your approach will have to be mixed with some GUI-manipulating assistive technologies, which may help to cover the gaps detected in the functional-mapping pre-validation phase.
Approach:
For the purpose of making the MT4 Terminal code-execution ecosystem cooperate with the external world, there is a built-in ability to #import extending features, not present in the native MQL4/5 language, via DLLs.
Having received this freedom of design, user code in the MQL4/5 language can borrow all the missing features and services available for such integration projects.
Both Node.JS and MetaTrader Terminal 4/5 can use ZeroMQ and/or nanomsg for fast and productive integration of a heterogeneous distributed system, which seems to be a match for your indicated needs.
Feel free to read other posts here and here about the signalling/messaging function-plane concepts used for this very kind of system integration.
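As a rough illustration of the Node.JS side of such a bridge, here is a minimal receiver sketch, assuming the "zeromq" npm package (v6 API); the MQL4/5 side would push JSON messages through a DLL-based ZeroMQ binding, and the port and message shape are illustrative:
const zmq = require("zeromq");

async function run() {
  const sock = new zmq.Pull();
  await sock.bind("tcp://127.0.0.1:5555"); // port is an arbitrary example

  for await (const [msg] of sock) {
    const event = JSON.parse(msg.toString()); // e.g. { type: "ORDER_OPENED", ticket: 123 }
    console.log("received from MT4 Terminal:", event);
  }
}

run().catch(console.error);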

Should I be moving to a microservices based architecture?

I am working on a monolithic system. All of its code is in one repository (Web API and background workers). The system is written in Node.js, and MongoDB (Mongoose) is used as the data store. My goal is to set a new path for how the project should evolve. At first I was wondering if I could move towards a microservices-based architecture.
The monolithic architecture creates some problems:
If my background workers need to scale, I have to deploy the whole project to the server despite only using a small fraction of it.
The whole system must be redeployed when code changes. What if a payment processor calls a webhook while the system is being redeployed?
The advantages of using microservices are quite obvious:
A smaller code base for each individual microservice, making it easier to reason about.
The ability to select the programming tools best suited to a particular use case.
Easier scaling.
Looking at the current code, I noticed that Mongoose ODM (Object Document Mapper) models are used across the whole project to create, query, and update models in the database. As a matter of good programming practice, all such interactions with the database should be abstracted; business logic should not leak into other system layers. I could do that by introducing the REPOSITORY pattern (Domain-Driven Design), as sketched below. While the code is still shared between the web API and its background workers, this is not a hard task to do.
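A minimal sketch of that repository boundary (the class and method names are illustrative, and a Mongoose model is assumed to be injected): the rest of the system depends on this class rather than on Mongoose models, so persistence details stay behind one boundary, and the cursor-style method hints at the interface a future service extraction would need.
class ArticleRepository {
  constructor(articleModel) {
    this.articleModel = articleModel; // a Mongoose model, injected
  }

  async findById(id) {
    return this.articleModel.findById(id).lean().exec();
  }

  async create(attributes) {
    return this.articleModel.create(attributes);
  }

  // Cursor-based iteration: return one page after the given id,
  // never the whole collection over the network.
  async findAfter(cursorId, limit = 50) {
    const filter = cursorId ? { _id: { $gt: cursorId } } : {};
    return this.articleModel.find(filter).sort({ _id: 1 }).limit(limit).lean().exec();
  }
}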
If I decide to extract repositories into standalone microservices, then a whole bunch of problems arises:
Some sort of query language must be introduced to accommodate complex search queries.
The interface must provide a way to iterate over search results (cursor-based navigation) without returning all database documents over the network.
Since the project is in its early stage and I am the only developer, moving to a microservices-based architecture seems like overkill. Maybe there are other approaches I should consider?
For example, extracting business logic and database interaction into a separate repository and sharing it among services, to avoid complex communication protocols between services?
Based on my experience working with microservices over the last few years, it seems like overkill in your current scenario, but it pays off in the long term.
Based on the information stated above, my thoughts are:
Code Structure - Applying Microservices Architecture (MSA) in the above context does not mean separating the DAO, business logic, etc.; rather, it is more about designing the system around business functions. For example, in an eCommerce application, you can have shipping, cart, and search as separate services, which can be further divided into smaller services. Read more about domain-driven design here.
Deployment Unit - Keeping each microservice an independent deployment unit is a key principle. Hence, take a vertical slice of the application and package it as a Docker image with the application code, app server (if any), database, and OS (Linux, etc.).
Communication - With MSA, communication between services becomes key, and the general practice is to stay with a message-oriented approach (read about reactive systems and reactive programming for more insight); a minimal sketch follows at the end of these points.
PaaS Solution - There are multiple PaaS solutions available which you can apply so that you don't need to worry about all the other aspects like container management, container orchestration, auto-scaling, configuration management, log management, and monitoring. See the following PaaS solutions:
https://www.nanoscale.io/ - by TIBCO
https://fabric8.io/ - by RedHat
https://openshift.io - by RedHat
Cloud Vendor Platforms - AWS, Azure, and Google Cloud all have specific support for microservice apps from the deployment perspective, which you can use as an alternative if you don't want to deploy a PaaS solution in your organization.
Hope these pointers help in understanding the overall landscape so that you can structure your architecture for future needs.
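As one concrete, hedged illustration of the message-oriented communication point above, here is a minimal publisher using the "amqplib" RabbitMQ client; the queue name and payload are illustrative, and a real service would reuse the connection rather than open one per message.
const amqp = require("amqplib");

// Publish a domain event to a durable queue; consumers in other
// services react to it asynchronously instead of being called directly.
async function publishOrderPlaced(order) {
  const conn = await amqp.connect("amqp://localhost");
  const channel = await conn.createChannel();
  await channel.assertQueue("order.placed", { durable: true });
  channel.sendToQueue("order.placed", Buffer.from(JSON.stringify(order)), {
    persistent: true, // survives broker restarts
  });
  await channel.close();
  await conn.close();
}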
I am working on a monolithic system... My goal is to set a new path for how the project should evolve. At first I was wondering if I could move towards a microservices-based architecture.
In what ways do you need to evolve the project? Will it be mostly bugfixes, adding features, improving performance and/or scalability? Do you anticipate other developers collaborating in the future? Are you currently having maintenance issues? The answers to these questions (and many more) should be considered in guiding your choices.
You seem to be doing your homework around the pros and cons of a microservice architecture, so if you haven't asked yourself why you're even doing this in the first place, now would be a good time to do so.
Maybe there are other approaches I should consider?
There's always the good old don't-break-what's-working approach ;)

Would Node.js be the right choice for this application?

I spent the last couple of days figuring out which development stack to use for the interactive student platform I'm planning to build.
I figured out that the MEAN stack may suit the job very well. However, I face a dilemma over whether to use Node.js as the backend technology for the application:
Reasons to consider Node
The backend will mainly consist of realtime components, e.g. collaboration tools, notifications, etc.
These components will handle data concurrently.
It will scale better than a conventional server-side language such as PHP.
Exposing the data with REST for, e.g., a mobile application will be a breeze.
Having one data format (JSON) in both the frontend and the backend will speed up development.
Doubts
Some components require computation. Although it is not that complex, it may slow down the application.
Although the application is mostly a single-page application, it will (at a later stage) have some features that Node seems not typically suited for, e.g. a payment workflow.
I already made the switch from a previous approach, so this time I want to be sure to choose the right one. Will Node.js be the right choice for this application, or will, for example, a PHP backend with Laravel suit the job better as the application matures?
I think there's a whole range of possibilities you're not considering. For example, it's a perfectly valid approach to use Node for some of the back-end (e.g. connections to third parties, managing the UI, handling concurrent users) while delegating other parts of the back-end to components that are better suited (e.g. components that require heavy computation).
That said, I don't see anything you describe in your 'doubts' as being particularly non-Node-ish. You say the computation will be lightweight, but my recommendation is to treat it like any other async task; then, if you decide later that it is a problem (e.g. it slows down the app), it's pretty trivial to extract it into either a separate Node process (so it does not block your main app's event loop) or a component built in your language of choice (Java, .NET, C, Perl, whatever), as described above.
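To illustrate that extraction idea in a hedged way, here is a minimal sketch using Node's built-in worker_threads module; the worker file name and task shape are hypothetical:
const { Worker } = require("worker_threads");

function runHeavyTask(payload) {
  return new Promise((resolve, reject) => {
    // "./heavy-task.js" is a hypothetical worker script that computes a
    // result from workerData and posts it back via parentPort.postMessage().
    const worker = new Worker("./heavy-task.js", { workerData: payload });
    worker.once("message", resolve);
    worker.once("error", reject);
  });
}

// The main event loop stays free to serve requests while the worker computes:
// app.post("/report", async (req, res) => res.json(await runHeavyTask(req.body)));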
I don't understand why you suggest a workflow isn't something Node is suited for. I've seen and built a number of them in Node and other frameworks; it's no less suited to them than any other framework, and better than some.

Transition from RestKit to pure AFNetworking 2.0

I'd been using RestKit for the last two years, but recently I've started thinking about transitioning away from this monolithic framework, as it seems to be real overkill.
Here are my reasons for moving on:
There is a big need to use NSURLSession for background fetches, and RestKit has only an experimental branch for the transition to AFNetworking 2.0, with no actual dates for when the transition will be finished. (Main reason.)
There is no need for CoreData support in the network library, as there is no need for fully functional offline data storage.
I'm having headaches with the new concept of response/request descriptors, as they don't support different parameters in path patterns (e.g. an access token parameter), and there is no way to create an object request operation in one line with a custom descriptor. Here I am losing the features of the object manager as a facade.
I. The biggest loss in leaving RestKit, for me, is the object mapping process.
Could you recommend standalone libraries that you use and have found flexible and stable?
II. And as I said, I need no fully functional storage, but I still need some caching support in some places.
I've heard that NSURLCache has become useful in the latest OS release.
Have you used it, and what is your strategy?
Does it return cached API responses when the network connection is down?
III. Does anybody face the same problems?
What solutions have you applied?
Maybe someone could give a piece of advice about the architecture that he or she uses in multiple apps with pure AFNetworking?
I. In agreement with others who have commented, AFNetworking + Mantle is a simple and effective way to interact with a RESTful API and to replace the RestKit object mapping process that you will miss.
II. Answering your caching requirements depends highly on context. However, for my recent functional requirements, I have found that caching a view model for a particular controller's screen, and only caching reference data returned by APIs, lets me keep the application logic relatively simple while giving the user some continuity. A simple error notification for connectivity issues can be dealt with in a cross-cutting manner.
III. One architectural thought relevant to this aspect is to ensure that the APIs the app depends on provide data according to the app experience. This lets your app focus on what it is good at (a very slick user experience) and moves logic into the APIs, closer to API dependencies such as data. This has the further benefit of reducing the chattiness of the app.

High Scalability in Domain-Driven Design

I'm using DDD for a service-oriented application intended to transmit a high volume of messages between a high volume of web clients (i.e., browsers).
Because in the context of required functionality, the need for transmission outweighs the need for storage, I love the idea of relying on RAM primarily and minimizing use of the database.
However I'm unclear on how to architect this from a scalability point of view. A web farm creates high availability of service endpoints and domain logic processing. But no matter how many servers I have, it seems they must all share a common repository so that their data is consistent.
How do I build this repository so that it is as scalable as possible? How can it be spread across an array of physical machines in such a manner that all machines are consistent and each couldn't care less if another goes down?
Also since touching the database will be required occasionally (e.g., when a client goes missing and messages intended for it must be stored until it returns), how should I organize my memory-based code and data access layer? Are they both considered "the repository"?
There are several ways to solve this issue. No single answer can really cover it all...
One method to ensure scalability is simply to scale the hardware. Write your web services to be stateless so that you can run a web farm (all servers running the same identical services, pointing to the same DB) and turn your DB into a cluster. Clustered databases run over multiple servers and work on the same storage. However, this scenario can get complicated and expensive quite quickly. A sketch of the stateless idea follows the links below.
Some interesting links:
http://scale-out-blog.blogspot.com/2009/09/future-of-database-clustering.html
http://en.wikipedia.org/wiki/Server_farm
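As a hedged illustration of the stateless-web-farm point, here is a minimal sketch using the "redis" npm package (v4 API); the URL and key scheme are illustrative. Because no per-user state lives in the Node process itself, any server in the farm can handle any request, and none cares if another goes down.
const { createClient } = require("redis");

const shared = createClient({ url: "redis://shared-cache:6379" }); // example URL
shared.connect().catch(console.error);

// Session state lives in the shared store, not in process memory.
async function saveSession(sessionId, data) {
  await shared.set(`session:${sessionId}`, JSON.stringify(data), { EX: 3600 });
}

async function loadSession(sessionId) {
  const raw = await shared.get(`session:${sessionId}`);
  return raw ? JSON.parse(raw) : null;
}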
Another method is to look at the architecture. CQRS is a common architectural model that ensures scalability. Its name stands for Command/Query Responsibility Segregation: the model uses different databases for reading and writing. This seems contradictory, but if you study it, it becomes natural, and you wonder why you never thought of it before. Simply put, most apps do a lot more reading than writing, and writing tends to be a lot more complicated than reading (requiring business-rule validation, etc.), so why not separate the two? You can use your expensive transactional database for writing and then a cheap, maybe NoSQL-based or open-source, database over multiple reading servers. Your read model is then optimized for the screens of your application(s), whereas the write model is optimized solely for writing and is in fact a DDD-based set of repositories.
There's just not enough room here to cover this option in detail, but CQRS is a good way of achieving scalability and even ease of development once you have a CQRS framework in place. There are many other advantages to CQRS, such as ease of auditing (if you combine it with the "event sourcing" technique, which is common in CQRS-based environments). A minimal sketch of the command/query split follows these links.
Some interesting links:
http://cqrsinfo.com
http://abdullin.com/cqrs
http://blog.fossmo.net/post/Command-and-Query-Responsibility-Segregation-(CQRS).aspx
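To make the command/query split concrete, here is a hedged, minimal sketch in Node.js; writeStore, readStore, and the event wiring are illustrative placeholders, not a specific CQRS framework.
const crypto = require("crypto");

// Write side: commands pass business-rule validation and touch only the
// transactional write store; an event tells the read side to update.
class MessageCommandHandler {
  constructor(writeStore, eventBus) {
    this.writeStore = writeStore;
    this.eventBus = eventBus;
  }
  async sendMessage({ senderId, recipientId, body }) {
    if (!body) throw new Error("empty message"); // business rule lives here
    const message = { id: crypto.randomUUID(), senderId, recipientId, body };
    await this.writeStore.append(message);
    this.eventBus.emit("MessageSent", message); // read side listens for this
  }
}

// Read side: queries hit a cheap, denormalized store shaped for the screens.
class MessageQueryHandler {
  constructor(readStore) {
    this.readStore = readStore;
  }
  async inboxFor(recipientId) {
    return this.readStore.findByRecipient(recipientId);
  }
}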
Are you ready for some reading? There are a lot of options, but I believe you should start by learning about the advantages of modern distributed NoSQL databases, and enjoy learning from the experience gained at Facebook, LinkedIn, and other companies. Start here:
http://highscalability.com/
http://nosql-database.org/
