Monolithic (vs) Micro-services ==> Threads (vs) Process - multithreading

I have a monolithic application with a single process that has 5 threads. Each thread accomplishes a specific task. I am thinking of moving this application to microservices using Docker. If I look at the architecture, each worker thread would become a Docker process. So, in some ways, the monolithic vs. microservices question becomes more like a threads vs. processes discussion in my case.
The original thinking behind the monolith was to use threads for performance and to share the same memory. Now, with a microservices architecture, I am pushed toward a process model that may not be suitable from a performance point of view.
I am kind of stuck on how to approach this problem.

What you are missing here is that microservices are not suitable for every software system in the world! Think about the drivers for migrating your current monolithic system to microservices before doing anything. Are you seeking high availability and scalability? Do you want the freedom to write each thread in a different programming language? Is your system so complicated that it cannot be comprehended in a monolithic style? And finally, are you ready to pay the expenses of a microservices style?
Microservices bring many complexities into the system and may cause performance penalties, traded for higher scalability, due to the chattiness of services. If performance is an important concern, the system is not that large, and your answer to most of the above questions is "no", I strongly suggest that you do not go for a microservices style. Instead, try to modularize your current code base and refactor the code for better quality and comprehensibility.
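For instance, "modularize" can be as simple as putting each of the five workers behind its own small module with an explicit interface and injected dependencies. A rough sketch (the BillingWorker name and the queue shape are hypothetical, not taken from the question):

```js
// billing-worker.js - a hypothetical worker kept behind a small, explicit module boundary.
class BillingWorker {
  constructor({ queue, logger }) {   // dependencies are injected, not reached for globally
    this.queue = queue;              // assumes a simple in-memory array of jobs
    this.logger = logger;
  }

  start() {
    // the rest of the monolith only ever calls start()/stop(); the internals stay private
    this.timer = setInterval(() => this.drain(), 1000);
  }

  stop() {
    clearInterval(this.timer);
  }

  drain() {
    let job;
    while ((job = this.queue.shift()) !== undefined) {
      this.logger.info(`processing billing job ${job.id}`);
      // ...billing logic lives here and nowhere else
    }
  }
}

module.exports = { BillingWorker };
```

The rest of the application talks to this worker only through start()/stop(), so moving it into its own process later becomes an implementation detail rather than a rewrite.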
Regarding Docker, you can use it even with the monolithic style in order to remove some deployment barriers and inconsistencies between the development and deployment environments. If the mentioned deployment issues do not bother you, do not go for Docker either, since it will just be an extra layer of computational overhead.

Microservices can give your application more power, but it depends on the size of your project, the degree of availability you need, and whether you have many teams, many languages, and so on.
For some projects microservices will be overkill and the work can be handled with multithreading, so think about your vision before migrating to this architecture.

Related

Should I be moving to a microservices based architecture?

I am working on a monolithic system. All of its code is in one repository (Web API and background workers). The system is written in Node.js and MongoDB (Mongoose) is used as the data store. My goal is to set a new path for how the project should evolve. At first I was wondering if I could move towards a microservices-based architecture.
The monolith architecture creates some problems:
If my background workers need to scale, I have to deploy the whole project to the server despite only using a small fraction of it.
The whole system must be redeployed when code changes. What if the payment processor calls a webhook while the system is being redeployed?
The advantages of using microservices are quite obvious:
A smaller code base for each individual microservice, which is easier to reason about.
Ability to select programming tools best for particular use case.
Easier to scale.
Looking at the current code I noticed that Mongoose ODM (Object Document Mapper) models are used across the whole project to create, query and update documents in the database. As a principle of good programming, all such interactions with the database should be abstracted; business logic should not leak into other system layers. I could do that by introducing the REPOSITORY pattern (Domain-Driven Design). While the code is still shared between the Web API and its background workers, this is not a hard task to do.
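For example, a thin repository wrapping a Mongoose model might look roughly like this (a sketch only; the User model and method names are hypothetical, not taken from the question):

```js
// userRepository.js - keeps Mongoose details out of the business logic (hypothetical model).
const mongoose = require('mongoose');

const UserModel = mongoose.model('User', new mongoose.Schema({
  email: String,
  name: String,
}));

class UserRepository {
  async create(attrs) {
    return UserModel.create(attrs);
  }

  async findByEmail(email) {
    // .lean() returns a plain object, so callers never see Mongoose documents
    return UserModel.findOne({ email }).lean();
  }

  async rename(id, name) {
    return UserModel.findByIdAndUpdate(id, { name }, { new: true }).lean();
  }
}

module.exports = { UserRepository };
```

The Web API handlers and background workers then depend only on UserRepository, so the Mongoose details stay in one place and could later be swapped out, or even moved behind a network boundary, without touching business logic.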
If I decide to extract the repositories into standalone microservices, then a whole bunch of problems arise:
Some sort of query language must be introduced to accommodate complex search queries.
The interface must provide a way to iterate over search results (cursor-based navigation) without returning all database documents over the network (a sketch of both points follows below).
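As a rough sketch of both points while the repository is still in-process with Mongoose (the Order model and the filter shape are hypothetical; a standalone service would have to serialize something equivalent over the network):

```js
// orderRepository.js - a plain filter object stands in for a query language, and a
// Mongoose QueryCursor streams results instead of loading them all at once (hypothetical model).
const mongoose = require('mongoose');

const OrderModel = mongoose.model('Order', new mongoose.Schema({
  status: String,
  total: Number,
  createdAt: Date,
}));

class OrderRepository {
  // filter: e.g. { status: 'paid', minTotal: 100 }
  find(filter = {}) {
    const query = {};
    if (filter.status) query.status = filter.status;
    if (filter.minTotal) query.total = { $gte: filter.minTotal };
    // .cursor() returns an async-iterable QueryCursor, yielding one document at a time
    return OrderModel.find(query).sort({ createdAt: -1 }).cursor();
  }
}

// usage: iterate over a large result set without holding it all in memory
async function sumPaidOrders(repo) {
  let sum = 0;
  for await (const order of repo.find({ status: 'paid' })) {
    sum += order.total;
  }
  return sum;
}

module.exports = { OrderRepository, sumPaidOrders };
```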
Since the project is in its early stage and I am the only developer, going to a microservices-based architecture seems like overkill. Maybe there are other approaches I should consider?
Extracting business logic and database interaction into a separate repository and sharing it among services, to avoid complex communication protocols between services?
Based on my experience working with microservices over the last few years, it seems like overkill in the current scenario but pays off in the long term.
Based on the information stated above, my thoughts are:
Code Structure - Applying Microservices Architecture (MSA) in the above context does not mean separating the DAO, business logic, etc. into services; rather, it is about designing the system around business functions. For example, if it is an eCommerce application, then you can have shipping, cart, and search as separate services, which can further be divided into smaller services. Read more about domain-driven design.
Deployment Unit - Keeping each microservice as an independent deployment unit is a key principle. Hence, take a vertical slice of the application and package it as a Docker image with the application code, app server (if any), database, and OS (Linux etc.).
Communication - With MSA, communication between services becomes key, and the general practice is to stay with a message-oriented approach for communication (read about reactive systems and reactive programming for more insight); a rough sketch follows after this list.
PaaS Solution - There are multiple PaaS solutions available which you can apply so that you don't need to worry about other aspects like container management, container orchestration, auto-scaling, configuration management, log management, monitoring, etc. See the following PaaS solutions:
https://www.nanoscale.io/ - by TIBCO
https://fabric8.io/ - by Red Hat
https://openshift.io - by Red Hat
Cloud Vendor Platforms - AWS, Azure, and Google Cloud all have specific support for microservices apps from the deployment perspective, which you can use as an alternative if you don't want to run a PaaS solution in your organization.
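To make the communication point above more concrete, here is a minimal sketch of message-oriented communication using RabbitMQ via the amqplib package; the broker choice, queue name, and payload are assumptions for illustration, not something prescribed by the points above.

```js
// messaging.js - message-oriented communication between two services via RabbitMQ (amqplib).
// Producer and consumer are shown in one script only to keep the sketch self-contained;
// in practice they would live in two separate services.
const amqp = require('amqplib');

async function main() {
  const conn = await amqp.connect('amqp://localhost'); // assumes a local RabbitMQ broker
  const channel = await conn.createChannel();
  await channel.assertQueue('orders.created', { durable: true });

  // producer side (e.g. the cart service) publishes an event and moves on
  channel.sendToQueue(
    'orders.created',
    Buffer.from(JSON.stringify({ orderId: 42, total: 99.5 })),
    { persistent: true }
  );

  // consumer side (e.g. the shipping service) reacts to it asynchronously
  await channel.consume('orders.created', (msg) => {
    const event = JSON.parse(msg.content.toString());
    console.log('preparing shipment for order', event.orderId);
    channel.ack(msg);
  });
}

main().catch(console.error);
```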
Hope these pointers help in understanding the overall landscape so that you can structure your architecture for future needs.
I am working on a monolithic system... My goal is to set a new path for how the project should evolve. At first I was wondering if I could move towards a microservices-based architecture.
In what ways do you need to evolve the project? Will it be mostly bugfixes, adding features, improving performance and/or scalability? Do you anticipate other developers collaborating in the future? Are you currently having maintenance issues? The answers to these questions (and many more) should be considered in guiding your choices.
You seem to be doing your homework around the pros and cons of a microservice architecture, so if you haven't asked yourself why you're even doing this in the first place, now would be a good time to do so.
Maybe there are other approaches I should consider?
There's always the good old don't-break-what's-working ;)

Best approach docker

I have an application with a microservices architecture to implement. My question is how to use Docker for it.
I see three scenarios and am unsure which one is the most efficient.
My service should be easy to scale because the application receives a significant number of requests, and it will be hosted on Heroku. I wonder which of the three scenarios will be more effective; I'm new to microservices and I have no idea which approach is best.
I think it really comes down to how, and how intensively, your services are used. The beauty of a microservices architecture is that the services are completely independent of each other, so you have the freedom to configure your infrastructure in whatever way maximises throughput. The use of Docker seems a little bit irrelevant to your question.

Can I use Node to build a payment gateway software?

I know it is a subjective question, but the reason I ask is that:
Node.js is not good with heavy computational tasks
Node.js has some issues with memory leaks.
Given the problems above, would Node be a good choice for building payment gateway software?
I'm very comfortable with Node, but many people have said that it's better to use another language like Go or Scala for this type of system.
Let me know what you think about whether I should use Node or another language.
Yes, node.js would be perfectly fine for payment gateway software. An appropriate design using clustering or off-loading computation tasks to child processes could easily help optimize heavy computational tasks.
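For example, a CPU-heavy step can be forked out with the built-in child_process module so the main event loop stays free to accept requests; a rough sketch (the file names and the hashing loop are made-up stand-ins for "heavy computation"):

```js
// --- main.js: keep the event loop free by forking heavy work into a child process ---
const { fork } = require('child_process');

const worker = fork('./heavy-task.js');          // separate Node process with its own event loop
worker.on('message', (digest) => console.log('result from child:', digest));
worker.send({ rounds: 500000 });                 // hand the work item to the child

// --- heavy-task.js: the CPU-heavy part, isolated from the request-serving process ---
const { createHash } = require('crypto');

process.on('message', ({ rounds }) => {
  let digest = '';
  for (let i = 0; i < rounds; i++) {             // stand-in for an expensive computation
    digest = createHash('sha256').update(digest + i).digest('hex');
  }
  process.send(digest);                          // report the result back to the parent
});
```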
Node.js is also being used by many heavy-traffic commercial sites without memory leak issues. Memory leaks are a matter of faulty software design, not of the platform.
Further, the very nature of payment gateway software (being the middleman in a transaction between two other networking endpoints) is very well set up for the node.js async design that handles lots of in-flight transactions very efficiently.
As with pretty much any major back-end system these days, you just have to design your app to work the way the platform performs best and you could probably use any of the systems you mention just fine.

Nodejs to utilize all cores on all CPUs

I'm going to create a multithreaded application that heavily utilizes all cores on all CPUs, doing some intensive I/O (web crawling) followed by intensive CPU work (analysis of the crawled streams). Is Node.js good for that (since it's single-threaded and I don't want to run a couple of Node.js instances [one per core] and sync between them)? Or should I consider some other platform?
Node is perfect for that; it is actually named Node as reference to the intended topology of its apps, as multiple (distributed) nodes that communicate with each other.
Take a look at the built-in cluster module, which lets you fork multiple worker processes and share server ports between them.
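A minimal sketch of the cluster module in action, forking one HTTP worker per core (the port and responses are arbitrary):

```js
// server.js - fork one worker per CPU core; the workers share the same listening port.
const cluster = require('cluster');
const http = require('http');
const os = require('os');

if (cluster.isPrimary) {                       // cluster.isMaster on older Node versions
  os.cpus().forEach(() => cluster.fork());
  cluster.on('exit', (worker) => {
    console.log(`worker ${worker.process.pid} died, starting a replacement`);
    cluster.fork();                            // simple self-healing for crashed workers
  });
} else {
  http.createServer((req, res) => {
    res.end(`handled by pid ${process.pid}\n`);
  }).listen(3000);                             // every worker listens on 3000; connections are balanced
}
```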
Further reading
Multi Core NodeJS App, is it possible in a single thread framework? by Cristian Ramirez on Codeburst
Scaling NodeJS Applications by Samer Buna on FreeCodeCamp
The JavaScript V8 engine was made to run async tasks on one core. However, that doesn't mean you can't have multiple cores running the same, or perhaps different, applications that communicate with each other.
You just have to be aware of some multi-core problems that might occur.
For example, if you are going to share LOTS of information between threads, then perhaps this is not the best language for you.
On the subject of multi-core languages, I was recently introduced to Elixir, which is based on Erlang (http://elixir-lang.org/).
It is a really cool language, designed entirely around multi-process applications, and it makes it easy to write very fast applications that can scale across as many cores as you want/have.
Back to Node: the answer is yes, it can make use of multiple cores, but it's up to you to decide how to proceed. Take a look at this answer, and it might clarify your mind: Node.js on multi-core machines

performance - multithreaded or multiprocess applications

In order to develop a highly network intensive server application on linux, what sort of architecture is preferred? The idea is that this app would typically run on machines with multiple cores (either virtual or physical). Considering that performance is the key criteria, is it better to go for a multi-threaded application or the one with multi-process design? I do know that sharing of resources and synchronization to access of such resources from multiple processes is a lot of programming overhead, but as mentioned earlier overall performance is the key requirement and so we can ignore those things. And the programming language would be C/C++.
I have heard that even the multi-threaded applications (single process) can take advantage of multiple cores and run each thread on a different core independently (as long as there is no sync issues). And this scheduling is done by the kernel. If so, is there not much difference in performance between multi-threaded applications and multi-process applications? Nginx uses a multi-process architecture and is really quick, but can one get the same performance with multi-threaded applications?
Thanks.
Processes and threads on Linux are very similar to each other - the main differences are that threads share the whole virtual memory, and that certain things like signal handling differ.
This makes for cheaper context switches between threads (no need for costly MMU reloads etc.) but doesn't necessarily cause much difference in speed (especially outside of thread creation).
For designing a highly network intensive application, basically the only solution is to use an evented architecture (otherwise you'll bog down the system with a huge number of processes/threads and spend more time managing them than actually running work code), where you react to I/O on sockets and, based on which sockets exhibit activity, perform the appropriate operations.
A famous writeup about the problems faced in such situations is "The C10k problem", available from http://www.kegel.com/c10k.html - it describes different I/O approaches, so despite being a bit dated, it's a very good introduction.
Be careful before jumping deeply into reactor-like designs, though - they can get unwieldy and complex, so see if you can't use library/language that provides a nicer abstraction over it (Erlang is my personal favourite in this, languages with coroutines like Go can be useful too).
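The question is about C/C++, but as a quick illustration of the evented model described above, here is the same idea in Node, whose core is exactly this kind of single-threaded reactor (a toy echo server, not a recommendation for the poster's stack):

```js
// echo-server.js - one thread, many sockets: the process only reacts to I/O events.
const net = require('net');

const server = net.createServer((socket) => {
  socket.on('data', (chunk) => {   // invoked only when this socket actually has data
    socket.write(chunk);           // do the (cheap) work, then go back to waiting
  });
  socket.on('error', () => socket.destroy());
});

server.listen(9000, () => {
  console.log('echoing on port 9000 from a single event loop');
});
```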
If your threads do their jobs independently of one another, then under Linux there is simply no reason not to go with multiple processes instead. Multiple processes will increase your memory usage, as each process has its own private memory space, but on the other hand, sharing a memory space between independent threads is the worse choice. Context switching is usually handled better for processes than for threads, although this is a little bit architecture- and code-dependent. Processes don't need to be serialized with locks and mutexes, and they are easier to manage and interact with in Linux. Here is a good document you might find interesting (http://elinux.org/images/1/1c/Ben-Yossef-GoodBadUgly.pdf).

Resources