Recommended way for running applications per Service Fabric node - azure

Can I know what the best practice is for running applications on a Service Fabric node: is it one service per node, or multiple services per node? Let me know the advantages and disadvantages, if there are any.

One of the goals/features of using Azure Service Fabric (ASF) is to run many (micro)services on your nodes, to make efficient use of all the system resources.
Deploy applications at higher density than virtual machines, deploying hundreds or thousands of applications per machine.
Read more here.

Related

Node.js Cluster module vs Microservices

They both solve the same issue - scalability. When to use which?
And is there a point to integrating cluster API for node app running inside a docker container?
They're not really equivalent. Microservices solve an organizational and code-management problem (and scalability in a very dynamic way, by reducing tight coupling and keeping bugs isolated to one microservice). cluster solves scalability in a very limited way, by spinning up cluster workers on the same machine. If you have one large app and generally scale vertically (by increasing the amount of computing power your hosts have), cluster is great. If not, breaking things down into services (or further down into microservices) is also great.
You can also do both (your second question), for example running Node apps in containers on Kubernetes, where the Node apps use cluster. Depending on how your containers get run and how many vCPUs they're allocated, it may or may not have any effect, but it's only a couple of lines of code, so it doesn't hurt to add it.
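For illustration, here is a minimal sketch of what those couple of lines look like, assuming an Express app; the port and handler are placeholders:

// Fork one worker per vCPU the container is allocated; workers share the port.
const cluster = require('cluster');
const os = require('os');

if (cluster.isMaster) {
  os.cpus().forEach(() => cluster.fork());
} else {
  const app = require('express')();
  app.get('/', (req, res) => res.send('ok'));
  app.listen(3000);
}

If the container only gets one vCPU, this forks a single worker and behaves much like the plain app.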

Running a Windows Service as a stateful service in Service Fabric

I have three .NET programs currently running as Windows services. We are migrating to Service Fabric, and I have a few questions. Our intent is to migrate the services to stateful services, since we need to keep track of locations of files, batch size, etc., which are currently stored in an app.config file. That way we can "lift and shift" the code from the OnTimer event to RunAsync, as discussed in this Stack Overflow question:
How to Migrate Windows Service in Azure Service Fabric
However, there are some questions I have about these services. Of course, part of using SF is running the applications in a reliable environment to keep them available as much as possible, so the first question is:
Should we only deploy the service to one node and use the reliable collection to maintain the state of the process, should the node go down and have to be brought back up?
Or, should we deploy the application to, say, 3 nodes and just have the application on each node check the reliable collection to see if another instance is processing files, and wait?
The application will "awake" at a determined interval and look at a folder; if there are any files in the folder, it will process them. This could take from a couple of seconds to many minutes. So if the application was on three nodes, it is entirely possible that the other two instances on their nodes would wake up to process the same files. If they could check a reliable dictionary to see whether another instance of the application is running the file processing, they would just wait until the next time they are needed.
I know this is vague; I am looking for input on whether to launch the application on multiple nodes or a single node.
In short: stateful services have partitioned data, so you will have at least one, and probably more than one, partition. For each partition a primary instance will be up and running, serving requests or doing work. For each primary instance there will be some secondary instances that take over when the primary fails. More info here.
In the configuration of the service you specify the number of partitions and the replica count:
<Service Name="Processing">
  <StatefulService ServiceTypeName="ProcessingType" TargetReplicaSetSize="[Processing_TargetReplicaSetSize]" MinReplicaSetSize="[Processing_MinReplicaSetSize]">
    <UniformInt64Partition PartitionCount="[Processing_PartitionCount]" LowKey="0" HighKey="25" />
  </StatefulService>
</Service>
The primary and secondary instances (replicas) will be distributed over the cluster nodes, so, for example, when the node running the primary instance goes down, a replica on another node will take over.
There is more to it than what I have described but this is the basic idea behind it all.
So to answer your question: you should specify enough replicas on other nodes to guarantee high availability.

Configure which node Service Fabric microservices run on

We have a microservices application with 5 stateless services:
eShopWeb
eShopAPI
eShopOrder
eShopBasket and eShopPayments
We created a Service Fabric cluster in Azure with 3 nodes. Now we want to configure it as follows:
eShopWeb and eShopOrder need to run on node 1
eShopAPI and eShopPayments needs to run on node 2
eShopOrder needs to run on node 3.
How can we achieve the above configuration and run multiple microservices on the same node?
You shouldn't care which node runs which service. By tying services to nodes you undermine the self-healing capabilities of SF (what if node 2 fails?). Also, you can't do rolling upgrades this way (except for eShopOrder).
I'd recommend avoiding placement constraints if you can, unless you have multiple node types or a large cluster.
Service affinity is for legacy services that are so chatty that they don't perform well when on separate nodes, because of latency in communication.
(And for production, you should use 5 nodes.)
You have two options here.
Placement constraints. You can get details in this question, and there's a manifest sketch at the end of this answer. But I would not recommend this, as the cluster's natural balancing ability would go unused here.
Service affinity. You can find more info here. It ensures that some services run on the same node as another service. But be aware: this rule can be broken; read the documentation carefully.
I would go with the second approach and see how it works, at least in a staging environment first.
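If you do end up needing placement constraints, here is a minimal sketch of what one looks like in the application manifest's default services; the NodeType property name and FrontEnd value are hypothetical and must match node properties defined on your cluster's node types:

<Service Name="eShopWeb">
  <StatelessService ServiceTypeName="eShopWebType" InstanceCount="1">
    <SingletonPartition />
    <PlacementConstraints>(NodeType == FrontEnd)</PlacementConstraints>
  </StatelessService>
</Service>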

Scaling microservices using Docker

I've created a Node.js (Meteor) application and I'm looking at strategies to handle scaling in the future. I've designed my application as a set of microservices, and I'm now considering implementing this in production.
What I'd like to do, however, is have many microservices running on one server instance to maximize resource usage while each uses only a small amount of resources. I know containers are useful for this, but I'm curious if there's a way to create a dynamically scaling set of containers where I can:
Write commands such as "provision another app container on this server if the containers running this app reach > 80% CPU/other limiting metrics",
Provision and prepare other servers if needed for extra containers,
Load balance connections between these containers (and does this affect server load balancing, e.g., sending fewer connections to servers with fewer containers?)
I've looked into AWS EC2, Docker Compose and nginx, but I'm uncertain if I'm going in the right direction.
Investigate Kubernetes and/or Mesos, and you'll never look back. They're tailor-made for what you're looking to do. The two components you should focus on are:
Service Discovery: This allows inter-dependent services (micro-service "A" calls "B") to "find" each other. It's typically done using DNS, but with registration features on top of it that handle what happens as instances are scaled.
Scheduling: In Docker-land, scheduling isn't about CRON jobs, it means how containers are scaled and "packed" into servers in various ways to maximize efficient usage of available resources.
There are actually dozens of options here: Docker Swarm, Rancher, etc. are also competing alternatives. Many cloud vendors like Amazon also offer dedicated services (such as ECS) with these features. But Kubernetes and Mesos are emerging as standard choices, so you'd be in good company if you at least start there.
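As a concrete illustration of the "> 80% CPU" rule from the question, this is roughly what a Kubernetes HorizontalPodAutoscaler looks like; the deployment name and replica bounds are hypothetical:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: app              # the Deployment running your app containers
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 80   # add pods when average CPU exceeds 80%

Kubernetes then load balances between pods via Services, and cluster autoscaling can provision extra servers when pods no longer fit on the existing ones.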
Metrics could be collected via the Docker API (a cool blog post covers this too), and it's often used for exactly that.
Tinkering with the Docker API and the Docker stack tools (Compose/Swarm/Machine) can provide a lot of building blocks for scaling a microservice architecture efficiently.
I would advise in favor of Consul to manage discovery in such a resource-aware system.
We are using AWS to host our microservices application, and we use ECS (the AWS Docker service) to containerize the different APIs.
And in this context, we use the AWS auto scaling feature to manage scaling in and out. Check this.
Hope it helps.

How to make a distributed node.js application?

Creating a node.js application is simple enough.
var app = require('express')();
app.get('/', function(req, res){
  res.send("Hello world!");
});
app.listen(3000); // without this, the app never starts serving requests
But suppose people became obsessed with your Hello World! application and exhausted your resources. How could this example be scaled up in practice? I don't understand it, because yes, you could run several Node.js instances on different computers, but when someone accesses http://your_site.com/ it points directly at one specific machine, one specific port, one specific Node process. So how?
There are many, many ways to deal with this, but it boils down to two things:
being able to use more cores per server
being able to scale beyond a single server.
node-cluster
For the first option, you can use node-cluster or the same solution as for the second option. node-cluster (http://nodejs.org/api/cluster.html) is essentially a built-in way to fork the node process into one master and multiple workers. Typically, you'd want 1 master and n-1 to n workers (n being your number of available cores).
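A minimal sketch of that master/worker split, reusing the Hello World app from the question (the port is an assumption):

const cluster = require('cluster');
const os = require('os');

if (cluster.isMaster) {
  // Master: fork one worker per core, and replace any worker that dies.
  for (let i = 0; i < os.cpus().length; i++) cluster.fork();
  cluster.on('exit', () => cluster.fork());
} else {
  // Workers all listen on the same port; the master distributes connections.
  const app = require('express')();
  app.get('/', (req, res) => res.send('Hello world!'));
  app.listen(3000);
}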
load balancers
The second option is to use a load balancer that distributes the requests amongst multiple workers (on the same server, or across servers).
Here you have multiple options as well. Here are a few:
a node based option: Load balancing with node.js using http-proxy (see the sketch after this list)
nginx: Node.js + Nginx - What now? (using more than one upstream server)
apache: (no clearly helpful link I could use, but a valid option)
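As a sketch of the node based option, here is a naive round-robin balancer built on http-proxy (npm install http-proxy); the backend ports are hypothetical:

const http = require('http');
const httpProxy = require('http-proxy');

const backends = ['http://127.0.0.1:3001', 'http://127.0.0.1:3002'];
const proxy = httpProxy.createProxyServer({});
let next = 0;

http.createServer((req, res) => {
  // Rotate through the backends, one request at a time.
  proxy.web(req, res, { target: backends[next++ % backends.length] });
}).listen(3000);

Real load balancers add health checks, retries and sticky sessions, which is why nginx or HAProxy are usually the better choice in production.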
One more thing: once you start having multiple processes serving requests, you can no longer use memory to store state. You need an additional service to store shared state; Redis (http://redis.io) is a popular choice, but by no means the only one.
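For example, a minimal sketch of moving a counter out of process memory into Redis, assuming the node-redis v4 client (npm install redis) and a local Redis server; the key name is arbitrary:

const { createClient } = require('redis');

async function main() {
  const client = createClient(); // assumes Redis on localhost:6379
  await client.connect();
  // INCR is atomic, so every process sees the same shared counter.
  const visits = await client.incr('visits');
  console.log('visit number', visits);
  await client.quit();
}

main();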
If you use services such as Cloud Foundry, Heroku, and others, they set this up for you, so you only have to worry about your app's logic (and using a service to deal with shared state).
I've been working with Node for quite some time, but recently I got the opportunity to try scaling my Node apps, and I've been researching this topic for a while now. Here are the prerequisites for scaling I've come across:
The app needs to be available on a distributed system, with each machine running multiple instances of Node.
Each system should have a load balancer that helps distribute traffic across the node instances.
There should be a master load balancer that should distribute traffic across the node instances on distributed systems.
The master balancer should always be running OR should have a dependable restart mechanism to keep the app stable.
For the above requisites I've come across the following:
Use modules like cluster to start multiple instances of node in a system.
Always use nginx. It's one of the simplest mechanisms for creating a load balancer I've come across so far (a minimal config sketch follows this list).
Use HAProxy to act as a master load balancer. A few pointers on how to use it and keep it forever running.
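For the nginx point above, a minimal upstream config sketch, assuming two local Node instances on hypothetical ports:

upstream node_app {
    server 127.0.0.1:3000;
    server 127.0.0.1:3001;
}

server {
    listen 80;
    location / {
        proxy_pass http://node_app;
        # Pass upgrade headers so websockets keep working through the proxy.
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }
}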
Useful resources:
Horizontal scaling node.js and websockets.
Using cluster to take advantage of multiple cores.
I'll keep updating this answer as I progress.
The basic way to use multiple machines is to put them behind a load balancer and point all your traffic to it. That way, someone going to http://my_domain.com ends up at the load balancer machine. The load balancer's sole purpose (for this example anyway; in theory it could do more) is to delegate traffic to a given machine running your application. This means you can have any number of machines running your application, while an external machine (in this case a browser) only needs the load balancer's address to reach one of them. The client doesn't (and doesn't have to) know which machine actually handles its request. If you are using AWS, it's pretty easy to set up and manage this. Note that Pascal's answer has more detail about your options here.
With Node specifically, you may want to look at the Node Cluster module. I don't really have a lot of experience with this module, but it should allow you to spawn multiple processes of your application on one machine, all sharing the same port. Also note that it's still experimental, and I'm not sure how reliable it will be.
I'd recommend taking a look at http://senecajs.org, a microservices toolkit for Node.js. It's a good starting point for beginners, and for starting to think in terms of "services" instead of monolithic applications.
Having said that, building distributed applications is hard. They take time to learn and a LOT of time to master, and you will usually face plenty of trade-offs between performance, reliability, maintenance, etc.
