Service Fabric was just announced at the Build conference. I've been reading the scarce documentation about it, and I have a question.
I'm evaluating Service Fabric for hosting CRUD-like microservices that are currently built in ASP.NET WebApi.
Is Service Fabric geared towards hosting small pieces of functionality that receive data, process it and return the result, rather than hosting CRUD WebApi types of application?
Service Fabric enables the creation of both stateless and stateful microservices.
As the name suggests, any state maintained by an instance of a stateless service will be lost if the node goes down. A new, fresh instance will simply be spun up elsewhere in the cluster.
Stateful services offer the ability to persist state without relying on an external store. Any data stored in a Reliable Collection will be automatically replicated across multiple nodes in the cluster, ensuring that the state is resilient to failures.
A common pattern is to use a stateless service as the client-facing gateway to the application and then have that service direct traffic to the app's partitioned stateful services. This hides the work of resolving partitions from clients, allowing them to target one logical endpoint with all requests.
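As a rough sketch of what that resolution step might look like inside the gateway (the service name and the Int64-ranged partitioning scheme below are placeholders, not taken from any particular sample):

```csharp
using System;
using System.Fabric;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Services.Client;

public static class PartitionLookup
{
    // Resolve which replica currently serves the partition that owns this key,
    // so the gateway can forward the request to it.
    public static async Task<string> ResolveAddressAsync(string requestKey, CancellationToken cancellationToken)
    {
        // Hypothetical Int64-ranged partitioning: derive a partition key from the request.
        long partitionKey = (uint)requestKey.GetHashCode() % 26;

        ServicePartitionResolver resolver = ServicePartitionResolver.GetDefault();
        ResolvedServicePartition partition = await resolver.ResolveAsync(
            new Uri("fabric:/MyApp/MyStatefulService"),   // placeholder service name
            new ServicePartitionKey(partitionKey),
            cancellationToken);

        // The endpoint address is a JSON blob listing the replica's listeners;
        // the gateway parses it and forwards the original request there.
        return partition.GetEndpoint().Address;
    }
}
```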
Take a look at the WordCount sample for an example of how this works. The WordCount.WebService stateless service acts as the front end to the application. It simply resolves the partition based on the incoming request and then sends it on. The WordCount.Service stateful service (partitioned based on the first letter of the word) immediately puts those incoming requests in a ReliableQueue and then processes them in the background, storing the results in a ReliableDictionary.
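The stateful side of that flow might look roughly like the following. This is a simplified sketch rather than the actual sample code (the state names, types, and timing are invented for illustration):

```csharp
using System;
using System.Fabric;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Data;
using Microsoft.ServiceFabric.Data.Collections;
using Microsoft.ServiceFabric.Services.Runtime;

internal sealed class WordCountService : StatefulService
{
    public WordCountService(StatefulServiceContext context) : base(context) { }

    protected override async Task RunAsync(CancellationToken cancellationToken)
    {
        // Incoming words land in a replicated queue; counts live in a replicated dictionary.
        IReliableQueue<string> queue =
            await this.StateManager.GetOrAddAsync<IReliableQueue<string>>("incomingWords");
        IReliableDictionary<string, long> counts =
            await this.StateManager.GetOrAddAsync<IReliableDictionary<string, long>>("wordCounts");

        while (!cancellationToken.IsCancellationRequested)
        {
            using (ITransaction tx = this.StateManager.CreateTransaction())
            {
                ConditionalValue<string> dequeued = await queue.TryDequeueAsync(tx);

                if (dequeued.HasValue)
                {
                    // The dequeue and the counter update commit (and replicate) atomically.
                    await counts.AddOrUpdateAsync(tx, dequeued.Value, 1, (word, current) => current + 1);
                    await tx.CommitAsync();
                }
            }

            // Small delay between iterations to keep the sketch simple.
            await Task.Delay(TimeSpan.FromMilliseconds(100), cancellationToken);
        }
    }
}
```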
For more details, see the Reliable Services Overview.
Note: for now, the best way to expose WebAPI endpoints to clients is to self-host an OWIN server in the stateless service. ASP.NET 5 projects will soon be supported as well.
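For example, a trimmed-down communication listener along those lines (this assumes an endpoint named "ServiceEndpoint" is declared in ServiceManifest.xml; error handling is omitted):

```csharp
using System;
using System.Fabric;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Owin.Hosting;
using Microsoft.ServiceFabric.Services.Communication.Runtime;
using Owin;

// Minimal ICommunicationListener that self-hosts an OWIN pipeline (e.g. Web API)
// inside a stateless service.
internal sealed class OwinCommunicationListener : ICommunicationListener
{
    private readonly Action<IAppBuilder> startup;
    private readonly ServiceContext serviceContext;
    private IDisposable webApp;
    private string listeningAddress;

    public OwinCommunicationListener(Action<IAppBuilder> startup, ServiceContext serviceContext)
    {
        this.startup = startup;
        this.serviceContext = serviceContext;
    }

    public Task<string> OpenAsync(CancellationToken cancellationToken)
    {
        // Use the port declared for "ServiceEndpoint" in ServiceManifest.xml.
        int port = this.serviceContext.CodePackageActivationContext
            .GetEndpoint("ServiceEndpoint").Port;

        this.listeningAddress = $"http://+:{port}/";
        this.webApp = WebApp.Start(this.listeningAddress, appBuilder => this.startup(appBuilder));

        // The address returned here is what gets published for clients to discover.
        string publishAddress = this.listeningAddress.Replace(
            "+", FabricRuntime.GetNodeContext().IPAddressOrFQDN);
        return Task.FromResult(publishAddress);
    }

    public Task CloseAsync(CancellationToken cancellationToken)
    {
        this.webApp?.Dispose();
        return Task.CompletedTask;
    }

    public void Abort()
    {
        this.webApp?.Dispose();
    }
}
```

The stateless service would return this listener from CreateServiceInstanceListeners(), and the startup delegate would wire up the Web API routes (HttpConfiguration plus app.UseWebApi) as usual.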
This video answers my own question: http://channel9.msdn.com/Events/Build/2015/2-704. In summary, we should use stateless services to host ASP.NET-based sites or APIs that persist data to external data stores.
If you don't have state (or keep it externally), a Stateless Service is the way to start.
The answer to the original question is "both". Basically, anything that has a main() function (plus a couple of extended contract methods to talk to Service Fabric) can be a service in the Service Fabric world.
I went over this blog, Azure SF vs Docker, but it didn't completely resolve my doubts.
I have Docker Data Center on-prem and I want to bring Azure SF into this environment, but I feel DDC is doing exactly the same thing as Service Fabric.
A few things on my mind:
DDC takes care of scaling up, all types of container orchestration, health monitoring, etc.
A few items it doesn't provide:
Service remoting between services, a publish/subscribe model between services, a stateful layer (I've heard about Portworx volume replication).
Can someone enlighten me on when I should go with SF, i.e., what SF provides that DDC doesn't?
If your application landscape consists of containers and there is no intention to change that, then you should probably stick to DDC.
Azure Service Fabric (ASF) has a lot more to offer than support for containers. In fact, in its earlier days it did not even have support for containers.
The focus of ASF is to provide a platform for building microservices-based applications using stateless services, stateful services, and actors.
Things that DDC does not provide:
Stateful Services
Actor model
Stateful Services: The benefit of stateful services is that the data lives where the code lives, so there is no longer a need for a separate data store like a NoSQL or relational database. Another big benefit is the reduced latency. In other words, if you have a frontend running in a container that connects to a container running a MySQL server, for example, you can replace that using a mix of stateless and stateful services.
Actor model: The actor pattern is a computational model for concurrent or distributed systems in which a large number of small, isolated actors can execute simultaneously and independently of each other.
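In ASF this maps to Reliable Actors. A tiny, illustrative sketch (the IWordCounter contract and its state name are made up for this example):

```csharp
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Actors;
using Microsoft.ServiceFabric.Actors.Runtime;

// Each key (e.g. each word) gets its own actor instance; Service Fabric runs an
// actor's methods one at a time, so no explicit locking is needed.
public interface IWordCounter : IActor
{
    Task IncrementAsync();
    Task<int> GetCountAsync();
}

internal class WordCounter : Actor, IWordCounter
{
    public WordCounter(ActorService actorService, ActorId actorId)
        : base(actorService, actorId)
    {
    }

    public async Task IncrementAsync()
    {
        // Actor state is persisted and replicated by the runtime.
        int current = await this.StateManager.GetOrAddStateAsync("count", 0);
        await this.StateManager.SetStateAsync("count", current + 1);
    }

    public Task<int> GetCountAsync()
    {
        return this.StateManager.GetOrAddStateAsync("count", 0);
    }
}
```

A caller would then reach an actor through a proxy, for example ActorProxy.Create<IWordCounter>(new ActorId("some-word"), new Uri("fabric:/MyApp/WordCounterActorService")), where the application and service names are again placeholders.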
In some scenarios the use of containers in ASF is a temporary one, to lift and shift existing software and combine it with ASF's own service models. In later stages the containers can be replaced by ASF services.
The official docs do list some scenarios for when to run containers on ASF:
IIS lift and shift: If you have existing ASP.NET MVC apps that you want to continue to use, put them in a container instead of migrating them to ASP.NET Core. These ASP.NET MVC apps depend on Internet Information Services (IIS). You can package these applications into container images from the precreated IIS image and deploy them with Service Fabric. See Container Images on Windows Server for information about Windows containers.
Mix containers and Service Fabric microservices: Use an existing container image for part of your application. For example, you might use the NGINX container for the web front end of your application and stateful services for the more intensive back-end computation.
Reduce impact of "noisy neighbors" services: You can use the resource governance ability of containers to restrict the resources that a service uses on a host. If services might consume many resources and affect the performance of others (such as a long-running, query-like operation), consider putting these services into containers that have resource governance.
By the way, in your referenced Q&A the fact that it is a Microsoft product is listed as a possible disadvantage. It might still be to some, but Microsoft has announced it will open source ASF.
Can you guys explain: a Service Fabric application can be packaged with multiple services to be shipped, but then how do you reuse some of these services in another application?
Is there a way a Reliable Dictionary or Reliable Queue may be shared among services deployed on the same cluster?
I tried reading on Google but couldn't get a clear understanding. Your help will be really appreciated.
... how do you reuse some of these services in another application?
What do you mean with reuse? Sharing the code? You could have a service in Application A talk to a service in Application B instead of having the same service in Application A.
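For example, with service remoting a service in Application A could call into Application B like this sketch (the interface, method, and fabric:/ URI are purely illustrative and assume the target service exposes a remoting listener):

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Services.Remoting;
using Microsoft.ServiceFabric.Services.Remoting.Client;

// Shared contract, typically placed in a class library referenced by both applications.
public interface IPricingService : IService
{
    Task<decimal> GetPriceAsync(string productId);
}

public static class PricingClient
{
    public static Task<decimal> GetPriceAsync(string productId)
    {
        // Address the service in the *other* application by its fabric:/ name.
        IPricingService proxy = ServiceProxy.Create<IPricingService>(
            new Uri("fabric:/ApplicationB/PricingService"));

        return proxy.GetPriceAsync(productId);
    }
}
```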
Is there a way a Reliable Dictionary or Reliable Queue may be shared among services deployed on the same cluster?
No, there is not. A Reliable Dictionary or Reliable Queue provides data locality to a service, removing the need for additional network calls. As soon as you need the same data in multiple services, you should consider using other storage solutions like Cosmos DB, Blob storage, or another database.
If you are looking for some kind of distributed cache you can take a look at Azure Redis.
It is, however, entirely possible to expose the data of a Reliable Dictionary or Reliable Queue using a service. That service then acts as a data provider / repository. You can expose methods like Add() or Delete() in such a service that result in an update of the Reliable Dictionary or Reliable Queue.
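A minimal sketch of such a data-provider service, assuming service remoting and invented names:

```csharp
using System.Fabric;
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Data;
using Microsoft.ServiceFabric.Data.Collections;
using Microsoft.ServiceFabric.Services.Remoting;
using Microsoft.ServiceFabric.Services.Runtime;

// Remoting contract other services use to reach the data.
public interface ICustomerStore : IService
{
    Task AddAsync(string customerId, string name);
    Task DeleteAsync(string customerId);
}

// Stateful service that owns the Reliable Dictionary and exposes it
// only through these methods.
internal sealed class CustomerStoreService : StatefulService, ICustomerStore
{
    public CustomerStoreService(StatefulServiceContext context) : base(context) { }

    public async Task AddAsync(string customerId, string name)
    {
        IReliableDictionary<string, string> customers =
            await this.StateManager.GetOrAddAsync<IReliableDictionary<string, string>>("customers");

        using (ITransaction tx = this.StateManager.CreateTransaction())
        {
            // Add or overwrite the entry; the change is replicated on commit.
            await customers.SetAsync(tx, customerId, name);
            await tx.CommitAsync();
        }
    }

    public async Task DeleteAsync(string customerId)
    {
        IReliableDictionary<string, string> customers =
            await this.StateManager.GetOrAddAsync<IReliableDictionary<string, string>>("customers");

        using (ITransaction tx = this.StateManager.CreateTransaction())
        {
            await customers.TryRemoveAsync(tx, customerId);
            await tx.CommitAsync();
        }
    }
}
```

Other services would then call it through a ServiceProxy (with the appropriate partition key if the service is partitioned, and with a remoting listener registered in CreateServiceReplicaListeners); the Reliable Dictionary itself never leaves the service that owns it.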
I need to deploy two Node.js services to CF (each service in its own container).
The apps need to communicate. How is it recommended to implement this communication? I can't find any guide that explains service-to-service communication in CF, and since this is deployed to the cloud I need some best practices. Some examples would be very helpful.
This is a classic question that comes up with any enterprise application integration pattern, and it comes down to what type of integration needs you have.
If an app wants synchronous communication to get a real-time response, RESTful APIs are the most loved integration style of this age. But one also needs to consider that creating a huge number of APIs (which is a downside of going with a microservices-based architecture) brings in the overhead of maintaining the set and locating the correct one. An API gateway and a service discovery tool should help here. I am a novice with Bluemix, but you can surely host a Spring Cloud Eureka or Consul based service discovery on it to serve the purpose, and similarly Spring Cloud Zuul as an API gateway.
Another simple catch here is to ensure you do not build one central service as a fat single point of failure (SPOF) catering to your whole microservices world, but rather have many such services, each catering to a contextually bounded set of microservices.
Along similar lines, if the need is for async communication, message brokers such as RabbitMQ or Kafka are the best and simplest integration style for apps to communicate. The same catch of not building a SPOF applies here as well: have separate broker instances, one for each set of bounded microservices, and federate those instances for wider communication.
Your answer will depend on what kind of communication you want between your apps.
If you're looking to deploy a microservice-based architecture pattern for your Node services, i.e. server code that performs an independent, granular business function, I would recommend getting started reading the docs here and using the new Bluemix Developer Console.
There is a growing set of patterns and starters there that you can use to understand and develop cloud-native apps that communicate with each other by exposing API endpoints compliant with the OpenAPI specification and auto-generating SDKs for your omnichannel client applications.
After downloading the selected starter, you can modify the code to expose an API that performs the business logic that you need. Subsequently, you can run your project locally in a container or deploy it to Bluemix using the bx dev command line tool.
After setting that up, you will have cross platform, language independent communication between your microservices and client applications.
What is the reasoning behind Applications concept in Service Fabric? What is the recommended relation between Applications and Services? In which scenarios do Applications prove useful?
Here is a nice summary of how logical services differ from physical services: https://learn.microsoft.com/en-us/dotnet/standard/microservices-architecture/architect-microservice-container-applications/logical-versus-physical-architecture
Now, in relation to Service Fabric, Service Fabric applications represent logical services while Service Fabric services represent physical services. To simplify, a Service Fabric application is a deployment unit, so you would put multiple services there when they rely on the same persistent storage or have other inter-dependencies that mean you really need to deploy them together. If you have totally independent services, you would put them into different Service Fabric applications.
An application is a collection of constituent services that perform a certain function or functions. A service performs a complete and standalone function and can start and run independently of other services. A service is composed of code, configuration, and data. For each service, code consists of the executable binaries, configuration consists of service settings that can be loaded at run time, and data consists of arbitrary static data to be consumed by the service. Each component in this hierarchical application model can be versioned and upgraded independently.
It is described here in detail.
How I currently see it, applications are a nice concept for grouping multiple services together and managing them as a single unit. In the context of Service Fabric, this is useful if you have multiple nano-services that do not warrant being completely standalone; instead you can package them together into microservices (SF applications).
Disclaimers:
- a nano-service would be a REALLY small piece of code running as a stateless SF service, for example (e.g. read from a queue, a couple of lines of code to process, write to another queue).
- in the case of "normal" microservices, one could consider packaging them as 1 SF application = 1 SF service
An application is a required top level container for services. You deploy applications, not services. So you cannot really speak about differences between the two since you cannot have services without an application.
From https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-application-model:
An application is a collection of constituent services that perform a certain function or functions. A service performs a complete and standalone function (it can start and run independently of other services) and is composed of code, configuration, and data. For each service, code consists of the executable binaries, configuration consists of service settings that can be loaded at run time, and data consists of arbitrary static data to be consumed by the service. Each component in this hierarchical application model can be versioned and upgraded independently.
Take a look at the link provided and you will see the hierarchical relationship.
I need help with how to think about designing our application to fit into the new Azure Service Fabric model.
Today we have an application built on Azure Cloud Services. The application is built around DDD and we have separate bounded contexts for different subsystem parts of the application. The bounded contexts are today hosted in one worker role that exposes these subsystems using a single WebAPI.
Additionally we have one Web Role hosting the web frontend and one Worker Role processing a background queue.
We strive to move to a microservices architecture. The first thing I planned to do was to extract all bounded contexts into their own API hosts. This will result in 5-10 new WebAPI services supporting our subsystems.
To my question, should all of these subsystem/bounded context/API-hosts be their own Service Fabric Application or a service within a single Service Fabric Application?
I've read the documentation (found here: Service Fabric Application Model) over and over, and I can't figure out where my services fit in.
We want the system to support different versions of the services, and it should also be possible to scale the services independently of one another. There might even be a requirement for one microservice to run on a larger VM size than the rest.
Can someone please guide me toward what suits my needs?
I think you have the right idea, in general terms, that each bounded context is a (micro) service. Service Fabric gives you two levels of organization with applications and services, where an application is a logical grouping of services. Here's what that means for you:
Logically speaking, think of an application as a cohesive set of functionality. The services that collectively form that cohesive set of functionality should be grouped as an application. You can ask yourself, for each service: "does it make sense to deploy this service by itself without these other services?" If the answer is no, then they should probably be grouped in the same application.
Developmentally speaking, the Visual Studio tooling is geared a bit more toward multiple services in one application, but you can have multiple applications in one solution too.
Operationally speaking, an application represents a process boundary, upgrade group, and versioning group:
Each instance of an application you create gets its own process (or set of processes if you have multiple service types in the application). Service instances of a service type share host processes. Service instances of different service types get their own process per type.
The application is the top level upgrade unit, that is, every upgrade you do is an application upgrade. You can upgrade individual services within an application (you don't always have to upgrade every service within an application), but each time you do an upgrade, the application version changes.
You can create side-by-side instances of different versions of the same application type in your cluster. You cannot create side-by-side instances of different versions of the same service type within an application instance.
Placement and scaling are done at the service level. So, for example, you can scale one service in an application, and you can place another service on a larger VM.