We are developing Microservices with .NET Core on top of Service Fabric.
We have two development environments that are part of our release process: an Automated Test environment and a Functional Test environment. Using two full instances of SF on Azure is costly, given that we can tolerate reduced availability and performance in our dev environments and that our production environment is on Azure. We already have a VM that we can use.
Does SF work under Windows Server 2016 Core? (I couldn't find any confirmation online.)
Is it possible to have two instances of our application running on one VM?
In order to set up a Service Fabric cluster you need at least 3 machines (otherwise you cannot reach quorum in your cluster). If you run it on Azure, you also choose the reliability and durability tiers for your node types: https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-cluster-capacity#the-reliability-characteristics-of-the-cluster. A higher reliability tier (Silver, Gold or Platinum) means that you need more nodes (machines) in the cluster.
You can run multiple instances of the same application, and of different deployment versions, in the same cluster. You need to consider how your services are assigned ports (for those that expose HTTP endpoints), otherwise they will conflict if you have multiple instances of the same application type in the same cluster. There is currently no way to provision new instances through Visual Studio; you need to use PowerShell, the API or Service Fabric Explorer.
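As an illustration, here is a minimal sketch of creating a second named application instance from code with the FabricClient API. The application names, type name, version and the port parameter are placeholders; it assumes the application type has already been provisioned in the cluster and that your service reads its listening port from an application parameter.

```csharp
using System;
using System.Fabric;
using System.Fabric.Description;
using System.Threading.Tasks;

class Program
{
    static async Task Main()
    {
        // Connects to the local cluster; pass a connection endpoint for a remote one.
        var fabricClient = new FabricClient();

        // Placeholder names: the application type "MyAppType" version "1.0.0"
        // must already be provisioned (e.g. by the first deployment).
        var description = new ApplicationDescription(
            new Uri("fabric:/MyApp2"),   // second named instance, distinct from fabric:/MyApp
            "MyAppType",
            "1.0.0");

        // Hypothetical parameter: assumes the service reads its listening port from an
        // application parameter, so the two instances don't fight over the same port.
        description.ApplicationParameters.Add("WebApi_Port", "8082");

        await fabricClient.ApplicationManager.CreateApplicationAsync(description);
    }
}
```

The same can be done with the New-ServiceFabricApplication PowerShell cmdlet or from Service Fabric Explorer.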
Related
I am facing the following problem: I need to execute on-demand long running workers on Azure VMs. These workers are wrapped in a docker image.
So I looked at what Azure is offering and I seem to have the following two options:
Use a VM with docker-compose. This means I need to be able to programmatically start a VM, run the Docker image on it, and then shut down the VM (the specs we use are quite expensive and we can't let it run indefinitely). However, this means writing orchestration logic ourselves (roughly along the lines of the sketch after these two options). Is there a service we could use to make life easier?
Setting up a k8s cluster. However, I am not sure how pricing works here. Would I be able to add the type of VMs we use to the cluster, and then use the k8s API to start containers on demand? How would pricing work in this case?
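To make option 1 concrete, this is roughly the orchestration I mean: a sketch using the Microsoft.Azure.Management.Fluent management SDK, where the auth file, resource group and VM names are placeholders.

```csharp
using System.Threading.Tasks;
using Microsoft.Azure.Management.Fluent;
using Microsoft.Azure.Management.ResourceManager.Fluent;
using Microsoft.Azure.Management.ResourceManager.Fluent.Core;

class OnDemandVmRunner
{
    static async Task Main()
    {
        // Service principal credentials from an auth file (placeholder path).
        var credentials = SdkContext.AzureCredentialsFactory
            .FromFile("azureauth.properties");

        var azure = Azure.Configure()
            .Authenticate(credentials)
            .WithDefaultSubscription();

        var vm = azure.VirtualMachines.GetByResourceGroup("my-rg", "worker-vm");

        await vm.StartAsync();        // boot the (deallocated) VM

        // ... trigger `docker-compose up` on the VM (e.g. via SSH or a run command)
        //     and wait for the worker to finish ...

        await vm.DeallocateAsync();   // deallocate so compute billing stops
    }
}
```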
If the only thing you need are workers, there are a few more options you have. Which service suits best depends on the requirements you have. Based on what's in your question, I would think one of the following two might fit best:
Azure Container Instances
Azure Container Instances offers the fastest and simplest way to run a container in Azure, without having to manage any virtual machines and without having to adopt a higher-level service.
Azure Container Instances is a great solution for any scenario that can operate in isolated containers, including simple applications, task automation, and build jobs.
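If you go this route, a container group for the worker can be created (and deleted once the job is done) from code. A rough sketch using the fluent management SDK, with placeholder names, image and sizing:

```csharp
using Microsoft.Azure.Management.Fluent;
using Microsoft.Azure.Management.ResourceManager.Fluent;
using Microsoft.Azure.Management.ResourceManager.Fluent.Core;

class AciWorkerLauncher
{
    static void Main()
    {
        // Service principal credentials from an auth file (placeholder path).
        var credentials = SdkContext.AzureCredentialsFactory
            .FromFile("azureauth.properties");

        var azure = Azure.Configure()
            .Authenticate(credentials)
            .WithDefaultSubscription();

        // Placeholder group name, region, resource group and public image;
        // CPU/memory sized for the job at hand.
        var containerGroup = azure.ContainerGroups.Define("on-demand-worker")
            .WithRegion(Region.EuropeWest)
            .WithExistingResourceGroup("my-rg")
            .WithLinux()
            .WithPublicImageRegistryOnly()
            .WithoutVolume()
            .DefineContainerInstance("worker")
                .WithImage("mydockerhubuser/worker:latest")
                .WithExternalTcpPort(80)   // port the container exposes (adjust to your worker)
                .WithCpuCoreCount(2)
                .WithMemorySizeInGB(4)
                .Attach()
            .Create();

        // ... wait for the worker container to finish ...

        // Deleting the group stops the per-second billing.
        azure.ContainerGroups.DeleteById(containerGroup.Id);
    }
}
```

You are billed per second only while the container group exists, which is what makes ACI attractive for this kind of burst workload.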
Azure Container Apps (preview)
Azure Container Apps enables you to run microservices and containerized applications on a serverless platform. Common uses of Azure Container Apps include:
Deploying API endpoints
Hosting background processing applications
Handling event-driven processing
Running microservices
According to Azure's Container services page, here are your options:
If you want to deploy and scale containers on managed Kubernetes, use Azure Kubernetes Service (AKS).
If you want to deploy and scale containers on managed Red Hat OpenShift, use Azure Red Hat OpenShift.
If you want to build and deploy modern apps and microservices using serverless containers, use Azure Container Apps.
If you want to execute event-driven, serverless code with an end-to-end development experience, use Azure Functions.
If you want to run containerized web apps on Windows and Linux, use Web App for Containers.
If you want to launch containers with hypervisor isolation, use Azure Container Instances.
If you want to deploy and operate always-on, scalable, distributed apps, use Azure Service Fabric.
If you want to build, store, secure, and replicate container images and artifacts, use Azure Container Registry.
EDIT:
Based on the comment
Let's say the only requirement is that I am able to use the resources on-demand, so I only end up spending the amount of money that would take for a certain job to finish execution. What would you use?
the answer would most probably be Container Apps, if the code you have is not easily migrated to an Azure Function. The most important reason: they are serverless, which means they scale to zero and you only pay for actual consumption. On top of that, you have to write little to no orchestration logic, since Container Apps can scale based on event sources.
Enables event-driven application architectures by supporting scale based on traffic and pulling from event sources like queues, including scale to zero.
Another great resource is Comparing Container Apps with other Azure container options.
I have an ASP.NET Core 3.1 based web API ready to deploy to Azure for production use. For test/development, I have been deploying it to a traditional App Service on Azure, which I believe is a shared Windows VM under the hood. I have been on the F1 tier and it suits my needs for test and dev.
But for production, even the cheapest plan costs me $93.44 per month which I would like to avoid if I can.
In order to lower the cost, I have decided to containerize my app and deploy it using "Web App for Containers" or "Azure Container Instances". My question is, based on your experience, which method will give me reasonable production-scale performance while minimizing my monthly cost? Or would containerizing my app save me any money at all?
Please note that I have evaluated Azure Functions and decided it is not what I would like to use.
For your requirements, first of all, you should know that the main benefit of Azure Container Instances is how quickly and simply containers start and run. See this:
Azure Container Instances is a great solution for any scenario that can operate in isolated containers, including simple applications, task automation, and build jobs.
It's good for simple applications, but not for scenarios where you need full container orchestration, including service discovery across multiple containers, automatic scaling, and coordinated application upgrades. I also don't think it's a stable choice for production use; it's more appropriate for testing.
Azure App Service, on the other hand, is billed according to its App Service plan, which is charged on a per-second basis, so you can plan usage over time as you need, and App Service has more features than Container Instances. If you are not satisfied with App Service, you could also take a look at Azure Kubernetes Service, which gives you more control and more features than Container Instances.
As of the beginning of 2022, it looks like Container Instances and Web App for Containers both come out at roughly ~32 EUR per month, which is a bit better than an App Service plan at ~50 EUR.
Azure Service Fabric can be run on Windows Server. Can Service Fabric Mesh be hosted that way as well?
The underlying platform is the same Service Fabric binaries; the only difference is that with Mesh you don't manage nodes, and all definitions are based on containers and hardware resources (network, CPU, storage). You will be able to simulate a single-node Mesh cluster, just as you do with the current SF, and deploy your Mesh applications there.
If your plan is to have a production environment on-premises, I haven't gone into much detail about it; now that it is becoming an open-source solution, I assume it will be possible, but I don't think it is a top priority for them.
For now, there is not much documentation about it, so the best you can find is at these links:
https://learn.microsoft.com/en-gb/azure/service-fabric-mesh/
https://azure.microsoft.com/en-us/blog/azure-service-fabric-mesh-is-now-in-public-preview/
How to setup local development cluster:
https://learn.microsoft.com/en-gb/azure/service-fabric-mesh/service-fabric-mesh-howto-setup-developer-environment-sdk
I have been busy breaking up a monolithic service layer into about 30 small 'chunks' that can be independently deployed (C#, web API).
At the same time, we are moving to Azure.
How should these microservices be deployed?
We need 4 environments (dev/int, QA, UAT and Prod), so we were going to use 4 slots per PaaS service, and a new PaaS service for every microservice.
But this would get expensive and hard to manage.
Are there better approaches? (I know little to nothing about Azure so any help is appreciated).
Thanks
Azure Service Fabric is built for Microservices, and would likely be the best option to go with. Especially for forward thinking when running on the Azure platform. However, depending on your time line the fact that Service Fabric is still in Preview may be an issue. Azure features in Preview don't have the full SLA guarantee that they will when made Generally Available (GA).
The simplest hosting solution for microservices in Azure App Service would be to deploy the different services as Web Apps, possibly using Web Jobs for any background processing. Web Apps and Web Jobs work extremely well for building microservices, and I have used this approach on projects in the past.
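To illustrate the Web Jobs part, a background worker can be a small console project built on the WebJobs SDK. A rough sketch using the 3.x HostBuilder style; the queue name is a placeholder and the Microsoft.Azure.WebJobs.Extensions.Storage, Microsoft.Extensions.Hosting and console logging packages are assumed to be referenced.

```csharp
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Logging;

public static class Functions
{
    // Invoked by the WebJobs runtime whenever a message appears on the "orders" queue
    // (placeholder name) in the storage account configured as AzureWebJobsStorage.
    public static void ProcessOrder([QueueTrigger("orders")] string message, ILogger logger)
    {
        logger.LogInformation("Processing order message: {Message}", message);
    }
}

public class Program
{
    public static async Task Main()
    {
        var builder = new HostBuilder()
            .ConfigureWebJobs(b =>
            {
                b.AddAzureStorageCoreServices(); // core storage services (e.g. host locks)
                b.AddAzureStorage();             // queue/blob/table triggers and bindings
            })
            .ConfigureLogging(b => b.AddConsole());

        using var host = builder.Build();
        await host.RunAsync();
    }
}
```

Published as a continuous Web Job alongside the Web App, this gives you background processing without standing up any extra infrastructure.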
Regarding your comment about "4 slots": if you are referring to Web App Deployment Slots, then you will want to reconsider having 4 deployment slots of the same Web App host your different environments. Especially in production, there should be one Deployment Slot for the live production instance and one slot for a staging area used to test deployments before swapping them. For Dev/Int, QA and UAT, you'll want one or more separate Web Apps with whatever Deployment Slots fit your needs. The last thing you want to do is mix up your Dev/Int, QA, UAT and Production environments. It's also very important to understand that all the Deployment Slots of a single Web App run on the exact same Virtual Machine, which means that if you host all 4 environments as Deployment Slots, your Dev and QA environments could affect the performance of Production, which would be horrible.
You should consider using Azure Web Apps to host your chunks because it doesn't require any customization of the APIs or websites you code (unlike Cloud Services, which have their own packaging and deployment format). The same WebDeploy mechanism will work on any IIS server (on your own server, AWS or Azure).
Take a look at Azure Resource Manager (ARM) to define the underlying resources such as the hosting App Service plan (equivalent to a web server), web apps and databases. You will in all likelihood have the same set of resources in each environment with different configuration (such as different API URLs) or minor tweaks (such as a premium SQL plan or larger/more instances of the web applications). The ARM template can thus be shared across the 4 environments, with each environment having its own ARM parameter file.
I am writing an application that will be deployed both to the cloud and to on-premise data centres (for those clients who, essentially, don't yet trust the cloud with their data).
If I choose to go with MS Azure I can use the new cloud project types with their Web and Worker roles. But how can I get the Worker Roles running in the on-premise variant?
Do I have to write my own host (say as a windows service)? This is not ideal as it requires additional code and deployment.
Is there an Azure compatible approach, say in the Windows Azure Pack or the App Fabric stuff (is App Fabric still current?) that doesn't require the full setup of the private cloud ?
This doesn't exist in Azure Pack.
There is no need to try and have a Worker Role on premise. All you need to do is to have a Virtual Machine that you install a Windows Service on.
It's easy to create a Windows Service using Topshelf.
Deployment of a Windows Service with Topshelf is actually much easier than deployments for Worker Roles because you just run the .exe you create with the install and then with the start arguments.
Because of this you actually need less code than for a Worker Role since you don't need a second wrapper project.
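A minimal sketch of what that looks like with Topshelf (the service name and worker class are placeholders):

```csharp
using System;
using System.Timers;
using Topshelf;

public class WorkerService
{
    private readonly Timer _timer = new Timer(5000) { AutoReset = true };

    public void Start()
    {
        // Kick off whatever background processing the service performs (placeholder work).
        _timer.Elapsed += (s, e) => Console.WriteLine("Doing work at {0}", DateTime.Now);
        _timer.Start();
    }

    public void Stop() => _timer.Stop();
}

public class Program
{
    public static void Main()
    {
        HostFactory.Run(x =>
        {
            x.Service<WorkerService>(s =>
            {
                s.ConstructUsing(name => new WorkerService());
                s.WhenStarted(w => w.Start());
                s.WhenStopped(w => w.Stop());
            });

            x.RunAsLocalSystem();
            x.SetServiceName("MyWorker");
            x.SetDisplayName("My Worker");
            x.SetDescription("Background worker hosted as a Windows Service.");
        });
    }
}
```

After building, running the resulting .exe with the `install` argument and then `start` registers and starts the Windows Service, which is exactly the deployment flow described above.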
While I haven't used Windows Azure Pack before, it does seem capable of providing this functionality in-house; however, the requirements and setup procedures are intense and it is certainly geared towards the enterprise.
A better option is for you to create a console app that triggers the OnStart() and Run() functions for your WorkerRole based on your OS Task Scheduler.
Not too much work in my opinion, and you get to keep your WorkerRoles as-is; you just add the console app (sketched below) for any on-premise solutions.
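A sketch of such a console host, assuming your existing RoleEntryPoint-derived class is called WorkerRole and the console project references the same assemblies as the cloud project:

```csharp
using System;

public class Program
{
    public static void Main()
    {
        // WorkerRole is the existing RoleEntryPoint-derived class from the cloud project.
        var role = new WorkerRole();

        // OnStart() returns false if initialization failed.
        if (!role.OnStart())
        {
            Console.Error.WriteLine("Worker role failed to start.");
            return;
        }

        try
        {
            // Run() blocks for the lifetime of the worker (typically an infinite loop).
            role.Run();
        }
        finally
        {
            role.OnStop();
        }
    }
}
```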