Alternative solution for Azure Service Fabric distributed cache

I have an application running in Service Fabric with multiple nodes. All the running nodes share some cache data using distributed caching available in Service Fabric.
Now I am looking to move away from Service Fabric due to cost.
What would be a good alternative that also lets me maintain a cache shared between multiple instances (like the distributed cache in Service Fabric)?
It needs to run in an Azure environment.

If you're looking for an alternative managed by Microsoft, the best choice would be Azure Cache for Redis.
More info:
https://learn.microsoft.com/en-us/azure/azure-cache-for-redis/
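To illustrate how services would typically use Azure Cache for Redis as the shared cache, here is a minimal Python sketch of the cache-aside pattern. The `FakeRedis` class is an in-memory stand-in used only to keep the example self-contained; in production you would use a real Redis client (e.g. redis-py) pointed at your cache endpoint, and `loader` stands in for whatever database read your service performs.

```python
import time

class FakeRedis:
    """In-memory stand-in for a Redis client, for illustration only."""
    def __init__(self):
        self._store = {}  # key -> (value, expires_at)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires_at = entry
        if time.monotonic() >= expires_at:
            del self._store[key]  # entry expired; evict it
            return None
        return value

    def setex(self, key, ttl_seconds, value):
        """Set a key with a time-to-live, mirroring Redis's SETEX."""
        self._store[key] = (value, time.monotonic() + ttl_seconds)

def get_user(cache, user_id, loader, ttl_seconds=60):
    """Cache-aside: try the cache first, fall back to the loader, then populate."""
    key = f"user:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        return cached
    value = loader(user_id)          # cache miss: load from the backing store
    cache.setex(key, ttl_seconds, value)
    return value
```

Because every node talks to the same Redis endpoint, all instances see the same cached values, which is the property the in-cluster Service Fabric cache provided.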

Related

What is an Azure role?

I'm reading this article on distributed caching in Azure. Being new to Azure I'm trying to understand what they mean when they use the term "role" in the following context:
In-Role Cache: You can deploy an in-role cache on a co-located or
dedicated role in Azure. Co-located means your application is also
running on that VM and dedicated means it’s running only the cache.
Although a good distributed cache provides elasticity and high
availability, there’s overhead associated with adding or removing
cache servers from the cache cluster. Your preference should be to
have a stable cache cluster. You should add or remove cache servers
only when you want to scale or reduce your cache capacity or when you
have a cache server down.
The in-role cache is more volatile than other deployment options
because Azure can easily start and stop roles. In a co-located role,
the cache is also sharing CPU and memory resources with your
applications. For one or two instances, it’s OK to use this deployment
option. It’s not suitable for larger deployments, though, because of
the negative performance impact.
You can also consider using a dedicated in-role cache. Bear in mind
this cache is deployed as part of your cloud service and is only
visible within that service. You can’t share this cache across
multiple apps. Also, the cache runs only as long as your service is
running. So, if you need to have the cache running even when you stop
your application, don’t use this option.
Microsoft Azure Cache and NCache for Azure both offer the in-role
deployment option. You can make Memcached run this configuration with
some tweaking, but you lose data if a role is recycled because
Memcached doesn’t replicate data.
They talk about In-Role cache, cache service, cache VMs and multi-region cache VMs.
I understand cache services to be "server-less", meaning you don't manage the server or cluster (Azure
does all of that), as opposed to cache VMs, where you handle deployment of the server and the cache solution on that server.
How does In-Role cache differ, and what is a "role"? I usually think of role as the definition of how a user participates in a given system, and it establishes the capabilities or permissions that members of that role would need within the system to fulfill their duties. This seems different than that.
It's legacy. In the past, there were Azure In-Role Cache and the Azure Managed Cache Service. The recommendation is to use Azure Redis Cache now:
https://azure.microsoft.com/en-us/blog/azure-managed-cache-and-in-role-cache-services-to-be-retired-on-11-30-2016/

Is Kubernetes + Docker + AWS = Azure + Service Fabric?

I see advantages of Kubernetes, which include rolling deployments, automatic health-check monitoring, and spinning up a new server when an existing one fails. I also understand that Kubernetes is not just for Docker.
So, that brings a couple of questions!
When Azure and Service Fabric can provide all that I said (and beyond), why would I need Kubernetes?
Would it make sense for one to use Kubernetes along with Service Fabric for large scale deployments on Azure?
Let's look first at the similarities between Kubernetes and Service Fabric.
They are both cloud-agnostic clustering, orchestration, and scheduling software.
They can both be deployed manually, by you, to any set of VMs, anywhere.
There are "managed" offerings for both, meaning a cloud provider like Azure or Google Cloud will host a cluster for you, but generally you still own the VMs.
They both deploy and manage containers.
They both have rich management operations, such as rolling upgrades, health checks, and self-healing capabilities.
That's a fairly high-level view but should give you an idea of what and where you can run with each.
Now let's look where they're different. There are a ton of small differences, but I want to focus on two of the really big conceptual differences:
Application model:
Service Fabric allows you to orchestrate any arbitrary container or EXE (whether that's a small node.js app or a giant legacy application), and in that sense it is similar to Kubernetes. But overall it is more focused on application development specifically, with programming models that are integrated with the platform. In this respect, it is more closely comparable to Cloud Foundry than Kubernetes.
Kubernetes is focused more on orchestrating infrastructure for an application. It doesn't really focus on how you write your application. That's up to you to figure out; Kubernetes just wants a container to run, doesn't matter what's in it.
State management
Kubernetes allows you to deploy stateful software to it, by providing persistent disk storage volumes to containers and assigning unique identifiers to pods. This lets you deploy things like ZooKeeper or MySQL.
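As a concrete illustration of the Kubernetes side of this, stateful workloads are typically deployed as a StatefulSet, which gives each pod a stable identity (`zk-0`, `zk-1`, ...) and its own persistent volume claim. A minimal sketch (names like `zk` and the storage size are placeholders, not from the original answer):

```yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zk                    # hypothetical name for a ZooKeeper ensemble
spec:
  serviceName: zk-headless    # headless service providing stable network identities
  replicas: 3
  selector:
    matchLabels:
      app: zk
  template:
    metadata:
      labels:
        app: zk
    spec:
      containers:
        - name: zookeeper
          image: zookeeper:3.8
          volumeMounts:
            - name: data
              mountPath: /data
  volumeClaimTemplates:       # each pod gets its own persistent disk
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
```

This is exactly the "deploy stateful things" capability described above: the platform provides stable identity and storage, but replication and consensus remain the application's job.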
Service Fabric is stateful software. Service Fabric is designed as a stateful, data-aware platform. It provides HA state and scale-out primitives. So while Kubernetes allows you to deploy stateful things, Service Fabric allows you to build stateful things. This is one of the key differences that's often overlooked. For example:
On Kubernetes, you can deploy ZooKeeper.
On Service Fabric, you can actually build ZooKeeper yourself using Service Fabric's replication and leader election primitives.
Kubernetes uses etcd for distributed, reliable storage about the state of the cluster.
Service Fabric doesn't need etcd, because Service Fabric itself is a distributed, reliable storage platform. The system services in Service Fabric make use of this to reliably store the state of the cluster. This makes Service Fabric entirely self-contained.
The fact that Service Fabric is a stateful platform is key to understanding it and how it differs from other major orchestrators. Everything it does - scheduling, health checking, rolling upgrades, application versioning, failover, self-healing, etc - are all designed around the fact that it is managing replicated and distributed data that needs to be consistent and highly available at all times.
Please find below a good comparison article about the differences between ACS (Azure Container Service) and Azure Service Fabric:
https://blogs.msdn.microsoft.com/maheshkshirsagar/2016/11/21/choosing-between-azure-container-service-azure-service-fabric-and-azure-functions/
Could you please clarify what you are referring to when you mention "AWS"?
From a developer's point of view, the solution can be stateful in both cases, but there is a major difference from an infrastructure point of view:
Docker + Kubernetes is an IaaS-oriented solution.
Azure Service Fabric (if you are using the Azure service) is a PaaS solution.
IaaS is, in general, more costly and has a more significant maintenance cost.
From a support point of view:
Azure Service Fabric is supported by Microsoft.
Docker and Kubernetes are more open-source oriented.
Hope this helps.
Best regards

Azure Service Fabric - performance telemetry of an individual service

Is there a way to get the performance telemetry of an individual service running on a node that has other services running on it, within a Service Fabric cluster?
We are using .NET Core, where there are no performance counters either, and we aren't using containers at the moment. We want to make sure one microservice doesn't hog all the system resources and choke the other microservices running on the same node. We are using guest executables.
We use Application Insights. It has support for microservices in Service Fabric, i.e. a correlation ID with which you can trace a request through multiple services within Service Fabric.
Here are setup instructions and an example:
https://github.com/Microsoft/ApplicationInsights-ServiceFabric
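Beyond correlation tracing, each guest executable can also self-report its own process-level resource usage as custom metrics. The sketch below shows one stdlib-only, Unix-only way a process could sample its own CPU time and peak memory (the `report` callback is a hypothetical hook where you would forward the numbers to Application Insights or another sink; it is not part of any SDK):

```python
import os
import resource

def sample_self_usage():
    """Sample this process's own CPU time and peak memory (Unix only)."""
    usage = resource.getrusage(resource.RUSAGE_SELF)
    return {
        "pid": os.getpid(),
        "cpu_seconds": usage.ru_utime + usage.ru_stime,  # user + system CPU time
        "peak_rss_kb": usage.ru_maxrss,  # peak resident set size (KiB on Linux)
    }

def report_usage(report):
    """Collect one sample and hand it to a reporting callback (e.g. a telemetry sender)."""
    sample = sample_self_usage()
    report(sample)
    return sample
```

Running this on a timer in each service gives per-service figures even when several services share a node, which is the scenario the question describes.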

Service Fabric cluster takes forever to deploy

Is there a way to shorten the process? Should I have two Service Fabric clusters if we want to implement a continuous delivery process?
If the Service Fabric cluster deployment (i.e. creation of a Service Fabric cluster) is stuck - open an issue in the Azure Portal with support to help get it resolved.
For application deployment you don't need a separate cluster to do CD. Depending on your CD strategy (e.g. rolling upgrades, rip and replace, blue/green), there are various ways of doing that in Service Fabric. Take a look here for some of the conceptual documentation on this topic: https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-application-upgrade

Which is faster, AppFabric Cache or Memcached?

We use MS AppFabric Cache to store the central session state, but we want to know how fast this store is. Has anyone compared it in practice against a virtual machine running Memcached on the same machine?
Thanks a lot!
I'm not sure I have seen any benchmarks on this, so it's a hard one to answer, but the new Windows Azure Caching service released on June 7th addresses a lot of the performance issues of the distributed multi-tenanted Azure Caching service.
Below are extracts from Haishi's blog on Azure Caching, note Windows Azure Caching now supports Memcached interoperability too...
"The cluster utilizes free memory spaces on your Cloud Service host machines, and the cluster service process can either share the host machines of your web/worker roles, or run on dedicated virtual machines when you use dedicated caching worker roles"
http://haishibai.blogspot.com.au/2012/06/windows-azure-caching-service-memcached.html
Fast performance. The cluster either collocates with your roles, or runs in close proximity based on affinity groups. This helps to reduce latency to minimum. In addition, you can also enable local cache on your web/worker roles so that they get fastest in-memory accesses to cached items as well as notification support.
