I have a cluster of microservices to be hosted on Azure Kubernetes Service.
These microservices are .NET Core based and will:
talk to on-premises services via gRPC
stream data to client apps using SignalR Core (WebSockets)
The problem I can't find a good solution for is how to persist gRPC connections as pods are created and destroyed.
This seems like it should be a trivial problem when hosting microservices on a hybrid network. I would love to hear how others have addressed this issue.
Persistence of gRPC connections is difficult in this kind of environment because pods are not persistent at all. I would suggest two approaches to handle this scenario:
Use or build a proxy between AKS and the on-premises service. The proxy keeps persistent connections open to the on-premises service, while connections to the proxy can be added/removed as pods are created/destroyed. The proxy can also act as a connection pool and provide higher throughput for on-premises service invocation.
Don't worry about persisting connections to the on-premises services; treat them like RDBMS connections that can be created or destroyed on demand but carry some cost to establish. This approach works when pods are not created or destroyed too frequently (see the channel-reuse sketch below).
I would suggest the second approach when pods are not created/destroyed too often (a few every hour), as it has fewer moving parts. If pods are scaled up and down very frequently, the first approach should be used.
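For the second approach, here is a minimal sketch of what channel reuse could look like in a .NET Core service. It assumes the Grpc.Net.Client package; the target address and the OnPremService client mentioned in the comments are placeholders for whatever your proto file generates. gRPC channels multiplex calls over HTTP/2 and re-establish dropped connections on their own, so sharing one channel per pod is usually enough:

using System;
using Grpc.Net.Client;

// Holds one shared channel per pod (register as a singleton in DI).
// The target address is a placeholder for your on-premises gRPC endpoint.
public sealed class OnPremChannelProvider : IDisposable
{
    // A GrpcChannel owns the underlying HTTP/2 connection(s), multiplexes calls
    // over them, and transparently reconnects if they drop.
    public GrpcChannel Channel { get; } =
        GrpcChannel.ForAddress("https://onprem-gateway.example.com");

    public void Dispose() => Channel.Dispose();
}

// Usage from a request handler: generated clients are cheap to construct,
// the expensive part (the connection) lives in the shared channel, e.g.
//   var client = new OnPremService.OnPremServiceClient(provider.Channel);
//   var reply = await client.DoWorkAsync(request);

With this pattern, each new pod pays the connection cost once when the singleton is created, and every call afterwards reuses it.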
I would like to know how I can protect my Node.js microservices so that only the API gateway can access them. Currently the microservices are each exposed on a unique port on my machine and can be accessed directly without passing through the gateway. That defeats the purpose of the gateway, which is to serve as the only entry point in the system for secure and authorized information exchange.
The microservices and the gateway are currently built with Node.js and Express.
The plan is to eventually deploy it on the cloud (DigitalOcean). I'd appreciate any response. Thanks.
Kubernetes can solve this problem.
Kubernetes manages containers, where each container can be a microservice.
When connecting your microservices to your gateway server, you can choose to allow external connections only to the gateway. You would have a load balancer / nginx in your Kubernetes cluster that redirects requests to your gateway server, while the microservices themselves remain reachable only inside the cluster.
Kubernetes has many other features such as:
Service discovery: each of your microservices' IPs could change on restart/deployment unless you have a static IP for every service. Service discovery solves this problem.
High availability, horizontal scaling & zero downtime: you can configure several replicas for each of your services, so when one instance goes down there are still other replicas alive to handle the remaining requests. This also helps with CI/CD. With something like GitHub Actions you can build a smooth CI/CD pipeline: when you deploy a new Docker image (update a microservice), Kubernetes launches the new container first and then kills the old one, so you have zero downtime.
If you are working with microservices, you should definitely take a deep dive into Kubernetes.
I'm using the Hybrid Connection Manager and also the On-premises Data Gateway for several projects hosted in the Azure cloud.
There are more and more use cases for these two components, and I need to set up clean monitoring to detect connection trouble (for example when there is a network issue or a reboot of the servers hosting the gateways).
For the HCM, there are Relay metrics I can rely on, but some of those counters do not seem reliable. I had connection issues in the past few days, yet when I check the ListenerConnections-ClientError and ListenerConnections-ServerError counters they are always 0... that seems very strange.
Regarding the On-premises Data Gateway, since it also relies on Service Bus Relay, should I use the same metrics?
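For reference, here is a minimal sketch of pulling those Relay counters programmatically, assuming the Azure.Monitor.Query and Azure.Identity packages and a placeholder resource ID for the Relay namespace; whether these particular counters are populated correctly is exactly the open question above:

using System;
using System.Threading.Tasks;
using Azure.Identity;
using Azure.Monitor.Query;

class RelayMetricsCheck
{
    static async Task Main()
    {
        // Placeholder resource ID of the Relay namespace backing the Hybrid Connection.
        const string relayResourceId =
            "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Relay/namespaces/<namespace>";

        var client = new MetricsQueryClient(new DefaultAzureCredential());

        // Query the listener error counters mentioned above.
        var response = await client.QueryResourceAsync(
            relayResourceId,
            new[] { "ListenerConnections-ClientError", "ListenerConnections-ServerError" });

        foreach (var metric in response.Value.Metrics)
            foreach (var series in metric.TimeSeries)
                foreach (var point in series.Values)
                    Console.WriteLine($"{metric.Name} {point.TimeStamp:u}: {point.Total}");
    }
}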
I am creating a web service with .NET Core 3.0 (C#) that uses MySQL and Redis, but I am not so familiar with Azure, so I need advice about configuring everything.
I had MySQL hosted on AWS, but I am transferring it to Azure because I think performance (speed) will be better on Azure, since the database and the app will be in the same data center. Right?
But on my MySQL page the host is something like '*.mysql.database.azure.com'. Does that mean every connection will go out of Azure and then come back? Don't I get some local IP for the connection? Same question for Redis.
Do I need to configure some local network on Azure, and will that impact the app's speed? And is MySQL a good choice on Azure, or should I try another database?
I am just reading about Azure Virtual Networks, but as I understand it, a VNet's sole purpose is to isolate elements from the outside network?
You will get better performance if your MySQL instance and your App Service are in the same region (basically the same data center).
The connection string still uses the mysql.database.azure.com host name, but remember the connection is a TCP/IP connection, so the DNS lookup will recognise that this host name is in the same region (same data center), and the TCP/IP connection will then go to an internal IP.
You can use tcpping in your App Service's Kudu console to try this and see the result.
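If you would rather check from code than from the Kudu console, here is a rough sketch of timing the connection from the app itself, assuming the MySqlConnector package and placeholder credentials:

using System;
using System.Diagnostics;
using System.Threading.Tasks;
using MySqlConnector;

class ConnectionLatencyCheck
{
    static async Task Main()
    {
        // Placeholder connection string; the host is the same *.mysql.database.azure.com name.
        const string connectionString =
            "Server=myserver.mysql.database.azure.com;Port=3306;User ID=myuser;Password=mypassword;Database=mydb;SslMode=Required";

        var sw = Stopwatch.StartNew();
        using var connection = new MySqlConnection(connectionString);
        await connection.OpenAsync();                       // time to establish the TCP + TLS connection
        Console.WriteLine($"Open: {sw.ElapsedMilliseconds} ms");

        sw.Restart();
        using var cmd = new MySqlCommand("SELECT 1", connection);
        await cmd.ExecuteScalarAsync();                     // round-trip time for a trivial query
        Console.WriteLine($"SELECT 1: {sw.ElapsedMilliseconds} ms");
    }
}

If the app and the database are in the same region, both numbers should be in the low single-digit milliseconds.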
The basic rule is to group your app and database in the same region for better performance and lower cost (Microsoft doesn't charge for traffic within the same region).
Azure Virtual Network is for a different purpose. For example, if you have some on-premises database servers and you want to call those servers from Azure, then a VNet could be helpful. But for the scenario you described, it is not really needed.
The company I work for has Microsoft Azure support included; if you or your company have a support contract with them, you can raise questions directly and get really quick responses.
I'm looking into migrating an application to Service Fabric running on Azure. It's a realtime chat-style application using SignalR. I'd like to have an instance of a service running, self-hosting a SignalR hub (via OWIN) for each "affinity group" in which users are communicating. This is so I can avoid having to scale out SignalR with a backplane. I'd like to be able to spin these services up and down as groups of users enter and leave my application. I would expect I could host tens of these services per VM with a typical load of hundreds of users per group.
My idea is that I'd have a service locator that clients connect to initially to discover which service (port) is hosting their group. I would also have a service that spun up an instance of the chat service when the first request for that group came in.
How would I architect this in Service Fabric on Azure so that (a) each of the services/actors is accessible from the internet with a SignalR client, and (b) I'm only running as many services as necessary to serve the m active groups out of n total groups? The demand for this app is very transient and spiky, so I'm hoping to take advantage of the fact that services are simply processes and can be provisioned in a matter of seconds, vs. my current scenario where I have to spin up entire cloud services and wait tens of minutes to handle spikes (at which point it's too late).
You would do a few things:
Have a "service manager service" that intercepted initial join requests and created new Service Fabric services on the fly if they didn't already exist OR if they did already exist resolve the service's current location and then return that address to the client
Alternatively they could just pass back the internal service name (if you're ok exposing that information) and the client could do the resolution and then connection. To some degree this will depend on how much info you want to expose to the client, whether you can or want to modify it to "know about" Service Fabric, etc.
The client would then connect to the actual backing service directly.
You would have to come up with some mechanism for the chat services to know when nobody is left and either delete themselves or go back through the manager.
You would probably be best off modeling the chat service as a Reliable Service rather than an actor, as the Reliable Services stack allows more flexibility around communication protocols/stacks.
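As a rough illustration of the manager's create-or-resolve step mentioned above, here is a sketch assuming the System.Fabric and Microsoft.ServiceFabric.Services packages; ChatApp, ChatServiceType, and the per-group service naming scheme are placeholders:

using System;
using System.Fabric;
using System.Fabric.Description;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.ServiceFabric.Services.Client;

public class ChatServiceManager
{
    private readonly FabricClient _fabricClient = new FabricClient();

    // Returns the address of the chat service for a group, creating it on the fly if needed.
    public async Task<string> GetOrCreateGroupServiceAsync(string groupId)
    {
        var serviceName = new Uri($"fabric:/ChatApp/ChatGroup_{groupId}");

        try
        {
            // Create a new stateless service instance dedicated to this group.
            await _fabricClient.ServiceManager.CreateServiceAsync(new StatelessServiceDescription
            {
                ApplicationName = new Uri("fabric:/ChatApp"),
                ServiceName = serviceName,
                ServiceTypeName = "ChatServiceType",
                InstanceCount = 1,
                PartitionSchemeDescription = new SingletonPartitionSchemeDescription()
            });
        }
        catch (FabricException ex) when (ex.ErrorCode == FabricErrorCode.ServiceAlreadyExists)
        {
            // The group's service already exists; just resolve it below.
        }

        // Resolve the current endpoint for the service and hand it back to the client.
        var resolver = ServicePartitionResolver.GetDefault();
        var partition = await resolver.ResolveAsync(serviceName, new ServicePartitionKey(), CancellationToken.None);
        return partition.GetEndpoint().Address;
    }
}

The Address returned here is whatever endpoint the chat service publishes (for a self-hosted SignalR hub, its externally reachable URL), so how much of this flow the client sees depends on the exposure trade-off described above.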
I recently came across Azure Service Fabric, and it seems a good way to develop scalable applications as a bunch of microservices. Everywhere it says that we should have stateless front-end web services and stateful, partitioned internal services. The internal services scale by partitioning the data.
But what happens to the front-end services under load? The chance seems low, since they do nothing but relay to the internal stateful services. Should we still use a load balancer in front of the front-end services? If so, can we host that too via Service Fabric's stateless model, using OWIN or any other web host?
The question was already asked on SO, but as a comment. It didn't get a reply, as the original question was different:
Azure Service Fabric usage
Yes, you'll definitely want to distribute load across your stateless services as well. The key difference is that since they are stateless, they can handle requests in a round-robin fashion.
Whereas stateful services have partitions, which map to individual chunks of the service's state, stateless services simply have instances, which are identical clones of each other, just on different nodes. You can set the number of instances in the default service definition in the application manifest. For instance, this declaration will ensure that there are always 5 instances of your stateless service running in the cluster:
<Service Name="Stateless1">
<StatelessService ServiceTypeName="Stateless1Type" InstanceCount="5">
<SingletonPartition />
</StatelessService>
</Service>
You can also set the InstanceCount to -1, in which case Service Fabric will create an instance of your stateless service on each node.
The Azure load-balancer will round-robin your incoming traffic across each of your instances. Unfortunately, there isn't a good way to simulate this in a one-box environment right now.