Cassandra across multiple Azure datacentres

I'm trying to figure out how to create a Cassandra cluster in Azure across more than one datacentre.
I am not that interested in Cassandra topology settings yet, but rather in how to set up Azure endpoints or inter-datacentre communication so that nodes can connect remotely. Do I need to set up an endpoint for node communication inside a single datacentre?
What about the security of Azure endpoints?
Thank you.

Related

Is it possible to host Cassandra nodes on-premise (vs cloud)?

I'm trying to build a decentralised social media platform using Cassandra. To do this, I would like the instances or nodes of the Cassandra database to be hosted on the client side rather than in the cloud. I would like to know if it would be possible for a user to somehow run an instance on their side holding part of the data. This would allow the information to be distributed across many computers globally.
You can deploy Cassandra nodes:
on-premise,
on a private cloud,
on a public cloud, or
in a hybrid environment of on-premise + cloud.
It is also possible to deploy Cassandra in any combination of the above. Cheers!

gRPC connection pooling on server side

I have a cluster of microservices to be hosted on Azure Kubernetes Service.
These microservices are .NET Core based and will:
talk to on-premises services via gRPC
stream data to client apps using SignalR Core (WebSockets)
The problem I can't find a good solution for is how to persist gRPC connections as pods are created and destroyed.
This seems like a very common problem when hosting microservices on a hybrid network. I would love to hear how others have addressed this issue.
Persistence of gRPC connections would be difficult in such an environment, as pods are not persistent at all. I would suggest two approaches to handle this scenario:
Use or build a proxy between AKS and the on-premises service which keeps a persistent connection open with the on-premises service, while connections to the proxy can be added or removed as pods are created and destroyed. This proxy can act as a connection pool and provide higher throughput for on-premises service invocations.
Don't worry about the persistence of connections to the on-premises services (treat them like RDBMS connections, which can be created or destroyed on demand but carry some cost to create anew). This approach works if pods are not created or destroyed too frequently.
I would suggest the second approach when pods are not created or destroyed too frequently (a few per hour), as it has fewer moving parts. But if pods are scaled too frequently, the first approach should be used.
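The services in the question are .NET Core, but purely to illustrate the second approach, here is a minimal grpc-java sketch of a channel that is created on demand, reused for all calls from a pod, and rebuilt if it has been shut down; the host and port are placeholders.

```java
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;

// Approach 2: treat the gRPC channel like a pooled RDBMS connection.
// Create it on demand, reuse it for every call from this pod, and rebuild
// it if it has been shut down. Host and port are placeholder assumptions.
public final class OnPremChannelHolder {
    private static volatile ManagedChannel channel;

    public static synchronized ManagedChannel get(String host, int port) {
        if (channel == null || channel.isShutdown() || channel.isTerminated()) {
            channel = ManagedChannelBuilder.forAddress(host, port)
                    .useTransportSecurity()   // TLS towards the on-premises service
                    .build();
        }
        return channel;
    }

    // Call on pod shutdown so the on-premises side sees a clean close.
    public static synchronized void close() {
        if (channel != null) {
            channel.shutdown();
        }
    }
}
```

A gRPC channel multiplexes many concurrent calls over HTTP/2, so one shared channel per pod is usually sufficient; the proxy from the first approach would simply sit between this channel and the on-premises endpoint.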

Is there a way to manage modules deployed on an IoT Edge device by using AKS (Azure Kubernetes Service)?

I have been looking at using Kubernetes for container orchestration. However, as far as I know, Kubernetes can run on-prem or be managed as a service through Azure Kubernetes Service. I understand that on-prem support for K8s is provided via IoT Edge, but I wonder how this would work if my workloads were on AKS.
Can you clarify your scenario more?
We have a public preview where one can register a K8s cluster as an edge device and deploy applications to that cluster through IoT Hub. The K8s cluster can be on-prem or AKS; the same instructions can be followed: https://learn.microsoft.com/en-us/azure/iot-edge/how-to-install-iot-edge-kubernetes.
Another way is to connect IoT Hub to a K8s cluster through the virtual kubelet provider: https://github.com/Azure/iot-edge-virtual-kubelet-provider.
This way your workload can be deployed to edge devices by IoT Hub using the K8s API or kubectl.
I would like to understand your needs and hear feedback when you try it.
Thanks
Cindy Xing #msft
I'm not sure I understood your question correctly, but it sounds like you would like to manage IoT Edge deployments through K8s. Is my understanding correct?
There is an experimental project that helps manage IoT Edge deployments via simple kubectl commands.

Understanding Azure Kubernetes Service (AKS)

I need some help to have a better understanding of Azure Kubernetes Service (AKS).
From what I understood (from official and unofficial documentation), AKS provides everything I need to work with a K8s cluster, that is to say all the nodes I need for my deployments. All these nodes are VMs in their (Microsoft) cloud and are created on each deployment. Is that correct?
Is it possible to add my personal nodes to the cluster?
Actually, I have some Raspberry Pis that I want to use as nodes in a K8s cluster, so that K8s manages the deployment of some Docker applications on them. I would like to know if that is possible with AKS.
Thanks
No. AKS is just a managed Kubernetes service; you cannot add your own nodes to it, since you don't control the masters. You can look at AKS Engine, which is an easy way to create a Kubernetes cluster that you manage yourself and can do anything with.

How to expose metrics to Prometheus from a Spring Boot application in a CF cluster

I am developing an application from scratch using Spring Boot (JHipster) and intend to deploy it in Cloud Foundry with multiple nodes. I am planning to set up a Prometheus server which can pull metrics from the cluster.
I am trying to understand how I can set up Prometheus to query the individual nodes of the cluster. As the application is deployed in Cloud Foundry, it may not be possible to obtain the IP addresses of individual nodes.
As I am a newbie with Prometheus, I want to make sure I am approaching the solution appropriately.
I would recommend using Micrometer.io.
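As an illustration (not from the original answer), here is a minimal sketch of what Micrometer provides, assuming the micrometer-registry-prometheus dependency is on the classpath; the counter name is made up. In a Spring Boot / JHipster application the registry is auto-configured and the same output is exposed over an actuator endpoint for Prometheus to scrape.

```java
import io.micrometer.core.instrument.Counter;
import io.micrometer.prometheus.PrometheusConfig;
import io.micrometer.prometheus.PrometheusMeterRegistry;

public class MicrometerSketch {
    public static void main(String[] args) {
        // Registry that renders metrics in the Prometheus text exposition format.
        PrometheusMeterRegistry registry = new PrometheusMeterRegistry(PrometheusConfig.DEFAULT);

        // Hypothetical counter, just to show the API.
        Counter requests = Counter.builder("app_requests_total")
                .description("Total handled requests")
                .register(registry);
        requests.increment();

        // scrape() returns the payload Prometheus would pull from the application.
        System.out.println(registry.scrape());
    }
}
```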
You can either use service discovery (e.g. Netflix Eureka) or use the Pushgateway in some form:
https://prometheus.io/docs/instrumenting/pushing/
https://prometheus.github.io/client_java/io/prometheus/client/exporter/PushGateway.html
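For the Pushgateway option, a minimal sketch using the Prometheus Java simpleclient (the PushGateway class linked above); the gateway address, metric, and job name are placeholders. Each Cloud Foundry instance pushes its metrics to the gateway, and Prometheus scrapes the gateway instead of having to reach the individual nodes.

```java
import java.util.Collections;

import io.prometheus.client.CollectorRegistry;
import io.prometheus.client.Gauge;
import io.prometheus.client.exporter.PushGateway;

public class PushgatewaySketch {
    public static void main(String[] args) throws Exception {
        CollectorRegistry registry = new CollectorRegistry();

        // Hypothetical metric, just to show the push flow.
        Gauge lastPush = Gauge.build()
                .name("cf_app_last_push_timestamp_seconds")
                .help("Time this instance last pushed its metrics")
                .register(registry);
        lastPush.setToCurrentTime();

        // Placeholder Pushgateway address; CF_INSTANCE_INDEX is a Cloud Foundry
        // environment variable used here to keep each instance's push separate.
        PushGateway gateway = new PushGateway("pushgateway.example.com:9091");
        String instance = System.getenv().getOrDefault("CF_INSTANCE_INDEX", "0");
        gateway.pushAdd(registry, "my_cf_app",
                Collections.singletonMap("instance", instance));
    }
}
```

Note that the Pushgateway is normally intended for short-lived batch jobs; here it serves as a workaround when Prometheus cannot reach the individual Cloud Foundry instances directly.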
