What approach should I use for external access to Cassandra running inside Kubernetes?

I have a StatefulSet Cassandra deployment that works great for services deployed to Kubernetes with namespace access, but I also have an ETL job that runs in EMR and needs to load data into that Cassandra cluster.
What would be the main approach/Kubernetes way of doing this?

I can think of two options.
The simple one is to create the Service with type: NodePort; with that you can connect to Cassandra from outside the cluster using any node's IP address plus the assigned port number.
The second option is to put the cluster behind a load balancer, i.e. a Service of type: LoadBalancer, and connect to Cassandra through its external address (a plain HTTP Ingress won't carry Cassandra's native CQL protocol, which is raw TCP).
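For the NodePort route, here is a rough sketch of what the external client side could look like. The node IPs, the 30042 nodePort, and the datacenter1 name are placeholders, and the Node.js driver is used purely for illustration; an EMR job would typically use the Spark or Java driver with the same connection parameters.

```typescript
// Minimal sketch: connect to Cassandra from outside the cluster through a NodePort.
// Assumes the Service maps Cassandra's CQL port 9042 to nodePort 30042 and that
// the listed worker-node addresses are reachable from the client (placeholders).
import { Client } from 'cassandra-driver';

async function main(): Promise<void> {
  const client = new Client({
    // Contact points are "<node IP>:<nodePort>", not pod or ClusterIP addresses.
    contactPoints: ['203.0.113.10:30042', '203.0.113.11:30042'],
    // Must match the data centre name configured in the Cassandra cluster.
    localDataCenter: 'datacenter1',
  });

  await client.connect();
  const rs = await client.execute('SELECT release_version FROM system.local');
  console.log('Connected, Cassandra version:', rs.rows[0].get('release_version'));
  await client.shutdown();
}

main().catch(console.error);
```

One thing to watch with either option: the driver also discovers peer addresses from system.peers, so depending on how the pods' broadcast addresses are configured you may additionally need client-side address translation or one externally reachable address per node.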

Related

Do I need Minikube if I'm using a K8s cluster in Azure Cloud?

I'm trying to deploy a simple API into AKS and want to show it off on the internet using an ingress controller. I want to know: if I'm using the cloud, do I need to install Minikube?
Minikube is designed for local Kubernetes development and testing. It supports one node by default. So it is not related to your AKS setup, i.e. you don't need Minikube for AKS.
To be able to demo your setup on the internet, you can do that with AKS, but be mindful of security and make sure that you are not exposing your entire cluster to the internet.

Understanding Azure Kubernetes Service (AKS)

I need some help to have a better understanding of Azure Kubernetes Service (AKS).
From what I understand (from official and unofficial documentation), AKS provides everything I need to work with a K8s cluster, that is to say all the nodes I need for my deployments. All these nodes are VMs in their (Microsoft) cloud and are created on each deployment. Is that correct?
Is it possible to add my personal nodes to the cluster?
Actually, I have some RPis that I want to use as nodes in a K8s cluster. I want to use K8s to manage the deployments of some Docker applications on my Raspberry Pis. I would like to know if it's possible to do that with AKS.
Thanks
No. AKS is just a managed Kubernetes service; you cannot add your own nodes to it, since you don't control the masters. You can look at AKS-engine, which is an easy way to create a Kubernetes cluster that you manage yourself and can do anything with.

How to expose metrics to Prometheus from springboot application CF cluster

I am developing an application from scratch using Spring Boot (JHipster) and intend to deploy it to Cloud Foundry with multiple nodes. I am planning to set up a Prometheus server that can pull metrics from the cluster.
I am trying to understand how I can set up Prometheus to query individual nodes of the cluster. As the application is deployed in Cloud Foundry, it may not be possible to obtain the IP address of individual nodes.
As I am a newbie with Prometheus, I want to make sure I am approaching this appropriately.
I would recommend using Micrometer.io.
You can either use service discovery (e.g. Netflix Eureka) or use the Pushgateway in some form.
https://prometheus.io/docs/instrumenting/pushing/
https://prometheus.github.io/client_java/io/prometheus/client/exporter/PushGateway.html
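The Pushgateway approach is language-agnostic: each application instance pushes its own metrics to a central gateway, and Prometheus scrapes the gateway, so it never needs the instances' IP addresses. A rough sketch of that flow is below, shown with the Node.js prom-client package purely for illustration; in a Spring Boot app the equivalent would go through Micrometer's Prometheus registry. The gateway URL, job name, and metric are placeholders.

```typescript
// Sketch: push per-instance metrics to a Pushgateway instead of being scraped directly.
import { Registry, Counter, Pushgateway } from 'prom-client';

const registry = new Registry();

const requestsTotal = new Counter({
  name: 'app_requests_total',
  help: 'Requests handled by this application instance',
  registers: [registry],
});

// Placeholder gateway address; Prometheus is configured to scrape the gateway itself.
const gateway = new Pushgateway('http://pushgateway.example.com:9091', {}, registry);

async function pushMetrics(): Promise<void> {
  requestsTotal.inc();
  // The "instance" grouping keeps each application instance's series separate
  // (Cloud Foundry exposes the instance index via CF_INSTANCE_INDEX).
  // prom-client 14+ returns a promise from pushAdd.
  await gateway.pushAdd({
    jobName: 'my-springboot-app',
    groupings: { instance: process.env.CF_INSTANCE_INDEX ?? '0' },
  });
}

pushMetrics().catch(console.error);
```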

Allow Node.js app to talk to a pod on a Kubernetes Cluster

I have a Node.js app in Azure and I want it to be able to connect/talk to a pod on a Kubernetes cluster (this pod is exposed through a load balancer with an external IP). On clicking a submit button in my Node.js app, I want to be able to send bash commands to the pod on the Kubernetes cluster.
Would you know how I could connect the app to the pod? I know there is a server.listen function in the index.js file; however, I am not too sure how to approach the situation.
Thanks for the help
I'm not sure, but you can connect to a pod in a Kubernetes cluster from the Node.js app much as you would connect to a database or any other service.
First of all, you should get the AKS credentials through the command az aks get-credentials so that you can talk to the Kubernetes cluster. Then you can use the command kubectl exec to execute a bash command in the pod.
Also, you can take a look at Executing commands in Pods using the K8s API; maybe it will be helpful.
There are also examples of managing pods in a Kubernetes cluster programmatically; see kubernetes-client.
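As a sketch of that programmatic route, here is roughly what running a command inside a pod could look like from Node.js using the @kubernetes/client-node package. The namespace, pod name, container name, and command are placeholders, and it assumes the app has a kubeconfig available (e.g. the one produced by az aks get-credentials) or in-cluster service-account credentials.

```typescript
// Sketch: run a command inside a pod via the Kubernetes exec API from Node.js.
import * as k8s from '@kubernetes/client-node';

const kc = new k8s.KubeConfig();
// Loads ~/.kube/config locally, or in-cluster credentials when the app
// itself runs inside Kubernetes.
kc.loadFromDefault();

const exec = new k8s.Exec(kc);

async function runInPod(): Promise<void> {
  await exec.exec(
    'default',                               // namespace (placeholder)
    'my-pod',                                // pod name (placeholder)
    'my-container',                          // container name (placeholder)
    ['sh', '-c', 'echo hello from the pod'], // command to run
    process.stdout,                          // where the pod's stdout goes
    process.stderr,                          // where the pod's stderr goes
    null,                                    // no stdin
    false,                                   // no TTY
  );
}

runInPod().catch(console.error);
```

The submit-button handler in the app could call a function like this, which is the programmatic equivalent of kubectl exec.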

Cassandra across multiple Azure datacentres

I am trying to figure out how to create a Cassandra cluster in Azure across more than one datacentre.
I am not that interested in the Cassandra topology settings yet, but more in how to set up Azure endpoints or inter-datacentre communication so that nodes can connect remotely. Do I need to set an endpoint for node communication inside a single datacentre?
What about the security of Azure endpoints?
Thank you.
