Consul quorum (3 consul servers) in Azure Kubernetes Pods - azure

We have a system running in an Azure Kubernetes cluster consisting of 7 nodes. 3 of those nodes are Consul servers, forming a quorum. We are encountering a problem: when the pods restart, their IP addresses change, so we are forced to re-configure the Consul servers manually.
Consul is installed using the HashiCorp Helm chart for our Consul cluster. All of its files are stored in a persistent volume (/data), and the node-id is preserved across restarts via the StatefulSet's persistent volume claims.
If there is a way for Consul to reconfigure itself, or for Kubernetes to provide static IPs for the Consul servers to connect to each other, I would appreciate it if it could be shared!

Did you install Consul on your cluster using the HashiCorp Helm chart? Their architecture uses a StatefulSet for the Consul server pods and persistent volume claims to store the node-id so the pods can move around. (ref: https://www.consul.io/docs/k8s/installation/overview#server-agents)
If you used another installation method, do you have persistent volumes so the node-id does not change between restarts? Please also expand on your Consul installation method, your current configuration, and the re-configuration steps that are required.
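For reference, recent versions of the official chart join the servers by the StatefulSet pods' stable DNS names rather than by pod IP, so an IP change on restart should not require manual re-configuration. A rough values.yaml sketch of that setup (field names assume a recent hashicorp/consul chart; the DNS names assume a release called consul in the default namespace, so check the actual StatefulSet and headless-service names in your cluster):

server:
  replicas: 3
  storage: 10Gi            # persistent volume per server, keeps the node-id stable
  extraConfig: |
    {
      "retry_join": [
        "consul-server-0.consul-server.default.svc.cluster.local",
        "consul-server-1.consul-server.default.svc.cluster.local",
        "consul-server-2.consul-server.default.svc.cluster.local"
      ]
    }

With retry_join pointed at stable DNS names (or at the Kubernetes cloud auto-join provider), the servers can rediscover each other after a restart even though their pod IPs have changed.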

Related

Azure kubernetes - Azure CNI & Istio, sidecar IP allocation?

Our Azure Kubernetes cluster is configured with Azure CNI for networking, which uses a subnet with a /21 CIDR.
As we are planning to deploy the Istio service mesh and additional sidecars for log shipping, how will those impact the available IPs? Will they consume IPs? If so, how can we avoid IP exhaustion?
Kubernetes allocates a single IP per pod, so no matter how many sidecars you have, a single pod will only ever have a single IP. Basically, you don't need to do anything in this regard.
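To illustrate, a hypothetical pod with a log-shipping sidecar (names and images below are made up) still gets exactly one IP, shared by both containers:

apiVersion: v1
kind: Pod
metadata:
  name: app-with-sidecar              # illustrative name
spec:
  containers:
  - name: app                         # main application container
    image: example.azurecr.io/app:1.0
  - name: log-shipper                 # sidecar container, shares the pod's network namespace and IP
    image: example.azurecr.io/log-shipper:1.0

kubectl get pod app-with-sidecar -o wide shows a single pod IP regardless of the container count. With Azure CNI the IP budget is driven by the number of pods per node (the maxPods setting), not by the number of containers inside each pod.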

Cassandra inter DC sync over VPN on GCP

I have a VPN between the company network 172.16.0.0/16 and GCP 10.164.0.0/24.
On GCP there is a Cassandra cluster running with 3 instances. These instances get dynamic local IP addresses, for example 10.4.7.4, 10.4.6.5, 10.4.3.4.
My issue: from the company network I cannot access the 10.4.x addresses, as the tunnel only works for 10.164.0.0/24.
I tried setting up an LB service on 10.164.0.100 with the Cassandra nodes behind it. This doesn't work: when I configure that IP address as a seed node on the local cluster, it gets a reply from one of the 10.4.x IP addresses, which it does not have in its seed list.
I need advice on how to set up inter-DC sync in this scenario.
The IP addresses which K8s assigns to Pods and Services are internal, cluster-only addresses that are not accessible from outside the cluster. Some CNIs can connect in-cluster addresses to external networks, but I don't think that is a good idea in your case.
You need to expose your Cassandra using a Service of type NodePort or LoadBalancer. There is another answer with the same solution on the Kubernetes GitHub.
If you add a Service of type NodePort, your Cassandra will be available on the selected port on all Kubernetes nodes.
If you choose LoadBalancer, Kubernetes will create a cloud load balancer for you, which will be the entry point for Cassandra. Because you have a VPN to your VPC, I think you will need an internal load balancer.
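For illustration, an internal LoadBalancer Service on GKE might look roughly like the sketch below. The annotation shown is GKE's internal load balancer annotation (older clusters use cloud.google.com/load-balancer-type instead), and the Service/selector names are made up; the internal LB receives an address from the VPC subnet (10.164.0.0/24), which is reachable over your VPN.

apiVersion: v1
kind: Service
metadata:
  name: cassandra-internal                              # illustrative name
  annotations:
    networking.gke.io/load-balancer-type: "Internal"    # internal (VPC-only) load balancer
spec:
  type: LoadBalancer
  selector:
    app: cassandra                                      # must match your Cassandra pod labels
  ports:
  - name: cql
    port: 9042
    targetPort: 9042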

kubectl exec vs ssh using bastion

KOPS lets us create a Kubernetes cluster along with a bastion that has SSH access to the cluster nodes.
With this setup, is it still considered safe to use kubectl to interact with the Kubernetes API server?
kubectl can also be used to open a shell on the pods. Does this need any restrictions?
What are the precautionary steps that need to be taken, if any?
Should the Kubernetes API server also be made accessible only through the bastion?
Deploying a Kubernetes cluster with the default kops settings isn't secure at all and shouldn't be used in production as such. There are multiple configuration settings that can be changed using the kops edit command. The following points should be considered after creating a Kubernetes cluster via kops (a sketch combining several of these flags follows the audit-log snippet below):
Cluster Nodes in Private Subnets (existing private subnets can be specified using --subnets with the latest version of kops)
Private API LoadBalancer (--api-loadbalancer-type internal)
Restrict the API load balancer to a certain private IP range (--admin-access 10.xx.xx.xx/24)
Restrict SSH access to the cluster nodes to a particular IP (--ssh-access xx.xx.xx.xx/32)
A hardened image can also be provisioned for the cluster nodes (--image)
The authorization mode must be RBAC. With recent Kubernetes versions, RBAC is enabled by default.
Audit logs can be enabled via configuration in kops edit cluster:
kubeAPIServer:
  auditLogMaxAge: 10
  auditLogMaxBackups: 1
  auditLogMaxSize: 100
  auditLogPath: /var/log/kube-apiserver-audit.log
  auditPolicyFile: /srv/kubernetes/audit.yaml
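As a rough sketch of how several of those flags fit together at cluster-creation time (the cluster name, CIDRs, zones and CNI choice below are placeholders, not recommendations):

# Sketch only - adjust names, ranges and zones to your environment.
kops create cluster \
  --name=example.k8s.local \
  --topology=private \
  --networking=calico \
  --bastion \
  --api-loadbalancer-type=internal \
  --admin-access=10.0.0.0/24 \
  --ssh-access=203.0.113.10/32 \
  --authorization=RBAC \
  --zones=eu-west-1a,eu-west-1b,eu-west-1c

The same settings can be reviewed or changed later with kops edit cluster, as mentioned above.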
kops provides reasonable defaults, so the simple answer is: it is reasonably safe to use kops-provisioned infrastructure as-is after provisioning.

Network setup for accessing Azure Redis service from Azure AKS

We have an application that runs on an Ubuntu VM. This application connects to the Azure Redis, Azure Postgres and Azure CosmosDB (MongoDB) services.
I am currently working on moving this application to Azure AKS and intend to access all the above services from the cluster. The services will continue to be external and will not reside inside the cluster.
I am trying to understand how the network/firewall of both the services and AKS should be configured so that pods inside the cluster can access the above services, or any Azure service in general.
I tried the following:
Created a ConfigMap containing the connection params (public IP/address, username/pwd, port, etc.) of all the services and used this ConfigMap in the Deployment resource.
Hardcoded the connection params of all the services as env vars inside the container image.
In the firewall/inbound rules of the services, I added the AKS API IP and the individual node IPs.
None of the above worked. Did I miss anything? What else should be configured?
I tested the setup locally on minikube with all the services running on my local machine and it worked fine.
I am currently working on moving this application to Azure AKS and intend to access all the above services from the cluster.
I assume you would like all of the services to access each other, and that all the services are in the AKS cluster? If so, I advise you to configure an internal load balancer in the AKS cluster.
Internal load balancing makes a Kubernetes service accessible to applications running in the same virtual network as the Kubernetes cluster.
You can follow this document: Use an internal load balancer with Azure Kubernetes Service (AKS). Good luck!
Outbound traffic in Azure is SNAT-translated, as stated in this article. If you already have a service in your AKS cluster, the outbound connections from all pods in your cluster will go through the first LoadBalancer-type service's IP; I strongly suggest you create one for the sole purpose of having a consistent outbound IP. You can also pre-create a public IP and use it, as stated in this article, via the loadBalancerIP spec.
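A minimal sketch of that approach, assuming a pre-created static public IP in a resource group the cluster identity can use (the IP, names and ports below are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: egress-anchor                 # illustrative name; any LoadBalancer Service will do
  # If the public IP lives outside the node resource group, the
  # service.beta.kubernetes.io/azure-load-balancer-resource-group annotation is also needed.
spec:
  type: LoadBalancer
  loadBalancerIP: 52.0.0.10           # the pre-created static public IP
  selector:
    app: myapp                        # illustrative selector
  ports:
  - port: 80
    targetPort: 8080

That IP is then the address to whitelist in the firewall rules of Redis, Postgres and Cosmos DB, instead of the individual node IPs.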
On a side note, rather than a ConfigMap, given the sensitivity of the connection strings, I'd suggest you create a Secret and pass that down to your Deployment to be mounted or exported as environment variables.
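A minimal sketch of that pattern (the Secret name, keys and values are illustrative):

apiVersion: v1
kind: Secret
metadata:
  name: redis-conn                                   # illustrative name
type: Opaque
stringData:
  REDIS_HOST: myredis.redis.cache.windows.net        # placeholder host
  REDIS_PASSWORD: "<access-key>"                     # placeholder credential

The Deployment's container can then load it with envFrom / secretRef (or mount it as a volume), so the credentials never end up baked into the image or exposed in a plain ConfigMap.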

Using Fluentd for store logs in ElasticSearch

I have a Kubernetes cluster with 50+ pods on it, and I want to grab the logs from all of these pods, then store the logs in Elasticsearch and visualize them using Kibana, but Elasticsearch and Kibana should be outside Kubernetes, on another virtual machine in the same network.
How can I configure Fluentd to grab the logs and send them to a non-Kubernetes Elasticsearch?
It is totally possible. In the Kubernetes cluster, you would need to expose the Fluentd service on an external IP reachable from the virtual machine outside the cluster, and run Elasticsearch and Kibana on that virtual machine.
Elasticsearch (outside the Kubernetes cluster) will then reach Fluentd (inside the Kubernetes cluster) via the Fluentd service in k8s and pull the logs.
There are four ways to expose the Fluentd service in k8s for external access by Elasticsearch:
LoadBalancer service type, which sets the ExternalIP automatically. This is used when there is an external, non-k8s cloud provider's load balancer (GCE, AWS or Azure), and that external load balancer provides the ExternalIP for the exposed service.
ExternalIPs, per https://kubernetes.io/docs/concepts/services-networking/service/#external-ips.
NodePort: in this approach the service can be accessed from outside the cluster using NodeIP:NodePort/url/of/the/service (a sketch follows this list).
Ingress: https://kubernetes.io/docs/concepts/services-networking/ingress/
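For instance, a NodePort Service in front of Fluentd (names are made up; 24224 is Fluentd's default forward port, and the nodePort must fall within the 30000-32767 range) could look like:

apiVersion: v1
kind: Service
metadata:
  name: fluentd                  # illustrative name
spec:
  type: NodePort
  selector:
    app: fluentd                 # must match the Fluentd pod labels
  ports:
  - port: 24224
    targetPort: 24224
    nodePort: 32224              # reachable as <any-node-ip>:32224 from the VM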
