We have an HA Kubernetes cluster. We want to secure the connections between nodes with TLS, but we don't want to secure the connections between pods within a node. Is there any way to do that?
I have an EKS cluster and am trying to connect an application pod to an ElastiCache Redis endpoint. Both are in the same VPC, and I have allowed communication between EKS and ElastiCache Redis.
When I telnet from a pod to the ElastiCache Redis endpoint, it connects. Unfortunately, when I access it from my Node.js application, it won't work.
Can somebody help me resolve this?
I connect to Atlas and use the Access Control List to prevent unwanted connections. I have successfully implemented a VPN server with the OpenVPN image, but I don't know how I can route my Kubernetes outgoing traffic through my OpenVPN server.
Is there any setup I need on the cluster, Docker, or application (Node.js) side?
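One thing worth checking while debugging: telnet only proves that a TCP connection can be opened, so if in-transit encryption is enabled on the cache, the application client also has to speak TLS even though the telnet test looks fine. Below is a minimal Node.js (TypeScript) sketch using the ioredis package with a placeholder endpoint; adjust the host, port, and the tls option to your cache configuration.

```typescript
import Redis from "ioredis";

const redis = new Redis({
  host: "my-cache.abc123.use1.cache.amazonaws.com", // placeholder: your ElastiCache endpoint
  port: 6379,
  // If in-transit encryption is enabled on the cache, the client must use TLS.
  // Telnet can still open the raw TCP socket, which is why that test succeeds.
  tls: {},
});

redis
  .ping()
  .then((reply) => console.log("Redis reply:", reply)) // expect "PONG"
  .catch((err) => console.error("Redis connection failed:", err))
  .finally(() => redis.disconnect());
```

If the cache does not have in-transit encryption enabled, drop the tls option; the rest of the sketch stays the same.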
We have a system in an Azure Kubernetes cluster consisting of 7 nodes. 3 of those nodes are Consul servers, forming a quorum. We are encountering a problem where, when the pods restart, their IP addresses change, so we are forced to re-configure the Consul servers manually.
Consul is installed using the HashiCorp Helm chart for our Consul cluster. All of its files are stored in a persistent volume (/data), and it does store the node-id in the StatefulSet.
If there is a way for Consul to reconfigure itself, or for Kubernetes to provide a static IP for the Consul servers to connect to each other, I would appreciate it if it could be shared!
Did you install Consul on your cluster using the HashiCorp Helm chart? Their architecture uses a StatefulSet for the Consul server pods and persistent volume claims to store the node-id, so the pods can move around. (ref: https://www.consul.io/docs/k8s/installation/overview#server-agents)
If you have used another installation method, do you have persistent volumes so the node-id does not change between restarts? Please also expand on your Consul installation method, current configuration, and the re-configuration steps that are required.
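If the servers do run as a StatefulSet behind a headless service, other workloads can reach them through their stable DNS names instead of pod IPs, which avoids re-configuration after restarts. A minimal sketch against the Consul HTTP API; the service and namespace names are assumptions based on the default chart naming, so check them in your cluster first.

```typescript
// Query the Consul HTTP API through the stable Service DNS name instead of a
// pod IP, so server restarts (and the IP changes that come with them) do not
// require reconfiguring clients. Service/namespace names are assumptions;
// check them with `kubectl get svc -n <namespace>`.
const CONSUL_HTTP = "http://consul-server.consul.svc.cluster.local:8500";

async function currentLeader(): Promise<string> {
  // /v1/status/leader returns the address of the current Raft leader,
  // e.g. "10.244.1.17:8300".
  const res = await fetch(`${CONSUL_HTTP}/v1/status/leader`);
  if (!res.ok) throw new Error(`Consul returned HTTP ${res.status}`);
  return (await res.json()) as string;
}

currentLeader()
  .then((leader) => console.log("Consul leader:", leader))
  .catch((err) => console.error("Consul query failed:", err));
```

Individual server pods likewise get stable per-pod names of the form consul-server-0.consul-server.&lt;namespace&gt;.svc.cluster.local, which can be used in join configuration instead of raw IPs.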
I have a VPN between the company network (172.16.0.0/16) and GCP (10.164.0.0/24).
On GCP there is a Cassandra cluster running with 3 instances. These instances get dynamic local IP addresses, for example 10.4.7.4, 10.4.6.5, 10.4.3.4.
My issue: from the company network I cannot access the 10.4.x addresses, as the tunnel works only for 10.164.0.0/24.
I tried setting up an LB service on 10.164.0.100 with the Cassandra nodes behind it. This doesn't work: when I configure that IP address as a seed node on the local cluster, it gets a reply from one of the 10.4.x IP addresses, which it doesn't have in its seed list.
I need advice on how to set up inter-DC sync in this scenario.
The IP addresses which Kubernetes assigns to Pods and Services are internal, cluster-only addresses that are not accessible from outside the cluster. Some CNIs make it possible to create a connection between in-cluster addresses and external networks, but I don't think that is a good idea in your case.
You need to expose your Cassandra using a Service of type NodePort or LoadBalancer. There is another answer with the same solution on the Kubernetes GitHub.
If you add a Service of type NodePort, your Cassandra will be available on a selected port on all Kubernetes nodes.
If you choose LoadBalancer, Kubernetes will create a cloud load balancer for you, which will be the entry point for Cassandra. Because you have a VPN to your VPC, I think you will need an internal load balancer.
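For completeness, a small sketch of checking that such an internal load balancer is reachable from the company network, using the Node.js cassandra-driver and the 10.164.0.100 address from the question; the data-center name and port are placeholders. This only verifies client connectivity; for inter-DC gossip the remote nodes still have to advertise addresses the on-prem seeds can reach.

```typescript
// Connectivity check from the company network, assuming Cassandra has been
// exposed through an internal load balancer at 10.164.0.100 as described
// above. Data-center name is a placeholder.
import { Client } from "cassandra-driver";

const client = new Client({
  contactPoints: ["10.164.0.100:9042"],
  localDataCenter: "datacenter1", // placeholder: the GCP data-center name
});

async function main(): Promise<void> {
  await client.connect();
  const rs = await client.execute("SELECT release_version FROM system.local");
  console.log("Connected, Cassandra version:", rs.rows[0]["release_version"]);
  await client.shutdown();
}

main().catch((err) => {
  console.error("Connection failed:", err);
  process.exit(1);
});
```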
The standard and premium pricing tiers of Azure Redis Cache provide master/slave replication:
Standard—A replicated cache in a two-node primary/secondary configuration managed by Microsoft, with a high-availability SLA.
But the Azure portal provides connection details (hostname, port, key) for only a single Redis instance. Is there a way to connect to the slave process in a replica?
Since the Azure Redis service manages replication and automatic failover on your behalf, it is best not to make any assumptions about which node is the master, as that could change on a failover. Hence the service exposes only one endpoint and ensures that any requests to that endpoint hit the correct master. It is technically possible to connect to the master or the slave directly, but Azure doesn't expose this, and it would require checks on the client side to ensure that the node is indeed the master or the slave.
If you turn on clustering, the Redis cluster protocol is used. Under this protocol, you can run a CLUSTER NODES command, and it should return a list of master and slave nodes and the ports that each of them is listening on.
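For illustration, here is a sketch of issuing that command from Node.js with the node-redis client; the hostname and access key are placeholders, and it assumes clustering is enabled on the cache (Azure Redis accepts TLS connections on port 6380).

```typescript
// List master and replica nodes of a clustered Azure Cache for Redis.
// Hostname and access key below are placeholders.
import { createClient } from "redis";

const client = createClient({
  url: "rediss://:<access-key>@mycache.redis.cache.windows.net:6380",
});

async function main(): Promise<void> {
  await client.connect();
  // CLUSTER NODES returns one line per node with its role (master/slave),
  // its address, and the port it listens on.
  const nodes = await client.sendCommand(["CLUSTER", "NODES"]);
  console.log(nodes);
  await client.quit();
}

main().catch((err) => {
  console.error("Failed to query cluster topology:", err);
  process.exit(1);
});
```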
The Redis service manages replication and failover for high availability. This is not something that is exposed to you; that is, you cannot connect directly to the slave/secondary.