Cassandra inter-DC sync over VPN on GCP

I have a VPN between the company network 172.16.0.0/16 and GCP 10.164.0.0/24.
On GCP there is a Cassandra cluster running with 3 instances. These instances get dynamic local IP addresses, for example 10.4.7.4, 10.4.6.5, 10.4.3.4.
My issue: from the company network I cannot access the 10.4.x addresses, as the tunnel only works for 10.164.0.0/24.
I tried setting up an LB service on 10.164.0.100 with the Cassandra nodes behind it. This doesn't work: when I configure that IP address as a seed node on the local cluster, it gets a reply from one of the 10.4.x IP addresses, which it doesn't have in its seed list.
I need advice on how to set up inter-DC sync in this scenario.

IP addresses which K8s assigns to Pods and Services are internal, cluster-only addresses which are not accessible from outside the cluster. Some CNI plugins can create a connection between in-cluster addresses and external networks, but I don't think that is a good idea in your case.
You need to expose your Cassandra using a Service of type NodePort or LoadBalancer. Here is another answer with the same solution from the Kubernetes GitHub.
If you add a Service of type NodePort, your Cassandra will be available on a selected port on all Kubernetes nodes.
If you choose LoadBalancer, Kubernetes will create a Cloud Load Balancer for you which will be the entry point for Cassandra. Because you have a VPN to your VPC, I think you will need an Internal Load Balancer. The sketch below shows what that could look like.
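A minimal sketch of such a Service, assuming the Cassandra pods carry an app: cassandra label and that the cluster runs on GKE (the internal-LB annotation below is GKE-specific; other clouds use different annotations):

apiVersion: v1
kind: Service
metadata:
  name: cassandra-internal-lb          # hypothetical name
  annotations:
    # GKE-specific: request an internal (VPC-only) load balancer
    networking.gke.io/load-balancer-type: "Internal"
spec:
  type: LoadBalancer
  selector:
    app: cassandra                     # assumed pod label
  ports:
    - name: intra-node                 # inter-node / inter-DC traffic
      port: 7000
    - name: cql                        # client connections
      port: 9042

Once provisioning finishes, kubectl get svc cassandra-internal-lb shows the assigned address under EXTERNAL-IP; it should come from the 10.164.0.0/24 VPC range and therefore be reachable over the VPN.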

Related

What do "within cluster" and "outside cluster" IPs mean in AKS?

I have a cluster and pods running inside the cluster.
There is an associated service for each pod; a few are ClusterIP, a few are NodePort and a few are LoadBalancer.
I also have a VM running in my Azure account.
If I hit the IPs of these services/pods in the VM's browser, what is considered within-cluster and what is considered outside the cluster?
Why is my LoadBalancer IP the only one accessible in the VM's Chrome browser, while none of the ClusterIPs are accessible there?
Does that mean ClusterIPs are external and the LoadBalancer is internal?
ClusterIP service types are internal to your cluster. The IP address assigned to the service comes from the service CIDR, which is an internal range. You cannot reach a ClusterIP service from outside the cluster.
NodePort services are external and are bound to the IP address of the node on a specified port. This is the node's IP address that is "external" to the cluster (reachable from the outside).
LoadBalancer services are external as well, and can usually be bound to various public IP addresses as required in order to properly load balance traffic to your services.
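For illustration, a minimal NodePort Service sketch (the name, labels and ports here are made up for the example):

apiVersion: v1
kind: Service
metadata:
  name: web-nodeport        # hypothetical name
spec:
  type: NodePort
  selector:
    app: web                # assumed pod label
  ports:
    - port: 80              # internal ClusterIP port
      targetPort: 8080      # container port
      nodePort: 30080       # reachable on every node's external IP

With this in place, http://<node-ip>:30080 from your VM should reach the pods, while the ClusterIP (the port 80 address) remains reachable only from inside the cluster.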
You can read more about Service types in the Kubernetes documentation.
I hope this helps!

Is it possible to create a SQL Always On configuration in a Windows 2016 cluster with no LB IP?

I don't know if this is possible or not.
The client wants to create a Windows 2016 cluster with 2 different VMs/nodes in Azure which are in different subscriptions and virtual networks. No shared storage.
The idea is to configure SQL Always On between them so that the DB and SQL config replicate exactly from VM1 to VM2. The Always On config would then be removed once the sync completes. The client won't do a normal backup/restore from one to the other (I already suggested that approach); they want to go with the Always On approach.
The VMs are already on the same local domain and can ping each other. The PowerShell command to test whether a cluster can be formed with both VMs was successful:
PS C:\windows\system32> Test-Cluster -Node VM07.domain.local,VM04.domain.local
WARNING: System Configuration - Validate Software Update Levels: The test reported some warnings..
WARNING: Network - Validate Network Communication: The test reported some warnings..
WARNING:
Test Result:
HadUnselectedTests, ClusterConditionallyApproved
Testing has completed for the tests you selected. You should review the warnings in the Report. A cluster solution is
supported by Microsoft only if you run all cluster validation tests, and all tests succeed (with or without warnings).
Test report file path: C:\xxxx\xxxxxx\AppData\Local\Temp\Validation Report 2021.03.26 At 11.13.54.htm
The thing is that this cluster doesn't have a listener or load balancer IP, as that requires the VMs to be on the same subnet. The cluster is only going to be used for the SQL Always On config.
Is it possible to create this cluster without a load balancer static IP for the cluster name? Can the IP of one of the 2 nodes be used for this instead? Something like:
VM07 IP: 10.1.2.3
VM04 IP: 10.1.2.4
New-Cluster -Name newcluster -Node VM07,VM04 -StaticAddress 10.1.2.3 -NoStorage
I know it's an odd idea but I want to be sure whether it's possible in practice.
Thank you!
Use a single NIC per server (cluster node) and a single subnet.
Because the virtual IP access point works differently in Azure, you need to configure Azure Load Balancer to route traffic to the IP address of the FCI nodes or the availability group listener. In Azure virtual machines, a load balancer holds the IP address for the VNN that the clustered SQL Server resources rely on. The load balancer distributes inbound flows that arrive at the front end, and then routes that traffic to the instances defined by the back-end pool. You configure traffic flow by using load-balancing rules and health probes. With SQL Server FCI, the back-end pool instances are the Azure virtual machines running SQL Server.
Refer to this link for best practices and limitations: https://learn.microsoft.com/en-us/azure/azure-sql/virtual-machines/windows/hadr-cluster-best-practices
UPDATE
Azure Load Balancer or Application Gateway can be configured with any kind of static or dynamic IP as the destination.
https://learn.microsoft.com/en-us/azure/load-balancer/manage

Azure Kubernetes Cluster - Accessing and interacting with Service(s) from on-premise

Currently we have the following scenario:
We have established our connection (network-wise) from on-premises to the Azure Kubernetes cluster (private cluster!) without any problems.
Ports which are routed and allowed:
TCP 80
TCP 443
So far we are in a development environment and are testing different configurations.
For setting up our AKS, we need to set the virtual network (via CNI) and the Service CIDR. We have set the following configuration (just an example):
Virtual Network: 10.2.0.0/21
Service CIDR: 10.2.8.0/23
So our pods get their IPs from our virtual network subnet, and the services get their IPs from the Service CIDR. So far so good. A route table (associated with the virtual network subnet) forwards all traffic to our firewall and vice versa: interacting with the virtual network works without any issue. The network team (which is new to Azure cloud stuff as well) has said that the connection and access to the Service CIDR should be working.
Sadly, we are unable to access the Service CIDR.
For example, let's say we want to set up the Kubernetes dashboard (web UI) via https://kubernetes.io/docs/tasks/access-application-cluster/web-ui-dashboard/. After applying the YAML, the Kubernetes dashboard pod and service are created successfully. The pod can be pinged and "accessed", but the service, which is supposed to expose the dashboard on port 443, cannot be accessed, e.g. https://10.2.8.42/.
My workaround so far is to give the Kubernetes dashboard Service (type: ClusterIP) an external IP from the virtual network (see the sketch below). This works, but I am not really fond of it, since I have to interact with the virtual network rather than the Service CIDR.
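For reference, this is roughly what that workaround looks like as a manifest; the namespace, label, container port and the 10.2.x address are assumptions for the example:

apiVersion: v1
kind: Service
metadata:
  name: kubernetes-dashboard
  namespace: kubernetes-dashboard   # assumed namespace
spec:
  type: ClusterIP
  selector:
    k8s-app: kubernetes-dashboard   # assumed pod label
  externalIPs:
    - 10.2.0.42                     # assumed address from the virtual network subnet
  ports:
    - port: 443
      targetPort: 8443              # assumed dashboard container port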
Is this really the correct way? Any hints on how to make the Service CIDR accessible? What am I missing?
Any help would be appreciated.

Access Azure Service Fabric application from internet

I think I'm missing something obvious.
I have created an SF cluster in Azure. It has a Load Balancer and a network with 3 VMs (nodes) which have IP addresses in 10.0.0.0/16.
When I ask the Load Balancer for the application endpoint it responds with a node IP address (I captured packets with Wireshark), but I can't access it because the network is private.
A bit more info about my case: 3 x A0 instances, net.tcp:20001 endpoints, the firewall allows connections, ports are opened and listening, I have a public IP address assigned to the balancer, and a probe for the service port.
On your load balancer you will need to assign a public IP address. You can find some really good detailed guides in the documentation for this.
OK, here it is:
When you want to communicate with the service from outside the cluster, just use the load balancer IP; you don't need the naming service communication. The load balancer has probes that check ports on each node in the cluster and forward your request to a random instance that hosts the service you are asking for.
When you want one microservice to communicate with another within the cluster, you have 2 options:
ask the naming service through the load balancer and then communicate with the service directly;
if you know for sure that the service runs on every node in your cluster, you can just communicate with localhost directly.
When you want to communicate from a separate VM to a microservice in the cluster from within the cluster's virtual network (you can connect a WebApp to the cluster using VPN), you can ask the naming service through the load balancer, but using the Service Fabric HTTP API, because you will not be able to use the Service Fabric classes on a VM which doesn't have the Service Fabric SDK installed. Here is an example of service resolving: https://github.com/lAnubisl/ServiceFabricHttpApiClient

How do I expose Kubernetes service to the internet?

I am running a Kubernetes cluster with 1 master (also a node) and 2 nodes on Azure. I am using Ubuntu with a Flannel overlay network. So far everything is working well. The only problem I have is exposing the service to the internet.
I am running the cluster on an Azure subnet. The master has a NIC attached to it that has a public IP. This means that if I run a simple server that listens on port 80, I can reach my server using a domain name (Azure gives the option to have a domain name for a public IP).
I am also able to reach the Kubernetes guestbook frontend service with a hack: I checked all the listening ports on the master and tried each port with the public IP. I was able to hit the Kubernetes service and get a response. Based on my understanding, this goes directly to the pod running on the master (which is also a node) rather than through the service IP (which would have load balanced across any of the pods).
My question is: how do I map the external IP to the service IP? I know Kubernetes has a setting that works only on GCE (which I can't use right now). But is there some neat way of telling etcd/flannel to do this?
If you use the kubectl expose command:
--external-ip="": External IP address to set for the service. The service can be accessed by this IP in addition to its generated service IP.
Or, if you create the service from a JSON or YAML file, use the spec/externalIPs array, as in the sketch below.
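A minimal sketch of the YAML variant, assuming the guestbook frontend pods carry an app: guestbook-frontend label; the 1.2.3.4 address stands in for the master's public IP:

apiVersion: v1
kind: Service
metadata:
  name: frontend                # hypothetical name
spec:
  selector:
    app: guestbook-frontend     # assumed pod label
  ports:
    - port: 80
      targetPort: 80
  externalIPs:
    - 1.2.3.4                   # the node's externally reachable IP (assumption)

Note that with externalIPs, Kubernetes does not manage the address: kube-proxy simply accepts traffic arriving at a node for that IP and port and forwards it to the service, so the IP must already route to one of the nodes.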
