I've created two pods on top of an Azure Kubernetes cluster:
1) Application
2) MS SQL Server
Both pods are exposed via an Azure Load Balancer and both have external IPs. I am unable to use the external IP in my application config file, although I can connect to that SQL Server from anywhere else. For some reason I am unable to telnet to the DB's external IP from the application container.
The connection times out, but I can ping/telnet the DB's cluster IP. So I tried using the DB cluster IP in my config file to check whether the connection would succeed, but no luck.
Could someone help me with this?
As Suresh said, we should not use the public IP address to connect them.
We can refer to this article to create an application and a database, then connect the front end to the back end using a service.
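For example, a minimal sketch of such a Service, assuming the SQL Server pod carries the label app: mssql (the names and labels here are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: mssql-svc          # internal DNS name the application can use
spec:
  type: ClusterIP          # only reachable from inside the cluster
  selector:
    app: mssql             # must match the labels on the SQL Server pod
  ports:
  - port: 1433
    targetPort: 1433

The application's connection string would then point at mssql-svc,1433 (or mssql-svc.<namespace>.svc.cluster.local) instead of a public IP.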
This issue was fixed in another way, but running the application and DB as separate services is still a nightmare in Azure Container Service (Kubernetes).
1) I've combined the app and DB in the same container and set the DB connection string to "localhost" or "localhost,1433" in my application config file.
2) Created Docker image with above setup
3) Created pod
4) Exposed the pod with two listening ports: kubectl expose pod "xxx" --port=80,1433 --type=LoadBalancer (a manifest sketch of such a multi-port Service follows this list)
5) I can access the DB on port 1433
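For reference, the same exposure can also be written as a Service manifest; this is only a sketch, and the pod label app: xxx is an assumption:

apiVersion: v1
kind: Service
metadata:
  name: app-with-db
spec:
  type: LoadBalancer
  selector:
    app: xxx               # assumed pod label
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: mssql
    port: 1433
    targetPort: 1433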
In the above setup, we plan to keep the container in an auto-scaled environment with persistent volume storage.
We are also planning to do scheduled backups of the container, so we do not want to lose the DB data.
Does anybody have other thoughts on the major issues we need to consider in the above setup?
This issue was fixed!
I created two pods, one for the application and one for the DB. Earlier, when I provided the DB cluster IP in the application config file, it did not work, even though I was able to telnet to port 1433.
I then created another K8s cluster in Azure and tried the same setup (providing the cluster IP). This time it worked like a charm.
Thanks to @Suresh Vishnoi
I don't know if this is possible or not.
A client wants to create a Windows 2016 cluster with 2 VMs/nodes in Azure that are in different subscriptions and virtual networks, with no shared storage.
The idea is to configure SQL Always On between them so that the DB and SQL config replicate exactly from VM1 to VM2. The Always On config would then be removed once the sync completes. The client won't do a normal backup/restore from one to the other (I already suggested that approach); they would go with the Always On approach.
The VMs are already on the same local domain and they can ping each other. The PowerShell command to test whether the cluster can be created with both VMs was successful:
PS C:\windows\system32> Test-Cluster -Node VM07.domain.local,VM04.domain.local
WARNING: System Configuration - Validate Software Update Levels: The test reported some warnings..
WARNING: Network - Validate Network Communication: The test reported some warnings..
WARNING:
Test Result:
HadUnselectedTests, ClusterConditionallyApproved
Testing has completed for the tests you selected. You should review the warnings in the Report. A cluster solution is supported by Microsoft only if you run all cluster validation tests, and all tests succeed (with or without warnings).
Test report file path: C:\xxxx\xxxxxx\AppData\Local\Temp\Validation Report 2021.03.26 At 11.13.54.htm
The thing is that this cluster doesn't have a listener or load balancer IP, as this requires VMs on the same subnet. The cluster is only going to be used for the SQL Always On config.
Is it possible to create this cluster without a load balancer static IP for the cluster name? Can the IP of one of the 2 nodes be used for this instead? Something like:
VM07 IP: 10.1.2.3
VM04 IP: 10.1.2.4
New-Cluster -Name newcluster -Node VM07,VM04 -StaticAddress 10.1.2.3 -NoStorage   (i.e. using VM07's IP as the cluster IP)
I know it's an odd idea, but I want to be sure whether it's possible in practice or not.
Thank you!
Use a single NIC per server (cluster node) and a single subnet.
Because the virtual IP access point works differently in Azure, you need to configure Azure Load Balancer to route traffic to the IP address of the FCI nodes or the availability group listener. In Azure virtual machines, a load balancer holds the IP address for the VNN that the clustered SQL Server resources rely on. The load balancer distributes inbound flows that arrive at the front end, and then routes that traffic to the instances defined by the back-end pool. You configure traffic flow by using load-balancing rules and health probes. With SQL Server FCI, the back-end pool instances are the Azure virtual machines running SQL Server.
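As a rough sketch of that load balancer configuration with the Azure CLI (all resource names, the frontend IP, and the probe port below are placeholders):

# Internal load balancer whose frontend IP becomes the listener/VNN IP
az network lb create --resource-group MyRG --name sqlILB --sku Standard \
  --vnet-name MyVnet --subnet MySubnet \
  --frontend-ip-name sqlListenerFE --private-ip-address 10.1.2.10 \
  --backend-pool-name sqlBackend

# Health probe that the cluster answers on the probe port
az network lb probe create --resource-group MyRG --lb-name sqlILB \
  --name sqlProbe --protocol tcp --port 59999

# Load-balancing rule forwarding SQL traffic to the back-end pool, with floating IP enabled
az network lb rule create --resource-group MyRG --lb-name sqlILB --name sqlRule \
  --protocol tcp --frontend-port 1433 --backend-port 1433 \
  --frontend-ip-name sqlListenerFE --backend-pool-name sqlBackend \
  --probe-name sqlProbe --floating-ip true

The SQL Server VMs would then be added to the back-end pool, and the cluster's IP address resource configured to answer on the probe port.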
Refer to this link for best practices and limitations: https://learn.microsoft.com/en-us/azure/azure-sql/virtual-machines/windows/hadr-cluster-best-practices
UPDATE
Azure Load Balancer or Application Gateway can be configured with any kind of static or dynamic IP as the destination.
https://learn.microsoft.com/en-us/azure/load-balancer/manage
We have an application that runs on an Ubuntu VM. This application connects to Azure Redis, Azure Postgres and Azure CosmosDB (MongoDB) services.
I am currently working on moving this application to Azure AKS and intend to access all the above services from the cluster. The services will continue to be external and will not reside inside the cluster.
I am trying to understand how the network/firewall of both the services and aks should be configured so that pods inside the cluster can access the above services or any Azure service in general.
I tried the following:
Created a ConfigMap containing the connection params (public IP/address, username/pwd, port, etc.) of all the services and used this ConfigMap in the Deployment resource.
Hardcoded the connection params of all the services as env vars inside the container image
In the firewall/inbound rules of the services, I added the AKS API IP and the individual node IPs.
None of the above worked. Did I miss anything? What else should be configured?
I tested the setup locally on minikube with all the services running on my local machine and it worked fine.
I am currently working on moving this application to Azure AKS and
intend to access all the above services from the cluster.
I assume that you would like all the services to access each other and that all the services are in the AKS cluster? If so, I advise you to configure an internal load balancer in the AKS cluster.
Internal load balancing makes a Kubernetes service accessible to
applications running in the same virtual network as the Kubernetes
cluster.
You can try following this document: Use an internal load balancer with Azure Kubernetes Service (AKS). Good luck!
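For reference, an internal load balancer in AKS is an ordinary LoadBalancer Service with the Azure-internal annotation; a minimal sketch (the name and selector are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: internal-app
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"   # keeps the LB frontend on the VNet
spec:
  type: LoadBalancer
  selector:
    app: internal-app      # assumed pod label
  ports:
  - port: 80
    targetPort: 80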
Outbound traffic in Azure is SNAT-translated, as stated in this article. If you already have a service in your AKS cluster, the outbound connection from all pods in your cluster will come through the first LoadBalancer-type service IP; I strongly suggest you create one for the sole purpose of having a consistent outbound IP. You can also pre-create a public IP and use it, as stated in this article, via the loadBalancerIP spec.
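A sketch of such a dedicated Service pinned to a pre-created public IP (the IP value, names, and selector are placeholders, and the static IP has to exist where AKS is allowed to use it):

apiVersion: v1
kind: Service
metadata:
  name: egress-ip-holder          # exists mainly to pin the cluster's outbound IP
spec:
  type: LoadBalancer
  loadBalancerIP: 52.170.10.20    # placeholder: the pre-created static public IP
  selector:
    app: myapp                    # assumed pod label
  ports:
  - port: 80
    targetPort: 80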
On a side note, rather than a ConfigMap, given the sensitivity of the connection strings, I'd suggest you create a Secret and pass that down to your Deployment to be mounted or exported as an environment variable.
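A minimal sketch of that approach (the secret name, keys, and connection strings below are placeholders):

apiVersion: v1
kind: Secret
metadata:
  name: db-credentials            # illustrative name
type: Opaque
stringData:
  POSTGRES_URL: "postgres://user:pwd@example.postgres.database.azure.com:5432/db"   # placeholder
  REDIS_URL: "rediss://:accesskey@example.redis.cache.windows.net:6380"             # placeholder

In the Deployment's container spec you can then either mount the Secret as a volume or export it with envFrom: secretRef (or individual env entries using valueFrom.secretKeyRef).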
I'm trying to expose web services from my existing service on an AKS managed cluster on Azure. I did the NSG port configuration from the portal to let outbound traffic go out and restarted the VM several times, but my node cannot ping anything on the internet. I'm not trying to ping something by its FQDN; I'm trying it with its IP address. How can I reach a service in my cluster from the internet?
How did you create the service and pod? By default, a LoadBalancer-type service will create all the rules for you, and you don't need to create the rules yourself.
Can you share your pod details?
Some of my data is in Mongo replicas that are hosted in Docker containers running in a Kubernetes cluster. I need to access this data from an AWS Lambda that is running in the same VPC and subnet (as the Kubernetes minions with MongoDB). The Lambda as well as the Kubernetes minions (hosting the Mongo containers) run under the same security group. I am trying to connect using the URL "mongodb://mongo-rs-1-svc,mongo-rs-2-svc,mongo-rs-3-svc/res?replicaSet=mongo_rs", where mongo-rs-x-svc are three Kubernetes services that enable access to the appropriate replicas. When I try to connect using this URL, it fails to resolve the Mongo replica hostname (e.g. mongo-rs-2-svc). The same URL works fine for my web service that runs in its own Docker container in the same Kubernetes cluster.
Here is the error I get from the Mongo client I use:
{"name":"MongoError","message":"failed to connect to server [mongo-rs-1-svc:27017] on first connect [MongoError: getaddrinfo ENOTFOUND mongo-rs-1-svc mongo-rs-1-svc:27017]"}. I tried replacing mongo-rs-x-svc with their internal IP addresses in the URL. In this case the name resolution error disappeared, but I got another error: {"name":"MongoError","message":"failed to connect to server [10.0.170.237:27017] on first connect [MongoError: connection 5 to 10.0.170.237:27017 timed out]"}
What should I be doing to enable this access successfully?
I understand that I can use the web service as an intermediary to access this data, but since my Lambda is in a VPC, I would have to deploy NAT gateways, and that would increase the cost. Is there a way to access the web service using an internal endpoint instead of the public URL? Maybe that is another way to get the data.
If any of you have a solution for this scenario, please share. I went through many threads that showed up as similar questions or in search results, but none had a solution for this case.
This is a common confusion with Kubernetes. The Service object in Kubernetes is only accessible from inside Kubernetes by default (i.e. when type: ClusterIP is set). If you want to be able to access it from outside the cluster you need to edit the service so that it is type: NodePort or type: LoadBalancer.
I'm not entirely sure, but it sounds like your network setup would allow you to use type: NodePort for your Service in Kubernetes. That will open a high-numbered port (e.g. 32XXX) on each of the Nodes in your cluster that forwards to your Mongo Pod(s). DNS resolution for the service names (e.g. mongo-rs-1-svc) will only work inside the Kubernetes cluster, but by using NodePort I think you should be able to address them as mongodb://ec2-instance-1-ip:32XXX,ec2-instance-2-ip:32XXX,....
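For illustration, a NodePort version of one of those Services might look like the following sketch (the selector label and nodePort value are assumptions):

apiVersion: v1
kind: Service
metadata:
  name: mongo-rs-1-svc
spec:
  type: NodePort
  selector:
    app: mongo-rs-1               # assumed pod label for the first replica
  ports:
  - port: 27017
    targetPort: 27017
    nodePort: 32017               # must fall in the cluster's NodePort range (default 30000-32767)

The Lambda would then connect with mongodb://<minion-1-private-ip>:32017,<minion-2-private-ip>:32017,.../res?replicaSet=mongo_rs.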
Coreyphobrien's answer is correct. You subsequently asked how to keep the exposure private. For that I want to add some information:
You need to make the Lambdas part of the VPC that your cluster is in. For this you use the --vpc-config parameter when creating or updating the Lambdas. This will create a virtual network interface in the VPC that allows the Lambda access. For details, see this.
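For example, attaching an existing function to the VPC might look like this (the function name, subnet ID, and security group ID are placeholders):

# Put the Lambda's network interface into the same VPC/subnet as the Kubernetes minions
aws lambda update-function-configuration \
  --function-name my-mongo-reader \
  --vpc-config SubnetIds=subnet-0abc1234,SecurityGroupIds=sg-0lambda111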
After that you should be able to set the AWS security group for your instances so that the NodePort is only accessible from the security group that is used for your Lambda's network interface.
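A sketch of that rule with the AWS CLI (both group IDs are placeholders; you could also restrict it to the single NodePort instead of the whole range):

# Allow the NodePort range on the minions' security group only from the Lambda's security group
aws ec2 authorize-security-group-ingress \
  --group-id sg-0nodes222 \
  --protocol tcp \
  --port 30000-32767 \
  --source-group sg-0lambda111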
This blog discusses an example in more detail.
I am running a Kubernetes cluster on Azure with 1 master (which is also a node) and 2 nodes. I am using Ubuntu with the Flannel overlay network. So far everything is working well. The only problem I have is exposing the service to the internet.
I am running the cluster on an Azure subnet. The master has a NIC attached to it that has a public IP. This means that if I run a simple server that listens on port 80, I can reach my server using a domain name (Azure gives an option to have a domain name for a public IP).
I am also able to reach the Kubernetes guestbook frontend service with a hack: I checked all the listening ports on the master and tried each port with the public IP. I was able to hit the Kubernetes service and get a response. Based on my understanding, this goes directly to the pod that is running on the master (which is also a node) rather than going through the service IP (which would have load balanced across any of the pods).
My question is: how do I map the external IP to the service IP? I know Kubernetes has a setting that works only on GCE (which I can't use right now). But is there some neat way of telling etcd/Flannel to do this?
If you use the kubectl expose command:
--external-ip="": External IP address to set for the service. The service can be accessed by this IP in addition to its generated service IP.
Or, if you create the service from a JSON or YAML file, use the spec.externalIPs array.
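A minimal sketch of that (the service name, selector, and IP are placeholders; the IP should be one that actually routes to a node, e.g. the master's public NIC IP):

apiVersion: v1
kind: Service
metadata:
  name: guestbook-frontend
spec:
  selector:
    app: guestbook-frontend      # assumed pod label
  ports:
  - port: 80
    targetPort: 80
  externalIPs:
  - 40.76.0.10                   # placeholder: externally reachable IP of the master/node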