I have deployed a container running a microservice in an Azure Kubernetes cluster in subnet A, and I have another microservice running on a VM in subnet B. I need to call the REST APIs of the application running inside the container from the VM. How can I achieve this?
The right way to achieve that is by creating an internal load balancer.
From the docs:
To restrict access to your applications in Azure Kubernetes Service (AKS), you can create and use an internal load balancer. An internal load balancer makes a Kubernetes service accessible only to applications running in the same virtual network as the Kubernetes cluster.
Follow the Specify a different subnet section:
To specify a subnet for your load balancer, add the azure-load-balancer-internal-subnet annotation to your service. The subnet specified must be in the same virtual network as your AKS cluster. When deployed, the load balancer EXTERNAL-IP address is part of the specified subnet.
Example:
apiVersion: v1
kind: Service
metadata:
  name: internal-app
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
    service.beta.kubernetes.io/azure-load-balancer-internal-subnet: "apps-subnet"
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: internal-app
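Once the service is deployed, the EXTERNAL-IP reported by kubectl is a private address inside the specified subnet, and the VM in subnet B can call the REST API against that address, provided its subnet is in the same virtual network as the cluster (or a peered one). A minimal sketch, assuming the internal-app service above and a hypothetical /api/health endpoint:

kubectl get service internal-app         # note the private EXTERNAL-IP, e.g. 10.240.0.7
curl http://10.240.0.7/api/health        # run this from the VM in subnet B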
First of all, I am pretty new to Kubernetes and the containerized world.
My scenario is as follows:
I have an application deployed to AKS, and we are using AGIC as the ingress. The application consumes endpoints hosted outside AKS. The consumed application is publicly accessible but has IP whitelisting, so I am whitelisting the Application Gateway IP. I also created an ExternalName Service like this:
kind: Service
apiVersion: v1
metadata:
  name: service-endpoint
spec:
  type: ExternalName
  externalName: endpointname.something.com
  ports:
  - protocol: TCP
    port: 433
But it does not work.
Additionally, I tried calling the endpoint URL (https://endpointname.something.com) directly from the pod, and I receive a 403.
Can someone advise on the correct steps to achieve this connectivity?
Please note that we fixed this issue by whitelisting the public IP of the AKS load balancer on the target system.
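For reference, one quick way to confirm which public IP the cluster's outbound traffic uses (and therefore which IP the target system needs to whitelist) is to make an outbound call from inside the cluster; the image and echo service here are just illustrative choices:

kubectl run egress-check --rm -it --restart=Never --image=curlimages/curl -- curl -s https://ifconfig.me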
I have set up an AKS cluster with a Pod configured to run multiple Tomcat services. My Apache web server is outside the AKS cluster and hosted on a VM, but in the same subnet. Apache sends requests to Tomcat at ajp://10.x.x.x:5009/dbp_webui, which is inside the AKS cluster. I am looking for options on how to expose the Tomcat service so that my Apache server can make a successful connection.
You can use an ingress to expose your service. From version 0.18.0, ingress-nginx supports the AJP protocol.
https://github.com/kubernetes/ingress-nginx/blob/main/Changelog.md#0180. Intro into ingress: https://kubernetes.io/docs/concepts/services-networking/ingress/
You will probably need to set an additional annotation to describe the backend protocol: https://kubernetes.github.io/ingress-nginx/user-guide/nginx-configuration/annotations/#backend-protocol
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: ingress-name
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "AJP"
spec:
...
Edit:
Scratch this. An easier way is to use a NodePort type service:
https://kubernetes.io/docs/concepts/services-networking/service/#nodeport
Edit 2:
As @CSharpRocks mentioned in the comments, AKS nodes don't have public IP addresses by default, so a better option is the LoadBalancer service type:
https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer
It will deploy an LB that routes traffic to the Pod no matter which node it resides on. AFAIK, AKS also has the option to install an Ingress out of the box, backed by an LB.
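Given that Apache runs on a VM in the same subnet, another option is to expose Tomcat through an internal LoadBalancer service so it gets a stable private address reachable from the VM. A minimal sketch, assuming the Tomcat pods carry the label app: tomcat and listen for AJP on port 5009 (both names are assumptions):

apiVersion: v1
kind: Service
metadata:
  name: tomcat-ajp
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  ports:
  - name: ajp
    port: 5009
    targetPort: 5009
  selector:
    app: tomcat

Apache can then point its AJP worker at the service's private EXTERNAL-IP, e.g. ajp://<internal-ip>:5009/dbp_webui.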
We have created a Kubernetes cluster on Azure VMs, with a Kube master and two nodes. We have deployed an application and created a service of type NodePort, which works well. But when we try to use type: LoadBalancer, the service is created but the external IP stays in a pending state. Because we are unable to create a LoadBalancer-type service, the nginx ingress controller also ends up in the same state, so we are not sure how to set up load balancing in this case.
We have tried creating a Load Balancer in Azure and using its IP in the service as shown below.
kind: Service
apiVersion: v1
metadata:
  name: jira-service
  labels:
    app: jira-software
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  selector:
    app: jira-software
  type: LoadBalancer
  loadBalancerIP: xxx.xxx.xxx.xxx
  ports:
  - name: jira-http
    port: 8080
    targetPort: jira-http
Similarly, we have one more application running on this Kubernetes cluster, and we want to access each application based on its context path:
if we invoke Jira, it should call the Jira backend server at http://dns-name/jira
if we invoke some other app like Bitbucket, at http://dns-name/bitbucket
If I understand correctly, you used type LoadBalancer on a cluster you built yourself on virtual machines, which will not work - type LoadBalancer only gets an external IP when a cloud provider integration is available, as in managed Kubernetes services like GKE, AKS, etc.
You can find more information here.
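The context-path routing described in the question (/jira, /bitbucket) is usually handled with an Ingress in front of the backend Services rather than with separate load balancers. A rough sketch, assuming an nginx ingress controller is installed and that the backends are exposed as Services named jira-service (port 8080) and bitbucket-service (port 7990); all of these names and ports are assumptions:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: apps-ingress
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /jira
        pathType: Prefix
        backend:
          service:
            name: jira-service
            port:
              number: 8080
      - path: /bitbucket
        pathType: Prefix
        backend:
          service:
            name: bitbucket-service
            port:
              number: 7990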
I have an AKS instance running, to which I assigned a virtual network, so all the node IPs in the network are good and I can reach them from within the network.
Now I wonder whether it is possible to create a second virtual network and tell Kubernetes to use it to assign public IPs?
Or maybe it is possible to say that a specific service should always have the same node IP?
No, this is not supported; you might be able to hack your way through, but certainly not out of the box.
But you can create an internal load balancer for your service in the network, and its IP won't change. You do this using a service with an annotation:
---
apiVersion: v1
kind: Service
metadata:
name: name
annotations:
service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
ports:
- port: xxx
selector:
app: name
type: LoadBalancer
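If the address also needs to be a specific, predictable value rather than just a stable one, the same spec can request it explicitly. A minimal variation (the address is an illustrative placeholder and must be unused in the cluster's virtual network; loadBalancerIP is deprecated in newer Kubernetes releases, but it is the classic way to pin the IP):

spec:
  type: LoadBalancer
  loadBalancerIP: 10.240.0.25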
I have got the following service for the Kubernetes dashboard:
Name: kubernetes-dashboard
Namespace: kube-system
Labels: k8s-app=kubernetes-dashboard
kubernetes.io/cluster-service=true
Annotations: kubectl.kubernetes.io/last-applied-configuration={"kind":"Service","apiVersion":"v1","metadata":{"name":"kubernetes-dashboard","namespace":"kube-system","creationTimestamp":null,"labels":{"k8s-app":"k...
Selector: k8s-app=kubernetes-dashboard
Type: NodePort
IP: 10.0.106.144
Port: <unset> 80/TCP
NodePort: <unset> 30177/TCP
Endpoints: 10.244.0.11:9090
Session Affinity: None
Events: <none>
According to the documentation, I ran
az acs kubernetes browse
and it works on http://localhost:8001/ui
But I want to access it outside the cluster too. The describe output says that it is exposed using NodePort on port 30177.
But I'm not able to access it on http://<any node IP>:30177
As we know, to expose a service to the internet we can use NodePort or LoadBalancer.
As far as I know, Azure does not support accessing a NodePort from outside the cluster right now, because the agent nodes do not have public IP addresses.
But I want to access it outside the cluster too.
We can use a LoadBalancer to re-create the Kubernetes dashboard service; here are my steps:
Delete kubernetes-dashboard via the Kubernetes UI: set the Namespace to kube-system, go to Services, and delete kubernetes-dashboard.
Modify kubernetes-dashboard-service.yaml: SSH to the master VM, then change the type from NodePort to LoadBalancer:
root@k8s-master-47CAB7F6-0:/etc/kubernetes/addons# vi kubernetes-dashboard-service.yaml
apiVersion: v1
kind: Service
metadata:
  labels:
    kubernetes.io/cluster-service: "true"
    k8s-app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
spec:
  ports:
  - port: 80
    targetPort: 9090
  selector:
    k8s-app: kubernetes-dashboard
  type: LoadBalancer
Start kubernetes browse from Azure CLI 2.0:
C:\Users>az acs kubernetes browse -g k8s -n containerservice-k8s
Then SSH to the master VM to check the status.
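For example, assuming kubectl is configured on the master, you can watch the service until the public address is assigned:

kubectl get service kubernetes-dashboard -n kube-system --watch

The dashboard is reachable once EXTERNAL-IP changes from <pending> to a public address.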
Now we can browse the UI via the public IP address.
Update:
In the architecture of an Azure Container Service (Kubernetes) cluster, we should use a Load Balancer to expose the service to the internet.
On second thought, this actually is expected to NOT work. The only public IP in the cluster, by default, is for the load balancer on the masters. And that load balancer obviously is not configured to forward random ports (like 30000-32767 for example). Further, none of the nodes directly have a public IP, so by definition NodePort is not going to work external to the cluster.
The only way you're going to make this work is by giving the nodes public IP addresses directly. This is not encouraged for a variety of reasons.
If you merely want to avoid waiting... then I suggest:
Don't delete the Service. Most dev scenarios should just be kubectl apply -f <directory>, in which case you don't really need to wait for the Service to re-provision.
Use Ingress along with 'nginx-ingress-controller' so that you only need to wait for the full LB+NSG+PublicIP provisioning once, and can then just add/remove Ingress objects in your dev scenario (see the sketch after this list).
Use minikube for development scenarios, or manually add public IPs to the nodes to make the NodePort scenario work.
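A rough sketch of that Ingress approach for the dashboard, assuming an nginx ingress controller is already running in the cluster (the host name is a placeholder, and the backend port matches the dashboard service port 80 shown above):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: dashboard
  namespace: kube-system
spec:
  ingressClassName: nginx
  rules:
  - host: dashboard.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kubernetes-dashboard
            port:
              number: 80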
You can't expose the service via NodePort by running the kubectl expose command; you get a virtual IP address outside the range of the subnets your cluster sits on. Instead, deploy the service through a YAML file where you can specify an internal load balancer as the type, which will give you a local IP on the master subnet that you can connect to over the internal network.
Or you can just expose the service with an external load balancer and get a public IP that is reachable from the internet.