I am trying to set up a K8s deployment where each app's Ingress can claim a subdomain, i.e. app1 can configure itself to receive traffic for app1.sub.domain.io in its Ingress config.
I have a DNS A record for *.sub.domain.io that points to a load balancer, and that load balancer points to the cluster's instance group.
So, if I understand correctly, all traffic to anything under sub.domain.io lands inside the cluster, and I just need to route it.
Below are the K8s configs: a Deployment, a Service and an Ingress. The pods are healthy and working. The Service probably isn't strictly required, but other pods will want to talk to app1 via internal DNS, so it's included.
The ingress rules have a host app1.sub.domain.io, so in theory, curl'ing app1.sub.domain.io should follow:
DNS -> Load Balancer -> Cluster -> Ingress Controller -> Pod
At the moment, when I try to hit app1.sub.domain.io it just hangs. I have tried removing the Service, and I have tried an ExternalName Service; neither works.
I don't want to go down the route of a LoadBalancer Service per app, as each one creates a new external IP that has to be applied to DNS records manually (or with a nasty bash script that waits for the Service's external IP and then runs a GCP command), and we don't want to do that for every service.
Ref links: https://kubernetes.io/docs/concepts/services-networking/ingress/#name-based-virtual-hosting
Deployment
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: app1
  namespace: default
  labels:
    app: app1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: app1
  template:
    metadata:
      labels:
        app: app1
    spec:
      containers:
        - image: xxxx:latest
          name: app1
          ports:
            - containerPort: 80
          env:
            - name: NODE_ENV
              value: production
Service
---
kind: Service
apiVersion: v1
metadata:
  name: app1
  labels:
    app: app1
spec:
  ports:
    - port: 80
      targetPort: 80
  selector:
    app: app1
  type: ClusterIP
Ingress
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app1-ingress
  labels:
    app: app1
spec:
  rules:
    - host: app1.sub.domain.io
      http:
        paths:
          - backend:
              serviceName: app1
              servicePort: 80
Once everything is deployed, if you query:
kubectl get pods,services,ingresses -l app=app1
NAME READY STATUS RESTARTS AGE
po/app1-6d4b9d8c5-4gcz5 1/1 Running 0 20m
po/app1-6d4b9d8c5-m4kwq 1/1 Running 0 20m
po/app1-6d4b9d8c5-rpm9l 1/1 Running 0 20m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc/app1 ClusterIP x.x.x.x <none> 80/TCP 20m
NAME HOSTS ADDRESS PORTS AGE
ing/app1-ingress app1.sub.domain.io 80 20m
----------------------------------- Update -----------------------------------
Currently doing this instead, which is not ideal: a global static IP assigned to a DNS record.
---
kind: Service
apiVersion: v1
metadata:
  name: app1
  labels:
    app: app1
spec:
  type: NodePort
  selector:
    app: app1
  ports:
    - port: 80
      targetPort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app1-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: app1-static-ip
  labels:
    app: app1-static-ip
spec:
  backend:
    serviceName: app1
    servicePort: 80
*.sub.domain.io should point to the IP of the Ingress.
You can use a static IP for the Ingress by following the instructions in the tutorial here: https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer#step_5_optional_configuring_a_static_ip_address
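For reference, reserving that global static IP on GCP looks roughly like this (a sketch; app1-static-ip is assumed to match the name used in the kubernetes.io/ingress.global-static-ip-name annotation above):
# Reserve a global static IP for the Ingress (name is an assumption)
gcloud compute addresses create app1-static-ip --global
# Print the reserved address so it can be added to the DNS A record
gcloud compute addresses describe app1-static-ip --global --format='value(address)'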
Try adding a path to your Ingress:
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app1-ingress
  labels:
    app: app1
spec:
  rules:
    - host: app1.sub.domain.io
      http:
        paths:
          - backend:
              serviceName: app1
              servicePort: 80
            path: /
If that doesn't work, please post the output of kubectl describe service and kubectl describe ingress.
Do you have an Ingress Controller?
Traffic should go: LB -> Ingress Controller -> Ingress -> Service ClusterIP -> Pods
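If you are not sure, a quick check is to look for controller pods (a sketch; note that on GKE the default GCE ingress controller is managed by Google and won't show up as pods in your cluster, so an empty ADDRESS column on the Ingress is the more telling sign):
# Look for any ingress controller pods running in the cluster
kubectl get pods --all-namespaces | grep -i ingress
# And check whether the Ingress ever gets an ADDRESS assigned
kubectl get ingress app1-ingress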
Related
I'm trying to access a simple ASP.NET Core application deployed on Azure AKS, but I'm doing something wrong.
This is the deployment .yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: aspnetapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: aspnet
  template:
    metadata:
      labels:
        app: aspnet
    spec:
      containers:
        - name: aspnetapp
          image: <my_image>
          resources:
            limits:
              cpu: "0.5"
              memory: 64Mi
          ports:
            - containerPort: 8080
and this is the service .yml
apiVersion: v1
kind: Service
metadata:
  name: aspnet-loadbalancer
spec:
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
  selector:
    name: aspnetapp
Everything seems to be deployed correctly.
Another check I did was to exec into the pod and run
curl http://localhost:80
and the application responds correctly. But if I try to access the application from the browser using http://20.103.147.69, a timeout is returned.
What else could be wrong?
It seems that you do not have an ingress controller deployed on your AKS cluster, as you have your application exposed directly. You will need one in order to get Ingress to work.
To verify that your application itself is working, you can use port-forward and then access http://localhost:8080:
kubectl port-forward deployment/aspnetapp 8080:8080
But you should definitely install an ingress controller: here is a workflow from Microsoft to install ingress-nginx as the ingress controller on your cluster.
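As a sketch (assuming Helm is available; the linked workflow covers the AKS-specific options):
# Install ingress-nginx from its official Helm chart
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace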
You will then expose only the ingress controller to the internet, and you can also pin the loadBalancerIP statically if you created the public IP in advance:
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-resource-group: myResourceGroup # only needed if the LB is in another RG
  name: ingress-nginx-controller
spec:
  loadBalancerIP: <YOUR_STATIC_IP>
  type: LoadBalancer
The Ingress Controller then will route incoming traffic to your application with an Ingress resource:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
spec:
  ingressClassName: nginx # ingress-nginx specific
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: test
                port:
                  number: 80
PS: Never expose your application directly to the internet; always go through the ingress controller.
In your Deployment, you configured your container to listen on port 8080. You need the targetPort value in the Service definition set to 8080 to match.
Documentation
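Note also that the Service selector (name: aspnetapp) does not match the pod label (app: aspnet), so the Service selects no pods. A corrected Service might look like this (a sketch; only the selector and comments differ from the original):
apiVersion: v1
kind: Service
metadata:
  name: aspnet-loadbalancer
spec:
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080  # must match the containerPort in the Deployment
  selector:
    app: aspnet  # must match the pod template's labels, not the container name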
I created an Ingress as below, and I am able to get a response using the IP (retrieved via the kubectl get ingress command).
Deployment File
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app1-nginx-deployment
  labels:
    app: app1-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app1-nginx
  template:
    metadata:
      labels:
        app: app1-nginx
    spec:
      containers:
        - name: app1-nginx
          image: msanajyv/k8s_api
          ports:
            - containerPort: 80
Service File
apiVersion: v1
kind: Service
metadata:
  name: app1-nginx-clusterip-service
  labels:
    app: app1-nginx
spec:
  type: ClusterIP
  selector:
    app: app1-nginx
  ports:
    - port: 80
      targetPort: 80
Ingress File
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginxapp1-ingress-service
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - http:
        paths:
          - path: /
            backend:
              serviceName: app1-nginx-clusterip-service
              servicePort: 80
With the above YAML files I am able to access the service using the IP address, i.e. http://xx.xx.xx.xx/weatherforecast.
But I want to access the service using a domain name instead of the IP. So I created a DNS zone in my Azure portal and added a record set as below.
Also I changed my Ingress file as below.
...
  rules:
    - host: app1.msvcloud.io
      http:
        paths:
          - path: /
            backend:
              serviceName: app1-nginx-clusterip-service
              servicePort: 80
When I access using the host name (http://app1.msvcloud.io/weatherforecast), the host does not get resolved. Kindly let me know what I am missing.
By creating a record in your private DNS zone, you can only resolve the name (app1.msvcloud.io) within your virtual network. This means it will work if you remote into a VM within the VNET and access it from there, but it will not work if you try the same from outside the VNET. If you want the name to be resolvable on the world wide web, you need to buy the domain name and register it in Azure DNS.
The records contained in a private DNS zone aren't resolvable from the Internet. DNS resolution against a private DNS zone works only from virtual networks that are linked to it.
~ What is a private Azure DNS zone.
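If you do own the domain, publishing it in a public Azure DNS zone is roughly as follows (a sketch; the resource group name and IP are placeholders):
# Create a public DNS zone for the domain
az network dns zone create -g myResourceGroup -n msvcloud.io
# Point app1.msvcloud.io at the ingress controller's public IP
az network dns record-set a add-record -g myResourceGroup \
  -z msvcloud.io -n app1 -a <INGRESS_PUBLIC_IP>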
I have a service which exposes a "hello world" web deployment in the "develop" namespace.
Service YAML
kind: Service
apiVersion: v1
metadata:
  name: hello-v1-svc
spec:
  selector:
    app: hello-v1
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
To test that the page works, I ran kubectl port-forward, and the page displayed successfully.
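For reference, that test presumably looked something like this (a sketch; the local port choice is arbitrary):
# Forward a local port to the Service in the develop namespace
kubectl port-forward -n develop svc/hello-v1-svc 8080:80
# Then, in another shell:
curl http://localhost:8080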
Edit:
The Ingress is then deployed, but the page is only reachable from within the VNET address space:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  labels:
    app: app
    version: 1.0.0
  name: dev-ingress
  namespace: develop
spec:
  rules:
    - http:
        paths:
          - backend:
              serviceName: hello-v1-svc
              servicePort: 80
            path: /
Ingress Rules
Rules:
Host Path Backends
---- ---- --------
*
/ hello-v1-svc:80 (10.1.1.13:8080,10.1.1.21:8080,10.1.1.49:8080)
What step am I missing to get the page to display?
First of all, to answer your comment:
Maybe it is related to this annotation: service.beta.kubernetes.io/azure-load-balancer-internal is set to "true"
The service.beta.kubernetes.io/azure-load-balancer-internal: "true" annotation is typically used when you create an ingress controller on an internal virtual network. With this annotation, the ingress controller is configured on an internal, private virtual network and IP address; no external access is allowed.
You can find more info in the Create an ingress controller to an internal virtual network in Azure Kubernetes Service (AKS) article.
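In other words, if the controller's Service carries that annotation, its load balancer only gets a private VNET IP. A sketch of the relevant fragment (the Service name depends on how the controller was installed):
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller  # name depends on your install
  annotations:
    # "true" = internal/private LB; remove the annotation for a public IP
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer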
I reproduced your case: created an AKS cluster and applied the YAML below. It works as expected, so please use:
apiVersion: v1
kind: Namespace
metadata:
  name: develop
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world-deployment
  namespace: develop
  labels:
    app: hello-v1
spec:
  selector:
    matchLabels:
      app: hello-v1
  replicas: 2
  template:
    metadata:
      labels:
        app: hello-v1
    spec:
      containers:
        - name: hello-v1
          image: paulbouwer/hello-kubernetes:1.8
          ports:
            - containerPort: 8080
---
kind: Service
apiVersion: v1
metadata:
  name: hello-v1-svc
spec:
  type: ClusterIP
  selector:
    app: hello-v1
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: dev-ingress
  namespace: develop
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - http:
        paths:
          - backend:
              serviceName: hello-v1-svc
              servicePort: 80
            path: /
List of my services
vitalii#Azure:~$ kubectl get svc -A
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default hello-v1-svc ClusterIP 10.0.20.206 <none> 80/TCP 19m
default kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 83m
ingress-basic nginx-ingress-controller LoadBalancer 10.0.222.156 *.*.*.* 80:32068/TCP,443:30907/TCP 53m
ingress-basic nginx-ingress-default-backend ClusterIP 10.0.193.198 <none> 80/TCP 53m
kube-system dashboard-metrics-scraper ClusterIP 10.0.178.224 <none> 8000/TCP 83m
kube-system healthmodel-replicaset-service ClusterIP 10.0.199.235 <none> 25227/TCP 83m
kube-system kube-dns ClusterIP 10.0.0.10 <none> 53/UDP,53/TCP 83m
kube-system kubernetes-dashboard ClusterIP 10.0.115.184 <none> 443/TCP 83m
kube-system metrics-server ClusterIP 10.0.199.200 <none> 443/TCP 83m
For security reasons I hid the EXTERNAL-IP of the nginx-ingress-controller; that's the IP you should use to access the page.
More information and examples can be found in the Create an ingress controller in Azure Kubernetes Service (AKS) article.
I have created a Kubernetes cluster using two Azure Ubuntu VMs. I am able to deploy and access pods and deployments using the NodePort service type, and I have checked the pod status in the kube-system namespace: all pods show as Running. But whenever I set a service type to LoadBalancer, the LoadBalancer IP is not created and its status always shows Pending. I have also created an Ingress for the nginx service; still, it does not get an ingress address. While initializing the Kubernetes master, I am using the following command:
kubeadm init
Below are the Deployment, Service and Ingress manifest files.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 80
  selector:
    app: nginx
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: test-ingress
spec:
  backend:
    serviceName: nginx
    servicePort: 80
$ kubectl describe svc nginx
Name: nginx
Namespace: default
Labels: app=nginx
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"nginx"},"name":"nginx","namespace":"default"},"spec":{"p...
Selector: app=nginx
Type: ClusterIP
IP: 10.96.107.97
Port: http 80/TCP
TargetPort: 80/TCP
Endpoints: 10.44.0.4:80,10.44.0.5:80,10.44.0.6:80
Session Affinity: None
Events: <none>
$ kubectl describe ingress nginx
Name: test-ingress
Namespace: default
Address:
Default backend: nginx:80 (10.44.0.4:80,10.44.0.5:80,10.44.0.6:80)
Rules:
Host Path Backends
---- ---- --------
*     *     nginx:80 (10.44.0.4:80,10.44.0.5:80,10.44.0.6:80)
Annotations:
kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"networking.k8s.io/v1beta1","kind":"Ingress","metadata":{"annotations":{},"name":"test-ingress","namespace":"default"},"spec":{"backend":{"serviceName":"nginx","servicePort":80}}}
Events: <none>
Do we need to specify any IP ranges (private or public) of the VMs when running kubeadm init? Or do we need to change any network settings on the Azure Ubuntu VMs?
As you created your own Kubernetes cluster rather than using an AWS-, Azure- or GCP-provided one, there is no integrated load balancer. This is why the external IP status stays Pending.
But you can circumvent this problem by using an ingress controller or by exposing services directly through NodePort, as sketched below.
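For example, a NodePort variant of your nginx Service is reachable on every node's IP with no cloud integration at all (a sketch; 30080 is an arbitrary choice from the default 30000-32767 NodePort range):
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 80
      nodePort: 30080  # then browse to http://<node-ip>:30080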
However, I also noticed that your nginx service uses the annotation service.beta.kubernetes.io/aws-load-balancer-type: nlb, while you said you are using Azure. These annotations are platform specific, and that one is AWS-only.
Alternatively, if you would like to experiment directly with public IPs, you can define your Service with externalIPs, provided you have a public IP allocated to your node and it allows ingress traffic:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 9376
  externalIPs:
    - 80.11.12.10
But if you are planning to build your own Kubernetes cluster, a good approach to get this done is to use an ingress controller.
Hope this helps.
I am using an AKS cluster on Azure. I am trying to discover the service using DNS (http://my-api.default.svc.cluster.local:3000/), but it's not working ("This site can't be reached"). With the service IP endpoint everything works fine.
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-api
  labels:
    app: my-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-api
  template:
    metadata:
      labels:
        app: my-api
    spec:
      containers:
        - name: my-api
          image: test.azurecr.io/my-api:latest
          ports:
            - containerPort: 3000
      imagePullSecrets:
        - name: testsecret
---
apiVersion: v1
kind: Service
metadata:
  name: my-api
spec:
  selector:
    app: my-api
  ports:
    - protocol: TCP
      port: 3000
      targetPort: 3000
kubectl describe services kube-dns --namespace kube-system
Name: kube-dns
Namespace: kube-system
Labels: addonmanager.kubernetes.io/mode=Reconcile
k8s-app=kube-dns
kubernetes.io/cluster-service=true
kubernetes.io/name=KubeDNS
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kube-dns","kubernet...
Selector: k8s-app=kube-dns
Type: ClusterIP
IP: 10.10.110.110
Port: dns 53/UDP
TargetPort: 53/UDP
Endpoints: 10.10.100.54:53,10.10.100.64:53
Port: dns-tcp 53/TCP
TargetPort: 53/TCP
Endpoints: 10.10.100.54:53,10.10.100.64:53
Session Affinity: None
Events: <none>
kubectl describe svc my-api
Name: my-api
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"my-api","namespace":"default"},"spec":{"ports":[{"port":3000,"protocol":...
Selector: app=my-api
Type: ClusterIP
IP: 10.10.110.104
Port: <unset> 3000/TCP
TargetPort: 3000/TCP
Endpoints: 10.10.100.42:3000
Session Affinity: None
Events: <none>
From the second pod:
kubectl exec -it second-pod -- /bin/bash
curl my-api.default.svc.cluster.local:3000
Response: {"value":"Hello world2"}
The website running in the second pod uses the same endpoint, but it does not connect to the service.
After fixing the indentation of your YAML file, I was able to launch the Deployment and Service successfully, and DNS resolution worked fine.
Differences:
Fixed indentation
Used the test1 namespace instead of default
Used containerPort 80 instead of 3000
Used my image
Deployment:
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  labels:
    app: my-api
  name: my-api
  namespace: test1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-api
  template:
    metadata:
      labels:
        app: my-api
    spec:
      containers:
        - image: leodotcloud/swiss-army-knife
          name: my-api
          ports:
            - containerPort: 80
              protocol: TCP
Service:
apiVersion: v1
kind: Service
metadata:
  name: my-api
  namespace: test1
spec:
  ports:
    - port: 3000
      protocol: TCP
      targetPort: 80
  selector:
    app: my-api
  type: ClusterIP
Debugging steps:
Install tcpdump inside both of the kube-dns containers and start capturing DNS traffic (with filters on the second pod's IP).
From inside the second pod, run a curl or dig command using the FQDN (see the sketch after this list).
Check whether the DNS query packets are reaching the kube-dns containers.
If not, check for networking issues.
If DNS resolution is working, start tcpdump inside your application container and check whether the curl packets reach the container.
Check the source and destination IP addresses of the packets.
Check the iptables rules on the hosts.
Check the sysctl settings.
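As a concrete starting point for the first two steps, something like this (a sketch; it assumes the second pod's image ships nslookup and curl):
# Resolve the service FQDN from inside the second pod
kubectl exec -it second-pod -- nslookup my-api.default.svc.cluster.local
# Then try the HTTP request itself, verbosely
kubectl exec -it second-pod -- curl -v http://my-api.default.svc.cluster.local:3000/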
If you use a Deployment to deploy your application onto the cluster, where it will be consumed via a Service, you should have no need at all to set Endpoints manually. Just rely on Kubernetes and define a normal selector in your Service object.
Other than that, when manual Endpoints do make sense (an external service consumed from within the cluster), you need to make sure the Endpoints port definition fully matches the one on the Service (including protocol and, potentially, name). Such incomplete matching is the most common reason for endpoints not showing up as part of a service.
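For the external-service case, a matching pair might look like this (a sketch with placeholder names and IP):
apiVersion: v1
kind: Service
metadata:
  name: external-db  # no selector: endpoints are managed manually
spec:
  ports:
    - name: postgres
      port: 5432
      protocol: TCP
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-db  # must match the Service name exactly
subsets:
  - addresses:
      - ip: 10.0.0.50  # placeholder external backend IP
    ports:
      - name: postgres  # name, port and protocol must match the Service
        port: 5432
        protocol: TCP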
From the above discussion, what I understood is that you want to expose a service, but not by using the IP address.
A Service can be exposed in many ways; you should look at Service type LoadBalancer.
Try modifying your service as follows:
apiVersion: v1
kind: Service
metadata:
  name: my-api
spec:
  type: LoadBalancer
  selector:
    app: my-api
  ports:
    - protocol: TCP
      port: 3000
      targetPort: 3000
This will create a load balancer and map your service to it.
Later, you can point a DNS name at this load balancer using Azure's DNS service to get the domain name you like, e.g. http://my-api.example.com:3000.
Also, I would like to add: if you define your ports as follows:
ports:
  - name: http
    port: 80
    targetPort: 3000
This will forward traffic coming in on port 80 to 3000, and your service URL will look much cleaner, e.g. http://my-api.example.com.