These are details from Azure AKS.
I am getting a 404 on the website, and in the backend the nginx ingress pods are logging this:
Service "ns-2/svc-test-2" does not have any active Endpoint
This is a Liferay application running on the pod.
Ingress describe
Name: ingress-abc-2
Namespace: ns-abc-2
Address: 1.1.1.1
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
TLS:
secret-tls-abc-2 terminates aks.abc.in
Rules:
Host Path Backends
---- ---- --------
aks.abc.in
/ svc-abc-2:80 (10.244.0.23:8080)
Annotations: kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/ssl-redirect: true
Events: <none>
Service describe
Name: svc-abc-2
Namespace: ns-abc-2
Labels: <none>
Annotations: service.beta.kubernetes.io/azure-load-balancer-internal: true
Selector: app=pod-abc-2
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.0.162.112
IPs: 10.0.162.112
Port: port-abc-2 80/TCP
TargetPort: 8080/TCP
Endpoints: 10.244.0.23:8080
Session Affinity: None
Events: <none>
kubectl get po -n ns-2 -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod-abc-2-1111 1/1 Running 0 103m 10.244.0.23 aks-agentpool-1111-vmss000000 <none> <none>
kubectl describe po -n ns-2 pod-abc-2-1111
Namespace: ns-abc-2
Priority: 0
Node: aks-agentpool-1111-vmss000000/10.224.0.4
Start Time: Thu, 18 Aug 2022 18:23:09 +0530
Labels: app=pod-abc-2
pod-template-hash=5d774586b5
Status: Running
IP: 10.244.0.23
IPs:
IP: 10.244.0.23
Deployment describe
Name: deployment-abc-2
Namespace: ns-abc-2
CreationTimestamp: Thu, 18 Aug 2022 18:23:09 +0530
Labels: app=deployment-canopi-liferay-2
Annotations: deployment.kubernetes.io/revision: 1
Selector: app=pod-abc-2
Selectors and labels match correctly.
If I exec into the pod and run curl localhost:8080, I get the welcome page in response.
Endpoints
kubectl get endpoints -n ns-abc-2
NAME ENDPOINTS AGE
svc-abc-2 10.244.0.23:8080 148m
Endpoints of the ingress controller
kubectl get endpoints -n ns-ingress-2
NAME ENDPOINTS AGE
nginx-ingress-controller-ingress-nginx-controller 10.244.0.20:443,10.244.0.21:443,10.244.0.20:80 + 1 more... 144m
nginx-ingress-controller-ingress-nginx-controller-admission 10.244.0.20:8443,10.244.0.21:8443 144m
OK, so this issue is resolved.
My nginx ingress routes on the application's DNS host name, but I was trying to open the application using the controller's public IP before the DNS name had been mapped to that IP, which is why I was getting a 404.
After the DNS name was mapped to the public IP, the application started loading on the DNS-based URL.
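For anyone hitting the same symptom, the ingress rule can be exercised before the DNS record exists by forcing the host name to resolve to the controller's public IP, for example with curl (host name and IP taken from the describe output above):
# Force aks.abc.in to resolve to the controller's public IP for this request
curl -vk --resolve aks.abc.in:443:1.1.1.1 https://aks.abc.in/
# Plain-HTTP variant using a Host header (the ingress will redirect to HTTPS here)
curl -v -H "Host: aks.abc.in" http://1.1.1.1/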
Related
I have a simple deployment:
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2 # tells deployment to run 2 pods matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
And here is what my cluster looks like. Pretty simple.
$kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
my-shell-95cb5df57-cdj4z 1/1 Running 0 23m 10.60.1.32 aks-nodepool-19248108-0 <none> <none>
nginx-deployment-76bf4969df-58d66 1/1 Running 0 36m 10.60.1.10 aks-nodepool-19248108-0 <none> <none>
nginx-deployment-76bf4969df-jfkq7 1/1 Running 0 36m 10.60.1.21 aks-nodepool-19248108-0 <none> <none>
$kubectl get services -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
internal-ingress LoadBalancer 10.0.0.194 10.60.1.35 80:30157/TCP 5m28s app=nginx-deployment
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 147m <none>
$kubectl get rs -o wide
NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
my-shell-95cb5df57 1 1 1 23m my-shell ubuntu pod-template-hash=95cb5df57,run=my-shell
nginx-deployment-76bf4969df 2 2 2 37m nginx nginx:1.7.9 app=nginx,pod-template-hash=76bf4969df
I see I have 2 pods with my nginx app. I want to be able to send a request from any other new pod to either one of them. If one crashes, I still want to be able to send this request.
In the past I used a load balancer for this. The problem with load balancers is that they open up a public IP, and in this specific scenario I don't want a public IP anymore. I want this service to be invoked by other pods directly, without a public IP.
I tried to use an internal load balancer.
apiVersion: v1
kind: Service
metadata:
  name: internal-ingress
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
    service.beta.kubernetes.io/azure-load-balancer-internal-subnet: "my-subnet"
spec:
  type: LoadBalancer
  loadBalancerIP: 10.60.1.45
  ports:
  - port: 80
  selector:
    app: nginx-deployment
The problem is that it does not get an IP in my 10.60.0.0/16 network as described here: https://learn.microsoft.com/en-us/azure/aks/internal-lb#specify-a-different-subnet
I get a never-ending <pending>.
kubectl get services -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
internal-ingress LoadBalancer 10.0.0.230 <pending> 80:30638/TCP 15s app=nginx-deployment
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 136m <none>
What am I missing? How do I troubleshoot this? Is it even possible to have pod-to-service communication?
From the message you provide, it seems you want to use a specific private IP address from the same subnet the AKS cluster uses. I think the likely reason is that the specific IP address you want to use is already assigned within the AKS subnet, which means you cannot use it.
Troubleshooting
Go to the VNet that your AKS cluster uses and check whether the IP address is already in use.
Solution
Choose an IP address from the subnet the AKS cluster uses that is not already assigned, or do not request a specific one and let AKS assign your load balancer an address dynamically. Then change your YAML file like below:
apiVersion: v1
kind: Service
metadata:
  name: internal-ingress
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: nginx-deployment
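After applying this, watching the service should show the EXTERNAL-IP column move from <pending> to a private address inside the AKS subnet once the internal Azure load balancer has been provisioned (this can take a minute or two):
kubectl get service internal-ingress --watch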
Use a ClusterIP Service (the default type), which creates only a cluster-internal IP and no public IP:
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
Then you can access the Service (and thus the Pods behind it) from any other Pod in the same namespace by using the Service name as the DNS name:
curl nginx-service
If the Pod from which you want to access the Service is in a different namespace, you have to use the fully qualified domain name of the Service:
curl nginx-service.my-namespace.svc.cluster.local
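To sanity-check resolution and connectivity from inside the cluster, a throwaway busybox pod works well; the image tag, pod names, and my-namespace below are only examples:
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.28 -- \
  nslookup nginx-service.my-namespace.svc.cluster.local
kubectl run http-test --rm -it --restart=Never --image=busybox:1.28 -- \
  wget -qO- http://nginx-service.my-namespace.svc.cluster.local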
I installed Istio on Kubernetes without Helm.
I can see that the pods and services are created in the istio-system namespace.
All the services, such as Grafana and Prometheus, are created, but their ports are not exposed.
Since a LoadBalancer service is created, a load balancer is also created in AWS. I wanted to access the Grafana, Prometheus, etc. dashboards from an external network through the newly created load balancer endpoint, but the dashboards are not accessible from that endpoint.
I tried port forwarding recommended by istio docs:
kubectl -n istio-system port-forward $(kubectl -n istio-system get pod -l app=grafana -o jsonpath='{.items[0].metadata.name}') 3000:3000 &
This works only with http://localhost:3000; it is not accessible via http://publicip:3000
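(Side note: kubectl port-forward binds to 127.0.0.1 by default, which is why only localhost works. A reasonably recent kubectl can bind to all interfaces with --address, as sketched below, though exposing dashboards this way is generally discouraged; the gateway-based approach in the answer below is cleaner.)
kubectl -n istio-system port-forward --address 0.0.0.0 \
  $(kubectl -n istio-system get pod -l app=grafana -o jsonpath='{.items[0].metadata.name}') 3000:3000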
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
grafana ClusterIP 172.20.192.71 <none> 3000/TCP 1m
istio-citadel ClusterIP 172.20.111.103 <none> 8060/TCP,15014/TCP 1m
istio-egressgateway ClusterIP 172.20.123.112 <none> 80/TCP,443/TCP,15443/TCP 1m
istio-galley ClusterIP 172.20.45.229 <none> 443/TCP,15014/TCP,9901/TCP 1m
istio-ingressgateway LoadBalancer 172.20.94.157 xxxx-yyyy.us-west-2.elb.amazonaws.com 15020:31336/TCP,80:31380/TCP,443:31390/TCP,31400:31400/TCP,15029:32146/TCP,15030:30126/TCP,15031:31506/TCP,15032:30501/TCP,15443:31053/TCP 1m
istio-pilot ClusterIP 172.20.27.87 <none> 15010/TCP,15011/TCP,8080/TCP,15014/TCP 1m
istio-policy ClusterIP 172.20.222.108 <none> 9091/TCP,15004/TCP,15014/TCP 1m
istio-sidecar-injector ClusterIP 172.20.240.198 <none> 443/TCP 1m
istio-telemetry ClusterIP 172.20.157.227 <none> 9091/TCP,15004/TCP,15014/TCP,42422/TCP 1m
jaeger-agent ClusterIP None <none> 5775/UDP,6831/UDP,6832/UDP 1m
jaeger-collector ClusterIP 172.20.92.248 <none> 14267/TCP,14268/TCP 1m
jaeger-query ClusterIP 172.20.168.197 <none> 16686/TCP 1m
kiali ClusterIP 172.20.236.20 <none> 20001/TCP 1m
prometheus ClusterIP 172.20.21.205 <none> 9090/TCP 1m
tracing ClusterIP 172.20.231.66 <none> 80/TCP 1m
zipkin ClusterIP 172.20.200.32 <none> 9411/TCP 1m
As shown above, I'm trying to access the Grafana dashboard using the load balancer as well as port forwarding, but I still don't get the Grafana dashboard.
You can create an Istio Gateway and VirtualService in order to forward your requests to the grafana service, which runs on port 3000 by default.
First, let's check the grafana and istio-ingressgateway services:
kubectl get svc grafana istio-ingressgateway -n istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
grafana ClusterIP 100.71.67.105 <none> 3000/TCP 18h
istio-ingressgateway LoadBalancer 100.64.42.106 <Public IP address> 15020:31766/TCP,80:31380/TCP,443:31390/TCP,31400:31400/TCP,15029:32576/TCP,15030:30728/TCP,15031:31037/TCP,15032:31613/TCP,15443:32501/TCP 18h
So, we have the grafana service listening on port 3000 and the default istio-ingressgateway LoadBalancer service running with an assigned public IP address.
Then we create a Gateway that uses this default LoadBalancer:
$ kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: grafana-gateway
  namespace: istio-system # use the same namespace as the backend service
spec:
  selector:
    istio: ingressgateway # use Istio default gateway implementation
  servers:
  - port:
      number: 80
      name: HTTP
      protocol: HTTP
    hosts:
    - "*"
EOF
Then configure a route to the grafana service for traffic entering via this gateway:
$ kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: grafana
  namespace: istio-system # use the same namespace as the backend service
spec:
  hosts:
  - "*"
  gateways:
  - grafana-gateway # define gateway name
  http:
  - match:
    - uri:
        prefix: "/"
    route:
    - destination:
        port:
          number: 3000 # backend service port
        host: grafana # backend service name
EOF
Then hit http://<public_ip_istio_ingressgateway>, and you should see the Grafana dashboard.
I hope this is helpful for you.
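A quick way to verify the routing from outside the cluster, reusing the placeholder for the ingress gateway's external address:
curl -s -o /dev/null -w "%{http_code}\n" http://<public_ip_istio_ingressgateway>/
# A 200, or a 302 redirect to the Grafana login page, indicates the Gateway and VirtualService are routing correctly.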
kubectl -n istio-system port-forward svc/kiali 20001
Then hit http://localhost:20001/kiali/
First off a disclaimer: I have only been using Azure's Kubernetes framework for a short while so my apologies for asking what might be an easy problem.
I have two Kubernetes services running in AKS. I want these services to be able to discover each other by service name. The pods associated with these services are each given an IP from the subnet I've assigned to my cluster:
$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP ...
tom 1/1 Running 0 69m 10.0.2.10 ...
jerry 1/1 Running 5 67m 10.0.2.21 ...
If I make REST calls between these services using their pod IPs directly, the calls work as expected. Of course, I don't want to use hard-coded IPs. In reading up on kube-dns, my understanding is that entries for registered services are created in the DNS. The tests I've done confirm this, but the IP addresses assigned to the DNS entries are not the IP addresses of the pods. For example:
$ kubectl exec jerry -- ping -c 1 tom.default
PING tom.default (10.1.0.246): 56 data bytes
The IP address that is associated with the service tom is the so-called "cluster ip":
$ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
tom ClusterIP 10.1.0.246 <none> 6010/TCP 21m
jerry ClusterIP 10.1.0.247 <none> 6040/TCP 20m
The same is true with the service jerry. The problem with these IP addresses is that REST calls using these addresses do not work. Even a simple ping times out. So my question is how can I associate the kube-dns entry that's created for a service with the pod IP instead of the cluster IP?
Based on the posted answer, I updated my yml file for "tom" as follows:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: tom
spec:
  template:
    metadata:
      labels:
        app: tom
    spec:
      containers:
      - name: tom
        image: myregistry.azurecr.io/tom:latest
        imagePullPolicy: Always
        ports:
        - containerPort: 6010
---
apiVersion: v1
kind: Service
metadata:
  name: tom
spec:
  ports:
  - port: 6010
    name: "6010"
  selector:
    app: tom
and then re-applied the update. I still get the cluster IP though when I try to resolve tom.default, not the pod IP. I'm still missing part of the puzzle.
Update: As requested, here's the describe output for tom:
$ kubectl describe service tom
Name: tom
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"tom","namespace":"default"},"spec":{"ports":[{"name":"6010","po...
Selector: app=tom
Type: ClusterIP
IP: 10.1.0.139
Port: 6010 6010/TCP
TargetPort: 6010/TCP
Endpoints: 10.0.2.10:6010
The output is similar for the service jerry. As you can see, the endpoint is what I'd expect--10.0.2.10 is the IP assigned to the pod associated with the service tom. Kube-dns, though, resolves the name "tom" to the cluster IP, not the pod IP:
$ kubectl get pods
NAME READY STATUS RESTARTS AGE IP ...
tom-b4ccbfb97-wfmjp 1/1 Running 0 15h 10.0.2.10
jerry-dd8fbf98f-8jgw7 1/1 Running 0 14h 10.0.2.20
$ kubectl exec jerry-dd8fbf98f-8jgw7 nslookup tom
Name: tom
Address 1: 10.1.0.139 tom.default.svc.cluster.local
This doesn't really matter of course as long as REST calls are routed to the expected pod IP. I've had some success with this today:
$ kubectl exec jerry-5554b956b-9kpj7 -- wget -O - http://tom:6010/actuator/health
{"status":"UP"}
This shows that even though the name "tom" resolves to the cluster IP there is routing in place that makes sure the call gets to the pod. I've tried the same call from service tom to service jerry and that also works. Curiously, a loopback, from tom to tom, times out:
$ kubectl exec tom-5c68d66cf9-dxlmf -- wget -O - http://tom:6010/actuator/health
Connecting to tom:6010 (10.1.0.139:6010)
wget: can't connect to remote host (10.1.0.139): Operation timed out
command terminated with exit code 1
If I use the pod IP explicitly, the call works:
$ kubectl exec tom-5c68d66cf9-dxlmf -- wget -O - http://10.0.2.10:6010/actuator/health
{"status":"UP"}
So for some reason the routing doesn't work in the loopback case. I can probably get by with that since I don't think we'll need to make calls back to the same service. It is puzzling though.
Peter
This means you didn't publish ports through your service (or used the wrong labels). What you are trying to achieve should indeed be done with services; what you need to do is fix your service definition so that it works properly.
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: xxx-name
spec:
  template:
    metadata:
      labels:
        app: xxx-label
    spec:
      containers:
      - name: xxx-container
        image: kmrcr.azurecr.io/image:0.7
        imagePullPolicy: Always
        ports:
        - containerPort: 7003
        - containerPort: 443
---
apiVersion: v1
kind: Service
metadata:
  name: xxx-service
spec:
  ports:
  - port: 7003
    name: "7003"
  - port: 443
    name: "443"
  selector:
    app: xxx-label # must match your pod label
  type: LoadBalancer
Notice how this exposes the same ports the container is listening on and uses the same label as the selector to determine which pods the traffic must go to.
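Once the corrected Service is applied, a quick way to confirm the selector and ports line up is to check that the Service actually has endpoints (names here match the example above):
kubectl get endpoints xxx-service
kubectl describe service xxx-service
# An empty ENDPOINTS column means the selector does not match any pod labels.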
I want to make services accessible from outside the K8s cluster using an ingress controller. Following recipe 5.5 from the Kubernetes Cookbook, I ran this manifest:
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: nginx-public
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host:
    http:
      paths:
      - path: /web
        backend:
          serviceName: nginx
          servicePort: 80
The Ingress object is visible in the Kubernetes dashboard, but it does not have an assigned endpoint:
Output of kubectl get ing:
NAME HOSTS ADDRESS PORTS AGE
nginx-public * 80 54m
Update:
Running kubectl describe ingress nginx-public gives:
Name: nginx-public
Namespace: default
Address:
Default backend: default-http-backend:80 (<none>)
Rules:
Host Path Backends
---- ---- --------
*
/web nginx:80 (<none>)
Annotations:
ingress.kubernetes.io/rewrite-target: /
Events: <none>
Actually this is an issue with the Kubernetes Dashboard; we have the same issue.
Even if it isn't displayed, that doesn't mean your ingress isn't working. First you should check the ingress with kubectl (kubectl describe ingress nginx-public) and verify that the output is similar to this:
Name: test-ingress
Namespace: test
Address:
Default backend: default-http-backend:80 (<none>)
TLS:
test-ssl-secret terminates test.myorg.com
Rules:
Host Path Backends
---- ---- --------
test.myorg.com
/ test-service:80 (<none>)
Afterwards you should verify your service is reachable via your specified host.
Update:
Depending on the service in front of your ingress controller, your service should be reachable via http://{serverip}:{nodeport-http-port}/web if the controller's service is of type NodePort (you will get 2 external ports in the 30000-39999 range; one is the http port, the other the https port), or via http://{address-from-external-loadbalancer}/web if the service is of type LoadBalancer.
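If you are unsure which case applies, the controller's own Service tells you; a quick check, assuming the controller was installed into the ingress-nginx namespace:
kubectl get svc -n ingress-nginx
# TYPE NodePort      -> use http://{node-ip}:{3xxxx port mapped to 80}/web
# TYPE LoadBalancer  -> use http://{EXTERNAL-IP}/web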
2nd Update:
After some further investigation of the issue I stumbled upon a kubernetes-dashboard bug report stating that it is indeed possible to show the endpoints of an ingress. The problem actually isn't caused by the dashboard, but by a missing parameter on the ingress controller deployment.
For the nginx-ingress-controller it is the following:
NGINX Ingress CLI arguments
The missing option is --publish-service.
If you used Helm to deploy the controller, you need to add the parameter --set controller.publishService.enabled=true
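For illustration, a Helm-based upgrade might look like the following; the release name, chart reference, and namespace are assumptions about your setup (older installs used the stable/nginx-ingress chart instead):
helm upgrade nginx-ingress ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --set controller.publishService.enabled=true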
As an experiment I'm trying to run a docker container on Azure using the Azure Container Service and Kubernetes as the orchestrator. I'm running the official nginx image. Here are the steps I am taking:
az group create --name test-group --location westus
az acs create --orchestrator-type=kubernetes --resource-group=test-group --name=k8s-cluster --generate-ssh-keys
I created Kubernetes deployment and service files from a docker compose file using Kompose.
deployment file
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    kompose.service.type: LoadBalancer
  creationTimestamp: null
  labels:
    io.kompose.service: test
  name: test
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        io.kompose.service: test
    spec:
      containers:
      - image: nginx:latest
        name: test
        ports:
        - containerPort: 80
        resources: {}
      restartPolicy: Always
status: {}
service file
apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.service.type: LoadBalancer
  creationTimestamp: null
  labels:
    io.kompose.service: test
  name: test
spec:
  ports:
  - name: "80"
    port: 80
    targetPort: 80
  selector:
    io.kompose.service: test
  type: LoadBalancer
status:
  loadBalancer: {}
I can then start everything up:
kubectl create -f test-service.yaml,test-deployment.yaml
Once an IP has been exposed I assign a dns prefix to it so I can access my running container like so: http://nginx-test.westus.cloudapp.azure.com/.
My question is, how can I access the service using https? At https://nginx-test.westus.cloudapp.azure.com/
I don't think I'm supposed to configure nginx for https, since the certificate is not mine. I've tried changing the load balancer to send 443 traffic to port 80, but I receive a timeout error.
I tried mapping port 443 to port 80 in my Kubernetes service config.
ports:
- name: "443"
  port: 443
  targetPort: 80
But that results in:
SSL peer was not expecting a handshake message it received. Error code: SSL_ERROR_HANDSHAKE_UNEXPECTED_ALERT
How can I view my running container at https://nginx-test.westus.cloudapp.azure.com/?
If I understand it correctly, I think you are looking for the Nginx Ingress controller.
If we need TLS termination on Kubernetes, we can use an ingress controller; on Azure we can use the Nginx Ingress controller.
To achieve this, we can follow these steps (see the sketch after the output below):
1. Deploy the Nginx Ingress controller
2. Create TLS certificates
3. Deploy a test HTTP service
4. Configure TLS termination
For more information about configuring the Nginx Ingress controller for TLS termination on Kubernetes on Azure, please refer to this blog.
root@k8s-master-6F403744-0:~/ingress/examples/deployment/nginx# kubectl get services --namespace kube-system -w
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default-http-backend 10.0.113.185 <none> 80/TCP 42m
heapster 10.0.4.232 <none> 80/TCP 1h
kube-dns 10.0.0.10 <none> 53/UDP,53/TCP 1h
kubernetes-dashboard 10.0.237.125 <nodes> 80:32229/TCP 1h
nginx-ingress-ssl 10.0.92.57 40.71.37.243 443:30215/TCP 13m
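For reference, a minimal sketch of steps 2 and 4 in front of the test service from the question; the secret name, certificate file names, and Ingress manifest below are illustrative assumptions, not values taken from the blog:
# Step 2: create a TLS secret from a certificate and key (hypothetical file names)
#   kubectl create secret tls tls-secret --cert=tls.crt --key=tls.key
# Step 4: reference the secret from an Ingress that terminates TLS for the "test" service
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-tls
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  tls:
  - hosts:
    - nginx-test.westus.cloudapp.azure.com
    secretName: tls-secret
  rules:
  - host: nginx-test.westus.cloudapp.azure.com
    http:
      paths:
      - path: /
        backend:
          serviceName: test
          servicePort: 80
With that in place, https://nginx-test.westus.cloudapp.azure.com/ is served by the ingress controller, which terminates TLS and forwards plain HTTP to the nginx pod on port 80.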