I installed Istio on Kubernetes without Helm.
I can see that the pods and services are created in the istio-system namespace.
All the add-on services, like Grafana and Prometheus, are created, but their ports are not exposed.
Since a LoadBalancer Service is created, a load balancer is also created in AWS. I wanted to access the Grafana, Prometheus, etc. dashboards from an external network through the newly created load balancer endpoint, but the dashboards are not reachable through that endpoint.
I tried the port forwarding recommended by the Istio docs:
kubectl -n istio-system port-forward $(kubectl -n istio-system get pod -l app=grafana -o jsonpath='{.items[0].metadata.name}') 3000:3000 &
This works only with http://localhost:3000; it is not accessible at http://publicip:3000.
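Note that kubectl port-forward binds to 127.0.0.1 by default, which is why only localhost works. As a sketch, the listener can be bound to all interfaces instead (assuming the machine running kubectl is the one with the public IP and its firewall/security group allows port 3000):

kubectl -n istio-system port-forward --address 0.0.0.0 $(kubectl -n istio-system get pod -l app=grafana -o jsonpath='{.items[0].metadata.name}') 3000:3000 &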
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
grafana ClusterIP 172.20.192.71 <none> 3000/TCP 1m
istio-citadel ClusterIP 172.20.111.103 <none> 8060/TCP,15014/TCP 1m
istio-egressgateway ClusterIP 172.20.123.112 <none> 80/TCP,443/TCP,15443/TCP 1m
istio-galley ClusterIP 172.20.45.229 <none> 443/TCP,15014/TCP,9901/TCP 1m
istio-ingressgateway LoadBalancer 172.20.94.157 xxxx-yyyy.us-west-2.elb.amazonaws.com 15020:31336/TCP,80:31380/TCP,443:31390/TCP,31400:31400/TCP,15029:32146/TCP,15030:30126/TCP,15031:31506/TCP,15032:30501/TCP,15443:31053/TCP 1m
istio-pilot ClusterIP 172.20.27.87 <none> 15010/TCP,15011/TCP,8080/TCP,15014/TCP 1m
istio-policy ClusterIP 172.20.222.108 <none> 9091/TCP,15004/TCP,15014/TCP 1m
istio-sidecar-injector ClusterIP 172.20.240.198 <none> 443/TCP 1m
istio-telemetry ClusterIP 172.20.157.227 <none> 9091/TCP,15004/TCP,15014/TCP,42422/TCP 1m
jaeger-agent ClusterIP None <none> 5775/UDP,6831/UDP,6832/UDP 1m
jaeger-collector ClusterIP 172.20.92.248 <none> 14267/TCP,14268/TCP 1m
jaeger-query ClusterIP 172.20.168.197 <none> 16686/TCP 1m
kiali ClusterIP 172.20.236.20 <none> 20001/TCP 1m
prometheus ClusterIP 172.20.21.205 <none> 9090/TCP 1m
tracing ClusterIP 172.20.231.66 <none> 80/TCP 1m
zipkin ClusterIP 172.20.200.32 <none> 9411/TCP 1m
As shown above, I'm trying to access the Grafana dashboard via the load balancer as well as via port forwarding, but I still can't get to the Grafana dashboard.
You can create an Istio Gateway and VirtualService in order to forward your requests to the grafana service, which runs on port 3000 by default.
First, let's check the grafana and istio-ingressgateway services:
kubectl get svc grafana istio-ingressgateway -n istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
grafana ClusterIP 100.71.67.105 <none> 3000/TCP 18h
istio-ingressgateway LoadBalancer 100.64.42.106 <Public IP address> 15020:31766/TCP,80:31380/TCP,443:31390/TCP,31400:31400/TCP,15029:32576/TCP,15030:30728/TCP,15031:31037/TCP,15032:31613/TCP,15443:32501/TCP 18h
So we have the grafana service listening on port 3000, and the default istio-ingressgateway LoadBalancer service running with an assigned public IP address.
Then we create a Gateway that uses this default LoadBalancer:
$ kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: grafana-gateway
  namespace: istio-system # Use same namespace with backend service
spec:
  selector:
    istio: ingressgateway # use Istio default gateway implementation
  servers:
  - port:
      number: 80
      name: HTTP
      protocol: HTTP
    hosts:
    - "*"
EOF
Then configure a route to the grafana service for traffic entering via this gateway:
$ kubectl apply -f - <<EOF
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: grafana
  namespace: istio-system # Use same namespace with backend service
spec:
  hosts:
  - "*"
  gateways:
  - grafana-gateway # define gateway name
  http:
  - match:
    - uri:
        prefix: "/"
    route:
    - destination:
        port:
          number: 3000 # Backend service port
        host: grafana # Backend service name
EOF
Then hit http://<public_ip_istio_ingressgateway> and you should see the Grafana dashboard.
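As a quick check from outside the cluster (using the same placeholder host as above), something like this should return Grafana's response, typically a redirect to /login:

curl -I http://<public_ip_istio_ingressgateway>/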
I hope it will be helpful for you.
kubectl -n istio-system port-forward svc/kiali 20001
Then hit http://localhost:20001/kiali/
Related
These are the details from Azure AKS.
I am getting a 404 on the website, and in the backend the nginx ingress pods are logging this:
Service "ns-2/svc-test-2" does not have any active Endpoint
This is a Liferay application running on the pod.
Ingress describe
Name: ingress-abc-2
Namespace: ns-abc-2
Address: 1.1.1.1
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
TLS:
secret-tls-abc-2 terminates aks.abc.in
Rules:
Host Path Backends
---- ---- --------
aks.abc.in
/ svc-abc-2:80 (10.244.0.23:8080)
Annotations: kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/ssl-redirect: true
Events: <none>
service describe
Name: svc-abc-2
Namespace: ns-abc-2
Labels: <none>
Annotations: service.beta.kubernetes.io/azure-load-balancer-internal: true
Selector: app=pod-abc-2
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.0.162.112
IPs: 10.0.162.112
Port: port-abc-2 80/TCP
TargetPort: 8080/TCP
Endpoints: 10.244.0.23:8080
Session Affinity: None
Events: <none>
kubectl get po -n ns-2 -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
pod-abc-2-1111 1/1 Running 0 103m 10.244.0.23 aks-agentpool-1111-vmss000000 <none> <none>
kubectl describe po -n ns-2 pod-abc-2-1111
Namespace: ns-abc-2
Priority: 0
Node: aks-agentpool-1111-vmss000000/10.224.0.4
Start Time: Thu, 18 Aug 2022 18:23:09 +0530
Labels: app=pod-abc-2
pod-template-hash=5d774586b5
Status: Running
IP: 10.244.0.23
IPs:
IP: 10.244.0.23
Deployment describe
Name: deployment-abc-2
Namespace: ns-abc-2
CreationTimestamp: Thu, 18 Aug 2022 18:23:09 +0530
Labels: app=deployment-canopi-liferay-2
Annotations: deployment.kubernetes.io/revision: 1
Selector: app=pod-abc-2
The selectors and labels are matched properly.
If I go inside the pod and run curl localhost:8080, I get the welcome page as a response.
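For reference, the same in-pod check can be run non-interactively (a sketch using the pod name from above, assuming curl is available in the image):

kubectl exec -n ns-abc-2 pod-abc-2-1111 -- curl -s localhost:8080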
Endpoints
kubectl get endpoints -n ns-abc-2
NAME ENDPOINTS AGE
svc-abc-2 10.244.0.23:8080 148m
Endpoints of the ingress controller
kubectl get endpoints -n ns-ingress-2
NAME ENDPOINTS AGE
nginx-ingress-controller-ingress-nginx-controller 10.244.0.20:443,10.244.0.21:443,10.244.0.20:80 + 1 more... 144m
nginx-ingress-controller-ingress-nginx-controller-admission 10.244.0.20:8443,10.244.0.21:8443 144m
Ok, so this issue is resolved.
In my nginx controller the application was configured against a DNS host name, but I was trying to open it using the controller's public IP before the DNS name had been mapped to that public IP, which is why I was getting a 404.
After the DNS name was mapped to the public IP, the application started showing up on the DNS-based URL.
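Before the DNS record exists, one way to test through the controller's public IP is to present the expected host name explicitly (a sketch using the anonymized host and IP from the describe output above):

# Resolve the ingress host to the controller's public IP for this request only
curl -k --resolve aks.abc.in:443:1.1.1.1 https://aks.abc.in/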
I can find the load balancer URL manually:
> LOAD_BALANCER_URL=`kubectl describe service leafsheets-django-service-staging --namespace=$K8S_NAMESPACE | grep "${AWS_REGION}.elb.amazonaws.com*" | awk '{print $3}'`
> echo $LOAD_BALANCER_URL
Sample output from the kubectl describe command:
> kubectl describe service leafsheets-django-service-staging --namespace=leafsheets-staging
Name: leafsheets-django-service-staging
Namespace: leafsheets-staging
Labels: <none>
Annotations: <none>
Selector: pod=leafsheets-staging-django
Type: LoadBalancer
IP: 10.100.13.121
LoadBalancer Ingress: aa13d515171d045319aaf59e9d08e2a5-1015643832.us-west-2.elb.amazonaws.com
Port: http 80/TCP
TargetPort: 8000/TCP
NodePort: http 32567/TCP
Endpoints: 192.168.93.189:8000
Port: https 443/TCP
TargetPort: 8000/TCP
NodePort: https 30041/TCP
Endpoints: 192.168.93.189:8000
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
... but how to find the associated HostedZoneId?
I can hardcode it:
# https://docs.aws.amazon.com/general/latest/gr/elb.html
HOSTED_ZONE_ID=Z1H1FL5HABSF5
... but I'd prefer to deduce it.
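One way to deduce it, assuming the AWS CLI is configured and the Service created a classic ELB (whose name is the first label of the ELB host name):

# Classic ELB assumed: take the ELB name from the host name, then look up its hosted zone
ELB_NAME=$(echo "$LOAD_BALANCER_URL" | cut -d- -f1)
HOSTED_ZONE_ID=$(aws elb describe-load-balancers \
  --load-balancer-names "$ELB_NAME" \
  --query 'LoadBalancerDescriptions[0].CanonicalHostedZoneNameID' \
  --output text)
echo $HOSTED_ZONE_ID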
I have a service which exposes a "hello world" web deployment in "develop" namespace.
Service YAML
kind: Service
apiVersion: v1
metadata:
  name: hello-v1-svc
spec:
  selector:
    app: hello-v1
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
To test if the page is working properly, I run "kubectl port-forward" and the page is displayed successfully using the public IP.
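For reference, a port-forward of this Service looks roughly like the following (the local port is arbitrary):

kubectl -n develop port-forward svc/hello-v1-svc 8080:80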
Edit:
Then the Ingress is deployed, but the page is only reachable from within the VNet address space.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  labels:
    app: app
    version: 1.0.0
  name: dev-ingress
  namespace: develop
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: hello-v1-svc
          servicePort: 80
        path: /
Ingress Rules
Rules:
Host Path Backends
---- ---- --------
*
/ hello-v1-svc:80 (10.1.1.13:8080,10.1.1.21:8080,10.1.1.49:8080)
What step am I missing to get the page to display?
First of all, to answer your comment:
Maybe it is related to this annotation:
"service.beta.kubernetes.io/azure-load-balancer-internal". Controller
is set to "True"
The service.beta.kubernetes.io/azure-load-balancer-internal: "true" annotation is typically used when you create an ingress controller in an internal virtual network. With this annotation, the ingress controller is configured on an internal, private virtual network and IP address, and no external access is allowed.
You can find more info in Create an ingress controller to an internal virtual network in Azure Kubernetes Service (AKS) article.
I reproduced your case: I created an AKS cluster and applied the YAMLs below. It works as expected, so please use them:
apiVersion: v1
kind: Namespace
metadata:
  name: develop
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world-deployment
  namespace: develop
  labels:
    app: hello-v1
spec:
  selector:
    matchLabels:
      app: hello-v1
  replicas: 2
  template:
    metadata:
      labels:
        app: hello-v1
    spec:
      containers:
      - name: hello-v1
        image: paulbouwer/hello-kubernetes:1.8
        ports:
        - containerPort: 8080
---
kind: Service
apiVersion: v1
metadata:
  name: hello-v1-svc
spec:
  type: ClusterIP
  selector:
    app: hello-v1
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: dev-ingress
  namespace: develop
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: hello-v1-svc
          servicePort: 80
        path: /
List of my services
vitalii#Azure:~$ kubectl get svc -A
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default hello-v1-svc ClusterIP 10.0.20.206 <none> 80/TCP 19m
default kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 83m
ingress-basic nginx-ingress-controller LoadBalancer 10.0.222.156 *.*.*.* 80:32068/TCP,443:30907/TCP 53m
ingress-basic nginx-ingress-default-backend ClusterIP 10.0.193.198 <none> 80/TCP 53m
kube-system dashboard-metrics-scraper ClusterIP 10.0.178.224 <none> 8000/TCP 83m
kube-system healthmodel-replicaset-service ClusterIP 10.0.199.235 <none> 25227/TCP 83m
kube-system kube-dns ClusterIP 10.0.0.10 <none> 53/UDP,53/TCP 83m
kube-system kubernetes-dashboard ClusterIP 10.0.115.184 <none> 443/TCP 83m
kube-system metrics-server ClusterIP 10.0.199.200 <none> 443/TCP 83m
For security reasons I hid the EXTERNAL-IP of the nginx-ingress-controller. That's the IP you should use to access the page.
More information and an example can be found in the Create an ingress controller in Azure Kubernetes Service (AKS) article.
I have a simple deployment:
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2 # tells deployment to run 2 pods matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
And here is what my cluster looks like. Pretty simple.
$kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
my-shell-95cb5df57-cdj4z 1/1 Running 0 23m 10.60.1.32 aks-nodepool-19248108-0 <none> <none>
nginx-deployment-76bf4969df-58d66 1/1 Running 0 36m 10.60.1.10 aks-nodepool-19248108-0 <none> <none>
nginx-deployment-76bf4969df-jfkq7 1/1 Running 0 36m 10.60.1.21 aks-nodepool-19248108-0 <none> <none>
$kubectl get services -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
internal-ingress LoadBalancer 10.0.0.194 10.60.1.35 80:30157/TCP 5m28s app=nginx-deployment
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 147m <none>
$kubectl get rs -o wide
NAME DESIRED CURRENT READY AGE CONTAINERS IMAGES SELECTOR
my-shell-95cb5df57 1 1 1 23m my-shell ubuntu pod-template-hash=95cb5df57,run=my-shell
nginx-deployment-76bf4969df 2 2 2 37m nginx nginx:1.7.9 app=nginx,pod-template-hash=76bf4969df
I see I have 2 pods with my nginx app. I want to be able to send a request from any other new pod to either one of them. If one crashes, I want to still be able to send the request.
In the past I used a load balancer for this. The problem with load balancers is that they open up a public IP, and in this specific scenario I don't want a public IP anymore. I want this service to be invoked by other pods directly, without a public IP.
I tried to use an internal load balancer.
apiVersion: v1
kind: Service
metadata:
  name: internal-ingress
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
    service.beta.kubernetes.io/azure-load-balancer-internal-subnet: "my-subnet"
spec:
  type: LoadBalancer
  loadBalancerIP: 10.60.1.45
  ports:
  - port: 80
  selector:
    app: nginx-deployment
The problem is that it does not get an IP in my 10.60.0.0/16 network like it is described here: https://learn.microsoft.com/en-us/azure/aks/internal-lb#specify-a-different-subnet
I just get this never-ending <pending>:
kubectl get services -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
internal-ingress LoadBalancer 10.0.0.230 <pending> 80:30638/TCP 15s app=nginx-deployment
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 136m <none>
What am I missing? How to troubleshoot? Is it even possible to have pod to service communication?
From the message you provided, it seems you want to use a specific private IP address in the same subnet that the AKS cluster uses. I think the likely reason is that the IP address you want to use is already assigned by AKS, which means you cannot use it.
Troubleshooting
So you need to go to the VNet that your AKS cluster uses and check whether the IP address is already in use.
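The same check can be done from the CLI (a sketch; the resource group and VNet name are placeholders for your environment):

az network vnet check-ip-address \
  --resource-group <node-resource-group> \
  --name <aks-vnet-name> \
  --ip-address 10.60.1.45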
Solution
Choose an IP address from the subnet that AKS uses which is not already assigned, or don't specify one at all and let AKS assign your load balancer's IP dynamically. Then change your YAML file as below:
apiVersion: v1
kind: Service
metadata:
  name: internal-ingress
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: nginx-deployment
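After applying this, the EXTERNAL-IP should come up as a private address from the AKS subnet; you can watch for it with:

kubectl get service internal-ingress --watch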
Use a ClusterIP Service (the default type), which creates only a cluster-internal IP and no public IP:
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
  - port: 80
    targetPort: 80
Then you can access the Service (and thus the Pods behind it) from any other Pod in the same namespace by using the Service name as the DNS name:
curl nginx-service
If the Pod from which you want to access the Service is in a different namespace, you have to use the fully qualified domain name of the Service:
curl nginx-service.my-namespace.svc.cluster.local
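A quick way to try this out is from a throwaway pod (the pod name and image are arbitrary; busybox ships with wget):

kubectl run tmp --rm -it --restart=Never --image=busybox -- wget -qO- http://nginx-service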
As an experiment I'm trying to run a docker container on Azure using the Azure Container Service and Kubernetes as the orchestrator. I'm running the official nginx image. Here are the steps I am taking:
az group create --name test-group --location westus
az acs create --orchestrator-type=kubernetes --resource-group=test-group --name=k8s-cluster --generate-ssh-keys
I created Kubernetes deployment and service files from a docker compose file using Kompose.
deployment file
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    kompose.service.type: LoadBalancer
  creationTimestamp: null
  labels:
    io.kompose.service: test
  name: test
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        io.kompose.service: test
    spec:
      containers:
      - image: nginx:latest
        name: test
        ports:
        - containerPort: 80
        resources: {}
      restartPolicy: Always
status: {}
service file
apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.service.type: LoadBalancer
  creationTimestamp: null
  labels:
    io.kompose.service: test
  name: test
spec:
  ports:
  - name: "80"
    port: 80
    targetPort: 80
  selector:
    io.kompose.service: test
  type: LoadBalancer
status:
  loadBalancer: {}
I can then start everything up:
kubectl create -f test-service.yaml,test-deployment.yaml
Once an IP has been exposed, I assign a DNS prefix to it so I can access my running container like so: http://nginx-test.westus.cloudapp.azure.com/.
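For reference, the DNS prefix can be attached to the service's public IP with the Azure CLI (a sketch; the resource group and public IP resource name are placeholders that depend on how the cluster was provisioned):

az network public-ip update \
  --resource-group <node-resource-group> \
  --name <public-ip-resource-name> \
  --dns-name nginx-test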
My question is: how can I access the service over HTTPS, at https://nginx-test.westus.cloudapp.azure.com/?
I don't think I'm supposed to configure nginx for https, since the certificate is not mine. I've tried changing the load balancer to send 443 traffic to port 80, but I receive a timeout error.
I tried mapping port 443 to port 80 in my Kubernetes service config.
ports:
- name: "443"
  port: 443
  targetPort: 80
But that results in:
SSL peer was not expecting a handshake message it received. Error code: SSL_ERROR_HANDSHAKE_UNEXPECTED_ALERT
How can I view my running container at https://nginx-test.westus.cloudapp.azure.com/?
If I understand it correctly, I think you are looking for the Nginx Ingress controller.
If we need TLS termination on Kubernetes, we can use an ingress controller; on Azure we can use the Nginx Ingress controller.
To achieve this, we can follow these steps:
1. Deploy the Nginx Ingress controller
2. Create TLS certificates
3. Deploy a test HTTP service
4. Configure TLS termination
For more information about configuring the Nginx Ingress controller for TLS termination on Kubernetes on Azure, please refer to this blog.
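As a rough sketch of the last step (assuming a TLS secret named tls-secret already exists in the namespace and the test service from step 3 is called http-svc on port 80):

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: tls-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  tls:
  - hosts:
    - nginx-test.westus.cloudapp.azure.com
    secretName: tls-secret
  rules:
  - host: nginx-test.westus.cloudapp.azure.com
    http:
      paths:
      - path: /
        backend:
          serviceName: http-svc
          servicePort: 80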
root#k8s-master-6F403744-0:~/ingress/examples/deployment/nginx# kubectl get services --namespace kube-system -w
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default-http-backend 10.0.113.185 <none> 80/TCP 42m
heapster 10.0.4.232 <none> 80/TCP 1h
kube-dns 10.0.0.10 <none> 53/UDP,53/TCP 1h
kubernetes-dashboard 10.0.237.125 <nodes> 80:32229/TCP 1h
nginx-ingress-ssl 10.0.92.57 40.71.37.243 443:30215/TCP 13m