I have created an AKS cluster with the versions below.
Kubernetes version: 1.12.6
Istio version: 1.1.4
Cloud Provider: Azure
I have also successfully installed Istio as my ingress gateway with an external IP address, and I have enabled istio-injection for the namespace where my service is deployed. I can see that the sidecar injection is happening successfully; the pods show:
NAME                                      READY   STATUS    RESTARTS   AGE
club-finder-deployment-7dcf4479f7-8jlpc   2/2     Running   0          11h
club-finder-deployment-7dcf4479f7-jzfv7   2/2     Running   0          11h
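For reference, a sketch of how the injection label could have been applied, assuming the namespace name club-finder-service-dev taken from the service host used later:

kubectl label namespace club-finder-service-dev istio-injection=enabled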
My tls-gateway
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: tls-gateway
  namespace: istio-system
spec:
  selector:
    istio: ingressgateway
  servers:
  - port:
      name: https
      number: 443
      protocol: HTTPS
    tls:
      mode: SIMPLE
      serverCertificate: /etc/istio/ingressgateway-certs/tls.crt
      privateKey: /etc/istio/ingressgateway-certs/tls.key
    hosts:
    - "*"
Note: I am using self-signed certs for testing.
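Because the gateway reads the certificate from /etc/istio/ingressgateway-certs, the self-signed cert and key are assumed to have been loaded into the istio-ingressgateway-certs secret, along these lines (file names are placeholders):

kubectl create -n istio-system secret tls istio-ingressgateway-certs --key tls.key --cert tls.crt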
I have applied the VirtualService below:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: club-finder-service-rules
  namespace: istio-system
spec:
  # https://istio.io/docs/reference/config/istio.networking.v1alpha3/#VirtualService
  gateways: # The default `mesh` value used when this field is left blank doesn't seem to propagate the rule properly. For now, always use a list of FQDN gateways
  - tls-gateway
  hosts:
  - "*" # APIM Manager URL
  http:
  - match:
    - uri:
        prefix: /dev/clubfinder/service/clubs
    rewrite:
      uri: /v1/clubfinder/clubs/
    route:
    - destination:
        host: club-finder.club-finder-service-dev.svc.cluster.local
        port:
          number: 8080
  - match:
    - uri:
        prefix: /dev/clubfinder/service/status
    rewrite:
      uri: /status
    route:
    - destination:
        host: club-finder.club-finder-service-dev.svc.cluster.local
        port:
          number: 8080
Now, when I try to test my service using the ingress external IP, like:
curl -kv https://<external-ip-of-ingress>/dev/clubfinder/service/status
I get the error below:
* Using HTTP2, server supports multi-use
* Connection state changed (HTTP/2 confirmed)
* Copying HTTP/2 data in stream buffer to connection buffer after upgrade: len=0
* Using Stream ID: 1 (easy handle 0x7fe5e800d600)
> GET /dev/clubfinder/service/status HTTP/2
> Host: x.x.x.x --> Replacing IP intentionally
> User-Agent: curl/7.54.0
> Accept: */*
>
* Connection state changed (MAX_CONCURRENT_STREAMS updated)!
< HTTP/2 503
< date: Tue, 07 May 2019 05:15:01 GMT
< server: istio-envoy
<
* Connection #0 to host x.x.x.x left intact
Can someone please point out what is wrong here?
I was defining my VirtualService YAML incorrectly. Instead of using the default HTTP port 80, I was specifying 8080, which is my application's listening port. The YAML below worked for me:
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: club-finder-service-rules
  namespace: istio-system
spec:
  # https://istio.io/docs/reference/config/istio.networking.v1alpha3/#VirtualService
  gateways: # The default `mesh` value used when this field is left blank doesn't seem to propagate the rule properly. For now, always use a list of FQDN gateways
  - tls-gateway
  hosts:
  - "*" # APIM Manager URL
  http:
  - match:
    - uri:
        prefix: /dev/clubfinder/service/clubs
    rewrite:
      uri: /v1/clubfinder/clubs/
    route:
    - destination:
        host: club-finder.club-finder-service-dev.svc.cluster.local
        port:
          number: 80
  - match:
    - uri:
        prefix: /dev/clubfinder/service/status
    rewrite:
      uri: /status
    route:
    - destination:
        host: club-finder.club-finder-service-dev.svc.cluster.local
        port:
          number: 80
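For context, the destination port in a VirtualService has to be the Service port, not the container port; the Service then maps it onto the application's port. A minimal sketch of the backing Service under that assumption (name, namespace, and selector are inferred from the question):

apiVersion: v1
kind: Service
metadata:
  name: club-finder
  namespace: club-finder-service-dev
spec:
  selector:
    app: club-finder   # assumed pod label
  ports:
  - name: http
    port: 80           # the port referenced in the VirtualService
    targetPort: 8080   # the application's listening port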
For future reference, if you have an issue like this, there are basically two main steps to troubleshoot:
1) Check that the Envoy proxies are up and their configs are synchronized with Pilot:
istioctl proxy-status
2) Get Envoy's listeners for your pod and see if anything is listening on the port on which your service is running:
istioctl proxy-config listener club-finder-deployment-7dcf4479f7-8jlpc
So, in your case, at step #2 you would see that there was no listener for port 80, pointing to the root cause.
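In addition to listeners, the clusters and routes Envoy received can be dumped the same way, for example (pod name taken from the question):

istioctl proxy-config cluster club-finder-deployment-7dcf4479f7-8jlpc
istioctl proxy-config route club-finder-deployment-7dcf4479f7-8jlpc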
Also, if you take a look at the Envoy logs, you will probably see errors with the UF (upstream failure) or UH (no healthy upstream) code. Here is a full list of error flags.
For deeper Envoy debugging, refer to this handbook.
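For example, one way to spot those flags is to grep the sidecar's access log (a sketch; this assumes access logging is enabled):

kubectl logs club-finder-deployment-7dcf4479f7-8jlpc -c istio-proxy | grep -E 'UF|UH'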
Related
Situation: I have an AKS cluster that I'm trying to load my project into from localhost.
When I launch my Ansible scripts to get the project running, I need to log in to OpenFaaS, but I encounter this error:
> ...\nCannot connect to OpenFaaS on URL: https:(...).com/faas. Get \"https://(..).com/faas/system/functions\": dial tcp
> xx.xxx.xxx.xxx:xxx: i/o timeout (Client.Timeout exceeded while
> awaiting headers)", "stdout_lines": ["WARNING! Using --password is
> insecure, consider using: cat ~/faas_pass.txt | faas-cli login -u user
> --password-stdin", "Calling the OpenFaaS server to validate the credentials...", "Cannot connect to OpenFaaS on URL:
> https://(...).com/faas. Get
> \"https://(...).com/faas/system/functions\": dial tcp
> xx.xxx.xxx.xxx:xxx: i/o timeout (Client.Timeout exceeded while awaiting headers)"]}
I have a PUBLIC load balancer that I created from a YAML file, and it's linked to the DNS name (...).com / the IP address of the created LB.
My loadbalancer.yml file:
apiVersion: v1
kind: Service
metadata:
  name: loadbalancer
spec:
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
My ingress file:
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
  name: openfaas
spec:
  rules:
  - host: (...).com
    http:
      paths:
      - backend:
          service:
            name: openfaas
            port:
              number: 80
        path: /faas
        pathType: Prefix
  tls:
  - hosts:
    - (...).com
    secretName: (...).com
---
I haven't found many tutorials that cover the same situation; the ones I found use internal load balancers.
Is it Azure that's blocking the communication? A firewall problem?
Do I need to make my LB internal instead of external?
I saw a source online that stated this:
If you expose a service through the normal LoadBalancer with a public
ip, it will not be accessible because the traffic that has not been
routed through the azure firewall will be dropped on the way out.
Therefore you need to create your service with a fixed internal ip,
internal LoadBalancer and route the traffic through the azure firewall
both for outgoing and incoming traffic.
https://denniszielke.medium.com/setting-up-azure-firewall-for-analysing-outgoing-traffic-in-aks-55759d188039
But I'm wondering if it's possible to bypass that.
Any help is greatly appreciated!
I found out afterwards that Azure already provides an LB, so you do not need to create one. It was not a firewall issue.
Go to "Load Balancing" -> "Frontend IP Configuration" and choose the appropriate IP.
I have deployed Kubernetes Dashboard with a command:
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.2.0/aio/deploy/recommended.yaml
and I've edited the Service to be a NodePort and configured the Ingress object accordingly. I am able to log in to the dashboard over HTTP, but I get an issue while logging in to the same URL over HTTPS:
"TLS handshake error from 10.244.0.0:44950: remote error: tls: unknown certificate" .
When I configured the ingress rule with SSL, it gives the error:
"Client sent an HTTP request to an HTTPS server."
I have a Jenkins application running on the same cluster with a real certificate, and I am able to log in to the Jenkins URL with HTTPS.
Cluster Information:
K8s cluster running on Linux Server release 7.9
Kubernetes version: v1.19.6
Could you confirm whether you have any suggestions to fix this issue?
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: kube-system-ingress
  namespace: kubernetes-dashboard
  annotations:
    kubernetes.io/ingress.class: "haproxy"
    ingress.kubernetes.io/ssl-passthrough: "false"
spec:
  tls:
  - hosts:
    - console.qa.test.com
    secretName: qa-pss-dashboard
  rules:
  - host: console.qa.test.com
    http:
      paths:
      - path: /
        backend:
          serviceName: kubernetes-dashboard
          servicePort: 8443
I think you have to add the annotation:
ingress.kubernetes.io/backend-protocol: "HTTPS"
Please note that the kubernetes-dashboard Service is exposed on port 443, not 8443, which is the port related to the deployment (the pod port).
so:
backend:
  service:
    name: kubernetes-dashboard
    port:
      number: 443
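Putting the annotation and the port change together, the Ingress might look roughly like this (a sketch that keeps the haproxy annotations from the question, updated to the networking.k8s.io/v1 syntax):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: kube-system-ingress
  namespace: kubernetes-dashboard
  annotations:
    kubernetes.io/ingress.class: "haproxy"
    ingress.kubernetes.io/ssl-passthrough: "false"
    ingress.kubernetes.io/backend-protocol: "HTTPS"
spec:
  tls:
  - hosts:
    - console.qa.test.com
    secretName: qa-pss-dashboard
  rules:
  - host: console.qa.test.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: kubernetes-dashboard
            port:
              number: 443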
We are using Kubernetes (1.17.14-gke.1600) and Istio (1.7.4).
We have several deployments that need to make HTTPS requests to each other using the public DNS record (mydomain.com). The goal here is to make internal HTTPS requests instead of going out to the public internet and coming back.
We cannot replace the host with the "internal" DNS name (e.g. my-svc.my-namespace.svc.cluster-domain.example) because sometimes the same host is returned to the client, which makes HTTP requests from the browser.
Our services are exposed over HTTP, so I understand that if we want to use the HTTPS scheme we need to pass through the Istio gateway.
Here is my VirtualService. By adding the mesh gateway, I'm able to make internal HTTP requests with the public DNS, but this doesn't work with HTTPS:
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: myservice
spec:
  gateways:
  - istio-system/gateway
  - mesh
  hosts:
  - myservice.mydomain.com
  http:
  - match:
    - uri:
        prefix: /
    route:
    - destination:
        host: myservice
        port:
          number: 3000
        subset: v1
Here is the gateway:
apiVersion: v1
items:
- apiVersion: networking.istio.io/v1beta1
  kind: Gateway
  metadata:
    name: gateway
    namespace: istio-system
  spec:
    selector:
      istio: ingressgateway
    servers:
    - hosts:
      - '*'
      port:
        name: http
        number: 80
        protocol: HTTP
      tls:
        httpsRedirect: true
    - hosts:
      - '*'
      port:
        name: https
        number: 443
        protocol: HTTPS
      tls:
        credentialName: ingress-cert
        mode: SIMPLE
I've figured out that one workaround for the problem is to use a ServiceEntry like this:
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: internal-https-redirect
spec:
  endpoints:
  - address: 10.43.2.170 # istio-ingressgateway ClusterIP
  hosts:
  - '*.mydomain.com'
  location: MESH_INTERNAL
  ports:
  - name: internal-redirect
    number: 443
    protocol: HTTPS
  resolution: STATIC
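For reference, the ClusterIP hard-coded in the address field above can be looked up with something like:

kubectl get svc istio-ingressgateway -n istio-system -o jsonpath='{.spec.clusterIP}'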
But I'm not sure if this is the right way to do it or if it is considered a bad practice.
Thank you
I am having issues with requests to my Node.js app running in my Kubernetes cluster on DigitalOcean. Every request returns a 502 Bad Gateway error. I am not sure what I am missing.
This is what the service config looks like:
apiVersion: v1
kind: Service
metadata:
  name: service-api
  namespace: default
  labels:
    app: service-api
spec:
  type: ClusterIP
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 3000
  selector:
    app: service-api
The Ingress.yml looks like this:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: service-api-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    cert-manager.io/cluster-issuer: letsencrypt-prod
    nginx.ingress.kubernetes.io/limit-connections: '2'
    nginx.ingress.kubernetes.io/limit-rpm: '60'
    service.beta.kubernetes.io/do-loadbalancer-protocol: "http"
    service.beta.kubernetes.io/do-loadbalancer-algorithm: "round_robin"
    service.beta.kubernetes.io/do-loadbalancer-http2-ports: "443,80"
    service.beta.kubernetes.io/do-loadbalancer-tls-ports: "443"
    service.beta.kubernetes.io/do-loadbalancer-tls-passthrough: "true"
spec:
  tls:
  - hosts:
    - dev-api.service.com
    secretName: service-api-tls
  rules:
  - host: "dev-api.service.com"
    http:
      paths:
      - pathType: Prefix
        path: "/"
        backend:
          service:
            name: service-api
            port:
              number: 80
Whenever I visit the host URL, I get a 502 error.
This is what appears in the NGINX ingress log:
2021/01/13 08:41:34 [error] 319#319: *31338 connect() failed (111: Connection refused) while connecting to upstream, client: IP, server: dev-api.service.com, request: "GET /favicon.ico HTTP/2.0", upstream: "http://10.244.0.112:3000/favicon.ico", host: "dev-api.service.com", referrer: "https://dev-api.service.com/status"
As discussed in the comments with Emmanuel Amodu:
The mistake was connecting to the app using the wrong port: 4000 instead of 3000, as defined in service-api.
For anyone in the community who runs into a similar problem, the most important debugging steps are:
Checking the netstat -plant output table (see the example after this list)
Checking your NGINX configuration: $ kubectl exec -it -n <namespace-of-ingress-controller> <nginx-ingress-controller-pod> -- cat /etc/nginx/nginx.conf
Checking the service: $ kubectl describe svc service-api
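For the netstat step, a sketch of checking which ports the application is actually listening on inside the pod (the pod name is illustrative, and this assumes netstat is available in the container image):

kubectl exec -it <service-api-pod> -- netstat -plant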
Could it be the annotation that configures SSL passthrough?
If SSL passthrough has been configured on your ingress controller, then your service needs to expose port 443 in addition to port 80. You're basically saying the pod is terminating the secure connection, not NGINX.
If this is the issue, it would explain the 50X error, which indicates a problem with the backend.
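If that were the case, a sketch of a Service exposing both ports might look like this (the 443 mapping is an assumption and only makes sense if the pod itself serves TLS):

apiVersion: v1
kind: Service
metadata:
  name: service-api
spec:
  type: ClusterIP
  selector:
    app: service-api
  ports:
  - name: http
    port: 80
    targetPort: 3000
  - name: https
    port: 443
    targetPort: 3443   # hypothetical TLS port in the pod; adjust to the app's actual TLS listener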
No special setup is needed; that 404 is most likely coming from your actual backend.
I’ve download the Prometheus helm chart
https://github.com/helm/charts/tree/master/stable/prometheus
and deployed it to our cluster as-is, and I was able to access the Prometheus UI via port-forwarding.
As we are using Istio, I want to configure access via a host (like an external IP). I configured the following, but it doesn't work for me.
I mean, if I enter the host I don't get anything in the browser; any idea what could be missing here?
I don't see any external IP when running kubectl get svc -n mon, just an internal IP, which doesn't meet our needs.
gateway.yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: http-gateway
  namespace: mon
spec:
  selector:
    istio: ingressgateway
  servers:
  - hosts: mo-gateway.web-system.svc.cluster.local
    port:
      name: https-monitoring
      number: 443
      protocol: HTTPS
    tls:
      mode: SIMPLE
      privateKey: /etc/istio/sa-tls/tls.key
      serverCertificate: /etc/istio/sa-tls/tls.crt
virtual_service.yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: prom-virtualservice
  namespace: mon
spec:
  hosts:
  - mo-gateway.web-system.svc.cluster.locall
  gateways:
  - http-gateway
  http:
  - match:
    - uri:
        prefix: /prometheus
    route:
    - destination:
        host: prometheus-server
        port:
          number: 80
Any idea why it doesn't work?
By the way, if I just change the Prometheus service type to LoadBalancer it works; I was able to get an external IP and use it, but not through Istio.
Istio is up and running.