k8s expose virtual service with istio - azure

I've downloaded the Prometheus helm chart
https://github.com/helm/charts/tree/master/stable/prometheus
and deployed it to our cluster as-is, and I was able to access the Prometheus UI via port-forwarding.
As we are using Istio, I want to configure it to be accessible via a host (like an external IP), so I configured the following, but it doesn't work for me.
I mean, if I put in the host I don't get anything in the browser. Any idea what could be missing here?
I don't see any external-ip when running kubectl get svc -n mon,
just an internal-ip, which doesn't help for our needs.
gateway.yaml
apiVersion: networking.istio.io/v1alpha3
kind: Gateway
metadata:
  name: http-gateway
  namespace: mon
spec:
  selector:
    istio: ingressgateway
  servers:
    - hosts: mo-gateway.web-system.svc.cluster.local
      port:
        name: https-monitoring
        number: 443
        protocol: HTTPS
      tls:
        mode: SIMPLE
        privateKey: /etc/istio/sa-tls/tls.key
        serverCertificate: /etc/istio/sa-tls/tls.crt
virtual_service.yaml
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: prom-virtualservice
  namespace: mon
spec:
  hosts:
    - mo-gateway.web-system.svc.cluster.locall
  gateways:
    - http-gateway
  http:
    - match:
        - uri:
            prefix: /prometheus
      route:
        - destination:
            host: prometheus-server
            port:
              number: 80
Any idea why it doesn't work?
BTW, if I just change the type of the Prometheus service to LoadBalancer it works; I get an external-ip and can use it, but not through Istio.
Istio is up and running ...
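As a quick sanity check (a sketch, assuming a default Istio install where the ingress gateway Service is named istio-ingressgateway in the istio-system namespace): with Istio, the external IP sits on that gateway Service rather than on anything in the mon namespace, and the route can be tested against it directly:

# Assumed default Istio install: the external IP is on the ingress gateway Service
kubectl get svc istio-ingressgateway -n istio-system

# Test the HTTPS route using the gateway host for SNI/Host matching;
# <EXTERNAL-IP> is a placeholder for the address shown above
curl -vk --resolve mo-gateway.web-system.svc.cluster.local:443:<EXTERNAL-IP> \
  https://mo-gateway.web-system.svc.cluster.local/prometheus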

Related

Can't access an application deployed on AKS

I'm trying to access a simple ASP.NET Core application deployed on Azure AKS, but I'm doing something wrong.
This is the deployment .yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: aspnetapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: aspnet
  template:
    metadata:
      labels:
        app: aspnet
    spec:
      containers:
        - name: aspnetapp
          image: <my_image>
          resources:
            limits:
              cpu: "0.5"
              memory: 64Mi
          ports:
            - containerPort: 8080
and this is the service .yml
apiVersion: v1
kind: Service
metadata:
  name: aspnet-loadbalancer
spec:
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
  selector:
    name: aspnetapp
Everything seems deployed correctly
Another check I did was to enter the pod and run
curl http://localhost:80,
and the application is running correctly, but if I try to access the application from the browser using http://20.103.147.69 a timeout is returned.
What else could be wrong?
It seems that you do not have an Ingress Controller deployed on your AKS, as you have your application exposed directly. You will need one in order to get ingress to work.
To verify that your application is working, you can use port-forward and then access http://localhost:8080:
kubectl port-forward deployment/aspnetapp 8080:8080
But you should definitely install an ingress controller: here is a Workflow from MS to install ingress-nginx as the IC on your cluster.
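For reference, a minimal sketch of that install using Helm (repository and chart names are the upstream ingress-nginx defaults, not taken from the linked workflow):

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace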
You will then only expose the ingress-controller to the internet and could also specify the loadBalancerIP statically if you created the PublicIP in advance:
apiVersion: v1
kind: Service
metadata:
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-resource-group: myResourceGroup # only needed if the LB is in another RG
  name: ingress-nginx-controller
spec:
  loadBalancerIP: <YOUR_STATIC_IP>
  type: LoadBalancer
The Ingress Controller then will route incoming traffic to your application with an Ingress resource:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: minimal-ingress
spec:
  ingressClassName: nginx # ingress-nginx specific
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: test
                port:
                  number: 80
PS: Never expose your application directly to the internet, always use the ingress controller
In your Deployment, you configured your container to listen on port 8080. You need to set targetPort to 8080 in the Service definition.
Documentation

istio: use service registry to make internal HTTPS request

We are using Kubernetes (1.17.14-gke.1600) and Istio (1.7.4).
We have several deployments that need to make HTTPS requests to each other using the public DNS record (mydomain.com). The goal here is to make the HTTPS request internally instead of going out to the public internet and then coming back in.
We cannot change the host to the "internal" DNS name (e.g. my-svc.my-namespace.svc.cluster-domain.example) because sometimes the same host is returned to the client to make HTTP requests from the client's browser.
Our services are exposed over HTTP, so I understand that if we want to use the HTTPS scheme we need to pass through the Istio gateway.
Here is my VirtualService. By adding the mesh gateway I'm able to make internal HTTP requests with the public DNS, but this doesn't work with HTTPS:
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: myservice
spec:
  gateways:
    - istio-system/gateway
    - mesh
  hosts:
    - myservice.mydomain.com
  http:
    - match:
        - uri:
            prefix: /
      route:
        - destination:
            host: myservice
            port:
              number: 3000
            subset: v1
Here is the gateway:
apiVersion: v1
items:
  - apiVersion: networking.istio.io/v1beta1
    kind: Gateway
    metadata:
      name: gateway
      namespace: istio-system
    spec:
      selector:
        istio: ingressgateway
      servers:
        - hosts:
            - '*'
          port:
            name: http
            number: 80
            protocol: HTTP
          tls:
            httpsRedirect: true
        - hosts:
            - '*'
          port:
            name: https
            number: 443
            protocol: HTTPS
          tls:
            credentialName: ingress-cert
            mode: SIMPLE
I've figured out that one workaround to solve the problem is to use a ServiceEntry like this:
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: internal-https-redirect
spec:
  endpoints:
    - address: 10.43.2.170 # istio-ingressgateway ClusterIP
  hosts:
    - '*.mydomain.com'
  location: MESH_INTERNAL
  ports:
    - name: internal-redirect
      number: 443
      protocol: HTTPS
  resolution: STATIC
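For reference, the ClusterIP hard-coded above can be looked up like this (assuming the default istio-ingressgateway Service name):

kubectl get svc istio-ingressgateway -n istio-system -o jsonpath='{.spec.clusterIP}'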
But I'm not sure if that's the right way to do it or if it is considered bad practice.
Thank you

Loadbalancer IP and Ingress IP status is pending in kubernetes

I have created a Kubernetes cluster using two Azure Ubuntu VMs. I am able to deploy and access pods and deployments using the NodePort service type. I have also checked the pods' status in the kube-system namespace; all of them show as Running. But whenever I set the service type to LoadBalancer, the LoadBalancer IP is not created and its status always shows as pending. I have also created an Ingress controller for the Nginx service; still, it does not get an ingress Address. While initializing the Kubernetes master, I am using the following command.
kubeadm init
Below are the deployment, svc, and Ingress manifest files.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 80
  selector:
    app: nginx
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: test-ingress
spec:
  backend:
    serviceName: nginx
    servicePort: 80
$ kubectl describe svc nginx
Name:              nginx
Namespace:         default
Labels:            app=nginx
Annotations:       kubectl.kubernetes.io/last-applied-configuration:
                     {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"nginx"},"name":"nginx","namespace":"default"},"spec":{"p...
Selector:          app=nginx
Type:              ClusterIP
IP:                10.96.107.97
Port:              http  80/TCP
TargetPort:        80/TCP
Endpoints:         10.44.0.4:80,10.44.0.5:80,10.44.0.6:80
Session Affinity:  None
Events:            <none>
$ kubectl describe ingress nginx
Name:             test-ingress
Namespace:        default
Address:
Default backend:  nginx:80 (10.44.0.4:80,10.44.0.5:80,10.44.0.6:80)
Rules:
  Host  Path  Backends
  ----  ----  --------
  *     *     nginx:80 (10.44.0.4:80,10.44.0.5:80,10.44.0.6:80)
Annotations:
  kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"networking.k8s.io/v1beta1","kind":"Ingress","metadata":{"annotations":{},"name":"test-ingress","namespace":"default"},"spec":{"backend":{"serviceName":"nginx","servicePort":80}}}
Events:  <none>
Do we need to mention any IP ranges (private or public) of the VMs when running kubeadm init, or do we need to change any network settings in the Azure Ubuntu VMs?
As you created your own Kubernetes cluster rather than using one provided by AWS, Azure, or GCP, there is no load balancer integrated, which is why the IP status shows pending.
But you can work around this with an Ingress controller or by exposing the service directly through a NodePort, as in the sketch below.
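For example, a minimal sketch of the NodePort route using the nginx Service from the question (the node port value is whatever the cluster allocates):

# Switch the existing nginx Service to NodePort and read the allocated node port
kubectl patch svc nginx -p '{"spec": {"type": "NodePort"}}'
kubectl get svc nginx
# then browse to http://<node-ip>:<node-port> (placeholders for a VM IP and the allocated 3xxxx port)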
However, I also noticed that your nginx service uses the annotation service.beta.kubernetes.io/aws-load-balancer-type: nlb. You said you are using Azure; those annotations are platform specific, and that one is AWS-specific.
That said, if you would like to experiment directly with public IPs, you can define your service with externalIPs, provided a public IP is allocated to your node and ingress traffic is allowed to reach it:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 9376
  externalIPs:
    - 80.11.12.10
But a good approach to get this done is to use an ingress controller if you are planning to build your own Kubernetes cluster.
Hope this helps.

Ingress rule does not work with Service of type LoadBalancer

I am trying to add an ingress rule to an internal load balancer. As per the docs, it can redirect to a service. It works as long as the service is "ClusterIP" but goes into an infinite redirect when it is "LoadBalancer".
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: demo-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  tls:
    - hosts:
        - demo.azure.com
      secretName: aks-ingress-tls
  rules:
    - host: demo.azure.com
      http:
        paths:
          - path: /
            backend:
              serviceName: aks-helloworld
              servicePort: 80
          - path: /demo
            backend:
              serviceName: demo-backend
              servicePort: 80
https://demo.azure.com works but https://demo.azure.com/demo doesn't. The difference is that aks-helloworld is a ClusterIP while demo-backend is a LoadBalancer.
13:33 $ kubectl get services
NAME             TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)        AGE
aks-helloworld   ClusterIP      10.0.204.168   <none>         80/TCP         15m
kubernetes       ClusterIP      10.0.0.1       <none>         443/TCP        16h
demo-backend     LoadBalancer   10.0.198.251   23.99.128.86   80:30332/TCP   15h
For your issue, I don't think the problem is that one service has type ClusterIP and the other has type LoadBalancer. When traffic comes in through either of the two paths, it is all redirected to the service, in your case demo-backend.
See the result of the test on my side:
Access from the Internet:
I did not add TLS, but I think the traffic will all be redirected to the service whether it has TLS or not. I just changed the command with --set serviceType="LoadBalancer" when I installed the second application through helm. So you can check if there is something wrong with your steps.
But I don't think routing traffic to one service through both of these paths is a good idea. If you use TLS through the Ingress, it is not secure while the LoadBalancer path exists at the same time, because traffic through the LoadBalancer bypasses the TLS.
Update
With your comment, I think you need to create a deployment for your application, and then create a service for it, with a file like this:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: yourImage
          ports:
            - containerPort: 80
              name: myapp
---
apiVersion: v1
kind: Service
metadata:
  name: demo-backend
  labels:
    app: myapp
spec:
  type: ClusterIP
  selector:
    app: myapp
  ports:
    - port: 80
      name: http
The deployment is the basis of the application; the service just accepts the traffic for the pods. So I guess you are missing the deployment, which is why you cannot access your application.
Why are you exposing the Service as type "LoadBalancer" if you are using Ingress for the resource? You are essentially hitting the ingress load balancer and then hitting another service load balancer, which is probably what causes this redirect issue.
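If that is the cause, one way to test it (a sketch, not from the original answer) is to take the second load balancer out of the path so that only the ingress controller stays exposed:

# Change demo-backend back to ClusterIP; on some cluster versions you may also
# need to clear the allocated nodePort from spec.ports for this patch to be accepted
kubectl patch svc demo-backend -p '{"spec": {"type": "ClusterIP"}}'
kubectl get svc demo-backend   # EXTERNAL-IP should now show <none>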
The issue was caused by the following headers added by the ingress controller:
X-FORWARDED-PROTO: https
X-FORWARDED-PORT: 443
Answer https://stackoverflow.com/a/54880257/747456

Rancher / k8 / azure / Kubectl

I have a mysql pod in my cluster that I want to expose on a public IP. Therefore I changed it to be a LoadBalancer by running
kubectl edit svc mysql-mysql --namespace mysql
  release: mysql
  name: mysql-mysql
  namespace: mysql
  resourceVersion: "646616"
  selfLink: /api/v1/namespaces/mysql/services/mysql-mysql
  uid: cd1cce11-890c-11e8-90f5-869c0c4ba0b5
spec:
  clusterIP: 10.0.117.54
  externalTrafficPolicy: Cluster
  ports:
  - name: mysql
    nodePort: 31479
    port: 3306
    protocol: TCP
    targetPort: 3306
  selector:
    app: mysql-mysql
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 137.117.40.121
changing ClusterIP to LoadBalancer.
However, I can't seem to reach it with mysql -h 137.117.40.121 -uroot -p*****.
Anyone have any idea? Is it because I'm trying to forward it over TCP?
For your issue, you want to expose your mysql pod on a public IP, so you need to take a look at Ingress in Kubernetes. It's an API object that manages external access to the services in a cluster, typically HTTP. For Ingress, you need both an ingress controller and ingress rules. For more details, you can read the document I posted.
In Azure, you can get more details from HTTPS Ingress on Azure Kubernetes Service (AKS).
As pointed out by @aurelius, your config seems correct; it's possible that the traffic is getting blocked by your firewall rules.
Also make sure the cloud provider option is enabled for your cluster.
kubectl get svc -o wide will show the status of the LoadBalancer and the IP address allocated.
@charles-xu-msft, using Ingress is definitely an option, but there is nothing wrong with using a LoadBalancer kind of Service when the cloud provider is enabled for the Kubernetes cluster.
Just for reference, here is a test config:
apiVersion: v1
kind: Pod
metadata:
  name: mysql-pod
  labels:
    name: mysql-pod
spec:
  containers:
    - name: mysql
      image: mysql:5
      ports:
        - containerPort: 3306
      env:
        - name: MYSQL_ROOT_PASSWORD
          value: mysqlpassword
---
apiVersion: v1
kind: Service
metadata:
  name: test-mysql-lb
spec:
  type: LoadBalancer
  ports:
    - port: 3306
      targetPort: 3306
      protocol: TCP
  selector:
    name: mysql-pod
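Once the Service above gets an external IP, a quick connection check could look like this (the root password comes from the test config; the IP is a placeholder):

kubectl get svc test-mysql-lb        # wait until EXTERNAL-IP is populated
mysql -h <EXTERNAL-IP> -uroot -pmysqlpassword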
