I need some help trying to achieve these two things:
Setting up my nginx ingress controller on AKS without using Helm (I don't want to use it)
Making my ingress use an already reserved IP address with the resource name 'kubernetes-ip'
For the first step I'm following, with no luck, this documentation: https://kubernetes.github.io/ingress-nginx/deploy/#azure
And I didn't forget the mandatory.yaml!
My step-by-step guide:
I have a basic kubernetes cluster with two pods, as follows:
NAME READY STATUS RESTARTS AGE
activemq-demo-7b769bcc4-jtsj5 1/1 Running 0 55m
ubuntu-dcb9c6ccb-wkz2w 1/1 Running 0 2d
At this point I want to add my ingress so I can reach the demo activemq using my public ip address a.b.c.d
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.30.0/deploy/static/mandatory.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.30.0/deploy/static/provider/cloud-generic.yaml
After applying those I run kubectl get services -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx LoadBalancer 10.0.186.143 zz.zz.zzz.zzz 80:32703/TCP,443:30584/TCP 17s
Which is fine! It seems to be working, right? At this point I should be able to connect to the external IP address on any of those ports, but I can't :(
$ telnet zz.zz.zzz.zzz 80
Trying zz.zz.zzz.zzz...
(a long time later...)
telnet: Unable to connect to remote host: Resource temporarily unavailable
$ telnet zz.zz.zzz.zzz 443
Trying zz.zz.zzz.zzz...
(a long time later...)
telnet: Unable to connect to remote host: Resource temporarily unavailable
I know that the mandatory.yaml doesn't know anything about my reserved IP address, but I'm ignoring that for now because I have a bigger problem: I can't connect.
I'm also ignoring the connection problem for a minute to test whether I need the actual ingress running, so I run kubectl apply -f ingress.yaml. (ingress.yaml contains the following:)
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress1
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/cors-allow-origin: "*"
    nginx.ingress.kubernetes.io/cors-allow-credentials: "true"
    nginx.ingress.kubernetes.io/cors-allow-methods: "*"
spec:
  tls:
  - hosts:
    - amq-test.mydomain.com
    secretName: my-certificate
  rules:
  - host: amq-test.mydomain.com
    http:
      paths:
      - path: /*
        backend:
          serviceName: activemq-demo-service
          servicePort: 8161
After that, if I run kubectl get ing I get:
NAME HOSTS ADDRESS PORTS AGE
ingress1 amq-test.mydomain.com zz.zz.zzz.zzz 80, 443 66s
But it's the same, I can't connect:
FROM WSL
$ telnet zz.zz.zzz.zzz 80
Trying zz.zz.zzz.zzz...
(a long time later...)
telnet: Unable to connect to remote host: Resource temporarily unavailable
$ telnet zz.zz.zzz.zzz 443
Trying zz.zz.zzz.zzz...
(a long time later...)
telnet: Unable to connect to remote host: Resource temporarily unavailable
Honestly I'm not sure what I'm missing; this should be very straightforward with the official documentation. I don't know if I have to enable something else in Azure...
Thanks for reading, any help will be appreciated.
EDIT 3/7/2020: activemq-demo Deployment & Service, as requested by @Jean-Philippe Bond
apiVersion: apps/v1
kind: Deployment
metadata:
  name: activemq-demo
  labels:
    app: activemq-demo
    tier: backend
spec:
  revisionHistoryLimit: 1
  replicas: 1
  selector:
    matchLabels:
      app: activemq-demo
  template:
    metadata:
      labels:
        app: activemq-demo
        tier: backend
    spec:
      containers:
      - name: activemq-demo
        image: myproject.azurecr.io/activemq-slim:5.15.9-3
        imagePullPolicy: "Always"
        command: ["/start.sh"]
        args: ["somename"]
        env:
        - name: LANG
          value: "C.UTF-8"
        ports:
        - containerPort: 8161
        - containerPort: 61616
        livenessProbe:
          exec:
            command:
            - /isAlive.sh
            - somename
          initialDelaySeconds: 15
          periodSeconds: 5
---
apiVersion: v1
kind: Service
metadata:
  name: activemq-demo-service
  labels:
    tier: controller
spec:
  type: NodePort
  ports:
  - name: http
    protocol: TCP
    port: 8161
    targetPort: 8161
  - name: acceptor
    protocol: TCP
    port: 61616
    targetPort: 61616
  selector:
    app: activemq-demo
    tier: backend
Please note that the only thing I want to access from outside is the HTTP web service that ActiveMQ provides on port 8161 by default
EDIT 3/9/2020: @HelloWorld request
telnet from WSL
$ telnet zz.zz.zzz.zzz 80
Trying zz.zz.zzz.zzz...
(a long time later...)
telnet: Unable to connect to remote host: Resource temporarily unavailable
$ telnet zz.zz.zzz.zzz 443
Trying zz.zz.zzz.zzz...
(a long time later...)
telnet: Unable to connect to remote host: Resource temporarily unavailable
telnet from macos
$ telnet zz.zz.zzz.zzz 80
Trying zz.zz.zzz.zzz...
telnet: connect to address zz.zz.zzz.zzz: Operation timed out
telnet: Unable to connect to remote host
$ telnet zz.zz.zzz.zzz 443
Trying zz.zz.zzz.zzz...
telnet: connect to address zz.zz.zzz.zzz: Operation timed out
telnet: Unable to connect to remote host
curl from macos - HTTP
$ curl -v -X GET http://amq-test.mydomain.com
Note: Unnecessary use of -X or --request, GET is already inferred.
* Trying zz.zz.zzz.zzz:80...
* TCP_NODELAY set
* Connection failed
* connect to zz.zz.zzz.zzz port 80 failed: Operation timed out
* Failed to connect to amq-test.mydomain.com port 80: Operation timed out
* Closing connection 0
curl: (7) Failed to connect to amq-test.mydomain.com port 80: Operation timed out
curl from macos - HTTPS
$ curl -v -X GET https://amq-test.mydomain.com
Note: Unnecessary use of -X or --request, GET is already inferred.
* Trying zz.zz.zzz.zzz:443...
* TCP_NODELAY set
* Connection failed
* connect to zz.zz.zzz.zzz port 443 failed: Operation timed out
* Failed to connect to amq-test.mydomain.com port 443: Operation timed out
* Closing connection 0
curl: (7) Failed to connect to amq-test.mydomain.com port 443: Operation timed out
EDIT 3/10/2020: Starting from scratch (multiple times)
I deleted the whole thing to start "fresh", without much more luck, but I have noticed some things that will hopefully trigger some thoughts out there...
The basics:
I already have a Resource Group that I'm using: MyResourceGroup
I already have a Virtual Network with a Subnet MyVirtualNet
I have already reserved a public IP address that I want to use with my ingress. It's a static IP that I want to prevent from changing (or being deleted) until the end of time: A.B.C.D
I already have my own domain that I have routed to A.B.C.D: amq-test.mydomain.com
My procedure:
I create a new Kubernetes Service using the Azure web interface, making sure to select my Resource Group as well as my Virtual Network and Subnet
As soon as I create the base Kubernetes Service, I notice that it creates a LoadBalancer with a different public IP address that I can't control. I assume that's because that IP address will be used as the main entry point for things like kubectl and remote management.
With the cluster live, I create the basic ActiveMQ image that I shared previously
Now, I start with the ingress-nginx and deploy the mandatory.yaml
And then I add the Azure service yaml here, but this time with two modifications to make it use my public IP address. These modifications were taken from the Microsoft documentation:
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-resource-group: MyResourceGroup
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  loadBalancerIP: A.B.C.D
  externalTrafficPolicy: Local
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
With that running without errors, everything is as expected:
$ kubectl get svc -n ingress-nginx -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
ingress-nginx LoadBalancer 10.0.30.100 A.B.C.D 80:30682/TCP,443:31002/TCP 35m app.kubernetes.io/name=ingress-nginx,app.kubernetes.io/part-of=ingress-nginx
Then I deploy the final part, my ingress.yaml:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress1
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/cors-allow-origin: "*"
    nginx.ingress.kubernetes.io/cors-allow-credentials: "true"
    nginx.ingress.kubernetes.io/cors-allow-methods: "*"
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: activemq-demo-service
          servicePort: 8161
This runs without issues, and once it's up I check it with:
$ kubectl describe ingress
Name: ingress1
Namespace: default
Address: A.B.C.D
Default backend: default-http-backend:80 (<none>)
Rules:
Host Path Backends
---- ---- --------
*
/* queue-callbacks-service:8161 (10.94.20.14:8161)
Annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/cors-allow-credentials: true
nginx.ingress.kubernetes.io/cors-allow-methods: *
nginx.ingress.kubernetes.io/cors-allow-origin: *
nginx.ingress.kubernetes.io/enable-cors: true
kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"kubernetes.io/ingress.class":"nginx","nginx.ingress.kubernetes.io/cors-allow-credentials":"true","nginx.ingress.kubernetes.io/cors-allow-methods":"*","nginx.ingress.kubernetes.io/cors-allow-origin":"*","nginx.ingress.kubernetes.io/enable-cors":"true"},"name":"ingress1","namespace":"default"},"spec":{"rules":[{"http":{"paths":[{"backend":{"serviceName":"queue-callbacks-service","servicePort":8161},"path":"/callbacks/*"}]}}]}}
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal CREATE 35m nginx-ingress-controller Ingress default/ingress1
Normal UPDATE 34m nginx-ingress-controller Ingress default/ingress1
Everything seems to be perfect, but I just can't connect to the IP or host on any port (80, 443).
Hope this helps
Just FYI I just tried with helm following this documentation and I got the same result
EDIT 4/15/2020: not done yet
I was working on another project, so this one was paused for a moment; I'm returning to it now. Unfortunately it's still not working. I opened a ticket with Microsoft and I'm waiting on it.
However, we noticed that ports 80 and 443 are being filtered by some firewall or something. We are not sure what is causing this, as we have inbound rules on our SG opening ports 80 and 443 from * on any protocol.
$ nmap -Pn zz.zz.zz.zzz -p 80,443
Starting Nmap 7.80 ( https://nmap.org ) at 2020-04-15 11:01 Pacific SA Standard Time
Nmap scan report for zz.zz.zz.zzz
Host is up.
PORT STATE SERVICE
80/tcp filtered http
443/tcp filtered https
Nmap done: 1 IP address (1 host up) scanned in 4.62 seconds
ActiveMQ is accessible via TCP, not HTTP, and Kubernetes Ingresses were not built to support TCP services. With that said, Nginx does support TCP load balancing if you really want to use it, but you'll not be able to use an Ingress rule based on the host like you did, since that is reserved for HTTP/HTTPS. Your best bet would probably be to use the Azure L4 load balancer directly instead of going through the ingress controller.
If you want to use Nginx, you'll need to modify the yaml in mandatory.yaml to expose the ActiveMQ port on the Nginx deployment:
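If you go the L4 route, a minimal sketch of such a Service (the name activemq-demo-lb is a placeholder; the selector labels are taken from the activemq-demo manifests in the question):

```yaml
# Hypothetical L4 alternative: expose ActiveMQ through the Azure load
# balancer directly, with no ingress controller in the path.
apiVersion: v1
kind: Service
metadata:
  name: activemq-demo-lb   # placeholder name
spec:
  type: LoadBalancer
  selector:
    app: activemq-demo
    tier: backend
  ports:
  - name: http
    port: 80          # public port on the Azure load balancer
    targetPort: 8161  # ActiveMQ web console port from the question
    protocol: TCP
```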
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
data:
  "9000": "default/activemq-demo-service:8161"  # ConfigMap data keys must be strings, hence the quotes
You'll also need to add the tcp-services port on the Service resource. For example:
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP
  - name: https
    port: 443
    targetPort: 443
    protocol: TCP
  - name: proxied-tcp-9000
    port: 9000
    targetPort: 9000
    protocol: TCP
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
Here is the documentation for TCP/UDP support in the Nginx Ingress.
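Note that the controller only watches that ConfigMap if it is started with the --tcp-services-configmap flag; depending on the version, mandatory.yaml may not pass it by default. A sketch of the relevant args on the controller Deployment:

```yaml
# Excerpt only (not a complete Deployment): the flag points at the
# namespace/name of the tcp-services ConfigMap shown above.
args:
- /nginx-ingress-controller
- --tcp-services-configmap=ingress-nginx/tcp-services
```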
Some time ago I wrote a story that might be useful to you: https://medium.com/cooking-with-azure/tcp-load-balancing-with-ingress-in-aks-702ac93f2246
Related
Situation: I have an AKS cluster that I'm trying to load my project into from localhost.
When I launch my Ansible scripts to get the project running, I need to log in to openfaas but I encounter this error:
> ...\nCannot connect to OpenFaaS on URL: https:(...).com/faas. Get \"https://(..).com/faas/system/functions\": dial tcp
> xx.xxx.xxx.xxx:xxx: i/o timeout (Client.Timeout exceeded while
> awaiting headers)", "stdout_lines": ["WARNING! Using --password is
> insecure, consider using: cat ~/faas_pass.txt | faas-cli login -u user
> --password-stdin", "Calling the OpenFaaS server to validate the credentials...", "Cannot connect to OpenFaaS on URL:
> https://(...).com/faas. Get
> \"https://(...).com/faas/system/functions\": dial tcp
> xx.xxx.xxx.xxx:xxx: i/o timeout (Client.Timeout exceeded while awaiting headers)"]}
I have a PUBLIC Load Balancer that I created from a yaml file, and it's linked to the DNS (...).com / the IP address of the created LB.
My loadbalancer.yml file:
apiVersion: v1
kind: Service
metadata:
  name: loadbalancer
spec:
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
My ingress file:
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
  name: openfaas
spec:
  rules:
  - host: (...).com
    http:
      paths:
      - backend:
          service:
            name: openfaas
            port:
              number: 80
        path: /faas
        pathType: Prefix
  tls:
  - hosts:
    - (...).com
    secretName: (...).com
---
I haven't found many tutorials covering the same situation; most of them use internal Load Balancers.
Is it Azure that's blocking the communication? A firewall problem?
Do I need to make my LB internal instead of external?
I saw a source online that stated this:
If you expose a service through the normal LoadBalancer with a public
ip, it will not be accessible because the traffic that has not been
routed through the azure firewall will be dropped on the way out.
Therefore you need to create your service with a fixed internal ip,
internal LoadBalancer and route the traffic through the azure firewall
both for outgoing and incoming traffic.
https://denniszielke.medium.com/setting-up-azure-firewall-for-analysing-outgoing-traffic-in-aks-55759d188039
But I'm wondering if it's possible to bypass that...
Any help is greatly appreciated!
I found out afterwards that Azure already provides an LB, so you do not need to create one. It was not a firewall issue.
Go to "Load Balancing" -> "Frontend IP Configuration" and choose the appropriate IP.
I have a Kubernetes cluster inside Azure which holds some services and pods. I want to make those pods communicate with each other but when I try to execute a CURL/WGET from one to another, a timeout occurs.
The service YAMLs can be found below:
First service:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: core-node
  name: core-node
spec:
  ports:
  - name: "9001"
    port: 9001
    targetPort: 8080
  selector:
    app: core-node
status:
  loadBalancer: {}
Second service:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: core-python
  name: core-python
spec:
  ports:
  - name: "9002"
    port: 9002
    targetPort: 8080
  selector:
    app: core-python
status:
  loadBalancer: {}
When I connect to the "core-node" pod through sh, for example, and try to execute the following command, it times out. The same happens from the "core-python" pod to the other one.
wget core-python:9002
wget: can't connect to remote host (some ip): Operation timed out
I also tried using the IP directly and switching from ClusterIP to LoadBalancer, but the same thing happens. I have some proxy configuration as well, but it is applied mainly at the Ingress level and should not affect communication between pods via service names, at least as far as I know.
Pods are in running status and their APIs can be accessed through the public URLs exposed through Ingress.
#EDIT1:
I also connected to one of the pods and checked whether port 8080 is listening; it seems fine from my perspective.
netstat -nat | grep LISTEN
tcp 0 0 0.0.0.0:8080 0.0.0.0:* LISTEN
#EDIT2:
When I do an endpoints check for this service, it returns the following:
kubectl get ep core-node
NAME ENDPOINTS AGE
core-node 10.x.x.x:8080 37m
If I try to wget this IP from the other pod, it responds:
wget 10.x.x.x:8080
Connecting to 10.x.x.x:8080 (10.x.x.x:8080)
wget: server returned error: HTTP/1.1 404 Not Found
I have a mysql pod in my cluster that I want to expose on a public IP. Therefore I changed it to be a LoadBalancer by doing
kubectl edit svc mysql-mysql --namespace mysql
metadata:
  labels:
    release: mysql
  name: mysql-mysql
  namespace: mysql
  resourceVersion: "646616"
  selfLink: /api/v1/namespaces/mysql/services/mysql-mysql
  uid: cd1cce11-890c-11e8-90f5-869c0c4ba0b5
spec:
  clusterIP: 10.0.117.54
  externalTrafficPolicy: Cluster
  ports:
  - name: mysql
    nodePort: 31479
    port: 3306
    protocol: TCP
    targetPort: 3306
  selector:
    app: mysql-mysql
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: 137.117.40.121
changing ClusterIP to LoadBalancer.
However, I can't seem to reach it with mysql -h 137.117.40.121 -uroot -p*****
Anyone have any idea? Is it because I'm trying to forward it over TCP?
For your issue: you want to expose your mysql pod on a public IP, so you should take a look at Ingress in Kubernetes. It's an API object that manages external access to the services in a cluster, typically HTTP. For Ingress, you need both an ingress controller and ingress rules. For more details, you can read the document I posted.
In Azure, you can get more details from HTTPS Ingress on Azure Kubernetes Service (AKS).
As pointed out by @aurelius, your config seems correct; it's possible that the traffic is getting blocked by your firewall rules.
Also make sure, the cloud provider option is enabled for your cluster.
kubectl get svc -o wide will show the status of the LoadBalancer and the IP address allocated.
@charles-xu-msft: using Ingress is definitely an option, but there is nothing wrong with using a LoadBalancer-type Service when the cloud provider is enabled for the Kubernetes cluster.
Just for reference, here is test config:
apiVersion: v1
kind: Pod
metadata:
  name: mysql-pod
  labels:
    name: mysql-pod
spec:
  containers:
  - name: mysql          # container names cannot contain ':'; the tag belongs on the image
    image: mysql:5
    ports:
    - containerPort: 3306
    env:
    - name: MYSQL_ROOT_PASSWORD
      value: mysqlpassword
---
apiVersion: v1
kind: Service
metadata:
  name: test-mysql-lb
spec:
  type: LoadBalancer
  ports:
  - port: 3306
    targetPort: 3306
    protocol: TCP
  selector:
    name: mysql-pod
I am very new to Azure, Kubernetes, and even Docker itself, and I'm playing with the system to learn and to evaluate it for a possible deployment later. So far I have dockerized my services and successfully deployed them, and made the web frontend publicly visible using a service with type: LoadBalancer.
Now I would like to add TLS termination and have learned that for that I am supposed to configure an ingress controller with the most commonly mentioned one being nginx-ingress-controller.
Strictly monkeying examples, and afterwards trying to read up on the docs, I have arrived at a setup that looks interesting but does not work. Maybe some kind soul can point out my mistakes and/or give me pointers on how to debug this and where to read more about it.
I have kubectl apply'd the following file:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: default-http-backend-deployment
  namespace: kube-system
spec:
  template:
    metadata:
      labels:
        app: default-http-backend
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - name: default-http-backend
        image: gcr.io/google_containers/defaultbackend:1.0
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend-service
  namespace: kube-system
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: default-http-backend
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: nginx-ingress-controller-conf
  namespace: kube-system
data:
  # enable-vts-status: 'true'
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nginx-ingress-controller-deployment
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx-ingress-controller
    spec:
      terminationGracePeriodSeconds: 60
      containers:
      - image: gcr.io/google_containers/nginx-ingress-controller:0.9.0-beta.13
        name: nginx-ingress-controller
        ports:
        - containerPort: 80
          hostPort: 80
        - containerPort: 443
          hostPort: 443
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
        - --configmap=$(POD_NAMESPACE)/nginx-ingress-controller-conf
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller-service
  namespace: kube-system
spec:
  ports:
  - name: https
    port: 443
    protocol: TCP
    targetPort: 443
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx-ingress-controller
  sessionAffinity: None
  type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
  namespace: kube-system
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - host:
    http:
      paths:
      - path: /
        backend:
          serviceName: default-http-backend-service
          servicePort: 80
This gave me two pods:
c:\Projects\Release-Management\Azure>kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
<some lines removed>
kube-system default-http-backend-deployment-3108185104-68xnk 1/1 Running 0 39m
<some lines removed>
kube-system nginx-ingress-controller-deployment-4106313651-v7p03 1/1 Running 0 24s
Also two new services. Note that I have also configured default-http-backend-service with type: LoadBalancer; this is for debugging only. I have also included my web frontend, which is called webcms:
c:\Projects\Release-Management\Azure>kubectl get services --all-namespaces
NAMESPACE NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
<some lines removed>
default webcms 10.0.105.59 13.94.250.173 80:31400/TCP 23h
<some lines removed>
kube-system default-http-backend-service 10.0.106.233 13.80.68.38 80:31639/TCP 41m
kube-system nginx-ingress-controller-service 10.0.33.80 13.95.30.39 443:31444/TCP,80:31452/TCP 37m
And finally an ingress:
c:\Projects\Release-Management\Azure>kubectl get ingress --all-namespaces
NAMESPACE NAME HOSTS ADDRESS PORTS AGE
kube-system nginx-ingress * 10.240.0.5 80 39m
No errors that I can immediately detect. I then went to the Azure Dashboard and looked at the load balancer and its rules, and that looks good to my (seriously untrained) eye. I did not touch these; the load balancer and the rules were created by the system. There is a screenshot here:
https://qvwx.de/tmp/azure-loadbalancer.png
But unfortunately it does not work. I can curl my webcms-service:
c:\Projects\Release-Management\Azure>curl -v http://13.94.250.173
* Rebuilt URL to: http://13.94.250.173/
* Trying 13.94.250.173...
* TCP_NODELAY set
* Connected to 13.94.250.173 (13.94.250.173) port 80 (#0)
<more lines removed, success>
But neither default-http-backend nor the ingress work:
c:\Projects\Release-Management\Azure>curl -v http://13.80.68.38
* Rebuilt URL to: http://13.80.68.38/
* Trying 13.80.68.38...
* TCP_NODELAY set
* connect to 13.80.68.38 port 80 failed: Timed out
* Failed to connect to 13.80.68.38 port 80: Timed out
* Closing connection 0
curl: (7) Failed to connect to 13.80.68.38 port 80: Timed out
(ingress gives the same with a different IP)
If you read this far: Thank you for your time and I would appreciate any hints.
Marian
Kind of a trivial thing, but it'll save you some $$$: the default-http-backend is not designed to be outside facing, and thus should not have type: LoadBalancer -- it is merely designed to 404 so the Ingress controller can universally /dev/null traffic for Pod-less Services.
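In other words, the backend Service can stay internal; a sketch of the same Service with type: LoadBalancer simply dropped (it then defaults to ClusterIP):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: default-http-backend-service
  namespace: kube-system
spec:
  # no "type:" line, so this defaults to ClusterIP and
  # allocates no public Azure IP
  ports:
  - port: 80
    targetPort: 80
  selector:
    app: default-http-backend
```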
Moving slightly up the triviality ladder, and for extreme clarity: I do not think what you have is wrong, but I did want to offer you something to decide if you want to change. Typically the contract for a Pod's container is to give an ideally natural-language name to the port ("http", "https", "prometheus", whatever) that maps onto the port of the underlying image. Then set targetPort: in the Service to that name and not the number, which offers the container the ability to move the port number without breaking the Service-to-Pod contract. The nginx-ingress Deployment's containers:ports: agrees with me on this one.
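A sketch of that naming contract (the names example, web, and http here are arbitrary illustrations, not from your manifests):

```yaml
# The container names its port "http"; the Service's targetPort refers to
# that name, so the image can change the number without breaking the Service.
apiVersion: v1
kind: Pod
metadata:
  name: example
  labels:
    app: example
spec:
  containers:
  - name: web
    image: gcr.io/google_containers/defaultbackend:1.0
    ports:
    - name: http
      containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: example
spec:
  selector:
    app: example
  ports:
  - port: 80
    targetPort: http   # the port name, not the number
```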
Now, let's get into the parts that may be contributing to your system not behaving as you wish.
I can't prove it right now, but the presence of containers:hostPort: is suspicious without hostNetwork: true. I'm genuinely surprised kubectl didn't whine, because those config combinations are a little weird.
I guess the troubleshooting step would be to get on the Node (that is, something within the cluster which is not a Pod -- you could do it with a separate VM within the same subnet as your Node, too, if you wish) and then curl to port 31452 of the Node upon which the nginx-ingress-controller Pod is running.
kubectl get nodes will list all available Nodes, and
kubectl get -o json pod nginx-ingress-controller-deployment-4106313651-v7p03 | jq -r '.status.hostIP' should produce the specific VM's IP address, if you don't already know it (getting a single named Pod returns one object, not a list). Err, I just realized from your prompt you likely don't have jq -- but I don't know PowerShell well enough to know its JSON-querying syntax.
Then, from any Node: curl -v http://${that_host_IP_value}:31452 and see what materializes. It may be something, or it may be the same "wha?!" that the LoadBalancer is giving you.
As for the Ingress resource specifically, again default-http-backend is not supposed to have an Ingress resource -- I don't know if it hurts anything because I've never tried it, but I'd also bet $1 it is not helping your situation, either.
Since you already have a known working Service with default:webcms, I would recommend creating an Ingress resource in the default namespace with pretty much exactly what your current Ingress resource is, but pointed at webcms instead of default-http-backend. That way your Ingress controller will actually have something to target which isn't the default backend.
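Such an Ingress might look like this (a sketch, assuming webcms serves plain HTTP on port 80 as the kubectl output above suggests; webcms-ingress is a placeholder name):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: webcms-ingress   # placeholder name
  namespace: default
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: webcms
          servicePort: 80
```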
If you haven't already seen it, adding --v=2 causes the Pod to emit the actual diff of its nginx config changes, which can be unbelievably helpful in tracking down config misunderstandings.
I'm so sorry that you're having to do battle with Azure, Ingress controllers, and rough-around-the-edges documentation for your first contact with Kubernetes. It really is amazing when you get all the things set up correctly, but it is a pretty complex piece of machinery, for sure.
As an experiment I'm trying to run a docker container on Azure using the Azure Container Service and Kubernetes as the orchestrator. I'm running the official nginx image. Here are the steps I am taking:
az group create --name test-group --location westus
az acs create --orchestrator-type=kubernetes --resource-group=test-group --name=k8s-cluster --generate-ssh-keys
I created Kubernetes deployment and service files from a docker compose file using Kompose.
deployment file
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    kompose.service.type: LoadBalancer
  creationTimestamp: null
  labels:
    io.kompose.service: test
  name: test
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        io.kompose.service: test
    spec:
      containers:
      - image: nginx:latest
        name: test
        ports:
        - containerPort: 80
        resources: {}
      restartPolicy: Always
status: {}
service file
apiVersion: v1
kind: Service
metadata:
  annotations:
    kompose.service.type: LoadBalancer
  creationTimestamp: null
  labels:
    io.kompose.service: test
  name: test
spec:
  ports:
  - name: "80"
    port: 80
    targetPort: 80
  selector:
    io.kompose.service: test
  type: LoadBalancer
status:
  loadBalancer: {}
I can then start everything up:
kubectl create -f test-service.yaml,test-deployment.yaml
Once an IP has been exposed, I assign a DNS prefix to it so I can access my running container like so: http://nginx-test.westus.cloudapp.azure.com/.
My question is: how can I access the service over https, at https://nginx-test.westus.cloudapp.azure.com/?
I don't think I'm supposed to configure nginx for https, since the certificate is not mine. I've tried changing the load balancer to send 443 traffic to port 80, but I receive a timeout error.
I tried mapping port 443 to port 80 in my Kubernetes service config.
ports:
- name: "443"
  port: 443
  targetPort: 80
But that results in:
SSL peer was not expecting a handshake message it received. Error code: SSL_ERROR_HANDSHAKE_UNEXPECTED_ALERT
How can I view my running container at https://nginx-test.westus.cloudapp.azure.com/?
If I understand it correctly, I think you are looking for the Nginx Ingress controller.
If we need TLS termination on Kubernetes, we can use an ingress controller; on Azure we can use the Nginx Ingress controller.
To achieve this, we can follow these steps:
1. Deploy the Nginx Ingress controller
2. Create TLS certificates
3. Deploy a test http service
4. Configure TLS termination
For more information about configuring the Nginx Ingress Controller for TLS termination on Kubernetes on Azure, please refer to this blog.
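As a rough sketch of step 4, the Ingress references a TLS Secret holding your certificate and key (tls-secret and test-http below are placeholder names, not from the question):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: tls-test
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  tls:
  - hosts:
    - nginx-test.westus.cloudapp.azure.com
    secretName: tls-secret   # e.g. created with: kubectl create secret tls tls-secret --cert=... --key=...
  rules:
  - host: nginx-test.westus.cloudapp.azure.com
    http:
      paths:
      - path: /
        backend:
          serviceName: test-http   # placeholder backend service
          servicePort: 80
```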
root@k8s-master-6F403744-0:~/ingress/examples/deployment/nginx# kubectl get services --namespace kube-system -w
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default-http-backend 10.0.113.185 <none> 80/TCP 42m
heapster 10.0.4.232 <none> 80/TCP 1h
kube-dns 10.0.0.10 <none> 53/UDP,53/TCP 1h
kubernetes-dashboard 10.0.237.125 <nodes> 80:32229/TCP 1h
nginx-ingress-ssl 10.0.92.57 40.71.37.243 443:30215/TCP 13m