AKS - public Load Balancer / Openfaas - "Cannot connect to OpenFaaS on URL: https://..." - azure

Situation: I have an AKS cluster that I'm trying to deploy my project to from localhost.
When I launch my Ansible scripts to get the project running, I need to log in to OpenFaaS, but I encounter this error:
> ...\nCannot connect to OpenFaaS on URL: https:(...).com/faas. Get \"https://(..).com/faas/system/functions\": dial tcp
> xx.xxx.xxx.xxx:xxx: i/o timeout (Client.Timeout exceeded while
> awaiting headers)", "stdout_lines": ["WARNING! Using --password is
> insecure, consider using: cat ~/faas_pass.txt | faas-cli login -u user
> --password-stdin", "Calling the OpenFaaS server to validate the credentials...", "Cannot connect to OpenFaaS on URL:
> https://(...).com/faas. Get
> \"https://(...).com/faas/system/functions\": dial tcp
> xx.xxx.xxx.xxx:xxx: i/o timeout (Client.Timeout exceeded while awaiting headers)"]}
I have a PUBLIC Load Balancer I created from a yaml file, and it's linked to the DNS name (...).com / the IP address of the created LB.
My loadbalancer.yml file:
apiVersion: v1
kind: Service
metadata:
  name: loadbalancer
spec:
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
My ingress file:
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /$2
  name: openfaas
spec:
  rules:
  - host: (...).com
    http:
      paths:
      - backend:
          service:
            name: openfaas
            port:
              number: 80
        path: /faas
        pathType: Prefix
  tls:
  - hosts:
    - (...).com
    secretName: (...).com
---
I haven't found many tutorials covering this situation; the ones I found use internal load balancers.
Is Azure blocking the communication? Is it a firewall problem?
Do I need to make my LB internal instead of external?
I saw a source online that stated this:
If you expose a service through the normal LoadBalancer with a public
ip, it will not be accessible because the traffic that has not been
routed through the azure firewall will be dropped on the way out.
Therefore you need to create your service with a fixed internal ip,
internal LoadBalancer and route the traffic through the azure firewall
both for outgoing and incoming traffic.
https://denniszielke.medium.com/setting-up-azure-firewall-for-analysing-outgoing-traffic-in-aks-55759d188039
But I'm wondering if it's possible to bypass that.
Any help is greatly appreciated!

I found out afterwards that Azure already provides a load balancer for the cluster, so you do not need to create one. It was not a firewall issue.
Go to "Load Balancing" -> "Frontend IP Configuration" and choose the appropriate IP.
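It's also worth noting that the loadbalancer.yml above has no selector, so the Service it creates gets no endpoints and inbound traffic has nowhere to go. If you do create your own LoadBalancer Service rather than reusing the one AKS provisions, a minimal sketch might look like this (the selector labels assume the standard ingress-nginx manifests; adapt them to whichever pods should receive the traffic):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: loadbalancer
  namespace: ingress-nginx
spec:
  type: LoadBalancer
  # A Service without a selector gets no endpoints; these labels
  # are an assumption based on the standard ingress-nginx deployment.
  selector:
    app.kubernetes.io/name: ingress-nginx
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 80
  - name: https
    protocol: TCP
    port: 443
    targetPort: 443
```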

Related

AKS can't modify AGIC on ingress creation due to the policy

I've just finished setting up AKS with AGIC and using Azure CNI. I'm trying to deploy NGINX to test if I set the AKS up correctly with the following configuration:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
    kubernetes.io/ingress.allow-http: "false"
    appgw.ingress.kubernetes.io/use-private-ip: "false"
    appgw.ingress.kubernetes.io/override-frontend-port: "443"
spec:
  tls:
  - hosts:
    - my.domain.com
    secretName: aks-ingress-tls
  rules:
  - host: my.domain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-service
            port:
              number: 80
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      component: nginx
  template:
    metadata:
      labels:
        component: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: ClusterIP
  selector:
    component: nginx
  ports:
  - port: 80
    protocol: TCP
There's no error or any other log message on applying the above configuration.
> k apply -f nginx-test.yml
deployment.apps/nginx-deployment created
service/nginx-service created
ingress.networking.k8s.io/nginx-ingress created
But after a further investigation in the Application Gateway I found these entries in the Activity log popped up at the same time I applied the said configuration.
Further details in one of the entries is as follows:
Operation name: Create or Update Application Gateway
Error code: RequestDisallowedByPolicy
Message: Resource 'my-application-gateway' was disallowed by policy.
[
  {
    "policyAssignment": {
      "name": "Encryption In Transit",
      "id": "/providers/Microsoft.Management/managementGroups/***/providers/Microsoft.Authorization/policyAssignments/EncryptionInTransit"
    },
    "policyDefinition": {
      "name": "HTTPS protocol only on Application Gateway listeners",
      "id": "/providers/microsoft.management/managementgroups/***/providers/Microsoft.Authorization/policyDefinitions/HttpsOnly_App_Gateways"
    },
    "policySetDefinition": {
      "name": "Encryption In Transit",
      "id": "/providers/Microsoft.Management/managementgroups/***/providers/Microsoft.Authorization/policySetDefinitions/EncryptionInTransit"
    }
  }
]
My organization has a policy to enforce TLS, but from my configuration I'm not sure what I did wrong, as I have already configured the ingress to only use HTTPS and also have a certificate (from the secret) installed.
I'm not sure where to look and wish someone could guide me in the correct direction. Thanks!
• As you said, your organization has a policy enforcing TLS for encrypted communication over HTTPS. When you create the NGINX deployment through the posted YAML, the nginx application connects to the Application Gateway ingress controller over port 80, which is reserved for HTTP. Since the ingress also disallows private IPs with AGIC and overrides the frontend port to 443, the gateway ends up serving 'my.domain.com' through a port-80 listener without the SSL/TLS certificate, which the policy rejects.
I would therefore suggest configuring the NGINX application with 443 as the frontend port and ensuring 'SSL redirection' is enabled; with that in place, the deployed NGINX application should no longer hit the policy restriction and fail. Also refer to the listeners in the Application Gateway and load balancer when provisioning AGIC for an AKS cluster.
For more detailed information on deploying an NGINX application in an AKS cluster, refer to this documentation:
https://learn.microsoft.com/en-us/azure/aks/ingress-basic?tabs=azure-cli
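As a hedged sketch of the suggested fix, based on the YAML from the question: AGIC provides an appgw.ingress.kubernetes.io/ssl-redirect annotation that redirects HTTP to the HTTPS listener, avoiding the plain port-80 listener the policy rejects (treat the exact combination of annotations as an assumption to verify against your gateway):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-ingress
  annotations:
    kubernetes.io/ingress.class: azure/application-gateway
    kubernetes.io/ingress.allow-http: "false"
    # Redirect HTTP to the HTTPS listener instead of exposing
    # a plain port-80 listener that the policy would disallow.
    appgw.ingress.kubernetes.io/ssl-redirect: "true"
spec:
  tls:
  - hosts:
    - my.domain.com
    secretName: aks-ingress-tls
  rules:
  - host: my.domain.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-service
            port:
              number: 80
```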

How to whitelist egress traffic with a NetworkPolicy that doesn't prevent Apache Ignite from starting up?

I have some more or less complex microservice architecture, where Apache Ignite is used as a stateless database / cache. The Ignite Pod is the only Pod in its Namespace and the architecture has to pass a security audit, which it won't pass if I don't apply the most restrictive NetworkPolicy possible for egress traffic. It has to restrict all possible traffic that is not needed by Ignite itself.
At first, I thought: Nice, Ignite does not push any traffic to other Pods (there are no other pods in that Namespace), so this is gonna be easily done restricting all egress traffic in the Namespace where Ignite is the only Pod! ...
Well, that didn't actually work out great:
Any egress rule, even if I allow traffic to all the ports mentioned in the Ignite Documentation, will cause the startup to fail with an IgniteSpiException that says Failed to retrieve Ignite pods IP addresses, Caused by: java.net.ConnectException: Operation timed out (Connection timed out).
The problem seems to be the TcpDiscoveryKubernetesIpFinder, especially the method getRegisteredAddresses(...), which obviously does some egress traffic inside the Namespace in order to register IP addresses of Ignite nodes. The discovery port 47500 is of course allowed, but that does not change the situation. The functionality of Ignite with the other Pods from other Namespaces is working without egress rules applied, which means (to me) that the configuration concerning ClusterRole, ClusterRoleBinding, the Service in the Namespace, the XML configuration of Ignite itself, etc. seems to be correct. Even ingress rules restricting traffic from other namespaces are working as expected, allowing exactly the desired traffic.
These are the policies I applied:
[WORKING, blocking undesired traffic only]:
## Denies all Ingress traffic to all Pods in the Namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-ingress-in-cache-ns
  namespace: cache-ns
spec:
  # selecting nothing here will deny all traffic between pods in the namespace
  podSelector:
    matchLabels: {}
  # traffic routes to be considered, here: incoming exclusively
  policyTypes:
  - Ingress
## Allows necessary ingress traffic
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: netpol-cache-ns
  namespace: cache-ns
# defines the pod(s) that this policy is targeting
spec:
  policyTypes:
  - Ingress
  podSelector:
    matchLabels:
      app: ignite
  # <----incoming traffic----
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          zone: somewhere-else
      podSelector:
        matchExpressions:
        - key: app
          operator: In
          values: [some-pod, another-pod] # dummy names, these Pods don't matter at all
    ports:
    - port: 11211 # JDBC
      protocol: TCP
    - port: 47100 # SPI communication
      protocol: TCP
    - port: 47500 # SPI discovery (CRITICAL, most likely...)
      protocol: TCP
    - port: 10800 # SQL
      protocol: TCP
  # ----outgoing traffic---->
  # NONE AT ALL
With these two applied, everything is working fine, but the security audit will say something like
Where are the restrictions for egress? What if this node is hacked via the allowed routes because one of the Pods using these routes was hacked before? It may call a C&C server then! This configuration will not be permitted, harden your architecture!
[BLOCKING desired/necessary traffic]:
Generally deny all traffic...
## Denies all traffic to all Pods in the Namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all-traffic-in-cache-ns
  namespace: cache-ns
spec:
  # selecting nothing here will deny all traffic between pods in the namespace
  podSelector:
    matchLabels: {}
  # traffic routes to be considered, here: incoming and outgoing
  policyTypes:
  - Ingress
  - Egress # <------ THIS IS THE DIFFERENCE TO THE WORKING ONE ABOVE
... and allow specific routes afterwards
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: netpol-cache-ns-egress
  namespace: cache-ns
# defines the pod(s) that this policy is targeting
spec:
  policyTypes:
  - Egress
  podSelector:
    matchLabels:
      app: ignite
  # ----outgoing traffic---->
  egress:
  # [NOT SUFFICIENT]
  # allow egress to this namespace at specific ports
  - to:
    - namespaceSelector:
        matchLabels:
          zone: cache-zone
    ports:
    - protocol: TCP
      port: 10800
    - protocol: TCP
      port: 47100 # SPI communication
    - protocol: TCP
      port: 47500
  # [NOT SUFFICIENT]
  # allow dns resolution in general (no namespace or pod restriction)
  - ports:
    - protocol: TCP
      port: 53
    - protocol: UDP
      port: 53
  # [NOT SUFFICIENT]
  # allow egress to the kube-system (label is present!)
  - to:
    - namespaceSelector:
        matchLabels:
          zone: kube-system
  # [NOT SUFFICIENT]
  # allow egress in this namespace and for the ignite pod
  - to:
    - namespaceSelector:
        matchLabels:
          zone: cache-zone
      podSelector:
        matchLabels:
          app: ignite
  # [NOT SUFFICIENT]
  # allow traffic to the IP address of the ignite pod
  - to:
    - ipBlock:
        cidr: 172.21.70.49/32 # won't work well since those addresses are dynamic
    ports:
    - port: 11211 # JDBC
      protocol: TCP
    - port: 47100 # SPI communication
      protocol: TCP
    - port: 47500 # SPI discovery (CRITICAL, most likely...)
      protocol: TCP
    - port: 49112 # JMX
      protocol: TCP
    - port: 10800 # SQL
      protocol: TCP
    - port: 8080 # REST
      protocol: TCP
    - port: 10900 # thin clients
      protocol: TCP
Apache Ignite version used is 2.10.0
Now the question to all readers is:
How can I restrict Egress to an absolute minimum inside the Namespace so that Ignite starts up and works correctly? Would it be sufficient to just deny Egress to outside the cluster?
If you need any more yamls for an educated guess or hint, please feel free to request them in a comment.
UPDATE:
Executing nslookup -debug kubernetes.default.svc.cluster.local from inside the ignite pod without any policy restricting egress shows
BusyBox v1.29.3 (2019-01-24 07:45:07 UTC) multi-call binary.
Usage: nslookup HOST [DNS_SERVER]
Query DNS about HOST
As soon as (any) NetworkPolicy is applied that restricts Egress to specific ports, pods and namespaces the Ignite pod refuses to start and the lookup does not reach kubernetes.default.svc.cluster.local anymore.
Egress to DNS was allowed (UDP 53 to k8s-app: kube-dns) ⇒ still no ip lookup possible
Another update:
Having enabled the logger's debug mode, I found some interesting messages:
A message about the (only) node having 2 TCP addresses (no idea whether that's causing the problem, but I don't even know how to give the node only one address anyway):
2021-09-30 13:39:46,081 DEBUG [org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi] - <This node 6f1c5f37-b1a6-4b05-a407-07aad6a9c725 has 2 TCP addresses. Note that TcpDiscoverySpi.failureDetectionTimeout works per address sequentially. Setting of several addresses can prolong detection of current node failure.>
This message shows with restricted egress and without, so it may not be the reason for this particular problem, still bugging me somehow.
A message about missing IP addresses having been registered (without egress restricted, every egress allowed):
2021-09-30 13:40:46,982 DEBUG [org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi] - <Registered missing addresses in IP finder: [/127.0.0.1:47500]>
Here I'm wondering about the localhost / 127.0.0.1 address: why is it reachable without the egress restriction but not when the restriction is applied?
Ignite couldn't resolve kubernetes.default.svc.cluster.local to IP.
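Building on that finding: TcpDiscoveryKubernetesIpFinder looks up pod IPs through the Kubernetes API (which is what kubernetes.default.svc.cluster.local resolves to), so a minimal egress policy has to allow DNS, the API server, and the Ignite ports themselves. A sketch under those assumptions (the API-server ports 443/6443 are an assumption to adjust per cluster):

```yaml
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: netpol-cache-ns-egress-minimal
  namespace: cache-ns
spec:
  podSelector:
    matchLabels:
      app: ignite
  policyTypes:
  - Egress
  egress:
  # DNS resolution (kube-dns/CoreDNS), any namespace
  - to:
    - namespaceSelector: {}
    ports:
    - protocol: UDP
      port: 53
    - protocol: TCP
      port: 53
  # Kubernetes API server: on managed clusters it usually sits outside
  # the pod network, so a port-only rule (no "to") is used here
  # (assumption: 443/6443 cover your cluster's API endpoint)
  - ports:
    - protocol: TCP
      port: 443
    - protocol: TCP
      port: 6443
  # Ignite discovery and communication between Ignite pods
  - to:
    - podSelector:
        matchLabels:
          app: ignite
    ports:
    - protocol: TCP
      port: 47100 # SPI communication
    - protocol: TCP
      port: 47500 # SPI discovery
```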

Request timeout between AKS pods

I have a Kubernetes cluster inside Azure which holds some services and pods. I want to make those pods communicate with each other but when I try to execute a CURL/WGET from one to another, a timeout occurs.
The service YAMLs can be found below:
First service:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: core-node
  name: core-node
spec:
  ports:
  - name: "9001"
    port: 9001
    targetPort: 8080
  selector:
    app: core-node
status:
  loadBalancer: {}
Second service:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: core-python
  name: core-python
spec:
  ports:
  - name: "9002"
    port: 9002
    targetPort: 8080
  selector:
    app: core-python
status:
  loadBalancer: {}
When I connect to the "core-node" pod through sh, for example, and execute the following command, it times out. The same happens from the "core-python" pod toward the other one.
wget core-python:9002
wget: can't connect to remote host (some ip): Operation timed out
I also tried using the IP directly and switching from ClusterIP to LoadBalancer, but the same thing happens. I have some proxy configuration as well, but that is done mainly at the Ingress level and, as far as I know, should not affect communication between pods via service names.
Pods are in running status and their APIs can be accessed through the public URLs exposed through Ingress.
#EDIT1:
I connected also to one of the PODs and checked if port 8080 is listening and it seems ok from my perspective.
netstat -nat | grep LISTEN
tcp 0 0 0.0.0.0:8080 0.0.0.0:* LISTEN
#EDIT2:
When I do an endpoints check for this service, it returns the following:
kubectl get ep core-node
NAME ENDPOINTS AGE
core-node 10.x.x.x:8080 37m
If I try to wget this IP from the other pod, it responds:
wget 10.x.x.x:8080
Connecting to 10.x.x.x:8080 (10.x.x.x:8080)
wget: server returned error: HTTP/1.1 404 Not Found
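Since the endpoint IP answers (the 404 is the application responding) while the service name times out, the failing piece is either DNS or the service VIP path. One way to narrow it down is a throwaway pod with networking tools; a sketch (the netshoot image is a common choice here, not something from the question):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: net-debug
spec:
  containers:
  - name: net-debug
    image: nicolaka/netshoot  # assumption: any image with nslookup/wget works
    command: ["sleep", "infinity"]
```

Then kubectl exec -it net-debug -- nslookup core-python tests DNS, and wget -qO- core-python:9002 from the same shell tests the VIP path; whichever fails points at kube-dns or kube-proxy respectively.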

nginx ingress setup up on Azure (not using helm) troubleshooting

I need some help trying to achieve these two things:
Setting up my nginx ingress controller on AKS without using helm (I don't want to)
Making my ingress use an already reserved IP address with resource name 'kubernetes-ip'
For the first step I'm following this documentation, with no luck: https://kubernetes.github.io/ingress-nginx/deploy/#azure
And I didn't forget the mandatory.yaml!
my step by step guide:
I have a basic kubernetes cluster with two pods, as follows:
NAME READY STATUS RESTARTS AGE
activemq-demo-7b769bcc4-jtsj5 1/1 Running 0 55m
ubuntu-dcb9c6ccb-wkz2w 1/1 Running 0 2d
At this point I want to add my ingress so I can reach the demo activemq using my public ip address a.b.c.d
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.30.0/deploy/static/mandatory.yaml
kubectl apply -f https://raw.githubusercontent.com/kubernetes/ingress-nginx/nginx-0.30.0/deploy/static/provider/cloud-generic.yaml
After doing it I run kubectl get services -n ingress-nginx
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ingress-nginx LoadBalancer 10.0.186.143 zz.zz.zzz.zzz 80:32703/TCP,443:30584/TCP 17s
Which is fine! It seems to be working, right? At this point I should be able to connect to the external IP address on any of those ports, but I can't :(
$ telnet zz.zz.zzz.zzz 80
Trying zz.zz.zzz.zzz...
(a long time later...)
telnet: Unable to connect to remote host: Resource temporarily unavailable
$ telnet zz.zz.zzz.zzz 443
Trying zz.zz.zzz.zzz...
(a long time later...)
telnet: Unable to connect to remote host: Resource temporarily unavailable
I know that the mandatory.yaml doesn't know anything about my reserved IP address, but I'm ignoring that because I have a bigger problem: I can't connect.
I also ignored for a minute that I can't connect, just to test whether I need the actual ingress.yaml running. So I ran kubectl apply -f ingress.yaml (ingress.yaml contains the following):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress1
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/cors-allow-origin: "*"
    nginx.ingress.kubernetes.io/cors-allow-credentials: "true"
    nginx.ingress.kubernetes.io/cors-allow-methods: "*"
spec:
  tls:
  - hosts:
    - amq-test.mydomain.com
    secretName: my-certificate
  rules:
  - host: amq-test.mydomain.com
    http:
      paths:
      - path: /*
        backend:
          serviceName: activemq-demo-service
          servicePort: 8161
After that, if I run kubectl get ing I get:
NAME HOSTS ADDRESS PORTS AGE
ingress1 amq-test.mydomain.com zz.zz.zzz.zzz 80, 443 66s
But it's the same, I can't connect:
FROM WSL
$ telnet zz.zz.zzz.zzz 80
Trying zz.zz.zzz.zzz...
(a long time later...)
telnet: Unable to connect to remote host: Resource temporarily unavailable
$ telnet zz.zz.zzz.zzz 443
Trying zz.zz.zzz.zzz...
(a long time later...)
telnet: Unable to connect to remote host: Resource temporarily unavailable
Honestly I'm not sure what I'm missing; this should be straightforward with the official documentation. I don't know if I have to enable something else in Azure...
Thanks for reading, any help will be appreciated.
EDIT 3/7/2020: activemq-demo Deployment & Service #Jean-Philippe Bond
apiVersion: apps/v1
kind: Deployment
metadata:
  name: activemq-demo
  labels:
    app: activemq-demo
    tier: backend
spec:
  revisionHistoryLimit: 1
  replicas: 1
  selector:
    matchLabels:
      app: activemq-demo
  template:
    metadata:
      labels:
        app: activemq-demo
        tier: backend
    spec:
      containers:
      - name: activemq-demo
        image: myproject.azurecr.io/activemq-slim:5.15.9-3
        imagePullPolicy: "Always"
        command: ["/start.sh"]
        args: ["somename"]
        env:
        - name: LANG
          value: "C.UTF-8"
        ports:
        - containerPort: 8161
        - containerPort: 61616
        livenessProbe:
          exec:
            command:
            - /isAlive.sh
            - somename
          initialDelaySeconds: 15
          periodSeconds: 5
---
apiVersion: v1
kind: Service
metadata:
  name: activemq-demo-service
  labels:
    tier: controller
spec:
  type: NodePort
  ports:
  - name: http
    protocol: TCP
    port: 8161
    targetPort: 8161
  - name: acceptor
    protocol: TCP
    port: 61616
    targetPort: 61616
  selector:
    app: activemq-demo
    tier: backend
Please note that the only thing I want to access from outside is the HTTP web service that ActiveMQ provides on port 8161 by default
EDIT 3/9/2020: #HelloWorld request
telnet from WSL
$ telnet zz.zz.zzz.zzz 80
Trying zz.zz.zzz.zzz...
(a long time later...)
telnet: Unable to connect to remote host: Resource temporarily unavailable
$ telnet zz.zz.zzz.zzz 443
Trying zz.zz.zzz.zzz...
(a long time later...)
telnet: Unable to connect to remote host: Resource temporarily unavailable
telnet from macos
$ telnet zz.zz.zzz.zzz 80
Trying zz.zz.zzz.zzz...
telnet: connect to address zz.zz.zzz.zzz: Operation timed out
telnet: Unable to connect to remote host
$ telnet zz.zz.zzz.zzz 443
Trying zz.zz.zzz.zzz...
telnet: connect to address zz.zz.zzz.zzz: Operation timed out
telnet: Unable to connect to remote host
curl from macos - HTTP
$ curl -v -X GET http://amq-test.mydomain.com
Note: Unnecessary use of -X or --request, GET is already inferred.
* Trying zz.zz.zzz.zzz:80...
* TCP_NODELAY set
* Connection failed
* connect to zz.zz.zzz.zzz port 80 failed: Operation timed out
* Failed to connect to amq-test.mydomain.com port 80: Operation timed out
* Closing connection 0
curl: (7) Failed to connect to amq-test.mydomain.com port 80: Operation timed out
curl from macos - HTTPS
$ curl -v -X GET https://amq-test.mydomain.com
Note: Unnecessary use of -X or --request, GET is already inferred.
* Trying zz.zz.zzz.zzz:443...
* TCP_NODELAY set
* Connection failed
* connect to zz.zz.zzz.zzz port 443 failed: Operation timed out
* Failed to connect to amq-test.mydomain.com port 443: Operation timed out
* Closing connection 0
curl: (7) Failed to connect to amq-test.mydomain.com port 443: Operation timed out
EDIT 3/10/2020: Starting from scratch (multiple times)
I deleted the whole thing to start "fresh", with not much more luck, but I have noticed some things that will hopefully trigger some thoughts out there...
The basics:
I already have a Resource Group that I'm using: MyResourceGroup
I already have a Virtual Network with a Subnet MyVirtualNet
I already have reserved a public IP address I want to use with my ingress. It's a static IP that I want to keep from changing (or being deleted) until the end of time: A.B.C.D
I already have my own domain that I have routed to A.B.C.D: amq-test.mydomain.com
My procedure:
I'm creating a new Kubernetes Service using the Azure Web Interface, I'm making sure to select my Resource Group as well as my Virtual Network and Subnet
As soon as I created the base Kubernetes Service, I noticed that it creates a LoadBalancer with a different public IP address that I can't control; I assume that's because that IP address will be used as the main entry point for things like kubectl and remote management.
With the cluster live I create the basic ActiveMQ image that I shared previously
Now, I start with the ingress-nginx and deploy the mandatory.yaml
And then I add the Azure Service yaml here, but this time with two modifications to make it use my public IP address; these modifications were extracted from the Microsoft documentation:
kind: Service
apiVersion: v1
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-resource-group: MyResourceGroup
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  loadBalancerIP: A.B.C.D
  externalTrafficPolicy: Local
  type: LoadBalancer
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: http
  - name: https
    port: 443
    protocol: TCP
    targetPort: https
With that running without errors, everything is as expected:
$ kubectl get svc -n ingress-nginx -o wide
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE SELECTOR
ingress-nginx LoadBalancer 10.0.30.100 A.B.C.D 80:30682/TCP,443:31002/TCP 35m app.kubernetes.io/name=ingress-nginx,app.kubernetes.io/part-of=ingress-nginx
I deploy the final part, right? My ingress.yaml:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress1
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/enable-cors: "true"
    nginx.ingress.kubernetes.io/cors-allow-origin: "*"
    nginx.ingress.kubernetes.io/cors-allow-credentials: "true"
    nginx.ingress.kubernetes.io/cors-allow-methods: "*"
spec:
  rules:
  - http:
      paths:
      - path: /*
        backend:
          serviceName: activemq-demo-service
          servicePort: 8161
This runs without issues, and when it is up I checked with:
$ kubectl describe ingress
Name: ingress1
Namespace: default
Address: A.B.C.D
Default backend: default-http-backend:80 (<none>)
Rules:
Host Path Backends
---- ---- --------
*
/* queue-callbacks-service:8161 (10.94.20.14:8161)
Annotations:
kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/cors-allow-credentials: true
nginx.ingress.kubernetes.io/cors-allow-methods: *
nginx.ingress.kubernetes.io/cors-allow-origin: *
nginx.ingress.kubernetes.io/enable-cors: true
kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"kubernetes.io/ingress.class":"nginx","nginx.ingress.kubernetes.io/cors-allow-credentials":"true","nginx.ingress.kubernetes.io/cors-allow-methods":"*","nginx.ingress.kubernetes.io/cors-allow-origin":"*","nginx.ingress.kubernetes.io/enable-cors":"true"},"name":"ingress1","namespace":"default"},"spec":{"rules":[{"http":{"paths":[{"backend":{"serviceName":"queue-callbacks-service","servicePort":8161},"path":"/callbacks/*"}]}}]}}
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal CREATE 35m nginx-ingress-controller Ingress default/ingress1
Normal UPDATE 34m nginx-ingress-controller Ingress default/ingress1
Everything seems to be perfect, but I just can't connect to the IP or host on either port (80, 443)
Hope this helps
Just FYI I just tried with helm following this documentation and I got the same result
EDIT 4/15/2020: not done yet
I was working on another project so this one was paused for a moment; I'm returning to it now. Unfortunately it's still not working; I opened a ticket with Microsoft and I'm waiting for a response.
However, we noticed that port 80 is being filtered by some firewall or something; we are not sure what is causing this, as we have inbound rules on our SG with ports 80 and 443 open from * on any protocol
$ nmap -Pn zz.zz.zz.zzz -p 80,443
Starting Nmap 7.80 ( https://nmap.org ) at 2020-04-15 11:01 Pacific SA Standard Time
Nmap scan report for zz.zz.zz.zzz
Host is up.
PORT STATE SERVICE
80/tcp filtered http
443/tcp filtered https
Nmap done: 1 IP address (1 host up) scanned in 4.62 seconds
ActiveMQ is accessed over raw TCP, not HTTP, and Kubernetes Ingress was not built to support TCP services. With that said, Nginx does support TCP load balancing if you really want to use it, but you won't be able to use an Ingress rule based on the host as you did, since that is reserved for HTTP/HTTPS. Your best bet would probably be to use the Azure L4 load balancer directly instead of going through the ingress controller.
If you want to use Nginx, you'll need to modify the yaml in mandatory.yaml to expose the ActiveMQ port on the Nginx deployment:
kind: ConfigMap
apiVersion: v1
metadata:
  name: tcp-services
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
data:
  9000: "default/activemq-demo-service:8161"
You'll also need to add the tcp-services port on the Service resource. For example:
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx
  labels:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 80
    targetPort: 80
    protocol: TCP
  - name: https
    port: 443
    targetPort: 443
    protocol: TCP
  - name: proxied-tcp-9000
    port: 9000
    targetPort: 9000
    protocol: TCP
  selector:
    app.kubernetes.io/name: ingress-nginx
    app.kubernetes.io/part-of: ingress-nginx
Here is the documentation for TCP/UDP support in the Nginx Ingress.
I wrote a story that might be useful to you some time ago: https://medium.com/cooking-with-azure/tcp-load-balancing-with-ingress-in-aks-702ac93f2246

Kubernetes service load balancer "No route to host" error

I'm trying to expose a pod using a load balancer service. The service was created successfully and an external IP was assigned, but when I tried accessing the external IP in the browser, the site did not load and I got ERR_CONNECTION_TIMED_OUT. Please see the yaml below:
apiVersion: v1
kind: Service
metadata:
  labels:
    name: service-api
  name: service-api
spec:
  externalTrafficPolicy: Cluster
  ports:
  - nodePort: 30868
    port: 80
    protocol: TCP
    targetPort: 9080
    name: http
  selector:
    name: service-api
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer: {}
I also tried creating the service using kubernetes CLI still no luck.
It looked like I had a faulty DNS on my k8s cluster, and to resolve the issue I had to restart the cluster. But before restarting, you can also delete all the pods in kube-system to refresh the DNS pods; if it's still not working, I suggest restarting the cluster.
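Before restarting, it's also worth confirming that the backing pods actually carry the label the Service selects on (name: service-api) and listen on the targetPort; a hypothetical matching Deployment (the image name is a placeholder):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: service-api
spec:
  replicas: 1
  selector:
    matchLabels:
      name: service-api  # must match the Service's selector
  template:
    metadata:
      labels:
        name: service-api
    spec:
      containers:
      - name: service-api
        image: registry.example.com/service-api:latest  # placeholder image
        ports:
        - containerPort: 9080  # the Service's targetPort
```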
