K8s Ingress does not have an endpoint - Azure

I want to make services accessible from outside the K8s cluster using an ingress controller. Following recipe 5.5 from the Kubernetes Cookbook, I ran this manifest:
kind: Ingress
apiVersion: extensions/v1beta1
metadata:
  name: nginx-public
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host:
    http:
      paths:
      - path: /web
        backend:
          serviceName: nginx
          servicePort: 80
The Ingress object is visible in the Kubernetes dashboard; but it does not have an assigned endpoint:
Output of kubectl get ing:
NAME           HOSTS   ADDRESS   PORTS   AGE
nginx-public   *                 80      54m
Update:
Running kubectl describe ingress nginx-public gives:
Name:             nginx-public
Namespace:        default
Address:
Default backend:  default-http-backend:80 (<none>)
Rules:
  Host  Path  Backends
  ----  ----  --------
  *
        /web  nginx:80 (<none>)
Annotations:
  ingress.kubernetes.io/rewrite-target:  /
Events:  <none>

This is actually an issue with the Kubernetes Dashboard; we have the same problem.
Even if the address isn't displayed, that doesn't mean your Ingress isn't working. First, check the Ingress with kubectl (kubectl describe ingress nginx-public) and verify that the output is similar to this:
Name:             test-ingress
Namespace:        test
Address:
Default backend:  default-http-backend:80 (<none>)
TLS:
  test-ssl-secret terminates test.myorg.com
Rules:
  Host            Path  Backends
  ----            ----  --------
  test.myorg.com
                  /     test-service:80 (<none>)
Afterwards, verify that your service is reachable via the host you specified.
Update:
Depending on the Service in front of your ingress controller, your service should be reachable via http://{serverip}:{nodeport-http-port}/web if the controller's Service is of type NodePort (you will get two external ports in the 30000-32767 range: one for HTTP, the other for HTTPS), or via http://{address-from-external-loadbalancer}/web if the Service is of type LoadBalancer.
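To find those ports, you can inspect the controller's Service; a minimal sketch, assuming the controller runs in the ingress-nginx namespace under the chart's default service name:
# Shows the Service type and the node ports mapped to ports 80/443
kubectl get svc -n ingress-nginx ingress-nginx-controller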
2nd-Update
After some further investigation of the issue, I stumbled upon a bug report for kubernetes-dashboard stating that it is indeed possible to show the address of an Ingress. The problem actually isn't caused by the dashboard, but by a missing parameter on the ingress controller deployment.
For nginx-ingress-controller it's the following:
NGINX Ingress CLI arguments
The missing option is --publish-service
If you used Helm to deploy the controller, you need to add the parameter --set controller.publishService.enabled=true.
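For example, a re-deploy with the upstream ingress-nginx chart might look like this (the release and namespace names are assumptions; adjust them to your installation):
# Enables --publish-service so the controller writes its Service address onto Ingress objects
helm upgrade --install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx \
  --set controller.publishService.enabled=true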

Related

Expose Ingress with host - what is a host and where to get it?

I am using Azure and I have set up a Kubernetes cluster.
Now I want to expose one service in my cluster with gRPC, so I learned I need an Ingress.
I set up this Ingress:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  annotations:
    #nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/backend-protocol: "GRPC"
  name: fortune-ingress
  #namespace: default
spec:
  ingressClassName: nginx
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: apigateway-service
            port:
              number: 80
Now the Ingress is working, but it has host "*" and no external IP, so I don't know how to connect to it.
I know I have a domain on another service, but I don't want to use it right now for development reasons. I just want a test host or an external IP to check that everything is working within my cluster and focus on that.
What to do?

Ingress rule cannot resolve the backend server ip address on Azure AKS

I installed the ingress controller on my AKS cluster using Helm. I also created an ingress rule for my service:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-demo-ingress
  namespace: my-demo
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/use-regex: "true"
    nginx.ingress.kubernetes.io/rewrite-target: /$2
spec:
  ingressClassName: nginx
  rules:
  - host: mydemoingress.com
    http:
      paths:
      - path: /(.*)
        pathType: Prefix
        backend:
          service:
            name: api-gateway
            port:
              number: 8080
When I deployed the above ingress rule, I noticed that my backend has no IP, as seen below (api-gateway:8080 (<none>)):
kubectl describe ing my-demo-ingress -n my-demo
Name:             my-demo-ingress
Labels:           app.kubernetes.io/managed-by=Helm
Namespace:        my-demo
Address:
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
  Host               Path   Backends
  ----               ----   --------
  mydemoingress.com
                     /(.*)  api-gateway:8080 (<none>)
Annotations:
  nginx.ingress.kubernetes.io/rewrite-target:  /$2
  nginx.ingress.kubernetes.io/ssl-redirect:    false
  nginx.ingress.kubernetes.io/use-regex:       true
Events:
  Type    Reason  Age    From                      Message
  ----    ------  ----   ----                      -------
  Normal  Sync    4m34s  nginx-ingress-controller  Scheduled for sync
  Normal  Sync    4m34s  nginx-ingress-controller  Scheduled for sync
No IP address gets assigned to the ingress.
However, when I try this same setup on my local k3s cluster, the IP is assigned correctly. What am I doing wrong?
Update: Helm install command for ingress controller:
NAMESPACE=ingress-basic
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --create-namespace \
  --namespace $NAMESPACE \
  --set controller.service.annotations."service\.beta\.kubernetes\.io/azure-load-balancer-health-probe-request-path"=/healthz
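To see whether the controller's LoadBalancer Service actually received an external IP from Azure, something like this can be run (the service name follows the chart's defaults and is an assumption):
# EXTERNAL-IP stays <pending> until Azure provisions the load balancer
kubectl get svc -n $NAMESPACE ingress-nginx-controller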
• AFAIK, the syntax used for 'servicePort' and 'serviceName' should be as given in the sample YAML file below. Also, the path to the specified service name might be missing in the YAML file you shared; because of this, the port mapping might not be correct when the service is provisioned in the AKS cluster, and the internal load balancer cannot reach the created service.
Sample YAML file:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: hello-world-ingress
  namespace: ingress-basic
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
    nginx.ingress.kubernetes.io/rewrite-target: /$1
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: aks-helloworld
          servicePort: 80
        path: /(.*)
      - backend:
          serviceName: ingress-demo
          servicePort: 80
        path: /hello-world-two(/|$)(.*)
• Apart from the above-stated modifications, I would also suggest checking that you have assigned an IP address that is not already in use in your virtual network, and that you have deployed an internal load balancer using that IP address in the AKS cluster, as below:
controller:
  service:
    loadBalancerIP: 10.240.0.42
    annotations:
      service.beta.kubernetes.io/azure-load-balancer-internal: "true"
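As a sketch of how these values might be supplied, they could go into a values file passed to Helm (the file name internal-ingress.yaml is just an example):
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-basic \
  -f internal-ingress.yaml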
These modifications should help you resolve your issue with the backend IP address pools.
Also, refer to the link below for more information:
https://microsoft.github.io/AzureTipsAndTricks/blog/tip253.html

How to block all traffic to pods matching a label using a NetworkPolicy

AFAIK, a K8s NetworkPolicy can only allow pods matching a label to do something. I do not want to:
Deny all traffic
Allow traffic for all pods except the ones matching my label
but instead:
Allow all traffic
Deny traffic for pods matching my label
How do I do that?
From kubectl explain NetworkPolicy.spec.ingress.from:
DESCRIPTION:
List of sources which should be able to access the pods selected for this
rule. Items in this list are combined using a logical OR operation. If this
field is empty or missing, this rule matches all sources (traffic not
restricted by source). If this field is present and contains at least one
item, this rule allows traffic only if the traffic matches at least one
item in the from list.
As far as I understand this, we can only allow, not deny.
As you mentioned in the comments, you are using the kind tool to run Kubernetes. Instead of the kindnet CNI plugin (kind's default), which does not support Kubernetes network policies, you can use the Calico CNI plugin, which supports Kubernetes network policies and also offers its own similar solution called Calico network policies.
Example: I will create a cluster with the default kind CNI plugin disabled and a NodePort exposed for testing (assuming you already have the kind and kubectl tools installed):
kind-cluster-config.yaml file:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
networking:
  disableDefaultCNI: true # disable kindnet
  podSubnet: 192.168.0.0/16 # set to Calico's default subnet
nodes:
- role: control-plane
  extraPortMappings:
  - containerPort: 30000
    hostPort: 30000
    listenAddress: "0.0.0.0" # Optional, defaults to "0.0.0.0"
    protocol: tcp # Optional, defaults to tcp
Time to create a cluster using the above config:
kind create cluster --config kind-cluster-config.yaml
When the cluster is ready, I will install the Calico CNI plugin:
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
I will wait until all Calico pods are ready (use kubectl get pods -n kube-system to check). Then I will create a sample nginx deployment plus a NodePort service for access:
nginx-deploy-service.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2 # tells deployment to run 2 pods matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.14.2
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
    nodePort: 30000
Let's apply it: kubectl apply -f nginx-deploy-service.yaml
So far so good. Now I will try to access nginx-service using the node IP (use kubectl get nodes -o wide to check the node IP address):
curl 172.18.0.2:30000
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...
Okay, it's working.
Now it's time to install calicoctl and apply an example policy, based on this tutorial, to block ingress traffic only for pods with the label app: nginx:
calico-rule.yaml:
apiVersion: projectcalico.org/v3
kind: GlobalNetworkPolicy
metadata:
  name: default-deny
spec:
  selector: app == "nginx"
  types:
  - Ingress
Apply it:
calicoctl apply -f calico-rule.yaml
Successfully applied 1 'GlobalNetworkPolicy' resource(s)
Now I can't reach the address 172.18.0.2:30000, which was working previously. The policy is working!
Read more about calico policies:
Get started with Calico network policy
Calico policy tutorial
Also check this GitHub topic for more information about NetworkPolicy support in Kind.
EDIT:
It seems the Calico plugin supports Kubernetes NetworkPolicy as well, so you can just install the Calico CNI plugin and apply the following policy:
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: default-deny
spec:
  podSelector:
    matchLabels:
      app: nginx
  policyTypes:
  - Ingress
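To try it, save the manifest and apply it with plain kubectl (the file name is just an example):
kubectl apply -f default-deny.yaml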
I tested it and it seems to work fine as well.

Replacing ip address in kubernetes with custom name

I have created a sample Spring Boot app and did the following:
1. Created a Docker image.
2. Created an Azure container registry and pushed the image to it.
3. Created a cluster in Azure Kubernetes Service and deployed it successfully. I chose the external endpoint option for this.
[screenshot: Kubernetes external endpoint]
Say, for a service-to-service call, I don't want to use an IP like http://20.37.134.68:80 but a custom name instead. How can I do that?
Also, if I choose internal, is there any way to replace the name?
I tried editing the YAML with an endpoint name property but failed. Any ideas?
I think you are mixing some concepts, so I'll try to explain and help you reach what you want.
When you deploy a container image in a Kubernetes cluster, in most cases you will use a Pod or Deployment spec, which is basically a YAML file with all your deployment/pod configuration: name, image name, etc. Here is an example of a simple echo-server app:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: echo
spec:
  selector:
    matchLabels:
      app: echo
  template:
    metadata:
      labels:
        app: echo
    spec:
      containers:
      - name: echo
        image: mendhak/http-https-echo
        ports:
        - name: http
          containerPort: 80
Observe the name fields in the file. Here you can configure the name for your deployment and for your containers.
In order to expose your application, you will need to use a service. Services can be internal and external. Here you can find all service types.
For an internal service, you need the service type ClusterIP (the default), which means only workloads inside your cluster can reach the pods. To reach your service from other pods, you can use the DNS name of the form my-svc.my-namespace.svc.cluster-domain.example.
Here is an example of a service for the deployment above:
apiVersion: v1
kind: Service
metadata:
  name: echo-svc
spec:
  selector:
    app: echo
  ports:
  - protocol: TCP
    port: 80
    targetPort: 80
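With that Service applied, other pods in the cluster could reach it by its DNS name; a quick check from inside any pod might look like this (assuming the default namespace and the standard cluster.local domain):
curl http://echo-svc.default.svc.cluster.local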
To expose your service externally, you have the option of using a service of type NodePort or LoadBalancer, or using an Ingress.
With an Ingress you can configure your DNS name in the rules, add path rules if you want, or even configure HTTPS for your application. There are a few Ingress options in Kubernetes, and one of the most popular is nginx-ingress.
Here is an example of how to configure a simple ingress for our example service:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/use-regex: "false"
  name: echo-ingress
spec:
  rules:
  - host: myapp.mydomain.com
    http:
      paths:
      - path: "/"
        backend:
          serviceName: echo-svc
          servicePort: 80
In the example, I'm using the DNS name myapp.mydomain.com, which means you will only be able to reach your application by this name.
After creating the Ingress, you can see the external IP with the command kubectl get ing, and you can create an A record in your DNS server.
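For example, the check might look like this (the output shape is sketched here; the address column is a placeholder that the controller fills in once assigned):
kubectl get ing echo-ingress
NAME           HOSTS                ADDRESS         PORTS   AGE
echo-ingress   myapp.mydomain.com   <external-ip>   80      1m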

Loadbalancer IP and Ingress IP status is pending in kubernetes

I have created a Kubernetes cluster using two Azure Ubuntu VMs. I am able to deploy and access pods and deployments using the NodePort service type, and I have checked the pods' status in the kube-system namespace: all pods show as running. But whenever I set a service's type to LoadBalancer, no LoadBalancer IP is created and the status always shows as pending. I have also created an ingress controller for the nginx service, but it does not get an ingress address either. I initialized the Kubernetes master with the following command:
kubeadm init
Below are the deployment, service, and Ingress manifest files:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
  - name: http
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: nginx
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: test-ingress
spec:
  backend:
    serviceName: nginx
    servicePort: 80
$ kubectl describe svc nginx
Name:              nginx
Namespace:         default
Labels:            app=nginx
Annotations:       kubectl.kubernetes.io/last-applied-configuration:
                     {"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"nginx"},"name":"nginx","namespace":"default"},"spec":{"p...
Selector:          app=nginx
Type:              ClusterIP
IP:                10.96.107.97
Port:              http  80/TCP
TargetPort:        80/TCP
Endpoints:         10.44.0.4:80,10.44.0.5:80,10.44.0.6:80
Session Affinity:  None
Events:            <none>
$ kubectl describe ingress nginx
Name:             test-ingress
Namespace:        default
Address:
Default backend:  nginx:80 (10.44.0.4:80,10.44.0.5:80,10.44.0.6:80)
Rules:
  Host  Path  Backends
  ----  ----  --------
  *     *     nginx:80 (10.44.0.4:80,10.44.0.5:80,10.44.0.6:80)
Annotations:
  kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"networking.k8s.io/v1beta1","kind":"Ingress","metadata":{"annotations":{},"name":"test-ingress","namespace":"default"},"spec":{"backend":{"serviceName":"nginx","servicePort":80}}}
Events: <none>
Do we need to specify any IP ranges (private or public) of the VMs when running kubeadm init? Or
do we need to change any network settings on the Azure Ubuntu VMs?
As you created your own Kubernetes cluster rather than using a managed one from AWS, Azure, or GCP, there is no integrated load balancer. That is why the LoadBalancer IP status stays pending.
But you can work around this problem with an ingress controller or directly through a NodePort service.
I also noticed that your nginx service uses the annotation service.beta.kubernetes.io/aws-load-balancer-type: nlb. You said you are using Azure; those annotations are platform-specific, and that one is AWS-specific.
However, if you would like to experiment directly with public IPs, you can define your service with externalIPs, provided a public IP is allocated to your node and ingress traffic is allowed to it:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
  - name: http
    protocol: TCP
    port: 80
    targetPort: 9376
  externalIPs:
  - 80.11.12.10
But a good approach is to use an ingress controller if you are planning to build your own Kubernetes cluster.
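As a minimal sketch, the upstream ingress-nginx chart can expose the controller itself via NodePort on a self-managed cluster (the release name and namespace here are assumptions):
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  --set controller.service.type=NodePort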
Hope this helps.
