Implementing Readiness and Liveness Probes in Azure AKS

I am trying to follow this documentation to enable readiness and liveness probes on my pods for health checks in my cluster, but the probes fail with connection refused errors against the container IP and port. The portion where I have added the readiness and liveness probes is below.
I am using Helm for deployment and the port I am trying to monitor is 80. The service file for the ingress is also given below.
https://learn.microsoft.com/en-us/azure/application-gateway/ingress-controller-add-health-probes
Service.yaml
apiVersion: v1
kind: Service
metadata:
  name: expose-portal
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "{{ .Values.isInternal }}"
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: portal
Deployment.yaml
spec:
  containers:
  - name: App-server-portal
    image: myacr-app-image-:{{ .Values.image_tag }}
    imagePullPolicy: Always
    ports:
    - name: http
      containerPort: 80
      protocol: TCP
    readinessProbe:
      httpGet:
        path: /
        port: 80
      periodSeconds: 3
      timeoutSeconds: 1
    livenessProbe:
      httpGet:
        path: /
        port: 80
      periodSeconds: 3
      timeoutSeconds: 1
    volumeMounts:
    - mountPath: /etc/nginx
      readOnly: true
      name: mynewsite
  imagePullSecrets:
  - name: my-secret
  volumes:
  - name: mynewsite.conf
    configMap:
      name: mynewsite.conf
      items:
      - key: mynewsite.conf
        path: mynewsite.conf
Am I doing something wrong here? As per the Azure documentation as of today, probing on a port other than the one exposed on the pod is currently not supported. My understanding is that port 80 on my pod is already exposed.

Taken from the docs:
initialDelaySeconds: Number of seconds after the container has started before liveness or readiness probes are initiated. Defaults to 0 seconds. Minimum value is 0.
periodSeconds: How often (in seconds) to perform the probe. Defaults to 10 seconds. Minimum value is 1.
timeoutSeconds: Number of seconds after which the probe times out. Defaults to 1 second. Minimum value is 1.
https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/#configure-probes
The solution was to increase the probe timeout.
PS: I think you might need to introduce initialDelaySeconds instead of increasing the timeout to 3 minutes.
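As a rough sketch of that suggestion, applied to the portal container from the question (the 30-second delay and 5-second timeout are illustrative values, not taken from the original deployment):
readinessProbe:
  httpGet:
    path: /
    port: 80
  initialDelaySeconds: 30   # give the app time to start before the first probe
  periodSeconds: 10
  timeoutSeconds: 5         # tolerate slow responses instead of failing after 1s
livenessProbe:
  httpGet:
    path: /
    port: 80
  initialDelaySeconds: 30
  periodSeconds: 10
  timeoutSeconds: 5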

Related

How to change, edit, save /etc/hosts file from Azure Bash for Kubernetes?

I uploaded a project to Kubernetes, and for its gateway to redirect to the services, it requires the following:
127.0.0.1 app.my.project
127.0.0.1 event-manager.my.project
127.0.0.1 logger.my.project
and so on.
I can't run any sudo commands, so sudo nano /etc/hosts doesn't work. I tried vi /etc/hosts and it gives a permission denied error. How can I edit the /etc/hosts file, or do some configuration on Azure, to make it work like that?
Edit:
To give more information, I have uploaded a project to Kubernetes that has reverse-proxy settings.
So the web app of that project is not reachable via IP. Instead, when I run the application locally, I have to edit the hosts file of the computer I'm using with:
127.0.0.1 app.my.project
127.0.0.1 event-manager.my.project
127.0.0.1 logger.my.project
and so on. So whenever I type web-app.my.project its gateway redirects to the web-app part, if I write app.my.project it redirects to the app part, and so on.
When I uploaded it to Azure Kubernetes Service, it added a default-http-backend in an ingress-nginx namespace that it created by itself. To expose these services, I enabled the HTTP Routing option in Azure, which gave me the load balancer shown at the left side of the image. So if I'm reading the situation correctly (I'm most probably wrong, though), it is something like the image below:
So I added hostAliases to the kube-system, ingress-nginx and default namespaces to mimic editing the hosts file as I did when running the project locally. But it still gives me the default backend - 404 ingress error.
Edit 2:
I have the nginx-ingress-controller, which handles the redirection as far as I understand. So I add hostAliases to it as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress-controller
  namespace: ingress-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ingress-nginx
  template:
    metadata:
      labels:
        app: ingress-nginx
      annotations:
        prometheus.io/port: '10254'
        prometheus.io/scrape: 'true'
    spec:
      hostAliases:
      - ip: "127.0.0.1"
        hostnames:
        - "app.ota.local"
        - "gateway.ota.local"
        - "treehub.ota.local"
        - "tuf-reposerver.ota.local"
        - "web-events.ota.local"
      hostNetwork: true
      serviceAccountName: nginx-ingress-serviceaccount
      containers:
      - name: nginx-ingress-controller
        image: {{ .ingress_controller_docker_image }}
        args:
        - /nginx-ingress-controller
        - --default-backend-service=$(POD_NAMESPACE)/default-http-backend
        - --configmap=$(POD_NAMESPACE)/nginx-configuration
        - --tcp-services-configmap=$(POD_NAMESPACE)/tcp-services
        - --udp-services-configmap=$(POD_NAMESPACE)/udp-services
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        ports:
        - name: tcp
          containerPort: 8000
        - name: http
          containerPort: 80
        - name: https
          containerPort: 443
        livenessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          initialDelaySeconds: 10
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        readinessProbe:
          failureThreshold: 3
          httpGet:
            path: /healthz
            port: 10254
            scheme: HTTP
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
When I edit the YAML file as described above, it gives the following error on Azure:
Failed to update the deployment
Failed to update the deployment 'nginx-ingress-controller'. Error: BadRequest (400) : Deployment in version "v1" cannot be handled as a Deployment: v1.Deployment.Spec: v1.DeploymentSpec.Template: v1.PodTemplateSpec.Spec: v1.PodSpec.HostAliases: []v1.HostAlias: readObjectStart: expect { or n, but found ", error found in #10 byte of ...|liases":["ip "127.0|..., bigger context ...|theus.io/scrape":"true"}},"spec":{"hostAliases":["ip "127.0.0.1""],"hostnames":["app.ota.local","g|...
If I edit the YAML file locally and apply it with my local kubectl, which is connected to Azure, it gives the following error:
serviceaccount/weave-net configured
clusterrole.rbac.authorization.k8s.io/weave-net configured
clusterrolebinding.rbac.authorization.k8s.io/weave-net configured
role.rbac.authorization.k8s.io/weave-net configured
rolebinding.rbac.authorization.k8s.io/weave-net configured
daemonset.apps/weave-net configured
Using cluster from kubectl context: k8s_14
namespace/ingress-nginx unchanged
deployment.apps/default-http-backend unchanged
service/default-http-backend unchanged
configmap/nginx-configuration unchanged
configmap/tcp-services unchanged
configmap/udp-services unchanged
serviceaccount/nginx-ingress-serviceaccount unchanged
clusterrole.rbac.authorization.k8s.io/nginx-ingress-clusterrole unchanged
role.rbac.authorization.k8s.io/nginx-ingress-role unchanged
rolebinding.rbac.authorization.k8s.io/nginx-ingress-role-nisa-binding unchanged
clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress-clusterrole-nisa-binding unchanged
error: error validating "/home/.../ota-community-edition/scripts/../generated/templates/ingress": error validating data: ValidationError(Deployment.spec.template.spec): unknown field "hostnames" in io.k8s.api.core.v1.PodSpec; if you choose to ignore these errors, turn validation off with --validate=false
make: *** [Makefile:34: start_start-all] Error 1
Adding entries to a Pod's /etc/hosts file provides a Pod-level override of hostname resolution when DNS and other options are not applicable. You can add these custom entries with the HostAliases field in PodSpec. Modification without using HostAliases is not suggested because the file is managed by the kubelet and can be overwritten during Pod creation/restart.
I suggest that you use hostAliases instead:
apiVersion: v1
kind: Pod
metadata:
  name: hostaliases-pod
spec:
  restartPolicy: Never
  hostAliases:
  - ip: "127.0.0.1"
    hostnames:
    - "app.my.project"
    - "event-manager.my.project"
    - "logger.my.project"
  containers:
  - name: cat-hosts
    image: busybox
    command:
    - cat
    args:
    - "/etc/hosts"

Azure AKS External Load Balancer Not Connecting to POD

I am trying to create a multi-container pod for a simple demo. I have an app that is built in Docker containers. There are three containers:
- a Redis server
- a Node/Express microservice
- a Node/Express/React front end
All three containers are deployed successfully and running.
I have created a public load balancer, which is running without any errors.
I cannot connect to the front end from the public IP.
I have also run tcpdump in the front-end container and there is no traffic getting in.
Here is my YAML file used to create the deployment and service:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mydemoapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mydemoapp
  template:
    metadata:
      labels:
        app: mydemoapp
    spec:
      nodeSelector:
        "beta.kubernetes.io/os": linux
      containers:
      - name: microservices-web
        image: mydemocr.azurecr.io/microservices_web:v1
        ports:
        - containerPort: 3001
      - name: redislabs-rejson
        image: mydemocr.azurecr.io/redislabs-rejson:v1
        ports:
        - containerPort: 6379
      - name: mydemoappwebtest
        image: mydemocr.azurecr.io/jsonformwebtest:v1
        ports:
        - containerPort: 3000
---
apiVersion: v1
kind: Service
metadata:
  name: mydemoappservice
spec:
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 80
    targetPort: 3000
  selector:
    app: mydemoapp
This is what a describe of my service looks like:
Name: mydemoappservice
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"mydemoappservice","namespace":"default"},"spec":{"ports":[{"p...
Selector: app=mydemoapp
Type: LoadBalancer
IP: 10.0.104.159
LoadBalancer Ingress: 20.49.172.10
Port: <unset> 80/TCP
TargetPort: 3000/TCP
NodePort: <unset> 31990/TCP
Endpoints: 10.244.0.17:3000
Session Affinity: None
External Traffic Policy: Cluster
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal EnsuringLoadBalancer 24m service-controller Ensuring load balancer
Normal EnsuredLoadBalancer 24m service-controller Ensured load balancer
One more weirdness: when I run the front-end container on my own machine I can get a shell, run curl localhost:3000 and get some output, but when I do the same in the Azure container I get the following response after some delay:
curl: (52) Empty reply from server
Why this container works on my machine and not in Azure is another layer to the mystery.
Referring to the docs here, the container needs to listen on 0.0.0.0 instead of 127.0.0.1, because
any port which is listening on the default 0.0.0.0 address inside a container will be accessible from the network.

NodeJS Service is not reachable on Kubernetes (GCP)

I have hit a real roadblock here and have not found any solutions so far. Ultimately, my deployed NodeJS + Express server is not reachable when deployed to a Kubernetes cluster on GCP. I followed the guide & example, but nothing seems to work.
The cluster, node and service are running just fine and don't have any issues. Furthermore, it works just fine locally when running it with Docker.
Here's my Node YAML:
apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: 2019-08-06T04:13:29Z
  generation: 1
  labels:
    run: nodejsapp
  name: nodejsapp
  namespace: default
  resourceVersion: "23861"
  selfLink: /apis/apps/v1/namespaces/default/deployments/nodejsapp
  uid: 8b6b7ac5-b800-11e9-816e-42010a9600de
spec:
  progressDeadlineSeconds: 2147483647
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      run: nodejsapp
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: nodejsapp
    spec:
      containers:
      - image: gcr.io/${project}/nodejsapp:latest
        imagePullPolicy: Always
        name: nodejsapp
        ports:
        - containerPort: 5000
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: 2019-08-06T04:13:29Z
    lastUpdateTime: 2019-08-06T04:13:29Z
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  observedGeneration: 1
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1
Service YAML:
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2019-08-06T04:13:34Z
  labels:
    run: nodejsapp
  name: nodejsapp
  namespace: default
  resourceVersion: "25444"
  selfLink: /api/v1/namespaces/default/services/nodejsapp
  uid: 8ef81536-b800-11e9-816e-42010a9600de
spec:
  clusterIP: XXX.XXX.XXX.XXX
  externalTrafficPolicy: Cluster
  ports:
  - nodePort: 32393
    port: 80
    protocol: TCP
    targetPort: 5000
  selector:
    run: nodejsapp
  sessionAffinity: None
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
    - ip: XXX.XXX.XXX.XXX
The NodeJS server is configured to run on port 5000. I also tried it without port-forwarding, but that made no difference to the result.
Any help is much appreciated.
UPDATE:
I used this guide and followed the instructions: https://cloud.google.com/kubernetes-engine/docs/tutorials/hello-app
UPDATE 2:
FINALLY figured it out. I'm not sure why this is not mentioned anywhere, but you have to create an Ingress that routes the traffic to the pod accordingly.
Here's the example config:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    ingress.kubernetes.io/backends: '{"k8s-be-32064--abfe1f07378017e9":"HEALTHY"}'
    ingress.kubernetes.io/forwarding-rule: k8s-fw-default-nodejsapp--abfe1f07378017e9
    ingress.kubernetes.io/target-proxy: k8s-tp-default-nodejsapp--abfe1f07378017e9
    ingress.kubernetes.io/url-map: k8s-um-default-nodejsapp--abfe1f07378017e9
  creationTimestamp: 2019-08-06T18:59:15Z
  generation: 1
  name: nodejsapp
  namespace: default
  resourceVersion: "171168"
  selfLink: /apis/extensions/v1beta1/namespaces/default/ingresses/versapay-api
  uid: 491cd248-b87c-11e9-816e-42010a9600de
spec:
  backend:
    serviceName: nodejsapp
    servicePort: 80
status:
  loadBalancer:
    ingress:
    - ip: XXX.XXX.XXX
Adding this as an answer since I need to include an image (but it is not necessarily an answer):
As shown in the image, besides your backend service, a green tick should be visible.
Probable solution:
In your NodeJS app, please add the following base URLs, i.e.:
When the application is started locally, http://localhost:5000/ should return a 200 status code (ideally with a Server is running... or similar message).
And also, if path-based routing is enabled, another base URL is required:
http://localhost:5000/<nodeJsAppUrl>/ should also return a 200 status code.
The above URLs are required for the health checks of both the LoadBalancer and the Backend Service; then redeploy the service.
Please let me know if the above solution doesn't fix the said issue.
You need an intermediate service to internally expose your deployment.
Right now, you have a set of pods grouped in a deployment and a load balancer exposed in your cluster but you need to link them with an additional service.
You can try using a NodePort like the following:
apiVersion: v1
kind: Service
metadata:
  name: nodejsapp-nodeport
spec:
  selector:
    run: nodejsapp
  ports:
  - name: default
    protocol: TCP
    port: 32393
    targetPort: 5000
  type: NodePort
This NodePort service sits in between your load balancer and the pods in your deployment, targeting them on port 5000 and exposing port 32393 (as per your settings in the original question; you can change it).
From here, you can redeploy your load balancer to target the previous NodePort. This way, you can reach your NodeJS app via port 80 from your load balancer's public address.
apiVersion: v1
kind: Service
metadata:
  name: nodejs-lb
spec:
  selector:
    run: nodejsapp
  ports:
  - name: default
    protocol: TCP
    port: 80
    targetPort: 32393
  type: LoadBalancer
The whole scenario would look like this:
publicly exposed address --> LoadBalancer --> NodePort --> Deployment --> Pods

kubernetes Seed provider couldn't lookup host cassandra-0.cassandra.default.svc.cluster.local

The Cassandra cluster on AWS is failing to start. The error is as follows:
INFO [main] 2018-10-11 08:11:42,794 DatabaseDescriptor.java:729 - Back-pressure is disabled with strategy org.apache.cassandra.net.RateBasedBackPressure{high_ratio=0.9, factor=5, flow=FAST}.
WARN [main] 2018-10-11 08:11:42,848 SimpleSeedProvider.java:60 - Seed provider couldn't lookup host cassandra-0.cassandra.default.svc.cluster.local
Exception (org.apache.cassandra.exceptions.ConfigurationException) encountered during startup: The seed provider lists no seeds. The seed provider lists no seeds.
ERROR [main] 2018-10-11 08:11:42,851 CassandraDaemon.java:708 - Exception encountered during startup: The seed provider lists no seeds.
Here are the details of my setup:
$kubectl get pods [13:48]
NAME READY STATUS RESTARTS AGE
cassandra-0 1/1 Running 0 19h
cassandra-1 0/1 CrashLoopBackOff 231 19h
$kubectl get services [13:49]
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
cassandra NodePort 100.69.201.208 <none> 9042:30000/TCP 1d
kubernetes ClusterIP 100.64.0.1 <none> 443/TCP 15d
$kubectl get pvc [13:50]
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
cassandra-data-cassandra-0 Pending fast 15d
cassandra-storage-cassandra-0 Bound pvc-f3ff4203-c0a4-11e8-84a8-02c7556b5a4a 320Gi RWO gp2 15d
cassandra-storage-cassandra-1 Bound pvc-1bc3f896-c0a5-11e8-84a8-02c7556b5a4a 320Gi RWO gp2 15d
$kubectl get namespaces [13:53]
NAME STATUS AGE
default Active 15d
kube-public Active 15d
kube-system Active 15d
Even the working pod is not loading storage.
It was working fine until I tried to change MAX_HEAP_SIZE from 1024M to 2048M.
After that, even though I deleted all the old pods and services and created fresh ones, it is still not working.
You are using the NodePort type. This will not make the service a headless service, which is why the IP address of the pod doesn't get resolved.
What you need to do is create a separate headless service. You also need to create your own Docker image and run a script in your entrypoint that fetches all the IPs for the service domain name.
You can look at the following project as an example: https://github.com/vyshane/cassandra-kubernetes/
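For illustration, a minimal sketch of such a headless service for the cassandra pods in the question (clusterIP: None is what makes it headless; the app label is an assumption and must match the labels on your StatefulSet's pods, and the service name must match the StatefulSet's serviceName):
apiVersion: v1
kind: Service
metadata:
  name: cassandra
spec:
  clusterIP: None      # headless: DNS resolves to the individual pod addresses
  ports:
  - port: 9042
    name: cql
  selector:
    app: cassandra     # assumed label; must match the pods created by the StatefulSet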
I did try Simon's solution, but his tip that it has to be a headless service was the key. In my case I create the headless service by adding clusterIP: None; without this line it gives "Seed provider couldn't lookup host".
I could not find the DNS of the pod (elassandra-0.elassandra.chargington.svc.cluster.local), but I could find the DNS of the service (elassandra.chargington.svc.cluster.local).
Sometimes you don’t need or want load-balancing and a single service IP. In this case, you can create “headless” services by specifying "None" for the cluster IP (.spec.clusterIP).
https://kubernetes.io/docs/concepts/services-networking/service/
Here is my code
apiVersion: v1
kind: Service
metadata:
  labels:
    app: elassandra
  name: elassandra
  namespace: chargington
spec:
  clusterIP: None
  ports:
  - name: cassandra
    port: 9042
  - name: http
    port: 9200
  - name: transport
    protocol: TCP
    port: 9300
  selector:
    app: elassandra
And in my StatefulSet I need to set serviceName: elassandra. This is necessary to point the StatefulSet to the Service that will manage the domain for the pods' DNS names.
---
apiVersion: "apps/v1beta1"
kind: StatefulSet
metadata:
  name: elassandra
  namespace: chargington
spec:
  serviceName: elassandra
  replicas: 1
  template:
    metadata:
      labels:
        app: elassandra
    spec:
      containers:
      - name: elassandra
        image: strapdata/elassandra:6.2.3.3
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 7000
          name: intra-node
        - containerPort: 7001
          name: tls-intra-node
        - containerPort: 7199
          name: jmx
        - containerPort: 9042
          name: cql
        - containerPort: 9200
          name: http
          protocol: TCP
        - containerPort: 9300
          name: transport
          protocol: TCP
        env:
        - name: CASSANDRA_SEEDS
          value: elassandra-0.elassandra.chargington.svc.cluster.local
        - name: MAX_HEAP_SIZE
          value: 256M
        - name: HEAP_NEWSIZE
          value: 100M
        - name: CASSANDRA_CLUSTER_NAME
          value: "Cassandra"
        - name: CASSANDRA_DC
          value: "DC1"
        - name: CASSANDRA_RACK
          value: "Rack1"
        - name: CASSANDRA_ENDPOINT_SNITCH
          value: GossipingPropertyFileSnitch
        volumeMounts:
        - name: elassandra-data
          mountPath: /opt/elassandra-5.5.0.8/data
  volumeClaimTemplates:
  - metadata:
      name: elassandra-data
      annotations:
        volume.beta.kubernetes.io/storage-class: ""
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
If anyone else is still facing an issue with Seed provider couldn't lookup host cassandra-0.cassandra.default.svc.cluster.local:
For me the issue was the domain. I had configured the seed as:
cassandra-0.cassandra.<my-kube-namespace>.svc.<kube-domain>
In our company the k8s cluster was deployed with a custom cluster domain rather than the default settings, so I had to use that domain and it started working for me.
To find out the domain you may need to contact your local kube admin.
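Purely as an illustration, with hypothetical placeholder values for the namespace and cluster domain (substitute your own), the seed setting in the StatefulSet might then look like:
env:
- name: CASSANDRA_SEEDS
  # "my-namespace" and "example.local" are hypothetical; use your own namespace
  # and the cluster domain your k8s cluster was actually deployed with
  value: cassandra-0.cassandra.my-namespace.svc.example.local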

Kubernetes - Ingress / Service / LB

I am new to K8s and this is my first time trying to get to grips with it. I am trying to set up a basic Nodejs Express API using this deployment.yml:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: api
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: api
    spec:
      containers:
      - image: registry.gitlab.com/<project>/<app>:<TAG>
        imagePullPolicy: Always
        name: api
        env:
        - name: PORT
          value: "8080"
        ports:
        - containerPort: 8080
          hostPort: 80
        livenessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 30
          timeoutSeconds: 1
        readinessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 30
          timeoutSeconds: 1
      imagePullSecrets:
      - name: registry.gitlab.com
This is deployed via GitLab CI. It is working, and I have set up a service to expose it:
apiVersion: v1
kind: Service
metadata:
  name: api-svc
  labels:
    app: api-svc
spec:
  ports:
  - port: 80
    targetPort: 80
    protocol: TCP
    name: http
  selector:
    app: api
  type: LoadBalancer
But I have been looking into ingress to have a single point of entry for possibly multiple services. I have been reading through Kubernetes guides and I read through this Kubernetes Ingress Example and this is the ingress.yml I created:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
spec:
  backend:
    serviceName: api-svc
    servicePort: 80
But this did not work: when I visited the external IP address generated by the ingress, I just got 502 error pages.
Could anyone point me in the right direction? What am I doing wrong, or what am I missing? I see that in the example link above there is an nginx-rc.yml, which I deployed exactly as in the example and it was created, but I still got nothing from the endpoint. The API was accessible from the Service's external IP, though.
Many thanks
I have looked into it again and think I figured it out.
In order for Ingress to work on GCE you need to define your backend service as a NodePort, not as ClusterIP or LoadBalancer.
Also, you need to make sure the HTTP health check to / works (you'll see the Google L7 load balancer hitting your service quite a lot on that URL), and then it's available.
Thought I would post my working deployment/service/ingress.
After much effort in getting this working, here is what I used:
Deployment
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: backend-api-v2
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: backend-api-v2
    spec:
      containers:
      - image: registry.gitlab.com/<project>/<app>:<TAG>
        imagePullPolicy: Always
        name: backend-api-v2
        env:
        - name: PORT
          value: "8080"
        ports:
        - containerPort: 8080
        livenessProbe:
          httpGet:
            # Path to probe; should be cheap, but representative of typical behavior
            path: /healthz
            port: 8080
          initialDelaySeconds: 30
          timeoutSeconds: 5
        readinessProbe:
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 30
          timeoutSeconds: 5
      imagePullSecrets:
      - name: registry.gitlab.com
Service
apiVersion: v1
kind: Service
metadata:
  name: api-svc-v2
  labels:
    app: api-svc-v2
spec:
  type: NodePort
  ports:
  - port: 80
    targetPort: 8080
    nodePort: 31810
    protocol: TCP
    name: http
  selector:
    app: backend-api-v2
Ingress
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app-ingress
spec:
  rules:
  - host: api.foo.com
    http:
      paths:
      - path: /v1/*
        backend:
          serviceName: api-svc
          servicePort: 80
      - path: /v2/*
        backend:
          serviceName: api-svc-v2
          servicePort: 80
The important bits to notice, as #Tigraine pointed out, are that the service uses type: NodePort and not LoadBalancer. I have also defined a nodePort, but I believe one will be created for you if you leave it out.
It will use the default-http-backend for any routes that don't match the rules; this is a default container that GKE runs in the kube-system namespace. So if I visit http://api.foo.com/bob I get the default response of default backend - 404.
Hope this helps
Looks like you're exposing your service on port 80, but your container is listening on 8080, so any request to the service is going to fail.
Also, have a look at the sample ingress resource (https://github.com/nginxinc/kubernetes-ingress/blob/master/examples/complete-example/cafe-ingress.yaml); you also need to define which hosts / paths route to which service when the ingress controller is hit (i.e. example.foo.com --> api-svc).
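As a sketch only, assuming the API container keeps listening on 8080 as in the question's deployment, the Service could target that port directly, and the Ingress could name a host for it (NodePort follows the GCE ingress advice above; api.example.com is a placeholder host):
apiVersion: v1
kind: Service
metadata:
  name: api-svc
  labels:
    app: api-svc
spec:
  type: NodePort            # NodePort so the ingress controller can use it as a backend
  ports:
  - port: 80
    targetPort: 8080        # must match the containerPort the API listens on
    protocol: TCP
    name: http
  selector:
    app: api
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress
spec:
  rules:
  - host: api.example.com    # placeholder host; replace with your own
    http:
      paths:
      - backend:
          serviceName: api-svc
          servicePort: 80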
