I have a Service that exposes a "hello world" web Deployment in the "develop" namespace.
Service YAML
kind: Service
apiVersion: v1
metadata:
  name: hello-v1-svc
spec:
  selector:
    app: hello-v1
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
To test whether the page works, I run kubectl port-forward, and the page is displayed successfully using the public IP.
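For example (service name and namespace as in the YAML above; the local port is arbitrary):
kubectl port-forward svc/hello-v1-svc 8080:80 -n develop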
Edit:
Then the Ingress is deployed, but the page is only displayed from within the VNet address space:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  labels:
    app: app
    version: 1.0.0
  name: dev-ingress
  namespace: develop
spec:
  rules:
    - http:
        paths:
          - backend:
              serviceName: hello-v1-svc
              servicePort: 80
            path: /
Ingress Rules
Rules:
  Host  Path  Backends
  ----  ----  --------
  *
        /     hello-v1-svc:80 (10.1.1.13:8080,10.1.1.21:8080,10.1.1.49:8080)
What step am I skipping for the page to display?
First of all, to answer your comment:
Maybe it is related to this annotation: "service.beta.kubernetes.io/azure-load-balancer-internal". Controller is set to "True"
The service.beta.kubernetes.io/azure-load-balancer-internal: "true" annotation is typically used when you create an ingress controller on an internal virtual network. With this annotation, the ingress controller is configured on an internal, private virtual network and IP address, and no external access is allowed.
You can find more info in the Create an ingress controller to an internal virtual network in Azure Kubernetes Service (AKS) article.
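For reference, this is roughly where that annotation ends up when the controller is installed with the nginx-ingress Helm chart (a sketch; the exact values layout can differ between chart versions). If you want the ingress to be reachable from the internet, install the controller without this annotation:
# values snippet for the nginx-ingress Helm chart (sketch)
controller:
  service:
    annotations:
      service.beta.kubernetes.io/azure-load-balancer-internal: "true"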
I reproduced your case: I created an AKS cluster and applied the YAML below. It works as expected, so please use:
apiVersion: v1
kind: Namespace
metadata:
  name: develop
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world-deployment
  namespace: develop
  labels:
    app: hello-v1
spec:
  selector:
    matchLabels:
      app: hello-v1
  replicas: 2
  template:
    metadata:
      labels:
        app: hello-v1
    spec:
      containers:
        - name: hello-v1
          image: paulbouwer/hello-kubernetes:1.8
          ports:
            - containerPort: 8080
---
kind: Service
apiVersion: v1
metadata:
  name: hello-v1-svc
spec:
  type: ClusterIP
  selector:
    app: hello-v1
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: dev-ingress
  namespace: develop
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - http:
        paths:
          - backend:
              serviceName: hello-v1-svc
              servicePort: 80
            path: /
List of my services:
vitalii#Azure:~$ kubectl get svc -A
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default hello-v1-svc ClusterIP 10.0.20.206 <none> 80/TCP 19m
default kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 83m
ingress-basic nginx-ingress-controller LoadBalancer 10.0.222.156 *.*.*.* 80:32068/TCP,443:30907/TCP 53m
ingress-basic nginx-ingress-default-backend ClusterIP 10.0.193.198 <none> 80/TCP 53m
kube-system dashboard-metrics-scraper ClusterIP 10.0.178.224 <none> 8000/TCP 83m
kube-system healthmodel-replicaset-service ClusterIP 10.0.199.235 <none> 25227/TCP 83m
kube-system kube-dns ClusterIP 10.0.0.10 <none> 53/UDP,53/TCP 83m
kube-system kubernetes-dashboard ClusterIP 10.0.115.184 <none> 443/TCP 83m
kube-system metrics-server ClusterIP 10.0.199.200 <none> 443/TCP 83m
Sure, for security reasons I hid the EXTERNAL-IP of the nginx-ingress-controller. That is the IP you should use to access the page.
More information and examples can be found in the Create an ingress controller in Azure Kubernetes Service (AKS) article.
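For example (a sketch; substitute the EXTERNAL-IP shown for nginx-ingress-controller in the output above):
curl http://<EXTERNAL-IP-of-nginx-ingress-controller>/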
Related
I have deployed a service on Azure Kubernetes Service (AKS).
I get this when I run kubectl get service -n dev:
Myservice LoadBalancer 10.0.115.231 20.xxx.xx.xx 8080:32475/TCP
But when I try to open my application at 20.xxx.xx.xx:8080, I am not able to access it.
What could be the issue? I am posting my deployment and service files below.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: Myservice
  namespace: dev
spec:
  replicas: 1
  selector:
    matchLabels:
      app: Myservice
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  minReadySeconds: 5
  template:
    metadata:
      labels:
        app: MyService
    spec:
      nodeSelector:
        "beta.kubernetes.io/os": linux
      containers:
        - name: MyService
          image: image_url:v1
          ports:
            - containerPort: 8080
          resources:
            requests:
              cpu: 250m
            limits:
              cpu: 500m
apiVersion: v1
kind: Service
metadata:
  name: myservice
  namespace: dev-lifestyle
spec:
  type: LoadBalancer
  ports:
    - port: 8080
  selector:
    app: myservice
My pods are in the Running state.
Name: myservice
Namespace: dev-lifestyle
Labels: <none>
Annotations: <none>
Selector: myservice
Type: LoadBalancer
IP Families: <none>
IP: 10.0.xx.xx
IPs: <none>
LoadBalancer Ingress: 20.xxx.xx.xx
Port: <unset> 8080/TCP
TargetPort: 8080/TCP
NodePort: <unset> 32475/TCP
Endpoints: 172.xx.xx.xx:8080
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
Check that port 8080 is open in the network security groups (NSGs) in Azure.
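A minimal sketch of how that check/fix could look with the Azure CLI, assuming the NSG sits in the AKS node resource group; the resource group, NSG, and rule names below are placeholders:
# list the rules on the node NSG
az network nsg rule list --resource-group <node-resource-group> --nsg-name <nsg-name> -o table
# allow inbound traffic on 8080 if it is missing
az network nsg rule create --resource-group <node-resource-group> --nsg-name <nsg-name> \
  --name AllowPort8080 --priority 100 --direction Inbound --access Allow \
  --protocol Tcp --destination-port-ranges 8080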
I have created a Kubernetes cluster using two Azure Ubuntu VMs. I am able to deploy and access pods and deployments using the NodePort service type. I have also checked the pods' status in the kube-system namespace; all of the pods show as Running. However, whenever I set a service type to LoadBalancer, it does not get a load balancer IP and its status always shows as pending. I have also created an ingress controller for the nginx service, but it still does not get an ingress address. While initializing the Kubernetes master, I am using the following command:
kubeadm init
Below are the Deployment, Service and Ingress manifest files.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 80
  selector:
    app: nginx
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: test-ingress
spec:
  backend:
    serviceName: nginx
    servicePort: 80
$ kubectl describe svc nginx
Name: nginx
Namespace: default
Labels: app=nginx
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"app":"nginx"},"name":"nginx","namespace":"default"},"spec":{"p...
Selector: app=nginx
Type: ClusterIP
IP: 10.96.107.97
Port: http 80/TCP
TargetPort: 80/TCP
Endpoints: 10.44.0.4:80,10.44.0.5:80,10.44.0.6:80
Session Affinity: None
Events: <none>
$ kubectl describe ingress nginx
Name: test-ingress
Namespace: default
Address:
Default backend: nginx:80 (10.44.0.4:80,10.44.0.5:80,10.44.0.6:80)
Rules:
  Host  Path  Backends
  ----  ----  --------
  *     *     nginx:80 (10.44.0.4:80,10.44.0.5:80,10.44.0.6:80)
Annotations:
  kubectl.kubernetes.io/last-applied-configuration: {"apiVersion":"networking.k8s.io/v1beta1","kind":"Ingress","metadata":{"annotations":{},"name":"test-ingress","namespace":"default"},"spec":{"backend":{"serviceName":"nginx","servicePort":80}}}
Events: <none>
Do we need to specify any IP ranges (private or public) of the VMs while running kubeadm init?
Or do we need to change any network settings on the Azure Ubuntu VMs?
Because you created your own Kubernetes cluster rather than using one provided by AWS, Azure, or GCP, there is no load balancer integration. This is why the external IP status stays pending.
However, you can work around this by using an ingress controller or by exposing services directly through a NodePort (a sketch follows below).
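For instance, a NodePort variant of the nginx Service from your manifests could look like this (a sketch; the nodePort value is illustrative and must fall in the cluster's NodePort range, 30000-32767 by default):
apiVersion: v1
kind: Service
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
    - name: http
      port: 80
      protocol: TCP
      targetPort: 80
      nodePort: 30080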
I also noticed that your nginx service uses the annotation service.beta.kubernetes.io/aws-load-balancer-type: nlb. You said you are using Azure, but these load balancer annotations are platform specific, and that one is AWS-specific.
Alternatively, if you would like to experiment directly with public IPs, you can define your service with externalIPs, provided you have a public IP allocated to your node and it allows ingress traffic:
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: MyApp
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 9376
  externalIPs:
    - 80.11.12.10
But if you are planning to build your own Kubernetes cluster, a good approach to get this done is to use an ingress controller (a sketch follows below).
Hope this helps.
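A minimal sketch of installing ingress-nginx on a self-managed cluster and exposing it via NodePort, assuming Helm is available (the repo URL is the project's official chart repository; release and namespace names are illustrative):
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --namespace ingress-nginx --create-namespace \
  --set controller.service.type=NodePort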
I am trying to add an ingress rule to an internal load balancer. As per the docs, it can be redirected to a service. It works as long as the service is of type ClusterIP, but it goes into an infinite redirect when it is a LoadBalancer.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: demo-ingress
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  tls:
    - hosts:
        - demo.azure.com
      secretName: aks-ingress-tls
  rules:
    - host: demo.azure.com
      http:
        paths:
          - path: /
            backend:
              serviceName: aks-helloworld
              servicePort: 80
          - path: /demo
            backend:
              serviceName: demo-backend
              servicePort: 80
https://demo.azure.com works but https://demo.azure.com/demo doesn't. The difference is that aks-helloworld is a ClusterIP service while demo-backend is a LoadBalancer.
13:33 $ kubectl get services
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
aks-helloworld ClusterIP 10.0.204.168 <none> 80/TCP 15m
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 16h
demo-backend LoadBalancer 10.0.198.251 23.99.128.86 80:30332/TCP 15h
For your issue, I don't think the problem is that one service has type ClusterIP and the other has type LoadBalancer. When traffic comes in through either path, it is all redirected to the service, in your case demo-backend.
See the result of the test on my side:
Access from the Internet:
I did not add TLS, but I think the traffic will all be redirected to the service whether it uses TLS or not. I just changed the command to --set serviceType="LoadBalancer" when installing the second application through Helm. So please check whether something is wrong in your steps.
But I don't think it's a good idea to route traffic to one service through both of these paths. If you use TLS through the Ingress, it is not secure while the LoadBalancer path exists at the same time, because traffic through the LoadBalancer bypasses the TLS.
Update
Based on your comment, I think you need to create a Deployment for your application and then create a Service for it, with a file like this:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: myapp
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: myapp
    spec:
      containers:
        - name: myapp
          image: yourImage
          ports:
            - containerPort: 80
              name: myapp
---
apiVersion: v1
kind: Service
metadata:
  name: demo-backend
  labels:
    app: myapp
spec:
  type: ClusterIP
  selector:
    app: myapp
  ports:
    - port: 80
      name: http
The Deployment is the basis of the application; the Service just accepts the traffic for the pods. So I guess you are missing the Deployment, which is why you cannot access your application.
Why are you exposing the Service as type LoadBalancer if you are using an Ingress for the resource? You are essentially hitting the ingress load balancer and then hitting another service load balancer, which is probably what causes this redirect issue.
The issue was caused by the following headers added by the nginx ingress controller.
X-FORWARDED-PROTO: https
X-FORWARDED-PORT: 443
See this answer: https://stackoverflow.com/a/54880257/747456
I am using an AKS cluster on Azure. I am trying to discover a service using DNS (http://my-api.default.svc.cluster.local:3000/), but it is not working ("This site can't be reached"). With the service IP endpoint, everything works fine.
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-api
  labels:
    app: my-api
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-api
  template:
    metadata:
      labels:
        app: my-api
    spec:
      containers:
        - name: my-api
          image: test.azurecr.io/my-api:latest
          ports:
            - containerPort: 3000
      imagePullSecrets:
        - name: testsecret
---
apiVersion: v1
kind: Service
metadata:
  name: my-api
spec:
  selector:
    app: my-api
  ports:
    - protocol: TCP
      port: 3000
      targetPort: 3000
kubectl describe services kube-dns --namespace kube-system
Name: kube-dns
Namespace: kube-system
Labels: addonmanager.kubernetes.io/mode=Reconcile
k8s-app=kube-dns
kubernetes.io/cluster-service=true
kubernetes.io/name=KubeDNS
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"labels":{"addonmanager.kubernetes.io/mode":"Reconcile","k8s-app":"kube-dns","kubernet...
Selector: k8s-app=kube-dns
Type: ClusterIP
IP: 10.10.110.110
Port: dns 53/UDP
TargetPort: 53/UDP
Endpoints: 10.10.100.54:53,10.10.100.64:53
Port: dns-tcp 53/TCP
TargetPort: 53/TCP
Endpoints: 10.10.100.54:53,10.10.100.64:53
Session Affinity: None
Events: <none>
kubectl describe svc my-api
Name: my-api
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Service","metadata":{"annotations":{},"name":"my-api","namespace":"default"},"spec":{"ports":[{"port":3000,"protocol":...
Selector: app=my-api
Type: ClusterIP
IP: 10.10.110.104
Port: <unset> 3000/TCP
TargetPort: 3000/TCP
Endpoints: 10.10.100.42:3000
Session Affinity: None
Events: <none>
From the second pod:
kubectl exec -it second-pod /bin/bash
curl my-api.default.svc.cluster.local:3000
Response: {"value":"Hello world2"}
A website is running in the second pod that uses the same endpoint, but it is not connecting to the service.
After fixing the indentation of your YAML file, I was able to launch the Deployment and Service successfully. DNS resolution also worked fine.
Differences:
Fixed the indentation
Used the test1 namespace instead of default
Used containerPort 80 instead of 3000
Used my own image
Deployment:
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  labels:
    app: my-api
  name: my-api
  namespace: test1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-api
  template:
    metadata:
      labels:
        app: my-api
    spec:
      containers:
        - image: leodotcloud/swiss-army-knife
          name: my-api
          ports:
            - containerPort: 80
              protocol: TCP
Service:
apiVersion: v1
kind: Service
metadata:
  name: my-api
  namespace: test1
spec:
  ports:
    - port: 3000
      protocol: TCP
      targetPort: 80
  selector:
    app: my-api
  type: ClusterIP
Debugging steps:
Install tcpdump inside both of the kube-dns containers and start capturing DNS traffic (with filters on the second pod's IP).
From inside the second pod, run a curl or dig command using the FQDN (see the sketch after this list).
Check whether the DNS query packets are reaching the kube-dns containers.
If not, check for networking issues.
If DNS resolution is working, start tcpdump inside your application container and check whether the curl packets reach the container.
Check the source and destination IP addresses of the packets.
Check the iptables rules on the hosts.
Check the sysctl settings.
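A minimal sketch of such checks, run from outside the pods (pod and service names are the ones used in the question; the images must ship a shell and the relevant tools, which slim images may not):
kubectl exec -it second-pod -- nslookup my-api.default.svc.cluster.local
kubectl exec -it second-pod -- curl -v http://my-api.default.svc.cluster.local:3000/
kubectl get endpoints my-api   # confirm the service actually has endpoints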
If you use a Deployment to deploy your application onto the cluster, where it will be consumed via a Service, you should have no need at all to manually set Endpoints. Just rely on Kubernetes and define a normal selector in your Service object.
Other than that, when it does make sense (an external service consumed from within the cluster), you need to make sure your Endpoints port definition fully matches the one on the Service (including protocol and potentially name). Such incomplete matching is the most common reason for endpoints not showing up as part of a service.
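A minimal sketch of that Service/Endpoints pairing for an external backend (the name, IP, and port are illustrative); note how the port name and protocol are repeated on both objects:
apiVersion: v1
kind: Service
metadata:
  name: external-api
spec:
  ports:
    - name: http          # name/port/protocol here...
      protocol: TCP
      port: 3000
---
apiVersion: v1
kind: Endpoints
metadata:
  name: external-api      # must match the Service name
subsets:
  - addresses:
      - ip: 10.20.30.40
    ports:
      - name: http        # ...must match here for the endpoints to attach
        protocol: TCP
        port: 3000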
From the above discussion, what I understood is that you want to expose a service, but without using the IP address.
A Service can be exposed in many ways. You should look at the Service type LoadBalancer.
Try modifying your service as follows:
apiVersion: v1
kind: Service
metadata:
  name: my-api
spec:
  type: LoadBalancer
  selector:
    app: my-api
  ports:
    - protocol: TCP
      port: 3000
      targetPort: 3000
This will create a load balancer and map your service to it.
Later you can add this load balancer to the DNS mapping service provided by Azure to give it the domain name you like, e.g. http://my-api.example.com:3000.
I would also like to add that if you define your ports as follows:
ports:
  - name: http
    port: 80
    targetPort: 3000
This will redirect traffic coming in on port 80 to port 3000, and your service call will look much cleaner, e.g. http://my-api.example.com.
I am trying to set up a K8s deployment where Ingress resources can map a service to a subdomain, i.e. app1 can configure itself to receive traffic from app1.sub.domain.io in its Ingress config.
I have a DNS A record for *.sub.domain.io that points to a load balancer. That load balancer points to the cluster's instance group.
So, if I am right, all traffic that goes to anything at sub.domain.io will land inside the cluster, which then just needs to route said traffic.
Below are the k8s configs, which include a pod, a service and an ingress. The pods are healthy and working. I believe the service isn't required, but I want other pods to talk to it via internal DNS, so it's added.
The ingress rules have the host app1.sub.domain.io, so in theory, curl'ing app1.sub.domain.io should follow:
DNS -> Load Balancer -> Cluster -> Ingress Controller -> Pod
At the moment, when I try to hit app1.sub.domain.io it just hangs. I have tried not having the service and making an ExternalName service, and it doesn't work.
I don't want to go down the route of using a LoadBalancer service, as that creates a new external IP that needs to be applied to DNS records manually, or with a nasty bash script that waits for the service's external IP and runs a GCP command, and we don't want to do this for each service.
Ref links: https://kubernetes.io/docs/concepts/services-networking/ingress/#name-based-virtual-hosting
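A quick way to exercise that path without relying on DNS is to curl the load balancer address directly with the Host header set (a sketch; <lb-ip> stands for the load balancer's IP):
curl -v http://<lb-ip>/ -H "Host: app1.sub.domain.io"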
Deployment
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: app1
  namespace: default
  labels:
    app: app1
spec:
  replicas: 3
  selector:
    matchLabels:
      app: app1
  template:
    metadata:
      labels:
        app: app1
    spec:
      containers:
        - image: xxxx:latest
          name: app1
          ports:
            - containerPort: 80
          env:
            - name: NODE_ENV
              value: production
Service
---
kind: Service
apiVersion: v1
metadata:
  name: app1
  labels:
    app: app1
spec:
  ports:
    - port: 80
      targetPort: 80
  selector:
    app: app1
  type: ClusterIP
Ingress
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app1-ingress
  labels:
    app: app1
spec:
  rules:
    - host: app1.sub.domain.io
      http:
        paths:
          - backend:
              serviceName: app1
              servicePort: 80
Once everything is deployed, if you query:
kubectl get pods,services,ingresses -l app=app1
NAME READY STATUS RESTARTS AGE
po/app1-6d4b9d8c5-4gcz5 1/1 Running 0 20m
po/app1-6d4b9d8c5-m4kwq 1/1 Running 0 20m
po/app1-6d4b9d8c5-rpm9l 1/1 Running 0 20m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
svc/app1 ClusterIP x.x.x.x <none> 80/TCP 20m
NAME HOSTS ADDRESS PORTS AGE
ing/app1-ingress app1.sub.domain.io 80 20m
----------------------------------- Update -----------------------------------
Currently I am doing this, which is not ideal: I have a global static IP that is assigned to a DNS record.
---
kind: Service
apiVersion: v1
metadata:
  name: app1
  labels:
    app: app1
spec:
  type: NodePort
  selector:
    app: app1
  ports:
    - port: 80
      targetPort: 80
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app1-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: app1-static-ip
  labels:
    app: app1-static-ip
spec:
  backend:
    serviceName: app1
    servicePort: 80
*.sub.domain.io should point to the IP of the Ingress.
You can use a static IP for the Ingress by following the instructions in the tutorial here: https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer#step_5_optional_configuring_a_static_ip_address
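For example, the global static IP referenced by the kubernetes.io/ingress.global-static-ip-name: app1-static-ip annotation in the update above can be reserved with (a sketch; your project and defaults apply):
gcloud compute addresses create app1-static-ip --global
gcloud compute addresses describe app1-static-ip --global   # note the address for the *.sub.domain.io DNS record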
Try adding a path to your Ingress:
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: app1-ingress
  labels:
    app: app1
spec:
  rules:
    - host: app1.sub.domain.io
      http:
        paths:
          - backend:
              serviceName: app1
              servicePort: 80
            path: /
If that doesn't work, please post the output of describe service and describe ingress.
Do you have an Ingress Controller?
Traffic should go LB -> Ingress Controller -> Ingress -> Service ClusterIP -> Pods.
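A quick way to check for one (a sketch; the namespace and labels vary by how the controller was installed):
kubectl get pods --all-namespaces -l app.kubernetes.io/name=ingress-nginx
kubectl get svc --all-namespaces | grep -i ingress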