Service deployed on Kubernetes is not accessible from the internet - Azure

I have deployed a service on Azure Kubernetes Service.
I get this when I run kubectl get service -n dev:
Myservice LoadBalancer 10.0.115.231 20.xxx.xx.xx 8080:32475/TCP
But when I try to open my application at 20.xxx.xx.xx:8080, I am not able to access it.
What could be the issue? I am posting my deployment and service files below.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: Myservice
  namespace: dev
spec:
  replicas: 1
  selector:
    matchLabels:
      app: Myservice
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  minReadySeconds: 5
  template:
    metadata:
      labels:
        app: MyService
    spec:
      nodeSelector:
        "beta.kubernetes.io/os": linux
      containers:
      - name: MyService
        image: image_url:v1
        ports:
        - containerPort: 8080
        resources:
          requests:
            cpu: 250m
          limits:
            cpu: 500m
---
apiVersion: v1
kind: Service
metadata:
  name: myservice
  namespace: dev-lifestyle
spec:
  type: LoadBalancer
  ports:
  - port: 8080
  selector:
    app: myservice
My pods are in the Running state, and kubectl describe service shows:
Name: myservice
Namespace: dev-lifestyle
Labels: <none>
Annotations: <none>
Selector: myservice
Type: LoadBalancer
IP Families: <none>
IP: 10.0.xx.xx
IPs: <none>
LoadBalancer Ingress: 20.xxx.xx.xx
Port: <unset> 8080/TCP
TargetPort: 8080/TCP
NodePort: <unset> 32475/TCP
Endpoints: 172.xx.xx.xx:8080
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>

Check that port 8080 is open in the network security group (NSG) in Azure.
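Beyond the NSG, also compare the manifests as posted: the Service lives in namespace dev-lifestyle while the Deployment is in dev, and the pod label (app: MyService) differs in case from the Service selector (app: myservice) — Kubernetes labels and selectors are case-sensitive, and resource names must be lowercase RFC 1123 names. As a sketch only (not the asker's exact setup), a consistent pair would look like:

```yaml
# Sketch only - all names lowercase and consistent; Kubernetes labels
# are case-sensitive and the service must be in the pods' namespace
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myservice
  namespace: dev
spec:
  replicas: 1
  selector:
    matchLabels:
      app: myservice        # must match template labels below
  template:
    metadata:
      labels:
        app: myservice
    spec:
      containers:
      - name: myservice
        image: image_url:v1
        ports:
        - containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
  name: myservice
  namespace: dev            # same namespace as the deployment
spec:
  type: LoadBalancer
  selector:
    app: myservice          # must match the pod labels exactly
  ports:
  - port: 8080
    targetPort: 8080
```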

Related

Why unable to access Azure aks Service external IP with browser?

apiVersion: apps/v1
kind: Deployment
metadata:
  name: farwell-helloworld-net3-webapi
spec:
  replicas: 1
  selector:
    matchLabels:
      app: farwell-helloworld-net3-webapi
  template:
    metadata:
      labels:
        app: farwell-helloworld-net3-webapi
    spec:
      nodeSelector:
        "kubernetes.io/os": linux
      containers:
      - name: farwell-helloworld-net3-webapi
        image: devcontainerregistry.azurecr.cn/farwell.helloworld.net3.webapi:latest
        env:
        - name: ALLOW_EMPTY_PASSWORD
          value: "yes"
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 250m
            memory: 256Mi
        ports:
        - containerPort: 8077
---
apiVersion: v1
kind: Service
metadata:
  name: farwell-helloworld-net3-webapi
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 8077
  selector:
    app: farwell-helloworld-net3-webapi
I use this command: kubectl get service farwell-helloworld-net3-webapi --watch
Then I get this:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
farwell-helloworld-net3-webapi LoadBalancer 10.0.116.13 52.131.xxx.xxx 80:30493/TCP 13m
I have already opened port 80 in Azure, but I cannot access http://52.131.xxx.xxx/WeatherForecast.
Could you help me, please? Or tell me some steps to help find the reason?
Update: after I changed the port number from 8077 to 80, it runs well.
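For anyone hitting the same symptom: the Service's targetPort (and the containerPort) must be the port the application actually listens on inside the container. A sketch of the corrected mapping, assuming the app listens on port 80 (not the asker's exact final file):

```yaml
# Sketch: load balancer port 80 -> container port 80,
# assuming the app inside the container listens on 80
apiVersion: v1
kind: Service
metadata:
  name: farwell-helloworld-net3-webapi
spec:
  type: LoadBalancer
  ports:
  - port: 80        # port exposed by the Azure load balancer
    targetPort: 80  # port the app listens on in the container
  selector:
    app: farwell-helloworld-net3-webapi
```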

Can't access external load balancer using AKS

I am not sure why external access is not working; it seems like I followed every tutorial I could find to a T.
In my final Docker image I do the following:
EXPOSE 80
EXPOSE 443
This is my deployment script, which deploys my app and a load-balancer service. Everything seems to boot up OK. I can tell my .NET Core application is running on port 80 because I can get live logs using the Azure portal. The load balancer finds the pods from the deployment and shows the appropriate mappings, but I am still unable to access them externally: not from a browser, nor via ping or telnet.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: 2d69-deployment
  labels:
    app: 2d69-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: 2d69
  template:
    metadata:
      labels:
        app: 2d69
    spec:
      containers:
      - name: 2d69
        image: 2d69containerregistry.azurecr.io/2d69:latest
        ports:
        - containerPort: 80
        volumeMounts:
        - name: keyvault-cert
          mountPath: /etc/keyvault
          readOnly: true
      volumes:
      - name: keyvault-cert
        secret:
          secretName: keyvault-cert
---
kind: Service
apiVersion: v1
metadata:
  name: 2d69
  labels:
    app: 2d69
spec:
  selector:
    app: 2d69
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 80
deployment description:
kubectl -n 2d69 describe deployment 2d69
Name: 2d69
Namespace: 2d69
CreationTimestamp: Fri, 11 Dec 2020 13:23:24 -0500
Labels: app=2d69
deployment.kubernetes.io/revision: 9
Selector: app=okrx2d69
Replicas: 2 desired | 2 updated | 2 total | 2 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 0
RollingUpdateStrategy: 25% max unavailable, 25% max surge
Pod Template:
Labels: app=okrx2d69
Containers:
2d69:
Image: 2d69containerregistry.azurecr.io/2d69:5520
Port: 80/TCP
Host Port: 0/TCP
Environment: <none>
Mounts:
/etc/keyvault from keyvault-cert (ro)
Volumes:
keyvault-cert:
Type: Secret (a volume populated by a Secret)
SecretName: keyvault-cert
Optional: false
Conditions:
Type Status Reason
---- ------ ------
Available True MinimumReplicasAvailable
Progressing True NewReplicaSetAvailable
OldReplicaSets: <none>
NewReplicaSet: 2d69-5dbcff8b94 (2/2 replicas created)
Events: <none>
service description:
kubectl -n 2d69 describe service 2d69
Name: 2d69
Namespace: 2d69
Labels: app=2d69
Selector: app=2d69
Type: LoadBalancer
IP: ***.***.14.208
LoadBalancer Ingress: ***.***.***.***
Port: https 443/TCP
TargetPort: 443/TCP
NodePort: https 32112/TCP
Endpoints: ***.***.9.103:443,***.***.9.47:443
Port: http 80/TCP
TargetPort: 80/TCP
NodePort: http 31408/TCP
Endpoints: ***.***.9.103:80,***.***.9.47:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
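One thing worth noting about the output above: the live deployment's selector is app=okrx2d69, while the posted YAML and the service select app=2d69, so what is running differs from what was posted. Whatever names are used, the pod template labels and the service selector have to match exactly; as an illustration only (label values here are hypothetical):

```yaml
# Sketch only - the label value "2d69" is illustrative; the point is
# that all three occurrences of the label must be identical
apiVersion: apps/v1
kind: Deployment
metadata:
  name: 2d69-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: 2d69            # must equal template.metadata.labels.app
  template:
    metadata:
      labels:
        app: 2d69          # pods are created with this label
    spec:
      containers:
      - name: 2d69
        image: 2d69containerregistry.azurecr.io/2d69:latest
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: 2d69
spec:
  type: LoadBalancer
  selector:
    app: 2d69              # must equal the pod label above
  ports:
  - port: 80
    targetPort: 80
```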

How to expose a service in AKS with NGINX Ingress

I have a service which exposes a "hello world" web deployment in the "develop" namespace.
Service YAML
kind: Service
apiVersion: v1
metadata:
  name: hello-v1-svc
spec:
  selector:
    app: hello-v1
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
To test if the page is working properly, I run "kubectl port-forward" and the page is displayed successfully using the public IP.
Edit:
Then the Ingress is deployed, but the page only displays within the vnet address space.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  labels:
    app: app
    version: 1.0.0
  name: dev-ingress
  namespace: develop
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: hello-v1-svc
          servicePort: 80
        path: /
Ingress Rules
Rules:
Host Path Backends
---- ---- --------
*
/ hello-v1-svc:80 (10.1.1.13:8080,10.1.1.21:8080,10.1.1.49:8080)
What step am I skipping for the page to display?
First of all, to answer your comment:
Maybe it is related to this annotation:
"service.beta.kubernetes.io/azure-load-balancer-internal". Controller
is set to "True"
The service.beta.kubernetes.io/azure-load-balancer-internal: "true" annotation is typically used when you create an ingress controller on an internal virtual network. With this annotation, the ingress controller is configured on an internal, private virtual network and IP address; no external access is allowed.
You can find more info in the Create an ingress controller to an internal virtual network in Azure Kubernetes Service (AKS) article.
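For reference, the annotation in question goes on the ingress controller's Service; a minimal sketch (names and namespace are illustrative, following the common nginx-ingress layout) looks like:

```yaml
# Sketch only - with this annotation the LB gets a private VNet IP;
# remove the annotation to get a public external IP instead
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-controller
  namespace: ingress-basic
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  ports:
  - name: http
    port: 80
  - name: https
    port: 443
  selector:
    app: nginx-ingress
```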
I reproduced your case: I created an AKS cluster and applied the YAMLs below. It works as expected, so please use:
apiVersion: v1
kind: Namespace
metadata:
  name: develop
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-world-deployment
  namespace: develop
  labels:
    app: hello-v1
spec:
  selector:
    matchLabels:
      app: hello-v1
  replicas: 2
  template:
    metadata:
      labels:
        app: hello-v1
    spec:
      containers:
      - name: hello-v1
        image: paulbouwer/hello-kubernetes:1.8
        ports:
        - containerPort: 8080
---
kind: Service
apiVersion: v1
metadata:
  name: hello-v1-svc
spec:
  type: ClusterIP
  selector:
    app: hello-v1
  ports:
  - protocol: TCP
    port: 80
    targetPort: 8080
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: dev-ingress
  namespace: develop
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: hello-v1-svc
          servicePort: 80
        path: /
List of my services
vitalii#Azure:~$ kubectl get svc -A
NAMESPACE NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
default hello-v1-svc ClusterIP 10.0.20.206 <none> 80/TCP 19m
default kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 83m
ingress-basic nginx-ingress-controller LoadBalancer 10.0.222.156 *.*.*.* 80:32068/TCP,443:30907/TCP 53m
ingress-basic nginx-ingress-default-backend ClusterIP 10.0.193.198 <none> 80/TCP 53m
kube-system dashboard-metrics-scraper ClusterIP 10.0.178.224 <none> 8000/TCP 83m
kube-system healthmodel-replicaset-service ClusterIP 10.0.199.235 <none> 25227/TCP 83m
kube-system kube-dns ClusterIP 10.0.0.10 <none> 53/UDP,53/TCP 83m
kube-system kubernetes-dashboard ClusterIP 10.0.115.184 <none> 443/TCP 83m
kube-system metrics-server ClusterIP 10.0.199.200 <none> 443/TCP 83m
Of course, for security reasons I hid the EXTERNAL-IP of the nginx-ingress-controller. That's the IP you should use to access the page.
More information and examples can be found in the Create an ingress controller in Azure Kubernetes Service (AKS) article.

Cannot access application deployed in Azure ACS Kubernetes Cluster using Azure CICD Pipeline

I am following this document.
https://github.com/Azure/DevOps-For-AI-Apps/blob/master/Tutorial.md
The CI/CD pipeline works fine, but I want to validate the application using the external IP of the service deployed to the Kubernetes cluster.
Deploy.yaml
apiVersion: v1
kind: Pod
metadata:
  name: imageclassificationapp
spec:
  containers:
  - name: model-api
    image: crrq51278013.azurecr.io/model-api:156
    ports:
    - containerPort: 88
  imagePullSecrets:
  - name: imageclassificationappdemosecret
Pod details
C:\Users\nareshkumar_h>kubectl describe pod imageclassificationapp
Name: imageclassificationapp
Namespace: default
Node: aks-nodepool1-97378755-2/10.240.0.5
Start Time: Mon, 05 Nov 2018 17:10:34 +0530
Labels: new-label=imageclassification-label
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"name":"imageclassificationapp","namespace":"default"},"spec":{"containers":[{"image":"crr...
Status: Running
IP: 10.244.1.87
Containers:
model-api:
Container ID: docker://db8687866d25eb4311175c5ccb5a7205379168c64cdfe716b09557fc98e2bd6a
Image: crrq51278013.azurecr.io/model-api:156
Image ID: docker-pullable://crrq51278013.azurecr.io/model-api#sha256:766673989a59fe0b1e849469f38acda96853a1d84e4b4d64ffe07810dd5d04e9
Port: 88/TCP
Host Port: 0/TCP
State: Running
Started: Mon, 05 Nov 2018 17:12:49 +0530
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-qhdjr (ro)
Conditions:
Type Status
Initialized True
Ready True
PodScheduled True
Volumes:
default-token-qhdjr:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-qhdjr
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events: <none>
Service details:
C:\Users\nareshkumar_h>kubectl describe service imageclassification-service
Name: imageclassification-service
Namespace: default
Labels: run=load-balancer-example
Annotations: <none>
Selector: run=load-balancer-example
Type: LoadBalancer
IP: 10.0.24.9
LoadBalancer Ingress: 52.163.191.28
Port: <unset> 88/TCP
TargetPort: 88/TCP
NodePort: <unset> 32672/TCP
Endpoints: 10.244.1.65:88,10.244.1.88:88,10.244.2.119:88
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
I am hitting the URL below, but the request times out:
http://52.163.191.28:88/
Can someone please help? Please let me know if you need any further details.
For your issue, I did a test and it worked on my side. The YAML file is here:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer
  ports:
  - port: 88
    targetPort: 80
  selector:
    app: nginx
And there are some points you should pay attention to.
You should make sure which port the service listens on inside the container. For example, in my test, the nginx service listens on port 80 by default.
The port that you want to expose on the node should be idle; in other words, not already used by another service.
When all the steps are done, you can access the public IP with the port you have exposed on the node.
Hope this helps!
We were able to solve this issue after reconfiguring the Kubernetes service with the right configuration and changing the deploy.yaml file as follows:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: imageclassificationapp
spec:
  selector:
    matchLabels:
      app: imageclassificationapp
  replicas: 1 # tells deployment to run 1 pod matching the template
  template:
    metadata:
      labels:
        app: imageclassificationapp
    spec:
      containers:
      - name: model-api
        image: crrq51278013.azurecr.io/model-api:205
        ports:
        - containerPort: 88
---
apiVersion: v1
kind: Service
metadata:
  name: imageclassificationapp
spec:
  type: LoadBalancer
  ports:
  - port: 85
    targetPort: 88
  selector:
    app: imageclassificationapp
We can close this thread now.

Pods do not resolve the domain names of a service through ingress

I have a problem: the pods in my minikube cluster are not able to see the service through its domain name.
To run my minikube I use the following commands (running on Windows 10):
minikube start --vm-driver hyperv;
minikube addons enable kube-dns;
minikube addons enable ingress;
This is my deployment.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    run: hello-world
  name: hello-world
  namespace: default
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      run: hello-world
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        run: hello-world
    spec:
      containers:
      - image: karthequian/helloworld:latest
        imagePullPolicy: Always
        name: hello-world
        ports:
        - containerPort: 80
          protocol: TCP
        resources: {}
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
This is the service.yaml:
apiVersion: v1
kind: Service
metadata:
  labels:
    run: hello-world
  name: hello-world
  namespace: default
  selfLink: /api/v1/namespaces/default/services/hello-world
spec:
  ports:
  - nodePort: 31595
    port: 80
    protocol: TCP
    targetPort: 80
  selector:
    run: hello-world
  sessionAffinity: None
  type: ExternalName
  externalName: minikube.local.com
status:
  loadBalancer: {}
This is my ingress.yaml:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: minikube-local-ingress
spec:
  rules:
  - host: minikube.local.com
    http:
      paths:
      - path: /
        backend:
          serviceName: hello-world
          servicePort: 80
So, if I go inside the hello-world pod and run curl minikube.local.com or nslookup minikube.local.com from /bin/bash, the name does not resolve.
How can I make sure that the pods can resolve the DNS name of the service?
I know I can specify hostAliases in the deployment definition, but is there an automatic way that will update the DNS of Kubernetes?
So, you want to expose your app on Minikube? I've just tried it using the default ClusterIP service type (essentially, removing the ExternalName stuff you had), and with this YAML file I can see your service on https://192.168.99.100, where the Ingress controller lives.
The service now looks like so:
apiVersion: v1
kind: Service
metadata:
  labels:
    run: hello-world
  name: hello-world
spec:
  ports:
  - port: 80
    targetPort: 80
  selector:
    run: hello-world
And the ingress is:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: minikube-local-ingress
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host:
    http:
      paths:
      - path: /
        backend:
          serviceName: hello-world
          servicePort: 80
Note: Within the cluster your service is now available via hello-world.default (that's the DNS name assigned by Kubernetes within the cluster), and from the outside you'd need to map, say, hello-world.local to 192.168.99.100 in the /etc/hosts file on your host machine.
Alternatively, if you change the Ingress resource to - host: hello-world.local then you can (from the host) reach your service using this FQDN like so: curl -H "Host: hello-world.local" 192.168.99.100.