Why am I unable to access an Azure AKS Service external IP with a browser?

apiVersion: apps/v1
kind: Deployment
metadata:
  name: farwell-helloworld-net3-webapi
spec:
  replicas: 1
  selector:
    matchLabels:
      app: farwell-helloworld-net3-webapi
  template:
    metadata:
      labels:
        app: farwell-helloworld-net3-webapi
    spec:
      nodeSelector:
        "kubernetes.io/os": linux
      containers:
      - name: farwell-helloworld-net3-webapi
        image: devcontainerregistry.azurecr.cn/farwell.helloworld.net3.webapi:latest
        env:
        - name: ALLOW_EMPTY_PASSWORD
          value: "yes"
        resources:
          requests:
            cpu: 100m
            memory: 128Mi
          limits:
            cpu: 250m
            memory: 256Mi
        ports:
        - containerPort: 8077
---
apiVersion: v1
kind: Service
metadata:
  name: farwell-helloworld-net3-webapi
spec:
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 8077
  selector:
    app: farwell-helloworld-net3-webapi
I use this command: kubectl get service farwell-helloworld-net3-webapi --watch
Then I get this:

NAME                             TYPE           CLUSTER-IP    EXTERNAL-IP      PORT(S)        AGE
farwell-helloworld-net3-webapi   LoadBalancer   10.0.116.13   52.131.xxx.xxx   80:30493/TCP   13m

I have already opened port 80 in Azure, but I cannot access http://52.131.xxx.xxx/WeatherForecast.
Could you help me, or suggest some steps to find the reason?

It turned out that after I changed the port number from 8077 to 80, it worked well.
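A quick way to confirm whether the container really listens on the targetPort is to bypass the load balancer and hit the pod directly. This is a generic diagnostic sketch (standard kubectl; the /WeatherForecast path comes from the question, everything else is an assumption):

# Does the Service have endpoints, i.e. does its selector match running pods?
kubectl get endpoints farwell-helloworld-net3-webapi

# Forward a local port straight to the Deployment's pod and test the app without the load balancer
kubectl port-forward deployment/farwell-helloworld-net3-webapi 8077:8077
curl http://localhost:8077/WeatherForecast

If the port-forwarded request also fails, the app is not listening on 8077 inside the container (ASP.NET Core images listen on port 80 by default unless ASPNETCORE_URLS overrides it), which matches the fix above.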

Related

Service deployed on Kubernetes is not accessible from the internet

I have deployed a service on Azure Kubernetes Service.
I get this when I run kubectl get service -n dev:

Myservice   LoadBalancer   10.0.115.231   20.xxx.xx.xx   8080:32475/TCP

But when I try to open my application with 20.xxx.xx.xx:8080 I am not able to access it.
What could be the issue? I am posting my deployment and service files below.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: Myservice
  namespace: dev
spec:
  replicas: 1
  selector:
    matchLabels:
      app: Myservice
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  minReadySeconds: 5
  template:
    metadata:
      labels:
        app: MyService
    spec:
      nodeSelector:
        "beta.kubernetes.io/os": linux
      containers:
      - name: MyService
        image: image_url:v1
        ports:
        - containerPort: 8080
        resources:
          requests:
            cpu: 250m
          limits:
            cpu: 500m
---
apiVersion: v1
kind: Service
metadata:
  name: myservice
  namespace: dev-lifestyle
spec:
  type: LoadBalancer
  ports:
  - port: 8080
  selector:
    app: myservice
My pods are in a running state. Here is the output of kubectl describe service for the Service:
Name: myservice
Namespace: dev-lifestyle
Labels: <none>
Annotations: <none>
Selector: myservice
Type: LoadBalancer
IP Families: <none>
IP: 10.0.xx.xx
IPs: <none>
LoadBalancer Ingress: 20.xxx.xx.xx
Port: <unset> 8080/TCP
TargetPort: 8080/TCP
NodePort: <unset> 32475/TCP
Endpoints: 172.xx.xx.xx:8080
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
Check that port 8080 is open in the network security groups in Azure.
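A way to narrow this down (an editorial sketch, not part of the original answer; the ClusterIP is taken from the output above and the test image is an assumption) is to call the Service from inside the cluster first. If the ClusterIP responds but the external IP does not, the problem is outside the cluster (NSG or load balancer); if neither responds, look at the Service selector and ports:

# Throwaway pod that curls the Service's ClusterIP on the service port
kubectl run curl-test --rm -it --restart=Never \
  --image=curlimages/curl -- curl -m 5 http://10.0.115.231:8080/

Also double-check the labels: the Service selector app: myservice must exactly match the pod template label (the Deployment above sets app: MyService), since Kubernetes labels are case sensitive and a Service with a mismatched selector ends up with no endpoints.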

Can't access external load balancer using AKS

I am not sure why external access is not working; it seems like I followed every tutorial I could find to a T.
In my final Docker image I do the following:
EXPOSE 80
EXPOSE 443
This is my deployment script, which deploys my app and a load balancer service. Everything seems to boot up OK. I can tell my .NET Core application is running on port 80 because I can get live logs using the Azure portal. The load balancer finds the pods from the deployment and shows the appropriate mappings, but I am still unable to access them externally: not from a browser, ping, or telnet.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: 2d69-deployment
  labels:
    app: 2d69-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: 2d69
  template:
    metadata:
      labels:
        app: 2d69
    spec:
      containers:
      - name: 2d69
        image: 2d69containerregistry.azurecr.io/2d69:latest
        ports:
        - containerPort: 80
        volumeMounts:
        - name: keyvault-cert
          mountPath: /etc/keyvault
          readOnly: true
      volumes:
      - name: keyvault-cert
        secret:
          secretName: keyvault-cert
---
kind: Service
apiVersion: v1
metadata:
  name: 2d69
  labels:
    app: 2d69
spec:
  selector:
    app: 2d69
  type: LoadBalancer
  ports:
  - port: 80
    targetPort: 80
deployment description:
kubectl -n 2d69 describe deployment 2d69
Name:                   2d69
Namespace:              2d69
CreationTimestamp:      Fri, 11 Dec 2020 13:23:24 -0500
Labels:                 app=2d69
Annotations:            deployment.kubernetes.io/revision: 9
Selector:               app=okrx2d69
Replicas:               2 desired | 2 updated | 2 total | 2 available | 0 unavailable
StrategyType:           RollingUpdate
MinReadySeconds:        0
RollingUpdateStrategy:  25% max unavailable, 25% max surge
Pod Template:
  Labels:  app=okrx2d69
  Containers:
   2d69:
    Image:        2d69containerregistry.azurecr.io/2d69:5520
    Port:         80/TCP
    Host Port:    0/TCP
    Environment:  <none>
    Mounts:
      /etc/keyvault from keyvault-cert (ro)
  Volumes:
   keyvault-cert:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  keyvault-cert
    Optional:    false
Conditions:
  Type           Status  Reason
  ----           ------  ------
  Available      True    MinimumReplicasAvailable
  Progressing    True    NewReplicaSetAvailable
OldReplicaSets:  <none>
NewReplicaSet:   2d69-5dbcff8b94 (2/2 replicas created)
Events:          <none>
service description:
kubectl -n 2d69 describe service 2d69
Name: 2d69
Namespace: 2d69
Labels: app=2d69
Selector: app=2d69
Type: LoadBalancer
IP: ***.***.14.208
LoadBalancer Ingress: ***.***.***.***
Port: https 443/TCP
TargetPort: 443/TCP
NodePort: https 32112/TCP
Endpoints: ***.***.9.103:443,***.***.9.47:443
Port: http 80/TCP
TargetPort: 80/TCP
NodePort: http 31408/TCP
Endpoints: ***.***.9.103:80,***.***.9.47:80
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
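This question has no posted answer in this extract, but one thing worth checking (an editorial note, not from the original thread): the posted Service selects app: 2d69, while the deployment description above shows the pod template labelled app=okrx2d69, so the manifests running on the cluster may not match what is posted. A generic way to verify where traffic can actually go (standard kubectl; the local port is an assumption):

# Compare pod labels with the Service selector, and confirm the Service has endpoints
kubectl -n 2d69 get pods --show-labels
kubectl -n 2d69 get endpoints 2d69

# Bypass the Azure load balancer and test the Service from your machine
kubectl -n 2d69 port-forward svc/2d69 8080:80
curl http://localhost:8080/

Also note that ping against a load balancer's public IP is not a reliable test, since ICMP is often not forwarded; curl or telnet against the exposed port says more.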

Kubernetes CrashLoopBackOff with Minikube

I am learning Kubernetes from a guide and am trying to deploy a MongoDB Pod with 1 replica. This is the deployment config file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodb-deployment
  labels:
    app: mongodb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mongodb
  template:
    metadata:
      labels:
        app: mongodb
    spec:
      containers:
      - name: mongodb
        image: mongo
        ports:
        - containerPort: 27017
        env:
        - name: MONGO_INITDB_ROOT_USERNAME
          valueFrom:
            secretKeyRef:
              name: mongodb-secret
              key: mongo-root-username
        - name: MONGO_INITDB_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mongodb-secret
              key: mongo-root-password
---
apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
spec:
  selector:
    app: mongodb
  ports:
  - protocol: TCP
    port: 27017
    targetPort: 27017
I also tried to deploy a Mongo-Express Pod with almost the same config file, but I keep getting CrashLoopBackOff for both Pods. From the little understanding I have, this is caused by the container failing and restarting in a cycle. I went through the events with kubectl get events and I see that a warning with the message "Back-off restarting failed container" keeps occurring. I also did a little digging around and came across a solution that says to add
command: ['sleep']
args: ['infinity']
That fixed the CrashLoopBackOff issue, but when I try to get the logs for the Pod, nothing is displayed in the terminal. I need some help and, if possible, an explanation of how the command and args fix it, and how I can stop this crash from happening to my Pods. Thank you very much.
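A note on that workaround (editorial, not from the original thread): command: ['sleep'] with args: ['infinity'] overrides the image's entrypoint, so the container just sleeps instead of running mongod. Nothing crashes any more, but nothing is logged either, because MongoDB never actually starts. The usual way to see why the real container was crashing is to look at its previous run (standard kubectl; the pod name is a placeholder):

# Last state, exit code and restart reason of the crashing container
kubectl describe pod <mongodb-pod-name>

# Logs from the previous, crashed container instance
kubectl logs <mongodb-pod-name> --previous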
My advice is to deploy MongoDB as a StatefulSet on Kubernetes.
In a stateful application, the N replicas of master nodes manage several worker nodes in the cluster, so if any master node goes down, the other ordinal instances remain active to execute the workload. Each instance in a StatefulSet is identified by a unique, stable ordinal number.
See more: mongodb-sts, mongodb-on-kubernetes.
Also use a headless Service to manage the DNS domain of the Pods. With a headless Service there is no single Service IP and no load balancing through kube-proxy; clients reach the Pods directly, so the cluster IP is set to None.
In your case:
apiVersion: v1
kind: Service
metadata:
  name: mongodb
spec:
  clusterIP: None
  selector:
    app: mongodb
  ports:
  - port: 27017
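With clusterIP: None, a DNS lookup of the Service name returns the individual Pod IPs instead of one virtual IP, which is how StatefulSet members later address each other by stable DNS name. A quick way to see this from inside the cluster (an illustrative sketch; the busybox image is used only for the lookup):

kubectl run dns-test --rm -it --restart=Never --image=busybox:1.28 -- nslookup mongodb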
The error:
Also uncaught exception: Error: couldn't add user: Error preflighting normalization: U_STRINGPREP_PROHIBITED_ERROR _getErrorWithCode#src/mongo/shell/utils.js:25:13
indicates that the secret may be missing. Take a look: mongodb-initializating.
In your case the secret should look similar to this:
apiVersion: v1
kind: Secret
metadata:
  name: mongodb-secret
type: Opaque
data:
  mongo-root-username: YWRtaW4=
  mongo-root-password: MWYyZDFlMmU2N2Rm
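The values under data must be base64 encoded (YWRtaW4= decodes to admin). A sketch of how to generate them, or to create the same secret directly with kubectl (the credentials here are placeholders; pick your own):

echo -n 'admin' | base64
kubectl create secret generic mongodb-secret \
  --from-literal=mongo-root-username=admin \
  --from-literal=mongo-root-password='1f2d1e2e67df'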
Remember to also configure a volume for your pods; follow the tutorials linked above.
Deploy MongoDB as a StatefulSet, not as a Deployment.
Example:
apiVersion: v1
kind: Service
metadata:
  name: mongodb-service
  labels:
    name: mongo
spec:
  ports:
  - port: 27017
    targetPort: 27017
  clusterIP: None
  selector:
    role: mongo
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: mongod
spec:
  serviceName: mongodb-service
  replicas: 3
  template:
    metadata:
      labels:
        role: mongo
        environment: test
        replicaset: MainRepSet
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: replicaset
                  operator: In
                  values:
                  - MainRepSet
              topologyKey: kubernetes.io/hostname
      terminationGracePeriodSeconds: 10
      volumes:
      - name: secrets-volume
        secret:
          secretName: shared-bootstrap-data
          defaultMode: 256
      containers:
      - name: mongod-container
        #image: pkdone/mongo-ent:3.4
        image: mongo
        command:
        - "numactl"
        - "--interleave=all"
        - "mongod"
        - "--wiredTigerCacheSizeGB"
        - "0.1"
        - "--bind_ip"
        - "0.0.0.0"
        - "--replSet"
        - "MainRepSet"
        - "--auth"
        - "--clusterAuthMode"
        - "keyFile"
        - "--keyFile"
        - "/etc/secrets-volume/internal-auth-mongodb-keyfile"
        - "--setParameter"
        - "authenticationMechanisms=SCRAM-SHA-1"
        resources:
          requests:
            cpu: 0.2
            memory: 200Mi
        ports:
        - containerPort: 27017
        volumeMounts:
        - name: secrets-volume
          readOnly: true
          mountPath: /etc/secrets-volume
        - name: mongodb-persistent-storage-claim
          mountPath: /data/db
  volumeClaimTemplates:
  - metadata:
      name: mongodb-persistent-storage-claim
      annotations:
        volume.beta.kubernetes.io/storage-class: "standard"
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 1Gi
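With a StatefulSet the pods come up in order with stable names (mongod-0, mongod-1, mongod-2), each getting its own PersistentVolumeClaim from volumeClaimTemplates and a stable DNS name through the headless mongodb-service. A quick way to watch the rollout (standard kubectl; the label comes from the manifest above):

kubectl get pods -l role=mongo -o wide --watch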

How to get an external IP for a VM running Kubernetes Services

I have hosted Docker images in an Azure VM and I'm trying to access the Service from outside the VM. This is not working because no external IP is generated for the Service.
After building the Docker image, I applied a YAML file to create the Deployment and Service. My YAML file looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: planservice-deployment
  labels:
    app: planservice-deploy
spec:
  selector:
    matchLabels:
      run: planservice-deploy
  replicas: 2
  template:
    metadata:
      labels:
        run: planservice-deploy
    spec:
      containers:
      - name: planservice-deploy
        image: planserviceimage
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 8086
---
apiVersion: v1
kind: Service
metadata:
  name: planservice-service
  labels:
    app: planservice-deploy
spec:
  type: NodePort
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8086
  selector:
    run: planservice-deploy
---
After that, I ran the following command to look at the running services:
kubectl get pods --output=wide
This command returned all the running services with an external IP column, but every one of them showed a blank external IP.
How do I set an external IP for these services so that I can access my web services from outside the VM?
You need to change the Service type to LoadBalancer:
apiVersion: v1
kind: Service
metadata:
  name: planservice-service
  labels:
    app: planservice-deploy
spec:
  type: LoadBalancer
  ports:
  - port: 80
    protocol: TCP
    targetPort: 8086
  selector:
    run: planservice-deploy
https://kubernetes.io/docs/concepts/services-networking/service/#loadbalancer
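After switching the type, the cloud provider has to provision a public IP, which can take a minute or two. A sketch of how to verify (the file name is a placeholder; the service name comes from the manifest above):

kubectl apply -f planservice-service.yaml
kubectl get service planservice-service --watch
# once EXTERNAL-IP is populated:
curl http://<external-ip>/

Note that type: LoadBalancer only receives an external IP when the cluster runs on a cloud provider such as AKS. On a self-managed VM it will stay <pending>; in that case keep type: NodePort and reach the service on the VM's public IP at the node port shown by kubectl get service, making sure that port is open in the VM's network security group.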

Cannot access application deployed in Azure ACS Kubernetes Cluster using Azure CICD Pipeline

I am following this document.
https://github.com/Azure/DevOps-For-AI-Apps/blob/master/Tutorial.md
The CI/CD pipeline works fine, but I want to validate the application using the external IP of the service that is deployed to the Kubernetes cluster.
Deploy.yaml
apiVersion: v1
kind: Pod
metadata:
  name: imageclassificationapp
spec:
  containers:
  - name: model-api
    image: crrq51278013.azurecr.io/model-api:156
    ports:
    - containerPort: 88
  imagePullSecrets:
  - name: imageclassificationappdemosecret
Pod details
C:\Users\nareshkumar_h>kubectl describe pod imageclassificationapp
Name:               imageclassificationapp
Namespace:          default
Node:               aks-nodepool1-97378755-2/10.240.0.5
Start Time:         Mon, 05 Nov 2018 17:10:34 +0530
Labels:             new-label=imageclassification-label
Annotations:        kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"name":"imageclassificationapp","namespace":"default"},"spec":{"containers":[{"image":"crr...
Status:             Running
IP:                 10.244.1.87
Containers:
  model-api:
    Container ID:   docker://db8687866d25eb4311175c5ccb5a7205379168c64cdfe716b09557fc98e2bd6a
    Image:          crrq51278013.azurecr.io/model-api:156
    Image ID:       docker-pullable://crrq51278013.azurecr.io/model-api#sha256:766673989a59fe0b1e849469f38acda96853a1d84e4b4d64ffe07810dd5d04e9
    Port:           88/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Mon, 05 Nov 2018 17:12:49 +0530
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-qhdjr (ro)
Conditions:
  Type           Status
  Initialized    True
  Ready          True
  PodScheduled   True
Volumes:
  default-token-qhdjr:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-qhdjr
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                 node.kubernetes.io/unreachable:NoExecute for 300s
Events:          <none>
Service details:
C:\Users\nareshkumar_h>kubectl describe service imageclassification-service
Name: imageclassification-service
Namespace: default
Labels: run=load-balancer-example
Annotations: <none>
Selector: run=load-balancer-example
Type: LoadBalancer
IP: 10.0.24.9
LoadBalancer Ingress: 52.163.191.28
Port: <unset> 88/TCP
TargetPort: 88/TCP
NodePort: <unset> 32672/TCP
Endpoints: 10.244.1.65:88,10.244.1.88:88,10.244.2.119:88
Session Affinity: None
External Traffic Policy: Cluster
Events: <none>
I am hitting the URL below but the request times out.
http://52.163.191.28:88/
Can someone please help? Please let me know if you need any further details.
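One thing that stands out in the details above (an editorial observation, not from the original thread): the Service selects run=load-balancer-example, but the described pod is labelled new-label=imageclassification-label, and its IP (10.244.1.87) is not among the listed endpoints, so that pod is not behind the Service at all. A way to check which pods the Service actually routes to (standard kubectl):

kubectl get pods -l run=load-balancer-example -o wide
kubectl get endpoints imageclassification-service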
For your issue, I did a test and it worked on my side. The YAML file is here:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: LoadBalancer
  ports:
  - port: 88
    targetPort: 80
  selector:
    app: nginx
And there are some points you should pay attention to.
You should make sure which port the service listens on inside the container. For example, in my test, the nginx container listens on port 80 by default.
The port that you want to expose on the node should be free, in other words not already in use by another service.
When all the steps are done, you can access the public IP on the port you exposed.
Hope this will help you!
We were able to solve this issue after reconfiguring the Kubernetes Service with the right configuration and changing the deploy.yaml file as follows:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: imageclassificationapp
spec:
  selector:
    matchLabels:
      app: imageclassificationapp
  replicas: 1 # tells the deployment to run 1 pod matching the template
  template:
    metadata:
      labels:
        app: imageclassificationapp
    spec:
      containers:
      - name: model-api
        image: crrq51278013.azurecr.io/model-api:205
        ports:
        - containerPort: 88
---
apiVersion: v1
kind: Service
metadata:
  name: imageclassificationapp
spec:
  type: LoadBalancer
  ports:
  - port: 85
    targetPort: 88
  selector:
    app: imageclassificationapp
We can close this thread now.
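With this configuration the Service listens externally on port 85 and forwards to container port 88, so the application should be tested at http://<external-ip>:85/ rather than :88. A quick check (the service name comes from the manifest above; the IP is whatever kubectl reports):

kubectl get service imageclassificationapp
curl http://<external-ip>:85/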
