Unable to connect to MongoDB: MongoNetworkError connecting to Kubernetes MongoDB pod with mongoose - node.js

I am trying to connect to MongoDB in a microservice-based project using Node.js, Kubernetes, Ingress, and Skaffold.
I got two errors when running skaffold dev:
MongoNetworkError: failed to connect to server [auth-mongo-srv:21017] on first connect [MongoNetworkTimeoutError: connection timed out.
Mongoose default connection error: MongoNetworkError: MongoNetworkError: failed to connect to server [auth-mongo-srv:21017] on first connect [MongoNetworkTimeoutError: connection timed out at connectionFailureError.
My auth-mongo-deploy.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-mongo-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth-mongo
  template:
    metadata:
      labels:
        app: auth-mongo
    spec:
      containers:
        - name: auth-mongo
          image: mongo
---
apiVersion: v1
kind: Service
metadata:
  name: auth-mongo-srv
spec:
  selector:
    app: auth-mongo
  ports:
    - name: db
      protocol: TCP
      port: 27017
      targetPort: 27017
My server.ts
const dbURI: string = "mongodb://auth-mongo-srv:21017/auth"
logger.debug(dbURI)
logger.info('connecting to database...')
// changing {} --> options change nothing!
mongoose.connect(dbURI, {}).then(() => {
  logger.info('Mongoose connection done')
  app.listen(APP_PORT, () => {
    logger.info(`server listening on ${APP_PORT}`)
  })
  console.clear();
}).catch((e) => {
  logger.info('Mongoose connection error')
  logger.error(e)
})
Additional information:
1. pod is created:
rhythm@vivobook:~/Documents/TicketResale/server$ kubectl get pods
NAME                                 READY   STATUS    RESTARTS   AGE
auth-deploy-595c6cbf6d-9wzt9         1/1     Running   0          5m53s
auth-mongo-deploy-6b96b7798c-9726w   1/1     Running   0          5m53s
tickets-deploy-675b7b9b58-f5bzs      1/1     Running   0          5m53s
2. pod description:
kubectl describe pod auth-mongo-deploy-6b96b7798c-9726w
Name: auth-mongo-deploy-694b67f76d-ksw82
Namespace: default
Priority: 0
Node: minikube/192.168.49.2
Start Time: Tue, 21 Jun 2022 14:11:47 +0530
Labels: app=auth-mongo
pod-template-hash=694b67f76d
skaffold.dev/run-id=2f5d2142-0f1a-4fa4-b641-3f301f10e65a
Annotations: <none>
Status: Running
IP: 172.17.0.2
IPs:
IP: 172.17.0.2
Controlled By: ReplicaSet/auth-mongo-deploy-694b67f76d
Containers:
auth-mongo:
Container ID: docker://fa43cd7e03ac32ed63c82419e5f9722deffd2f93206b6a0f2b25ae9be8f6cedf
Image: mongo
Image ID: docker-pullable://mongo@sha256:37e84d3dd30cdfb5472ec42b8a6b4dc6ca7cacd91ebcfa0410a54528bbc5fa6d
Port: <none>
Host Port: <none>
State: Running
Started: Tue, 21 Jun 2022 14:11:52 +0530
Ready: True
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zw7s9 (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-zw7s9:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 79s default-scheduler Successfully assigned default/auth-mongo-deploy-694b67f76d-ksw82 to minikube
Normal Pulling 79s kubelet Pulling image "mongo"
Normal Pulled 75s kubelet Successfully pulled image "mongo" in 4.429126953s
Normal Created 75s kubelet Created container auth-mongo
Normal Started 75s kubelet Started container auth-mongo
I have also tried:
kubectl describe service auth-mongo-srv
Name: auth-mongo-srv
Namespace: default
Labels: skaffold.dev/run-id=2f5d2142-0f1a-4fa4-b641-3f301f10e65a
Annotations: <none>
Selector: app=auth-mongo
Type: ClusterIP
IP Family Policy: SingleStack
IP Families: IPv4
IP: 10.100.42.183
IPs: 10.100.42.183
Port: db 27017/TCP
TargetPort: 27017/TCP
Endpoints: 172.17.0.2:27017
Session Affinity: None
Events: <none>
And then I changed:
const dbURI: string = "mongodb://auth-mongo-srv:21017/auth"
to
const dbURI: string = "mongodb://172.17.0.2:27017:21017/auth"
which generated a different error, MongooseServerSelectionError.

The root cause was a port typo: the service listens on 27017, but the URI used 21017 (and the second attempt stacked two ports onto the IP, which is not a valid URI). Using the service name with the correct port fixes it:

const dbURI: string = "mongodb://auth-mongo-srv:27017/auth"
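A hedged sketch of the full connect call with the corrected URI (the serverSelectionTimeoutMS option is illustrative, not required; it just makes a bad host/port fail fast instead of hanging):

const dbURI: string = "mongodb://auth-mongo-srv:27017/auth"

mongoose.connect(dbURI, {
  serverSelectionTimeoutMS: 5000, // illustrative: fail fast on a wrong host/port
}).then(() => {
  logger.info('Mongoose connection done')
}).catch((e) => {
  logger.error(e)
})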

Related

Kubernetes Ingress returns "Cannot get /"

I'm trying to deploy a NodeRED pod on my cluster, and have created a service and ingress for it so it can be accessible the same way I access the rest of my cluster, under the same domain. However, when I try to access it via host-name.com/nodered, I receive Cannot GET /nodered.
Following are the templates used and describes of all the involved components.
apiVersion: v1
kind: Service
metadata:
  name: nodered-app-service
  namespace: {{ kubernetes_namespace_name }}
spec:
  ports:
    - port: 1880
      targetPort: 1880
  selector:
    app: nodered-service-pod
I have also tried with port: 80 for the service, to no avail.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nodered-service-deployment
  namespace: {{ kubernetes_namespace_name }}
  labels:
    app: nodered-service-deployment
    name: nodered-service-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nodered-service-pod
  template:
    metadata:
      labels:
        app: nodered-service-pod
        target: gateway
        buildVersion: "{{ kubernetes_build_number }}"
    spec:
      terminationGracePeriodSeconds: 10
      serviceAccountName: nodered-service-account
      automountServiceAccountToken: false
      securityContext:
        runAsUser: 1000
        runAsGroup: 1000
        fsGroup: 1000
      containers:
        - name: nodered-service-statefulset
          image: nodered/node-red:{{ nodered_service_version }}
          imagePullPolicy: {{ kubernetes_image_pull_policy }}
          readinessProbe:
            httpGet:
              path: /
              port: 1880
            initialDelaySeconds: 30
            timeoutSeconds: 1
            periodSeconds: 10
            failureThreshold: 3
          livenessProbe:
            httpGet:
              path: /
              port: 1880
            initialDelaySeconds: 30
            timeoutSeconds: 1
            periodSeconds: 10
            failureThreshold: 3
          securityContext:
            allowPrivilegeEscalation: false
          resources:
            limits:
              memory: "2048M"
              cpu: "1000m"
            requests:
              memory: "500M"
              cpu: "100m"
          ports:
            - containerPort: 1880
              name: port-name
          envFrom:
            - configMapRef:
                name: nodered-service-configmap
          env:
            - name: BUILD_TIME
              value: "{{ kubernetes_build_time }}"
The target: gateway refers to the ingress controller
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nodered-ingress
  namespace: {{ kubernetes_namespace_name }}
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
    - host: host-name.com
      http:
        paths:
          - path: /nodered(/|$)(.*)
            backend:
              serviceName: nodered-app-service
              servicePort: 1880
The following is what my describes show:
Name: nodered-app-service
Namespace: nodered
Labels: <none>
Annotations: <none>
Selector: app=nodered-service-pod
Type: ClusterIP
IP: 55.3.145.249
Port: <unset> 1880/TCP
TargetPort: port-name/TCP
Endpoints: 10.7.0.79:1880
Session Affinity: None
Events: <none>
Name: nodered-service-statefulset-6c678b7774-clx48
Namespace: nodered
Priority: 0
Node: aks-default-40441371-vmss000007/10.7.0.66
Start Time: Thu, 26 Aug 2021 14:23:33 +0200
Labels: app=nodered-service-pod
buildVersion=latest
pod-template-hash=6c678b7774
target=gateway
Annotations: <none>
Status: Running
IP: 10.7.0.79
IPs:
IP: 10.7.0.79
Controlled By: ReplicaSet/nodered-service-statefulset-6c678b7774
Containers:
nodered-service-statefulset:
Container ID: docker://a6f8c9d010feaee352bf219f85205222fa7070c72440c885b9cd52215c4c1042
Image: nodered/node-red:latest-12
Image ID: docker-pullable://nodered/node-red@sha256:f02ccb26aaca2b3ee9c8a452d9516c9546509690523627a33909af9cf1e93d1e
Port: 1880/TCP
Host Port: 0/TCP
State: Running
Started: Thu, 26 Aug 2021 14:23:36 +0200
Ready: True
Restart Count: 0
Limits:
cpu: 1
memory: 2048M
Requests:
cpu: 100m
memory: 500M
Liveness: http-get http://:1880/ delay=30s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:1880/ delay=30s timeout=1s period=10s #success=1 #failure=3
Environment Variables from:
nodered-service-configmap ConfigMap Optional: false
Environment:
BUILD_TIME: 2021-08-26T12:23:06.219818+0000
Mounts: <none>
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes: <none>
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
PS C:\Users\hid5tim> kubectl describe ingress -n nodered
Name: nodered-ingress
Namespace: nodered
Address: 10.7.31.254
Default backend: default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
Host Path Backends
---- ---- --------
host-name.com
/nodered(/|$)(.*) nodered-app-service:1880 (10.7.0.79:1880)
Annotations: kubernetes.io/ingress.class: nginx
nginx.ingress.kubernetes.io/ssl-redirect: false
Events: <none>
The logs from the ingress controller are below. I've been on this issue for the last 24 hours or so and it's tearing me apart; the setup looks identical to other deployments I have that are functional. Could this be something wrong with the nodered image? I have checked, and it does expose 1880.
194.xx.xxx.x - [194.xx.xxx.x] - - [26/Aug/2021:10:40:12 +0000] "GET /nodered HTTP/1.1" 404 146 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.159 Safari/537.36" 871 0.008 [nodered-nodered-app-service-80] 10.7.0.68:1880 146 0.008 404
As the comment by Andrew points out, I was using the rewrite annotation wrong; once I removed the (/|$)(.*) from the path and specified the path type as Prefix, it worked.
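For reference, a minimal sketch of what the corrected manifest could look like on the networking.k8s.io/v1 API (extensions/v1beta1 has since been removed; host and service names are taken from the question):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nodered-ingress
  namespace: {{ kubernetes_namespace_name }}
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
    - host: host-name.com
      http:
        paths:
          - path: /nodered
            pathType: Prefix
            backend:
              service:
                name: nodered-app-service
                port:
                  number: 1880

Note that without a rewrite, Node-RED itself has to be configured to serve under /nodered (its httpRoot setting), which is consistent with the fix described above.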

Not able to pull container images from ACR to Minikube VM

I have created a virtual machine on Azure and I have installed minikube on the VM with VirtualBox. I have created kubectl secret using the instructions in the following link:
https://learn.microsoft.com/en-us/azure/container-registry/container-registry-auth-kubernetes
I am able to initiate a pull request from ACR on the Azure portal, but the container has been stuck in ContainerCreating for a very long time.
Following is the description of the pod in question:
Name: loginfunctionality-84b59c4464-rr5ss
Namespace: default
Priority: 0
Node: minikube/192.168.99.101
Start Time: Mon, 29 Jun 2020 11:42:01 +0000
Labels: io.kompose.service=loginfunctionality
pod-template-hash=84b59c4464
Annotations: kompose.cmd: C:\ProgramData\chocolatey\lib\kubernetes-kompose\tools\kompose.exe convert
kompose.version: 1.21.0 (992df58d8)
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/loginfunctionality-84b59c4464
Containers:
loginfunctionality:
Container ID:
Image: healthcareakscicdacr.azurecr.io/loginfunctionality:latest
Image ID:
Port: 80/TCP
Host Port: 0/TCP
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment:
ASPNETCORE_ENVIRONMENT: Development
RedisCacheConnection: rediscache:6379
WebApiBaseUrl: http://20.185.77.158:5018/api/
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-f4wfq (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-f4wfq:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-f4wfq
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 5m3s default-scheduler Successfully assigned default/loginfunctionality-84b59c4464-rr5ss to mini
Normal Pulling 4m35s kubelet, minikube Pulling image "healthcareakscicdacr.azurecr.io/loginfunctionality:latest"
Repeated kubectl describe pod loginfunctionality-84b59c4464-rr5ss runs over the following minutes (at roughly 7m, 11m, and 16m after scheduling) show identical output: the pod remains Pending in ContainerCreating, with the "Pulling image" event still the last one.
Please let me know where I am going wrong.
Restarting the VM resolved the issue.
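Before resorting to a restart, a few hedged diagnostics can narrow down whether the pull is hanging or failing (standard kubectl/minikube commands; the pod and image names are taken from the question):

# Watch the pod's events for ErrImagePull / ImagePullBackOff vs. a pull that simply hangs
kubectl describe pod loginfunctionality-84b59c4464-rr5ss

# Try the pull manually from inside the minikube VM to isolate network/auth issues
minikube ssh
docker pull healthcareakscicdacr.azurecr.io/loginfunctionality:latest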

Nginx-Ingress for TCP entrypoint not working

I am using Nginx as my Kubernetes ingress controller. After following a simple example I was able to set it up.
Now I am trying to set up a TCP entrypoint for logstash with the following config.
Logstash:
apiVersion: v1
kind: Secret
metadata:
  name: logstash-secret
  namespace: kube-logging
type: Opaque
data:
  tls.crt: "<base64 encoded>" # For logstash.test.domain.com
  tls.key: "<base64 encoded>" # For logstash.test.domain.com
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: logstash-config
  namespace: kube-logging
  labels:
    app: logstash
data:
  syslog.conf: |-
    input {
      tcp {
        port => 5050
        type => syslog
      }
    }
    filter {
      grok {
        match => {"message" => "%{SYSLOGLINE}"}
      }
    }
    output {
      elasticsearch {
        hosts => ["http://elasticsearch:9200"] # elasticsearch running in same namespace (kube-logging)
        index => "syslog-%{+YYYY.MM.dd}"
      }
      stdout { codec => rubydebug }
    }
---
kind: Deployment
apiVersion: apps/v1
metadata:
  name: logstash
  namespace: kube-logging
  labels:
    app: logstash
spec:
  replicas: 1
  selector:
    matchLabels:
      app: logstash
  strategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: logstash
    spec:
      # serviceAccountName: logstash
      containers:
        - name: logstash
          image: docker.elastic.co/logstash/logstash:7.2.1
          imagePullPolicy: Always
          env:
            - name: ELASTICSEARCH_HOST
              value: elasticsearch
            - name: ELASTICSEARCH_PORT
              value: "9200"
            - name: ELASTICSEARCH_USERNAME
              value: elastic
            - name: ELASTICSEARCH_PASSWORD
              value: changeme
            - name: ELASTIC_CLOUD_ID
              value:
            - name: ELASTIC_CLOUD_AUTH
              value:
          ports:
            - name: logstash
              containerPort: 5050
              protocol: TCP
          securityContext:
            runAsUser: 0
          volumeMounts:
            - name: config
              mountPath: /usr/share/logstash/pipeline/syslog.conf
              readOnly: true
              subPath: syslog.conf
      volumes:
        - name: config
          configMap:
            defaultMode: 0600
            name: logstash-config
---
kind: Service
apiVersion: v1
metadata:
  name: logstash
  namespace: kube-logging
  labels:
    app: logstash
spec:
  ports:
    - name: tcp-port
      protocol: TCP
      port: 5050
      targetPort: 5050
  selector:
    app: logstash
Nginx-Ingress:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-ingress
  namespace: kube-ingress
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-ingress
  template:
    metadata:
      labels:
        app: nginx-ingress
    spec:
      serviceAccountName: nginx-ingress
      containers:
        - image: nginx/nginx-ingress:1.5.7
          imagePullPolicy: Always
          name: nginx-ingress
          ports:
            - name: http
              containerPort: 80
            - name: https
              containerPort: 443
            - name: tcp5050
              containerPort: 5050
          securityContext:
            allowPrivilegeEscalation: true
            runAsUser: 101 # nginx
            capabilities:
              drop:
                - ALL
              add:
                - NET_BIND_SERVICE
          env:
            - name: POD_NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
          args:
            - -nginx-configmaps=$(POD_NAMESPACE)/nginx-config
            - -default-server-tls-secret=$(POD_NAMESPACE)/default-server-secret
            - -v=3 # Enables extensive logging. Useful for troubleshooting.
            # - -report-ingress-status
            # - -external-service=nginx-ingress
            # - -enable-leader-election
            # - -enable-prometheus-metrics
            # - -enable-custom-resources
LoadBalancer:
apiVersion: v1
kind: Service
metadata:
  name: nginx-ingress-external
  namespace: kube-ingress
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
    service.beta.kubernetes.io/aws-load-balancer-proxy-protocol: "*"
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
spec:
  type: LoadBalancer
  ports:
    - name: http
      protocol: TCP
      port: 80
      targetPort: 80
    - name: https
      protocol: TCP
      port: 443
      targetPort: 443
    - name: tcp5050
      protocol: TCP
      port: 5050
      targetPort: 5050
  selector:
    app: nginx-ingress
Ingress:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: logstash-ingress
  namespace: kube-logging
spec:
  tls:
    - hosts:
        - logstash.test.domain.com
      secretName: logstash-secret # This has a self-signed cert for logstash.test.domain.com
  rules:
    - host: logstash.test.domain.com
      http:
        paths:
          - path: /
            backend:
              serviceName: logstash
              servicePort: 5050
With this config it shows the following:
NAME               HOSTS                      ADDRESS   PORTS     AGE
logstash-ingress   logstash.test.domain.com             80, 443   79m
Why is port 5050 not listed here?
I just want to expose the logstash service through a public endpoint. When I use
openssl s_client -connect logstash.kube-logging.svc.cluster.local:5050
within the cluster, I get:
$ openssl s_client -connect logstash.kube-logging.svc.cluster.local:5050
CONNECTED(00000005)
But from outside the cluster, with openssl s_client -connect logstash.test.domain.com:5050, I get:
$ openssl s_client -connect logstash.test.domain.com:5050
connect: Connection refused
connect:errno=61
and
$ openssl s_client -cert logstash_test_domain_com.crt -key logstash_test_domain_com.key -servername logstash.test.domain.com:5050
connect: Connection refused
connect:errno=61
What do I need to do to get this working?
It seems like you are a bit confused, so let's start by sorting out your services and ingress.
First, there are three main types of Services in Kubernetes:
- ClusterIP, which exposes your deployment internally in k8s.
- NodePort, which is the same as ClusterIP but also exposes your deployment through every node's external IP on a port in the ~30000-32767 range.
- LoadBalancer, which is the same as ClusterIP but also exposes your app on a specific external IP address assigned by the cloud provider's load balancer.
The NodePort service you created will make logstash accessible through every node's external IP on a random port in the 30000-32767 range; find the port by running kubectl get services | grep nginx-ingress and checking the last column. To get your nodes' external IP addresses, run kubectl get node -o wide.
The LoadBalancer service you created will make logstash accessible through an external IP address on port 5050. To find that IP, run kubectl get services | grep nginx-ingress-external.
Finally, you have also created an Ingress resource to reach logstash. For it you have defined a host which, given the TLS section, will be accessible on port 443; inbound traffic there is redirected to logstash's ClusterIP service on port 5050.
So there you go: you have three ways to reach logstash. I would go for the LoadBalancer, given that it uses a specific port.
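If you specifically want raw TCP to flow through an ingress controller, note that the Ingress resource API itself only describes HTTP/HTTPS routing, which is why 5050 never shows up in the ingress listing. As a hedged illustration: the community kubernetes/ingress-nginx controller (a different project from the nginx/nginx-ingress image used above) exposes TCP via a ConfigMap referenced by its --tcp-services-configmap flag, roughly:

apiVersion: v1
kind: ConfigMap
metadata:
  name: tcp-services
  namespace: kube-ingress
data:
  # external port -> namespace/service:port
  "5050": "kube-logging/logstash:5050"

For the setup in the question, though, the LoadBalancer service already forwards 5050 directly, so that is the simplest path.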

grpc in k8s cannot resolve service dns name

I am using Node.js, trying to run gRPC in a Kubernetes cluster; locally, without Kubernetes, it works fine. The server side is listening on '0.0.0.0:80' and the client tries to connect via http://recommended-upgrades-qa-int. In Kubernetes I get the following error:
ERROR failed to get via grpc the getRecommended Error: 14 UNAVAILABLE: Connect Failed endpoint:http://<K8S_SERVICE_NAME>
ERROR: Recommendations fetch error: Error: 14 UNAVAILABLE: Connect Failed severity=error, message=failed to get via grpc the getRecommended Error: 14 UNAVAILABLE: Connect Failed endpoint:http://<K8S_SERVICE_NAME>
server side:
const connectionHost = this.listenHost + ':' + this.listenPort;
server.bind(connectionHost, grpc.ServerCredentials.createInsecure());
logger.info(`Server running at ${connectionHost}`);
server.start();
client side:
try {
  RecommendedService = grpc.load(__dirname + '/../../node_modules/@zerto/lib-service-clients/Output/sources/recommendedClient.proto').RecommendedService;
} catch (error) {
  console.log(error);
}
this.client = RecommendedService && new RecommendedService(grpcAddress, grpc.credentials.createInsecure());
Manifest files:
server side
apiVersion: apps/v1
kind: Deployment
metadata:
  name: server-side-deployment
  namespace: default
spec:
  selector:
    matchLabels:
      app: server-side-deployment
  replicas: 1
  template:
    metadata:
      labels:
        app: server-side-deployment
    spec:
      containers:
        - name: server-side-deployment
          image: (DOCKER_IMAGE_PATH)
          imagePullPolicy: Always
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: recommended-upgrades-qa-int
  namespace: default
spec:
  selector:
    app: server-side-deployment
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
      name: http
client side
apiVersion: apps/v1
kind: Deployment
metadata:
  name: client-side-deployment
  namespace: default
spec:
  selector:
    matchLabels:
      app: client-side-deployment
  replicas: 1
  template:
    metadata:
      labels:
        app: client-side-deployment
    spec:
      containers:
        - name: client-side-deployment
          image: (DOCKER_IMAGE_PATH)
          imagePullPolicy: Always
          env:
            - name: RECOMANDED_SERVICE
              value: http://recommended-upgrades-qa-int
          ports:
            - containerPort: 80
From the docs:
“Normal” (not headless) Services are assigned a DNS A record for a name of the form my-svc.my-namespace.svc.cluster.local. This resolves to the cluster IP of the Service.
Your issue here is probably using <service name> while your service is in another namespace. Try using:
<service name>.<service namespace>.svc.cluster.local
It seems we figured it out: first, the URL must contain port 80; there was also an inner uncaught exception in the server service which may have caused it not to work.
Thank you all
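In other words, a gRPC target is a bare host:port with no http:// scheme. A hedged sketch of the corrected client address (RECOMANDED_SERVICE is the env var from the manifest above; the FQDN variant also works across namespaces):

// gRPC targets are "host:port" -- no scheme prefix
const grpcAddress = process.env.RECOMANDED_SERVICE ||
    'recommended-upgrades-qa-int.default.svc.cluster.local:80';
this.client = new RecommendedService(grpcAddress, grpc.credentials.createInsecure());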

OpenShift Access Mongodb Pod from another Pod

I'm currently trying to deploy a MongoDB pod on OpenShift and to access this pod from another Node.js application via mongoose. Now at first everything seems fine. I have created a route to the MongoDB, and when I open it in my browser I get:
It looks like you are trying to access MongoDB over HTTP on the
native driver port.
So far so good. But when I try opening a connection to the database from another pod, it refuses the connection. I'm using the username and password provided by OpenShift and connect to
mongodb://[username]:[password]@[host]:[port]/[dbname]
unfortunately without luck. It seems that the database is only accepting connections from localhost, but I could not find out how to change that. It would be great if someone had an idea.
Here's the deployment config:
apiVersion: v1
kind: DeploymentConfig
metadata:
  annotations:
    template.alpha.openshift.io/wait-for-ready: "true"
  creationTimestamp: null
  generation: 1
  labels:
    app: mongodb-persistent
    template: mongodb-persistent-template
  name: mongodb
spec:
  replicas: 1
  selector:
    name: mongodb
  strategy:
    activeDeadlineSeconds: 21600
    recreateParams:
      timeoutSeconds: 600
    resources: {}
    type: Recreate
  template:
    metadata:
      creationTimestamp: null
      labels:
        name: mongodb
    spec:
      containers:
        - env:
            - name: MONGODB_USER
              valueFrom:
                secretKeyRef:
                  key: database-user
                  name: mongodb
            - name: MONGODB_PASSWORD
              valueFrom:
                secretKeyRef:
                  key: database-password
                  name: mongodb
            - name: MONGODB_ADMIN_PASSWORD
              valueFrom:
                secretKeyRef:
                  key: database-admin-password
                  name: mongodb
            - name: MONGODB_DATABASE
              valueFrom:
                secretKeyRef:
                  key: database-name
                  name: mongodb
          image: registry.access.redhat.com/rhscl/mongodb-32-rhel7@sha256:82c79f0e54d5a23f96671373510159e4fac478e2aeef4181e61f25ac38c1ae1f
          imagePullPolicy: IfNotPresent
          livenessProbe:
            failureThreshold: 3
            initialDelaySeconds: 30
            periodSeconds: 10
            successThreshold: 1
            tcpSocket:
              port: 27017
            timeoutSeconds: 1
          name: mongodb
          ports:
            - containerPort: 27017
              protocol: TCP
          readinessProbe:
            exec:
              command:
                - /bin/sh
                - -i
                - -c
                - mongo 127.0.1:27017/$MONGODB_DATABASE -u $MONGODB_USER -p $MONGODB_PASSWORD --eval="quit()"
            failureThreshold: 3
            initialDelaySeconds: 3
            periodSeconds: 10
            successThreshold: 1
            timeoutSeconds: 1
          resources:
            limits:
              memory: 512Mi
          securityContext:
            capabilities: {}
            privileged: false
          terminationMessagePath: /dev/termination-log
          volumeMounts:
            - mountPath: /var/lib/mongodb/data
              name: mongodb-data
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
        - name: mongodb-data
          persistentVolumeClaim:
            claimName: mongodb
  test: false
  triggers:
    - imageChangeParams:
        automatic: true
        containerNames:
          - mongodb
        from:
          kind: ImageStreamTag
          name: mongodb:3.2
          namespace: openshift
      type: ImageChange
    - type: ConfigChange
status:
  availableReplicas: 0
  latestVersion: 0
  observedGeneration: 0
  replicas: 0
  unavailableReplicas: 0
  updatedReplicas: 0
The Service Config
apiVersion: v1
kind: Service
metadata:
  annotations:
    template.openshift.io/expose-uri: mongodb://{.spec.clusterIP}:{.spec.ports[?(.name=="mongo")].port}
  creationTimestamp: null
  labels:
    app: mongodb-persistent
    template: mongodb-persistent-template
  name: mongodb
spec:
  ports:
    - name: mongo
      port: 27017
      protocol: TCP
      targetPort: 27017
  selector:
    name: mongodb
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
and the pod
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubernetes.io/created-by: |
      {"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"some-name-space","name":"mongodb-3","uid":"xxxx-xxx-xxx-xxxxxx","apiVersion":"v1","resourceVersion":"244413593"}}
    kubernetes.io/limit-ranger: 'LimitRanger plugin set: cpu request for container
      mongodb'
    openshift.io/deployment-config.latest-version: "3"
    openshift.io/deployment-config.name: mongodb
    openshift.io/deployment.name: mongodb-3
    openshift.io/scc: nfs-scc
  creationTimestamp: null
  generateName: mongodb-3-
  labels:
    deployment: mongodb-3
    deploymentconfig: mongodb
    name: mongodb
  ownerReferences:
    - apiVersion: v1
      controller: true
      kind: ReplicationController
      name: mongodb-3
      uid: a694b832-5dd2-11e8-b2fc-40f2e91e2433
spec:
  containers:
    - env:
        - name: MONGODB_USER
          valueFrom:
            secretKeyRef:
              key: database-user
              name: mongodb
        - name: MONGODB_PASSWORD
          valueFrom:
            secretKeyRef:
              key: database-password
              name: mongodb
        - name: MONGODB_ADMIN_PASSWORD
          valueFrom:
            secretKeyRef:
              key: database-admin-password
              name: mongodb
        - name: MONGODB_DATABASE
          valueFrom:
            secretKeyRef:
              key: database-name
              name: mongodb
      image: registry.access.redhat.com/rhscl/mongodb-32-rhel7@sha256:82c79f0e54d5a23f96671373510159e4fac478e2aeef4181e61f25ac38c1ae1f
      imagePullPolicy: IfNotPresent
      livenessProbe:
        failureThreshold: 3
        initialDelaySeconds: 30
        periodSeconds: 10
        successThreshold: 1
        tcpSocket:
          port: 27017
        timeoutSeconds: 1
      name: mongodb
      ports:
        - containerPort: 27017
          protocol: TCP
      readinessProbe:
        exec:
          command:
            - /bin/sh
            - -i
            - -c
            - mongo 127.0.1:27017/$MONGODB_DATABASE -u $MONGODB_USER -p $MONGODB_PASSWORD --eval="quit()"
        failureThreshold: 3
        initialDelaySeconds: 3
        periodSeconds: 10
        successThreshold: 1
        timeoutSeconds: 1
      resources:
        limits:
          memory: 512Mi
        requests:
          cpu: 250m
          memory: 512Mi
      securityContext:
        capabilities:
          drop:
            - KILL
            - MKNOD
            - SETGID
            - SETUID
            - SYS_CHROOT
        privileged: false
        runAsUser: 1049930000
        seLinuxOptions:
          level: s0:c223,c212
      terminationMessagePath: /dev/termination-log
      volumeMounts:
        - mountPath: /var/lib/mongodb/data
          name: mongodb-data
        - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
          name: default-token-rfvr5
          readOnly: true
  dnsPolicy: ClusterFirst
  imagePullSecrets:
    - name: default-dockercfg-3mpps
  nodeName: thenode.name.net
  nodeSelector:
    region: primary
  restartPolicy: Always
  securityContext:
    fsGroup: 1049930000
    seLinuxOptions:
      level: s0:c223,c212
    supplementalGroups:
      - 5555
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  volumes:
    - name: mongodb-data
      persistentVolumeClaim:
        claimName: mongodb
    - name: default-token-rfvr5
      secret:
        defaultMode: 420
        secretName: default-token-rfvr5
status:
  phase: Pending
OK, that was a long search, but finally I was able to solve it. My first mistake was that routes are not suited for making a connection to a database, as they only use the HTTP protocol.
Now there were two use cases left for me:
1. You're working on your local machine and want to test code that you later upload to OpenShift
2. You deploy that code to OpenShift (it has to be in the same project, but it is a different app than the database)
1. Local Machine
Since the route doesn't work, port forwarding is used. I had read about that before but didn't really understand what it meant (I thought the service itself was already forwarding ports).
When you are on your local machine, you do the following with the oc CLI:
oc port-forward <pod-name> <local-port>:<remote-port>
You'll get the info that the port is forwarded. The thing is that your app will now connect to localhost (even on your local machine).
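For example, with a hypothetical pod name, the forwarding session looks roughly like this:

$ oc port-forward mongodb-3-abc12 27017:27017
Forwarding from 127.0.0.1:27017 -> 27017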
2. App running on OpenShift
After you upload your code to OpenShift (in my case: Add to project --> Node.js --> Add your repo), localhost will not work any longer.
What took a while for me to understand is that, as long as you are in the same project, a lot of information is available to you through environment variables.
So just check the name of your database's service (in my case mongodb) and you will find the host and port to use.
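A quick way to inspect those variables from your app's pod (a hedged example; the pod name is a placeholder and the values shown are illustrative):

$ oc rsh <your-app-pod> env | grep MONGODB_SERVICE
MONGODB_SERVICE_HOST=172.30.xx.xx
MONGODB_SERVICE_PORT=27017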
Summary
Here's a little code example that now works both on the local machine and on OpenShift. I have already set up a persistent MongoDB on OpenShift called mongodb.
The code doesn't do much, but it will make a connection and tell you that it did, so you know it's working.
var mongoose = require('mongoose');

// Connect to MongoDB
var username = process.env.MONGO_DB_USERNAME || 'someUserName';
var password = process.env.MONGO_DB_PASSWORD || 'somePassword';
var host = process.env.MONGODB_SERVICE_HOST || '127.0.0.1';
var port = process.env.MONGODB_SERVICE_PORT || '27017';
var database = process.env.MONGO_DB_DATABASE || 'sampledb';

console.log('---DATABASE PARAMETERS---');
console.log('Host: ' + host);
console.log('Port: ' + port);
console.log('Username: ' + username);
console.log('Password: ' + password);
console.log('Database: ' + database);

var connectionString = 'mongodb://' + username + ':' + password + '@' + host + ':' + port + '/' + database;

console.log('---CONNECTING TO---');
console.log(connectionString);

mongoose.connect(connectionString);
mongoose.connection.once('open', (data) => {
  console.log('Connection has been made');
  console.log(data);
});
