OpenShift: Access MongoDB Pod from Another Pod - Node.js

I'm currently trying to deploy a MongoDB pod on OpenShift and access it from a Node.js application via Mongoose. At first everything seems fine: I created a route to the MongoDB pod, and when I open it in my browser I get
It looks like you are trying to access MongoDB over HTTP on the
native driver port.
So far so good. But when I try to open a connection to the database from another pod, the connection is refused. I'm using the username and password provided by OpenShift and connect to
mongodb://[username]:[password]@[host]:[port]/[dbname]
unfortunately without luck. It seems that the database only accepts connections from localhost, but I could not find out how to change that. It would be great if someone had an idea.
Here's the DeploymentConfig:
apiVersion: v1
kind: DeploymentConfig
metadata:
  annotations:
    template.alpha.openshift.io/wait-for-ready: "true"
  creationTimestamp: null
  generation: 1
  labels:
    app: mongodb-persistent
    template: mongodb-persistent-template
  name: mongodb
spec:
  replicas: 1
  selector:
    name: mongodb
  strategy:
    activeDeadlineSeconds: 21600
    recreateParams:
      timeoutSeconds: 600
    resources: {}
    type: Recreate
  template:
    metadata:
      creationTimestamp: null
      labels:
        name: mongodb
    spec:
      containers:
      - env:
        - name: MONGODB_USER
          valueFrom:
            secretKeyRef:
              key: database-user
              name: mongodb
        - name: MONGODB_PASSWORD
          valueFrom:
            secretKeyRef:
              key: database-password
              name: mongodb
        - name: MONGODB_ADMIN_PASSWORD
          valueFrom:
            secretKeyRef:
              key: database-admin-password
              name: mongodb
        - name: MONGODB_DATABASE
          valueFrom:
            secretKeyRef:
              key: database-name
              name: mongodb
        image: registry.access.redhat.com/rhscl/mongodb-32-rhel7@sha256:82c79f0e54d5a23f96671373510159e4fac478e2aeef4181e61f25ac38c1ae1f
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          initialDelaySeconds: 30
          periodSeconds: 10
          successThreshold: 1
          tcpSocket:
            port: 27017
          timeoutSeconds: 1
        name: mongodb
        ports:
        - containerPort: 27017
          protocol: TCP
        readinessProbe:
          exec:
            command:
            - /bin/sh
            - -i
            - -c
            - mongo 127.0.0.1:27017/$MONGODB_DATABASE -u $MONGODB_USER -p $MONGODB_PASSWORD
              --eval="quit()"
          failureThreshold: 3
          initialDelaySeconds: 3
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        resources:
          limits:
            memory: 512Mi
        securityContext:
          capabilities: {}
          privileged: false
        terminationMessagePath: /dev/termination-log
        volumeMounts:
        - mountPath: /var/lib/mongodb/data
          name: mongodb-data
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - name: mongodb-data
        persistentVolumeClaim:
          claimName: mongodb
  test: false
  triggers:
  - imageChangeParams:
      automatic: true
      containerNames:
      - mongodb
      from:
        kind: ImageStreamTag
        name: mongodb:3.2
        namespace: openshift
    type: ImageChange
  - type: ConfigChange
status:
  availableReplicas: 0
  latestVersion: 0
  observedGeneration: 0
  replicas: 0
  unavailableReplicas: 0
  updatedReplicas: 0
The Service config:
apiVersion: v1
kind: Service
metadata:
  annotations:
    template.openshift.io/expose-uri: mongodb://{.spec.clusterIP}:{.spec.ports[?(.name=="mongo")].port}
  creationTimestamp: null
  labels:
    app: mongodb-persistent
    template: mongodb-persistent-template
  name: mongodb
spec:
  ports:
  - name: mongo
    port: 27017
    protocol: TCP
    targetPort: 27017
  selector:
    name: mongodb
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
And the Pod:
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubernetes.io/created-by: |
      {"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"some-name-space","name":"mongodb-3","uid":"xxxx-xxx-xxx-xxxxxx","apiVersion":"v1","resourceVersion":"244413593"}}
    kubernetes.io/limit-ranger: 'LimitRanger plugin set: cpu request for container
      mongodb'
    openshift.io/deployment-config.latest-version: "3"
    openshift.io/deployment-config.name: mongodb
    openshift.io/deployment.name: mongodb-3
    openshift.io/scc: nfs-scc
  creationTimestamp: null
  generateName: mongodb-3-
  labels:
    deployment: mongodb-3
    deploymentconfig: mongodb
    name: mongodb
  ownerReferences:
  - apiVersion: v1
    controller: true
    kind: ReplicationController
    name: mongodb-3
    uid: a694b832-5dd2-11e8-b2fc-40f2e91e2433
spec:
  containers:
  - env:
    - name: MONGODB_USER
      valueFrom:
        secretKeyRef:
          key: database-user
          name: mongodb
    - name: MONGODB_PASSWORD
      valueFrom:
        secretKeyRef:
          key: database-password
          name: mongodb
    - name: MONGODB_ADMIN_PASSWORD
      valueFrom:
        secretKeyRef:
          key: database-admin-password
          name: mongodb
    - name: MONGODB_DATABASE
      valueFrom:
        secretKeyRef:
          key: database-name
          name: mongodb
    image: registry.access.redhat.com/rhscl/mongodb-32-rhel7@sha256:82c79f0e54d5a23f96671373510159e4fac478e2aeef4181e61f25ac38c1ae1f
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 3
      initialDelaySeconds: 30
      periodSeconds: 10
      successThreshold: 1
      tcpSocket:
        port: 27017
      timeoutSeconds: 1
    name: mongodb
    ports:
    - containerPort: 27017
      protocol: TCP
    readinessProbe:
      exec:
        command:
        - /bin/sh
        - -i
        - -c
        - mongo 127.0.0.1:27017/$MONGODB_DATABASE -u $MONGODB_USER -p $MONGODB_PASSWORD
          --eval="quit()"
      failureThreshold: 3
      initialDelaySeconds: 3
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 1
    resources:
      limits:
        memory: 512Mi
      requests:
        cpu: 250m
        memory: 512Mi
    securityContext:
      capabilities:
        drop:
        - KILL
        - MKNOD
        - SETGID
        - SETUID
        - SYS_CHROOT
      privileged: false
      runAsUser: 1049930000
      seLinuxOptions:
        level: s0:c223,c212
    terminationMessagePath: /dev/termination-log
    volumeMounts:
    - mountPath: /var/lib/mongodb/data
      name: mongodb-data
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-rfvr5
      readOnly: true
  dnsPolicy: ClusterFirst
  imagePullSecrets:
  - name: default-dockercfg-3mpps
  nodeName: thenode.name.net
  nodeSelector:
    region: primary
  restartPolicy: Always
  securityContext:
    fsGroup: 1049930000
    seLinuxOptions:
      level: s0:c223,c212
    supplementalGroups:
    - 5555
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  volumes:
  - name: mongodb-data
    persistentVolumeClaim:
      claimName: mongodb
  - name: default-token-rfvr5
    secret:
      defaultMode: 420
      secretName: default-token-rfvr5
status:
  phase: Pending

OK, that was a long search, but I was finally able to solve it. My first mistake was that routes are not suited for connecting to a database, since they only speak the HTTP protocol.
That left two use cases for me:
You're working on your local machine and want to test code that you later upload to OpenShift
You deploy that code to OpenShift (has to be in the same project but is a different app than the database)
1. Local Machine
Since the route doesn't work, port forwarding is used instead. I had read about that before but didn't really understand what it meant (I thought the service itself was already forwarding ports).
When you are on your local machine, run the following with the oc CLI:
oc port-forward <pod-name> <local-port>:<remote-port>
You'll get a message confirming that the port is being forwarded. The important point is that your app now connects to localhost, even though the database itself runs on OpenShift.
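As a concrete sketch (the pod name here is made up; look yours up with oc get pods, and substitute your own database name and credentials):
oc get pods
oc port-forward mongodb-3-abc12 27017:27017
# in a second terminal, while the forward is running:
mongo 127.0.0.1:27017/sampledb -u someUserName -p somePassword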
2. App running on OpenShift
After you upload your code to OpenShift (in my case: Add to project --> Node.js --> add your repo), localhost will no longer work.
What took me a while to understand is that, as long as you are in the same project, a lot of information is exposed to your app through environment variables.
So just check the name of your database's service (in my case mongodb) and you will find the host and port to use in the corresponding environment variables.
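To see what is available, you can list the variables from inside your application pod; the values below are only illustrative and will differ in your project:
oc rsh <your-app-pod> env | grep MONGODB_SERVICE
# e.g. MONGODB_SERVICE_HOST=172.30.21.87
# e.g. MONGODB_SERVICE_PORT=27017
The service is also reachable by its cluster DNS name, mongodb.<project>.svc.cluster.local.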
Summary
Here's a little code example that now works both on the local machine and on OpenShift. I have already set up a persistent MongoDB on OpenShift called mongodb.
The code doesn't do much, but it makes a connection and tells you that it did, so you know it's working.
var mongoose = require('mongoose');

// Connect to MongoDB: read connection parameters from the environment,
// falling back to local defaults (used together with oc port-forward)
var username = process.env.MONGO_DB_USERNAME || 'someUserName';
var password = process.env.MONGO_DB_PASSWORD || 'somePassword';
var host = process.env.MONGODB_SERVICE_HOST || '127.0.0.1';
var port = process.env.MONGODB_SERVICE_PORT || '27017';
var database = process.env.MONGO_DB_DATABASE || 'sampledb';

console.log('---DATABASE PARAMETERS---');
console.log('Host: ' + host);
console.log('Port: ' + port);
console.log('Username: ' + username);
console.log('Password: ' + password);
console.log('Database: ' + database);

var connectionString = 'mongodb://' + username + ':' + password + '@' + host + ':' + port + '/' + database;

console.log('---CONNECTING TO---');
console.log(connectionString);

mongoose.connect(connectionString);
mongoose.connection.once('open', () => {
  console.log('Connection has been made');
});
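A note on the defaults above: the localhost fallbacks are what let the same file run during local development (with oc port-forward active), while on OpenShift the injected MONGODB_SERVICE_HOST and MONGODB_SERVICE_PORT values are picked up automatically.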

Related

Nginx Ingress Controller getting error in Node.js application with routes prefixed

I'm using the NGINX Ingress Controller in my Kubernetes cluster hosted on Google Kubernetes Engine. The application is a Node.js app.
When I integrated my app with Rollbar (a logging service) I started to notice a repetitive error every ~15 seconds (154K times in one week):
Error: Cannot GET /
I think the reason is that my Node.js application prefixes all of its routes with /v1, i.e. the / route doesn't exist.
PS: Rollbar is linked in the Develop (local), Testing (Heroku) and Production (GKE) environments, and the error only occurs in production.
My ingress file:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-app
spec:
  rules:
  - host: api.company.com
    http:
      paths:
      - path: /v1
        backend:
          serviceName: company-prod-v1-service
          servicePort: 3000
The Ingress documentation says something about the / endpoint, but I don't understand it very well.
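For reference, the section in question is presumably about the default backend: a service that receives any request that matches no rule, such as a plain GET /. A rough sketch, with a placeholder service name, on the same extensions/v1beta1 API used above:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-app
spec:
  backend:
    # default backend: catches requests that match no rule, e.g. GET /
    serviceName: some-default-service   # placeholder
    servicePort: 3000
  rules:
  - host: api.company.com
    http:
      paths:
      - path: /v1
        backend:
          serviceName: company-prod-v1-service
          servicePort: 3000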
I need to get rid of this error. Can any Jedi masters help me fix it?
Thanks in advance
[UPDATE]
Deployment File
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-deploy-v1
  labels:
    app: app-v1
    version: 1.10.0-2
spec:
  selector:
    matchLabels:
      app: app-v1
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  minReadySeconds: 30
  progressDeadlineSeconds: 60
  template:
    metadata:
      labels:
        app: app-v1
        version: 1.10.0-2
    spec:
      serviceAccountName: gke-account
      containers:
      - name: app-container
        image: registry.gitlab.com/company/app-back:1.10.0
        lifecycle:
          postStart:
            exec:
              command:
              - "/bin/sh"
              - "-c"
              - >
                if [ -f "$SECRETS_FOLDER/$APPLE_NOTIFICATION_KEY" ]; then
                  cp $SECRETS_FOLDER/$APPLE_NOTIFICATION_KEY $APP_FOLDER;
                fi;
      - name: cloud-sql-proxy
        image: gcr.io/cloudsql-docker/gce-proxy:1.17
        command:
        - "/cloud_sql_proxy"
        - "-instances=thermal-petal-283313:us-east1:app-instance=tcp:5432"
        securityContext:
          runAsNonRoot: true
        lifecycle:
          preStop:
            exec:
              command: ["/bin/sleep", "300"]
      imagePullSecrets:
      - name: registry-credentials
Ingress controller Pod file (auto-generated by the installation command):
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2021-06-16T03:08:53Z"
  generateName: ingress-nginx-controller
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  name: ingress-nginx-controller
  namespace: ingress-nginx
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: ingress-nginx-controller
  resourceVersion: "112544950"
  selfLink: /api/v1/namespaces/ingress-nginx/pods/ingress-nginx-controller
spec:
  containers:
  - image: k8s.gcr.io/ingress-nginx/controller:v0.46.0
    imagePullPolicy: IfNotPresent
    lifecycle:
      preStop:
        exec:
          command:
          - /wait-shutdown
    livenessProbe:
      failureThreshold: 5
      httpGet:
        path: /healthz
        port: 10254
        scheme: HTTP
      initialDelaySeconds: 10
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 1
    name: controller
    ports:
    - containerPort: 80
      name: http
      protocol: TCP
    - containerPort: 443
      name: https
      protocol: TCP
    - containerPort: 8443
      name: webhook
      protocol: TCP
    readinessProbe:
      failureThreshold: 3
      httpGet:
        path: /healthz
        port: 10254
        scheme: HTTP
      initialDelaySeconds: 10
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 1

Azure Key Vault integration with AKS works for nginx tutorial Pod, but not actual project deployment

Per the title, I have the integration working following the documentation.
I can deploy the nginx.yaml and after about 70 seconds I can print out secrets with:
kubectl exec -it nginx -- cat /mnt/secrets-store/secret1
Now I'm trying to apply it to a PostgreSQL deployment for testing and I get the following from the Pod description:
Warning FailedMount 3s kubelet MountVolume.SetUp failed for volume "secrets-store01-inline" : rpc error: code = Unknown desc = failed to mount secrets store objects for pod staging/postgres-deployment-staging-69965ff767-8hmww, err: rpc error: code = Unknown desc = failed to mount objects, error: failed to get keyvault client: failed to get key vault token: nmi response failed with status code: 404, err: <nil>
And from the nmi logs:
E0221 22:54:32.037357 1 server.go:234] failed to get identities, error: getting assigned identities for pod staging/postgres-deployment-staging-69965ff767-8hmww in CREATED state failed after 16 attempts, retry duration [5]s, error: <nil>. Check MIC pod logs for identity assignment errors
I0221 22:54:32.037409 1 server.go:192] status (404) took 80003389208 ns for req.method=GET reg.path=/host/token/ req.remote=127.0.0.1
Not sure why, since I basically copied the settings from nginx.yaml into postgres.yaml. Here they are:
# nginx.yaml
kind: Pod
apiVersion: v1
metadata:
  name: nginx
  namespace: staging
  labels:
    aadpodidbinding: aks-akv-identity-binding-selector
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: secrets-store01-inline
      mountPath: /mnt/secrets-store
      readOnly: true
  volumes:
  - name: secrets-store01-inline
    csi:
      driver: secrets-store.csi.k8s.io
      readOnly: true
      volumeAttributes:
        secretProviderClass: aks-akv-secret-provider
# postgres.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-deployment-staging
  namespace: staging
  labels:
    aadpodidbinding: aks-akv-identity-binding-selector
spec:
  replicas: 1
  selector:
    matchLabels:
      component: postgres
  template:
    metadata:
      labels:
        component: postgres
    spec:
      containers:
      - name: postgres
        image: postgres:13-alpine
        ports:
        - containerPort: 5432
        volumeMounts:
        - name: secrets-store01-inline
          mountPath: /mnt/secrets-store
          readOnly: true
        - name: postgres-storage-staging
          mountPath: /var/postgresql
      volumes:
      - name: secrets-store01-inline
        csi:
          driver: secrets-store.csi.k8s.io
          readOnly: true
          volumeAttributes:
            secretProviderClass: aks-akv-secret-provider
      - name: postgres-storage-staging
        persistentVolumeClaim:
          claimName: postgres-storage-staging
---
apiVersion: v1
kind: Service
metadata:
  name: postgres-cluster-ip-service-staging
  namespace: staging
spec:
  type: ClusterIP
  selector:
    component: postgres
  ports:
  - port: 5432
    targetPort: 5432
Suggestions for what the issue is here?
Oversight on my part... the aadpodidbinding label should be under template:, per:
https://azure.github.io/aad-pod-identity/docs/best-practices/#deploymenthttpskubernetesiodocsconceptsworkloadscontrollersdeployment
The resulting YAML should be:
# postgres.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-deployment-production
  namespace: production
spec:
  replicas: 1
  selector:
    matchLabels:
      component: postgres
  template:
    metadata:
      labels:
        component: postgres
        aadpodidbinding: aks-akv-identity-binding-selector
    spec:
      containers:
      - name: postgres
        image: postgres:13-alpine
        ports:
        - containerPort: 5432
        env:
        - name: POSTGRES_DB_FILE
          value: /mnt/secrets-store/DEV-PGDATABASE
        - name: POSTGRES_USER_FILE
          value: /mnt/secrets-store/DEV-PGUSER
        - name: POSTGRES_PASSWORD_FILE
          value: /mnt/secrets-store/DEV-PGPASSWORD
        - name: POSTGRES_INITDB_ARGS
          value: "-A md5"
        - name: PGDATA
          value: /var/postgresql/data
        volumeMounts:
        - name: secrets-store01-inline
          mountPath: /mnt/secrets-store
          readOnly: true
        - name: postgres-storage-production
          mountPath: /var/postgresql
      volumes:
      - name: secrets-store01-inline
        csi:
          driver: secrets-store.csi.k8s.io
          readOnly: true
          volumeAttributes:
            secretProviderClass: aks-akv-secret-provider
      - name: postgres-storage-production
        persistentVolumeClaim:
          claimName: postgres-storage-production
---
apiVersion: v1
kind: Service
metadata:
  name: postgres-cluster-ip-service-production
  namespace: production
spec:
  type: ClusterIP
  selector:
    component: postgres
  ports:
  - port: 5432
    targetPort: 5432
Adding the label under the pod template in spec resolves the issue: use the label aadpodidbinding: <your azure pod identity selector> in the template's labels section of the deployment.yaml file.
Sample deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
        aadpodidbinding: azure-pod-identity-binding-selector
    spec:
      containers:
      - name: nginx
        image: nginx
        env:
        - name: SECRET
          valueFrom:
            secretKeyRef:
              name: test-secret
              key: key
        volumeMounts:
        - name: secrets-store-inline
          mountPath: "/mnt/secrets-store"
          readOnly: true
      volumes:
      - name: secrets-store-inline
        csi:
          driver: secrets-store.csi.k8s.io
          readOnly: true
          volumeAttributes:
            secretProviderClass: dev-1spc

Access Pod By Internet From AKS Private Cluster

I have a fully private AKS cluster, set up on a private VNET, which I access through an Azure Bastion host to run kubectl commands. I have also set up a DevOps pipeline which uses a self-hosted agent to run commands on the private cluster. All my pods and ingresses seem to be running fine. However, when I try to access my ingress using a hostname (by mapping the public IP), I get a 404 Not Found. When verifying against my public cluster setup, I don't see any issues. Can someone please shed some light on why I cannot access my pod, which appears to be running fine?
Also, it seems I cannot access the external IP of the ingress even from the virtual machine on the same virtual network, although I can run kubectl commands and access the Kubernetes dashboard.
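For context, a couple of neutral checks that can help narrow down where the 404 comes from, sketched with placeholder values (hostnames and namespace as in the manifests below, IP from your own setup):
kubectl get ingress -n app-auth
kubectl describe ingress app-auth-staging-ingress-main -n app-auth
# from the VM inside the VNET, force the Host header the ingress rules expect:
curl -vk -H "Host: stagingauth.app.com" https://<ingress-external-ip>/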
---
apiVersion: v1
kind: Service
metadata:
  namespace: app-auth
  labels:
    environment: staging
  name: app-auth-staging # The name of the app
spec:
  type: ClusterIP
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: app-auth-staging
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-auth-staging
  namespace: app-auth
  labels:
    app: app-auth-staging
    environment: staging # The environment being used
    app-role: api # The application type
    tier: backend # The tier that this app represents
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-auth-staging
  template:
    metadata:
      labels:
        app: app-auth-staging
        environment: staging
        app-role: api
        tier: backend
      annotations:
        build: _{Tag}_
    spec:
      containers:
      - name: auth
        image: auth.azurecr.io/auth:_{Tag}_ # Note: Do not modify this field.
        imagePullPolicy: Always
        env:
        - name: ConnectionStrings__ZigzyAuth # Note: The appsettings value being replaced
          valueFrom:
            secretKeyRef:
              name: connectionstrings
              key: _{ConnectionString}_ # Note: This is an environmental variable, it is replaced accordingly in DevOps
        ports:
        - containerPort: 80
        readinessProbe:
          tcpSocket:
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 10
        livenessProbe:
          tcpSocket:
            port: 80
          initialDelaySeconds: 15
          periodSeconds: 20
        volumeMounts:
        - name: secrets-store-inline
          mountPath: "/mnt/secrets-store"
          readOnly: true
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: agentpool
                operator: In
                values:
                - general
      volumes:
      - name: secrets-store-inline
        csi:
          driver: secrets-store.csi.k8s.io
          readOnly: true
          volumeAttributes:
            secretProviderClass: "aks-provider"
          nodePublishSecretRef:
            name: aks-prod-credstore
      imagePullSecrets:
      - name: aks-prod-acrps
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app-auth-staging-ingress-main # The name of the ingress, ex: app-auth-ingress-main
  namespace: app-auth
  labels:
    environment: staging
  annotations:
    kubernetes.io/ingress.class: nginx
    # nginx.ingress.kubernetes.io/enable-cors: "true"
    # nginx.ingress.kubernetes.io/cors-allow-methods: "GET, POST, OPTIONS"
    # nginx.ingress.kubernetes.io/cors-allow-origin: "https://admin.example.com"
    # nginx.ingress.kubernetes.io/cors-allow-credentials: "true"
spec:
  tls:
  - hosts:
    - stagingauth.app.com # Modify
    - frontend.21.72.207.63.nip.io
    - aksstagingauth.app.com
    secretName: zigzypfxtls
  rules:
  - host: stagingauth.app.com
    http:
      paths:
      - backend:
          serviceName: zigzy-auth-staging # Modify
          servicePort: 80
        path: /
  - host: frontend.21.72.207.63.nip.io
    http:
      paths:
      - backend:
          serviceName: app-auth-staging # Modify
          servicePort: 80
        path: /
  - host: aksstagingauth.app.com
    http:
      paths:
      - backend:
          serviceName: app-auth-staging # Modify
          servicePort: 80
        path: /

Application running in Kubernetes cron job does not connect to database in same Kubernetes cluster

I have a Kubernetes cluster running a PostgreSQL database, a Grafana dashboard, and a Python single-run application (built as a Docker image) that runs hourly inside a Kubernetes CronJob (see manifests below). Additionally, this is all deployed using ArgoCD with Istio sidecar injection.
The issue I'm having (as the title indicates) is that my Python application cannot connect to the database in the cluster. This is very strange to me, since the dashboard can connect to the database, so I'm not sure what might be different for the Python app.
Following are my manifests (with a few things changed to remove identifiable information):
Contents of database.yaml:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: database
  name: database
spec:
  replicas: 1
  selector:
    matchLabels:
      app: database
  strategy: {}
  template:
    metadata:
      labels:
        app: database
    spec:
      containers:
      - image: postgres:12.5
        imagePullPolicy: ""
        name: database
        ports:
        - containerPort: 5432
        env:
        - name: POSTGRES_DB
          valueFrom:
            secretKeyRef:
              name: postgres-secret
              key: POSTGRES_DB
        - name: POSTGRES_USER
          valueFrom:
            secretKeyRef:
              name: postgres-secret
              key: POSTGRES_USER
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: postgres-secret
              key: POSTGRES_PASSWORD
        resources: {}
        readinessProbe:
          initialDelaySeconds: 30
          tcpSocket:
            port: 5432
      restartPolicy: Always
      serviceAccountName: ""
      volumes: null
status: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: database
  name: database
spec:
  ports:
  - name: "5432"
    port: 5432
    targetPort: 5432
  selector:
    app: database
status:
  loadBalancer: {}
Contents of dashboard.yaml:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: dashboard
  name: dashboard
spec:
  replicas: 1
  selector:
    matchLabels:
      app: dashboard
  strategy: {}
  template:
    metadata:
      labels:
        app: dashboard
    spec:
      containers:
      - image: grafana:7.3.3
        imagePullPolicy: ""
        name: dashboard
        ports:
        - containerPort: 3000
        resources: {}
        env:
        - name: POSTGRES_DB
          valueFrom:
            secretKeyRef:
              name: postgres-secret
              key: POSTGRES_DB
        - name: POSTGRES_USER
          valueFrom:
            secretKeyRef:
              name: postgres-secret
              key: POSTGRES_USER
        - name: POSTGRES_PASSWORD
          valueFrom:
            secretKeyRef:
              name: postgres-secret
              key: POSTGRES_PASSWORD
        volumeMounts:
        - name: grafana-datasource
          mountPath: /etc/grafana/provisioning/datasources
        readinessProbe:
          initialDelaySeconds: 30
          httpGet:
            path: /
            port: 3000
      restartPolicy: Always
      serviceAccountName: ""
      volumes:
      - name: grafana-datasource
        configMap:
          defaultMode: 420
          name: grafana-datasource
      - name: grafana-dashboard-provision
status: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: dashboard
  name: dashboard
spec:
  ports:
  - name: "3000"
    port: 3000
    targetPort: 3000
  selector:
    app: dashboard
status:
  loadBalancer: {}
Contents of cronjob.yaml:
---
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: python
spec:
  concurrencyPolicy: Replace
  # TODO: Go back to hourly when finished testing/troubleshooting
  # schedule: "@hourly"
  schedule: "*/15 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - image: python-tool:1.0.5
            imagePullPolicy: ""
            name: python
            args: []
            command:
            - /bin/sh
            - -c
            - >-
              echo "$(POSTGRES_USER)" > creds/db.creds;
              echo "$(POSTGRES_PASSWORD)" >> creds/db.creds;
              echo "$(SERVICE1_TOKEN)" > creds/service1.creds;
              echo "$(SERVICE2_TOKEN)" > creds/service2.creds;
              echo "$(SERVICE3_TOKEN)" > creds/service3.creds;
              python3 -u main.py;
              echo "Job finished with exit code $?";
            env:
            - name: POSTGRES_DB
              valueFrom:
                secretKeyRef:
                  name: postgres-secret
                  key: POSTGRES_DB
            - name: POSTGRES_USER
              valueFrom:
                secretKeyRef:
                  name: postgres-secret
                  key: POSTGRES_USER
            - name: POSTGRES_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: postgres-secret
                  key: POSTGRES_PASSWORD
            - name: SERVICE1_TOKEN
              valueFrom:
                secretKeyRef:
                  name: api-tokens-secret
                  key: SERVICE1_TOKEN
            - name: SERVICE2_TOKEN
              valueFrom:
                secretKeyRef:
                  name: api-tokens-secret
                  key: SERVICE2_TOKEN
            - name: SERVICE3_TOKEN
              valueFrom:
                secretKeyRef:
                  name: api-tokens-secret
                  key: SERVICE3_TOKEN
          restartPolicy: OnFailure
          serviceAccountName: ""
status: {}
Now, as I mentioned, Istio is also part of this picture, so I have a VirtualService for the dashboard since it should be accessible outside of the cluster, but that's it.
With all of that out of the way, here's what I've done to try and solve this, myself:
Confirm the CronJob is using the correct connection settings (i.e. host, database name, username, and password) for connecting to the database.
For this, I added echo statements to the CronJob deployment showing the username and password (I know, I know) and they were the expected values. I also know those were the correct connection settings for the database because I used them verbatim to connect the dashboard to the database, which gave a successful connection.
The data source settings for the Grafana dashboard and the error message from the Python application (shown in the ArgoCD logs for the container) were included as screenshots.
Thinking Istio might be causing this problem, I tried disabling Istio sidecar injection for the CronJob resource (by adding the annotation sidecar.istio.io/inject: false to the metadata.annotations section), but the annotation never actually showed up in the Argo logs and no change was observed when the CronJob was running.
I tried kubectl exec-ing into the CronJob container that was running the Python script to debug further, but I was never able to, since the container exited as soon as the connection error occurred.
That said, I've been banging my head into a wall for long enough on this. Could anyone spot what I might be missing and point me in the right direction, please?
I think the problem is that your pod tries to connect to the database before the Istio sidecar is ready, and thus the connection can't be established.
Istio runs an init container that configures the pod's routing table so that all traffic is routed through the sidecar. So if the sidecar isn't running yet and the other pod tries to connect to the db, no connection can be established.
There are two solutions.
First, your job could wait for, say, 30 seconds before calling main.py, using a sleep command.
Alternatively, you could enable holdApplicationUntilProxyStarts. With this, the main container will not start until the sidecar is running.
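A rough sketch of both options (the sleep duration is arbitrary, and the annotation form of holdApplicationUntilProxyStarts assumes Istio 1.7 or later):
# Option 1: let the job wait before starting the script
command:
- /bin/sh
- -c
- >-
  sleep 30;
  python3 -u main.py;

# Option 2: hold the application container until the sidecar is ready,
# set as an annotation on the CronJob's pod template
jobTemplate:
  spec:
    template:
      metadata:
        annotations:
          proxy.istio.io/config: '{ "holdApplicationUntilProxyStarts": true }'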

loadbalancer service won't redirect to desired pod

I'm playing around with Kubernetes and I've set up my environment with 4 deployments:
hello: a basic "hello world" service
auth: provides authentication and encryption
frontend: an nginx reverse proxy which represents a single point of entry from the outside and routes to the appropriate pods internally
nodehello: a basic "hello world" service, written in Node.js (this is what I contributed)
For the hello, auth and nodehello deployments I've set up one internal service each.
For the frontend deployment I've set up a LoadBalancer service which is exposed to the outside world. It uses a ConfigMap nginx-frontend-conf to redirect to the appropriate pods and has the following contents:
upstream hello {
    server hello.default.svc.cluster.local;
}

upstream auth {
    server auth.default.svc.cluster.local;
}

upstream nodehello {
    server nodehello.default.svc.cluster.local;
}

server {
    listen 443;
    ssl on;
    ssl_certificate /etc/tls/cert.pem;
    ssl_certificate_key /etc/tls/key.pem;

    location / {
        proxy_pass http://hello;
    }
    location /login {
        proxy_pass http://auth;
    }
    location /nodehello {
        proxy_pass http://nodehello;
    }
}
When calling the frontend endpoint using curl -k https://<frontend-external-ip>, I get routed to an available hello pod, which is the expected behavior.
When calling https://<frontend-external-ip>/nodehello, however, I don't get routed to a nodehello pod, but to a hello pod again.
I suspect the upstream nodehello configuration is the failing part. I'm not sure how service discovery works here, i.e. how the DNS name nodehello.default.svc.cluster.local is exposed. I'd appreciate an explanation of how it works and what I did wrong.
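For the service-discovery part of the question: every Service gets a cluster-internal DNS name of the form <service>.<namespace>.svc.cluster.local, so nodehello.default.svc.cluster.local should resolve from any pod in the cluster. A quick, illustrative way to check (the throwaway pod name is arbitrary):
kubectl run -it --rm dnstest --image=busybox --restart=Never -- nslookup nodehello.default.svc.cluster.local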
YAML files used:
deployments/hello.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: hello
spec:
  replicas: 3
  template:
    metadata:
      labels:
        app: hello
        track: stable
    spec:
      containers:
      - name: hello
        image: "udacity/example-hello:1.0.0"
        ports:
        - name: http
          containerPort: 80
        - name: health
          containerPort: 81
        resources:
          limits:
            cpu: 0.2
            memory: "10Mi"
        livenessProbe:
          httpGet:
            path: /healthz
            port: 81
            scheme: HTTP
          initialDelaySeconds: 5
          periodSeconds: 15
          timeoutSeconds: 5
        readinessProbe:
          httpGet:
            path: /readiness
            port: 81
            scheme: HTTP
          initialDelaySeconds: 5
          timeoutSeconds: 1
deployments/auth.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: auth
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: auth
        track: stable
    spec:
      containers:
      - name: auth
        image: "udacity/example-auth:1.0.0"
        ports:
        - name: http
          containerPort: 80
        - name: health
          containerPort: 81
        resources:
          limits:
            cpu: 0.2
            memory: "10Mi"
        livenessProbe:
          httpGet:
            path: /healthz
            port: 81
            scheme: HTTP
          initialDelaySeconds: 5
          periodSeconds: 15
          timeoutSeconds: 5
        readinessProbe:
          httpGet:
            path: /readiness
            port: 81
            scheme: HTTP
          initialDelaySeconds: 5
          timeoutSeconds: 1
deployments/frontend.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: frontend
        track: stable
    spec:
      containers:
      - name: nginx
        image: "nginx:1.9.14"
        lifecycle:
          preStop:
            exec:
              command: ["/usr/sbin/nginx","-s","quit"]
        volumeMounts:
        - name: "nginx-frontend-conf"
          mountPath: "/etc/nginx/conf.d"
        - name: "tls-certs"
          mountPath: "/etc/tls"
      volumes:
      - name: "tls-certs"
        secret:
          secretName: "tls-certs"
      - name: "nginx-frontend-conf"
        configMap:
          name: "nginx-frontend-conf"
          items:
          - key: "frontend.conf"
            path: "frontend.conf"
deployments/nodehello.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: nodehello
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: nodehello
        track: stable
    spec:
      containers:
      - name: nodehello
        image: "thezebra/nodehello:0.0.2"
        ports:
        - name: http
          containerPort: 80
        resources:
          limits:
            cpu: 0.2
            memory: "10Mi"
services/hello.yaml
kind: Service
apiVersion: v1
metadata:
  name: "hello"
spec:
  selector:
    app: "hello"
  ports:
  - protocol: "TCP"
    port: 80
    targetPort: 80
services/auth.yaml
kind: Service
apiVersion: v1
metadata:
  name: "auth"
spec:
  selector:
    app: "auth"
  ports:
  - protocol: "TCP"
    port: 80
    targetPort: 80
services/frontend.yaml
kind: Service
apiVersion: v1
metadata:
  name: "frontend"
spec:
  selector:
    app: "frontend"
  ports:
  - protocol: "TCP"
    port: 443
    targetPort: 443
  type: LoadBalancer
services/nodehello.yaml
kind: Service
apiVersion: v1
metadata:
  name: "nodehello"
spec:
  selector:
    app: "nodehello"
  ports:
  - protocol: "TCP"
    port: 80
    targetPort: 80
This works perfectly :-)
$ curl -s http://frontend/
{"message":"Hello"}
$ curl -s http://frontend/login
authorization failed
$ curl -s http://frontend/nodehello
Hello World!
I suspect you might have updated the nginx-frontend-conf ConfigMap when you added /nodehello but have not restarted nginx. Pods won't pick up changed ConfigMaps automatically. Try:
kubectl delete pod -l app=frontend
Until versioned ConfigMaps happen there isn't a nicer solution.
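To confirm the recreated pod actually picked up the new ConfigMap contents, one way to check (pod name is a placeholder; the mount path matches the manifests above) is:
kubectl exec <frontend-pod-name> -- cat /etc/nginx/conf.d/frontend.conf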
