This is my first time using YAML. I want to create a container group in Azure using a YAML file:
apiVersion: 2019-12-01
location: eastus
name: myContainerGroup
properties:
  containers:
  - name: aci-tutorial-app
    properties:
      image: mcr.microsoft.com/azuredocs/aci-helloworld:latest
      resources:
        requests:
          cpu: 1
          memoryInGb: 1.5
      ports:
      - port: 80
      - port: 8080
  - name: aci-tutorial-sidecar
    properties:
      image: mcr.microsoft.com/azuredocs/aci-tutorial-sidecar
      resources:
        requests:
          cpu: 1
          memoryInGb: 1.5
  osType: Linux
  ipAddress:
    type: Public
    ports:
    - protocol: tcp
      port: 80
    - protocol: tcp
      port: 8080
tags: {exampleTag: tutorial}
type: Microsoft.ContainerInstance/containerGroups
But I want that, if var=3, the YAML creates 3 of these ACI container blocks:
- name: aci-tutorial-sidecar
  properties:
    image: mcr.microsoft.com/azuredocs/aci-tutorial-sidecar
    resources:
      requests:
        cpu: 1
        memoryInGb: 1.5
Thanks.
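Plain YAML has no loop construct, so a count like var=3 cannot be expressed in the file itself; the usual approach is to generate the YAML with a script or template engine and pass the result to az container create --file. A minimal sketch in Node (the file name, group name, and container-name suffix are illustrative, not from the original post):

```javascript
// generate-aci.js -- sketch: emit an ACI container-group YAML with N
// sidecar container blocks, N taken from the command line (default 3).
const n = parseInt(process.argv[2] || "3", 10);

// One sidecar block, indented to sit under properties.containers.
// Container names must be unique within a group, hence the suffix.
const sidecar = (i) => [
  "  - name: aci-tutorial-sidecar-" + i,
  "    properties:",
  "      image: mcr.microsoft.com/azuredocs/aci-tutorial-sidecar",
  "      resources:",
  "        requests:",
  "          cpu: 1",
  "          memoryInGb: 1.5",
  "",
].join("\n");

const header = [
  "apiVersion: 2019-12-01",
  "location: eastus",
  "name: myContainerGroup",
  "properties:",
  "  containers:",
  "",
].join("\n");

const footer = [
  "  osType: Linux",
  "type: Microsoft.ContainerInstance/containerGroups",
  "",
].join("\n");

let yaml = header;
for (let i = 1; i <= n; i++) yaml += sidecar(i);
yaml += footer;

console.log(yaml);
```

Then, for example: node generate-aci.js 3 > group.yaml followed by az container create --resource-group myRG --file group.yaml (resource-group name is a placeholder).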
I'm trying to deploy a NodeRED pod on my cluster, and have created a service and an ingress for it so it can be reached under the same domain as the rest of my cluster. However, when I try to access it via host-name.com/nodered I receive Cannot GET /nodered.
Below are the templates used and the kubectl describe output for all of the involved components.
apiVersion: v1
kind: Service
metadata:
  name: nodered-app-service
  namespace: {{ kubernetes_namespace_name }}
spec:
  ports:
  - port: 1880
    targetPort: 1880
  selector:
    app: nodered-service-pod
I have also tried with port: 80 for the service, to no avail.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nodered-service-deployment
  namespace: {{ kubernetes_namespace_name }}
  labels:
    app: nodered-service-deployment
    name: nodered-service-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nodered-service-pod
  template:
    metadata:
      labels:
        app: nodered-service-pod
        target: gateway
        buildVersion: "{{ kubernetes_build_number }}"
    spec:
      terminationGracePeriodSeconds: 10
      serviceAccountName: nodered-service-account
      automountServiceAccountToken: false
      securityContext:
        runAsUser: 1000
        runAsGroup: 1000
        fsGroup: 1000
      containers:
      - name: nodered-service-statefulset
        image: nodered/node-red:{{ nodered_service_version }}
        imagePullPolicy: {{ kubernetes_image_pull_policy }}
        readinessProbe:
          httpGet:
            path: /
            port: 1880
          initialDelaySeconds: 30
          timeoutSeconds: 1
          periodSeconds: 10
          failureThreshold: 3
        livenessProbe:
          httpGet:
            path: /
            port: 1880
          initialDelaySeconds: 30
          timeoutSeconds: 1
          periodSeconds: 10
          failureThreshold: 3
        securityContext:
          allowPrivilegeEscalation: false
        resources:
          limits:
            memory: "2048M"
            cpu: "1000m"
          requests:
            memory: "500M"
            cpu: "100m"
        ports:
        - containerPort: 1880
          name: port-name
        envFrom:
        - configMapRef:
            name: nodered-service-configmap
        env:
        - name: BUILD_TIME
          value: "{{ kubernetes_build_time }}"
The target: gateway label refers to the ingress controller.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nodered-ingress
  namespace: {{ kubernetes_namespace_name }}
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - host: host-name.com
    http:
      paths:
      - path: /nodered(/|$)(.*)
        backend:
          serviceName: nodered-app-service
          servicePort: 1880
The following is what my kubectl describe output shows:
Name:              nodered-app-service
Namespace:         nodered
Labels:            <none>
Annotations:       <none>
Selector:          app=nodered-service-pod
Type:              ClusterIP
IP:                55.3.145.249
Port:              <unset>  1880/TCP
TargetPort:        port-name/TCP
Endpoints:         10.7.0.79:1880
Session Affinity:  None
Events:            <none>
Name:         nodered-service-statefulset-6c678b7774-clx48
Namespace:    nodered
Priority:     0
Node:         aks-default-40441371-vmss000007/10.7.0.66
Start Time:   Thu, 26 Aug 2021 14:23:33 +0200
Labels:       app=nodered-service-pod
              buildVersion=latest
              pod-template-hash=6c678b7774
              target=gateway
Annotations:  <none>
Status:       Running
IP:           10.7.0.79
IPs:
  IP:  10.7.0.79
Controlled By:  ReplicaSet/nodered-service-statefulset-6c678b7774
Containers:
  nodered-service-statefulset:
    Container ID:   docker://a6f8c9d010feaee352bf219f85205222fa7070c72440c885b9cd52215c4c1042
    Image:          nodered/node-red:latest-12
    Image ID:       docker-pullable://nodered/node-red@sha256:f02ccb26aaca2b3ee9c8a452d9516c9546509690523627a33909af9cf1e93d1e
    Port:           1880/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Thu, 26 Aug 2021 14:23:36 +0200
    Ready:          True
    Restart Count:  0
    Limits:
      cpu:     1
      memory:  2048M
    Requests:
      cpu:     100m
      memory:  500M
    Liveness:   http-get http://:1880/ delay=30s timeout=1s period=10s #success=1 #failure=3
    Readiness:  http-get http://:1880/ delay=30s timeout=1s period=10s #success=1 #failure=3
    Environment Variables from:
      nodered-service-configmap  ConfigMap  Optional: false
    Environment:
      BUILD_TIME:  2021-08-26T12:23:06.219818+0000
    Mounts:        <none>
Conditions:
  Type             Status
  Initialized      True
  Ready            True
  ContainersReady  True
  PodScheduled     True
Volumes:          <none>
QoS Class:        Burstable
Node-Selectors:   <none>
Tolerations:      node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                  node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:           <none>
PS C:\Users\hid5tim> kubectl describe ingress -n nodered
Name:             nodered-ingress
Namespace:        nodered
Address:          10.7.31.254
Default backend:  default-http-backend:80 (<error: endpoints "default-http-backend" not found>)
Rules:
  Host           Path               Backends
  ----           ----               --------
  host-name.com
                 /nodered(/|$)(.*)  nodered-app-service:1880 (10.7.0.79:1880)
Annotations:     kubernetes.io/ingress.class: nginx
                 nginx.ingress.kubernetes.io/ssl-redirect: false
Events:          <none>
The logs from the ingress controller are below. I've been on this issue for the last 24 hours or so and it's tearing me apart; the setup looks identical to other deployments I have that are functional. Could this be something wrong with the NodeRED image? I have checked, and it does expose 1880.
194.xx.xxx.x - [194.xx.xxx.x] - - [26/Aug/2021:10:40:12 +0000] "GET /nodered HTTP/1.1" 404 146 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/92.0.4515.159 Safari/537.36" 871 0.008 [nodered-nodered-app-service-80] 10.7.0.68:1880 146 0.008 404
As the comment by Andrew points out, I was using the rewrite annotation wrong; once I removed the (/|$)(.*) and specified the path type as Prefix, it worked.
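For reference, a version of the Ingress matching that fix might look roughly like this (sketched against the networking.k8s.io/v1 API, since extensions/v1beta1 is deprecated; names and namespace are carried over from the manifests above, and it assumes Node-RED serves under /nodered since nothing is rewritten anymore):

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nodered-ingress
  namespace: nodered
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  ingressClassName: nginx
  rules:
  - host: host-name.com
    http:
      paths:
      - path: /nodered
        pathType: Prefix
        backend:
          service:
            name: nodered-app-service
            port:
              number: 1880
```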
I'm using Ingress Nginx Controller in my Kubernetes Cluster hosted on Google Kubernetes Engine. The application is a Node.js app.
When I integrated my app with Rollbar (logger service) I started to notice repetitive errors every ~15 seconds (154K times in one week).
Error: Cannot GET /
I think the reason is that my Node.js application uses the /v1 prefix in its routes, i.e. the / route doesn't exist.
PS: Rollbar is linked in Develop (local), Testing (Heroku) and Production (GKE) environments and the error only occurs in production.
My ingress file:
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: ingress-app
spec:
  rules:
  - host: api.company.com
    http:
      paths:
      - path: /v1
        backend:
          serviceName: company-prod-v1-service
          servicePort: 3000
The Ingress documentation says something about the / endpoint, but I don't understand it very well.
I need to remove this error. Can any jedi masters help me fix this error?
Thanks in advance
[UPDATE]
Deployment File
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-deploy-v1
  labels:
    app: app-v1
    version: 1.10.0-2
spec:
  selector:
    matchLabels:
      app: app-v1
  replicas: 1
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  minReadySeconds: 30
  progressDeadlineSeconds: 60
  template:
    metadata:
      labels:
        app: app-v1
        version: 1.10.0-2
    spec:
      serviceAccountName: gke-account
      containers:
      - name: app-container
        image: registry.gitlab.com/company/app-back:1.10.0
        lifecycle:
          postStart:
            exec:
              command:
              - "/bin/sh"
              - "-c"
              - >
                if [ -f "$SECRETS_FOLDER/$APPLE_NOTIFICATION_KEY" ]; then
                  cp $SECRETS_FOLDER/$APPLE_NOTIFICATION_KEY $APP_FOLDER;
                fi;
      - name: cloud-sql-proxy
        image: gcr.io/cloudsql-docker/gce-proxy:1.17
        command:
        - "/cloud_sql_proxy"
        - "-instances=thermal-petal-283313:us-east1:app-instance=tcp:5432"
        securityContext:
          runAsNonRoot: true
        lifecycle:
          preStop:
            exec:
              command: ["/bin/sleep", "300"]
      imagePullSecrets:
      - name: registry-credentials
Ingress controller pod file (auto-generated by the installation command):
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2021-06-16T03:08:53Z"
  generateName: ingress-nginx-controller
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/instance: ingress-nginx
    app.kubernetes.io/name: ingress-nginx
  name: ingress-nginx-controller
  namespace: ingress-nginx
  ownerReferences:
  - apiVersion: apps/v1
    blockOwnerDeletion: true
    controller: true
    kind: ReplicaSet
    name: ingress-nginx-controller
  resourceVersion: "112544950"
  selfLink: /api/v1/namespaces/ingress-nginx/pods/ingress-nginx-controller
spec:
  containers:
  - image: k8s.gcr.io/ingress-nginx/controller:v0.46.0
    imagePullPolicy: IfNotPresent
    lifecycle:
      preStop:
        exec:
          command:
          - /wait-shutdown
    livenessProbe:
      failureThreshold: 5
      httpGet:
        path: /healthz
        port: 10254
        scheme: HTTP
      initialDelaySeconds: 10
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 1
    name: controller
    ports:
    - containerPort: 80
      name: http
      protocol: TCP
    - containerPort: 443
      name: https
      protocol: TCP
    - containerPort: 8443
      name: webhook
      protocol: TCP
    readinessProbe:
      failureThreshold: 3
      httpGet:
        path: /healthz
        port: 10254
        scheme: HTTP
      initialDelaySeconds: 10
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 1
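If the repeating GET / really does come from the load balancer's or ingress controller's health checks, one low-effort fix is to answer / (or a dedicated health path) with a 200 in the app instead of letting it fall through to the 404 handler. A sketch using only Node's built-in request/response shape (the /healthz path and the wiring are illustrative; in an Express app this would just be an extra app.get('/') route):

```javascript
// Answer health probes on "/" with 200 so they stop surfacing in Rollbar
// as "Cannot GET /"; every other request is handed to the real /v1 routes.
function handle(req, res, next) {
  if (req.url === "/" || req.url === "/healthz") {
    res.writeHead(200, { "Content-Type": "text/plain" });
    res.end("OK");
    return true;          // probe answered here
  }
  next(req, res);         // delegate to the application's router
  return false;
}

// Wiring sketch (not run here):
// const http = require("http");
// http.createServer((req, res) => handle(req, res, appRouter)).listen(3000);
```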
Because ACI doesn't support scaling, we deploy multiple container groups, each containing an Azure DevOps agent, a Selenium Grid hub, and a Selenium Grid node. To try and speed things up, I've tried deploying the container groups with an additional node, identical to the first except that it starts on port 6666 instead of port 5555. I can see the two nodes register with the grid without issue, but when I execute the same batch of tests with the additional node and without it, they take the exact same amount of time. How do I go about finding out what's going on here?
My ACI yaml:
apiVersion: 2018-10-01
location: australiaeast
properties:
  containers:
  - name: devops-agent
    properties:
      image: __AZUREDEVOPSAGENTIMAGE__
      resources:
        requests:
          cpu: 0.5
          memoryInGb: 1
      environmentVariables:
      - name: AZP_URL
        value: __AZUREDEVOPSPROJECTURL__
      - name: AZP_POOL
        value: __AGENTPOOLNAME__
      - name: AZP_TOKEN
        secureValue: __AZUREDEVOPSAGENTTOKEN__
      - name: SCREEN_WIDTH
        value: "1920"
      - name: SCREEN_HEIGHT
        value: "1080"
      volumeMounts:
      - name: downloads
        mountPath: /tmp/
  - name: selenium-hub
    properties:
      image: selenium/hub:3.141.59-xenon
      resources:
        requests:
          cpu: 1
          memoryInGb: 1
      ports:
      - port: 4444
  - name: chrome-node
    properties:
      image: selenium/node-chrome:3.141.59-xenon
      resources:
        requests:
          cpu: 1
          memoryInGb: 2
      environmentVariables:
      - name: HUB_HOST
        value: localhost
      - name: HUB_PORT
        value: 4444
      - name: SCREEN_WIDTH
        value: "1920"
      - name: SCREEN_HEIGHT
        value: "1080"
      volumeMounts:
      - name: devshm
        mountPath: /dev/shm
      - name: downloads
        mountPath: /home/seluser/downloads
  - name: chrome-node-2
    properties:
      image: selenium/node-chrome:3.141.59-xenon
      resources:
        requests:
          cpu: 1
          memoryInGb: 2
      environmentVariables:
      - name: HUB_HOST
        value: localhost
      - name: HUB_PORT
        value: 4444
      - name: SCREEN_WIDTH
        value: "1920"
      - name: SCREEN_HEIGHT
        value: "1080"
      - name: SE_OPTS
        value: "-port 6666"
      volumeMounts:
      - name: devshm
        mountPath: /dev/shm
      - name: downloads
        mountPath: /home/seluser/downloads
  osType: Linux
  diagnostics:
    logAnalytics:
      workspaceId: __LOGANALYTICSWORKSPACEID__
      workspaceKey: __LOGANALYTICSPRIMARYKEY__
  volumes:
  - name: devshm
    emptyDir: {}
  - name: downloads
    emptyDir: {}
  ipAddress:
    type: Public
    ports:
    - protocol: tcp
      port: '4444'
  #==================== remove this section if not pulling images from private image registries ===============
  imageRegistryCredentials:
  - server: __IMAGEREGISTRYLOGINSERVER__
    username: __IMAGEREGISTRYUSERNAME__
    password: __IMAGEREGISTRYPASSWORD__
  #========================================================================================================================
tags: null
type: Microsoft.ContainerInstance/containerGroups
When I run my tests locally against a Docker Selenium grid, either from Visual Studio or via dotnet vstest, my tests run in parallel across all available nodes and complete in half the time.
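One place to start is the hub's status endpoint: on Selenium Grid 3, a GET to http://<hub>:4444/grid/api/hub returns JSON that includes a slotCounts object. If free never drops below total - 1 while a run is in flight, the grid has spare capacity and it is the test runner dispatching sessions one at a time; the same parallelism settings that work against the local Docker grid need to apply on the agent. A small helper for reading that payload (the sample object in the test is hypothetical):

```javascript
// Summarise the "slotCounts" object from a Selenium Grid 3 hub status
// response (GET /grid/api/hub) to see how many browser slots are in use.
function slotSummary(hubStatus) {
  const { free, total } = hubStatus.slotCounts;
  return { free, total, busy: total - free };
}
```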
I have a fully private AKS cluster set up on a private VNET, which I access through an Azure Bastion to run kubectl commands. I have also set up a DevOps pipeline that uses a self-hosted agent to run commands on the private cluster. All my pods and ingresses seem to be running fine. However, when I try to access my ingress using a hostname (by mapping the public IP), I get a 404 Not Found. When verifying against my public cluster setup, I don't see any issues. Can someone please shed some light on why I cannot access my pod, which appears to be running fine?
Also, it seems I cannot access the external IP of the ingress even from the virtual machine that is on the virtual network, though I can run kubectl commands and access the Kubernetes dashboard.
---
apiVersion: v1
kind: Service
metadata:
  namespace: app-auth
  labels:
    environment: staging
  name: app-auth-staging # The name of the app
spec:
  type: ClusterIP
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: app-auth-staging
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-auth-staging
  namespace: app-auth
  labels:
    app: app-auth-staging
    environment: staging # The environment being used
    app-role: api # The application type
    tier: backend # The tier that this app represents
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-auth-staging
  template:
    metadata:
      labels:
        app: app-auth-staging
        environment: staging
        app-role: api
        tier: backend
      annotations:
        build: _{Tag}_
    spec:
      containers:
      - name: auth
        image: auth.azurecr.io/auth:_{Tag}_ # Note: Do not modify this field.
        imagePullPolicy: Always
        env:
        - name: ConnectionStrings__ZigzyAuth # Note: The appsettings value being replaced
          valueFrom:
            secretKeyRef:
              name: connectionstrings
              key: _{ConnectionString}_ # Note: This is an environmental variable, it is replaced accordingly in DevOps
        ports:
        - containerPort: 80
        readinessProbe:
          tcpSocket:
            port: 80
          initialDelaySeconds: 5
          periodSeconds: 10
        livenessProbe:
          tcpSocket:
            port: 80
          initialDelaySeconds: 15
          periodSeconds: 20
        volumeMounts:
        - name: secrets-store-inline
          mountPath: "/mnt/secrets-store"
          readOnly: true
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: agentpool
                operator: In
                values:
                - general
      volumes:
      - name: secrets-store-inline
        csi:
          driver: secrets-store.csi.k8s.io
          readOnly: true
          volumeAttributes:
            secretProviderClass: "aks-provider"
          nodePublishSecretRef:
            name: aks-prod-credstore
      imagePullSecrets:
      - name: aks-prod-acrps
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app-auth-staging-ingress-main # The name of the ingress, ex: app-auth-ingress-main
  namespace: app-auth
  labels:
    environment: staging
  annotations:
    kubernetes.io/ingress.class: nginx
    # nginx.ingress.kubernetes.io/enable-cors: "true"
    # nginx.ingress.kubernetes.io/cors-allow-methods: "GET, POST, OPTIONS"
    # nginx.ingress.kubernetes.io/cors-allow-origin: "https://admin.example.com"
    # nginx.ingress.kubernetes.io/cors-allow-credentials: "true"
spec:
  tls:
  - hosts:
    - stagingauth.app.com # Modify
    - frontend.21.72.207.63.nip.io
    - aksstagingauth.app.com
    secretName: zigzypfxtls
  rules:
  - host: stagingauth.app.com
    http:
      paths:
      - backend:
          serviceName: zigzy-auth-staging # Modify
          servicePort: 80
        path: /
  - host: frontend.21.72.207.63.nip.io
    http:
      paths:
      - backend:
          serviceName: app-auth-staging # Modify
          servicePort: 80
        path: /
  - host: aksstagingauth.app.com
    http:
      paths:
      - backend:
          serviceName: app-auth-staging # Modify
          servicePort: 80
        path: /
I'm currently trying to deploy a MongoDB pod on OpenShift and to access this pod from another Node.js application via mongoose. At first everything seems fine. I have created a route to the MongoDB pod, and when I open it in my browser I get
It looks like you are trying to access MongoDB over HTTP on the
native driver port.
So far so good. But when I try opening a connection to the database from another pod, it refuses the connection. I'm using the username and password provided by OpenShift and connect to
mongodb://[username]:[password]@[host]:[port]/[dbname]
unfortunately without luck. It seems that the database only accepts connections from localhost. However, I could not find out how to change that. It would be great if someone had an idea.
Here's the DeploymentConfig:
apiVersion: v1
kind: DeploymentConfig
metadata:
  annotations:
    template.alpha.openshift.io/wait-for-ready: "true"
  creationTimestamp: null
  generation: 1
  labels:
    app: mongodb-persistent
    template: mongodb-persistent-template
  name: mongodb
spec:
  replicas: 1
  selector:
    name: mongodb
  strategy:
    activeDeadlineSeconds: 21600
    recreateParams:
      timeoutSeconds: 600
    resources: {}
    type: Recreate
  template:
    metadata:
      creationTimestamp: null
      labels:
        name: mongodb
    spec:
      containers:
      - env:
        - name: MONGODB_USER
          valueFrom:
            secretKeyRef:
              key: database-user
              name: mongodb
        - name: MONGODB_PASSWORD
          valueFrom:
            secretKeyRef:
              key: database-password
              name: mongodb
        - name: MONGODB_ADMIN_PASSWORD
          valueFrom:
            secretKeyRef:
              key: database-admin-password
              name: mongodb
        - name: MONGODB_DATABASE
          valueFrom:
            secretKeyRef:
              key: database-name
              name: mongodb
        image: registry.access.redhat.com/rhscl/mongodb-32-rhel7@sha256:82c79f0e54d5a23f96671373510159e4fac478e2aeef4181e61f25ac38c1ae1f
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 3
          initialDelaySeconds: 30
          periodSeconds: 10
          successThreshold: 1
          tcpSocket:
            port: 27017
          timeoutSeconds: 1
        name: mongodb
        ports:
        - containerPort: 27017
          protocol: TCP
        readinessProbe:
          exec:
            command:
            - /bin/sh
            - -i
            - -c
            - mongo 127.0.1:27017/$MONGODB_DATABASE -u $MONGODB_USER -p $MONGODB_PASSWORD
              --eval="quit()"
          failureThreshold: 3
          initialDelaySeconds: 3
          periodSeconds: 10
          successThreshold: 1
          timeoutSeconds: 1
        resources:
          limits:
            memory: 512Mi
        securityContext:
          capabilities: {}
          privileged: false
        terminationMessagePath: /dev/termination-log
        volumeMounts:
        - mountPath: /var/lib/mongodb/data
          name: mongodb-data
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - name: mongodb-data
        persistentVolumeClaim:
          claimName: mongodb
  test: false
  triggers:
  - imageChangeParams:
      automatic: true
      containerNames:
      - mongodb
      from:
        kind: ImageStreamTag
        name: mongodb:3.2
        namespace: openshift
    type: ImageChange
  - type: ConfigChange
status:
  availableReplicas: 0
  latestVersion: 0
  observedGeneration: 0
  replicas: 0
  unavailableReplicas: 0
  updatedReplicas: 0
The Service Config
apiVersion: v1
kind: Service
metadata:
  annotations:
    template.openshift.io/expose-uri: mongodb://{.spec.clusterIP}:{.spec.ports[?(.name=="mongo")].port}
  creationTimestamp: null
  labels:
    app: mongodb-persistent
    template: mongodb-persistent-template
  name: mongodb
spec:
  ports:
  - name: mongo
    port: 27017
    protocol: TCP
    targetPort: 27017
  selector:
    name: mongodb
  sessionAffinity: None
  type: ClusterIP
status:
  loadBalancer: {}
and the pod
apiVersion: v1
kind: Pod
metadata:
  annotations:
    kubernetes.io/created-by: |
      {"kind":"SerializedReference","apiVersion":"v1","reference":{"kind":"ReplicationController","namespace":"some-name-space","name":"mongodb-3","uid":"xxxx-xxx-xxx-xxxxxx","apiVersion":"v1","resourceVersion":"244413593"}}
    kubernetes.io/limit-ranger: 'LimitRanger plugin set: cpu request for container
      mongodb'
    openshift.io/deployment-config.latest-version: "3"
    openshift.io/deployment-config.name: mongodb
    openshift.io/deployment.name: mongodb-3
    openshift.io/scc: nfs-scc
  creationTimestamp: null
  generateName: mongodb-3-
  labels:
    deployment: mongodb-3
    deploymentconfig: mongodb
    name: mongodb
  ownerReferences:
  - apiVersion: v1
    controller: true
    kind: ReplicationController
    name: mongodb-3
    uid: a694b832-5dd2-11e8-b2fc-40f2e91e2433
spec:
  containers:
  - env:
    - name: MONGODB_USER
      valueFrom:
        secretKeyRef:
          key: database-user
          name: mongodb
    - name: MONGODB_PASSWORD
      valueFrom:
        secretKeyRef:
          key: database-password
          name: mongodb
    - name: MONGODB_ADMIN_PASSWORD
      valueFrom:
        secretKeyRef:
          key: database-admin-password
          name: mongodb
    - name: MONGODB_DATABASE
      valueFrom:
        secretKeyRef:
          key: database-name
          name: mongodb
    image: registry.access.redhat.com/rhscl/mongodb-32-rhel7@sha256:82c79f0e54d5a23f96671373510159e4fac478e2aeef4181e61f25ac38c1ae1f
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 3
      initialDelaySeconds: 30
      periodSeconds: 10
      successThreshold: 1
      tcpSocket:
        port: 27017
      timeoutSeconds: 1
    name: mongodb
    ports:
    - containerPort: 27017
      protocol: TCP
    readinessProbe:
      exec:
        command:
        - /bin/sh
        - -i
        - -c
        - mongo 127.0.1:27017/$MONGODB_DATABASE -u $MONGODB_USER -p $MONGODB_PASSWORD
          --eval="quit()"
      failureThreshold: 3
      initialDelaySeconds: 3
      periodSeconds: 10
      successThreshold: 1
      timeoutSeconds: 1
    resources:
      limits:
        memory: 512Mi
      requests:
        cpu: 250m
        memory: 512Mi
    securityContext:
      capabilities:
        drop:
        - KILL
        - MKNOD
        - SETGID
        - SETUID
        - SYS_CHROOT
      privileged: false
      runAsUser: 1049930000
      seLinuxOptions:
        level: s0:c223,c212
    terminationMessagePath: /dev/termination-log
    volumeMounts:
    - mountPath: /var/lib/mongodb/data
      name: mongodb-data
    - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
      name: default-token-rfvr5
      readOnly: true
  dnsPolicy: ClusterFirst
  imagePullSecrets:
  - name: default-dockercfg-3mpps
  nodeName: thenode.name.net
  nodeSelector:
    region: primary
  restartPolicy: Always
  securityContext:
    fsGroup: 1049930000
    seLinuxOptions:
      level: s0:c223,c212
    supplementalGroups:
    - 5555
  serviceAccount: default
  serviceAccountName: default
  terminationGracePeriodSeconds: 30
  volumes:
  - name: mongodb-data
    persistentVolumeClaim:
      claimName: mongodb
  - name: default-token-rfvr5
    secret:
      defaultMode: 420
      secretName: default-token-rfvr5
status:
  phase: Pending
OK, that was a long search, but finally I was able to solve it. My first mistake was that routes are not suited for making a connection to a database, as they only use the HTTP protocol.
That left two use cases for me:
You're working on your local machine and want to test code that you later upload to OpenShift
You deploy that code to OpenShift (it has to be in the same project, but it is a different app than the database)
1. Local Machine
Since the route doesn't work, port forwarding is used. I had read about that before but didn't really understand what it meant (I thought the service itself was forwarding ports already).
When you are on your local machine, you do the following with the oc CLI:
oc port-forward <pod-name> <local-port>:<remote-port>
You'll get the info that the port is forwarded. The thing is that, in your app, you now connect to localhost (even on your local machine).
2. App running on OpenShift
After you upload your code to OpenShift (in my case: Add to project --> Node.js --> add your repo), localhost will no longer work.
What took a while for me to understand is that, as long as you are in the same project, a lot of information is available through environment variables.
So just check the name of your database's service (in my case mongodb) and you will find the host and port to use.
Summary
Here's a little code example that now works, both on the local machine and on OpenShift. I have already set up a persistent MongoDB on OpenShift called mongodb.
The code doesn't do much, but it will make a connection and tell you that it did, so you know it's working.
var mongoose = require('mongoose');

// Connect to MongoDB
var username = process.env.MONGO_DB_USERNAME || 'someUserName';
var password = process.env.MONGO_DB_PASSWORD || 'somePassword';
var host = process.env.MONGODB_SERVICE_HOST || '127.0.0.1';
var port = process.env.MONGODB_SERVICE_PORT || '27017';
var database = process.env.MONGO_DB_DATABASE || 'sampledb';

console.log('---DATABASE PARAMETERS---');
console.log('Host: ' + host);
console.log('Port: ' + port);
console.log('Username: ' + username);
console.log('Password: ' + password);
console.log('Database: ' + database);

var connectionString = 'mongodb://' + username + ':' + password + '@' + host + ':' + port + '/' + database;

console.log('---CONNECTING TO---');
console.log(connectionString);

mongoose.connect(connectionString);
mongoose.connection.once('open', (data) => {
  console.log('Connection has been made');
  console.log(data);
});