I need to set variables inside my server.xml at the time of creating my pod. I tried the following, but it did not work.
server.xml
<Realm className="org.apache.catalina.realm.JDBCRealm" connectionURL="${db_url}" driverName="com.microsoft.sqlserver.jdbc.SQLServerDriver" roleNameCol="role" userCredCol="password" userNameCol="login" userRoleTable="userRole" userTable="v_login"/>
and my pod.yaml
apiVersion: v1
kind: Pod
metadata:
name: dbtest
spec:
containers:
- name: dbtest-container
image: xxx.azurecr.io/iafoxteste:latest
ports:
- containerPort: 8080
env:
- name: db_url
value: "jdbc:sqlserver://xxx.database.windows.net:1433;database=xxx;user=xxx#iafox;password=xxxx;encrypt=true;trustServerCertificate=true;hostNameInCertificate=*.database.windows.net;loginTimeout=30;"
Unless Java can do that natively, Kubernetes won't do it for you. You need an init script that reads the environment variables and replaces the tokens in your server.xml, or you need to make your app do that somehow.
Kubernetes can't do token replacement.
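For example, a minimal init-style wrapper could render a template before starting Tomcat. This is only a sketch: the template file name and the use of envsubst (from gettext) are assumptions, not part of the original setup.
#!/bin/sh
# Hypothetical entrypoint: substitute ${db_url} in a server.xml template with
# the value of the db_url environment variable, then start Tomcat.
# Assumes envsubst (gettext) is installed in the image and that server.xml was
# renamed to server.xml.template at image build time.
envsubst '${db_url}' < /usr/local/tomcat/conf/server.xml.template \
  > /usr/local/tomcat/conf/server.xml
exec catalina.sh run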
As was mentioned, Kubernetes doesn't do it for you. In order to pass that value to Tomcat you need to add db_url as a Java system property, e.g. -Ddb_url="jdbc:sqlserver://xxx.database.windows.net:1433;database=xxx;user=xxx#iafox;password=xxxx;encrypt=true;....". Then you need a starter shell script that gets this value from the environment variable and passes it to CATALINA_OPTS, as sketched below.
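A minimal sketch of such a script, assuming an image based on the official Tomcat image, where catalina.sh sources bin/setenv.sh on startup (this file is not part of the original setup):
#!/bin/sh
# setenv.sh: forward the container's db_url environment variable to Tomcat as
# a Java system property so that ${db_url} in server.xml resolves.
CATALINA_OPTS="$CATALINA_OPTS -Ddb_url=${db_url}"
export CATALINA_OPTS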
Check this Stack Overflow question: Java system properties and environment variables
I have AKV integrated with AKS using the CSI driver (documentation).
I can access them in the Pod by doing something like:
## show secrets held in secrets-store
kubectl exec busybox-secrets-store-inline -- ls /mnt/secrets-store/
## print a test secret 'ExampleSecret' held in secrets-store
kubectl exec busybox-secrets-store-inline -- cat /mnt/secrets-store/ExampleSecret
I have it working with my PostgreSQL deployment doing the following:
apiVersion: apps/v1
kind: Deployment
metadata:
name: postgres-deployment-prod
namespace: prod
spec:
replicas: 1
selector:
matchLabels:
component: postgres
template:
metadata:
labels:
component: postgres
aadpodidbinding: aks-akv-identity
spec:
containers:
- name: postgres
image: postgres:13-alpine
ports:
- containerPort: 5432
env:
- name: POSTGRES_DB_FILE
value: /mnt/secrets-store/PG-DATABASE
- name: POSTGRES_USER_FILE
value: /mnt/secrets-store/PG-USER
- name: POSTGRES_PASSWORD_FILE
value: /mnt/secrets-store/PG-PASSWORD
- name: POSTGRES_INITDB_ARGS
value: "-A md5"
- name: PGDATA
value: /var/postgresql/data
volumeMounts:
- name: postgres-storage-prod
mountPath: /var/postgresql
- name: secrets-store01-inline
mountPath: /mnt/secrets-store
readOnly: true
volumes:
- name: postgres-storage-prod
persistentVolumeClaim:
claimName: postgres-storage-prod
- name: file-storage-prod
persistentVolumeClaim:
claimName: file-storage-prod
- name: secrets-store01-inline
csi:
driver: secrets-store.csi.k8s.io
readOnly: true
volumeAttributes:
secretProviderClass: aks-akv-secret-provider
---
apiVersion: v1
kind: Service
metadata:
name: postgres-cluster-ip-service-prod
namespace: prod
spec:
type: ClusterIP
selector:
component: postgres
ports:
- port: 5432
targetPort: 5432
Which works fine.
Figured all I'd need to do is swap out stuff like the following:
- name: PGPASSWORD
valueFrom:
secretKeyRef:
name: app-prod-secrets
key: PGPASSWORD
For:
- name: POSTGRES_PASSWORD
value: /mnt/secrets-store/PG-PASSWORD
# or
- name: POSTGRES_PASSWORD_FILE
value: /mnt/secrets-store/PG-PASSWORD
And I'd be golden, but that does not turn out to be the case.
In the Pods the value is just read in as a plain string (the path), which leaves me confused about two things:
Why does this work for the PostgreSQL deployment but not my Django API, for example?
Is there a way to add them in env: without turning them into secrets and using secretKeyRef?
The CSI driver injects the secrets into the pod by placing them as files on the file system. There is one file per secret, where:
The filename is the name of the secret (or the alias specified in the secret provider class).
The content of the file is the value of the secret.
The CSI driver does not create environment variables from the secrets. The recommended way to expose secrets as environment variables is to let the CSI driver create a Kubernetes secret and then use the native secretKeyRef construct, as sketched below.
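A hedged sketch of that approach, reusing the aks-akv-secret-provider name from the deployment above. The synced secret name pg-secrets is illustrative, the apiVersion depends on the driver version, and the Kubernetes Secret only exists while some pod actually mounts the CSI volume:
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: aks-akv-secret-provider
spec:
  provider: azure
  secretObjects:               # sync selected mounted secrets into a k8s Secret
  - secretName: pg-secrets
    type: Opaque
    data:
    - objectName: PG-PASSWORD  # name/alias of the object in the provider
      key: PG-PASSWORD
  parameters:
    # keyvaultName, objects, tenantId, ... as in the existing provider class
The container can then consume it as a normal environment variable:
env:
- name: POSTGRES_PASSWORD
  valueFrom:
    secretKeyRef:
      name: pg-secrets
      key: PG-PASSWORD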
Why does this work for the PostgreSQL deployment but not my Django API, for example?
In your Django API app you set an environment variable POSTGRES_PASSWORD
to the value /mnt/secrets-store/PG-PASSWORD, i.e. you simply say that a certain variable should contain a certain value, nothing more. Thus the variable will contain the path, not the secret value itself.
The same is true for the Postgres deployment: it is just a path in an environment variable. The difference lies in how the Postgres deployment interprets the value. When an environment variable ending in _FILE is used, Postgres does not expect the environment variable itself to contain the secret, but rather a path to a file that does. From the docs of the Postgres image:
As an alternative to passing sensitive information via environment
variables, _FILE may be appended to some of the previously listed
environment variables, causing the initialization script to load the
values for those variables from files present in the container. In
particular, this can be used to load passwords from Docker secrets
stored in /run/secrets/<secret_name> files. For example:
$ docker run --name some-postgres -e POSTGRES_PASSWORD_FILE=/run/secrets/postgres-passwd -d postgres
Currently, this is only supported for POSTGRES_INITDB_ARGS,
POSTGRES_PASSWORD, POSTGRES_USER, and POSTGRES_DB.
Is there a way to add them in env: without turning them into secrets and using secretKeyRef?
No, not out of the box. What you could do is have an entrypoint script in your image that reads all the files in your secret folder and sets them as environment variables (the name of each variable being the filename and the value the file content) before it starts the main application. That way the application can access the secrets as environment variables. A sketch of such a script follows.
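A rough sketch of such an entrypoint, assuming the secrets are mounted at /mnt/secrets-store (hyphens in file names are mapped to underscores because they are not valid in shell variable names):
#!/bin/sh
# Hypothetical entrypoint: export every file under the secrets mount as an
# environment variable named after the file, with the file content as value.
for f in /mnt/secrets-store/*; do
  name=$(basename "$f" | tr '-' '_')
  export "$name=$(cat "$f")"
done
exec "$@"   # hand over to the real application command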
I have Kubernetes running on OVH without a problem. But I recently reinstalled the build server because of other issues and set everything up again, and now trying to apply files gives this horrible error. Did I miss something? And what does this error really mean?
+ kubectl apply -f k8s
unable to recognize "k8s/driver-cluster-ip-service.yaml": no matches for kind "Service" in version "v1"
unable to recognize "k8s/driver-deployment.yaml": no matches for kind "Deployment" in version "apps/v1"
unable to recognize "k8s/driver-mysql-cluster-ip-service.yaml": no matches for kind "Service" in version "v1"
unable to recognize "k8s/driver-mysql-deployment.yaml": no matches for kind "Deployment" in version "apps/v1"
unable to recognize "k8s/driver-mysql-persistent-volume-claim.yaml": no matches for kind "PersistentVolumeClaim" in version "v1"
unable to recognize "k8s/driver-phpmyadmin-cluster-ip-service.yaml": no matches for kind "Service" in version "v1"
unable to recognize "k8s/driver-phpmyadmin-deployment.yaml": no matches for kind "Deployment" in version "apps/v1"
I tried all the previous answers on SO but none worked out for me. I don't think that I really need it ("correct me if I am wrong on that"). I would really like to get some help with this.
I have installed kubectl and I have a config file that I use.
When I execute the kubectl get pods command I get the pods that were deployed before.
These are some of the YAML files:
k8s/driver-cluster-ip-service.yaml
apiVersion: v1
kind: Service
metadata:
name: driver-cluster-ip-service
spec:
type: ClusterIP
selector:
component: driver-service
ports:
- port: 3000
targetPort: 8080
k8s/driver-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: driver-deployment
spec:
replicas: 1
selector:
matchLabels:
component: driver-service
template:
metadata:
labels:
component: driver-service
spec:
containers:
- name: driver
image: repo.taxi.com/driver-service
imagePullPolicy: Always
ports:
- containerPort: 8080
imagePullSecrets:
- name: taxiregistry
Dockerfile
FROM maven:3.6.0-jdk-8-slim AS build
COPY . /home/app/
RUN rm /home/app/controllers/src/main/resources/application.properties
RUN mv /home/app/controllers/src/main/resources/application-kubernetes.properties /home/app/controllers/src/main/resources/application.properties
RUN mvn -f /home/app/pom.xml clean package
FROM openjdk:8-jre-slim
COPY --from=build /home/app/controllers/target/controllers-1.0.jar /usr/local/lib/driver-1.0.0.jar
EXPOSE 8080
ENTRYPOINT ["java","-jar","/usr/local/lib/driver-1.0.0.jar"]
(Output of the kubectl get pods and kubectl api-versions commands not shown.)
Solution found
I had to place the binary file in a .kube folder, which should be placed in the root directory.
In my case I had to manually create the .kube folder in the root directory first.
After that I set my environment variable to that folder and placed my config file with my settings in there as well.
Then I had to share the folder with the jenkins user and apply rights to the jenkins group.
Jenkins was not up to date, so I had to restart the Jenkins server.
And it worked like a charm!
Keep in mind to restart the Jenkins server so that the changes you make take effect in Jenkins.
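A hedged sketch of what that setup might look like on the build server (paths, file locations and service names are illustrative, not taken from the original environment):
# create the .kube folder and place the kubeconfig in it
mkdir -p /root/.kube
cp /path/to/your/kubeconfig /root/.kube/config
export KUBECONFIG=/root/.kube/config     # or set this in the Jenkins environment
# share the folder with the jenkins user/group
chgrp -R jenkins /root/.kube
chmod -R g+r /root/.kube
# restart Jenkins so the changes take effect
systemctl restart jenkins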
We have recently moved to a microservices-based architecture for our enterprise application. We are using a Kubernetes cluster to host all our microservices.
We have not yet configured ELK to manage our logs; we just store application logs in Azure Blob Storage.
We are facing an issue when multiple pod instances are running for one service, since all instances write to the same log file. Because of this, instances are getting stuck and we are seeing memory leaks.
I have configured the mount path in the Docker container, and my logback configuration has the entry below to write the logs.
<property name="DEV_HOME" value="/mnt/azure/<service-name>/logs" />
Is there a way to get the pod instance name in the log configuration, so that I can add one more level down and have separate logs for different instances, like below?
Or is there a better way to handle this scenario?
<property name="DEV_HOME" value="/mnt/azure/<service-name>/<instances>/logs" />
It should be possible to set the Pod information (including the name) as environment variables, as mentioned here. In the application, read the environment variable and log appropriately.
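For instance, a hedged sketch (POD_NAME is an illustrative variable name; logback falls back to OS environment variables during ${...} substitution):
# in the container spec: expose the pod name via the downward API
env:
- name: POD_NAME
  valueFrom:
    fieldRef:
      fieldPath: metadata.name
and in the logback configuration:
<property name="DEV_HOME" value="/mnt/azure/<service-name>/${POD_NAME}/logs" />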
As mentioned by @PraveenSripati, the downward API is the solution. However, there are cases where one is forced to use third-party libraries and cannot use environment variables to override the location.
In the past we have configured our pods to use a combination of the downward API and the Kubernetes command and args to run a custom script before starting the application.
Let's assume the application's log location is fixed to /opt/log. In that case we do something like the following. We keep this script in the container while building it; for instance we call it entrypoint.sh:
#!/bin/bash
# Create a per-pod log directory and point the application's fixed
# log path (/opt/log) at it via a symlink.
log_location="/opt/log-$MY_POD_NAME/$MY_POD_IP"
mkdir -p "$log_location"
rm -rf /opt/log
ln -s "$log_location" /opt/log
exec <Application_start_command>
Kubernetes POD definition
apiVersion: v1
kind: Pod
metadata:
name: dapi-envars-fieldref
spec:
containers:
- name: test-container
image: k8s.gcr.io/busybox
command: [ "sh", "-c"]
args:
- /opt/entrypoint.sh
env:
- name: MY_NODE_NAME
valueFrom:
fieldRef:
fieldPath: spec.nodeName
- name: MY_POD_NAME
valueFrom:
fieldRef:
fieldPath: metadata.name
- name: MY_POD_NAMESPACE
valueFrom:
fieldRef:
fieldPath: metadata.namespace
- name: MY_POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
- name: MY_POD_SERVICE_ACCOUNT
valueFrom:
fieldRef:
fieldPath: spec.serviceAccountName
restartPolicy: Never
This creates a symlink so the third-party library can keep writing to its fixed path without any code changes, while the logs actually land in a per-pod directory.
Let's say we have an application which accesses a file. This app is a jar which is packaged into an image and pushed to a registry for Kubernetes to run. But when we create the Pod, we also need to configure a volume in it. When we specify a volume we give a path; how do we place the file in that volume from, let's say, our virtual machine?
Please help me understand this with an explanation. Also, should we create storage so that it is accessible from the Kubernetes cluster? Please explain the relevant topics as well.
Note: we are using the Azure CLI.
I think the best approach would be to create a ConfigMap with the data you want to use from your application. Then you just need to mount the ConfigMap as a volume in the Pods (explained here) that need the data.
You can easily create a ConfigMap from a file, for example:
kubectl create configmap your-configmap-name --from-file=/some/path/to/file
And then mount it in your Pod
apiVersion: v1
kind: Pod
metadata:
name: dapi-test-pod
spec:
containers:
- name: test-container
image: k8s.gcr.io/busybox
command: [ "/bin/sh", "-c", "ls /etc/config/" ]
volumeMounts:
- name: config-volume
mountPath: /etc/config
volumes:
- name: config-volume
configMap:
# Provide the name of the ConfigMap containing the files you want
# to add to the container
name: special-config
I need to be able to use batch/v2alpha1 and apps/v1alpha1 on k8s. Currently I'm running a cluster with 1.5.0-beta.1 installed. I would prefer to do this in the deployment script, but all I can find are the fields
"apiVersionDefault": "2016-03-30",
"apiVersionStorage": "2015-06-15",
And nowhere can I find anything about what dates to use to update those. There are also some instructions in the Kubernetes docs which explain how to use the --runtime-config flag on the kube-apiserver, so following those I ssh'd into the master, found the kube-apiserver manifest file and edited it to look like this:
apiVersion: "v1"
kind: "Pod"
metadata:
name: "kube-apiserver"
namespace: "kube-system"
labels:
tier: control-plane
component: kube-apiserver
spec:
hostNetwork: true
containers:
- name: "kube-apiserver"
image: "gcr.io/google_containers/hyperkube-amd64:v1.5.0-beta.1"
command:
- "/hyperkube"
- "apiserver"
- "--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota"
- "--address=0.0.0.0"
- "--allow-privileged"
- "--insecure-port=8080"
- "--secure-port=443"
- "--cloud-provider=azure"
- "--cloud-config=/etc/kubernetes/azure.json"
- "--service-cluster-ip-range=10.0.0.0/16"
- "--etcd-servers=http://127.0.0.1:4001"
- "--tls-cert-file=/etc/kubernetes/certs/apiserver.crt"
- "--tls-private-key-file=/etc/kubernetes/certs/apiserver.key"
- "--client-ca-file=/etc/kubernetes/certs/ca.crt"
- "--service-account-key-file=/etc/kubernetes/certs/apiserver.key"
- "--v=4"
- "--runtime-config=batch/v2alpha1,apps/v1alpha1"
volumeMounts:
- name: "etc-kubernetes"
mountPath: "/etc/kubernetes"
- name: "var-lib-kubelet"
mountPath: "/var/lib/kubelet"
volumes:
- name: "etc-kubernetes"
hostPath:
path: "/etc/kubernetes"
- name: "var-lib-kubelet"
hostPath:
path: "/var/lib/kubelet"
That pretty much nuked my cluster, so I'm at a complete loss now. I'm going to have to rebuild the cluster, so I'd prefer to add this in the deployment template, but really any help would be appreciated.
ACS-Engine clusters allow you to override almost any option you desire - see this document for the cluster definitions. I don't think a post-deployment script exists, because other than Kubernetes version upgrades there are no common options you would want to change on the apiserver and other k8s components after a deployment. For that purpose there are scripts included in ACS-Engine, and other options for various cloud providers and flavors of Kubernetes (e.g. Tectonic has a mechanism for auto-upgrades).
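For reference, later ACS-Engine/AKS-Engine cluster definitions expose API server flags directly in the apimodel; a hedged sketch (whether apiServerConfig is available depends on the ACS-Engine version you use):
{
  "properties": {
    "orchestratorProfile": {
      "orchestratorType": "Kubernetes",
      "kubernetesConfig": {
        "apiServerConfig": {
          "--runtime-config": "batch/v2alpha1=true,apps/v1alpha1=true"
        }
      }
    }
  }
}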
To override the values after the deployment of an ACS-Engine-deployed K8s cluster, you can manually update the manifests here:
/etc/kubernetes/manifests/kube-apiserver.yaml
/etc/kubernetes/manifests/kube-controller-manager.yaml
/etc/kubernetes/manifests/kube-scheduler.yaml
And also update the values in the kubelet here (e.g. to update the version of Kubernetes): /etc/default/kubelet
Of course you'll want to kubectl drain your nodes before making these changes, reboot the node, and once the node comes back online and is running properly, kubectl uncordon it.
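A rough outline of that sequence (node names and flags are illustrative):
# one node at a time
kubectl drain <node-name> --ignore-daemonsets
# edit /etc/kubernetes/manifests/*.yaml and/or /etc/default/kubelet on that node, then reboot it
kubectl uncordon <node-name>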
It's hard to say why your cluster was nuked without more information. In general, if you are making lots of changes to API versions and configurations, you probably want a new cluster.