Log configuration for multiple Kubernetes pod instances of the same service - Azure

We have recently moved to a microservices-based architecture for our enterprise application, and we are using a Kubernetes cluster to host all our microservices.
We have not yet configured ELK to manage our logs; we just store application logs in Azure Blob Storage.
We are facing an issue when multiple pod instances are running for one service: all instances write to the same log file, which causes the instances to get stuck and leads to memory leaks.
I have configured a mount path in the Docker container, and my logback configuration has the following entry to write the logs:
<property name="DEV_HOME" value="/mnt/azure/<service-name>/logs" />
Is there a way to get the pod instance name in the log configuration, so that I can add one more level and have separate logs for different instances?
Or is there a better way to handle this scenario?
<property name="DEV_HOME" value="/mnt/azure/<service-name>/<instances>/logs" />

It should be possible to set the Pod information (including the name) as environment variables, as mentioned here. In the application, read the environment variable and log appropriately.
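For example, a minimal sketch (POD_NAME is just an assumed variable name; the path mirrors the one from your question): expose the pod name to the container via the Downward API, and reference it in logback, which falls back to OS environment variables when resolving ${...} properties.

env:
  - name: POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name

<property name="DEV_HOME" value="/mnt/azure/<service-name>/${POD_NAME}/logs" />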

As mentioned by @PraveenSripati, the Downward API is the solution. However, there are cases where one is forced to use third-party libraries and cannot use environment variables to override the location.
In the past we have configured our pods to use a combination of the Downward API and the Kubernetes command and args fields to run a custom script before starting the application.
Let's assume the log location is set to /opt/log. In that case we do something like the following. We bake this script into the container while building it; for instance, we call it entrypoint.sh:
#!/bin/bash
# Create a per-pod log directory and point the fixed path /opt/log at it
log_location="/opt/log-$MY_POD_NAME/$MY_POD_IP"
mkdir -p "$log_location"
rm -f /opt/log
ln -s "$log_location" /opt/log
exec <Application_start_command>
Kubernetes POD definition
apiVersion: v1
kind: Pod
metadata:
  name: dapi-envars-fieldref
spec:
  containers:
    - name: test-container
      image: k8s.gcr.io/busybox
      command: [ "sh", "-c" ]
      args:
        - /opt/entrypoint.sh
      env:
        - name: MY_NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: MY_POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: MY_POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: MY_POD_SERVICE_ACCOUNT
          valueFrom:
            fieldRef:
              fieldPath: spec.serviceAccountName
  restartPolicy: Never
This will create a symlink where the third-party library can write without changing any code there.
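To sanity-check this in a running application pod, something like the following should show /opt/log pointing at the per-pod directory (the pod name is a placeholder):

kubectl exec <your-app-pod> -- ls -l /opt/log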

Related

How to mount local volume hostPath with AKS?

I am trying to create a Kubernetes pod and mount a volume from a local hostPath. I am using an Azure Kubernetes cluster. The following is my YAML for creating the pod:
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
    - name: nginx
      image: nginx:1.7.9
      ports:
        - containerPort: 80
      volumeMounts:
        - mountPath: /opt/myfolder
          name: test-volume
  volumes:
    - name: test-volume
      hostPath:
        # directory location on host
        path: /Users/kkadam/minikube/myfolder
        # this field is optional
I have a few files under myfolder which I want to use inside the container. The files are present in the local volume but not inside the container.
What could be the issue?
You cannot mount a local path from your machine into a container running on AKS. You have to place the files on the specific node where the pod is scheduled.
If the pod and the files are on the same node, you can mount the files as a volume into the container and use them.
However, if your pod is scheduled to another node, you will not be able to access the files inside the container.
Also, if your node is restarted or deleted during auto-scaling, you might lose the data.
Judging by what you said in your comment and your config, especially the path /Users/kkadam/minikube/myfolder, which is typically a macOS path, it seems that you're trying to mount a local volume (probably from your Mac) into a pod deployed on AKS.
That's the problem.
In order to make it work, you need to put the files you're trying to mount on the node running your pod (which is in AKS).
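If you only need a few files inside the running container for testing (rather than a proper volume), a rough workaround, using the pod and paths from your YAML, is to copy them in with kubectl cp:

# copy the local folder from your machine into the running container
kubectl cp /Users/kkadam/minikube/myfolder test-pd:/opt/myfolder -c nginx

Keep in mind the copy is ephemeral: it disappears when the pod is recreated.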

azure kubernetes service - self signed cert on private registry

I have a tunnel created between my Azure subscription and my on-prem servers. On-prem we have an Artifactory server that houses all of our Docker images. For all internal servers we have a company-wide CA trust, and all certs are generated from it.
However, when I try to deploy something to AKS and reference this Docker registry, I get a cert error because the nodes themselves do not trust the "in house" self-signed cert.
Is there any way to get the root CA chain added to the nodes? Or a way to tell the Docker daemon on the AKS nodes that this is an insecure registry?
Not one hundred percent sure, but you can try to use your Docker config to create the secret for image pulls. The command looks like this:
cat ~/.docker/config.json | base64
Then create the secret like this:
apiVersion: v1
kind: Secret
metadata:
  name: registrypullsecret
data:
  .dockerconfigjson: <base-64-encoded-json-here>
type: kubernetes.io/dockerconfigjson
Use this secret in your deployment or pod as the value of imagePullSecrets. For more details, see Using a private Docker Registry with Kubernetes.
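For reference, a minimal sketch of how the secret is referenced from a pod spec (the image path is just a placeholder for an image in your Artifactory registry):

apiVersion: v1
kind: Pod
metadata:
  name: private-reg-pod
spec:
  containers:
    - name: app
      # placeholder image path on your private registry
      image: artifactory.example.com/myteam/myapp:latest
  imagePullSecrets:
    - name: registrypullsecret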
To begin with, I would recommend using curl to check the connection between your Azure cluster and the on-prem server.
Try both curl and curl -k and check whether they work (-k allows connections to SSL sites without valid certs; I assume plain curl won't work, which means you don't have the on-prem certs on the Azure cluster).
If plain curl fails but curl -k works, then you need to copy the certs from on-prem and add them to the Azure cluster nodes.
Links which should help you do that
https://docs.docker.com/ee/enable-client-certificate-authentication/
https://askubuntu.com/questions/73287/how-do-i-install-a-root-certificate
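On an Ubuntu-based node, the system trust store part of those links boils down to roughly this (the cert filename and registry hostname are placeholders):

# copy your in-house root CA onto each node and refresh the trust store
sudo cp my-root-ca.crt /usr/local/share/ca-certificates/my-root-ca.crt
sudo update-ca-certificates
# then re-test the connection, e.g.
curl https://artifactory.example.com/v2/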
And here is some information about doing that with the Docker daemon:
https://docs.docker.com/registry/insecure/
I hope it will help you. Let me know if you have any more questions.
It looks like you are having the same problem described here: https://github.com/kubernetes/kubernetes/issues/43924.
This solution should probably work for you:
As far as I remember this was a docker issue, not a kubernetes one.
Docker does not use linux's ca certs. Nobody knows why.
You have to install those certs manually (on every node that could
spawn those pods) so that docker can use them:
/etc/docker/certs.d/mydomain.com:1234/ca.crt
This is a highly annoying issue as you have to butcher your nodes
after bootstrapping to get those certs in there. And kubernetes spawns
nodes all the time. How this issue has not been solved yet is a
mystery to me. It's a complete showstopper IMO.
Then it's just a question of how to run this for every node. You could do that with a DaemonSet which runs a script from a ConfigMap, as described here: https://cloud.google.com/solutions/automatically-bootstrapping-gke-nodes-with-daemonsets. That article refers to a GitHub project https://github.com/GoogleCloudPlatform/solutions-gke-init-daemonsets-tutorial.
The magic is in the DaemonSet.yaml:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-initializer
  labels:
    app: default-init
spec:
  selector:
    matchLabels:
      app: default-init
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        name: node-initializer
        app: default-init
    spec:
      volumes:
        - name: root-mount
          hostPath:
            path: /
        - name: entrypoint
          configMap:
            name: entrypoint
            defaultMode: 0744
      initContainers:
        - image: ubuntu:18.04
          name: node-initializer
          command: ["/scripts/entrypoint.sh"]
          env:
            - name: ROOT_MOUNT_DIR
              value: /root
          securityContext:
            privileged: true
          volumeMounts:
            - name: root-mount
              mountPath: /root
            - name: entrypoint
              mountPath: /scripts
      containers:
        - image: "gcr.io/google-containers/pause:2.0"
          name: pause
You could modify the script that is in the ConfigMap to pull your cert and put it in the correct directory.
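As a rough sketch (the registry hostname is a placeholder, and the certificate body would be your own root CA), the script could write the cert into Docker's per-registry trust directory on the node through the root mount:

#!/usr/bin/env bash
set -euo pipefail
# Placeholder: use your Artifactory hostname (and port, if it is not 443)
REGISTRY="artifactory.example.com"
CERT_DIR="${ROOT_MOUNT_DIR}/etc/docker/certs.d/${REGISTRY}"
mkdir -p "${CERT_DIR}"
# Paste the real PEM content of your in-house root CA here
cat > "${CERT_DIR}/ca.crt" <<'EOF'
-----BEGIN CERTIFICATE-----
...your root CA certificate...
-----END CERTIFICATE-----
EOF
echo "Installed registry CA for ${REGISTRY} on this node"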

Kubernetes Mounting Hostpath to specific location within container

I'm trying to translate a bunch of docker-compose files into Kubernetes YAMLs. I have used kompose, which has gotten me part of the way, but I'm getting stuck on one particular part for multiple containers.
This is one of the containers. Notice that the Docker container mounts /u/data/... to /var/lib/mysql. This is actually necessary, as the mysql directory contains the database and configuration.
server1-backend-mysql:
  image: mysql
  container_name: server-backend-mysql
  restart: always
  volumes:
    - /u/data/server-backend-mysql:/var/lib/mysql
  networks:
    - eolnet
What is the correct way to make this happen in Kubernetes? Note that for k8s I will be mounting an NFS volume (this is only for testing purposes).
I did look into hostPath, but so far no luck.
When declaring a Pod, specify the volume at spec.volumes, and then the volume mount at spec.containers[*].volumeMounts:
apiVersion: v1
kind: Pod
metadata:
  name: server1-backend-mysql
spec:
  containers:
    - image: mysql
      name: mysql
      volumeMounts:
        - mountPath: /var/lib/mysql
          name: mysql-data
  volumes:
    - name: mysql-data
      hostPath:
        path: /u/data/...
        type: Directory
When declaring a Deployment or StatefulSet (which you should do instead of declaring a Pod), move the respective configurations to spec.template.spec.volumes and spec.template.spec.containers[*].volumeMounts.
For more information, have a look at the documentation.
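Since you mention you'll be backing this with NFS for testing, here is a rough sketch of the Deployment form with an NFS volume instead of hostPath (the server and export path are placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: server1-backend-mysql
spec:
  replicas: 1
  selector:
    matchLabels:
      app: server1-backend-mysql
  template:
    metadata:
      labels:
        app: server1-backend-mysql
    spec:
      containers:
        - image: mysql
          name: mysql
          volumeMounts:
            - mountPath: /var/lib/mysql
              name: mysql-data
      volumes:
        - name: mysql-data
          nfs:
            # placeholder NFS server and export
            server: nfs.example.com
            path: /exports/server-backend-mysql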
As a side note unrelated to your question: if you're planning to run MySQL from a NFS volume, keep in mind that running MySQL from NFS is possible, but not something that MySQL is really optimized for. Be sure to configure your MySQL server accordingly, and check if your environment permits you to use a networked block device (and not a network file system) like a Ceph RBD volume or similar.

How to upload a file to kubernetes cluster for my Apps to access it?

Let's say we have an application which accesses a file. This app is a JAR which is packaged into an image and pushed to a registry for Kubernetes to run. When we create the Pod, we also need to configure a volume in it. When we specify a volume we give a path, so how do we place the file in that volume from, let's say, our virtual machine?
Please help me understand this with an explanation. Also, should we create storage so that it is accessible from the Kubernetes cluster? Please explain the relevant topics as well.
Note: we are using the Azure CLI.
I think the best approach would be to create a ConfigMap with the data you want to use from your application. Then you just need to mount the ConfigMap as a volume in the Pods (explained here) that need the data.
You can easily create a ConfigMap from a file like
kubectl create configmap your-configmap-name --from-file=/some/path/to/file
And then mount it in your Pod
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
    - name: test-container
      image: k8s.gcr.io/busybox
      command: [ "/bin/sh", "-c", "ls /etc/config/" ]
      volumeMounts:
        - name: config-volume
          mountPath: /etc/config
  volumes:
    - name: config-volume
      configMap:
        # Provide the name of the ConfigMap containing the files you want
        # to add to the container
        name: your-configmap-name
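Once the pod has run, its output (the ls /etc/config/ command above) should list the file you put into the ConfigMap:

kubectl logs dapi-test-pod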

How to update Kubernetes API version in deployment script or using --runtime-config

I need to be able to use the batch/v2alpha1 and apps/v1alpha1 APIs on k8s. Currently, I'm running a cluster with 1.5.0-beta.1 installed. I would prefer to do this in the deployment script, but all I can find are the fields
"apiVersionDefault": "2016-03-30",
"apiVersionStorage": "2015-06-15",
And nowhere can I find anything about what dates to use to update those. There are also some instructions in the Kubernetes docs which explain how to use the --runtime-config flag on the kube-apiserver, so, following those, I SSH'd into the master, found the kube-apiserver manifest file, and edited it to look like this:
apiVersion: "v1"
kind: "Pod"
metadata:
name: "kube-apiserver"
namespace: "kube-system"
labels:
tier: control-plane
component: kube-apiserver
spec:
hostNetwork: true
containers:
- name: "kube-apiserver"
image: "gcr.io/google_containers/hyperkube-amd64:v1.5.0-beta.1"
command:
- "/hyperkube"
- "apiserver"
- "--admission-control=NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,ResourceQuota"
- "--address=0.0.0.0"
- "--allow-privileged"
- "--insecure-port=8080"
- "--secure-port=443"
- "--cloud-provider=azure"
- "--cloud-config=/etc/kubernetes/azure.json"
- "--service-cluster-ip-range=10.0.0.0/16"
- "--etcd-servers=http://127.0.0.1:4001"
- "--tls-cert-file=/etc/kubernetes/certs/apiserver.crt"
- "--tls-private-key-file=/etc/kubernetes/certs/apiserver.key"
- "--client-ca-file=/etc/kubernetes/certs/ca.crt"
- "--service-account-key-file=/etc/kubernetes/certs/apiserver.key"
- "--v=4"
- "--runtime-config=batch/v2alpha1,apps/v1alpha1"
volumeMounts:
- name: "etc-kubernetes"
mountPath: "/etc/kubernetes"
- name: "var-lib-kubelet"
mountPath: "/var/lib/kubelet"
volumes:
- name: "etc-kubernetes"
hostPath:
path: "/etc/kubernetes"
- name: "var-lib-kubelet"
hostPath:
path: "/var/lib/kubelet"
That pretty much nuked my cluster, so I'm at a complete loss now. I'm going to have to rebuild the cluster, so I'd prefer to add this in the deployment template, but really any help would be appreciated.
ACS-Engine clusters let you override most of the options you might want - see this document for the cluster definitions. I don't think a post-deployment script exists, because there are no common options you would want to change on the API server and other k8s components after a deployment other than K8s version upgrades. For that purpose there are scripts included in ACS-Engine, and there are other options for various cloud providers and flavors of Kubernetes (e.g. Tectonic has a mechanism for auto-upgrades).
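From memory (so please double-check the clusterdefinition docs for your acs-engine version; the field names here are an assumption), the cluster definition lets you pass extra API server flags along these lines:

"orchestratorProfile": {
  "orchestratorType": "Kubernetes",
  "kubernetesConfig": {
    "apiServerConfig": {
      "--runtime-config": "batch/v2alpha1,apps/v1alpha1"
    }
  }
}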
To manually override the values after the deployment of an ACS-Engine deployed K8s cluster, you can manually update the manifests here:
/etc/kubernetes/manifests/kube-apiserver.yaml
/etc/kubernetes/manifests/kube-controller-manager.yaml
/etc/kubernetes/manifests/kube-scheduler.yaml
And also update the values in the kubelet here (i.e. to update the version of kubernetes): /etc/default/kubelet
Of course you'll want to kubectl drain your nodes before making these changes, reboot the node, and once the node comes back online and is running properly kubectl uncordon the node.
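In practice that sequence looks roughly like this (the node name is a placeholder):

kubectl drain k8s-agent-12345-0 --ignore-daemonsets
# ...edit the manifests / /etc/default/kubelet on that node, then reboot it...
kubectl uncordon k8s-agent-12345-0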
Hard to say why your cluster was nuked without more information. In general, if you are making lots of changes to API versions and configurations, you are probably better off with a new cluster.
