Mount PVC in cronjob and statefulset

I have two services that I would like to access a PersistentVolumeClaim.
One is a StatefulSet that reads from the volume (and serves content to end users); the other is a Kubernetes CronJob that periodically updates the data on that PVC.
Right now I'm running into the issue that my PVC is backed by a plain disk PV (not NFS, Ceph, or the like), so one service grabs the volume and the other can't start.
How can I make it so both of these services have access to the volume?
And is there a way to add a CronJob to my StatefulSet the same way I add more containers?

Have you checked the accessModes of your PV and PVC?
If you want more than one pod to be able to mount the volume, you'll need to use ReadOnlyMany or ReadWriteMany.
Persistent Volume Docs
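As a minimal sketch (the claim name and size below are placeholders, and the underlying storage backend must actually support shared access), the access mode goes on both the PV and the PVC:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: shared-content
spec:
  accessModes:
    - ReadWriteMany   # lets both the StatefulSet pods and the CronJob pod mount the volume
  resources:
    requests:
      storage: 8Gi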
As for your second question, no, there's no way to "add a CronJob to [a] StatefulSet". They are separate and distinct API objects.

MongoDB change stream - Duplicate records / Multiple listeners

My question is an extension of the earlier discussion here:
Mongo Change Streams running multiple times (kind of): Node app running multiple instances
In my case, the application is deployed on Kubernetes pods. There will be at least 3 pods and a maximum of 5 pods. The solution mentioned in the above link suggests using <this instance's id> in the $mod operator. Since the application is deployed to K8s pods, the pod names are dynamic. How can I achieve a similar solution for my scenario?
If you are running a stateless workload, I am not sure why you would want to fix the name of a Pod (Deployment).
Fixing Pod names is only possible with StatefulSets.
You should be using a StatefulSet instead of a Deployment or ReplicationController (RC); note that ReplicationControllers have largely been replaced by ReplicaSets.
StatefulSet Pods have a unique identity comprised of an ordinal. For any StatefulSet with N replicas, each Pod in the StatefulSet will be assigned an integer ordinal, from 0 up through N-1, that is unique across the Set.
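As a hedged sketch of how that identity could be used here (all names and the image are placeholders): a StatefulSet names its Pods listener-0, listener-1, ..., and the Downward API can inject that name into the container so the application can parse out the ordinal and use it as <this instance's id> in the $mod filter:
kind: StatefulSet
apiVersion: apps/v1
metadata:
  name: listener
spec:
  serviceName: listener
  replicas: 3
  selector:
    matchLabels:
      app: listener
  template:
    metadata:
      labels:
        app: listener
    spec:
      containers:
        - name: app
          image: my-change-stream-app:latest   # placeholder image
          env:
            - name: POD_NAME                   # e.g. listener-2, so the ordinal is 2
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name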

Azure kubernetes - Secure pod access to resources

I am coming from a Windows background and have limited knowledge of Linux.
As per the Microsoft documentation:
For your applications to run correctly, pods should run as a defined user or group and not as root. The securityContext for a pod or container lets you define settings such as runAsUser or fsGroup to assume the appropriate permissions. Only assign the required user or group permissions, and don't use the security context as a means to assume additional permissions. The runAsUser, privilege escalation, and other Linux capabilities settings are only available on Linux nodes and pods.
and they provided the sample below:
apiVersion: v1
kind: Pod
metadata:
  name: security-context-demo
spec:
  securityContext:
    fsGroup: 2000
  containers:
    - name: security-context-demo
      image: nginx:1.15.5
      securityContext:
        runAsUser: 1000
        allowPrivilegeEscalation: false
        capabilities:
          add: ["NET_ADMIN", "SYS_TIME"]
Based on the given description, I understand that the pod is not given root access and is only granted the limited NET_ADMIN and SYS_TIME capabilities for network and time administration.
What do runAsUser: 1000 and fsGroup: 2000 mean? How do I set up the fsGroup?
The runAsUser: 1000 field specifies that, for any containers in the Pod, all processes run with user ID 1000.
fsGroup: 2000 specifies that all processes of the container are also part of the supplementary group ID 2000. This group ID is associated with the emptyDir volume mounted in the pod and with any files created in that volume. Keep in mind that only certain volume types allow the kubelet to change the ownership of a volume to be owned by the pod. If the volume type allows it (as the emptyDir type does), the owning group ID will be the fsGroup.
One thing to remember with fsGroup is what happens when you mount a folder from your host. Because Docker mounts the host volume preserving the UID and GID from the host, permission issues in the mounted volume are possible: the user running the container may not have the appropriate privileges to write to it.
Possible solutions are running the container with the same UID and GID as the host, or changing the permissions of the host folder before mounting it into the container.
For more details on fsGroup, refer to the docs.
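As a hedged illustration of that behaviour (the pod, container, and volume names here are made up), a pod that combines runAsUser and fsGroup with an emptyDir volume will have every file created under the mount path owned by group 2000:
apiVersion: v1
kind: Pod
metadata:
  name: fsgroup-demo
spec:
  securityContext:
    runAsUser: 1000   # processes run as UID 1000
    fsGroup: 2000     # files created in mounted volumes get GID 2000
  containers:
    - name: app
      image: nginx:1.15.5
      volumeMounts:
        - name: scratch
          mountPath: /data
  volumes:
    - name: scratch
      emptyDir: {}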

Persistent storage Azure container

I've been struggling for a couple of days now with how to set up persistent storage for a custom Docker container deployed on Azure.
Just for ease, I've used the official WordPress image in my container and provided the database credentials through environment variables; so far so good. The application is stateless and the data is stored in a separate MySQL service in Azure.
But how do I handle content files like server logs or uploaded images? Those are placed in /var/www/html/wp-content/upload and will be removed if the container gets removed or when a backup snapshot is restored. Is it possible to mount this directory to a host location? Is it possible to mount this directory so it is accessible through FTP to the App Service?
OK, I realized that it's not possible to mount volumes in a single-container app. To mount a volume you must use Docker Compose and mount the volume as in the example below.
Also, make sure you set the application setting WEBSITES_ENABLE_APP_SERVICE_STORAGE to TRUE.
version: '3.3'
services:
  wordpress:
    image: wordpress
    volumes:
      - ${WEBAPP_STORAGE_HOME}/site/wwwroot:/var/www/html
    ports:
      - "8000:80"
    restart: always
With this, your uploaded files will be persisted and also included in the snapshot backups.
Yes, you can do this, and you should read about PVs (persistent volumes) and PVCs (persistent volume claims), which allow mounting volumes onto your cluster.
In your case, you can mount:
Azure Files - basically a managed SMB file share mounted on the k8s cluster
Azure Disks - basically managed disk volumes attached to the k8s nodes
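As a hedged sketch of the Kubernetes route (the azurefile storage class name is the AKS built-in default at the time of writing and may differ on your cluster; the claim name is a placeholder), a PVC against Azure Files gives you a share that several pods can write to:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: wordpress-uploads
spec:
  accessModes:
    - ReadWriteMany            # Azure Files supports shared read/write access
  storageClassName: azurefile
  resources:
    requests:
      storage: 5Gi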

Why is the OpenEBS mount path still available on the host machine even after deletion of the PVC?

I have deleted the PVC, but the OpenEBS mount folder is still present on the host, with the PVC folder name and its data. This is not releasing space on my host machine. Why is that?
OpenEBS volumes with the jiva storage engine (cas type) use replica pods configured with a hostPath to save the data. Upon deletion of an OpenEBS volume, the target and replica pods are deleted, but not the associated hostPath and the contents within it. The following issue tracks the implementation of a feature to clean up the contents of the jiva replica hostPath: https://github.com/openebs/openebs/issues/1246
Starting with OpenEBS 0.7, volumes of cas type cStor are supported. For cStor volumes, the associated data is deleted when the corresponding PV/PVC is deleted.
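As a rough sketch of how a cStor class was declared in the OpenEBS 0.7-era docs (the annotation keys, the provisioner name, and the cstor-sparse-pool pool name are assumptions that may differ in your installation):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-cstor-sc
  annotations:
    openebs.io/cas-type: cstor            # assumed annotation key for selecting the cStor engine
    cas.openebs.io/config: |
      - name: StoragePoolClaim
        value: "cstor-sparse-pool"        # assumed default pool name
provisioner: openebs.io/provisioner-iscsi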

How does PersistentVolume work with hostPath?

I have deployed GitLab to my Azure Kubernetes cluster with persistent storage defined the following way:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: gitlab-data
  namespace: gitlab
spec:
  capacity:
    storage: 8Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/tmp/gitlab-data"
That worked fine for some days. Suddenly all my data stored in GitLab was gone and I don't know why. I was assuming that a hostPath-backed PersistentVolume is really persistent, because it is saved on a node and somehow replicated to all existing nodes. But my data is now lost and I cannot figure out why. I looked up the uptime of each node and there was no restart. I logged in to the nodes and checked the path, and as far as I can see the data is gone.
So how do PersistentVolume mounts work in Kubernetes? Is the data really persisted on the nodes? How do multiple nodes share the data if a deployment is spread across multiple nodes? Is hostPath reliable persistent storage?
hostPath doesn't share or replicate data between nodes, and once your pod starts on another node, the data will be lost. You should consider using some external shared storage.
Here's the related quote from the official docs:
HostPath (single node testing only – local storage is not supported in any way and WILL NOT WORK in a multi-node cluster)
from kubernetes.io/docs/user-guide/persistent-volumes/
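As a hedged sketch of the kind of external storage that answer suggests (on AKS; the azurefile storage class name is an assumption that depends on your cluster), a dynamically provisioned PVC survives pods being rescheduled to other nodes:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: gitlab-data
  namespace: gitlab
spec:
  accessModes:
    - ReadWriteMany            # supported by Azure Files; an Azure Disk would be ReadWriteOnce
  storageClassName: azurefile
  resources:
    requests:
      storage: 8Gi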
