I have a Jenkins Pod that mounts a PV through a PVC.
Now I want to create a CronJob that uses the same PVC in order to do some log rotation on the Jenkins builds.
How can I access the Jenkins PVC from the CronJob in order to run some batch procedures on the PV?
Personally, I think you can consider the following ways to share the Jenkins PVC with the CronJob Pods.
Share a PV created with the ReadWriteMany access mode between two PVCs, such as the Jenkins PVC and the CronJob PVC. Refer to Sharing an NFS mount across two persistent volume claims for more details.
Or mount the Jenkins PVC when the CronJob Pod starts up, after stopping the Jenkins Pod.
It's required to stop the Jenkins Pod before mounting the PVC in the CronJob Pod.
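In either case, the CronJob side boils down to mounting a claim in the Job's Pod template. A minimal sketch, assuming the claim the CronJob should use is named jenkins-pvc (for the first approach this would instead be the separate CronJob PVC bound to the shared ReadWriteMany PV); the schedule and the busybox-based cleanup command are assumptions you would need to adapt:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: jenkins-log-rotation
spec:
  schedule: "0 3 * * *"                  # run once a day at 03:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: rotate
              image: busybox:1.36        # hypothetical image with your rotation tooling
              command: ["sh", "-c", "find /var/jenkins_home/jobs -name log -mtime +30 -delete"]
              volumeMounts:
                - name: jenkins-home
                  mountPath: /var/jenkins_home
          volumes:
            - name: jenkins-home
              persistentVolumeClaim:
                claimName: jenkins-pvc   # hypothetical name of the existing Jenkins claim

Note that mounting the storage from the Jenkins Pod and the CronJob Pod at the same time only works with the first approach (ReadWriteMany), which is why the second approach requires stopping Jenkins first.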
I hope it helps you.
I'm trying to follow this article: https://airflow.apache.org/docs/apache-airflow/1.10.6/howto/write-logs.html
so that Airflow will start writing logs to blob storage, but the problem is that I do not know how to configure Airflow to do that. In my case, Airflow is running on a Kubernetes cluster and the deployment is done via a Helm chart.
I tried to log into the webserver Pod, but the airflow user is not authorized to create any files in the AIRFLOW_HOME directory. I was trying to use sudo, but I can't find the password (I'm not even sure it would work; airflow is not in the sudoers file anyway).
Should I do all of this in the Docker image and just restart Airflow?
I am not too familiar with Helm chart setups, but maybe it is worth a try to add the variables for remote logging in the values.yaml file like this:
config:
  logging:
    remote_logging: 'True'
    remote_log_conn_id: <their AWS conn id>
    remote_base_log_folder: s3://bucket-name/logs
Plus define a normal Airflow connection either via an ENV variable in the Dockerfile or via the UI and provide that as the AWS conn id.
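If the chart exposes an env list in values.yaml (many Airflow charts do), a hedged sketch of providing that connection as an environment variable could look like this; the connection ID amazons3_con_id and the credential placeholders are assumptions:

env:
  - name: AIRFLOW_CONN_AMAZONS3_CON_ID
    # Airflow reads connections from AIRFLOW_CONN_<CONN_ID> variables in URI form
    value: "aws://<access-key-id>:<url-encoded-secret-key>@"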
If that does not work my next attempt would be to use ENV variables for all of the settings in the Dockerfile:
# allow remote logging and provide a connection ID
ENV AIRFLOW__LOGGING__REMOTE_LOGGING=True
ENV AIRFLOW__LOGGING__REMOTE_LOG_CONN_ID=${AMAZONS3_CON_ID}
# specify the location of your remote logs using your bucket name
ENV AIRFLOW__LOGGING__REMOTE_BASE_LOG_FOLDER=s3://${S3BUCKET_NAME}/logs
# optional: serverside encryption for S3 logs
ENV AIRFLOW__LOGGING__ENCRYPT_S3_LOGS=True
Also, if you are on a pre-2.0 version of Airflow, consider upgrading if you can; it is worth it imho. :)
I have a container on Azure. When the container starts, it runs a script to modify some configuration files under /var/lib/myservice/conf/. I also want to mount an Azure Files volume in this container with the volume mount path /var/lib/myservice/. The problem is that the container cannot start successfully. If I change the volume path to /var/lib/myservice/logs/, it starts successfully. I think the problem is that after mounting, my script cannot find the configuration files, so it cannot modify them. The /logs folder is intact, so the container starts successfully.
I'm sorry if my question is a bit confusing. Can anyone help me mount the directory /var/lib/myservice/ successfully? Thank you very much.
The problem is that if you mount the Azure Files volume at the path /var/lib/myservice/, the volume will hide everything under that path and leave it as empty as the Azure Files share is. But the files in that path are necessary for your service to initialize, so the container cannot start successfully.
The logs are not necessary for your service to initialize, so mounting the volume at the logs path does not affect your service.
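If this container runs on Kubernetes (e.g. AKS), a minimal sketch of the working variant, mounting the share below the service directory instead of on top of it, could look like the following; the image, Secret, and share names are assumptions:

apiVersion: v1
kind: Pod
metadata:
  name: myservice
spec:
  containers:
    - name: myservice
      image: myregistry/myservice:latest   # hypothetical image
      volumeMounts:
        - name: logs
          # mounting below /var/lib/myservice leaves the conf/ files from the image intact
          mountPath: /var/lib/myservice/logs
  volumes:
    - name: logs
      azureFile:
        secretName: azure-files-secret     # hypothetical Secret with the storage account credentials
        shareName: myservice-logs          # hypothetical file share name
        readOnly: false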
I need a shared volume accessible from multiple pods for caching files in RAM on each node.
The problem is that the emptyDir volume provisioner (which supports Memory as its medium) is available in Volume spec but not in PersistentVolume spec.
Is there any way to achieve this, except by creating a tmpfs volume manually on each host and mounting it via local or hostPath provisioner in the PV spec?
Note that Docker itself supports such volumes:
docker volume create --driver local --opt type=tmpfs --opt device=tmpfs \
--opt o=size=100m,uid=1000 foo
I don't see any reason why k8s doesn't. Or maybe it does, but it's not obvious?
I tried playing with local and hostPath PVs with mountOptions but it didn't work.
An emptyDir is tied to the lifetime of a Pod, so it can't be shared between multiple Pods.
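What a memory-backed emptyDir does give you is a tmpfs shared between the containers of a single Pod; a minimal sketch (the container names, images, and sizeLimit are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: shared-cache
spec:
  containers:
    - name: writer
      image: busybox:1.36
      command: ["sh", "-c", "while true; do date > /cache/now; sleep 5; done"]
      volumeMounts:
        - name: cache
          mountPath: /cache
    - name: reader
      image: busybox:1.36
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: cache
          mountPath: /cache
  volumes:
    - name: cache
      emptyDir:
        medium: Memory        # backed by tmpfs in the node's RAM
        sizeLimit: 100Mi

Sharing the same tmpfs across Pods, however, is exactly the missing piece.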
What you are requesting is an additional feature, and if you look at the GitHub discussion below, you will see that you are not the first to ask for it:
consider a tmpfs storage class
Also, regarding your mention that Docker supports this tmpfs volume: yes, it does, but you can't share this volume between containers. From the documentation:
Limitations of tmpfs mounts:
Unlike volumes and bind mounts, you can’t share tmpfs mounts between containers.
I am trying to log into a Kubernetes pod using the kubectl exec command. I am successful, but it logs me in as the root user. I have created some other users too as part of the system build.
The command being used is "kubectl exec -it <pod-name> -- /bin/bash". I guess this means: run /bin/bash in the pod, which results in a shell session inside the container.
Can someone please guide me on the following -
How to logon using a non-root user?
Is there a way to disable root user login?
How can I bind our organization's ldap into the container?
Please let me know if more information is needed from my end to answer this?
Thanks,
Anurag
You can use su - <USERNAME> to log in as a non-root user.
Run cat /etc/passwd to get a list of all available users, then identify a user with a valid login shell, e.g.
/bin/bash or /bin/sh
Users with /bin/nologin or /bin/false as their shell are used by system processes, and as such you can't log in as them.
I think it's because the container user is root; that is why when you kubectl exec into it, the default user is root. If you run your container or Pod as a non-root user, then kubectl exec will not be root.
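A minimal sketch of running a Pod as a non-root user via securityContext; the UID/GID values and the image are placeholders and must be valid for your image:

apiVersion: v1
kind: Pod
metadata:
  name: myapp
spec:
  securityContext:
    runAsUser: 1000          # placeholder UID; kubectl exec sessions then start as this user
    runAsGroup: 1000
    runAsNonRoot: true       # refuse to start the container if it would run as root
  containers:
    - name: myapp
      image: myregistry/myapp:latest   # hypothetical image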
In most cases, there is only one process that runs in a Docker container inside a Kubernetes Pod. There are no other processes that can provide authentication or authorization features. You can try to run a wrapper with several nested processes in one container, but this way you spoil the containerization idea to run an immutable application code with minimum overhead.
kubectl exec runs another process in the same container environment with the main process, and there is no option to set the user ID for this process.
However, you can do it by using docker exec with the additional option:
--user , -u Username or UID (format: <name|uid>[:<group|gid>])
In any case, these two articles might be helpful for running IBM MQ in a Kubernetes cluster:
Availability and scalability of IBM MQ in containers
Administering Kubernetes
I'm trying to host Jenkins in a Docker container in the Azure App Service. This means it's 'linux' hosting.
By default the jenkins/jenkins:2.110-alpine Docker image stores its data in the /var/jenkins_home folder in the container. I want this data/config written to Azure persistent storage so that it survives container restarts.
I've read documentation and blogs stating that you can have container data persisted if it's stored in the /home folder.
So I've customized the Jenkins Dockerfile to look like this...
FROM jenkins/jenkins:2.110-alpine
USER root
RUN mkdir /home/jenkins
RUN ln -s /var/jenkins_home /home/jenkins
USER jenkins
However, when I deploy to Azure App Service I don't see the files in my /home folder (looking in the Kudu console). The app starts just fine, but I lose all of my data when I restart the container.
What am I missing?
That's expected because you only persist a symlink (ln -s /var/jenkins_home /home/jenkins) on the Azure host. All the files physically exist inside the container.
To do this, you have to actually change the Jenkins configuration to store all data in /home/jenkins, which you have already created in your Dockerfile above.
A quick search for the Jenkins data folder suggests setting the environment variable JENKINS_HOME to that directory.
In your Dockerfile:
ENV JENKINS_HOME /home/jenkins