I am testing securityContext, but I can't start a pod when I set runAsNonRoot to true.
I use Vagrant to deploy a master and two minions, and I SSH into the host machine as the user abdelghani:
id $USER
uid=1001(abdelghani) gid=1001(abdelghani) groups=1001(abdelghani),27(sudo)
Cluster information:
Kubernetes version: 4.4.0-185-generic
Cloud being used: (put bare-metal if not on a public cloud)
Installation method: manual
Host OS: ubuntu16.04.6
CNI and version:
CRI and version:
apiVersion: v1
kind: Pod
metadata:
  name: buggypod
spec:
  containers:
    - name: container
      image: nginx
      securityContext:
        runAsNonRoot: true
I run:
kubectl apply -f pod.yml
It says pod/buggypod created, but when I check with:
kubectl get pods
the pod's status is CreateContainerConfigError.
What am I doing wrong?
I tried to run the pod based on your spec. The reason it fails is that Nginx needs to modify configuration under /etc/ that is owned by root, so when you set runAsNonRoot the container cannot edit the default Nginx config. This is the error you actually get when you run it:
10-listen-on-ipv6-by-default.sh: error: can not modify /etc/nginx/conf.d/default.conf (read-only file system?)
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2020/08/13 17:28:55 [warn] 1#1: the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /etc/nginx/nginx.conf:2
nginx: [warn] the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /etc/nginx/nginx.conf:2
2020/08/13 17:28:55 [emerg] 1#1: mkdir() "/var/cache/nginx/client_temp" failed (13: Permission denied)
nginx: [emerg] mkdir() "/var/cache/nginx/client_temp" failed (13: Permission denied)
The spec I ran.
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: buggypod
  name: buggypod
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
  containers:
    - image: nginx
      name: buggypod
      resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
My suggestion is to build a custom Nginx image from a Dockerfile that creates a user and grants it permissions on the folders /var/cache/nginx, /etc/nginx/conf.d and /var/log/nginx, so that you can run the container as non-root.
The Nginx service expects read and write access to its configuration path (/etc/nginx); by default a non-root user does not have that access, which is why it is failing.
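A minimal sketch of such a Dockerfile (the user name, UID and port below are illustrative assumptions, not from the original answer; align the UID with whatever runAsUser you set in the pod spec):
# Sketch only -- user name, UID, and listen port are illustrative assumptions.
FROM nginx

# Create an unprivileged user and give it the directories nginx writes to at startup.
RUN useradd -u 1000 nginxuser && \
    chown -R nginxuser:nginxuser /var/cache/nginx /etc/nginx/conf.d /var/log/nginx && \
    touch /var/run/nginx.pid && \
    chown nginxuser:nginxuser /var/run/nginx.pid

# Unprivileged users cannot bind port 80, so move the default server to 8080.
RUN sed -i 's/listen  *80;/listen 8080;/' /etc/nginx/conf.d/default.conf

USER 1000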
You only set runAsNonRoot, but that by itself doesn't guarantee that the container will start the service as user 1001. Try setting runAsUser explicitly to 1001, as below; this should resolve your issue.
apiVersion: v1
kind: Pod
metadata:
  name: buggypod
spec:
  containers:
    - name: container
      image: nginx
      securityContext:
        runAsUser: 1001
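After updating the manifest, re-apply it and check the pod again (the file and pod names come from the question above):
kubectl apply -f pod.yml
kubectl get pod buggypod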
Related
I am toying with the Spark operator in Kubernetes, and I am trying to create a SparkApplication resource with the following manifest.
apiVersion: "sparkoperator.k8s.io/v1beta2"
kind: SparkApplication
metadata:
name: pyspark-pi
namespace: spark-jobs
spec:
batchScheduler: volcano
batchSchedulerOptions:
priorityClassName: routine
type: Python
pythonVersion: "3"
mode: cluster
image: "<image_name>"
imagePullPolicy: Always
mainApplicationFile: local:///spark-files/csv_data.py
arguments:
- "10"
sparkVersion: "3.0.0"
restartPolicy:
type: OnFailure
onFailureRetries: 3
onFailureRetryInterval: 10
onSubmissionFailureRetries: 5
onSubmissionFailureRetryInterval: 20
timeToLiveSeconds: 86400
driver:
cores: 1
coreLimit: "1200m"
memory: "512m"
labels:
version: 3.0.0
serviceAccount: driver-sa
volumeMounts:
- name: sparky-data
mountPath: /spark-data
executor:
cores: 1
instances: 2
memory: "512m"
labels:
version: 3.0.0
volumeMounts:
- name: sparky-data
mountPath: /spark-data
volumes:
- name: sparky-data
hostPath:
path: /spark-data
I am running this in kind, where I have defined a volume mount to my local system containing the data to be processed. I can see the volume mounted in the kind nodes, but when I create the above resource, the driver pod crashes with the error 'no such path'. I printed the contents of the driver pod's root directory and could not see the mounted volume. What is the problem here and how do I fix it?
The issue is related to permissions. When mounting a volume into a pod, you need to make sure the permissions are set correctly: the user or group running the application in the pod must have access to the data. You should also make sure that the path to the volume is valid and that the volume is properly mounted. To check whether a path exists, you can use the exec command:
kubectl exec <pod_name> -- ls /spark-data
Try adding a securityContext, which provides privilege and access control settings for a pod.
For more information follow this document.
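As a rough sketch of where a pod-level securityContext goes (the UID/GID values below are illustrative; note that for hostPath volumes Kubernetes does not change ownership for you, so the directory on the node must already be readable and writable by this user or group):
apiVersion: v1
kind: Pod
metadata:
  name: volume-permissions-example   # illustrative name
spec:
  securityContext:
    runAsUser: 1000    # UID the application runs as (illustrative)
    runAsGroup: 3000   # GID the application runs as (illustrative)
    fsGroup: 2000      # applied to volume types that support ownership management;
                       # hostPath is NOT one of them, so fix ownership on the node itself
  containers:
    - name: app
      image: <image_name>
      volumeMounts:
        - name: sparky-data
          mountPath: /spark-data
  volumes:
    - name: sparky-data
      hostPath:
        path: /spark-data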
I'm using pm2 to watch the directory holding the source code for my app-server's NodeJS program, running within a Kubernetes cluster.
However, I am getting this error:
ENOSPC: System limit for number of file watchers reached
I searched on that error, and found this answer: https://stackoverflow.com/a/55763478
# insert the new value into the system config
echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf && sudo sysctl -p
However, I tried running that in a pod on the target k8s node, and it says the command sudo was not found. If I remove the sudo, I get this error:
sysctl: setting key "fs.inotify.max_user_watches": Read-only file system
How can I modify the file-system watcher limit from the 8192 found on my Kubernetes node, to a higher value such as 524288?
I found a solution: use a privileged DaemonSet that runs on each node in the cluster and has the ability to modify the fs.inotify.max_user_watches variable.
Add the following to a node-setup-daemon-set.yaml file and apply it to your Kubernetes cluster:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-setup
  namespace: kube-system
  labels:
    k8s-app: node-setup
spec:
  selector:
    matchLabels:
      name: node-setup
  template:
    metadata:
      labels:
        name: node-setup
    spec:
      containers:
        - name: node-setup
          image: ubuntu
          command: ["/bin/sh","-c"]
          args: ["/script/node-setup.sh; while true; do echo Sleeping && sleep 3600; done"]
          env:
            - name: PARTITION_NUMBER
              valueFrom:
                configMapKeyRef:
                  name: node-setup-config
                  key: partition_number
          volumeMounts:
            - name: node-setup-script
              mountPath: /script
            - name: dev
              mountPath: /dev
            - name: etc-lvm
              mountPath: /etc/lvm
          securityContext:
            allowPrivilegeEscalation: true
            privileged: true
      volumes:
        - name: node-setup-script
          configMap:
            name: node-setup-script
            defaultMode: 0755
        - name: dev
          hostPath:
            path: /dev
        - name: etc-lvm
          hostPath:
            path: /etc/lvm
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: node-setup-config
  namespace: kube-system
data:
  partition_number: "3"
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: node-setup-script
  namespace: kube-system
data:
  node-setup.sh: |
    #!/bin/bash
    set -e
    # change the file-watcher max-count on each node to 524288
    # insert the new value into the system config
    sysctl -w fs.inotify.max_user_watches=524288
    # check that the new value was applied
    cat /proc/sys/fs/inotify/max_user_watches
Note: The file above could probably be simplified quite a bit. (I was basing it on this guide, and left in a lot of stuff that's probably not necessary for simply running the sysctl command.) If others succeed in trimming it further, while confirming that it still works, feel free to make/suggest those edits to my answer.
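To roll this out and confirm it worked, something like the following should do (the exec target is whichever node-setup pod the DaemonSet created on the node you care about):
kubectl apply -f node-setup-daemon-set.yaml
kubectl -n kube-system get pods -l name=node-setup
kubectl -n kube-system exec <node-setup-pod-name> -- cat /proc/sys/fs/inotify/max_user_watches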
You do not want to run your container as a privileged container if you can help it.
The solution here is to set the following kernel parameters on the host, then restart your container(s). The containers will pick up the values from the kernel they run on, because containers on a Linux host do not run separate kernels; they share the host's kernel.
fs.inotify.max_user_watches=10485760
fs.aio-max-nr=10485760
fs.file-max=10485760
kernel.pid_max=10485760
kernel.threads-max=10485760
You should paste the above into /etc/sysctl.conf on the host.
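For example, to persist and apply the inotify setting, run this as root on the node itself (not inside a container); it is the same pattern as the command quoted in the question:
echo fs.inotify.max_user_watches=10485760 | tee -a /etc/sysctl.conf
sysctl -p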
I deployed my first container and got:
deployment.apps/frontarena-ads-deployment created
but then I saw that container creation is stuck in the Waiting status.
I looked at the events with kubectl describe pod frontarena-ads-deployment-5b475667dd-gzmlp and saw a MountVolume error that I cannot figure out:
Warning  FailedMount  9m24s  kubelet  MountVolume.SetUp failed for volume "ads-filesharevolume" : mount failed: exit status 32
Mounting command: systemd-run
Mounting arguments: --description=Kubernetes transient mount for /var/lib/kubelet/pods/85aa3bfa-341a-4da1-b3de-fb1979420028/volumes/kubernetes.io~azure-file/ads-filesharevolume --scope -- mount -t cifs -o username=frontarenastorage,password=mypassword,file_mode=0777,dir_mode=0777,vers=3.0 //frontarenastorage.file.core.windows.net/azurecontainershare /var/lib/kubelet/pods/85aa3bfa-341a-4da1-b3de-fb1979420028/volumes/kubernetes.io~azure-file/ads-filesharevolume
Output: Running scope as unit run-rf54d5b5f84854777956ae0e25810bb94.scope.
mount error(115): Operation now in progress
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs)
Before running the deployment I created a secret for the already-existing Azure file share, which I reference within the YAML:
$AKS_PERS_STORAGE_ACCOUNT_NAME="frontarenastorage"
$STORAGE_KEY="mypassword"
kubectl create secret generic fa-fileshare-secret --from-literal=azurestorageaccountname=$AKS_PERS_STORAGE_ACCOUNT_NAME --from-literal=azurestorageaccountkey=$STORAGE_KEY
In that file share I have folders and files that I need to mount, and I reference azurecontainershare in the YAML.
My YAML looks like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontarena-ads-deployment
  labels:
    app: frontarena-ads-deployment
spec:
  replicas: 1
  template:
    metadata:
      name: frontarena-ads-aks-test
      labels:
        app: frontarena-ads-aks-test
    spec:
      containers:
        - name: frontarena-ads-aks-test
          image: faselect-docker.dev/frontarena/ads:test1
          imagePullPolicy: Always
          ports:
            - containerPort: 9000
          volumeMounts:
            - name: ads-filesharevolume
              mountPath: /opt/front/arena/host
      volumes:
        - name: ads-filesharevolume
          azureFile:
            secretName: fa-fileshare-secret
            shareName: azurecontainershare
            readOnly: false
      imagePullSecrets:
        - name: fa-repo-secret
  selector:
    matchLabels:
      app: frontarena-ads-aks-test
The issue was that the AKS cluster and the Azure file share were deployed in different Azure regions. If they are in the same region, you will not have this issue.
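If you want to verify the two regions, the Azure CLI can show both (the resource group and cluster names below are placeholders, not taken from the question):
az aks show --resource-group <resource-group> --name <aks-cluster-name> --query location -o tsv
az storage account show --resource-group <resource-group> --name frontarenastorage --query location -o tsv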
I am running Kubernetes in Azure, where I have created a storage account and an Azure Files share.
From my local Ubuntu machine I can successfully mount the share with:
$ sudo mount -t cifs //mystorage.....windows.net/data /home/demo/azureshare -o vers=3.0,username=mystorage,password=-C5DM...tHRow==
But when I try to do the same from a running ubuntu pod I get:
$ kubectl exec diag-app-9d8fcc878e-5r6g -it bash
root@diag-app-9d8fcc878e-5r6g:~# sudo mount -vv -t cifs //mystorage.....windows.net/data /home/user/azureshare -o vers=3.0,username=mystorage,password=-C5DM...tHRow==
mount.cifs kernel mount options: ip=xx.xxx.xxx.xxx,unc=\\mystorage.....windows.net\data,vers=3.0,user=mystorage,pass=********
mount error(13): Permission denied
Refer to the mount.cifs(8) manual page (e.g. man mount.cifs) and kernel log messages (dmesg)
I have tried with securityContext:
apiVersion: extensions/v1beta1
kind: Deployment
...
spec:
  securityContext:
    runAsUser: 0
  containers:
    ...
But that gives:
Unable to apply new capability set.
So I have added:
apiVersion: extensions/v1beta1
kind: Deployment
...
spec:
  securityContext:
    runAsUser: 0
  containers:
    - ...
      securityContext:
        capabilities:
          add:
            - NET_ADMIN
            - SYS_ADMIN
            - DAC_READ_SEARCH
But still the same error. I also tried:
apiVersion: extensions/v1beta1
kind: Deployment
...
spec:
  containers:
    - ...
      securityContext:
        runAsUser: 0
        capabilities:
          add:
            - NET_ADMIN
            - SYS_ADMIN
            - DAC_READ_SEARCH
Still the same error.
The above is NOT something I plan to do in production; I am just trying to understand why I cannot mount the share directly from inside a pod.
Any suggestions?
I know it's late, but I had the same problem and had to disable AppArmor via:
apiVersion: v1
kind: Pod
metadata:
  annotations:
    container.apparmor.security.beta.kubernetes.io/container_name: unconfined
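Note that container_name in the annotation key must be replaced with the actual name of the container in the pod spec. A fuller sketch (the pod/container name and image are illustrative; the capabilities are the ones already tried in the question above):
apiVersion: v1
kind: Pod
metadata:
  name: cifs-test           # illustrative name
  annotations:
    # the suffix must match the container name below
    container.apparmor.security.beta.kubernetes.io/cifs-test: unconfined
spec:
  containers:
    - name: cifs-test
      image: ubuntu
      command: ["sleep", "infinity"]
      securityContext:
        runAsUser: 0
        capabilities:
          add:
            - SYS_ADMIN
            - DAC_READ_SEARCH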
I have a ReactJS app running in my pod, and I have mounted the source code from the host machine into the pod. It works fine, but when I change the code on the host machine, the source code in the pod also changes, yet the running site is not affected by the change. Here is my manifest; what am I doing wrong?
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: webapp
spec:
  replicas: 1
  minReadySeconds: 15
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
      maxSurge: 1
  template:
    metadata:
      labels:
        app: webapp
        tier: frontend
        phase: development
    spec:
      containers:
        - name: webapp
          image: xxxxxx
          command:
            - npm
          args:
            - run
            - dev
          env:
            - name: environment
              value: dev
            - name: AUTHOR
              value: webapp
          ports:
            - containerPort: 3000
          volumeMounts:
            - mountPath: /code
              name: code
      imagePullSecrets:
        - name: regcred
      volumes:
        - name: code
          hostPath:
            path: /hosthome/xxxx/development/react-app/src
I know for a fact that npm is not watching my changes. How can I resolve this in pods?
Basically, you need to reload your application every time you change your code; your pods don't reload or restart when the code under the /code directory changes. You will have to re-create your pod. Since you are using a Deployment, you can either:
kubectl delete pod <pod-where-your-app-is-running>
or
export PATCH='{"spec":{"template":{"metadata":{"annotations":{"timestamp":"'$(date)'"}}}}}'
kubectl patch deployment webapp -p "$PATCH"
Your pods should restart after that.
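Either way, you can watch the replacement pod come up with the commands below (the Deployment name and label come from the manifest above):
kubectl rollout status deployment/webapp
kubectl get pods -l app=webapp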
What Rico mentioned is correct: you need to patch or rebuild with every change. But you can avoid that by running minikube without a VM driver (this only works on Linux); by doing so you can mount a host path directly into the pod. Here is the command to run minikube without a VM driver. Hope this helps:
sudo minikube start --bootstrapper=localkube --vm-driver=none --apiserver-ips 127.0.0.1 --apiserver-name localhost -v=1
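With --vm-driver=none the Kubernetes node is your local machine itself, so the hostPath in the manifest above points directly at your local directory. A quick sanity check after minikube starts (the manifest file name webapp.yaml is an assumption):
kubectl get nodes
kubectl apply -f webapp.yaml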