I created a Dockerfile that uses USER 10001. Is it possible to use it in a Kubernetes deployment with the following security context?
securityContext:
  runAsUser: 1000
  runAsGroup: 3000
  fsGroup: 2000
Yes. The pod securityContext takes precedence over the USER set in the Dockerfile; you can exec into the container and check with the id command: uid=1000 gid=3000 groups=2000.
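For illustration, here is a minimal Pod sketch (the name and image below are placeholders, not from the original question) that applies that securityContext to an image built with USER 10001; the pod-level runAsUser takes precedence over the Dockerfile USER:

apiVersion: v1
kind: Pod
metadata:
  name: secctx-demo                            # hypothetical name
spec:
  securityContext:
    runAsUser: 1000                            # overrides USER 10001 from the Dockerfile
    runAsGroup: 3000
    fsGroup: 2000                              # applied to mounted volumes
  containers:
  - name: app
    image: myregistry.example/myapp:latest     # placeholder image built with USER 10001

You can then verify with kubectl exec secctx-demo -- id, which should show uid=1000 and gid=3000 rather than the image's 10001.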
I would like to have a Docker container active only during certain times of the day so that test automation can run. Is it possible?
Yes, it is possible. A CronJob is designed to run a Job periodically on a given schedule, written in Cron format. A Job creates one or more Pods and will continue to retry execution of the Pods until a specified number of them successfully terminate.
To run your automation tests:
- create a CronJob definition
- set the cron schedule
- call your CMD
Here is a sample Hello World example:
apiVersion: batch/v1
kind: CronJob
metadata:
  name: hello
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            imagePullPolicy: IfNotPresent
            command:
            - /bin/sh
            - -c
            - date; echo Hello from the Kubernetes cluster
          restartPolicy: OnFailure
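Adapting that to the test-automation case, here is a rough sketch (the image name is a placeholder and the schedule is just an example; pick the time window you need). concurrencyPolicy: Forbid keeps a new run from starting while the previous one is still active:

apiVersion: batch/v1
kind: CronJob
metadata:
  name: e2e-tests                                          # hypothetical name
spec:
  schedule: "0 2 * * *"                                    # run once per day at 02:00
  concurrencyPolicy: Forbid
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: e2e-tests
            image: myregistry.example/test-automation:latest   # placeholder test image
            # the image's own CMD runs the tests; add a command: here to override it
          restartPolicy: Never

Note that the batch/v1 CronJob API requires Kubernetes 1.21 or later; on older clusters use batch/v1beta1.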
You haven't given much information besides running and stopping a container. One of the simplest ways is to use the Docker CLI to run an instance of your container in Azure Container Instances (ACI). You create and use a Docker context for Azure, and then docker run will create an ACI container group and run your container in Azure.
docker login azure                        # authenticate Docker with your Azure account
docker context create aci myacicontext    # create a context backed by Azure Container Instances
docker context use myacicontext           # switch to the ACI context
docker run -p 80:80 [yourImage]           # creates an ACI container group and runs your image
docker rm [instanceName]                  # remove the instance when you are done
Ref:
https://www.docker.com/blog/running-a-container-in-aci-with-docker-desktop-edge/
https://learn.microsoft.com/en-us/azure/container-instances/quickstart-docker-cli
I am testing with securityContext but I can't start a pod when I set runAsNonRoot to true.
I use Vagrant to deploy a master and two minions and SSH to the host machine as the user abdelghani:
id $USER
uid=1001(abdelghani) gid=1001(abdelghani) groups=1001(abdelghani),27(sudo)
Cluster information:
Kubernetes version: 4.4.0-185-generic
Cloud being used: (put bare-metal if not on a public cloud)
Installation method: manual
Host OS: ubuntu16.04.6
CNI and version:
CRI and version:
apiVersion: v1
kind: Pod
metadata:
  name: buggypod
spec:
  containers:
  - name: container
    image: nginx
    securityContext:
      runAsNonRoot: true
I run:
kubectl apply -f pod.yml
It says pod/buggypod created, but when I check with:
kubectl get pods
the pod’s status is CreateContainerConfigError
What am I doing wrong?
I tried to run the pod based on your requirement, and the reason it fails is that Nginx needs to modify some configuration under /etc that is owned by root; when you set runAsNonRoot it fails because it cannot edit the default Nginx config.
This is the error you actually get when you run it:
10-listen-on-ipv6-by-default.sh: error: can not modify /etc/nginx/conf.d/default.conf (read-only file system?)
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
2020/08/13 17:28:55 [warn] 1#1: the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /etc/nginx/nginx.conf:2
nginx: [warn] the "user" directive makes sense only if the master process runs with super-user privileges, ignored in /etc/nginx/nginx.conf:2
2020/08/13 17:28:55 [emerg] 1#1: mkdir() "/var/cache/nginx/client_temp" failed (13: Permission denied)
nginx: [emerg] mkdir() "/var/cache/nginx/client_temp" failed (13: Permission denied)
The spec I ran:
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    run: buggypod
  name: buggypod
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 1000
  containers:
  - image: nginx
    name: buggypod
    resources: {}
  dnsPolicy: ClusterFirst
  restartPolicy: Always
status: {}
My suggestion is to build a custom Nginx image from a Dockerfile that creates a dedicated user and grants that user permissions on /var/cache/nginx, /etc/nginx/conf.d and /var/log/nginx, so that you can run the container as non-root.
The Nginx service expects read and write permission to its configuration path (/etc/nginx), and by default a non-root user does not have that access to the path; that is the reason it is failing.
You have only set runAsNonRoot, but that alone doesn't guarantee that the container will start the service as user 1001. Try setting runAsUser explicitly to 1001 as below; this should resolve your issue.
apiVersion: v1
kind: Pod
metadata:
  name: buggypod
spec:
  containers:
  - name: container
    image: nginx
    securityContext:
      runAsUser: 1001
I am creating a container group with a container that runs E2E tests on a website. How can I stop the entire group when one of the containers has stopped running (in this case the E2E tests)?
I am creating this through a pipeline and I need to stop the front-end container once the tests are done.
apiVersion: 2018-10-01
location: northeurope
name: e2e-uat
properties:
  containers:
  # name of the instance in Azure.
  - name: e2etestcafe
    properties:
      image: registry.azurecr.io/e2e/e2etestcafe:latest
      resources:
        requests:
          cpu: 1
          memoryInGb: 3
  - name: customerportal
    properties:
      image: registry.azurecr.io/e2e/customerportal:latest
      resources:
        requests:
          cpu: 1
          memoryInGb: 1
      ports:
      - port: 80
  osType: Linux
  restartPolicy: never
tags: null
type: Microsoft.ContainerInstance/containerGroups
For this requirement, as far as I know ACI does not have the feature you expect, so you need to check the containers' state yourself.
I recommend you create a script with a loop that checks the containers' state until it reaches the state you expect, and then stops the whole container group. In Azure DevOps, you can use a release pipeline with three stages: one for creation, a second for checking the state by running the script, and a third for stopping the container group.
To check the containers' state, the CLI command below is helpful:
az container show -g myResourceGroup -n myContainerGroup --query containers[*].instanceView.currentState.state
It will output all the containers' states in an array.
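As an illustrative sketch only (assuming an Azure Pipelines YAML step with the AzureCLI task, that az container stop is available in your CLI version, and using placeholder names for the service connection and resource group), the checking stage could poll that state and stop the group once the test container has terminated:

- task: AzureCLI@2
  inputs:
    azureSubscription: 'my-azure-connection'               # hypothetical service connection
    scriptType: 'bash'
    scriptLocation: 'inlineScript'
    inlineScript: |
      # poll the e2etestcafe container until it terminates, then stop the whole group
      while true; do
        state=$(az container show -g myResourceGroup -n e2e-uat \
          --query "containers[?name=='e2etestcafe'].instanceView.currentState.state" -o tsv)
        if [ "$state" = "Terminated" ]; then
          az container stop -g myResourceGroup -n e2e-uat
          break
        fi
        sleep 30
      done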
On my local machine I created a Windows Docker/Nano Server container and was able to push it into an Azure Container Registry using this command (the reason I had to use a Windows container is that I have to use CSOM in ASP.NET Core, which is not possible on Linux):
docker push MyContainerRegistry.azurecr.io/myimage:v1
That Docker image IS visible inside the Azure container registry, which is MyContainerRegistry.
I know that in order to run it I have to create a Container Instance; however, our management team doesn't want to go down that path and wants to use AKS instead.
We do have an AKS cluster created.
kubectl IS running in our Azure shell.
I tried to create an AKS pod using this command:
kubectl apply -f myyaml.yaml
These are the contents of the YAML file:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: mypod
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: mypod
    spec:
      containers:
      - name: mypod
        image: MyContainerRegistry.azurecr.io/itataxsync:v1
        ports:
        - containerPort: 80
      imagePullSecrets:
      - name: mysecret
      nodeSelector:
        beta.kubernetes.io/os: windows
The pod is created successfully.
When I run kubectl get pods I see the newly created pod.
However, when I look at the details of this pod, I see the following:
"Warning FailedScheduling 3m (x2 over 3m) default-scheduler 0/3
nodes are available: 3 node(s) didn't match node selector."
Does this mean that I simply can't run a Windows Docker container in Azure using AKS?
Is there any way I can run a Windows Docker container in Azure at all?
Thank you very much for your help!
You cannot yet have Windows nodes on AKS; you can, however, use AKS Engine (examples).
Bear in mind that Windows support in Kubernetes is still a bit lacking, so you will unfortunately run into issues.
I have a Tectonic Kubernetes cluster installed on Azure. It was built from the tectonic-installer GitHub repo, from master (commit 0a7a1edb0a2eec8f3fb9e1e612a8ef1fd890c332).
> kubectl version
Client Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.2", GitCommit:"922a86cfcd65915a9b2f69f3f193b8907d741d9c", GitTreeState:"clean", BuildDate:"2017-07-21T08:23:22Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"7", GitVersion:"v1.7.3+coreos.0", GitCommit:"42de91f04e456f7625941a6c4aaedaa69708be1b", GitTreeState:"clean", BuildDate:"2017-08-07T19:44:31Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
On the cluster I created a storage class, a PVC and a pod as in: https://gist.github.com/mwieczorek/28b7c779555d236a9756cb94109d6695
But the pod cannot start. When I run:
kubectl describe pod mypod
In the events I see:
FailedMount Unable to mount volumes for pod "mypod_default(afc68bee-88cb-11e7-a44f-000d3a28f26a)":
timeout expired waiting for volumes to attach/mount for pod "default"/"mypod". list of unattached/unmounted volumes=[mypd]
In kubelet logs (https://gist.github.com/mwieczorek/900db1e10971a39942cba07e202f3c50) I see:
Error: Volume not attached according to node status for volume "pvc-61a8dc6a-88cb-11e7-ad19-000d3a28f2d3"
(UniqueName: "kubernetes.io/azure-disk//subscriptions/abc/resourceGroups/tectonic-cluster-mwtest/providers/Microsoft.Compute/disks/kubernetes-dynamic-pvc-61a8dc6a-88cb-11e7-ad19-000d3a28f2d3") pod "mypod" (UID: "afc68bee-88cb-11e7-a44f-000d3a28f26a")
When I create the PVC, a new disk is created in Azure.
And after creating the pod, I can see in the Azure portal that the disk is attached to the worker VM where the pod is scheduled.
> fdisk -l
shows:
Disk /dev/sdc: 2 GiB, 2147483648 bytes, 4194304 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
I found a similar issue on GitHub (kubernetes/kubernetes/issues/50150), but I have a cluster built from master, so it's not the udev rules (I checked; the file /etc/udev/rules.d/66-azure-storage.rules exists).
Does anybody know if this is a bug (maybe a known issue)?
Or am I doing something wrong?
Also: how can I troubleshoot this further?
I did a test in my lab using your YAML file to create the pod; after one hour it still shows Pending.
root@k8s-master-ED3DFF55-0:~# kubectl get pod
NAME READY STATUS RESTARTS AGE
mypod 0/1 Pending 0 1h
task-pv-pod 1/1 Running 0 2h
We can use the following YAML files to create a working pod.
PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc
  namespace: kube-public
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
Output:
root@k8s-master-ED3DFF55-0:~# kubectl get pvc --namespace=kube-public
NAME STATUS VOLUME CAPACITY ACCESSMODES STORAGECLASS AGE
mypvc Bound pvc-1b097337-8960-11e7-82fc-000d3a191e6a 100Gi RWO default 3h
Pod:
kind: Pod
apiVersion: v1
metadata:
  name: task-pv-pod
spec:
  volumes:
  - name: task-pv-storage
    persistentVolumeClaim:
      claimName: task-pv-claim
  containers:
  - name: task-pv-container
    image: nginx
    ports:
    - containerPort: 80
      name: "http-server"
    volumeMounts:
    - mountPath: "/usr/share/nginx/html"
      name: task-pv-storage
Output:
root@k8s-master-ED3DFF55-0:~# kubectl get pods
NAME READY STATUS RESTARTS AGE
task-pv-pod 1/1 Running 0 3h
As a workaround, we can use default as the storageclass.
In Azure, there are managed disks and unmanaged disks. If your nodes use managed disks, two storage classes will be created to provide access for creating Kubernetes persistent volumes backed by Azure managed disks.
They are managed-premium and managed-standard and map to the Premium_LRS and Standard_LRS managed disk types respectively.
If your nodes use unmanaged disks, the default storage class will be used if persistent volume resources don't specify a storage class as part of the resource definition.
The default storage class uses non-managed blob storage and will provision the blob within an existing storage account present in the resource group or provision a new storage account.
Non-managed persistent volume types are available on all VM sizes.
For more information about managed and unmanaged disks, please refer to this link.
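If you prefer to request it explicitly rather than relying on the cluster default, here is a PVC sketch (name and size are placeholders) that names the default storage class directly:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mypvc                        # placeholder name
spec:
  storageClassName: default          # the unmanaged, blob-backed class shown in the output below
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi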
Here is the test result:
root@k8s-master-ED3DFF55-0:~# kubectl get pvc --namespace=default
NAME STATUS VOLUME CAPACITY ACCESSMODES STORAGECLASS AGE
shared Pending standard-managed 2h
shared1 Pending managed-standard 15m
shared12 Pending standard-managed 14m
shared123 Bound pvc-a379ced4-897c-11e7-82fc-000d3a191e6a 2Gi RWO default 12m
task-pv-claim Bound pvc-3cefd456-8961-11e7-82fc-000d3a191e6a 3Gi RWO default 3h
Update:
Here is my K8s agent's unmanaged disk:
In your case, "kubectl describe pod <pod-name>" does not provide sufficient info; you need to provide the Kubernetes controller manager logs for troubleshooting.
Get the controller manager logs on the master:
#get the "CONTAINER ID" of "/hyperkube controlle"
docker ps -a | grep "hyperkube controlle" | awk -F ' ' '{print $1}'
#get controller manager logs
docker logs "CONTAINER ID" > "CONTAINER ID".log 2>&1 &
Provisioning should be very quick. Check your controller logs to make sure the PV required by the PVC is provisioned correctly:
Navigate to Azure portal > cluster > Activity Log
Remove filter for namespaces and look for "Update Storage Account Create" entries.
In our case we needed to register our cluster subscription for the 'Microsoft.Storage' namespace so that the controller could provision the required PV. You can do this with the Azure CLI:
az provider register --namespace Microsoft.Storage
I had a similar issue; this command worked for me:
az resource update --ids /subscriptions/<SUBSCRIPTION-ID>/resourcegroups/<RESOURCE-GROUP>/providers/Microsoft.ContainerService/managedClusters/<AKS-CLUSTER-NAME>/agentpools/<NODE-GROUP-NAME>