POD Security Policy Evaluation order (multiple roles) - security

Running 1.15 on AWS EKS.
By default AWS provides eks.privileged PSP (documented here: https://docs.aws.amazon.com/eks/latest/userguide/pod-security-policy.html). This is assigned to all authenticated users.
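For reference, the default policy and the RBAC objects that grant it to all authenticated users can be inspected directly (object names as documented by AWS):
kubectl get psp eks.privileged
kubectl describe clusterrole eks:podsecuritypolicy:privileged
kubectl describe clusterrolebinding eks:podsecuritypolicy:authenticated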
I then create a more restrictive PSP eks.restricted:
---
# restricted pod security policy
apiVersion: extensions/v1beta1
kind: PodSecurityPolicy
metadata:
  creationTimestamp: null
  labels:
    kubernetes.io/cluster-service: "true"
    eks.amazonaws.com/component: pod-security-policy
  name: eks.restricted
spec:
  allowPrivilegeEscalation: false
  allowedCapabilities:
  - '*'
  fsGroup:
    rule: RunAsAny
  hostPorts:
  - max: 65535
    min: 0
  runAsUser:
    rule: MustRunAsNonRoot
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  volumes:
  - '*'
The above is a non-mutating PSP. I also modify the default eks.privileged PSP to make it mutating by adding the following annotations:
seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
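For clarity, these annotations go under metadata.annotations of the eks.privileged policy; an excerpt (a sketch, the rest of the manifest stays as shipped by EKS):
kind: PodSecurityPolicy
metadata:
  name: eks.privileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
  # labels and spec unchanged from the EKS default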
Finally, I update the ClusterRole to add the new PSP I created:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: eks:podsecuritypolicy:privileged
  labels:
    kubernetes.io/cluster-service: "true"
    eks.amazonaws.com/component: pod-security-policy
rules:
- apiGroups:
  - policy
  resourceNames:
  - eks.privileged
  - eks.restricted
  resources:
  - podsecuritypolicies
  verbs:
  - use
What this accomplishes is that eks.restricted becomes the default PSP, due to the fact that it is non-mutating (see https://v1-15.docs.kubernetes.io/docs/concepts/policy/pod-security-policy/#policy-order; the order of the list doesn't matter).
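This can be verified on any running pod, since the admission controller records the policy it applied in the kubernetes.io/psp annotation (the pod name below is just an example):
kubectl get pod example-pod -o jsonpath='{.metadata.annotations.kubernetes\.io/psp}'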
That is great, but what I am actually trying to accomplish is a single namespace that defaults to eks.restricted while all others default to eks.privileged.
I attempted to do this as follows.
First I removed eks.restricted from the ClusterRole eks:podsecuritypolicy:privileged so that eks.privileged is once again the cluster-wide default. Within my namespace I then created a new Role:
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  annotations:
  labels:
    eks.amazonaws.com/component: pod-security-policy
    kubernetes.io/cluster-service: "true"
  name: eks:podsecuritypolicy:restricted
rules:
- apiGroups:
  - policy
  resourceNames:
  - eks.restricted
  resources:
  - podsecuritypolicies
  verbs:
  - use
This Role grants use of the PSP eks.restricted. I then bound this new Role to a ServiceAccount within my example namespace:
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: psp-restricted
  namespace: psp-example
roleRef:
  kind: Role
  name: eks:podsecuritypolicy:restricted
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: privileged-sa
  namespace: psp-example
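Whether the ServiceAccount is allowed to use each policy can be checked with kubectl auth can-i, for example:
kubectl auth can-i use podsecuritypolicy/eks.restricted --as=system:serviceaccount:psp-example:privileged-sa -n psp-example
kubectl auth can-i use podsecuritypolicy/eks.privileged --as=system:serviceaccount:psp-example:privileged-sa -n psp-example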
Finally I created a Deployment that uses this ServiceAccount and violates PSP eks.restricted:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: centos-deployment
  namespace: psp-example
  labels:
    app: centos
spec:
  replicas: 3
  selector:
    matchLabels:
      app: centos
  template:
    metadata:
      labels:
        app: centos
    spec:
      serviceAccountName: privileged-sa
      containers:
      - name: centos
        #image: centos:centos7
        image: datinc/permtest:0
        command:
        - '/bin/sleep'
        - '60000'
My assumption was that this would behave as in my initial example/test at the start of this post. My combined access is to both eks.privileged (bound to the system:authenticated group) and eks.restricted (bound to the ServiceAccount my deployment runs under). Since eks.restricted is non-mutating, it should be the one that applies, and pod creation should fail. But that isn't what happens: the pods start up just fine.
As a further test I added eks.privileged to the ServiceAccount's Role (listed above), expecting it to behave like my original example. It does not; the pods are still created just fine.
I'm trying to figure out why this is.

On EKS, the pods of your Deployment are created by the ReplicaSet controller, which uses the ServiceAccount replicaset-controller in the kube-system namespace, so it is that ServiceAccount's PSP access that gets evaluated. You need to remove it from the ClusterRoleBinding eks:podsecuritypolicy:authenticated, or delete that binding.
See this article for the details:
https://dev.to/anupamncsu/pod-security-policy-on-eks-mp9
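For example (edit the binding rather than deleting it if other workloads still rely on the default policy):
# see which subjects the default binding covers
kubectl describe clusterrolebinding eks:podsecuritypolicy:authenticated
# remove the system:authenticated subject, or delete the binding entirely
kubectl edit clusterrolebinding eks:podsecuritypolicy:authenticated
kubectl delete clusterrolebinding eks:podsecuritypolicy:authenticated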

Related

Unable to create a Kubernetes namespace with a GitLab Ci/CD pipeline and Terraform

I'm trying to create a namespace in an AKS cluster that was created using Terraform and a GitLab CI/CD pipeline.
I have the same error already discussed in this question.
I'm following the instructions but I still have the same error.
These are the .yaml files I created
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: modify-namespace
rules:
- apiGroups: [""]
  resources:
  - namespaces
  verbs:
  - create

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: gitlab-runner-role-default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: modify-namespace
subjects:
- kind: ServiceAccount
  name: default
  namespace: gitlab-runner
and this is the Terraform code to create the namespace
resource "kubernetes_namespace" "demo_namespace" {
metadata {
name = "demo"
}
}
What am I doing wrong?
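A quick check that may help narrow this down (assuming the runner uses the default ServiceAccount in the gitlab-runner namespace, as in the manifests above) is to ask the API server whether that ServiceAccount can create namespaces:
kubectl auth can-i create namespaces --as=system:serviceaccount:gitlab-runner:default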

Pod CrashLoopBackOff error with automountServiceAccountToken set to false

Hi Kubernetes Experts,
I was using the following ServiceAccount creation config:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-scheduler
  namespace: kube-system
and the following Deployment config:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    component: scheduler
    tier: control-plane
  name: my-scheduler
  namespace: kube-system
spec:
  ...
  template:
    metadata:
      labels:
        component: scheduler
        tier: control-plane
        version: second
        k8s-custom-scheduler: my-scheduler
    spec:
      serviceAccountName: my-scheduler
Things were working fine. Now I want to make my pod more secure by setting automountServiceAccountToken to false.
I changed my ServiceAccount creation and deployment config:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-scheduler
  namespace: kube-system
automountServiceAccountToken: false
Deployment config:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    component: scheduler
    tier: control-plane
  name: my-scheduler
  namespace: kube-system
spec:
  ...
  template:
    metadata:
      labels:
        component: scheduler
        tier: control-plane
        version: second
        k8s-custom-scheduler: my-scheduler
    spec:
      serviceAccountName: my-scheduler
      automountServiceAccountToken: false
After setting this, my scheduler pod does not come up and goes into CrashLoopBackOff.
Error:
I0325 17:37:50.304810 1 flags.go:33] FLAG: --write-config-to=""
I0325 17:37:50.891504 1 serving.go:319] Generated self-signed cert in-memory
W0325 17:37:51.168023 1 authentication.go:387] failed to read in-cluster kubeconfig for delegated authentication: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W0325 17:37:51.168064 1 authentication.go:249] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
W0325 17:37:51.168072 1 authentication.go:252] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
W0325 17:37:51.168089 1 authorization.go:177] failed to read in-cluster kubeconfig for delegated authorization: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W0325 17:37:51.168102 1 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
W0325 17:37:51.168111 1 options.go:298] Neither --kubeconfig nor --master was specified. Using default API client. This might not work.
invalid configuration: no configuration has been provided
I believe we need to configure something more along with automountServiceAccountToken: false.
Can someone point me to the additional configurations needed to use automountServiceAccountToken: false?
Configure Service Accounts for Pods
You can access the API from inside a pod using automatically mounted service account credentials.
In version 1.6+, you can opt out of automounting API credentials for a service account by setting automountServiceAccountToken: false on the service account or for a particular pod.
So, when you are creating a ServiceAccount and a Deployment like in your example yaml files, credentials for accessing the Kubernetes API are not automatically mounted to the Pod. But your k8s Deployment 'my-scheduler' requires them to access the API.
You can test your ServiceAccount with some dummy Deployment of nginx, for example. And it will work without mounting credentials.
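Alternatively, because the pod-level field takes precedence over the ServiceAccount-level one, you could keep automountServiceAccountToken: false on the ServiceAccount and opt back in only for this workload; a minimal sketch of the pod template:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-scheduler
  namespace: kube-system
spec:
  ...
  template:
    spec:
      serviceAccountName: my-scheduler
      # the pod-level setting overrides the ServiceAccount-level default
      automountServiceAccountToken: true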
Also, if you create a ServiceAccount like in your example:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-scheduler
  namespace: kube-system
automountServiceAccountToken: false
You can manually mount the API credentials like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-scheduler
  namespace: kube-system
  ...
spec:
  ...
  template:
    metadata:
      labels:
        app: my-scheduler
    spec:
      containers:
      - image: <YOUR_IMAGE>
        imagePullPolicy: Always
        name: my-scheduler
        volumeMounts:
        - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
          name: kube-api-access
          readOnly: true
      serviceAccountName: my-scheduler
      volumes:
      - name: kube-api-access
        projected:
          defaultMode: 420
          sources:
          - serviceAccountToken:
              expirationSeconds: 3607
              path: token
          - configMap:
              items:
              - key: ca.crt
                path: ca.crt
              name: kube-root-ca.crt
          - downwardAPI:
              items:
              - fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.namespace
                path: namespace
See the Managing Service Accounts link for more information.
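You can also verify from inside the pod that the credentials were mounted (the pod name below is an example):
kubectl -n kube-system exec my-scheduler-pod -- ls /var/run/secrets/kubernetes.io/serviceaccount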

Sync Azure Key Vault Secret with Kubernetes dockerconfigjson Secret

I am trying to sync an Azure Key Vault Secret with a Kubernetes Secret of type dockerconfigjson by applying the following yaml manifest with the 4 objects Pod, SecretProviderClass, AzureIdentity and AzureIdentityBinding.
All configuration around key vault access and managed identity RBAC rules have been done and proven to work, as I have access to the Azure Key Vault secret from within the running Pod.
But when I apply this manifest, I expect (according to the documentation here) the Kubernetes secret regcred to reflect the Azure Key Vault secret once I create the Pod with the mounted secret volume, yet the Kubernetes secret remains unchanged. I have also tried recreating the Pod in an attempt to trigger the sync, but in vain.
Since this is a very declarative way of configuring this functionality, I am also confused about where to look for logs when troubleshooting.
Can someone point me to what I may be doing wrong?
apiVersion: v1
kind: Pod
metadata:
  name: demo
  labels:
    aadpodidbinding: webapp
spec:
  containers:
  - name: demo
    image: mcr.microsoft.com/oss/azure/aad-pod-identity/demo:v1.6.3
    volumeMounts:
    - name: web-app-secret
      mountPath: "/mnt/secrets"
      readOnly: true
  nodeSelector:
    kubernetes.io/os: linux
  volumes:
  - name: web-app-secret
    csi:
      driver: secrets-store.csi.k8s.io
      readOnly: true
      volumeAttributes:
        secretProviderClass: web-app-secret-provide
---
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: web-app-secret-provide
spec:
  provider: azure
  parameters:
    usePodIdentity: "true"
    keyvaultName: <key-vault-name>
    objects: |
      array:
        - |
          objectName: registryPassword
          objectType: secret
    tenantId: <tenant-id>
  secretObjects:
  - data:
    - key: .dockerconfigjson
      objectName: registryPassword
    secretName: regcred
    type: kubernetes.io/dockerconfigjson
---
apiVersion: "aadpodidentity.k8s.io/v1"
kind: AzureIdentity
metadata:
  name: kv-managed-identity
spec:
  type: 0
  resourceID: <resource-id>
  clientID: <client-id>
---
apiVersion: "aadpodidentity.k8s.io/v1"
kind: AzureIdentityBinding
metadata:
  name: kv-managed-binding
spec:
  azureIdentity: kv-managed-identity
  selector: web-app
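Regarding where to look for logs: the mount and the secret sync are handled by the Secrets Store CSI driver pods, so their logs on the node running the demo pod are a reasonable starting point (the label and container name below assume a default driver installation in kube-system):
kubectl get pods -n kube-system -l app=secrets-store-csi-driver -o wide
kubectl logs -n kube-system <csi-driver-pod> -c secrets-store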

How to deploy .NET core web and worker projects to Kubernetes in single deployment?

I am relatively new to Docker and Kubernetes technologies. My requirement is to deploy one web and one worker (.Net background service) project in a single deployment.
This is how my deployment.yml file looks:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: worker
spec:
  progressDeadlineSeconds: 3600
  replicas: 1
  selector:
    matchLabels:
      app: worker
  template:
    metadata:
      labels:
        app: worker
    spec:
      containers:
      - name: worker
        image: xxxxx.azurecr.io/worker:#{Build.BuildId}#
        #image: xxxxx.azurecr.io/web
        imagePullPolicy: Always
        #ports:
        #- containerPort: 80
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  progressDeadlineSeconds: 3600
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: xxxxx.azurecr.io/web:#{Build.BuildId}#
        #image: xxxxx.azurecr.io/web
        imagePullPolicy: Always
        ports:
        - containerPort: 80
This is how my service.yml file looks:
apiVersion: v1
kind: Service
metadata:
  name: worker
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: worker
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: web
What I have found is that if I keep both in the service.yml file, only one of them is deployed to Kubernetes, whereas if I comment one out and apply them one by one, each deploys fine.
Is there any rule that we can't have both in a single file? Any reason why it doesn't work together but works individually?
One more ask: is there any way to look into the worker service pod, something like taking a remote session into it, to see what exactly is going on there? Even if it's a console application, is there any way to read what it's printing to the console after deployment?
This issue was resolved in the comments section and I decided to provide a Community Wiki answer just for better visibility to other community members.
It is possible to group multiple Kubernetes resources in the same file, but it is important to separate them using three dashes (“---”).
It's also worth mentioning that resources will be created in the order they appear in the file.
For more information, see the Organizing resource configurations documentation.
I've created an example to demonstrate how we can create a simple app-1 application (Deployment + Service) using a single manifest file:
$ cat app-1.yml
apiVersion: v1
kind: Service
metadata:
  labels:
    app: app-1
  name: app-1
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: app-1
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: app-1
  name: app-1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: app-1
  template:
    metadata:
      labels:
        app: app-1
    spec:
      containers:
      - image: nginx
        name: nginx
NOTE: Resources are created in the order they appear in the file:
$ kubectl apply -f app-1.yml
service/app-1 created
deployment.apps/app-1 created
$ kubectl get deploy,svc
NAME                    READY   UP-TO-DATE
deployment.apps/app-1   1/1     1
NAME            TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)
service/app-1   ClusterIP   10.8.14.179   <none>        80/TCP
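As for the second ask: the console output of the worker pod can be read with kubectl logs, and you can open an interactive shell inside it with kubectl exec (assuming the image ships a shell):
kubectl get pods -l app=worker
kubectl logs -f <worker-pod-name>
kubectl exec -it <worker-pod-name> -- /bin/sh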

Shared Azure File Storage with Statefulset on AKS

I have a Statefulset with 3 instances on Azure Kubernetes 1.16, where I try to use Azure File storage to create a single file share for the 3 instances.
I use Azure Files dynamic provisioning, where everything is declarative, i.e. the storage account, secrets, PVCs and PVs are created automatically.
Manifest with VolumeClaimTemplate
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: xxx
spec:
  replicas: 3
  ...
  volumeClaimTemplates:
  - metadata:
      name: xxx-data-shared
    spec:
      accessModes: [ ReadWriteMany ]
      storageClassName: azfile-zrs-sc
      resources:
        requests:
          storage: 1Gi
The StorageClass:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: azfile-zrs-sc
provisioner: kubernetes.io/azure-file
reclaimPolicy: Retain
volumeBindingMode: WaitForFirstConsumer
mountOptions:
- dir_mode=0777
- file_mode=0777
- uid=0
- gid=0
- mfsymlinks
- cache=strict
parameters:
  resourceGroup: xxx
  skuName: Standard_ZRS
  shareName: data
Instead of one share, I end up with 3 PVs, each referring to a separately created Azure Storage Account, each with a share named data.
Question: Can I use Azure Files dynamic provisioning with additional configuration in the manifest to get a single file share? Or will I have to do it statically?
Turns out that volumeClaimTemplates is not the right place (reference).
Instead use persistentVolumeClaim.
For Azure File Storage this becomes:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-shared-claim
spec:
  accessModes:
  - ReadWriteMany
  storageClassName: azfile-zrs-sc
  resources:
    requests:
      storage: 1Gi
And refer to it in the manifest:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: xxx
spec:
  replicas: 3
  ...
  template:
    spec:
      containers:
      - ...
        volumeMounts:
        - name: data-shared
          mountPath: /data
      volumes:
      - name: data-shared
        persistentVolumeClaim:
          claimName: data-shared-claim
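To confirm the replicas really share one volume, check that only a single PVC/PV pair exists and that all three pods reference the same claim:
kubectl get pvc data-shared-claim
kubectl get pv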
