Hi Kubernetes Experts,
I was using the following ServiceAccount creation config:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-scheduler
  namespace: kube-system
and the following Deployment config:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    component: scheduler
    tier: control-plane
  name: my-scheduler
  namespace: kube-system
spec:
  ...
  template:
    metadata:
      labels:
        component: scheduler
        tier: control-plane
        version: second
        k8s-custom-scheduler: my-scheduler
    spec:
      serviceAccountName: my-scheduler
Things were working fine. Now I want to make my pod more secure by setting automountServiceAccountToken to false.
I changed my ServiceAccount and Deployment configs:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-scheduler
  namespace: kube-system
automountServiceAccountToken: false
Deployment config:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    component: scheduler
    tier: control-plane
  name: my-scheduler
  namespace: kube-system
spec:
  ...
  template:
    metadata:
      labels:
        component: scheduler
        tier: control-plane
        version: second
        k8s-custom-scheduler: my-scheduler
    spec:
      serviceAccountName: my-scheduler
      automountServiceAccountToken: false
After setting this, my scheduler pod does not come up and goes into CrashLoopBackOff.
Error:
I0325 17:37:50.304810 1 flags.go:33] FLAG: --write-config-to=""
I0325 17:37:50.891504 1 serving.go:319] Generated self-signed cert in-memory
W0325 17:37:51.168023 1 authentication.go:387] failed to read in-cluster kubeconfig for delegated authentication: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W0325 17:37:51.168064 1 authentication.go:249] No authentication-kubeconfig provided in order to lookup client-ca-file in configmap/extension-apiserver-authentication in kube-system, so client certificate authentication won't work.
W0325 17:37:51.168072 1 authentication.go:252] No authentication-kubeconfig provided in order to lookup requestheader-client-ca-file in configmap/extension-apiserver-authentication in kube-system, so request-header client certificate authentication won't work.
W0325 17:37:51.168089 1 authorization.go:177] failed to read in-cluster kubeconfig for delegated authorization: open /var/run/secrets/kubernetes.io/serviceaccount/token: no such file or directory
W0325 17:37:51.168102 1 authorization.go:146] No authorization-kubeconfig provided, so SubjectAccessReview of authorization tokens won't work.
W0325 17:37:51.168111 1 options.go:298] Neither --kubeconfig nor --master was specified. Using default API client. This might not work.
invalid configuration: no configuration has been provided
I believe something more needs to be configured along with automountServiceAccountToken: false.
Can someone point me to the additional configuration needed to use automountServiceAccountToken: false?
Configure Service Accounts for Pods
You can access the API from inside a pod using automatically mounted service account credentials.
In version 1.6+, you can opt out of automounting API credentials for a service account by setting automountServiceAccountToken: false on the service account or for a particular pod.
So, when you create a ServiceAccount and a Deployment as in your example YAML files, the credentials for accessing the Kubernetes API are not automatically mounted into the Pod. But your Deployment 'my-scheduler' needs them to access the API.
You can test your ServiceAccount with a dummy Deployment of nginx, for example; it will work without the credentials mounted.
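A minimal sketch of such a test (the name and image below are placeholders, not from the original post); since nginx never talks to the API server, it starts fine without a token:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-sa-test               # hypothetical test Deployment
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx-sa-test
  template:
    metadata:
      labels:
        app: nginx-sa-test
    spec:
      serviceAccountName: my-scheduler   # the ServiceAccount with automountServiceAccountToken: false
      containers:
      - name: nginx
        image: nginx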
Also, if you create a ServiceAccount like in your example:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: my-scheduler
  namespace: kube-system
automountServiceAccountToken: false
You can manually mount the API credentials like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-scheduler
  namespace: kube-system
  ...
spec:
  ...
  template:
    metadata:
      labels:
        app: my-scheduler
    spec:
      containers:
      - image: <YOUR_IMAGE>
        imagePullPolicy: Always
        name: my-scheduler
        volumeMounts:
        - mountPath: /var/run/secrets/kubernetes.io/serviceaccount
          name: kube-api-access
          readOnly: true
      serviceAccountName: my-scheduler
      volumes:
      - name: kube-api-access
        projected:
          defaultMode: 420
          sources:
          - serviceAccountToken:
              expirationSeconds: 3607
              path: token
          - configMap:
              items:
              - key: ca.crt
                path: ca.crt
              name: kube-root-ca.crt
          - downwardAPI:
              items:
              - fieldRef:
                  apiVersion: v1
                  fieldPath: metadata.namespace
                path: namespace
See the Managing Service Accounts link for more information.
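Alternatively, a value set in the Pod spec takes precedence over the ServiceAccount's setting, so you could keep automountServiceAccountToken: false on the ServiceAccount and explicitly opt this one workload back in. A sketch showing only the relevant fields:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-scheduler
  namespace: kube-system
spec:
  ...
  template:
    metadata:
      labels:
        k8s-custom-scheduler: my-scheduler
    spec:
      serviceAccountName: my-scheduler
      automountServiceAccountToken: true   # the Pod-level setting overrides the ServiceAccount's false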
Related
I have a YAML file that has a Deployment and a persistent volume backed by an Azure Fileshare.
Scenario 1 - The mount succeeds when mounting only the logs folder with the Azure Fileshare. This works pretty much as expected.
Scenario 2 - When I try to mount the application configuration file, the mount fails with the Azure Fileshare. The pod keeps restarting, and I am unable to find the files either.
What am I trying to achieve here?
The Azure Fileshare folder is empty before running the YAML, and after running it I expect the application files from the pod to show up in the Azure Fileshare. That isn't happening; instead, the empty Azure Fileshare folder overwrites (shadows) the folder/files in the pod that contain the application.
Is there any way to see the pod's application files in the Azure Fileshare at startup?
e.g. just like a bind mount in docker-compose
Please find the YAML file below:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-product
  namespace: my-pool
  labels:
    app: my-product
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-product
  template:
    metadata:
      labels:
        app: my-product
    spec:
      containers:
      - image: myproductimage:latest
        name: my-product
        imagePullPolicy: Always
        envFrom:
        - configMapRef:
            name: configmap
        env:
        - name: env-file
          value: my-product
        volumeMounts:
        - name: azure
          mountPath: /opt/kube/my-product
      imagePullSecrets:
      - name: secret1
      hostname: my-product
      volumes:
      - name: azure
        persistentVolumeClaim:
          claimName: fileshare-pvc
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: fileshare-pv
  labels:
    usage: fileshare-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: azure-file
  azureFile:
    secretName: secret2
    shareName: myfileshare-folder
    readOnly: false
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: fileshare-pvc
  namespace: my-pool
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: azure-file
  resources:
    requests:
      storage: 5Gi
  selector:
    # To make sure we match the claim with the exact volume, match the label
    matchLabels:
      usage: fileshare-pv
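A note on the behaviour described above: a Kubernetes volume mounted at a path always shadows whatever the image has at that path, and Kubernetes does not copy the image's files into the volume, so an empty share mounted at /opt/kube/my-product hides the application files baked into the image. A common pattern is to mount only a subdirectory with subPath, roughly as in the fragment below (the logs path is illustrative, not taken from the question); to get the image's application files onto the share, they would have to be copied at startup, e.g. by an initContainer or the container's entrypoint.
      containers:
      - name: my-product
        image: myproductimage:latest
        volumeMounts:
        - name: azure
          mountPath: /opt/kube/my-product/logs   # only this subtree comes from the share
          subPath: logs                           # subfolder inside the Azure Fileshare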
I am trying to sync an Azure Key Vault Secret with a Kubernetes Secret of type dockerconfigjson by applying the YAML manifest below, which contains four objects: Pod, SecretProviderClass, AzureIdentity, and AzureIdentityBinding.
All configuration around Key Vault access and managed identity RBAC rules has been done and proven to work, as I can access the Azure Key Vault secret from within the running Pod.
But when applying this manifest, and according to the documentation here, I expect the Kubernetes secret regcred to reflect the Azure Key Vault Secret once I create the Pod with the mounted secret volume; instead, the Kubernetes secret remains unchanged. I have also tried recreating the Pod in an attempt to trigger the sync, but in vain.
Since this is a very declarative way of configuring this functionality, I am also confused about where to look for logs when troubleshooting.
Can someone point out what I may be doing wrong?
apiVersion: v1
kind: Pod
metadata:
  name: demo
  labels:
    aadpodidbinding: webapp
spec:
  containers:
  - name: demo
    image: mcr.microsoft.com/oss/azure/aad-pod-identity/demo:v1.6.3
    volumeMounts:
    - name: web-app-secret
      mountPath: "/mnt/secrets"
      readOnly: true
  nodeSelector:
    kubernetes.io/os: linux
  volumes:
  - name: web-app-secret
    csi:
      driver: secrets-store.csi.k8s.io
      readOnly: true
      volumeAttributes:
        secretProviderClass: web-app-secret-provide
---
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: web-app-secret-provide
spec:
  provider: azure
  parameters:
    usePodIdentity: "true"
    keyvaultName: <key-vault-name>
    objects: |
      array:
        - |
          objectName: registryPassword
          objectType: secret
    tenantId: <tenant-id>
  secretObjects:
  - data:
    - key: .dockerconfigjson
      objectName: registryPassword
    secretName: regcred
    type: kubernetes.io/dockerconfigjson
---
apiVersion: "aadpodidentity.k8s.io/v1"
kind: AzureIdentity
metadata:
  name: kv-managed-identity
spec:
  type: 0
  resourceID: <resource-id>
  clientID: <client-id>
---
apiVersion: "aadpodidentity.k8s.io/v1"
kind: AzureIdentityBinding
metadata:
  name: kv-managed-binding
spec:
  azureIdentity: kv-managed-identity
  selector: web-app
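For context, a secret of type kubernetes.io/dockerconfigjson such as regcred is normally consumed as an image pull secret once the sync has happened (and, as far as I know, the Secrets Store CSI driver only creates the mirrored secret while a pod mounting the SecretProviderClass volume is running). A hypothetical consumer, not part of the manifest above:
apiVersion: v1
kind: Pod
metadata:
  name: app-using-regcred            # hypothetical pod
spec:
  imagePullSecrets:
  - name: regcred                    # the secret defined under secretObjects above
  containers:
  - name: app
    image: <private-registry>/app:latest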
I'm trying to build an Azure DevOps Linux build agent in Azure Kubernetes Service.
I created the YAML file and created the secrets to use inside of it.
I applied the file and got "CreateContainerConfigError", with my pod stuck in a "waiting" state.
I ran the command
"kubectl get pod <pod name> -o yaml"
and it states that the secret "vsts" could not be found.
I find this weird because I ran "kubectl get secrets" and I see the secrets "vsts-account" and "vsts-token" listed.
You may check your Kubernetes configuration, which is supposed to look like the example below:
apiVersion: v1
kind: ReplicationController
metadata:
  name: vsts-agent
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: vsts-agent
        version: "0.1"
    spec:
      containers:
      - name: vsts-agent
        image: microsoft/vsts-agent:ubuntu-16.04-docker-18.06.1-ce-standard
        env:
        - name: VSTS_ACCOUNT
          valueFrom:
            secretKeyRef:
              name: vsts
              key: VSTS_ACCOUNT
        - name: VSTS_TOKEN
          valueFrom:
            secretKeyRef:
              name: vsts
              key: VSTS_TOKEN
        - name: VSTS_POOL
          value: dockerized-vsts-agents
        volumeMounts:
        - mountPath: /var/run/docker.sock
          name: docker-volume
      volumes:
      - name: docker-volume
        hostPath:
          path: /var/run/docker.sock
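Note that the env section above reads both keys from a single secret named vsts. If the cluster instead only has two separate secrets, vsts-account and vsts-token, the secretKeyRef lookup fails with exactly the 'secret "vsts" could not be found' error you are seeing. A sketch of a matching secret (values are placeholders; with stringData they can be given in plain text):
apiVersion: v1
kind: Secret
metadata:
  name: vsts
type: Opaque
stringData:
  VSTS_ACCOUNT: <your-azure-devops-account>   # placeholder
  VSTS_TOKEN: <your-personal-access-token>    # placeholder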
You may follow the blog below to see whether it helps you:
https://mohitgoyal.co/2019/01/10/run-azure-devops-private-agents-in-kubernetes-clusters/
Running 1.15 on AWS EKS.
By default AWS provides eks.privileged PSP (documented here: https://docs.aws.amazon.com/eks/latest/userguide/pod-security-policy.html). This is assigned to all authenticated users.
I then create a more restrictive PSP eks.restricted:
---
# restricted pod security policy
apiVersion: extensions/v1beta1
kind: PodSecurityPolicy
metadata:
  creationTimestamp: null
  labels:
    kubernetes.io/cluster-service: "true"
    eks.amazonaws.com/component: pod-security-policy
  name: eks.restricted
spec:
  allowPrivilegeEscalation: false
  allowedCapabilities:
  - '*'
  fsGroup:
    rule: RunAsAny
  hostPorts:
  - max: 65535
    min: 0
  runAsUser:
    rule: MustRunAsNonRoot
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  volumes:
  - '*'
The above is a non-mutating PSP. I also modify the default eks.privileged PSP to make it mutating by adding the following annotations:
seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
Finally, I update the ClusterRole to add the new PSP I created:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: eks:podsecuritypolicy:privileged
  labels:
    kubernetes.io/cluster-service: "true"
    eks.amazonaws.com/component: pod-security-policy
rules:
- apiGroups:
  - policy
  resourceNames:
  - eks.privileged
  - eks.restricted
  resources:
  - podsecuritypolicies
  verbs:
  - use
What this accomplishes is that eks.restricted becomes the default PSP due to the fact that it is non-mutating (see https://v1-15.docs.kubernetes.io/docs/concepts/policy/pod-security-policy/#policy-order; the order of the list doesn't matter).
That is great. But what I am trying to accomplish is create a single namespace that defaults to eks.restricted while all others default to eks.privileged.
I attempted to do this as such.
First I removed eks.restricted from ClusterRole eks:podsecuritypolicy:privileged so that eks.privileged is now the cluster-wide default. Within my namespace I created a new Role:
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  annotations:
  labels:
    eks.amazonaws.com/component: pod-security-policy
    kubernetes.io/cluster-service: "true"
  name: eks:podsecuritypolicy:restricted
rules:
- apiGroups:
  - policy
  resourceNames:
  - eks.restricted
  resources:
  - podsecuritypolicies
  verbs:
  - use
This Role grants use of the PSP eks.restricted. I then bound this new Role to a ServiceAccount within my example namespace:
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: psp-restricted
  namespace: psp-example
roleRef:
  kind: Role
  name: eks:podsecuritypolicy:restricted
  apiGroup: rbac.authorization.k8s.io
subjects:
- kind: ServiceAccount
  name: privileged-sa
  namespace: psp-example
Finally I created a Deployment that uses this ServiceAccount and violates PSP eks.restricted:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: centos-deployment
  namespace: psp-example
  labels:
    app: centos
spec:
  replicas: 3
  selector:
    matchLabels:
      app: centos
  template:
    metadata:
      labels:
        app: centos
    spec:
      serviceAccountName: privileged-sa
      containers:
      - name: centos
        #image: centos:centos7
        image: datinc/permtest:0
        command:
        - '/bin/sleep'
        - '60000'
My assumption was that this would behave like my initial example/test at the start of this post: my combined access covers both eks.privileged (because it is bound to the system:authenticated group) and eks.restricted (bound to the ServiceAccount my Deployment runs under). Since eks.restricted is non-mutating, it should be the policy that applies, and Pod creation should therefore fail. But that isn't what happens; the Pods start up just fine.
As a further test I added eks.privileged to the SA's Role (listed above), expecting it to behave like my original example. It does not; the Pods are created just fine.
I'm trying to figure out why this is.
On AWS EKS, the Pods of your Deployment are actually created by the ReplicaSet controller, which runs with the ServiceAccount replicaset-controller in the kube-system namespace. That ServiceAccount is still authorized to use eks.privileged through the ClusterRoleBinding eks:podsecuritypolicy:authenticated, so you need to remove that access from the binding or delete the binding.
See this article for the details:
https://dev.to/anupamncsu/pod-security-policy-on-eks-mp9
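For reference, the default binding described in the AWS documentation linked in the question grants the privileged policy to the group system:authenticated, which includes every ServiceAccount (replicaset-controller among them); that is why the ReplicaSet controller can still create the Pods under eks.privileged. Roughly, it looks like the sketch below; you can inspect the real one with kubectl get clusterrolebinding eks:podsecuritypolicy:authenticated -o yaml.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: eks:podsecuritypolicy:authenticated
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: eks:podsecuritypolicy:privileged
subjects:
- kind: Group
  apiGroup: rbac.authorization.k8s.io
  name: system:authenticated        # all authenticated identities, ServiceAccounts included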
I am trying to set up an internal load balancer following the link below:
https://learn.microsoft.com/en-us/azure/aks/internal-lb
I see a non-existent user in the error message I am receiving:
Warning CreatingLoadBalancerFailed 3m (x7 over 9m) service-controller Error creating load balancer (will retry): failed to ensure load balancer for service default/azure-vote-front: network.SubnetsClient#Get: Failure responding to request: StatusCode=403 -- Original Error: autorest/azure: Service returned an error. Status=403 Code="AuthorizationFailed" Message="The client '91c18461-XXXXXXXX---1441d7bcea67' with object id '91c18461-XXXXXXXXX-1441d7bcea67' does not have authorization to perform action 'Microsoft.Network/virtualNetworks/subnets/read' over scope '/subscriptions/996b68c3-ec32-46d4-8d0e-80c6da2c1a3b/resourceGroups/<<resource group>>/providers/Microsoft.Network/virtualNetworks/<<VNET>>/subnets/<<subnet id>>
When I search for this user in my Azure subscription, I do not find it.
Any help would be highly appreciated.
Below is my manifest file:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: azure-vote-back
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: azure-vote-back
    spec:
      containers:
      - name: azure-vote-back
        image: redis
        ports:
        - containerPort: 6379
          name: redis
---
apiVersion: v1
kind: Service
metadata:
  name: azure-vote-back
spec:
  ports:
  - port: 6379
  selector:
    app: azure-vote-back
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: azure-vote-front
spec:
  replicas: 1
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  minReadySeconds: 5
  template:
    metadata:
      labels:
        app: azure-vote-front
    spec:
      containers:
      - name: azure-vote-front
        image: phishbotstagingregistry.azurecr.io/azure-vote-front:v1
        ports:
        - containerPort: 80
        resources:
          requests:
            cpu: 250m
          limits:
            cpu: 500m
        env:
        - name: REDIS
          value: "azure-vote-back"
---
apiVersion: v1
kind: Service
metadata:
  name: azure-vote-front
  annotations:
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: azure-vote-front
When you created the AKS cluster you provided the wrong credentials (or permissions were stripped later), so the service principal used by AKS is not authorized to create that resource (which is exactly what the error states):
Code="AuthorizationFailed" Message="The client
'91c18461-XXXXXXXX---1441d7bcea67' with object id
'91c18461-XXXXXXXXX-1441d7bcea67' does not have authorization to
perform action 'Microsoft.Network/virtualNetworks/subnets/read' over
scope
'/subscriptions/996b68c3-ec32-46d4-8d0e-80c6da2c1a3b/resourceGroups/<>/providers/Microsoft.Network/virtualNetworks/<>/subnets/<>
You can use az aks list --resource-group <your-resource-group> to find your service principal, although the error already gives its client id away.