jobs.batch is forbidden: User "system:serviceaccount:default:default" cannot list resource "jobs" in API group "batch" in the namespace "default" - node.js

I am using the Kubernetes JavaScript client with in-cluster configuration to interact with the cluster.
I am trying to get the list of jobs.
app.js (Node):
app.get("/", (req, res) => {
  k8sApi2
    .listNamespacedJob("default")
    .then((jobsRes) => {
      // use a distinct name so the Express `res` is not shadowed
      console.log(jobsRes.body);
      res.send(jobsRes.body);
    })
    .catch((err) => console.log(err));
});
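For reference, a client such as k8sApi2 is created from the in-cluster configuration; a minimal sketch using @kubernetes/client-node (matching the usage above) looks like this:

const k8s = require("@kubernetes/client-node");

// Load the service account token and CA that Kubernetes mounts into the pod
const kc = new k8s.KubeConfig();
kc.loadFromCluster();

// Jobs belong to the batch/v1 API group
const k8sApi2 = kc.makeApiClient(k8s.BatchV1Api);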
But the pod log shows the forbidden error quoted in the title.
My Deployment and Service manifests are not reproduced here.
Also, I created a Role and a RoleBinding, but I still have no idea what causes this issue.
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: node-apis
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-apis
rules:
- apiGroups:
  - ""
  - "apps"
  - "batch"
  resources:
  - endpoints
  - deployments
  - pods
  - jobs
  verbs:
  - get
  - list
  - watch
  - create
  - delete
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-apis
  namespace: default
subjects:
- kind: ServiceAccount
  name: node-apis
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: node-apis
I am new to Kubernetes. Any help?

You need to use the service account by specifying it in the spec section of the pod. Since you are not doing that, the pod runs under the default service account, which has no Role and RoleBinding permitting the operation, leading to the forbidden error.
spec:
  serviceAccountName: node-apis
  containers:
  ...
Alternatively, you can give the permissions to the default service account in the RoleBinding:
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: node-apis
  namespace: default
subjects:
- kind: ServiceAccount
  name: default
  namespace: default
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: node-apis
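Either way, you can check the grant without redeploying the app; for example, for the first approach:

$ kubectl auth can-i list jobs.batch -n default --as=system:serviceaccount:default:node-apis

This should print "yes" once the Role and RoleBinding are in place.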

Related

KEDAScalerFailed : no azure identity found for request clientID

I tried various methods but was not able to access the Azure Storage Queues via PodIdentity. The resource group and client ID already exist.
The steps:-
kubectl create namespace keda
helm install keda kedacore/keda --set podIdentity.activeDirectory.identity= --namespace keda
kubectl create namespace myapp
The first few sections of myapp.yaml:
apiVersion: aadpodidentity.k8s.io/v1
kind: AzureIdentity
metadata:
  name: <idvalue>
  namespace: myapp
spec:
  clientID: "<clientId>"
  resourceID: "<resourceId>"
  type: 0
---
apiVersion: aadpodidentity.k8s.io/v1
kind: AzureIdentityBinding
metadata:
  name: <idvalue>-binding
  namespace: myapp
spec:
  azureIdentity: <idvalue>
  selector: <idvalue> # keeping same as identity
---
The rest of the file is the deployment section, so I am not pasting it here.
Then I ran Helm to deploy myapp.yaml via the myappInt.values.yaml file ->
helm install -f C:\MyApp\myappInt.values.yaml (this file contains the clustername, role etc.)
myappInt.values.yaml file:-
image:
  registry: <registryname>
deployment:
  environment: INT
  clusterName: <clustername>
  clusterRole: <clusterrole>
  region: <region>
  processingRegion: <processingregion>
  azureIdentityClientId: "<clientId>"
  azureIdentityResourceId: "<resourceId>"
Then the scaler ->
kubectl apply -f c:\MyApp\kedascaling.yaml --namespace myapp
The kedascaling.yaml:-
apiVersion: keda.sh/v1alpha1
kind: TriggerAuthentication
metadata:
  name: keda-pod-identity-auth
spec:
  podIdentity:
    provider: azure
---
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: myapp-scaledobject
  namespace: myapp
spec:
  scaleTargetRef:
    name: myapp                    # corresponds with the Deployment name
  minReplicaCount: 2
  maxReplicaCount: 3
  triggers:
  - type: azure-queue
    metadata:
      queueName: myappqueue        # required
      accountName: myappstorage    # required when pod identity is used
      queueLength: "1"             # required
    authenticationRef:
      name: keda-pod-identity-auth # the AuthenticationRef needs pod identity
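The resulting Event can be surfaced with, for example:

$ kubectl describe scaledobject myapp-scaledobject -n myapp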
Finally it gives the error below:-
kind: Event
apiVersion: v1
metadata:
  name: myapp-scaledobject.16def024b939fdf2
  namespace: myappnamespace
  uid: someuid
  resourceVersion: '186302648'
  creationTimestamp: '2022-03-23T06:55:54Z'
  managedFields:
  - manager: keda
    operation: Update
    apiVersion: v1
    time: '2022-03-23T06:55:54Z'
    fieldsType: FieldsV1
    fieldsV1:
      f:count: {}
      f:firstTimestamp: {}
      f:involvedObject:
        f:apiVersion: {}
        f:kind: {}
        f:name: {}
        f:namespace: {}
        f:resourceVersion: {}
        f:uid: {}
      f:lastTimestamp: {}
      f:message: {}
      f:reason: {}
      f:source:
        f:component: {}
      f:type: {}
involvedObject:
  kind: ScaledObject
  namespace: myapp
  name: myapp-scaledobject
  uid: <some id>
  apiVersion: keda.sh/v1alpha1
  resourceVersion: '<some version>'
reason: KEDAScalerFailed
message: |
  no azure identity found for request clientID
source:
  component: keda-operator
firstTimestamp: '2022-03-23T06:55:54Z'
lastTimestamp: '2022-03-23T07:30:54Z'
count: 71
type: Warning
eventTime: null
reportingComponent: ''
reportingInstance: ''
Any idea what I am doing wrong here? Any help would be greatly appreciated. I asked at the KEDA repo but got no response.
I had a similar error recently... I needed to make sure that the AAD Pod Identity was in the same namespace as the KEDA operator service.
Whatever identity you assigned to KEDA when creating KEDA with HELM, ensure that it's within the same namespace (which in your instance is "keda").
For example after running:
helm install keda kedacore/keda --set podIdentity.activeDirectory.identity=my-keda-identity --namespace keda
if my-keda-identity is not in namespace "keda" then the KEDA operator will not be able to bind AAD because it can't find it. If you need to update the AAD reference you can simply run:
helm upgrade keda kedacore/keda --set podIdentity.activeDirectory.identity=my-second-app-reference --namespace keda
Next, recreate the KEDA operator pod (I like to do this to test things out in a clean manner) and then run the following command to see if binding worked:
kubectl logs -n keda <keda-operator-pod-name> -c keda-operator
You should see the error go away (as long as the identity has access to retrieve queue messages from Azure Storage via RBAC).
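To double-check where the identity actually lives, you can also list the aad-pod-identity resources across all namespaces (assuming the AAD Pod Identity CRDs are installed):

$ kubectl get azureidentity --all-namespaces
$ kubectl get azureidentitybinding --all-namespaces

The AzureIdentity referenced in the Helm value should appear in the keda namespace.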

How to authenticate against AAD (Azure Active Directory) with oauth2_proxy and obtain Access Token

I'm trying to authenticate against AAD (Azure Active Directory) with oauth2_proxy, used in Kubernetes, to obtain an Access Token.
First of all, I'm struggling to get the correct authentication flow to work.
Second, after being redirected to my application, the Access Token is not in the request headers specified in the oauth2_proxy documentation.
Here is some input on authenticating against Azure Active Directory (AAD) using oauth2_proxy in Kubernetes.
First you need to create an application in AAD and grant it the email, profile and User.Read permissions to Microsoft Graph.
By default, after logging in against the Microsoft authentication server, you are redirected to the root of the website with an authentication code (e.g. https://exampler.com/). You would expect the Access Token to be visible there, but this is a faulty assumption. The URL the Access Token is injected into is https://exampler.com/oauth2!
A successful oauth2_proxy configuration that worked is below.
oauth2-proxy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    k8s-app: oauth2-proxy
  name: oauth2-proxy
  namespace: oa2p
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: oauth2-proxy
  template:
    metadata:
      labels:
        k8s-app: oauth2-proxy
    spec:
      containers:
      - args:
        - --provider=oidc
        - --azure-tenant=88888888-aaaa-bbbb-cccc-121212121212
        - --email-domain=example.com
        - --http-address=0.0.0.0:4180
        - --set-authorization-header=true
        - --set-xauthrequest=true
        - --pass-access-token=true
        - --pass-authorization-header=true
        - --pass-user-headers=true
        - --pass-host-header=true
        - --skip-jwt-bearer-tokens=true
        - --oidc-issuer-url=https://login.microsoftonline.com/88888888-aaaa-bbbb-cccc-121212121212/v2.0
        env:
        - name: OAUTH2_PROXY_CLIENT_ID
          valueFrom:
            secretKeyRef:
              name: oauth2-proxy-secret
              key: OAUTH2_PROXY_CLIENT_ID
        - name: OAUTH2_PROXY_CLIENT_SECRET
          valueFrom:
            secretKeyRef:
              name: oauth2-proxy-secret
              key: OAUTH2_PROXY_CLIENT_SECRET
        - name: OAUTH2_PROXY_COOKIE_SECRET
          valueFrom:
            secretKeyRef:
              name: oauth2-proxy-secret
              key: OAUTH2_PROXY_COOKIE_SECRET
        image: quay.io/oauth2-proxy/oauth2-proxy:v7.1.3
        imagePullPolicy: Always
        name: oauth2-proxy
        ports:
        - containerPort: 4180
          protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: oauth2-proxy
  name: oauth2-proxy
  namespace: oa2p
spec:
  ports:
  - name: http
    port: 4180
    protocol: TCP
    targetPort: 4180
  selector:
    k8s-app: oauth2-proxy
ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: oa2p
  namespace: oa2p
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    nginx.ingress.kubernetes.io/limit-rps: "1"
    nginx.ingress.kubernetes.io/auth-url: "https://$host/oauth2/auth"
    nginx.ingress.kubernetes.io/auth-signin: "https://$host/oauth2/start?rd=$escaped_request_uri"
    nginx.ingress.kubernetes.io/auth-response-headers: "X-Auth-Request-Email,X-Auth-Request-Preferred-Username"
spec:
  tls:
  - hosts:
    - oa2p.example.com
    secretName: oa2p-tls
  rules:
  - host: oa2p.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: oa2p
            port:
              number: 8080
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: oa2p-proxy
  namespace: oa2p
  annotations:
    kubernetes.io/ingress.class: nginx
    nginx.ingress.kubernetes.io/limit-rps: "1"
    nginx.ingress.kubernetes.io/proxy-buffer-size: "8k"
spec:
  tls:
  - hosts:
    - oa2p.example.com
    secretName: oa2p-tls
  rules:
  - host: oa2p.example.com
    http:
      paths:
      - path: /oauth2
        pathType: Prefix
        backend:
          service:
            name: oauth2-proxy
            port:
              number: 4180
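With the auth-response-headers annotation above, the upstream application receives the identity as HTTP headers. A minimal Express sketch reading them (the port and response body are illustrative, not from the original post):

const express = require("express");
const app = express();

app.get("/", (req, res) => {
  // Headers set by oauth2-proxy and forwarded by the nginx ingress,
  // per the auth-response-headers annotation
  const email = req.headers["x-auth-request-email"];
  const user = req.headers["x-auth-request-preferred-username"];
  res.send(`Hello ${user} (${email})`);
});

app.listen(8080);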

Create kubeconfig with restricted permission

I need to create a kubeconfig with restricted access. I want to be able to grant permission to update a ConfigMap in a specific namespace. How can I create such a kubeconfig with the following permissions:
for a specific namespace (myns)
update only the ConfigMap (mycm)
Is there a simple way to create it?
The tricky part here is that some program needs access to cluster X and must modify only this ConfigMap. How would I do that from an outside process without providing the full kubeconfig file, which can be problematic for security reasons?
To make it clear: I own the cluster, I just want to give some program restricted permissions.
This is not straightforward, but it is still possible.
Create the namespace myns if it does not exist.
$ kubectl create ns myns
namespace/myns created
Create a service account cm-user in the myns namespace. This creates a secret token as well.
$ kubectl create sa cm-user -n myns
serviceaccount/cm-user created
$ kubectl get sa cm-user -n myns
NAME      SECRETS   AGE
cm-user   1         18s
$ kubectl get secrets -n myns
NAME                  TYPE                                  DATA   AGE
cm-user-token-kv5j5   kubernetes.io/service-account-token   3      63s
default-token-m7j9v   kubernetes.io/service-account-token   3      96s
Get the token and ca.crt from cm-user-token-kv5j5 secret.
$ kubectl get secrets cm-user-token-kv5j5 -n myns -oyaml
Base64 decode the value of token from cm-user-token-kv5j5.
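For example, one way to extract and decode the token in a single step (the secret name is the one generated above):

$ kubectl get secret cm-user-token-kv5j5 -n myns -o jsonpath='{.data.token}' | base64 -d

The ca.crt value, by contrast, can be copied from the secret as-is: certificate-authority-data expects the base64-encoded form.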
Now create a user using the decoded token.
$ kubectl config set-credentials cm-user --token=<decoded token value>
User "cm-user" set.
Now generate a kubeconfig file kubeconfig-cm.
apiVersion: v1
kind: Config
clusters:
- cluster:
    certificate-authority-data: <ca.crt value from cm-user-token-kv5j5 secret>
    server: <kubernetes server>
  name: <cluster>
contexts:
- context:
    cluster: <cluster>
    namespace: myns
    user: cm-user
  name: cm-user
current-context: cm-user
users:
- name: cm-user
  user:
    token: <decoded token>
Now create a Role and a RoleBinding for the sa cm-user.
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: myns
  name: cm-user-role
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["update", "get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cm-user-rb
  namespace: myns
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: cm-user-role
subjects:
- namespace: myns
  kind: ServiceAccount
  name: cm-user
We are done. Now using this kubeconfig file you can update the mycm configmap; it doesn't have any other privileges. (Strictly speaking, the Role above permits those verbs on any ConfigMap in myns; to pin get and update to mycm alone, add resourceNames: ["mycm"] to the rule. Note that list cannot be restricted by resourceNames.)
$ kubectl get cm -n myns --kubeconfig kubeconfig-cm
NAME   DATA   AGE
mycm   0      8s
$ kubectl delete cm mycm -n myns --kubeconfig kubeconfig-cm
Error from server (Forbidden): configmaps "mycm" is forbidden: User "system:serviceaccount:myns:cm-user" cannot delete resource "configmaps" in API group "" in the namespace "myns"
You need to use RBAC: define a Role, then bind that Role to a user or ServiceAccount using a RoleBinding.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: configmap-reader
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["configmaps"]
  verbs: ["update", "get", "list"]
---
apiVersion: rbac.authorization.k8s.io/v1
# This role binding allows "jane" to read config maps in the "default" namespace.
# You need to already have a Role named "configmap-reader" in that namespace.
kind: RoleBinding
metadata:
  name: read-configmap
  namespace: default
subjects:
# You can specify more than one "subject"
- kind: User
  name: jane # "name" is case sensitive
  apiGroup: rbac.authorization.k8s.io
roleRef:
  # "roleRef" specifies the binding to a Role / ClusterRole
  kind: Role # this must be Role or ClusterRole
  name: configmap-reader # this must match the name of the Role or ClusterRole you wish to bind to
  apiGroup: rbac.authorization.k8s.io
https://kubernetes.io/docs/reference/access-authn-authz/rbac/

Kubernetes volume mounting

I'm trying to mount a directory into my pods, but it always shows me the error "no file or directory found".
This is my YAML file used for the deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp1-deployment
  labels:
    app: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      volumes:
      - name: test-mount-1
        persistentVolumeClaim:
          claimName: task-pv-claim-1
      containers:
      - name: myapp
        image: 192.168.11.168:5002/dev:0.0.1-SNAPSHOT-6f4b1db
        command: ["java -jar /jar/myapp1-0.0.1-SNAPSHOT.jar --spring.config.location=file:/etc/application.properties"]
        ports:
        - containerPort: 8080
        volumeMounts:
        - mountPath: "/etc/application.properties"
          #subPath: application.properties
          name: test-mount-1
      # hostNetwork: true
      imagePullSecrets:
      - name: regcred
      #volumes:
      #  - name: test-mount
and this is the persistent volume config:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: test-mount-1
  labels:
    type: local
    app: myapp
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteMany
  hostPath:
    path: "/mnt/share"
and this is the claim volume config:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim-1
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
and this is the service config used for the deployment:
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  selector:
    app: myapp
  externalIPs:
  - 192.168.11.145
  ports:
  - protocol: TCP
    port: 8080
    nodePort: 31000
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
If anyone can help, I will be grateful. Thanks.
You haven't included your storage class in your question, but I'm assuming you're attempting local storage on a node. Might be a simple thing to check, but does the directory exist on the node where your pod is running, and is it writeable? Depending on how many worker nodes you have, your pod could be running on any node, and the PV isn't tied to any particular node. You could use node affinity to ensure that your pod runs on the same node that contains the directory referenced in your PV, if that's the issue.
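For instance, a minimal sketch of pinning the pods to a particular node via node affinity in the Deployment's pod template (the hostname value here is hypothetical):

spec:
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
        - matchExpressions:
          - key: kubernetes.io/hostname
            operator: In
            values:
            - worker-node-1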
Edit: if it's NFS, you need to change your PV to include:
nfs:
  path: /mnt/share
  server: <nfs server node ip/fqdn>

How to use kubernetes secrets in nodejs application?

I have a Kubernetes cluster on GCP running my Express/Node.js application, performing CRUD operations against MongoDB.
I created one secret containing the username and password, and specified that secret as environment variables in my Kubernetes YAML file for connecting to MongoDB.
Now my question is: how do I access that username and password in the Node.js application for connecting to MongoDB?
I tried process.env.SECRET_USERNAME and process.env.SECRET_PASSWORD in the Node.js application; both are undefined.
Any idea will be appreciated.
Secret.yaml
apiVersion: v1
data:
  password: pppppppppppp==
  username: uuuuuuuuuuuu==
kind: Secret
metadata:
  creationTimestamp: 2018-07-11T11:43:25Z
  name: test-mongodb-secret
  namespace: default
  resourceVersion: "00999"
  selfLink: /api-path-to/secrets/test-mongodb-secret
  uid: 0900909-9090saiaa00-9dasd0aisa-as0a0s-
type: Opaque
kubernetes.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "4"
  creationTimestamp: 2018-07-11T11:09:45Z
  generation: 5
  labels:
    name: test
  name: test
  namespace: default
  resourceVersion: "90909"
  selfLink: /api-path-to/default/deployments/test
  uid: htff50d-8gfhfa-11egfg-9gf1-42010gffgh0002a
spec:
  replicas: 1
  selector:
    matchLabels:
      name: test
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        name: test
    spec:
      containers:
      - env:
        - name: SECRET_USERNAME
          valueFrom:
            secretKeyRef:
              key: username
              name: test-mongodb-secret
        - name: SECRET_PASSWORD
          valueFrom:
            secretKeyRef:
              key: password
              name: test-mongodb-secret
        image: gcr-image/env-test_node:latest
        imagePullPolicy: Always
        name: env-test-node
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status:
  availableReplicas: 1
  conditions:
  - lastTransitionTime: 2018-07-11T11:10:18Z
    lastUpdateTime: 2018-07-11T11:10:18Z
    message: Deployment has minimum availability.
    reason: MinimumReplicasAvailable
    status: "True"
    type: Available
  observedGeneration: 5
  readyReplicas: 1
  replicas: 1
  updatedReplicas: 1
Your kubernetes.yaml file specifies which environment variables receive your secret values, so they are accessible to your app in that pod.
Using the kubectl secrets CLI interface you can upload your secret.
kubectl create secret generic -n node-app test-mongodb-secret --from-literal=username=a-username --from-literal=password=a-secret-password
(the namespace arg -n node-app is optional, else it will upload to the default namespace)
After running this command, you can check your kube dashboard to see that the secret has been saved.
Then from your node app, access the environment variable process.env.SECRET_PASSWORD.
Perhaps in your case the secrets are created in the wrong namespace, hence the undefined values in your application.
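As an illustration, a minimal connection sketch; the mongodb driver, the mongo:27017 host, and the database name are assumptions here, not from the original post:

const { MongoClient } = require("mongodb");

// Injected from the Secret via secretKeyRef in the Deployment
const user = process.env.SECRET_USERNAME;
const pass = process.env.SECRET_PASSWORD;

// Hypothetical host/db; substitute your MongoDB service name
const uri = `mongodb://${encodeURIComponent(user)}:${encodeURIComponent(pass)}@mongo:27017/mydb`;

MongoClient.connect(uri)
  .then(() => console.log("connected to MongoDB"))
  .catch((err) => console.error(err));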
EDIT 1
Your indentation for containers.env seems to be wrong; compare with this working example:
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-pod
spec:
  containers:
  - name: mycontainer
    image: redis
    env:
    - name: SECRET_USERNAME
      valueFrom:
        secretKeyRef:
          name: mysecret
          key: username
    - name: SECRET_PASSWORD
      valueFrom:
        secretKeyRef:
          name: mysecret
          key: password
  restartPolicy: Never
