Kubernetes volume mounting - linux

I'm trying to mount a directory into my pods, but it always fails with the error "no file or directory found".
This is the YAML file used for the deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp1-deployment
  labels:
    app: myapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: myapp
  template:
    metadata:
      labels:
        app: myapp
    spec:
      volumes:
        - name: test-mount-1
          persistentVolumeClaim:
            claimName: task-pv-claim-1
      containers:
        - name: myapp
          image: 192.168.11.168:5002/dev:0.0.1-SNAPSHOT-6f4b1db
          command: ["java -jar /jar/myapp1-0.0.1-SNAPSHOT.jar --spring.config.location=file:/etc/application.properties"]
          ports:
            - containerPort: 8080
          volumeMounts:
            - mountPath: "/etc/application.properties"
              #subPath: application.properties
              name: test-mount-1
      # hostNetwork: true
      imagePullSecrets:
        - name: regcred
      #volumes:
      #  - name: test-mount
And this is the persistent volume config:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: test-mount-1
  labels:
    type: local
    app: myapp
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/mnt/share"
And this is the persistent volume claim config:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim-1
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
And this is the service config used for the deployment:
apiVersion: v1
kind: Service
metadata:
  name: myapp-service
spec:
  selector:
    app: myapp
  externalIPs:
    - 192.168.11.145
  ports:
    - protocol: TCP
      port: 8080
      nodePort: 31000
  type: LoadBalancer
status:
  loadBalancer:
    ingress:
If anyone can help, I would be grateful. Thanks.

You haven't included your storage class in your question, but I'm assuming you're attempting local storage on a node. It might be a simple thing to check, but does the directory exist on the node where your pod is running, and is it writable? Depending on how many worker nodes you have, your pod could be running on any node, and the PV isn't pinned to any particular node. You could use node affinity to ensure that your pod runs on the same node that contains the directory referenced in your PV, if that's the issue (see the sketch below).
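A minimal sketch of such a node affinity rule in the Deployment's pod template, assuming worker-node-1 is the node that actually has /mnt/share (replace it with your own node's hostname):
spec:
  template:
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/hostname
                    operator: In
                    values:
                      - worker-node-1   # assumption: the node where /mnt/share exists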
Edit: if it's NFS, you need to change your PV to include:
  nfs:
    path: /mnt/share
    server: <nfs server node ip/fqdn>
Example here
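For reference, a complete NFS-backed PersistentVolume for this setup might look like the following sketch (the server value is a placeholder you would fill in):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: test-mount-1
spec:
  storageClassName: manual
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  nfs:
    path: /mnt/share
    server: <nfs server node ip/fqdn>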

Related

Azure CLI error json: cannot unmarshal array into Go value of type unstructured.detector

I am new to Azure Kubernetes Service. I have created an Azure Kubernetes cluster and tried to deploy some workload to it. The .yaml file is as follows:
- apiVersion: v1
  kind: Namespace
  metadata:
    name: azure-vote
  spec:
    finalizers:
      - kubernetes
- apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: azure-vote-back
    namespace: azure-vote
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: azure-vote-back
    template:
      metadata:
        labels:
          app: azure-vote-back
      spec:
        nodeSelector:
          beta.kubernetes.io/os: linux
        containers:
          - name: azure-vote-back
            image: mcr.microsoft.com/oss/bitnami/redis:6.0.8
            env:
              - name: ALLOW_EMPTY_PASSWORD
                value: 'yes'
            resources:
              requests:
                cpu: 100m
                memory: 128Mi
              limits:
                cpu: 250m
                memory: 256Mi
            ports:
              - containerPort: 6379
                name: redis
- apiVersion: v1
  kind: Service
  metadata:
    name: azure-vote-back
    namespace: azure-vote
  spec:
    ports:
      - port: 6379
    selector:
      app: azure-vote-back
- apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: azure-vote-front
    namespace: azure-vote
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: azure-vote-front
    template:
      metadata:
        labels:
          app: azure-vote-front
      spec:
        nodeSelector:
          beta.kubernetes.io/os: linux
        containers:
          - name: azure-vote-front
            image: mcr.microsoft.com/azuredocs/azure-vote-front:v1
            resources:
              requests:
                cpu: 100m
                memory: 128Mi
              limits:
                cpu: 250m
                memory: 256Mi
            ports:
              - containerPort: 80
            env:
              - name: REDIS
                value: azure-vote-back
- apiVersion: v1
  kind: Service
  metadata:
    name: azure-vote-front
    namespace: azure-vote
  spec:
    type: LoadBalancer
    ports:
      - port: 80
    selector:
      app: azure-vote-front
When I deploy this .yaml via the Azure CLI it gives me a validation error but doesn't indicate where it is. When I run kubectl apply -f ./filename.yaml --validate=false it gives the error "cannot unmarshal array into Go value of type unstructured.detector". However, when I run the same YAML in the Azure portal UI it deploys without any error. I would appreciate it if someone could explain the reason for this and how to fix it.
I tried to run the code you provided in the Portal as well as with the Azure CLI. It was created successfully through the Portal UI by adding the YAML code, but with the Azure CLI I received the same error as you.
After modifying your YAML file, splitting the top-level list into separate documents with --- and updating the nodeSelector label to kubernetes.io/os, and then validating it, I ran the same command again and it deployed successfully from the Azure CLI:
YAML File:
apiVersion: v1
kind: Namespace
metadata:
  name: azure-vote
spec:
  finalizers:
    - kubernetes
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: azure-vote-back
  namespace: azure-vote
spec:
  replicas: 1
  selector:
    matchLabels:
      app: azure-vote-back
  template:
    metadata:
      labels:
        app: azure-vote-back
    spec:
      nodeSelector:
        "kubernetes.io/os": linux
      containers:
        - name: azure-vote-back
          image: mcr.microsoft.com/oss/bitnami/redis:6.0.8
          env:
            - name: ALLOW_EMPTY_PASSWORD
              value: "yes"
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 250m
              memory: 256Mi
          ports:
            - containerPort: 6379
              name: redis
---
apiVersion: v1
kind: Service
metadata:
  name: azure-vote-back
spec:
  ports:
    - port: 6379
  selector:
    app: azure-vote-back
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: azure-vote-front
  namespace: azure-vote
spec:
  replicas: 1
  selector:
    matchLabels:
      app: azure-vote-front
  template:
    metadata:
      labels:
        app: azure-vote-front
    spec:
      nodeSelector:
        "kubernetes.io/os": linux
      containers:
        - name: azure-vote-front
          image: mcr.microsoft.com/azuredocs/azure-vote-front:v1
          resources:
            requests:
              cpu: 100m
              memory: 128Mi
            limits:
              cpu: 250m
              memory: 256Mi
          ports:
            - containerPort: 80
          env:
            - name: REDIS
              value: "azure-vote-back"
---
apiVersion: v1
kind: Service
metadata:
  name: azure-vote-front
spec:
  type: LoadBalancer
  ports:
    - port: 80
  selector:
    app: azure-vote-front
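The underlying issue is that kubectl expects each YAML document in a file to be a single object (or several objects separated by ---), not a bare top-level list, which is why the original file fails to unmarshal. If you prefer to keep everything in one document, a sketch of the equivalent v1 List wrapper (only the Namespace is shown here; the remaining resources would be added under items in the same way):
apiVersion: v1
kind: List
items:
  - apiVersion: v1
    kind: Namespace
    metadata:
      name: azure-vote
    spec:
      finalizers:
        - kubernetes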

Azure Key Vault integration with AKS works for nginx tutorial Pod, but not actual project deployment

Per the title, I have the integration working following the documentation.
I can deploy the nginx.yaml and after about 70 seconds I can print out secrets with:
kubectl exec -it nginx -- cat /mnt/secrets-store/secret1
Now I'm trying to apply it to a PostgreSQL deployment for testing and I get the following from the Pod description:
Warning FailedMount 3s kubelet MountVolume.SetUp failed for volume "secrets-store01-inline" : rpc error: code = Unknown desc = failed to mount secrets store objects for pod staging/postgres-deployment-staging-69965ff767-8hmww, err: rpc error: code = Unknown desc = failed to mount objects, error: failed to get keyvault client: failed to get key vault token: nmi response failed with status code: 404, err: <nil>
And from the nmi logs:
E0221 22:54:32.037357 1 server.go:234] failed to get identities, error: getting assigned identities for pod staging/postgres-deployment-staging-69965ff767-8hmww in CREATED state failed after 16 attempts, retry duration [5]s, error: <nil>. Check MIC pod logs for identity assignment errors
I0221 22:54:32.037409 1 server.go:192] status (404) took 80003389208 ns for req.method=GET reg.path=/host/token/ req.remote=127.0.0.1
Not sure why, since I basically copied the settings from the nginx.yaml into the postgres.yaml. Here they are:
# nginx.yaml
kind: Pod
apiVersion: v1
metadata:
  name: nginx
  namespace: staging
  labels:
    aadpodidbinding: aks-akv-identity-binding-selector
spec:
  containers:
    - name: nginx
      image: nginx
      volumeMounts:
        - name: secrets-store01-inline
          mountPath: /mnt/secrets-store
          readOnly: true
  volumes:
    - name: secrets-store01-inline
      csi:
        driver: secrets-store.csi.k8s.io
        readOnly: true
        volumeAttributes:
          secretProviderClass: aks-akv-secret-provider
# postgres.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-deployment-staging
  namespace: staging
  labels:
    aadpodidbinding: aks-akv-identity-binding-selector
spec:
  replicas: 1
  selector:
    matchLabels:
      component: postgres
  template:
    metadata:
      labels:
        component: postgres
    spec:
      containers:
        - name: postgres
          image: postgres:13-alpine
          ports:
            - containerPort: 5432
          volumeMounts:
            - name: secrets-store01-inline
              mountPath: /mnt/secrets-store
              readOnly: true
            - name: postgres-storage-staging
              mountPath: /var/postgresql
      volumes:
        - name: secrets-store01-inline
          csi:
            driver: secrets-store.csi.k8s.io
            readOnly: true
            volumeAttributes:
              secretProviderClass: aks-akv-secret-provider
        - name: postgres-storage-staging
          persistentVolumeClaim:
            claimName: postgres-storage-staging
---
apiVersion: v1
kind: Service
metadata:
  name: postgres-cluster-ip-service-staging
  namespace: staging
spec:
  type: ClusterIP
  selector:
    component: postgres
  ports:
    - port: 5432
      targetPort: 5432
Suggestions for what the issue is here?
Oversight on my part... the aadpodidbinding label should be in the template metadata, per:
https://azure.github.io/aad-pod-identity/docs/best-practices/#deploymenthttpskubernetesiodocsconceptsworkloadscontrollersdeployment
The resulting YAML should be:
# postgres.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres-deployment-production
  namespace: production
spec:
  replicas: 1
  selector:
    matchLabels:
      component: postgres
  template:
    metadata:
      labels:
        component: postgres
        aadpodidbinding: aks-akv-identity-binding-selector
    spec:
      containers:
        - name: postgres
          image: postgres:13-alpine
          ports:
            - containerPort: 5432
          env:
            - name: POSTGRES_DB_FILE
              value: /mnt/secrets-store/DEV-PGDATABASE
            - name: POSTGRES_USER_FILE
              value: /mnt/secrets-store/DEV-PGUSER
            - name: POSTGRES_PASSWORD_FILE
              value: /mnt/secrets-store/DEV-PGPASSWORD
            - name: POSTGRES_INITDB_ARGS
              value: "-A md5"
            - name: PGDATA
              value: /var/postgresql/data
          volumeMounts:
            - name: secrets-store01-inline
              mountPath: /mnt/secrets-store
              readOnly: true
            - name: postgres-storage-production
              mountPath: /var/postgresql
      volumes:
        - name: secrets-store01-inline
          csi:
            driver: secrets-store.csi.k8s.io
            readOnly: true
            volumeAttributes:
              secretProviderClass: aks-akv-secret-provider
        - name: postgres-storage-production
          persistentVolumeClaim:
            claimName: postgres-storage-production
---
apiVersion: v1
kind: Service
metadata:
  name: postgres-cluster-ip-service-production
  namespace: production
spec:
  type: ClusterIP
  selector:
    component: postgres
  ports:
    - port: 5432
      targetPort: 5432
Adding the label in the pod template will resolve the issue: use the label aadpodidbinding: <your azure pod identity selector> in the template labels section of the deployment.yaml file.
Sample deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
        aadpodidbinding: azure-pod-identity-binding-selector
    spec:
      containers:
        - name: nginx
          image: nginx
          env:
            - name: SECRET
              valueFrom:
                secretKeyRef:
                  name: test-secret
                  key: key
          volumeMounts:
            - name: secrets-store-inline
              mountPath: "/mnt/secrets-store"
              readOnly: true
      volumes:
        - name: secrets-store-inline
          csi:
            driver: secrets-store.csi.k8s.io
            readOnly: true
            volumeAttributes:
              secretProviderClass: dev-1spc
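If it helps with debugging, you could also confirm that aad-pod-identity actually assigned the identity to the pod after adding the label. A rough sketch, assuming the standard aad-pod-identity CRDs are installed (the namespace and the MIC label selector below are assumptions about your installation):
kubectl get azureassignedidentities -A
# inspect the MIC logs referenced in the NMI error above; adjust namespace/selector to your install
kubectl logs -n <aad-pod-identity-namespace> -l app.kubernetes.io/component=mic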

write access error for mounted volume on kubernetes

We were deploying ActiveMQ in Azure Kubernetes Service (AKS), with the ActiveMQ data folder mounted on an Azure managed disk as a persistent volume claim. Below is the YAML used for the deployment.
ActiveMQ Image used: rmohr/activemq
Kubernetes Version: v1.15.7
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: activemqcontainer
spec:
  replicas: 1
  selector:
    matchLabels:
      app: activemqcontainer
  template:
    metadata:
      labels:
        app: activemqcontainer
    spec:
      securityContext:
        runAsUser: 1000
        fsGroup: 2000
        runAsNonRoot: false
      containers:
        - name: web
          image: azureregistry.azurecr.io/rmohractivemq
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 61616
          volumeMounts:
            - mountPath: /opt/activemq/data
              subPath: data
              name: volume
            - mountPath: /opt/apache-activemq-5.15.6/conf/activemq.xml
              name: config-xml
              subPath: activemq.xml
      imagePullSecrets:
        - name: secret
      volumes:
        - name: config-xml
          configMap:
            name: active-mq-xml
        - name: volume
          persistentVolumeClaim:
            claimName: azure-managed-disk
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: azure-managed-disk
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: managed-premium
  resources:
    requests:
      storage: 100Gi
I am getting the below error:
WARN | Failed startup of context o.e.j.w.WebAppContext#517566b{/admin,file:/opt/apache-activemq-5.15.6/webapps/admin/,null}
java.lang.IllegalStateException: Parent for temp dir not configured correctly: writeable=false
at org.eclipse.jetty.webapp.WebInfConfiguration.makeTempDirectory(WebInfConfiguration.java:336)[jetty-all-9.2.25.v20180606.jar:9.2.25.v20180606]
at org.eclipse.jetty.webapp.WebInfConfiguration.resolveTempDirectory(WebInfConfiguration.java:304)[jetty-all-9.2.25.v20180606.jar:9.2.25.v20180606]
at org.eclipse.jetty.webapp.WebInfConfiguration.preConfigure(WebInfConfiguration.java:69)[jetty-all-9.2.25.v20180606.jar:9.2.25.v20180606]
at org.eclipse.jetty.webapp.WebAppContext.preConfigure(WebAppContext.java:468)[jetty-all-9.2.25.v20180606.jar:9.2.25.v20180606]
at org.eclipse.jetty.webapp.WebAppContext.doStart(WebAppContext.java:504)[jetty-all-9.2.25.v20180606.jar:9.2.25.v20180606]
at org.eclipse.jetty.util.component.AbstractLifeCycle.start(AbstractLifeCycle.java:68)[jetty-all-9.2.25.v20180606.jar:9.2.25.v20180606]
at org.eclipse.jetty.util.component.ContainerLifeCycle.start(ContainerLifeCycle.java:132)[jetty-all-9.2.25.v20180606.jar:9.2.25.v20180606]
at org.eclipse.jetty.util.component.ContainerLifeCycle.doStart(ContainerLifeCycle.java:114)[jetty-all-9.2.25.v20180606.jar:9.2.25.v20180606]
at org.eclipse.jetty.server.handler.AbstractHandler.doStart(AbstractHandler.java:61)[jetty-all-9.2.25.v20180606.jar:9.2.2
It's a warning from the ActiveMQ web admin console. Jetty, which hosts the web console, is unable to create its temp directory:
WARN | Failed startup of context o.e.j.w.WebAppContext#517566b{/admin,file:/opt/apache-activemq-5.15.6/webapps/admin/,null}
java.lang.IllegalStateException: Parent for temp dir not configured correctly: writeable=false
You can override the default temp directory by setting the ACTIVEMQ_TMP environment variable in the container spec, as below:
env:
  - name: ACTIVEMQ_TMP
    value: "/tmp"
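For context, a sketch of where that env block would sit in the container spec of the Deployment above:
      containers:
        - name: web
          image: azureregistry.azurecr.io/rmohractivemq
          env:
            - name: ACTIVEMQ_TMP
              value: "/tmp"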

Azure kubernetes deployment error - 0/1 nodes are available: 1 node(s) didn't match node selector

I am working on deploying one of my applications to Azure Kubernetes.
I have ACR and AKS configured, and I am trying the deployment through the Azure CLI.
Here is the Kubernetes deployment file content:
kind: Deployment
metadata:
  name: pocaksimage1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: pocaksimage1
    spec:
      nodeSelector:
        "beta.kubernetes.io/os": windows
      containers:
        - name: pocaksimage1
          image: pocaksimage1
          ports:
            - containerPort: 6379
              name: pocaksimage1
---
apiVersion: v1
kind: Service
metadata:
  name: pocaksimage1
spec:
  ports:
    - port: 6379
  selector:
    app: pocaksimage1
---
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: pocaksimage1
spec:
  replicas: 1
  strategy:
    rollingUpdate:
      maxSurge: 1
      maxUnavailable: 1
  minReadySeconds: 5
  template:
    metadata:
      labels:
        app: pocaksimage1
    spec:
      nodeSelector:
        "beta.kubernetes.io/os": windows
      containers:
        - name: pocaksimage1
          image: repo
          ports:
            - containerPort: 80
          resources:
            requests:
              cpu: 250m
            limits:
              cpu: 500m
          env:
            - name: PRE_PROD
              value: "pocaksimage1"
      imagePullSecrets:
        - name: pocsecret
---
apiVersion: v1
kind: Service
metadata:
  name: pocaksimage1-front
spec:
  type: LoadBalancer
  ports:
    - port: 80
  selector:
    app: pocaksimage1-front
The error I am getting is "0/1 nodes are available: 1 node(s) didn't match node selector."
Please help me get this resolved. Thanks.
I suppose the issue is that AKS doesn't yet support Windows nodes, so you don't really have any Windows nodes. You can create AKS with Windows nodes, but it's in preview at this point in time.
https://github.com/Azure/AKS/blob/master/previews.md#windows
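To confirm what OS label your nodes actually carry (and that there is no Windows node for the selector to match), you can list the label columns on the nodes:
kubectl get nodes -L kubernetes.io/os,beta.kubernetes.io/os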

How to mount cassandra data location to azure file share using stateful set kubernetes

I am setting up a 3 node Cassandra cluster on Azure using a Kubernetes StatefulSet and I am not able to mount the data location on an Azure file share.
I am able to do it using the default Kubernetes storage, but not with the Azure file share option.
I have tried the steps given below and am having difficulty with volumeClaimTemplates.
apiVersion: "apps/v1"
kind: StatefulSet
metadata:
name: cassandra
labels:
app: cassandra
spec:
serviceName: cassandra
replicas: 3
selector:
matchLabels:
app: cassandra
template:
metadata:
labels:
app: cassandra
spec:
containers:
- name: cassandra
image: cassandra
imagePullPolicy: Always
ports:
- containerPort: 7000
name: intra-node
- containerPort: 7001
name: tls-intra-node
- containerPort: 7199
name: jmx
- containerPort: 9042
name: cql
env:
- name: CASSANDRA_SEEDS
value: cassandra-0.cassandra.default.svc.cluster.local
- name: MAX_HEAP_SIZE
value: 256M
- name: HEAP_NEWSIZE
value: 100M
- name: CASSANDRA_CLUSTER_NAME
value: "Cassandra"
- name: CASSANDRA_DC
value: "DC1"
- name: CASSANDRA_RACK
value: "Rack1"
- name: CASSANDRA_ENDPOINT_SNITCH
value: GossipingPropertyFileSnitch
- name: POD_IP
valueFrom:
fieldRef:
fieldPath: status.podIP
volumeMounts:
- mountPath: /var/lib/cassandra/data
name: pv002
volumeClaimTemplates:
- metadata:
name: pv002
spec:
storageClassName: default
accessModes:
- ReadWriteOnce
---
apiVersion: v1
kind: PersistentVolume
metadata:
name: pv002
accessModes:
- ReadWriteOnce
azureFile:
secretName: storage-secret
shareName: xxxxx
readOnly: false
claimRef:
namespace: default
name: az-files-02
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: az-files-02
spec:
accessModes:
- ReadWriteOnce
---
apiVersion: v1
kind: Secret
metadata:
name: storage-secret
type: Opaque
data:
azurestorageaccountname: xxxxx
azurestorageaccountkey: jjbfjbsfljbafkljasfkl;jf;kjd;kjklsfdhjbsfkjbfkjbdhueueeknekneiononeojnjnjHBDEJKBJBSDJBDJ==
I should be able to mount the data folder of each Cassandra node onto the Azure file share.
To use Azure Files in a StatefulSet, I think you could follow this example: https://github.com/andyzhangx/demo/blob/master/linux/azurefile/attach-stress-test/statefulset-azurefile1-2files.yaml
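For reference, a minimal sketch of a volumeClaimTemplates entry that dynamically provisions Azure Files storage via the AKS built-in azurefile storage class (the class name is an assumption; adjust it to the classes available in your cluster). Azure Files supports ReadWriteMany, so all replicas could share it if needed:
  volumeClaimTemplates:
    - metadata:
        name: pv002
      spec:
        storageClassName: azurefile   # assumption: built-in AKS storage class
        accessModes:
          - ReadWriteMany
        resources:
          requests:
            storage: 100Gi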
