Dapr already allows us to subscribe multiple apps to a topic declaratively.
But can we use just one subscription.yaml file to subscribe an app to multiple topics declaratively?
Something like:
apiVersion: dapr.io/v1alpha1
kind: Subscription
metadata:
  name: myevent-subscription
spec:
  - topic: newOrder
    route: /orders
    pubsubname: pubsub
  - topic: newProduct
    route: /productCatalog/products
    pubsubname: pubsub
scopes:
  - myapp
I know how to subscribe an app to multiple topics programmatically, but "the declarative approach removes the Dapr dependency from your code and allows, for example, existing applications to subscribe to topics, without having to change code"¹ and that is what I want.
You should be able to have multiple Subscription entries in a single file, at least for Kubernetes deployments, like this:
apiVersion: dapr.io/v1alpha1
kind: Subscription
metadata:
  name: orders-subscription
spec:
  topic: newOrder
  route: /orders
  pubsubname: pubsub
scopes:
  - app1
  - app2
---
apiVersion: dapr.io/v1alpha1
kind: Subscription
metadata:
  name: products-subscription
spec:
  topic: newProduct
  route: /productCatalog/products
  pubsubname: pubsub
scopes:
  - app1
  - app2
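Assuming both Subscription documents are saved in a single file, e.g. subscriptions.yaml (the filename is an assumption), they can be applied in one step:

kubectl apply -f subscriptions.yaml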
We have defined our YAML with:
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
      name: mypod
      volumeMounts:
        - name: azure
          mountPath: /mnt/azure
  volumes:
    - name: azure
      azureFile:
        secretName: azure-secret
        shareName: aksshare
        readOnly: false
and we will before the deployment create secret with kubectl command:
$AKS_PERS_STORAGE_ACCOUNT_NAME
$STORAGE_KEY
kubectl create secret generic azure-secret --from-literal=azurestorageaccountname=$AKS_PERS_STORAGE_ACCOUNT_NAME \
--from-literal=azurestorageaccountkey=$STORAGE_KEY
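We can verify that the secret was created with:

kubectl get secret azure-secret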
We already have that file share created as an Azure File Share resource, and we have files stored in it.
I am confused about whether we also need to define and manage YAMLs for
kind: PersistentVolume
and
kind: PersistentVolumeClaim
or whether the above YAML is completely enough.
Are PV and PVC required only if we do not already have our file share created on Azure?
I've read the docs https://kubernetes.io/docs/concepts/storage/persistent-volumes/ but I still feel confused about when they need to be defined and when it is OK not to use them at all during the overall deployment process.
Your Pod YAML is OK.
Kubernetes Persistent Volumes are a newer abstraction. If your application instead uses a PersistentVolumeClaim, it is decoupled from the type of storage you use (in your case an Azure File Share), so your app can be deployed to e.g. AWS, Google Cloud, or Minikube on your desktop without any changes. Your cluster needs to have some support for PersistentVolumes, and that part can be tied to a specific storage system.
So, to decouple your app YAML from specific infrastructure, it is better to use PersistentVolumeClaims.
Persistent Volume Example
I don't know about Azure File Share, but there is good documentation on Dynamically create and use a persistent volume with Azure Files in Azure Kubernetes Service (AKS).
Application config
Persistent Volume Claim
Your app, e.g. a Deployment or StatefulSet, can use a PVC resource like this:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-azurefile
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: my-azurefile
  resources:
    requests:
      storage: 5Gi
Then you need to create a StorageClass resource, which will probably be unique for each type of environment but needs to have the same name and support the same access modes. If the environment does not support dynamic volume provisioning, you may have to manually create a PersistentVolume resource as well.
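As a minimal sketch for Azure Files (the skuName and the reuse of azure-secret/aksshare from the question are assumptions), the StorageClass for dynamic provisioning, and alternatively a manually created PersistentVolume for a pre-existing share, could look like:

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: my-azurefile # must match the storageClassName in the PVC
provisioner: kubernetes.io/azure-file
parameters:
  skuName: Standard_LRS
---
# Alternative when the file share already exists: create the PV manually
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-azurefile-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  storageClassName: my-azurefile
  persistentVolumeReclaimPolicy: Retain
  azureFile:
    secretName: azure-secret # the secret created earlier
    shareName: aksshare
    readOnly: false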
Examples in different environments:
The linked doc, Dynamically create and use a persistent volume with Azure Files in AKS, describes this for Azure.
See AWS EFS doc for creating ReadWriteMany volumes in AWS.
Blog about ReadWriteMany storage in Minikube
Pod using Persistent Volume Claim
You typically deploy apps using a Deployment or a StatefulSet, but the part declaring the Pod template is similar, except that for a StatefulSet you probably want to use volumeClaimTemplates instead of a separate PersistentVolumeClaim; see the sketch after the Pod example below.
See the full example in Create a Pod using a PersistentVolumeClaim:
apiVersion: v1
kind: Pod
metadata:
  name: task-pv-pod
spec:
  volumes:
    - name: file-share
      persistentVolumeClaim:
        claimName: my-azurefile # this must match the name of your PVC
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: file-share
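For the StatefulSet case mentioned above, a minimal sketch using volumeClaimTemplates (the app name, image, and mount path are assumptions) could look like:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: my-app # hypothetical app name
spec:
  serviceName: my-app
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: nginx
          volumeMounts:
            - mountPath: "/usr/share/nginx/html"
              name: file-share
  volumeClaimTemplates: # one PVC is created per replica
    - metadata:
        name: file-share
      spec:
        accessModes:
          - ReadWriteMany
        storageClassName: my-azurefile
        resources:
          requests:
            storage: 5Gi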
Is it possible to specify the name (or a suffix to the automatically generated name) for Azure file share storage that is dynamically provisioned with Kubernetes?
The automatically provisioned storage names look as follows:
kubernetes-dynamic-pvc-1254de92-8668-4245-bf78-2512fsgdges6
And I would like to change this to something like:
kubernetes-dynamic-pvc-1254de92-8668-4245-bf78-2512fsgdges6-username
either by specifying a new name (with a generated UUID) or by specifying a suffix to the auto-generated name.
The current deployment works by specifying only a PVC for the dynamically provisioned storage, so the name cannot be specified in a PV file.
The yaml file for the storage-class contains the following:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: retain-fileshare-storage
provisioner: kubernetes.io/azure-file
parameters:
  skuName: Standard_LRS
allowVolumeExpansion: true
reclaimPolicy: Retain
The yaml file for the PVC contains the following:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-pvc
  labels:
    app: my-app
    chart: my-chart
    release: my-release
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  storageClassName: retain-fileshare-storage
To clarify:
I am not interested in the name of the PVC, but in the name of the actual resource on Azure (in this case, a file share in a storage account).
I currently have an Azure Kubernetes cluster running with Prometheus and Grafana deployments. Prometheus is using the local cluster storage for its database, and I want to mount a persistent volume in the Kubernetes cluster that points back to an Azure Storage Account (file share) for the Prometheus database.
I would like to do this because it seems cleaner than setting up a remote-write configuration, and it solves the issue that remote writes solve, namely 'scalability and durability'. I've done some testing and proven that this does in fact work in a non-production, low-traffic environment.
I would like to know if there are any pitfalls I should be aware of if I do move forward with this plan. Has anybody else done this and encountered any issues?
Create a storage class to be used for the Prometheus data, then update the details in the Prometheus manifest file. A sample is given below:
apiVersion: monitoring.coreos.com/v1
kind: Prometheus
metadata:
  name: k8s
  labels:
    prometheus: k8s
spec:
  replicas: 2
  version: PROMETHEUS_VERSION
  externalUrl: PROMETHEUS_EXTERNAL_URL
  serviceAccountName: prometheus-k8s
  serviceMonitorSelector:
    matchExpressions:
      - {key: k8s-app, operator: Exists}
  ruleSelector:
    matchLabels:
      role: alert-rules
      prometheus: k8s
  nodeSelector:
    node_label_key: node_label_value
  resources:
    requests:
      memory: PROMETHEUS_MEMORY_REQUEST
  retention: PROMETHEUS_STORAGE_RETENTION
  securityContext:
    fsGroup: 2000
    runAsNonRoot: true
    runAsUser: 1000
  storage:
    class: STORAGE_CLASS_TYPE
    selector:
    resources:
    volumeClaimTemplate:
      metadata:
        annotations:
          annotation1: prometheus
      spec:
        storageClassName: STORAGE_CLASS_TYPE
        resources:
          requests:
            storage: PROMETHEUS_STORAGE_VOLUME_SIZE
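For reference, the StorageClass referenced by STORAGE_CLASS_TYPE could be an Azure Files class along these lines (a sketch; the name and skuName are assumptions):

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: prometheus-azurefile # substitute this for STORAGE_CLASS_TYPE above
provisioner: kubernetes.io/azure-file
parameters:
  skuName: Standard_LRS
reclaimPolicy: Retain # keep the underlying file share if the PVC is deleted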
I am connecting my Spark job to an existing Ignite cluster. I use a service account named spark for it. My driver is able to access the Ignite pods, but my executors are not.
This is what the executor log looks like:
Caused by: java.io.IOException: Server returned HTTP response code: 403 for URL: https://35.192.214.68/api/v1/namespaces/default/endpoints/ignite
I guess it is due to missing privileges. Is there a way to explicitly specify a service account for the executors as well?
Thanks in advance.
A similar issue was discussed here.
Most likely you need to grant more permissions to the service account that is used for running Ignite.
This is how you can create and bind an additional role to that service account:
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: ignite
  namespace: default
rules:
  - apiGroups:
      - ""
    resources: # These are the resources you can access
      - pods
      - endpoints
    verbs: # This is what you can do with them
      - get
      - list
      - watch
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: ignite
roleRef:
  kind: ClusterRole
  name: ignite
  apiGroup: rbac.authorization.k8s.io
subjects:
  - kind: ServiceAccount
    name: <service account name>
    namespace: default
Also, if your namespace is not default, you need to update it in the YAML files and specify it in the TcpDiscoveryKubernetesIpFinder configuration.
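Assuming the manifests above are saved as ignite-rbac.yaml and the service account is named spark (both names are assumptions), you can apply them and verify the permissions like this:

kubectl apply -f ignite-rbac.yaml
# check that the service account can now read endpoints
kubectl auth can-i get endpoints --as=system:serviceaccount:default:spark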
When trying to create a Kubernetes ReplicaSet from a YAML file, I always get this error on AKS:
kubectl create -f kubia-replicaset.yaml
error: unable to recognize "kubia-replicaset.yaml": no matches for apps/, Kind=ReplicaSet
I tried it with several different files, as well as the samples from the K8s documentation, but they all result in this failure. Creating Pods and RCs works.
Below is the YAML file:
apiVersion: apps/v1beta2
kind: ReplicaSet
metadata:
  name: kubia
spec:
  replicas: 3
  selector:
    matchLabels:
      app: kubia
  template:
    metadata:
      labels:
        app: kubia
    spec:
      containers:
        - name: kubia
          image: luksa/kubia
Changing apps/v1beta2 to apps/v1 works for me.
You are advised to use Deployments now:
A Deployment controller provides declarative updates for Pods and ReplicaSets.
You describe a desired state in a Deployment object, and the Deployment controller changes the actual state to the desired state at a controlled rate. You can define Deployments to create new ReplicaSets, or to remove existing Deployments and adopt all their resources with new Deployments.
And this piece:
Kubectl rolling update updates Pods and ReplicationControllers in a similar fashion. But Deployments are recommended, since they are declarative, server side, and have additional features, such as rolling back to any previous revision even after the rolling update is done.
Also, take a look here.
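For example, the same workload expressed as a Deployment needs only the kind changed and the stable apps/v1 apiVersion (a sketch based on the ReplicaSet above):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubia
spec:
  replicas: 3
  selector:
    matchLabels:
      app: kubia
  template:
    metadata:
      labels:
        app: kubia
    spec:
      containers:
        - name: kubia
          image: luksa/kubia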