Problem:
Flink task manager reports: org.apache.kafka.common.errors.TimeoutException: Timeout expired while fetching topic metadata
Deployment overview:
A Java project to try out Stateful Functions. The streaming app reads messages from Kafka, processes them, and sends the final result to a Kafka egress.
Deployed on Azure:
Azure Event Hubs (Kafka endpoint) as ingress and egress
Azure Kubernetes Service (AKS) as the Kubernetes deployment target
Azure Data Lake Storage Gen2 as checkpoint storage
The deployment itself is fine: the job manager and task managers launch, but then the tasks fail to run with the exception above.
Diagnostics:
I created a simple Java consumer with the identical Kafka config, just with a different consumer group. The Java app works well both on my laptop and in AKS (deployed in the same namespace as the Stateful Functions app), so I conclude that the Event Hub and my Kafka config are both fine.
I checked the task manager log (kubectl logs xxx), and the Kafka properties have been loaded correctly. sasl.jaas.config shows up as "sasl.jaas.config = [hidden]", but I assume this is by design.
My Kafka Settings:
I'm using the following config:
kind: io.statefun.kafka.v1/ingress
spec:
  id: io.streaming/eventhub-ingress
  address: xxxx.servicebus.windows.net:9093
  consumerGroupId: group-receiver-00
  startupPosition:
    type: group-offsets
  topics:
    - topic: streaming-topic-rec-32
      valueType: streaming.types/rec
      targets:
        - streaming.fns/bronze_rec
    - topic: streaming-topic-eng-32
      valueType: streaming.types/eng
      targets:
        - streaming.fns/bronze_eng
  properties:
    - request.timeout.ms: 60000
    - security.protocol: SASL_SSL
    - sasl.mechanism: PLAIN
    - sasl.jaas.config: org.apache.kafka.common.security.plain.PlainLoginModule required username="$ConnectionString" password="primary connection string of the event hub ns";
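The egress component isn't shown above since the failure happens on the ingress side; for reference, a minimal sketch of what the egress could look like, reusing the same Event Hub properties (the egress id and delivery semantic here are placeholders, not my exact config):

kind: io.statefun.kafka.v1/egress
spec:
  id: io.streaming/eventhub-egress            # placeholder id
  address: xxxx.servicebus.windows.net:9093
  deliverySemantic:
    type: at-least-once                       # placeholder; exactly-once is also possible
  properties:
    - security.protocol: SASL_SSL
    - sasl.mechanism: PLAIN
    - sasl.jaas.config: org.apache.kafka.common.security.plain.PlainLoginModule required username="$ConnectionString" password="primary connection string of the event hub ns";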
Can anyone help me with this? Thank you!
Resolved after reducing the number of task manager replicas; no config was changed.
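For reference, the only change was the replica count on the task manager Deployment. A minimal sketch of the relevant fragment, assuming the task managers run as a plain Kubernetes Deployment (the name, labels and image below are illustrative, not my exact manifest):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: statefun-worker                       # illustrative name for the task manager deployment
spec:
  replicas: 1                                 # reduced from a higher count; nothing else changed
  selector:
    matchLabels:
      app: statefun-worker
  template:
    metadata:
      labels:
        app: statefun-worker
    spec:
      containers:
        - name: worker
          image: apache/flink-statefun:3.1.0  # illustrative image tag; command/args omitted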
I have a Spark application and want to deploy it on a Kubernetes cluster.
Following the documentation below, I have managed to create an empty Kubernetes cluster, generate a Docker image using the Dockerfile provided under kubernetes/dockerfiles/spark/Dockerfile, and deploy the application to the cluster using spark-submit in a dev environment.
https://spark.apache.org/docs/latest/running-on-kubernetes.html
However, in a 'proper' environment we have a managed Kubernetes cluster (bespoke, unlike EKS etc.) and will have to provide pod configuration files in order to deploy.
I believe you can supply a pod template file as an argument to the spark-submit command.
https://spark.apache.org/docs/latest/running-on-kubernetes.html#pod-template
How can I do this without spark-submit? And are there any example YAML files?
PS: we have limited access to this cluster; e.g. we can install Helm charts but not an operator or controller.
You could try the Spark operator for Kubernetes and its CRDs (https://github.com/GoogleCloudPlatform/spark-on-k8s-operator) and provide the pod configuration through a SparkApplication resource, along the lines of the sketch below.
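For illustration, a minimal sketch of a SparkApplication resource, assuming the operator's CRDs are installed in the cluster; the image, main class, jar path and service account are placeholders taken from the operator's examples rather than from your environment:

apiVersion: sparkoperator.k8s.io/v1beta2
kind: SparkApplication
metadata:
  name: spark-pi
  namespace: default
spec:
  type: Scala
  mode: cluster
  image: apache/spark:3.4.1                   # placeholder image
  mainClass: org.apache.spark.examples.SparkPi
  mainApplicationFile: "local:///opt/spark/examples/jars/spark-examples_2.12-3.4.1.jar"
  sparkVersion: "3.4.1"
  driver:
    cores: 1
    memory: 512m
    serviceAccount: spark                     # assumed service account allowed to create executor pods
  executor:
    cores: 1
    instances: 2
    memory: 512m

Note, though, that this path still requires installing the operator (a controller), which may conflict with the constraint mentioned in your PS.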
I have a Kubernetes cluster (v1.24.3) running in Azure with 3 node pools called small, standard and large. To each of these node pools I have added a label named type, with the value SMALL-2CPU-8GB, STANDARD-4CPU-16GB and LARGE-8CPU-32GB respectively. These node pools are also configured with the Azure cluster autoscaler, with a minimum of 0 and a maximum of 10 nodes.
Now I am deploying my applications, each of which is required to run in one of these node pools depending on its specification. For example, one of the apps requires a small node, so it requests the node pool called small with the label type=SMALL-2CPU-8GB, and so on.
I request this by setting the nodeSelector in the application manifest. This is the relevant portion of the template:
# App 1
podTemplate:
  spec:
    nodeSelector:
      type: LARGE-8CPU-32GB
      agentpool: large

# App 2
podTemplate:
  spec:
    nodeSelector:
      type: STANDARD-4CPU-16GB
      agentpool: standard

# App 3
podTemplate:
  spec:
    nodeSelector:
      type: SMALL-4CPU-16GB
      agentpool: small
...
When I apply the manifest to the cluster, the pods are stuck in the Pending state with the message:
Normal NotTriggerScaleUp 43m (x13 over 45m) cluster-autoscaler pod didn't trigger scale-up: 3 node(s) didn't match Pod's node affinity/selector, 1 not ready for scale-up
And I can see that the node count is still 0, so the pod is not triggering the autoscaler to request a new node.
My question is: how do I make the autoscaler scale up when I request nodes via the nodeSelector, even when the node pool currently has zero nodes? Should I specify a different label or use taints?
I have enabled Azure Policy via Terraform and applied it to an AKS cluster. I can see the pods are deployed, up and running. I also applied the built-in initiative with effect "audit" to test how Azure Policy works on an AKS cluster.
$ kubectl get pods -n gatekeeper-system
NAME READY STATUS RESTARTS AGE
gatekeeper-audit-77754c7d8-g44qb 1/1 Running 0 44h
gatekeeper-controller-78cff9c89-7pftn 1/1 Running 0 44h
gatekeeper-controller-78cff9c89-8dsfg 1/1 Running 0 44h
I found a dashboard https://grafana.com/grafana/dashboards/15763
But some of the metrics are different or missing. I'm not sure why; perhaps because Azure manages this Gatekeeper installation? Some panels do display and their metrics are available in Prometheus, but others are not; for example, opa_scorecard_constraint_violations is not available.
How do I monitor Azure Policy via Prometheus properly?
I don't think metrics like opa_scorecard_constraint_violations can be exported when you're using Azure Policy (+ Gatekeeper).
However, you can export Gatekeeper's own metrics; you just need to create a monitor that hits the proper endpoint. My PodMonitor looks like this:
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  labels:
    monitoring: prometheus
  name: gatekeeper-system-pod-monitor
  namespace: monitoring
spec:
  jobLabel: gatekeeper.sh/system
  namespaceSelector:
    matchNames:
      - gatekeeper-system
  podMetricsEndpoints:
    - honorLabels: true
      path: /metrics
      port: metrics
  selector:
    matchLabels:
      gatekeeper.sh/system: "yes"
Grafana screenshot: panels populated with the Gatekeeper metrics.
Disclaimer: This question is very specific to the platforms we use and the use case we are trying to solve with them. It also compares two approaches that we currently run, at least at a development stage, and are trying to compare, but perhaps don't fully understand yet. I am asking for guidance on this very specific topic...
A) We are running a Kafka cluster as Kafka tasks on DC/OS, where data persistence is provided by local disk storage provisioned on the very same host as the corresponding Kafka broker instance.
B) We are trying to run Kafka on Kubernetes (via the Strimzi operator), specifically on Azure Kubernetes Service (AKS), and are struggling to get reliable data persistence using the StorageClasses available in AKS. We tried three possibilities:
(Default) Azure Disk
Azure File
emptyDir
I see two major issues with Azure Disk: while we are able to set the Kafka pod (anti-)affinity so that brokers do not end up in the same maintenance zone or on the same host, we have no instrument to bind the corresponding PersistentVolume anywhere near the pod; there is nothing like node affinity for Azure Disks. It also seems fairly common that an Azure Disk ends up on a different host than its corresponding pod, which would then be limited by network bandwidth?
With Azure File we don't have issues with maintenance zones going down temporarily, but as a high-latency storage option it doesn't seem to be a good fit, and Kafka also has trouble deleting / updating files on retention.
So I ended up running the cluster on ephemeral storage, which is commonly NOT recommended but doesn't come with the problems above. The volume "lives" next to the pod and is available to it as long as the pod itself runs on some node. In the maintenance case, pod AND volume die together. As long as I am able to maintain a quorum, I don't see where this might cause issues.
Is there anything like podAffinity for PersistentVolumes, given that an Azure Disk is by definition node-bound?
What are the major downsides of using emptyDir for persistence in a Kafka cluster on Kubernetes?
Is there anything like podAffinity for PersistentVolumes, given that an Azure Disk is by definition node-bound?
As far as I know, there is nothing like pod affinity for PersistentVolumes backed by Azure Disk. The Azure disk has to be attached to a node, so if the pod moves to another host node, the pod can't use the volume on that disk. Only an Azure file share behaves like podAffinity, in the sense that it can be mounted from whichever node the pod lands on.
What are the major downsides of using emptyDir for persistence in a Kafka cluster on Kubernetes?
You can take a look at the intended use cases for emptyDir in the Kubernetes documentation, for example:
scratch space, such as for a disk-based merge sort
Disk space is the main thing you need to watch out for when you use emptyDir on AKS: the data lives on the node's local disk, so you need to calculate the required disk space, and perhaps attach multiple Azure disks to the nodes (see the sketch below for capping the size).
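If you stay on emptyDir, here is a minimal sketch of an ephemeral Strimzi configuration, assuming the kafka.strimzi.io/v1beta2 CRD; the cluster name, replica counts and sizeLimit values are illustrative, not taken from your setup:

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster                 # illustrative name
spec:
  kafka:
    replicas: 3
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
    storage:
      type: ephemeral              # backed by emptyDir; data is gone when the pod goes away
      sizeLimit: 100Gi             # caps how much node-local disk each broker may use
  zookeeper:
    replicas: 3
    storage:
      type: ephemeral
      sizeLimit: 10Gi
  entityOperator:
    topicOperator: {}
    userOperator: {}

The sizeLimit is the knob that addresses the disk-space concern; losing a broker's data on pod deletion remains the trade-off discussed in the question.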
Starting off: I'm not sure what you mean about an Azure Disk ending up on a node other than the one the pod is assigned to; that shouldn't be possible, per my understanding (for completeness, you can do this on a VM with the shared disks feature outside of AKS, but as far as I'm aware that's not supported in AKS for dynamic disks at the time of writing). If you're looking at the volume.kubernetes.io/selected-node annotation on the PVC, I don't believe that's updated after initial creation.
You can achieve the configuration you're looking for by using a StatefulSet with anti-affinity. Consider a StatefulSet along the lines of the one sketched below: it creates three pods, which must be placed in different availability zones, each claiming its own Azure Disk through a volumeClaimTemplate.
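The exact manifest isn't reproduced in this answer, so here is a minimal sketch of such a StatefulSet; the container image, labels and mount path are assumptions, chosen only so that the names line up with the output below (echo pods, demo volume claims, managed-premium storage class):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: echo
  namespace: zonetest
spec:
  serviceName: echo                # assumes a matching headless Service named "echo"
  replicas: 3
  selector:
    matchLabels:
      app: echo
  template:
    metadata:
      labels:
        app: echo
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            - labelSelector:
                matchLabels:
                  app: echo
              topologyKey: topology.kubernetes.io/zone   # at most one pod per availability zone
      containers:
        - name: echo
          image: k8s.gcr.io/echoserver:1.10              # placeholder image
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: demo
              mountPath: /data
  volumeClaimTemplates:
    - metadata:
        name: demo
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: managed-premium
        resources:
          requests:
            storage: 1Gi

I'm deploying this to an AKS cluster with a node pool (nodepool2) that has two nodes per AZ: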
❯ kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{","}{.metadata.labels.topology\.kubernetes\.io\/zone}{"\n"}{end}'
aks-nodepool1-25997496-vmss000000,0
aks-nodepool2-25997496-vmss000000,westus2-1
aks-nodepool2-25997496-vmss000001,westus2-2
aks-nodepool2-25997496-vmss000002,westus2-3
aks-nodepool2-25997496-vmss000003,westus2-1
aks-nodepool2-25997496-vmss000004,westus2-2
aks-nodepool2-25997496-vmss000005,westus2-3
Once the StatefulSet is deployed and spun up, you can see each pod was assigned to one of the nodepool2 nodes:
❯ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
echo-0 1/1 Running 0 3m42s 10.48.36.102 aks-nodepool2-25997496-vmss000001 <none> <none>
echo-1 1/1 Running 0 3m19s 10.48.36.135 aks-nodepool2-25997496-vmss000002 <none> <none>
echo-2 1/1 Running 0 2m55s 10.48.36.72 aks-nodepool2-25997496-vmss000000 <none> <none>
Each pod created a PVC based on the template:
❯ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
demo-echo-0 Bound pvc-bf6104e0-c05e-43d4-9ec5-fae425998f9d 1Gi RWO managed-premium 25m
demo-echo-1 Bound pvc-9d9fbd5f-617a-4582-abc3-ca34b1b178e4 1Gi RWO managed-premium 25m
demo-echo-2 Bound pvc-d914a745-688f-493b-9b82-21598d4335ca 1Gi RWO managed-premium 24m
Let's take a look at one of the PVs that was created:
apiVersion: v1
kind: PersistentVolume
metadata:
  annotations:
    pv.kubernetes.io/bound-by-controller: "yes"
    pv.kubernetes.io/provisioned-by: kubernetes.io/azure-disk
    volumehelper.VolumeDynamicallyCreatedByKey: azure-disk-dynamic-provisioner
  creationTimestamp: "2021-04-05T14:08:12Z"
  finalizers:
    - kubernetes.io/pv-protection
  labels:
    failure-domain.beta.kubernetes.io/region: westus2
    failure-domain.beta.kubernetes.io/zone: westus2-3
  name: pvc-9d9fbd5f-617a-4582-abc3-ca34b1b178e4
  resourceVersion: "19275047"
  uid: 945ad69a-92cc-4d8d-96f4-bdf0b80f9965
spec:
  accessModes:
    - ReadWriteOnce
  azureDisk:
    cachingMode: ReadOnly
    diskName: kubernetes-dynamic-pvc-9d9fbd5f-617a-4582-abc3-ca34b1b178e4
    diskURI: /subscriptions/02a062c5-366a-4984-9788-d9241055dda2/resourceGroups/rg-sandbox-aks-mc-sandbox0-westus2/providers/Microsoft.Compute/disks/kubernetes-dynamic-pvc-9d9fbd5f-617a-4582-abc3-ca34b1b178e4
    fsType: ""
    kind: Managed
    readOnly: false
  capacity:
    storage: 1Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: demo-echo-1
    namespace: zonetest
    resourceVersion: "19275017"
    uid: 9d9fbd5f-617a-4582-abc3-ca34b1b178e4
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: failure-domain.beta.kubernetes.io/region
              operator: In
              values:
                - westus2
            - key: failure-domain.beta.kubernetes.io/zone
              operator: In
              values:
                - westus2-3
  persistentVolumeReclaimPolicy: Delete
  storageClassName: managed-premium
  volumeMode: Filesystem
status:
  phase: Bound
As you can see, that PV has a required nodeAffinity for nodes in failure-domain.beta.kubernetes.io/zone with value westus2-3. This ensures that the pod owning that PV will only ever be placed on a node in westus2-3, and that the disk backing the PV will be attached to whichever node in that zone the pod is started on.
At this point, I deleted all the pods to get them on the other nodes:
❯ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
echo-0 1/1 Running 0 4m4s 10.48.36.168 aks-nodepool2-25997496-vmss000004 <none> <none>
echo-1 1/1 Running 0 3m30s 10.48.36.202 aks-nodepool2-25997496-vmss000005 <none> <none>
echo-2 1/1 Running 0 2m56s 10.48.36.42 aks-nodepool2-25997496-vmss000003 <none> <none>
There's no way to see it via Kubernetes, but you can see via the Azure portal that the managed disk kubernetes-dynamic-pvc-bf6104e0-c05e-43d4-9ec5-fae425998f9d, which backs PV pvc-bf6104e0-c05e-43d4-9ec5-fae425998f9d, which in turn backs PVC zonetest/demo-echo-0, is listed as Managed by: aks-nodepool2-25997496-vmss_4, so it has been detached from its previous node and attached to the node where the pod is now running.
Portal screenshot showing disk attached to node 4
If I were to remove nodes such that I didn't have nodes in AZ 3, I wouldn't be able to start pod echo-1, since it's bound to a disk in AZ 3, which can't be attached to a node not in AZ 3.