I am trying to run Spark on a Kubernetes cluster (our existing setup is EMR + YARN). Our use case is to handle a large number of jobs, including short-lived ones (a few seconds to 15 minutes). We also have peak hours when many workers need to run to handle hundreds of jobs running concurrently.
So what I want to achieve is to run the master and a fixed few workers (say 5) at all times, and increase the workers to 40-50 at peak time. I would also prefer to use dynamic allocation.
I am setting it up as below.
Master image (spark-master:X)
FROM <base Spark 3.1 image built using dev/make-distribution.sh -Pkubernetes in spark>
ENTRYPOINT ["/opt/spark/sbin/start-master.sh", "-p", "8081", "<A long running server command that can accept get traffic on 8080 to submit jobs>"]
Worker image (spark-worker:X)
FROM <base Spark 3.1 image built using dev/make-distribution.sh -Pkubernetes in spark>
ENTRYPOINT ["/opt/spark/sbin/start-worker.sh", "spark://spark-master:8081", "-p", "8081", "<A long running server command to keep up the worker>"]
Deployments
apiVersion: apps/v1
kind: Deployment
metadata:
  name: spark-master-server
spec:
  replicas: 1
  selector:
    matchLabels:
      component: spark-master-server
  template:
    metadata:
      labels:
        component: spark-master-server
    spec:
      containers:
        - name: spark-master-server
          image: spark-master:X
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8081
---
apiVersion: v1
kind: Service
metadata:
  name: spark-master
spec:
  type: ClusterIP
  ports:
    - port: 8081
      targetPort: 8081
  selector:
    component: spark-master-server
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: spark-worker-instance
spec:
  replicas: 3
  selector:
    matchLabels:
      component: spark-worker-instance
  template:
    metadata:
      labels:
        component: spark-worker-instance
    spec:
      containers:
        - name: spark-worker-server
          image: spark-worker:X
          imagePullPolicy: IfNotPresent
          ports:
            - containerPort: 8081
Questions
Is this setup recommended?
How can we submit a new job from within the Kubernetes cluster in the absence of YARN?
The reason we are trying not to create the master and driver dynamically per job (as shown in the example at http://spark.apache.org/docs/latest/running-on-kubernetes.html) is that it may be an overhead for a large number of small jobs.
Is this setup recommended?
Don't think so.
Dynamic Resource Allocation is a property of a single Spark application "to dynamically adjust the resources your application occupies based on the workload."
Dynamic Resource Allocation scales an application's resource requirements regardless of the number of nodes in a cluster. As long as resources are available and the cluster manager can assign them to a Spark application, those resources are free to go.
What you seem to be trying to set up is how to scale the cluster itself up and down. In your case it's Spark Standalone, and although it may technically be possible with ReplicaSets (just a guess), I've never heard of any earlier attempts at it. You're on your own, as Spark Standalone does not support it out of the box.
That, I think, is overkill, since you're building a multi-layer cluster environment: using a cluster manager (Kubernetes) to host another cluster manager (Spark Standalone) for Spark applications. Given that Spark on Kubernetes supports Dynamic Allocation out of the box, your only worry should simply be how to "throw in" more CPU and memory on demand by resizing the Kubernetes cluster. You should rely on the capabilities of Kubernetes to resize itself up and down rather than on Spark Standalone on Kubernetes.
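For illustration, here is a minimal sketch of such a submission (the API server address, image name, and jar name are placeholders; shuffle tracking is enabled because Kubernetes has no external shuffle service):

$SPARK_HOME/bin/spark-submit \
  --master k8s://https://<k8s-apiserver-host>:<port> \
  --deploy-mode cluster \
  --name example-job \
  --class org.apache.spark.examples.SparkPi \
  --conf spark.kubernetes.container.image=<your-spark-image> \
  --conf spark.kubernetes.authenticate.driver.serviceAccountName=spark \
  --conf spark.dynamicAllocation.enabled=true \
  --conf spark.dynamicAllocation.shuffleTracking.enabled=true \
  --conf spark.dynamicAllocation.minExecutors=1 \
  --conf spark.dynamicAllocation.maxExecutors=50 \
  local:///opt/spark/examples/jars/<spark-examples jar>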
The Spark on k8s operator may provide at least one mechanism to dynamically provision the resources you need for safe scaling based on demand.
https://github.com/GoogleCloudPlatform/spark-on-k8s-operator
My thinking is that instead of performing a direct spark-submit to a master in a static Spark cluster, you could make a call to the k8s API to provision the required instances, or otherwise define these on a cron schedule, as sketched below.
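As a sketch of what that looks like with the operator (the names, image, class, and file path below are hypothetical), you submit one SparkApplication resource per job; there is also a ScheduledSparkApplication variant that accepts a cron-style schedule:

apiVersion: sparkoperator.k8s.io/v1beta2
kind: SparkApplication
metadata:
  name: example-job
  namespace: default
spec:
  type: Scala
  mode: cluster
  image: <your-spark-image>
  mainClass: com.example.Job
  mainApplicationFile: local:///opt/spark/jars/job.jar
  sparkVersion: "3.1.1"
  driver:
    cores: 1
    memory: 512m
    serviceAccount: spark
  executor:
    instances: 2
    cores: 1
    memory: 512m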
In the dynamic provisioning scenario, one thing to consider is how your workloads get distributed across the cluster. We can't use simple HPA rules for this, as it may not be safe to tear down a worker based on CPU/memory levels alone; spawning a separate cluster for each on-demand workload bypasses this but may not be optimal. I would be interested to hear how you get on.
I am using Azure Kubernetes Service and am trying to set TCP keepalive on a per-container basis.
Is there a way of achieving that?
You could do this via sysctls on the pod manifest in AKS/Kubernetes:
spec:
  securityContext:
    sysctls:
      - name: "net.ipv4.tcp_keepalive_time"
        value: "45"
Note that net.ipv4.tcp_keepalive_time is considered an unsafe sysctl, so the kubelet on the node has to explicitly allow it (e.g. via --allowed-unsafe-sysctls) before pods can set it.
Here is also further documentation:
https://kubernetes.io/docs/tasks/administer-cluster/sysctl-cluster/
https://docs.syseleven.de/metakube/de/tutorials/confiugre-unsafe-sysctls
We are using Spark & Cassandra in an application which is deployed on bare metal/VMs. To connect Spark to Cassandra, we are using the following properties in order to enable SSL:
spark.cassandra.connection.ssl.keyStore.password
spark.cassandra.connection.ssl.keyStore.type
spark.cassandra.connection.ssl.protocol
spark.cassandra.connection.ssl.trustStore.path
spark.cassandra.connection.ssl.trustStore.password
spark.cassandra.connection.ssl.trustStore.type
spark.cassandra.connection.ssl.clientAuth.enabled
Now I am trying to migrate the same application to Kubernetes. I have the following questions:
Do I need to change the above properties in order to connect Spark to the Cassandra cluster in Kubernetes?
Will the above properties work, or did I miss something?
Can anyone point me to some document or link which can help me?
Yes, these properties will continue to work when you run your job on Kubernetes. The only thing that you need to take into account is that all properties with names ending in .path need to point to the actual files with the trust & key stores. On Kubernetes, you need to take care of exposing them as secrets mounted as files. First you need to create a secret, like this:
apiVersion: v1
data:
  spark.truststore: base64-encoded truststore
kind: Secret
metadata:
  name: spark-truststore
type: Opaque
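As a shortcut for producing the base64-encoded data, you could create the same secret directly from the truststore file (the local path here is hypothetical):

kubectl create secret generic spark-truststore \
  --from-file=spark.truststore=/local/path/truststore.jks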
and then in the spec, point to it:
spec:
  containers:
    - image: nginx
      name: nginx
      volumeMounts:
        - mountPath: "/some/path"
          name: spark-truststore
          readOnly: true
  volumes:
    - name: spark-truststore
      secret:
        secretName: spark-truststore
and point the configuration option to the given path, like /some/path/spark.truststore.
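For example, a minimal sketch of the corresponding configuration, assuming the mount shown above (the password value is a placeholder):

spark-submit \
  --conf spark.cassandra.connection.ssl.enabled=true \
  --conf spark.cassandra.connection.ssl.trustStore.path=/some/path/spark.truststore \
  --conf spark.cassandra.connection.ssl.trustStore.password=<truststore-password> \
  <your application jar>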
In the official documentation here, the pod.spec.container.resources.limits field is defined as follows:
"Limits describes the maximum amount of compute resources allowed."
I understand that k8s prohibits a pod from consuming more resources than specified in limits.
However, the documentation does not say that a pod will not be scheduled on a node that does not have the amount of resources specified in limits.
For instance, if each node in my cluster has 2 CPUs and I try to deploy a pod defining a CPU limit of 3, my pod will never be running and will be stuck in status Pending.
Here is the example template: mypod.yml
apiVersion: v1
kind: Pod
metadata:
  name: secondbug
spec:
  containers:
    - name: container
      image: nginx
      resources:
        limits:
          cpu: 3
Is this behaviour intended and why?
You have three kinds of pods: pods with no resource definitions, Burstable pods which have both limits and requests, and a third kind which has only limits or only requests defined.
For the third case, if you define only limits, the requests will default to the same values, meaning:
resources:
  limits:
    cpu: 3
is actually:
resources:
  limits:
    cpu: 3
  requests:
    cpu: 3
So you can't really define limits alone without also getting requests. In your case, you want to define the requests explicitly so they are not copied from the limits, as sketched below.
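A minimal sketch under that assumption: keep the limit at 3 but request only 1 CPU, so the pod fits on a 2-CPU node (the scheduler looks at requests, while limits are only enforced at runtime):

apiVersion: v1
kind: Pod
metadata:
  name: secondbug
spec:
  containers:
    - name: container
      image: nginx
      resources:
        requests:
          cpu: 1
        limits:
          cpu: 3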
I tried to use K8s to set up a Spark cluster (I use standalone deployment mode, and I cannot use the k8s deployment mode for some reason).
I didn't set any CPU-related arguments.
For Spark, that means:
Total CPU cores to allow Spark applications to use on the machine (default: all available); only on worker
http://spark.apache.org/docs/latest/spark-standalone.html
For k8s pods, that means:
If you do not specify a CPU limit for a Container, then one of these situations applies:
The Container has no upper bound on the CPU resources it can use. The Container could use all of the CPU resources available on the Node where it is running.
The Container is running in a namespace that has a default CPU limit, and the Container is automatically assigned the default limit. Cluster administrators can use a LimitRange to specify a default value for the CPU limit.
https://kubernetes.io/docs/tasks/configure-pod-container/assign-cpu-resource/
...
Addresses:
  InternalIP:  172.16.197.133
  Hostname:    ubuntu
Capacity:
  cpu:     4
  memory:  3922Mi
  pods:    110
Allocatable:
  cpu:     4
  memory:  3822Mi
  pods:    110
...
But my Spark worker only uses 1 core (I have 4 cores on the worker node, and the namespace has no resource limits).
That means the Spark worker pod only used 1 core of the node (which should be 4).
How can I write the yaml file to set the pod to use all available CPU cores?
Here is my yaml file:
---
apiVersion: v1
kind: Namespace
metadata:
  name: spark-standalone
---
kind: DaemonSet
apiVersion: apps/v1
metadata:
  name: spark-slave
  namespace: spark-standalone
  labels:
    k8s-app: spark-slave
spec:
  selector:
    matchLabels:
      k8s-app: spark-slave
  updateStrategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1
  template:
    metadata:
      name: spark-slave
      namespace: spark-standalone
      labels:
        k8s-app: spark-slave
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: node-role.kubernetes.io/edge
                    operator: Exists
      hostNetwork: true
      containers:
        - name: spark-slave
          image: spark:2.4.3
          command: ["/bin/sh", "-c"]
          args:
            - "
              ${SPARK_HOME}/sbin/start-slave.sh
              spark://$(SPARK_MASTER_IP):$(SPARK_MASTER_PORT)
              --webui-port $(SPARK_SLAVE_WEBUI_PORT)
              &&
              tail -f ${SPARK_HOME}/logs/*
              "
          env:
            - name: SPARK_MASTER_IP
              value: "10.4.20.34"
            - name: SPARK_MASTER_PORT
              value: "7077"
            - name: SPARK_SLAVE_WEBUI_PORT
              value: "8081"
---
Kubernetes - No upper bound
The Container has no upper bound on the CPU resources it can use. The Container could use all of the CPU resources available on the Node where it is running
Unless you configure a CPU limit for your pod, it can use all available CPU resources on the node.
Consider dedicated nodes
If you are running other workloads on the same node, they also consume CPU resources, and they may be guaranteed CPU if they have configured requests for CPU. Consider using a dedicated node for your workload, using a NodeSelector and Taints and Tolerations (a sketch follows).
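A minimal sketch of that pattern (the dedicated label/taint key and value are hypothetical): taint and label the node, then make your Spark pods select and tolerate it:

kubectl taint nodes <node-name> dedicated=spark:NoSchedule
kubectl label nodes <node-name> dedicated=spark

and in the pod spec:

spec:
  nodeSelector:
    dedicated: spark
  tolerations:
    - key: "dedicated"
      operator: "Equal"
      value: "spark"
      effect: "NoSchedule"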
Spark - No upper bound
You configure the worker with parameters to start-slave.sh, e.g. --cores X, to limit CPU core usage. See the sketch after the quote below.
Total CPU cores to allow Spark applications to use on the machine (default: all available); only on worker
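For example, to let the worker in the DaemonSet above use all 4 cores explicitly, the container args could pass --cores (a sketch against the yaml from the question):

args:
  - "
    ${SPARK_HOME}/sbin/start-slave.sh
    spark://$(SPARK_MASTER_IP):$(SPARK_MASTER_PORT)
    --webui-port $(SPARK_SLAVE_WEBUI_PORT)
    --cores 4
    &&
    tail -f ${SPARK_HOME}/logs/*
    "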
Multithreaded workload
In the end, whether a pod can use multiple CPU cores depends on how your application uses threads. Some things use only a single thread, so the application must be designed for multithreading and have some work that can be parallelized.
I hit exactly the same issue with a Spark worker. Knowing that Java sometimes fails to detect CPUs correctly, I tried specifying a CPU request or limit in the Pod spec, and the worker automatically understood what environment it was in; the needed cores were then assigned to the Spark worker's executor.
Also, I faced this behavior in k8s only; in Docker Swarm, all available CPU cores were taken by a worker.
What's more, the default templates mention 1 core for the Spark worker's 'cores' parameter. I believe that value is used when the CPU core calculation goes wrong.
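A minimal sketch of that workaround on the worker container (the request value of 4 matches the node in the question and is only an example):

containers:
  - name: spark-slave
    image: spark:2.4.3
    resources:
      requests:
        cpu: 4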
I am running Cassandra on Kubernetes (3 instances) and want to expose it to the outside; my application is not yet in Kubernetes. So I created a load-balanced service like so:
apiVersion: v1
kind: Service
metadata:
  namespace: getquanty
  labels:
    app: cassandra
  name: cassandra
  annotations:
    kubernetes.io/tls-acme: "true"
spec:
  clusterIP:
  ports:
    - port: 9042
      name: cql
      nodePort: 30001
    - port: 7000
      name: intra-node
      nodePort: 30002
    - port: 7001
      name: tls-intra-node
      nodePort: 30003
    - port: 7199
      name: jmx
      nodePort: 30004
  selector:
    app: cassandra
  type: LoadBalancer
This is the result:
NAME CLUSTER-IP EXTERNAL-IP PORT(S) AGE
cassandra 10.55.249.88 GIVEN_IP_GCE_LB 9042:30001/TCP,7000:30002/TCP,7001:30003/TCP,7199:30004/TCP 26m
I am able to connect using cqlsh (cqlsh GIVEN_IP_GCE_LB), but when I try to add data to Cassandra using the DataStax driver for Node.js, I get this:
message: 'Cannot achieve consistency level SERIAL',
info: 'Represents an error message from the server',
code: 4096,
consistencies: 8,
required: 1,
alive: 0,
coordinator: '35.187.166.68:9042' },
'10.52.4.32:9042': 'Host considered as DOWN',
'10.52.2.15:9042': 'Host considered as DOWN' },
info: 'Represents an error when a query cannot be performed because no host is available or could be reached by the driver.',
message: 'All host(s) tried for query failed. First host tried, 35.187.166.68:9042: ResponseError: Cannot achieve consistency level SERIAL. See innerErrors.' }
My first thought was that I needed to expose the other ports too, so I did (intra-node, tls-intra-node, jmx), but I got the same error.
Kubernetes gives you access to a proxy; I tried to proxy from my machine using the constructed URL for the pod to test whether I have access, but I cannot connect using cqlsh:
http://127.0.0.1:8001/api/v1/namespaces/qq/pods/cassandra-0:cql/proxy
I am out of ideas. The one thing left to try is to expose every instance (make a service for every instance), which is very ugly, but it would let me connect to the nodes from the outside until I migrate the application to Kubernetes.
Does anyone have ideas how to expose Cassandra nodes to the internet and make the DataStax driver aware of all the nodes? Thank you for your time.
After more reading I found out that the replication strategy was causing the problem. NetworkTopologyStrategy is suited for multi-datacenter setups, and I have only one, so I changed the replication to SimpleStrategy with the number of nodes I had; now everything works as expected.
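For reference, a sketch of such a change via cqlsh (the keyspace name and replication factor are placeholders for your own):

cqlsh GIVEN_IP_GCE_LB -e "ALTER KEYSPACE <your_keyspace>
  WITH replication = {'class': 'SimpleStrategy', 'replication_factor': 3};"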
EDIT 1:
Putting databases on Kube is not a good solution. I ended up making a standalone cluster, added it to the same network as Kube, and was able to access it from Kube pods.
Kube is made to manage applications and make them 'elastic'; I don't think people really need to scale databases as quickly as applications. Furthermore, scaling a database is not the same operation as scaling a stateless application.
You need to use a headless service for the replication controller you created.
Your service should be something like:
apiVersion: v1
kind: Service
metadata:
  labels:
    app: cassandra
  name: cassandra
spec:
  clusterIP: None
  ports:
    - port: 9042
  selector:
    app: cassandra
Also, you can take the link below as a reference to bring up a Cassandra cluster:
https://github.com/kubernetes/kubernetes/tree/master/examples/storage/cassandra
I would recommend running the Cassandra pods via a ReplicationController, StatefulSet, or DaemonSet, because then Kubernetes manages restart/rescheduling of the pods whenever required.
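For instance, a minimal StatefulSet sketch that pairs with the headless service above (the image and replica count are placeholders):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: cassandra
spec:
  serviceName: cassandra
  replicas: 3
  selector:
    matchLabels:
      app: cassandra
  template:
    metadata:
      labels:
        app: cassandra
    spec:
      containers:
        - name: cassandra
          image: cassandra:3.11
          ports:
            - containerPort: 9042
              name: cql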