Does Serialization degrade Spark performance? - apache-spark

Given the following case class:
case class User(name:String, age:Int)
An RDD is created from a List of User instances.
The following code filters the RDD to keep only users above the age of 50:
trait Process {
  def test: Unit = {
    val rdd = ... // create RDD
    rdd.filter(_.age > 50)
  }
}
In order to add logging, a separate validate function is created and passed to the filter, as follows:
trait Process {
  def validate(user: User): Boolean = {
    if (user.age > 50) {
      true
    } else {
      println("FAILED VALIDATION")
      false
    }
  }

  def test: Unit = {
    val rdd = ... // create RDD
    rdd.filter(validate)
  }
}
The following exception is thrown:
org.apache.spark.SparkException: Task not serializable
The code works after making the trait in which the validate function is defined extend Serializable:
trait Process extends Serializable
Is this the correct way to handle the Task not serializable exception, or is there a performance degradation to using serialization within Spark? Are there any better ways to do this?
Thanks

is there a performance degradation to using serialization within Spark
Task serialization (as opposed to data serialization, which occurs when shuffling or collecting data) is rarely noticeable performance-wise, as long as the serialized objects are small. Task serialization happens once per task, regardless of the amount of data processed.
In this case (serializing the Process instance), the performance impact would probably be negligible, since it's a small object.
The risk with this assumption ("Process is small, so it's OK") is that over time Process might change: it is easy for developers not to notice that the class gets serialized, so they might add members that make task serialization noticeably slower.
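For context, the reason the whole Process instance ends up in the task in the first place is that passing a method where a function is expected makes the compiler generate a closure over this. A rough sketch of what rdd.filter(validate) expands to (it reuses the User case class from the question; the RDD parameter is only there to make the sketch self-contained):

import org.apache.spark.rdd.RDD

trait Process extends Serializable {
  def validate(user: User): Boolean = user.age > 50

  def test(rdd: RDD[User]): Unit = {
    // rdd.filter(validate) is effectively eta-expanded to:
    rdd.filter(user => this.validate(user))
    // the closure captures `this`, so the entire Process instance
    // must be serialized and shipped with the task
  }
}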
Are there any better ways to do this
You can avoid serialization completely by using "static" methods, that is, methods defined on objects instead of classes. In this case, you can create a companion object for Process:
import Process._

trait Process {
  def test: Unit = {
    val rdd = ... // create RDD
    rdd.filter(validate)
  }
}

object Process {
  def validate(user: User): Boolean = {
    if (user.age > 50) {
      true
    } else {
      println("FAILED VALIDATION")
      false
    }
  }
}
Objects are "static", so Spark can use them without serialization.

Related

Child thread not seeing updates made by main thread

I'm implementing a Spark health listener by extending the SparkListener class.
@Component
class ClusterHealthListener extends SparkListener with Logging {
  val appRunning = new AtomicBoolean(false)
  val executorCount = new AtomicInteger(0)

  override def onApplicationStart(applicationStart: SparkListenerApplicationStart) = {
    logger.info("Application Start called .. ")
    this.appRunning.set(true)
    logger.info(s"[appRunning = ${appRunning.get}]")
  }

  override def onExecutorAdded(executorAdded: SparkListenerExecutorAdded) = {
    logger.info("Executor add called .. ")
    this.executorCount.incrementAndGet()
    logger.info(s"[executorCount = ${executorCount.get}]")
  }
}
appRunning and executorCount are two variables declared in the ClusterHealthListener class. ClusterHealthReporterThread only reads their values.
@Component
class ClusterHealthReporterThread @Autowired() (healthListener: ClusterHealthListener) extends Logging {
  new Thread {
    override def run(): Unit = {
      while (true) {
        Thread.sleep(10 * 1000)
        logger.info("Checking range health")
        logger.info(s"[appRunning = ${healthListener.appRunning.get}] [executorCount = ${healthListener.executorCount.get}]")
      }
    }
  }.start()
}
ClusterHealthReporterThread always reports the initialized values, regardless of the changes made to the variables by the main thread. What am I doing wrong? Is this because I inject healthListener into ClusterHealthReporterThread?
Update
I played around a bit and it looks like it has something to do with the way I register the Spark listener.
If I add the Spark listener like this:
val sparkContext = SparkContext.getOrCreate(sparkConf)
sparkContext.addSparkListener(healthListener)
The parent thread always shows appRunning as 'false' but shows the executor count correctly. The child thread (the health reporter) also shows correct executor counts, but appRunning is always 'false', just like in the main thread.
Then I stumbled across Why is SparkListenerApplicationStart never fired? and tried setting the listener at the Spark config level:
.set("spark.extraListeners", "HealthListener class path")
If I do this, the main thread reports 'true' for appRunning and the correct executor counts, but the child thread always reports 'false' and '0' executors.
I can't immediately see what's wrong here; you might have found an interesting edge case.
I think @m4gic's comment might be correct: the logging library is perhaps caching that interpolated string? It looks like you are using https://github.com/lightbend/scala-logging, which claims that this interpolation "has no effect on behavior", so maybe not. Could you please follow that suggestion, retry without using that feature, and report back?
A second possibility: I wonder if there is only one ClusterHealthListener in the system? Perhaps the autowiring is causing a second instance to be created? Can you log the object ids of the ClusterHealthListener reference in both locations and verify that they are the same object?
If neither of those suggestions fix this, are you able to post a working example that I can play with?
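For the second suggestion, a minimal sketch of what that check could look like (the log statements are illustrative; System.identityHashCode is stable per object for its lifetime):

// In ClusterHealthReporterThread, inside the while loop:
logger.info(s"listener identity in reporter = ${System.identityHashCode(healthListener)}")

// In ClusterHealthListener, e.g. inside onApplicationStart:
logger.info(s"listener identity in listener = ${System.identityHashCode(this)}")

// If the two numbers differ, the reporter thread and the SparkContext are
// holding two different ClusterHealthListener instances.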

What is the best Scala thread-safe way to write to a BufferedWriter?

I have a simple method, executed asynchronously, that writes a line of data to a file followed by a newline.
def writeToFile(bw: BufferedWriter, str: String) = {
  bw.write(str)
  bw.newLine()
}
When my program runs, I get "mixed up" rows in the file due to the async nature of the calls. For instance, say writeToFile(bw, "foo") is executed 3 times asynchronously; I may get:
correct output
foo
foo
foo
possible incorrect output
foofoo
foo
I'm able to avoid this possibility by using a synchronized method like this:
def writeToFile(bw: BufferedWriter, str: String) = synchronized {
  bw.write(str)
  bw.newLine()
}
From what I've researched, I can't determine how "safe" this is with regard to scaling my application. The only examples I can find that use synchronized involve accessing collections, not writing to a file. My application is built on the Play Framework 2.4.2.
I personally would create an Akka actor for each BufferedWriter, which would encapsulate it completely.
import java.io.BufferedWriter
import akka.actor._
import playground.BufferedWriterActor.WriteToBuffer

object BufferedWriterActor {
  val name = "BufferedWriterActor"
  def props(bw: BufferedWriter) = Props(classOf[BufferedWriterActor], bw)

  case class WriteToBuffer(str: String)
}

class BufferedWriterActor(bw: BufferedWriter) extends Actor {
  def receive: Actor.Receive = {
    case WriteToBuffer(str) =>
      bw.write(str)
      bw.newLine()
  }
}
Use it like this:
import akka.actor.{ActorSystem, Props}

object HelloWorld {
  def main(args: Array[String]): Unit = {
    val system = ActorSystem("mySystem")
    // bw is the BufferedWriter created elsewhere.
    // Share this actor across all your threads.
    val myActor = system.actorOf(BufferedWriterActor.props(bw), BufferedWriterActor.name)
    // Send messages to this actor from all your threads.
    myActor ! BufferedWriterActor.WriteToBuffer("The Text")
  }
}
This serializes all writes to the buffer onto a single thread.
More info on akka and its actors is here:
http://akka.io/
http://doc.akka.io/docs/akka/snapshot/scala/actors.html
Also, the Play Framework itself uses Akka, so you should be able to reuse its default ActorSystem, but I do not remember exactly how, sorry.
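For what it's worth, a sketch of how that might look in Play 2.4 with its built-in dependency injection (the class name and file path here are illustrative, not from the original answer):

import java.io.{BufferedWriter, FileWriter}
import javax.inject.{Inject, Singleton}
import akka.actor.{ActorRef, ActorSystem}

// Play's DI can provide the application's default ActorSystem, so there is
// no need to create a separate one just for the writer actor.
@Singleton
class FileWriterService @Inject() (system: ActorSystem) {
  private val bw = new BufferedWriter(new FileWriter("output.log", true))
  private val writer: ActorRef =
    system.actorOf(BufferedWriterActor.props(bw), BufferedWriterActor.name)

  def write(line: String): Unit = writer ! BufferedWriterActor.WriteToBuffer(line)
}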

How do I pass functions into Spark transformations during scalatest?

I am using FlatSpec to run a test and keep hitting an error because I pass a function into map. I've encountered this problem a few times, but have just found a workaround by using an anonymous function. That doesn't seem to be possible in this case. Is there a way of passing functions into Spark transformations in ScalaTest?
code:
"test" should "fail" in {
  val expected = sc.parallelize(Array(Array("foo", "bar"), Array("bar", "qux")))

  def validateFoos(firstWord: String): Boolean = {
    if (firstWord == "foo") true else false
  }

  val validated = expected.map(x => validateFoos(x(0)))
  val trues = expected.map(row => true)

  assert(None === RDDComparisons.compareWithOrder(validated, trues))
}
error:
org.apache.spark.SparkException: Task not serializable
*This uses Holden Karau's Spark testing base:
https://github.com/holdenk/spark-testing-base
The "normal" way of handling this is to make the outer class serializable. That is bad practice in anything except tests, since you don't want to ship a lot of data around.
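An alternative that avoids serializing the test class at all is to move the function into a standalone object, the same idea as the companion-object answer earlier on this page. A sketch under that assumption (the object name is made up; sc and RDDComparisons come from spark-testing-base, as in the question):

// Hypothetical helper object, defined at the top level outside the test class:
// the closure below only references this "static" object, so the enclosing
// test class never needs to be serialized.
object TestFunctions {
  def validateFoos(firstWord: String): Boolean = firstWord == "foo"
}

"test" should "fail" in {
  val expected = sc.parallelize(Array(Array("foo", "bar"), Array("bar", "qux")))
  val validated = expected.map(x => TestFunctions.validateFoos(x(0)))
  val trues = expected.map(_ => true)
  assert(None === RDDComparisons.compareWithOrder(validated, trues))
}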

Actor's value sometimes returns null

I have an Actor and some other object:
object Config {
  val readValueFromConfig() = { //....}
}

class MyActor extends Actor {
  val confValue = Config.readValueFromConfig()
  val initValue = Future {
    val a = confValue // sometimes it's null
    val a = Config.readValueFromConfig() // always works well
  }
  //..........
}
The code above is a very simplified version of what I actually have. The odd thing is that sometimes val a = confValue returns null, whereas if I replace it with val a = Config.readValueFromConfig() then it always works well.
I wonder, is this due to the fact that the only way to interact with an actor is sending it a message? Therefore, since val confValue is not a local variable, I must either use val a = Config.readValueFromConfig() (a different object, not an actor) or val a = self ! GetConfigValue and read the result afterwards?
val readValueFromConfig() = { //....}
This gives me a compile error. I assume you mean without parentheses?
val readValueFromConfig = { //....}
Same logic with different timing gives different results: a race condition.
val confValue = Config.readValueFromConfig() is always executed during construction of MyActor objects (because it's a field of MyActor). Sometimes this returns null.
val a = Config.readValueFromConfig() // always works well is always executed later, after MyActor is constructed, when the Future initValue is executed by its Executor. This apparently never returns null.
Possible causes:
Could be explained away if the body of readValueFromConfig depended on another parallel/async operation having completed. Any chance you're reading the config asynchronously? Given the name of this method, it probably just reads synchronously from a file, meaning this is not the cause.
Singleton objects are not thread-safe?? I compiled your code. Here's the decompilation of your singleton object's Java classes:
public final class Config
{
  public static String readValueFromConfig()
  {
    return Config..MODULE$.readValueFromConfig();
  }
}

public final class Config$
{
  public static final MODULE$;
  private final String readValueFromConfig;

  static
  {
    new ();
  }

  public String readValueFromConfig()
  {
    return this.readValueFromConfig;
  }

  private Config$()
  {
    MODULE$ = this;
    this.readValueFromConfig = // ... your logic here;
  }
}
Mmmkay... Unless I'm mistaken, that ain't thread-safe.
If two threads access readValueFromConfig (say Thread1 accesses it first), then inside the private Config$() constructor, MODULE$ is unsafely published before this.readValueFromConfig is set (the reference to this prematurely escapes the constructor). Thread2, which is right behind, can read MODULE$.readValueFromConfig before it is set. This is highly likely to be a problem if '... your logic here' is slow and blocks the thread, which is precisely what synchronous I/O does.
Moral of the story: avoid accessing stateful singleton objects from actors (or any threads at all, including executors), OR make them thread-safe through very careful coding. Work-around: change readValueFromConfig to a def that internally caches the value in a private val.
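One possible way to implement that work-around, sketched here under the assumption that a lazy val (which the compiler backs with a private field and a synchronized initializer) is an acceptable form of that caching:

object Config {
  // The accessor is a def; the value is cached in the compiler-generated
  // private field behind the lazy val, and its initialization is
  // synchronized, so the value is published safely to other threads.
  private lazy val cachedValue: String = {
    // ... read the configuration here (possibly slow, synchronous I/O)
    "some-config-value" // placeholder
  }

  def readValueFromConfig(): String = cachedValue
}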
I wonder, is this due to the fact that the only way to interact with an actor is sending it a message? Therefore, since val confValue is not a local variable, I must either use val a = Config.readValueFromConfig() (a different object, not an actor)
Just because it's not an actor, doesn't mean it's necessarily safe. It probably isn't.
or val a = self ! GetConfigValue and read the result afterwards?
That's almost right. You mean self ? GetConfigValue, I think - that will return a Future, which you can then map over. ! doesn't return anything.
You cannot read from an actor's variables directly inside a Future because (in general) that Future could be running on any thread, on any processor core, and you don't have any memory barrier there to force the CPU caches to reload the value from main memory.

Simple parallelization with shared non-threadsafe resource in Scala

I frequently want to parallelize a task that relies on a non-threadsafe shared resource. Consider the following non-threadsafe class. I want to do a map over a data: Vector[String].
class Processor { def apply(s: String): String = ??? } // not thread-safe; body omitted
Basically, I want to create n threads, each with an instance of Processor and a partition of the data. Scala parallel collections have spoiled me into thinking the parallelization should be dirt simple. However, they don't seem well suited for this problem. Yes, I can use actors but Scala actors might become deprecated and Akka seems like overkill.
The first thing that comes to mind is to have a synchronized map Thread -> Processor and then use parallel collections, looking up my Processor in this thread-safe map. Is there a better way?
Instead of building your own synchronized map, you can use ThreadLocal. That will guarantee a unique Processor per thread.
val processors = new ThreadLocal[Processor] {
  override def initialValue() = new Processor
}

data.par.map(x => processors.get.apply(x))
Alternatively, you can try using an executor service explicitly configured with a fixed number of threads:
import java.util.concurrent.{Executors, TimeUnit}

val processors = new ThreadLocal[Processor] {
  override def initialValue() = new Processor
}

val N = 4
// create an executor with a fixed number of threads
val execSvc = Executors.newFixedThreadPool(N)

// create the tasks, one per element
data foreach { loopData =>
  execSvc.submit(new Runnable() {
    def run(): Unit = processors.get().apply(loopData)
  })
}

// await termination
execSvc.shutdown()
while (!execSvc.awaitTermination(1, TimeUnit.SECONDS)) {}
// processing complete!
