I have a very simple Flink application in Scala with two simple streams. I broadcast one stream to the other. The broadcast stream contains rules, and I just check whether the other stream's tuples are inside the rule set or not. Everything works fine, and my code is below.
This application runs indefinitely. I wonder whether there is any possibility that the JVM will collect my rules object as garbage.
Does anyone have any idea? Many thanks in advance.
object StreamBroadcasting extends App {
  val env = StreamExecutionEnvironment.createLocalEnvironmentWithWebUI()

  val stream = env
    .socketTextStream("localhost", 9998)
    .flatMap(_.toLowerCase.split("\\W+").filter(_.nonEmpty))
    .keyBy(l => l)

  val ruleStream = env
    .socketTextStream("localhost", 9999)
    .flatMap(_.toLowerCase.split("\\W+").filter(_.nonEmpty))

  val broadcastStream: DataStream[String] = ruleStream.broadcast

  stream.connect(broadcastStream)
    .flatMap(new SimpleConnect)
    .print

  class SimpleConnect extends RichCoFlatMapFunction[String, String, (String, Boolean)] {
    private var rules: Set[String] = Set.empty[String] // Can the JVM collect this object after a long time?

    override def open(parameters: Configuration): Unit = {}

    override def flatMap1(value: String, out: Collector[(String, Boolean)]): Unit = {
      out.collect((value, rules.contains(value)))
    }

    override def flatMap2(value: String, out: Collector[(String, Boolean)]): Unit = {
      rules = rules + value
    }
  }

  env.execute("flink-broadcast-streams")
}
No, the Set of rules will not be garbage collected. It will stick around forever. (Of course, since you're not using Flink's broadcast state, the rules won't survive an application restart.)
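For reference, a minimal sketch of what the broadcast-state variant could look like, reusing the question's stream and ruleStream and assuming Flink's KeyedBroadcastProcessFunction API; the descriptor name, the Rules/RuleMatcher class names, and the Boolean "value" type are placeholders:

import org.apache.flink.api.common.state.MapStateDescriptor
import org.apache.flink.streaming.api.functions.co.KeyedBroadcastProcessFunction
import org.apache.flink.util.Collector

object Rules {
  // Descriptor for the broadcast state; only the key set is used, so the value type is a placeholder.
  val descriptor = new MapStateDescriptor[String, java.lang.Boolean](
    "rules", classOf[String], classOf[java.lang.Boolean])
}

class RuleMatcher extends KeyedBroadcastProcessFunction[String, String, String, (String, Boolean)] {

  override def processElement(
      word: String,
      ctx: KeyedBroadcastProcessFunction[String, String, String, (String, Boolean)]#ReadOnlyContext,
      out: Collector[(String, Boolean)]): Unit =
    // Read-only view of the broadcast state on the non-broadcast side.
    out.collect((word, ctx.getBroadcastState(Rules.descriptor).contains(word)))

  override def processBroadcastElement(
      rule: String,
      ctx: KeyedBroadcastProcessFunction[String, String, String, (String, Boolean)]#Context,
      out: Collector[(String, Boolean)]): Unit =
    // Writable broadcast state; it is checkpointed and restored with the job.
    ctx.getBroadcastState(Rules.descriptor).put(rule, java.lang.Boolean.TRUE)
}

// Wiring inside StreamBroadcasting, replacing the connect/flatMap/print above:
stream
  .connect(ruleStream.broadcast(Rules.descriptor))
  .process(new RuleMatcher)
  .print()

With this, the rule set is managed by Flink rather than held as a plain field, so it participates in checkpoints and survives restarts.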
I want to continuously process the rows of a streaming Dataset (originally fed by Kafka): based on a condition, I want to update a Redis hash. This is my code snippet (lastContacts is the result of a previous command, which is a stream of this type: org.apache.spark.sql.DataFrame = [serialNumber: string, lastModified: long]. This expands to org.apache.spark.sql.Dataset[org.apache.spark.sql.Row]):
class MyStreamProcessor extends ForeachWriter[Row] {
  override def open(partitionId: Long, version: Long): Boolean = {
    true
  }

  override def process(record: Row) = {
    val stringHashRDD = sc.parallelize(Seq(("lastContact", record(1).toString)))
    sc.toRedisHASH(stringHashRDD, record(0).toString)(redisConfig)
  }

  override def close(errorOrNull: Throwable): Unit = {}
}
val query = lastContacts
  .writeStream
  .foreach(new MyStreamProcessor())
  .start()

query.awaitTermination()
I receive a huge stack trace, of which the relevant part (I think) is this: java.io.NotSerializableException: org.apache.spark.sql.streaming.DataStreamWriter
Could anyone explain why this exception occurs and how to avoid it? Thank you!
This question is related to the following two:
DataFrame to RDD[(String, String)] conversion
Call a function with each element a stream in Databricks
Spark Context is not serializable.
Any implementation of ForeachWriter must be serializable because each task will get a fresh serialized-deserialized copy of the provided object. Hence, it is strongly recommended that any initialization for writing data (e.g. opening a connection or starting a transaction) is done after the open(...) method has been called, which signifies that the task is ready to generate data.
In your code, you are trying to use the Spark context within the process method:
override def process(record: Row) = {
  val stringHashRDD = sc.parallelize(Seq(("lastContact", record(1).toString)))
  sc.toRedisHASH(stringHashRDD, record(0).toString)(redisConfig)
}
To send data to Redis, you need to create your own connection, open it in the open method, and then use it in the process method, as sketched below.
Take a look at how to create a Redis connection pool: https://github.com/RedisLabs/spark-redis/blob/master/src/main/scala/com/redislabs/provider/redis/ConnectionPool.scala
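A minimal sketch of that idea, swapping the spark-redis RDD helpers for a plain Jedis client; the class name and the host/port values are placeholders:

import org.apache.spark.sql.{ForeachWriter, Row}
import redis.clients.jedis.Jedis

class RedisForeachWriter(host: String, port: Int) extends ForeachWriter[Row] {
  @transient private var jedis: Jedis = _

  override def open(partitionId: Long, version: Long): Boolean = {
    // Open the connection on the executor, after the writer has been deserialized.
    jedis = new Jedis(host, port)
    true
  }

  override def process(record: Row): Unit = {
    // Write one hash field per row: HSET <serialNumber> lastContact <lastModified>
    jedis.hset(record.getString(0), "lastContact", record.get(1).toString)
  }

  override def close(errorOrNull: Throwable): Unit = {
    if (jedis != null) jedis.close()
  }
}

It could then be wired in with something like lastContacts.writeStream.foreach(new RedisForeachWriter("localhost", 6379)).start(), so nothing non-serializable is captured by the writer itself.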
Lately, as part of a scientific research project, I've been developing an application that streams (or at least should stream) data from Travis CI and GitHub using their REST APIs. The purpose is to gain insight into the commit-build relationship in order to perform further analyses.
For this, I've implemented the following Travis custom receiver:
object TravisUtils {
  def createStream(ctx: StreamingContext, storageLevel: StorageLevel): ReceiverInputDStream[Build] =
    new TravisInputDStream(ctx, storageLevel)
}

private[streaming]
class TravisInputDStream(ctx: StreamingContext, storageLevel: StorageLevel) extends ReceiverInputDStream[Build](ctx) {
  def getReceiver(): Receiver[Build] = new TravisReceiver(storageLevel)
}

private[streaming]
class TravisReceiver(storageLevel: StorageLevel) extends Receiver[Build](storageLevel) with Logging {

  def onStart(): Unit = {
    new BuildStream().addListener(new BuildListener {
      override def onBuildsReceived(numberOfBuilds: Int): Unit = {}

      override def onBuildRepositoryReceived(build: Build): Unit = {
        store(build)
      }

      override def onException(e: Exception): Unit = {
        reportError("Exception while streaming travis", e)
      }
    })
  }

  def onStop(): Unit = {}
}
The receiver uses my custom-made Travis API library (developed in Java using the Apache Async Client). However, the problem is the following: the data that I should be receiving is continuous and changing, i.e. it is constantly being pushed to Travis and GitHub. As an example, consider that GitHub records approximately 350 events per second, including push events, commit comments, and similar.
But when streaming either GitHub or Travis, I do get the data from the first two batches, but afterwards the RDDs that are part of the DStream are empty, although there is data to be streamed!
I've checked a couple of things so far, including the HttpClient used for issuing requests to the API, but none of them actually solved this problem.
Therefore, my question is: what could be going on? Why isn't Spark streaming the data after period x passes? Below you can find the context setup and configuration:
val configuration = new SparkConf().setAppName("StreamingSoftwareAnalytics").setMaster("local[2]")
val ctx = new StreamingContext(configuration, Seconds(3))
val stream = GitHubUtils.createStream(ctx, StorageLevel.MEMORY_AND_DISK_SER)
// RDD IS EMPTY - that is what is happening!
stream.window(Seconds(9)).foreachRDD(rdd => {
  if (rdd.isEmpty()) {
    println("RDD IS EMPTY")
  } else {
    rdd.collect().foreach(event => println(event.getRepo.getName + " " + event.getId))
  }
})
ctx.start()
ctx.awaitTermination()
Thanks in advance!
Suppose I have a Scala app that is based on Futures and scala.concurrent to handle the async/concurrency (no actors have been used so far).
In many places I use log4j to log stuff to a log file.
Since this is I/O, I suppose I may improve performance by sending the log message to a logging actor,
something like this:
def stuffTodo(arg: String)(implicit ex: ExecutionContext): Future[Result] = {
  // important performant work
  // ..
  logActor ! LogMessage("message-1")
  // ...
}
where the message is case class LogMessage(msg: String, ex: ExecutionContext),
and then in the log actor:
def receive = {
  case LogMessage(msg: String, ex: ExecutionContext) => log.info(msg + ex)
}
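For clarity, this is roughly the shape of the idea in a self-contained form. A minimal sketch only: log4j 2's LogManager is assumed, the actor and object names are placeholders, and the ExecutionContext field is dropped for brevity:

import akka.actor.{Actor, ActorSystem, Props}
import org.apache.logging.log4j.LogManager

// The message carries only the text; the actor owns the logger and does the blocking I/O.
case class LogMessage(msg: String)

class LogActor extends Actor {
  private val log = LogManager.getLogger("app")
  def receive: Receive = {
    case LogMessage(msg) => log.info(msg) // runs on the actor's dispatcher, not the caller's thread
  }
}

object LoggingDemo extends App {
  val system = ActorSystem("logging")
  val logActor = system.actorOf(Props[LogActor], "logActor")

  // From anywhere in the Future-based code:
  logActor ! LogMessage("message-1")
}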
I've seen other approaches that basically wrap scala.concurrent.ExecutionContext (with the current thread), use the Mapped Diagnostic Context magic of log4j, and do the logging by attaching the thread id to the log message. But ultimately that blocks/slows down the thread/execution (as far as I understand) and makes the app slower.
With this actor, in contrast, the logging stays independent/async and sequential at the same time.
Is it a good idea to go this way? Can anyone share experience? Pros/cons/concerns? I'm asking before trying it under heavy load.
Akka already has good support for logging, which is documented on the Logging page of the Akka documentation. I don't think it's necessary or desirable to create a logger actor in your system, particularly when this mechanism already exists.
You can use the LoggingAdapter class, which performs logging asynchronously by posting the log message onto an event bus. You should be able to use this same mechanism inside an actor or outside of one.
Take a look at Logging.scala
Logging Mixin
Akka has a mixin for actors that creates the logger for the actor and allows the MDC to be set. From the documentation:
import akka.event.Logging.MDC

final case class Req(work: String, visitorId: Int)

class MdcActorMixin extends Actor with akka.actor.DiagnosticActorLogging {
  var reqId = 0

  override def mdc(currentMessage: Any): MDC = {
    reqId += 1
    val always = Map("requestId" -> reqId)
    val perMessage = currentMessage match {
      case r: Req => Map("visitorId" -> r.visitorId)
      case _      => Map()
    }
    always ++ perMessage
  }

  def receive: Receive = {
    case r: Req => {
      log.info(s"Starting new request: ${r.work}")
    }
  }
}
Logging outside an actor
There are two examples in the Logging.scala file of creating a LogSource outside of an actor:
trait MyType { // as an example
  def name: String
}

implicit val myLogSourceType: LogSource[MyType] = new LogSource[MyType] {
  def genString(a: MyType) = a.name
}

class MyClass extends MyType {
  val log = Logging(eventStream, this) // will use "hallo" as logSource
  def name = "hallo"
}
The second variant is used for including the actor system’s address:
trait MyType { // as an example
  def name: String
}

implicit val myLogSourceType: LogSource[MyType] = new LogSource[MyType] {
  def genString(a: MyType) = a.name
  def genString(a: MyType, s: ActorSystem) = a.name + "," + s
}

class MyClass extends MyType {
  val sys = ActorSystem("sys")
  val log = Logging(sys, this) // will use "hallo,akka://sys" as logSource
  def name = "hallo"
}
The MDC can be set on the logging adapter if you need it.
Logging Thread and MDC
The documentation also covers your point on the logging thread, when the logging is asynchronously performed by another thread.
Since the logging is done asynchronously, the thread in which the logging was performed is captured in the Mapped Diagnostic Context (MDC) with attribute name sourceThread. With Logback the thread name is available with the %X{sourceThread} specifier within the pattern layout configuration.
It looks like over-engineering.
If I were going to do something like this, I would go with Logback and implement a custom asynchronous appender, so I would not have to change my code if I later decided to drop it.
Otherwise, the logging libraries are a pretty good bet; I would try tuning them to get better performance, and rely on others' experience in implementing this feature.
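For illustration, a minimal sketch of such a custom asynchronous appender, assuming Logback's AppenderBase API; the class name and queue size are placeholders, and in practice Logback's built-in AsyncAppender already provides this behavior:

import java.util.concurrent.LinkedBlockingQueue
import ch.qos.logback.classic.spi.ILoggingEvent
import ch.qos.logback.core.AppenderBase

class QueueingAppender extends AppenderBase[ILoggingEvent] {
  private val queue = new LinkedBlockingQueue[ILoggingEvent](10000)

  private val worker = new Thread(new Runnable {
    def run(): Unit = while (!Thread.currentThread().isInterrupted) {
      val event = queue.take() // blocks until an event is available
      // For the sketch, just print; a real appender would write to a file or delegate downstream.
      println(event.getFormattedMessage)
    }
  })

  override def start(): Unit = {
    worker.setDaemon(true)
    worker.start()
    super.start()
  }

  // Called on the application thread; it only enqueues, so it returns quickly.
  override protected def append(event: ILoggingEvent): Unit = queue.offer(event)

  override def stop(): Unit = {
    worker.interrupt()
    super.stop()
  }
}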
So I'm trying to work with both Squeryl and Akka Actors. I've done a lot of searching and all I've been able to find is the following Google Group post:
https://groups.google.com/forum/#!topic/squeryl/M0iftMlYfpQ
I think I might have shot myself in the foot as I originally created this factory pattern so I could toss around Database objects.
object DatabaseType extends Enumeration {
  type DatabaseType = Value
  val Postgres = Value(1, "Postgres")
  val H2 = Value(2, "H2")
}

object Database {
  def getInstance(dbType: DatabaseType, jdbcUrl: String, username: String, password: String): Database = {
    Class.forName(jdbcDriver(dbType))
    new Database(Session.create(
      _root_.java.sql.DriverManager.getConnection(jdbcUrl, username, password),
      squerylAdapter(dbType)))
  }

  private def jdbcDriver(db: DatabaseType) = {
    db match {
      case DatabaseType.Postgres => "org.postgresql.Driver"
      case DatabaseType.H2       => "org.h2.Driver"
    }
  }

  private def squerylAdapter(db: DatabaseType) = {
    db match {
      case DatabaseType.Postgres => new PostgreSqlAdapter
      case DatabaseType.H2       => new H2Adapter
    }
  }
}
Originally in my implementation, I tried surrounding all my statements in using(session), but I kept getting the dreaded "No session is bound to the current thread" error, so I added session.bindToCurrentThread to the constructor.
class Database(session: Session) {
  session.bindToCurrentThread

  def failedBatch(filename: String, message: String, start: Timestamp = now, end: Timestamp = now) =
    batch.insert(new Batch(0, filename, Some(start), Some(end), ProvisioningStatus.Fail, Some(message)))

  def startBatch(batch_id: Long, start: Timestamp = now) =
    batch update (b => where(b.id === batch_id) set (b.start := Some(start)))

  // ...more functions
}
This worked reasonably well, until I got to Akka actors.
class TransferActor() extends Actor {

  def databaseInstance() = {
    val dbConfig = config.getConfig("provisioning.database")
    Database.getInstance(DatabaseType.Postgres,
      dbConfig.getString("jdbcUrl"),
      dbConfig.getString("username"),
      dbConfig.getString("password"))
  }

  lazy val config = ConfigManager.current

  override def receive: Actor.Receive = {
    /* .. do some work */
  }
}
I constantly get the following:
[ERROR] [03/11/2014 17:02:57.720] [provisioning-system-akka.actor.default-dispatcher-4] [akka://provisioning-system/user/$c] No session is bound to current thread, a session must be created via Session.create
and bound to the thread via 'work' or 'bindToCurrentThread'
Usually this error occurs when a statement is executed outside of a transaction/inTrasaction block
java.lang.RuntimeException: No session is bound to current thread, a session must be created via Session.create
and bound to the thread via 'work' or 'bindToCurrentThread'
I'm getting a fresh Database object each time, not caching it with a lazy val, so shouldn't that constructor always get called and attach to my current thread? Does Akka attach different threads to different actors and swap them around? Should I just add a function to call session.bindToCurrentThread each time I'm in an actor? Seems kinda hacky.
Does Akka attach different threads to different actors and swap them around?
That's exactly how the actor model works. The idea is that you can have a small thread pool servicing a very large number of actors, because an actor only needs a thread when it has a message waiting to be processed.
Some general tips for Squeryl: a Session is a one-to-one association with a JDBC connection. The main advantage of keeping sessions open is that you can have a transaction open that gives you a consistent view of the database as you perform multiple operations. If you don't need that, make your session/transaction code granular to avoid these types of issues. If you do need it, don't rely on sessions being available in a thread-local context. Use the transaction(session){} or transaction(sessionFactory){} methods to explicitly tell Squeryl where you want your Session to come from.
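A minimal sketch of the granular approach, assuming Squeryl's SessionFactory and the transaction helper from PrimitiveTypeMode; the object name, connection details, and message handling are placeholders:

import akka.actor.Actor
import org.squeryl.{Session, SessionFactory}
import org.squeryl.adapters.PostgreSqlAdapter
import org.squeryl.PrimitiveTypeMode._

object Db {
  // Configure once at application startup; connection details are placeholders.
  def init(): Unit =
    SessionFactory.concreteFactory = Some(() =>
      Session.create(
        java.sql.DriverManager.getConnection("jdbc:postgresql://localhost/db", "user", "pass"),
        new PostgreSqlAdapter))
}

class TransferActor extends Actor {
  override def receive: Receive = {
    case _ =>
      // Each message handler opens and closes its own session/transaction,
      // so it does not matter which dispatcher thread happens to run the actor.
      transaction {
        // ... Squeryl inserts / queries here
      }
  }
}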
I have an Actor and some other object:
object Config {
  val readValueFromConfig() = { //....}
}

class MyActor extends Actor {
  val confValue = Config.readValueFromConfig()
  val initValue = Future {
    val a = confValue // sometimes it's null
    val a = Config.readValueFromConfig() // always works well
  }
  //..........
}
The code above is a very simplified version of what I actually have. The odd thing is that sometimes val a = confValue returns null, whereas if I replace it with val a = Config.readValueFromConfig() then it always works well.
I wonder, is this due to the fact that the only way to interact with an actor is sending it a message? Therefore, since val confValue is not a local variable, I must either use val a = Config.readValueFromConfig() (a different object, not an actor) or val a = self ! GetConfigValue and read the result afterwards?
val readValueFromConfig() = { //....}
This gives me a compile error. I assume you mean without parentheses?
val readValueFromConfig = { //....}
Same logic with different timing gives different result = a race condition.
val confValue = Config.readValueFromConfig() is always executed during construction of MyActor objects (because it's a field of MyActor). Sometimes this is returning null.
val a = Config.readValueFromConfig() //always works well is always executed later - after MyActor is constructed, when the Future initValue is executed by its Executor. It seems this never returns null.
Possible causes:
1. It could be explained away if the body of readValueFromConfig were dependent upon another parallel/async operation having completed. Any chance you're reading the config asynchronously? Given the name of this method, it probably just reads synchronously from a file, meaning this is not the cause.
2. Singleton objects are not thread-safe?? I compiled your code. Here's the decompilation of your singleton object's Java class:
public final class Config
{
  public static String readValueFromConfig()
  {
    return Config..MODULE$.readValueFromConfig();
  }
}

public final class Config$
{
  public static final MODULE$;
  private final String readValueFromConfig;

  static
  {
    new ();
  }

  public String readValueFromConfig()
  {
    return this.readValueFromConfig;
  }

  private Config$()
  {
    MODULE$ = this;
    this.readValueFromConfig = // ... your logic here;
  }
}
Mmmkay... Unless I'm mistaken, that ain't thread-safe.
If two threads are accessing readValueFromConfig (say Thread1 accesses it first), then inside the method private Config$(), MODULE$ is unsafely published before this.readValueFromConfig is set (the reference to this prematurely escapes the constructor). Thread2, which is right behind, can read MODULE$.readValueFromConfig before it is set. This is highly likely to be a problem if '... your logic here' is slow and blocks the thread, which is precisely what synchronous I/O does.
Moral of the story: avoid accessing stateful singleton objects from actors (or any threads at all, including Executors), OR make them thread-safe through a very careful coding style. Work-around: change it to a def which internally caches the value in a private val.
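A minimal sketch of that kind of work-around, assuming the config is read synchronously from a file; a lazy val is one common way to get the same effect as a def caching into a private val, since its initialization is synchronized and the fully constructed value is safely published:

object Config {
  // lazy val: initialized on first access under a lock, so whichever thread reads it
  // first sees the completely initialized value.
  lazy val readValueFromConfig: String = {
    // ... read the value synchronously from the config file here (placeholder)
    scala.io.Source.fromFile("app.conf").getLines().mkString("\n")
  }
}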
I wonder, is this due to the fact that the only way to interact with an actor is sending it a message? Therefore, since val confValue is not a local variable, I must either use val a = Config.readValueFromConfig() (a different object, not an actor)
Just because it's not an actor, doesn't mean it's necessarily safe. It probably isn't.
or val a = self ! GetConfigValue and read the result afterwards?
That's almost right. You mean self ? GetConfigValue, I think - that will return a Future, which you can then map over. ! doesn't return anything.
You cannot read from an actor's variables directly inside a Future because (in general) that Future could be running on any thread, on any processor core, and you don't have any memory barrier there to force the CPU caches to reload the value from main memory.
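For completeness, a rough sketch of the ask variant mentioned above; the GetConfigValue message, the actor, the placeholder config value, and the timeout are all assumptions:

import akka.actor.{Actor, ActorSystem, Props}
import akka.pattern.ask
import akka.util.Timeout
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.Future
import scala.concurrent.duration._

case object GetConfigValue

class ConfigHolder extends Actor {
  private val confValue: String = "some-config-value" // placeholder for the real config read
  def receive: Receive = {
    case GetConfigValue => sender() ! confValue // reply goes back to the asker
  }
}

object AskDemo extends App {
  val system = ActorSystem("demo")
  val holder = system.actorOf(Props[ConfigHolder], "configHolder")

  implicit val timeout: Timeout = Timeout(3.seconds)

  // `?` returns a Future; map over it instead of reading the actor's field directly.
  val confValueF: Future[String] = (holder ? GetConfigValue).mapTo[String]
  confValueF.foreach(v => println(s"config value: $v"))
}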