Corda RPC Connection Pooling/Caching

Does Corda have a connection pooling feature? How should connection pooling be handled for multiple RPC users?
I would appreciate being redirected to any open-source implementation or guide for RPC connection pooling/caching.

Here's an example of how to pool the node RPC connections:
import net.corda.client.rpc.CordaRPCClient
import net.corda.client.rpc.CordaRPCConnection
import net.corda.core.utilities.NetworkHostAndPort
import net.corda.core.utilities.contextLogger
import net.corda.core.utilities.getOrThrow
import net.corda.node.services.Permissions
import net.corda.testing.driver.DriverParameters
import net.corda.testing.driver.NodeParameters
import net.corda.testing.driver.driver
import net.corda.testing.node.User
import org.junit.Test
import java.util.concurrent.ConcurrentHashMap
import java.util.concurrent.LinkedBlockingQueue
data class UserParams(val username: String, val password: String)
class PooledRpcConnections(address: NetworkHostAndPort) : AutoCloseable {
val userToPool = ConcurrentHashMap<UserParams, LinkedBlockingQueue<CordaRPCConnection>>()
val client = CordaRPCClient(address)
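// Borrow a connection for this user, or open a new one if the pool is empty;
// the connection is returned to the user's queue once the block completes.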
fun <A> withConnection(userParams: UserParams, block: (CordaRPCConnection) -> A): A {
val queue = userToPool.getOrPut(userParams) { LinkedBlockingQueue() }
val connection = queue.poll() ?: client.start(userParams.username, userParams.password)
return try {
block(connection)
} finally {
queue.add(connection)
}
}
override fun close() {
for (queue in userToPool.values) {
do {
val connection = queue.poll()
connection?.close()
} while (connection != null)
}
}
}
class PoolTest {
companion object {
val log = contextLogger()
}
@Test
fun poolWorks() {
val users = ('a' .. 'f').map { User(it.toString(), it.toString(), setOf(Permissions.all())) }
val userParams = users.map { UserParams(it.username, it.password) }
driver(DriverParameters(startNodesInProcess = true, notarySpecs = emptyList())) {
log.info("Starting node for users ${users.map { it.username }}")
val node = startNode(NodeParameters(rpcUsers = users)).getOrThrow()
log.info("Starting pool")
PooledRpcConnections(node.rpcAddress).use { pool ->
val N = 1000
log.info("Making $N requests using pooled connections")
(1 .. N).toList().parallelStream().forEach { i ->
val user = userParams[i % users.size]
pool.withConnection(user) { connection ->
log.info("USER[${user.username}] CONNECTION[${connection.hashCode()}] NODE_TIME[${connection.proxy.currentNodeTime()}]")
}
}
log.info("Done! Number of connections used per user: ${pool.userToPool.map { it.key.username to it.value.size }}")
}
}
}
}

Related

Spark custom receiver does not get data

I'm using Spark Streaming to ingest my company's internal data source. I followed this tutorial to write a receiver: https://spark.apache.org/docs/latest/streaming-custom-receivers.html. But in the Spark UI's streaming tab I always see 0 msgs coming in, and I don't see any errors in the driver logs, so I'm really confused about what goes wrong. (To connect to the internal data source, I need to create a client; then listen() keeps running to receive new msgs.) Could it be because of the listen mode on the data source?
My Receiver
class MyReceiver(val clientId: String, val token: String, val env: String) extends Receiver[String](StorageLevel.MEMORY_AND_DISK_2) {
def onStart() {
new Thread("My Data Source") { override def run() { receive() } }.start()
}
def onStop() { }
private def receive() {
while(!isStopped()) {
try {
val client = new Client(clientId, token, "STAGE")
client.connect()
client.listen(Client.Topic, new ClientMsgHandler() {
override def process(event: ClientMsg): Unit = {
val msg: String = event.getBody
store(msg)
}
override def onException(event: ClientEvent): Unit = {
}
})
} catch {
case ce: java.net.ConnectException =>
System.out.println("Could not connect")
case t: Throwable =>
System.out.println("Error receiving data")
}
}
}
}
==================================================================
Create Stream
class MyStream(sc: SparkContext, sqlContext: SQLContext, cpDir: String) {
def creatingFunc(): StreamingContext = {
val ssc = new StreamingContext(sc, Seconds(3))
// Set the active SQLContext so that we can access it statically within the foreachRDD
SQLContext.setActiveSession(sqlContext)
ssc.checkpoint(cpDir)
val ClientId = <Myclientid>
val Token = <Mytoken>
val env = "STAGE"
val stream = ssc.receiverStream(new MyReceiver(ClientId, Token, env))
stream.foreachRDD { rdd => println("Here"+rdd.take(10).mkString(", "))
}
ssc
}
}
==================================================================
Start Streaming
val checkpoint_dir = <my_checkpoint_dir>
val MyDataSourceStream = new MyStream(sc, sqlContext, checkpoint_dir)
val ssc = StreamingContext.getActiveOrCreate(checkpoint_dir, MyDataSourceStream.creatingFunc _)
ssc.start()
ssc.awaitTermination()
Updates:
Since it's an internal source, I cannot share the Client source code, but I've tested the connection: it works with the code below, and the msg is printed out correctly. You can think of Client as an external lib which has no connection issues.
val ClientId = <myclientid>
val Token = <mytoken>
val client = new EVClient(ClientId, Token, "STAGE")
client.connect()
client.listen(Client.Topic, new ClientMsgHandler() {
override def onEvent(event: ClientMsg): Unit = {
val res = event.getBody
println(res)
}
override def onException(event: ClientEvent): Unit = {
}
})

Loading indicator does not hide if the API fails to retrieve data, although it hides if the API succeeds, in Android Paging library

I have a remote server from which I want to fetch 20 items (Job) per API call and show them in a RecyclerView using the Paging library.
For that, I want to show a loading indicator at the beginning of the first API call, while the list of items is being fetched from the server. Everything is okay if the data is fetched successfully; that is, the loading indicator becomes invisible once the data has loaded. The code is given below.
JobService.kt
@GET(Constants.API_JOB_LIST)
fun getJobPost(
    @Query("page") pageNumber: Int
): Observable<Response<JobResponse>>
JobResponse.kt
data class JobResponse(
    @SerializedName("status") val status: Int? = null,
    @SerializedName("message") val message: Any? = null,
    @SerializedName("data") val jobData: JobData? = null
)
JobData.kt
data class JobData(
    @SerializedName("jobs") val jobs: List<Job?>? = null,
    @SerializedName("total") val totalJob: Int? = null,
    @SerializedName("page") val currentPage: Int? = null,
    @SerializedName("showing") val currentlyShowing: Int? = null,
    @SerializedName("has_more") val hasMore: Boolean? = null
)
NetworkState.kt
sealed class NetworkState {
data class Progress(val isLoading: Boolean) : NetworkState()
data class Failure(val errorMessage: String?) : NetworkState()
companion object {
fun loading(isLoading: Boolean): NetworkState = Progress(isLoading)
fun failure(errorMessage: String?): NetworkState = Failure(errorMessage)
}
}
Event.kt
open class Event<out T>(private val content: T) {
private var hasBeenHandled = false
fun getContentIfNotHandled() = if (hasBeenHandled) {
null
} else {
hasBeenHandled = true
content
}
fun peekContent() = content
}
JobDataSource.kt
class JobDataSource(
private val jobService: JobService,
private val compositeDisposable: CompositeDisposable
) : PageKeyedDataSource<Int, Job>() {
val paginationState: MutableLiveData<Event<NetworkState>> = MutableLiveData()
val initialLoadingState: MutableLiveData<Event<NetworkState>> = MutableLiveData()
val totalJob: MutableLiveData<Event<Int>> = MutableLiveData()
companion object {
private const val FIRST_PAGE = 1
}
override fun loadInitial(params: LoadInitialParams<Int>, callback: LoadInitialCallback<Int, Job>) {
compositeDisposable += jobService.getJobPost(FIRST_PAGE)
.performOnBackgroundOutputOnMain()
.doOnSubscribe { initialLoadingState.postValue(Event(loading(true))) }
.doOnTerminate { initialLoadingState.postValue(Event(loading(false))) }
.subscribe({
if (it.isSuccessful) {
val jobData = it.body()?.jobData
totalJob.postValue(Event(jobData?.totalJob!!))
jobData.jobs?.let { jobs -> callback.onResult(jobs, null, FIRST_PAGE+1) }
} else {
val error = Gson().fromJson(it.errorBody()?.charStream(), ApiError::class.java)
when (it.code()) {
CUSTOM_STATUS_CODE -> initialLoadingState.postValue(Event(failure(error.message!!)))
else -> initialLoadingState.postValue(Event(failure("Something went wrong")))
}
}
}, {
if (it is IOException) {
initialLoadingState.postValue(Event(failure("Check Internet Connectivity")))
} else {
initialLoadingState.postValue(Event(failure("Json Parsing error")))
}
})
}
override fun loadAfter(params: LoadParams<Int>, callback: LoadCallback<Int, Job>) {
compositeDisposable += jobService.getJobPost(params.key)
.performOnBackgroundOutputOnMain()
.doOnSubscribe { if (params.key != 2) paginationState.postValue(Event(loading(true))) }
.doOnTerminate { paginationState.postValue(Event(loading(false))) }
.subscribe({
if (it.isSuccessful) {
val jobData = it.body()?.jobData
totalJob.postValue(Event(jobData?.totalJob!!))
jobData.jobs?.let { jobs -> callback.onResult(jobs, if (jobData.hasMore!!) params.key+1 else null) }
} else {
val error = Gson().fromJson(it.errorBody()?.charStream(), ApiError::class.java)
when (it.code()) {
CUSTOM_STATUS_CODE -> initialLoadingState.postValue(Event(failure(error.message!!)))
else -> initialLoadingState.postValue(Event(failure("Something went wrong")))
}
}
}, {
if (it is IOException) {
paginationState.postValue(Event(failure("Check Internet Connectivity")))
} else {
paginationState.postValue(Event(failure("Json Parsing error")))
}
})
}
override fun loadBefore(params: LoadParams<Int>, callback: LoadCallback<Int, Job>) {}
}
JobDataSourceFactory.kt
class JobDataSourceFactory(
private val jobService: JobService,
private val compositeDisposable: CompositeDisposable
): DataSource.Factory<Int, Job>() {
val jobDataSourceLiveData = MutableLiveData<JobDataSource>()
override fun create(): DataSource<Int, Job> {
val jobDataSource = JobDataSource(jobService, compositeDisposable)
jobDataSourceLiveData.postValue(jobDataSource)
return jobDataSource
}
}
JobBoardViewModel.kt
class JobBoardViewModel(
private val jobService: JobService
) : BaseViewModel() {
companion object {
private const val PAGE_SIZE = 20
private const val PREFETCH_DISTANCE = 20
}
private val jobDataSourceFactory: JobDataSourceFactory = JobDataSourceFactory(jobService, compositeDisposable)
var jobList: LiveData<PagedList<Job>>
init {
val config = PagedList.Config.Builder()
.setPageSize(PAGE_SIZE)
.setInitialLoadSizeHint(PAGE_SIZE)
.setPrefetchDistance(PREFETCH_DISTANCE)
.setEnablePlaceholders(false)
.build()
jobList = LivePagedListBuilder(jobDataSourceFactory, config).build()
}
fun getPaginationState(): LiveData<Event<NetworkState>> = Transformations.switchMap<JobDataSource, Event<NetworkState>>(
jobDataSourceFactory.jobDataSourceLiveData,
JobDataSource::paginationState
)
fun getInitialLoadingState(): LiveData<Event<NetworkState>> = Transformations.switchMap<JobDataSource, Event<NetworkState>>(
jobDataSourceFactory.jobDataSourceLiveData,
JobDataSource::initialLoadingState
)
fun getTotalJob(): LiveData<Event<Int>> = Transformations.switchMap<JobDataSource, Event<Int>>(
jobDataSourceFactory.jobDataSourceLiveData,
JobDataSource::totalJob
)
}
JobBoardFragment.kt
class JobBoardFragment : BaseFragment() {
private val viewModel: JobBoardViewModel by lazy {
getViewModel { JobBoardViewModel(ApiFactory.jobListApi) }
}
private val jobAdapter by lazy {
JobAdapter {
val bundle = Bundle()
bundle.putInt(CLICKED_JOB_ID, it.jobId!!)
navigateTo(R.id.jobBoard_to_jobView, R.id.home_navigation_fragment, bundle)
}
}
override fun getLayoutResId() = R.layout.fragment_job_board
override fun initWidget() {
job_list_recycler_view.adapter = jobAdapter
back_to_main_image_view.setOnClickListener { onBackPressed() }
}
override fun observeLiveData() {
with(viewModel) {
jobList.observe(this@JobBoardFragment, Observer {
jobAdapter.submitList(it)
})
getInitialLoadingState().observe(this@JobBoardFragment, Observer {
it.getContentIfNotHandled()?.let { state ->
when (state) {
is Progress -> {
if (state == loading(true)) {
network_loading_indicator.visible()
} else {
network_loading_indicator.visibilityGone()
}
}
is Failure -> context?.showToast(state.errorMessage.toString())
}
}
})
getPaginationState().observe(this@JobBoardFragment, Observer {
it.getContentIfNotHandled()?.let { state ->
when (state) {
is Progress -> {
if (state == loading(true)) {
pagination_loading_indicator.visible()
} else {
pagination_loading_indicator.visibilityGone()
}
}
is Failure -> context?.showToast(state.errorMessage.toString())
}
}
})
getTotalJob().observe(this@JobBoardFragment, Observer {
it.getContentIfNotHandled()?.let { state ->
job_board_text_view.visible()
with(profile_completed_image_view) {
visible()
text = state.toString()
}
}
})
}
}
}
But the problem is that if data fetching fails, due to internet connectivity or any other server-related problem, the loading indicator does not become invisible; it keeps showing even though I set the loading status to false, and the error message is shown. It means .doOnTerminate { initialLoadingState.postValue(Event(loading(false))) } is not called if an error occurs. This is the first problem. Another problem is that loadInitial() and loadAfter() are called simultaneously on the first call, but I want only loadInitial() to be called at the beginning; loadAfter() should be called after scrolling.
Try replacing all your LiveData postValue() calls with setValue(), or simply .value = (e.g. initialLoadingState.value = Event(loading(false))).
The problem is that the postValue() method is for updating the value from a background thread so that observers on the main thread see it. In this case you are always changing the values from the main thread itself, so you should use .value =.
Hope it's not too late.

Scala Chat Application, separate threads for local IO and socket IO

I'm writing a chat application in Scala. The problem is with the clients: a client reads from StdIn (which blocks) before sending the data to the echo server, so if multiple clients are connected they don't receive data from the server until the read from StdIn has completed. I'm thinking that local IO (reading from StdIn) and socket IO (reading/writing the socket) should be on separate threads, but I can't think of a way to do this. Below is the Client singleton code:
import java.net._
import scala.io._
import java.io._
import java.security._
object Client {
var msgAcc = ""
def main(args: Array[String]): Unit = {
val conn = new ClientConnection(InetAddress.getByName(args(0)), args(1).toInt)
val server = conn.connect()
println("Enter a username")
val user = new User(StdIn.readLine())
println("Welcome to the chat " + user.username)
sys.addShutdownHook(this.shutdown(conn, server))
while (true) {
val txMsg = StdIn.readLine()//should be on a separate thread?
if (txMsg != null) {
conn.sendMsg(server, user, txMsg)
val rxMsg = conn.getMsg(server)
val parser = new JsonParser(rxMsg)
val formattedMsg = parser.formatMsg(parser.toJson())
println(formattedMsg)
msgAcc = msgAcc + formattedMsg + "\n"
}
}
}
def shutdown(conn: ClientConnection, server: Socket): Unit = {
conn.close(server)
val fileWriter = new BufferedWriter(new FileWriter(new File("history.txt"), true))
fileWriter.write(msgAcc)
fileWriter.close()
println("Leaving chat, thanks for using")
}
}
Below is the ClientConnection class used in conjunction with the Client singleton:
import javax.net.ssl.SSLSocket
import javax.net.ssl.SSLSocketFactory
import javax.net.SocketFactory
import java.net.Socket
import java.net.InetAddress
import java.net.InetSocketAddress
import java.security._
import java.io._
import scala.io._
import java.util.GregorianCalendar
import java.util.Calendar
import java.util.Date
import com.sun.net.ssl.internal.ssl.Provider
import scala.util.parsing.json._
class ClientConnection(host: InetAddress, port: Int) {
def connect(): Socket = {
Security.addProvider(new Provider())
val sslFactory = SSLSocketFactory.getDefault()
val sslSocket = sslFactory.createSocket(host, port).asInstanceOf[SSLSocket]
sslSocket
}
def getMsg(server: Socket): String = new BufferedSource(server.getInputStream()).getLines().next()
def sendMsg(server: Socket, user: User, msg: String): Unit = {
val out = new PrintStream(server.getOutputStream())
out.println(this.toMinifiedJson(user.username, msg))
out.flush()
}
private def toMinifiedJson(user: String, msg: String): String = {
s"""{"time":"${this.getTime()}","username":"$user","msg":"$msg"}"""
}
private def getTime(): String = {
val cal = Calendar.getInstance()
cal.setTime(new Date())
"(" + cal.get(Calendar.HOUR_OF_DAY) + ":" + cal.get(Calendar.MINUTE) + ":" + cal.get(Calendar.SECOND) + ")"
}
def close(server: Socket): Unit = server.close()
}
You can add concurrency by using Scala Akka actors. As of this writing the current Scala version is 2.11.8. See the Actor documentation here:
http://docs.scala-lang.org/overviews/core/actors.html
This chat example is old but demonstrates a technique for handling on the order of a million simultaneous clients using actors:
http://doc.akka.io/docs/akka/1.3.1/scala/tutorial-chat-server.html
Finally, you can also Google the Twitter Finagle project, which uses Scala and provides servers with concurrency. A lot of work to learn, I think...
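Even without an actor framework, the two blocking loops can simply live on separate JVM threads. Here is a minimal sketch, not a drop-in implementation, assuming the conn, server, user and JsonParser values from the question's Client:
val receiver = new Thread(new Runnable {
  def run(): Unit = {
    while (true) {
      val rxMsg = conn.getMsg(server) // blocks on the socket only
      val parser = new JsonParser(rxMsg)
      println(parser.formatMsg(parser.toJson()))
    }
  }
})
receiver.setDaemon(true) // let the JVM exit even while this thread is blocked
receiver.start()
while (true) {
  val txMsg = StdIn.readLine() // blocks on the console only
  if (txMsg != null) conn.sendMsg(server, user, txMsg)
}
This keeps the console loop responsive while incoming messages are printed as soon as they arrive; anything shared between the two loops (such as msgAcc) would need to be synchronized.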

How to make the program pause while an actor is running

For example
import scala.actors.Actor
import scala.actors.Actor._
object Main {
class Pong extends Actor {
def act() {
var pongCount = 0
while (true) {
receive {
case "Ping" =>
if (pongCount % 1000 == 0)
Console.println("Pong: ping "+pongCount)
sender ! "Pong"
pongCount = pongCount + 1
case "Stop" =>
Console.println("Pong: stop")
exit()
}
}
}
}
class Ping(count: Int, pong: Actor) extends Actor {
def act() {
var pingsLeft = count - 1
pong ! "Ping"
while (true) {
receive {
case "Pong" =>
if (pingsLeft % 1000 == 0)
Console.println("Ping: pong")
if (pingsLeft > 0) {
pong ! "Ping"
pingsLeft -= 1
} else {
Console.println("Ping: stop")
pong ! "Stop"
exit()
}
}
}
}
}
def main(args: Array[String]): Unit = {
val pong = new Pong
val ping = new Ping(100000, pong)
ping.start
pong.start
println("???")
}
}
I try to print "???" after the two actors call exit(), but currently it is printed before "Ping: stop" and "Pong: stop".
I have tried keeping a flag in the actor: the flag is false while the actor is running and true when it stops, and in the main function there is a while loop such as while (actor.flag == false) {}, but it doesn't work; it is an endless loop :-/
So, please give me some advice.
If you need synchronous calls in akka, use the ask pattern, like
Await.result(ping ? "ping", timeout.duration)
Also, you'd better use an actor system to create actors.
import akka.actor.{ActorRef, Props, Actor, ActorSystem}
import akka.pattern.ask
import akka.util.Timeout
import scala.concurrent.Await
import scala.concurrent.duration._
import scala.concurrent.ExecutionContext.Implicits.global
object Test extends App {
implicit val timeout = Timeout(3.seconds)
val system = ActorSystem("ActorSystem")
class Pong extends Actor {
def receive: Receive = {
case "Ping" =>
println("ping")
context.stop(self)
}
}
lazy val pong = system.actorOf(Props(new Pong), "Pong")
val x = pong.ask("Ping")
val res = Await.result(x, timeout.duration)
println("????")
system.shutdown()
}
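If the goal is just to block until an actor has actually stopped, akka.pattern.gracefulStop is another option. A sketch, assuming the pong actor and imports from the example above, run before system.shutdown():
import akka.pattern.gracefulStop
// gracefulStop sends PoisonPill by default and returns a Future that
// completes once the target actor has terminated.
val stopped = gracefulStop(pong, 5.seconds)
Await.result(stopped, 6.seconds)
println("????") // printed only after Pong has stopped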

spark-streaming and connection pool implementation

The spark-streaming website at https://spark.apache.org/docs/latest/streaming-programming-guide.html#output-operations-on-dstreams mentions the following code:
dstream.foreachRDD { rdd =>
rdd.foreachPartition { partitionOfRecords =>
// ConnectionPool is a static, lazily initialized pool of connections
val connection = ConnectionPool.getConnection()
partitionOfRecords.foreach(record => connection.send(record))
ConnectionPool.returnConnection(connection) // return to the pool for future reuse
}
}
I have tried to implement this using org.apache.commons.pool2 but running the application fails with the expected java.io.NotSerializableException:
15/05/26 08:06:21 ERROR OneForOneStrategy: org.apache.commons.pool2.impl.GenericObjectPool
java.io.NotSerializableException: org.apache.commons.pool2.impl.GenericObjectPool
at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1184)
...
I am wondering how realistic it is to implement a connection pool that is serializable. Has anyone succeeded in doing this?
Thank you.
To address this "local resource" problem, what's needed is a singleton object, i.e. an object that's guaranteed to be instantiated once and only once per JVM. Luckily, a Scala object provides this functionality out of the box.
The second thing to consider is that this singleton will provide a service to all tasks running on the same JVM where it's hosted, so it MUST take care of concurrency and resource management.
Let's try to sketch(*) such a service:
class ManagedSocket(private val pool: ObjectPool[Socket], val socket: Socket) {
  def release() = pool.returnObject(socket)
}
// singleton object
object SocketPool {
  var hostPortPool: Map[(String, Int), ObjectPool[Socket]] = Map()
  sys.addShutdownHook {
    hostPortPool.values.foreach(_.close()) // terminate each pool
  }
  // factory method
  def apply(host: String, port: Int): ManagedSocket = {
    val pool = hostPortPool.getOrElse((host, port), {
      val p = ??? // create new pool for (host, port)
      hostPortPool += (host, port) -> p
      p
    })
    new ManagedSocket(pool, pool.borrowObject)
  }
}
Then usage becomes:
val host = ???
val port = ???
stream.foreachRDD { rdd =>
rdd.foreachPartition { partition =>
val mSocket = SocketPool(host, port)
partition.foreach{elem =>
val os = mSocket.socket.getOutputStream()
// do stuff with os + elem
}
mSocket.release()
}
}
I'm assuming that the GenericObjectPool used in the question takes care of concurrency. Otherwise, access to each pool instance needs to be guarded with some form of synchronization.
(*) Code provided to illustrate the idea of how to design such an object; it needs additional effort to be converted into a working version.
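For example, the mutable Map in SocketPool could be replaced with a java.util.concurrent.ConcurrentHashMap so that concurrent partition tasks don't race to create the same pool. A sketch of that variant, still leaving the actual pool creation as ???:
import java.net.Socket
import java.util.concurrent.ConcurrentHashMap
import java.util.function.{Function => JFunction}
import org.apache.commons.pool2.ObjectPool
object SocketPool {
  private val hostPortPool = new ConcurrentHashMap[(String, Int), ObjectPool[Socket]]()
  def apply(host: String, port: Int): ManagedSocket = {
    // computeIfAbsent creates the pool at most once per (host, port) key,
    // even when several partition tasks call apply() concurrently.
    val pool = hostPortPool.computeIfAbsent((host, port),
      new JFunction[(String, Int), ObjectPool[Socket]] {
        def apply(key: (String, Int)): ObjectPool[Socket] = ??? // create new pool
      })
    new ManagedSocket(pool, pool.borrowObject)
  }
}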
Below answer is wrong!
I'm leaving the answer here for reference, but it is wrong for the following reason: socketPool is declared as a lazy val, so it will get instantiated with each first request for access. Since the SocketPool case class is not Serializable, this means it will get instantiated within each partition, which makes the connection pool useless, because we want to keep connections across partitions and RDDs. It makes no difference whether this is implemented as a companion object or as a case class. Bottom line: the connection pool must be Serializable, and Apache Commons Pool is not.
import java.io.PrintStream
import java.net.Socket
import org.apache.commons.pool2.{PooledObject, BasePooledObjectFactory}
import org.apache.commons.pool2.impl.{DefaultPooledObject, GenericObjectPool}
import org.apache.spark.streaming.dstream.DStream
/**
* Publish a Spark stream to a socket.
*/
class PooledSocketStreamPublisher[T](host: String, port: Int)
extends Serializable {
lazy val socketPool = SocketPool(host, port)
/**
* Publish the stream to a socket.
*/
def publishStream(stream: DStream[T], callback: (T) => String) = {
stream.foreachRDD { rdd =>
rdd.foreachPartition { partition =>
val socket = socketPool.getSocket
val out = new PrintStream(socket.getOutputStream)
partition.foreach { event =>
val text : String = callback(event)
out.println(text)
out.flush()
}
out.close()
socketPool.returnSocket(socket)
}
}
}
}
class SocketFactory(host: String, port: Int) extends BasePooledObjectFactory[Socket] {
def create(): Socket = {
new Socket(host, port)
}
def wrap(socket: Socket): PooledObject[Socket] = {
new DefaultPooledObject[Socket](socket)
}
}
case class SocketPool(host: String, port: Int) {
val socketPool = new GenericObjectPool[Socket](new SocketFactory(host, port))
def getSocket: Socket = {
socketPool.borrowObject
}
def returnSocket(socket: Socket) = {
socketPool.returnObject(socket)
}
}
which you can invoke as follows:
val socketStreamPublisher = new PooledSocketStreamPublisher[MyEvent](host = "10.10.30.101", port = 29009)
socketStreamPublisher.publishStream(myEventStream, (e: MyEvent) => Json.stringify(Json.toJson(e)))