Do we really need to import Corda's code for RPC? What about in the future?

I know that Corda is in the process of removing its web server module and in the documentation they suggest the use of other frameworks.
In one example ("spring-observable-stream") they use Spring Boot for the server-side APIs and use an RPC call to the actual running Corda node. That's fine and comparable with what I have to do.
In that example, the author imports Corda's specific RPC code, along with the code for the actual flows (and states) needed.
What I want to ask here is whether it's possible to avoid that tangle and keep the web server APIs independent from the actual Corda/CorDapp code by using a general RPC library (any advice?).
If, instead, I must import the Corda-specific code (is there a reason?), I'd like to ask you:
What is the minimum necessary to do so from a Gradle perspective?
Is it possible to implement some sort of plugin on the CorDapp to reduce that tangle?
To be honest, I'm interested in a more general way of interacting with the CorDapp (e.g. from Python), but I know that because the AMQP integration is not yet ready, we have to stay on the JVM for now. So feel free to answer just about what we need to do as of today from Kotlin (which I have to use for a short-term PoC)…
Thank you in advance!

Currently, your server has to depend on the Corda RPC library to interact with nodes via RPC. Corda doesn't yet expose an independent format for sending and receiving messages via RPC.
Your server also needs to depend on any CorDapps that contain flows that the server will start via RPC, or that contain types that will be returned via RPC. Otherwise, your server will not be able to start the flows or deserialise the returned types.
If you're using Gradle, here's what a minimal dependencies block might look like:
dependencies {
    compile "org.jetbrains.kotlin:kotlin-stdlib-jre8:$kotlin_version"
    cordaCompile "net.corda:corda-rpc:$corda_release_version"
    compile "com.github.corda:cordapp-example:release-V1-SNAPSHOT"
}
Here, we're depending on the corda-rpc library, and also using JitPack to depend on the CorDapp where we define the flows and states that we want to start/return via RPC.
If you want, you can modularise the CorDapp so that all the classes you need to depend on for RPC are included in a separate module, and only depend on that module.
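For example, assuming the CorDapp repo were split into Gradle modules (the module name below is illustrative, not the example project's actual layout), the server could then pull in just the RPC-facing module instead of the whole CorDapp:

```groovy
dependencies {
    cordaCompile "net.corda:corda-rpc:$corda_release_version"
    // Only the module holding the flows and states that cross the RPC
    // boundary -- "rpc-contract" is a hypothetical module name:
    compile "com.github.corda.cordapp-example:rpc-contract:release-V1-SNAPSHOT"
}
```

This keeps node-only code (services, internal flows) out of the server's classpath while still letting it start flows and deserialise returned states.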

Related

NestJS Microservice vs Standalone app - which approach is better for working with AMAZON SQS?

So we have decided to use NestJS to build our web-app with, and we have this ongoing argument about whether we should use a Microservice or a Standalone app to implement our queue-interactions module with.
A bit of background - we use Amazon SQS as our queue provider, and we use the
bbc/sqs-consumer package for handling the connection.
Now one approach is to use a microservice, in a similar fashion to what is done here: https://github.com/algoan/nestjs-components/tree/master/packages/google-pubsub-microservice
I believe the implications are pretty clear, and it seems as if the NestJS documentation really pushes you towards microservices here, if only because all the built-in implementations are for queue/pub-sub services (RabbitMQ, Kafka, Redis...).
On the other hand, you can choose to use a standalone app, which I feel is basically a microservice but without controllers.
Since we opted to use a 3rd-party package to handle the actual transport and all the technical details, this feels more appropriate in a way. We don't actually need to send the messages from the messageHandler to some controller and then process them there, if we can process them directly from the messageHandler, no controllers involved.
Personally, it seems to me that if we don't want to go into the details of the transport implementation (i.e. use the sqs-consumer package for it), then the microservice approach, while it works perfectly, is overkill. A standalone app feels like it would give us the benefit of separating the "main" and the "queues" processes, while keeping the implementation as simple as possible.
Conversely, using a Microservice feels more natural to others. The way to think about it is that it doesn't matter whether we choose to implement transport ourselves or use some package, the semantic meaning is the same in the way that we have some messages coming into our app from outside, thus using a custom transport Microservice really is the most appropriate solution.
What do you guys think about it?
Would you use the Microservice or the standalone approach?
And in general, when would you choose Microservice over a Standalone app and vice-versa?
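To make the contrast concrete, here is a minimal sketch of the standalone-app shape in plain TypeScript (no NestJS or sqs-consumer imports; `QueueConsumer`, `OrderService`, and all other names are illustrative stand-ins): the message handler is wired straight to the consumer, with no controller or transport strategy in between.

```typescript
type Message = { body: string };

// Stand-in for bbc/sqs-consumer: the real package would poll SQS and
// invoke handleMessage per received message; here we just feed messages in.
class QueueConsumer {
  constructor(private handleMessage: (msg: Message) => void) {}
  push(msg: Message): void {
    this.handleMessage(msg);
  }
}

// In a standalone app this is a plain provider: messages go straight
// from the handler into domain logic, no controller layer involved.
class OrderService {
  public processed: string[] = [];
  handle(msg: Message): void {
    this.processed.push(msg.body.toUpperCase());
  }
}

const service = new OrderService();
const consumer = new QueueConsumer((msg) => service.handle(msg));
consumer.push({ body: "order-1" });
consumer.push({ body: "order-2" });
```

With NestJS, this wiring would live in a standalone application context created via `NestFactory.createApplicationContext(AppModule)`, whereas the microservice variant would add a custom transport strategy and route each message through a controller method.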

Replaying RPC calls for testing purposes

We are using a 3rd-party library (Google Spanner) that uses gRPC in a Node application. One of the pain points we have is the ability to easily mock responses from this library for testing purposes.
If anyone has had similar issues, were you able to solve them? I was thinking of a tool that could record/replay RPC calls (there are many great libraries for recording/replaying HTTP calls) but couldn't find anything similar for gRPC. I came across Google's rpcreplay (https://github.com/GoogleCloudPlatform/google-cloud-go/tree/master/rpcreplay), but to my understanding it's intended to be used in Go applications.
At Traffic Parrot we have been working on a solution to your problem in our service virtualization tool, which includes a user interface for defining the mock behaviour.
We have recently added a tutorial on how to mock gRPC responses over the wire given a proto file.
You can also find information on how to record and replay over the wire in the documentation.
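Absent an off-the-shelf recorder for Node gRPC, one workaround is a thin record/replay wrapper around the client object itself, the same cassette idea HTTP recorders use. The sketch below is plain TypeScript with no gRPC dependency, and the `Recording` shape is an assumption of mine, not any library's format: in record mode calls pass through to the real client and are captured; in replay mode captured results are served by (method, args) lookup and the wire is never hit.

```typescript
type Recording = { method: string; args: string; result: unknown };

// Wrap any client whose methods return promises. In "record" mode calls
// go through to the real client and land on the tape; in "replay" mode
// the tape answers instead of the network.
function recordReplay<T extends object>(
  client: T,
  tape: Recording[],
  mode: "record" | "replay",
): T {
  return new Proxy(client, {
    get(target, prop) {
      const real = (target as Record<string | symbol, unknown>)[prop];
      if (typeof real !== "function") return real;
      const method = String(prop);
      return async (...args: unknown[]) => {
        const key = JSON.stringify(args);
        if (mode === "replay") {
          const hit = tape.find((r) => r.method === method && r.args === key);
          if (!hit) throw new Error(`no recording for ${method}(${key})`);
          return hit.result;
        }
        const result = await (real as Function).apply(target, args);
        tape.push({ method, args: key, result });
        return result;
      };
    },
  });
}
```

A real setup would serialise the tape to a JSON file between runs, which requires results to be JSON-safe; gRPC responses generally are once converted to plain objects.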

Has anyone used an RPC framework inside libevent?

I have a multi-server, multi-client application and I would like to keep some common data managed by a single daemon (to avoid a nightmare of concurrency), so the servers can just ask it when they need to manipulate the shared data.
I am already using libevent in the servers, so I would like to stick with it and use its RPC framework, but I could not find an example of it being used in the real world.
Google Protobuf provides an RPC framework, and it is also used inside Google for RPC and many other things.
Protobuf is a library for data exchange: it handles data serialization, deserialization, compression, and so on. It was created and open-sourced by Google.
However, Google didn't open-source its RPC implementation; Protobuf only provides the service/stub framework, and you supply the transport.
You can integrate Protobuf with your existing libevent program. I have personally implemented an RPC layer with Protobuf and libev (a project similar to libevent), and they work fine.

How do I use the Asterisk Audiohooks API?

I have a VoIP application I'd like to implement that requires me to process the audio from a call in real time, during the call.
Currently I'm using Asterisk to handle my calls, and it looks like there's built-in functionality called Audiohooks which is designed to let me access the audio stream and process it from the dialplan.
However, I cannot find any documentation whatsoever on how to actually create my own audiohook, nor any recent examples of how it should be done. Are there resources that show how I could use this?
Thanks
That API is available when you write C/C++ modules for Asterisk. There is no external API.
For examples, you can check MixMonitor, func_volume, app_conference and other similar applications already developed.
Hint: after the work is done, you have to test for memory leaks and for high/concurrent load. The code must be thread-safe.

Getting notifications on database changes: is it possible to watch entries in riak?

I'm looking for an efficient way to subscribe to events in Riak from Node. I would like to be notified of changes to an entry in Riak.
For example, when one node.js server updates an entry, another server using and watching that entry receives the updated entry, or a notification about its update, automatically.
If this is impossible, is there a messaging system that can be used efficiently across node.js servers?
Riak implements what are called pre- and post-commit hooks. Post-commit hooks, which are triggered when a write successfully occurs (and are presumably what you want), can only be written in Erlang, and Riak needs to be configured to trigger your custom Erlang function as a property on the appropriate bucket.
Depending on your needs and the scale of your application, there are several options for your Erlang setup to notify your Node.js server(s). It would be relatively easy to write an Erlang function that sends an HTTP request to your Node.js server, but that carries quite a lot of overhead that may very well be inappropriate for your application. Better, though slightly more complicated, would be to use a pub/sub system like those offered by Redis or ZeroMQ (just to name a couple), which are battle-tested and proven to perform very well under heavy load. If you want to go with ZeroMQ, see this guide on how to implement very reliable pub/sub.
Both of these messaging tools, as well as many others, can notify your Node.js instance of updates to watched entries from either Riak or the Node.js instance that's effectively modifying the data. The second option (Node.js to Node.js) might be simpler since it wouldn't require you to learn Erlang if you're not familiar with it. Both of these tools have node.js libraries that are very well-tested:
Zeromq.node
redis-node
And if you were to use them to send out notifications from within Riak as post-commit hooks, here are the corresponding erlang drivers:
Erlzmq2
Eredis
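The Node.js-to-Node.js variant can be sketched with Node's built-in EventEmitter standing in for the Redis/ZeroMQ channel (a single-process sketch; every name here is illustrative, and in production the emitter would be replaced by a redis or zeromq pub/sub client so the notifications span processes): the writer publishes the updated entry on a per-key channel right after a successful Riak write, and watchers subscribe to that channel.

```typescript
import { EventEmitter } from "node:events";

type Entry = { key: string; value: string };

// Stand-in for the Redis/ZeroMQ channel; across real processes each
// server would hold its own pub/sub client connected to the same broker.
const bus = new EventEmitter();

// Writer side: right after a successful Riak put, publish the new entry
// on a per-key channel (the Map stands in for the Riak bucket).
function putAndNotify(bucket: Map<string, string>, entry: Entry): void {
  bucket.set(entry.key, entry.value);
  bus.emit(`riak:update:${entry.key}`, entry);
}

// Watcher side: any server interested in a key subscribes to its channel.
function watch(key: string, onUpdate: (e: Entry) => void): void {
  bus.on(`riak:update:${key}`, onUpdate);
}

const bucket = new Map<string, string>();
const seen: Entry[] = [];
watch("user:42", (e) => seen.push(e));
putAndNotify(bucket, { key: "user:42", value: "updated" });
```

The same shape works whether the publish happens in the Node.js process that wrote the data or in a Riak post-commit hook; only the publisher side moves.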
