How to implement something similar to Storm DRPC in Samza?

I have a Samza job with a number of tasks, each of which holds some state in its embedded store. I want to expose this store for reading to the outside world via some kind of RPC mechanism. What would be the best solution for this?
Here is one paragraph from the Samza documentation about it:
Samza does not currently have an equivalent API to DRPC,
but you can build it yourself using Samza’s stream
processing primitives.
The only solution that comes to my mind is to have my tasks, in addition to their normal processing, consume request messages carrying correlation IDs from a special request topic, and put response messages with the same correlation IDs onto a special response topic. So it's an RPC-over-Kafka solution, which seems suboptimal to me.
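To make the idea concrete, a task handling such a request topic could look roughly like the minimal sketch below, using Samza's low-level task API (the topic names, the store name, and the String key/value types are all assumptions):

import org.apache.samza.config.Config;
import org.apache.samza.storage.kv.KeyValueStore;
import org.apache.samza.system.IncomingMessageEnvelope;
import org.apache.samza.system.OutgoingMessageEnvelope;
import org.apache.samza.system.SystemStream;
import org.apache.samza.task.InitableTask;
import org.apache.samza.task.MessageCollector;
import org.apache.samza.task.StreamTask;
import org.apache.samza.task.TaskContext;
import org.apache.samza.task.TaskCoordinator;

public class StoreQueryTask implements StreamTask, InitableTask {
    private static final SystemStream RESPONSES = new SystemStream("kafka", "store-responses");
    private KeyValueStore<String, String> store;

    @Override
    @SuppressWarnings("unchecked")
    public void init(Config config, TaskContext context) {
        store = (KeyValueStore<String, String>) context.getStore("my-store");
    }

    @Override
    public void process(IncomingMessageEnvelope envelope, MessageCollector collector,
                        TaskCoordinator coordinator) {
        // The request topic carries the correlation ID as the key and the
        // store key to look up as the message body.
        String correlationId = (String) envelope.getKey();
        String requestedKey = (String) envelope.getMessage();
        String value = store.get(requestedKey); // may be null if absent
        // Echo the correlation ID so the caller can match the response.
        collector.send(new OutgoingMessageEnvelope(RESPONSES, correlationId, value));
    }
}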
Any thoughts are welcome!

As far as I remember, the embedded store is backed up to a Kafka changelog topic: when you set something in the store, a message is produced to that topic. You can therefore consume this topic and "clone" the embedded store into a different database, and then query that database. Or you could just use the database instead of the embedded store, but this approach could lead to performance issues in your Samza job...
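A hedged sketch of that mirroring idea with the plain Kafka consumer API (the changelog topic name, the serdes, and the externalDbUpsert helper are assumptions):

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ChangelogMirror {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "store-mirror");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("my-store-changelog"));
            while (true) {
                for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofSeconds(1))) {
                    // Upsert each store write into the external, queryable copy.
                    externalDbUpsert(record.key(), record.value());
                }
            }
        }
    }

    private static void externalDbUpsert(String key, String value) {
        // Hypothetical helper: write to the database of your choice.
    }
}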

Related

Is it possible to make a Poller (or PollableMessageSource) to poll messages as List?

Following the example found in GitHub https://github.com/spring-cloud/spring-cloud-gcp/tree/master/spring-cloud-gcp-samples/spring-cloud-gcp-pubsub-polling-binder-sample regarding polling messages from a PubSub subscription, I was wondering...
Is it possible to make a PollableMessageSource retrieve List<Message<?>> instead of a single message per poll?
I've seen the @Poller annotation only being used in Source-typed objects, never in Processor or Sink. Is it possible to use it in such a context, for example with @StreamListener or with a functional approach?
The PollableMessageSource binding and Source stream applications are fully based on the Poller and MessageSource abstractions from Spring Integration, whose contract is to produce a single message to the configured channel. The point of messaging is really to process a single message without affecting the others: a failure for one message doesn't fail the rest of the flow.
On the other hand, you probably mean for the GCP Pub/Sub messages to be produced as a list in the Spring message payload. That is indeed possible, but it requires some custom code in the Pub/Sub consumer and MessageSource implementation. Still, I would think twice before expecting batches from the source: you could instead utilize an aggregator to build small windows if your downstream logic is about processing a list. But again: it is going to be a single Spring message.
It may be better to start thinking about a reactive function implementation, where you can indeed expect a Flux<Message<?>> as input, and the Spring Cloud Stream framework will take care of emitting the data from Pub/Sub into the reactive stream you expect.
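For illustration, a minimal sketch of such a reactive function (the function name, the String payload type, and the batch size are assumptions; it needs the spring-cloud-stream and reactor-core dependencies):

import java.util.function.Function;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.messaging.Message;
import reactor.core.publisher.Flux;

@Configuration
public class BatchingFlow {

    @Bean
    public Function<Flux<Message<String>>, Flux<String>> process() {
        // The framework feeds Pub/Sub messages into this Flux; buffer(10)
        // assembles small batches (List<String> of up to 10 payloads),
        // which the next step then processes as a unit.
        return flux -> flux
                .map(Message::getPayload)
                .buffer(10)
                .map(batch -> String.join(",", batch));
    }
}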
See more info in docs: https://docs.spring.io/spring-cloud-stream/docs/3.1.0/reference/html/spring-cloud-stream.html#_reactive_functions_support

How to control idempotency of messages in an event-driven architecture?

I'm working on a project where DynamoDB is being used as database and every use case of the application is triggered by a message published after an item has been created/updated in DB. Currently the code follows this approach:
repository.save(entity);
messagePublisher.publish(event);
Udi Dahan has a video called Reliable Messaging Without Distributed Transactions where he talks about a solution for situations where a system can fail right after saving to the DB but before publishing the message, since messages are not part of a transaction. But in his solution I think he assumes a SQL database, as the process involves saving, as part of the transaction, the correlation ID of the message being processed, the entity modification, and the messages that are to be published. With a NoSQL DB I cannot think of a clean way to store the information about these messages.
A solution would be to use DynamoDB Streams and subscribe to the published events, using either a Lambda or another service to transform them into domain-specific events. My problem with this is that I wouldn't be able to send the messages from the domain logic; the logic would be spread across the service processing the message and the Lambda/service reacting to changes, and the solution would be platform-specific.
Is there any other way to handle this?
I can't offer a specific solution based on DynamoDB, since I have never used that engine. But I've built an event-driven system on top of MongoDB, so I can share my learnings, which you might find useful for your case.
You can have different approaches:
1) Based on an event-sourcing approach, you can just save the events/messages your use case produces within a transaction. In Mongo, when you are just inserting/appending new items to the same collection, you can ensure atomicity. Anyway, even if the engine does not provide that capability, the write operation is so centralized that you reduce the possibility of an error to a minimum.
Once all the events are stored, you can then consume them and project them to a given state and then persist the updated state in another transaction.
Here you have to deal with eventual consistency as data will be stale in your read model until you have projected the events.
2) Another approach is applying the Unit of Work pattern, where you cache all the write operations (insert/update/delete) needed to save both the events and the state. Once your use case finishes, you execute all the cached operations against the database (flush). This way, although the operations are not atomic, you are again centralizing them enough to minimize errors.
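A minimal, engine-agnostic sketch of that Unit of Work idea (all names are illustrative assumptions): write operations are cached in memory and flushed together at the end of the use case, so even without a real transaction the writes happen in one central place.

import java.util.ArrayList;
import java.util.List;

public class UnitOfWork {
    private final List<Runnable> pendingWrites = new ArrayList<>();

    // Register a deferred write (e.g. saving an entity or an event).
    public void register(Runnable write) {
        pendingWrites.add(write);
    }

    // Execute all cached writes back to back; a failure now occurs in one
    // small, central place instead of being scattered through the use case.
    public void flush() {
        for (Runnable write : pendingWrites) {
            write.run();
        }
        pendingWrites.clear();
    }
}

A use case would then call unitOfWork.register(() -> repository.save(entity)) and unitOfWork.register(() -> eventStore.save(event)), and call flush() once at the end.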
Of course the best option is to use an ACID database if you require that capability; any other approach will be a workaround that gets you close to it.
About publishing the events: I don't know if you mean they are published to a messaging transport such as RabbitMQ, Kafka, etc., but that should be a background process in which you fetch the events from the DB and publish them, so that you avoid needing a two-phase commit between the database and the broker.
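A rough sketch of such a background publisher (EventStore and Broker are hypothetical interfaces standing in for your database access and message transport): the use case only writes events to the DB, and this process later fetches and publishes them, so the DB write and the publish are decoupled.

import java.util.List;

interface EventStore {
    List<String> fetchUnpublished();   // events saved but not yet published
    void markPublished(String event);
}

interface Broker {
    void publish(String event);        // e.g. a RabbitMQ or Kafka producer
}

public class OutboxPublisher implements Runnable {
    private final EventStore store;
    private final Broker broker;

    public OutboxPublisher(EventStore store, Broker broker) {
        this.store = store;
        this.broker = broker;
    }

    @Override
    public void run() {
        // Schedule periodically, e.g. with a ScheduledExecutorService.
        for (String event : store.fetchUnpublished()) {
            broker.publish(event);
            store.markPublished(event);
        }
    }
}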

Store messages of a messaging application

Actually, I am developing a messaging app and use Cassandra as the database and Kafka as the message broker.
My question is:
Do I need to store the messages between users in Cassandra? If I do so, the size of my database will grow really fast.
As I am using a message queue, the messages are stored only as long as they have not been delivered. I have heard that messaging apps (such as Facebook Messenger or WhatsApp) do not store the message content between users in a database, but only use a queuing protocol (XMPP, MQTT) which deletes messages as soon as they are delivered. So there would be no need to store them in an external database. Am I right?
What is best practice? Besides, do I need to store the message content for legal reasons (government regulations or the like) for a period of time (for example, 2 years)?
Looking at http://www.planetcassandra.org/apache-cassandra-use-cases/, there are a lot of messaging apps using Cassandra as a database backend. However, it is an antipattern to use Cassandra as a message queue (see the Cassandra docs).
Using Cassandra as a queue is clearly an antipattern.
However, Cassandra is a good fit for storing messages; read my blog post on KillrChat, http://www.doanduyhai.com/blog/?p=1859, for a possible data model for message storage.
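To give a flavor of such a data model, here is a hedged sketch using the DataStax Java driver (the keyspace, table, and column names are assumptions loosely in the spirit of the KillrChat post, and the "chat" keyspace is assumed to exist): messages are partitioned by conversation and clustered by a timeuuid, so one conversation's history is a single, time-ordered partition.

import com.datastax.oss.driver.api.core.CqlSession;

public class MessageSchema {
    public static void main(String[] args) {
        try (CqlSession session = CqlSession.builder().build()) {
            session.execute(
                "CREATE TABLE IF NOT EXISTS chat.messages ("
                + "  conversation_id uuid,"
                + "  message_id timeuuid,"
                + "  sender text,"
                + "  body text,"
                + "  PRIMARY KEY ((conversation_id), message_id)"
                + ") WITH CLUSTERING ORDER BY (message_id DESC)");
        }
    }
}

Retention (for example a legal two-year window) could then be handled with Cassandra's TTL feature rather than by deleting rows manually.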

Handling failures in Thrift in general

I read through the official documentation and the official whitepaper, but I couldn't find a satisfying answer to how Thrift handles failures in the following scenario:
Say you have a client sending a method call to a server to insert an entry in some data structure residing on that server (it doesn't really matter what it is). Suppose the server has processed the call and inserted the entry, but the client couldn't receive the response due to a network failure. In such a case, how should the client handle this? Simply retrying the call could result in a duplicate entry being inserted. Does the Thrift library persist the response somewhere so that it can resend it to the client when the client is back online? Or is it the application's responsibility to do so?
Would appreciate it if someone could point out the details of how it works, besides directing to its source code.
The question is an interesting one, but it is by no means limited to Thrift. A better name would be
Handling failures in asynchronous or remote calls in general
because that's, in essence, what it is. Although in the specific case of an RPC-style API, such as a Thrift service, the client blocks and the call seems synchronous, it really isn't that way.
The whole problem can be rephrased to the more general question about
Designing robust distributed systems
So what is the main problem, that we have to deal with? We have to assume that every call we do may fail. In particular, it can fail in three ways:
request died
request sent, server processing successful, response died
request sent, server processing failed, response died
In some cases, this is not a big deal, regardless of the exact case we have. If the client just wants to retrieve some values, he can simply re-query and will get some results eventually if he tries often enough.
In other cases, especially when the client modifies data on the server, it may become more problematic. The general recommendation in such cases is to make the service calls idempotent, meaning: regardless of how often I make the same call, the end result is always the same. This can be achieved by various means and more or less depends on the use case.
For example, one method is to send some logical "ticket" value along with each request, to filter out duplicated or outdated requests on the server. The server keeps track of and/or checks these tickets before the processing starts. But again, whether that method suits your needs depends on your use case.
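A minimal sketch of that ticket idea (the class and method names are illustrative): the server remembers the result of each request ID, so a retried request returns the stored result instead of executing twice.

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

public class IdempotentExecutor {
    private final Map<String, Object> completed = new ConcurrentHashMap<>();

    // Run the operation at most once per request ID; duplicates receive the
    // cached result (assumed non-null). For real durability this map would
    // have to live on disk or in a database, not in memory.
    public Object execute(String requestId, Supplier<Object> operation) {
        return completed.computeIfAbsent(requestId, id -> operation.get());
    }
}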
The Command and Query Responsibility Segregation (CQRS) pattern is another approach to deal with the complexity. It basically breaks the API into setters and getters. I'd recommend looking into that topic, but it is not useful for every scenario. I'd also recommend the Data Consistency Primer article. Last but not least, the CAP theorem is always a good read.
Good service/API design is not simple, and the fact that we have to deal with a distributed, parallel system does not make it easier, quite the opposite.
Let me try to give a straight answer.
... is it the application's responsibility to do so?
Yes.
There are four types of exceptions involved in Thrift RPC: TTransportException, TProtocolException, TApplicationException, and user-defined exceptions.
Based on the book Programmer's Guide to Apache Thrift, the former two are local exceptions, while the latter two are not.
As the names imply, TTransportException includes exceptions like NOT_OPEN and TIMED_OUT, and TProtocolException includes INVALID_DATA, BAD_VERSION, etc. These exceptions are not propagated from the server to the client and act much like normal language exceptions.
TApplicationExceptions involve problems such as calling a method that isn’t implemented or failing to provide the necessary arguments to a method.
User-defined Exceptions are defined in IDL files and raised by the user code.
For all of these exceptions, no retry operations are done by the Thrift RPC framework itself. Instead, they should be handled properly by the application code.
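As a hedged illustration of what that application-level handling can look like on the client side (MyService is a hypothetical IDL-generated service; only the exception and transport types come from the Thrift library):

import org.apache.thrift.TApplicationException;
import org.apache.thrift.TException;
import org.apache.thrift.protocol.TBinaryProtocol;
import org.apache.thrift.transport.TSocket;
import org.apache.thrift.transport.TTransportException;

public class ThriftClientExample {
    public static void main(String[] args) throws TException {
        TSocket transport = new TSocket("localhost", 9090);
        transport.open();
        MyService.Client client = new MyService.Client(new TBinaryProtocol(transport));
        try {
            client.insertEntry("some-key", "some-value"); // hypothetical method
        } catch (TTransportException e) {
            // Network/local failure: the server may or may not have executed
            // the call, so only retry if insertEntry is idempotent (e.g. it
            // is keyed by a client-generated request ID).
        } catch (TApplicationException e) {
            // Server-side framework error: unknown method, missing result, ...
            // Retrying usually won't help; report it instead.
        } finally {
            transport.close();
        }
    }
}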

at-most-once and exactly-once

I am studying distributed systems, and when it comes to the RPC part, I have heard about these two semantics (at-most-once and exactly-once). I understand that at-most-once is used with databases, for instance, when we don't want duplicate execution.
First question:
How is this achieved? How does the server know that it shouldn't execute the request again? It might be a duplicate, but it might be a legitimate request as well.
The second question is:
What is the difference between the two semantics in the title? I can read :). I know that with at-most-once the request might not be executed at all, but what does exactly-once do to guarantee the execution?
Here is a pretty good explanation of the different types of messaging semantics for your second question:
At-most-once semantics: The easiest type of semantics to achieve, from an engineering complexity perspective, since it can be done in a fire-and-forget way. There's rarely any need for the components of the system to be stateful. While it's the easiest to achieve, at-most-once is also the least desirable type of messaging semantics. It provides no absolute message delivery guarantees since each message is delivered once (best case scenario) or not at all.
At-least-once semantics: This is an improvement on at-most-once semantics. There might be multiple attempts at delivering a message, so at least one attempt is successful. In other words, there's a chance messages may be duplicated, but they can't be lost. While not ideal as a system-wide characteristic, at-least-once semantics are good enough for use cases where duplication of data is of little concern or scenarios where deduplication is possible on the consumer side.
Exactly-once semantics: The ultimate message delivery guarantee and the optimal choice in terms of data integrity. As its name suggests, exactly-once semantics means that each message is delivered precisely once. The message can neither be lost nor delivered twice (or more times). Exactly-once is by far the most dependable message delivery guarantee. It’s also the hardest to achieve.
That's all part of this blog post about Exactly-once message processing (Disclosure: I work for Ably)
Hope this helps 😄
In at-most-once semantics, the request is sent again in case of failure, but the request is filtered on the server for duplicates.
In exactly-once semantics, the request is sent again and filtered for duplicates, and there is additionally a guarantee that the server will restart after a failure and resume processing requests from where it crashed.
But exactly-once is not realizable, because what happens when the client sends a request and the server crashes before the request reaches it? There is no way of tracking the request.
http://de.wikipedia.org/wiki/Remote_Procedure_Call#Fehlersemantik
To correct Hesper's answer:
Earlier, exactly-once RPC was not considered realizable, but a research paper in 2015 [1] showed that it is possible. Basically, the RIFL approach guarantees exactly-once execution of an RPC by durably recording its completion.
[1]: Lee, Collin, et al. "Implementing linearizability at large scale and low latency." Proceedings of the 25th Symposium on Operating Systems Principles. ACM, 2015
Bump: I'm studying this too and found the following, hope it helps (it helped me).
At-least-once versus at-most-once?
Let's take an example: acquiring a lock.
- If client and server stay up, the client receives the lock.
- If the client fails, it may have the lock or not (the server needs a plan!).
- If the server fails, the client may have the lock or not.
At-least-once: the client keeps trying.
At-most-once: the client will receive an exception.
What does a client do in the case of an exception?
- It needs to implement some application-specific protocol, e.g., ask the server: "do I have the lock?"
- The server needs to have a plan for remembering state across reboots, e.g., store locks on disk.
At-least-once (if we never give up):
- Clients keep trying; the server may run the procedure several times.
- The server must use application state to handle duplicates if requests are not idempotent.
- But it is difficult to make all requests idempotent; e.g., the server could store on disk who has the lock together with the request id, and check that table for each request.
- Even if the server fails and reboots, we get correct semantics.
What is right? It depends on where RPC is used.
- Simple applications: at-most-once is fine (more like procedure calls).
- More sophisticated applications: need an application-level plan in both cases; it's not clear at-most-once gives you a leg up.
=> Handling machine failures makes RPC different from procedure calls.
Quoted from Distributed Systems: Principles and Paradigms, 2nd edition.
For the first question, I believe that each request should have a unique ID attached to it. Therefore, even if the client sends two requests with the exact same command, the server is able to filter and distinguish them via the unique ID of the request.
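A small sketch of that idea from the client side (RemoteService and the retry count are illustrative assumptions): the same ID is reused across retries, so the server can recognize a resend as a duplicate rather than as a new legitimate request.

import java.util.UUID;

interface RemoteService {
    String execute(String requestId, String command) throws Exception;
}

public class RetryingClient {
    public String callWithRetry(RemoteService service, String command) throws Exception {
        String requestId = UUID.randomUUID().toString(); // one ID per logical request
        Exception last = null;
        for (int attempt = 0; attempt < 3; attempt++) {
            try {
                // The server deduplicates on requestId, e.g. by caching the
                // result of the first successful execution.
                return service.execute(requestId, command);
            } catch (Exception e) {
                last = e; // transient failure: retry with the SAME requestId
            }
        }
        throw last;
    }
}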
For the second question, I think this article helps define the semantics for an RPC call: http://www.cs.unc.edu/~dewan/242/f97/notes/ipc/node27.html
