RxAndroid + Retrofit 2, getting a list of books

So, let's say I need to get a list of favorite books for an Android app.
I have the list of ids, but I can only get one book at a time. There is no bookAPI.getFavoriteBooks(listOfFavoriteIds) method I could call (the server doesn't have an endpoint for that); instead I'd have to call bookAPI.getBook(id) for each id in the list, and once I have all the favorite books I should return them as a list.
The answers I've found so far assume there's an Observable<List<Book>> getFavoriteBooks(List<Integer> ids) method I could call, but in this case I don't have that.
Is there a way to solve this with RxAndroid and Retrofit 2?

It's hard to say from your question if this fits your needs, but you could try:
Observable.fromIterable(listOfIds)
    .flatMap(new Function<Integer, ObservableSource<Book>>() {
        @Override
        public ObservableSource<Book> apply(Integer id) throws Exception {
            return bookApi.getBook(id);
        }
    })
    .toList()
Let me explain what's happening here. fromIterable creates an observable that emits each element in the iterable as an event. In this case it will emit each book id.
You then flatMap that observable into your API observable: each emitted id is mapped to an observable that emits the corresponding book from the API.
Finally, you collect all the emitted books into a list. Once you subscribe to this stream, you'll receive the whole list as a single event:
// whatever way you get the above stream
    .subscribe(new Consumer<List<Book>>() {
        @Override
        public void accept(List<Book> result) throws Exception {
            // do whatever you want with result
        }
    });
Just make sure to use the right schedulers for your use case.
(Be careful: this subscribe call doesn't handle errors, but adding an error Consumer is straightforward.)
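To tie it together, here's a minimal sketch (an assumption of how your bookApi and listOfIds are wired up, written with lambdas) that runs the calls on the IO scheduler, observes on the main thread, and handles errors:
Observable.fromIterable(listOfIds)
    .flatMap(id -> bookApi.getBook(id))        // Observable<Book> per id
    .toList()                                   // Single<List<Book>>
    .subscribeOn(Schedulers.io())               // do the network work off the main thread
    .observeOn(AndroidSchedulers.mainThread())  // deliver the result on the main thread
    .subscribe(
        books -> { /* show the favorite books */ },
        throwable -> { /* handle the error, e.g. show a message */ }
    );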

Related

What type of data should be passed to domain events?

I've been struggling with this for a few days now, and I'm still not clear on the correct approach. I've seen many examples online, but each one does it differently. The options I see are:
Pass only primitive values
Pass the complete model
Pass new instances of value objects that refer to changes in the domain/model
Create a specific DTO/object for each event with the data.
This is what I am currently doing, but it doesn't convince me. The example is in PHP, but I think it's perfectly understandable.
MyModel.php
class MyModel {
    //...
    private MediaId $id;
    private Thumbnails $thumbnails;
    private File $file;
    //...

    public function delete(): void
    {
        $this->record(
            new MediaDeleted(
                $this->id->asString(),
                [
                    'name' => $this->file->name(),
                    'thumbnails' => $this->thumbnails->toArray(),
                ]
            )
        );
    }
}
MediaDeleted.php
final class MediaDeleted extends AbstractDomainEvent
{
    public function name(): string
    {
        return $this->payload()['name'];
    }

    /**
     * @return array<ThumbnailArray>
     */
    public function thumbnails(): array
    {
        return $this->payload()['thumbnails'];
    }
}
As you can see, I am passing the ID as a string, the filename as a string, and an array of the Thumbnail value object's properties to the MediaDeleted event.
How do you see it? What type of data is preferable to pass to domain events?
Updated
@pgorecki's answer has convinced me, so I'll add an example to confirm whether this approach is correct, since I don't want to change too much.
It would now look like this:
public function delete(): void
{
    $this->record(
        new MediaDeleted(
            $this->id,
            new MediaDeletedEventPayload($this->file->copy(), $this->thumbnails->copy())
        )
    );
}
I'll explain a bit:
The ID of the aggregate stays outside the DTO because MediaDeleted extends an abstract class that requires the ID parameter. So the only thing I'm changing is that the $payload array becomes the MediaDeletedEventPayload DTO. I pass this DTO copies of the value objects related to the change in the domain; this way I pass the objects reliably and avoid strange behaviour from sharing the same instances.
What do you think about it?
A domain event is simply a data-holding structure or class (DTO) with all the information related to what just happened in the domain, and no logic. So I'd say "Create a specific DTO/object for each event with the data" is the best choice. Why don't you start with a less-is-more approach: think about the consumers of the event and what data they might need.
Also, being able to serialize and deserialize the event objects is good practice, since you may want to send them via a message broker (although this relates more to integration events than domain events).

Coordinating emission and subscription in Kotlin coroutines with hot flows

I am trying to design an observable task-like entity which would have the following properties:
Reports its current state changes reactively
Shares state and result events: new subscribers will also be notified if the change happens after they've subscribed
Has a lifecycle (backed by CoroutineScope)
Doesn't have suspend functions in the interface (because it has a lifecycle)
The very basic code is something like this:
class Worker {
    enum class State { Running, Idle }

    private val state = MutableStateFlow(State.Idle)
    private val results = MutableSharedFlow<String>()
    private val scope = CoroutineScope(Dispatchers.Default)

    private suspend fun doWork(): String {
        println("doing work")
        return "Result of the work"
    }

    fun start() {
        scope.launch {
            state.value = State.Running
            results.emit(doWork())
            state.value = State.Idle
        }
    }

    fun state(): Flow<State> = state
    fun results(): Flow<String> = results
}
The problems with this arise when I want to "start the work after I'm subscribed". There's no clear way to do that. The simplest thing doesn't work (understandably):
fun main() {
    runBlocking {
        val worker = Worker()
        // subscriber 1
        launch {
            worker.results().collect { println("received result $it") }
        }
        worker.start()
        // subscriber 2 can also be created "later" and watch
        // for state()/result() changes
    }
}
This prints only "doing work" and never prints a result. I understand why this happens (because collect and start are in separate coroutines, not synchronized in any way).
Adding a delay(300) to the coroutine inside doWork "fixes" things and the results are printed, but I'd like this to work without artificial delays.
Another "solution" is to create a SharedFlow from results() and use its onSubscription to call start(), but that didn't work either the last time I tried.
My questions are:
Can this be turned into something that works or is this design initially flawed?
If it is flawed, can I take some other approach that would still hit all the goals I specified at the beginning of the post?
Your problem is that your SharedFlow has no buffer set up, so it is emitting results to its (initially zero) current collectors and immediately forgetting them. The MutableSharedFlow() function has a replay parameter you can use to determine how many previous results it should store and replay to new collectors. You will need to decide what replay amount to use based on your use case for this class. For simply displaying latest results in a UI, a common choice is a replay of 1.
Depending on your use case, you may want to give your CoroutineScope a SupervisorJob() in its context so it isn't destroyed by any child job failing.
Side note, your state() and results() functions should be properties by Kotlin convention, since they do nothing but return references. Personally, I would also have them return read-only StateFlow/SharedFlow instead of just Flow to clarify that they are not cold.
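A minimal sketch of those suggestions (a replay of 1, a SupervisorJob, and read-only flow properties; the replay value is an assumption you'd tune to your use case) could look like this:
class Worker {
    enum class State { Running, Idle }

    private val _state = MutableStateFlow(State.Idle)
    private val _results = MutableSharedFlow<String>(replay = 1) // late subscribers still get the last result
    private val scope = CoroutineScope(SupervisorJob() + Dispatchers.Default)

    // read-only views make it clear these flows are hot
    val state: StateFlow<State> = _state
    val results: SharedFlow<String> = _results

    fun start() {
        scope.launch {
            _state.value = State.Running
            _results.emit(doWork())
            _state.value = State.Idle
        }
    }

    private suspend fun doWork(): String = "Result of the work"
}
With this, the subscriber in main receives "Result of the work" even though collect starts concurrently with start(), because the last result is replayed to it.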

The right way to return a Single from a CompletionStage

I'm playing around with reactive flows using RxJava2, Micronaut and Cassandra. I'm new to RxJava and not sure of the correct way to return a List of Person in the best async manner.
The data comes from a Cassandra DAO interface
public interface PersonDAO {
    @Query("SELECT * FROM cass_drop.person;")
    CompletionStage<MappedAsyncPagingIterable<Person>> getAll();
}
which gets injected into a Micronaut controller
return Single.just(personDAO.getAll().toCompletableFuture().get().currentPage())
    .subscribeOn(Schedulers.io())
    .map(people -> HttpResponse.ok(people));
OR
return Single.just(HttpResponse.ok())
    .subscribeOn(Schedulers.io())
    .map(it -> it.body(personDAO.getAll().toCompletableFuture().get().currentPage()));
OR switch to RxJava3
return Single.fromCompletionStage(personDAO.getAll())
    .map(page -> HttpResponse.ok(page.currentPage()))
    .onErrorReturn(throwable -> HttpResponse.ok(Collections.emptyList()));
Not a pro at RxJava or Cassandra, but:
In your first and second examples you are blocking the thread by calling get on the CompletionStage; even if you do it on the IO thread, I would not recommend it.
You are also using a Single, which can emit only one value or an error. Since you want to return a List, I would suggest going with at least an Observable.
Third point: the result from Cassandra is paginated. I don't know if that's intentional, but you read only the first page and miss the others.
I would try a solution like the one below. I kept using the IO thread (the operation may be IO-heavy) and I iterate over the pages Cassandra fetches:
/* the main method of your controller */
@Get()
public Observable<Person> listPersons() {
    return next(personDAO.getAll()).subscribeOn(Schedulers.io());
}

private Observable<Person> next(CompletionStage<MappedAsyncPagingIterable<Person>> pageStage) {
    return Single.fromFuture(pageStage.toCompletableFuture())
        .flatMapObservable(personsPage -> {
            var o = Observable.fromIterable(personsPage.currentPage());
            if (!personsPage.hasMorePages()) {
                return o;
            }
            return o.concatWith(next(personsPage.fetchNextPage()));
        });
}
If you ever plan to use Reactor instead of RxJava, you can give cassandra-java-driver-reactive-mapper a try.
The syntax is fairly simple and the mapping is done at compile time.

The test failure message for Mockito verify

For a parameter class
class Criteria {
    private Map params;
    public Map getMap() { return params; }
}
and a service method that accepts this criteria
class Service {
    public List<Person> query(Criteria criteria) { ... }
}
A custom FeatureMatcher is used to match the criteria key:
private Matcher<Criteria> hasCriteria(final String key, final Matcher<?> valueMatcher) {
    return new FeatureMatcher<Criteria, Object>((Matcher<? super Object>) valueMatcher, key, key) {
        @Override
        protected Object featureValueOf(Criteria actual) {
            return actual.getMap().get(key);
        }
    };
}
When using Mockito to verify the arguments:
verify(Service).query((Criteria) argThat(hasCriteria("id", equalTo(new Long(12)))));
The error message shows that:
Argument(s) are different! Wanted:
Service.query(
id <12L>
);
-> at app.TestTarget.test_id (TestTarget.java:134)
Actual invocation has different arguments:
Service.query(
app.Criteria#509f5011
);
If I use ArgumentCaptor,
ArgumentCaptor<Criteria> argument = ArgumentCaptor.forClass(Criteria.class);
verify(Service).query(argument.capture());
assertThat(argument.getValue(), hasCriteria("id", equalTo(new Long(12))));
The message is much better:
Expected: id <12L> but id was <2L>
How can I get such message, without using ArgumentCaptor?
The short answer is to adjust the Criteria code, if it's under your control, to write a better toString method. Otherwise, you may be better off using the ArgumentCaptor method.
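For example, a minimal toString override on Criteria (a sketch based on the params map shown above) makes Mockito's "Actual invocation has different arguments" output readable:
class Criteria {
    private Map params;

    public Map getMap() { return params; }

    @Override
    public String toString() {
        // prints e.g. Criteria{id=2} instead of app.Criteria@509f5011
        return "Criteria" + params;
    }
}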
Why is it hard to do without ArgumentCaptor? You know you're expecting one call, but Mockito was designed to handle a dozen similar calls to evaluate. Even though you're using the same matcher implementation, with the same helpful describeMismatch implementation, assertThat tries exactly once to match, whereas verify sees a mismatch and keeps trying to match any other call.
Consider this:
// in code:
dependency.call(true, false);
dependency.call(false, true);
dependency.call(false, false);
// in test:
verify(mockDependency).call(
    argThat(is(equalTo(true))),
    argThat(is(equalTo(true))));
Here, Mockito wouldn't know which of the calls was supposed to be call(true, true); any of the three might have been it. Instead, it only knows that there was a verification you were expecting that was never satisfied, and that one of three related calls might have been close. In your code with ArgumentCaptor, you can use your knowledge that there's only one call, and provide a more-sane error message; for Mockito, the best it can do is to output all the calls it DID receive, and without a helpful toString output for your Criteria, that's not very helpful at all.

AutoFac IoC, DDD, and inter-Repository Dependencies

I have two POCO types, A and B. I have a repository for each, Rep<A> and Rep<B>, both of which implement IRep<A> and IRep<B> served up by an IoC container (AutoFac in this case).
There are several kinds of repositories - load-on-demand from a DB (or whatever), lazy-loaded in-memory collections, cached web-service results, etc. Callers can't tell the difference. Both Rep<A> and Rep<B> happen to be in-memory collections as A's and B's don't change very much and live a long time.
One of the properties of B is an A. What I do now is, when a B is asked for its A, B gets IRep<A> to find its A and returns it. It does this every time - every request for B's A involves IRep<A>.Find(). The upside is B's never hold onto A's and each request takes into account whatever the state of Rep happens to be. The downside is a lot of IoC/IRep<A> churn.
I am thinking of using the Lazy<T> pattern here so that a B asks IRep<A> once and holds onto what it got. But what happens if an A is deleted from its repository?
I am looking for a clean way for Rep<A> to notify whoever is interested when it has changed. In my example, a certain B's A may be deleted, so I would like Rep<A> to raise an event when something is deleted, or added, etc. Rep<B> might subscribe to this event to clean up any B's that refer to A's that are now gone, etc. How to wire it up?
Ideally nothing changes when instantiating a Rep<A>. It should have no idea who listens, and A's might be manipulated all day long without ever firing up a Rep.
But when Rep<B> is born it needs a way to subscribe to Rep<A>'s event. There might not be a Rep<A> alive yet, but surely there will be once a B is asked for its A, so it seems ok to fire up a Rep<A>.
In essence, when Rep<B> is instantiated, I want it to register itself with Rep<A> for the event notification. I don't want to pollute the IRep<T> interface, because this shouldn't matter to anyone outside the repository layer, and other types of repositories might not have to worry about this at all.
Does this make any sense?
What if you made the Rep<A> return an "observable" object that can evaluate to an A, and also has a subscribable event that is raised when something about that A changes? Just a thought. This way, you don't have to have the handlers check to make sure that their A changed; if the event they're listening for is fired, it concerns their instance and not any other.
You might code it as follows:
public class Observable<T> : IDisposable
{
    private T instance;

    public T Instance
    {
        get { return instance; }
        set
        {
            instance = value;
            var handlers = ReferenceChanged;
            if (handlers != null) handlers(this, instance);
        }
    }

    public static implicit operator T(Observable<T> obs)
    {
        return obs.Instance;
    }

    // DO NOT attach anonymous delegates or lambdas to this event, or you'll cause a leak
    public event EventHandler<T> ReferenceChanged;

    public void Dispose()
    {
        var handlers = ReferenceChanged;
        if (handlers != null)
        {
            // notify listeners that the reference is going away, then detach them
            handlers(this, default(T));
            foreach (EventHandler<T> handler in handlers.GetInvocationList())
                ReferenceChanged -= handler;
        }
    }
}
public class Rep<T>
{
    private Dictionary<T, Observable<T>> observableDictionary = new Dictionary<T, Observable<T>>();
    // ...
    public Observable<T> GetObservable(Func<T, bool> criteria)
    {
        // criteria should test only uniquely-identifying information
        if (observableDictionary.Keys.Any(criteria))
            return observableDictionary[observableDictionary.Keys.First(criteria)];
        else
        {
            // TODO: get object from source according to criteria and set to variable queryResult
            var observable = new Observable<T> { Instance = queryResult };
            observableDictionary.Add(queryResult, observable);
            return observable;
        }
    }
}
...
var observableA = myRepA.GetObservable(myCriteria);
observableA.ReferenceChanged += DoSomethingWhenReferenceChanges;
Now the consuming code will be notified if the internal reference is changed or the observable is disposed. To have the observable also notify consuming code when child references of an A change, A must itself be observable, firing an event that Observable<T> handles and "bubbles" up through either ReferenceChanged or a more specific event such as InstanceDataChanged (or whatever you want to call it).
