Is there a way to detect if an IndexWriter has pending commits? - rust

In tantivy, is there a way to tell if an IndexWriter has any uncommitted changes?
There is nothing obvious in the IndexWriter documentation, but perhaps there is a less obvious way?
If there isn't a way to tell whether a commit() is needed, is it cheap to call commit() when there are no pending changes?
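For what it's worth, if no such API turns up, one workaround is to track the state yourself with a thin wrapper. A minimal sketch, assuming a recent tantivy where add_document() returns a Result<Opstamp>; the DirtyWriter type and its method names are mine, not part of tantivy:

```rust
use tantivy::{IndexWriter, Opstamp, TantivyDocument};

/// Hypothetical wrapper that remembers whether anything was added
/// since the last commit(). Not part of tantivy itself.
struct DirtyWriter {
    writer: IndexWriter,
    dirty: bool,
}

impl DirtyWriter {
    fn new(writer: IndexWriter) -> Self {
        Self { writer, dirty: false }
    }

    fn add_document(&mut self, doc: TantivyDocument) -> tantivy::Result<Opstamp> {
        self.dirty = true;
        self.writer.add_document(doc)
    }

    /// True if there are changes not yet covered by a commit().
    fn has_pending(&self) -> bool {
        self.dirty
    }

    /// Commit only if something actually changed since the last commit.
    fn commit_if_dirty(&mut self) -> tantivy::Result<()> {
        if self.dirty {
            self.writer.commit()?;
            self.dirty = false;
        }
        Ok(())
    }
}
```

The same bookkeeping would need to cover deletes and rollback() if you use those; this only sketches the add/commit path.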

Related

Is there a way to judge whether all the events have been consumed?

Is there a way to judge whether all the events have been consumed? Similar to the BlockingQueue.isEmpty() method.
I want to add some business logic when consumers are idle.
Looking forward to your reply.
Yes, the ring buffer has remainingCapacity(), which tells you how many slots remain free. You would then compare that to the total capacity of the ring buffer to check whether it's empty.
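The answer refers to the Java Disruptor's remainingCapacity(); the check itself is just a capacity comparison. Here is that idea as a sketch, with a stand-in ring-buffer type that is not the Disruptor API:

```rust
/// Stand-in ring buffer; in the Disruptor these would be
/// RingBuffer.remainingCapacity() and the configured buffer size.
struct RingBuffer {
    size: usize,
    used: usize,
}

impl RingBuffer {
    fn capacity(&self) -> usize {
        self.size
    }
    fn remaining_capacity(&self) -> usize {
        self.size - self.used
    }
    /// All slots free => every published event has been consumed.
    fn is_empty(&self) -> bool {
        self.remaining_capacity() == self.capacity()
    }
}

fn main() {
    let rb = RingBuffer { size: 1024, used: 0 };
    assert!(rb.is_empty());
}
```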

Calling Save Changes Multiple Times

I was wondering if anyone has done any perf tests around the effect that calling EF Core's SaveChangesAsync() has on performance when there are no changes to be saved.
Essentially I am assuming it's basically a no-op and therefore not a big deal to call "just in case"?
(I am trying to track user activity in middleware in ASP.NET Core, and essentially on the way out I want to make sure SaveChanges was called to persist the activity to the database. There is a chance that it has already been called on the context, depending on what the user did, and if that's the case I don't want to incur the cost of a second operation when the activity could be persisted as part of the normal transaction/round trip.)
As you can see in the implementation, if there are no changes, nothing will be done. How much impact that has on performance, I don't know. But of course, calling SaveChanges or SaveChangesAsync when there are no changes still costs something compared to not calling them at all.
That's the same behavior as in EF6.

EventSourcing race condition

Here is a nice article which describes what event sourcing (ES) is and how to deal with it.
Everything is fine there, but one image bothers me. Here it is:
I understand that in distributed event-based systems we are able to achieve eventual consistency only. Anyway ... How do we ensure that we don't book more seats than available? This is especially a problem if there are many concurrent requests.
It may happen that n aggregates are populated with the same amount of reserved seats, and all of these aggregate instances allow reservations.
All events are private to the command running them until the book of record acknowledges a successful write. So we don't share the events at all, and we don't report back to the caller, without knowing that our version of "what happened next" was accepted by the book of record.
The write of events is analogous to a compare-and-swap of the tail pointer in the aggregate history. If another command has changed the tail pointer while we were running, our swap fails, and we have to mitigate/retry/fail.
In practice, this is usually implemented by having the write command to the book of record include an expected position for the write. (Example: ES-ExpectedVersion in GES).
The book of record is expected to reject the write if the expected position is in the wrong place. Think of the position as a unique key in a table in a RDBMS, and you have the right idea.
This means, effectively, that the writes to the event stream are actually consistent -- the book of record only permits the write if the position you write to is correct, which means that the position hasn't changed since the copy of the history you loaded was written.
It's typical for commands to read event streams directly from the book of record, rather than the eventually consistent read models.
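A minimal sketch of that compare-and-swap append, using an illustrative in-memory book of record (the EventStore type, the append signature, and the error name are mine, not any particular product's API):

```rust
use std::collections::HashMap;

#[derive(Debug)]
struct WrongExpectedVersion;

/// Illustrative in-memory book of record, keyed by stream id.
#[derive(Default)]
struct EventStore {
    streams: HashMap<String, Vec<String>>, // stream id -> event history
}

impl EventStore {
    /// Append only if the stream is still at `expected_version`;
    /// otherwise another command moved the tail pointer while we
    /// were running, and we must reload, re-check, and retry or fail.
    fn append(
        &mut self,
        stream: &str,
        expected_version: usize,
        events: Vec<String>,
    ) -> Result<usize, WrongExpectedVersion> {
        let history = self.streams.entry(stream.to_string()).or_default();
        if history.len() != expected_version {
            return Err(WrongExpectedVersion); // the tail pointer moved
        }
        history.extend(events);
        Ok(history.len()) // the new version
    }
}

fn main() {
    let mut store = EventStore::default();
    // First writer loaded version 0 and wins the race.
    assert!(store.append("seats-1", 0, vec!["SeatReserved".into()]).is_ok());
    // Second writer also loaded version 0; its write is rejected.
    assert!(store.append("seats-1", 0, vec!["SeatReserved".into()]).is_err());
}
```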
It may happen that n aggregate roots are populated with the same number of reserved seats, which means that having validation in the reserve method won't help. Then n aggregate roots will emit the event of a successful reservation.
Every bit of state needs to be supervised by a single aggregate root. You can have n different copies of that root running, all competing to write to the same history, but the compare and swap operation will only permit one winner, which ensures that "the" aggregate has a single internally consistent history.
There are going to be a couple of ways to deal with such a scenario.
First off, an event stream would have the current version as the version of the last event added. This means that you would not, or should not, be able to persist the event stream if it is no longer at the version it was at when loaded. Since the very first write would cause the version of the event stream to be increased, the second write would not be permitted. Since events are not emitted, per se, but are rather a result of the event sourcing, we would not have the type of race condition in your example.
Well, if your commands are processed behind a queue, any failures should be retried. Should it not be possible to process the request, you would enter the normal "I'm sorry, Dave. I'm afraid I can't do that" scenario by letting the user know that they should try something else.
Another option is to start the processing by issuing an update against some table row to serialize any calls to the aggregate. Probably not the most elegant, but it does cause a system-wide block on the processing.
I guess, to a large extent, one cannot really trust the read store when it comes to transactional processing.
Hope that helps :)

RTOS: requesting non-sleeping task to wake up causes next call to sleep() to not sleep - is that good?

I'm rewriting existing real-time kernel TNKernel; I have used it for a couple of years, but I don't like many of its design decisions (as well as implementation details), so I decided to fork it and have fun implementing what I want. Anyone who is interested might read additional information at the project page on bitbucket.
TNKernel has one strange, in my opinion, feature: it has a service tn_task_sleep(int timeout) which puts the current task to sleep, and tn_task_wakeup(struct TN_Task *task) which wakes a currently sleeping task up.
The strangeness is that it is legal to call tn_task_wakeup() on a non-sleeping task; in this case, a special flag like wakeup_request is set, and on the next call to tn_task_sleep() this flag is cleared and the task doesn't sleep.
All of this seems to me like a complete hack; it might be used as a workaround to avoid race condition problems, or as a hacky replacement for a semaphore.
It just encourages the programmer to go with a hacky approach instead of creating a straightforward semaphore and providing proper synchronization. So, I'm willing to remove this service from my project. Is it a good idea to get rid of it, or have I missed something important? Why would we ever need it?
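In effect, that flag gives sleep/wakeup the semantics of a binary semaphore. A rough sketch of the described behavior (the names and the Rust rendering are mine; the real kernel is C and does this under scheduler protection):

```rust
use std::sync::{Condvar, Mutex};

/// Sketch of TNKernel's sleep/wakeup semantics: waking a task
/// that is not sleeping sets a flag that cancels its next sleep.
struct Task {
    state: Mutex<TaskState>,
    cond: Condvar,
}

#[derive(Default)]
struct TaskState {
    sleeping: bool,
    wakeup_request: bool,
}

impl Task {
    fn new() -> Self {
        Task { state: Mutex::new(TaskState::default()), cond: Condvar::new() }
    }

    fn sleep(&self) {
        let mut st = self.state.lock().unwrap();
        if st.wakeup_request {
            st.wakeup_request = false; // consume the pending wakeup,
            return;                    // and don't actually sleep
        }
        st.sleeping = true;
        while st.sleeping {
            st = self.cond.wait(st).unwrap();
        }
    }

    fn wakeup(&self) {
        let mut st = self.state.lock().unwrap();
        if st.sleeping {
            st.sleeping = false; // task is asleep: wake it now
            self.cond.notify_one();
        } else {
            st.wakeup_request = true; // not sleeping: remember the request
        }
    }
}

fn main() {
    let task = Task::new();
    task.wakeup(); // task is not sleeping: wakeup_request is set
    task.sleep();  // returns immediately, consuming the request
}
```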
Since no one said that I was wrong, I assumed that I was right and removed these strange "features" from my kernel.

Are single Redis commands executed in isolation?

I'm using Node and Redis.
If I issue a redis.set() command, is there any chance that whilst that is being set, another read can occur with the old value?
No, you will never have that problem. One of the basic virtues of Redis is that it has a tight event loop which executes commands one at a time, so they are naturally atomic.
This page has more on the topic (see subheading "Atomicity"), and about Redis in general.
Assuming you're talking about two truly concurrent accesses, one write and one read, the question has essentially no meaning: if the write itself is atomic and the value is never seen as anything other than the old or the new value, then a reader who reads at "about the same time" as the writer may legitimately see either the old or the new value.
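To make the "old value or new value, never a torn value" point concrete, here is a sketch using the redis-rs crate, assuming a Redis server on localhost (the key name and values are arbitrary):

```rust
use redis::Commands;
use std::thread;

fn main() -> redis::RedisResult<()> {
    let client = redis::Client::open("redis://127.0.0.1/")?;
    let mut writer = client.get_connection()?;
    let mut reader = client.get_connection()?;

    let _: () = writer.set("counter", 1)?;

    // Overwrite the value from another thread; SET is a single
    // command, executed to completion on Redis's event loop.
    let w = thread::spawn(move || -> redis::RedisResult<()> {
        let _: () = writer.set("counter", 2)?;
        Ok(())
    });

    // A concurrent GET sees exactly 1 or exactly 2 -- never
    // a half-written value.
    let seen: i32 = reader.get("counter")?;
    assert!(seen == 1 || seen == 2);

    w.join().unwrap()?;
    Ok(())
}
```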
