Calculating performance metrics using trace.json for simulation in UnetStack3 - groovy

I am working on a tool to calculate different performance metrics (like average end-to-end delay, throughput, packet delivery ratio, etc.) for simulations of underwater networks in UnetStack3. I have implemented a Python script that parses trace.json and calculates end-to-end delay. However, it only works for topologies with one-hop communication, since I match events by their MessageID. I also analyzed the implementation of the VizTrace tool in Julia and tried to extend it. However, I am unable to figure out how to correlate events that occur on different nodes in order to calculate performance measures in a multi-hop topology. Please let me know what approach I should follow with Python and with VizTrace.

Every event entry in the trace.json file contains a few useful pieces of information to help you associate events across nodes:
component: name and class of the agent, along with the node on which it is running
threadID: unique identifier within a node that associates related events together.
stimulus: contains messageID of the message that caused this event.
response: contains messageID of the message that was sent in response to this event.
For more details, see https://blog.unetstack.net/whats-new-in-UnetStack-3.3
Tracing an event through the agents in the node simply involves collating the events with the same threadID. In order to trace an event across nodes, you need to look at the messageID of the response messages, and find the equivalent stimulus message (same messageID) on the next node. Then you do the same from that node to the following one, until you reach the destination.
If you are using the HalfDuplexModem simulation model, then these messages that go across nodes (and hence across threadIDs) will be the HalfDuplexModem$TX messages. Example: https://blog.unetstack.net/assets/img/mermaid-diagram-20210408195013.svg
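For example, a minimal Python sketch of this chaining idea (not the official viztrace algorithm) could look like the one below. It assumes each event record carries the time, component, threadID, stimulus and response fields described above; since the exact nesting of groups inside trace.json may vary, it simply flattens everything first.

import json
from collections import defaultdict

def load_events(path):
    # Flatten all event records out of trace.json, whatever the group nesting.
    def walk(obj):
        if isinstance(obj, dict):
            if "stimulus" in obj or "response" in obj:
                yield obj
            for value in obj.values():
                yield from walk(value)
        elif isinstance(obj, list):
            for item in obj:
                yield from walk(item)
    with open(path) as f:
        return list(walk(json.load(f)))

def trace_across_nodes(events, first_message_id):
    # Follow a message hop by hop: the messageID of a response on one node
    # should show up as the messageID of a stimulus on the next node.
    by_stimulus = defaultdict(list)
    for e in events:
        mid = (e.get("stimulus") or {}).get("messageID")
        if mid:
            by_stimulus[mid].append(e)
    chain, frontier, seen = [], [first_message_id], set()
    while frontier:
        mid = frontier.pop(0)
        if mid in seen:
            continue
        seen.add(mid)
        for e in by_stimulus.get(mid, []):
            chain.append(e)
            rid = (e.get("response") or {}).get("messageID")
            if rid:
                frontier.append(rid)
    return chain

Starting from the messageID of the datagram request at the source, the chain collects every event it triggered, across agents and across nodes.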

Related

Calculating end-to-end delay for a multi-hop network simulation using trace.json in UnetStack

I have run a simulation for a topology of 3 nodes (Node-A, B, and C), where Node-A is the source, Node-C is the sink, and Node-B is the intermediate node between A and C. Now I want to calculate the end-to-end delay for the packets sent from source to sink. For this, I am parsing the trace.json generated after the simulation. However, in trace.json the threadID and the messageID generated for a datagram at the source (Node-A) are not the same once that datagram is forwarded at the intermediate node (Node-B). Thus, when I parse the events pertaining to the sink (Node-C), the threadID or messageID matches events pertaining to the intermediate node (Node-B) but not the source (Node-A). So I want to know how I can calculate the end-to-end delay for packets transmitted between the source and the sink, and whether there is any other approach I can follow to calculate the same. Please find the generated trace.json of the simulation here
Each node has its own thread of execution. The JSON has stimulus and response message IDs for each entry, allowing you to trace across agents and nodes. To understand this better, you may want to read this blog article.
You may also want to look at the viztrace utility. This utility does the heavy lifting of matching up stimulus-responses to create a coherent trace from the logs, and can even generate sequence diagrams for you. You could modify this utility to meet your needs, or use the output from it for easier analysis.
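As a rough illustration of the per-packet calculation, the sketch below assumes the events belonging to one datagram have already been chained across Node-A, Node-B and Node-C (for example with a helper like the trace_across_nodes sketch shown earlier on this page), and that every event record carries a time field in milliseconds.

def end_to_end_delay(chain):
    # Delay = time of the last event at the sink - time of the first event at the source.
    times = [e["time"] for e in chain if "time" in e]
    return (max(times) - min(times)) if times else None

def average_delay(chains):
    delays = [d for d in (end_to_end_delay(c) for c in chains) if d is not None]
    return sum(delays) / len(delays) if delays else None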

How are the missing events replayed?

I am trying to learn more about CQRS and Event Sourcing (Event Store).
My understanding is that a message queue/bus is not normally used in this scenario - a message bus can be used to facilitate communication between microservices, but it is not typically used specifically for CQRS. The way I see it at the moment, though, a message bus would be very useful for guaranteeing that the read model is eventually in sync (hence eventual consistency), e.g. when the server hosting the read model database is brought back online.
I understand that eventual consistency is often acceptable with CQRS. My question is: how does the read side know it is out of sync with the write side? For example, let's say there are 2,000,000 events created in Event Store on a typical day and 1,999,050 are also written to the read store. The remaining 950 events are not written because of a software bug somewhere, or because the server hosting the read model is offline for a few seconds, etc. How does eventual consistency work here? How does the application know to replay the 950 events that are missing at the end of the day, or the x events that were missed because of the downtime ten minutes ago?
I have read questions on here over the last week or so which talk about messages being replayed from the event store, e.g. this one: CQRS - Event replay for read side, but none of them talk about how this is done. Do I need to set up a scheduled task that runs once per day and replays all events that were created since the date the scheduled task last succeeded? Is there a more elegant approach?
I've used two approaches in my projects, depending on the requirements:
Synchronous, in-process Readmodels. After the events are persisted, in the same request lifetime and in the same process, the Readmodels are fed with those events. In case of a Readmodel's failure (bug or catchable error/exception) the error is logged, that Readmodel is simply skipped, and the next Readmodel is fed with the events, and so on. Then follow the Sagas, which may generate commands that generate more events, and the cycle is repeated.
I use this approach when the impact of a Readmodel's failure is acceptable to the business, i.e. when the readiness of a Readmodel's data is more important than the risk of failure. For example, they wanted the data immediately available in the UI.
The error log should be easily accessible on some admin panel so someone would look at it in case a client reports inconsistency between write/commands and read/query.
This also works if you have your Readmodels coupled to each other, i.e. one Readmodel needs data from another canonical Readmodel. Although this seems bad, it's not; it always depends. There are cases when you trade updater code/logic duplication for resilience.
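A minimal sketch of that synchronous, in-process dispatch, with the per-Readmodel error isolation described above (the Readmodel objects and their apply() method are hypothetical stand-ins):

import logging

log = logging.getLogger("projections")

def feed_readmodels(readmodels, events):
    for readmodel in readmodels:
        try:
            for event in events:
                readmodel.apply(event)  # hypothetical per-Readmodel handler
        except Exception:
            # the failing Readmodel is logged and skipped; the rest still get the events
            log.exception("Readmodel %s failed, skipping", type(readmodel).__name__)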
Asynchronous, in-another-process readmodel updater. I use this when I want total separation of a Readmodel from the other Readmodels, when a Readmodel's failure should not bring the whole read side down, or when a Readmodel needs another language, different from the monolith. Basically this is a microservice. When something bad happens inside a Readmodel, it is necessary that some authoritative higher-level component is notified, e.g. an Admin is notified by email or SMS.
The Readmodel should also have a status panel, with all kinds of metrics about the events that it has processed, whether there are gaps, and whether there are errors or warnings; it should also have a command panel where an Admin could rebuild it at any time, preferably without system downtime.
In any approach, the Readmodels should be easily rebuildable.
How would you choose between a pull approach and a push approach? Would you use a message queue with a push (events)?
I prefer the pull based approach because:
it does not use another stateful component like a message queue - another thing that must be managed, that consumes resources, and that can (and so eventually will) fail
every Readmodel consumes the events at the rate it wants
every Readmodel can easily change at any moment what event types it consumes
every Readmodel can easily be rebuilt at any time by requesting all the events from the beginning
the order of events is exactly the same as in the source of truth, because you pull from the source of truth
There are cases when I would choose a message queue:
you need the events to be available even if the Event store is not
you need competing/parallel consumers
you don't want to track what messages you consume; as they are consumed they are removed automatically from the queue
This talk from Greg Young may help.
How does the application know to replay the 950 events that are missing at the end of the day or the x events that were missed because of the downtime ten minutes ago?
So there are two different approaches here.
One is perhaps simpler than you expect - each time you need to rebuild a read model, just start from event 0 in the stream.
Yeah, the scale on that will eventually suck, so you won't want that to be your first strategy. But notice that it does work.
For updates with not-so-embarrassing scaling properties, the usual idea is that the read model tracks metadata about the stream position used to construct the previous model. Thus, the query from the read model becomes "What has happened since event #1,999,050?"
In the case of event store, the call might look something like
EventStore.ReadStreamEventsForwardAsync(stream, 1999050, 100, false)
The application doesn't know it hasn't processed some events due to a bug.
First of all, I don't understand why you assume that the number of events written on the write side must equal the number of events processed by the read side. Some projections may subscribe to the same event, and some events may have no subscriptions on the read side.
In case of a bug in a projection or in the infrastructure that resulted in a certain projection being invalid, you might need to rebuild this projection. In most cases this would be a manual intervention that resets the checkpoint of the projection to 0 (the beginning of time), so the projection will pick up all events from the event store from scratch and reprocess all of them again.
The event store should have a global sequence number across all events starting, say, at 1.
Each projection has a position tracking where it is along the sequence number. The projections are like logical queues.
You can clear a projection's data and reset the position back to 0 and it should be rebuilt.
In your case the projection fails for some reason, like the server going offline, at position 1,999,050, but when the server starts up again it will continue from this point.
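Putting the checkpoint idea from these answers into a sketch: the projection remembers the last global sequence number it applied, asks the store for everything after it, and a rebuild is just a reset of the checkpoint to 0. Here read_events_forward() and the projection object are hypothetical stand-ins, not a specific event store client API.

def catch_up(projection, read_events_forward, page_size=100):
    position = projection.load_checkpoint()              # e.g. 1_999_050
    while True:
        page = read_events_forward(start=position + 1, count=page_size)
        if not page:
            break                                        # read side is now in sync
        for event in page:
            projection.apply(event)
            position = event.sequence_number
        projection.save_checkpoint(position)             # commit progress once per page

def rebuild(projection, read_events_forward):
    projection.clear()                                   # wipe the Readmodel's data
    projection.save_checkpoint(0)                        # back to the beginning of time
    catch_up(projection, read_events_forward)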

"Resequencing" messages after processing them out-of-order

I'm working on what's basically a highly-available distributed message-passing system. The system receives messages from someplace over HTTP or TCP, performs various transformations on them, and then sends them to one or more destinations (also using TCP/HTTP).
The system has a requirement that all messages sent to a given destination are in-order, because some messages build on the content of previous ones. This limits us to processing the messages sequentially, which takes about 750ms per message. So if someone sends us, for example, one message every 250ms, we're forced to queue the messages behind each other. This eventually introduces intolerable delay in message processing under high load, as each message may have to wait for hundreds of other messages to be processed before it gets its turn.
In order to solve this problem, I want to be able to parallelize our message processing without breaking the requirement that we send them in-order.
We can easily scale our processing horizontally. The missing piece is a way to ensure that, even if messages are processed out-of-order, they are "resequenced" and sent to the destinations in the order in which they were received. I'm trying to find the best way to achieve that.
Apache Camel has a thing called a Resequencer that does this, and it includes a nice diagram (which I don't have enough rep to embed directly). This is exactly what I want: something that takes out-of-order messages and puts them in-order.
But, I don't want it to be written in Java, and I need the solution to be highly available (i.e. resistant to typical system failures like crashes or system restarts) which I don't think Apache Camel offers.
Our application is written in Node.js, with Redis and Postgresql for data persistence. We use the Kue library for our message queues. Although Kue offers priority queueing, the featureset is too limited for the use-case described above, so I think we need an alternative technology to work in tandem with Kue to resequence our messages.
I was trying to research this topic online, and I can't find as much information as I expected. It seems like the type of distributed architecture pattern that would have articles and implementations galore, but I don't see that many. Searching for things like "message resequencing", "out of order processing", "parallelizing message processing", etc., turns up solutions that mostly just relax the "in-order" requirement based on partitions or topics or whatnot. Alternatively, they talk about parallelization on a single machine. I need a solution that:
Can handle processing on multiple messages simultaneously in any order.
Will always send messages in the order in which they arrived in the system, no matter what order they were processed in.
Is usable from Node.js
Can operate in an HA environment (i.e. multiple instances of it running on the same message queue at once without inconsistencies).
Our current plan, which makes sense to me but which I cannot find described anywhere online, is to use Redis to maintain sets of in-progress and ready-to-send messages, sorted by their arrival time. Roughly, it works like this:
When a message is received, that message is put on the in-progress set.
When message processing is finished, that message is put on the ready-to-send set.
Whenever there's the same message at the front of both the in-progress and ready-to-send sets, that message can be sent and it will be in order.
I would write a small Node library that implements this behavior with a priority-queue-esque API using atomic Redis transactions. But this is just something I came up with myself, so I am wondering: Are there other technologies (ideally using the Node/Redis stack we're already on) that are out there for solving the problem of resequencing out-of-order messages? Or is there some other term for this problem that I can use as a keyword for research? Thanks for your help!
This is a common problem, so there are surely many solutions available. This is also quite a simple problem, and a good learning opportunity in the field of distributed systems. I would suggest writing your own.
You're going to have a few problems building this, namely
1: Guaranteed order of messages
2: Exactly-once delivery
You've found number 1, and you're solving it by resequencing messages in Redis, which is an OK solution. The other one, however, is not solved.
It looks like your architecture is not geared towards fault tolerance, so currently, if a server crashes, you restart it and continue with your life. This works fine when processing all requests sequentially, because then you know exactly when you crashed, based on what the last successfully completed request was.
What you need is either a strategy for finding out what requests you actually completed, and which ones failed, or a well-written apology letter to send to your customers when something crashes.
If Redis is not sharded, it is strongly consistent. It will fail and possibly lose all data if that single node crashes, but you will not have any problems with out-of-order data, or data popping in and out of existence. A single Redis node can thus hold the guarantee that if a message is inserted into the to-process set and then into the done set, no node will see the message in the done set without it also being in the to-process set.
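As an illustration only, the in-progress / ready-to-send scheme from the question could be sketched on a single Redis node roughly like this (shown in Python with redis-py; the key names are hypothetical, and a production version would want a Lua script or WATCH/MULTI so that the "pop when the heads match" step stays atomic across multiple instances):

import time
import redis

r = redis.Redis()

def message_received(msg_id):
    r.zadd("in-progress", {msg_id: time.time()})         # remember arrival order

def message_processed(msg_id):
    score = r.zscore("in-progress", msg_id)
    if score is not None:
        # reuse the arrival-time score so both sets share one ordering
        r.zadd("ready-to-send", {msg_id: score})

def drain_in_order(send):
    # while the oldest in-progress message has also been processed, ship it
    while True:
        head = r.zrange("in-progress", 0, 0)
        if not head or r.zscore("ready-to-send", head[0]) is None:
            break
        r.zrem("in-progress", head[0])
        r.zrem("ready-to-send", head[0])
        send(head[0])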
How I would do it
Using Redis seems like too much fuss, assuming that the messages are not huge, that losing them is OK if a process crashes, and that running them more than once, or even multiple copies of a single request at the same time, is not a problem.
I would recommend setting up a supervisor server that takes incoming requests, dispatches each to a randomly chosen slave, stores the responses and puts them back in order again before sending them on. You said you expected the processing to take 750ms. If a slave hasn't responded within say 2 seconds, dispatch it again to another node randomly within 0-1 seconds. The first one responding is the one we're going to use. Beware of duplicate responses.
If the retry request also fails, double the maximum wait time. After 5 failures or so, each waiting up to twice (or any multiple greater than one) as long as the previous one, we probably have a permanent error, so we should probably ask for human intervention. This algorithm is called exponential backoff, and prevents a sudden spike in requests from taking down the entire cluster. Not using a random interval, and retrying after n seconds would probably cause a DOS-attack every n seconds until the cluster dies, if it ever gets a big enough load spike.
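A toy illustration of that retry policy - re-dispatch with a random delay whose upper bound doubles after every failure, then escalate to a human after a few attempts - where dispatch_to_random_slave() is a hypothetical stand-in for your own transport:

import random
import time

def process_with_backoff(message, dispatch_to_random_slave, timeout=2.0, max_attempts=5):
    max_delay = 1.0
    for attempt in range(max_attempts):
        try:
            return dispatch_to_random_slave(message, timeout=timeout)
        except TimeoutError:
            time.sleep(random.uniform(0, max_delay))     # random jitter avoids synchronised spikes
            max_delay *= 2                               # exponential backoff
    raise RuntimeError("permanent failure, escalate to a human")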
There are many ways this could fail, so make sure this system is not the only place data is stored. However, this will probably work 99+% of the time, it's probably at least as good as your current system, and you can implement it in a few hundred lines of code. Just make sure your supervisor is using asynchronous requests so that you can handle retries and timeouts. Javascript is by nature single-threaded, so this is slightly trickier than normal, but I'm confident you can do it.

Spring Integration Kafka slow message processing when using KafkaNativeOffsetManager

We've been using SI Kafka for a new project here with much success. Prior to a recent switch, we were using the KafkaTopicOffsetManager for the management of our consumers' topic offsets. In order not to have an additional topic per consumer/topic pair, and to use Burrow for lag monitoring, we decided to use the latest KafkaNativeOffsetManager, which uses the native offset management provided by Kafka. After making the switch though, we noticed that the consumption of messages from the target topic was continually lagging behind. We know this didn't happen with the KafkaTopicOffsetManager, as we were using it for months prior to the switch. We also ran side-by-side tests and verified that, when using the KafkaTopicOffsetManager, the consumption of messages was in near real time with the production of messages, while the KafkaNativeOffsetManager was always increasingly lagging behind. Both offset managers are using default configuration and are both committing offsets after the message is processed (auto-acknowledge).
So I really have two questions, the first not being the primary focus of this SO post.
First question is why would this be the case that the native offset management is slower than using a topic for offset management?
Second question is, can we configure SI Kafka to not commit offsets on the successful processing of each message, but rather provide a different strategy? Our thought was that maybe we shouldn't be committing offsets so frequently and should instead be doing them as a batch update, for example committing offsets after successfully processing 25 messages or after 30 seconds.
Thank you
When disabling autocommit and receiving the acknowledgment header, the only thing you need to do is to invoke acknowledge() once your message has been processed. This assumes that even if you are handling the message in a different thread, you will retain a reference to the Acknowledgment instance, either as such, or as part of the original Message - or you are copying headers if you are doing transformations. But the call needs to be made by your code.
Secondly, the performance issue - it is caused by the fact that the KafkaNativeOffsetManager implementation makes a blocking, relatively more expensive call to the brokers (relative to simply sending a message to a compacted topic, as the KafkaTopicOffsetManager does). Generally speaking, doing updates after every message is expensive, and in Spring XD we mitigate that by using https://github.com/spring-projects/spring-xd/blob/master/extensions/spring-xd-extension-kafka/src/main/java/org/springframework/integration/x/kafka/WindowingOffsetManager.java, which reduces the number of effective writes. I suppose we could do something similar for Spring Integration.
To wit: comparatively, 100000 updates complete in 9.8s with the KafkaNativeOffsetManager and in 0.382s with the KafkaTopicOffsetManager, as shown by https://github.com/mbogoevici/spring-integration-kafka/blob/perftest/src/test/java/org/springframework/integration/kafka/performance/OffsetManagerPerformanceTests.java (results gathered on my own machine). Results may be skewed somehow, but still indicate a big difference. Tracing in YourKit confirms the result.
Not sure what the problem is with the KafkaNativeOffsetManager; it would be great if you could share some investigation on the matter, e.g. the bottleneck place in our code, in the JIRA.
For the deferred offset commit I can suggest autoCommitOffset = false on the KafkaMessageDrivenChannelAdapter. With that, the message sent to the channel will be enriched with the KafkaHeaders.ACKNOWLEDGMENT header in the form of a DefaultAcknowledgment. It really answers your request:
/**
* Invoked when the message for which the acknowledgment has been created has been processed.
* Calling this method implies that all the previous messages in the partition have been processed already.
*/
void acknowledge();
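Whether or not SI exposes such a strategy out of the box, the count-or-time batching asked about in the question is small enough to sketch. The following is a language-neutral illustration (written in Python purely for brevity), where commit_offset() stands in for whatever actually acknowledges the offset, e.g. a call to acknowledge() as above:

import time

class WindowedCommitter:
    def __init__(self, commit_offset, max_count=25, max_age_seconds=30):
        self.commit_offset = commit_offset
        self.max_count = max_count
        self.max_age_seconds = max_age_seconds
        self.pending = 0
        self.latest_offset = None
        self.last_commit = time.monotonic()

    def on_message_processed(self, offset):
        self.pending += 1
        self.latest_offset = offset
        too_many = self.pending >= self.max_count
        too_old = time.monotonic() - self.last_commit >= self.max_age_seconds
        if too_many or too_old:
            self.commit_offset(self.latest_offset)       # one write covers the whole batch
            self.pending = 0
            self.last_commit = time.monotonic()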

How to get events count from Microsoft Azure EventHub?

I want to get events count from Microsoft Azure EventHub.
I can use EventHubReceiver.Receive(maxcount) but it is slow for a large number of big events.
There is NamespaceManager.GetEventHubPartition(..).EndSequenceNumber property that seems to be doing the trick but I am not sure if it is correct approach.
EventHubs doesn't have a notion of message count, as EventHubs is a high-throughput, low-latency, durable stream of events on the cloud - the CORRECT current count at a given point of time could be wrong the very next millisecond! And hence, it wasn't provided :)
Hmm, we should have named EventHubs something like a StreamHub - which would make this obvious!!
If what you are looking for is - how much is the Receiver lagging behind - then EventHubClient.GetPartitionRuntimeInformation().LastEnqueuedSequenceNumber is your Best bet.
As long as no messages are sent to the partition this value remains constant :)
On the Receiver side - when a message is received - receivedEventData.SequenceNumber indicates the current sequence number you are processing, and the difference between EventHubClient.GetPartitionRuntimeInformation().LastEnqueuedSequenceNumber and EventData.SequenceNumber indicates how far the Receiver of a Partition is lagging behind - based on which, the receiver process can scale the number of workers up or down (work distribution logic).
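As a back-of-the-envelope sketch, the lag arithmetic and a toy scaling rule could look like the following; the two sequence numbers would come from GetPartitionRuntimeInformation() and the received EventData, but here they are plain parameters so that only the arithmetic is shown, and the thresholds are made up:

def partition_lag(last_enqueued_sequence_number, current_sequence_number):
    # how many events this receiver still has to catch up on
    return last_enqueued_sequence_number - current_sequence_number

def desired_workers(lag, events_per_worker=10_000, max_workers=16):
    # toy scale-up/scale-down rule based on the lag
    return max(1, min(max_workers, lag // events_per_worker + 1))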
more on Event Hubs...
You can use Stream Analytics, with a simple query:
SELECT
COUNT(*)
FROM
YourEventHub
GROUP BY
TUMBLINGWINDOW(DURATION(hh, <Number of hours in which the events happened>))
Of course you will need to specify a time window, but you can potentially run it from when you started collecting data to now.
You will be able to output to SQL/Blob/Service Bus et cetera.
Then you can get the message out of the output from code and process it. It is quite complicated for a one off count, but if you need it frequently and you have to write some code around it, it could be the solution for you.
