I have a use case for Twilio compositions (two video/audio streams) which relies heavily on the timing/syncing accuracy of the participant streams.
We produce a composition of both videos, and also a composition of each individual audio stream. The word timings within each stream are extracted and compared, which relies heavily on the syncing accuracy between the two streams.
In particular we require accurate syncing (to the best of the available data) even when there were network issues, disconnects/reconnects and so on.
Can anyone comment on how accurate and robust the timing/syncing of Twilio's compositions is for this use case?
Have you had similar use cases and can you comment on your experience with accuracy down to say ~100ms?
Sorry - I know this is a general and vague question - just not sure where else to ask.
Posting an answer here for everyone's future reference.
Having done hundreds of compositions in a wide variety of test scenarios (joining at different times, reconnections, dodgy internet, closing the browser, etc.) I can happily report that Twilio's composition system generates extremely reliable and accurate compositions in terms of timing and syncing.
However it seems necessary to include both participants in the composition to ensure the start times are consistent.
My solution for creating an individual participant's composition (fully synced with the other participant's) was as follows; a rough API sketch is included below:
Create a composition with the video from both participants
Make the other participant's video z-ordered behind the main participant's, and placed 16x16 in the corner
Only include the audio streams from the participant of interest
Doing this in turn for each participant produces individual participant compositions which are completely aligned with each other.
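For reference, one such composition request might look roughly like the sketch below, using the Twilio Python helper library. The region names and the exact parameters/defaults are from memory and should be verified against the Compositions REST API documentation.

    # Rough sketch of the layout described above: both videos are included so
    # the start times stay consistent, but the other participant's video is
    # z-ordered behind the main one as a 16x16 region, and only the main
    # participant's audio is included. Verify parameter names against the docs.
    import os
    from twilio.rest import Client

    client = Client(os.environ["TWILIO_ACCOUNT_SID"], os.environ["TWILIO_AUTH_TOKEN"])

    def compose_participant(room_sid, main_sid, other_sid):
        return client.video.compositions.create(
            room_sid=room_sid,
            video_layout={
                "main": {
                    "video_sources": [main_sid],
                    "z_pos": 1,
                },
                "hidden": {                      # illustrative region name
                    "video_sources": [other_sid],
                    "z_pos": 0,                  # behind the main video
                    "x_pos": 0,
                    "y_pos": 0,
                    "width": 16,                 # 16x16 in the corner
                    "height": 16,
                },
            },
            audio_sources=[main_sid],            # only the participant of interest
            format="mp4",
        )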
I am currently making a behavior assessment of different software modules regarding DB access, network access, the number of memory allocations, etc.
The main goal is to pick a main use case (let's say system initialization) and identify the modules that are:
Unnecessarily accessing the DB.
Creating too many caches for the same data.
Making too many (or too big) allocations at once.
Spawning many threads.
Accessing the network.
By assessing those, I could have an overview of the modules that need to be reworked in order to improve performance, delete redundant DB accesses, avoid CPU usage peaks, etc.
I found the sequence diagram a good candidate for representing the use cases' behavior, but I am not sure how to depict their interaction with the above-mentioned activities.
I could do something like what is shown in this picture, but that is an "invention" of mine, tagging functions with colors. I'm not sure if it is too simplistic or childish (too many colors?).
I wonder if there is any specific UML diagram to represent this kind of interaction.
Using SDs is probably the most appropriate approach here. You might consider timing diagrams in certain cases if you need to present timing constraints. However, SDs already have a way to show timing constraints which is quite powerful.
You should adorn your diagram with a comment stating that the length of the colored self-calls represents the percentage of use, or something like that (or just add a title saying so). Using colors is perfectly fine, by the way.
As a side note: (the colored) self-calls are shown with a self-pointing arrow like this
but I'd guess your picture can be understood by anyone, so you can see that as nitpicking. And most likely they are not real self-calls but just indicators, so that's fine too.
tl;dr Whatever transports the message is appropriate.
I have a small UML assignment due Monday; it doesn't seem too complicated, and I'm not asking this site to solve it for me -- I'm just asking for clarification on a couple of doubts of mine.
I'm only describing parts of the assignment because the rest of its content is probably not relevant.
We're provided a basic use case where the actor "officer" (e.g. a police officer) communicates with the actor "correspondent" in order to report an emergency. The use case is expressed in the form:
Use case name: Report emergency
Participating actors: Officer, correspondent
Flow of events: ...
Preconditions: ...
Postconditions: ...
Then we're given three scenarios that "refine" the use case. I say "refine" because they turn it upside-down: they involve team leaders, respondents, incident handling -- nothing that was even mentioned in the flow of events described by the very basic use case given.
On top of these scenarios we're given ten "events" (i.e. they basically chunk the three scenarios into ten easily recognizable sentences). The assignment asks us to make one collaboration diagram for each of these events.
I know that collaboration diagrams describe the behaviour of the system, i.e. how the different parts of the systems interact with each other. So I thought that, even with these "creative" scenarios given, I could make something out of them. But then this part comes:
"Collaboration diagrams should make use of controller, boundary, domain objects and other new fabricated software objects (e.g. data structure components) necessary to completely handle each event."
And then:
"Your assignment will be evaluated in terms of the quality of your design (i.e. modularity: low coupling, high cohesion)"
My questions are:
1) Are scenarios supposed to present so much new information compared to the basic use case?
2) Do I just have to draw ten simple collaboration diagrams? Using which classes?
3) Why are things like low coupling, high cohesion, and domain objects mentioned? What do they have to do with all of this?
1) A scenario is a detailed description of a use case. There can be several scenarios based on constraints. The use case itself just describes the sunny day scenario in a condensed format. The meat is in the scenarios.
2) Classes related to the UC can be extracted by going through the scenario. You will find text passages indicating that certain functions need to be performed. Take these classes, place them in the collaboration diagram, and connect them with the right messages.
3) These are general design rules. Low coupling and high cohesion indicate good design (and their opposites indicate bad design). The domain objects are those at the center of the system, and the sum of all use cases will deal with the sum of all domain objects.
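To make point 2 a bit more concrete, here is a minimal, hypothetical sketch of the kind of boundary/controller/entity objects a collaboration diagram for the Report emergency use case might involve; all class names are invented for illustration:

    # Hypothetical objects for the "Report emergency" use case, showing a
    # boundary -> controller -> entity collaboration; names are invented.

    class EmergencyReport:                      # domain (entity) object
        def __init__(self, officer, description):
            self.officer = officer
            self.description = description

    class ReportEmergencyController:            # controller object
        def __init__(self, dispatcher):
            self.dispatcher = dispatcher        # collaborator that notifies the correspondent

        def report(self, officer, description):
            report = EmergencyReport(officer, description)
            self.dispatcher.notify_correspondent(report)
            return report

    class ReportEmergencyForm:                  # boundary object (UI)
        def __init__(self, controller):
            self.controller = controller

        def submit(self, officer, description):
            # the boundary only forwards the request; the logic stays in the
            # controller and domain objects (low coupling, high cohesion)
            return self.controller.report(officer, description)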
I'm new to UML and have come across sequence diagrams, and realized that there are two styles: distributed and centralized. Can anyone explain the differences to me?
Centralized control, with one participant doing most of the processing and the other participants there to supply data.
Example:
Distributed control, in which the processing is split among many participants, each one doing a little bit of the algorithm
Example:
Both styles have their strengths and weaknesses. Most people, particularly those new to objects, are more used to centralized control. In many ways, it’s simpler, as all the processing is in one place; with distributed control, in contrast, you have the sensation of chasing around the objects, trying to find the program.
Despite this, object bigots like me strongly prefer distributed control. One of the main goals of good design is to localize the effects of change. Data and behavior that accesses that data often change together. So putting the data and the behavior that uses it together in one place is the first rule of object-oriented design.
Furthermore, by distributing control, you create more opportunities for using polymorphism rather than using conditional logic. If the algorithms for product pricing are different for different types of product, the distributed control mechanism allows us to use subclasses of product to handle these variations.
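As a rough illustration of that last point (product types and pricing rules invented for the example), distributing the pricing behavior into Product subclasses replaces conditional logic with a polymorphic call:

    # Distributed control: each product subclass owns its own pricing rule,
    # so the order line simply asks the product for a price (no type checks).

    class Product:
        def __init__(self, base_price):
            self.base_price = base_price

        def price(self, quantity):
            return self.base_price * quantity

    class BulkDiscountProduct(Product):
        def price(self, quantity):
            # the discount rule lives with the product, not in the order
            total = self.base_price * quantity
            return total * 0.9 if quantity >= 10 else total

    class OrderLine:
        def __init__(self, product, quantity):
            self.product = product
            self.quantity = quantity

        def price(self):
            return self.product.price(self.quantity)   # polymorphic call

    order = [OrderLine(Product(5.0), 2), OrderLine(BulkDiscountProduct(3.0), 12)]
    total = sum(line.price() for line in order)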
I'll admit that I am still quite a newbie with DDD and even more so with CQRS. I also realize that DDD and/or CQRS might not be the right approach to every problem. Nevertheless, I like the principles but have some questions in the context of a current project.
The solution is a simulator that generates performance data based on the current configuration. Administrators can create and modify the specifications for simulations. Testers set some environmental conditions and run the simulator. The results are captured, aggregated and reported.
The solution consists of 3 component areas, each with their own use cases, domain logic and supporting data structure. As a result, a modular design seems appealing as a way to segregate logic and separate concerns.
The first area would be the administrative aspect which allows users to create and modify the specifications. This would be a CRUD heavy 'module'.
The second area would be for executing the simulations. The domain model would be similar to the first area but optimized for executing the simulation as opposed to providing a convenient model for editing.
The third area is reporting.
From this I believe that I have three Bounded Contexts, yes? I have three clear entry points into the application, three sets of domain logic and three different data models to support the domain logic.
My first instinct is to follow these lines and create three modules (assemblies) that encapsulate the domain layer for each area. Should I also have three separate databases? Maybe more than three to support write versus read?
I gather this may be preferred for CQRS but am not sure how to go about it. It appears to me that CQRS suggests a set of back-end processes that move data around. But if that's the case, and data persistence is cross-cutting (as DDD suggests), then doesn't my data access code need awareness of all of the domain objects? If so, then is there a benefit to having separate modules?
Finally, something I failed to mention earlier is that specifications are considered 'drafts' until published, which makes them available for simulation. My PublishingService needs to have knowledge of the domain model for both the first and second areas so that when it responds to the SpecificationPublishedEvent, it can read the specification, translate the model and persist it for execution. This makes me think I don't have three bounded contexts after all. Or am I missing something in my analysis?
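For concreteness, the publishing flow described above has roughly the following shape (a sketch only; the repositories and the translation step are placeholders, not a real implementation):

    # Illustrative shape of the PublishingService reacting to
    # SpecificationPublishedEvent; repository and translator details are
    # placeholders.

    class PublishingService:
        def __init__(self, spec_repository, simulation_spec_repository):
            self.spec_repository = spec_repository                        # editing-side model
            self.simulation_spec_repository = simulation_spec_repository  # execution-side model

        def handle_specification_published(self, event):
            draft = self.spec_repository.get(event.specification_id)
            runnable = self.translate(draft)        # editing model -> execution model
            self.simulation_spec_repository.save(runnable)

        def translate(self, draft):
            # flatten/denormalize the editable specification into whatever
            # structure the simulator needs
            ...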
You may have a modular UI for this, but I don't necessarily see three separate domains in what you are describing.
First off, in CQRS reporting is not directly a domain model concern; it is a facet of the separated Read Model, which takes on the responsibility of presenting the domain state in a form optimized for reporting.
Second, just because you have different things happening in the domain is not necessarily a reason to bound them away from each other. I'd take a read through the blue DDD book to get a bit better feel for what BCs look like.
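Regarding the first point, a sketch of what the read-model separation can look like (event fields and table names are invented): the read side is typically just a projection that denormalizes domain events into whatever shape the reports need.

    # Read-model projection: listens to domain events and keeps a denormalized
    # reporting store up to date; no domain logic here. Names are illustrative.

    class SimulationReportProjection:
        def __init__(self, reporting_store):
            self.reporting_store = reporting_store

        def on_simulation_data_produced(self, event):
            # reshape the event for reporting and upsert it into the store
            self.reporting_store.upsert(
                "simulation_results",
                key=event.simulation_id,
                values={
                    "specification_id": event.specification_id,
                    "samples": event.sample_count,
                    "last_updated": event.occurred_at,
                },
            )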
I don't really understand your domain well enough but I'll try to give some general suggestions.
Start with where you talked about your PublishingService. I see a Specification aggregate root which takes a few commands that probably look like CreateNewSpecification, UpdateSpecification and PublishSpecification.
The events look similar and probably feel redundant: SpecificationCreated, SpecificationUpdated, SpecificationPublished. Which kind of sucks, but a CRUD-heavy model doesn't have very interesting behaviors. I'd also suggest finding an automated way to deal with model/schema changes on this aggregate, which will be tedious if you don't use code generation, or handling the changes in a dynamic way that doesn't require you to build new events each time.
Also you might just consider not using event sourcing for such an aggregate root since it is so CRUD heavy.
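To illustrate the shape of such an aggregate (a sketch only, with a plain state-based model; whether to event-source it is the separate decision noted above, and the "no edits after publish" rule is an assumption):

    # Sketch of a CRUD-heavy Specification aggregate with the commands/events
    # named above. Persistence and event dispatch are intentionally left out.

    class Specification:
        def __init__(self, spec_id, definition):
            # CreateNewSpecification -> SpecificationCreated
            self.spec_id = spec_id
            self.definition = definition
            self.published = False
            self.pending_events = [("SpecificationCreated", spec_id)]

        def update(self, definition):
            # UpdateSpecification -> SpecificationUpdated
            if self.published:
                # assumption: a published specification is frozen
                raise ValueError("published specifications cannot be edited")
            self.definition = definition
            self.pending_events.append(("SpecificationUpdated", self.spec_id))

        def publish(self):
            # PublishSpecification -> SpecificationPublished
            self.published = True
            self.pending_events.append(("SpecificationPublished", self.spec_id))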
The second thing you describe seems to be about starting a simulation which will run based on a Specification and produce data during that simulation (I assume). An event driven architecture makes sense here to decouple updating the reporting data from the process that is producing the data. This has huge benefits if you are producing large amounts of data to process.
However, it doesn't sound like a Simulation is necessarily the kind of AR that would benefit from Event Sourcing either, for a couple of reasons (see the sketch after this list):
Simulation really takes only one Command which is something like StartSimulation
Simulation then produces events over its lifetime which represent what is happening internally with the simulation
Simulation doesn't seem to ever receive any other Commands that could depend on the current state of the Simulation
Simulation is not interacted with by multiple clients/users simultaneously and, as pointed out, it isn't really interacted with at all
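Put together, a state-based (non-event-sourced) Simulation that takes a single start command and simply publishes events as it runs might look roughly like this (the event bus, event names and payloads are illustrative):

    # Non-event-sourced Simulation: one StartSimulation entry point; it
    # publishes progress events that the reporting side consumes. Illustrative.

    class Simulation:
        def __init__(self, simulation_id, specification, event_bus):
            self.simulation_id = simulation_id
            self.specification = specification
            self.event_bus = event_bus

        def start(self, environment):
            # the only "command" the simulation takes
            for step_result in self.run_steps(environment):
                # events produced over the simulation's lifetime; subscribers
                # (e.g. the reporting projection) update themselves from these
                self.event_bus.publish("SimulationDataProduced", {
                    "simulation_id": self.simulation_id,
                    "result": step_result,
                })
            self.event_bus.publish("SimulationCompleted", {
                "simulation_id": self.simulation_id,
            })

        def run_steps(self, environment):
            # placeholder for the actual simulation algorithm driven by the
            # specification and environmental conditions
            yield from ()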
In general, domain modeling is very specific to each individual project so it's hard to give you all the information you need to build your domain model. It will come as a result of spending a great deal of time trying to understand your user's needs and the problem they are trying to solve with the software. It likely will go through multiple refinements as you develop insights into their process.