I am studying a UML sequence diagram and came across method invocation. I noticed that there are two ways to invoke behavior in the Unified Modeling Language (UML): a signal and a message. I don't know how to choose between them, or based on what. When should I use a message and when a signal? I think this is an important design decision and should be made carefully.
It actually is, but I think the terminology you use (message and signal) is not very accurate. Any communication between two objects in a sequence diagram is considered a message.
However, there are two basic types of messages - synchronous and asynchronous.
A usual method invocation, where the invoker blocks until the method execution is over, is a synchronous invocation: a synchronous message. The invoker receives the return value from the invoked method and then continues its own execution.
Consequently, there is only one thread of execution.
There is also asynchronous communication, where an object sends a message to another object and immediately continues its own execution without waiting. Examples are sending an SMS message, sending a UDP packet, etc.
Here, there are two independent threads of execution.
By a signal, an asynchronous message send is usually meant.
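The distinction above can be sketched in code. This is a minimal illustration, not tied to any UML tool: the direct call is the synchronous message (one thread of execution, invoker blocked until the return value arrives), while `std::async` stands in for the asynchronous send (the caller keeps running on its own thread).

```cpp
#include <future>
#include <iostream>

// The invoked behavior; doubling is just a stand-in operation.
int compute(int x) { return x * 2; }

void demo() {
    // Synchronous message: the caller blocks until compute() returns.
    // There is only one thread of execution here.
    int r = compute(21);
    std::cout << "sync result: " << r << "\n";

    // Asynchronous message: fire the work and continue immediately.
    // Now there are two independent threads of execution.
    auto fut = std::async(std::launch::async, compute, 21);
    std::cout << "caller keeps running...\n";
    std::cout << "async result: " << fut.get() << "\n";  // rendezvous later
}
```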
Kirill Fakhroutdinov's page http://www.uml-diagrams.org/sequence-diagrams.html explains messages as follows:
Messages by Action Type
..A message reflects either an operation call and start of execution or a sending and reception of a signal...
Besides the synchronous/asynchronous nature of messages, it also points to the "send signal action" as used in activity diagrams:
Asynchronous Signal
..Asynchronous signal message corresponds to asynchronous send signal action..
To me, an important distinction between modeling messages vs. signals is the unicast/multicast (broadcast) semantics. A signal can specifically be sent from one place (with all necessary arguments packed) and received at multiple places.
Sequence diagrams allow modeling of this multicast behavior using the found message and lost message concepts.
(I'm not 100% sure but I believe I'm close)
EDIT: adding a reference to a more formal explanation backing my argument that signals have something to do with unicast/multicast (broadcast), in response to a comment by @Aleks.
The book "The Unified Modeling Language Reference Manual" by James Rumbaugh, Ivar Jacobson, and Grady Booch, Copyright © 1999 by Addison Wesley Longman, Inc., explains the difference between messages and signals, for example, in the following words:
Message..Semantics
..A message is the sending of a signal from one object (the sender) to one or more other objects (the receivers), or it is the call of an operation on one object (the receiver) by another object (the sender or caller). The implementation of a message may take various forms...
Signal event
..
A signal has an explicit list of parameters. It is explicitly sent by an object to another object or set of objects. A general broadcast of an event can be regarded as the sending of a signal to the set of all objects, although..
..
Signals are explicit means by which objects may communicate with each other asynchronously. To perform synchronous communication, two asynchronous signals must be used, one in each direction of communication..
EDIT: adding the three different message notations as they are visualized by Enterprise Architect.
Note that due to the asynchronous and multicast nature of signals (as mentioned above), the corresponding notation does not include the "Return Value" part.
Related
I am trying to model an application which runs multiple concurrent flows.
In this situation multiple threads can create events and store them in a buffer which are then collected and displayed by another thread. The receiving thread is supposed to block and wait for incoming events.
I have currently modelled it like this:
This example uses object flows. However, I am not sure if this is the correct way to model this type of inter-thread communication.
The other option I was looking at is using signals but I'm not sure about that either.
Any help would be appreciated.
Every activity requires all tokens to be offered before it can start. You will have to use a buffer node as a queue.
Object flows capture inter thread communication well.
You could also use signals, if you want to be more specific and your system does in fact use messages.
There is one problem in your diagram though: the Display Event action consumes all offered control tokens and one object token on each invocation. I can't tell from your diagram, but probably there is only one control token. That means the action will only run once. The solution is to delete the control flow. The action then starts for each incoming object token.
Each output pin acts as a local buffer. If tokens are generated faster than the events can be displayed, tokens will potentially pile up in multiple pins. In this case it is undefined which pin will be the source of the next token. This is not necessarily a problem, but if tokens shall be processed in chronological order, you need to use a central buffer. The symbol is a rectangle with the keyword «central buffer».
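If it helps to see the runtime counterpart of that diagram element, here is a minimal sketch of a central buffer as a thread-safe FIFO queue: multiple producer threads push events, and the display thread blocks until an event is available, matching the "receiving thread waits for incoming events" requirement. The class name is just illustrative.

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>

// The «central buffer» node, modeled as a thread-safe FIFO queue.
// Producers push events; the consumer blocks until one is available,
// and events come out in chronological (insertion) order.
template <typename T>
class CentralBuffer {
    std::queue<T> q_;
    std::mutex m_;
    std::condition_variable cv_;
public:
    void push(T v) {
        { std::lock_guard<std::mutex> lk(m_); q_.push(std::move(v)); }
        cv_.notify_one();
    }
    T pop() {  // blocks, like the waiting display thread
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return !q_.empty(); });
        T v = std::move(q_.front());
        q_.pop();
        return v;
    }
};
```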
I have an application in which the worker thread needs to call a method on main thread.
I am planning to send a custom message via win32api.PostThreadMessage (with message WM_USER+X), and when this message is received, some function must get executed on the main thread. What I am looking for is a way to register a method for the corresponding WM_USER+X message.
Look at the RegisterWindowMessage function; it does pretty much exactly what you are after (it provides a message number that should not collide with any other). The one downside is that the message number is then not a constant but varies from run to run of your program. This makes the message loop somewhat more complicated, but it is well worth it for this sort of thing.
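Since the message number is only known at runtime, the loop can't switch on a constant; one common workaround is a map from message id to handler. Below is a platform-neutral sketch of just that dispatch side (the class and method names are hypothetical, and the real loop would still call the Win32 GetMessage/DispatchMessage machinery around it):

```cpp
#include <functional>
#include <unordered_map>

// Handler signature loosely mirroring a Win32 message's wParam/lParam pair.
using Handler = std::function<void(unsigned long wParam, long lParam)>;

class MessageDispatcher {
    std::unordered_map<unsigned, Handler> handlers_;
public:
    // Call this once with the runtime id from RegisterWindowMessage.
    void register_handler(unsigned msg, Handler h) {
        handlers_[msg] = std::move(h);
    }
    // Call this from the message loop; returns false for unknown
    // messages so they can fall through to default processing.
    bool dispatch(unsigned msg, unsigned long wp, long lp) {
        auto it = handlers_.find(msg);
        if (it == handlers_.end()) return false;
        it->second(wp, lp);
        return true;
    }
};
```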
I have been reading about developing an AUTOSAR software component. I'm still confused about WaitPoints and Events in the internal behavior. What are the main differences between a WaitPoint and an Event in an AUTOSAR software component? It would also be great if you could show me a sample of C code for them.
An Event in AUTOSAR has two different meanings regarding software components: either it triggers a RunnableEntity, or it resolves a WaitPoint. If a RunnableEntity is triggered, e.g. by a DataReceivedEvent, the Rte will activate your RunnableEntity, and you can then call Rte_Read() to read the data. The second case is when you define a WaitPoint for that RunnableEntity and let the DataReceivedEvent resolve it. If you then call Rte_Receive(), the function will block until new data is received.
Usually, such a function is implemented with an OSEK WaitEvent(), and when the Rte receives data, it uses the OSEK SetEvent() function to wake up the task that called WaitEvent().
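This is not real AUTOSAR code (the actual implementation is generated by the Rte), but as a rough sketch of what a blocking Rte_Receive() does internally, a condition variable can play the role of the OSEK WaitEvent()/SetEvent() pair:

```cpp
#include <condition_variable>
#include <mutex>
#include <optional>

// Illustrative only: mimics a WaitPoint resolved by a DataReceivedEvent.
class WaitPoint {
    std::mutex m_;
    std::condition_variable cv_;
    std::optional<int> data_;
public:
    // Rte side: data arrives, resolve the WaitPoint (like OSEK SetEvent).
    void on_data_received(int value) {
        { std::lock_guard<std::mutex> lk(m_); data_ = value; }
        cv_.notify_one();
    }
    // Runnable side: blocking receive, like Rte_Receive with a WaitPoint
    // (internally an OSEK WaitEvent on the caller's task).
    int receive_blocking() {
        std::unique_lock<std::mutex> lk(m_);
        cv_.wait(lk, [this] { return data_.has_value(); });
        int v = *data_;
        data_.reset();
        return v;
    }
};
```

In the triggered-Runnable case there is no such blocking call at all: the Rte simply activates the Runnable when the event fires, and the Runnable reads the data and returns.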
Scala Actors: will messages passed from one actor to another within the same process always be processed in the order they were sent?
Yep. For two actors this condition does hold.
the messages are guaranteed to be ordered for a given pair of sender and receiver actors. If an actor A sends messages X and Y in that order, the actor B will receive no messages, only the message X, only the message Y, or the message X, followed by the message Y.
(Learning Concurrent Programming in Scala, page 270)
Akka guarantees that messages will be received in the same order they were sent, assuming they are delivered at all, but it does not guarantee message delivery. The Akka documentation is quite clear on this:
"for a given pair of actors, messages sent directly from the first to the second will not be received out-of-order"
Also, while messages sent via an intermediate actor are normally not guaranteed to be delivered in the same order to the final destination actor, this limitation is "eliminated under certain conditions" when running in the context of a single JVM. As you are asking specifically about in-process messaging this may be relevant to you although the documentation advises not to rely on this behaviour.
There are examples and further clarification in the Akka documentation here: http://doc.akka.io/docs/akka/snapshot/general/message-delivery-reliability.html#The_Rules_for_In-JVM__Local__Message_Sends
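The intuition behind the per-pair guarantee is that each actor has a single FIFO mailbox: a sender enqueues X before Y, so the receiver dequeues X before Y, whatever other senders interleave in between. A minimal sketch of that mailbox (names are illustrative, not an Akka API):

```cpp
#include <deque>
#include <mutex>
#include <string>
#include <utility>
#include <vector>

// One receiver's mailbox: a single FIFO shared by all senders.
struct Mailbox {
    std::deque<std::pair<std::string, std::string>> q;  // (sender, payload)
    std::mutex m;
    void send(std::string from, std::string msg) {
        std::lock_guard<std::mutex> lk(m);
        q.emplace_back(std::move(from), std::move(msg));
    }
    // Messages from one sender, in the order the receiver will see them.
    std::vector<std::string> messages_from(const std::string& sender) {
        std::lock_guard<std::mutex> lk(m);
        std::vector<std::string> out;
        for (auto& p : q)
            if (p.first == sender) out.push_back(p.second);
        return out;  // enqueue order == that sender's send order
    }
};
```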
I am redesigning the messaging system for my app to use intel threading building blocks and am stumped trying to decide between two possible approaches.
Basically, I have a sequence of message objects, and for each message type, a sequence of handlers. For each message object, I apply each handler registered for that message object's type.
The sequential version would be something like this (pseudocode):
    for each message in message_sequence                      <- SEQUENTIAL
        for each handler in (handler_table for message.type)
            apply handler to message                          <- SEQUENTIAL
The first approach which I am considering processes the message objects in turn (sequentially) and applies the handlers concurrently.
Pros:
predictable ordering of messages (ie, we are guaranteed a FIFO processing order)
(potentially) lower latency of processing each message
Cons:
there are more processing resources available than handlers for a single message type (poor parallelization)
poor use of the processor cache, since the message object needs to be copied for each handler to use
large overhead for small handlers
The pseudocode of this approach would be as follows:
    for each message in message_sequence                      <- SEQUENTIAL
        parallel_for each handler in (handler_table for message.type)
            apply handler to message                          <- PARALLEL
The second approach is to process the messages in parallel and apply the handlers to each message sequentially.
Pros:
better use of processor cache (keeps the message object local to all handlers which will use it)
small handlers don't impose as much overhead (as long as there are other handlers also to be run)
more messages are expected than there are handlers, so the potential for parallelism is greater
Cons:
Unpredictable ordering - if message A is sent before message B, they may both be processed at the same time, or B may finish processing before all of A's handlers are finished (order is non-deterministic)
The pseudocode is as follows:
    parallel_for each message in message_sequence             <- PARALLEL
        for each handler in (handler_table for message.type)
            apply handler to message                          <- SEQUENTIAL
The second approach has more advantages than the first, but non-deterministic ordering is a big disadvantage.
Which approach would you choose and why? Are there any other approaches I should consider (besides the obvious third approach: parallel messages and parallel handlers, which has the disadvantages of both and no real redeeming factors as far as I can tell)?
Thanks!
EDIT:
I think what I'll do is use #2 by default, but allow a "conversation tag" to be attached to each message. Any messages with the same tag are ordered and handled sequentially relative to their conversation. Handlers are passed the conversation tag alongside the message, so they may continue the conversation if they need to. Something like this:
    Conversation c = new_conversation()
    send_message(a, c)
    ...
    send_message(b, c)
    ...
    send_message(x)

    handler foo(msg, conv)
        send_message(z, c)
    ...
    register_handler(foo, a.type)
a is handled before b, which is handled before z. x can be handled in parallel with a, b, and z. Once all messages in a conversation have been handled, the conversation is destroyed.
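A minimal sketch of that conversation-tag idea (all names hypothetical, and the parallel execution of independent units is left out for brevity): messages sharing a tag are queued together and handled strictly in post order, while untagged messages are independent units that could each run in parallel.

```cpp
#include <functional>
#include <map>
#include <string>
#include <vector>

struct Message { std::string payload; std::string tag; };

class ConversationDispatcher {
    std::map<std::string, std::vector<Message>> conversations_;
    std::vector<Message> untagged_;
public:
    void post(Message m) {
        if (m.tag.empty()) untagged_.push_back(std::move(m));
        else conversations_[m.tag].push_back(std::move(m));
    }
    // Each conversation (and each untagged message) is an independent
    // unit of work; within one conversation the handler sees messages
    // strictly in the order they were posted.
    void run(const std::function<void(const Message&)>& handler) {
        for (auto& kv : conversations_)
            for (auto& m : kv.second) handler(m);  // sequential per tag
        for (auto& m : untagged_) handler(m);      // order-free
    }
};
```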
I'd say do something different: don't send work to the threads; have the threads pull work when they finish their previous work.
Maintain a fixed amount of worker threads (the optimal amount equal to the number of CPU cores in the system) and have each of them pull sequentially the next task to do from the global queue after it finishes with the previous one. Obviously, you would need to keep track of dependencies between messages to defer handling of a message until its dependencies are fully handled.
This could be done with very small synchronization overhead, possibly using only atomic operations, with no heavy primitives like mutexes or semaphores.
Also, if you pass a message to each handler by reference, instead of making a copy, having the same message handled simultaneously by different handlers on different CPU cores can actually improve cache performance, as higher levels of cache (usually from L2 upwards) are often shared between CPU cores - so when one handler reads a message into the cache, the other handler on the second core will have this message already in L2. So think carefully - do you really need to copy the messages?
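A sketch of that pull model, under the stated assumptions (dependency tracking between messages omitted): workers claim the next task index with a single atomic fetch_add, so there is no per-task lock and no push-to-thread scheduling.

```cpp
#include <atomic>
#include <functional>
#include <thread>
#include <vector>

// Workers pull the next task from a shared sequence; the only
// synchronization on the hot path is one atomic increment.
void run_pulling_pool(std::vector<std::function<void()>>& tasks,
                      unsigned num_workers) {
    std::atomic<size_t> next{0};
    std::vector<std::thread> workers;
    for (unsigned w = 0; w < num_workers; ++w) {
        workers.emplace_back([&] {
            for (;;) {
                size_t i = next.fetch_add(1);   // claim the next unit of work
                if (i >= tasks.size()) return;  // queue drained, retire
                tasks[i]();
            }
        });
    }
    for (auto& t : workers) t.join();
}
```

In a long-running system the task vector would be replaced by a concurrent queue that producers keep feeding, but the claim-by-atomic idea is the same.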
If possible, I would go for number two with some tweaks. Do you really need every message to be in order? I find that to be an unusual case. Some messages just need to be handled as soon as possible, and some messages need to be processed before another message, but not before every message.
If there are some messages that have to be in order, then mark them in some way. You can mark them with a conversation code that lets the processor know that a message must be processed in order relative to the other messages in that conversation. Then you can process all conversation-less messages and one message from each conversation concurrently.
Give your design a good look and make sure that only the messages that need to be ordered are.
I suppose it comes down to whether or not the order is important. If the order is unimportant, you can go for method 2. If the order is important, go for method 1. Depending on what your application is supposed to do, you can still go for method 2 but use a sequence number so all the messages are processed in the correct order (unless of course it is the processing part you are trying to optimize).
The first method also has unpredictable ordering. The processing of message 1 on thread 1 could take very long, making it possible that messages 2, 3, and 4 have long been processed before it finishes.
This would tip the balance toward method 2.
Edit:
I see what you mean.
However, why would you run the handlers sequentially in method 2? In method 1 the ordering doesn't matter, and you're fine with that.
E.g., method 3: handle both the messages and the handlers in parallel.
Of course, here also, the ordering is not guaranteed.
Given that the handlers produce some result, you could store the results in an ordered list, restoring the ordering eventually.