Could anyone explain briefly what the signal filtering feature of the AUTOSAR COM module does? I can't find any concrete explanation in the AUTOSAR_SWS_COM specification.
Actually, AUTOSAR_SWS_Com chapter "7.2.4 Filtering" pretty much describes exactly that.
On the transmission side, the filter specifies the transmission mode conditions that trigger a PDU to be transmitted -> think of old CAN event or event-periodic messages: "OnChange", "OnWrite", "CyclicOnActive", ...
On the receiver side, the filter masks are used e.g. to discard the processing of certain signals/signal groups within a PDU.
COM signal filters for transmission PDUs are used to select the transmission mode, ComTxModeTrue or ComTxModeFalse: the value of the signal with the filter attached is evaluated, and the result determines which of the two modes is selected.
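As an illustration, mask-based filtering boils down to a masked compare of the signal value. A minimal sketch, assuming a filter configured with ComFilterAlgorithm MASKED_NEW_EQUALS_X (the function name and types below are illustrative, not taken from the AUTOSAR sources):

#include <stdbool.h>
#include <stdint.h>

/* The new signal value is ANDed with the configured ComFilterMask and
   compared against the configured ComFilterX. The signal "passes" the
   filter (is processed / selects ComTxModeTrue) only when they match. */
bool filter_masked_new_equals_x(uint32_t newValue,
                                uint32_t mask,   /* ComFilterMask */
                                uint32_t x)      /* ComFilterX    */
{
    return (newValue & mask) == x;
}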
By using this feature, the application can control the PDU flow.
[Figure: filter modes]
I am trying to model an application which runs multiple concurrent flows.
In this situation, multiple threads can create events and store them in a buffer, from which they are then collected and displayed by another thread. The receiving thread is supposed to block and wait for incoming events.
I have currently modelled it like this:
This example uses object flows. However, I am not sure whether this is the correct way to model this type of inter-thread communication.
The other option I was looking at is using signals, but I'm not sure about that either.
Any help would be appreciated.
Every action requires all of its tokens to be offered before it can start. You will have to use a buffer node as a queue.
Object flows capture inter-thread communication well.
You could also use signals if you want to be more specific and your system in fact uses messages.
There is one problem in your diagram though: the Display Event action consumes all offered control tokens and one object token on each invocation. I can't tell from your diagram, but probably there is only one control token. That means the action will only run once. The solution is to delete the control flow; the action then starts for each incoming object token.
Each output pin acts as a local buffer. If tokens are generated faster than the events can be displayed, tokens will potentially pile up in multiple pins. In this case it is undefined which pin will be the source of the next token. This is not necessarily a problem, but if tokens shall be processed in chronological order, you need to use a central buffer. The symbol is a rectangle with the keyword «central buffer».
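To connect the «central buffer» back to the implementation side of the question: in code it typically corresponds to a mutex-protected FIFO that the consumer thread blocks on. A minimal sketch in C with POSIX threads (names are illustrative; initialization and error handling omitted):

#include <pthread.h>

#define QUEUE_SIZE 64

/* A mutex-protected FIFO; the consuming (Display Event) thread blocks
   on not_empty until some producer thread enqueues an event. */
struct event_queue {
    int             items[QUEUE_SIZE];
    int             head, tail, count;
    pthread_mutex_t lock;
    pthread_cond_t  not_empty;
};

/* Called by the producer threads. */
void enqueue(struct event_queue *q, int ev)
{
    pthread_mutex_lock(&q->lock);
    if (q->count < QUEUE_SIZE) {               /* drop events when full */
        q->items[q->tail] = ev;
        q->tail = (q->tail + 1) % QUEUE_SIZE;
        q->count++;
        pthread_cond_signal(&q->not_empty);
    }
    pthread_mutex_unlock(&q->lock);
}

/* Called by the consumer thread; blocks until an event arrives,
   preserving chronological order. */
int dequeue(struct event_queue *q)
{
    pthread_mutex_lock(&q->lock);
    while (q->count == 0)
        pthread_cond_wait(&q->not_empty, &q->lock);
    int ev = q->items[q->head];
    q->head = (q->head + 1) % QUEUE_SIZE;
    q->count--;
    pthread_mutex_unlock(&q->lock);
    return ev;
}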
Is the UPnP Stop command supposed to stop a renderer immediately, or is it supposed to empty the buffer first and then stop the renderer?
Assuming you're asking about the Stop action on the AVTransport service, the precise behaviour is undefined.
UPnP-av-AVTransport-v1-Service-20020625.pdf from the UPnP Forum docs bundle says the following about Stop:
This action stops the progression of the current resource that is associated with the specified instance. Additionally, it is recommended that the “output of the device” (defined below) should change to something other than the current snippet of resource. Although the exact nature of this change varies from device to device, a common behavior is to immediately cease all “output” from the device. Nevertheless, the exact behavior is defined by the manufacturer of the device.
There is no specification for how quickly progression of the current resource stops. This means that it is possible (and valid) for a device to play some/all of its buffered content before stopping.
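Note that the action itself carries nothing that could request immediate versus deferred stopping; its only argument is the transport instance ID. On the wire, a control point's Stop invocation is roughly the following SOAP call (POSTed to the device-specific AVTransport control URL with SOAPACTION "urn:schemas-upnp-org:service:AVTransport:1#Stop"):

<s:Envelope xmlns:s="http://schemas.xmlsoap.org/soap/envelope/"
            s:encodingStyle="http://schemas.xmlsoap.org/soap/encoding/">
  <s:Body>
    <u:Stop xmlns:u="urn:schemas-upnp-org:service:AVTransport:1">
      <InstanceID>0</InstanceID>
    </u:Stop>
  </s:Body>
</s:Envelope>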
If you are implementing a renderer, you should probably make reasonable efforts to discard at least some of your buffered content when Stop is invoked. If you are writing a control point, you can't make any assumptions in general. (You probably also don't care exactly how a device implements the action though.)
There may be further guarantees offered if you limit yourself to dealing with devices which are DLNA compatible. DLNA specs are not freely available so I can't say whether they mandate any particular interpretation of the AVTransport spec.
I am studying UML sequence diagrams and I came across method invocation. I have noticed that there are two ways to invoke a behavior in the Unified Modeling Language (UML): a signal and a message. But I don't know how to decide between them, and based on what. I mean: when should I use a message and when a signal? I think this is a very important design decision and should be chosen carefully.
It actually is, but I think the terminology that you use is not very accurate (message and signal). Every kind of communication between two objects in a sequence diagram is considered to be a message.
However, there are two basic types of messages - synchronous and asynchronous.
A usual method invocation, where the invoker waits, blocked, until the method execution is over, is a synchronous invocation - a synchronous message. The invoker receives the return value from the invoked method and continues its own execution.
In consequence, there is only one thread of execution.
There is also asynchronous communication, where an object somehow sends a message to another object and immediately continues its execution without waiting. Examples are an SMS message, a UDP packet send, etc.
Here, there are two independent threads of execution.
By a signal, an asynchronous message send is often meant.
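To make the distinction concrete outside of UML, here is a small illustrative C sketch (all names are made up) where a plain function call plays the role of the synchronous message and a thread hand-off plays the role of the asynchronous signal:

#include <pthread.h>
#include <stdio.h>

static int compute(int x) { return x * 2; }   /* the "operation" */

static void *on_signal(void *arg)             /* the signal receiver */
{
    printf("received %d\n", *(int *)arg);
    return NULL;
}

int main(void)
{
    /* Synchronous message: the caller blocks until compute() returns
       and receives its return value. One thread of execution. */
    int value = compute(21);

    /* Asynchronous message ("signal"): the caller hands off the data
       and continues immediately; the receiver runs in its own thread. */
    pthread_t t;
    pthread_create(&t, NULL, on_signal, &value);
    /* ... the caller keeps working here without waiting ... */

    pthread_join(t, NULL);  /* join only so main doesn't exit early */
    return 0;
}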
Kirill Fakhroutdinov's page http://www.uml-diagrams.org/sequence-diagrams.html explains messages as follows:
Messages by Action Type
..A message reflects either an operation call and start of execution or a sending and reception of a signal...
Besides the synchronous/asynchronous nature of messages, it also points to the "send signal action" as used in activity diagrams:
Asynchronous Signal
..Asynchronous signal message corresponds to asynchronous send signal action..
To me, an important distinction in modeling messages vs. signals is the unicast/multicast (broadcast) semantics. A signal specifically can be sent from one place (with all necessary arguments packed) and received at multiple places.
Sequence diagrams allow modeling of the multicast behavior using the found message and lost message concepts.
(I'm not 100% sure but I believe I'm close)
EDIT: adding a reference to a more formal explanation backing my argument that signals have something to do with unicast/multicast (broadcast), as a response to the comment by @Aleks.
The book "The Unified Modeling Language Reference Manual" by James Rumbaugh, Ivar Jacobson, Grady Booch, Copyright © 1999 by Addison Wesley Longman, Inc. explains the difference between messages and signals e.g. using following words
Message..Semantics
..A message is the sending of a signal from one object (the sender) to one or more other objects (the receivers), or it is the call of an operation on one object (the receiver) by another object (the sender or caller). The implementation of a message may take various forms...
Signal event
..
A signal has an explicit list of parameters. It is explicitly sent by an object to another object or set of objects. A general broadcast of an event can be regarded as the sending of a signal to the set of all objects, although..
..
Signals are explicit means by which objects may communicate with each other asynchronously. To perform synchronous communication, two asynchronous signals must be used, one in each direction of communication..
EDIT: adding the 3 different message notations as they are visualized by Enterprise Architect
Note that due to the asynchronous and multicast nature of signals (as mentioned above), the corresponding notation does not include the "Return Value" part.
uevents are sent from kernel space to user space through a netlink socket.
In the kernel, something must trigger the uevent.
I guess there are two possibilities:
Hardware interrupt - once a hardware interrupt happens, the kernel sends an event to user space to signal that something has happened.
Software polling - there is always a daemon checking the file system to see if anything has changed; if so, it passes this info to the upper layer.
Could anyone provide feedback?
Thanks
I can't agree with you about polling. uevent is event-based, so there is no polling.
Triggering a uevent happens in many cases, so I would rather start with figuring out what uevent types exist.
A little searching and here you go - in include/linux/kobject.h:
enum kobject_action {
KOBJ_ADD,
KOBJ_REMOVE,
KOBJ_CHANGE,
KOBJ_MOVE,
KOBJ_ONLINE,
KOBJ_OFFLINE,
KOBJ_MAX
};
So it's
Add event
Remove event
Change event
Move event
Online event
Offline event
KOBJ_MAX is special and marks the end of the enum.
There are two functions that actually send uevents - kobject_uevent and kobject_uevent_env. These functions are called with one of the actions listed above.
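For example, a driver that holds a struct device might notify user space of a state change roughly like this (a sketch; the helper function and the extra environment string are made up for illustration):

#include <linux/device.h>
#include <linux/kobject.h>

/* Hypothetical helper: announce that the device's state changed. */
static void notify_state_change(struct device *dev)
{
    /* Optional extra KEY=value pairs delivered along with the event. */
    char *envp[] = { "STATE=ready", NULL };

    kobject_uevent_env(&dev->kobj, KOBJ_CHANGE, envp);

    /* Without extra environment data it would just be:
     *   kobject_uevent(&dev->kobj, KOBJ_CHANGE);
     */
}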
Finally, to answer your question: there are no predefined cases that will trigger a uevent. If you search for calls of kobject_uevent and kobject_uevent_env, you will see that this happens in various callbacks in different, unrelated kernel subsystems.
uevent is a kernel facility that unifies notifications from various unrelated drivers, so I think there is no well-known list of things that will trigger a uevent.
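On the receiving side (the netlink socket you mentioned), a minimal user-space listener looks roughly like this; error handling is omitted and the buffer size is arbitrary:

#include <linux/netlink.h>
#include <stdio.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
    struct sockaddr_nl addr = {
        .nl_family = AF_NETLINK,
        .nl_groups = 1,              /* kernel uevent multicast group */
    };
    int fd = socket(AF_NETLINK, SOCK_RAW, NETLINK_KOBJECT_UEVENT);
    bind(fd, (struct sockaddr *)&addr, sizeof(addr));

    char buf[4096];
    for (;;) {
        ssize_t n = recv(fd, buf, sizeof(buf) - 1, 0);
        if (n <= 0)
            break;
        buf[n] = '\0';
        printf("%s\n", buf);         /* e.g. "change@/devices/..." */
    }
    close(fd);
    return 0;
}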
I would like to hook into, intercept, and generate keyboard (make/break) events under Linux before they get delivered to any application. More precisely, I want to detect patterns in the key event stream and be able to discard/insert events into the stream depending on the detected patterns.
I've seen some related questions on SO, but:
either they only deal with how to get at the key events (key loggers etc.), and not with how to manipulate their propagation (they only listen, but don't intercept/generate),
or they use passive/active grabs in X (read more on that below).
A Small DSL
I explain the problem below, but to make it a bit more compact and understandable, first a small DSL definition.
A_: for make (press) key A
A^: for break (release) key A
A^->[C_,C^,U_,U^]: on A^ send a make/break combo for C and then U further down the processing chain (and finally to the application). If there is no -> then there's nothing sent (but internal state might be modified to detect subsequent events).
$X: execute an arbitrary action. This can be sending some configurable key event sequence (maybe something like C-x C-s for emacs), or execute a function. If I can only send key events, that would be enough, as I can then further process these in a window manager depending on which application is active.
Problem Description
Ok, so with this notation, here are the patterns I want to detect and what events I want to pass on down the processing chain.
A_, A^->[A_,A^]: expl. see above, note that the send happens on A^.
A_, B_, A^->[A_,A^], B^->[B_,B^]: basically the same as 1. but overlapping events don't change the processing flow.
A_, B_, B^->[$X], A^: if there was a complete make/break of a key (B) while another key was held (A), X is executed (see above), and the break of A is discarded.
(It's in principle a simple state machine implemented over key events, which can generate (multiple) key events as output; see the sketch below.)
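To make that concrete, here is a toy sketch of such a state machine covering patterns 1 and 3 (key identifiers and the emit/run-action hooks are placeholders; pattern 2 is omitted for brevity):

#include <stdbool.h>
#include <stdio.h>

/* Placeholder hooks: in the real system these would inject events
 * into the downstream stream / run the configured $X action. */
static void emit_make_break(int key) { printf("%c_ %c^\n", key, key); }
static void run_action_X(void)       { printf("$X\n"); }

enum state { IDLE, A_HELD, SAW_COMBO };
static enum state st = IDLE;

/* Feed every incoming event here; is_make == true for a make (press). */
static void on_key(int key, bool is_make)
{
    switch (st) {
    case IDLE:
        if (key == 'A' && is_make)  { st = A_HELD; return; }  /* swallow A_ */
        break;
    case A_HELD:
        if (key == 'B' && is_make)  return;                   /* swallow B_ */
        if (key == 'B' && !is_make) { run_action_X(); st = SAW_COMBO; return; }
        if (key == 'A' && !is_make) {                         /* plain A tap */
            emit_make_break('A'); st = IDLE; return;
        }
        break;
    case SAW_COMBO:
        if (key == 'A' && !is_make) { st = IDLE; return; }    /* discard A^ */
        break;
    }
    /* anything else: pass through unchanged (omitted) */
}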
Additional Notes
The solution has to work at typing speed.
Consumers of the modified key event stream run under X on Linux (consoles, browsers, editors, etc.).
Only keyboard events influence the processing (no mouse etc.)
Matching can happen on keysyms (a bit easier), or keycodes (a bit harder). With the latter, I will just have to read in the mapping to translate from code to keysym.
If possible, I'd prefer a solution that works with both USB keyboards as well as inside a virtual machine (could be a problem if working at the driver layer, other layers should be ok).
I'm pretty open about the implementation language.
Possible Solutions and Questions
So the basic question is how to implement this.
I have implemented a solution in a window manager using passive grabs (XGrabKey) and XSendEvent. Unfortunately, passive grabs don't work in this case, as they don't correctly capture B^ in the second pattern above. The reason is that the converted grab ends on A^ and is not continued to B^. A new grab is converted to capture B if it is still held, but only after ~1 sec; otherwise a plain B^ is sent to the application. This can be verified with xev.
I could convert my implementation to use an active grab (XGrabKeyboard), but I'm not sure about the effect on other applications if the window manager holds an active grab on the keyboard all the time. The X documentation refers to active grabs as intrusive and designed for short-term use. If someone has experience with this and there are no major drawbacks to long-term active grabs, then I'd consider this a solution.
I'm willing to look at other layers of key event processing besides window managers (which operate as X clients). Keyboard drivers or mappings are a possibility, as long as I can solve the above problem with them. This also implies that the solution doesn't have to be a separate application; I'm perfectly fine with having a driver or kernel module do this for me. Be aware, though, that I have never done any kernel or driver programming, so I would appreciate some good resources.
Thanks for any pointers!
Use XInput2 to make the device (keyboard) floating, then monitor KeyPress and KeyRelease events on the device, and use XTest to regenerate the KeyPress and KeyRelease events.
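A minimal sketch of the re-injection half of that approach, assuming the keyboard has already been floated with XIChangeHierarchy and its key events are being observed via XISelectEvents:

#include <X11/Xlib.h>
#include <X11/extensions/XTest.h>

/* Re-inject a make/break pair for keycode kc once the pattern matcher
 * decides to pass the key through to the applications. */
static void send_make_break(Display *dpy, unsigned int kc)
{
    XTestFakeKeyEvent(dpy, kc, True,  0);   /* make  (press)   */
    XTestFakeKeyEvent(dpy, kc, False, 0);   /* break (release) */
    XFlush(dpy);
}

Because the floated device no longer feeds the core keyboard, applications only ever see the events you re-inject this way, which gives you the discard/insert control described in the question.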