AUTOSAR: which module is responsible for handling the timing of messages to the RTE?

Can someone explain which module is responsible for handling the timing of messages to the RTE?
I am really new to AUTOSAR, and I am confused about this point.

If you mean the transmission and reception of I-PDUs, e.g. over CAN, the Com and LdCom modules handle them. Com also takes care of the Periodic, Triggered, or Mixed transmission modes. Com knows which ISignalGroups and ISignals are mapped to which ISignalIPdus, and which ISignals trigger send events. The same applies to reception, including the mapping to SWC ports and the triggering of e.g. DataReceivedEvent, DataReceiveErrorEvent, ...
The information is derived from the SystemDescription used in the configuration.
Maybe you should take a closer look at the AUTOSAR_EXP_LayeredSoftwareArchitecture.pdf file.


Is python3's BufferedProtocol an abstraction over TCP? Or is it so low-level that I have to implement the TCP things too?

I am referring to this: https://docs.python.org/3/library/asyncio-protocol.html#asyncio.BufferedProtocol
I haven't seen the answer to this question documented anywhere and I want to know the answer in advance of writing any code.
The docs seem to imply that it is a modification of asyncio.Protocol (for TCP), but since TCP is not mentioned for BufferedProtocol, I'm concerned that I'd have to contend with out-of-order packets etc.
Many thanks!
BufferedProtocol isn't a protocol based on TCP, it's an interface (base class) for custom implementation of asyncio protocols, specifically those that try to minimize the amount of copying. The docstring provides more details:
The idea of BufferedProtocol is that it allows to manually allocate and control the receive buffer. Event loops can then use the buffer provided by the protocol to avoid unnecessary data copies. This can result in noticeable performance improvement for protocols that receive big amounts of data. Sophisticated protocols can allocate the buffer only once at creation time.
Currently none of the protocols shipped with asyncio derive from BufferedProtocol, so the use case for this is user code that needs to achieve high throughput - see the BPO issue and the linked mailing list post for details.
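To give a feel for the shape of the API, here is a minimal sketch of a BufferedProtocol subclass. The class name, buffer size, host and port are made up for illustration, and error handling is omitted.

import asyncio

class ZeroCopyReceiver(asyncio.BufferedProtocol):
    """Minimal BufferedProtocol sketch (names are illustrative).

    The event loop asks for a buffer via get_buffer(), writes received
    bytes directly into it, then reports how many arrived via
    buffer_updated() - asyncio makes no intermediate copy."""

    def __init__(self):
        self._buffer = bytearray(64 * 1024)   # allocate the receive buffer once

    def connection_made(self, transport):
        self.transport = transport

    def get_buffer(self, sizehint):
        # Hand the loop a writable view into our preallocated buffer.
        return self._buffer

    def buffer_updated(self, nbytes):
        # The first `nbytes` of self._buffer now contain new data, already
        # in order - TCP reassembly happened below this layer.
        data = self._buffer[:nbytes]
        print("received", len(data), "bytes")

    def eof_received(self):
        return False   # let the transport close

async def main():
    loop = asyncio.get_running_loop()
    # BufferedProtocol plugs into the same transport machinery as Protocol;
    # the address below is a placeholder.
    transport, proto = await loop.create_connection(
        ZeroCopyReceiver, "127.0.0.1", 8888)
    await asyncio.sleep(1)
    transport.close()

# asyncio.run(main())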
since TCP is not mentioned for BufferedProtocol, I'm concerned that I'd have to contend with out-of-order packets etc.
Unless you are writing custom low-level asyncio code, you shouldn't care about BufferedProtocol at all. Regular asyncio TCP code calls functions such as open_connection or start_server, both of which provide a streaming abstraction on top of TCP sockets in the usual way (using a buffer, handling errors, etc.).
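For comparison, here is a minimal sketch of that ordinary streams API; the host, port, and payload are placeholders.

import asyncio

async def fetch_line(host: str, port: int) -> bytes:
    # open_connection() gives you a StreamReader/StreamWriter pair; buffering,
    # ordering, and partial reads are handled for you on top of TCP.
    reader, writer = await asyncio.open_connection(host, port)
    writer.write(b"hello\n")
    await writer.drain()
    line = await reader.readline()
    writer.close()
    await writer.wait_closed()
    return line

# asyncio.run(fetch_line("127.0.0.1", 8888))   # placeholder address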
I can confirm - BufferedProtocol is for TCP only. Not for files or anything else. And it gives you a handle on a zero copy buffer to work with. That's basically all I wanted to know.

COM interface: Using STA instead of MTA

I am working with a COM interface that, according to ThreadingModel = "Free" in its CLSID entry in the registry, supports multithreaded apartments. Multithreading seems to be implemented at a very basic level, however: method calls very often return a "Class is busy" status code.
Is there any risk of switching to STAs in CoInitializeEx and using interface marshaling to have the COM system serialize the requests and to avoid this behaviour (which I never experienced when only making calls from the main thread)?
Thanks!
Using an STA thread to host the COM object will not make any difference. COM pays attention to the ThreadingModel value in the registry. Since it is "Free", it will not see any need to marshal the interface pointer and will still make the call from the worker thread.
You would have to monkey with the registry key and change it to "Both".
This is not a great solution; it will break at the drop of a hat. It is far better to take care of this yourself. Use Control.Begin/Invoke() or Dispatcher.Begin/Invoke(), depending on which class library you use to implement the required message loop. Note that you now also have a choice: the COM marshaling is equivalent to Invoke(), but you can (possibly) optimize by using BeginInvoke().
And last but not least, duplicating the locking that exists in the COM server that produces the "busy" error code is a possible solution. There are non-zero odds that you'll solve this by acquiring your own lock before you make each call, thus serializing the calls yourself. Your worker thread will now block instead of having to deal with the error code. Contacting the author of the component would be wise; they can easily tell you which specific methods you should serialize.
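To illustrate that last suggestion (serializing the calls yourself), here is a rough Python sketch assuming pywin32; the ProgID and method name are placeholders, and the same idea applies in C++ or .NET with a mutex or lock held around each call.

import threading
import win32com.client   # pywin32

class SerializedCom:
    """Wrap a COM object so only one thread calls into it at a time.

    This mirrors the 'acquire your own lock before each call' idea:
    worker threads block on the lock instead of getting a 'class is busy'
    error back from the server."""

    def __init__(self, prog_id: str):
        self._lock = threading.Lock()
        self._obj = win32com.client.Dispatch(prog_id)   # hypothetical ProgID

    def call(self, method: str, *args):
        # Each calling thread must have initialized COM itself
        # (pythoncom.CoInitialize/CoInitializeEx); omitted here for brevity.
        with self._lock:                  # serialize access across threads
            return getattr(self._obj, method)(*args)

# com = SerializedCom("Vendor.SomeComponent")   # placeholder ProgID
# result = com.call("DoWork", 42)               # placeholder method name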

epoll and set multiple interests at once

Interestingly, I cannot find any discussion on this other than some old slides from 2004.
IMHO, the current scheme of epoll() usage is begging for something like an epoll_ctlv() call. Although such a call does not make sense for typical HTTP web servers, it does make sense in a game server where we are sending the same data to multiple clients at once. It does not seem hard to implement, given that epoll_ctl() is already there.
Is there any reason for not having this functionality? Maybe there is no optimization opportunity there?
You would typically only use epoll_ctl() to add and remove sockets from the epoll set as clients connect and disconnect, which doesn't happen very often.
Sending the same data to multiple sockets would rather require a version of send() (or write()) that takes a vector of file descriptors. The reason this hasn't been implemented is probably just that no one with sufficient interest in it has done so yet (of course, there are lots of subtle issues - what if each destination file descriptor can only successfully write a different number of bytes?).
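To make that division of labour concrete, here is a rough sketch using Python's select.epoll wrapper: the epoll_ctl() work (register/unregister) only happens on connect and disconnect, while broadcasting the same payload to every client is an ordinary send loop, so there is little for a hypothetical epoll_ctlv() to speed up. The port number and buffer size are placeholders.

import select
import socket

def serve_broadcast(port: int = 9000) -> None:
    """Toy 'game server' loop: epoll_ctl() only on connect/disconnect,
    broadcasting is a plain send loop over the registered clients.
    (A real server would use non-blocking sockets and queue partial sends.)"""
    listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    listener.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    listener.bind(("0.0.0.0", port))
    listener.listen()

    ep = select.epoll()
    ep.register(listener.fileno(), select.EPOLLIN)        # epoll_ctl(ADD)
    clients = {}                                          # fd -> socket

    while True:
        for fd, _events in ep.poll():
            if fd == listener.fileno():
                conn, _addr = listener.accept()
                ep.register(conn.fileno(), select.EPOLLIN)   # epoll_ctl(ADD)
                clients[conn.fileno()] = conn
            else:
                data = clients[fd].recv(4096)
                if not data:                              # client went away
                    ep.unregister(fd)                     # epoll_ctl(DEL)
                    clients.pop(fd).close()
                    continue
                # "Send the same data to multiple clients at once": no
                # epoll_ctlv() is needed, just loop over the sockets.
                for c in clients.values():
                    c.sendall(data)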

Winsock 2: thread safety for simultaneous sends (TCP)

Is it possible to have multiple threads sending on the same socket? Will there be interleaving of the streams, or will the socket block on the first thread (assuming TCP)? The majority of opinions I've found seem to warn against doing this for obvious fears of interleaving, but I've also found a few comments that state the opposite. Are interleaving fears a carryover from Winsock 1, and are they well-founded for Winsock 2? Is there a way to set up a Winsock 2 socket that would allow for a lack of local synchronization?
Two of the contrary opinions are below... who's right?
comment 1
"Winsock 2 implementations should be completely thread safe. Simultaneous reads / writes on different threads should succeed, or fail with WSAEINPROGRESS, depending on the setting of the overlapped flag when the socket is created. Anyway by default, overlapped sockets are created; so you don't have to worry about it. Make sure you don't use NT SP6, if ur on SP6a, you should be ok !"
source
comment 2
"The same DLL doesn't get accessed by multiple processes as of the introduction of Windows 95. Each process gets its own copy of the writable data segment for the DLL. The "all processes share" model was the old Win16 model, which is luckily quite dead and buried by now ;-)"
source
Looking forward to your comments!
Jim
~edit1~
To clarify what I mean by interleaving: thread 1 sends the msg "Hello", thread 2 sends the msg "world!", and the recipient receives "Hwoel lorld!". This assumes both messages were NOT sent in a while loop. Is this possible?
I'd really advise against doing this in any case. The send functions might send less than you tell them to for various very legitimate reasons, and if another thread might enter and try to also send something, you're just messing up your data.
Now, you can certainly write to a socket from several threads, but you no longer have any control over what gets on the wire unless you have proper locking at the application level.
Consider sending some data:
WSASend(sock, buf, buflen, &sent, 0, 0, 0);
The sent parameter will hold the number of bytes actually sent - similar to the return value of the send() function. To send all the data in buf you will have to loop, doing a WSASend until all the data actually gets sent.
If, say, the first WSASend sends all but the last 4 bytes, another thread might go and send something while you loop back and try to send the last 4 bytes.
With proper locking to ensure that can't happen, it should be no problem sending from several threads - I wouldn't do it anyway, just for the pure hell it will be to debug when something does go wrong.
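The same partial-send behaviour exists with any stream-socket API, so here is a rough Python illustration of the pattern described above: loop until the whole message has been written, and hold a lock for the entire loop so another thread cannot interleave its bytes into the middle of your message. The function and lock names are just illustrative.

import socket
import threading

_send_lock = threading.Lock()

def send_message(sock: socket.socket, payload: bytes) -> None:
    """Equivalent of looping WSASend() until everything is on the wire,
    holding a lock for the whole message so two threads cannot interleave
    partial writes of different messages."""
    with _send_lock:                          # application-level serialization
        total = 0
        while total < len(payload):
            sent = sock.send(payload[total:])   # may send fewer bytes than asked
            if sent == 0:
                raise ConnectionError("socket connection broken")
            total += sent

# Two threads can now safely call send_message(sock, b"Hello") and
# send_message(sock, b"world!") on the same connected socket: the receiver
# sees the two messages back to back, never "Hwoel lorld!".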
Is it possible to have multiple threads sending on the same socket?
Yes - although, depending on implementation this can be more or less visible. First, I'll clarify where I am coming from:
C# / .Net 3.5
System.Net.Sockets.Socket
The overall visibility (i.e. required management) of threading and the headaches incurred will be directly dependent on how the socket is implemented (synchronously or asynchronously). If you go the synchronous route then you have a lot of work to manually manage connecting, sending, and receiving over multiple threads. I highly recommend that this implementation be avoided. The efforts to correctly and efficiently perform the synchronous methods in a threaded model simply are not worth the comparable efforts to implement the asynchronous methods.
I have implemented an asynchronous Tcp server in less time than it took for me to implement the threaded synchronous version. Async is much easier to debug - and if you are intent on Tcp (my favorite choice) then you really have few worries in lost messages, missing data, or whatever.
Will there be interleaving of the streams or will the socket block on the first thread (assuming TCP)?
I had to research interleaved streams (from wiki) to ensure that I was accurate in my understanding of what you are asking. To further understand interleaving and mixed messages, refer to these links on wiki:
Real Time Messaging Protocol
Transmission Control Protocol
Specifically, the power of Tcp is best described in the following section:
Due to network congestion, traffic load balancing, or other unpredictable network behavior, IP packets can be lost, duplicated, or delivered out of order. TCP detects these problems, requests retransmission of lost packets, rearranges out-of-order packets, and even helps minimize network congestion to reduce the occurrence of the other problems. Once the TCP receiver has finally reassembled a perfect copy of the data originally transmitted, it passes that datagram to the application program. Thus, TCP abstracts the application's communication from the underlying networking details.
What this means is that interleaved messages will be re-ordered into their respective messages as sent by the sender. It is expected that threading is or would be involved in developing a performance-driven Tcp client/server mechanism - whether through async or sync methods.
In order to keep a socket from blocking, you can set its Blocking property to false.
I hope this gives you some good information to work with. Heck, I even learned a little bit...

What are good sources to study the threading implementation of a XMPP application?

From my understanding, the XMPP protocol is based on an always-on connection where you have no immediate indication of when an XML message ends.
This means you have to evaluate the stream as it comes. It also means that you probably have to deal with asynchronous connections, since the socket can block in the middle of an XML message, either because of the message length or because the connection is slow.
I would appreciate one source per answer so we can mod them up and see what's the favourite.
Do you want to deal with multiple connections at once? Good async socket processing is a must in that case, to avoid one thread per connection.
Otherwise, you just need an XML parser that can deal with a chunk of bytes at a time. Expat is the canonical example; if you're in Java, try XP. These types of XML parsers fire events as soon as possible, and buffer partial stanzas until the rest arrives.
Now, to address your assertion that there is no notification when a stanza ends, that's not really true. The important thing is not to process the XML stream as if it is a sequence of documents. Use the following pseudo-code:
stanza = null
while parser has more:
    switch on token type:
        START_TAG:
            elem = create element from parser state
            if stanza is not null:
                add elem as child of stanza
            stanza = elem
        END_TAG:
            parent = parent of stanza
            if parent is null:
                fire OnStanza event
            stanza = parent
This approach should work with an event-based or pull parser. It only requires holding on to one pointer's worth of state. Obviously, you'll also need to handle attributes, character data, entity references (like &amp;), and special-case the stream:stream tag, but this should get you started.
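As a concrete (if simplified) version of the same idea, here is a rough Python sketch using the standard library's incremental XMLPullParser: feed it whatever bytes the socket happens to deliver, and a complete top-level stanza is reported only once its end tag has arrived. The class name and callback are made up, and stream-header and namespace handling are deliberately minimal.

import xml.etree.ElementTree as ET

class StanzaReader:
    """Incremental stanza extraction: feed raw bytes as they arrive on the
    socket; the stanza callback fires only when the closing tag has been
    seen. (Simplified: the stream root is tracked only by depth, and
    namespaces/stream errors are ignored.)"""

    def __init__(self, on_stanza):
        self._parser = ET.XMLPullParser(events=("start", "end"))
        self._depth = 0
        self._on_stanza = on_stanza

    def feed(self, chunk: bytes) -> None:
        self._parser.feed(chunk)                  # partial XML is fine
        for event, elem in self._parser.read_events():
            if event == "start":
                self._depth += 1
            else:  # "end"
                self._depth -= 1
                if self._depth == 1:              # direct child of the stream root
                    self._on_stanza(elem)

# reader = StanzaReader(lambda e: print("stanza:", e.tag))
# reader.feed(b"<stream:stream xmlns:stream='http://etherx.jabber.org/streams'>")
# reader.feed(b"<message><body>hel")             # socket can stop mid-stanza
# reader.feed(b"lo</body></message>")            # callback fires on the end tag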
Ignite Realtime (igniterealtime.org) provides an open-source XMPP server and client written in Java.
ejabberd is written in Erlang. I don't know the details of the ejabberd implementation, but one advantage of using Erlang is really inexpensive threads. I'll speculate that they start a thread per XMPP connection. In Erlang terminology these would be called processes, but they are not protected-memory address spaces; they are lightweight user-space threads.
