Is there a need to map a ContainerIPdu to a Frame in ARXML? (AUTOSAR)

Is there a need to map a ContainerIPdu to a Frame?
Also, is it required to create a PduTriggering for the ContainerIPdu on any PhysicalChannel?
Is it necessary that all PduTriggerings referenced by a ContainerIPdu are from the same PhysicalChannel?
Thanks.

Depends, yes, no :)
As ContainerIPdu is just a specific kind of IPdu, it needs to be mapped to a Frame if the underlying CommunicationCluster requires a mapping to Frames (which is not the case for Ethernet).
For the same reason, a PduTriggering that refers to the ContainerIPdu needs to be aggregated by all PhysicalChannels on which the ContainerIPdu is supposed to be transmitted.
It is not required that the PduTriggerings that refer to the contained IPdus of a ContainerIPdu belong to the same PhysicalChannel that owns the PduTriggering referring to the ContainerIPdu itself. But for the actual transport of a contained IPdu on a given PhysicalChannel, the latter needs to own PduTriggerings that refer to the respective contained IPdus.
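For illustration, here is a heavily abridged ARXML sketch (made-up short names and paths, packaging elements omitted; treat it as showing the shape, not schema-valid ARXML) of an Ethernet channel that aggregates a PduTriggering for the ContainerIPdu as well as PduTriggerings for its contained IPdus, with no Frame mapping involved:

    <ETHERNET-PHYSICAL-CHANNEL>
      <SHORT-NAME>EthChannel</SHORT-NAME>
      <PDU-TRIGGERINGS>
        <!-- triggering for the container itself -->
        <PDU-TRIGGERING>
          <SHORT-NAME>PduTrig_Container</SHORT-NAME>
          <I-PDU-REF DEST="CONTAINER-I-PDU">/Pdus/MyContainerIPdu</I-PDU-REF>
        </PDU-TRIGGERING>
        <!-- triggerings for the contained IPdus, needed for actual transport -->
        <PDU-TRIGGERING>
          <SHORT-NAME>PduTrig_Contained1</SHORT-NAME>
          <I-PDU-REF DEST="I-SIGNAL-I-PDU">/Pdus/ContainedPdu1</I-PDU-REF>
        </PDU-TRIGGERING>
      </PDU-TRIGGERINGS>
    </ETHERNET-PHYSICAL-CHANNEL>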

Related

IO Mapping in Codesys

I am using an ifm AC1421 PLC, with an ifm AC2218 D/A module to operate the actuators (proportional valves) and an ifm AC2517 A/D module to read data from the pressure sensors.
I would like to understand how I/O mapping is done in CODESYS, i.e. at what addresses I need to define the variables.
I have attached an image which shows the currently assigned variables.
Say, for example, I have assigned PV1 to %QW47 and PV2 to %QW50.
Can I not assign PV2 to %QW48 or %QW49?
If I assign them there, PV2 doesn't get operated.
The same goes for the sensors, which I have assigned to %IW32, 33 and 34. Can I not assign them to %IW37, 38 or 39?
(attached images: Actuators, Sensors)
AFAIK, unless you need to know the exact memory address of those inputs/variables elsewhere in your code, you shouldn't really pay attention to the address. What you should care about are the channels: where you assign/map your variables depends on which channel your input is wired to.
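As a sketch, direct-address declarations in Structured Text look like this (variable names are made up; the concrete %QW/%IW offsets are the ones from the question and are only meaningful if they correspond to the channels the hardware is actually wired to in the device tree):

    PROGRAM PLC_PRG
    VAR
        (* outputs: one word per D/A channel of the AC2218 *)
        PV1 AT %QW47 : WORD;
        PV2 AT %QW50 : WORD;
        (* inputs: one word per A/D channel of the AC2517 *)
        Pressure1 AT %IW32 : WORD;
        Pressure2 AT %IW33 : WORD;
    END_VAR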

IPFS search file mechanism

I am using IPFS (the InterPlanetary File System) to store documents/files in a decentralized manner.
In order to search for a file on the network, is there a record of all the hashes on the network (like leeches)?
How does my request travel through the network?
Apologies, but it's unclear to me whether you intend to search the contents of files on the network or just to search for files on the network. I'm going to assume the latter; please correct me if that's wrong.
What follows is a bit of an oversimplification, but here it goes:
In order to search for a file on the network, is there a record of all the hashes on the network (like leeches)?
There is not a single record, no. Instead, each of the IPFS nodes that make up the network holds a piece of the total record. When you add a block to your node, the node announces to the network that it will provide that block if asked. Announcing means letting a number of other IPFS nodes in the network know that you have that block. Essentially, your node asks its peers, who ask their peers, and so on, until it finds some nodes with IDs that are near the hash of the block. "Near" could be measured using something simple like XOR.
The important thing to understand is that, given the hash for a block, your node finds other IPFS nodes in the network whose IDs are similar to the hash of the block, and tells them "if anyone asks, I have the block with this hash". This is important because someone who wants to find the content for that same hash can use the same process to find the nodes that have been told where the hash can be retrieved from.
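As a toy sketch of that "near" relation (an illustration of the idea only, not the actual IPFS code; IPFS proper uses a Kademlia-style DHT):

    package main

    import (
    	"bytes"
    	"fmt"
    )

    // xorDistance measures how "near" a node ID is to a content hash:
    // the smaller the XOR distance (compared as a byte string), the nearer.
    func xorDistance(id, hash []byte) []byte {
    	d := make([]byte, len(id))
    	for i := range id {
    		d[i] = id[i] ^ hash[i]
    	}
    	return d
    }

    func main() {
    	hash := []byte{0x1e, 0x01}
    	nodeA := []byte{0x1f, 0x00} // differs from hash only in the low bits
    	nodeB := []byte{0x12, 0xff}
    	// nodeA is "nearer" to the hash, so it is a better place to announce.
    	fmt.Println(bytes.Compare(xorDistance(nodeA, hash), xorDistance(nodeB, hash)) < 0) // true
    }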
How does my request travel through the network?
Basically, the reverse of the above.
You can read more about ipfs content routing in the following:
https://ipfs.io/ipfs/QmR7GSQM93Cx5eAg6a6yRzNde1FQv7uL6X1o4k7zrJa3LX/ipfs.draft3.pdf
https://discuss.ipfs.io/t/how-does-resolution-and-routing-work-with-ipfs/365/3

When, and by which function, are attribute files in Linux sysfs modified?

I'm analyzing the block layer's sysfs functions.
I attached a diagram I made to explain the function sequence flow of /usr/src/linux-source-4.8.0/linux-source/4.8.0/block/blk-mq-sysfs.c.
I understood the relationships between these functions, but I couldn't find out how the kernel changes the values of the attribute files.
I heard that these files are created in the sysfs hierarchy by calling the sysfs_create_group() function.
When I issue some I/O requests, the system creates files like the ones below.
(I use an NVMe SSD 750 series.)
root#leedoosol:/sys/devices/pci0000:00/0000:00:03.0/0000:03:00.0/nvme/nvme0/nvme0n1/mq/0/cpu0# ls
completed dispatched merged rq_list
The kernel presumably created these files to give us information about the number of completed, dispatched and merged requests, and the pending request list.
The kernel must also change the values of these files while handling I/O requests, but I don't know when and how it does so.
I want to know when and how the kernel changes these attribute values, because I have to find out exactly what they mean.
Here is my environment:
1.) 2 sockets, 10 cores each
2.) Kernel version: 4.8.17
3.) Intel SSD 750 series
Maybe I found the answer: the show and store functions are called when I read or write my attribute file.
The kernel doesn't keep a stored value for the attribute file; it doesn't need to.
When I use 'cat' on the attribute file ('dispatched' in my example), the file is opened and several structs concerned with that file are created in RAM (of course, in the case of sysfs, no backing store exists).
The read() function is called, and then the show() function is called.
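For reference, the pattern in blk-mq-sysfs.c looks roughly like this (paraphrased from a 4.x kernel, so treat the exact names as approximate): the counters are updated on the normal I/O path, and show() merely formats their current values whenever the file is read.

    /* Roughly the shape of a blk-mq sysfs show() callback: the sysfs file
     * has no stored contents of its own; ctx->rq_dispatched[] is the
     * per-context counter bumped on the normal I/O submission path. */
    static ssize_t blk_mq_sysfs_dispatched_show(struct blk_mq_ctx *ctx,
                                                char *page)
    {
            return sprintf(page, "%lu %lu\n",
                           ctx->rq_dispatched[1], ctx->rq_dispatched[0]);
    }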

How to implement GetTargetLuns in a VSS hardware provider?

I am implementing a VSS hardware provider for a ZFS-based iSCSI target. We have implemented AreLunsSupported, PreCommitSnapshots, CommitSnapshots, etc., and up to this point it works fine. But after that it fails with a "VSS_E_NO_SNAPSHOTS_IMPORTED" error in the LocateLuns method, and I think we are not filling in the target LUN information properly.
My questions are:
How do I find the serial number of the target LUN? Do I need to mount the newly created snapshot and then get the serial number?
Do we need to fill in the interconnect and storage identifier information as well, or can I just pass NULL for these?
Q: How do I find the serial number of the target LUN? Do I need to mount the newly created snapshot and then get the serial number?
No, you should not mount the snapshot at this point. You should use an out-of-band mechanism to communicate directly with your storage (I'm assuming your 'ZFS based iSCSI target' is coming from a NAS box), probably a REST API call, to figure out the serial number of the snapshot.
Let me elaborate a bit more on the serial number of the snapshot:
VSS expects the 'shadow copy' to be a concrete, real volume, similar to the primary volume (in your case, an iSCSI target).
Since you are using ZFS snapshots, without dwelling much on your exact implementation, you have 2 options for obtaining the serial number of a concrete LUN:
a. If your storage allows exposing a ZFS snapshot directory as an iSCSI target, then create that iSCSI target and use its page 83 identifier.
b. If not, create a ZFS clone from the ZFS snapshot, expose that as an iSCSI target, and use its page 83 identifier.
Q: Do we need to fill in the interconnect and storage identifier information as well, or can I just pass NULL for these?
For all practical purposes, it usually suffices to simply copy the VDS_LUN_INFORMATION of the original source LUN and only replace the m_szSerialNumber field with that of the target LUN (assuming that the product ID, vendor ID, etc. all remain the same).
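A sketch of that approach (FillTargetLunInfo is a hypothetical helper, not part of the VSS API; error handling is abridged, and a real implementation must also deep-copy the other embedded strings and descriptors, since VDS_LUN_INFORMATION holds pointers):

    #include <windows.h>
    #include <vds.h>
    #include <cstring>

    // Hypothetical helper: build the target LUN's VDS_LUN_INFORMATION from
    // the source LUN's, swapping in the snapshot LUN's serial number.
    HRESULT FillTargetLunInfo(const VDS_LUN_INFORMATION &src,
                              const char *snapSerial,
                              VDS_LUN_INFORMATION *tgt)
    {
        *tgt = src;  // vendor ID, product ID, bus type etc. stay the same
        const size_t len = std::strlen(snapSerial) + 1;
        tgt->m_szSerialNumber = static_cast<char *>(CoTaskMemAlloc(len));
        if (tgt->m_szSerialNumber == nullptr)
            return E_OUTOFMEMORY;
        std::memcpy(tgt->m_szSerialNumber, snapSerial, len);
        return S_OK;
    }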
This link explains in detail what is expected of a VSS hardware provider implementation: https://msdn.microsoft.com/en-us/library/windows/desktop/aa384600(v=vs.85).aspx
Unique Page 83 Information
Both the original LUN and the newly created shadow copy LUN must have at least one unique storage identifier in the page 83 data. At least one STORAGE_IDENTIFIER with a type of 1, 2, 3, or 8, and an association of 0, must be unique on the original LUN and the newly created shadow copy LUN.
Bonus chatter (Answer ends at this point):
Now, option (b) above might raise eyebrows, since you are creating a clone ahead of time and it is not yet being used. The reason is that the above steps need to be performed in IVssHardwareSnapshotProvider::FillInLunInfo, and this same VDS_LUN_INFORMATION content is passed later to IVssHardwareSnapshotProvider::LocateLuns (VSS is asking you to locate the LUNs that you earlier told it were the shadow copy LUNs). Hence, regardless of whether you will use the clone in the future, you must have the concrete LUN (iSCSI target) created upfront.
A silver lining: if you are sure that the VSS requestor's workflow will never mount the shadow copy, you can get away with faking some (valid) info in VDS_LUN_INFORMATION during IVssHardwareSnapshotProvider::FillInLunInfo. For this to work, you will have to create a 'transportable' shadow copy (the VSS requestor uses the VSS_CTX_FILE_SHARE_BACKUP | VSS_VOLSNAP_ATTR_TRANSPORTABLE flags). The only use case for such a shadow copy would be to perform a hardware resync on it, in which case the VSS hardware provider implements the IVssHardwareSnapshotProvider::ResyncLuns method and performs a ZFS snapshot rollback there.

How should one use Disruptor (Disruptor Pattern) to build real-world message systems?

As the RingBuffer up-front allocates objects of a given type, how can you use a single ring buffer to process messages of various different types?
You can't create new object instances to insert into the RingBuffer, as that would defeat the purpose of up-front allocation.
So you could have 3 messages in an async messaging pattern:
NewOrderRequest
NewOrderCreated
NewOrderRejected
So my question is: how are you meant to use the Disruptor pattern for real-world messaging systems?
Thanks
Links:
http://code.google.com/p/disruptor-net/wiki/CodeExamples
http://code.google.com/p/disruptor-net
http://code.google.com/p/disruptor
One approach (our most common pattern) is to store the message in its marshalled form, i.e. as a byte array. For incoming requests, e.g. FIX messages, the binary message is quickly pulled off the network and placed in the ring buffer. The unmarshalling and dispatch of the different types of messages are handled by EventProcessors (consumers) on that ring buffer. For outbound requests, the message is serialised into the preallocated byte array that forms the entry in the ring buffer.
If you are using a fixed-size byte array as the preallocated entry, some additional logic is required to handle overflow for larger messages: pick a reasonable default size and, if it is exceeded, allocate a temporary array that is bigger. Then discard it when the entry is reused or consumed (depending on your use case), reverting to the original preallocated byte array.
If you have different consumers for different message types, you can quickly check whether a given consumer is interested in a specific message, either by reading the type information at a known offset into the byte array or by passing a discriminator value through on the entry.
Also, there is no rule against creating object instances and passing references (we do this in a couple of places too). You do lose the benefits of object preallocation; however, one of the design goals of the Disruptor was to give the user the choice of the most appropriate form of storage.
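A minimal sketch of such an entry type (Disruptor 3.x-style Java; the class name, default size and type codes are made up):

    import com.lmax.disruptor.RingBuffer;

    // Preallocated ring entry: marshalled bytes plus a type discriminator.
    public class MessageEvent {
        static final int DEFAULT_SIZE = 512;   // "reasonable default" entry size

        int type;                              // discriminator for consumers
        int length;                            // bytes of payload actually used
        byte[] payload = new byte[DEFAULT_SIZE];

        void set(int type, byte[] marshalled) {
            this.type = type;
            this.length = marshalled.length;
            if (length > payload.length) {
                payload = new byte[length];    // overflow: temporarily bigger array
            }                                  // (revert to DEFAULT_SIZE on reuse)
            System.arraycopy(marshalled, 0, payload, 0, length);
        }

        public static void main(String[] args) {
            RingBuffer<MessageEvent> ring =
                    RingBuffer.createSingleProducer(MessageEvent::new, 1024);
            long seq = ring.next();
            try {
                ring.get(seq).set(1, "NewOrderRequest".getBytes());
            } finally {
                ring.publish(seq);
            }
        }
    }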
There is a library called Javolution (http://javolution.org/) that lets you define objects as structs with fixed-length fields (like string[40], etc.) that rely on byte buffers internally instead of variable-size objects... that allows the ring buffer to be initialized with fixed-size objects and thus (hopefully) contiguous blocks of memory, which lets the cache work more efficiently.
We use that for passing events/messages, and standard strings etc. for our business logic.
Back to object pools.
The following is a hypothesis.
If you have 3 types of messages (A, B, C), you can make 3 pre-allocated arrays, one per type. That creates 3 memory zones: A, B, and C.
It's not as if there is only one cache line; there are many, and they don't have to be contiguous. Some cache lines will refer to something in zone A, others to zone B, and others to zone C.
So the ring buffer entry can hold a single reference to a common ancestor or interface of A, B, and C.
The problem is selecting the instance from the pools; the simplest approach is to give each pool the same length as the ring buffer, as in the sketch below. This implies a lot of wasted pooled objects, since only one of the 3 is ever used at any entry; e.g. ring buffer entry 1234 might be using message B[1234], while A[1234] and C[1234] are unused and unusable by anyone.
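In code, the indexing idea might look like this (a sketch of the hypothesis only; all names are made up):

    // One pool per message type, each the same length as the ring buffer,
    // indexed by the entry's sequence number.
    class PooledMessagesSketch {
        interface Message {}                       // common ancestor (see above)
        static class MessageA implements Message {}
        static class MessageB implements Message {}
        static class MessageC implements Message {}

        static final int RING_SIZE = 1024;         // power of two, matches the ring
        static final MessageA[] poolA = new MessageA[RING_SIZE];
        static final MessageB[] poolB = new MessageB[RING_SIZE];
        static final MessageC[] poolC = new MessageC[RING_SIZE];

        // The entry at sequence s references exactly one of the three pools;
        // the slots at the same index in the other two pools sit unused.
        static Message select(long s, char type) {
            int i = (int) (s & (RING_SIZE - 1));
            switch (type) {
                case 'A': return poolA[i];
                case 'B': return poolB[i];
                default:  return poolC[i];
            }
        }
    }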
You could also make a super-entry with all 3 A+B+C instances inlined and indicate the type with a byte or an enum. Just as wasteful in memory, but it looks a bit worse because of the fatness of the entry; for example, a reader working only on C messages will have less cache locality.
I hope I'm not too wrong with this hypothesis.
