Using REDHAWK Version 2.0.5,
Suppose a CHANNELIZER is centered at 300MHz with an attached DDC centered at 301MHz. The DDC is tuned relative to the CHANNELIZER; in this case it sits at a 1MHz offset from the CHANNELIZER.
A) How should I present the DDC center frequency to a user in the frontend tuner status and allocation? For example, would they enter 1MHz or 301MHz to set the center frequency for the DDC? Currently I am using the latter version.
B) Section F.5.2 of the REDHAWK 2.1.0 manual says the COL_RF SRI keyword is the center frequency of the collector and CHAN_RF is the center frequency of the stream. In the case above, I set COL_RF to 300MHz and CHAN_RF to 301MHz, but the REDHAWK IDE plots for the DDC center at the COL_RF frequency of 300MHz. Should CHAN_RF be a relative value such as 1MHz?
C) When the CHANNELIZER center frequency changes, I only set the valid field in the allocation to false on attached DDCs. Is there any other special bookkeeping that needs to be done when this happens?
D) Should disabling or enabling the output from the CHANNELIZER also disable or enable the output for the attached DDCs?
E) Must deallocating the CHANNELIZER force all DDCs that are attached to deallocate?
A) All external interfaces (allocation, FrontendTuner ports, status properties, etc) assume RF values, not IF or offsets. Allocate or tune to 301MHz in order to center a DDC at 301MHz. The center_frequency field of the frontend_tuner_status property should be set to 301MHz for that DDC.
B) Your understanding of how to use COL_RF (300MHz) and CHAN_RF (301MHz) is correct. You may be able to work around this by reordering the SRI keywords to have CHAN_RF appear first, if necessary.
For (C) and (D), there are some design decisions that are left up to the developer since the implementation, as well as the hardware (if any), may impact those decisions. Here are some recommendations, though.
C) In general, if at any point the DDCs become invalid, they should be marked as such. It is possible to retune a CHANNELIZER by a small amount such that one or more DDCs still fall within the frequency range and remain valid, but that may also be hardware dependent. Additionally, it's recommended that DDCs only produce data when both enabled AND valid, so when marking a DDC invalid you may also want to stop it from producing data.
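As an illustration of that validity check, here is a sketch only; the helper name and the simple contiguous-passband model are assumptions for illustration, not a REDHAWK API:

```python
def ddc_still_valid(chan_cf_hz, chan_bw_hz, ddc_cf_hz, ddc_bw_hz):
    """Return True if the DDC passband still lies inside the channelizer's coverage."""
    chan_lo = chan_cf_hz - chan_bw_hz / 2.0
    chan_hi = chan_cf_hz + chan_bw_hz / 2.0
    ddc_lo = ddc_cf_hz - ddc_bw_hz / 2.0
    ddc_hi = ddc_cf_hz + ddc_bw_hz / 2.0
    return chan_lo <= ddc_lo and ddc_hi <= chan_hi

# A 1MHz-wide DDC at 301MHz survives a small channelizer retune from 300MHz
# to 300.5MHz (assuming a 20MHz-wide channelizer), but not a retune to 320MHz.
print(ddc_still_valid(300.5e6, 20e6, 301e6, 1e6))  # True
print(ddc_still_valid(320e6, 20e6, 301e6, 1e6))    # False
```

On a retune you would run a check like this per attached DDC and clear the valid flag (and stop output) only for the ones that fail.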
D) CHANNELIZER and RX_DIGITIZER_CHANNELIZER tuners both have wideband input and narrowband DDC output. Some implementations of an RX_DIGITIZER_CHANNELIZER may have the ability to produce wideband digital output of the analog input (acting as an RX_DIGITIZER). In that scenario, the RX_DIGITIZER_CHANNELIZER output enable/disable controls the wideband output, while the DDC output enables remain independently controlled.
The behavior of a CHANNELIZER, which does not produce wideband output, is left as a design decision for the developer. For behavior consistent with RX_DIGITIZER_CHANNELIZER tuners, it's recommended that the DDCs remain independently controlled. Note that the enable for a tuner is specifically the output enable, not an overall enable/disable of the tuner itself. For that reason, it's recommended that the enable for a CHANNELIZER not affect the data flow to the DDCs, since that data flow is internal to the device. Again, this is all up to the developer; these are just recommendations, since the spec leaves it open.
E) Yes, deallocating a CHANNELIZER should result in deallocation of all associated DDCs.
Is it possible to SET the interface statistics in Linux after the interface has been brought up? I'm dealing with rrdtool (mrtg), which gets upset by a daily ifdown/ifup cycle that brings the interface counters back to zero. Ideally I would like to continue counting from where I left off, and setting the interface counters to what they were before the interface went down seems to be the easiest path.
I checked writing to /sys/class/net/ax0/statistics/rx_packets but that gives a Permission Denied error.
netstat, ifup, ifconfig and friends don't seem to support changing these values either.
Anything else I can try?
You can't set the kernel counters, no - but do you really need to?
MRTG will usually graph a rate, based on the difference between samples. So your MRTG/RRD will store packets-per-second values every cycle (usually 5min, but maybe 1min). When your device resets the counters, MRTG will see the value apparently go backwards, which will be discounted as out of range: one failed sample. But the next sample will work, and a new rate will be given.
If you're getting a big spike in the MRTG graph at the point of the reset, this will be due to an incorrect 'counter rollover' detection. You can prevent this either by setting MRTG's AbsMax (so the spuriously high value is rejected as invalid) or (better) by using SNMPv2 counters (where a reset is more obvious).
If you set your RRD file to have a large enough heartbeat and XFF, then this one missing sample will be interpolated, and so your graphs (which, remember, show the rate rather than the total) will continue to look fine.
Should you need the total, it can be derived by sum(rate x interval) which is automatically done by the Routers2 frontend for MRTG/RRD.
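To see why a single reset mostly self-heals, here is a sketch of the rate logic described above (illustrative Python, not MRTG's actual implementation):

```python
def rates_from_counters(samples, abs_max=None):
    """Compute per-second rates from (timestamp, counter) samples.
    A counter that goes backwards (e.g. after an ifdown/ifup reset) yields
    None for that interval -- one discarded sample, not a corrupt graph."""
    rates = []
    for (t0, c0), (t1, c1) in zip(samples, samples[1:]):
        delta = c1 - c0
        rate = delta / (t1 - t0) if delta >= 0 else None
        if rate is not None and abs_max is not None and rate > abs_max:
            rate = None  # reject implausible spikes (the AbsMax idea)
        rates.append(rate)
    return rates

# 300s sampling; the counter resets to a small value at the third sample.
samples = [(0, 1000), (300, 31000), (600, 500), (900, 30500)]
print(rates_from_counters(samples))  # [100.0, None, 100.0]
```

Only the interval spanning the reset is lost; with a sufficiently large heartbeat/XFF the RRD interpolates over it.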
In Autosar NM 4.2.2 NM PDU Filter Algorithm,
What is the significance of CanNmPnFilterMaskByte? I understand that it is ANDed as a mask with the Partial Network info in an incoming NM PDU to decide whether to participate in communication or not, but please explain briefly how exactly it works.
You are actually talking about Partial Networking. So, if certain functional clusters are not needed anymore, they can go to sleep and save power.
ECUs supporting PN check all NmPdus for the PartialNetworkingInfo (PNI, where each bit represents a functional cluster's status) in the NmPdu's UserData.
The PnFilterMask filters out any PNI info the ECU is not interested in at all (because the ECU does not contribute in any way to those functions). If, after applying the filter, everything is 0, the NmPdu is discarded and therefore does not cause a restart of the Nm-Timeout Timer. This is what eventually brings the Nm into the Go-to-sleep phase, even though NmPdus are still being transmitted.
By ECU, also consider Gateways.
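A minimal sketch of that filter step (illustrative Python; the function name and the byte layout are assumptions for illustration, not AUTOSAR code):

```python
def pn_pdu_relevant(pni_bytes, filter_mask_bytes):
    """AND each received PNI byte with the configured filter mask byte.
    The PDU is relevant (restarts the Nm-Timeout Timer) only if any
    masked bit survives; otherwise the PDU is discarded."""
    return any(p & m for p, m in zip(pni_bytes, filter_mask_bytes))

mask = [0x1C]  # this ECU only cares about the functions on bits 2, 3, 4
print(pn_pdu_relevant([0x10], mask))  # True  -- bit 4 requested, stay awake
print(pn_pdu_relevant([0xA1], mask))  # False -- no relevant bit set, discard
```

With every received PNI filtered to zero, the timeout timer is never restarted and the ECU can proceed toward sleep while other clusters keep communicating.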
Update: how to determine the mask
As described above, each bit represents a function.
Bit0 : Func0
..
Bit7: Func7
The OEM would now have to check which ECUs in the vehicle are required for which functions (and in which vehicle states), and how to lay out the vehicle networks.
Here are some function examples, and ECUs required excluding gateways:
ACC : 1 radar sensor front
EBA : 1 camera + 1..n radar sensor front
ParkDistanceControl (PDC): 4 Front- + 4 Rear Sensors + Visualization in Dashboard
Backup Camera: 1 Camera + Visualization ECU (which draws the guide lines showing, based on steering angle and speed, where the vehicle would move within the camera picture)
Blind Spot Detection (BSD) / LaneChangeAssist (LCA): 2 Radar sensors in the rear + MirrorLed Control and Buzzer Control ECU
Rear Cross Traffic Assist (RCTA) (w/ or w/o Brake + Alert): 2 Radar Sensors in the rear + MirrorLed Control and Buzzer Control ECU
Occupant Safe Exit (warn or keep doors closed in case something approaches): 2 rear radar sensors + DoorLock ECU(s)
The next thing is that some functions are distributed over several ECUs.
e.g. the 2 rear radar sensors can do the whole BSD/LCA, RCTA and OSE functions, possibly including the LED driver for the MirrorLEDs and a rear buzzer driver, or they send this information over CAN to a central ECU which handles the MirrorLEDs and a rear buzzer. (Such short-range radar sensors are what I have been working on for a long time now, and the number of different functions grows over the years.)
The camera can have some companion radar sensors (e.g. the one ACC runs on, or some short-range radars) to help verify/classify image data / objects.
The PDC sensors may themselves be small ECUs that send their information to a central PDC ECU, which actually handles the output to the dashboard.
So, not all of them need to be active all the time and drain the battery.
BSD/LCA and RCTA/B need to work while driving or parking; RCTA/B only when reverse gear is selected; BSD/LCA only in forward gear or neutral; PDC only when parking (low-speed forward/reverse); the Backup Camera only when reverse gear is engaged for parking; OSE can be active at standstill, with the engine on (e.g. dropping off a passenger at a traffic light) or off (driver leaves and locks the vehicle).
Now, for each of these cases, you need to know:
which ECUs are still required for each vehicle state and functional state
the network topology telling you, how these ECUs are connected.
You need to consider gateway ECUs here, since they have to route certain information between multiple networks.
You would assign 1 bit of the Nm flags per function or function cluster (e.g. BSD/LCA/RCTA = 1 bit, OSE = 1 bit, BackupCam/PDC (e.g. "parking mode") = 1 bit).
e.g. CanNmPnInfo Flags might be defined as:
Bit0 : PowerTrain
Bit1 : Navi/Dashboard Cluster
Bit2 : BSD/LCA/RCTA
Bit3 : ParkingMode
Bit4 : OSE
...
Bit7 : SmartKeyAutomaticBackDoor (DoorLock ECU with key-nearby detection, sensing a swipe/motion to automatically open the back door)
It may also be possible to have CL15 devices without PNI, because their functions are only active while the engine is on, like ACC, EBA, TrafficJamAssist ... (even BSD/LCA/RCTA could be considered like that). You could maybe handle them without CL30 + PNI.
So, you now have an assignment of function to a bit in the PNI, and you know which ECUs are required.
e.g. the radar sensors in the rear need a mask of 0x1C (bits 2, 3, 4). Even so, they need to be aware that some ECUs might stop delivering information because they are off (e.g. Speed and SteeringAngle from the powertrain disappear after CL15 off -> OSE), and that this is not an error (CAN message timeouts).
The gateway might need more bits set in its mask, in order to keep subnetworks alive or to actually wake up some networks and their ECUs (e.g. a remote key waking up the DoorLock ECUs).
So a gateway in the rear might have 0xFC as a mask, but a front gateway 0x03.
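To make the mask derivation concrete, here is a sketch using the example bit assignment above (the helper and the dictionary are illustrative, not part of any AUTOSAR tooling):

```python
# Example PNI bit assignment from above
FUNC_BITS = {
    "PowerTrain": 0,
    "Navi/Dashboard": 1,
    "BSD/LCA/RCTA": 2,
    "ParkingMode": 3,
    "OSE": 4,
}

def pn_mask(functions):
    """OR together the PNI bits of every function this ECU participates in."""
    mask = 0
    for f in functions:
        mask |= 1 << FUNC_BITS[f]
    return mask

# A rear radar sensor contributes to BSD/LCA/RCTA, ParkingMode and OSE
# (bits 2, 3, 4), giving a filter mask of 0x1C.
print(hex(pn_mask(["BSD/LCA/RCTA", "ParkingMode", "OSE"])))  # 0x1c
```

The OEM repeats this per ECU (and per gateway, where the mask is the union of everything behind it) to arrive at the CanNmPnFilterMaskByte configuration.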
The backup camera might only be activated at low speed (<20km/h) with reverse gear engaged, while the PDC sensors can work without reverse gear.
The PNI flags are usually defined by the OEM, because they are a vehicle-level architectural item; a supplier usually cannot define them.
It should actually be part of the AUTOSAR ARXML SystemDescription (see AUTOSAR_TPS_SystemTemplate.pdf).
EcuInstance --> CanCommunicationConnector (pnc* Attributes)
Usually, the AUTOSAR configuration tools support extracting this information automatically to configure CanNm / Nm and ComM (user requests).
Sorry for the delay; finding an example to describe this can be quite tedious. But I hope it helps.
Is there a way with WASAPI to determine if two devices (an input and an output device) are both synced to the same underlying clock source?
In all the examples I've seen input and output devices are handled separately - typically a different thread or event handle is used for each and I've not seen any discussion about how to keep two devices in sync (or how to handle the devices going out of sync).
For my app I basically need to do real-time input to output processing where each audio cycle I get a certain number of incoming samples and I send the same number of output samples. ie: I need one triggering event for the audio cycle that will be correct for both devices - not separate events for each device.
I also need to understand how this works in both exclusive and shared modes. For exclusive I guess this will come down to finding if devices have a common clock source. For shared mode some information on what Windows guarantees about synchronization of devices would be great.
You can use the IAudioClock API to detect drift of a given audio client, relative to QPC; if two endpoints share a clock, their drift relative to QPC will be identical (that is, they will have zero drift relative to each other.)
You can use the IAudioClockAdjustment API to adjust for drift that you can detect. For example, you could correct both sides for drift relative to QPC; you could correct either side for drift relative to the other; or you could split the difference and correct both sides to the mean.
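As a sketch of the drift arithmetic: the real calls are IAudioClock::GetFrequency and IAudioClock::GetPosition (which returns a device position plus a correlated QPC timestamp); this Python only models the math on those sample pairs and is not a WASAPI binding:

```python
def drift_ppm(samples, device_freq_hz):
    """Estimate a stream's drift relative to QPC, in parts per million.
    samples: (device_position_units, qpc_seconds) pairs, as you would build
    from IAudioClock::GetPosition, with position scaled by GetFrequency."""
    (p0, t0), (p1, t1) = samples[0], samples[-1]
    audio_elapsed = (p1 - p0) / device_freq_hz   # seconds by the device clock
    qpc_elapsed = t1 - t0                        # seconds by QPC
    return (audio_elapsed / qpc_elapsed - 1.0) * 1e6

# Two endpoints sharing one hardware clock show identical drift vs QPC,
# hence zero drift relative to each other.
in_drift  = drift_ppm([(0, 0.0), (4_800_480, 100.0)], 48_000)
out_drift = drift_ppm([(0, 0.0), (4_800_480, 100.0)], 48_000)
print(round(in_drift, 1))              # 100.0 (ppm fast vs QPC)
print(round(in_drift - out_drift, 6))  # 0.0
```

If the two drift estimates converge to the same value over a long enough window, the endpoints are on a common clock; a persistent difference means you need IAudioClockAdjustment (or resampling) on at least one side.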
An interesting issue arose with a device whose SWD_CLK pin is shared as a 'device boot mode' pin (ROM/Flash boot, etc.). The specification states that the SWD_CLK should be held high for some time before functioning as SWD_CLK.
The origen_swd plugin drives the clock high to 'enable' it, so the timeset for this pin must be 'return low' in order to clock. But, when I try to drive this high and hold it high, it begins clocking. Is there a way to disable the timeset for some time, then re-enable it when ready?
The workaround is to change the origen_swd to accept an option to either drive high or drive low to enable, then change the timeset in my application to return high.
Using metaprogramming to just grab and edit instance variables of the timeset may also be a solution, but is there a supported API to handle the tasks like the above?
Thanks
The way to do this would be to define two timing options (timesets) for the given pin: one with the return-low behavior and one without.
tester.set_timeset "mode_entry", 40
pin(:swd_clk).drive!(1)
# Sometime later once in mode
tester.set_timeset "func_swd", 40
If the tester supports (e.g. V93K) you can also define multiple wave forms for a pin within the same timeset, as shown at the end of this guide section - http://origen-sdk.org/origen/guides/pattern/timing/#Complex_Timing
Then you would just have a single timeset selection and control the wave you want on the pin like this:
pin(:swd_clk).drive!(1) # Would be defined in the timing as always high
pin(:swd_clk).drive!('P') # Now start the clk pulse
Both of these approaches will work in the generated ATE patterns; however, at the time of writing I believe that OrigenSim does not yet support the second approach, so you will have to use the multiple-timesets approach.
As an aside, you sound like you are only looking for a solution that works in simulation and not necessarily required to have the two types of waves within the final ATE pattern.
In that case, you could also try poking the testbench's pin driver force data bit, though I haven't tried this:
tester.simulator.poke('origen.pins.swd_clk.force_data[1]', 1);
If you have success with that, we should think about adding a convenience API to do this kind of thing in simulation:
pin(:swd_clk).force!(1)
I have a device with one 40MHz wideband input that can produce one wideband output and 8 narrowband DDC channels. I am not sure how to set up the FEI interface for this device. Should it be a CHANNELIZER or an RX_DIGITIZER_CHANNELIZER? I am leaning towards the latter due to the wideband output.
Regarding the allocation of such a device: for the RX_DIGITIZER_CHANNELIZER type, the manual is vague about the use of the CHANNELIZER portion in this case. Should I still allow CHANNELIZERs to be allocated, or just allocate the RX_DIGITIZER_CHANNELIZER and DDCs? Should changes to the RX_DIGITIZER_CHANNELIZER determine when to drop the DDCs? If CHANNELIZERs can still be allocated alongside an RX_DIGITIZER_CHANNELIZER, how does this work?
A CHANNELIZER device takes already-digitized data and provides DDCs from that digital stream. An RX_DIGITIZER_CHANNELIZER has an analog input and performs the receiver, digitizer, and channelizer functions in a non-separable device. It sounds like you chose correctly in using the RX_DIGITIZER_CHANNELIZER for a device with an analog input. Typically, allocating the RX_DIGITIZER_CHANNELIZER takes care of allocating the channelizer, because all three parts are linked together; therefore, the only additional allocations are usually DDCs.