I am running REDHAWK 2.0.8 on CentOS 7.2, attempting to control an Ettus N210 using v6.1.0 of the USRP_UHD device. From the IDE console, I can see the USRP_UHD recognize and initialize the N210, and I can allocate a channel (1 MHz BW, 2 Msps) from the available RX_Digitizer.
My issue: I connect the IDE's Plot Data to dataShort_out and never see any data or SRI updates.
Using Wireshark, I can see data being output from the N210 over the network connection, but nothing plots. The same thing happens whether I launch the device via a node/domain manager or in the Sandbox.
I see a similar issue if I launch a waveform with a USRP_UHD dependency: it connects and allocates properly, but I never see any data sent to the connected component in the waveform.
Curious if anyone else has had a similar experience.
UPDATE 12/17/2018:
After installing RH 2.2.1 on a CentOS 7.4 system, the USRP_UHD device appears to work correctly out of the box. I'm able to plot data and SRI from the dataShort_out port after allocating an RX_DIGITIZER.
The output port of the USRP_UHD is what is called a multi-out port, which is slightly different from a normal BulkIO output port. The main difference is that the port will only send data over connections which have a connection ID that has been mapped to a stream ID. With the USRP_UHD, this is done via allocation and the allocation ID. Read more here.
To plot data from a multi-out port using the IDE, the plot must be connected to the port using a connection ID that has been mapped to a stream ID, which for the USRP_UHD means the connection ID must be identical to one of the allocation IDs. You can specify the connection ID using the plotting wizard, or you can create a listener allocation with an allocation ID set to the connection ID of the plot (either option will work). See the following resources for more information:
Port Plot Wizard
Plotting a Tuned Receiver
Allocating a Listener
Connecting a waveform to a multi-out port must follow the same conventions: connect using a connection ID that has been mapped to a stream. This can be done by adding an FEI device dependency to the waveform's *.sad.xml file (see the first bullet below). It can also be done after launching a waveform that does not contain an FEI device dependency, by specifying the connection ID for the connection between the waveform and the multi-out port. The connection ID must be identical to an allocation ID associated with the desired stream of data, which could be a listener allocation or the original control allocation. See the second and third bullets below for more information on this method.
Associating a Waveform with an FEI Device
Allocating a Listener
Connect Wizard
Note: Though the links I've provided are to the REDHAWK 2.2.1 manual, the content is applicable to all versions of REDHAWK, including REDHAWK 2.0.8. The IDE features you will need are also available in REDHAWK 2.0.8. The 2.0.8 manual should have similar content if you'd prefer to use the older manual.
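To illustrate the connection-ID rule in code, here is a minimal Python sandbox sketch. It is an untested assumption that you have a local REDHAWK install providing the `ossie.utils.sb` API; the frequencies, rates, and allocation ID are placeholders, and the exact `connect` keyword arguments may vary slightly between REDHAWK versions.

```python
# Sketch: connecting to the USRP_UHD multi-out port in the REDHAWK sandbox.
# Property names follow the FEI tuner allocation struct; all numeric values
# are placeholders, not taken from the post above.

import uuid

def build_tuner_allocation(alloc_id, center_freq_hz, sample_rate_sps, bandwidth_hz):
    """Return the FRONTEND::tuner_allocation struct as a property dict."""
    return {
        "FRONTEND::tuner_allocation": {
            "FRONTEND::tuner_allocation::tuner_type": "RX_DIGITIZER",
            "FRONTEND::tuner_allocation::allocation_id": alloc_id,
            "FRONTEND::tuner_allocation::center_frequency": float(center_freq_hz),
            "FRONTEND::tuner_allocation::bandwidth": float(bandwidth_hz),
            "FRONTEND::tuner_allocation::sample_rate": float(sample_rate_sps),
            "FRONTEND::tuner_allocation::device_control": True,
        }
    }

def connect_sink_to_usrp():
    """Launch USRP_UHD in the sandbox and connect using a matching connection ID."""
    from ossie.utils import sb  # requires a REDHAWK install

    usrp = sb.launch("USRP_UHD")
    alloc_id = str(uuid.uuid4())
    props = build_tuner_allocation(alloc_id, 100e6, 2e6, 1e6)
    if not usrp.allocateCapacity(props):
        raise RuntimeError("tuner allocation failed")

    sink = sb.DataSink()
    # The key point: the connection ID must equal the allocation ID,
    # otherwise the multi-out port sends nothing over this connection.
    usrp.connect(sink, usesPortName="dataShort_out", connectionId=alloc_id)
    sb.start()
    return usrp, sink
```

The same idea applies to the IDE plot: whatever connection ID the plot uses must match an allocation (control or listener) ID.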
I have a Win11 laptop and I installed Yabe and was easily able to explore bacnet objects on my home thermostat. I'm trying to duplicate this on a Linux Laptop. My issue is that Yabe is not finding my thermostat on the Linux machine.
I'm running Linux Mint 21 Cinnamon 5.4.12. I installed Mono, downloaded Yabe, and am running it with the command "mono ./Yabe.exe". The Win11 laptop rules out thermostat setup/network issues. In the Yabe log window I get a message that says "error loading plugins". I didn't try to install any plugins, so I don't know where this is coming from, and I'm not sure if it's even the root cause. Initially I just left the Yabe folder in my Downloads folder; I also moved it to /usr/bin, but that didn't solve anything. Any suggestions would be appreciated. I would really like not to have to use Win11, as it is a memory hog.
A similar question was raised on sourceforge but the answers have not helped me.
https://sourceforge.net/p/yetanotherbacnetexplorer/discussion/general/thread/1e78874922/?limit=25
Thank you for the suggestions. I ran a Wireshark capture with the filter "udp and port 47808" and received an i-Am 100001 from the thermostat at 192.168.0.150, which is the static address I assigned. Like I said, since I literally have a Win11 laptop sitting beside this one with Yabe installed that sees the thermostat just fine, that rules out most network/router issues. Also, I currently have the Linux firewall turned off. I believe it must be some bug with the Yabe installation on this version of Linux. I keep wanting to get away from Windows and rely solely on Linux, and then I run into issues like this that make me realize why it's not universally adopted in industry.
At least for Windows, I believe the plug-in DLLs are not strictly necessary; you could drop the relevant plug-in DLLs alongside the 'Yabe.exe' binary (within the same folder). I've included a picture of the plug-in DLLs' filenames.
Are both the (BACnet) client machine and the server/thermostat machine using public IP addresses, or at least private IP addresses within the same subnet/network address range?
Have you got a Linux (and/or Windows) firewall blocking communication?
Can you see port 47808 open using the 'Nmap' tool?
Also, for generic reference, here is an answer of mine to a half-similar question (some of its points could also be relevant here).
Things worth considering:
Tools such as YABE, VTS and Wireshark - to learn from the success cases/successful instances of communication.
The network card (NIC) that your tools and/or libraries are using to send the ('service' request) messages - e.g. definitely don't mix routable addresses with non-routable 'private' addresses between the BACnet 'client' IP and the 'server' IP.
(UDPv4-only) 'Broadcasts' will only work upon the local network (- if a BBMD is not present & correctly set-up to relay the broadcast on to another part/hop of the "internetwork"/connected networks).
If you're unlucky with a particular device, your client port just might have to be 47808/0xBAC0; possibly for the broadcasts too.
Also try directed/'unicast' traffic/'service' requests - e.g. attempting to read the device object instance number (DOIN) of a target device; check that you are specifying the correct DOIN when firing a request at a device.
Does the target device have a BACnet router or BACnet gateway in front of it (and would therefore also need DNET & DADR paired values included as part of addressing it)?
If so, are you talking the same variant of BACnet, e.g. IP - as in BACnet/IP between both the (BACnet) 'client' & 'server'/serving device?
If it's a commercial/enterprise device, does it have an IP whitelist to allow for the processing of incoming requests?
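To take Yabe and Mono out of the equation entirely, one can send a raw Who-Is broadcast from the Linux box and watch for I-Am replies. The sketch below builds the standard BACnet/IP global-broadcast Who-Is frame; the bind address and timeout are assumptions, and binding the client to port 47808 matches the point above about some devices requiring that client port.

```python
# Minimal BACnet/IP Who-Is broadcast sender - a sanity check independent of Yabe.
import socket

BACNET_PORT = 47808  # 0xBAC0

def build_whois():
    """Build a BACnet/IP Who-Is frame for the global broadcast (DNET 0xFFFF)."""
    bvll = bytes([0x81, 0x0B, 0x00, 0x0C])               # BVLC: Original-Broadcast-NPDU, length 12
    npdu = bytes([0x01, 0x20, 0xFF, 0xFF, 0x00, 0xFF])   # version, dest present, DNET=0xFFFF, hop count
    apdu = bytes([0x10, 0x08])                           # Unconfirmed-Request, service choice Who-Is
    return bvll + npdu + apdu

def send_whois(bind_ip="0.0.0.0", timeout=3.0):
    """Broadcast Who-Is and print any replies (I-Am frames arrive as UDP datagrams)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
    sock.bind((bind_ip, BACNET_PORT))  # some devices reply only to client port 47808
    sock.settimeout(timeout)
    sock.sendto(build_whois(), ("255.255.255.255", BACNET_PORT))
    try:
        while True:
            data, addr = sock.recvfrom(1500)
            print("reply from", addr, data.hex())
    except socket.timeout:
        pass
    finally:
        sock.close()
```

If this script sees an I-Am but Yabe does not, the problem is in Yabe/Mono (e.g. NIC selection or the plugin load), not in the network.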
I'm posing a question here directly in relation to this issue on GitHub for node-serialport. In a nutshell, something that used to work fine in v4.x of the library no longer works in v6.x. I think it must have something to do with how the library is opening the COM port (options or something), and I suspect it's artificially limiting the power delivered over USB in the current version of the library.
I wrote the simplest scripts that I could to reproduce the problem (scripts posted in the issue) using:
NodeJS and v4.x of the library [works]
NodeJS and v6.x of the library [fails]
Python and PySerial equivalent [works]
Following up on a recommendation by the repository maintainer, I researched and found a Windows utility called drstrace that allowed me to capture logs of each of these scripts executing for a period of time (the logs are posted as attachments in the referenced issue).
Now I'm stuck, as I don't know how to make heads or tails of the drstrace logs, though I feel confident that the difference is probably evident in comparing the three files. I just don't know enough about how to read the drstrace logs and windows drivers and system calls to break through.
I realize posting this question here is something of an act of desperation, but I figure it's worth a shot. Hopefully it's clear that I've not lacked for effort pursuing this on my own; I'm just in over my head at this point and could use help getting further. Any guidance would be appreciated. Most awesome would be someone versed in this level of diagnostics giving it a look and reading the tea leaves. It would be great to contribute back to such an important open-source library.
Update 2017 Nov 10
I reached out to FTDI support asking:
I use the FT231X in many of my products. I need some help with
understanding how the Windows FTDI driver manages power. More to the
point, I'm hoping you can help me understand how to direct the driver
to allow the full 500mA allowed by USB to be delivered to my product
by a Windows computer.
The reply was:
Just use our FT_Prog utility to set the max VBUS current to 500 mA:
This drive current becomes available after the FT231X enumerates.
I haven't tried this advice yet, but I wanted to share it with anyone reading this. The fact remains that node-serialport 6.0.4 behavior differs from both node-serialport 4.0.7 behavior and pyserial behavior.
Here is an alternative theory you could look into:
Windows, interacting with v6.x, might handle the flow-control settings differently, which might leave your device in an unexpected state and cause your test to fail.
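One way to test this theory is to spell out the flow-control settings that the working PySerial script relies on implicitly, and compare them against what the v6 trace shows. A hedged sketch (the port name is a placeholder, and this is not the actual script posted in the issue):

```python
# Sketch: making PySerial's flow-control defaults explicit so they can be
# compared against node-serialport v6's open options and the drstrace logs.

def open_settings():
    """PySerial's relevant open defaults, spelled out explicitly."""
    return dict(baudrate=9600, rtscts=False, dsrdtr=False, xonxoff=False)

def open_port(port="COM3"):
    """Open the port and assert the modem-control lines explicitly."""
    import serial  # pip install pyserial

    ser = serial.Serial(port, **open_settings())
    # PySerial asserts DTR/RTS on open by default; setting them explicitly
    # makes the state unambiguous when comparing against the v6 trace.
    ser.dtr = True
    ser.rts = True
    return ser
```

If v6 opens the port with different DTR/RTS or handshake defaults than v4 did, a difference like this could explain a device that only misbehaves under the new library.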
I read a bit more about Windows drivers and how they manage power, and found that this is related to the hardware manufacturer. I don't think it's a failure of serialport itself, since it really just uses the drivers and adds nothing extra at that level.
I am a contributor to SerialPort and can tell you that it only offers bindings from the operating system to Node; it doesn't perform any actions itself, it only offers you an API. Read the following from Microsoft; they say you should ask your hardware vendor:
Power Management in Serial Port Drivers (Windows CE 5.0)
The minimum power management that a serial port driver can provide is to put the serial port hardware into its lowest power consumption state with the HWPowerOff function, and to turn the serial port hardware fully back on with the HWPowerOn function. Both of these functions are implemented in the lower layer. Beyond this minimal processing, a serial port driver can conserve power more effectively by keeping the port powered down unless an application has opened the serial port. If there is no need for the driver to detect docking events for removable serial port devices, the driver can go one step further and remove power from the serial port's universal asynchronous receiver-transmitter (UART) chip, if no applications are using the port.
Most serial port hardware can support reading the port's input lines even without supplying power to the serial line driver. Consult the documentation for your serial port hardware to determine what parts of the serial port circuitry can be selectively powered on and off, and what parts must be powered for various conditions of use.
Source:
https://msdn.microsoft.com/en-us/library/aa447559.aspx
About the changes from serialport v4 => v6: there is a new stream interface, but nothing changed in the core method of opening the port, and nothing changed in the bindings which open the port. node-serialport is a collection of bindings written in C++.
I use REDHAWK v2.1.0 to realize the AM demodulation part with three components.
Platform --> Xilinx Zynq 7035 (ARM Cortex-A9 x2)
Operating System (OS) --> embedded Linux.
When I connect the REDHAWK IDE on an external PC over Ethernet and display the waveform between the components, an abnormal sound occurs.
At that point, if I disconnect the LAN cable, the AM demodulation processing of REDHAWK inside the ARM ceases.
REDHAWK inside the ARM appears to be waiting for requests from the REDHAWK IDE on the external PC.
From this, it seems that abnormal noise occurs when requests from the REDHAWK IDE on the external PC are delayed.
How can I keep REDHAWK's AM demodulation processing inside the ARM running without stopping while the REDHAWK IDE on the external PC is connected and monitoring the waveform?
Environment is below.
CPU: Xilinx Zynq ARM Cortex-A9, 2 cores, 600 MHz
OS: Embedded Linux, kernel 3.14 with real-time patch
Frame length: 5.333 ms (48 kHz sampling, 256 samples)
I have seen similar, if not identical, issues when running on an ARM board. Tracking down the exact issue may be difficult; in my experience it hasn't been REDHAWK-specific but has really been an issue with omniORB or its configuration. I believe one of the fixes for me was recompiling omniORB rather than using the omniORB package provided by my OS (which didn't make any sense to me at the time, as I used the same flags and build process as the package maintainer).
First, I would confirm the issue is specific to ARM: if it's easy enough, set up the same components, waveforms, etc. on a second x86_64 host and validate that the problem does not occur there.
Second, I would try a "quick fix" of setting the omniORB timeouts on the ARM host by adding the following to the /etc/omniORB.cfg file:
clientCallTimeOutPeriod = 2000
clientConnectTimeOutPeriod = 2000
This sets a 2-second timeout on CORBA interactions, for both the connect portion and the call-completion portion. In the past this has served as a quick fix for me, but it does not address the underlying issue. If this "fixes" it for you, then you've at least narrowed part of the issue down, and you could enable omniORB debugging using the traceLevel configuration option to find which call is timing out. See this sample configuration file for all options.
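For reference, the combined /etc/omniORB.cfg entries might look like the following. The traceLevel and traceInvocations values are my suggestions, not from the post; consult the omniORB documentation for the full range of trace levels.

```
# /etc/omniORB.cfg - timeout quick fix plus debug tracing
clientCallTimeOutPeriod = 2000
clientConnectTimeOutPeriod = 2000
traceLevel = 25
traceInvocations = 1
```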
If you want to dive into the underlying issues, you'd need to see what the IDE and framework are doing when things lock up. With the IDE this is easy: find the PID of the Java process and run kill -3 <pid>, and a full stack trace will be printed in the terminal that is running the IDE. This can give you hints as to which calls are locked up. For the framework, you'll need to use GDB, connect to the process in question, and tell GDB to print the stack trace. You'd have to do some investigation ahead of time to determine which process is locking up.
If it ends up being an issue with the Java CORBA implementation on x86_64 talking with the C++ CORBA implementation on ARM you could also try launching / configuring / interacting with the ARM board via the REDHAWK python API from your x86_64 host. This may have better compatibility since they both use the same omniORB CORBA implementation.
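The Python API approach mentioned above might look roughly like this sketch (the domain and waveform names are placeholders; a REDHAWK install providing `ossie.utils.redhawk` is assumed):

```python
# Sketch: driving the ARM-hosted domain from the x86_64 host via the REDHAWK
# Python API, so both ends use the same omniORB-based CORBA implementation.

def attach_and_launch(domain_name="REDHAWK_DEV", waveform="AM_Demod"):
    """Attach to a running domain and launch a waveform. Names are placeholders."""
    from ossie.utils import redhawk  # requires a REDHAWK install

    dom = redhawk.attach(domain_name)   # connects via the naming service
    app = dom.createApplication(waveform)
    app.start()
    return dom, app
```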
I would like to retrieve the IPoIB frame bits for all IPoIB frames on the fabric, regardless of whether they are destined (at the LID + QPN level) for my machine or not.
Also, I should be able to re-inject the modified IPoIB frames directly into the InfiniBand HCA ports from the Linux kernel.
The logic for this has to be at the kernel level.
So, in order to achieve this, do I need to build a separate kernel module, or modify the IPoIB driver or the IPoIB network interface?
Note: I have just started learning Linux kernel module development for my project. I'm sorry if it is not the appropriate place to post this question.
You are going to have a big problem receiving IPoIB packets not destined for your machine. The fabric forwards packets based on the destination LID, and if the LID is not associated with your local port, you won't receive the packet.
I've installed the UHD device successfully on REDHAWK 1.9. I've tried adjusting the frontend tuner allocation property of the device, but when I try to run it, there is no activity showing up when I monitor the ports.
I don't even know if the REDHAWK device works properly, because even when I specify a random IP address, the device still runs normally.
So my question is: How can I use the USRP_UHD device in REDHAWK 1.9.0 to collect and demodulate a signal with a USRP N210?
I know the USRP is working because I am able to create and execute a simple demodulator in GNURadio, but I cannot replicate this in REDHAWK 1.9.
I'm able to start the component without errors, but nothing shows up when I monitor the ports.
You must first launch a Domain, create and launch a Node that contains a USRP_UHD Device, and configure the USRP_UHD Device with the IP address of the target USRP. Then, you must create and launch a Waveform with components that do the signal processing required (i.e. demodulation) and a usesdevice relationship that contains an allocation of the USRP_UHD Device.
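As a rough illustration of what the usesdevice relationship in the waveform's *.sad.xml might look like, here is a sketch carrying an FEI tuner allocation. All IDs and values are placeholders, and the element and property names should be verified against the manual for your REDHAWK version, since the SAD schema and FEI property names have evolved across releases:

```xml
<!-- Sketch only: verify element/property names against your version's manual -->
<usesdevicedependencies>
  <usesdevice id="FrontEndTuner_1">
    <structref refid="FRONTEND::tuner_allocation">
      <simpleref refid="FRONTEND::tuner_allocation::tuner_type" value="RX_DIGITIZER"/>
      <simpleref refid="FRONTEND::tuner_allocation::allocation_id" value="my_alloc_id"/>
      <simpleref refid="FRONTEND::tuner_allocation::center_frequency" value="100000000"/>
      <simpleref refid="FRONTEND::tuner_allocation::bandwidth" value="1000000"/>
      <simpleref refid="FRONTEND::tuner_allocation::sample_rate" value="2000000"/>
      <simpleref refid="FRONTEND::tuner_allocation::device_control" value="true"/>
    </structref>
  </usesdevice>
</usesdevicedependencies>
```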
Using REDHAWK 1.9, I tested the behavior of the USRP_UHD device with bad IP addresses to see if I could reproduce what you described. When I launched a Node containing a USRP_UHD configured with an invalid IP address, the following was reported:
ERROR:USRP_UHD_i - USRP COULD NOT BE INITIALIZED!
WARN:USRP_UHD_i - CAUGHT EXCEPTION WHEN INITIALIZING USRP. WAITING 1 SECOND AND TRYING AGAIN
ERROR:USRP_UHD_i - USRP COULD NOT BE INITIALIZED!
ERROR:USRP_UHD_i - Unable to initialize USRP!
I then configured the IP address with the correct value and the USRP_UHD initialized properly. Lastly, I configured an invalid IP address using the IDE, and the following error was reported in addition to the error above being printed again:
Failed to set property 'USRP_ip_address', due to Invalid Configuration. Unable to initialize USRP based on these properties
IDL:CF/PropertySet/InvalidConfiguration:1.0
Since you are not seeing this, there must be a problem with your setup. Please describe your setup and the steps you take to launch the REDHAWK USRP_UHD device, how you are "adjusting the frontend tuner allocation property", and what you mean when you say the device "can still run normally". It's unclear whether you are attempting to allocate properly -- are you doing this using a usesdevice relationship in the waveform SAD file, as described in other SO posts?