How can I run a distributed domain under RedHawk - redhawksdr

I am trying to create a distributed domain using RedHawk 2.0.1 and cannot find enough information on setting it up in the manual. I have two related issues. I want to run the domain manager on the same host as the IDE but run one or more components on another node. I see how to create a new node project, but I do not see how to specify the network location it should run on. I can add it to the domain, but it simply runs two device managers on the local host. I also do not see details of how to make specific components run on the alternate node. Does this require manually adding allocation properties?
The related issue is that I would like to use a non-x86 node as the remote node. I am trying to use an ARM processor, and following the instructions in the Sub$100 manual I was able to build and install the runtime system on my ARM board, but I find that the GPP device's GPP.spd.xml still has x86 as the processor name while the prf.xml has arm as the required property.
The manual seems to indicate that the binaries for all nodes will be in the sdr of the domain manager. Am I supposed to copy the sdr entries for my ARM GPP device and all components back to the sdr of the domain manager host, so that they will be deployed back to my ARM at domain and waveform launch?
Are there better detailed instructions for distributed domains somewhere that I am missing?

I believe the last supported version of REDHAWK for the Sub$100 project was 1.10, so we're in uncharted territory. That being said, let's take a stab at it.
The first thing you should do is make sure the /etc/omniORB.cfg file for your Domain Manager looks like this:
InitRef = NameService=corbaname::<external IP>:2809
InitRef = EventService=corbaloc::<external IP>:11169/omniEvents
where <external IP> should be replaced with your network IP (i.e., not localhost or 127.0.0.1). Restart the CORBA naming and event services with this command:
sudo $OSSIEHOME/bin/cleanomni
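For example, assuming the Domain Manager host is reachable at 192.168.1.10 (an illustrative address, not one from this thread), the two InitRef lines would read:
InitRef = NameService=corbaname::192.168.1.10:2809
InitRef = EventService=corbaloc::192.168.1.10:11169/omniEvents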
The next step is to configure your ARM device to point to the Domain Manager. Edit the /etc/omniORB.cfg file on the ARM device to match the one from your Domain Manager, including the IP address. Note that you don't have to start the naming and event services on the ARM device.
Now, to run the GPP on the ARM device, you will have to create that node on the ARM device, since the Domain doesn't know about that device yet and cannot access its filesystem. Page 16 of the 1.10 version of the Sub$100 document (http://ufpr.dl.sourceforge.net/project/redhawksdr/redhawk-doc/1.10.0/REDHAWK-Sub100-Manual-v1.10.0.pdf) has the instructions for installing the GPP.
Note that the newest version of the GPP is actually a C++ Device now, so the second step should be 'cd framework-GPP/cpp' and the third step should be 'git checkout 2.0.1'.
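Put together, the install sequence would look roughly like this (a sketch under assumptions: the repository URL and the autotools build sequence are my guesses at the standard REDHAWK asset layout, not taken from the Sub$100 document):
# assumed repo URL and standard autotools steps; adjust to match the document
git clone https://github.com/RedhawkSDR/framework-GPP.git
cd framework-GPP/cpp
git checkout 2.0.1
./reconf && ./configure && make
sudo -E make install
Once that's installed, there are still a couple more issues to take care of. First, run the following command: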
$SDRROOT/dev/devices/GPP/cpp/devconfig.py --location $SDRROOT/dev/devices/GPP
That will configure your GPP to recognize that it is on an ARM platform (as long as your processor is an armv7l processor).
Next, run the following:
$SDRROOT/dev/devices/GPP/cpp/create_node.py --domainname <RH Domain Name>
That will actually create the DeviceManager profile that will contain your GPP.
The final step involves making sure that the node will be configured correctly. Check out Page 21, Step 5. Basically, you can remove the x86_64 implementation and replace any instances of 'x86' with 'armv7l'.
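For instance, after the edits the processor element in GPP.spd.xml should end up looking something like this (a hand-edited fragment following the standard SPD schema; the surrounding implementation block is whatever your file already contains):
<!-- inside the remaining C++ implementation block of GPP.spd.xml -->
<processor name="armv7l"/>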
As for your question about building your components: yes, you have to build them for the platform of interest and then install them to the Domain Manager SDRROOT. If you have a cross-compiler set up to build your components (and the framework), this will make your life a lot easier. However, if you don't, the workaround is to build the components on your ARM device, then install the XML files and the executable to the Domain Manager's SDRROOT. In order to make any components work with your ARM GPP, they will need to have an ARM implementation in their SPD with a processor name that matches that of your GPP.
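As a concrete illustration of that workaround (the component name is hypothetical and /var/redhawk/sdr is the default SDRROOT; adjust both to your setup), run something like this on the Domain Manager host:
# copy a component built on the ARM board into the Domain Manager's SDRROOT
scp -r arm-host:/var/redhawk/sdr/dom/components/MyComponent \
    /var/redhawk/sdr/dom/components/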
I know that's a lot and I haven't run through these instructions in a while, so let me know if you have any questions or anything doesn't work.

Apparently replies are very limited in length, so I'll call this an answer. Thanks for your response. I have actually tried part of this but will try to see if your information gets me any further. After writing this question I explored a little further. I found that the code I had compiled on the ARM and installed still had "x86" and "x86_64" in the domain profile for the device manager and no "armv7l", so I patched the profile and tried starting the device manager on the ARM manually (after setting the omniORB.cfg to point to the name server on the domain manager host). It started up fine and said it was trying to connect, and the name server on the domain manager host now had an entry for the ARM device manager, but the IDE did not list the additional device manager. If I killed the ARM device manager, it said that it was interrupted while waiting to register, so I assume that the device manager registered with the name server but never got a reply from the domain manager. This does not make me hopeful that your steps will work, but I'll give them a try.

Update. Following more closely the steps in the Sub$100 document, it appears that $SDRROOT/dev/devices/GPP/cpp/devconfig.py did not edit GPP.spd.xml to put in the correct processor and compiler version, but after hand-editing these I was able to launch the full domain (DomainManager, DeviceManager, GPP device) on the ARM processor and was able to connect to this running domain from the IDE running on x86. After exporting and rebuilding my waveform components and editing their domain profiles, I was able to use the IDE to successfully launch a very small three-component waveform and control it. So running the entire domain on the ARM works OK.
But I still cannot start the DeviceManager on the ARM and have it register with the DomainManager on x86 (after editing the DCD to point to the x86 domain), i.e., running a distributed domain with two nodes. It starts and says it is registering with the DomainManager, and it must partly succeed because the DeviceManager shows up in the NamingService under the domain, but the IDE never shows the new DeviceManager in the domain, and the DeviceManager never starts the GPP device. If the DeviceManager is killed, it prints "Interrupted waiting to register with DomainManager", so even though it got registered into the NamingService it appears that the DomainManager never replied to the registration request.
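One way to see how far registration actually got is to list the naming contexts from the domain manager host with omniORB's nameclt tool (the domain name below is illustrative):
nameclt list
nameclt list REDHAWK_DEV
If the node's context appears there but the IDE still doesn't show it, the reply path from the DomainManager back to the ARM DeviceManager is a likely suspect, e.g. an omniORB endpoint advertising a non-routable address.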

Related

Unable to inject errors with Einj (mce-test, ras-tools)

I want to inject memory errors on my system to check whether the RAS/EDAC subsystem really works and logs errors on my memory (during boot or at runtime). I came across many tools but I don't know which one to actually trust. The machine I want to test is a Sandy Bridge machine running Linux kernel 5.15.0-58-generic. Specifically, I want to test my system with the Einj tool (https://docs.kernel.org/firmware-guide/acpi/apei/einj.html). Although I followed the earlier steps in the link (BIOS supports Einj; the CONFIG_DEBUG_FS, CONFIG_ACPI_APEI, and CONFIG_ACPI_APEI_EINJ config parameters are set on my kernel), the files mentioned in the document (/sys/kernel/debug/apei/einj etc.) are not present. How can I proceed with this tool? Or is there a better way/tool to inject memory errors to check the EDAC subsystem?
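Two hedged things worth checking that the question doesn't mention: debugfs must actually be mounted, and when CONFIG_ACPI_APEI_EINJ is built as a module it must be loaded before the einj files appear:
sudo mount -t debugfs none /sys/kernel/debug   # only if not already mounted
sudo modprobe einj                             # needed when EINJ is built as a module
ls /sys/kernel/debug/apei/einj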

Yabe on Linux unable to locate bacnet device

I have a Win11 laptop and I installed Yabe and was easily able to explore bacnet objects on my home thermostat. I'm trying to duplicate this on a Linux Laptop. My issue is that Yabe is not finding my thermostat on the Linux machine.
I'm running Linux Mint 21 Cinnamon 5.4.12. I installed Mono and downloaded Yabe. I am running it with the command "mono ./Yabe.exe". The Win11 laptop rules out thermostat setup/network issues. In the Yabe log window I get a message that says "error loading plugins". I didn't try to install any plugins, so I don't know where this is coming from, and I'm not sure if it's even the root cause. Initially I just left the Yabe folder in my Downloads folder. I also moved it to /usr/bin, but that didn't solve anything. Any suggestions would be appreciated. I would really like not to have to use Win11 as it is a memory hog.
A similar question was raised on sourceforge but the answers have not helped me.
https://sourceforge.net/p/yetanotherbacnetexplorer/discussion/general/thread/1e78874922/?limit=25
Thank you for the suggestions. I ran a Wireshark capture with the filter "udp and port 47808" and received I-Am 100001 from the thermostat at 192.168.0.150, which is the static address I assigned. Like I said, since I literally have a Win11 laptop sitting beside this one with Yabe installed and it sees the thermostat just fine, that rules out most network/router issues. Also, I currently have the Linux firewall turned off. I believe it must be some bug with the Yabe installation on this version of Linux. I keep wanting to get away from Windows and rely solely on Linux, and then I run into issues like this that make me realize why it's not universally adopted in industry.
At least for Windows, I believe that the plug-in DLLs are not strictly necessary; you could drop the relevant plug-in DLLs alongside the 'YABE.exe' binary, within the same folder. I've included a picture of the plug-in DLLs' filenames.
Are both the (BACnet) client machine and the server/thermostat machine using a public IP address, or at least a private IP address within the same subnet/network address range?
Have you got a Linux (and/or Windows) firewall blocking communication?
Can you see port 47808 open using the 'NMap' tool? (See the example below.)
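For that last check, something like this should work (a hedged example; the thermostat IP is the one mentioned earlier in the thread, and UDP scans need root):
sudo nmap -sU -p 47808 192.168.0.150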
Also, for generic reference, here is an answer of mine to a half-similar question; some points could also be relevant here.
Things worth considering:
Tools such as YABE, VTS and Wireshark - to learn from the successful instances of communication.
The network card (NIC) that your tools and/or libraries are using/selecting to send the ('service' request) messages - e.g. definitely don't mix routable addresses with non-routable 'private' addresses (between the BACnet 'client' IP & the 'server' IP).
(UDPv4-only) 'broadcasts' will only work upon the local network, unless a BBMD is present & correctly set up to relay the broadcast on to another part/hop of the "internetwork"/connected networks.
If you're unlucky with a particular device, your client port just might have to be 47808/0xBAC0, and just possibly for the broadcasts too.
Also try directed/'unicast' traffic/'service' requests - e.g. attempt to read the device object instance # (DOIN) of a target device, and check you are specifying the correct DOIN when firing a request at a device.
Does the target device have a BACnet router or BACnet gateway in front of it (which would therefore also require the inclusion of a DNET & DADR pair of values as part of addressing it)?
Are you talking the same variant of BACnet, e.g. IP - as in BACnet/IP - between both the (BACnet) 'client' & 'server'/serving device?
If it's a commercial/enterprise device, does it have an IP whitelist to allow for the processing of incoming requests?

No name or address. CBCentralManager no longer working on macOS 12

Ever since I updated macOS to macOS 12, I have had trouble using CoreBluetooth.
In one of my apps, I list all BLE devices using the CBCentralManager class.
This has worked for years. But now, when I start my app, the following output appears in Xcode:
[CoreBluetooth] No name or address
[CoreBluetooth] No name or address
[CoreBluetooth] No name or address
[CoreBluetooth] No name or address
[CoreBluetooth] No name or address
The macOS Console app shows many messages like this (I don't know if this is related; the process is bluetoothd instead of my app):
Destroying pairing agent for session <appname>
Erasing session 0x7f795824af00 from SessionMap for "appname-2890-84"
Received 'stop scan' request from session "com.apple.bluetoothd-central-143-2" updateScanParams:YES shouldUpdateState:YES
Stopping scan as there are no remaining scan agents permitted to scan
If my app is not running, the bluetoothd process seems to be rather quiet. Once started, the bluetoothd process seems to have some kind of problem. The question is: which one?
Disabling the Sandbox did not change anything, so I don't think that it has something to do with missing permissions.
I also built a very basic example in a new app. I instantiated a new CBCentralManager and started scanning. The devices were discovered.
In my main app, no delegate function is triggered. None at all.
Did anyone encounter the same issue?
UPDATE: It appears that Apple has fixed the bug in macOS 12.3.
Original answer below applies to 12.0, 12.1 and 12.2.
It appears that Apple has updated macOS to behave more like iOS. The docs for scanForPeripheralsWithServices:options: say:
Your app can scan for Bluetooth devices in the background by specifying the bluetooth-central background mode. To do this, your app must explicitly scan for one or more services by specifying them in the serviceUUIDs parameter. The CBCentralManager scan option has no effect while scanning in the background.
Command line programs cannot ever be considered the foreground app since they are not a .app and therefore the background scanning rules apply. (This is conjecture, but I suspect that NSWorkspace.frontmostApplication might be used to determine the "foreground" application).
If background scanning is acceptable and the Bluetooth devices in use include a service UUID in the advertising data, then a list of service UUIDs can be supplied to scanForPeripheralsWithServices:options:.
If not, then you have to create a signed .app to use foreground scanning.
Some additional details and an ugly workaround for running a command line tool without a GUI as a .app (outside of the Xcode debugger) can be found at https://github.com/hbldh/bleak/issues/720. This link is Python-specific, but one should be able to extrapolate it to other environments.
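For reference, the kind of .app wrapper that workaround describes can be sketched in a few shell commands (everything here - tool name, bundle identifier, plist keys - is illustrative, not taken from the linked issue):
# wrap a command line tool in a minimal .app bundle (names are hypothetical)
mkdir -p MyTool.app/Contents/MacOS
cp mytool MyTool.app/Contents/MacOS/MyTool
cat > MyTool.app/Contents/Info.plist <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>CFBundleExecutable</key><string>MyTool</string>
    <key>CFBundleIdentifier</key><string>com.example.mytool</string>
    <key>NSBluetoothAlwaysUsageDescription</key><string>Scans for BLE devices.</string>
</dict>
</plist>
EOF
codesign --force --deep --sign - MyTool.app   # ad-hoc signature
open MyTool.app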

How to not stop RedHawk processing even if there is no request from RedHawk-IDE

I use REDHAWK v2.1.0 to implement the AM demodulation part with three components.
Platform --> Xilinx Zynq 7035 (ARM Cortex-A9 x 2)
Operating System (OS) --> embedded Linux.
When I connect the REDHAWK IDE on an external PC over Ethernet and display the waveform between the components, an abnormal sound occurs.
At this point, if I disconnect the LAN cable, the AM demodulation processing of REDHAWK inside the ARM ceases.
REDHAWK inside the ARM appears to be waiting for requests from the REDHAWK IDE on the external PC.
From this, it seems that the abnormal noise occurs when requests from the REDHAWK IDE on the external PC are delayed.
How can I keep REDHAWK's AM demodulation processing inside the ARM running, without it stopping, while the REDHAWK IDE on the external PC is connected and monitoring the waveform?
The environment is below.
CPU: Xilinx Zynq, ARM Cortex-A9, 2 cores, 600 MHz
OS: Embedded Linux, kernel 3.14 with real-time patch
Frame length: 5.333 ms (48 kHz sampling, 256 samples)
I have seen similar, if not identical, issues when running on an ARM board. Tracking down the exact issue may be difficult; in my experience it hasn't been REDHAWK-specific and has really been an issue with omniORB or its configuration. I believe one of the fixes for me was recompiling omniORB rather than using the omniORB package provided by my OS. (Which didn't make any sense to me at the time, as I used the same flags & build process as the package maintainer.)
First, I would confirm this issue is specific to ARM: if it's easy enough, set up the same components, waveforms, etc. on a second x86_64 host and validate that the problem does not occur there.
Second, I would try a "quick fix": set the omniORB timeouts on the ARM host by adding the following to the /etc/omniORB.cfg file:
clientCallTimeOutPeriod = 2000
clientConnectTimeOutPeriod = 2000
This will set a 2-second timeout on CORBA interactions, for both the connect portion and the call-completion portion. In the past this has served as a quick fix for me, but it does not address the underlying issue. If this "fixes" it for you, then you've at least narrowed part of the issue down, and you could enable omniORB debugging using the traceLevel configuration option to find which call is timing out. See omniORB's sample configuration file for all options.
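For that debugging step, a couple of lines like these in /etc/omniORB.cfg should make omniORB log its call activity (the exact level is a matter of taste; 25 is a common verbose choice):
traceLevel = 25
traceInvocations = 1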
If you want to dive into the underlying issue, you'd need to see what the IDE and framework are doing when things lock up. With the IDE this is easy: simply find the PID of the Java process and run kill -3 <pid>, and a full stack trace will be printed in the terminal that is running the IDE. This can give you hints as to which calls are locked up. For the framework, you'll need to use GDB: connect to the process in question and tell GDB to print the stack trace. You'd have to do some investigation ahead of time to determine which process is locking up.
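On the framework side that boils down to standard GDB usage, roughly as follows (the PID placeholder is whichever component or device process you suspect):
gdb -p <pid>
(gdb) thread apply all bt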
If it ends up being an issue with the Java CORBA implementation on x86_64 talking to the C++ CORBA implementation on ARM, you could also try launching/configuring/interacting with the ARM board via the REDHAWK Python API from your x86_64 host. This may have better compatibility, since both sides would then use the same omniORB CORBA implementation.

Linux - generic network configuration

I am creating a set of utilities to configure a Linux system for a particular network and domain configuration. One of the steps of this configuration is configuring the network interface. Although I can easily configure things like Samba or NTP (since they have the same configuration file syntax regardless of distro), networking seems to be a bit more difficult.
With Debian using /etc/network/interfaces, Fedora and Red Hat using /etc/sysconfig/network-scripts, and other distros using their respective network managers (I personally use netctl on Arch), getting a script to be able to configure a network interface seems nearly impossible. Is there any portable way of configuring a network interface?
Failing that, what would be a good way to create 'modules' for various distros? My project will use Autotools for configuring and installing the utilities, so some 'compilation' can be done then, but how can I detect which network manager is in use? Problems with this approach include what to do when no module exists for the network manager in use. Or should I leave it to the compiling user to decide which 'module' to enable?
This question then extends to packaging - when creating this as a package, how can I support the multiple network managers in existence, even on one distro? In Arch, for example, the user may have netctl installed, which uses one set of configuration files, or Wicd or perhaps GNOME network-manager.
You can create your software with a plug-in API in mind. The API describes how a network interface can be configured. You would then have one plug-in for NetworkManager, one for Wicd, one for /etc/network/interfaces, etc. The user can choose which plug-ins to compile during the ./configure stage, so that they don't need to install unneeded dependencies.
For multiple enabled plug-ins (for example NetworkManager and Wicd), you can use D-Bus to check which service is registered to manage network interfaces.
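As a sketch of that runtime check (hedged: org.freedesktop.NetworkManager is NetworkManager's real well-known bus name, but whether its presence means it manages a given interface is up to your plug-in logic):
dbus-send --system --print-reply --dest=org.freedesktop.DBus \
  /org/freedesktop/DBus org.freedesktop.DBus.NameHasOwner \
  string:org.freedesktop.NetworkManager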
