balenaOS: PyGObject NM (NetworkManager?) not found/available inside container

I'm trying to get the device's network info inside a container running on balenaOS 2.88.4+rev0 (supervisor version 14.3.3) on a Raspberry Pi Compute Module 4, using PyGObject or dbus-python on Python 3.9.2. I've been using these examples, which are also mentioned in this video (I followed along to satisfy the other dependencies/prerequisites, listed below).
In order to get this working, there are a few things that need to be done (a consolidated docker-compose sketch follows the list):
docker-compose.yml:
set network_mode: "host"
add the following label: io.balena.features.dbus: '1'
(optional?) add:
cap_add:
- NET_ADMIN
the container already had privileged: true, so in this case the cap_add shouldn’t make a difference
add following environment variable: DBUS_SYSTEM_BUS_ADDRESS: "unix:path=/host/run/dbus/system_bus_socket"
add dependencies; I've tried both ways, but neither seems to work properly (see later)
install the network-manager package in the desired container. This allows the nmcli command to be used, which does work for me and shows the correct info. The network-manager package should also include libnm, although I'm not entirely sure of this. (In these docs the same implementation is used as in the examples linked above, as well as in the next section.)
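To make the list above concrete, here's roughly what the relevant service entry in my docker-compose.yml looks like (the service name and build context are placeholders; the keys are exactly the ones listed above):

services:
  my-service:                  # placeholder name
    build: ./my-service        # placeholder build context
    network_mode: "host"
    privileged: true
    cap_add:
      - NET_ADMIN              # probably redundant given privileged: true
    labels:
      io.balena.features.dbus: '1'
    environment:
      DBUS_SYSTEM_BUS_ADDRESS: "unix:path=/host/run/dbus/system_bus_socket"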
In both cases (the two ways of adding dependencies mentioned in point 2), here's what I get:
import gi
→ no problem
gi.require_version('NM', '1.0') (I’m assuming NM stands for NetworkManager?)
→ ValueError: Namespace NM not available
from gi.repository import NM
→ ImportError: cannot import name NM, introspection typelib not found
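For reference, this is the kind of minimal libnm usage I'm ultimately after (standard PyGObject/libnm calls; the exact queries are just illustrative):

import gi
gi.require_version('NM', '1.0')   # this is the line that fails in the container
from gi.repository import NM

# Connect to NetworkManager over D-Bus and list the devices it manages.
client = NM.Client.new(None)
print('NM version:', client.get_version())
for device in client.get_devices():
    print(device.get_iface(), device.get_device_type())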
Of course I've tried searching for these errors online, but documentation/information seems extremely sparse.
Whether I run this inside Docker on my local machine or use balena push to push it to my device (local mode enabled for testing), these errors occur.
When I run this in a virtual environment (using Python 3.9.10) on ZorinOS 16.2 (heavily based on Ubuntu 22.04) it works without issues, leading me to believe there’s still some package or setting missing…
What am I missing or doing wrong?

Related

XSCT executes command in interactive shell but not within script

First, note that I am using the Xilinx SDK 2018.2 on Kubuntu 22.04 because of my company's policy. I know from research that the command I'm using is deprecated in newer versions, but in the version I am using it works flawlessly - kind of... But read for yourself:
My task is to automate all steps of the FPGA build to create a pipeline which automatically builds and tests the FPGAs. To achieve this, I need to build the code - this works flawlessly in XSDK. For automation this also has to work on the command line, so I followed the manual to find out how that is achieved. Everything works as expected if I type the commands into the interactive prompt, as shown here:
user@ubuntuvm:~$ xsct
****** Xilinx Software Commandline Tool (XSCT) v2018.2
**** Build date : Jun 14 2018-20:18:43
** Copyright 1986-2018 Xilinx, Inc. All Rights Reserved.
xsct%
Then I can enter the commands I need to import all needed files and projects (hw, bsp, main project). With this toolset, everything works as expected.
Because I want to automate it via a pipeline, I decided to pack this into a script for easier access. The script contains exactly the commands I entered in the interactive shell and therefore looks like this:
user@ubuntuvm:~/gitrepos/repository$ cat ../autoBuildScript.tcl
setws /home/user/gitrepos/repository
openhw ./hps_packages/system.hdf
openbsp ./bsp_packages/system.mss
importprojects ./sources/mainApp
importprojects ./bsp_packages
importprojects ./hps_packages
regenbsp -bsp ./bsp_packages/system.mss
projects –clean
projects -build
The commands are identical to the ones entered in the interactive CLI tool; the only difference is that they are now packed into a script. Yet now the build no longer completes. I get the following error:
user@ubuntuvm:~/gitrepos/repository$ xsct ../autoBuildScript.tcl
INFO: [Hsi 55-1698] elapsed time for repository loading 1 seconds
Starting xsdk. This could take few seconds... done
'mainApp' will not be imported... [ALREADY EXIST]
'bsp_packages' will not be imported... [ALREADY EXIST]
'hps_packages' will not be imported... [ALREADY EXIST]
/opt/Xilinx/SDK/2018.2/gnu/microblaze/lin
unexpected arguments: –clean
while executing
"error "unexpected arguments: $arglist""
(procedure "::xsdb::get_options" line 69)
invoked from within
"::xsdb::get_options args $options"
(procedure "projects" line 12)
invoked from within
"projects –clean"
(file "../autoBuildScript.tcl" line 8)
I inserted projects -clean only because I had already gotten the error with projects -build and wanted to check whether it also happens with another argument.
On the internet I didn't really find anything matching my specific problem. I also stuck strictly to the official manual, in which the command is used just as I use it - except that there it works.
I've also checked the line endings (set to UNIX), because I suspected xsct might be reading a newline character or something similar - with no result. The error also occurs when I create the bsp and hardware from scratch. To me the error looks like an internal Xilinx one, but let me know what you think.
So, it appears that I just fixed the problem on my own. Thanks to everyone reading for being my rubber ducky.
Apparently version 2018.2 of XSDK has a few bugs, including inconsistent command interpretation. For some reason the command works in the interactive shell but not in a script - because the command is in its short form. I just learned from a Xilinx tutorial that projects -build is - even though it works - apparently not the full command. You usually need to clarify that the command comes from the SDK, like this: sdk projects -build. The interactive shell seems to ignore this fact for some reason - and so does the script for every command except projects. Therefore I added the sdk prefix to all commands I used from the SDK, just to be safe; the resulting script is sketched below.
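For reference, here's the script with the fix applied (the same commands as above, each with the sdk prefix; sketched from the description, so double-check against your setup):

sdk setws /home/user/gitrepos/repository
sdk openhw ./hps_packages/system.hdf
sdk openbsp ./bsp_packages/system.mss
sdk importprojects ./sources/mainApp
sdk importprojects ./bsp_packages
sdk importprojects ./hps_packages
sdk regenbsp -bsp ./bsp_packages/system.mss
sdk projects -clean
sdk projects -build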
I cannot believe that I just spent 2 days debugging an error whose fix contains only 3 letters (+1 whitespace).
Thanks everybody for reading and have a nice day

Error while cross-compiling Chromium for ARM from x86_64

I am trying to build chromium from source code following instructions at
https://chromium.googlesource.com/chromium/src/+/master/docs/linux/build_instructions.md
I have successfully built and tested Chromium for an amd64 device. Now I am trying to cross-compile it for an ARM device. However, when I set the flag
target_cpu = "arm"
using
gn gen out/Debug --args='target_cpu="arm"'
I get the following error:
ERROR at //build/config/linux/atk/BUILD.gn:13:1 (//build/toolchain/linux:clang_x86_v8_arm): Assertion failed.
assert(current_toolchain == default_toolchain)
^-----
See //ui/accessibility/BUILD.gn:294:20: which caused the file to be included.
configs += [ "//build/config/linux/atk" ]
Any leads would be appreciated.
I have had the same issue when I tried building Chromium on my Linux x64 server for an ARM-based device. I was able to get past it by commenting out the assert (asserts are usually for sanity checking, based on my understanding). You can do this by modifying the file build/config/linux/atk/BUILD.gn and putting a # in front of the line assert(current_toolchain == default_toolchain).
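Concretely, the one-line change in build/config/linux/atk/BUILD.gn looks like this (GN treats # as a comment):

# Commented out so the secondary ARM toolchain can include this config:
# assert(current_toolchain == default_toolchain)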
I followed the guidance on the webpage suggested by Asesh (Recipe 2), but the issue we are facing is that current_toolchain and default_toolchain are set to different values, which causes the assert() to fail; the instructions on that page (https://chromium.googlesource.com/chromium/src/+/master/docs/linux/chromium_arm.md) don't seem to relate to our problem.
To explain why I commented out the assert(): reading https://www.chromium.org/developers/gn-build-configuration, I found the section "Overriding the CPU architecture" relevant.
As we are both configuring target_cpu="arm", that alone should be enough to configure and build Chromium for our ARM devices (it causes current_toolchain to be set to a different toolchain than default_toolchain, hence the assert() error).
After commenting out the assert() I was able to run gn gen out/xxx and gn args out/xxx, and autoninja built Chromium successfully.
This issue has been fixed in Chromium:
https://chromium-review.googlesource.com/c/chromium/src/+/3074662

How Do We Wire Up Converted Unit Tests in Doppl?

I am attempting to replicate what I see in PartyClickerSample in a fresh project, and am having difficulty with the pod and using it from Swift to set up the unit tests.
Based on PartyClickerSample, AFAICT, what I am supposed to do is put a Podfile like this in the iosTest/ directory (that contains a newly-created Xcode project):
platform :ios, '9.0'
target 'iosTest' do
use_frameworks!
pod 'testdoppllib', :path => '../app/build'
end
Then:
In AppDelegate.swift, import testdoppllib and call DopplRuntime.start() from the application() func
In ViewController.swift, import testdoppllib and call runResource() on... something that I can't quite figure out the mapping for
However, I can't even get to the latter bullet, as things start going sideways from the outset.
pod install seems to work as expected:
Analyzing dependencies
Fetching podspec for `testdoppllib` from `../app/build`
Downloading dependencies
Installing testdoppllib (0.1.0)
Generating Pods project
Integrating client project
[!] Please close any current Xcode sessions and use `iosTest.xcworkspace` for this project from now on.
Sending stats
Pod installation complete! There is 1 dependency from the Podfile and 1 total pod installed.
However, when I re-open the workspace:
In the Xcode tree thingy, Pods/Products/ shows testdoppllib.framework in red (which doesn't look good) and also shows Pods_iosTest.framework in black
If I try import testdoppllib, I get a message saying that Xcode does not recognize that name
If I try import Pods_iosTest, Xcode seems to find it, but then it does not recognize DopplRuntime.start()
So, what are the steps, in a Cocoapods-based Doppl setup, for starting the Doppl-created unit tests in Xcode?
Running pod install will set up the testdoppllib framework, but doesn't actually build it. One of the frustrating parts of the CocoaPods process is that you'll need to run a build in Xcode, which should first build testdoppllib, then your Swift code.
To summarize: testdoppllib shows up in red, but it's most likely OK and just needs to be built. Once it's built, your Swift code should see import testdoppllib.
For runResource, that's a little more complicated.
The Doppl gradle plugin writes a file called dopplTests.txt to the build/j2objcSrcGenTest directory. That's a listing of all the test classes. You need to add that file to Xcode, then pass in that name to DopplJunitTestHelper.runResource.
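As a rough sketch, the Swift side ends up looking something like this (the exact runResource signature/argument label is an assumption on my part; check the generated helper):

// In AppDelegate.swift:
import testdoppllib

// Inside application(_:didFinishLaunchingWithOptions:):
DopplRuntime.start()
// dopplTests.txt is generated into build/j2objcSrcGenTest; add it to the Xcode project first.
DopplJunitTestHelper.runResource("dopplTests.txt")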
There's probably a way to set that up with cocoapods, but we haven't done that yet.

Ensuring installation/filesystem is properly in place

I have installed REDHAWK 1.10.0 on Ubuntu 14.04 LTS as described in appendix F of the REDHAWK documentation. I also installed the standalone IDE from SourceForge, again as specified in appendix F, chapter 2.5. The IDE comes up looking OK, but here are the problems:
The components list is empty (there is supposed to be a set of pre-defined components). The corresponding directory on the file system is empty as well.
When attempting to generate a C++ component, I get:
Exception running "/bin/redhawk-codegen": /bin/redhawk-codegen --template=redhawk.codegen.jinja.cpp.component.pull --checkSupport
In detail, it said: "/bin/redhawk-codegen": error=2, no such file or directory. Yet redhawk-codegen is there, under $OSSIEHOME/bin. The "pull" template is under /usr/local/lib/python2.7/dist-packages/redhawk/codegen/jinja/cpp/component.
If I attempt to start Domain Manager I get an error "no domain configuration available".
So, for all these problems, it is obvious that I need a better picture of the expected file layout of the IDE and the core REDHAWK components. This is not clear from the documentation. Is there a starting point where I can begin debugging "where to find things"?
Regarding your first issue:
When installing on CentOS using the RPMs, a number of components and devices come pre-packaged in the yum repository. When installing from source, as one must for Ubuntu in 1.10, the pieces of REDHAWK are modular and are installed individually.
The directions in Appendix F walk the user through installing each of the parts that make up the framework: the core, a GPP, bulkio, burstio, and the code generator. This does not include any components or devices (other than the GPP). To install those, you'll need to clone them from their respective git repositories and build and install from source, either from the command line or through the REDHAWK IDE. Building and installing the components from the command line follows the same pattern as the framework: there is a reconf script, which generates the configure script, which in turn generates the Makefile, e.g. ./reconf; ./configure; make; sudo make install (sketched below).
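For example, one component checked out from git would look like this (the repository name/URL is hypothetical; the steps are the ones described above):

git clone <component-repo-url> some-component   # hypothetical component repo
cd some-component
./reconf
./configure
make
sudo make install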
Regarding your second issue:
These symptoms seem to point to the OSSIEHOME and SDRROOT variables not being set. Make sure that the OSSIEHOME and SDRROOT variables are set in the terminal using "echo $SDRROOT" and "echo $OSSIEHOME" prior to running the IDE. Keep in mind that the environments are unique to each session so, for example, having them set in one bash terminal does not guarantee they are set when launching the IDE from a desktop shortcut. Confirm they are set in your terminal, then launch the IDE from the same terminal.
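For example (the IDE launcher path is an assumption; it depends on where you unpacked the IDE):

echo $OSSIEHOME
echo $SDRROOT
# If both print the expected paths, launch the IDE from this same terminal:
./eclipse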
Regarding your last issue:
This is likely also caused by your second issue. However, if it is not resolved by the steps above, take a look within $SDRROOT/dom/domain. There should be two files: DomainManager.dmd.xml.template and DomainManager.dmd.xml. If all you have is the template, create DomainManager.dmd.xml by copying the template, then edit it and fill in the id and name fields. The default name is generally REDHAWK_DEV and the id should be a UUID.
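A sketch of that fix (REDHAWK_DEV and uuidgen follow the conventions just described):

cd $SDRROOT/dom/domain
sudo cp DomainManager.dmd.xml.template DomainManager.dmd.xml
uuidgen   # generate a UUID for the id field
# then edit DomainManager.dmd.xml: set name to REDHAWK_DEV and id to the generated UUID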

Perl: libapt-pkg-perl AptPkg::Cache->new strange behaviour under precise

I have a very strange problem with the constructor of the AptPkg::Cache object in the Precise package of libapt-pkg-perl (v0.1.25).
The Perl script is designed to download a Debian package for three different architectures (i386, armel, armhf). For each architecture I do the following (sketched in code after the list):
Configure AptPkg::Config '$_config' with the right parameters and package-lists for the desired architecture.
Create the cache object with AptPkg::Cache->new.
Call the method AptPkg::Cache->policy to create the AptPkg::Policy object.
Call the method AptPkg::Policy->candidate("program-name").
Download the package for the selected architecture.
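In code, the flow looks roughly like this (the configuration calls are trimmed and the package name is illustrative; the real script also points the package lists at architecture-specific files):

use strict;
use warnings;
use AptPkg::Config '$_config';
use AptPkg::Cache;

$_config->init;

for my $arch (qw(i386 armel armhf)) {
    # Point APT at this architecture (per-arch package lists omitted here).
    $_config->set('APT::Architecture', $arch);

    my $cache  = AptPkg::Cache->new;        # the call that misbehaves under Precise
    my $policy = $cache->policy;

    my $pkg  = $cache->{'program-name'};    # illustrative package name
    my $cand = $policy->candidate($pkg);    # returns nothing for arch 2 and 3 under Precise
    # ... download the candidate version for this architecture ...
}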
This works very well on Ubuntu Lucid, but on Ubuntu Precise I can only download the package for the first architecture defined. For the other two architectures there is no installation candidate (the AptPkg::Policy->candidate("Package-Name") call doesn't return an object).
I tried to build a workaround and found one that makes the script work for all three architectures in Precise without problems:
If I create the cache object (with AptPkg::Cache->new) twice in a row it works and the script downloads the debian package for all three architectures:
my $cache = AptPkg::Cache->new;
$cache = AptPkg::Cache->new;
I'm sure the problem has something to do with AptPkg::Cache->new, because I checked everything else that could cause the problem twice. All config variables are set correctly, and I even get a different hash for AptPkg::Cache->new for each architecture, but it seems that I am overlooking something important.
I'm not very familiar with Perl, so I'm asking you guys if someone can explain why the script works with the workaround but not without it. Furthermore, it looks quite strange to have the same line of code twice in a script.
Maybe you hit this bug - https://bugs.launchpad.net/ubuntu/+source/libapt-pkg-perl/+bug/994509
There is a script there to test if you're affected. If it's something else consider submitting a bug report.
edit: Just saw this is 11 months old :/
