Where is dahdi_cfg called from on system boot?

I install DAHDI on Debian stable (Buster) via module-assistant like this:
apt-get install dahdi dahdi-source module-assistant
module-assistant auto-install dahdi
I also create /etc/dahdi/system.conf and /etc/dahdi/assigned-spans.conf.
I cannot figure out what calls dahdi_cfg during system boot. I can confirm that it does get called by something, because if I remove dahdi_cfg and reboot, the echo and dahdi_echocan_oslec modules are missing from the lsmod output (echo cancellation is specified in system.conf).
I found /usr/share/dahdi/span_config.d/10-dahdi-cfg, but I have no idea what may run this.
So, where is dahdi_cfg called from during system boot?
UPDATE
I found out that if system.conf is missing, the echo cancellation modules are loaded anyway. The mandatory conditions are:
presence of dahdi_cfg
/etc/dahdi/assigned-spans.conf
UPDATE2
One more observation: if /etc/dahdi/assigned-spans.conf is removed and options dahdi auto_assign_spans=1 is added to /etc/modprobe.d/dahdi.conf, echo cancellation modules are not loaded (system.conf is still removed). So it seems auto_assign_spans=1 is not working.
Can anybody answer the new questions raised in my answer below?

First of all, let's deal with auto_assign_spans=1:
Remove auto_assign_spans=1. Result: /proc/dahdi/ is empty.
Use auto_assign_spans=1. Result: /proc/dahdi/ is not empty.
So this is how to check the effect of auto_assign_spans=1.
For example, /proc/dahdi/1 initially contains:
Span 1: WCTDM/0 "Wildcard TDM410P" (MASTER)
1 WCTDM/0/0 RED
2 WCTDM/0/1
3 WCTDM/0/2
4 WCTDM/0/3
Now run dahdi_genconf system and check /proc/dahdi/1 again:
Span 1: WCTDM/0 "Wildcard TDM410P" (MASTER)
1 WCTDM/0/0 FXSKS RED
2 WCTDM/0/1 FXOKS
3 WCTDM/0/2 FXOKS
4 WCTDM/0/3 FXOKS
We have seen that dahdi_genconf messes with the span. Is this a bug?
Then run dahdi_cfg and check /proc/dahdi/1 again:
Span 1: WCTDM/0 "Wildcard TDM410P" (MASTER)
1 WCTDM/0/0 FXSKS RED (EC: OSLEC - INACTIVE)
2 WCTDM/0/1 FXOKS (EC: OSLEC - INACTIVE)
3 WCTDM/0/2 FXOKS (EC: OSLEC - INACTIVE)
4 WCTDM/0/3 FXOKS (EC: OSLEC - INACTIVE)
Now we see that everything is properly configured.
Next, dahdi_handle_device is called by udev. It does nothing (because auto_assign_spans=1 is used).
Then dahdi_span_config is called by udev. It also does nothing, for the same reason.
And this is the interesting part: dahdi_cfg is not called if auto_assign_spans=1 is used. Is this a bug?
------------------------------
Conversely, if auto_assign_spans=1 is not used, dahdi_cfg is called by dahdi_span_config.
This is a bit confusing. Why is it prohibited to run dahdi_cfg when auto_assign_spans=1 is used? If you have only one card, that setting is perfectly acceptable; auto_assign_spans=1 is even documented in the dahdi-tools README as the way to handle exactly this scenario:
Normally (with auto_assign_spans=1 in the module
dahdi, which is the default), when a device is discovered and loaded,
it registers with the DAHDI core and its spans automatically become
available. However if you have more than one device, you may be
interested to set explicit spans and channels numbers for them.
Is it safe to add dahdi_cfg to dahdi_span_config manually?
BTW, system.conf need not even be created - it is generated dynamically if it does not exist, but again, only if auto_assign_spans=1 is not used.
If this deficiency were corrected somehow, the only thing needed to configure DAHDI would be just
echo options dahdi auto_assign_spans=1 >/etc/modprobe.d/dahdi.conf

Use the following patch for /lib/udev/rules.d/60-dahdi.rules:
+SUBSYSTEM=="dahdi_spans", RUN+="/usr/sbin/dahdi_cfg"
LABEL="dahdi_add_end"
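The leading + is unified-diff notation: the new rule is added immediately before the existing dahdi_add_end label. Assuming the stock file ends with that label (as the patch context suggests; not verified against a specific dahdi-tools release), the patched tail of the file reads:

```
# tail of /lib/udev/rules.d/60-dahdi.rules after the patch
SUBSYSTEM=="dahdi_spans", RUN+="/usr/sbin/dahdi_cfg"
LABEL="dahdi_add_end"
```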


XSCT executes command in interactive shell but not within script

First, take note that I am using the Xilinx SDK 2018.2 on Kubuntu 22.04 because of my company's policy. I know from research that the command I'm using is deprecated in newer versions, but in the version I am using it works flawlessly - kind of... But read for yourself:
My task is to automate all steps in the FPGA build to create a pipeline which automatically builds and tests the FPGAs. To achieve this, I need to build the code - this works flawlessly in XSDK. For automation, this also has to work on the command line, so I followed the manual to find out how this is achieved. Everything works as expected if I type the commands into the interactive prompt, as shown here:
user#ubuntuvm:~$ xsct
****** Xilinx Software Commandline Tool (XSCT) v2018.2
**** Build date : Jun 14 2018-20:18:43
** Copyright 1986-2018 Xilinx, Inc. All Rights Reserved.
xsct%
Then I can enter the commands I need to import all needed files and projects (hw, bsp, main project). With this toolset, everything works as expected.
Because I want to automate it via a pipeline, I decided to pack this into a script for easier access. The script contains exactly the commands I entered in the interactive shell and therefore looks like this:
user#ubuntuvm:~/gitrepos/repository$ cat ../autoBuildScript.tcl
setws /home/user/gitrepos/repository
openhw ./hps_packages/system.hdf
openbsp ./bsp_packages/system.mss
importprojects ./sources/mainApp
importprojects ./bsp_packages
importprojects ./hps_packages
regenbsp -bsp ./bsp_packages/system.mss
projects –clean
projects -build
The commands are identical to the ones entered via the interactive CLI tool; the only difference is that they are now packed into a script. Yet now the build no longer completes. I get the following error:
user#ubuntuvm:~/gitrepos/repository$ xsct ../autoBuildScript.tcl
INFO: [Hsi 55-1698] elapsed time for repository loading 1 seconds
Starting xsdk. This could take few seconds... done
'mainApp' will not be imported... [ALREADY EXIST]
'bsp_packages' will not be imported... [ALREADY EXIST]
'hps_packages' will not be imported... [ALREADY EXIST]
/opt/Xilinx/SDK/2018.2/gnu/microblaze/lin
unexpected arguments: –clean
while executing
"error "unexpected arguments: $arglist""
(procedure "::xsdb::get_options" line 69)
invoked from within
"::xsdb::get_options args $options"
(procedure "projects" line 12)
invoked from within
"projects –clean"
(file "../autoBuildScript.tcl" line 8)
I inserted projects -clean only because I had already gotten the error with projects -build and wanted to check whether it also happens with another argument.
On the internet I didn't really find anything matching my specific problem. I also stuck strictly to the official manual, in which the command is used exactly as I use it - but there it works.
I also checked the line endings (set to UNIX) because I suspected xsct of reading a stray newline character or something similar - with no result. The error also occurs when I create the BSP and hardware from scratch. To me the error looks like an internal Xilinx one, but let me know what you think.
So, it appears that I just fixed the problem on my own. Thanks to everyone reading for being my rubber ducky.
Apparently, version 2018.2 of XSDK has a few bugs, including inconsistent command interpretation. For some reason the command works in the interactive shell but not in the script - because the command is in its short form. I just learned from a Xilinx tutorial that projects -build is - even though it works interactively - apparently not the full command. You usually need to clarify that this command comes from the SDK, like this: sdk projects -build. The interactive shell seems to forgive the omission - and so does the script for every command except projects. Therefore, I added the "sdk" prefix to all commands I used from the SDK, just to be safe.
I cannot believe that I just spent two days debugging an error whose fix consists of 3 (+1 whitespace) letters.
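Applied to the script from the question, the failing lines become (per the sdk-prefix fix described above; whether every other SDK command also needs the prefix is untested):

```tcl
sdk projects -clean
sdk projects -build
```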
Thanks everybody for reading and have a nice day

Linux, Installation of FBX SDK, how to automatically confirm?

I am currently building a project in a (Singularity) container, and one of the dependencies is the FBX SDK (needed to build Qt statically).
The problem is that the installer of FBX displays the licence conditions and then asks to confirm them, and as the scripted installation from a recipe doesn't allow me to answer manually, it produces an endless repetition of the lines
To continue installing the software, you must agree
to the terms of the software license agreement.
Type "yes" to agree and continue the installation. Agree [yes/no] ?
Now I did some research into how to auto confirm such things and found How do I script a "yes" response for installing programs?.
Given the answers there, I tried the following lines:
yes | ./fbx20190_fbxsdk_linux
yes Y | ./fbx20190_fbxsdk_linux
echo yes | ./fbx20190_fbxsdk_linux
(Second line just in case, as the installer clearly wants a "yes".)
None of those worked. Is there some other trick I could try? Or another way to install FBX?
(Note: the lines above do at least something, that is they automatically confirm File(s) will be extracted to: . Please confirm [y/n] ?)
...as the installer clearly wants a "yes"
In that case, try this:
yes yes | ./fbx20190_fbxsdk_linux
By default, the yes command does not send "yes" to the subsequent process, only y. You can override this behavior by giving a command-line parameter, as you did in your second attempt above. However, there you invoked it as yes Y, which does not send a "yes" either; it sends Y.
The parameter to yes must be the string that you want to send to ./fbx20190_fbxsdk_linux, hence yes yes.
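You can see the difference directly in a terminal:

```shell
# yes repeats its argument (default "y") forever; head limits the output
yes | head -n 2       # prints two lines: y, y
yes Y | head -n 2     # prints two lines: Y, Y
yes yes | head -n 2   # prints two lines: yes, yes
```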

Ensuring installation/filesystem is properly in place

I have installed RedHawk 1.10.0 on Ubuntu 14.04 LTS as described in Appendix F of the RedHawk documentation. I also installed the standalone IDE from SourceForge,
again as specified in Appendix F, chapter 2.5. The IDE comes up looking OK, but here are the problems:
The components list is empty (there is supposed to be a set of pre-defined components). The corresponding directory on the file system is empty as well.
When attempting to generate a C++ component, I get:
Exception running "/bin/redhawk-codegen": /bin/redhawk-codegen --template=redhawk.codegen.jinja.cpp.component.pull --checkSupport
In detail, it said: "/bin/redhawk-codegen": error=2, No such file or directory. Yet /bin/redhawk-codegen is there under OSSIEHOME. The "pull" template is under /usr/local/lib/python2.7/dist-packages/redhawk/codegen/jinja/cpp/component.
If I attempt to start Domain Manager I get an error "no domain configuration available".
So for all these problems it is obvious that I need a better picture of the expected file layout of the IDE and the core RedHawk components. This is not clear from the documentation. Is there a starting point where I can begin debugging "where to find things"?
Regarding your first issue:
When installing on CentOS using the RPMs, a number of components and devices come pre-packaged in the yum repository. When installing from source, as one must do for Ubuntu in 1.10, the pieces of REDHAWK are modular and are installed individually.
The directions in Appendix F walk the user through installing each of the parts that make up the framework: the core, a GPP, bulkio, burstio, and the code generator. This does not include any components or devices (other than the GPP). To install those, you'll need to clone them from their respective git repositories and build and install them from source, either from the command line or through the REDHAWK IDE. Building and installing a component from the command line follows the same pattern as the framework: there is a reconf script, which generates the configure script, which in turn generates the Makefile, e.g. ./reconf; ./configure; make; sudo make install
Regarding your second issue:
These symptoms seem to point to the OSSIEHOME and SDRROOT variables not being set. Make sure that OSSIEHOME and SDRROOT are set in the terminal (using "echo $SDRROOT" and "echo $OSSIEHOME") prior to running the IDE. Keep in mind that environment variables are unique to each session, so, for example, having them set in one bash terminal does not guarantee they are set when launching the IDE from a desktop shortcut. Confirm they are set in your terminal, then launch the IDE from that same terminal.
Regarding your last issue:
This is likely also caused by your second issue. However, if it is not resolved by the above steps, take a look within $SDRROOT/dom/domain. There should be two files: DomainManager.dmd.xml.template and DomainManager.dmd.xml. If all you have is the template, then you need to create DomainManager.dmd.xml by copying the template, then edit it and fill in the id and name fields. The default name is generally REDHAWK_DEV, and the id should be a UUID.
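As a shell sketch of those steps (assumes $SDRROOT is set; REDHAWK_DEV is the conventional default name mentioned above):

```shell
# Create the domain profile from the template shipped with REDHAWK
cd "$SDRROOT/dom/domain"
cp DomainManager.dmd.xml.template DomainManager.dmd.xml
# Now edit DomainManager.dmd.xml: set the name (e.g. REDHAWK_DEV)
# and the id (a UUID, e.g. one produced by `uuidgen`)
```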

Perl: libapt-pkg-perl AptPkg::Cache->new strange behaviour under precise

I have a very strange problem with the constructor of the AptPkg::Cache object in the precise package of libapt-pkg-perl (v0.1.25).
The perl script is designed to download a debian package for three different architectures (i386, armel, armhf). For each architecture I do the following:
Configure AptPkg::Config ($_config) with the right parameters and package lists for the desired architecture.
Create the cache object with AptPkg::Cache->new.
Call the method AptPkg::Cache->policy to create the AptPkg::Policy object.
Call the method AptPkg::Policy->candidate("program-name").
Download the package for the selected architecture.
This works very well on Ubuntu Lucid, but on Ubuntu Precise I can only download the package for the first architecture defined. For the other two architectures there is no installation candidate (the method AptPkg::Policy->candidate("Package-Name") doesn't return an object).
I tried to build a workaround and found a solution that makes the script work for all three architectures on Precise without problems:
If I create the cache object (with AptPkg::Cache->new) twice in a row, it works, and the script downloads the Debian package for all three architectures:
my $cache = AptPkg::Cache->new;
$cache = AptPkg::Cache->new;
I'm sure the problem has something to do with the method AptPkg::Cache->new, because I have double-checked everything else that could cause the problem. All config variables are set correctly, and I even get a different hash from AptPkg::Cache->new for each architecture, but it seems I am overlooking something important.
I'm not very familiar with Perl, so I am asking whether someone can explain why the script works with the workaround but not without it. Furthermore, it looks quite strange to have the same line of code twice in a script.
Maybe you hit this bug: https://bugs.launchpad.net/ubuntu/+source/libapt-pkg-perl/+bug/994509
There is a script there to test whether you're affected. If it's something else, consider submitting a bug report.
edit: Just saw this is 11 months old :/

What does "No more variables left in this MIB View" mean (Linux)?

On Ubuntu 12.04 I am trying to get the subtree of management values with the following command:
snmpwalk -v 2c -c public localhost
with the last line of the output being
iso.3.6.1.2.1.25.1.7.0 = No more variables left in this MIB View (It is past the end of the MIB tree)
Is this an error? A warning? Does the subtree end there?
There's a bit more going on here than you might suspect. I encounter this on every new Ubuntu box that I build, and I do consider it a problem (not an error, but a problem - more on this further down).
Here's the technically-correct explanation (why this is not an "error"):
"No more variables left in this MIB View" is not particularly an error; rather, it is a statement about your request. The request started at something simple, say ".1.3" and continued to ask for the "next" lexicographic OID. It got "next" OIDs until that last one, at which point the agent has informed you that there's nothing more to see; don't bother asking.
Now, here's why I consider it a problem (in the context of this question):
The point of installing "snmpd" and running it is to gather meaningful information about the box; typically, this information is performance-oriented. For example, the three general things that I need to know about are network-interface information (IF-MIB::ifHCInOctets and IF-MIB::ifHCOutOctets), disk information (UCD-SNMP-MIB::dskUsed and UCD-SNMP-MIB::dskTotal), and CPU information (UCD-SNMP-MIB::ssCpuRawIdle, UCD-SNMP-MIB::ssCpuRawWait, and so on).
The default Ubuntu "snmpd" configuration specifically denies just about everything useful with this configuration (limiting access to just enough information to tell you that the box is a Linux box):
view systemonly included .1.3.6.1.2.1.1
view systemonly included .1.3.6.1.2.1.25.1
rocommunity public default -V systemonly
This configuration locks the box down, which may be "safe" if it will be on an insecure network with little SNMP administration knowledge available.
However, the first thing I do is remove the "-V systemonly" portion of the "rocommunity" setting; this allows all available SNMP information to be accessed (read-only) via the community string "public".
If you do that, then you'll probably see what you're expecting, which is pages and pages of SNMP information that you can use to gauge the performance of your box.
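Concretely, the change is a one-line edit (the file path is the Debian/Ubuntu default, /etc/snmp/snmpd.conf; restart snmpd afterwards):

```
# /etc/snmp/snmpd.conf
# before (stock Ubuntu):
rocommunity public default -V systemonly
# after (full read-only tree for community "public"):
rocommunity public default
```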
I know this thread is probably very old. The way I fix this is to use:
rocommunity public
and that should fix the problem.
Briefly, this is not an error. When you have walked through all the OIDs on your agent, it shows you this line.
Sometimes it won't show you this line, because the last OID returned is simply the last one your agent has: you have walked through all the OIDs on your agent, but not past the end of the MIB tree.
$ snmpwalk -v 2c -c public localhost NET-SNMP-EXTEND-MIB::nsExtendObjects
NET-SNMP-EXTEND-MIB::nsExtendObjects = No more variables left in this MIB View (It is past the end of the MIB tree)
You can also get this error when trying to view the output of executed scripts. I fixed that problem by adding the line
view all included .1 80
to snmpd.conf and then restarting the service.
Then you will see the output change in both cases.
