Linker error when creating a 1.9 FrontEnd device from the IDE using defaults - redhawksdr

Using:
Redhawk 1.9 / CentOS 6.4 (32 bit) / C++ implementation
Creating a new FRONTEND::TUNER device
Using default settings for code generation.
Added the following ports, required for a FRONTEND Digital Tuner, and regenerated the code.
<ports>
<provides repid="IDL:FRONTEND/DigitalTuner:1.0" providesname="DigitalTuner"/>
<provides repid="IDL:FRONTEND/RFInfo:1.0" providesname="RFInfo"/>
</ports>
After generating code, we made updates to port_impl.h and port_impl.cpp to get around the problem described in:
error: cannot allocate an object of abstract type ‘FRONTEND_RFInfo_In_i’
After making those updates we recompiled the code. It compiles, but we get the linker errors below.
/usr/local/redhawk/core/lib/libfrontendInterfaces.so: undefined reference to `BULKIO::PrecisionUTCTime::operator<<=(cdrStream&)'
/usr/local/redhawk/core/lib/libfrontendInterfaces.so: undefined reference to `BULKIO::PrecisionUTCTime::operator>>=(cdrStream&) const'
It appears the linker is unable to find these methods in the bulkio libraries.

This issue is a known bug in the 1.9.0 release for C++ devices that have FrontEnd Interfaces based ports but no bulkIO based ports. It was discovered recently and has been logged. FrontEnd Interfaces based ports depend on bulkIO Interfaces, and the linking of bulkio needs to be added to the configure.ac file (which is auto-generated).
You may resolve the issue in one of two ways.
1.) Ideally, your front end interfaces compliant device would also contain a bulkIO based input or output port. By simply having a bulkIO based port on your device the dependency will be added to the configure.ac and proper linking will occur.
2.) If for whatever reason your device contains a Front End Interfaces port but does not contain a bulkIO port, you may modify the configure.ac file found within your project and explicitly add the dependency.
-PKG_CHECK_MODULES([INTERFACEDEPS], [frontendInterfaces])
+PKG_CHECK_MODULES([INTERFACEDEPS], [frontendInterfaces, bulkio >= 1.0 bulkioInterfaces >= 1.9])
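For context on why this fixes the link: PKG_CHECK_MODULES([INTERFACEDEPS], ...) makes pkg-config emit INTERFACEDEPS_CFLAGS and INTERFACEDEPS_LIBS, which the generated build files pass to the compiler and linker; adding bulkio/bulkioInterfaces there is what pulls in the library providing the BULKIO::PrecisionUTCTime marshalling operators. As a rough illustration only (the actual target name in the generated Makefile.am will differ), those variables end up being used along these lines:
# illustrative automake snippet -- MyDevice is a placeholder target name
MyDevice_CXXFLAGS = $(INTERFACEDEPS_CFLAGS)
MyDevice_LDADD    = $(INTERFACEDEPS_LIBS)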
Take note that once you modify the configure.ac file manually, it will no longer be regenerated by the code-generators by default when you select Generate Code in the IDE. This may cause issues if you continue to modify your device and regenerate code, as that could add additional dependencies. My advice would be either to allow the code-generators to regenerate the configure.ac file, by selecting it in the "Regenerate Files" dialog when you select Generate Code, and then reapply the edit above;
or, if you will be regenerating often, to temporarily place a bulkIO based input or output port on your device. That way you would not need to edit the configure.ac file until you have completed your device and removed the temporary bulkIO port.

Related

How to replace a native dynamic library file permanently and appropriately

I am trying to develop a third-party unixODBC driver; it is a secondary development based on the original file libodbc.so.2.0.0.
So I want to rename 'libodbc.so.2.0.0' to 'libodbc.so.2.0.0_renamed' and soft-link my own dynamic library file to libodbc.so.2.0.0.
But I found an issue that bothers me: when I rename the native file and run 'sudo ldconfig', the file named 'libodbc.so.2' is automatically linked to the renamed file 'libodbc.so.2.0.0_renamed'.
I could not understand:
why it occurs;
how to appropriately replace the library.
I don't have enough knowledge about Linux, so I failed to find any keywords to search for to deal with this.
Could you help me? Thank you very much!
Shared objects under GNU/Linux follow a specific version naming scheme, which is known by the loader (an OS component, actually part of the libc framework) and used to determine whether a newer library is backward-compatible with some older version against which a binary was originally linked. By adding the _renamed suffix you are violating that convention, and the dynamic linking system is getting confused. You should rename it as suggested by @Bodo above.
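For reference, the layout that ldconfig maintains normally looks roughly like this:
libodbc.so -> libodbc.so.2.0.0 (development name, used at link time)
libodbc.so.2 -> libodbc.so.2.0.0 (soname, resolved by the runtime loader)
libodbc.so.2.0.0 (the real file)
ldconfig recreates the soname link (libodbc.so.2) from the SONAME embedded in whatever real library files it finds in the scanned directories, so after your rename it points libodbc.so.2 at libodbc.so.2.0.0_renamed, which still carries that soname.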
In addition, rather than renaming, you might consider using the versioning scheme itself. From the GNU Build System (a.k.a. Autotools) manual, the version scheme is as follows:
Versioning: CURRENT:REVISION:AGE
CURRENT - the latest interface implemented.
REVISION - the implementation number of CURRENT (read: number of bugs fixed...)
AGE - the number of interfaces implemented, minus one.
The library supports all interfaces between CURRENT − AGE and CURRENT.
If you have:
not changed the interface (bug fixes): CURRENT : REVISION+1 : AGE
augmented the interface (new functions): CURRENT+1 : 0 : AGE+1
broken the old interface (e.g. removed functions): CURRENT+1 : 0 : 0
Therefore a possible history of your lib might be:
1:0:0 start
1:1:0 bug fix
1:2:0 bug fix
2:0:1 new function
2:1:1 bug fix
2:2:1 bug fix
3:0:0 broke api
3:1:0 bug fix
4:0:1 new function
4:1:1 bug fix
5:0:0 broke api
You might, for instance, give the older and newer versions different libodbc.so.x.y.z numbers according to your needs. Just an idea.
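If the library is built with libtool, the CURRENT:REVISION:AGE triple is what you pass to -version-info, and libtool derives the actual libodbc.so.x.y.z file name from it. A minimal Makefile.am sketch (library and source names are placeholders):
# -version-info takes CURRENT:REVISION:AGE from the table above
lib_LTLIBRARIES = libodbcdrv.la
libodbcdrv_la_SOURCES = driver.c
libodbcdrv_la_LDFLAGS = -version-info 2:1:1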

Why Is Doppl Trying To Pull in ReactiveStreams?

I am attempting to convert parts of an Android app to iOS using Doppl, and I am getting a strange result: Doppl keeps trying to pull in android.arch.lifecycle:reactivestreams, even though I don't want it to.
Specifically, in app/build/j2objcSrcGenMain/android/arch/lifecycle/, there is a reactivestreams/ subdirectory with R.h and R.m files in it. This seems to make Xcode cranky and may explain why I had some oddities with pod install.
My app/build.gradle has compile "android.arch.lifecycle:reactivestreams:$archVer", because my activity is using LiveDataReactiveStreams.fromPublisher(). However:
The activity is not in the translatePattern (and since its code is not showing up in app/build/j2objcSrcGenMain/, I have to assume that the translatePattern is fine)
I do not have a doppl statement related to reactivestreams, because there does not appear to be a Doppl conversion of this library (nor should it be needed here)
AFAIK, nowhere else in this app am I referring to LiveDataReactiveStreams, which AFAIK is the one-and-only public class from the reactivestreams library
So, the questions:
What determines whether Doppl creates R.h and R.m files for some dependency? It's not the existence of a doppl statement, as I have doppl statements for a lot of other dependencies (RxJava, RxAndroid, Retrofit) and those do not get R.h and R.m files. It's not whether the dependency is referenced from generated code, as my repository definitely uses RxJava and Retrofit, yet there are no R files for those.
How can I figure out why Doppl generates R.h and R.m for reactivestreams?
Once I get this cleared up... do I re-run pod install, or is there some other pod command to refresh an existing pod with a new implementation?
Look into 'app/build/generated/source/r/debug' and confirm there's an R.java being created for the architecture component. It'll be under 'android/arch/lifecycle/reactivestreams'.
I think there are 2 problems here.
Problem 1
Somehow Doppl/J2objc is of the opinion that this file should be transpiled. It could be either that 'translatePattern' matches with it, or that something in the shared code is referencing it. If you can't figure out which, please post a comment and I'll try to help (or post in slack group).
Problem 2
Regardless of why that 'R.java' is being sucked into the translate step, because of how stock J2objc is configured, the code is being generated with package folders instead of creating One Big Name. That generated file should be called 'AndroidArchLifecycleReactivestreamsR.h' (and AndroidArchLifecycleReactivestreamsR.m). Xcode really doesn't like package folders. That's why there's a slightly custom J2objc being used with Doppl, so we can have files with big names instead of folders.
In cases where you intentionally use package names that match with what J2objc considers to be "system" classes, you need to provide a header mapping file to force long names. The 'androidbase' doppl library needs to add a lot of files that are in the 'android' package, which J2objc considers "system". We override those names in the mapping file.
build.gradle
https://github.com/doppllib/core-doppl/blob/master/androidbase/build.gradle#L19
mapping file
https://github.com/doppllib/core-doppl/blob/master/androidbase/src/main/java/androidbase.mappings
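For illustration only (the real entries are in the androidbase.mappings file linked above), a J2objc header-mapping entry is a Java-properties style line that maps a fully qualified class name to the header it should end up in, something like:
android.arch.lifecycle.reactivestreams.R=AndroidArchLifecycleReactivestreamsR.h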
I screwed up.
In my dopplConfig, I have:
translatePattern {
include '**/api/**'
include '**/arch/**'
include '**/RepositoryTest.java'
}
In this case, **/arch/** not only matches my arch package, but also the arch package from the Architecture Components.
Ordinarily, this would not matter, because the Architecture Components source code is not in my project. But, R.java gets generated, due to resources, and the translatePattern includes generated source code in addition to lovingly hand-crafted source code. So, that's where my extraneous Objective-C was coming from.
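One way to avoid this (a sketch only; com/example/myapp stands in for your real package path) is to anchor the include to your own package instead of matching every arch directory:
translatePattern {
include '**/api/**'
include 'com/example/myapp/arch/**'
include '**/RepositoryTest.java'
}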
Many thanks to Kevin Galligan for his assistance with this, out on the #newbiehelp Doppl Slack channel!

configure keeps finding tcmalloc. How?

I'm building NWChem on Cray. libtcmalloc_minimal is already added to an archive file by the cc in my Cray environment. My configure routine explicitly appends a second -ltcmalloc_minimal, resulting in multiple definitions and a configure failure. But none of the configure.* files or makefiles (or any files included with NWChem) contain any reference to tcmalloc_minimal.
How is tcmalloc_minimal getting in there?
How can I keep it out?
The autoconf _AC_FC_LIBRARY_LDFLAGS macro (called as part of AC_PROG_FC) and other macros querying library flags and objects retrieve this value from the verbose compiler output (which contains this library on Cray systems). For this reason Cray's patched autoconf contains a change to the above macro to get rid of the flag. I'm currently in the process of figuring out an override to the macro so that configure scripts produced by unpatched versions of autoconf also work on Cray systems. I'll post an update once I've figured out something that works reliably.
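In the meantime, one workaround that may help (assuming the duplicate flag really does come in through the Fortran library probe): autoconf lets a non-empty user-supplied value override the detected one, so presetting FCLIBS (and/or FLIBS for F77) when invoking configure should bypass the detection that injects -ltcmalloc_minimal, e.g.:
./configure FCLIBS=" " FLIBS=" "
A single space counts as "set" without adding any flags; this only works if nothing else in the build relies on the auto-detected Fortran run-time libraries.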

How do I ignore a system file picked up by `configure' generated from AC_CHECK_HEADERS

We are using an automated build system which downloads and compiles source. The only interface I have to control the behaviour of the compilation is setting environment variables and the arguments given to `./configure'.
The issue is that the `configure' script (of the particular source I'm compiling) checks for a system header file which, if found, adversely affects the compilation process. (The compilation process will avoid compiling libraries which it believes are already installed on the local system when the above-mentioned system header file is found.)
Since this is an automated process, I cannot modify the `configure' script in any way, and as mentioned can only specify the environment variables and arguments passed to `configure'. The configure script uses the AC_CHECK_HEADERS macro to generate the code that does the check for the system file. Is there any way to avoid a check of a specific system file from the configure arguments?
The troublesome header file is in the path /usr/include/pcap/.
Thanks
Well, there are a few things you could try:
remove foo.h from AC_CHECK_HEADERS and always build the library
use AC_CHECK_HEADER for foo.h and check for /usr/include/pcap/foo.h and don't AC_DEFINE(HAVE_FOO_H) if /usr/include/pcap/foo.h is there.
you could use AC_ARG_ENABLE or AC_ARG_WITH to turn off the offending test on a host-by-host basis via arguments to configure. So the answer to that question is yes.
All of these assume you can modify configure.ac and regenerate configure. If you can't do that you might have to modify configure (in an automated fashion, of course).
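For the third option, a minimal configure.ac sketch (foo.h stands in for the real header, as above):
AC_ARG_WITH([system-foo],
  [AS_HELP_STRING([--without-system-foo], [do not check for the system foo.h])],
  [], [with_system_foo=yes])
AS_IF([test "x$with_system_foo" != xno],
      [AC_CHECK_HEADERS([foo.h])])
Passing --without-system-foo to configure then skips the check entirely, so HAVE_FOO_H never gets defined.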

Linux+Language for video capture & Display

Very soon we are going to work on a project with open source s/w that does many things, and one of the modules is concerned with capturing a live feed from a USB-based camera for up to 48 hours and overwriting it in a nonstop loop. This would be going on in parallel with other operations in the application. We also have to display the captured feed of the last 48 hours to the user as a menu option.
I would like you all to suggest a suitable open source technology stack taking into account the audio/video part of the module. Without this feature I can definitely use Qt to do my stuff, but with this feature that becomes a difficult proposition. I have developed GUI applications with Qt on the Linux platform but haven't been able to come up with something that can record and display video in an application. Qt has Phonon, but setting it up is a nightmare. Earlier some of you had suggested video4linux. I tried to compile the sample program capture.c on a RHEL 4 machine and it gave the following errors.
/usr/src/kernels/2.6.9-5.EL-i686/include/linux/videodev2.h:436: warning: no semicolon at end of struct or union
/usr/src/kernels/2.6.9-5.EL-i686/include/linux/videodev2.h:436: error: syntax error before '*' token
/usr/src/kernels/2.6.9-5.EL-i686/include/linux/videodev2.h:438: error: syntax error before '*' token
/usr/src/kernels/2.6.9-5.EL-i686/include/linux/videodev2.h:438: warning: data definition has no type or storage class
/usr/src/kernels/2.6.9-5.EL-i686/include/linux/videodev2.h:439: error: syntax error before '}' token
/usr/src/kernels/2.6.9-5.EL-i686/include/linux/videodev2.h:810: error: field `win' has incomplete type
So I hit a dead end; besides, I haven't come across concrete workable examples for this. Also, the website isn't being updated frequently, which suggests stagnancy in the development process.
Since the application will be graphical with menu-based user interaction, it would need to use Qt or something similar for the graphical part. The headache is that I haven't been able to figure out how I can implement/integrate the video capture/display feature in a dummy application (my attempt was with Qt; maybe some of you have done it with some other library or language).
EDIT:
I was able to compile the program by importing a local copy of videodev2.h and adding a define for the __user macro. But now it won't run, as it cannot find the /dev/video device. So I am again stuck at a dead end with video4linux.
You could try the FFmpeg family of libraries. As of fairly recently (I think), it also comes with the libavdevice library that supports V4L and V4L2 for video capture, and it shouldn't be very difficult to build a FFmpeg pipeline to read video from an avdevice source and write it using avcodec and avformat into a file...
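To give an idea of the capture side, here is a rough sketch using libavdevice/libavformat (details such as const-ness and the registration calls vary between FFmpeg versions, older ones also need av_register_all(); error handling is trimmed):
#include <stdio.h>
#include <libavdevice/avdevice.h>
#include <libavformat/avformat.h>

int main(void)
{
    avdevice_register_all();                      /* registers the v4l2 input device */

    AVFormatContext *ctx = NULL;
    const AVInputFormat *v4l2 = av_find_input_format("video4linux2");
    if (avformat_open_input(&ctx, "/dev/video0", (AVInputFormat *)v4l2, NULL) < 0) {
        fprintf(stderr, "cannot open /dev/video0\n");
        return 1;
    }

    AVPacket *pkt = av_packet_alloc();
    while (av_read_frame(ctx, pkt) >= 0) {
        /* hand pkt to libavcodec for decoding, or to libavformat for muxing into a file */
        av_packet_unref(pkt);
    }

    av_packet_free(&pkt);
    avformat_close_input(&ctx);
    return 0;
}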
gstreamer is a very capable multimedia stack for capturing, and pygst and PyQt4 bindings exist for use in Python.
If it said "can't find /dev/video", that is because Linux numbers the devices.
If you look in the .c file you will see there are several option arguments,
such as -d ... (where ... would become /dev/video0). Alternatively, you can go into the .c file and add a 0 to /dev/video; you'll find it somewhere around line 590, near the beginning of int main.
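So, assuming the sample builds to a binary named capture, running it would look something like:
./capture -d /dev/video0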
