I'm running Linux and have a situation like this:
A binary 'bin1' loads 'shared1.so' via dlopen; 'shared1.so' is linked against 'shared2.so' and 'shared3.so'.
If 'shared2.so' or 'shared3.so' is missing, the program 'bin1' won't run.
There are runs where I know I won't touch any code from 'shared2.so', and I want 'bin1' to be able to run even when this library is missing. Can this be done?
You could ship the program with a dummy shared2.so library. You might need to add stub functions for the symbols that shared1.so expects to find there. This can be done manually or with an automated tool like Implib.so.
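For example, a minimal dummy library could be built from a stub source file. This is only a sketch; the stub names below are made up and must match whatever shared1.so actually references in shared2.so:

/* shared2_stub.c - hypothetical stubs; replace with the symbols
 * that shared1.so really imports from shared2.so */
void shared2_init(void) { }
int shared2_process(int x) { (void)x; return 0; }

$ gcc -shared -fPIC -o shared2.so shared2_stub.c

As long as none of these stubs is actually called on the runs where shared2.so is not needed, the dynamic linker is satisfied and 'bin1' can start.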
I'm writing a Perl script that takes data and writes it to an Excel file. I'm using Excel::Writer::XLSX to do this.
I'm hoping to write the script and then give it to the rest of my team so we can all use it to compile the data when we need to.
I have a few questions about this:
Do my colleagues need to have the module installed for the script to work?
If not, how do I wrap up the module with the script to give it to them?
Is there a better way of doing this than using the module I've chosen?
There are a few ways of doing this. One option is to put together a Makefile.PL that specifies the dependencies. This allows you to bundle your script as a distribution. E.g.
use ExtUtils::MakeMaker;

WriteMakefile(
    ABSTRACT  => 'myscript creates Excel files',
    AUTHOR    => 'A.U. Thor',
    EXE_FILES => [ 'myscript' ],
    NAME      => 'myscript',
    VERSION   => '1.2.3',
    PREREQ_PM => {
        'Excel::Writer::XLSX' => '0.88',
    },
);
Then, people can do perl Makefile.PL which will inform them of the dependencies. If you do make dist, and distribute the resulting archive file, they can also use cpanm to install your script along with its dependencies.
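For example, the sequence of commands could look roughly like this (the archive name follows from the NAME and VERSION in the Makefile.PL above):

$ perl Makefile.PL
$ make dist
$ cpanm myscript-1.2.3.tar.gz    # recipients install the archive plus its dependencies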
Another option is to put together a cpanfile. Then, recipients can install all the dependencies using a tool such as cpanm.
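A minimal cpanfile for this script might look like this (the version is just an example):

requires 'Excel::Writer::XLSX', '0.88';

Recipients can then run cpanm --installdeps . in the directory containing the cpanfile to pull in anything that is missing.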
Now, if you are distributing the script to people who do not use Perl normally, and you want them to be able to just click and run etc, you might want to look into pp.
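For instance, with PAR::Packer installed, something along these lines packs the script and its module dependencies into a single self-contained executable (the output name is arbitrary):

$ pp -o myscript-packed myscript

The result can be run on machines without the CPAN modules installed, although it has to be built on the same OS and architecture it will run on.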
A long time ago, I wrote a program I called scriptdist to turn a single-file program into a CPAN-like distribution, complete with a build file. That way you could pass it around as an archive and people could treat it like any other CPAN distribution. It basically automates what Sinan posted. I wrote about it for Dr. Dobb's.
There's a trick that you can use if you want to pass around the archive. The cpan tool can install from the current directory. That will get the dependencies (which, by the nature of being dependencies, are required):
$ cpan .
That way, you can install your program and its dependencies without putting anything in a CPAN-like repository.
It's far from clear what you need to know.
Do my colleagues need to have the module installed for the script to work?
I think it's obvious that your colleagues need access to your code to be able to make use of it.
It's not clear what you have written, but if you have created a module then any program with access to your module files can simply use it to access its capabilities.
If not, how do I wrap up the module with the script to give it to them?
Your "if not" isn't clear. What you have written means "If they don't need to have the module installed to for the script to work", and I doubt if that is your intention
"how do I wrap up the module with the script" Are you asking how to create a module, or do you already have one? Typically, modules are accessed by programmers who write a script with the use statement
If you have a module and you want other people to be able to load it with use then it must appear in one of the directories listed in their #INC array. If you are working on separate systems then it is best to create a package that you can alter as necessary and have others update
Is there a better way of doing this than using the module I've chosen?
Are you referring to Excel::Writer::XLSX or your own module?
If Excel::Writer::XLSX is doing what you need then you probably shouldn't change. But if you are having problems with it in some way then you need to ask a new question and describe those issues.
How do I cross-compile a Qt application for a Freescale Hummingboard (i.MX6, ARM)?
There are some guides around, but I've not been able to complete one with success.
The following (and more) guides give me a compile error on ./configure
http://forum.solid-run.com/linux-on-cubox-i-and-hummingboard-f8/qt5-3-on-hummingboard-t2072.html
https://community.freescale.com/docs/DOC-94066
When I run the ./configure command (with the recommended options; I've tried a lot of different combinations, but none of them worked), I get compile errors for all of the external libraries Qt uses (zlib, libjpeg, libpng, etc.), so it's a dead end from there.
I've tried a lot more things; I don't remember everything I've tried, but I got nothing working.
I'm trying to use a mini-distribution for the Hummingboard: a system without a window manager that is able to run Qt applications (Qt 5). The toolchain I'm using is gcc-linaro-arm-linux-gnueabi, and I'm using Qt Creator. I've got Qt working on the Hummingboard; I just can't compile anything for it.
I finally managed to build an application for the i.MX6. Here is how I did it, for others. It's not an optimal solution, but it is a solution.
I use Buildgear to build the mini-distribution as the OS (Google it; I don't have enough reputation to post more links). I append my own application to this mini-distribution so that it gets built as well. This is done by creating a folder in the buildfiles/cross/cross-hummingboard folder and adding a buildfile (mine looks like this: http://pastebin.com/bZkJUiry). In this folder I also place a .tar of the project files (including the .pro). To get it to build, I add "qt-gui" as a dependency to the fs (buildfiles/cross/cross-hummingboard/fs) by adding it to the list of depends.
I then run buildgear build fs, which creates a (tarred) image including my (working) Qt application. I then extract the ./qt-gui executable and copy it to the Hummingboard over ssh.
Of course this is all a bit cumbersome, so I made a script that automates all of this: http://pastebin.com/jFM6rZyY
It copies and tars the sources, builds them together with the fs, extracts the executable, copies it to the Hummingboard over ssh and runs it. A build takes about 3 minutes, but it works, which is what counts for me at this point.
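Roughly, the automation does something like the following (all paths, the image name and the target hostname here are assumptions; the real script is in the pastebin above):

# tar up the project sources next to the buildfile (paths assumed)
tar czf buildfiles/cross/cross-hummingboard/qt-gui/qt-gui.tar.gz src/

# rebuild the filesystem image, which also builds the qt-gui package
buildgear build fs

# pull the executable out of the resulting image and push it to the board
tar xzf build/fs.tar.gz ./qt-gui
scp qt-gui root@hummingboard:/root/
ssh root@hummingboard /root/qt-gui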
I'd like to share how I implemented a solution to a problem I had, to get some feedback and maybe learn some new feature of buildbot.
Scenario:
Create a package of a given piece of software, and upload the package to the buildmaster into a shared folder.
The package name contains some data that is known to the build system (i.e. the Makefiles), specifically the software version. Let's assume the package name is:
myapp-1.2.3-r2435.tar.gz
Question:
How do I pass to the buildslave steps the data required to build up the very same package name, so that the buildslave can upload the package?
Specifically I need to know the version number (but I guess this could be any param)
Implemented (and working) solution:
The makefile, once the compilation process is completed, writes a file with the required param.
The slave uses the SetProperty() step to read the content of the file into a custom named property
Once I have the value of interest in the property (let's say APP_VERSION) I use it to build the package name with the same pattern used by the build system.
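In master.cfg terms, the relevant fragment looks roughly like this (a sketch only; the version file name and the shared folder path are assumptions, and the name pattern is simplified to version-only):

from buildbot.process.factory import BuildFactory
from buildbot.process.properties import WithProperties
from buildbot.steps.shell import ShellCommand, SetProperty
from buildbot.steps.transfer import FileUpload

factory = BuildFactory()
# build the package; the Makefile also writes the version to a file
factory.addStep(ShellCommand(command=["make", "package"]))
# read that file into the APP_VERSION property
factory.addStep(SetProperty(command=["cat", "app_version.txt"], property="APP_VERSION"))
# reconstruct the package name from the property and upload it to the master
factory.addStep(FileUpload(
    slavesrc=WithProperties("myapp-%(APP_VERSION)s.tar.gz"),
    masterdest=WithProperties("/shared/packages/myapp-%(APP_VERSION)s.tar.gz"),
))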
The described solution works, but I do not really like it because:
1) it's complicated, hence, I guess, fragile
2) it is not OS independent (I use "echo $VAR > file" to write the file, and "cat file" to read it and set the buildslave Property)
Is there in your opinion a better way to solve this issue?
Do you have any suggestions for making the solution OS independent? (It will certainly not work on Windows, and my package should be built on Windows too.)
We're using an automated build system which downloads and compiles source. The only interface I have to control the behaviour of the compilation is setting environment variables and the arguments given to './configure'.
The issue is that the 'configure' script (of the particular source I'm compiling) checks for a system header file which, if found, adversely affects the compilation process. (The compilation process will avoid compiling libraries which it believes are already installed on the local system when the above-mentioned system header file is found.)
Since this is an automated process, I cannot modify the 'configure' script in any way, and as mentioned I can only specify the environment variables and arguments passed to 'configure'. The configure script uses the AC_CHECK_HEADERS macro to generate the code that checks for the system file. Is there any way to avoid a check of a specific system file from the configure arguments?
The troublesome header file is in the path /usr/include/pcap/.
Thanks
Well, there are a few things you could try:
remove foo.h from AC_CHECK_HEADERS and always build the library
use AC_CHECK_HEADER for foo.h and check for /usr/include/pcap/foo.h and don't AC_DEFINE(HAVE_FOO_H) if /usr/include/pcap/foo.h is there.
you could use AC_ARG_ENABLE or AC_ARG_WITH to turn off the offending test on a host-by-host basis via arguments to configure (see the sketch below). So the answer to that question is yes.
All of these assume you can modify configure.ac and regenerate configure. If you can't do that you might have to modify configure (in an automated fashion, of course).
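For the third option, a configure.ac sketch might look something like this (the names are assumptions; "foo" stands for whatever bundled library the troublesome header belongs to):

# Add --disable-system-foo so the header check can be skipped per host
AC_ARG_ENABLE([system-foo],
  [AS_HELP_STRING([--disable-system-foo],
                  [ignore the system foo.h and always build the bundled copy])],
  [], [enable_system_foo=yes])
AS_IF([test "x$enable_system_foo" = "xyes"],
      [AC_CHECK_HEADERS([foo.h])])

With that in place, running ./configure --disable-system-foo would leave HAVE_FOO_H undefined and force the bundled library to be built.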
In this instance I'm using C with autoconf, but the question applies elsewhere.
I have a glade xml file that is needed at runtime, and I have to tell the application where it is. I'm using autoconf to define a variable in my code that points to the "specified prefix directory"/app-name/glade. But that only begins to work once the application is installed. What if I want to run the program before that point? Is there a standard way to determine what paths should be checked for application data?
Thanks
Thanks for the responses. To clarify, I don't need to know where the app data is installed (e.g. by searching in /usr, /usr/local, etc.); the configure script does that. The problem was more about determining whether the app has been installed yet. I guess I'll just check the install location first, and if it's not there, fall back to "./src/foo.glade".
I don't think there's any standard way to locate such data.
Personally, I'd keep a list of paths and check whether the file can be found in any of them. The list should contain DATADIR+APPNAME as defined by autoconf, and CURRENTDIRECTORY+POSSIBLE_PREFIX, where the prefix might be some folder from your build root.
But in any case, don't forget to use those defines from autoconf for your data files; they make your software easier to package (e.g. as deb/rpm).
There is no general prescription for how this should be done, but Debian packagers usually install the application data somewhere in /usr/share, /usr/lib, et cetera. They may also patch the software to make it read from the appropriate locations. You can see the Debian policy for more information.
I can, however, say a few words about how I do it. First, I don't expect to find the file in a single directory; I create a list of directories that I iterate through in my wrapper around fopen(). This is the order in which I believe the lookup should be done:
current directory (obviously)
~/.program-name
$(datadir)/program-name
$(datadir) is a variable you can use in Makefile.am. Example:
AM_CPPFLAGS = $(ASSERT_FLAGS) $(DEBUG_FLAGS) $(SDLGFX_FLAGS) $(OPENGL_FLAGS) -DDESTDIRS=\"$(prefix):$(datadir)/:$(datadir)/program-name/\"
This of course depends on your output from configure and what your configure.ac looks like.
So, just make a wrapper that will iterate through the locations and get the data from those dirs. Something like a PATH variable, except you implement the iteration.
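A minimal sketch of such a wrapper, assuming the DESTDIRS define from the AM_CPPFLAGS example above (the function name here is made up; ours is called yatc_fopen()):

#include <stdio.h>
#include <string.h>

/* Try each colon-separated directory in DESTDIRS; the first hit wins. */
FILE *data_fopen(const char *filename, const char *mode)
{
    char dirs[] = DESTDIRS;   /* e.g. "/usr/local:/usr/local/share/:/usr/local/share/program-name/" */
    char path[1024];
    char *dir;
    FILE *f;

    for (dir = strtok(dirs, ":"); dir != NULL; dir = strtok(NULL, ":")) {
        snprintf(path, sizeof(path), "%s/%s", dir, filename);
        if ((f = fopen(path, mode)) != NULL)
            return f;
    }
    return fopen(filename, mode);   /* fall back to the current directory */
}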
After writing this post, I noticed I need to clean up our implementation in this project, but it can serve as a nice start. Take a look at our Makefile.am for using $(datadir) and our util.cpp and util.h for a simple wrapper (yatc_fopen()). We also have yatc_find_file() in case some third-party library is doing the fopen()ing, such as SDL_image or libxml2.
If the program is installed globally:
/usr/share/app-name/glade.xml
If you want the program to work without being installed (i.e. just extract a tarball), put it in the program's directory.
I don't think there is a standard way of placing such files. I build the search locations into the program, and I don't limit it to one location.
It depends on how much customising of the config file is going to be required.
I start by constructing a list of default directories and work through them until I find an instance of glade.xml, then stop looking; if I don't find it, I exit with an error. Good candidates for the default list are /etc, /usr/share/app-name and /usr/local/etc.
If the file is designed to be customizable, then before I look through the default directories I work through a list of user files and paths. If that doesn't turn up one of the user versions, I then look in the list of default directories. Good candidates for the user config files are ~/.glade.xml, ~/.app-name/glade.xml or ~/.app-name/.glade.xml.