In this instance I'm using C with autoconf, but the question applies elsewhere.
I have a Glade XML file that is needed at runtime, and I have to tell the application where it is. I'm using autoconf to define a variable in my code that points to the "specified prefix directory"/app-name/glade. But that only begins to work once the application is installed. What if I want to run the program before that point? Is there a standard way to determine what paths should be checked for application data?
Thanks
Thanks for the responses. To clarify, I don't need to know where the app data is installed (e.g. by searching in /usr, /usr/local, etc.); the configure script does that. The problem was more about determining whether the app has been installed yet. I guess I'll just check the install location first, and fall back to "./src/foo.glade" if it isn't there.
I don't think there's any standard way to locate such data.
Personally, I'd keep a list of paths and check whether the file can be found in any of them. The list should contain DATADIR+APPNAME as defined by autoconf, and CURRENTDIRECTORY+POSSIBLE_PREFIX, where the prefix might be some folder from your build root.
In any case, don't forget to use the defines from autoconf for your data files; they make your software easier to package (e.g. as deb/rpm).
There is no prescription for how this should be done in general, but Debian packagers usually install application data somewhere in /usr/share, /usr/lib, et cetera. They may also patch the software to make it read from the appropriate locations. See the Debian Policy for more information.
I can, however, say a few words about how I do it. First, I don't expect to find the file in a single directory; instead, I create a list of directories that I iterate through in my wrapper around fopen(). This is the order in which I believe the file reading should be done:
current directory (obviously)
~/.program-name
$(datadir)/program-name
$(datadir) is a variable you can use in Makefile.am. Example:
AM_CPPFLAGS = $(ASSERT_FLAGS) $(DEBUG_FLAGS) $(SDLGFX_FLAGS) $(OPENGL_FLAGS) -DDESTDIRS=\"$(prefix):$(datadir)/:$(datadir)/program-name/\"
This of course depends on your output from configure and what your configure.ac looks like.
So, just make a wrapper that will iterate through the locations and get the data from those dirs. Something like a PATH variable, except you implement the iteration.
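A rough sketch of such a wrapper, assuming the DESTDIRS define from the Makefile.am example above (the function and buffer names here are illustrative, not from the project):
#include <stdio.h>
#include <string.h>

/* Try each colon-separated candidate directory in turn; first hit wins.
 * DESTDIRS comes from the -DDESTDIRS=... define shown above; a
 * ~/.program-name entry would need tilde expansion before use. */
FILE *wrapped_fopen(const char *name)
{
    char dirs[] = ".:" DESTDIRS;   /* current directory first */
    char path[4096];
    char *dir;

    for (dir = strtok(dirs, ":"); dir != NULL; dir = strtok(NULL, ":")) {
        snprintf(path, sizeof(path), "%s/%s", dir, name);
        FILE *fp = fopen(path, "rb");
        if (fp)
            return fp;
    }
    return NULL;   /* not found in any candidate directory */
}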
After writing this post, I noticed I need to clean up our implementation in this project, but it can serve as a nice start. Take a look at our Makefile.am for using $(datadir) and our util.cpp and util.h for a simple wrapper (yatc_fopen()). We also have yatc_find_file() in case some third-party library is doing the fopen()ing, such as SDL_image or libxml2.
If the program is installed globally:
/usr/share/app-name/glade.xml
If you want the program to work without being installed (i.e. just extract a tarball), put it in the program's directory.
I don't think there is a standard way of placing files. I build the search paths into the program, and I don't limit it to one location.
It depends on how much customising of the config file is going to be required.
I start by constructing a list of default directories and work through them until I find an instance of glade.xml and stop looking, or exhaust the list and exit with an error. Good candidates for the default list are /etc, /usr/share/app-name, and /usr/local/etc.
If the file is designed to be customizable, I work through a list of user files and paths before looking in the default directories. If none of the user versions is found, I then look through the list of default directories. Good candidates for the user config files are ~/.glade.xml, ~/.app-name/glade.xml, or ~/.app-name/.glade.xml.
I'm working on implementing a build system using scons for a somewhat large software project. There is a directory structure which separates the code for individual libraries and programs into their own directories. With our existing make system, I can do a "make clean" in a single program directory and it will only clean the files associated with the source in that directory. If I do an "scons -c" though, it recognizes that the program depends on a slew of libraries that are in sibling (or cousin) directories and cleans all of the files for those as well. This is not what I want since I then have to rebuild all of these libraries which can take several minutes.
I have tried playing with the "NoClean()" command, but have not gotten it to work in the way I need. Given the size of the code base and complexity of the directory structure, I can't realistically have a NoClean() line for every file in every library.
Is there any way to tell scons to ignore any dependencies above the current directory when doing a clean (i.e. scons -c)?
I'd love to have a good answer to this myself.
The only solution that I can offer for now is that you get NoClean() working.
So in your library you should have something like this:
source_list = Glob('*.c')                    # this library's sources (illustrative)
lib_objs = SharedObject(source_list)         # build the object files separately
mylib = SharedLibrary('libname', lib_objs)   # link them into the shared library
So for this we want to protect the library and the object files from being cleaned.
NoClean([mylib, lib_objs])   # keep both out of 'scons -c'
Notice that I had to split the building of the object files from the library because I want to be able to pass them to NoClean as well.
Try using the target name when cleaning.
scons -c aTargetName
You can use the SCons Alias() function to simplify the target name and to also group several target names into one alias.
With this approach you'll have to add an alias in each appropriate subdir, which isn't necessarily a bad thing :)
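For example (a sketch; the program and alias names are made up):
# In the program directory's SConscript:
prog = Program('myapp', Glob('*.c'))
Alias('myapp-only', prog)   # then: scons -c myapp-only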
I was wondering whether Node.js/npm includes any kind of extension mechanism comparable to Python setuptools' "entry points".
So, in short:
is there any way I can do dynamic discovery of services provided by other packages using npm?
if not, what would be the best way to implement something similar? Specifying the extension name in the main module's configuration file seems to be the logical solution, but I wonder whether something "automatic" can be done.
I'm not aware of any builtin mechanism to do this.
One viable way of doing it yourself:
I made a small tool (Jumpstart) to quickly create project scaffolding from templates with placeholders, and I used a kind of plugin mechanism for that. It basically comes down to the Jumpstart script searching for modules named jumpstart-* "adjacent" to where the module itself is installed. So it works for both local and global installations: if installed locally, it searches the other local modules (on the same level), and if global, it searches the other global modules.
Note that here, "search" comes down to a simple fs.exists check to see if there's a Jumpstart template module with a particular name installed. However, nothing would stand in the way of actually getting a full list of all installed packages matching the jumpstart-* pattern and loading them all at once. I could also search up the entire directory tree for node_modules directories and do the same. There's no point in doing that for this particular program, however.
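A rough sketch of that scan (the directory layout and the jumpstart- prefix are assumptions about a local node_modules install):
var fs = require('fs');
var path = require('path');

// Sibling modules live one level above this module's own directory,
// i.e. in the same node_modules folder.
function findPlugins(prefix) {
  var modulesDir = path.resolve(__dirname, '..');
  return fs.readdirSync(modulesDir)
    .filter(function (name) { return name.indexOf(prefix) === 0; })
    .map(function (name) { return require(path.join(modulesDir, name)); });
}

var plugins = findPlugins('jumpstart-');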
See https://npmjs.org/package/jumpstart for docs.
The only limitation of this technique is that all modules must be named in a consistent fashion: start with some string, end with some string, something like that. Any rogue packages polluting the namespace could be detected by doing further checks on a package's contents: what files does it contain? What kind of object does its main module export? And so on.
Brunch also uses a plugin mechanism. This one actually deals with file extensions, so it is more relevant: https://github.com/brunch/brunch/wiki/Plugins . See, for example, the source of the CoffeeScript plugin: https://github.com/brunch/coffee-script-brunch/blob/master/src/index.coffee .
Is there a kind of "include" directive in RPM spec? I couldn't find an answer by googling.
Motivation: I have an RPM spec template which the build process modifies with the version, revision and other build-specific data; currently this is done with sed. I think it would be cleaner if the spec could %include a build-specific definitions file generated by the build process, so I don't need to search and replace in the spec.
If there is no include, is there an idiomatic way to do this (quite common, I believe) task?
Sufficiently recent versions of rpmbuild certainly do support %include:
%include common.inc
Unfortunately, they aren't very smart about it -- there is no known set of directories in which it will look for the requested files. But it is there, and variables are expanded; for example:
%include %{_topdir}/Common/common.inc
RPM does not support includes.
I have solved similar problems with either the m4 macro processor or by just concatenating parts of the spec (when the "include" was at the beginning).
If you only need to pass a few variables at build time, and not include several lines from another file, you can run
rpmbuild --define 'myvar SOMEVALUE' -bb myspec.spec
and you can use %myvar in the spec.
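For example, a spec fragment like this (using the value as the version, matching the questioner's motivation):
Version: %{myvar}
would end up with Version: SOMEVALUE after the rpmbuild invocation above.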
I faced this same issue recently. I wanted to define multiple sub-packages that were similar, but each varied just slightly (they were language-specific RPMs). I didn't want to repeat the same boiler-plate stuff for each sub-package.
Here's a generic version of what I did:
%define foo_spec() %{expand:%(cat '%{myloc}/main-foo.spec')}
%{foo_spec bar}
%{foo_spec baz}
%{foo_spec qux}
The use of %{expand} ensures that %(cat) is only executed a single time, when the macro is defined. The content of the main-foo.spec file is then expanded three times, and each time %1 in the main-foo.spec file expands to each of bar, baz and qux in turn, allowing me to treat it as a template. You could easily extend this to more than one parameter if you have the need (I did not).
For the underlying issue, there may be two additional solutions that are present in all RPM versions that I am aware of:
Subpackages
Macro and rpmrc files.
Subpackages
Another alternative (and perhaps the "RPM way") is to use subpackages. Maximum RPM also has information and examples of subpackages.
I think the question is trying to structure something like this:
two spec files; say rpm_debug.spec and rpm_production.spec
both use %include common.spec
debug and production could also be client and server, etc. For the examples of redefining a variable, each subpackage can have its own list of variables.
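A minimal sketch of that structure as subpackages (all names are illustrative; the %build, %install and %files sections are omitted):
Name: myapp
Version: 1.0
Release: 1
Summary: Example with debug and production variants
License: GPL

%description
Common base package.

%package debug
Summary: Debug variant of myapp

%description debug
Variant built with debugging information retained.

%package production
Summary: Production variant of myapp

%description production
Stripped production variant.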
Limitations
The main advantage of subpackages is that only one build takes place; this may also be a disadvantage. The debug and production example may highlight this. It can be worked around by using strip to create the variants, or by compiling twice with different output (perhaps using VPATH with GNU Make). Having to compile large packages that then differ only in simple variations, such as with/without developer info (headers, static libraries, etc.), can make you appreciate this approach.
Macros and Rpmrc
Subpackages don't solve the problem of structural defines that you want for an entire rootfs hierarchy or a larger collection of RPMs. For that there is rpmbuild --showrc. You can have a large number of variables and macros defined by altering the rpmrc and macros files used when you run rpm and rpmbuild. From the man page:
rpmrc Configuration
/usr/lib/rpm/rpmrc
/usr/lib/rpm/redhat/rpmrc
/etc/rpmrc
~/.rpmrc
Macro Configuration
/usr/lib/rpm/macros
/usr/lib/rpm/redhat/macros
/etc/rpm/macros
~/.rpmmacros
I think these two features can solve all the problems that %include can. However, %include is a familiar concept and was probably added to make rpm more full-featured and developer-friendly.
Which version are you talking about? I currently have %include filename.txt in my spec file and it seems to work just like the C #include directive.
> rpmbuild --version
RPM version 4.8.1
You can include the *.inc files from the SOURCES directory (%_sourcedir):
Source1: common.inc
%include %{SOURCE1}
This way they will automatically be included in the SRPM.
I've used scripts (name your favorite) to take a template and create the spec file from it. Also, the %files section can read its list from a file created by another process, e.g. Python's bdist_rpm.
I am trying to build an application that has an OpenSync 0.4 (0.3.9, in fact) dependency.
In the project's configure.ac, the opensync library is written as libopensync1. However, this doesn't build on my Gentoo system; changing libopensync1 to libopensync fixes the issue for me.
I searched with Google and found that libopensync1 is used in some distributions, while libopensync is used in others. So how do I resolve this issue in configure.ac?
Thanks.
The macro AC_SEARCH_LIBS does what you need. (There is much heated debate about whether or not pkg-config should ever be used. If you choose to rely on it, ptomato gives a reasonable approach.) Simply add this to your configure.ac:
AC_SEARCH_LIBS([osync_mapping_new],[opensync1 opensync],[],
[AC_MSG_ERROR([can't find opensync])])
This will first look for a library named opensync1; if it doesn't find that, it will look for opensync.
The primary drawback of using pkg-config is that most projects that rely on it do not actually check if the data provided by the .pc file is reliable, so configure may succeed but a subsequent build will fail. It is always possible for a user to set PKG_CONFIG=true when running configure and completely eliminate all of the data provided by any associated .pc files, setting LIBS, CFLAGS, etc. by hand the 'old-fashioned' way.
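For example (the header path and library name here are guesses, not taken from any particular OpenSync installation):
./configure PKG_CONFIG=true \
    CPPFLAGS=-I/usr/local/include/opensync-1.0 \
    LIBS=-lopensync1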
The primary drawback of not using pkg-config is that the user has to set LIBS, CFLAGS, etc. the old-fashioned way. In practice, this is pretty trivial, and all pkg-config has done is move the data from a single CONFIG_SITE file to separately maintained .pc files for each package.
If you do use PKG_CHECK_MODULES, follow it up with a call to AC_CHECK_LIB or AC_SEARCH_LIBS to validate the data in whatever .pc file was located by PKG_CHECK_MODULES.
I'm assuming that the place at which this occurs inside your configure.ac is inside a PKG_CHECK_MODULES call.
Looking at the libopensync sources, it seems that libopensync1 is the newer name, and libopensync is the old name. So, we'll use pkg-config macros to look for the newer name unless it doesn't exist.
Put this in your configure.ac:
# Check if libopensync1 is known to pkg-config, and if not, look for libopensync instead
PKG_CHECK_EXISTS([libopensync1], [OPENSYNC=libopensync1], [OPENSYNC=libopensync])
Then later in your PKG_CHECK_MODULES call, replace libopensync1 with $OPENSYNC.
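The module check might then end up looking like this (no version requirement shown; add one if you need it):
PKG_CHECK_MODULES([OPENSYNC], [$OPENSYNC])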
I'm trying to compile a piece of software which has the standard build process e.g.
configure
make
make install
The software requires a library e.g. libreq.so which is installed in /usr/local/lib. However, my problem is I'd like to build the software and link it with a different version of the same library (i have the source for the library as well) that I've installed in /home/user/mylibs.
My question is: how do I compile and link the software against the library in /home/user/mylibs rather than the one in /usr/local/lib?
I tried setting "LD_LIBRARY_PATH" to include /home/user/mylibs but that didn't work.
Thanks!
When you have an autoconf configure script, use:
CPPFLAGS=-I/home/user/include LDFLAGS=-L/home/user/mylibs ./configure ...
This adds the nominated directory to the list of directories searched for headers (usually necessary when you're using a library), and adds the other nominated directory to the list searched for actual libraries.
I use this all the time; on my work machine, /usr/local is 'maintained' by MIS and contains obsolete code 99.9% of the time (and is NFS-mounted, read-only), so I struggle to avoid using it at all and maintain my own, more current versions of the software under /usr/gnu. It works for me.
Try setting LD_PRELOAD to your actual library file.
LD_LIBRARY_PATH is for finding the dynamic libraries at runtime. When compiling, you should add -L options to gcc/g++ to specify which directory the *.so files are in. You also need to name the library with -l<NAME> (where the library file is lib<NAME>.so).
Note that for dynamic linking the libNAME.so file is sufficient at link time; a libNAME.a archive is only needed if you want to link statically.
When you run the application, don't forget to add the directory to LD_LIBRARY_PATH.
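Putting it together with the paths from the question (assuming the library file is libreq.so and a single source file main.c):
gcc -o myapp main.c -L/home/user/mylibs -lreq
LD_LIBRARY_PATH=/home/user/mylibs ./myapp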
When you added /home/user/mylibs to LD_LIBRARY_PATH, did you add it to the front or the end of the existing paths? The entries are searched in order, so you will want yours to appear first in the list.
Also, many standard build environments that use configure will allow you to specify an exact library for each required piece. You will have to run ./configure --help, but you should see something like --with-BLAH=/path/to/your/library or similar.