I'm working on implementing a build system using scons for a somewhat large software project. There is a directory structure which separates the code for individual libraries and programs into their own directories. With our existing make system, I can do a "make clean" in a single program directory and it will only clean the files associated with the source in that directory. If I do an "scons -c" though, it recognizes that the program depends on a slew of libraries that are in sibling (or cousin) directories and cleans all of the files for those as well. This is not what I want since I then have to rebuild all of these libraries which can take several minutes.
I have tried playing with the "NoClean()" command, but have not gotten it to work in the way I need. Given the size of the code base and complexity of the directory structure, I can't realistically have a NoClean() line for every file in every library.
Is there any way to tell scons to ignore any dependencies above the current directory when doing a clean (i.e. scons -c)?
I'd love to have a good answer to this myself.
The only solution I can offer for now is to get NoClean() working.
In your library's SConscript, you should have something like this:
lib_objs = SharedObject(source_list)
mylib = SharedLibrary('libname', lib_objs)
For this, we want to protect the library and the sources from being cleaned:
NoClean([mylib, lib_objs])
Notice that I had to split the building of the object files from the building of the library, because I want to be able to pass the objects to NoClean() as well.
Try using the target name when cleaning.
scons -c aTargetName
You can use the SCons Alias() function to simplify the target name and to also group several target names into one alias.
With this approach you'll have to add an alias in each appropriate subdir, which isn't necessarily a bad thing :)
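For example, a minimal sketch of that idea (the alias name 'prog' and the source file are illustrative, not taken from the original project):
# SConscript in the program's directory: group its targets under one alias.
prog = Program('myprog', ['main.c'])
Alias('prog', prog)  # 'scons prog' builds it; 'scons -c prog' cleans it
Combined with NoClean() on the libraries, 'scons -c prog' should then leave the library files alone.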
I've been working on a few scripts lately and I found that it would be really useful to separate some common functionality into other files, like 'utils', and then import them into my main scripts. For this, I used source ./utils.sh. However, it seems that this approach depends on the current path from where I'm calling my main script.
Let's say I have this folder structure:
scripts/
|-tools/
| |-utils.sh
|-main.sh
On main.sh:
source ./tools/utils.sh
some_function_defined_on_utils_sh
...
If I run main.sh from the scripts/ folder, everything works fine. But if I run it from a different directory, e.g. as ./scripts/main.sh, the script fails because it can't find ./tools/utils.sh, which makes sense.
I understand why this doesn't work. My question is whether there is any other mechanism to import 'modules' into my script and make everything current-dir agnostic (just like Python's from utils import some_function_defined_on_utils_sh).
Thanks!
Define an environment variable, with a suitable default setting, that points to where you'll store your modules, and then read the module using the . (dot) command.
One serious option is simply to place the files in a directory already on your PATH — $HOME/bin is a plausible candidate for private material; /usr/local/bin for material to be shared. Then you won't even need to specify the path to the files; you can write . file-found-on-path to read it. Of course, the downside is that if someone switches their PATH setting, or puts a file with the same name in another directory earlier on their PATH, then you end up with a mess.
The files do not even need to be executable — they just need to be readable to be usable with the dot (.) command (or in Bash/C shells, the source command).
Or you could specify the location more exactly, such as:
. ${SHELL_UTILITY_DIR:-/opt/shell/bin}/tools/utils.sh
or something along those general lines. The choice of environment variable name and default location is up to you, and how much substructure you use on the directory is infinitely variable. Simplicity and consistency are paramount. (Also consider whether versioning will be a problem; how will you manage updates to your utilities?)
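Putting that together, a minimal sketch of main.sh (SHELL_UTILITY_DIR and the /opt/shell/bin default are assumptions you'd adapt to your own setup):
#!/bin/bash
# Source utils.sh from a configurable module directory, falling back
# to a default location when SHELL_UTILITY_DIR is unset.
. "${SHELL_UTILITY_DIR:-/opt/shell/bin}/tools/utils.sh"

some_function_defined_on_utils_sh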
I work with the Yocto Project quite a bit and a common challenge is determining why (or from what recipe) a file has been included on the rootfs. This is something that can hopefully be derived from the build system's environment, logs and metadata. Ideally, a set of commands would allow linking a file back to a source (i.e. a recipe).
My usual strategy is to perform searches on the metadata (e.g. grep -R filename ../layers/*) and internet searches for said filenames to find clues about possible responsible recipes. However, this is not always very effective. In many cases, filenames are not explicitly stated within a recipe. Additionally, there are many cases where a filename is provided by multiple recipes, which leads to additional work to find which recipe ultimately supplied it. There are of course many other clues available to find the answer. Regardless, this investigation is often quite laborious when it seems the build system should have enough information to make resolving the answer simple.
This is the exact use case for the oe-pkgdata-util script and its find-path subcommand. The script is part of openembedded-core.
See this example (executed in an OE build environment, i.e. one where bitbake works):
tom@pc:~/oe/build> oe-pkgdata-util find-path /lib/ld-2.24.so
glibc: /lib/ld-2.24.so
You can clearly see that this library belongs to the glibc recipe.
oe-pkgdata-util has more useful subcommands for inspecting packages and recipes; it's worth checking the --help output.
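For instance, lookup-recipe should map an output package name back to the recipe that built it (libc6 is just an illustrative package here):
$ oe-pkgdata-util lookup-recipe libc6
glibc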
If you prefer a graphical presentation, the Toaster web UI will also show you this, plus dependency information.
The candidate files deployed for each recipe are placed in each recipe's $WORKDIR/image directory.
So you can cd to
$ cd ${TMPDIR}/work/${MULTIMACH_TARGET_SYS}
and perform a
$ find . -path '*/image/*/fileYouAreLookingFor'
From the result you should be able to infer the ${PN} of the recipe that deploys such a file.
For example:
$ find . -path '*/image/*/mc'
./bash-completion/2.4-r0/image/usr/share/bash-completion/completions/mc
./mc/4.8.18-r0/image/usr/share/mc
./mc/4.8.18-r0/image/usr/bin/mc
./mc/4.8.18-r0/image/usr/libexec/mc
./mc/4.8.18-r0/image/etc/mc
I have a code snippet similar to this:
# Compile protobuf headers
env.Protoc(...)
# Move headers to 'include' (compiled via protobuf)
env.Command([include headers...], [headers...], move_func)
# Compile program (depends on 'include' files)
out2 = SConscript('src/SConscript')
Depends(out2, [include headers...])
Basically, I have Protoc() compiling protobuf files, then the headers are moved to the 'include' directory by env.Command(), and finally the program is compiled through a SConscript file in 'src'.
Since these are header files that are being moved (which the src compilation depends on), they are not explicitly defined as a dependency by scons (as far as I understand). Thus, the compilation runs before the header files have been moved, so it fails. I have tried exposing the dependency via Depends() and Requires() without success.
I understand that in the usual case scons should "figure out" dependencies, but I don't know how it could do that here.
Thanks!
You seem to be thinking in "make" ways about your build process, which is the wrong approach when using SCons. You can't order single build steps by putting them in different SConscripts and then including those in a special order. You have to define proper dependencies between your actual sources (C/CPP files, for example) and a target like a program or PDF file. Then SCons is able to figure out the correct build order, and will traverse the folder structure of your project automatically. If required, it will enter subfolders more than once when the dependency graph (DAG) dictates this. Defining this kind of dependency between inputs and outputs is usually done using a Builder...and in your case the Install() builder would be a good fit. Please also regard the hints for #2 in the list of "most frequently-asked FAQs" ( https://bitbucket.org/scons/scons/wiki/FrequentlyAskedQuestions ).
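As a minimal sketch of the Install() idea, under the assumption that the generated headers are known by name (the file names below are made up, not the asker's actual targets):
# SConstruct: Install() registers the copy as a node in the dependency
# graph, so SCons orders the header move before the compile in src.
env = Environment()
# In the real build these would come from env.Protoc(); hypothetical here.
generated_headers = ['build/foo.pb.h']
installed = env.Install('include', generated_headers)
prog = SConscript('src/SConscript')
Depends(prog, installed)  # explicit hint, if the implicit scan can't see it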
Further, I can only recommend reading a little more of the UserGuide ( http://www.scons.org/doc/production/HTML/scons-user.html ) to get a better feeling for how to do things in a more "SConsy" way. If you get stuck, feel free to ask further questions on our mailing list at scons-users@scons.org (see http://www.scons.org/lists.php ).
Finally, if you have a lot of steps that you want to execute serially, and that don't require any special input/output files, SCons is probably not the right tool for your current task. It's designed as a file-oriented build system with automatic parallelization in mind; a simple (Python?) script might be better suited to the purely serial stuff...
Using SCons, if I build a static library, SCons compiles all the source files and generates .obj files. Now when I clean the static library, I don't want to clean the .obj files. How do I do that?
How about:
env = Environment()
sources = ['tridip.c', 'tridip1.c', 'tridip2.c']
objects = [ env.StaticObject(sf) for sf in sources ]
env.NoClean(objects)
lib = env.StaticLibrary('tridip', objects)
exe = env.Program('tridip3.c', LIBS=lib, LIBPATH='.')
If you don't want your final target and your static lib to actually depend on each other, you can create them in two separate SConstructs (two SCons projects). That way, you'll still be able to reference the library by name as input to your final application/executable...but there will be no direct dependency detected by SCons, meaning that the static lib won't get created automatically when you try to create your final target. You'd have to do this manually each time. This is the crucial point here: you either want a dependency lib->exe, or not. In the former case, the dependency also holds for cleaning a target (and implicitly all of its dependencies further down the tree).
A way out of this dilemma would be to use the NoClean() function (see the UserGuide at http://www.scons.org/doc/production/HTML/scons-user.html ), but you'd have to wrap each single object file with it. The consequences for the correctness and stability of your build are unclear, so I definitely don't encourage you to go this way...nor any other user reading this. ;)
In this instance I'm using c with autoconf, but the question applies elsewhere.
I have a glade xml file that is needed at runtime, and I have to tell the application where it is. I'm using autoconf to define a variable in my code that points to the "specified prefix directory"/app-name/glade. But that only begins to work once the application is installed. What if I want to run the program before that point? Is there a standard way to determine what paths should be checked for application data?
Thanks
Thanks for the responses. To clarify, I don't need to know where the app data is installed (e.g. by searching in /usr, /usr/local, etc.); the configure script does that. The problem was more determining whether the app has been installed yet. I guess I'll just check the install location first, and if it's not there, fall back to "./src/foo.glade".
I don't think there's any standard way to locate such data.
Personally, I'd keep a list of paths and check whether the file can be found in any of them. The list should contain DATADIR+APPNAME as defined by autoconf, plus CURRENTDIRECTORY+POSSIBLE_PREFIX, where the prefix might be some folder from your build root.
In any case, don't forget to use those autoconf defines for your data files; they make your software easier to package (e.g. as deb/rpm).
There is no prescription for how this should be done in general, but Debian packagers usually install the application data somewhere in /usr/share, /usr/lib, et cetera. They may also patch the software to make it read from the appropriate locations. You can see the Debian policy for more information.
I can however say a few words about how I do it. First, I don't expect to find the file in a single directory; I create a list of directories that I iterate through in my wrapper around fopen(). This is the order in which I believe the file reading should be done:
current directory (obviously)
~/.program-name
$(datadir)/program-name
$(datadir) is a variable you can use in Makefile.am. Example:
AM_CPPFLAGS = $(ASSERT_FLAGS) $(DEBUG_FLAGS) $(SDLGFX_FLAGS) $(OPENGL_FLAGS) -DDESTDIRS=\"$(prefix):$(datadir)/:$(datadir)/program-name/\"
This of course depends on your output from configure and what your configure.ac looks like.
So, just make a wrapper that will iterate through the locations and get the data from those dirs. Something like a PATH variable, except you implement the iteration.
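A bare-bones sketch of such a wrapper (the function name wrapped_fopen is mine, and DESTDIRS mirrors the -DDESTDIRS define from the Makefile.am example above; error handling kept minimal):
#include <stdio.h>
#include <string.h>

/* Colon-separated list of candidate directories, normally passed in
 * via -DDESTDIRS from Makefile.am as shown above. Note that fopen()
 * does not expand '~', so expand $HOME yourself for per-user dirs. */
#ifndef DESTDIRS
#define DESTDIRS ".:/usr/share/program-name"
#endif

/* Try each directory in DESTDIRS in order; return the first file found. */
FILE *wrapped_fopen(const char *name, const char *mode)
{
    char dirs[] = DESTDIRS;  /* writable copy, since strtok() modifies it */
    char path[4096];

    for (char *d = strtok(dirs, ":"); d != NULL; d = strtok(NULL, ":")) {
        snprintf(path, sizeof path, "%s/%s", d, name);
        FILE *fp = fopen(path, mode);
        if (fp != NULL)
            return fp;
    }
    return NULL;
}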
After writing this post, I noticed I need to clean up our implementation in this project, but it can serve as a nice start. Take a look at our Makefile.am for using $(datadir) and our util.cpp and util.h for a simple wrapper (yatc_fopen()). We also have yatc_find_file() in case some third-party library is doing the fopen()ing, such as SDL_image or libxml2.
If the program is installed globally:
/usr/share/app-name/glade.xml
If you want the program to work without being installed (i.e. just extract a tarball), put it in the program's directory.
I don't think there is a standard way of placing files. I build it into the program, and I don't limit it to one location.
It depends on how much customising of the config file is going to be required.
I start by constructing a list of default directories and work through them until I find an instance of glade.xml, then stop looking; if I don't find it, I exit with an error. Good candidates for the default list are /etc, /usr/share/app-name, /usr/local/etc.
If the file is designed to be customizable, then before I look through the default directories I work through a list of user files and paths. If none of the user versions is found, I then look in the list of default directories. Good candidates for the user config files are ~/.glade.xml, ~/.app-name/glade.xml or ~/.app-name/.glade.xml.