I have some targets that need to be built in order to determine what some of my other targets are. How do I tell SCons?
An example:
A script, generate, is run on some configuration files. This script generates include paths and build flags based on information in the configuration files. In order to build an SCons Object, I need to read the generated files.
I was just running Execute() on generate, but it now has lots of files to generate and takes a good amount of time, so I only want to run it when it or a configuration file changes. How do I tell SCons to ask me at build time for some more targets once this Command has done whatever it needs to do?
OK, some SCons clarifications first. SCons has two phases in a build. First, in the analysis phase, all SCons scripts are executed; the result is a static dependency tree describing the source and target files for all the builders defined in the scripts. Next, based on that tree, the build database from the last build, and the signatures of the files on disk, all builders with out-of-date targets are rebuilt.
Now to your question. If you want to run generate only when necessary (when generate or the configuration files change), then running generate as part of the analysis phase is out of the question. So don't use Execute(). Instead, generate must be a builder of its own. So far so good.
Now you have two builders: the first is generate, and the second I'll call buildObject. buildObject depends on the targets of generate, but as you state, those targets are unknown at analysis time (because generate is not run then; it is only set up as a builder). Having unknown targets at analysis time is a classic challenge with SCons, and there is no easy way to solve it.
I normally solve it with what I call an SCons.pleaser file.
In your case it would be a known target that generate produces, containing a high-resolution timestamp. The buildObject builder then takes this file as a source.
Now, if your configuration files have not changed, generate will not run, the SCons.pleaser will not change, and buildObject will not run. If you change your configuration files, generate will run, the SCons.pleaser will change, and buildObject will run as well.
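A minimal sketch of that pattern, assuming generate is a script that takes the configuration files as arguments and a POSIX shell (the file names here are hypothetical):

env = Environment()

# generate rewrites its real (unknown) outputs plus the known SCons.pleaser
# target, which just holds a high-resolution timestamp
pleaser = env.Command(
    "SCons.pleaser",
    ["config_a.cfg", "config_b.cfg"],   # hypothetical configuration files
    "./generate $SOURCES && date +%s%N > $TARGET",
)
env.Depends(pleaser, "generate")  # also rerun when the script itself changes

# buildObject lists the pleaser as a source, so it reruns exactly when
# generate actually ran and rewrote SCons.pleaser
env.Command("buildObject.out", pleaser, "./buildObject > $TARGET")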
Regards
The solution I went with was to make a new SConstruct that knows how to do the generate phase, and to Execute() it early in my SConscripts, before I get to the bits where its output is needed. It works well, since it builds things only as necessary, with the small fixed overhead of invoking SCons from within SCons.
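For reference, that approach can look roughly like this (SConstruct.generate is a hypothetical file name); Execute() runs its action during the analysis phase and returns the exit status:

# early in the SConscript, before any targets that need the generated files
if Execute("scons -Q -f SConstruct.generate"):
    Exit(1)  # the generate sub-build failed; stop before reading its outputs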
In order to reduce the executable size of a Rust program (called runtime in my code), I am trying to compress it and then include it in a second program (called szl) that decompresses it and executes it.
I have done that by using a Cargo build script in szl that opens the output binary from runtime, compresses it, and then generates a file that is ready for use by include_bytes!.
The issue with this approach is that the dependencies are not handled properly. For example, Cargo may try to build szl before runtime (and fail), and when the source code of runtime is modified, szl is not rebuilt.
Is there a way to tell Cargo that szl depends on the binary from runtime (and transitively on the source code of runtime), or should I use another approach such as an external Makefile?
While not exactly your use case, you might get it to work with the links manifest key. It would allow you to express a dependency between the two programs and you can pass more information with DEP_FOO_KEY variables.
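Sketched out, that might look like the following in runtime's manifest. Cargo turns cargo:KEY=VALUE lines printed by runtime's build script into DEP_RUNTIME_KEY environment variables visible to the build scripts of crates that depend on it (the exact keys you pass are up to you):

# runtime/Cargo.toml
[package]
name = "runtime"
links = "runtime"   # nominal native link; unlocks the DEP_RUNTIME_* mechanism
build = "build.rs"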
Before you go to such drastic measures, it might be worth trying the other known strategies for reducing Rust binary size (stripping debug symbols, LTO, panic=abort, and so on).
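For instance, these standard Cargo profile settings often shrink a release binary substantially before any compression is attempted:

# Cargo.toml
[profile.release]
strip = true        # strip symbols from the binary
lto = true          # enable link-time optimization
panic = "abort"     # omit unwinding machinery
opt-level = "z"     # optimize aggressively for size
codegen-units = 1   # trade build time for better optimization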
I have compiled the sources of wget (from https://ftp.gnu.org/gnu/wget/) in order to link my own program against one of the object files produced by the build. But running nm -u on the desired file (specifically src/http.o) gives me a long list of names that need to be resolved at link time.
Question #1
Is there a tool to find which other object files need to be present for the linker to resolve all the symbols? Manually testing every possible combination of object files does not even seem reasonable.
Question #2
When I try to link my program with every object file obtained from compiling the project, I get the following error: multiple definition. Does that imply that, in general, I need to select only a meaningful subset of the object files I get after compiling some project, and then build my executable with them?
Is there a tool to find which other object files need to be present for the linker to resolve all the symbols?
No. Constructing such a tool would not be difficult (you want to find connected components in the dependency graph), but the problem is not common.
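For the curious, here is a rough sketch of such a tool: it maps each object file's defined and undefined symbols with nm, then greedily pulls in objects that define still-unresolved symbols. missing.txt is a hypothetical list of your program's unresolved symbols (e.g. from nm -u), and symbols satisfied by system libraries are simply dropped:

import glob
import subprocess

def symbols(obj):
    """Return (defined, undefined) symbol sets for one object file."""
    defined, undefined = set(), set()
    out = subprocess.run(["nm", obj], capture_output=True, text=True).stdout
    for line in out.splitlines():
        parts = line.split()
        if len(parts) >= 2:
            (undefined if parts[-2] == "U" else defined).add(parts[-1])
    return defined, undefined

objs = {o: symbols(o) for o in glob.glob("*.o")}
needed = set()
missing = set(open("missing.txt").read().split())
while missing:
    sym = missing.pop()
    provider = next((o for o, (d, _) in objs.items() if sym in d), None)
    if provider and provider not in needed:
        needed.add(provider)
        resolved = set().union(*(objs[o][0] for o in needed))
        missing |= objs[provider][1] - resolved
print("\n".join(sorted(needed)))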
Manually testing every possible combination of object files does not even seem reasonable.
It looks like wget consists of about 100 source files. Since any subset of the resulting object files might be the right one, an exhaustive search would mean on the order of 2^100 link attempts, which is indeed far too many combinations to try.
As @kaylum commented, the developers didn't intend wget as a reusable library, so there is no guarantee that there is a solution even if you do try every possible combination.
Also note that linking in wget sources imposes licence restrictions on your final program (you would have to release it under GPLv3).
When I try to link my program with every object file obtained from compiling the project, I get the following error: multiple definition.
That is expected: both your own program and wget/src/main.c define the main function.
Does that imply that, in general, I need to select only a meaningful subset of the object files I get after compiling some project, and then build my executable with them?
Yes. And in general there is no guarantee that a subset satisfying your requirements even exists.
I have a directory with multiple source files of indeterminate name; the only thing I know is the file extension. I want to take each source file and build a single target from each. The method I'm currently using determines the name of each source with a for loop:
from os import listdir

targets = []
for file in listdir('.'):
    if file.endswith('.xdm'):
        targets += env.m4(source=file)
The advantage of doing it programmatically like this is that the SConscript doesn't have to be maintained by the developers as they add new sources. The problem is that the targets are no longer cleaned, because of something to do with dependencies that I don't entirely understand.
So my question is: is there a more appropriate way to do this using built-in SCons functionality, without relying on more traditional flow control, or should I just make sure each of my sources is determined and list them individually in the SConscript?
Instead of fiddling with listdir, I would simply use the Glob() method provided by SCons itself:
for file in Glob("*.xdm"):
    env.m4(source=file)
This (like the example from your question) is a perfectly fine approach, since it exploits the fact that SConscripts are actually Python scripts. The Glob() approach has the advantage of also finding *.xdm files that don't exist on the hard drive yet but may get created as part of the build process later.
I wonder about the problems you mention regarding cleaning of the targets. The Q&A linked in your question seems unrelated to me. If you experience actual "cleaning" problems with either of the approaches above, please post a separate question with the full verbatim input and output. If it turns out that this doesn't work out of the box, I'd consider it a bug.
I am doing a timestamp-only build to bulk convert image files. Many of the converted image files already exist, but I like to make sure that they are all checked through each time.
Why does SCons require a database file (.sconsign.dblite) that it uses for MD5 hash data when it has been instructed (via env.Decider("timestamp-newer")) to deal only with timestamps? It shouldn't need to keep a database between builds for timestamps, because all the information is associated with the files themselves.
If the dblite database doesn't exist, SCons reconverts all the images, regardless of whether their timestamps imply they need to be rebuilt. The title is an example of the message I get when the dblite database does not exist.
If anyone can explain this I'd really appreciate it. I love the functional programming with Python, but SCons itself is not quite doing it for me at the moment.
Using "timestamp-newer", SCons actually stores the timestamp info. You can see why here:
Using Time Stamps to Decide If a File Has Changed
Try using "timestamp-match" instead.
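A minimal sketch of that, assuming a hypothetical image-conversion command:

env = Environment()
env.Decider("timestamp-match")  # rebuild only if a source's stored timestamp changed
env.Command("out.jpg", "in.png", "convert $SOURCE $TARGET")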
I finally got this sorted. Brady was right about how to use SCons, but a few days ago I eventually worked out that you can also control exactly what gets built by controlling which build commands are issued in the first place. In my case I ignore any image files for which the target file already exists, using os.path.exists().
Sounds simple, but it is a conceptual difference between SCons and make, because make does not save state between builds the way SCons does.
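In SConscript terms the idea is roughly this (the conversion command and file extensions are hypothetical):

import os

# only issue build commands for images whose converted counterpart is absent
for src in Glob("*.png"):
    dst = os.path.splitext(str(src))[0] + ".jpg"
    if not os.path.exists(dst):
        env.Command(dst, src, "convert $SOURCE $TARGET")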
Yes, I'm trying to work out the same thing, but I'm doing bulk conversion of video files, which takes several days if done unnecessarily. I've already done most of it.
So I want a way to tell SCons, "For files that exist now, store their existing timestamps/MD5s, and don't rebuild unless that changes in future."
Will report back if I find a way...
I think your question is really about why there's a .sconsign.dblite when you set the decider to just check timestamps.
One reason is that it allows SCons to keep track of the method used to produce each target. If that changes, even if the timestamp doesn't, it should rebuild the affected targets.
Have you tried building a single file, and then using the sconsign utility to examine the contents of the .sconsign.dblite file?
I'm trying to cheaply and accurately predict all the SystemVerilog dependencies for a build flow. It is OK to over-predict and pick up a few Verilog files that aren't actually dependencies, but I don't want to miss any.
Do I actually have to parse the Verilog in order to determine all of its dependencies? There are `include preprocessor macros, but those don't seem to account for all the code currently being compiled. There is a SYSTEM_VERILOG_PATH environment variable. Do I need to parse every SystemVerilog file on SYSTEM_VERILOG_PATH in order to determine which modules are defined in which files?
One good way (if this is synthesizable code) is to use your synthesis tool file list (e.g. .qsf for Altera). That tends to be complete, but if it isn't, you can look at the build log for missing files that it found.
From a readily compiled environment it is possible to dump the source files, e.g. with Cadence:

# list the source files used by the snapshot 'worklib.top:snap'
% ncls -source -snapshot worklib.top:snap
If you are starting from scratch, however, I am afraid there is no easy solution. I would go for the pragmatic one: have a config file with all the directories that contain .sv files, and then compile everything in them. If your project has a proper file structure, you could also modularize this by supplying a config file for every major block.
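A small sketch of that pragmatic approach, assuming a hypothetical sv_dirs.cfg listing one directory per line:

from pathlib import Path

# collect every .sv file under the configured directories
dirs = [d.strip() for d in Path("sv_dirs.cfg").read_text().splitlines() if d.strip()]
sources = [str(p) for d in dirs for p in Path(d).rglob("*.sv")]
print("\n".join(sources))  # hand this list to the compile step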
Hope that helps.
I know Questa has a command-line option that makes it generate a makefile for you, with all the dependencies in it, after you have compiled your design. I'm not sure whether the other simulators have that.
Another option is to browse and dump your compiled library in your simulator. You probably won't get the actual filenames the modules were compiled from, but it will be a lot easier to parse all your Verilog files for just the module names that show up in the compiled library.
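If you go that route, mapping module names back to files can be as simple as a regex scan (a sketch, assuming the sources live under the current directory):

import re
from pathlib import Path

module_re = re.compile(r"^\s*module\s+(\w+)", re.MULTILINE)
defined_in = {}  # module name -> defining file
for f in Path(".").rglob("*.sv"):
    for name in module_re.findall(f.read_text(errors="ignore")):
        defined_in[name] = str(f)
print(defined_in)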