First, I've got an SConstruct file like this:
Object('a.s')
Program('mya','a.o')
I run scons, it generates 'mya'. OK. Then I change my SConstruct to be:
Object('a.s',CCFLAGS='-DHello')
Program('mya','a.o')
Run scons again. Nothing is done:
scons: Reading SConscript files ...
scons: done reading SConscript files.
scons: Building targets ...
scons: `.' is up to date.
scons: done building targets.
This is quite weird to me. When I used make or msbuild, whenever an argument changed in the project configuration file, a rebuild was triggered. This is a default rule.
But SCons's rule seems to be different. Is this by design? Does only a change to a source file trigger a rebuild? If that is the design, I think there's a flaw: when a compilation/linker option changes, the target file should be different and thus should be rebuilt, right?
Is my understanding incorrect, or are there some special points about SCons that I still need to know?
Thanks a lot.
Referring directly to your last paragraph, and based on your last three questions (Using 'LIBS' in scons 'Program' command failed to find static library, why? and When changing the comment of a .c file, scons still re-compile it? and this one) and the depth of them, yes, there seem to be a lot of things that you don't know about SCons.
So please take the next step of reading its MAN page and the UserGuide. You might also want to step your tone down a bit and, instead of questioning its design or claiming that there seems to be a "flaw", do your homework (see also How To Ask Questions The Smart Way).
When you call "scons -c" followed by "scons", you should see that the "-DHello" doesn't appear in the command line, even though "a.o" gets rebuilt. The variable $CCFLAGS isn't used to compile assembler files, but $ASFLAGS is... and when you set that instead, you should indeed see a rebuild immediately, without editing the source file.
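A minimal SConstruct sketch of that fix, reusing the a.s/mya names from the question (this is a build-configuration fragment run by scons, not standalone Python):

```python
# SConstruct (sketch) -- assembler sources are compiled with $ASFLAGS,
# not $CCFLAGS, so the flag must go there for it to reach the command
# line (and thus change the command signature):
Object('a.s', ASFLAGS='-DHello')
Program('mya', 'a.o')
```

Because the flag now actually appears in the assembler command line, the command-line signature for a.o changes, and SCons rebuilds a.o (and then mya) on the next run without any edit to a.s.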
Related
I have a project with both C and C++ files, and I've been using NMake to build. My problem is that if I have two inference rules, one for each file type,
{$(dirSrc)}.c{$(dirObj)}.obj:
cl /nologo /c /EHsc /Fo$(dirObj)\ $<
{$(dirSrc)}.cpp{$(dirObj)}.obj:
cl /nologo /c /EHsc /Fo$(dirObj)\ $<
$(binPath): $(dirObj)\*.obj
link /nologo /dll /out:$(binPath) $(dirObj)\*.obj
only the c files get compiled, presumably because the .c extension is first in the .SUFFIXES list.
I could of course simply change the extensions on the c files to cpp, but I was wondering if anyone knows of a way to have both rules invoked.
Well, to answer my own question, the best I could think of was to compile to 2 separate directories, then point to both when running the linker.
{$(dirSrc)}.c{$(dirObj)\c}.obj:
cl /nologo /c /EHsc /Fo$(dirObj)\c\ $<
{$(dirSrc)}.cpp{$(dirObj)\cpp}.obj:
cl /nologo /c /EHsc /Fo$(dirObj)\cpp\ $<
$(binPath): $(dirObj)\c\*.obj $(dirObj)\cpp\*.obj
link /nologo /dll /out:$(binPath) $(dirObj)\c\*.obj $(dirObj)\cpp\*.obj
(As a reference for others in the same boat) the problem is the *.obj wildcard.
That, together with the fact that if there are multiple (inference) rules for a target, only one can be reasonably applied to create/update it.
Now, the build logic from a clean state is the following (simplified):
Need to build $(binPath), let's see what it depends on...
There's that *.obj, so let's look closer...
Doesn't match any existing files; check if we could build it somehow...
Cool, an inference rule says .obj can be built from .c (or .cpp; it really doesn't matter which one is found first, see below! BTW, it's not the .SUFFIXES order, but the ordering of the rules in the makefile that matters here)...
So, execute the compiler in the rule to create *.obj from its matching source, which is the non-expandable, literal *.c:
→ cl /c /Fo($outdir) *.c
Great, *.obj is now ready, go ahead to linking...
OK, final target, and (even though the linking may now fail for undefined externals) that's all we could do, so let's call it a day.
Notice, how the alternative rule never even came into the picture!
Now, for updates, assuming the broken (incomplete) target isn't there, but the object files built above are, and there're even some updates, both to some .c and .cpp files:
Need to build the missing $(binPath), let's see what it depends on...
There's *.obj, again, which can now match existing files, so let's check them (one by one) if they need updating...
Then, there's the same inference rule again (whichever; there can be only one for an item, let's stick with .c) matching the .obj target in question, so check for changes in the corresponding .c sources...
They are found, so "run compiler", but this time NMAKE is smart enough to only supply the updated C sources via the $< macro:
→ cl /c /Fo($outdir) some.c other.c
(Note: it's even smart enough (apparently, with VS 2019 here) to go ahead and use batch-mode by default, even if it's not explicitly a batch rule.)
OK, *.obj is now up-to-date again, proceed to linking...
Final target, same as before, end of job, celebrate! (Let's assume now that the link has succeeded, just for the sake of the remaining examples below.)
Again: the other rule was never needed/used for anything.
Now, curiously, you can even start deleting .obj files one by one (as long as some remain), and leave NMAKE unfazed:
'test.dll' is up-to-date
What?! Why aren't the missing ones rebuilt?
You know what? Let's delete all of them... And, just for the fun of it, replace them with a single fake one, by copying some random file there (from anywhere), renaming it to fake.obj.
'test.dll' is up-to-date
Jesus! This makes no sense!
OK, let's end this. Create a brand new .c file then. That'll sure trigger a rebuild!
'test.dll' is up-to-date
No way! :-o What the hell's going on?! Maybe adding a new .cpp then...
'test.dll' is up-to-date
OMG!... Such a piece of junk! Incredible, this NMAKE thing... Right?
Well... First of all, for fake.obj, for which there's no source with a matching name, neither rule applies: NMAKE can't "invent" a source for it, so it'll never be rebuilt; it just sits there as a time bomb until the next linking round, where the linker will eventually pick it up and find out about it, ending all the fun. :)
As for all the other "anomalies":
Any existing .obj file will satisfy the dependency of *.obj (for the lib), so as long as there's at least one, NMAKE will be happy, and never even know that it's not the complete list!
That is why nothing is ever done for any new .c or .cpp added to the project, so using a wildcard this way is shooting oneself in the foot. (Which doesn't mean there aren't any perfectly legitimate cases for wildcards in build scripts, BTW.)
And (to recap), for rebuilding a missing object, NMAKE (just like gmake etc.) has to pick a winner, and ignore the rest, if there are multiple matching rules.
(FYI, gmake even has a page specifically about this wildcard pitfall. And, to mitigate the risks, unlike NMAKE, it seems to refuse building the initial "complete" set of objects for a *.o wildcard, knowing that it'll become incomplete the minute we start adding sources — i.e. by the very act the wildcard was hoped to support! ;) )
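The core of the wildcard pitfall can be reproduced in a few lines of plain Python (an illustration only, not NMAKE internals): a glob can only ever match files that already exist on disk, so a *.obj dependency list is silently satisfied by whatever happens to be lying around.

```python
import glob
import os
import tempfile

workdir = tempfile.mkdtemp()

# The project "really" has three translation units...
sources = ["a.c", "b.c", "c.cpp"]
# ...but only one object file was ever produced (say, after a partial clean).
open(os.path.join(workdir, "a.obj"), "w").close()

# What the *.obj wildcard dependency actually expands to:
deps = glob.glob(os.path.join(workdir, "*.obj"))
print(len(deps))  # 1 -- b.obj and c.obj are invisible, so nothing rebuilds them
```

As long as the target is newer than every *matched* file, the build tool concludes it is up to date, never noticing the objects that are missing.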
My SCons project depends on a lot of third party libs, each providing dozens or hundreds of include files.
My understanding of how SCons works is that, on each build, it parses the source files of my project to find the #include directives, and it uses the value of env['CPPPATH'] to find these files and compute their md5 sum.
This scanning is costly, and thus I would like to optimize this process by teaching SCons that all the headers of my third party files will never change. This property is actually enforced by the tool that manages our third party libs.
I know there is a --implicit-deps-unchanged option that forces scons to assume that the implicit dependencies did not change, but it works globally. I did not find a way to restrict this option to a particular directory. I tried to find if the default Scanner of implicit C++ files can be configured, but found nothing. I think it is possible to avoid using CPPPATH, and instead only give the -I option to the compiler directly, but it is cumbersome.
Is there any way to optimize SCons by teaching it that the files in a directory will never, ever change?
You can try pre-expanding the list of header file paths into CCFLAGS.
Note that doing so means they will not be scanned.
for i in list_of_third_party_header_directories:
    env.Append(CCFLAGS=['-I' + i])
In this case the contents of CPPPATH would be your source directories, and not the third-party ones which you assert don't change.
Note that changing the command line of your compile commands in any way (unless the arguments are enclosed in $( $)) will cause your source files to recompile.
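For completeness, SCons lets you exclude parts of a command line from the rebuild signature by wrapping them in $( and $); a sketch (the include path is a made-up placeholder, and this is an SConstruct fragment run by scons, not standalone Python):

```python
# SConstruct fragment (sketch): anything between $( and $) is still passed
# to the compiler, but is ignored when SCons computes the command signature,
# so changing it does not by itself force a recompile.
env = Environment()
env.Append(CCFLAGS=['$(', '-I/path/to/thirdparty/include', '$)'])
```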
I have been using git for some time now and I feel I have a good handle on it.
I did, however, build my first small program as a distribution (something with ./configure, make, and make install), and I want to put it up on GitHub, but I am not sure exactly how to go about tracking it.
Should I, for instance, initialize git but only track the source code file, manpage, and readme (since the other files generated by autoconf and automake seem a bit superfluous)?
Or should I make an entirely different directory and put the source files in there and then manually rebuild everything for version 0.2 when it is time?
Or do something else entirely?
I have tried searching but I cannot come up with any search terms that give me the kind of results I am looking for.
for instance initialize git but only track the source code file, manpage, and readme (since the other files generated by autoconf and automake seem a bit superfluous)
Yes: anything used to build needs to be tracked.
Anything being the result of the build does not need to be tracked.
should I make an entirely different directory
No: in version control, you could make a new tag to mark each version, and release branches from those tags to isolate patches which could be specific to the implementation details of a fixed release.
But you don't create folders (that was the Subversion way).
should I make an entirely different directory for sources
Yes, you can (if you have a large set of files for sources)
But see also "Makefiles with source files in different directories"; you don't have just one Makefile.
The traditional way is to have a Makefile in each of the subdirectories (part1, part2, etc.) allowing you to build them independently.
Further, have a Makefile in the root directory of the project which builds everything.
And don't forget to put your object files in a separate folder (not tracked) as well.
See also this question as a concrete example.
I've defined a CMakeLists.txt file for my project which works correctly.
I use the CMake GUI to generate a Visual Studio project, and I ask it to put the binaries (CMake cache and other stuff) in the folder Build, which is in the same folder as CMakeLists.txt.
I was able to specify where the executable and the libraries have to be created.
Is there a way to specify also where the Visual Studio Solution file has to be created? I would like to have it in the root directory, but at the same time I don't want to have also all the other files that CMake creates in the Build directory.
CMake creates the Project I defined in CMakeLists.txt but also two other projects: ALL_BUILD and ZERO_CHECK. What's their utility?
I was able to avoid the creation of ZERO_CHECK by using the command set_property(GLOBAL PROPERTY USE_FOLDERS On).
Is there a way for avoiding also the creation of ALL_BUILD?
It seems you only switched to CMake very recently, as exactly those questions also popped into my head when I first started using CMake. Let's address them in the order you posted them:
I use the CMake GUI for generating a Visual Studio Project, and I ask
to build the binaries (CMake cache and other stuff) in the folder
Build which is in the same folder where CMakeLists.txt is.
Don't. Always do an out-of-source build with CMake. I know, it feels weird when you do it the first time, but trust me: Once you get used to it, you'll never want to go back.
Besides the fact that using source control becomes so much more convenient when code and build files are properly separated, this also allows you to build several distinct build configurations from the same source tree at the same time.
Is there a way to specify also where the Visual Studio Solution file has to be created?
You really shouldn't care.
I see why you feel that you need full control over how the solution and project files get created, but you really don't. Simply open the solution from your out-of-source build directory and forget about all the other files that are generated. You don't need to worry, and you don't want to worry: this is exactly the kind of stuff that CMake is supposed to take care of for you.
Ask yourself: what would you gain if you could handpick the location of every project file? Nothing, because chances are you will never touch them anyway. CMake is your sole master now...
CMake creates the Project I defined in CMakeLists.txt but also two
other projects: ALL_BUILD and ZERO_CHECK. What's their utility? I was
able to avoid the creation of ZERO_CHECK by using the command
set_property(GLOBAL PROPERTY USE_FOLDERS On). Is there a way for
avoiding also the creation of ALL_BUILD?
Again, you really shouldn't care. CMake defines a couple of dummy projects which are very useful for certain internal voodoo that you don't want to worry about. They look weird at first, but you'll get used to their sight faster than you think. Just don't try to throw them out, as it won't work properly.
If their sight really annoys you that much, consider moving them to a folder inside the solution so that you don't have to look at them all the time.
Bottom line: CMake feels different than a handcrafted VS solution in a couple of ways. This takes some getting used to, but is ultimately a much less painful experience than one might fear.
You don't always have a choice about what your environment requires. Visual Studio's GitHub integration requires that the solution file exists in source control and is at the root of the source tree. It's a documented limitation.
The best I was able to come up with is adding this bit to CMakeList.txt:
# The solution file isn't generated until after this script finishes,
# which means that:
# - it might not exist (if this is the first run)
# - you need to run cmake twice to ensure any new solution was copied
set(sln_binpath ${CMAKE_CURRENT_BINARY_DIR}/${PROJECT_NAME}.sln)
if(EXISTS ${sln_binpath})
    # Load solution file from bin-dir and change the relative references to
    # project files so that the in-memory copy is as if it had been built in
    # the source dir.
    file(RELATIVE_PATH prefix
        ${CMAKE_CURRENT_SOURCE_DIR}
        ${CMAKE_CURRENT_BINARY_DIR})
    file(READ ${sln_binpath} sln_content)
    string(REGEX REPLACE
        "\"([^\"]+)\\.vcxproj\""
        "\"${prefix}/\\1.vcxproj\""
        sln_content
        "${sln_content}")
    # Compare the updated contents with the existing source-path sln; if it
    # exists and is the same, we don't want to disturb VS by touching it.
    set(sln_srcpath ${CMAKE_CURRENT_SOURCE_DIR}/${PROJECT_NAME}.sln)
    set(old_content "")
    if(EXISTS ${sln_srcpath})
        file(READ ${sln_srcpath} old_content)
    endif()
    if(NOT old_content STREQUAL sln_content)
        file(WRITE ${sln_srcpath} ${sln_content})
    endif()
endif()
What would be helpful is if cmake had a way to run post generation scripts, but I couldn't find one.
Other ideas that didn't work out:
wrap cmake inside a script that does the same thing, but:
telling users to run a separate script isn't simpler than saying to run cmake twice, especially since needing to run cmake twice isn't a foreign concept.
put it in a pre-build step, but:
building is common and changing the build is rare
changing the solution from builds inside the IDE makes it do... things
use add_subdirectory, because that's supposed to finish first:
it appeared to make the vcxproj files immediately, but not the sln until later; I didn't try as hard because this adds a bunch of additional clutter I didn't want, so maybe this can be made to work.
I downloaded a set of source code for a program in a book and I got a makefile.
I am quite new to Linux, and I want to know: is there any way I can see the actual source code written in C?
Or what exactly am I to do with it?
It sounds like you may not have downloaded the complete source code from the book web site. As mentioned previously, a Makefile is only the instructions for building the source code, and the source code is normally found in additional files with names ending in .c and .h. Perhaps you could look around the book web site for more files to download?
Or, since presumably the book web site is public, let us know which one it is and somebody will be happy to point you in the right direction.
A Makefile does not contain any source itself. It is simply a list of instructions in a special format which specifies what commands should be run, and in what order, to build your program. If you want to see where the source is, your Makefile will likely contain many filenames ending in ".c" and ".h". You can use grep to find all the instances of ".c" and ".h" in the file, which should correspond to the C source and header files in the project. The following command should do the trick:
grep -e '\.[ch]' Makefile
To use the Makefile to build your project, simply typing make should do something reasonable. If that doesn't do what you want, look for unindented lines ending in a colon; these are target names, and represent different arguments you can specify after "make" to build a particular part of your project, or build it in a certain way. For instance, make install, make all, and make debug are common targets.
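As a pure-Python illustration of that rule of thumb (the Makefile content below is made up), the unindented lines ending in a colon are the targets:

```python
import re

# A made-up Makefile fragment, just for illustration.
makefile = """\
all: prog

prog: main.o util.o
\tcc -o prog main.o util.o

clean:
\trm -f prog main.o util.o
"""

# A target line starts at column 0 and has a name followed by ':';
# recipe lines are tab-indented and thus skipped by the ^ anchor.
targets = re.findall(r"^([A-Za-z0-9_.-]+)\s*:", makefile, re.MULTILINE)
print(targets)  # ['all', 'prog', 'clean']
```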
You probably have GNU Make on your system; much more information on Makefiles can be found here.
It looks like you also need to download the SB-AllSource.zip file. Then use make (with the Makefile that you've already downloaded) to build.