SCons: How to copy build information across computers?

I do not have all the required programs to compile my project on a single computer. I can compile one part on one computer and the rest on a different computer. I want to run
scons path/to/first/file on one computer, and
scons path/to/second/file on a different computer.
The second file depends on the first file. I compiled the first file on one computer and copied it over. However, when I try running scons on the other computer, I get an error because SCons wants to re-build the first file. The message I get if I add --debug=explain is: Cannot explain why '/full/path/to/first/file' is being rebuilt: No previous build information found. I have also tried copying the .sconsign.dblite file between machines, but that hasn't worked. Help?

The "best" way to resolve this is likely to only specify building the part each computer can build based on the presence of the build tools which can build it.
So detect if the computer has compiler_a and then do something like:
env = Environment()
if env.WhereIs('compiler_a'):
    # do all the compiler_a build stuff, e.g.:
    env.Program('path/to/first/file', 'first.c')    # hypothetical target/source
else:
    # do all the compiler_b build stuff, e.g.:
    env.Program('path/to/second/file', 'second.c')  # hypothetical target/source

Related

Recovering a build

While building Chromium (with ninja), my workstation crashed. After reboot, there were a number of empty .o files. I tried to fix this by making small changes to the respective .cc files: to every file that failed to link I added a stub comment (// stub, which should trigger a recompile of that file), and I deleted the empty .o files.
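For reference, finding and deleting the truncated object files could look something like this (GNU find assumed; out/Default is only an example output directory):
find out/Default -name '*.o' -empty -print -delete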
Later, however, libchrome failed to link, with "class name not defined" errors. Now, I could simply look for these classes, change them, and try to retrigger a rebuild with ninja, but Chromium is massive, really massive.
Do you know any ninja tricks?

Include an Application Image in Yocto Build

I feel like I have done my level best to search for an answer for this but, admittedly, maybe I am not using the correct search keys.
I am building a Linux image using Yocto, and I can see that adding an IMAGE_INSTALL_append line to local.conf, followed by the additional packages that you want to include, is the way that you include things like connman, dropbear, etc. That's fine.
What I want to do is include an image of the application that I have written. Let's call it HelloWorld.exe. I would like it to be tucked into its own directory (MyHello) along with a sub-directory, and the sub-directory contains some files that are necessary for the operation of HelloWorld.
I'm sure that there are different ways of doing this but I just need one. I need to know:
Where do I position my HelloWorld.exe and its attendant files and subdirectories on my Ubuntu system where they will be picked up during the build and included in the image?
How do I alter local.conf to ensure that the final image will include my application and its support files and directories where I need it to be on the target?
Thank you. Mark
I believe it gets a bit complicated in Yocto:
You need to create your own layer. Let's say meta-hello. This folder needs to be in the same place as all your other meta layers, alongside your poky directory.
You need to enable that layer in your bblayers.conf file. For that you can use bitbake-layers add-layer /path/to/meta-hello
Now, within meta-hello, create a recipe in a folder recipes-hello/hello
Your hello.bb file lives in the above-mentioned folder, and you can decide to build with automake, a plain Makefile, or a custom compile step accordingly, using the Dev Manual here.
Once that is done, run bitbake hello from your build directory; this will compile the recipe and report errors if any. Resolve them, and once it compiles successfully, add IMAGE_INSTALL_append = " hello" to the local.conf file.
This is one way of doing it. Another one, which is a bit more complex, uses the ADT Yocto Workflow.
Sorry to say there is no easier way around this as Yocto does have a steep learning and development curve.
Practical Example
You can look at this blog post by Boundary Devices, which creates a simple daemonize example using automake. You can find it on GitHub too.
devtool workflow
YouTube video by Tim Orling from Intel on the devtool workflow
packaging external binaries
For this case, see "Binaries Installation" in the Mega Manual
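As a rough sketch for the prebuilt-binary case, such a recipe might look like the following; the file names, the /opt/MyHello install path, and the CLOSED license are placeholders you would adapt:
SUMMARY = "HelloWorld example application"
LICENSE = "CLOSED"

# the listed files live next to the recipe, e.g. in recipes-hello/hello/files/
SRC_URI = "file://HelloWorld.exe \
           file://config/settings.conf"

do_install() {
    install -d ${D}/opt/MyHello/config
    install -m 0755 ${WORKDIR}/HelloWorld.exe ${D}/opt/MyHello/
    install -m 0644 ${WORKDIR}/config/settings.conf ${D}/opt/MyHello/config/
}

FILES_${PN} = "/opt/MyHello"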

how to use git with a package I am distributing

I have been using git for some time now and I feel I have a good handle on it.
I did, however, build my first small program as a distribution (something with ./configure, make, and make install), and I want to put it up on GitHub, but I am not sure exactly how to go about tracking it.
Should I, for instance, initialize git but only track the source code file, manpage, and readme (since the other files generated by autoconf and automake seem a bit superfluous)?
Or should I make an entirely different directory and put the source files in there and then manually rebuild everything for version 0.2 when it is time?
Or do something else entirely?
I have tried searching but I cannot come up with any search terms that give me the kind of results I am looking for.
for instance initialize git but only track the source code file, manpage, and readme (since the other files generated by autoconf and automake seem a bit superfluous)
Yes: anything used to build needs to be tracked.
Anything that is the result of the build does not need to be tracked.
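As a sketch, a .gitignore for an autoconf/automake project might look like this; the exact list depends on what your setup generates, and hello stands in for your built program's name:
# build results
*.o
hello
# generated by autoconf/automake
Makefile
Makefile.in
aclocal.m4
autom4te.cache/
config.log
config.status
configure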
should I make an entirely different directory
No: in version control, you would make a new tag to mark each version, and create release branches from those tags to isolate patches specific to the implementation details of a fixed release.
But you don't create folders per version (that was the Subversion way).
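For example, marking a release and later branching from it for fixes might look like this (the version number is illustrative):
# tag the current commit as release 0.2
git tag -a v0.2 -m "version 0.2"
# later: branch from that tag to collect release-specific patches
git checkout -b v0.2-fixes v0.2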
should I make an entirely different directory for sources
Yes, you can (if you have a large set of files for sources)
But see also "Makefiles with source files in different directories"; you don't have just one Makefile.
The traditional way is to have a Makefile in each of the subdirectories (part1, part2, etc.) allowing you to build them independently.
Further, have a Makefile in the root directory of the project which builds everything.
And don't forget to put your object files in a separate folder (not tracked) as well.
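A minimal sketch of that layout, reusing the part1/part2 names from above (note that the recipe line must start with a tab):
# Makefile in the project root: build every part
SUBDIRS = part1 part2

all: $(SUBDIRS)

$(SUBDIRS):
	$(MAKE) -C $@

.PHONY: all $(SUBDIRS)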
See also this question as a concrete example.

CMake and Visual Studio - Specify solution file directory

I've defined a CMakeLists.txt file for my project which works correctly.
I use the CMake GUI for generating a Visual Studio project, and I ask it to put the binaries (CMake cache and other stuff) in the folder Build, which is in the same folder as CMakeLists.txt.
I was able to specify where the executable and the libraries have to be created.
Is there a way to specify also where the Visual Studio Solution file has to be created? I would like to have it in the root directory, but at the same time I don't want to have also all the other files that CMake creates in the Build directory.
CMake creates the Project I defined in CMakeLists.txt but also two other projects: ALL_BUILD and ZERO_CHECK. What's their utility?
I was able to avoid the creation of ZERO_CHECK by using the command set_property(GLOBAL PROPERTY USE_FOLDERS On).
Is there a way for avoiding also the creation of ALL_BUILD?
It seems you only switched to CMake very recently, as exactly those questions also popped into my head when I first started using CMake. Let's address them in the order you posted them:
I use the CMake GUI for generating a Visual Studio project, and I ask it to put the binaries (CMake cache and other stuff) in the folder Build, which is in the same folder as CMakeLists.txt.
Don't. Always do an out-of-source build with CMake. I know, it feels weird when you do it the first time, but trust me: Once you get used to it, you'll never want to go back.
Besides the fact that using source control becomes so much more convenient when code and build files are properly separated, this also allows you to build several distinct build configurations from the same source tree at the same time.
Is there a way to specify also where the Visual Studio Solution file has to be created?
You really shouldn't care.
I see why you feel that you need full control over how the solution and project files get created, but you really don't. Simply let the solution file live where your out-of-source build puts it and forget about all the other files that are generated. You don't need to worry, and you don't want to worry; this is exactly the kind of stuff that CMake is supposed to take care of for you.
Ask yourself: What would you gain if you could handpick the location of every project file? Nothing, because chances are, you will never touch them anyways. CMake is your sole master now...
CMake creates the Project I defined in CMakeLists.txt but also two other projects: ALL_BUILD and ZERO_CHECK. What's their utility? I was able to avoid the creation of ZERO_CHECK by using the command set_property(GLOBAL PROPERTY USE_FOLDERS On). Is there a way for avoiding also the creation of ALL_BUILD?
Again, you really shouldn't care. CMake defines a couple of dummy projects that it uses internally: ALL_BUILD simply builds every target in the solution, while ZERO_CHECK re-runs CMake to regenerate the solution when a CMakeLists.txt has changed. They look weird at first, but you'll get used to their sight faster than you think. Just don't try to throw them out, as things won't work properly without them.
If their sight really annoys you that much, consider moving them to a folder inside the solution so that you don't have to look at them all the time.
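If you have USE_FOLDERS on anyway, you can for example pick the solution folder the predefined targets are grouped under (the folder name here is arbitrary):
set_property(GLOBAL PROPERTY USE_FOLDERS ON)
set_property(GLOBAL PROPERTY PREDEFINED_TARGETS_FOLDER "CMakePredefinedTargets")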
Bottom line: CMake feels different than a handcrafted VS solution in a couple of ways. This takes some getting used to, but is ultimately a much less painful experience than one might fear.
You don't always have a choice about what your environment requires. Visual Studio's GitHub integration requires that the solution file exists in source control and is at the root of the source tree. It's a documented limitation.
The best I was able to come up with is adding this bit to CMakeLists.txt:
# The solution file isn't generated until after this script finishes,
# which means that:
# - it might not exist (if this is the first run)
# - you need to run cmake twice to ensure any new solution was copied
set(sln_binpath ${CMAKE_CURRENT_BINARY_DIR}/${PROJECT_NAME}.sln)
if(EXISTS ${sln_binpath})
    # Load the solution file from the bin-dir and change the relative references
    # to project files so that the in-memory copy is as if it had been built in
    # the source dir.
    file(RELATIVE_PATH prefix
        ${CMAKE_CURRENT_SOURCE_DIR}
        ${CMAKE_CURRENT_BINARY_DIR})
    file(READ ${sln_binpath} sln_content)
    string(REGEX REPLACE
        "\"([^\"]+).vcxproj\""
        "\"${prefix}/\\1.vcxproj\""
        sln_content
        "${sln_content}")
    # Compare the updated contents with the existing source-path sln; if it
    # exists and is the same, don't disturb VS by touching it.
    set(sln_srcpath ${CMAKE_CURRENT_SOURCE_DIR}/${PROJECT_NAME}.sln)
    set(old_content "")
    if(EXISTS ${sln_srcpath})
        file(READ ${sln_srcpath} old_content)
    endif()
    if(NOT old_content STREQUAL sln_content)
        file(WRITE ${sln_srcpath} "${sln_content}")
    endif()
endif()
What would be helpful is if cmake had a way to run post generation scripts, but I couldn't find one.
Other ideas that didn't work out:
Wrap cmake inside a script that does the same thing, but:
telling users to run a separate script isn't simpler than telling them to run cmake twice, especially since needing to run cmake twice isn't a foreign concept.
Put it in a pre-build step, but:
building is common and changing the build is rare, and
changing the solution from builds inside the IDE makes it do... things.
Use add_subdirectory, because that's supposed to finish first:
it appeared to create the .vcxproj files immediately, but not the .sln until later. I didn't try as hard on this one because it adds a bunch of additional clutter I didn't want, so maybe it can be made to work.

let ./configure find library files in specific directory

I'm currently installing R software on a shared space across several servers. After installation I found that when I login on different servers, R is not guaranteed to run due to the missing of some library files on different machines.
Here is what I'm trying to do: since the installation of R is machine-dependent, I'd like to put all the missing library files, like libtermcap.so.2, libg2c.so.1, etc., into a single directory on the shared space, so that when I run ./configure, it will also search that directory. Since this directory is shared, the installation would become machine-independent, and I won't need to add missing files on each server.
Is there an option to achieve this when I run ./configure? Thanks.
Assuming you have copied the library files to /shared/lib/ and the header files to /shared/include/, you can run
./configure LDFLAGS=-L/shared/lib CPPFLAGS=-I/shared/include ...other options...
Note, however, that you are bound to run into trouble at run time, when you have to convince your installation to use the shared libraries from the right directory, especially in case someone decides to upgrade the default version on the respective host. That whole business is platform and installation dependent. I think if your hosts are not at least mostly identical, you ought to install your software (R) locally in a way suitable to the respective system.
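One way to deal with the run-time side, assuming ELF binaries and a GNU toolchain, is to bake the search path into the binaries at link time with an rpath flag:
./configure LDFLAGS="-L/shared/lib -Wl,-rpath,/shared/lib" CPPFLAGS=-I/shared/include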
Peter's answer is correct (+1), and please take special note of his suggestion to install locally. Using the local package management system and auto updating on each box is (in the long run) a much easier solution than trying to get compatible binaries/libraries on a shared drive. To simplify using Peter's solution, note that you can place the appropriate arguments in /shared/share/config.site. For example:
$ cat > /shared/share/config.site << 'EOF'
: ${LDFLAGS=-L/shared/lib}
: ${CPPFLAGS=-I/shared/include}
EOF
(The quoted 'EOF' keeps the shell from expanding the ${...} defaults while writing the file.)
Whenever you run configure with --prefix=/shared, the config.site file will be read and defaults will be set.
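If the file is picked up, configure should announce it near the top of its output, along these lines:
$ ./configure --prefix=/shared
configure: loading site script /shared/share/config.site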
