Is it possible to have CCNet not increment the build number if the build is actually a forced build? In other words, is it possible to have CruiseControl keep its label number if it is recovering from an exception (a source control exception)? Basically, if the forced build is happening to recover from an exception, I don't want my build number to change.
It depends - why are you forcing the build?
If you need some kind of interval trigger, then just set it up and don't use a forced build.
If you are forcing the build because the previous one has failed, then just set incrementOnFailure to false (which is the default).
If you are using the Assembly Version Labeller, then you can explicitly set the build revision - unfortunately this is not possible with other labellers.
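For illustration, a minimal ccnet.config fragment using the default labeller might look like the sketch below; the element names follow CCNet's defaultlabeller documentation, but verify them against your CCNet version:

```xml
<!-- Hypothetical project fragment: the label is not incremented
     when the previous build failed (this is also the default). -->
<project name="MyProject">
  <labeller type="defaultlabeller">
    <prefix>1.0.</prefix>
    <incrementOnFailure>false</incrementOnFailure>
  </labeller>
</project>
```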
I'm fanatical about not letting project quality get out of control.
I understand that in certain cases warnings might make sense, but I'm concerned that the number of warnings will increase over time.
I have an Azure DevOps build (gated commit) pipeline and I want to allow only 10 warnings, so that at some point developers will have to address their warnings.
Is there a way to count the warnings and block the pipeline if the warning count exceeds a certain number?
Thanks!
Is there a way to count the warnings and block the pipeline if the warnings count exceeds a certain number?
The answer is yes.
There is a great extension that meets your needs: Build Quality Checks. It provides a wealth of quality checks that you can add to your build process.
It includes a Warning Threshold option, which specifies the maximum number of warnings; the build will fail if this number is exceeded.
And even if the policy breaks the build, you could still get test results as well as the compile output.
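If you'd rather not depend on an extension, the same policy can be approximated with a small script step that scans the build log and fails when a threshold is exceeded. This is an illustrative sketch, not the extension's implementation; the warning pattern (MSBuild-style) and the threshold of 10 are assumptions to adapt to your toolchain:

```python
import re
import sys

# Assumed MSBuild-style warning lines, e.g. "Foo.cs(12,5): warning CS0168: ..."
WARNING_PATTERN = re.compile(r"\bwarning\s+[A-Z]+\d+\b", re.IGNORECASE)


def count_warnings(log_text: str) -> int:
    """Count the lines in a build log that contain a compiler warning."""
    return sum(1 for line in log_text.splitlines() if WARNING_PATTERN.search(line))


def enforce_threshold(log_path: str, max_warnings: int = 10) -> None:
    """Exit non-zero (failing the pipeline step) if too many warnings."""
    with open(log_path) as f:
        n = count_warnings(f.read())
    print(f"{n} warning(s) found (threshold: {max_warnings})")
    if n > max_warnings:
        sys.exit(1)
```

You would run this as a script step after teeing the compiler output to a file; the non-zero exit code is what blocks the gated commit.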
In your case, I would recommend treating warnings as errors, so that your gated commit will fail if the project contains warnings - here is an example for .NET Core:
dotnet build /warnaserror
Developers still have the option to disable a warning in certain cases:
#pragma warning disable CS8632
// Here I use nullable reference types without a nullable annotation context, but I know what I'm doing because...
#pragma warning restore CS8632
I have a Haskell project with 300+ files (mostly auto-generated). I can build it in a few minutes on my four-year-old processor (by specifying ghc-options: $everything: -j in stack.yaml), but on Travis things become really slow. It seems that modules are processed sequentially, and even single-module compilation takes much longer (about one second on my machine vs. tens of seconds on Travis). Eventually I hit the Travis timeout (50 min for a single job). Is there any way to speed up the Travis build, or to split the compilation across multiple jobs? I would accept a paid Travis plan; I need a solution that just works without complex setup.
This configuration uses stages: https://github.com/google/codeworld/blob/f20020ca78fee51afdf6a5ef13eacc6d15c15724/.travis.yml
However, there are unpredictable problems with the cache, or perhaps problems with the Travis config: https://travis-ci.org/google/codeworld/builds/626216910 Also, I am not sure how Travis utilizes the cache(s) for simultaneous builds.
https://github.com/google/codeworld/blob/f20020ca78fee51afdf6a5ef13eacc6d15c15724/.travis.yml#L52-L63 , https://github.com/google/codeworld/blob/f20020ca78fee51afdf6a5ef13eacc6d15c15724/.travis.yml#L74 , and the redundant calls to stack upgrade --binary-only are attempts to work around these issues.
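As a rough sketch of the staged approach used in the linked file, an early stage can warm the Stack cache so a later stage starts from compiled dependencies instead of hitting the 50-minute limit. The stage names and commands below are assumptions to adapt, not a vetted configuration:

```yaml
# Hypothetical .travis.yml sketch: the first stage populates the cache,
# so the second stage resumes from compiled dependencies.
language: generic
cache:
  directories:
    - $HOME/.stack
    - .stack-work
jobs:
  include:
    - stage: warm-cache
      # Compile only dependencies; the job may time out, but the cache persists.
      script: stack build --no-terminal --only-dependencies
    - stage: build-and-test
      script: stack build --no-terminal --test
```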
I have a build process with a set of input files that I want to build slightly differently based on slow-to-extract metadata found in the input files.
The simple approach would be to get this information from each file every time I run SCons and build conditionally based on it, but that means rescanning the files at the start of every build, which slows the build down significantly.
I have two potential approaches I'm looking to explore:
A two-stage build, where I first run one SCons file to extract the metadata into sidecar files. These sidecar files get picked up by a second SCons project that generates the right build targets and operations based on them.
(Ab)use a custom scanner for the input files to generate sidecar files with the metadata, and enable implicit_cache to make sure scanning only happens when the input files change.
What would be the most correct and scons-idiomatic way of accomplishing what I'm looking to do?
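For reference, the sidecar-file idea could be sketched roughly as below in an SConstruct. This is only a sketch: extract_metadata, the .meta suffix, and the inputs/*.dat pattern are all hypothetical stand-ins for the real metadata extraction:

```python
# SConstruct sketch (hypothetical): cache slow-to-extract metadata in sidecar
# files so it is only recomputed when an input file actually changes.
import json
import os


def extract_metadata(path):
    # Hypothetical slow extraction step; replace with the real parser.
    return {"kind": "default", "source": os.path.basename(path)}


def build_sidecar(target, source, env):
    """Builder action: write the metadata for one input file as JSON."""
    meta = extract_metadata(str(source[0]))
    with open(str(target[0]), "w") as f:
        json.dump(meta, f)


env = Environment()
env.Append(BUILDERS={"Sidecar": Builder(action=build_sidecar, suffix=".meta")})

for src in Glob("inputs/*.dat"):  # hypothetical input pattern
    sidecar = env.Sidecar(src)
    # Later, read the small sidecar file (cheap) to decide how to build src.
```

Because the sidecar is an ordinary SCons target, it is only rebuilt when its source changes, which gives the caching behavior the question is after.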
The scenario outlined is this:
Someone has built the Linux kernel from source code.
That person wants to change the build configuration.
They still have all of the object files and temporary files that were produced by the previous build operation.
Given all of that, what needs to be done to rebuild as few things as possible in order to save time?
I understand that these will trigger or necessitate a complete recompilation of the source code:
Running make clean.
Running make menuconfig.
make clean is an obvious course of action to avoid to achieve the desired goal because it deletes all object files, both those that would need to be rebuilt and those that could otherwise be left alone. I don't know why make menuconfig would cause the build system to recompile everything, but I've read on here that that is what it would do.
The problem I see with not having the second avenue open to me is that if I change the configuration manually with a text editor, the options that I change might require changes in other options that depend on them (e.g., IMA_TRUSTED_KEYRING depends on SYSTEM_TRUSTED_KEYRING) and I'd be working without an interface that would automatically make those required secondary changes.
It occurred to me that invoking scripts/kconfig/mconf, the program built and launched by make menuconfig, could be a solution to the problems described in the previous paragraph, since it was not stated that mconf is what makes the build system recompile everything. But it could be that very program, so I do not wish to try it until I know it won't do that.
So, how does one achieve the stated objective given the stated scenario?
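For context on the manual-editing concern: the kernel build system ships non-interactive config targets that resolve dependent options after a hand edit, so a workflow might look like the sketch below. The specific option is only an example from the question; whether the subsequent make recompiles everything still depends on which options changed:

```shell
# Edit .config by hand (or with the scripts/config helper), then let
# Kconfig resolve any dependent symbols non-interactively:
./scripts/config --enable SYSTEM_TRUSTED_KEYRING
make olddefconfig   # fill in new/dependent symbols with their defaults
# or: make oldconfig   # prompt only for symbols affected by the change
make                # rebuild only what the changed options require
```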
I am building several large sets of source files (targets) using SCons. Now, I would like to know if there is a metric I can use to show me:
How many targets remain to be built.
How long it will take -- to be honest, this is probably a no-go, as it is really hard to tell!
How can I do that in SCons?
There is currently no progress indicator built into SCons, and it's also not trivial to provide one. The problem is that SCons doesn't build the complete DAG first and then start the build...such that you'd have a total number of targets to visit that you could use as a reference (=100%).
Instead, it makes up the DAG on the go... It looks at each target, and then expands the list of its children (sources and implicit dependencies like headers) to check whether they are up-to-date. If a child has changed, it gets rebuilt by applying the same "build step" recursively.
In this way, SCons crawls from the list of targets given on the command line (with the "." dir being the default) down the DAG...where only the parts that are required for (or, in other words: have a dependency to) the requested targets are ever visited.
This makes it possible for SCons to handle things like "header files, generated by a program that must be compiled first" in the first go...but it also means that the total number of targets/children to get visited changes constantly.
So, a standard progress indicator would continuously climb towards the 80%-90%, just to then fall back to 50%...and I don't think this would give you the information you're really after.
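What SCons does offer is the Progress() function, which is called back as nodes are evaluated; it can show activity (the current target, or a spinner), just not a reliable percentage, for the reasons above. A minimal SConstruct sketch, with a hypothetical target:

```python
# SConstruct sketch: Progress() reports activity while SCons walks the DAG.
# It cannot give a percentage, since the DAG is expanded on the go.
Progress('Evaluating $TARGET\r', interval=10)  # print every 10th evaluated node

# Alternatively, a simple spinner:
# Progress(['-\r', '\\\r', '|\r', '/\r'], interval=5)

env = Environment()
env.Program('hello', ['hello.c'])  # hypothetical target
```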
Tip: If your builds are large and you don't want to wait, do incremental builds and only build the library/program you're currently working on ("scons lib1"). This will still take all dependencies into account, but only a fraction of the DAG has to get expanded. So, you use less memory and get faster update times...especially if you use the "interactive" mode. In a project with 100000 C files total, updating a single library with 500 C files takes about 1s on my machine. For more info on this topic, check out http://scons.org/wiki/WhySconsIsNotSlow .