I'm fanatical about not letting project quality get out of control.
I understand that in certain cases warnings might make sense, but I'm concerned that the number of warnings will increase over time.
I have an Azure DevOps build (gated commit) pipeline, and I want to allow only 10 warnings so that at some point developers will have to address their warnings.
Is there a way to count the warnings and block the pipeline if the warnings count exceeds a certain number?
Thanks!
Is there a way to count the warnings and block the pipeline if the warnings count exceeds a certain number?
The answer is yes.
There is a great extension that meets your needs: Build Quality Checks. It provides a wealth of build quality checks, and you can add them to your build process.
There is a Warning Threshold option, which can be used to specify the maximum number of allowed warnings; the build will fail if this number is exceeded.
And even if the policy breaks the build, you still get the test results as well as the compile output.
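If your pipeline is defined in YAML, the check is added as a task after the build step. The snippet below is only a sketch: the task name BuildQualityChecks and the checkWarnings/warningFailOption/warningThreshold inputs are taken from the extension's documentation as I remember it, so verify the exact names and task version against the version you install from the Marketplace.
# Sketch of a Build Quality Checks step; task major version and input names
# may differ in your installation, so check the extension's docs.
- task: BuildQualityChecks@9
  displayName: 'Check warning threshold'
  inputs:
    checkWarnings: true          # scan the build log for compiler warnings
    warningFailOption: 'fixed'   # compare against a fixed number rather than a previous build
    warningThreshold: '10'       # fail the build when more than 10 warnings are found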
In your case, I would recommend treating warnings as errors so that your gated commit fails if the project contains warnings - here is an example for .NET Core:
dotnet build /warnaserror
Developers still have the option to disable a warning in certain cases:
#pragma warning disable CS8632
// Here I use nullable reference annotations outside a '#nullable' context, but I know what I'm doing because...
#pragma warning restore CS8632
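If the gated build is a YAML pipeline, the same switch can be passed through the standard DotNetCoreCLI task. A minimal sketch (a plain script step running dotnet build /warnaserror works just as well):
# Build step that turns every compiler warning into a build-breaking error.
- task: DotNetCoreCLI@2
  displayName: 'Build (warnings as errors)'
  inputs:
    command: 'build'
    arguments: '/warnaserror'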
I am building on one machine and running it on another.
Build:
runcpu --action build --config xxx
Run:
runcpu --action run --config xxx --nobuild
All cases reported a checksum mismatch. How do I resolve this?
Explanation
For SPEC CPU 2017, check out the config file options for runcpu. It lists two options that may be of interest, which you can put in a header section: strict_rundir_verify and verify_binaries. I pasted their descriptions below.
strict_rundir_verify=[yes|no]:
When set, the tools will verify that the file contents in existing run directories match the expected checksums. Normally, this should always be on, and reportable runs will force it to be on. Turning it off might make the setup phase go a little faster while you are tuning the benchmarks.
Developer notes: setting strict_rundir_verify=no might be useful when prototyping a change to a workload or testing the effect of differing workloads. Note, though, that once you start changing your installed tree for such purposes it is easy to get lost; you might as well keep a pristine tree without modifications, and use a second tree that you convert_to_development.
verify_binaries=[yes|no]:
runcpu uses checksums to verify that executables match the config file that invokes them, and if they do not, runcpu forces a recompile. You can turn that feature off by setting verify_binaries=no.
Warning: It is strongly recommended that you keep this option at its default, yes (that is, enabled). If you disable this feature, you effectively say that you are willing to run a benchmark even if you don't know what you did or how you did it -- that is, you lack information as to how it was built!
The feature can be turned off because it may be useful to do so sometimes when debugging (for an example, see env_vars), but it should not be routinely disabled.
Since SPEC requires that you disclose how you build benchmarks, reportable runs (using the command-line switch --reportable or config file setting reportable=yes) will cause verify_binaries to be automatically enabled. For CPU 2017, this field replaces the field check_md5.
For SPEC CPU 2006, these two options also exist, but note that verify_binaries used to be called check_md5.
Example
I recently built the SPEC CPU 2017 binaries, patched them (in their respective exe directories), and then performed a (non-reportable) run. To do this, I put the following in the "global options" header section of my configuration file:
#--------- Global Settings ----------------------------------------------------
...
reportable = 0
verify_binaries = 0
...
before building, patching, and running (with the --nobuild flag) the suite.
My colleague faced an issue where his sort job failed with an SB37 abend. I know that this error can be rectified by allocating more space to the output file, but my question here is:
How can I remediate an SB37 abend without changing the space allocation?
It takes a week or more to move changes to production. As such, I can't change the space allocation of the file at the moment, as the error is in production.
An SB37 abend indicates an out of space condition during end-of-volume processing.
B37
Explanation: The error was detected by the end-of-volume routine. This system completion code is accompanied by message IEC030I. Refer to the explanation of message IEC030I for complete information about the task that was ended and for an explanation of the return code (rc in the message text) in register 15.
This is accompanied by message IEC030I, which will provide more information about the issue.
Depending on a few factors, your production control team may be able to fix the environment so that the job can run. Lacking more detail, it is impossible to provide an exact answer, so consider this a roadmap for how to approach the problem.
IEC030I B37-rc,mod, jjj,sss,ddname[-#],
dev,ser,diagcode,dsname(member)
In the message there should be a volser that identifies the volume that was being written to. If you have the production control team look at the contents of that volume, there may be insufficient space that can be remedied by removing datasets. There are too many options to enumerate without specifics about the failure, the type of dataset, and other information to guide you.
However, as indicated in other comments, if you have a production control team that can run the job, they should be able to make changes to the JCL to direct the output dataset to another set of volumes or storage groups.
Changes to the JCL are likely the only way to correct the problem.
I have a Haskell project with 300+ files (mostly auto-generated). I can build it in a few minutes on my 4-year-old processor (by specifying ghc-options: $everything: -j in stack.yaml), but when it comes to Travis things become really slow. It seems that modules are being processed sequentially, and even single-module compilation takes much longer (about one second on my machine vs. tens of seconds on Travis). Eventually I hit the Travis timeout (50 min for a single job). Is there any way to speed up the Travis build or to split the compilation process across multiple jobs? I would accept a paid plan from Travis; I need a solution that just works without complex setup.
This configuration uses stages: https://github.com/google/codeworld/blob/f20020ca78fee51afdf6a5ef13eacc6d15c15724/.travis.yml
However, there are unpredictable problems with the cache, or perhaps problems with the Travis config: https://travis-ci.org/google/codeworld/builds/626216910 Also, I am not sure how Travis utilizes the cache(s) for simultaneous builds.
https://github.com/google/codeworld/blob/f20020ca78fee51afdf6a5ef13eacc6d15c15724/.travis.yml#L52-L63 , https://github.com/google/codeworld/blob/f20020ca78fee51afdf6a5ef13eacc6d15c15724/.travis.yml#L74 , and the redundant calls to stack upgrade --binary-only are attempts to work around these issues.
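For orientation, a stripped-down staged setup might look roughly like the sketch below. This is not the codeworld config; it assumes Stack is installed via the official installer script and that $HOME/.stack and .stack-work are cached so compiled modules carry over between jobs and builds.
# Sketch of a staged .travis.yml: one job builds only the dependencies, the
# next builds the project; the cached Stack directories are shared between them.
language: generic
cache:
  directories:
    - $HOME/.stack
    - .stack-work
before_install:
  - curl -sSL https://get.haskellstack.org/ | sh   # official Stack installer
jobs:
  include:
    - stage: dependencies
      script: stack --no-terminal build --only-dependencies
    - stage: build
      script: stack --no-terminal build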
I am building several large sets of source files (targets) using SCons. Now I would like to know whether there is a metric I can use to show me:
How many targets remain to be built.
How long it will take -- though, to be honest, this is probably a no-go, as it is really hard to tell!
How can I do that in scons?
There is currently no progress indicator built into SCons, and it's also not trivial to provide one. The problem is that SCons doesn't build the complete DAG first and then start the build... so you never have a total number of targets to visit that you could use as a reference (=100%).
Instead, it builds up the DAG as it goes... It looks at each target and then expands the list of its children (sources and implicit dependencies like headers) to check whether they are up to date. If a child has changed, it gets rebuilt by applying the same "build step" recursively.
In this way, SCons crawls down the DAG from the list of targets given on the command line (with the "." directory being the default), and only the parts that are required for (or, in other words, have a dependency on) the requested targets are ever visited.
This makes it possible for SCons to handle things like "header files generated by a program that must be compiled first" in a single pass... but it also means that the total number of targets/children to be visited changes constantly.
So a standard progress indicator would continuously climb towards 80%-90%, only to then fall back to 50%... and I don't think this would give you the information you're really after.
Tip: If your builds are large and you don't want to wait, do incremental builds and only build the library/program you're currently working on ("scons lib1"). This will still take into account all dependencies, but only a fraction of the DAG has to be expanded. So you use less memory and get faster update times... especially if you use the "interactive" mode. In a project with 100,000 C files in total, updating a single library with 500 C files takes about 1 s on my machine. For more info on this topic, check out http://scons.org/wiki/WhySconsIsNotSlow .
Is it possible to have CCNet not increment the build number if the build is actually a forced build? In other words, is it possible to have CruiseControl keep its label number if it is recovering from an exception (source control exception)? Basically, if the forced build is happening to recover from an exception, I don't want my build number to change.
It depends - why are you forcing the build?
If you need some kind of interval trigger, then just set it up and don't use a forced build.
If you are forcing the build because the previous one has failed, then just set incrementOnFailure to false (which is the default).
If you are using the Assembly Version Labeller, then you can explicitly set the build revision - unfortunately, this is not possible with other labellers.