GitLab CI: How to fail on new compiler warnings

We are trying to get an old legacy code base under control while simultaneously developing new features. Currently the code compiles with a hell of a lot of compiler warnings and warnings from static code analyzers. For that reason it is not uncommon that code introducing new warnings reaches production simply because the new warning got lost in the shuffle.
Currently we are using Jenkins for nightly builds and make the build fail on new warnings. However, by the time Jenkins detects the new warnings, the code has already been merged for a few hours. So we would like not only to shorten the feedback cycle but also to make sure that only changes which do not introduce new warnings get merged.
As far as I know it is possible to trigger a Jenkins build on a push to GitLab, but Jenkins can only compare the warning count to the previous build, whereas we would need to compare against a build of a different branch.
Can GitLab CI or a combination of GitLab EE and Jenkins somehow be configured to detect if a merge request introduces new warnings?

Yes, that is possible, but it is rather an open-ended question whose answer will depend a lot on how long a build takes and how you compare the outcomes.
You don't have to run the checks only on the branch you have checked out. You may set up two jobs that run in parallel, one testing the current branch and one testing the develop branch, pass their results as artifacts to a third job, and compare them there, as sketched below.
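A minimal sketch of that layout in .gitlab-ci.yml, assuming a make-based build that prints warnings to stderr and a runner that can fetch the develop branch; all job and file names here are illustrative:

    stages: [build, compare]

    warnings:branch:
      stage: build
      script:
        - make 2> branch-warnings.txt || true
      artifacts:
        paths: [branch-warnings.txt]

    warnings:develop:
      stage: build
      script:
        - git fetch origin develop && git checkout origin/develop
        - make 2> develop-warnings.txt || true
      artifacts:
        paths: [develop-warnings.txt]

    compare:
      stage: compare
      script:
        - sort branch-warnings.txt -o branch.sorted
        - sort develop-warnings.txt -o develop.sorted
        # comm -13 prints warnings that exist only on the branch; grep fails the job if there are any
        - "! comm -13 develop.sorted branch.sorted | grep ."

Plain textual comparison is crude (line numbers shift between branches), which is exactly why a tool that actually understands warnings, as mentioned below, does a better job.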
Alternatively, you may store the result of a build on your develop branch as a baseline, download that artifact into your current job, and compare it against the local results. You could also store the baseline in a database, on a file server, or wherever else is convenient.
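For the baseline variant, GitLab's jobs API can fetch the artifact of the latest successful job on a given ref. A sketch, reusing the hypothetical job and file names from above; API_TOKEN is an access token you would provide yourself as a CI variable, and $CI_API_V4_URL is predefined on recent GitLab versions:

    # download the warning list produced by the last successful run on develop
    curl --fail --header "PRIVATE-TOKEN: $API_TOKEN" \
      --output develop-warnings.txt \
      "$CI_API_V4_URL/projects/$CI_PROJECT_ID/jobs/artifacts/develop/raw/develop-warnings.txt?job=warnings:develop"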
Finally, you may try an external code quality tool like SonarQube, which has greater insight into what's new and what's old.

In the meantime, tools have been developed that allow a workflow which is not perfect but comes quite close.
Jenkins has the Warnings Next Generation Plugin, which can compare the warnings found in one Jenkins job with the warnings found in another. So we set up a job that compiles our develop branch each time a new commit is pushed to it and use its results as the baseline. Another job, triggered for each merge request in GitLab, then uses this baseline to determine the new warnings introduced by the merge request.
This works reasonably well.

Related

GitLab: Issues and Pipelines

I have set up a Git project + CI (using GitLab Runner) on GitLab v12.3.5. I have a question about issues and pipelines. Let's say I create an issue and assign it to myself, which creates a branch/merge request. Then I open up the Web IDE to modify some files in an attempt to fix the issue. Now I want to see if my changes will fix the issue. In order to run the pipeline, is it necessary to commit the changes into the branch, or is there some other way?
The scenario I have in mind is that it may take me 20 attempts at fixing the files to make the pipeline 'clean'. In that case, I would have to keep committing each change to see the results. What is the preferred way to accomplish this? Is it possible to run the pipeline by just staging the changes to see if they work?
I am setting up the .gitlab-ci.yml file, hence it is taking a lot of trials to get it working properly.
You should create a branch and push to that. Only pushed changes will trigger pipeline runs. After you're done, you can squash and merge the branch so that the repo's history will be clean.
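For example (branch names and commit messages are illustrative):

    git checkout -b ci-experiments
    # edit .gitlab-ci.yml, then commit and push; repeat until the pipeline is green
    git add .gitlab-ci.yml
    git commit -m "Tweak CI config"
    git push -u origin ci-experiments

    # once it works, squash the noise into a single commit on the main branch
    git checkout master
    git merge --squash ci-experiments
    git commit -m "Set up GitLab CI"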
Usually though, you won't have to do this because you'll have automated tests set up to check whether your code works. You should also try testing the Linux commands (or whichever commands you're running in your GitLab CI scripts) locally first. If you're worried about whether your .gitlab-ci.yml syntax is correct, you can navigate to the file in your repository and check there (there's a button at the top which lints it).

In GitLab is it possible to configure a Scheduled Pipeline that runs on all branches periodically?

I am using GitLab for Git version control and GitLab CI / CD for my automated builds. Usually, the builds are triggered by Git repository activity but I also have a weekly build to ensure that projects not under active development continue to work. When there is only a "master" branch on a project, it is easy to ensure a weekly build is run on the latest code. When there are multiple branches in a project, I would like to repeat the pipeline work for each of them in turn.
What I would like to be able to do is schedule a build (weekly, fortnightly or monthly) that runs on all current branches visible in Git. Is that possible within GitLab's Continuous Delivery system?
The motivation behind doing this is to ensure that external activity, such as tool and library updates, does not introduce an issue without it being promptly visible. Assuming there is reasonable automated testing, coverage, and comprehensive builds for target platforms, a monthly build with the latest tools should highlight any problem promptly. This is better than an invisible mountain of problems accumulating while a project is shelved for a few years (or months). Sometimes all that is required is occasional maintenance.
There are only a handful of feature branches and release lines on the projects currently. I would not expect that number to grow significantly. There is time enough over a weekend to run the required pipelines dozens if not hundreds of times at present.
Ideally, I would like something straightforward to set up. I cannot see anything in the admin GUI that would allow this at present. I did look at the API and I can see there is some scope there to script the addition and removal. Perhaps a script run once a month to create new scheduled pipelines based on git branches is the only way. A pre-made solution along those lines would be perfectly acceptable. If nothing exists I might start work on something like that in time.
I am currently running GitLab Community Edition 11.2.3 06cbee3 (GitLab CE 11.2.3). If there is an Enterprise Edition-only answer, that is fine and will add to the justification for purchasing the EE version. I would pick a CE answer over an EE one, though.
You cannot set a schedule for all branches at once; you have to configure one schedule per branch yourself.
Perhaps some script that is run once a month to create new Scheduled pipelines based on git branches is the only way.
I would go that way.
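A sketch of such a script, run as the only job of a single scheduled pipeline; it starts a pipeline for every branch through the pipeline triggers API. TRIGGER_TOKEN is a trigger token you create under Settings > CI/CD and expose as a variable, and the host name is a placeholder. Guard the job with only: [schedules] so the triggered pipelines do not run it again:

    # start a pipeline on every branch of the project
    for ref in $(git ls-remote --heads origin | sed 's#.*refs/heads/##'); do
      curl --request POST \
        --form "token=$TRIGGER_TOKEN" \
        --form "ref=$ref" \
        "https://gitlab.example.com/api/v4/projects/$CI_PROJECT_ID/trigger/pipeline"
    done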

How to update repository with built project?

I’m trying to set up GitLab CI/CD for an old client-side project that makes use of Grunt (https://github.com/yeoman/generator-angular).
Up to now the deployment worked like this:
run '$ grunt build' locally, which built the project and created files in a 'dist' folder in the root of the project
commit the changes
pull the changes onto the production server
After creating the .gitlab-ci.yml and making a commit, the GitLab CI/CD job passes, but the files in the 'dist' folder in the repository are not updated. If I define an artifact, I get the changed files in the download. However, I would prefer the files in the 'dist' folder in the repository to be updated so we can carry on with the same workflow, which suits us. Is this achievable?
I don't think committing into your repo from inside a pipeline is a good idea. Version control wouldn't be as clear, and some people have pipelines triggered automatically when their repo is pushed, which would set off a loop of pipelines.
Instead, you might reorganize your environment to use Docker; there are numerous reasons for using Docker in professional and development environments. To name just a few: it would enable you to save the freshly built project into a registry and reuse it whenever needed, at exactly the version you require and with the desired /dist inside, so that you can easily run it in multiple places, scale it, manage it, etc.
If you changed to Docker, you wouldn't actually have to do a thing to make dist persistent; just push the image to the registry after the build is done.
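A rough sketch of such a job, assuming your runners support Docker-in-Docker and the project has a Dockerfile that runs the Grunt build and copies dist into the image:

    build-image:
      image: docker:latest
      services:
        - docker:dind
      script:
        - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
        # the built image carries the freshly generated dist/ inside it
        - docker build -t "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA" .
        - docker push "$CI_REGISTRY_IMAGE:$CI_COMMIT_SHA"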
But to actually answer your question:
There is a feature request that has been open for a very long time for exactly the problem you ask about. Currently there is no safe and professional way to do it, as GitLab team members state. You can, however, push changes back, as one of them (Kamil Trzciński) suggested:
git push http://gitlab.com/group/project.git HEAD:my-branch
Just put it in the script section of your .gitlab-ci.yml file.
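A hedged sketch of how that might look for this Grunt project; CI_PUSH_TOKEN is an assumption (a token with write access that you create and expose as a CI variable), and [skip ci] in the commit message keeps the push from retriggering the pipeline:

    build:
      script:
        - grunt build
        # identity for the CI commit; [skip ci] below prevents a pipeline loop
        - git config user.email "ci@example.com"
        - git config user.name "GitLab CI"
        - git add dist
        - git commit -m "Rebuild dist [skip ci]"
        - git push "https://oauth2:${CI_PUSH_TOKEN}@gitlab.com/group/project.git" HEAD:my-branch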
There are more hack'y methods presented there, but be sure to acknowledge the risks that come with them: pipelines become more error prone, and if configured in the wrong way they might, for example, publish confidential information or trigger an infinite pipeline loop, to name a few.
I hope you found this useful.

How to test build script changes on Travis without having to check in code for each change

I am having a build issue on Travis with my node.js project. The issue stems from the fact that I have a rather complex test that I want to run, which requires building and running some test scaffolding framework on the VM before I get to 'npm test'. Somewhere along the line it is failing, and I find myself adding debugging statements to my .travis.yml to try to root out the problem, but it's annoying to have my commit history littered with these changes/attempted fixes.
I guess I want to be able to either (a) get on the Travis box at the time the test is running (or afterwards) so I can inspect what is going on/what went wrong, or (b) at least be able to tweak and run my .travis.yml file and associated scripts somehow and re-run immediately, without having to formally check those changes in just to kick off Travis again.
I find myself adding debugging statements to my .travis.yml to try to root out the problem, but it's annoying to have my commit history littered with these changes/attempted fixes.
If the history is important, maybe because your changelog is generated from it, then my suggestion is to create a private sandbox for experiments by cloning the repo:
clone the organization repo to a user repo
activate Travis on the user repo
trial-and-error commit to your .travis.yml as long as you need
when everything is working like you want, squash the git commits into one
do a pull request of this single commit from the user repo to the company repo
et voilà: the history stays clean
Big warning: when you have no contributors with forks to worry about, you could simply commit till you get it right, squash the history into a single commit, and do a force push.
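One way to do that squash, assuming the branch forked from master:

    # collapse everything since the branch point into one commit
    git reset --soft $(git merge-base master HEAD)
    git commit -m "Set up Travis build"
    git push --force-with-lease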
get on the Travis box at the time the test is running (or afterwards) so I can inspect what is going on/what went wrong
That's not possible. But you can view or download the log from the builds.
If you view the build log directly after a push, you get a live view of the processing steps in the Travis environment. You can also cancel the build manually.
at least be able to tweak and run my .travis.yml file and associated scripts somehow and re-run immediately without having to formally check those changes in just to kick off Travis again
When you are logged in on Travis, you will find a button to rerun a build.
You could try executing your build commands inside a normal Ubuntu VM.
Back in the day, box images were available at http://files.travis-ci.org/boxes/provisioned/travis-ruby.box, but Travis switched from Vagrant to BlueBox and stopped providing the downloads.
You could ask on IRC to get access to your “box” for debugging, though I'm not sure whether you would get access.

Does CC.NET detect modifications when a build script performs a check-in

I've been doing some research into finally automating our Development builds and still have one nagging question that I'm hoping the StackOverflow community can solve for me.
My understanding is that an IntervalTrigger, when set up properly, will check VSS every X seconds for changes and, if it finds a modified file, will run my tasks. One of my tasks would be to check out the AssemblyInfo files and update the version numbers. After these files are updated, they would be checked back into VSS.
Thinking about this solution, it doesn't make much sense, because in my mind I'm forcing the check for changed files to come back true every time the trigger fires. Am I missing something here? Is there a way of doing this without triggering an automatic build on the AssemblyInfo check-in?
You can use a Filtered Source Control Block to exclude certain files from the trigger.
I just posted a bunch about my default build process, which may be of some interest to you: SVN Website Development and Deployment Solution.
The way I usually configure my projects with CC.NET is to have two project blocks per solution. One configured as an interval trigger that does nothing more than get the latest from my repository, build the solution, and run unit tests. The other is a schedule trigger that does all the things the other one does, but actually publishes a build. This includes changing version numbers, publishing files, etc. This might work in your case, since the change in version would cause the interval project to trigger, but only once.
Checking the automatically generated AssemblyInfo into the version control system is a bad idea; don't do it. You'll get a lot of noise (50% of all commits!) in your history. Also, it does not give you any new information, since you can always derive it from the VCS. Having your build script autogenerate those files is good practice, but don't push those changes back!
