How to keep only red builds in Jenkins - Linux

How to keep only the failed builds' logs in a job history?
I don't have enough disk space to store both successful and failed builds. I'm looking for a simple way to keep all the red builds' logs and none of the blue/green ones on a Linux Jenkins instance. (Perhaps with a post-build action?)

The Discard Old Build plugin can do that for you: it adds a "Discard Old Builds" post-build action with more fine-grained options than the built-in log rotator, including discarding builds by status, so you can keep the failed (red) builds and drop the successful ones.
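If you'd rather script the cleanup than rely on the plugin, here is a rough sketch using the Jenkins JSON REST API to delete every finished build that did not fail. The URL, job name and credentials are placeholders for your own setup; you'd typically run something like this from a cron job or a post-build step.

```python
# Sketch: delete all non-failed builds of a job via the Jenkins REST API.
# JENKINS_URL, JOB and AUTH are placeholders for your own environment.
import requests

JENKINS_URL = "http://jenkins.example.com"   # assumption: your Jenkins base URL
JOB = "my-job"                               # assumption: the job to clean up
AUTH = ("user", "api-token")                 # assumption: user + API token

# List builds with their number and result (SUCCESS, FAILURE, ABORTED, ...).
resp = requests.get(
    f"{JENKINS_URL}/job/{JOB}/api/json",
    params={"tree": "builds[number,result]"},
    auth=AUTH,
)
resp.raise_for_status()

for build in resp.json()["builds"]:
    # Keep red builds (FAILURE); delete everything else that has finished.
    if build["result"] is not None and build["result"] != "FAILURE":
        requests.post(
            f"{JENKINS_URL}/job/{JOB}/{build['number']}/doDelete",
            auth=AUTH,
        ).raise_for_status()
```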

Related

Bamboo plan: Compress the artifact after build and uncompress after deployment to server

This is my first time both learning and implementing automated CI/CD pipelines, in Atlassian Bamboo. I have a Node.js project whose build and deployment plan I configured after much research on the net.
During deployment I observed that the transfer takes a very long time, because the number of files to transfer is large, probably due to node_modules. I would like to compress the artifact generated by the build steps and decompress it on the server side once the transfer is complete.
I tried to find a ZIP task among the built-in tasks, but there isn't one. Is this possible in some other way? Is doing it via the command line feasible?
I have a little experience with Linux commands.
Any help would be highly appreciated.
In my company we use an Ant build (including Ivy) to prepare, zip and publish our projects as artifacts. In the deployment we use an SCP task to copy the artifact onto our server and an SSH task to unzip it.
So our whole build is implemented in Ant, and the only thing our Bamboo build does is check out a Git repository and run the Ant script.
That workflow is used for a lot of different projects, including Node.js, Python, Java, C++ and pure text file setups, and it works really well.
But a plain script task for zipping should also do the job, and depending on the scale of your project, Ant may be overkill.
I think it's possible to use Windows/Linux commands to achieve this. You would need to write a task to compress the files; you can use the shell plugin or any other suitable plugin. Once the artifact has been sent to the server, you would need a polling batch program to unzip the artifact on the server side.
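If zip/unzip binaries aren't guaranteed on both machines, a couple of lines from Python's standard library can handle both ends. This is only a sketch: the directory names are placeholders, and it assumes Python is installed on the build agent and the target server.

```python
# Sketch: compress the build output before transfer and unpack it on the server.
# Paths are placeholders; assumes Python is available on both machines.
import shutil

# On the build agent (e.g. a Script task after the npm build):
shutil.make_archive("build-artifact", "gztar", root_dir="dist")
# -> produces build-artifact.tar.gz, which you publish as the Bamboo artifact.

# On the target server (e.g. via the SSH task after the SCP task):
shutil.unpack_archive("build-artifact.tar.gz", extract_dir="/var/www/my-app")
```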

GitLab CI: How to fail on new compiler warnings

We are trying to get an old legacy code base under control while simultaneously developing new features. Currently the code compiles with a hell of a lot of compiler warnings and warnings from static code analyzers. For that reason it is not uncommon that code introducing new warnings reaches production simply because the new warning got lost in the shuffle.
Currently we are using Jenkins for nightly builds and make the build fail on new warnings. However, by the time Jenkins detects the new warnings, the code has already been merged for a few hours. So we would like not only to shorten the feedback cycle but also to ensure that we only merge changes that do not introduce new warnings.
As far as I know, it is possible to trigger a Jenkins build on a push to GitLab, but Jenkins can only compare the warning count to the previous build of the same job, whereas we would need to compare against a build of a different branch.
Can GitLab CI or a combination of GitLab EE and Jenkins somehow be configured to detect if a merge request introduces new warnings?
Yes, that is possible, but it's rather an open-ended question whose answer will depend a lot on how long a build takes and on how you compare the outcomes.
You don't have to run the checks only on the branch you have checked out. You can set up two jobs that run the checks in parallel on the current branch and on the develop branch, pass their outputs as artifacts to a third job, and compare them there.
Alternatively, you may want to store the state of a build of your develop branch, download that artifact into your current job, and compare it against the local results. You could store it in a database, on a file server, or wherever else is convenient.
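A minimal sketch of such a comparison job, assuming both builds write their compiler output to a plain log file that is passed along as an artifact; the file names and the GCC-style "warning:" marker are assumptions you'd adapt to your toolchain.

```python
# Sketch: fail the CI job if the current branch introduces warnings that the
# develop branch does not have. Log file names and the "warning:" pattern are
# assumptions; adapt them to your compiler / static analyzer output.
import re
import sys

def extract_warnings(log_path):
    """Collect warning messages, ignoring file/line so moved code doesn't count as new."""
    warnings = set()
    with open(log_path, encoding="utf-8", errors="replace") as log:
        for line in log:
            match = re.search(r"warning:\s*(.+)", line)
            if match:
                warnings.add(match.group(1).strip())
    return warnings

baseline = extract_warnings("develop-build.log")   # artifact from the develop job
current = extract_warnings("branch-build.log")     # artifact from the branch job

new_warnings = current - baseline
if new_warnings:
    print(f"{len(new_warnings)} new warning(s) introduced:")
    for warning in sorted(new_warnings):
        print("  " + warning)
    sys.exit(1)  # non-zero exit makes the CI job (and hence the merge request) fail
print("No new warnings.")
```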
Finally you may try an external code quality tool like SonarQube which has greater insight into what's new and what's old.
In the meantime, tools have been developed that allow a workflow which is not perfect but comes quite close.
Jenkins has the Warnings Next Generation plugin, which can compare the warnings found in one Jenkins job to the warnings found in another Jenkins job. So we set up a job that compiles our develop branch each time a new commit is pushed to it, and use its results as the baseline. Another job that gets triggered for each merge request in GitLab then uses this baseline to determine the new warnings introduced by the merge request.
This works reasonably well.

How to test build script changes on travis without having to check in code for each change

I am having a build issue on Travis with my Node.js project. The issue stems from the fact that I have a rather complex test that I want to run, which requires building and running some test scaffolding framework on the VM before I even get to 'npm test'. Somewhere along the line it is failing, and I find myself adding debugging statements to my .travis.yml to try to root out the problem, but it's annoying to have my commit history littered with these changes/attempted fixes.
I guess I want to be able to either (a) get on the Travis box at the time the test is running (or afterwards) so I can inspect what is going on/went wrong, or (b) at least be able to tweak my .travis.yml file and associated scripts and re-run immediately, without having to formally check those changes in just to kick off Travis again.
I find myself adding debugging statements to my .travis.yml to try to root out the problem, but it's annoying to have my commit history littered with these changes/attempted fixes.
If the history is important, maybe because your changelog is generated from it, then my suggestion is to create a private sandbox for experiments by cloning the repo:
1. Clone the organization repo to a user repo.
2. Activate Travis on the user repo.
3. Trial-and-error commit to your .travis.yml in the user repo as long as you need.
4. When everything works the way you want, squash the Git commits into one.
5. Open a pull request with this single commit from the user repo to the organization repo.
Et voilà: the history stays clean.
Big warning: when you have no contributors with forks to worry about, you could simply commit until you get it right, then squash the history into a single commit and do a force push.
get on the Travis box at the time the test is running (or afterwards) so I can inspect what is going on/went wrong
That's not possible, but you can view or download the logs from the builds.
If you view the build log directly after a push, you get a live view of the processing steps in the Travis environment. You can also cancel the build manually.
at least be able to tweak my .travis.yml file and associated scripts and re-run immediately without having to formally check those changes in just to kick off Travis again
When you are logged in on Travis, you will find a button to rerun a build.
You could try executing your build commands inside a normal Ubuntu VM.
Back in the day, box images were available at http://files.travis-ci.org/boxes/provisioned/travis-ruby.box, but Travis switched from Vagrant to BlueBox and stopped providing the downloads.
You could ask on IRC for access to your “box” for debugging, though I'm not sure whether you'll get it.

Hudson: Scheduling a build without a tag and generating a report

How can I schedule a build without a tag on Windows, Linux and WCE in Hudson, using a shell script, and generate a report that will be sent to a specified server?
The conditions are:
1. How can I create the build without creating a new tag?
2. How is it possible to execute .sh scripts on Windows and WCE (Windows Mobile)? Is it simply by going through Cygwin? Moreover, does having a cross-platform (3 platforms) build mean that I must run the build 3 times?
3. How can I generate a report and save it in a directory on a server that I'm authorized to access?
I know that I asked many questions at once; this is my first use of Hudson and these are the kind of details I'm unsure about. Moreover, I don't want to make a mistake by creating new tags during my tests. The 1st and 3rd questions are the most important; if anyone gives me the right answer to them, I'll accept it as the answer.
Thank you a lot.
First, people nowadays mostly use Jenkins instead of Hudson (open source, better support).
1. A build can be started manually in Hudson/Jenkins; just click the green arrow. It will create a new build but won't change your repository (unless the last step of your build creates a tag; in that case, just remove that step for testing).
2. Usually, .sh scripts run in shell executables (ash, sh, bash, csh...) and are not supported by the shell on Windows. You'll have to go through Cygwin or have a platform-specific build command.
3. This one is not entirely clear to me. If you set up a matrix build in Jenkins (with the matrix axis being your target platform), you'll automatically get a nice report in Jenkins itself (the status of each build). You can keep artifacts (use the post-build action "archive the artifacts") or use another plugin to publish whichever file you like (for example, FTP reporting; see the sketch after this answer).
Sorry for not being able to be more precise; that's how far I understand your questions.
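For point 3, a rough sketch of generating a small report file and pushing it to a server with Python's standard-library FTP client. The hostname, credentials and directories are placeholders; if the server only offers SSH, you'd use SCP/SFTP instead.

```python
# Sketch: write a small build report and upload it to a server via FTP.
# Host, credentials and directory are placeholders for your own environment.
from ftplib import FTP
from datetime import datetime

report_path = "build-report.txt"
with open(report_path, "w", encoding="utf-8") as report:
    report.write(f"Build finished at {datetime.now():%Y-%m-%d %H:%M}\n")
    report.write("Platforms: Windows, Linux, WCE\n")  # fill in real results here

with FTP("reports.example.com") as ftp:           # assumption: FTP server hostname
    ftp.login(user="builduser", passwd="secret")  # assumption: credentials
    ftp.cwd("/reports")                           # assumption: target directory
    with open(report_path, "rb") as report:
        ftp.storbinary(f"STOR {report_path}", report)
```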

CruiseControl.NET - Is there a way to run a batch file that will copy (MSI) files to another server in a different domain?

I have an issue where I am trying to copy files (MSI) from our build server to our test server in CruiseControl. Once these are copied over, we are planning to have a Scheduled Task that will run silent installs nightly. I need to be able to push the status of that build back to CruiseControl.
I am having issues copying these files from a batch file that is run in our CruiseControl project. I'm pretty sure it's a permissions issue.
Also, is there a way to push the build status back to CruiseControl so that it could tell us when the install failed?
There's no simple solution available, I'm afraid. The only time I saw anything similar done, it was using a Python script to invoke commands on a remote system over SSH.
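For what it's worth, a rough outline of that approach using the third-party Paramiko SSH library. The hostname, credentials and paths are placeholders, and it assumes an SSH/SFTP server is reachable on the test machine; CruiseControl.NET's exec task treats a non-zero exit code as a failure, so the script's exit status is one way to surface the copy result in the build.

```python
# Sketch: copy MSI files to the test server over SFTP and report the result
# back to CruiseControl.NET through this script's exit code.
# Hostname, credentials and paths are placeholders; requires the third-party
# Paramiko package (pip install paramiko) and an SSH server on the test box.
import glob
import os
import sys
import paramiko

try:
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect("testserver.example.com", username="deploy", password="secret")

    # Copy every MSI produced by the build into the drop folder that the
    # nightly Scheduled Task installs from.
    sftp = client.open_sftp()
    for msi in glob.glob(r"C:\builds\output\*.msi"):
        sftp.put(msi, "/drop/" + os.path.basename(msi))
    sftp.close()
    client.close()
except Exception as exc:
    print(f"Copy failed: {exc}")
    sys.exit(1)  # non-zero exit code marks the CruiseControl task as failed
```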
