GitVersion Mainline - Version increment on every push

I am using GitVersion in Mainline mode. With the default settings, it increments the patch number with every commit on master. Is there a way to increment the patch number on every push rather than on every commit?
If I push 3 commits together, the patch number increases by 3; in that case I get a version jump from 2.0.4 to 2.0.7 on the VSTS build.
GitVersion.yml
mode: Mainline
Note: I have got only one branch, 'master', and I will keep pushing my changes to master directly. I am not looking to use any branching strategy yet.

As I see it, you have two options available to you.
You could set increment: None in your master branch configuration:
branches:
  master:
    increment: None
But then, I think, you would always need to "manually" bump the version of your code through git commits, for example by including +semver: (major|minor|patch) in your commit message, or by setting the next-version configuration in your GitVersion.yml file. This, I think, defeats many of the benefits of using GitVersion in the first place. But it is still better than not versioning at all!
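For illustration, a minimal sketch of the manual bump (the commit message is made up): with increment: None in place, the version only moves when a commit message carries a +semver directive.
# One pushed commit carrying an explicit bump directive
git commit -m "Fix rounding in invoice totals +semver: patch"
git push origin master
# GitVersion would now move the version by one patch, e.g. 2.0.4 -> 2.0.5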
However, saying that you're doing "mainline" development does not necessarily mean that you only develop in the master branch. I believe mainline development mainly implies that you release off of the master branch (i.e. you're not using GitFlow with two long-lived branches, master and develop) and that the state of master at any point in time is deployable to production.
So you could achieve what you're looking for by using two different branches, as #prestonsmith already said in his answer. You and your team could work in short-lived topic branches off of master and eventually merge them back, either with a normal merge commit (which preserves history) or with a squash merge (which loses the branch history and introduces a single commit into master with all the changes created on the topic branch). This would result in the default behavior of a single patch version increment per merge, which could be changed to a minor or major increment via git tags or by adding something like +semver: (major|minor) to your merge commit message.
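As a sketch of that flow (branch name and commit message are made up): the whole topic branch lands on master as one commit, so one push yields exactly one version increment.
# Work on a short-lived topic branch off master
git checkout -b topic/login-form master
# ...commit as often as you like on the topic branch...
# Squash-merge it back as a single commit
git checkout master
git merge --squash topic/login-form
git commit -m "Add login form +semver: minor"
git push origin master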

The short answer is no - sorry :(
However, if you did decide to simply use two branches, you could simulate this by using Git's "Squash and Merge" strategy to achieve it. Basically, all of your commits would become a single commit on the main branch (master) after merging.
Feels simple enough to warrant it as a suggestion :)

Related

How can I use 2 repositories or have 2 series of commits?

I work on an application in Python that is developed by a small team.
I need to remember the state of the code at regular points in time for myself, and occasionally I need to commit the code to the group. However, they do not want to see the 10s of intermediate commits I do.
So I am looking for a way to have 2 repositories with independent histories. Or some other smart suggestion!
I need to remember the state of the code at regular points in time for myself
Use tags
I need to commit the code to the group. However, they do not want to see the 10s of intermediate commits I do.
Clone the repo
Commit into this local clone
When you reach a milestone, squash your history into one commit with git merge --squash
Push the result
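A rough sketch of that workflow (paths, names, and dates are made up): tag checkpoints for yourself in a private clone, then deliver a single squashed commit to the shared repo.
# Private clone for your own noisy history
git clone /srv/git/team-project.git my-sandbox
cd my-sandbox
# ...many small intermediate commits on master...
git tag checkpoint-2019-03-01            # remember this state for yourself
# At a milestone, collapse everything since origin/master into one commit
git checkout -b delivery origin/master
git merge --squash master
git commit -m "Milestone: import parser rewrite"
git push origin delivery:master          # the team sees a single commit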

GitLab CI - Keep last pipeline status

In GitLab CI, is it possible to keep the last pipeline status when no jobs are queued upon a push? I have a changes rule setup like this in my .gitlab-ci.yml:
changes:
  - Assets/*
  - Packages/*
  - ProjectSettings/*
  - .gitlab-ci.yml
which applies to all jobs in the pipeline (these are build jobs for Unity, though that's irrelevant here). NOTE: I only want to run a build job if there are actual file changes that would require a rebuild. Changes to README.md and CONTRIBUTING.md do not require a rebuild, which is why I have such a rule.
The problem is that I require a successful pipeline to merge branches, and when I try to merge a branch that only modified README.md, there obviously is no pipeline.
Is there a way to just "reuse" the result of a previous pipeline or to have a "dummy" job that succeeds instantly upon any push, so as to be able to merge this branch without requiring an expensive rebuild of the whole project?
As you mentioned in your last paragraph, the only way to work around this would be to inject a dummy job that always succeeds; something like echo "hello world" in the script.
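A minimal sketch of such a job (the job name is made up); with no rules attached, it runs on every push, so every branch gets a pipeline result the merge check can rely on:
placeholder:
  script:
    - echo "hello world"   # always succeeds, costs only a few seconds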
However, depending on how long your tests run, your best bet may be to just have your tests run every time regardless of changes. Any sort of partial pipeline run using the changes keyword leaves you open to merging changes that break your pipeline. It essentially tightly couples your logic in your pipeline to the component structure of your code, which isn't always a good thing since one of the points of your pipeline is to catch those kinds of cross-component breakages.

How to defer "file" function execution in puppet 3.7

This seems like a trivial question, but in fact it isn't. I'm using Puppet 3.7 to deploy and configure artifacts from my project onto a variety of environments. A Puppet 5.5 upgrade is on the roadmap, but without an ETA so far.
One of the things I'm trying to automate is the incremental changes to the underlying db. It's not SQL, so standard tools are out of the question. These changes will come in the form of shell scripts contained in a special module that will also be deployed as an artifact. For each release we want to have a file whose content lists the shell scripts to execute in scope of that release. For instance, if version 1.2 implemented JIRA-123, JIRA-124 and JIRA-125, I'd like to execute the scripts JIRA-123.sh, JIRA-124.sh and JIRA-125.sh, but no others that are still in the module from previous releases.
So my release "control" file would be called something like jiras.1.2.csv and have one line looking like this:
JIRA-123,JIRA-124,JIRA-125
The task for Puppet here seems trivial: read the content of this file, split on the "," character, and go on to build exec tasks for each of the JIRAs. The problem is that the Puppet function that should help me do it
file("/somewhere/in/the/filesystem/jiras.1.2.csv")
gets executed at the time the puppet catalog is built, not at the time the catalog is applied. However, since this file is part of the release payload, it isn't there yet at compile time: it will be downloaded from Nexus in a tar.gz package of the release and extracted later. I have an anchor I can hold on to, which I use to synchronize the exec tasks, but can I attach the reading of the file content to that anchor?
Maybe I'm approaching the problem incorrectly? I was thinking the module with the pre-implementation and post-implementation tasks that constitute the incremental db upgrades could be structured so that for each release there's a subdirectory matching the version name, but then I need to list the contents of that subdirectory to build my exec tasks, and that too - at least to my limited puppet knowledge - can't be deferred until a particular anchor.
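One way to defer the work, sketched below under the assumptions that the anchor is called Anchor['release_extracted'] and the scripts sit in /opt/release/scripts (both made up): push the parsing into an exec, whose command string is only run at apply time, after the tarball has been extracted.
# Puppet 3.7: file() runs at compile time, but an exec's command runs at
# apply time, when jiras.1.2.csv already exists on the node.
exec { 'run_release_scripts':
  command   => '/bin/sh -c \'for j in $(tr "," " " < /somewhere/in/the/filesystem/jiras.1.2.csv); do /opt/release/scripts/$j.sh || exit 1; done\'',
  require   => Anchor['release_extracted'],
  logoutput => true,
}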
--- EDITED after one of the comments ---
The problem is that the upgrade to puppet 5.x is beyond my control - it's another team handling this stuff in a huge organisation, so I have no influence over that and I'm stuck on 3.7 for the foreseeable future.
As for what I'm trying to do - for a bunch of different software packages that we develop and release, I want to create three new ones: pre-implementation, post-implementation and checks. The first will hold any tasks that are performed prior to releasing new code in our actual packages; this is typically things like backing up the db. Post-implementation will deal with issues that need to be addressed after we've deployed the new code - an example operation would be to go and modify older data because, for instance, we've changed the type of a column in a table. Checks are just validations performed to make sure the release is implemented 100% correctly - for instance, run a select query and assert on the type of data in the column whose type we've just changed. Today all of these are daunting manual operations performed by whoever is unlucky enough to be doing a release. Above all else, being manual, they are by definition error prone.
The approach taken is that for every JIRA ticket being part of the release the responsible developer will have to decide what steps (if any) are needed to release their work, and script that. Puppet is supposed to orchestrate the execution of all of this.

Why is an index.lock sometimes created when switching branches in vscode?

Why does vscode sometimes create an index.lock when switching branches? Specifically, the previous branch I just had open had some changes in package-lock.json, and since I just wanted them reset, I did a git reset --hard. FYI, I am using Node 8.
Git creates index.lock whenever it is updating the index. (In fact, the index.lock lock file itself is the new index being built, to be swapped into place once it's finished. But this is an implementation detail.) Git removes the file automatically (in fact, by swapping it into place) once it has finished the update. At that point, other Git commands are free to lock and then update the index, though of course, one at a time.
If a Git command crashes, it may leave the lock file in place (which, since it's also the new index, may be incomplete and hence not actually useful). In this particular case, there's no ongoing Git command to complete and hence unlock and allow the next Git command to run.
If the file is there at one point, but not there the next time you try something, that means some Git command was still running (and updating) and you were just too impatient. :-) If you mix different Git commands (and/or interfaces such as GUIs) you may have to manually coordinate to avoid these run-time collisions. Any one interface should coordinate with itself internally.
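If you do end up with a stale lock after a crash, a cautious cleanup looks something like this (only delete the file once you're sure nothing is still running):
ps -ef | grep '[g]it'     # confirm no git process (including VS Code's git task) is live
rm .git/index.lock        # then remove the stale lock by hand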

Perforce Streams - Isolating imported libraries

I need some guidance on a use case I've run into when using Perforce Streams. Say I have the following structure:
//ProductA/Dev
    share ...
//ProductA/Main
    share ...
    import Component1/... //Component1/Release-1_0/...
//ProductA/Release-1_0
    share ...
//Component1/Dev
    share ...
//Component1/Main
    share ...
//Component1/Release-1_0
    share ...
ProductA_Main imports code from Component1_Release-1_0. Whenever Component1_Release-1_0 gets updated, it will automatically be available to ProductA (but read-only).
Now, the problem I'm running into is that since ProductA_Release-1_0 inherits from Main, and thus also imports Component1_Release-1_0, any code changes made to the component will immediately affect the ProductA release. This sort of side effect seems very risky.
Is there any way to isolate the code such that, in the release stream, ALL code changes are tracked (even code that was imported) and there are zero side effects from other stream depots, while for the main and dev streams the code is still imported? This way the release would have zero side effects, while main and dev conveniently pick up any changes made in the component depot.
I know one option would be to create some sort of product specific release stream in the Component1 depot, but that seems a bit of a kludge since Component1 shouldn't need any references to ProductA.
If you are just looking to be able to rebuild previous versions, you can use labels to sync the stream back to the exact state it was in at the time, by giving a changelist number (or label) to p4 sync.
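For example (the changelist number and label name are made up):
p4 sync //ProductA/Release-1_0/...@241305            # back to a changelist
p4 sync //ProductA/Release-1_0/...@release-1.0-rc2   # or back to a label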
If you are looking for explicit change tracking, you may want to branch the component into your release line. This will make the release copy of the library completely immune to changes in the other stream, unless you choose to branch and reconcile the data from there. If you think you might make independent changes to the libraries in order to patch bugs, this might be something to consider. Of course, Perforce won't physically copy the files in its database on the server - branching just creates pointers to them in the metadata - and since you're already importing them into the stream, you're already putting copies of the files on your build machines, so there shouldn't be any "waste" except on the metadata front.
In the end, this looks like a policy question. Rebuilding can be done by syncing back to a previous version, but if you want to float the library fixes into the main code line, leave it as is; if you want to lock down the libraries and make the changes explicit, I'd just branch the libraries in.
Integrating into your release branch
In answer to the question in the comments: if you choose to integrate directly into your release branch, you'll need to remove the import line from the stream specification and replace it with an isolate line, which places the code only in the release branch. Then you would use the standard p4 integrate command (or p4v) to integrate from //Component1/Release-1_0/... to //ProductA/Release-1_0/Component1/....
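Sketched out (the submit description is made up), after editing the stream spec to swap the import line for isolate:
p4 integrate //Component1/Release-1_0/... //ProductA/Release-1_0/Component1/...
p4 resolve -am
p4 submit -d "Branch Component1 into the Release-1_0 stream"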
Firming up a separate component stream
One final thought is that you could create another stream on the //Component1 line to act as a placeholder for the release.
