How to identify the fabric-protos version used to generate a specific fabric-protos-go version - hyperledger-fabric

github.com/hyperledger/fabric uses the github.com/hyperledger/fabric-protos-go repo as a dependency module to set up gRPC communication between nodes. The fabric-protos-go files are generated from the .proto files in the github.com/hyperledger/fabric-protos repo.
I have cloned the github.com/hyperledger/fabric repo at tag v2.4.7 and am making some updates to it. I need to define new messages in the .proto files and generate the corresponding Go structs to use.
Fabric v2.4.7 depends on github.com/hyperledger/fabric-protos-go v0.0.0-20220315113721-7dc293e117f7 as per its go.mod.
But I am not able to identify which version of fabric-protos was used to generate fabric-protos-go v0.0.0-20220315113721-7dc293e117f7. If I clone the latest version of the fabric-protos repo, generate the Go files, and use them as a dependency, it throws many incompatibility errors.
Please let me know how I can determine which versions of fabric-protos-go and fabric-protos were used for a specific fabric tag.

This documentation page for fabric-protos mentions which versions of the protocol buffer bindings correspond to which Fabric versions:
https://hyperledger.github.io/fabric-protos/
The protocol buffer definition files are in branches of the fabric-protos repository, named the same as the versions in the documentation page above. Currently the main branch contains the definitions for Fabric v3 (the v0.3.x versions of the published bindings).
If you are making modifications, you should base your change on the HEAD (latest commit) of the appropriate branch. Avoid making breaking changes. Changes will also need to be applied to any more recent versions (branches) so that compatibility is maintained going forwards. If you actually come to push changes to the repository, a good strategy is probably to raise a pull request against the main branch first, then cherry-pick the changes back to as many earlier version branches as you need.
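If you need to pin down the exact generated-bindings commit rather than the Fabric-version mapping, note that the Go pseudo-version itself embeds a fabric-protos-go commit hash after the timestamp. A rough sketch of looking it up (these commands are just one way to do it, assuming a local clone):

# v0.0.0-20220315113721-7dc293e117f7 -> commit 7dc293e117f7 in fabric-protos-go
git clone https://github.com/hyperledger/fabric-protos-go
cd fabric-protos-go
git log -1 7dc293e117f7            # inspect the commit the pseudo-version points at
git tag --contains 7dc293e117f7    # list any tagged binding releases that include it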

Related

Could Origen updater changes impact generated test flows or patterns?

As I was going through and locking down specific tag versions in my app's Gemfile, I noticed the origen_updater gem had no version associated with it.
gem 'origen_updater'
Why is that? I see that it is version controlled. Could changes to this gem impact modeling or generation?
thx
No, the only thing this gem does is copy its fix_my_workspace script to the application's bin directory.
See here for more background - http://origen-sdk.org/origen//guides/starting/workspace/?highlight=fix_my_workspace#Fix_My_Workspace
It is not versioned for the following reasons:
The latest version of this script will always be the best, which, by definition, means it has the best chance of getting your workspace going again if it is in a broken state.
This script is intended to be called directly at times when either Origen or Bundler fails to launch. Finding out that you need a newer version of the script to fix your current problem is not very helpful if you are in a state where Bundler cannot be used to pull it in.
Therefore, the recommendation is not to lock this to a particular version and instead allow it to pull in the latest and greatest anytime you do a bundle update.

npm module versioning with auto merging in git

I'm currently struggling with automatic merges in a semantically versioned Node project. In my current setup I have to maintain multiple older (minor) versions of the application. To ensure that bug fixes in older versions are also applied to newer versions, I'm using release branches in combination with Bitbucket's automatic merge feature. It works great apart from permanent auto-merge conflicts on the application version that has to be stored in package.json. Each time an auto merge happens, there is a version conflict with the newer release versions.
Is there any way to avoid those merge conflicts? I fiddled around with a custom merge driver (https://gist.github.com/jphaas/ad7823b3469aac112a52); it kind of works, but in my opinion there should be an easier solution, like storing the version in a dedicated file (e.g. .npmversion) and using built-in merge drivers.
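For what it's worth, one common recipe (a sketch of the merge-driver approach rather than anything Bitbucket-specific; the driver name "keep-ours" is just a label you define yourself) is to tell Git to keep the current branch's package.json whenever a merge touches it:

# .gitattributes on each release branch
echo 'package.json merge=keep-ours' >> .gitattributes
# a driver that does nothing and exits 0, i.e. leaves the current branch's file untouched
git config merge.keep-ours.driver true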

Git npm version management during the development flow

Here's my project development process:
feature/feature1
feature/feature2
feature/etc..
master
production
I develop my features on the feature branches; when I have finished with a branch, I merge it into master and delete it via the GitHub UI. CircleCI detects the merge and deploys master to a staging server.
Later I manually merge the master branch into the production one, and CircleCI deploys to my production server.
I would like my package.json version to be bumped each time I merge a feature branch into the master branch (via the GitHub UI). But I have no idea whether:
GitHub allows this (if yes, please can you explain how?)
it's a good process
I'm aware I could do it via the npm version command when I merge master into production, but I need the version to be updated on master automatically whenever I merge a branch into it.
Don't hesitate to criticize my approach and tell me yours. :)
Thank you
I don't think GitHub offers any such feature. But there are some Grunt modules that do this at build time. You could probably script this, or have a makefile that does it for you as well.
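As a rough sketch of the scripting route (assuming your CI job runs on master after the merge and has push access to the repo; the commit-message guard is just one way to avoid the bump commit re-triggering itself):

# run on master once a merge has been detected
if ! git log -1 --pretty=%s | grep -q '^ci: bump version'; then
  npm version patch -m "ci: bump version to %s"   # bumps package.json, commits and tags
  git push origin master --follow-tags
fi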
I don't think this is a good way of versioning. After you are done with a feature, you have to decide whether the changes you have made are minor or major. Sometimes you might commit breaking changes. Just incrementing the version number from 1.0.1 to 1.0.2, or say 1.1.0 to 1.1.1, every time will not convey the magnitude of these changes. Best Practice: Software Versioning
The best practices for versioning are already covered here.
We manage versioning manually where I work. Before each release we create a tag (v1.0.3, v1.1.4, etc.) and then create a release on GitHub. In the description of the release we list all the new commits. Going through the commit messages gives us a good idea of the changes that were made. If the changes only involve bug fixes and minor feature additions, we increment the minor number, i.e. 1.2.1 to 1.2.2.
If a major new feature is added, we increment the major version number, i.e. 1.2.2 to 1.3.0. When we add many breaking changes, we go from 1.3.0 to 2.0.0.
Sometimes we are loose with versioning. Our API is not public and the only reason we use versioning is for deploying and rolling back. If you expect to make your work open source, and/or to make it available through some kind of package manager such as npm, you should follow semantic versioning strictly.
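A minimal sketch of that manual flow (the tag name and message are illustrative):

git tag -a v1.2.2 -m "Bug fixes and minor feature additions"
git push origin v1.2.2
# then draft a GitHub release from the tag and paste the list of new commits into its description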

Does the case for not including Node modules in version control also apply to Composer packages?

In doing research on whether Node's node_modules should be checked into your version control repository, I found that the most recent consensus seems to be that you should include it for applications you deploy.
Sources:
Check in node_modules vs. shrinkwrap
Should I check in node_modules to git when creating a node.js app on Heroku?
https://www.npmjs.org/doc/misc/npm-faq.html#Should-I-check-my-node_modules-folder-into-git
In reading these arguments, it led me to question whether Composer's vendor directory should also be checked into version control. Composer's documentation suggests that you don't:
Should I commit the dependencies in my vendor directory?
The general recommendation is no. The vendor directory [...] should be added to .gitignore/svn:ignore/etc.
The best practice is to then have all the developers use Composer to install the dependencies. Similarly, the build server, CI, deployment tools etc should be adapted to run Composer as part of their project bootstrapping.
While it can be tempting to commit it in some environment, it leads to a few problems:
Large VCS repository size and diffs when you update code.
Duplication of the history of all your dependencies in your own VCS.
[...]
Contrasting that argument is this one (source):
Doesn’t checking in node_modules create a lot of noise in the source tree that isn’t related to my app?
No, you’re wrong, this code is used by your app, it’s part of your app, pretending it’s not will get you in to trouble. You depend on other people’s code and they are just as likely to write new bugs as you are, probably more so. Checking all of that code in to source control gives you a way to audit every line that ever changed in your application. It allows you to use $ git bisect locally and be ensured that it’s the same as in production and that every machine in production is identical. No more tracking down unknown changes in dependencies, all the changes, in every line, are viewable in source control.
In summary, the question is this: Why would one gitignore (i.e. not version control) node_modules but not do the same for Composer's vendor/ directory?
The reasons to commit external dependencies are:
it's easier to deploy with git pull
the code used is directly included in the commit anyone checks out
Arguments against this are
Git is no deployment tool
all dependency managers do have a way to make exactly sure the code used can be fetched
I don't know anything about npm, but for Composer that last point is implemented by committing composer.lock.
I don't think the "audit code" argument is a valid one in every case. If you write software that itself gets audited, and subsequently needs all of its libraries to be audited, then probably all code changes need to be preserved. This isn't true for the general case.
git bisect still works just as well with a committed composer.lock. It does require installing the dependencies at every bisect step, but that is just one simple step, probably already part of the automated test suite run.
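As a sketch of what that looks like in practice (assuming composer.json and composer.lock are committed, vendor/ is ignored, and the test runner lives at vendor/bin/phpunit; the known-good tag is illustrative):

git bisect start HEAD v1.4.0
git bisect run sh -c 'composer install --no-interaction && vendor/bin/phpunit'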
The last thing to worry about is packages you use going offline. This really is more of a problem with Composer, because there is no central hosting of the downloadable releases (npm probably does this). If this is a problem, either commit the code (and try to figure out how to update that missing package in the future), or set up an instance of Satis to create a local copy of the code you use.
Putting all your modules in your VCS makes it really heavy to download or upload. For example, I work on two Node.js projects and the total size of the node_modules directories is between 250 MB and 500 MB, whereas the whole codebase with assets is generally less than 40 MB. Every Node.js developer likes Node's lightness, so the code must stay easy to download and share.
For the second point, an alternative way to avoid regressions is to be more restrictive with the dependency versions in your package.json. You will find more information here.
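For example (an illustrative sketch, not the linked article's exact advice), you can make npm record exact versions instead of ^/~ ranges, or lock the whole tree with shrinkwrap:

npm config set save-exact true   # future --save installs write "4.17.1" instead of "^4.17.1"
npm install express --save
npm shrinkwrap                   # writes npm-shrinkwrap.json, locking the full dependency tree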
Finally, the best argument is to take a look at the famous modules everybody knows:
express ignores node_modules
mocha ignores node_modules
q ignores node_modules
...
The more I research this, the more I'm starting to think that there is no one correct answer, just different opinions as well as pros and cons of each method.
This blog, looking from the context of Bower, does a good job of weighing the pros and cons of each: http://addyosmani.com/blog/checking-in-front-end-dependencies/.
In short: At least for right now, weigh the pros and cons and decide what best fits your situation.

Project references v NuGet dependencies

I am in the process of introducing NuGet into our software development process, both for external binaries (e.g. Moq, NUnit) and for internal library projects containing shared functionality.
TeamCity is producing NuGet packages from our internal library projects, and publishing them to a local repository. My modified solution files use the local repository for accessing the NuGet packages.
Consider the following source code solutions:
Company.Interfaces.sln builds Company.Interfaces.1.2.3.7654.nupkg.
Company.Common.sln contains a reference to Company.Interfaces via its NuGet package, and builds Company.Common.1.1.1.7655.nupkg, with Company.Interfaces.1.2.3.7654 included as a dependency.
The Company.DataAccess.sln uses the Company.Common nupkg to add Company.Interfaces and Company.Common as references. It builds Company.DataAccess.1.0.8.7660.nupkg, including Company.Common.1.1.1.7655 as a dependent component.
Company.Product.A is a website solution that contains references to all three library projects (added by selecting the Company.DataAccess NuGet package).
Questions:
If there is a source code change to Company.Interfaces, do I always need to renumber and rebuild the intermediate packages (Company.Common and Company.DataAccess) and update the packages in Company.Product.A?
Or does that depend on whether the source code change was
a bug fix, or
a new feature, or
a breaking change?
In reality, I have 8 levels of dependent library packages. Is there tooling support for updating an entire tree of packages, should that be necessary?
I know about Semantic Versioning.
We are using VS2012, C#4.0, TeamCity 7.1.5.
It is a good idea to update everything on each check-in, in order to test it early.
What you're describing can be easily managed using artifact dependencies (http://confluence.jetbrains.com/display/TCD7/Artifact+Dependencies) and "Finish Build" build triggers (or even solely "Nuget Dependency Trigger").
We wrote our own build configuration on the base project (would be Company.Interfaces.sln in this case) which builds and updates the whole tree in one go. It checks in updated packages.config files and .nuspec files along the way. I can't say how much of a time-saver this ended up being for us, even if it might sound like overkill at the beginning.
One thing to watch out for: the script we wrote checks in the files even if the chain fails somewhere in between, to give us the chance of fixing the problem on a local machine, checking in the fix, and restarting the publishing.
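For a single level of the chain, the manual equivalent looks roughly like this (a sketch using the nuget.exe command line; the version numbers and feed URL are placeholders):

nuget update Company.Common.sln -Id Company.Interfaces -Source http://internal-nuget-feed
nuget pack Company.Common/Company.Common.nuspec -Version 1.1.2.7656
nuget push Company.Common.1.1.2.7656.nupkg -Source http://internal-nuget-feed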
