How can one create a package for ClearQuest? - bug-tracking

I am modifying a ClearQuest database schema, and I wonder whether there is a way to create a package for future deployment. If there isn't, what are the best practices for tracking and deploying schema modifications?

In your CQ installation path, look for the "cqload" tool.
Basically:
cqload exportintegration - exports specific schema versions into a text file
cqload importintegration - imports the exported text file into a CQ schema
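As a hedged illustration of the round trip (everything after the subcommand is a placeholder - the exact positional arguments differ between CQ versions, so check cqload's own usage output before relying on this):

cqload exportintegration -dbset <dbset> <login> <password> <SchemaName> <from-rev> <to-rev> schema_changes.txt
cqload importintegration -dbset <dbset> <login> <password> <SchemaName> schema_changes.txt

The export captures the changes between two schema revisions into a plain text file, which you can keep under version control and replay against another schema repository with the import.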
Common practice is to have at least two CQ environments: a Dev/QA environment and a Production environment. Developers work in Dev/QA (checkout/modify/checkin) until they are happy, QA verifies the changes in the same environment, and then the implementer uses the cqload commands to transfer the changes to the production environment.
Personally I think this workflow is clumsy, as it requires so many manual steps, and it doesn't work properly if you have additional "Packages" upgrades or installations within CQ (e.g. the UCM package). Unfortunately, I don't think this is going to change anytime soon.

Related

Why are SAP Commerce Cloud recipes not recommended for Production, but the set-up instructions usually mentions using recipes?

In Installing SAP Commerce Using Installer Recipes and Installer Recipe Reference, there is a comment that says something like:
The installer is currently only intended to install SAP Commerce in
development environments or for demonstration purposes. Do not use the
installer to install SAP Commerce in a production environment.
However, guides like Customizing the Accelerator with extgen and modulegen usually mention recipes:
On Windows: install.bat -r b2c_acc_plus
So, how do you really set up a project from scratch? Do you start with recipes, or do you start with ant modulegen?
I don't see clear instructions (or a best practice) on how I should build a B2C/B2B application from scratch for development and then prepare it for Production. (Maybe there is a gap in the instructions, or I just don't know where it is.)
Even the Installing SAP Commerce Cloud for use with Spartacus guide mentions starting with a B2C recipe. Does this mean that the starting point of building a SAP Commerce project is to use recipes? Are there cases where you would not use a recipe, and build everything from scratch using ant modulegen and ant addoninstall?
It is not recommended to use a recipe for direct installation on production. The reason is that it installs a preset of hybris extensions which may or may not be needed for your requirements; some of them may also not be allowed under the license you have.
However, when you start your development, you can use a recipe to give your development a quick start. It generates the raw structure for your e-commerce application, which you then customize and later deploy to your production environment.
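As a rough sketch of that quick start, using the recipe from the question (the initialize/start arguments are the installer's documented ones, but verify against your SAP Commerce version):

install.bat -r b2c_acc_plus
install.bat -r b2c_acc_plus initialize
install.bat -r b2c_acc_plus start

The first command generates the project structure and configuration, the second builds and initializes the system with the recipe's sample data, and the third starts the server. The from-scratch alternative would instead begin with something like ant modulegen -Dinput.module=accelerator -Dinput.name=mystore -Dinput.package=com.mystore and add extensions piece by piece.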
how do you really set up a project from scratch? Do you start with recipes, or do you start with ant modulegen?
Well, you can use either. If you are looking for the difference, it has already been answered here.
how should I build a B2C/B2B application from scratch and prepare it for production?
For the production hybris deployment procedure, refer to this.
NOTE:
a) A recipe installation does more than you can achieve using modulegen alone, such as a complete installation, configuration, and initialization of a running e-commerce example. Once you go through the links above, you will have a much better understanding of this.
b) When you go with a recipe, it will install related extensions which you might not want to use or might not have a production license for. Be sure to review and disable such extensions.
Thanks
A few more points to add to the answer by www.hybriscx.com:
Generally, the integrations in a recipe are mock integrations (e.g. the payment integration), as the purpose of a recipe is to provide a ready-to-use demo/reference application (store).
The data in a recipe (catalog, users and passwords, user groups, roles, promotions, etc.) is sample data. The same goes for the look and feel (logos, colours, layout, etc.); every business requires its own specific data and look and feel.
The system configurations/properties (e.g. memory configuration, logging configuration) may be optimised for demo purposes, but a production setup may require different values. Along the same lines, settings like hosts, ports, and encryption are general-purpose defaults that a production environment will likely need to change.
The database set up by a recipe is generally HSQLDB, which is only suitable for development/demo use; production needs a fully supported database, as in the sketch below.
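For example, a minimal local.properties override pointing the platform at a real database (the property keys are the standard platform ones; the URL and credentials are placeholders):

db.url=jdbc:mysql://dbhost:3306/hybris
db.driver=com.mysql.jdbc.Driver
db.username=hybris
db.password=<secret>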

GitVersion – selective versioning of multiple assemblies of the same project

I’m on a .NET C# project composed of a solution with several class library projects.
The source control is managed by git, using gitflow as the branching model.
We have decided that we want to implement semantic versioning (http://semver.org/) of the project in order to follow a standard way of communicating our releases.
For that we are using GitVersionTask (via NuGet) which works pretty well with gitflow.
Every time we tag a release and perform a build from the master branch, the versions of all assemblies are updated and a new release is out for delivery.
Only one of the assemblies has a public API; all the others are for internal consumption. I would like to know if this is the correct way to manage the versions of multiple assemblies of the same project. I mean, isn't it wrong to change the version of every assembly when only a couple (or even just one) were changed? To make things more complicated, there is a strong possibility that some of the "internal" assemblies will be used by other projects, so I believe it is not very wise to increment the major version of an assembly that didn't change at all just because another assembly of the same project is introducing breaking changes. Should each assembly project be managed in its own repository?
Thanks in advance.
I know this is a bit of an old question; still, I want to share a workaround that seems to be working:
GitVersion uses $(Build.SourcesDirectory) to see where the sources are located (the src folder).
We can change this using logging commands.*
The workaround is to set Build.SourcesDirectory before the GitVersion task runs.
GitVersion then uses the GitVersion.yml from the project folder (the new Build.SourcesDirectory) and voilà - it works.
After that you may want to roll the change back, depending on your needs. For me it is a nice way to scope down to a single NuGet package from the collection of NuGet packages in our nugetPackages monorepo.
see GitVersion issue and comment
*Example PowerShell command (a standard PowerShell task, set to an inline script):
Write-Host "##vso[task.setvariable variable=Build_SourcesDirectory;]$(Build.SourcesDirectory)\$(NugetProjectName)"
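In YAML pipeline form, the workaround might look like the following sketch (the GitTools task name/version and the NugetProjectName variable are assumptions from my setup; adjust to whatever GitVersion task your pipeline uses):

steps:
- powershell: |
    # Repoint Build.SourcesDirectory at the sub-project so GitVersion reads its GitVersion.yml
    Write-Host "##vso[task.setvariable variable=Build_SourcesDirectory;]$(Build.SourcesDirectory)\$(NugetProjectName)"
  displayName: Scope sources to one project
- task: gitversion/execute@0
  displayName: Run GitVersion against the scoped directory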
There is certainly nothing in GitVersion that would help with having separate projects within the same repository. The guidance that we would offer here is that you should use different repositories for the different parts of your application. That way they can be versioned/updated at their own cadence.

SSIS Shared database connection strings between parent and child packages

I want to be able to build 30+ packages in SSIS and be able to test/develop them in isolation. I also want to be able to run these from a Master/Parent package.
When it comes to delivering the SSIS parent package I want to be able to change the connection string once and have this trickle down to all child packages. Other developers will be building and testing without using the master package and want to be able to develop these in isolation.
I've seen many articles on XML config/parameter mappings, etc., but I've not seen any definitive guide on how this should be done and what best practice is.
The project we have created also only allows packages to be linked in the solution as an external reference rather than as project links (is this the legacy format?). I'm wondering if this type of project could hamper the ability to achieve shared connection strings.
Answering this myself for reference. Basically, there is no streamlined way of doing this in the Package Deployment model. It is much easier to achieve in the Project Deployment model, which is the default in VS2012; however, we don't have that luxury.
I had to create parent variables in the master package, which are set from the XML config. The child packages then have Parent Variable configurations that map those variables to the ConnectionString properties of their connection managers, so changing the value once in the master's configuration trickles down to every child. An example follows.
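As a hedged illustration of the run-time override (the variable name User::ConnectionString and the connection details are examples from my setup, not a fixed convention):

dtexec /F Master.dtsx /SET \Package.Variables[User::ConnectionString].Properties[Value];"Data Source=PRODSRV;Initial Catalog=MyDb;Integrated Security=SSPI;"

The master package receives the value once - from the XML config file or a /SET override like the one above - and every child package inherits it through its Parent Variable configuration.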

Exclude Certain Database Objects from the Build Depending on Configuration Settings

I have a database project in Visual Studio 2012 with SSDT (latest as of this writing). In the database project, I have a schema called "UNITTEST" which contains tons of stored procedures that create, destroy, and provide other helper functionality for the unit tests. We do this because it gives us the ability to control our test data centrally rather than inside each unit test. That's fine and all; however, I don't want to publish this schema, or any of the objects inside it, to production.
So my question: is there a way to stop SSDT/VS2012 from including the UNITTEST schema in the production build deployment script?
I'm thinking there should be a way to do it depending on the solution configuration settings and publish profiles. If my configuration is set to "Release" then I want the build to perform a bit differently.
Builds are very new to me. I found this question: build-different-scripts-depending-on-build-configuration, but I can't seem to get its answer to solve my problem. This question also doesn't help, although it's very similar: bind-the-deploy-and-publish-destination.
Is anyone else managing something like this? The other developers in my team are just editing the published script by hand to remove these objects, but I HATE manual work; there HAS to be a solution! :)
Thanks all!
One of my schemas references a lot of sys.* objects which created a lot of errors in the build. I created another project in the solution and moved that schema to the new project.
Luckily you can build and publish at the project level.
This allows me to keep the other schema in change control at least.
(It may also help to set the Build Action property on the SQL files to None.)
Partial/composite projects might be useful here. The main project contains all of the DB objects your apps need to run; the partial project references the main project and contains all of the "Test" code.
Here are a couple of options from Jamie Thomson:
http://sqlblog.com/blogs/jamie_thomson/archive/2013/03/10/deployment-of-client-specific-database-code-using-ssdt.aspx - this may be the simplest way to handle this.
http://sqlblog.com/blogs/jamie_thomson/archive/2012/01/01/implementing-sql-server-solutions-using-visual-studio-2010-database-projects-a-compendium-of-project-experiences.aspx - lots of good information, most of which also applies to SSDT SQL projects.
http://msdn.microsoft.com/en-us/library/dd193415.aspx - composite projects for larger DBs; this could potentially work for you as well.
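Building on the build-configuration question linked above, a minimal sketch of the conditional-include approach (a hand-edited .sqlproj fragment; the MSBuild condition is standard, but the UNITTEST folder layout is an assumption):

<!-- In the .sqlproj: compile UNITTEST objects only for non-Release configurations -->
<ItemGroup Condition=" '$(Configuration)' != 'Release' ">
  <Build Include="UNITTEST\**\*.sql" />
</ItemGroup>

The same files must be removed from the unconditional ItemGroup the project file already contains, otherwise they are built in every configuration regardless.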

Deploy repository code to multiple machines at once

My question is: how do you deploy the same code from whatever [D]VCS you use onto multiple machines? Do you have an automated deployment system, and if so, what is it? Is it built in-house? Are there any tools out there that can do this automatically? I am asking because I am pretty bored of updating up to 20 machines every time I make some modifications.
P.S.: This probably belongs on ServerFault, but I am asking here because I am thinking of writing my own custom-made deployment system.
Roll your own rpm/deb/whatever for your package, set up your own repo, and have your machines pull on a regular basis. It's really not that hard to do, and it's already built into your system, well tested, and loaded with features. You could use something like Func if you need to push instead. A sketch of the flow is below.
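A minimal sketch of that flow on a Debian-style system (fpm is one packaging tool among many; the package name, version, and paths are placeholders):

# build a versioned .deb from your build output
fpm -s dir -t deb -n myapp -v 1.2.3 --prefix /opt/myapp ./build
# publish it to your internal apt repo, then on each machine (cron or config management):
apt-get update && apt-get install -y myapp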
Depending on your situation, deploying straight from the version control system might not always be the best idea. You can only do so much by just updating files, and mixing deployment and development will probably make development use of the version control system less free.
I see two alternatives that might be interesting.
Deploy from your continuous integration server. (Add a task that runs after every successful build, copies over the files, and executes some remote commands. I'm using this to deploy to a test server, but I would find it too tricky to upgrade production in such a way.)
Deploy using an existing package manager. You can set up your own apt (or equivalent) repository and package the updates using apt. Have your continuous build system build apt packages, but let an admin decide whether they should be pushed to the update server. I think this is the only safe solution for production machines.
We use Capistrano for deployment & Puppet for maintaining the servers and avoiding the inevitable 'configuration drift' when many developers/engineers tinker with the package lists and configuration files.
Both of these programs are written in Ruby, but we use them for our PHP codebase stored in a git repository.
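For a sense of the day-to-day workflow, Capistrano use mostly boils down to two commands (Capistrano 2 era, matching this setup; your deploy recipe defines the server list and paths):

cap deploy           # push the latest committed code to every configured server
cap deploy:rollback  # revert all servers to the previous release if something breaks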
I use a combination of deb packages and Puppet to deploy code and configure a bunch of machines.
In most projects I have been involved with, the final stage has always been a scripted rsync deployment to live, so the multiple targets are built into this process; a stripped-down version is below.
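A stripped-down version of such a script (hosts.txt and the paths are placeholders):

# push the same build tree to every target host
for h in $(cat hosts.txt); do
  rsync -az --delete ./build/ "deploy@$h:/var/www/app/"
done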
