Migration of a rule project from IBM ODM 8.5 to 8.9

Is it possible to migrate an IBM ODM rule project from 8.5 to a decision service in 8.9?
What precautions need to be taken during the migration?

Yes, migration from 8.5 to 8.9 is possible. IBM provides a wizard, but manual steps are required. I have not used the wizard in 8.9, but I used it several times in 8.7; it should be similar. Here are my notes.
How to Migrate Classic Rule Projects to Decision Services
This document provides a brief recipe for converting from ‘classic’ rule projects to the new ‘decision service’ style rule projects in ODM v8.7.1.1.
IBM Knowledge Center References
Migrating classic rule projects to decision services http://www.ibm.com/support/knowledgecenter/SSQP76_8.7.1/com.ibm.odm.distrib.migrating/odm_topics/tsk_migrate_projects_to_ds.html
Procedure
Open the Rule Designer workspace that contains the classic rule projects that you want to migrate.
Verify the migration pre-requisites have been met (see the Knowledge Center reference), including the verbalization of the ruleset parameters.
From the Rules perspective, click the Rule Projects Migration icon (looks like a folder with a green arrow pointing to the right) in the toolbar.
Follow the steps in the migration wizard.
Select all the rule projects at once
Let the wizard do its work.
Review Rule Project Migration reports for each rule project
Clean and Build the workspace
Rename the rule projects, if required by new naming standards (your company's, not IBM ODM's).
Change the Decision Service property of the Main Rule Project to be Standard Rule Project.
Create a new Decision Service Main Rule Project that references all the other rule projects with an appropriate name, like DecisionService.
Move the contents of the deployment folder from the project that originally specified the parameters to the new Main Rule Project created in the previous step. This folder should contain a new ‘operation’ corresponding to each ruleset in the Classic Rule Projects.
Rename the operation in the Main Rule Project to match the previous Rule App name.
Edit the operation in the Main Rule Project to change the Source Rule Project to be the Main Rule Project.
Edit the operation in the Main Rule Project to change the Ruleflow to ‘Use main ruleflow’ and specify the Main ruleflow.
Edit the operation in the Main Rule Project to change the Ruleset Name to match the previous ruleset name.
Add an Action Task to some ruleflow to initialize any ruleset variables that were previously initialized directly from the ‘parameter’.
Note: With Classic Rule Projects, the Initial Value of a ruleset variable could be set to the value of a ruleset parameter. With Decision Service Rule Projects, there is no longer any such thing as a ruleset parameter – you must define a ruleset variable to hold that value. Since the Initial Value of a ruleset variable cannot be set to the value of another ruleset variable, it is no longer possible to use the Initial Value to set the ruleset variables that were used as ‘virtual parameters’; instead, these ruleset variables now should be initialized in the Main rule flow in either (a) the Initial Actions of the Initialize rule task or (b) in an action task.
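For example, if a ruleset variable used to take its Initial Value from a ruleset parameter, the Initial Actions of the Initialize rule task (or an action task) can now copy the value from the variable that carries the operation input. A rough IRL-style sketch, with hypothetical names not taken from any real migration:
maxLoanAmount = loanRequest.getMaxAmount();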
You should be able to run DVS tests from Rule Designer, or publish the Main Rule Project to Decision Center and run Decision Runner tests from the Business Console.

Use includes in .gitlab-ci.yml without direct user access to included project

We are using includes from project B within project A in the .gitlab-ci.yml as follows:
include:
  - project: pathto/projectb
    file:
      - "/pathto/myfile.yml"
This works well when the user has access rights to both projects, but breaks with a linting error when the user only has access rights to project A:
Found errors in your .gitlab-ci.yml:
Project `pathto/projectb` not found or access denied! Make sure any includes in the pipeline configuration are correctly defined.
The problem is that we want the include (and the pipeline) to succeed even when the user who starts the pipeline only has access rights to project A.
Are there any ways to achieve this?
Background: Project B holds some general CI files, and the user is an external developer who should only have limited access.
Thanks in advance!
No, for an include: from a project to work, the user that triggers the pipeline must have at least read access to the referenced CI YAML files.
You must either (1) give Project B internal or public visibility, or (2) grant membership access to the users who trigger pipelines that include files from it.
There may be ways to separate the YAML files from Project B (such as publishing them elsewhere), but in all cases the user who triggers the pipeline must be able to read all included files.
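If the shared YAML really can live outside Project B, one workaround is GitLab's include:remote, which fetches a file over HTTP(S) without any project permissions - but the file must then be publicly reachable, so weigh that against how sensitive the CI templates are. The URL below is just a placeholder:
include:
  - remote: "https://example.com/ci-templates/myfile.yml"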

Data Factory DevOps SSIS-IntegrationRuntime

We're planning to use CI/CD pipelines for Data Factory.
In one of our pipelines we use SSIS packages that need to be called. To call an SSIS package you need to specify the Azure-SSIS IR that must be used.
The Azure-SSIS IR has a different naming on every environment.
Now, it is not possible to set this value dynamically (the option "Add dynamic content [Alt+P]" is not available on this field).
Is there a simple solution to change the Azure-SSIS IR during the deployment?
Thanks in advance
Your linked services aren't named by environment, are they? (They most definitely should not be.)
The default out of the box cloud runtime is also not named by environment.
Your runtimes should not be named by environment either.
IMHO your naming convention is incorrect. You should challenge it - there's no reason to include an environment designator in any runtime names.
Yes, your parent data factory should definitely have a different name per environment. That's where the distinction is made. Your runtimes should not.
In direct answer to your question, the way I have dealt with this in the past is to add a PowerShell script task to the build part of DevOps that transforms the deployment asset, essentially find/replacing the name, and then delivers the result as a build artifact.
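A minimal sketch of that kind of transform, assuming the exported ARM template sits at armtemplate/ARMTemplateForFactory.json and the source factory's IR is called Azure-SSIS-IR-DEV (both names are assumptions, as is the TARGET_SSIS_IR_NAME pipeline variable):
# Hypothetical find/replace of the Azure-SSIS IR name in the exported ARM template
# before publishing it as a build artifact.
$templatePath = "armtemplate/ARMTemplateForFactory.json"   # exported template path (assumption)
$sourceIrName = "Azure-SSIS-IR-DEV"                        # IR name in the source factory (assumption)
$targetIrName = $env:TARGET_SSIS_IR_NAME                   # supplied per environment by the pipeline (assumption)
(Get-Content $templatePath -Raw) -replace [regex]::Escape($sourceIrName), $targetIrName |
    Set-Content $templatePath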

Manage csproj file in CI/CD process

I have an ASP.NET Web Forms application. To implement a continuous integration/deployment process we use:
TFS
  Dev branch
  Master branch
Azure: App Service
  Dev Slot
  Staging Slot
  Prod
The lifecycle is :
The developer adds some features and commits them to the Dev branch ==> the commit is automatically deployed to the Dev slot
The requestor or the client sees, tests and validates the modifications
If it is OK ==> the developer merges his changeset to the master branch
When the changeset has been merged successfully, it will be ==>
deployed to the Staging slot
Test some important URLs in the staging slot
If it is OK ==> swap the Staging and Production slots
So at the end, we will have the versions of the application:
Version N+1 in the dev slot
Version N-1 in the staging slot
Version N in production
This process works fine. But because of the csproj file, in some cases it doesn't.
Example :
A developer adds subsites A and B
Site B is validated by the client, but site A is not
When site B's changeset is merged to the master branch, the csproj file will contain references to both site A's and site B's pages
So we will have a compilation error in master, because site A's pages are mentioned in the csproj but do not exist in the master branch!
So I need to know how I can fix this issue.
Thanks,
You have a continuously integrated dev branch. When it comes to deploying, you find yourself asking the question, "How do I deliver a subset of what's presently in the branch?" At this point, you're trying to essentially un-merge. You don't ever want to "unmerge".
Instead, consider adopting a feature toggle pattern. This is a developer-centric activity, not a branching/merging activity. Your developers wrap any new feature behind a toggle that can be conditionally enabled or disabled. If Site B is approved for deployment and Site A isn't, that's fine -- deploy with Site A's feature toggle disabled. It's still in the code, but there's no way for your end-users to access it.
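A minimal sketch of what such a toggle can look like in a Web Forms app, assuming the flags live in <appSettings> in web.config and a hypothetical helper class FeatureToggles (all names here are illustrative, not a prescribed implementation):
// FeatureToggles.cs - reads toggle flags such as <add key="Feature.SiteA" value="false" /> from web.config
using System.Configuration;

public static class FeatureToggles
{
    public static bool IsEnabled(string featureName)
    {
        bool enabled;
        return bool.TryParse(ConfigurationManager.AppSettings["Feature." + featureName], out enabled) && enabled;
    }
}

// In site A's pages (or its master page) code-behind, keep users out while the toggle is off:
protected void Page_Load(object sender, System.EventArgs e)
{
    if (!FeatureToggles.IsEnabled("SiteA"))
    {
        Response.Redirect("~/Default.aspx");
    }
}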
Another possibility is to adopt a microservice architecture where there are fewer hard dependencies. Instead, your application is composed of many smaller, independently versioned and deployed services. This will probably end up involving feature toggles too, of course.
The above two thoughts are coming from a place of modern application design. To go back to an older way of thinking: If you need to "unmerge", it means you're merging too early. You may need to maintain multiple development branches and QA features independently, only merging them together once the changes have been approved. This, of course, requires a longer QA cycle because you'll have to QA both features a second time, after they are merged.

Octopus Deploy and Multiple Branches/Release Candidates

We have recently released our code to Production and, as a result, have cut a branch so that we can support the current release with hot-fixes while ongoing development continues without breaking it.
Here is our current structure:
Project
  /Development
  /RC1
Until recently using Octopus we have had the following process:
Dev->Staging/Dev Test->UAT
This worked fine, as we didn't have an actual release.
My question is how can Octopus support our new way of working?
Do we create a new/cloned project in Octopus named RC1 and have CI from our RC1 branch into that, then add/remove projects as appropriate when RCs are no longer required?
Or is there another method that we've clearly missed out on?
It seems that most organisations that are striving for continuous something end up with a CI server and continuous deployment up to some manual sign off environment and then require continuous delivery to production. This generally leads to a branching strategy in order to isolate the release candidate to allow hot fixing.
I think a question like this raises more points for discussion before trying to provide a one-size-fits-all answer, IMHO.
The kind of things that spring to mind are:
Do you have "source code" dependencies or binary ones for any shared components?
What level of integration / automated regression testing do you have?
Is your deployment orchestrated by TFS, or driven by a user in Octopus?
Is there a database as part of the application that needs consideration?
How is your application version numbering controlled?
What is your release cycle?
In the past where I've encountered this scenario, I would look towards a code promotion branching strategy which provides you with one branch to maintain in production - This has worked well where continuous deployment to production is not an option. You can find more branching strategies discussed on the ALM Rangers page on CodePlex
Developers / Testers can continually push code / features / bug fixes through staging / uat. At the point of release the Dev branch is merged to the Release branch, which causes a release build and creates a nuget package. This should still be released to Octopus in exactly the same way, only it's a brand new release and not a promotion of a previous release. This would need to ensure that there is no clash on version numbering, and so a strategy might be to have a difference in the major number - this would depend on your current setup. This does, however, take an opinionated view that the deployment is orchestrated by the build server rather than Octopus Deploy. Primarily TeamCity / TFS calls out to the Octopus API, rather than a user choosing the build number in Octopus (we have been known to make mistakes).
octo.exe create-release --version GENERATED_BY_BUILD_SERVER
To me, the biggest question I ask clients is "What's the constraint that means you can't continuously deploy to production?". Address that constraint (see the theory of constraints) and you remove the need to work around an issue that needn't be there in the first place (not always that straightforward, I know).
I would strongly advise that you don't clone projects in Octopus for different environments as it's counterintuitive. At the end of the day you're just telling Octopus to go and get this nuget package version for this app, and deploy it to this environment please. If you want to get the package from a different NuGet feed for release, then you could always make use of the custom binding on the NuGet field in Octopus and drive that with a scoped variable depending on the environment you're deploying to.
Step 1 - Set up two feeds
Step 2 - Scope some variables for those feeds
Step 3 - Consume the feed using a custom expression
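As a sketch (the variable name and feed URLs are made up), you might scope a project variable per environment and reference it in the step's feed field with a custom expression:
NuGetFeedUrl = https://nuget.example.com/dev-feed       (scoped to Dev / Staging)
NuGetFeedUrl = https://nuget.example.com/release-feed   (scoped to Production)
NuGet feed field, custom expression: #{NuGetFeedUrl}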
I hope this helps
This is unfortunately something Octopus doesn't directly have - true support for branching (yet). It's on their roadmap for 3.1 under better branching support. They have been talking about this problem for some time now.
One idea that you mentioned would be to clone your project for each branch. You can do this under the "Settings" tab (on the right-hand side) in the project that you want to clone. This will allow you to duplicate your project and simply rename it after one of your branches - so one is your PreRelease or Release Candidate project and the other is your mainline Dev (I would keep the same name for that project). I'm assuming you have everything in the same project group.
Alternatively you could just change your NuSpec files in your projects in different branches so that you could clearly see what's being deployed at the overview project page or on the dashboard. So for your RC branch, you could just add the suffix -release within the NuSpec in your RC branch which is legal (rules on Semantic Versioning talk about prereleases at rule #9). This way, you can use the same project but have different packages to deploy. If your targeted servers are the same, then this may be the "lighter" or simpler approach compared to cloning.
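For example (version numbers here are illustrative), the RC branch's NuSpec would carry the prerelease suffix while Dev stays on a plain version, making the two packages easy to tell apart on the dashboard:
<version>2.1.0-release</version>   <!-- RC branch -->
<version>2.1.0</version>           <!-- Dev branch -->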
I blogged about how we do this here:
http://www.alexjamesbrown.com/blog/development/working-branch-deployments-tfs-octopus/
It's a bit of a hack, but in summary:
Create a branch in TFS
Create a branch-specific build definition
Create a branch-specific drop location for OctoPack
Create a branch-specific Octopus deployment project (by cloning your ‘main’ deployment project)
Edit the newly cloned deployment project, re-pointing the NuGet feed location to your branch-specific output location created in step 3

Project specific Mantis states

I'm working in a company using one Mantis bug tracker hosting several projects.
The project I am working on needs specific statuses, and I don't want these to be visible on other projects' pages.
I see in the documentation how to add a status, but statuses seem to be global rather than project-specific.
The only way I see how to do it is to add global states and remove them from all other projects' workflows.
Do you know if (and how) it is possible to add project-specific workflows?
Thanks :)
Almost everything you can set in config_inc.php can also be defined in the database, either globally (all projects) or for specific projects.
You can do this using the Manage | Manage Configuration | Configuration Report page; at the bottom of the page, a section lets you define custom config options. Define the enums as appropriate, as described in the documentation (you should still define the translations globally, including all valid values in the string).
To set a custom workflow, just set your current project as appropriate and use the Manage | Manage Configuration | Workflow Transitions to define it.
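As a rough sketch (the status code 55 and the name qa_review are hypothetical), the status_enum_string override you would enter for your project, plus the matching global translation string, look like this in their config_inc.php / custom_strings_inc.php form:
$g_status_enum_string = '10:new,20:feedback,30:acknowledged,40:confirmed,50:assigned,55:qa_review,80:resolved,90:closed';
# Translation defined globally, listing all valid values:
$s_status_enum_string = '10:new,20:feedback,30:acknowledged,40:confirmed,50:assigned,55:QA review,80:resolved,90:closed';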
