Should ChangeMan demote delete the components from promotion libraries? - mainframe

I promoted a new JCL member 'TTTTS360' to TST (promotion level 1). I noticed that, as this is a new JCL, a member was created in TTTTTST.E998.JCL(TTTTS360), and similarly an entry was created in the parameter library 'COMPTST.AAAA.PARMLIB(QEEEEAU)'.
Now, once I demote my package to level 0, i.e. development, I still see 'TTTTTST.E998.JCL(TTTTS360)' and 'COMPTST.AAAA.PARMLIB(QEEEEAU)'. Shouldn't they be removed? I was expecting them to be removed altogether.
I see the following steps in the ChangeMan job:
SYSPRINT DEL1CTC
SYSPRINT DEL1JCL
DEL1CTC CHANGEMAN STEP
DELETE QEEEEAU
QEEEEAU WAS DELETED FROM TARGET DATA SET
DEL1JCL CHANGEMAN STEP
DELETE TTTTS360
TTTTS360 WAS DELETED FROM TARGET DATA SET

ChangeMan has the concept of "staging libraries" and "promotion libraries." The former are sometimes referred to as "package datasets" because they are part of your ChangeMan package.
When you promote your package, typically the members from your staging libraries are copied to the corresponding target promotion libraries. When you demote your package, the members that were promoted are deleted from the target promotion libraries.
Your staging libraries aren't cleaned up until after install and baseline have completed as part of your request to install in your production environment. The cleanup may be days or weeks afterward, as a backout requires the staging libraries to be present.
Having said all that, ChangeMan is very configurable as Bruce Martin indicated in his comments. Talk to your ChangeMan Administrator(s) about what behavior you should expect to see.


Azure DevOps Releases skip tasks

I'm currently working on implementing CI/CD pipelines for my company in Azure DevOps 2020 (on-premises). There is one requirement I just don't seem to be able to solve conveniently: skipping certain tasks depending on user input in a release pipeline.
What I want:
User creates new release manually and decides if a task group should be executed.
Agent Tasks:
1. Powershell
2. Task Group (conditional)
3. Task Group
4. Powershell
What I tried:
Splitting the tasks into multiple jobs, with the task group depending on a manual intervention task.
Does not work: if the manual intervention is rejected, the whole execution stops as failed.
Splitting the tasks into multiple stages, doing almost the same as above, with the same outcome.
Splitting the tasks into multiple stages and triggering every stage manually.
Not very usable, because you have to execute what you want in the correct order and only after the previous stages have succeeded.
Variable set at release creation (true/false).
Will use that if nothing better comes up, but it is kind of prone to typos and not very usable for the colleagues who will use this. Unfortunately, Azure DevOps does not seem to support dropdown or checkbox variables for releases (but it works with parameters in builds).
Two stages, one with tasks 1,2,3,4 and one with tasks 1,3,4.
Not very desirable for me because of the duplication.
Any help would be highly appreciated!
It depends on what the criteria are for the pipelines to run. One recommendation would be two pipelines calling the same template, and each pipeline may have a true/false embedded in it to pass as a parameter to the template.
The template will have all the tasks defined in it; however, the conditional one will have a condition like:
condition: and(succeeded(), eq('${{ parameters.runExtraStep}}', true))
This condition would be set at the task level.
Any specific triggers can be defined in the corresponding pipeline.
Here is the documentation on Azure YAML Templates to get you started.
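As a rough sketch of that idea (the file names, parameter name, and agent pool are placeholders, and plain PowerShell steps stand in for the task group's contents), the template and one of the two calling pipelines could look like this:

# steps-template.yml (hypothetical)
parameters:
- name: runExtraStep
  type: boolean
  default: false

steps:
- powershell: Write-Host "Task 1"
- powershell: Write-Host "Steps of the conditional task group"
  condition: and(succeeded(), eq('${{ parameters.runExtraStep }}', true))
- powershell: Write-Host "Task 3"
- powershell: Write-Host "Task 4"

# azure-pipelines.yml (one of the two pipelines calling the template)
jobs:
- job: Deploy
  pool: Default            # placeholder agent pool for on-premises
  steps:
  - template: steps-template.yml
    parameters:
      runExtraStep: true   # the other pipeline would pass false

Each pipeline can then define its own triggers while sharing the same set of steps.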
Unfortunately, it's impossible to add a custom condition for a Task Group, but this feature is on the roadmap. Check the following user voice item, which you can vote for:
https://developercommunity.visualstudio.com/idea/365689/task-group-custom-conditions-at-group-and-task-lev.html
The workaround is to clone the release definition (right-click a release definition > Clone), then remove some tasks or task groups and save it. After that you can create a release from whichever release definition matches your detailed scenario.
In the end I decided to stick with Releases and split my tasks into 3 agent jobs: job 1 with the first PowerShell task, job 2 with the conditional task group that executes only if a variable is true, and job 3 with the remaining tasks.
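For illustration, the custom condition on that second agent job could look something like this (assuming a release variable named RunTaskGroup; the actual variable name is up to you):
eq(variables['RunTaskGroup'], 'true')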
As both cece-dong and dreadedfrost stated, I could have achieved a selectable runtime parameter for the condition with YAML pipelines. Unfortunately, one of the task groups needs a specific artifact from a YAML pipeline. Most of the time it would be the "latest", which can easily be achieved with a download artifacts task, but sometimes a previous artifact gets chosen. I have found no easy way to achieve this as conveniently as in releases, where you by default have a dropdown with a list of artifacts.
I found this blog post for anyone interested in how you can handle different build artifacts in YAML pipelines.
Thanks for helping me out!

What are obsolete snapshots and snapshot files?

I find the Jest Snapshot Summary a bit confusing. After running tests in one of our repositories, I get the following Summary:
Snapshot Summary
› 2 snapshots written in 1 test suite.
› 50 obsolete snapshot files found, re-run with `-u` to remove them.
› 3 obsolete snapshots found, re-run with `-u` to remove them.
Snapshot testing means we compare the current tests' output against the output before our changes, to catch side effects.
Hence, if I get it right, the summary means
2 tests are new, no snapshots were available to compare against
50 tests still provide the same output as before
3 tests have been removed, but the snapshots are still around
So running with -u would
Update the time stamp for 50 snapshots, but not change their contents
Delete the files for 3 snapshots that are useless
Is that understanding correct?
It's been a while since I posted this question, and by now I can answer it myself:
"Obsolete" refers to snapshots or snapshot files, for which no .toMatchSnapshot() exists any more.
Snapshots are organised in one file per test suite. Single snapshots in those files are stored along with the name of their test, given in jest's it() function. If you rename a test, the old snapshot is still in the snapshots file, but recognised as "obsolete".
› 2 snapshots written in 1 test suite.
⇒ 2 tests are new, no snapshots were available to compare against
This one holds true.
› 50 obsolete snapshot files found
⇒ 50 tests still provide the same output as before
This one is wrong: the 50 corresponding test suites have been renamed, moved, or removed. Such a high number is unusual, and you should probably find a way to re-map the snapshots to their tests before updating them.
› 3 obsolete snapshots found
⇒ 3 tests have been removed, but the snapshots are still around
So this is only partly right, since the tests might have been renamed, not removed.
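To illustrate the rename case with a minimal sketch (the test and file names are made up):

// math.test.js (hypothetical)
it('adds two numbers', () => {
  expect(1 + 2).toMatchSnapshot();
});

// The suite's .snap file now contains an entry keyed by the test name:
// exports[`adds two numbers 1`] = `3`;
// If the test is later renamed to it('sums two numbers', ...), the next
// run writes a new entry under the new name and reports the old
// "adds two numbers 1" entry as obsolete until you re-run with -u.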

Importing LCI database - dealing with unlinked exchanges

I am having an issue importing the ecoinvent v3.2 database (cut-off) in Brightway.
The steps followed were:
ei32cu = bw.SingleOutputEcospold2Importer(fp, "ecoinvent 3.2 cutoff")
ei32cu.apply_strategies()
All seemed to be going well. However, ei32cu.statistics() revealed that there were many unlinked exchanges:
12916 datasets
459268 exchanges
343020 unlinked exchanges
Type biosphere: 949 unique unlinked exchanges
Of course, the unlinked exchanges prevented the writing of the database: ei32cu.write_database() did not work, and an "Invalid exchange" error was raised.
My questions:
- How can I fix this?
- How can I access the log file (cited here) that might give me some insights?
- How can I generate a list of exchanges (and their related activities)?
It is strange that you have unlinked exchanges with ecoinvent 3.2 cutoff; at least with Python 3, importing 3.2 cutoff should be very smooth. Are you perhaps on Python 2, or not using the latest version of Brightway2?
- Difficult to give an answer without looking into the db, but if you are on Python 2, just try with Python 3.
- To check where the log is:
`projects.logs_dir`
- To write the list of unlinked exchanges:
ei32cu.write_excel(only_unlinked=True)  # only_unlinked=False exports the full list of exchanges
I now know why this problem occurred, and the solution is quite simple: in new projects, one needs to run bw2setup() before importing LCI databases.
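For completeness, a minimal sketch of the working sequence (the project name and dataset path are placeholders):

import brightway2 as bw

bw.projects.set_current("my-new-project")        # placeholder project name
bw.bw2setup()                                    # installs the biosphere3 database and LCIA methods; run this first in a new project
fp = "/path/to/ecoinvent 3.2_cutoff/datasets"    # placeholder path to the ecospold2 files
ei32cu = bw.SingleOutputEcospold2Importer(fp, "ecoinvent 3.2 cutoff")
ei32cu.apply_strategies()
ei32cu.statistics()                              # should now report no unlinked exchanges
ei32cu.write_database()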

How to know OPC job status using Syncsort or any other method?

My objective is to get the current timestamp using Syncsort if an existing OPC job ran fine in production. In my case I cannot add my new job after the existing OPC job. Is there any facility to check whether the existing job ran fine in production?
I mean, is there any reference table holding production job details with their status for each day?
Can anyone please help me move forward?
There are commercial packages that track jobs and job status. CA (Computer Associates) is one such vendor.
However, these packages cost a lot. A simple, home-grown solution is to have a dataset known to both jobs: write a one-line record into that dataset when job1 completes, and have the second job (job2) read the dataset to "know" whether the first job ran. If this is what you are trying to do, it is not exactly clear from your question. But any solution along these lines works, until management wants to cough up $50K (or whatever) for a commercial package.
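A rough sketch of that idea in JCL, assuming a hypothetical pre-allocated flag dataset PROD.JOB1.FLAG known to both jobs:

//* last step of job1: write a one-line record saying job1 completed
//SETFLAG  EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSIN    DD DUMMY
//SYSUT1   DD *
JOB1 COMPLETED
/*
//SYSUT2   DD DSN=PROD.JOB1.FLAG,DISP=SHR
//*
//* first step of job2: copy the flag record to the job log so job2
//* (or an operator) can confirm that job1 ran
//CHKFLAG  EXEC PGM=IEBGENER
//SYSPRINT DD SYSOUT=*
//SYSIN    DD DUMMY
//SYSUT1   DD DSN=PROD.JOB1.FLAG,DISP=SHR
//SYSUT2   DD SYSOUT=*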

GDG Roll In Error

While executing one proc, I am getting a 'GDG Roll In Error'. The error message says 'IGD07001I GDG ROLL IN ERROR -RETURN CODE 20 REASON CODE 0 MODULE IGG0CLEG'. The proc is supposed to create 19 generations of a GDG. This error occurs after creating the first 6 generations. The parameters of the GDG are LIMIT=100, NOEMPTY, SCRATCH. What could be the reason?
Experts, please help.
If you look up IGD07001I it says, among other things, to look at IDC3009I for an explanation of the return and reason codes. For return code 20 reason code 0, IDC3009I says:
RETURN CODE 20 Explanation: There is insufficient space in the catalog to perform the requested update or addition. The catalog cannot be extended for one of the following reasons:
There is no more space on the volume on which the catalog resides
The maximum number of extents has been reached
The catalog has reached the 4GB limit
There is not enough contiguous space on the volume (required when the catalog's secondary allocation is defined in tracks)
Programmer Response: Scratch unneeded data sets from the volume. Delete all unnecessary entries from the catalog. The catalog may need to be reallocated and rebuilt if these steps do not resolve the space shortage.
I suggest contacting your DFSMS Administrator. I also suggest bookmarking the z/OS documentation for your version/release.
