What is the use of “create essential data” in system update via hac in SAP Hybris? [closed] - sap-commerce-cloud

While performing a system update from hac, we have three options:
Update running system
Create essential data
Localize types
Can anyone please explain the use of these three options, and why do we prefer unchecking “create essential data” most of the time?

A couple of links related to what essential data is:
https://help.sap.com/docs/SAP_COMMERCE/4c33bf189ab9409e84e589295c36d96e/8ad2f0a7866910149c31803942c91149.html?locale=en-US
https://help.sap.com/docs/SAP_COMMERCE/3fb5dcdfe37f40edbac7098ed40442c0/9236d781bd6a4330ad12dbc3b8880e77.html?locale=en-US
What those three options do:
Update running system: updates the DB schema and type metadata in the DB for this Hybris environment
Create essential data: loads/reloads the configured essential/core data sets into the DB
Localize types: loads localisation file contents into the DB, e.g. localisation for types and attributes
In my view, essential/core data should always be safe to load during a system update, including in production. It is intended to be the data that is essential for correct operation of the system and should not vary.
If items in the essential data are being updated at runtime and the essential data import overwrites them, then either you have incorrectly assigned some imports to essential data (instead of project/sample data), or somebody is doing runtime changes they should not be. Data that is intended for runtime maintenance should be in project/sample data (which can be viewed as a starting-point data set for lower environments such as dev); these data sets generally should not be run during an update, especially in production.
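For context on how these two buckets are wired up in code, here is a minimal sketch of a system-setup hook, assuming a hypothetical extension called myextension (the package and class names are made up for illustration; the @SystemSetup annotation and SystemSetupContext are the standard platform hooks). The class also has to be registered as a Spring bean in the extension's spring configuration so the platform invokes it during initialization and update:

```java
package com.example.myextension.setup;

import de.hybris.platform.core.initialization.SystemSetup;
import de.hybris.platform.core.initialization.SystemSetupContext;

// Hypothetical setup class for an extension called "myextension".
@SystemSetup(extension = "myextension")
public class MyExtensionSystemSetup
{
    // Runs during initialization and, during an update, when "create essential data" is ticked.
    // Essential data = data the system cannot operate correctly without
    // (countries, currencies, base user groups, ...).
    @SystemSetup(type = SystemSetup.Type.ESSENTIAL, process = SystemSetup.Process.ALL)
    public void createEssentialData(final SystemSetupContext context)
    {
        // e.g. trigger an ImpEx import of essential data, or create items via the model service
    }

    // Runs only when the extension's project data option is selected.
    // Project/sample data = starting-point data for lower environments,
    // maintained at runtime afterwards; normally NOT re-run on a production update.
    @SystemSetup(type = SystemSetup.Type.PROJECT, process = SystemSetup.Process.ALL)
    public void createProjectData(final SystemSetupContext context)
    {
        // e.g. sample catalogs, CMS content, test customers
    }
}
```

If I remember correctly, ImpEx files named essentialdata*.impex and projectdata*.impex under an extension's resources/impex folder are picked up in the same way during essential/project data creation, which is usually where misplaced "runtime" data ends up.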

“Create essential data” is used for creating the essential data, i.e. the data required for the basic setup of the system, including entities like countries, currencies, etc.
The rest of the data, specific to the project, should be categorized as project data.

Related

Is there a standard pattern for invoking related pipelines in Kiba ETL? [closed]

I'm working on an ETL pipeline with Kiba which imports into multiple, related models in my Rails app. For example, I have records which have many images. There might also be collections which contain many records.
The source of my data will be various, including HTTP APIs and CSV files. I would like to make the pipeline as modular and reusable as possible, so for each new type of source, I only have to create the source, and the rest of the pipeline definition is the same.
Given multiple models in the destination, and possibly several API calls to get the data from the source, what's the standard pattern for this in Kiba?
I could create one pipeline where the destination is 'the application' and has responsibility for all these models, but this feels like the wrong approach because the destination would be responsible for saving data across different Rails models, uploading images, etc.
Should I create one master pipeline which triggers more specific ones, passing in a specific type of data (e.g. image URLs for import)? Or is there a better approach than this?
Thanks.
Kiba author here!
It is natural and common to look for some form of genericity, modularity and reusability in data pipelines. I would say, though, that as with regular code, it can be hard initially to figure out the correct way to get that (it will depend quite a bit on your exact situation).
This is why my recommendation would be instead to:
Start simple (on one specific job)
Very important: make sure to implement end-to-end automated tests (use webmock or similar to stub out API requests and make the tests completely isolated; create tests with one row going from source to destination) - this will make it easy to refactor things later
Once you have that (one pipeline with tests), you can start implementing a second one, refactor to extract interesting patterns as reusable bits, and iterate from there
Depending on your exact situation, maybe you will extract specific components, or maybe you will end up extracting a whole generic job, or generic families of jobs etc.
This approach works well even as you get more experience working with Kiba (this is how I gradually extracted the components that you will find in kiba-common and kiba-pro, too).

Agile-methodology in Project and Query-Driven methodology in Cassandra? [closed]

We want to start a new project. Our DB will be Cassandra, and we run the project in a scrum team, based on agile.
My concern is with one of the most important issues, change, which agile is meant to handle:
Agile software development teams embrace change, accepting the idea that requirements will evolve throughout a project. Agilists understand that because requirements evolve over time that any early investment in detailed documentation will only be wasted.
But we have:
Changes to just one of these query requirements will frequently warrant a data model change for maximum efficiency.
in the Basic Rules of Cassandra Data Modeling article.
How can we manage our project following both rules together? The first one accepts change, but the second one wants us to know every query that our project will need to answer. New requirements cause new queries, which will change our DB, and that will influence quality (throughput).
The first rule does not suggest you accept changes easily, just that you accept that changes to requirements will be a fact of life. I.e., you need to decide how to deal with that, rather than try to ignore it or require sign-off on final requirements up front.
I'd suggest you make it part of your 'definition of done' (what you agree a piece of code must meet to be considered complete within a sprint) to include the requirements for changes to your DB code. This may mean changes to this code get higher estimates, to allow you to complete the work in the sprint. In this way you are open to change, and have a plan to make sure it doesn't disrupt your work.
Consider the ways in which you can reduce the impact of a database change.
One good way to do this will be to have automated regression tests that cover the functionality that relies on the database. It will also be useful to have the database schema built regularly as a part of continuous integration. That then helps to remove the fear of refactoring the data model and encourages you to make the changes as often as necessary.
The work cycle then becomes (a minimal test sketch follows the list):
Developer commits new code and new data model
Continuous integration tears down the test database
Continuous integration creates a new database based on the new data model
Continuous integration adds in some appropriate dummy data
Continuous integration runs a suite of regression tests to ensure nothing has been broken by the changes.
Team continues working with the confidence that nothing is broken
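To make steps 2-5 concrete, here is a minimal sketch of one such regression test using JUnit 5 and the DataStax Java driver (the contact point, datacenter name, keyspace and table are placeholders; in a real setup the schema would come from your versioned CQL scripts rather than inline strings):

```java
import com.datastax.oss.driver.api.core.CqlSession;
import com.datastax.oss.driver.api.core.cql.Row;
import org.junit.jupiter.api.AfterAll;
import org.junit.jupiter.api.BeforeAll;
import org.junit.jupiter.api.Test;

import java.net.InetSocketAddress;

import static org.junit.jupiter.api.Assertions.assertEquals;

class UsersByEmailRegressionTest
{
    static CqlSession session;

    @BeforeAll
    static void rebuildTestSchema()
    {
        session = CqlSession.builder()
                .addContactPoint(new InetSocketAddress("127.0.0.1", 9042))
                .withLocalDatacenter("datacenter1")
                .build();

        // Steps 2 + 3: tear down the test keyspace and rebuild it from the current data model.
        session.execute("DROP KEYSPACE IF EXISTS shop_test");
        session.execute("CREATE KEYSPACE shop_test WITH replication = "
                + "{'class': 'SimpleStrategy', 'replication_factor': 1}");
        session.execute("CREATE TABLE shop_test.users_by_email ("
                + "email text PRIMARY KEY, user_id uuid, name text)");

        // Step 4: add some appropriate dummy data.
        session.execute("INSERT INTO shop_test.users_by_email (email, user_id, name) "
                + "VALUES ('alice@example.com', uuid(), 'Alice')");
    }

    // Step 5: the regression test exercises the query the table was designed for.
    @Test
    void looksUpUserByEmail()
    {
        Row row = session.execute(
                "SELECT name FROM shop_test.users_by_email WHERE email = 'alice@example.com'").one();
        assertEquals("Alice", row.getString("name"));
    }

    @AfterAll
    static void tearDown()
    {
        session.close();
    }
}
```

The test framework itself is not the point: once schema creation and a handful of query-level assertions run on every commit, changing a table to fit a new query stops being something to fear.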
You may feel that writing automated tests and configuring continuous integration is a big commitment of time and resources. But think of the payoff in terms of how easily you can accept change during the project and in the future.
This kind of up-front investment in order to make change easier is a cornerstone of the agile approach.

Adding records to VSAM DATASET [closed]

I have some confusion regarding VSAM as I am new to it. Do correct me where I am wrong and answer the queries.
A cluster contains control areas and a control area contains control intervals. One control interval contains one dataset. Now, for defining a cluster, we mention a data component and an index component. The name of the data component that we give creates a dataset and the name of the index generates a key. My queries are as follows:
1) If I have to add a new record to that dataset, what is the procedure?
2) What is the procedure for creating a new dataset in a control area?
3) How do I access a dataset and a particular record after they are created?
I tried to find a simple code example but was unable to, so kindly explain with a simple example.
One thing that is going to help you is the IBM Redbook VSAM Demystified: http://www.redbooks.ibm.com/abstracts/sg246105.html which, these days, you can even get on your smartphone, amongst several other ways.
However, your current understanding is a bit astray so you'll need to drop all of that understanding first.
There are three main types of VSAM file and you'll probably only come across two of those as a beginner: KSDS; ESDS.
KSDS is a Key Sequenced Data Set (an indexed file) and ESDS is an Entry Sequenced Data Set (a sequential file but not a "flat" file).
When you write a COBOL program, there is little difference between using an ESDS and a flat/PS/QSAM file, and not even that much difference when using a KSDS.
Rather than providing an example, I'll refer you to the chapter in the Enterprise COBOL Programming Guide for your release of COBOL. It is Chapter 10 you want, up to and including the section on handling errors; the publication can be found here: http://www-01.ibm.com/support/docview.wss?uid=swg27036733. You can also use the Language Reference for the details of what you can use with VSAM once you have a better understanding of what it is to COBOL.
As a beginning programmer, you don't have to worry about what the structure of a VSAM dataset is. However, you've had some exposure to the topic, and taken a wrong turn.
VSAM datasets themselves can only exist on disk (what we often refer to as DASD). They can be backed-up to non-DASD, but are only directly usable on DASD.
They consist of Control Areas (CA), which you can regard as just being a lump of DASD; almost exclusively that lump of DASD will be one cylinder (30 tracks on a 3390, which these days is very likely an emulated 3390). You won't need to know much more about CAs. CAs are more of a conceptual thing than an actual physical thing.
Control Intervals (CI) are where any data (including index data) is. CIs live in CAs.
Records, the things you will have in the FILE SECTION under an FD in a COBOL program, will live in CIs.
Your COBOL program needs to know nothing about the structure of a VSAM dataset. COBOL uses VSAM Access Method Services (AMS) to do all VSAM file accesses; as far as your COBOL program is concerned it is an "indexed" file with a little bit on the SELECT statement to say that it is a VSAM file. Or it is a sequential file with a little... you know by now.

How to define a PBI that has no perceived value to the user? [closed]

I need to add an item to our product backlog list that has no (perceived) value to the users.
Context: every week we need to parse and import a TXT file into our system. Now the provider has decided to change the format to XML, so we need to rewrite the parsing engine.
In the end the user won't see any benefit as he'll keep getting his new data, but we still have to do this to keep importing the data.
How do I add an item like this to the product backlog?
What happens if you don't make the change? Is there value to the user in preventing that from happening? If the answer is yes, I'd recommend tying your business value statement to that. Then, you can write a typical user story with business value and treat it like any other PBI.
It has no value to the user, but it has value to your company.
As company X I want to be able to support the new XML format so that I can keep importing data from provider Y.
How does that sound? Not all stories necessarily revolve around the end user.
Note: technical stories and technical improvement stories are not a good practice and they should be avoided. Why? Because you can't prioritize them correctly, as they have no estimable value.
The correct way to do tech stories is to include them in the definition of done. For example: decide that every new story played is only complete once database access is via Dapper and not L2S. This is a viable DoD definition and makes sure you can evolve your system appropriately.
We typically just add it as a "technical improvement" and give it a priority that we think fits. If the user asks you about it, you just explain to them at a high level what the change does and why it's needed.
Don't forget that your application will most likely start failing in the future if you don't make the change. Just tell them that, and let them decide whether they want that or not.

Linkable reporting library for Qt, with editor or easy markup [closed]

Is there a reporting library which can be linked with a Qt application to generate and print invoices (from within my own application, no separate tool)? The invoices need to be printed instantly, so I have the following requirements:
pipe data to be printed into the reporting library
choose from a predefined reporting template (created inside or outside my app doesn't matter)
integrated report generation inside my application
no preview-before-print - just create an order and print the invoice
based on Linux
document header / footer with delivery instructions, address and slogan
tableized line items, with sub items and line price / item price
order summary (total price, tax)
multiple pages should repeat table headers and show partial sums
nice to have: inclusion of dynamically generated image data
What is not needed/wanted:
Generate reports outside the application from SQL, CSV, or XML datasets
The report generator does not need to do the calculations
The environment is a custom-built POS system for food delivery / catering / restaurant. Orders come in by phone. Invoices are printed as two copies, one for the kitchen to prepare the delivery, one for the customer (and driver who delivers).
I am currently working with RichText-based templates, but this is pretty cumbersome and the templates are hard to maintain, so this change is needed. The old application is Qt3 but the new one will be (and has to be) Qt4, so the reporting library should be compatible with that. I don't want to pull in Gtk or Gnome dependencies.
The database runs on MySQL but doesn't (yet) store the ordering data or any invoices. Invoices are just archived to the hard disk. This will probably change, but I don't think it matters as long as I can feed data into the reporting lib manually.
Update: My POS application is going to be opensourced, so the library should be compatible with GPL or similar.
Have a look at KD Reports:
http://www.kdab.com/index.php?option=com_content&view=article&id=54:kd-reports&catid=35:kd-reports&Itemid=66
It's also available under GPL, although that's not advertised, so it might be necessary to contact KDAB to obtain it.
You can try NCReport, but since version 2.0 it has become a commercial product.
There is an example available on the internet.
