What is the difference between system initialization and update in Hybris? - sap-commerce-cloud

I suppose it runs essentialdata.impex.
I am not sure of the difference.
Any input from Hybris experts would be highly appreciated.

Initialization:
Dropping the old system and creating a new, empty one.
Creating the schema and type system.
Reloading persistence.
Clearing the cache.
Initializing media storages.
Setting the licence.
Restarting internals.
Clearing the hMC configuration from the database.
Creating essential data.
Localizing types and creating project data.

Update:
Updating the schema.
Reloading persistence.
Clearing the cache.
Initializing media storages.
Setting the licence.

Initialization drops the existing type definitions from the database prior to rebuilding, so the entire type system is created from scratch: during an initialization, type system definitions are created to match the type system definition in the items.xml files.
During an update, type system definitions are modified to match the new type system definition in the items.xml files.
During the initialization and update processes, the platform looks for ImpEx files in the /resources/impex folder. In particular:
For essential data: the platform scans the /resources/impex folders for files with names that match the pattern essentialdata*.impex and imports the files during essential data creation.
For project data: the platform scans the /resources/impex folders for files with names that match the pattern projectdata*.impex and imports the files during project data creation.
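For example, a minimal essential data file could look like this (the extension name, file name and content are made up; only the essentialdata* prefix matters for the scan):

# myextension/resources/impex/essentialdata-usergroups.impex
INSERT_UPDATE UserGroup;uid[unique=true]
;customergroup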

During an update, if you have changed an attribute in items.xml, the old definition remains in the table structure after the update and the new one is created alongside it (the same applies to columns).
You need to initialize the system only when you want to build the site from scratch or when you are creating a new environment, like QA/Staging/Prod.
You do a system update whenever you make modifications to a *items.xml file, like adding an item type, modifying an existing item type, or changing any restrictions.
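To illustrate the kind of change a system update picks up, here is a hypothetical items.xml snippet (the countryOfOrigin attribute is made up) that adds one attribute to the existing Product type; an update adds the new column, while an initialization rebuilds the whole table:

<!-- in my*-items.xml -->
<itemtype code="Product" autocreate="false" generate="false">
    <attributes>
        <attribute qualifier="countryOfOrigin" type="java.lang.String">
            <persistence type="property"/>
            <modifiers optional="true"/>
        </attribute>
    </attributes>
</itemtype>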

Related

Duplicate files in DerivedData folder using CoreData generator

I'm trying to generate NSManagedObject subclasses from my data model. Generation works, but afterwards I get many errors:
error: filename "Station+CoreDataProperties.swift" used twice:
'/Users/Me/MyApp/Models/CoreData/Station+CoreDataProperties.swift' and
'/Users/Me/Library/Developer/Xcode/DerivedData/MyApp-gwacspwrsnabomertjnqfbuhjvwc/Build/Intermediates/MyApp.build/Debug-iphoneos/MyApp.build/DerivedSources/CoreDataGenerated/Model/Station+CoreDataProperties.swift'
:0: note: filenames are used to distinguish private
declarations with the same name
I tried cleaning the build folder and hard-deleting the DerivedData directory. I'm using the Xcode 8 beta; maybe it's a bug?
I get this in Xcode 8.1
For me, the following steps solved the issue. Please note that the order matters.
1) Create the entity in the Core Data model.
2) Under the Class section, use the settings shown in the following image.
Module: Current Product Module
Codegen: Manual/None
3) Generate your NSManagedObject subclass (a sketch of the generated files follows below).
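For reference, this is roughly what step 3 produces once Codegen is Manual/None (the Station entity name comes from the error above; the name attribute is made up, adjust to your model):

// Station+CoreDataClass.swift (generated by Editor > Create NSManagedObject Subclass…)
import Foundation
import CoreData

@objc(Station)
public class Station: NSManagedObject {
}

// Station+CoreDataProperties.swift (generated alongside the class)
extension Station {

    @nonobjc public class func fetchRequest() -> NSFetchRequest<Station> {
        return NSFetchRequest<Station>(entityName: "Station")
    }

    @NSManaged public var name: String?   // hypothetical attribute
}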
This post greatly helped me solve this problem myself. Personally I look at this as an Xcode bug. Bug or not this is a huge chicken and egg situation.
I ran into this by:
Created a new Project using Core Data
Generated my NSManagedObject subclass+extension (while codegen: ClassDefinition)
I accidentally saved the generated classes in the Wrong folder
I deleted the generated files
Re-generated in folder I wanted
💥 - Xcode "used twice" errors
As others have posted I kept cleaning my build (and clean build folder) but that never fixed the build issue.
I finally figured out that if you originally created your generated NSManagedObject classes with codegen: ClassDefinition, as I did without knowing, then you are locked into the chicken-and-egg issue.
I then deleted the auto generated classes thinking I had to re-generate, so I did. Once re-generated I would get the used twice build error again. I manually went into the ../DerivedSources/CoreDataGenerated/Model/.. and deleted the duplicates. Again, I re-generated thinking I'd only have 1 copy (in my project) but I was wrong. If codegen: ClassDefinition was originally set then Xcode will keep creating the auto-generated classes+extensions and put them in the buried folder ../DerivedSources/CoreDataGenerated/Model/... I repeated this chicken and egg a few times before catching on.
I later realized you do indeed need to mark codegen: Manual/None; however, to get things back in sync you need to delete the auto-generated files in ../DerivedSources/CoreDataGenerated/Model/.. and in your project, if you have any there still.
Be careful setting codegen: Manual/None; for me it was a bit tricky because the setting wouldn't stick. I had to click back and forth between entities multiple times to double/triple check each entity was set to codegen: Manual/None. Then auto-generate the files. At this point your only copy of the generated files should be in your project and not in ../DerivedSources/CoreDataGenerated/Model/...
Last, I think this is a bug because if you specify codegen: Manual/None I don't expect Xcode to auto-generate files at all, yet it does and puts them in your project. More confusingly, if your setting is codegen: ClassDefinition, who would know that Xcode puts the files in a buried directory yet makes them available to your project? My beef with this is that the auto-generated files aren't source controlled, and if I change computers I have to know to auto-generate them on the new station.
Hope this helps someone else!
Cheers!
This is indeed not a bug. As #Morrowless suggests both class definition and properties extension are created. If this is not wanted, select Manual/None under Codegen before generating the code. If the code is already generated, just delete them, and try Editor->Create NSManagedObject Subclass... again from the menu (after setting Manual/None).
Note, in the picture below, the Class Name 'Contact' is specific to my project. You will see your entity name instead.
If you generated CoreData subclasses with codegen: ClassDefinition, you're basically screwed. The only way to fix it is to:
Delete your CoreData subclasses.
Delete your derived data folder.
Clean your project (CMD+K).
Generate new CoreData subclasses, this time select Codegen: Manual/None and Module: Current Product Module
This is not a bug. Codegen generates these files in the DerivedData folder, so you don't need to create them again in your project, hence the compile error.
From Xcode 8.0 Release notes:
Xcode automatically generates classes or class extensions for the entities and properties in a Core Data data model. Automatic code generation is enabled and disabled on an entity by entity basis, and is enabled for all entities in new models that use the Xcode 8 file format. This feature is available for any data model that has been upgraded to the Xcode 8 format. You specify whether Xcode generates Swift or Objective-C code for a data model using the data model’s file inspector.
When automatic code generation is enabled for an entity, Xcode creates either a class or class extension for the entity as specified in the entity's inspector: the specified class name is used and the sources are placed in the project’s Derived Data. For both Swift and Objective-C, these classes are directly usable from the project’s code. For Objective-C, an additional header file is created for all generated entities in your model. The header file name conforms to the naming convention “DataModelName+CoreDataModel.h”.
However, if you selected Category/Extension under the codegen pulldown menu in the data model inspector (because you want to add logic to your model): codegen will wrongly generate both the class definition and properties extension.
The solution is to simply delete the properties extension (ClassName+CoreDataProperties.swift). Your project should now compile.
After following the guidance from oyalhi and Vladimir Shutyuk, (deleting the NSManagedObject files, changing the entity codegen to Manual/None), I had to restart Xcode to allow it to index again before I could re-generate the NSManagedObject files and get a successful compile.
For the sake of completeness..:
I just ran into the same error, but none of the proposed solutions worked. What puzzled me was that even switching from automated code generation to manual for the one (as I thought) problematic entity didn't do anything.
Finally, I figured out that I had several entities with the same name, but they all shared the same classname. The reason for this was that I copy&pasted one entity several times to save me some work, because they also have a few attributes in common.
Turns out Xcode renames the duplicates by adding 1, 2, ... to the entity name, but leaves the class name as before. And since entity name and class name are now "unrelated", renaming the entity won't change the class name either.
Hope it helps someone - I have also filed a bug report for this.

What really happens during a Hybris update?

Does it back up the whole database and then reload the data after the update, or does it just add/remove the affected columns and the data in those columns?
By convention, when you update the system, Hybris looks for ImpEx files matching
projectdata*.impex
When you run an initialization, Hybris also looks for files matching
essentialdata*.impex
So, to answer your question: Hybris simply reloads data from the projectdata files (the pattern can be overridden in your local.properties file).
Run a dry run first and you can see exactly what it will do.
During an update, type system definitions are modified to match the new type system definition in the items.xml files. You can understand exactly what it does by reading this page.
Just to elaborate on Ram Tripathi's answer: during an initialization, the entire type system is discarded and re-created from the extensions' items.xml files.
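For reference, both operations can also be run from the command line in the platform folder (standard targets; exact options depend on your version):

cd hybris/bin/platform
. ./setantenv.sh
ant updatesystem   # applies items.xml changes and imports projectdata*.impex
ant initialize     # drops and rebuilds the type system - destructive, wipes existing data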

Stale objects in Coded UI

To remove the stale object issue (i.e. when we run the test script with multiple inputs, it fails on the second iteration because the object is not cleared at the end of each run), I have added the AlwaysSearch configuration in the designer file. After this my script runs successfully with multiple inputs, but if I need to add new objects to the same designer file, the designer file will be regenerated and the AlwaysSearch configuration changes will be lost.
Is there any way to retain the AlwaysSearch configuration in the designer file even when the designer file is regenerated?
When you generate a UI map there are actually two files that come with it. Firstly, as you've discovered, there's a generated file with all the ugly code that's generated by the coded UI test builder. Of course, making any changes to this outside of the code will regenerate the file. The second file is a partial class that accompanies the generated designer class. This file does NOT get regenerated but as a partial contains all the same object references and properties as the designer file (it just looks empty). You can reference the control you want to add this property to here and it will not be regenerated.
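For example, a sketch of such a hand-written partial class (it assumes a map class named UIMap and a hypothetical generated control property called UISubmitButtonWindow):

using Microsoft.VisualStudio.TestTools.UITesting;

public partial class UIMap
{
    // Lives in UIMap.cs (not UIMap.Designer.cs), so it survives regeneration.
    public void ConfigureSearch()
    {
        // UISubmitButtonWindow is a made-up property name; use one from your own map.
        this.UISubmitButtonWindow.SearchConfigurations.Add(SearchConfiguration.AlwaysSearch);
    }
}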
The other alternative to this, albeit probably not a good idea, is to put
Playback.PlaybackSettings.AlwaysSearchControls = true;
inside of your test method/class initialize/test initialize. This will force the test(s) to always search for each and every control. As you might imagine, this can have a significant performance impact though when you're dealing with large UI maps or particularly long test methods.
You might also set the control object's search configuration to always search. Keep in mind that this will do searching for this control and all of its children, so I would not advise putting it on a parent with several children, such as the document.
aControl.SearchConfigurations.Add(SearchConfiguration.AlwaysSearch);

How to load files in a specific order

I would like to know how I can load some files in a specific order. For instance, I would like to load my files according to their timestamp, in order to make sure that subsequent data updates are replayed in the proper order.
Let's say I have 2 types of files: deal info files and risk files.
I would like to load T1_Info.csv, then T1_Risk.csv, T2_Info.csv, T2_Risk.csv...
I have tried to implement a comparator, as described on Confluence, but it seems that the loadInstructions file takes priority: it orders the Info files and the Risk files independently (loading T1_Info.csv, T2_Info.csv and then T1_Risk.csv, T2_Risk.csv...).
Do I have to implement a custom file loader, or is it possible using an AP configuration?
The loading of the files based on load instructions is done in
com.quartetfs.tech.store.csv.impl.CSVDataModelFactory.load(List<FileLoadDescriptor>). The FileLoadDescriptor list you receive is created directly from the load instructions files.
What you can do is create a simple instructions file with 2 entries, one for deal info and one for risk. Your custom implementation of CSVDataModelFactory will then be called with a list of two items. In your custom implementation you scan the directory where the files are, sort them in the order you want them to be parsed, and call super.load() with the list of FileLoadDescriptor objects you created from the directory scan.
If you also want to load files that are placed in this folder in the future, you have to add to your load instructions a line that matches all files; that will make the super.load() implementation create a directory watcher for them (you should then maybe override createDirectoryWatcher() so it does not watch the files already present in the folder when load is called).
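A rough sketch of the simplest variant of this idea, which just re-orders the descriptors it receives instead of re-scanning the directory; the load() signature is taken from the answer above, while the void return type and the getFileName() accessor are assumptions that may differ in your ActivePivot version:

import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

import com.quartetfs.tech.store.csv.impl.CSVDataModelFactory;
// plus the import for FileLoadDescriptor from your ActivePivot distribution

public class OrderedCSVDataModelFactory extends CSVDataModelFactory {

    @Override
    public void load(List<FileLoadDescriptor> descriptors) {
        // Sort so that T1_Info.csv and T1_Risk.csv come before T2_Info.csv and T2_Risk.csv.
        // Note: plain string comparison only works for single-digit timestamps (T1..T9).
        List<FileLoadDescriptor> ordered = new ArrayList<>(descriptors);
        ordered.sort(Comparator.comparing(d -> timestampOf(d.getFileName()))); // getFileName() is assumed
        super.load(ordered);
    }

    // Extracts "T1" from "T1_Info.csv" so Info and Risk files interleave per timestamp.
    private static String timestampOf(String fileName) {
        return fileName.substring(0, fileName.indexOf('_'));
    }
}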

How can I store Elasticsearch settings+mappings in one file (like schema.xml for Solr)?

How can I store elasticsearch settings+mappings in one file (like schema.xml for Solr)? Currently, when I want to make a change to my mapping, I have to delete my index settings and start again. Am I missing something?
I don't have a large data set as of now. But in preparation for a large amount of data that will be indexed, I'd like to be able to modify the settings and somehow reindex without starting completely fresh each time. Is this possible and if so, how?
These are really multiple questions disguised as one. Nevertheless:
How can I store elasticsearch settings+mappings in one file (like schema.xml for Solr)?
First, note that you don't have to specify a mapping for lots of types, such as dates, integers, or even strings (when the default analyzer is OK for you).
You can store settings and mappings in various ways in Elasticsearch < 1.7:
In the main elasticsearch.yml file
In an index template file
In a separate file with mappings
Currently, when I want to make a change to my mapping, I have to delete my index settings and start again. Am I missing something?
You have to reindex data when you change the mapping for an existing field: once your documents are indexed, the engine needs to reindex them to use the new mapping.
Note that you can update some index settings, such as number_of_replicas, "on the fly".
I'd like to be able to modify the settings and some how reindex without starting completely fresh each time. Is this possible and if so, how?
As said: you must reindex your documents if you want to use a completely new mapping for them.
If you are adding fields rather than changing existing mappings, you can update the mapping, and new documents will pick it up when being indexed.
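For example (pre-2.0 request syntax; the index, type and field names are made up), adding a new field to an existing type does not require a reindex:

curl -XPUT 'http://localhost:9200/my_index/_mapping/my_type' -d '
{
  "properties": {
    "new_field": { "type": "string", "index": "not_analyzed" }
  }
}'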
Since Elasticsearch 2.0:
It is no longer possible to specify mappings in files in the config directory.
Find the documentation link here.
It's also not possible anymore to store index templates within the config location (path.conf) under the templates directory.
The /etc/default/elasticsearch file (on Ubuntu) now stores only environment variables, including the heap size and file descriptor limits.
You need to create your templates with curl.
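For example, a minimal template created with curl (2.x syntax; the template name, index pattern, type and field are made up):

curl -XPUT 'http://localhost:9200/_template/my_template' -d '
{
  "template": "logs-*",
  "settings": { "number_of_shards": 1 },
  "mappings": {
    "event": {
      "properties": {
        "timestamp": { "type": "date" }
      }
    }
  }
}'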
If you are really desperate, you could create your indexes, back up your data directory, and then use that as your "template" for new Elasticsearch clusters.
