RACF data set access behavior - mainframe

If a dataset has a discrete profile defined and also falls under a generic dataset profile, which access rule is applied?
For example, there is a discrete data set profile A.B.C with ALTER access defined for user A.
There is also a generic dataset profile A.B.** with READ access defined for user B.
Will user B be able to read dataset A.B.C?

If a data set is protected by both a generic profile and a discrete profile, the discrete profile sets the level of protection for the data set. If a data set is protected by multiple generic profiles, the most specific generic profile sets the level of protection for the data set.
From Choosing between discrete and generic profiles. This publication is for 2.1 but I guess it will apply to 2.2 and 2.3 also.
In other words, you can use a generic profile for A.B.** to restrict access and at the same time create a discrete profile to grant access to A.B.C.

Since there is a discrete data set profile defined for A.B.C, that profile will be used for access checks. If access is needed for user B, then B would also need to be permitted in some manner on the discrete data set profile. This is from experimentation only, but the IBM documentation for the RACHECK macro hints at this behavior in its discussion of the GENERIC parameter.
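For reference, the setup in the question could be built with RACF TSO commands along these lines (USERA and USERB are placeholder user IDs; this is a sketch, and exact syntax may vary by release):

```
ADDSD  'A.B.C'  UACC(NONE)
PERMIT 'A.B.C'  ID(USERA) ACCESS(ALTER)
ADDSD  'A.B.**' UACC(NONE) GENERIC
PERMIT 'A.B.**' ID(USERB) ACCESS(READ) GENERIC
```

With both profiles in place, RACF checks A.B.C against the discrete profile, so USERB would be denied until also permitted there, e.g. PERMIT 'A.B.C' ID(USERB) ACCESS(READ).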

In featuretools, How to control the application of where_primitives?

In featuretools there are various mechanisms to control how primitives are applied to selected entities and columns.
They are very neatly documented here
The ignore_entities and ignore_variables parameters of DFS control entities and variables (columns) that should be ignored for all primitives. This is useful for ignoring columns or entities that don’t relate to the problem or otherwise shouldn’t be included in the DFS run.
Options for individual primitives or groups of primitives are set by the primitive_options parameter of DFS. This parameter maps any desired options to specific primitives.
Using primitive_options I can control the application of primitives to an individual entity or, more granularly, to columns within each entity. I can also control the columns to group by when applying groupby_trans_primitives.
I cannot find (I have searched enough to think it does not exist) a way to control the application of where primitives.
For example: say I have a column for spend. I create a seed feature to bucket the spend column. I might want to create the feature min(spend) on the whole, but within the bucket [10000, 15000] I might not want to create min(spend WHERE spend_bucket == 10000_15000). How do I get this kind of control, where I restrict the application of primitives only when a where clause is in effect?
Currently the ability to control where primitives via primitive_options does not exist.
Potentially using drop_contains with a substring could give the desired control.
This issue will track adding support for this to Featuretools:
https://github.com/alteryx/featuretools/issues/1514
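Until that lands, the drop_contains workaround relies on substring-matching the generated feature names. The idea can be illustrated in plain Python (the feature names below are made up, following featuretools' "PRIMITIVE(column WHERE ...)" naming style):

```python
# Hypothetical feature names in featuretools' naming style.
feature_names = [
    "MIN(transactions.spend)",
    "MIN(transactions.spend WHERE spend_bucket = 10000_15000)",
    "MAX(transactions.spend WHERE spend_bucket = 10000_15000)",
]

# drop_contains drops any feature whose name contains one of the substrings.
# Matching on "MIN(transactions.spend WHERE" keeps the overall MIN(spend)
# while dropping only its where-variants.
drop_contains = ["MIN(transactions.spend WHERE"]

kept = [
    name for name in feature_names
    if not any(sub in name for sub in drop_contains)
]
```

Passing the same substrings to the drop_contains parameter of DFS should have the equivalent effect on the generated features.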

How can I have multiple MSRP based on location in broadleaf?

I need to have multiple MSRPs based on customer location. I am planning to add a "location" field to "product-option", which will be available for each product, so a SKU will be available for each product based on location. I don't want the user to choose this option; it should be selected by default.
Is this the right way to do it, or can you suggest a better approach?
I think it depends on how ephemeral your pricing considerations are. If the pricing will be changing on a regular basis based on a rate table (or something like that), you can utilize the dynamic pricing service, which will allow you to programmatically alter the pricing based on whatever business rules you deem important. Start with registering your own DynamicSkuPricingFilter (i.e. extend AbstractDynamicSkuPricingFilter - see DefaultDynamicSkuPricingFilter) and provide whatever attributes are important for your pricing determination. Then, register your own implementation of DynamicSkuPricingService (see DefaultDynamicSkuPricingServiceImpl) to return the correct pricing. I would also consider extending DiscreteOrderItemImpl with location information so you have a record in the cart of the associated location.
Otherwise, if you want a more permanent data structure and explicit admin user intent around maintaining this pricing, then I would suggest a new entity structure that relates a simple location and price to your default SKU. You should be able to maintain this naturally in the admin using @AdminPresentation annotations. For example, consider a new admin-annotated collection of "LocationPrice" entities off of a custom extension of SkuImpl. Then you would still use the dynamic pricing suggestion from above, but you would base your determination on this maintained data. I think this is more natural in the framework than trying to use options and multiple SKUs.
Finally, I assume you're using the community version of Broadleaf. We generally fulfill this type of requirement using our commercial PriceList module, which is more feature rich.
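Framework details aside, the pricing determination in either approach boils down to a rate-table lookup keyed on location, falling back to the default SKU price. A minimal, framework-agnostic sketch (all names and prices are hypothetical, not Broadleaf API):

```python
# Hypothetical rate table: (sku, location) -> price override.
location_prices = {
    ("SKU-100", "US"): 49.99,
    ("SKU-100", "IN"): 39.99,
}

# Default price per SKU when no location override exists.
default_prices = {"SKU-100": 59.99}

def resolve_price(sku, location):
    """Return the location-specific price if one exists, else the default."""
    return location_prices.get((sku, location), default_prices[sku])

print(resolve_price("SKU-100", "IN"))  # location override applies
print(resolve_price("SKU-100", "FR"))  # falls back to the default price
```

In the dynamic pricing approach this lookup would live inside your DynamicSkuPricingService implementation; in the entity approach it would query the maintained "LocationPrice" data instead of a hard-coded table.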

Blazegraph Tinkerpop 3 Indexing

I am trying to learn about Blazegraph. At the moment I am puzzled how I can optimise simple lookups.
Suppose all my vertices have a property id, which is unique. This property is set by the user. Is there any way to speed up finding a vertex of a particular id while still sticking to the Tinkerpop APIs?
Is the search API defined here the only way?
My previous experience is in TitanDB and in Titan's case it's possible to define an index which the Tinkerpop APIs integrate with flawlessly. Is there any way to achieve the same results in Blazegraph without using the Search API?
Whether a mid-traversal V() uses an index or not depends on a) whether a suitable index exists and b) whether the particular graph system provider implemented this functionality.
Gremlin (TinkerPop) does not specify how to create indexes, although the documentation presents things like the following:
graph.createIndex("username",Vertex.class)
But that may be reserved for the TinkerGraph implementation; as a matter of fact, the documentation says:
Each graph system will have different mechanism by which indices and schemas are defined. TinkerPop3 does not require any conformance in this area. In TinkerGraph, the only definitions are around indices. With other graph systems, property value types, indices, edge labels, etc. may be required to be defined a priori to adding data to the graph.
There is an example for Neo4j:
TinkerPop3 does not provide method interfaces for defining schemas/indices for the underlying graph system. Thus, in order to create indices, it is important to call the Neo4j API directly.
But the code is very specific to that plugin:
graph.cypher("CREATE INDEX ON :person(name)")
Note that for Blazegraph the Search API uses a built-in full-text index.
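For completeness, the provider-neutral way to express such a lookup in TinkerPop is a has() step; whether it hits an index is up to the provider. A Gremlin sketch (here "id" is an ordinary property key rather than the element id T.id, and "user-42" is a placeholder value):

```groovy
g.V().has("id", "user-42")
```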

How to retrieve multilingual domain model?

I have a lot of entities with 3 language columns: DescriptionNL, DescriptionFR and DescriptionDE (Description, Info, Article, ... all in 3 languages).
My idea was to create a fourth property, Description, which returns the right value according to Thread.CurrentThread.CurrentCulture.TwoLetterISOLanguageName.
But a drawback is that when you have a GetAll() method in your repository for a dropdown list or something else, you return all three values to the application layer, causing extra network traffic.
Adding a parameter language to the domain services to retrieve data is also "not done" according to DDD experts. The reason is that the language is part of the UI, not the Domain. So what is the best method to retrieve your models with the right description?
You are correct in stating that language has no bearing on a domain model. If you need to manipulate objects or data, you will need to use some canonical form of that data. This only applies to situations where the value has some meaning in your domain. Anything that is there only for classification may not interest your model, but it may still be useful to use a canonical value.
The added benefit of a canonical value is that you know what the value represents, even across systems, as you can do a mapping.
A canonical approach that was used on one of my previous projects had data sets with descriptions in various languages, yet the keys were the same for each value. For instance, Mr is key 1, whereas Mrs is key 2. Now in French M. would be key 1 and Mme would be key 2. These values are your organisational values. Now let's assume you have System A and System B. In System A Mr is value 67 and in System B Mr is value 22. Now you can map to these values via your canonical values.
You wouldn't necessarily store these as entities in a repository but they should be in some read model that can be easily queried. The trip to the database should not be too big of a deal as you could cache the results along with a version number or expiry date.
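The canonical-key idea from the salutation example can be sketched in a few lines (Python, with illustrative names and key values only):

```python
# Canonical (organisational) keys with descriptions per language.
canonical = {
    1: {"en": "Mr", "fr": "M."},
    2: {"en": "Mrs", "fr": "Mme"},
}

# Per-system mappings back to the canonical key.
system_a_to_canonical = {67: 1}  # System A stores Mr as 67
system_b_to_canonical = {22: 1}  # System B stores Mr as 22

def describe(system_map, system_value, language):
    """Translate a system-specific value to its description in a language."""
    key = system_map[system_value]
    return canonical[key][language]

print(describe(system_a_to_canonical, 67, "fr"))  # M.
print(describe(system_b_to_canonical, 22, "en"))  # Mr
```

The domain only ever sees the canonical key; the language-specific description is resolved at the read-model or UI edge.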

MDX Expression to restrict access to Measure Group

Say I have a bunch of measures which I want certain people to be unable to see (or, vice versa, only allow them to see). What kind of MDX expression would I add to the SSAS role?
I can get the user identity through the USERNAME function.
Is it even possible?
I can easily construct Dimension based security expressions, but I can't see how the Measures dimension access can be similarly curtailed.
This is the Cell Data tab in the role file.
When you enable read permissions there, the user is by default disallowed access to all cell data inside the cube.
In this place you must define the set of measures that the user will be able to see by providing the proper MDX expression.
The SCOPE declaration documentation will provide you with information for writing the MDX expressions that define the data sets. (SCOPE itself is used for a totally different purpose and should not be used for permissions; it is only a reference for the needed MDX expressions.)
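For example, assuming a measure [Measures].[Salary] (a placeholder name) should be hidden from the role, a read-permission expression on the Cell Data tab could look like this sketch:

```mdx
-- Deny cells belonging to the Salary measure, allow everything else
NOT ( Measures.CurrentMember IS [Measures].[Salary] )
```

The expression is evaluated per cell; cells for which it returns false appear as secured (e.g. #N/A) to members of the role.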
