How can I have multiple MSRPs based on location in Broadleaf? - broadleaf-commerce

I need to have multiple MSRPs based on customer location. I am planning to add a "location" field to "product-option", which will be available for each product, so a SKU will be available for each product based on location. I don't want the user to choose this option; it should be selected by default.
Is this the right way to do it, or can you suggest a better way to achieve it?

I think it depends on how ephemeral your pricing considerations are. If the pricing will be changing on a regular basis based on a rate table (or something like that), you can utilize the dynamic pricing service, which will allow you to programmatically alter the pricing based on whatever business rules you deem important. Start with registering your own DynamicSkuPricingFilter (i.e. extend AbstractDynamicSkuPricingFilter - see DefaultDynamicSkuPricingFilter) and provide whatever attributes are important for your pricing determination. Then, register your own implementation of DynamicSkuPricingService (see DefaultDynamicSkuPricingServiceImpl) to return the correct pricing. I would also consider extending DiscreteOrderItemImpl with location information so you have a record in the cart of the associated location.
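To make the wiring concrete, here is a minimal sketch of the filter/service pair. This assumes Broadleaf 5.x package names; the "LOCATION" key, the request-parameter lookup and the rate-table method are illustrative placeholders, not Broadleaf APIs.

import java.util.HashMap;
import javax.servlet.ServletRequest;
import org.broadleafcommerce.common.money.Money;
import org.broadleafcommerce.core.catalog.domain.Sku;
import org.broadleafcommerce.core.catalog.service.dynamic.DefaultDynamicSkuPricingServiceImpl;
import org.broadleafcommerce.core.catalog.service.dynamic.DynamicSkuPrices;
import org.broadleafcommerce.core.web.catalog.DefaultDynamicSkuPricingFilter;

// Filter: contributes the customer's location to the pricing considerations map.
public class LocationDynamicSkuPricingFilter extends DefaultDynamicSkuPricingFilter {

    @Override
    @SuppressWarnings({"rawtypes", "unchecked"})
    public HashMap getPricingConsiderations(ServletRequest request) {
        HashMap considerations = super.getPricingConsiderations(request);
        // Placeholder: resolve the location however suits you (GeoIP, profile, session).
        considerations.put("LOCATION", request.getParameter("location"));
        return considerations;
    }
}

// Service: overrides the retail price based on the location consideration.
class LocationDynamicSkuPricingServiceImpl extends DefaultDynamicSkuPricingServiceImpl {

    @Override
    @SuppressWarnings("rawtypes")
    public DynamicSkuPrices getSkuPrices(Sku sku, HashMap skuPricingConsiderations) {
        DynamicSkuPrices prices = super.getSkuPrices(sku, skuPricingConsiderations);
        String location = (String) skuPricingConsiderations.get("LOCATION");
        Money locationRetail = lookupRetailPrice(sku, location);
        if (locationRetail != null) {
            prices.setRetailPrice(locationRetail);
        }
        return prices;
    }

    // Placeholder for your rate table / business rules.
    private Money lookupRetailPrice(Sku sku, String location) {
        return null; // fall back to default pricing when no override exists
    }
}

You would then register these in place of the blDynamicSkuPricingFilter and blDynamicSkuPricingService beans in your application context.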
Otherwise, if you want a more permanent data structure and explicit admin user intent around maintaining this pricing, then I would suggest a new entity structure that relates a simple location and price to your default sku. You should be able to maintain this naturally in the admin using @AdminPresentation annotations. For example, consider a new admin-annotated collection of "LocationPrice" entities off of a custom extension of SkuImpl. Then, you would still use the dynamic pricing suggestion from above, but you would base your determination on this maintained data. I think this is more natural in the framework than trying to use options and multiple skus.
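A rough sketch of that entity structure might look like the following. The table, column and class names are invented for illustration, and extending SkuImpl additionally requires the usual entity-override configuration in your implementation project.

import java.math.BigDecimal;
import java.util.ArrayList;
import java.util.List;
import javax.persistence.CascadeType;
import javax.persistence.Column;
import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;
import javax.persistence.JoinColumn;
import javax.persistence.ManyToOne;
import javax.persistence.OneToMany;
import javax.persistence.Table;
import org.broadleafcommerce.common.presentation.AdminPresentation;
import org.broadleafcommerce.common.presentation.AdminPresentationCollection;
import org.broadleafcommerce.core.catalog.domain.SkuImpl;

@Entity
@Table(name = "MY_LOCATION_PRICE")
public class LocationPriceImpl {

    @Id
    @GeneratedValue
    @Column(name = "LOCATION_PRICE_ID")
    protected Long id;

    @Column(name = "LOCATION")
    @AdminPresentation(friendlyName = "Location", prominent = true)
    protected String location;

    @Column(name = "PRICE", precision = 19, scale = 5)
    @AdminPresentation(friendlyName = "Price", prominent = true)
    protected BigDecimal price;

    @ManyToOne(targetEntity = MySkuImpl.class)
    @JoinColumn(name = "SKU_ID")
    protected MySkuImpl sku;

    // getters and setters omitted for brevity
}

// In its own file: a Sku extension exposing the collection to the admin.
@Entity
class MySkuImpl extends SkuImpl {

    @OneToMany(mappedBy = "sku", targetEntity = LocationPriceImpl.class, cascade = CascadeType.ALL)
    @AdminPresentationCollection(friendlyName = "Location Prices")
    protected List<LocationPriceImpl> locationPrices = new ArrayList<LocationPriceImpl>();
}

Your DynamicSkuPricingService extension would then look up the LocationPriceImpl matching the current location instead of consulting an external rate table.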
Finally, I assume you're using the community version of Broadleaf. We generally fulfill this type of requirement using our commercial PriceList module, which is more feature-rich.

Related

How to use Azure Search Service with heterogeneous data sources

I have worked with the Azure Search service previously, where I created an indexer directly on a SQL DB in the Azure Portal.
Now I have a use-case where I want to ingest from multiple data sources, each having a different data schema. Assume these data sources are 3 search APIs belonging to teams X, Y and Z. All of them take a search term and give back results in their own schema. I want my Azure Search Service to be a proxy for these, so that I have one search API a user can call to get results from multiple sources, ordered correctly.
How should I go about doing it? I assume I might have to create a common schema, and whenever a user searches for something, I would call these 3 APIs, get the results, map them to the common schema, and index the data into an Azure Search index. Finally, I would call this Azure Search API to give the results back to the caller.
I would appreciate any help! If I can get hold of better documentation for doing this work, that would be great as well.
Your assumption is correct. You can work with 3 different indexes and fire queries against them, or you can try to combine all of them into the same index. The benefit of the second approach is that ordering/paging is easier to implement, as all the information is stored in the same index.
It really depends on what you mean by ordered correctly. Should team X be able to see results from teams Y and Z? The only way you can get ranked results like this is to maintain a single index with a common schema containing data from all teams.
One potential pitfall with this approach is conflicts in the schema. For example, one team may require a field to be of a specific datatype or to use a specific analyzer, while another team has different requirements. We do this in our indexes, but with some carefully selected common fields, and then dedicated fields prefixed according to our own naming convention to avoid conflicts.
One thing to consider is the need to reset the index. If you need to change or remove fields, you will have to delete the index and create it again with a new schema (new fields can generally be added without a rebuild). If you have a common index and team X needs a breaking schema change, you would need to reset (delete and create) the common index, which affects all teams.
So, creating separate indexes per team has its benefits. Each team can have their own schema without risk of conflicts and they can reset their index without affecting the other teams.
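To illustrate the single-index approach, here is a minimal sketch using the azure-search-documents Java SDK. The CommonDoc class, the common-index name, the field names and the prefix convention are all invented for illustration, and the index itself would be created beforehand with this schema.

import java.util.List;
import com.azure.core.credential.AzureKeyCredential;
import com.azure.search.documents.SearchClient;
import com.azure.search.documents.SearchClientBuilder;

// Common-schema document: shared fields plus team-prefixed fields to avoid conflicts.
public class CommonDoc {
    public String id;          // the index's key field
    public String title;       // common, searchable field
    public String source;      // "X", "Y" or "Z" - lets you filter or boost per team
    public String x_category;  // team-specific fields, prefixed by convention
    public Integer y_rank;

    public static void main(String[] args) {
        SearchClient client = new SearchClientBuilder()
                .endpoint("https://<your-service>.search.windows.net") // placeholder
                .credential(new AzureKeyCredential(System.getenv("SEARCH_API_KEY")))
                .indexName("common-index")
                .buildClient();

        // Results fetched from the X/Y/Z APIs are mapped to CommonDoc instances
        // and pushed into the shared index.
        CommonDoc doc = new CommonDoc();
        doc.id = "x-123";
        doc.title = "Some result from team X";
        doc.source = "X";
        doc.x_category = "widgets";
        client.uploadDocuments(List.of(doc));

        // A single query against the shared index then returns globally ranked results.
        client.search("some term").forEach(result ->
                System.out.println(result.getDocument(CommonDoc.class).title));
    }
}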

SharePoint online Cross site lookup values

I'm looking to use cross-site lookup values from one site in another site in SharePoint Online. Is it possible to achieve this? If yes, could you please guide me through the steps to perform this task?
It is not possible to use list item values from one site as options in a lookup column on a list in a different site without customization. I think what you may be looking for is term management and managed metadata columns. Term store management is something you create at the tenant level. In a very simple explanation, you may think of it as a place to create global and hierarchical dictionaries made of terms and groups. Then, in a SharePoint list or library, you may add a managed metadata column that allows users to pick values from a specified term store. You may create this kind of column on any list and any site, as terms are available at the tenant level.
Please look more into the concept here -> https://learn.microsoft.com/en-us/sharepoint/managed-metadata
And here is an example of how to create a managed metadata column -> https://support.microsoft.com/en-us/office/create-a-managed-metadata-column-8fad9e35-a618-4400-b3c7-46f02785d27f
I hope this will be of some help.
Update
Within the term store you may create a nested structure grouping terms; when a user selects a value, they pick from the terms in this structure, so it can act like a kind of cascading choice, though presented differently. I am not sure if it will meet your needs.

Filtering on an Azure Service Bus topic

I have a main application that allows a user to edit all of their data (about 20 fields). When this data is updated, I add a message to a Service Bus topic, which other areas of the system subscribe to.
One of these subscriptions only cares if a single field (phone number) is updated. I'm wondering what the best way to handle this is?
Looking at the GitHub example here, it states:
The cost of choosing complex filter rules is lower overall message throughput at the subscription, topic, and namespace level, since evaluating rules costs compute time. Whenever possible, applications should choose correlation filters over SQL-like filters since they are much more efficient in processing and therefore have less impact on throughput.
So from what I can gather, I could record which properties have been updated using the Properties collection on the BrokeredMessage class and filter based on that, but that is not recommended according to the statement above.
I could let the message go through and action it (this would just update a row in a table to the same value) but this seems redundant.
Do I have any other options?
If you have predefined values you'd be filtering on for that subscription, use a CorrelationFilter. If it requires a conditional match (e.g. a phone number that starts with area code 321), then a SqlFilter is absolutely fine. It really depends how much filtering you're going to perform. My suggestion would be to benchmark and measure the performance, tweaking your filters to give you the optimal result.
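As a concrete sketch of the correlation-filter approach, here is how a rule could be attached to the subscription using the current azure-messaging-servicebus Java SDK (rather than the older BrokeredMessage API); the topic, subscription, rule and property names are placeholders:

import com.azure.messaging.servicebus.administration.ServiceBusAdministrationClient;
import com.azure.messaging.servicebus.administration.ServiceBusAdministrationClientBuilder;
import com.azure.messaging.servicebus.administration.models.CorrelationRuleFilter;
import com.azure.messaging.servicebus.administration.models.CreateRuleOptions;

public class PhoneNumberRule {
    public static void main(String[] args) {
        ServiceBusAdministrationClient admin = new ServiceBusAdministrationClientBuilder()
                .connectionString(System.getenv("SERVICEBUS_CONNECTION")) // placeholder
                .buildClient();

        // Match only messages whose application property "updatedField" equals
        // "PhoneNumber"; the publisher sets this property when it sends the event.
        CorrelationRuleFilter filter = new CorrelationRuleFilter();
        filter.getProperties().put("updatedField", "PhoneNumber");

        admin.createRule(
                "user-updates",              // topic
                "phone-number-subscription", // subscription
                "phone-number-only",         // rule name
                new CreateRuleOptions(filter));
    }
}

Note that a newly created subscription carries a default catch-all rule ($Default), which you would delete (deleteRule) so that only the phone-number updates get through.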

How do I add a set of strings to an Entity?

This is a simple requirement: I want to add a set of strings to Accounts in Dynamics 2011. The strings are external IDs for other systems. All the strings should be unique across all entities.
The only way I can see to do this is to define the strings as entities (say 'ExternalCode') and set up a 1:N relationship between Account and ExternalCode, but this seems incredibly overweight. Also, defining it as an entity insists that I give the 'ExternalCode' a name, which it obviously doesn't have.
What's the best way to implement this?
Thank you
Ryan
It may seem overweight, but think about entities as if they were tables. Would you create a second table inside MS SQL? If so, then you should create another entity. CRM is very well optimized, so I wouldn't worry about this additional overhead.
Alternatively, you could always carry the GUID in the other system.
How are these unique references entering your CRM system? Are you importing the data from each of the external systems? If so, I assume the references are unique in the external system, and once imported you want to make sure that none of these references are duplicated?
Additionally, how many strings are we talking about here? If it is a small number, then it would make sense to just define attributes to manage them and check for duplicates in one of the following ways:
1) Some JavaScript could be used to make an OData query to confirm the 'uniqueness' of your external reference number before the record is committed. (But this is not sufficient if records will also be created programmatically in the system.)
2) A plug-in which fires on pre-create to query the system for other records which match the same unique reference numbers, and handles the event of a match accordingly.
However, if there are many of them, then it may make more sense to define a separate entity as you say. As above, you could associate a new 'reference record' with the entity via a plug-in, but again, check if the record already exists and then either handle an exception or merely associate with an existing record if that is appropriate.
I think the key is what you want to do if you do find a duplicate, and how these records are going to be created in the system (e.g. via the UI or programmatically, or potentially both).
Happy to provide some more assistance if you have some more details.

Capitalization of Custom Entities, Fields in CRM 2011

In migrating to CRM 2011, we're discovering different developers used different capitalization of custom entities and custom fields. This creates a headache for custom programming using the early bound methods. http://msdn.microsoft.com/en-us/library/gg327844.aspx. Is there a way to normalize the entity/field names before (or after) the migration?
As far as I'm aware, the only way to achieve the correct capitalisation you desire is to recreate the entities with the appropriate names.
The migration itself is mostly an upgrade of the CRM Server installation and a modification of the database schema to reflect the upgrade, while still maintaining the current data and customisation data; it will not normalise names for you.
That's as far as the "supported" spiel goes.
As for an actual workaround: if you're looking to upgrade anyway, I'd be tempted to restore your current CRM 4 system to a test domain, then look at how feasible it is to change the schema names in the actual "untouchable" CRM database. I believe there is a MetadataSchema.Entity table where this is centrally stored, so I'd test this to see how usable it is and what impact it has on, say, the web service.
So you face a similar choice to the one I have faced on multiple occasions while working with Dynamics CRM: go with the supported way, or with a bit of "Yee-Ha" development. Sorry, it's probably not what you want to hear!
Edit:
In regards to what to change, I can't say for sure, as I haven't got a CRM 4.0 system to hand, only a 2011 at this moment in time. However, as an example, there will be a MetadataSchema.Entity table in the [OrganisationName_MSCRM] database, in which certain columns jump out: Name, PhysicalName and LogicalName.
LogicalName is the one that CRM uses; it defaults to lowercase no matter how you enter it.
I believe PhysicalName and Name are the ones you would be looking to change into lower case.
The actual "Name" of the entity (e.g. the logical name "account") is related to the user-friendly display name through a table called MetadataSchema.LocalizedLabel, via the foreign key "ObjectId", which in this case would be the "EntityId" field.
This is where I would be looking to make the changes, as it shouldn't have an impact on the rest of the data, since the "logicalname" field is the one CRM probably uses.
As far as your generation of strongly typed classes goes:
if you use late-bound code such as
relatedEntity.LogicalName = "new_related_account";
relatedEntity["relatedaccountid"] = entity["accountid"];
then all the properties and logical names need to be lower case, as this will use the "logicalname" property previously identified in the MetadataSchema table.
However, using SVCUtil, I can only assume it looks at the "Name" and "PhysicalName" attributes to give a slightly more user-friendly coding experience when it generates the file for use in custom applications.
Though if you are looking to use early-bound class generation, it shouldn't be a problem, as the generated definition file will provide IntelliSense on the correct capitalisation of attributes and properties; and if you are using late bound, as in the example above, it's all lower case. So it's more that it will be a bit untidy to look at than completely impractical =)
