I have two tables:
The Cost file:
Facility,
Product,
Cost Set,
Cost Bucket,
Cost
The Transaction file:
Facility,
Product,
Quantity,
Date
My issue is that I need to run through the cost file for a specific cost set (03) and present the information in a list report as:
Facility:Product:Quantity:Bucket 1:Bucket 2:Bucket 3:etc. etc.
I have tried many things, but cannot get the information to display without the quantity aggregation being multiplied by the number of cost buckets for the facility/item combination. I am pretty sure it has to do with determinants, but I set them up and found it to be of no help.
Can anyone point me in a direction that might be helpful, or is there something more I can add to this description that might be useful?
What's the relationship of cost bucket to product? When you say "Bucket 1", "Bucket 2", etc., are these the data values within the Bucket data item? If you could give us some sample data which ultimately generates one line of output, that'd be great.
General advice: if this model is solely for Report Studio use (not Query Studio ad hoc), you may find it easier to simply create an object within Framework Manager which is custom SQL joining the required tables together.
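To illustrate the shape of the fix outside of Cognos, here is a minimal pandas sketch (assuming the two files are exported to CSV; the column names are assumptions based on the layouts above). The key is to aggregate the quantity on its own grain, pivot the cost buckets for cost set 03 into columns, and only then join the two, so the bucket rows can never multiply the quantity:

import pandas as pd

# Hypothetical exports of the two files described above.
cost = pd.read_csv("cost.csv", dtype={"CostSet": str})   # Facility, Product, CostSet, CostBucket, Cost
trans = pd.read_csv("transactions.csv")                  # Facility, Product, Quantity, Date

# 1. Aggregate quantity at the facility/product grain, independent of buckets.
qty = trans.groupby(["Facility", "Product"], as_index=False)["Quantity"].sum()

# 2. Pivot the cost buckets for cost set 03 into one column per bucket.
buckets = (
    cost[cost["CostSet"] == "03"]
    .pivot_table(index=["Facility", "Product"],
                 columns="CostBucket", values="Cost", aggfunc="sum")
    .reset_index()
)

# 3. Join the two pre-aggregated results: one row per facility/product.
report = qty.merge(buckets, on=["Facility", "Product"], how="left")
print(report.head())

In Cognos terms this mirrors the custom-SQL suggestion: two pre-aggregated queries joined on facility and product, rather than joining raw rows and letting the bucket rows inflate the quantity.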
I am building a product catalog for an e-commerce website, and I have a requirement to build an Azure Search/Solr/Elasticsearch-based index. The problem is saving the market-specific attributes. The website supports 109 markets, and there is market-specific data like ratings, price, views, wish-listed, etc. that I need to save in the index. For example, Product1 will have 109 ratings (the rating is different in each market) and 109 prices (the price might be different in each market), corresponding to the 109 markets. I will also have to use these attributes in a boosting function so that when people are searching, products with higher views/ratings surface first. How do I design the index definition to support this? Can I achieve this with one index doc per product, or do I have to create one index doc per market? Some pointers would be very helpful. I have spent a couple of days on this and could not reach a conclusion that is optimized for this use case. Thank you!
My proposed index definition:
-id
-mktUSA
--mktId
--rating
--views
--price
...
-mktCanada
--mktId
--rating
--views
--price
...
-locales
--En
--Fr
--Zh
...
...other properties
The problem with this approach is configuring magnitude scoring functions inside a scoring profile to boost products based on the market.
Say, for example, a user is from Canada: only the Canada-based ratings/views should be considered, and not the other markets' ratings, while Cognitive Search calculates the relevance score.
Is there any possible workaround for this? Elasticsearch has a neat solution, the function score query, which can be used to configure the scoring function dynamically.
From what I understand, your problem is that you want to have a single index with products that support 109 different markets. Many different properties for your Product model can then be market-specific. Your concern is that the model gets too big, or whether it's a scalable design. It is. You can have 1000+ properties without a problem.
I have built a similar search solution for e-commerce for multiple markets.
For price, I specify one price per market. I have about 80 or so markets, so that's 80 prices. There is no way around it. I would probably do the same for ratings and views too. One per market.
In our application we use separate dimensions for market, language and country. A market can be Scandinavia, BeNeLux or Asia-Pacific. You need to clearly define what a market is in your case, and agree with the business which markets you have and how you handle changes. Countries can map directly to markets, but it may also differ. Finally, language is usually shared across markets/countries and you usually only have to support 20-25 languages.
Suggested data model
Id
TitleEnGb
TitleDeDe
TitleFrFr
...
PriceGb
PriceUs
PriceNo
PriceDe
...
RatingsGb
RatingsUs
RatingsNo
RatingsDe
...
DescriptionEnGb
DescriptionDeDe
DescriptionFrFr
...
This is to illustrate that Title and Description are language-specific, while price and ratings are market-specific.
For the 20-25 language-specific properties, you have to think about what analyzers to use. You want to use language-specific analyzers, and preferably the Microsoft analyzers since they have much better linguistics support with full lemmatization and so on.
When you develop your frontend application, you have to keep track of which market, country and language you are in, and then refer to the specific properties. This is the easiest way to support boosting and so on.
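As a rough sketch of how that model could be expressed, the Python below builds the JSON payload for the Azure Cognitive Search REST API with a small, hypothetical subset of languages and markets: language-specific text fields with Microsoft analyzers, market-specific numeric fields, and one magnitude-based scoring profile per market so only that market's ratings influence relevance. Field names, market codes and boost values are assumptions, not a definitive implementation:

import json

languages = {"EnGb": "en.microsoft", "DeDe": "de.microsoft", "FrFr": "fr.microsoft"}
markets = ["Gb", "Us", "De"]   # hypothetical subset of the ~109 markets

fields = [{"name": "Id", "type": "Edm.String", "key": True}]

# Language-specific text fields, each with its Microsoft analyzer.
for lang, analyzer in languages.items():
    fields.append({"name": f"Title{lang}", "type": "Edm.String",
                   "searchable": True, "analyzer": analyzer})
    fields.append({"name": f"Description{lang}", "type": "Edm.String",
                   "searchable": True, "analyzer": analyzer})

# Market-specific numeric fields used for filtering, sorting and boosting.
for mkt in markets:
    fields.append({"name": f"Price{mkt}", "type": "Edm.Double",
                   "filterable": True, "sortable": True})
    fields.append({"name": f"Ratings{mkt}", "type": "Edm.Double",
                   "filterable": True, "sortable": True})

# One scoring profile per market so only that market's ratings boost relevance.
scoring_profiles = [{
    "name": f"boost-{mkt.lower()}",
    "functions": [{
        "type": "magnitude",
        "fieldName": f"Ratings{mkt}",
        "boost": 5,
        "interpolation": "linear",
        "magnitude": {"boostingRangeStart": 0, "boostingRangeEnd": 5},
    }],
} for mkt in markets]

index_definition = {"name": "products", "fields": fields,
                    "scoringProfiles": scoring_profiles}
print(json.dumps(index_definition, indent=2))

At query time the frontend passes the scoring profile that matches the user's market (e.g. scoringProfile=boost-gb), so the other markets' ratings never enter the relevance score.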
Per-market index is not recommended
You could create one index per market. I have gone down this route before, and I would not recommend it. It means you have to update 109 indexes every time you add, change or delete an item. And Azure Search supports at most 50 indexes per service anyway.
I want to sanity check myself on a view projection, with regard to whether an intermediary concept can exist purely in the read model while providing a bridge between commands.
Let me use a contrived example to explain.
We place an order which raises an OrderPlaced event. The workflow then involves generating a picking slip, which is used to prepare a shipment.
A picking slip can be generated from an order (or group of orders) without any additional information being supplied from any external source or user. Is it acceptable then that the picking slip can be represented purely as a read model?
So:
PlaceOrderCommand -> OrderPlacedEvent
OrderPlacedEvent -> PickingSlipView
The warehouse manager can then view a picking slip, select the lines they would like to ship, and then perform a PrepareShipment command. A ShipmentPrepared event will then update the original order, and remove the relevant lines from the PickingSlipView.
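Concretely, the flow I'm proposing amounts to a projection along the lines of this minimal sketch (in-memory Python, with hypothetical event and view shapes):

from dataclasses import dataclass

@dataclass
class OrderPlaced:
    order_id: str
    lines: dict              # sku -> quantity

@dataclass
class ShipmentPrepared:
    order_id: str
    shipped_skus: list

class PickingSlipView:
    """Read model only: it handles no commands and is rebuilt from events."""
    def __init__(self):
        self.slips = {}      # order_id -> {sku: quantity}

    def on_order_placed(self, event: OrderPlaced):
        self.slips[event.order_id] = dict(event.lines)

    def on_shipment_prepared(self, event: ShipmentPrepared):
        slip = self.slips.get(event.order_id, {})
        for sku in event.shipped_skus:
            slip.pop(sku, None)
        if not slip:
            self.slips.pop(event.order_id, None)

Note that nothing in this sketch enforces invariants (e.g. that you can't ship lines that aren't on the slip); whether that matters is really the crux of the question.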
I know it's a toy example, but I have a conceptually similar use case where a colleague believes the PickingSlip should be a domain entity/aggregate in its own right, as it's conceptually different to order. So you have PlaceOrder, GeneratePickingSlip, and PrepareShipment commands.
The GeneratePickingSlip command however simply takes an order number (identifier), transforms the order data into a picking slip entity, and persists the entity. You can't modify or remove a picking slip or perform any action on it, apart from using it to prepare a shipment.
This feels like introducing unnecessary overhead on the write model, for what is ultimately just a transformation of existing information to enable another command.
So (and without delving deeply into the problem space of warehouses and shipping)...
Is what I'm proposing a legitimate use case for a read model?
Acting as an intermediary between two commands, via transformation of some data into a different view. Or, as my colleague proposes, should every concept be represented in the write model in all cases?
I feel my approach is simpler and avoids unneeded complexity, but I'm new to CQRS and so perhaps missing something.
Edit - Alternative Example
Providing another example to explore:
We have a book of record for categories, where each record is information about products and their location. The book of record is populated by an external system, and contains SKU numbers, mapped to available locations:
Book of Record (Electronics)
SKU# Location1 Location2 Location3 ... Location 10
XXXX Introduce Remove Introduce ... N/A
YYYY N/A Introduce Introduce ... Remove
Each book of record is an entity, and each line is a value object.
The book of record is used to generate different Tasks (which are grouped in a TaskPlan to be assigned to a person). The plan may only cover a subset of locations.
There are different types of Tasks: One TaskPlan is for the individual who is on a location to add or remove stock from shelves. Call this an AllocateStock task. Another type of Task exists for a regional supervisor managing multiple locations, to check that shelving is properly following store guidelines, say CheckDisplay task. For allocating stock, we are interested in both introduced and removed SKUs. For checking the displays, we're only interested in newly Introduced SKUs, etc.
We are exploring two options:
Option 1
The person creating the tasks has a View (read model) that allows them to select Book of Records. Say they select Electronics and Fashion. They then select one or more locations. They could then submit a command like:
GenerateCheckDisplayTasks(TaskPlanId, List<BookOfRecordId>, List<Locations>)
The command would then orchestrate going through the records, filtering out locations we don't need, processing only the 'Introduced' items, and creating the corresponding CheckDisplayTasks for each SKU in the TaskPlan.
Option 2
The other option is to shift the filtering to the read model before generating the tasks.
When a book of record is added, a view model for each type of task is maintained. The data might be transposed, and would only include relevant info, i.e. the CheckDisplayScopeView might project the book of record to:
Category SKU Location
Electronics (BookOfRecordId) XXXX Location1
Electronics (BookOfRecordId) XXXX Location3
Electronics (BookOfRecordId) YYYY Location2
Electronics (BookOfRecordId) YYYY Location3
Fashion (BookOfRecordId) ... ... etc
When generating tasks, the view enables the user to select the category and locations they want to generate the tasks for. Perhaps they select the Electronics category and Location 1 and 3.
The command is now:
GenerateCheckDisplayTasks(TaskPlanId, List<BookOfRecordId, SKU, Location>)
Where the command is no longer responsible for the logic needed to filter out the locations, the Removed and N/A items, etc.
So the command for the first option just submits the ID of the entity that is being converted to tasks, along with the filter options, and does all the work internally, likely utilizing domain services.
The second option offloads the filtering aspect to the view model, and now the command submits values that will generate the tasks.
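To make the contrast concrete, here are the two command payloads as sketched above (Python dataclasses; the class-name suffixes are just for illustration):

from dataclasses import dataclass
from typing import List, Tuple

# Option 1: the command carries identifiers plus filter criteria, and the
# write side loads the books of record and does the filtering itself.
@dataclass
class GenerateCheckDisplayTasksByFilter:
    task_plan_id: str
    book_of_record_ids: List[str]
    locations: List[str]

# Option 2: the read model has already filtered down to the relevant
# 'Introduced' rows for the selected locations; the command carries values.
@dataclass
class GenerateCheckDisplayTasksFromSelection:
    task_plan_id: str
    selections: List[Tuple[str, str, str]]   # (book_of_record_id, sku, location)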
Note: In terms of the guidance that Aggregates shouldn't appear out of thin air, the Task Plan aggregate will create the Tasks.
I'm trying to determine if option 2 is pushing too much responsibility onto the read model, or whether this filtering behavior is more applicable there.
Sorry, I attempted to use the PickingSlip example as I thought it would be a more recognizable problem space, but realize now that there are connotations that go along with the concept that may have muddied the waters.
The answer to your question, in my opinion, very much depends on how you design your domain, not how you implement CQRS. The way you present it, it seems that all these operations and aggregates are in the same Bounded Context but at first glance, I would think that there are 3 (naming is difficult!):
Order Management or Sales, where orders are placed
Warehouse Operations, where goods are packaged to be shipped
Shipments, where packages are put in trucks and leave
When an Order is Placed in Order Management, Warehouse reacts and starts the Packaging workflow. At this point, Warehouse should have all the data required to perform its logic, without needing the Order anymore.
The warehouse manager can then view a picking slip, select the lines they would like to ship, and then perform a PrepareShipment command.
To me, this clearly indicates the need for an aggregate that will ensure the invariants are respected. You cannot select items not present in the picking slip, you cannot select more items than the quantities specified, you cannot select items that have already been packaged in a previous package and so on.
A ShipmentPrepared event will then update the original order, and remove the relevant lines from the PickingSlipView.
I don't understand why you would modify the original order. Also, removing lines from a view is not a safe operation per se. You want to guarantee that concurrency doesn't cause a single item to be placed in multiple packages, for example. You guarantee that using an aggregate that contains all the items, generates the packaging instructions, and marks the items of each package safely and transactionally.
Acting as an intermediary between two commands
Aggregates execute the commands, they are not in between.
Viewing it from another angle, an indication that you need that aggregate is that the PrepareShippingCommand needs to create an aggregate (Shipping), and according to Udi Dahan, you should not create aggregate roots (out of thin air). Instead, other aggregate roots create them. So, it seems fair to say that there needs to be some aggregate, which ensures that the policies to create shippings are applied.
As a final note, domain design is difficult and you need to know the domain very well, so it is very likely that my proposed solution is not correct, but I hope the considerations I made on each step are helpful to you to come up with the right solution.
UPDATE after question update
I read the updated question a couple of times and updated my answer several times, but every time I ended up with answers very specific to your example again, and I'm most likely missing a lot of details to actually be helpful (I'd be happy to discuss it on another channel though). Therefore, I want to go back to the first sentence of your question to add an important comment that I missed:
an intermediary concept can purely exist in the read model, while providing a bridge between commands.
In my opinion, read models are disposable. They are not a single source of truth. They are a representation of the data to easily fulfil the current query needs. When these query needs change, old read models are deleted and new ones are created based on the data from the write models.
So, based on this alone, I would recommend not preparing a read model to facilitate your command operations.
I think that your solution is here:
When a book of record is added a view model for each type of task is maintained. The data might be transposed, and would only include relevant info.
If I understand it correctly, what you should do here is not create a view model, but create an Aggregate (or multiple). This aggregate can then receive the commands, apply the business rules and mutate the state. So, instead of having a domain service reading data from "clever" read models and putting it all together, you have an aggregate which encapsulates the data it needs and the business logic.
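A rough sketch of that idea (assumed names, heavily simplified): the book of record becomes an aggregate that owns its lines and the filtering rules, and the task plan aggregate asks it for the relevant work instead of a read model doing that filtering:

class BookOfRecord:
    """Aggregate: owns its lines and the rules about what counts as relevant."""
    def __init__(self, book_id, category, lines):
        self.book_id = book_id
        self.category = category
        # lines: list of (sku, location, action), action in {'Introduce', 'Remove', 'N/A'}
        self.lines = lines

    def introduced_skus(self, locations):
        """Business rule for CheckDisplay: only newly introduced SKUs count."""
        return [(sku, loc) for sku, loc, action in self.lines
                if action == "Introduce" and loc in locations]


class TaskPlan:
    """Aggregate root that creates the Tasks (they don't appear out of thin air)."""
    def __init__(self, plan_id):
        self.plan_id = plan_id
        self.tasks = []

    def generate_check_display_tasks(self, book, locations):
        for sku, location in book.introduced_skus(locations):
            self.tasks.append(("CheckDisplay", book.book_id, sku, location))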
I hope it makes sense. It's a broad topic and we could talk about it for hours probably.
As background, we are very reliant on serial items, and every single serial item has its own unit cost and price, etc. We are on 2017 R2 for now.
We have had to override the standard Acumatica functionality that tries to pick up the stock item's last cost and use that as its unit cost for that form/document. We have seen this on most documents in the inventory and distribution modules.
Anyway, we have realized that we missed a few spots and now we have a massive project to try and identify all of the items that were out of sync and correct them.
My questions/request for help is the following:
1) Is there any way to know all of the places that could effectively change what Acumatica has as the unit cost for a serial item? I know this happens on inventory receipts, purchase receipts, adjustments, items that come out of the Manufacturing module, etc. Pretty much everywhere it ends with a receipt of some sort, I guess. I see this is also the case for inventory receipts on a two-step transfer (this is the one that got us; we didn't realize that would happen).
2) Is there any way, on a more global level within Acumatica, to have it automatically pick up the serial-level unit cost as opposed to ever using the stock item cost statistics?
3) Does anyone have any ideas on how to identify all of the possible items that may be affected, and how to resolve them once we identify them? Thankfully, we do have some custom fields that will hopefully hold the correct unit cost, but we will still need to adjust these items back to their correct unit cost. Any ideas would be greatly appreciated.
I hope I explained the above clearly, and thank you in advance for anyone kind enough to help us out.
Please find the detailed explanation of Item Costing in Acumatica at this link.
I think you can consider generating an Inventory Adjustment for all the items with the corresponding serial numbers, zero quantity and a fixed amount, to correct all the costs without checking whether each cost is currently correct or not. Please create a backup snapshot before trying to generate such adjustments, and check it on a test instance first.
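For identifying the affected items (question 3), one low-tech option is sketched below with pandas and two hypothetical CSV/Generic Inquiry exports: compare the unit cost Acumatica currently holds per serial number against the expected cost from your custom field, and emit the out-of-sync rows as candidates for the zero-quantity adjustments suggested above. The column names are assumptions, not actual Acumatica field names:

import pandas as pd

# Hypothetical exports: the system's serial costs vs. the custom-field cost.
system = pd.read_csv("system_serial_costs.csv")      # InventoryID, LotSerialNbr, SystemUnitCost
expected = pd.read_csv("expected_serial_costs.csv")  # InventoryID, LotSerialNbr, ExpectedUnitCost

merged = system.merge(expected, on=["InventoryID", "LotSerialNbr"], how="inner")
merged["CostDelta"] = merged["ExpectedUnitCost"] - merged["SystemUnitCost"]

# Rows needing a correcting adjustment: zero quantity, amount = cost delta.
out_of_sync = merged[merged["CostDelta"].abs() > 0.005].copy()
out_of_sync["Quantity"] = 0
out_of_sync[["InventoryID", "LotSerialNbr", "Quantity", "CostDelta"]].to_csv(
    "adjustment_candidates.csv", index=False)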
I want to use Azure ML to find related products using information from receipts from a store.
I have a file of receipts:
44366,136778
79619,88975
78861,78864
53395,78129,78786,79295,79353,79406,79408,79417,85829,136712
32340,33973
31897,32905
32476,32697,33202,33344,33879,34237,34422,48175,55486,55490,55498
17800
32476,32697,33202,33344,33879,34237,34422,48175,55490,55497,55498,55503
47098
136974
85832
Each row represent one receipt and each number is a product id.
Given a product id, I want to get a list of similar products, i.e. products that were bought together by other customers.
Can anyone point me in the right direction on how to do this?
This seems a good fit for the Frequently Bought Together service on Azure DataMarket (https://datamarket.azure.com/dataset/amla/mba). You may have to preprocess the dataset to get it into the required format. This service has a web UI as well: https://marketbasket.cloudapp.net/
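If you want a simple baseline before (or instead of) wiring up a service, a plain co-occurrence count over the receipt file already gives "bought together" suggestions. A minimal sketch, assuming the file looks exactly like the sample above:

from collections import defaultdict
from itertools import combinations

# (product_a, product_b) -> number of receipts containing both
co_counts = defaultdict(int)

with open("receipts.txt") as f:
    for line in f:
        products = sorted(set(line.strip().split(",")))
        for a, b in combinations(products, 2):
            co_counts[(a, b)] += 1

def bought_together(product_id, top_n=5):
    """Products most often appearing on the same receipt as product_id."""
    related = []
    for (a, b), count in co_counts.items():
        if a == product_id:
            related.append((b, count))
        elif b == product_id:
            related.append((a, count))
    return sorted(related, key=lambda pair: -pair[1])[:top_n]

print(bought_together("32476"))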
This is a typical problem for a recommender; you can use a model called the Matchbox recommender to cover such a problem.
Recommenders typically use scores for items and then use some clever calculation to predict scores for items users have not scored yet (a score would typically be 1 if the user bought the item, 0 if they did not).
If you need more details, let me know (you have access to a free version of Azure ML where you can try all this).
Regards
I'm curious to know how people value data aggregation. If you don't mind letting me know, how important is this really to you personally with respect to your work environment, and do you have to work directly with data aggregation in your line of work?
Really interested to hear your feedback.
If you persist data (e.g. store it in a database), chances are that the data will be used by managers, statisticians, stakeholders etc. to analyze the workings of their software-supported undertaking and make executive decisions. This analysis can only take place by methods of aggregation. There's no one in the world who can look at a million rows of raw data and glean insight. The data has to be summed, averaged, standard-deviated etc. to make any sense to a human being.
A few examples of areas where data aggregation is important:
Public Health (CDC, WHO)
Marketing
Advertising
Politics
Organizational Management
Space Exploration
lol. Take your pick!
Very important, what else is there to say?
I work at a large hospital, and not only do we have numerous departments using Analysis Services cubes we developed, but they also rely heavily on the daily totals and the different aggregations they can derive from these cubes by simply browsing. Without the very basic capability of being able to aggregate some portion of your data, you might as well write it on paper (IMO).
Say you have data on every individual sale.
Looking at these individual purchases could be interesting at some level (e.g. when a customer comes in and wants a refund).
However, I cannot send those 20 million records to my boss at the end of each month and say "Here's how much we sold this month".
This data needs to be aggregated and summarized at various levels. The business would not operate if the marketing guys couldn't get an aggregate for each product, or the regional boss couldn't get a total aggregate over a time period, and so on.
Our databases have millions of rows, so of course we rely on aggregation for management information. Not to use it would put too heavy a load on the database when running large reports, which would impact heavily on the users of the database. I can't think of many cases where the database contains business-critical information that managers use to make decisions and aggregation would not be needed for management reports.
I view data aggregation like data in a grid and being able to group, order, and sort columns. In a large grid of data, this is very important. It's really the difference between looking at a pile of numbers and looking at meaningful data.