Data modeling for a special e-commerce site in Aerospike - Node.js

Hello, I have just started using Aerospike, so I need some guidance on a good data model for my e-commerce platform; I have not been able to design a data model in Aerospike that works well for this.
Here are some basic requirements for my e-commerce platform:
1. User Set (for login, registration and basic information about the user)
2. Product Set (for storing product info such as name, image, options, color options, etc.)
3. Order Set (to keep track of the user's orders)
The complex requirement, for which a special set is needed in the database, is as follows:
1. For each product that a user buys, a Share Code will be generated, which the user can share with friends and family so that they get benefits in the future.
2. When a user buys a product using somebody's Share Code, the detail that this user bought the "xyz" product must be passed to the owner of the Share Code, and a Share Code will also be generated for this user, which he/she can share with his/her friends.
3. The user must also be able to see how many people used his/her code, as well as the users who bought a product using the Share Codes of those first-level users.
So I want to keep a record of the users two levels below the current user.

Looks like you are trying to model a Multi-Level-Marketing order management system.
User and Product are straightforward. Record TTL = live forever (never expire), never delete, for all sets. user_info and product_info may be updated.
For simplicity, each product purchase is just one product_id.
By PK I mean the primary key of the record.
User Set: {PK: user_id, user_info: {...}}
Product Set: {PK: product_id, product_info: {...}}
Order Set: {
  PK: order_id,
  order_details: {
    buyer: user_id,
    item: product_id,
    qty: ...,
    share_code: "order_id:xyz",
    parent_code: "parent_order_id:abc"
  },
  level1_orders: [list of order_ids],
  level2_orders: [list of order_ids]
}
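As a rough illustration of how these records could be written with the Aerospike Node.js client (this is a sketch I am adding, not part of the model itself; it assumes a namespace called test, the promise-based v3+ client, and made-up sample data):

const Aerospike = require('aerospike')

async function createBaseRecords() {
  const client = await Aerospike.connect({ hosts: '127.0.0.1:3000' })

  // Records should live forever, so write them with a never-expire TTL.
  const meta = { ttl: Aerospike.ttl.NEVER_EXPIRE }

  // User Set: {PK: user_id, user_info: {...}}
  await client.put(new Aerospike.Key('test', 'users', 'user_42'),
    { user_info: { name: 'Alice', email: 'alice@example.com' } }, meta)

  // Product Set: {PK: product_id, product_info: {...}}
  await client.put(new Aerospike.Key('test', 'products', 'prod_7'),
    { product_info: { name: 'Widget', colors: ['red', 'blue'] } }, meta)

  // Order Set: order_details map plus the two (initially empty) referral lists.
  await client.put(new Aerospike.Key('test', 'orders', 213), {
    order_details: { buyer: 'user_42', item: 'prod_7', qty: 1,
                     share_code: '213:abc', parent_code: '112:pqr' },
    level1_orders: [],
    level2_orders: []
  }, meta)

  client.close()
}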
The share_code is a composite string of the form order_id:some_code.
For the first set of orders (level-0 orders), parent_code may be zero (i.e. no parent).
Consider order_id = 213, share code: "213:abc", parent_code:"112:pqr"
If any user purchases using share code "213:abc", and this level-1 purchase gets order_id 310, make an entry in the Order set: {PK: 310, ..., share_code: "310:cde", parent_code: "213:abc", ...}. Then update order_id 213 by appending order_id 310 to its level1_orders list. If order 213 has a parent_code (in this case "112:pqr"), also update order_id 112 by appending order_id 310 to its level2_orders list.
Now you have all the info you need for your model.
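A minimal sketch of that level-1 purchase flow with the Node.js client could look like this (again assuming the test namespace and the set/bin names above; error handling and the transactional concerns noted below are omitted):

const Aerospike = require('aerospike')

// Called when someone buys using an existing share code, e.g. "213:abc".
async function placeOrderWithShareCode(client, newOrderId, buyerId, productId, qty,
                                        newShareCode, parentShareCode) {
  const orderKey = id => new Aerospike.Key('test', 'orders', id)

  // 1. Write the new order record, e.g. {PK: 310, share_code: "310:cde", parent_code: "213:abc"}.
  await client.put(orderKey(newOrderId), {
    order_details: { buyer: buyerId, item: productId, qty,
                     share_code: newShareCode, parent_code: parentShareCode },
    level1_orders: [],
    level2_orders: []
  }, { ttl: Aerospike.ttl.NEVER_EXPIRE })

  // 2. Append the new order_id to the parent order's level1_orders list (order 213).
  const parentOrderId = parseInt(parentShareCode.split(':')[0], 10)
  await client.operate(orderKey(parentOrderId),
    [Aerospike.lists.append('level1_orders', newOrderId)])

  // 3. If the parent order itself has a parent_code ("112:pqr"), append the new
  //    order_id to the grandparent's level2_orders list (order 112).
  const parent = await client.get(orderKey(parentOrderId))
  const grandParentCode = parent.bins.order_details.parent_code
  if (grandParentCode) {
    const grandParentOrderId = parseInt(grandParentCode.split(':')[0], 10)
    await client.operate(orderKey(grandParentOrderId),
      [Aerospike.lists.append('level2_orders', newOrderId)])
  }
}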
Note: This is a multi-record update model. Be mindful of potential inconsistencies if the client fails midway or other bad things happen. There are advanced techniques to address that situation.
However, this may be a good starting point for you. Let me know if this was helpful. If it is different from what you wanted (your part 3 was not very clear to me), you may have to modify the model, but the technique highlighted should still be useful.

Related

How to ensure data consistency between two different aggregates in an event-driven architecture?

I will try to keep this as generic as possible using the “order” and “product” example, to try and help others that come across this question.
The Structure:
In the application we have three different services: two services that follow the event-sourcing pattern and one that is designed for reads only, giving us the separation between our read and write views:
- Order service (write)
- Product service (write)
- Order details service (Read)
The Background:
We are currently storing the relationship between the order and product in only one of the write services. For example, within Order we have a property called 'productItems', which contains a list of the aggregate Ids from Product for the products that have been added to the order. Each product added to an order is emitted onto Kafka, where the read service updates the view and forms the relationships between the data.
 
The Problem:
Because we pull back the order and the product by aggregate Id in order to update them, if a product is deleted there is no way to disassociate the product from the order on the write side.
 
This in turn means we have an inconsistency: the order holds a reference to a product that no longer exists within the product service.
The Ideas:
Master the relationship on both sides, which means that when the product is deleted, we can look at the associated orders and trigger an update to remove it from each order (this would duplicate the reference).
Create another view of the data that shows the relationships and use a saga to do a clean-up. When a delete is triggered, it will look up the view database, see the relationships within the data and then trigger an update for each of the orders that have the product associated.
Does it really matter having the inconsistencies if the Order details (read) service shows the correct information? Because the view database will consume the product deleted event, it will be able to safely remove the relationship, which means clients will get the correct view of the data even if the write models appear inconsistent (a sketch of such a projector follows after these ideas). Based on the order of the events, the state will always appear correct in the read view.
Another thought: since a deleted aggregate Id should never be reused, checks on the aggregate such as "is this product already in the order?" will never trigger, as the aggregate Id will never be repurposed; so the inconsistency should not cause an issue when running commands in the future.
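To make the projector idea above concrete, here is a minimal sketch of a read-side consumer (this is illustrative only: it assumes the kafkajs client, a product-events topic, a ProductDeleted event type and a readDb helper, none of which come from the original post):

const { Kafka } = require('kafkajs')

// Read-side projector: when a product is deleted on the write side, remove it from
// every order in the read view so clients never see a dangling reference.
async function runProductDeletedProjector(readDb) {
  const kafka = new Kafka({ clientId: 'order-details-read', brokers: ['localhost:9092'] })
  const consumer = kafka.consumer({ groupId: 'order-details-projector' })

  await consumer.connect()
  await consumer.subscribe({ topics: ['product-events'], fromBeginning: true })

  await consumer.run({
    eachMessage: async ({ message }) => {
      const event = JSON.parse(message.value.toString())
      if (event.type !== 'ProductDeleted') return

      // readDb stands in for whatever store backs the Order details (read) service.
      await readDb.removeProductFromAllOrders(event.productId)
    }
  })
}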
Sorry for the long read, but these are all the ideas we have thought of so far, and I am keen to gain some insight from the community, to make sure we are on the right track or if there is another approach to consider.
 
Thank you in advance for your help.
Event sourcing suits human, and specifically human-paced, processes very well. It helps a lot to imagine that every event in an event-sourced system is delivered by some clerk, printed on a sheet of paper. Then it becomes much easier to figure out a suitable solution.
What's the purpose of an order? So that your back-office personnel can secure the necessary units at a warehouse, then the customer makes a payment and you start the shipping process.
So, I guess, after an order is placed, some back-office system can process it and confirm that it can be taken into work and invoiced. Or it can return the order with remarks that this and that line are no longer available, so that the customer can agree to the reduced order or pick other options.
Another option, since the probability of a customer ordering a discontinued item is low, is simply not to do this check. If it still happens at shipping time, issue a refund and some coupon for the inconvenience. Why is the probability low? Because the goods are added from an online catalogue, which reflects the current state, and the availability check can be done when the 'Submit' button is clicked. So an inconsistency can only occur if an item is discontinued in the same minute (or second) the order is submitted. And usually the actual decision to discontinue is made well before the information is updated in the Product service, for external reasons.
Hence, I suggest using eventual consistency, since an event-sourced entity should only be responsible for its own consistency and not try to fulfil someone else's responsibility.

Is an order something transient or not

In my company (a train company) there is a sort of battle going on over two viewpoints on something. Before going too deep into the problem, I'm first going to explain the different domains we have in our landscape now.
Product: All product master data and their characteristics.
Think their name, their possible list of choices...
Location: All location master data that can be chosen, like stations, stops, etc.
Quote: To get a price for a specific choice of a product with their attributes.
Order: The order domain where you can make a positive order but also a negative one for reimbursements.
Ticket: This is essentially what you get from paying for the order. It's the product, but in the state it is in when received by the customer.
The problem
Viewpoint PURPLE (I don't want to create bias)
When an order is transformed into all its "tickets", we convert the order details, like price, into the ticket model, in order to make Order something we can throw away. Order is seen as something transient, kind of like the bag you get in a supermarket: it's the goods inside the bag that matter, not the bag itself.
When a reimbursement flow starts, you do not need to go to the order; you have everything in the Ticket domain. This means data from Order will be duplicated into Ticket.
But not all of it, only the things that are relevant, like price for example.
Viewpoint YELLOW (I don't want to create bias)
You do the same as above, but you do not store the price in the Ticket domain. The Ticket domain only consists of details that are relevant for the "ticket" to work. Price is not allowed in there because it is a thing of the order. When a reimbursement flow starts, it is allowed to go and fetch those details from the order, making Order not something you can throw away, as it holds crucial data.
The benefit here is that Order is not "polluting" the Ticket with unnecessary data, but this is debatable; the price is a good example of that.
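To make the two viewpoints concrete, here is a rough sketch of the two ticket shapes (field names and values are made up for illustration, not taken from the post):

// Viewpoint PURPLE: the ticket carries a copy of the order data that is relevant to it
// (price, for example), so Order can be treated as transient and thrown away.
const purpleTicket = {
  ticketId: 'T-1001',
  journey: 'Brussels - Ghent',
  validOn: '2024-03-01',
  price: { amount: 9.5, currency: 'EUR' }  // duplicated from the order
}

// Viewpoint YELLOW: the ticket only holds what it needs to function and keeps a reference
// to the order; a reimbursement flow must go back to the order to fetch the price.
const yellowTicket = {
  ticketId: 'T-1001',
  orderId: 'O-2024-555',  // Order must be kept around, since it holds the price
  journey: 'Brussels - Ghent',
  validOn: '2024-03-01'
}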
I wish to know your ideas about these two viewpoints.
There is no "Don't repeat yourself" when it comes to the business domain. The only thing that dictates the business domain is the business requirements. If the requirements state that the ticket should work independent of the order changes, then you have to duplicate things.
But in this case, the requirements are ambiguous. There is no correct design using the currently specified requirements. Building code based on assumptions is the #1 way of getting bad code, since you most likely will have to do a redesign down the road.
You need to go back to the product owner and ask him about the difference between the Order and the Ticket.
For instance:
What should happen to the ticket if the order is deleted?
What happens to the order and/or ticket if the product price changes?
What happens to a ticket if the order is reimbursed?
Go back, get better requirements and then start to design the application.

Blockchain Application Architecture: UML & Use Cases

For my internship, I need to implement a blockchain-based solution to manage a drug supply chain. Managing this supply chain involves track-and-trace (geolocating) a drug along the chain, but also monitoring the storage temperature to check whether the cold chain is respected. For that I created a mock-up of the POC of my Dapp (https://balsamiq.cloud/sum5oq5/p8lsped), and I also wanted to prepare by doing a UML diagram and use cases. However, I didn't find a lot of information about UML and use cases for blockchains, besides two pieces of literature which were quite different, so I don't know whether what I did was correct or not...
The users of my Dapp will be the following:
The stakeholders (manufacturers, distributors and retailers), who will use the Dapp to place orders and also monitor them. They can also search the history for a specific order. Finally, through IoT sensors, they update the conditions of the order (temperature & location).
The administrator, whose role is to update the Dapp and its rules, but also to add or delete users while defining the rights they have on the blockchain (I intend to use a permissioned blockchain). Finally, they are also there to help in case of technical problems.
The Dapp that I'm thinking about works as follows:
A user, the customer, can place an order (a list of products) with a certain seller and choose the final destination of the order.
The order is then put together before being shipped or stored in the depot of one of the stakeholders (distributor or retailer), with a description of the storage and/or shipping conditions of the product (for example, the product must be stored or transported in a room with a temperature of less than 5°C). During shipping and storage, an IoT device feeds the Dapp with the temperature and geolocation of the product, updating the data every 5-10 minutes.
Obviously there will be a function that allows all the users to see the history of past orders and to search within a specific order.
In case the temperature does not respect the recommended temperature, the smart contract sends an alert. The same applies if the geolocation of the product is "weird", like being in some European country rather than an Asian country: an alert is sent again by the smart contract. Finally, once the product has been delivered to the location requested by the customer, the money for the order is paid to the seller.
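As a rough, framework-agnostic sketch of the alert rule described above (something I put together to check my understanding; the condition fields and thresholds are made up):

// Hypothetical shipping conditions attached to an order.
const conditions = {
  maxTemperatureC: 5,                   // cold chain: must stay below 5°C
  allowedCountries: ['FR', 'DE', 'BE']  // expected route; anything else is "weird"
}

// Evaluate one IoT reading (sent every 5-10 minutes) against the order's conditions.
// Returns the list of alerts the smart contract should emit for this reading.
function checkReading(conditions, reading) {
  const alerts = []
  if (reading.temperatureC > conditions.maxTemperatureC) {
    alerts.push({ type: 'COLD_CHAIN_BREACH', value: reading.temperatureC })
  }
  if (!conditions.allowedCountries.includes(reading.country)) {
    alerts.push({ type: 'UNEXPECTED_LOCATION', value: reading.country })
  }
  return alerts
}

// Example: a reading taken in transit that violates both rules.
console.log(checkReading(conditions, { temperatureC: 7.2, country: 'CN' }))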
So, based on what I explained, I came here in the hope that someone can tell me whether the use cases and UML I produced are correct or not.
I thank in advance anybody who will take the time to help me.

Duplicate Detection In CRM

In my organization, I have a sales department. The users in the sales department are given leads, and daily they call these different leads. Now I want to ensure that two salespersons are not calling the same lead. How can this situation be prevented in the CRM? Moreover, we are giving random leads to all the salespersons, so there is a possibility that two or more salespersons have the same leads.
The privileges given to the salespersons are:
They are not able to see the leads of each other.
They are not able to see the accounts of each other.
They are not able to see the contacts of each other.
Again, I want to ensure that two or more salespersons do not have the same leads, so they are not calling the same person. How can this situation be prevented in the CRM?
The duplicate detection is limited to the records for which you are granted access. See Point 4 in this article: http://blogs.msdn.com/b/crm/archive/2008/04/29/duplicate-detection-security-model.aspx
All users with Read privilege on the base and duplicate records and Read privilege on System Job entity can view the duplicates. Every user will view the duplicates according to his access level on that entity. For example, if Tim has Basic read access on Accounts entity and Jack has Global read access, then for a duplicate detection job ran by Tim for all the account records in the system, Tim will see the duplicate account records that he owns but Jack will see duplicate account records, created by all users in the organization and detected in that run.
So you either have to run duplicate detection with administrative rights periodically, or you have to grant more rights.
The reason for this is pretty simple: how would you show a user that there is a possible duplicate record when he is not allowed to view it? So it is handled as if the records don't exist.
At last I have found the answer to the above question.
I have increased the privileges given to the salespersons: I have given them access to see the records of all the salespersons.
But in the front end, I hid the system views like "All Leads", "Active Accounts", "Active Contacts", etc., made custom views for the salespersons, and edited the filters of each view.
These are the views which enable the salespersons to see all the records, so I hid them from the salespersons and made custom views instead.
This way, two or more salespersons do not end up with the same leads, so they are not calling the same leads.
I will explain how it helps: suppose one salesperson imports leads, and after some time another salesperson also imports some leads. If some of the leads are the same, the CRM will not import the duplicate leads and will only import the ones that are not duplicates. Moreover, the salespersons can see which of their leads were not imported into the system by going to the Workplace and then to Imports.
I have also made custom duplicate detection rules according to my requirements. These rules check that the leads are not duplicates of each other.
I think you will need to increase the security privileges of each user so they can see each other's records, or have an admin account for importing records.

How to determine the aggregate root

I have an application in which an engineer accesses gas wells. He can see a list of wells by choosing any combination of 7 characteristics. The characteristics are company, state, county, basin, branch, field and operator, in that order. The application starts and I need to retrieve a list of companies. The companies the user sees are based on their security credentials. What would be my aggregate root/domain object on which to base my repository? I first thought user, but I never retrieve anything about a user. The combination of those items and a couple of other attributes are collectively called wellheader information. Would that be the aggregate root or domain object for my repository?
Thanks in advance
With a short description like that, I can only guess at how your design could look.
As I read it, you are really interested in wells for a given engineer (is the engineer the user you mention?).
So a first try could be to model the concept of a well as an aggregate root.
So maybe something like this:
ICollection<Well> wells = WellRepository.GetWellsForEngineer(engineerInstance);
Maybe your engineer is associated with a characteristics object.
Either way, you have to associate the engineer with wells in a given company, state and so on, to be able to extract which wells the engineer is actually assigned to.
If this doesn't help you, maybe you could elaborate on your domain.
