Backbone collection and model structure - node.js

I have a Backbone app with essentially 3 main components: groups, users, and posts. I have models and collections for all 3, and on top of that I am tracking in-depth analytics for each type. Note that multiple users belong to a single group.
For example, I have another collection called groups_index with the fields date, average # of posts, and average post length (with a new row for each date). I also have a user_index collection with the fields date, group_id, average # of posts, and average post length.
I want to be able to generate charts that show the average number of posts for a group over time, and the same for the users of a specific group.
Does it make more sense to combine everything into the groups_index collection and add a user_id field? Or would that over-complicate things when showing the group-average charts?
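For reference, here is a minimal Backbone sketch of the current two-collection setup. Only the collection and field names come from the question; the endpoint URLs and the forGroup helper are assumptions:

// Sketch only: Backbone (and underscore) are assumed to be loaded.
var GroupIndexEntry = Backbone.Model.extend({
  defaults: { date: null, avg_posts: 0, avg_post_length: 0 }
});

var GroupsIndex = Backbone.Collection.extend({
  model: GroupIndexEntry,
  url: '/api/groups_index'            // hypothetical endpoint
});

var UserIndexEntry = Backbone.Model.extend({
  defaults: { date: null, group_id: null, avg_posts: 0, avg_post_length: 0 }
});

var UserIndex = Backbone.Collection.extend({
  model: UserIndexEntry,
  url: '/api/user_index',             // hypothetical endpoint

  // Rows for a single group, e.g. to feed the per-user chart.
  forGroup: function (groupId) {
    return this.filter(function (entry) {
      return entry.get('group_id') === groupId;
    });
  }
});

Keeping the two indexes separate means the group chart only ever touches groups_index rows, while the per-user chart filters user_index by group_id.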

Related

How should I create schemas for several mongoose models with different fields?

Suppose we want to build an e-commerce website that includes different products, and these products are divided into, say, four or five categories. Each category of products has its own fields, and of course there are a number of common fields among them, such as product name, price, description and ...
My question is: should we define four or five different schemas, one for each kind of product?
I do believe you should create one schema for each category of products, even though they may share some similar fields.
You can put the shared fields in one object and insert it into each Schema, so you don't have to repeat yourself.
If one Schema had all the fields that all the products need, it would be too bloated, and it wouldn't make sense to back different Models for each category with that single Schema either. When you query for a product, it would return many fields that do not relate to that product.
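A minimal mongoose sketch of that approach, with hypothetical Book and Electronics categories (only name, price and description come from the question; the other fields are illustrative):

const mongoose = require('mongoose');

// Shared fields, defined once and reused in every category schema.
const commonFields = {
  name: { type: String, required: true },
  price: { type: Number, required: true },
  description: String
};

// One schema (and one model) per category, each mixing in the common fields.
const bookSchema = new mongoose.Schema({
  ...commonFields,
  author: String,
  pages: Number
});

const electronicsSchema = new mongoose.Schema({
  ...commonFields,
  brand: String,
  warrantyMonths: Number
});

const Book = mongoose.model('Book', bookSchema);
const Electronics = mongoose.model('Electronics', electronicsSchema);

This way a query against Book only ever returns book-related fields, which is the point of keeping the schemas separate.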

Issue with the usage of belongsToMany or hasMany in Sequelize

I've been using Node.js with Sequelize to deal with a Postgres database, but I've been facing an issue for a while.
I have two tables:
First table: Purchases. Inside this table there is a column that keeps the foreign key of the Products table, because they are associated. But as I've been coding, I realized that I need to "insert" more than one product at once, like an array, for people who buy more than one product at a time.
Second table: Products.
I want something like this: allow a purchase inside Purchases to have more than one product associated with it. But all I can do is make the product foreign key column in the purchases table accept the integer (ID) of only one product.
For example:
User X bought multiple products, so the product column in Purchases would hold [1, 3, 5], where these numbers are the product IDs that I would like to associate with the Products table.
Screenshots attached: the Purchases model (not the migration) in Sequelize, the purchases table structure, and the products table structure.
The conclusion I have reached is to use belongsToMany or hasMany, but I don't know how.
Thanks.
I propose adding another table to support multiple products in one order (see the Sequelize sketch after this list):
ORDER - stores one record per customer order (all columns that relate to the order as a whole)
ORDER_ITEMS - stores the items inside each order (columns: a link to ORDER, a link to PRODUCT, a quantity, a price, and other related columns such as a discount)
PRODUCT - stores the catalog of products to buy
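A minimal Sequelize sketch of that shape, using a belongsToMany association through an OrderItem join model. The model and column names here are assumptions rather than the asker's actual models:

const { Sequelize, DataTypes } = require('sequelize');
const sequelize = new Sequelize('postgres://user:pass@localhost:5432/shop'); // placeholder connection string

const Order = sequelize.define('Order', {
  // order-wide columns (customer, status, ...) go here
});

const Product = sequelize.define('Product', {
  name: DataTypes.STRING,
  price: DataTypes.DECIMAL
});

const OrderItem = sequelize.define('OrderItem', {
  quantity: { type: DataTypes.INTEGER, defaultValue: 1 },
  price: DataTypes.DECIMAL              // price at the time of purchase
});

Order.belongsToMany(Product, { through: OrderItem });
Product.belongsToMany(Order, { through: OrderItem });

// Usage: attach several products to one order, e.g. products 1, 3 and 5.
// const order = await Order.create();
// await order.addProducts([1, 3, 5], { through: { quantity: 1 } });

With this layout the purchases table no longer needs an array-like product column; each purchased product becomes one row in the join table.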

Vlookup/Display one-to-many relationship in excel

I have a massive training catalogue for employees.
Each employee has a job position (example: Accountant).
A lawyer, for example, has to take multiple courses in order to become an accountant.
What I'm trying to do is input the employee's job position in one cell and have it display ALL of the courses they need to undertake.
VLOOKUP only displays the first course, and I don't know how to do this in Excel (I would normally use data visualization software...).

How to remove duplicate entries within a column in a database with a large number of columns

I need to create an Excel database for a simulation with about 1000 users. Each user has a shopping list with between 0 and 100 items.
I have separately created a random list of shopping items with 300 items in total.
What I would like to know is: how do I randomly assign shopping items to each user without repeating an item in a user's shopping bag (carrots, for example, can appear only once in a given bag), given that each user might have a different number of items in their bag? Thank you for your help, Sue
This is exactly like shuffling and dealing playing cards (with 300 items rather than 52 cards):
For each user:
randomize the items
pick a random number of samples
assign that user those samples
repeat for the next user
Using this approach, a given user cannot have any item repeated (just as a player cannot be dealt two threes of hearts). Therefore, removing duplicates is not an issue.
This can easily be simulated in Excel; a code sketch of the same idea follows below.
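The answer recommends doing this in Excel, but the logic is language-agnostic; here is a minimal JavaScript sketch of the same shuffle-and-deal idea (the 300 items, 1000 users and 0-100 bag size come from the question, everything else is illustrative):

// Shuffle-and-deal: each user gets a random, duplicate-free subset of the items.
function shuffle(array) {
  // Fisher-Yates shuffle; returns a new array and leaves the input untouched.
  const copy = array.slice();
  for (let i = copy.length - 1; i > 0; i--) {
    const j = Math.floor(Math.random() * (i + 1));
    [copy[i], copy[j]] = [copy[j], copy[i]];
  }
  return copy;
}

function dealShoppingBags(items, userCount, maxItems) {
  const bags = [];
  for (let u = 0; u < userCount; u++) {
    const bagSize = Math.floor(Math.random() * (maxItems + 1)); // 0..maxItems items
    bags.push(shuffle(items).slice(0, bagSize));                // duplicates are impossible
  }
  return bags;
}

// Example: 300 items, 1000 users, 0-100 items per bag.
const items = Array.from({ length: 300 }, (_, i) => 'item_' + (i + 1));
const bags = dealShoppingBags(items, 1000, 100);
console.log(bags[0]);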

Cassandra data modeling for one-to-many lookup

Consider the problem of storing users and their contacts. There are about 100 million users, each has a few hundred contacts, and on average a contact is about 1 KB in size. There may be some users with far too many contacts (>5000) and there may be some contacts that are much (say 10x) bigger than the 1 KB average. Users actively add contacts and, less often, also delete them. Contacts are not pointers to other users but just a bundle of information.
There are two kinds of queries -
Given a user and a contact name, lookup the contact details
Given a user, look up all associated contact names
I was thinking of a contacts table like this -
CREATE TABLE contacts (
    user_name text,
    contact_name text,
    contact_details map<text, text>,
    PRIMARY KEY ( (user_name, contact_name) )
    // ^ Notice the composite primary key
);
The choice of composite primary key is due to the number and size of contacts per user. I wanted one contact per row.
This table easily addresses the query of looking up a contact's details given a user and a contact name.
I'm looking for suggestions to address the second query.
Two options (with related concerns) on my mind -
Create a second table called contact_names_by_user, with user_name as the partition key and contact_name as a clustering key. Concern: if there is a user with way too many contacts (say 20k), would that result in a non-optimally wide row?
Create an index on user_name. Concern: given the ratio of the total number of users (100M) to the average number of contacts per user (say 200), would that column be considered high-cardinality and hence bad for indexing?
In general, are there guidelines for looking up many items (like contacts here) referenced by one item (like a user here) without running into wide rows or non-optimal indexes?
Creating the index itself should not be a problem, IMHO. An average cardinality of 200 sounds good.
The other option is to maintain your own index, like:
CREATE TABLE contacts_by_user (
    user_name text PRIMARY KEY,
    contacts set<text>   // contact names, maintained by the application
);
though your index and contacts can go out of sync.
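For completeness, the first option from the question (a contact_names_by_user table with contact_name as a clustering column) could look like this, driven from Node.js to match the rest of this page. The cassandra-driver usage, keyspace and contact point are assumptions:

// Sketch of option 1: partition by user_name, cluster by contact_name.
// Keyspace, contact point and data-center name are placeholders.
const cassandra = require('cassandra-driver');

const client = new cassandra.Client({
  contactPoints: ['127.0.0.1'],
  localDataCenter: 'datacenter1',
  keyspace: 'contacts_app'
});

async function run() {
  await client.execute(
    'CREATE TABLE IF NOT EXISTS contact_names_by_user (' +
    '  user_name text,' +
    '  contact_name text,' +
    '  PRIMARY KEY (user_name, contact_name))'   // user_name = partition key, contact_name = clustering key
  );

  // The second query from the question: all contact names for one user.
  const result = await client.execute(
    'SELECT contact_name FROM contact_names_by_user WHERE user_name = ?',
    ['alice'],
    { prepare: true }
  );
  console.log(result.rows.map(function (row) { return row.contact_name; }));
}

run().catch(console.error).finally(function () { client.shutdown(); });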

Resources