Power Pivot many-to-many relationship between two tables - Excel

As you can see from the image, I have a one-to-many relationship between these two tables, but I want to make it a many-to-many. I'm using AssetID as the key for these relationships.
Any ideas on how I could create this?
The reason I need it as a many-to-many is that I'm using this in Power View, with column headers as slicers. For example, if I select Windows 7 in the tblOperatingSystem slicer, the slicer I use for tblAssets only displays what is relevant to Windows 7, whereas I want to be able to do the opposite: select in the tblAssets slicer and have only the operating systems relevant to that selection appear in the tblOperatingSystem slicer.
I have already tried creating a new table that just has AssetID and then connecting tblAssets and tblOperatingSystem to it, but this method doesn't work for the slicers.
Any ideas around this?

If I'm understanding the issue correctly, this is down to a limitation of PowerPivot (and the SSAS tabular model): it is unable to properly model many-to-many relationships. The relationship can be enforced in one direction (as you can see in your OS slicer), but doesn't work in the other direction.
A way I've managed to work around this in PowerPivot/Power View in the past is to create an additional, de-normalised table that contains all possible combinations of OS and Asset, with a new identity column (or a concatenation of OSID and AssetID) as a key. Configure the one-to-many relationships to tblOperatingSystem and tblAssets as required.
The critical part is to include your data columns in this table as well, using DAX functions to populate the values. You can then use this new de-normalised table as the source for both of your slicers (and hide the originals from the client), and they will automatically filter each other when one is selected.
Now, it's not terribly efficient as there's a lot of duplication, so if anyone else can suggest another way to achieve this, I'd be interested to hear it myself! Just beware of using this with really large data models, as it can slow things down a lot.
Alternatively, I came across this article (which contains good links to similar posts by Marco Russo and Alberto Ferrari) but I haven't tried it out, so I'm not sure how well it plays with PowerView, since both source articles pre-date PV.

PowerPivot doesn't support many-to-many relationship modelling natively, but you can emulate it using DAX. All you need to do is list the related many-to-many tables in your measure's CALCULATE statement. For example (from http://gbrueckl.wordpress.com/2012/05/08/resolving-many-to-many-relationships-leveraging-dax-cross-table-filtering/), given a layout like:
Then to write a measure on the Audience table that counts the number of rows but takes into account the filtering on the Targets table you would write:
-- Base measure, per the text above: a simple row count of the Audience table
RowCount := COUNTROWS('Audience')

-- Listing the bridge and far tables as filter arguments makes their filter
-- contexts propagate, so a selection on Targets reaches Audience
RowCount_M2M := CALCULATE(
    [RowCount],
    'Individuals',
    'TargetsForIndividuals',
    'Targets')
By listing the other tables, their filter contexts will overlap and you get the join you are looking for.

Related

Spotfire for cross fact table joins using conformed dimensions

A few years ago I ascertained that Spotfire cannot perform multi-fact-table queries using conformed dimensions a la Ralph Kimball - like Tableau, where this is still the case.
Is this still so? Most people I speak to are not aware of this. I am not in a position to quickly assess this, hence my question.
If you are reading from a DB, you can create custom information links using SQL (or what Spotfire calls SQL; it's a little different) that can certainly join multiple fact tables together through conformed dimensions. These may perform well or poorly depending on the amount of data and the structure of the tables in question.
You can also 'join' fact tables across dimensions (or directly to each other if you have the right keys) within the tool itself. These are called relations and work under the same principles, but don't kick off joined SQL statements.
If you create a view in the DB that does the joins as you have said, Spotfire can read those as well into an information link.

Create Mongoose Schema Dynamically for e-commerce website in Node

I would like to ask a question about a possible solution for an e-commerce database design in terms of scalability and flexibility.
We are going to use MongoDB and Node on the backend.
I included an image for you to see what we have so far. We currently have a Products table that can be used to add a product into the system. The interesting part is that we would like to be able to add different types of products to the system with varying attributes.
For example, in the admin management page, we could select a Clothes item, for which we fill out a form with fields such as Height, Length, Size, etc. The question is how we could model this kind of structure in the database design.
What we were thinking was creating tables such as ClothesProduct (and many more) and connecting the Products table to the relevant one. But we could end up with 100 different tables for the varying product types, and we would like to add a product type dynamically from the admin management page. Is this possible in Mongoose? Creating all possible fields in the Products table is not efficient, and it would hit us hard in the long term.
Database design snippet
Maybe we should just create separate tables for each unique product type and from the front-end, we would select one of them to display the correct form?
Could you please share your thoughts?
Thank you!
We've got a Mongoose backend that I've been working on since its inception about 3 years ago. Here are some of my lessons:
MongoDB is NoSQL: by linking all these objects by ID, it becomes very painful to find all products of "Shop A": you would have to make many queries before getting the list of products for a particular shop (shop -> brand -> category -> subCategory -> product). Consider nesting certain objects inside other objects (e.g. subcategories inside categories, as they are semantically the same; see the sketch below). This will save immense loading time.
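To make the nesting idea concrete, here is a minimal Mongoose sketch; the model and field names are my assumptions for illustration, not the poster's actual schema:

const mongoose = require('mongoose');
const { Schema } = mongoose;

// Subcategories are embedded in their category rather than referenced by ID
const CategorySchema = new Schema({
  name: String,
  subCategories: [{ name: String }],
});

const Category = mongoose.model('Category', CategorySchema);

// One query returns a category together with its subcategories:
// await Category.find({ 'subCategories.name': 'Shirts' });

Because the subcategories travel with their parent document, the chain of ID lookups collapses into a single read.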
Dynamically created product fields: we built a (now) big module that allows users to create their own database keys & values and assign them to different objects. In essence, it looks something like this:
const SpecialFieldModel = new Schema({
  // ...,
  key: String,
  value: String,
  // ...,
});
This way, your users can "make their own products".
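As a hedged sketch of how such fields might hang off a product (reusing the imports above; the Product model and its fields are illustrative assumptions):

// Each product carries its own list of admin-defined key/value fields,
// so a new "product type" needs no schema change
const ProductSchema = new Schema({
  name: String,
  specialFields: [{ key: String, value: String }],
});

const Product = mongoose.model('Product', ProductSchema);

// A "clothes" product built from a form the admin designed
const shirt = new Product({
  name: 'Basic Tee',
  specialFields: [
    { key: 'Size', value: 'M' },
    { key: 'Length', value: '70 cm' },
  ],
});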
Number of products: MongoDB queries can handle huge data loads, so I wouldn't worry too much about some tables being thousands of objects large. However, if you want large reports on all the data, you will need to make sure your IDs are in the right places. Then you can use the Aggregation framework to construct big queries that tie together multiple collections in the db and fetch the data efficiently, as in the sketch below.
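A minimal Aggregation framework sketch, reusing the hypothetical Product model above and assuming each product stores its shop's ObjectId in a shop field:

// Count products per shop by joining the shops collection in the same db
async function productCountPerShop() {
  return Product.aggregate([
    { $lookup: {
        from: 'shops',         // collection to join against
        localField: 'shop',    // ObjectId stored on the product
        foreignField: '_id',
        as: 'shopDoc',
    } },
    { $unwind: '$shopDoc' },
    { $group: { _id: '$shopDoc.name', productCount: { $sum: 1 } } },
  ]);
}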
Don't reference IDs in both directions, unless you really know what you're doing: saving a reference to the category ID in subcategories and vice-versa is incredibly confusing. Which field do you have to update if you want to switch subcategories? One or the other? Or both? Even with strong tests, it can be very confusing for new developers to understand "which direction the queries run in" (if you are building a product that might have to be extended in the future). We've done both, which has led to a few problems. However, the modules that saved references to upper objects (rather than lower ones) I have consistently found more pleasant and simple to work with; a sketch follows below.
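A minimal sketch of the one-direction version (same imports as above; names assumed): only the child holds a reference, so moving a subcategory means updating exactly one field.

// The subcategory points at its parent; the category stores no child IDs
const SubCategorySchema = new Schema({
  name: String,
  category: { type: Schema.Types.ObjectId, ref: 'Category' },
});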
created/updatedAt: Consider adding these fields to every single model & Schema. This will help with debugging, extensibility, and general features that you will be able to build in the future, which might otherwise be impossible. (ProductSchema.set('timestamps', true);)
Take my advice with a grain of salt, as I haven't designed most of our modules. But these are the sorts of things I consider as I continue working on our applications.

NoSQL - how to implement autosuggest and best matches properly?

We're building a database of cars and their properties, to be stored in DynamoDB.
Creating a cars table and filling it with objects that have properties like brand, model, year, etc. is easy.
But we also want a few other features in the admin interface:
Suggestions when typing
When creating a car, it should suggest brand and model from existing cars as the user types in the field.
Should we then maintain a list of brands and models in another table, and make a query to that table, when the user types?
Or is it good enough to query the "rich" table of car definitions and get all values for brand, all model values where brand has a certain value, etc.? My first thought is that this would be a heavy operation and we'd want a separate index of brands and models. But I'm not a NoSQL expert...
Best matches
When enrolling a new car in our system, we want to use an existing defined car as a reference if possible.
So when the user has typed in a brand, model, year, etc., we want to show a few options of the best matches - we can accept that the year etc. is different, but want the best matches first.
What is the best way to do matches like this on data in a NoSQL database? Any links to tools, concepts etc. will be appreciated :)
Thanks in advance
In DynamoDB (and NoSQL in general), the fewer tables you create, the better your architecture (this is one of the main reasons we use NoSQL). So there is no need for a new table: just add a new attribute and fill it with the searchable data you want. Keep in mind that DynamoDB queries are case sensitive, and that you can only use the begins_with or contains functions to match against it (see the sketch after the list of cons below).
The cons are:
You will use a lot of read capacity units
You have to manage capital letters yourself
You have to build the searchable attribute on each creation
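A hedged sketch of such a query with the AWS SDK DocumentClient; the table name ("Cars"), the key names ("pk", "searchName"), and the single-partition layout are illustrative assumptions, not a recommended production design:

const AWS = require('aws-sdk');
const docClient = new AWS.DynamoDB.DocumentClient();

async function suggestModels(prefix) {
  const result = await docClient.query({
    TableName: 'Cars',
    // begins_with can only be applied to the sort key in a key condition
    KeyConditionExpression: 'pk = :pk AND begins_with(searchName, :prefix)',
    ExpressionAttributeValues: {
      ':pk': 'CAR',
      // write searchName lower-cased at creation time to sidestep case sensitivity
      ':prefix': prefix.toLowerCase(),
    },
  }).promise();
  return result.Items;
}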
The solution I suggest is using AWS CloudSearch, which gives you an out-of-the-box suggester; you will get better results and a better user experience. Indexing in CloudSearch is automatic each time you add a new item, but be aware of the pricing - although they do give you 30 days for free.

Designing indices to have paging with filters and random page jump Elasticsearch

I just want an expert opinion on my use case and the way I am planning to use indices, to see whether there is a problem with my approach or whether there are better ways to achieve it. Since I am new to ES, your opinions would really help me. We are storing data in CouchDB, in a different database for each type of data.
One database serves as a link between two others. For example, database A has 'floor' data, database B links floors to items, and then there is a separate database for each item type that a floor can have (e.g., card reader, camera, etc.).
We need to search for the items that are linked to a floor and retrieve them with filtering and paging. (Right now my links database has only IDs and types, but I am also planning to save the name for each type in the links db, so that I can filter while paging.)
The way I want to achieve filtering and paging in my datastore is to have an index for each db. Based on a floor, I'll get all its linked items for a type and a 'search filter' (from the index of the links db), which gives me a page of item IDs; I'll then use those IDs to get the full objects from the index of that item type's db.
Please let me know if there is a better approach to handling this - for example, whether I could create one index covering my floor, links, and item databases, and whether that is possible through the Logstash CouchDB plugin.
Many thanks.
Your setup does not sound wrong, but there are alternatives: you can use nested objects or parent-child relationships for an easier setup. Both approaches have their advantages; it all depends on the types of queries you would like to run and the number of items that are related.
I would start by reading the next section of the Definitive Guide, which should give you a good start.
https://www.elastic.co/guide/en/elasticsearch/guide/current/modeling-your-data.html?q=model
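For what it's worth, here is a minimal sketch of the nested-object option using the official @elastic/elasticsearch client (index and field names are assumptions): each floor document embeds its items, and a single nested query can then filter and page them without a second round trip.

const { Client } = require('@elastic/elasticsearch');
const client = new Client({ node: 'http://localhost:9200' });

async function createFloorsIndex() {
  await client.indices.create({
    index: 'floors',
    body: {
      mappings: {
        properties: {
          name: { type: 'keyword' },
          items: {
            type: 'nested', // items remain queryable as separate hidden docs
            properties: {
              type: { type: 'keyword' }, // e.g. "camera", "card_reader"
              name: { type: 'keyword' },
            },
          },
        },
      },
    },
  });
}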

How to perform intersection operation on two datasets in Key-Value store?

Let's say I have 2 datasets, one for rules, and the other for values.
I need to filter the values based on rules.
I am using a key-value store (Couchbase, Cassandra, etc.). I can use multi-get to retrieve all the values from one table and all the rules from the other, and perform the validation in a loop.
However, I find this very inefficient: I move a massive volume of data (the values) over the network, and the client is kept busy filtering.
What is the common pattern for finding the intersection between two tables in a key-value store?
The idea behind the NoSQL data model is to write data in a denormalized way, so that a table can answer a precise query. As an example, imagine you have reviews made by customers on shops. You need to know the reviews made by a user on shops, and also the reviews received by a shop. This would be modeled using two tables:
ShopReviews
UserReviews
You query the first table by shop id and the second by user id; the data are written twice, but each query is a direct key access.
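The same idea in a tiny JavaScript sketch, with two Maps standing in for the two denormalized tables (illustrative only; in a real store these would be writes to Couchbase or Cassandra):

const shopReviews = new Map(); // shopId -> reviews for that shop
const userReviews = new Map(); // userId -> reviews by that user

// Fan-out on write: the review is stored under both keys,
// so each later read is a single direct key lookup - no client-side join
function addReview(review) {
  const byShop = shopReviews.get(review.shopId) || [];
  byShop.push(review);
  shopReviews.set(review.shopId, byShop);

  const byUser = userReviews.get(review.userId) || [];
  byUser.push(review);
  userReviews.set(review.userId, byUser);
}

addReview({ shopId: 's1', userId: 'u1', text: 'Great shop' });
// shopReviews.get('s1') -> all reviews for shop s1
// userReviews.get('u1') -> all reviews by user u1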
In the same way, you should organize values by rules (I can't be more precise without knowing the relation between them) and so on. One more consideration: newer versions of NoSQL databases support collections, which might help to model one-to-many relations.
HTH, Carlo
