Join feature for two different indexes - Azure Search

We need data from two different Azure Search indexes. Since we were not able to find any option to join indexes, we are currently replicating data from the different indexes into a new index. Because of this, we are facing issues keeping the redundant data in sync across multiple indexes, as well as the extra cost of maintaining the data in the new index.
Is there a better option than our current solution for this use case?

Related

How to use Azure Search Service with heterogeneous data sources

I have worked on Azure Search service previously where I created an indexer directly on a SQL DB in the Azure Portal.
Now I have a use case where I want to ingest from multiple data sources, each with a different data schema. Assume these data sources are 3 search APIs belonging to teams X, Y and Z. All of them take a search term and give back results in their own schema. I want my Azure Search Service to be a proxy for these, so that I have one search API that a user can use to get results from multiple sources, ordered correctly.
How should I go about doing it? I assume that I might have to create a common schema, and whenever a user searches something, I would call these 3 APIs, get the results, map them to the common schema and then index the data into an Azure Search index. Finally, I would call this Azure Search API to give the results back to the caller.
I would appreciate any help! If I can get hold of better documentation for doing this, that would be great as well.
Your assumption is correct. You can work with 3 different indexes and fire queries against each of them, or you can try to combine all of them in the same index. The benefit of the second approach is that ordering and paging are easier to implement, as all the information is stored in the same index.
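For illustration, here is a rough Python sketch of the first option (separate indexes queried from the client), assuming the azure-search-documents SDK and three hypothetical indexes named team-x, team-y and team-z. Merging by @search.score is naive, since scores from different indexes are not directly comparable, which is exactly why a single combined index gives better ordering and paging.

from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient

endpoint = "https://<your-service>.search.windows.net"
credential = AzureKeyCredential("<query-api-key>")

# One client per team index (the index names are assumptions for this sketch)
clients = {
    name: SearchClient(endpoint=endpoint, index_name=name, credential=credential)
    for name in ("team-x", "team-y", "team-z")
}

def federated_search(term, top=10):
    # Query every index, then merge client-side by relevance score
    hits = []
    for name, client in clients.items():
        for doc in client.search(search_text=term, top=top):
            hits.append((doc["@search.score"], name, doc))
    hits.sort(key=lambda h: h[0], reverse=True)  # naive cross-index ranking
    return hits[:top]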
It really depends on what you mean by ordered correctly. Should team X be able to see results from teams Y and Z? The only way you can get ranked results like this is to maintain a single index with a common schema containing data from all teams.
One potential pitfall with this approach is conflicts in the schema, for example if one team requires a field to be of a specific datatype or to use a specific analyzer while another team has different requirements. We do this in our indexes, but with some carefully selected common fields, plus dedicated fields prefixed according to our own naming convention to avoid conflicts.
One thing to consider is the need to reset the index. If you need to add, change or remove fields you will have to delete the index and create it again with a new schema. If you have a common index and team X needs to add a new property, you would need to reset (delete and create) the common index which affects all teams.
So, creating separate indexes per team has its benefits. Each team can have their own schema without risk of conflicts and they can reset their index without affecting the other teams.
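As a concrete illustration of the common-schema approach, below is a minimal Python sketch assuming the azure-search-documents SDK and a pre-created index named combined-search with fields id, team, title, body and x_score; the team APIs and field names are hypothetical placeholders.

from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient

search_client = SearchClient(
    endpoint="https://<your-service>.search.windows.net",
    index_name="combined-search",
    credential=AzureKeyCredential("<admin-api-key>"),
)

def to_common_schema(raw, team):
    # Map one team-specific result to the shared schema; team-only fields
    # are prefixed (e.g. x_score) to avoid datatype/analyzer conflicts.
    doc = {
        "id": f"{team}-{raw['id']}",
        "team": team,
        "title": raw.get("title") or raw.get("name"),
        "body": raw.get("description", ""),
    }
    if team == "X":
        doc["x_score"] = raw.get("score")  # hypothetical team-specific field
    return doc

# Ingestion: fetch from each team's API (fetch_x_results is a placeholder),
# normalize to the common schema, and upload to the combined index:
# docs = [to_common_schema(r, "X") for r in fetch_x_results(term)]
# search_client.upload_documents(documents=docs)

# Querying the combined index then returns results from all teams,
# ranked and paged together:
# results = search_client.search(search_text=term, top=20)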

Power BI data source, optimal query option: Copy, Duplicate or Reference?

I have a PBI file in which I made a Power Query with several steps (such as merging two files).
I would like to produce several output files originating from this query, but with specific steps for each of these files (I made a series of specific changes to the data in these queries).
When I refresh my PBI file, I would like all of the queries, from the original one down to the three queries derived from it, to be updated.
I would also like any new step I add to the original query to be reflected in the three derived queries.
So far I have used Copy:
I took my original query, right-clicked and simply used the "Copy" option. However, this also duplicates the previously merged files used to create the query.
I see that there are also the "Duplicate" and "Reference" options in Power BI.
I tried doing some research and I read the following on "Duplicate":
"Duplicate copies a query with all the applied steps of it as a new query; an exact copy."
To me this seemed exactly the same as a "Copy", so I expected that I would get a copy of the previously merged files when I duplicated the query. But no: I tested it and only the selected query got duplicated.
When I tested "Reference", however, my query appeared, but only the result this time (not the data used to create it), and it had no steps. When I try to click on "Source", I cannot "see" the source.
I am thus puzzled as to the best way forward and, more broadly, the best practices to adopt.
Which option should I choose so that PBI performs the same steps each time I refresh my source, i.e. merging the two files and then applying a series of specific steps to three copies of that source?
I suspect you want to do the following:
Load Universe 1 and Universe 2
Merge them into a single table Merge1
Create three new queries that reference Merge1
Create specific steps for each of the new queries
This way, each of the new queries starts at exactly the same place without having to load Universe 1 and Universe 2 three separate times.
If you go to your Query Dependencies view, you should see Universe 1 and Universe 2 feeding into Merge1, which in turn feeds the three new queries.
Notice that I've disabled load (to model) for the top 3 tables, since we probably only need them as staging tables.

How can I implement an iterative optimization problem in Spark

Assume I have the following two sets of data. I'm attempting to associate products on hand with their rolled-up tallies. A rolled-up tally may cover products from multiple categories, with a primary and an alternate category. In a relational database I would load the second set of data into a temporary table and use a stored procedure to iterate through the rollup data, decrementing the quantities until they were zero or I had matched the tallies. I'm trying to implement a solution in Spark/PySpark and I'm not entirely sure where to start. I've attached a possible output solution that I'm trying to achieve, though I recognize there are multiple outputs that would work.
#Rolled Up Quantities#
owner,category,alternate_category,quantity
ABC,1,4,50
ABC,2,3,25
ABC,3,2,15
ABC,4,1,10
#Actual Stock On Hand#
owner,category,product_id,quantity
ABC,1,123,30
ABC,2,456,20
ABC,3,789,20
ABC,4,012,30
#Possible Solution#
owner,category,product_id,quantity
ABC,1,123,30
ABC,1,012,20
ABC,2,456,20
ABC,2,789,5
ABC,3,789,15
ABC,4,012,10
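One way to approach this (a sketch, not a tested solution) is to group both datasets by owner and run a greedy allocation per owner with a cogrouped pandas UDF (available since Spark 3.0). Each rollup row is filled from its primary category first and then from its alternate category, which reproduces the possible solution above for this sample data; note that the processing order of the rollup rows matters, so a deterministic ordering may be needed for real data.

import pandas as pd
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Sample data from above (product_id kept as a string to preserve "012")
rollup = spark.createDataFrame(
    [("ABC", 1, 4, 50), ("ABC", 2, 3, 25), ("ABC", 3, 2, 15), ("ABC", 4, 1, 10)],
    ["owner", "category", "alternate_category", "quantity"],
)
stock = spark.createDataFrame(
    [("ABC", 1, "123", 30), ("ABC", 2, "456", 20), ("ABC", 3, "789", 20), ("ABC", 4, "012", 30)],
    ["owner", "category", "product_id", "quantity"],
)

def allocate(rollup_pdf: pd.DataFrame, stock_pdf: pd.DataFrame) -> pd.DataFrame:
    # Greedy allocation for one owner: fill each rollup row from its primary
    # category first, then its alternate, decrementing the remaining stock.
    rollup_pdf = rollup_pdf.sort_values("category")  # deterministic order
    remaining = stock_pdf.copy()
    rows = []
    for r in rollup_pdf.itertuples(index=False):
        needed = r.quantity
        for cat in (r.category, r.alternate_category):
            for i in remaining.index[remaining["category"] == cat]:
                if needed <= 0:
                    break
                take = min(remaining.at[i, "quantity"], needed)
                if take <= 0:
                    continue
                rows.append((r.owner, r.category, remaining.at[i, "product_id"], take))
                remaining.at[i, "quantity"] -= take
                needed -= take
    return pd.DataFrame(rows, columns=["owner", "category", "product_id", "quantity"])

result = (
    rollup.groupBy("owner")
    .cogroup(stock.groupBy("owner"))
    .applyInPandas(allocate, schema="owner string, category long, product_id string, quantity long")
)
result.show()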

Using Cognos 10.1, which is better: an Inner Join or an "IN" Filter?

I'm using Cognos 10.1 and I have a report that uses two queries each with the same primary key.
Query 1: UniqueIds
Query 2: DetailedInfo
I'm not sure how to tell whether it's better to build a report using the DetailedInfo query with a filter that says PrimaryKey in (UniqueIds.PrimaryKey), or whether I should create a third query that joins UniqueIds to DetailedInfo on PrimaryKey.
I'm new to Cognos and I'm learning to think differently. Using Microsoft SQL Server I'd just use an inner join.
So my question is: in Cognos 10.1, which way is better, and how can I tell what the performance differences are?
You'd be better off starting from the beginning.
Your queries (I hope Query Subjects) should be joined in Framework Manager, in the model. Then you can easily filter the second query by applying filters to the first.
Joins in Report Studio should be the last resort.
The report writer's ultimate weapon is a well-indexed data warehouse with a solid Framework Manager model built on top.
You want all of your filtering and joining to happen on the database side as much as possible. If not, then large data sets are brought over to the Cognos server before they are joined and filtered by Cognos.
The more work that happens on the database, the faster your reports will be. By building your reports in certain ways, you can mitigate Cognos side processing, and promote database side processing.
The first and best way to do this is with a good Framework Model, as Alexey pointed out. This will allow your reports to be simpler, and pushes most of the work to the database.
However a good model still exposes table keys to report authors so that they can have the flexibility to create unique data sets. Not every report warrants a new Star Schema, and sometimes you want to join the results of queries against two different Star Schema sources.
When using a join or a filter, Cognos attempts to push all of the work to the database by default. It wants to receive only the final data set, and nothing else.
However, when creating your filters, you have two ways of defining variables: with explicit names that refer to modeled data sources (i.e. [Presentation View].[Sales].[Sales Detail].[Net Profit]) or by referring to a column in the current data set (such as [Net Profit]). Using explicit columns from the model will help ensure the filters are applied at the database.
Sometimes that is not possible, such as with a calculated column. For example, if you don't have Net Profit in your database or within your model, you may establish it with a calculated column. If you filter on [Net Profit] > 1000, Cognos will pull the data set into Cognos before applying your filter. Your final result will be the same, but depending on the size of the data before and after the filter is applied, you could see a performance decrease.
It is possible to have nested queries within your report, and Cognos will generate a single giant SQL statement for the highest-level query, which includes subqueries for all the lower-level data. You can generate the SQL/MDX in order to see how Cognos is building the queries.
Also, try experimenting. Save your report with a new name, try it one way and time it. Run it a few times and take an average execution speed. Time it again with the alternate method and compare.
With smaller data sets, you are unlikely to see any difference. The larger your data set gets, the bigger the difference your chosen method will make to the report's speed.
Use joins to merge two queries together so that columns from both queries can be used in the report. Use IN() syntax if your only desire is to filter one query using the existence of corresponding rows in a second. That said, there are likely to be many cases that both methods will be equally performant, depending on the number of rows involved, indexes etc.
By the way, within a report Cognos only supports joins and unions between different queries. You can reference other queries directly in filters even without an established relationship, but I've seen quirks with this, such as it working when run interactively but not when scheduled or exported. I would avoid doing this in reports.

How to perform intersection operation on two datasets in Key-Value store?

Let's say I have 2 datasets, one for rules, and the other for values.
I need to filter the values based on rules.
I am using a key-value store (Couchbase, Cassandra, etc.). I can use a multi-get to retrieve all the values from one table and all the rules from the other, and perform the validation in a loop.
However, I find this very inefficient: I move a massive volume of data (the values) over the network, and the client is kept busy doing the filtering.
What is the common pattern for finding the intersection between two tables with a key-value store?
The idea behind the NoSQL data model is to write data in a denormalized way so that each table can answer a precise query. As an example, imagine you have reviews made by customers on shops. You need to know the reviews made by a user on shops, and also the reviews received by a shop. This would be modeled using two tables:
ShopReviews
UserReviews
In the first table you query by shop id, in the second by user id; the data is written twice but accessed directly using just a key lookup.
In the same way, you should organize your values by rule (I can't be more precise without knowing the relation between them), and so on. One more consideration: newer versions of NoSQL databases support collections, which might help to model one-to-many relations.
HTH, Carlo
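To make the write-twice idea concrete, here is a minimal in-memory Python sketch; the table and function names (shop_reviews/user_reviews, write_review, reviews_for_shop) are illustrative and not tied to any specific key-value API.

from collections import defaultdict

shop_reviews = defaultdict(list)  # "ShopReviews" table: key = shop_id
user_reviews = defaultdict(list)  # "UserReviews" table: key = user_id

def write_review(user_id, shop_id, rating, text):
    # Denormalized write: the same review is stored under both keys
    review = {"user_id": user_id, "shop_id": shop_id, "rating": rating, "text": text}
    shop_reviews[shop_id].append(review)
    user_reviews[user_id].append(review)

def reviews_for_shop(shop_id):
    return shop_reviews[shop_id]  # single key access, no join needed

def reviews_for_user(user_id):
    return user_reviews[user_id]  # single key access, no join needed

# Applied to the rules/values question: store the values keyed by the rule
# they belong to, so "filter values by rules" becomes a multi-get on rule
# keys instead of shipping both whole tables to the client.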
