Optimise conditional queries in Azure Cognitive Search

We hit a unique scenario while using Azure Search on one of our projects. Our clients want to respect users' privacy, so we have a feature where a user can restrict search on any PII data. If a user has opted for privacy, we can only search for him/her by UserID; otherwise we can search using Name, Phone, City, UserID, etc.
JSON where Privacy is opted:
{
  "Id": "<Any GUID>",
  "Name": "John Smith", //searchable
  "Phone": "9987887856", //searchable
  "OtherInfo": "some info", //non-searchable
  "Address" : {}, //searchable
  "Privacy" : "yes", //searchable
  "UserId": "XXX1234", //searchable
  ...
}
JSON where Privacy is not opted:
{
  "Id": "<Any GUID>",
  "Name": "Tom Smith", //searchable
  "Phone": "7997887856", //searchable
  "OtherInfo": "some info", //non-searchable
  "Address" : {}, //searchable
  "Privacy" : "no", //searchable
  "UserId": "XXX1234", //searchable
  ...
}
Now we provide a search service that takes any searchText as input and fetches all data that matches it (across all searchable fields).
With the above scenario:
We need to exclude results that have "Privacy" set to "yes" if searchText does not match the UserId.
If searchText matches the UserId, the record is included in the result.
If "Privacy" is set to "no" and searchText matches any searchable field, the record is included in the result.
So we have gone with the full Lucene query syntax to handle this at query time, resulting in a very long query as shown below. Let us assume searchText = "abc":
((Name: abc OR Phone: abc OR UserId: abc ...) AND Privacy: no) OR
((UserId: abc ) AND Privacy: yes)
This is done because we show paginated results, i.e. we bring back data in batches (1-10, 11-20, and so on), so each query fetches the top 10 records along with the total result count.
Is there a more optimised approach to do this?
Or does Azure Search provide any built-in mechanism for conditional queries?

If I understand your requirement correctly, it can be solved quite easily. You determine which properties are searchable in your data model, so you don't need to construct a complicated query that repeats the end user's input for every property, and you don't need to do any batching or post-processing of results.
If searchText is your user's input, you can use this:
(*searchText* AND Privacy:false)
This will search all searchable fields, but it will only return records that have allowed search in PII data.
You also have a requirement that users can search by UserId in all records, regardless of the record's PII setting. To support this, extend the query to:
(*searchText* AND Privacy:false) OR (UserId:*searchText*)
This allows users to search all fields in records where Privacy is false, and for all other records it allows search in the UserId only. This query pattern will solve all of your requirements with one optimized query.
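If it helps, here is a minimal sketch of how that query could be issued from code using the JavaScript SDK (@azure/search-documents); the service, index and key names are placeholders, and the field names simply follow the sample documents above:
const { SearchClient, AzureKeyCredential } = require("@azure/search-documents");

const client = new SearchClient(
  "https://<your-service>.search.windows.net",
  "<your-index>",
  new AzureKeyCredential("<query-key>")
);

async function searchUsers(searchText, page = 0) {
  // Full Lucene syntax so the Privacy/UserId clauses are honoured.
  // In real code, escape Lucene special characters in the user input first.
  const query = `(${searchText} AND Privacy:false) OR (UserId:${searchText})`;

  const response = await client.search(query, {
    queryType: "full",          // enable the full Lucene query syntax
    top: 10,                    // page size
    skip: page * 10,            // paging offset
    includeTotalCount: true     // total result count for your pagination
  });

  for await (const result of response.results) {
    console.log(result.document);
  }
  console.log("total:", response.count);
}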

From the client side you could dynamically set the "SearchFields" parameter as part of the query; that way, if the user has the Privacy flag set to true, only UserId is included in the available search fields.
https://learn.microsoft.com/en-us/dotnet/api/microsoft.azure.search.models.searchparameters.searchfields?view=azure-dotnet
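A rough sketch of that idea with the JavaScript SDK, inside an async function and reusing the client from the sketch above (privacyOptedIn is a hypothetical flag your application would already know for the user being searched):
// If the user may only be found by UserId, restrict the fields searched;
// otherwise let the query run across every searchable field.
const options = privacyOptedIn
  ? { searchFields: ["UserId"], top: 10, includeTotalCount: true }
  : { top: 10, includeTotalCount: true };

const results = await client.search(searchText, options);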

Related

NetSuite: Need to get value Of Landed Cost Category Line Field using Transaction Saved Search

I am trying to somewhat mimic NetSuite's Landed Cost feature using NetSuite's customization options (SuiteBuilder, SuiteScript, etc.) and then further extend the functionality according to my requirements.
For this I need to get, in script, the value of the "LANDED COST CATEGORY" line field of the item sublist on transaction records (like Bill, Purchase Order, etc.) using a saved search.
But in a saved search I was unable to find any column/scriptId that would give me the value of the LANDED COST CATEGORY line field. We are able to get this value using record.load().getValue(), but I need it from multiple transaction records and that approach may cause performance issues. So, can you please tell me how to access this value using a saved search?
I don't believe NetSuite exposes that field in saved searches at this time. The records browser in NetSuite lists all of the available search columns for Transaction searches; the internal id for that column is landedcostcategory, and it doesn't show up on the list.
However, if your goal is to get this information in SuiteScript, you can use the 'N/query' module. Pull up one of your Purchase Orders, open the JavaScript console (Ctrl+Shift+J) and try this:
require(['N/query'], (query) => {
    const suiteqlQuery = `SELECT
        transaction AS transaction_id,
        BUILTIN.DF(transaction) AS transaction_name,
        BUILTIN.DF(item) AS item_name,
        item AS item_id,
        landedcostcategory AS landedcostcategory_id,
        BUILTIN.DF(landedcostcategory) AS landedcostcategory_name
    FROM
        transactionline
    WHERE
        transaction = '<internal id of your PO here>'`;
    const results = query.runSuiteQL({query: suiteqlQuery}).asMappedResults();
    console.log(JSON.stringify(results, null, 2));
    /*
    Example output for results:
    [
      {
        "transaction_id": "12345",
        "transaction_name": "Purchase Order #PO123456",
        "item_name": "My Favorite iPod",
        "item_id": 1234,
        "landedcostcategory_id": 1,
        "landedcostcategory_name": "Duties & Tariffs"
      }
    ]
    */
});

Only one custom field been populated out of many

I have a document that I'd like to pre-populate. In the document the person's name is repeated multiple times, so I've set up text fields, given them all the same label (e.g. "CandidateName"), and set them all to not required and read only.
The reason they are all set to read only is that I set them programmatically when I call DocuSign via the API (TextTabs). As a result, only the final field is populated; all the previous ones are blank.
Cheers
If your template contains multiple fields that have the same tabLabel and you want to populate all of those fields with the same value by using the API, you need to prefix the tabLabel value with \\*.
For example, here's the JSON for the tabs portion of a CreateEnvelope request that would populate every field which has the label CandidateName with the value John Smith.
"tabs":
{
"textTabs": [
{
"tabLabel": "\\*CandidateName",
"value": "John Smith"
}
]
}
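For reference, here is a rough sketch of how those tabs might be attached when creating the envelope from a template with the docusign-esign Node SDK; the account, template and role details are placeholders and your setup may differ:
const docusign = require("docusign-esign");

const textTab = docusign.Text.constructFromObject({
  tabLabel: "\\*CandidateName",   // the \* prefix targets every tab with this label
  value: "John Smith"
});

const signer = docusign.TemplateRole.constructFromObject({
  email: "candidate@example.com",
  name: "John Smith",
  roleName: "Candidate",          // must match the role defined on the template
  tabs: docusign.Tabs.constructFromObject({ textTabs: [textTab] })
});

const envelopeDefinition = docusign.EnvelopeDefinition.constructFromObject({
  templateId: "<your-template-id>",
  templateRoles: [signer],
  status: "sent"
});

// const envelopesApi = new docusign.EnvelopesApi(apiClient);
// await envelopesApi.createEnvelope("<account-id>", { envelopeDefinition });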

Azure Stream Processing upsert to DocumentDB with array

I'm using Azure Stream Analytics to copy my JSON over to DocumentDB, using upsert to overwrite the document with the latest data. This is great for my base data, but I would love to be able to append the list data, as unfortunately I can only send one list item at a time.
In the example below, the document is matched on id, and all items are updated, but I would like the "myList" array to keep growing with the "myList" data from each document (with the same id). Is this possible? Is there any other way to use Stream Analytics to update this list in the document?
I'd rather steer clear of using a tumbling window if possible, but is that an option that would work?
Sample documents:
{
  "id": "1234",
  "otherData": "example",
  "myList": [{"listitem": 1}]
}
{
  "id": "1234",
  "otherData": "example 2",
  "myList": [{"listitem": 2}]
}
Desired output:
{
  "id": "1234",
  "otherData": "example 2",
  "myList": [{"listitem": 1}, {"listitem": 2}]
}
My current query:
SELECT id, otherData, myList INTO [myoutput] FROM [myinput]
Currently arrays are not merged; this is the existing behaviour of the DocumentDB output from ASA, as also mentioned in this article. I doubt using a tumbling window would help here.
Note that changes in the values of array properties in your JSON document result in the entire array getting overwritten, i.e. the array is not merged.
You could transform the input that comes in as an array (myList) into individual array elements (one row per element) using the GetArrayElements function.
Your query might look something like:
SELECT i.id , i.otherData, listItemFromArray
INTO myoutput
FROM myinput i
CROSS APPLY GetArrayElements(i.myList) AS listItemFromArray
cheers!

How do I keep existing data in couchbase and only update the new data without overwriting

Say I have created some records/documents in a bucket, and the user updates only one column out of 10 in the RDBMS, so I am trying to send only that one column's data and update it in Couchbase. But the problem is that Couchbase is overwriting the entire record and putting NULLs in the rest of the columns.
One approach is to fetch the existing record from Couchbase, copy all of its data, and then overwrite only the new column. But that doesn't look like an optimal approach.
Any suggestions?
You can use N1QL UPDATE statements (search for Couchbase N1QL).
UPDATE replaces a document that already exists with updated values.
update:
UPDATE keyspace-ref [use-keys-clause] [set-clause] [unset-clause] [where-clause] [limit-clause] [returning-clause]
set-clause:
SET path = expression [update-for] [ , path = expression [update-for] ]*
update-for:
FOR variable (IN | WITHIN) path (, variable (IN | WITHIN) path)* [WHEN condition ] END
unset-clause:
UNSET path [update-for] (, path [ update-for ])*
keyspace-ref: Specifies the keyspace for which to update the document.
You can add an optional namespace-name to the keyspace-name in this way:
namespace-name:keyspace-name.
use-keys-clause: Specifies the keys of the data items to be updated. Optional. Keys can be any expression.
set-clause: Specifies the value for an attribute to be changed.
unset-clause: Removes the specified attribute from the document.
update-for: The update-for clause uses the FOR statement to iterate over a nested array and SET or UNSET the given attribute for every matching element in the array.
where-clause: Specifies the condition that needs to be met for data to be updated. Optional.
limit-clause: Specifies the greatest number of objects that can be updated. This clause must have a non-negative integer as its upper bound. Optional.
returning-clause: Returns the data you updated as specified in the result_expression.
RBAC Privileges
The user executing the UPDATE statement must have the Query Update privilege on the target keyspace. If the statement has any clauses that need to read data, such as a SELECT clause or a RETURNING clause, then the Query Select privilege is also required on the keyspaces referred to in those clauses. For more details about user roles, see Authorization.
For example,
To execute the following statement, user must have the Query Update privilege on travel-sample.
UPDATE `travel-sample` SET foo = 5
To execute the following statement, user must have the Query Update privilege on the travel-sample and Query Select privilege on beer-sample.
UPDATE `travel-sample`
SET foo = 9
WHERE city = (SELECT raw city FROM `beer-sample` WHERE type = "brewery")
To execute the following statement, user must have the Query Update privilege on `travel-sample` and Query Select privilege on `travel-sample`.
UPDATE `travel-sample`
SET city = "San Francisco"
WHERE lower(city) = "sanfrancisco"
RETURNING *
Example
The following statement changes the "type" of the product, "odwalla-juice1" to "product-juice".
UPDATE product USE KEYS "odwalla-juice1" SET type = "product-juice" RETURNING product.type
"results": [
{
"type": "product-juice"
}
]
This statement removes the "type" attribute from the "product" keyspace for the document with the "odwalla-juice1" key.
UPDATE product USE KEYS "odwalla-juice1" UNSET type RETURNING product.*
"results": [
{
"productId": "odwalla-juice1",
"unitPrice": 5.4
}
]
This statement unsets the "gender" attribute in the "children" array for the document with the key, "dave" in the tutorial keyspace.
UPDATE tutorial t USE KEYS "dave" UNSET c.gender FOR c IN children END RETURNING t
"results": [
{
"t": {
"age": 46,
"children": [
{
"age": 17,
"fname": "Aiden"
},
{
"age": 2,
"fname": "Bill"
}
],
"email": "dave#gmail.com",
"fname": "Dave",
"hobbies": [
"golf",
"surfing"
],
"lname": "Smith",
"relation": "friend",
"title": "Mr.",
"type": "contact"
}
}
]
Starting version 4.5.1, the UPDATE statement has been improved to SET nested array elements. The FOR clause is enhanced to evaluate functions and expressions, and the new syntax supports multiple nested FOR expressions to access and update fields in nested arrays. Additional array levels are supported by chaining the FOR clauses.
Example
UPDATE default
SET i.subitems = ( ARRAY OBJECT_ADD(s, 'new', 'new_value' )
FOR s IN i.subitems END )
FOR s IN ARRAY_FLATTEN(ARRAY i.subitems
FOR i IN items END, 1) END;
If you're using structured (JSON) data, you need to read the existing record, update the field you want in your program's data structure, and then send the record up again. You can't update individual fields in the JSON structure without sending it all up again; there isn't a way around this that I'm aware of.
It is indeed true that to update individual items in a JSON doc, you need to fetch the entire document and overwrite it.
We are working on adding individual item updates in the near future.
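If you do go the fetch-and-overwrite route from code, a minimal sketch with the Couchbase Node SDK (the 3.x-style API; the connection string, credentials, bucket and key names are placeholders) could look like this:
const couchbase = require("couchbase");

async function updateSingleField(bucketName, docId, fieldName, newValue) {
  const cluster = await couchbase.connect("couchbase://localhost", {
    username: "<user>",
    password: "<password>"
  });
  const collection = cluster.bucket(bucketName).defaultCollection();

  // read the full document first
  const got = await collection.get(docId);
  const doc = got.content;

  // change only the field that arrived from the RDBMS
  doc[fieldName] = newValue;

  // write the whole document back; passing the CAS value guards against
  // silently overwriting a concurrent change
  await collection.replace(docId, doc, { cas: got.cas });
}
Everything else in the document stays intact, since the write still contains the complete document.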

couchdb - Map Reduce - How to Join different documents and group results within a Reduce Function

I am struggling to implement a map/reduce function that joins two documents and sums the result in the reduce step.
The first document type is Categories. Each category has an ID, and within the attributes I store a detail category, a main category and a division ("Bereich").
{
"_id": "a124",
"_rev": "8-089da95f148b446bd3b33a3182de709f",
"detCat": "Life_Ausgehen",
"mainCat": "COL_LEBEN",
"mainBereich": "COL",
"type": "Cash",
"dtCAT": true
}
The second document type is a transaction. The attributes show all the details for each transaction, including the field "newCat" which is a reference to the category ID.
{
"_id": "7568a6de86e5e7c6de0535d025069084",
"_rev": "2-501cd4eaf5f4dc56e906ea9f7ac05865",
"Value": 133.23,
"Sender": "Comtech",
"Booking Date": "11.02.2013",
"Detail": "Oki Drucker",
"newCat": "a124",
"dtTRA": true
}
Now I want to develop a map/reduce to get the result in the form:
e.g. "Name of Main Category", "Sum of all values in transactions".
I figured out that I could reference another document by emitting its "_id" and querying with ?include_docs=true, but in that case I cannot use a reduce function.
I looked at other postings here, but couldn't find a suitable example.
It would be great if somebody has an idea how to solve this issue.
I understand that multiple Category documents may have the same mainCat value. The technique called view collation is suitable for some cases where a single join would be used in a relational model. In your case it will not help: although you use two document schemas, you really have a three-level structure: main-category <- category <- transaction. I think you should consider changing the DB design a bit.
Duplicating the data, by also storing the mainCat value in the transaction document, would help. I suggest using a meaningful ID for the transaction instead of a generated one; you could consider, for example, "COL_LEBEN-7568a6de86e5e" (the mainCat concatenated with some random value, where the - delimiter never appears in the mainCat). Then, with a simple parser in the map function, you emit ["COL_LEBEN", "7568a6de86e5e"] for transactions and ["COL_LEBEN"] for categories, and reduce to get the sum.
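A minimal sketch of such a view, assuming the transaction _id is prefixed with the mainCat as suggested, and using the Value, dtTRA and dtCAT fields from the sample documents above:
// map function: one row per document, keyed by main category
function (doc) {
  if (doc.dtTRA) {
    // transaction: derive the main category from the _id prefix,
    // e.g. "COL_LEBEN-7568a6de86e5e" -> "COL_LEBEN"
    var mainCat = doc._id.split("-")[0];
    emit([mainCat, doc._id], doc.Value);
  } else if (doc.dtCAT) {
    // category: contributes nothing to the sum, emitted only for collation
    emit([doc.mainCat], 0);
  }
}

// reduce function: sum the emitted values (the built-in _sum reducer works too)
function (keys, values, rereduce) {
  return sum(values);
}
Querying the view with group_level=1 should then return one row per main category with the summed transaction values.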
