I have something similar to the following JSON structure and I need to search for a specific fileName in the Azure Cosmos DB Data Explorer (no matter its position). I have been trying different ways, including CROSS APPLY FLATTEN, but I cannot get it to work.
{
    "entityId": "f07256a5-0e60-412a-bcc9-2e1aa66b69f5",
    "array1": [
        {
            "array2": [
                {
                    "fileName": "filename1.pdf"
                },
                {
                    "fileName": "filename2.pdf"
                }
            ]
        }
    ]
}
Any ideas? Thanks.
You need something like the following:
SELECT c.fileName
FROM d
JOIN f IN d.array1
JOIN c IN f.array2
WHERE c.fileName = "filename1.pdf"
This one works for me:
SELECT c.entityId
FROM c
JOIN one IN c.array1
JOIN two IN one.array2
WHERE two.fileName = 'filename1.pdf'
It uses self-joins to create an object for each fileName and then filters to those that have the right fileName.
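If you want to run the same lookup from code instead of the Data Explorer, here is a minimal sketch using the azure-cosmos Python SDK; the account URL, key, and database/container names are placeholders, not values from the question.

from azure.cosmos import CosmosClient

# Placeholder connection details -- substitute your own.
client = CosmosClient("https://<account>.documents.azure.com:443/", credential="<key>")
container = client.get_database_client("<database>").get_container_client("<container>")

query = ("SELECT c.entityId FROM c "
         "JOIN one IN c.array1 "
         "JOIN two IN one.array2 "
         "WHERE two.fileName = @fileName")

# Pass the filename as a parameter rather than concatenating it into the query text.
results = container.query_items(
    query=query,
    parameters=[{"name": "@fileName", "value": "filename1.pdf"}],
    enable_cross_partition_query=True,
)
for item in results:
    print(item["entityId"])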
Related
I have a map like this:
root = {
  nat = {
    pr = {
      dev = [{
        value: "10.10.10.10",
        description: "test1"
      }],
    },
    pr2 = {
      dev = [{
        value: "10.10.10.11",
        description: "test2"
      }],
      prod = [],
    }
  },
  cc = {
    pr = {
      jej = [{
        value: "10.10.10.10",
        description: "test"
      }]
    }
  },
  smt = {
    st = [{
      value = "10.10.10.10",
      description = "test"
    }],
    s2 = [{
      value = "10.10.10.10",
      description = "tt"
    }]
  }
}
This map can be modified in the future by adding new nested maps, and it will live in a module. I will pass the path (key), e.g. "root.nat", as the module input, and I expect the output to be the array of all the objects nested anywhere under "root.nat". Sample output:
[
  {
    value = "10.10.10.10",
    description = "test1"
  },
  {
    value = "10.10.10.11",
    description = "test2"
  },
]
The problem is that I cannot know how many nested maps there will be when the path (key) is passed in, and I can't iterate using for because I don't know the exact fields.
Is this actually possible?
Terraform isn't designed for this sort of general computation. This particular problem requires unbounded recursion and that in particular isn't available in the Terraform language: Terraform always expects to be dealing with fixed data structures whose shape is known statically as part of their type.
If possible I would suggest using something outside of Terraform to preprocess this data structure into a flat map from dot-separated string key to a list of objects:
{
  "root.nat" = [
    {
      value = "10.10.10.11",
      description = "test2"
    },
    # etc
  ]
  # etc
}
If you cannot avoid doing this transformation inside Terraform -- for example, if the arbitrary data structure you showed is being loaded dynamically from some other service rather than fixed in your configuration, so that it's not possible to pre-generate the flattened equivalent -- then you could use my Terraform provider apparentlymart/javascript as an escape hatch to the JavaScript programming language, and implement your flattening algorithm in JavaScript rather than in the Terraform language.
Since JavaScript is a general-purpose language, you can write a recursive algorithm to re-arrange your data structure into a flat map of lists of objects with dot-separated keys, and then look up those keys in Terraform code using the index operator as normal.
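For illustration, here is a minimal sketch of that preprocessing step in Python rather than JavaScript, assuming the structure can be exported as JSON; the file names are hypothetical. Terraform could then load the result with jsondecode(file("flat.json")) and index it by key as normal.

import json

def flatten(node, path, out):
    # Walk the nested maps; whenever a list of objects is found, append its
    # elements under every dot-separated ancestor prefix, so a lookup like
    # "root.nat" also collects entries nested more deeply.
    for key, value in node.items():
        if isinstance(value, dict):
            flatten(value, path + [key], out)
        elif isinstance(value, list):
            for i in range(1, len(path) + 1):
                out.setdefault(".".join(path[:i]), []).extend(value)

with open("structure.json") as f:  # hypothetical input file
    data = json.load(f)

flat = {}
flatten(data, [], flat)

with open("flat.json", "w") as f:  # consumed by Terraform via jsondecode(file(...))
    json.dump(flat, f, indent=2)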
Check my answer to this question.
Basically, you can create as many for loops inside each other as you believe will be necessary.
As long as you check for null before proceeding to the next loop, the code won't fail.
So in cases like this:
pr2 = {
  dev = [{
    value: "10.10.10.11",
    description: "test2"
  }],
  prod = [],
}
prod would not be part of the final array.
Additionally, you can always add try() functions to your null checks to find out whether you are at the right "level" of the loop.
I'm trying to query through a somewhat simple collection with the following structure:
{
    "name": [
        {
            "something": "",
            "somethingelse": [
                {
                    "name": "John",
                    "city": "NY"
                }
            ]
        }
    ]
}
I have tried to search for the value "city" with dot notation, but with no success.
By reading this MongoDB doc, I see that to access a document nested in an array you need to specify the index of the element instead of using ".".
For example, with this object:
{
    "name": "bob",
    "info": [
        {"birth": "10/01/1986"},
        {"eyeColor": "blue"},
        {"salary": 2000}
    ]
}
If I want to access the "birth" property, I will do something like this:
db.inventory.find({'info.0.birth': '10/01/1986'})
Hope this helps.
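For what it's worth, the same index-based dot-notation lookup from Python would look roughly like this with pymongo; the connection string, database, and collection names are assumptions.

from pymongo import MongoClient

# Hypothetical connection details.
client = MongoClient("mongodb://localhost:27017")
inventory = client["test"]["inventory"]

# 'info.0.birth' addresses the "birth" field of the first element of the "info" array.
doc = inventory.find_one({"info.0.birth": "10/01/1986"})
print(doc)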
You can fetch the data and store it in an object, say ob:
ob = {
    "name": [
        {
            "something": "",
            "somethingelse": [
                {
                    "name": "John",
                    "city": "NY"
                }
            ]
        }
    ]
}
Now you can access the value with ob["name"][0]["somethingelse"][0]["city"].
I am using the Python library of Firestore to communicate with Firestore.
I have now run into a limitation of Firestore and I am wondering if there is a way around it.
Imagine we have this map / Dict (dictVar1):
dictVar1 = {
    "testArray": ["Yes"],
    "testMap": {
        "test1": 1,
        "test2": 1
    }
}
To begin with, I stored my testMap in an array, but due to a Firestore query limitation (you can only have a single array-contains operation in a query), I changed my structure to a map instead (as you can see in the dictVar1 structure above). If Firestore queries did not have this limitation, I would not have changed my structure from an array.
Now I am facing another Firestore limitation due to the new structure.
What I would like to do & other conditions:
I want to add this map / dict to a Firestore document.
I would like to do it in one Firestore operation using Firestore batch
I don't know if the document exists or not before updating/creating
One batch can contain anything between 1 and 500 operations
If the document exists, I do not want to remove any other fields from the existing document if these fields are not present in dictVar1 dict / map.
The fields in dictVar1 dict / map should replace the fields in the document completely
So if the existing document would contain this data:
{
    "doNotChange": "String",
    "testMap": {
        "test0": 1
    }
}
It would be updated to ("test0" is removed from the inner map, basically how an array would work):
{
    "doNotChange": "String",
    "testArray": ["Yes"],
    "testMap": {
        "test1": 1,
        "test2": 1
    }
}
And if the document doesn't exist, the document would be set to:
{
    "testArray": ["Yes"],
    "testMap": {
        "test1": 1,
        "test2": 1
    }
}
I see two ways to do this:
1. Do this in two operations.
2. Instead of using testMap as a map, replace it with an array.
99% of the time the document exists, therefore I am fine with doing this in two operations if the document doesn't exist, but one operation if the document exists.
This could be done using Firestore's update function, but since I am using batches and potentially updating hundreds of documents in one batch, a single missing document would ruin the whole batch operation.
Another potential solution would be to:
Run the batch with updates; if it succeeds, great. If a 404 (document not found) is raised, then:
Change the operation to set instead of update for that document and redo the batch, in a loop until the batch succeeds.
Two potential problems I see with this:
Will I be fully charged for all the failed batch operations, or will I just be charged one read per failed batch operation? If I get fully charged for the batch, then this is still not a good solution.
Is it possible to easily change the operation type for a specific document reference to a different operation type without having to recreate the whole batch from scratch?
Do you have any ideas on how I could solve one of these problems?
Here is the Python code to test out:
from json import dumps
from google.cloud import firestore
db = firestore.Client.from_service_account_json("firebaseKeysDev.json")
originalDoc = {
    "doNotChange": "String",
    "testMap": {
        "test0": 1
    }
}
dictVar1 = {
    "testArray": ["Yes"],
    "testMap": {
        "test1": 1,
        "test2": 1
    }
}
prefOutput = {
    "doNotChange": "String",
    "testArray": ["Yes"],
    "testMap": {
        "test1": 1,
        "test2": 1
    }
}
# Let's first create the document with the original dict / map
originalSetOp = db.collection("test").document("testDoc").set(originalDoc)
# Now let's get the original map / dict from Firestore
originalOpDoc = db.collection("test").document("testDoc").get()
# Convert to Python Dict
originalOpDocDict = originalOpDoc.to_dict()
# Now let's print out the original document dict
print("Here is the original map:")
print(dumps(originalOpDocDict, ensure_ascii=False, sort_keys=True, indent=4))
# Print the map we want to merge (this print was missing but appears in the output below)
print("\nHere is the map we want to merge:")
print(dumps(dictVar1, ensure_ascii=False, sort_keys=True, indent=4))
# Now let's merge the original dict / map with our dictVar1 dict / map
mergeDictVar1WithODoc = db.collection("test").document("testDoc").set(dictVar1, merge=True)
# Now let's get the new merged map / dict from Firestore
newDictDoc = db.collection("test").document("testDoc").get()
# Convert to Python Dict
newDictDocDict = newDictDoc.to_dict()
# Let's print the new merged dict / map
print("\nHere is the merged map:")
print(dumps(newDictDocDict, ensure_ascii=False, sort_keys=True, indent=4))
print("\nHere is the output we want:")
print(dumps(prefOutput, ensure_ascii=False, sort_keys=True, indent=4))
Output:
Here is the original map:
{
    "doNotChange": "String",
    "testMap": {
        "test0": 1
    }
}

Here is the map we want to merge:
{
    "testArray": [
        "Yes"
    ],
    "testMap": {
        "test1": 1,
        "test2": 1
    }
}

Here is the merged map:
{
    "doNotChange": "String",
    "testArray": [
        "Yes"
    ],
    "testMap": {
        "test0": 1,
        "test1": 1,
        "test2": 1
    }
}

Here is the output we want:
{
    "doNotChange": "String",
    "testArray": [
        "Yes"
    ],
    "testMap": {
        "test1": 1,
        "test2": 1
    }
}
You can try using .set() with SetOptions of merge or mergeFields instead of .update() - the field, in this case, would be your map.
Specifically, .set() will create a document if it doesn't exist. It seems (I'm not on the Firebase team) the PURPOSE of .update() failing is to signal that the document doesn't exist.
I use this extensively in a wrapper library I created for Firestore in my app.
Documented Here
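To make that concrete for the Python client from the question, here is a minimal sketch, assuming that the merge parameter of set() accepts a list of field paths (as the google-cloud-firestore docs describe) and that fields named this way are replaced wholesale rather than deep-merged:

from google.cloud import firestore

db = firestore.Client.from_service_account_json("firebaseKeysDev.json")

dictVar1 = {
    "testArray": ["Yes"],
    "testMap": {
        "test1": 1,
        "test2": 1
    }
}

batch = db.batch()
ref = db.collection("test").document("testDoc")
# set() creates the document when it is missing, and listing the fields in
# `merge` overwrites exactly those fields while leaving others (such as
# "doNotChange") untouched -- so a single batched operation covers both cases.
batch.set(ref, dictVar1, merge=["testArray", "testMap"])
batch.commit()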
I have two models: subscribers and tags.
Sample data:
{
    subscribers: [
        {
            name: "User 1",
            tags: ["a", "b"]
        },
        {
            name: "User 2",
            tags: ["c", "d"]
        }
    ]
}
I want to filter subscribers based on their tags.
If I give tags a and b, User 1 should be listed.
If I give tags a and c, both User 1 and User 2 should be listed.
Here is what I tried:
Method 1:
tags is a column in the subscribers model with an array data type
/subscribers/?filter={"where":{"tags":{"inq":["a","b"]}}} // doesn't work
Method 2:
Created a separate tags table and set subscribers to have many tags.
/subscribers/?filter={"where":{"tags":{"inq":["a","b"]}}} // doesn't work
How can I achieve this in LoopBack without writing custom methods?
I'm using PostgreSQL as the connector.
UPDATE
As mentioned in the LoopBack docs, you should use inq, not In.
The inq operator checks whether the value of the specified property matches any of the values provided in an array. The general syntax is:
{where: { property: { inq: [val1, val2, ...]}}}
From this:
/subscribers/?filter={"where":{"tags":{"In":["a","b"]}}}
To this:
/subscribers/?filter={"where":{"tags":{"inq":["a","b"]}}}
Finally found a hack using regex! It's not a performant solution, but it works:
{ "where": { "tags": { "regexp": "a|b" } } }
How do I get data from this construction:
sth {
    [
        firstname="me"
        second="sfdg"
    ]
    [
        adress="adfhajkfdh"
    ]
}
I used ConfigObject, but when I get its keySet it gives me the whole list (firstname, secondname, adress) and I have to separate it. Is there any way to get data from only one tab, e.g. only "firstname" and "secondname"?
As I understand it, you used to have config like:
sth {
    firstname="me"
    second="sfdg"
    adress="adfhajkfdh"
}
but you now want to structure that into columns?
One way this could be done is by structuring each column into a separate property like so:
sth {
    column1 {
        firstname="me"
        second="sfdg"
    }
    column2 {
        adress="adfhajkfdh"
    }
}
Or, you could declare another columns property which contains a list of columns (each of which is a list of the properties that you want in each column), i.e.:
sth {
    firstname="me"
    second="sfdg"
    adress="adfhajkfdh"
    columns = [ column1:[ 'firstname', 'second' ], column2:[ 'adress' ] ]
}
Personally, I prefer the second approach, as it should still work with your old code, you don't need to iterate the ConfigObject structure to get all the properties, and properties could be in multiple columns (if this becomes a future requirement).