Terraform: how to iterate n times into one list

I want to perform something like this:
rules = [
  for i in somethings : {
    //actions
  },
  for s in others : {
    //actions
  }
]
I need to end up with a single list that combines the results of several loops. I think this is flatten logic.
Is that possible?

It really does need flatten. The answer is:
rules = flatten([
  [for i in somethings : {
    //actions
  }],
  [for s in others : {
    //actions
  }],
])
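As an aside, what flatten does here can be sketched in everyday Python: build one list per loop, then merge them into a single flat list. (somethings and others are stand-in collections for illustration.)

```python
# Sketch of Terraform's flatten([...]) over two for-expressions, in Python.
somethings = ["a", "b"]
others = ["c"]

# Two separate comprehensions, each producing its own list...
first = [{"name": s} for s in somethings]
second = [{"name": s} for s in others]

# ...then flattened into a single list of rules, like flatten([[...], [...]]).
rules = [item for sublist in [first, second] for item in sublist]
print(rules)
```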

Related

Terraform: is it possible to parse a nested map with different keys?

I have a map like this:
root = {
  nat = {
    pr = {
      dev = [{
        value = "10.10.10.10",
        description = "test1"
      }],
    },
    pr2 = {
      dev = [{
        value = "10.10.10.11",
        description = "test2"
      }],
      prod = [],
    }
  },
  cc = {
    pr = {
      jej = [{
        value = "10.10.10.10",
        description = "test"
      }]
    }
  },
  smt = {
    st = [{
      value = "10.10.10.10",
      description = "test"
    }],
    s2 = [{
      value = "10.10.10.10",
      description = "tt"
    }]
  }
}
The map can be modified in the future by adding new nested maps. The map will live in a module. I will pass a path (key) such as "root.nat" as the module input, and expect as output an array of objects: all the arrays of objects found under "root.nat". Sample output:
[
  {
    value = "10.10.10.10",
    description = "test1"
  },
  {
    value = "10.10.10.11",
    description = "test2"
  },
]
The problem is that I cannot know how many nested maps there will be for a given path (key), and I can't iterate with for because I don't know the exact fields.
Is this actually possible?
Terraform isn't designed for this sort of general computation. This particular problem requires unbounded recursion and that in particular isn't available in the Terraform language: Terraform always expects to be dealing with fixed data structures whose shape is known statically as part of their type.
If possible I would suggest using something outside of Terraform to preprocess this data structure into a flat map from dot-separated string key to a list of objects:
{
"root.nat" = [
{
value = "10.10.10.11",
description = "test2"
},
# etc
]
# etc
}
If you cannot avoid doing this transformation inside Terraform -- for example, if the arbitrary data structure you showed is being loaded dynamically from some other service rather than fixed in your configuration, so that it's not possible to pre-generate the flattened equivalent -- then you could use my Terraform provider apparentlymart/javascript as an escape hatch to the JavaScript programming language, and implement your flattening algorithm in JavaScript rather than in the Terraform language.
Since JavaScript is a general-purpose language, you can write a recursive algorithm to re-arrange your data structure into a flat map of lists of objects with dot-separated keys, and then look up those keys in Terraform code using the index operator as normal.
Check my answer to this question.
Basically, you can nest as many for loops inside each other as you think necessary.
As long as you check for null before proceeding to the next loop, the code won't fail.
So in cases like this:
pr2 = {
  dev = [{
    value = "10.10.10.11",
    description = "test2"
  }],
  prod = [],
}
prod would not be part of the final array.
Additionally, you can always add try functions to your null checks to find out whether you are at the right "level" of the loop.
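For reference, the preprocessing step suggested above (done outside Terraform) can be sketched in Python. The helper names below (collect_objects, flatten_paths) are hypothetical; the sketch just walks the nested map and emits a flat dict from dot-separated key to the list of objects beneath it:

```python
def collect_objects(node):
    """Gather every object found in any list nested anywhere below `node`."""
    if isinstance(node, list):
        return list(node)
    objects = []
    for child in node.values():
        objects.extend(collect_objects(child))
    return objects


def flatten_paths(node, prefix=""):
    """Build a flat map from dot-separated path to all objects below that path."""
    result = {}
    for key, child in node.items():
        if isinstance(child, dict):
            path = prefix + key
            result[path] = collect_objects(child)
            result.update(flatten_paths(child, path + "."))
    return result


# A trimmed version of the question's map (the "root.nat" subtree only).
root = {
    "nat": {
        "pr": {"dev": [{"value": "10.10.10.10", "description": "test1"}]},
        "pr2": {"dev": [{"value": "10.10.10.11", "description": "test2"}],
                "prod": []},
    },
}

flat = flatten_paths({"root": root})
print(flat["root.nat"])
```

The resulting flat map can then be looked up from Terraform with an ordinary index expression.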

Elasticsearch DSL OR query formation

I have index with multiple documents. The documents contains below fields:
name
adhar_number
pan_number
acc_number
I want to build an Elasticsearch DSL query. Two inputs are available for this query, adhar_number and pan_number, and the query should match them with an OR condition.
Example: if a document contains only the provided adhar_number, I want that document too.
I have one dictionary with below contents (my_dict):
{
"adhar_number": "123456789012",
"pan_number": "BGPPG4315B"
}
I tried like below:
from elasticsearch import Elasticsearch
from elasticsearch_dsl import Search
es = Elasticsearch([{'host': 'localhost', 'port': 9200}])
s = Search(using=es, index="my_index")
for key, value in my_dict.items():
    s = s.query("match", **{key: value})
print(s.to_dict())
response = s.execute()
print(response.to_dict())
It creates below query:
{
'query': {
'bool': {
'must': [
{
'match': {
'adhar_number': '123456789012'
}
},
{
'match': {
'pan_number': 'BGPPG4315B'
}
}
]
}
}
}
The code above gives me results with an AND condition instead of OR.
Please suggest how to include an OR condition.
To fix the ES query itself, all you need to do is use 'should' instead of 'must':
{
'query': {
'bool': {
'should': [
{
'match': {
'adhar_number': '123456789012'
}
},
{
'match': {
'pan_number': 'BGPPG4315B'
}
}
]
}
}
}
To achieve this in Python, see the following example from the docs. The default logic is AND, but you can override it to OR as shown below:
Query combination: Query objects can be combined using logical operators:
Q("match", title='python') | Q("match", title='django')
# {"bool": {"should": [...]}}
Q("match", title='python') & Q("match", title='django')
# {"bool": {"must": [...]}}
~Q("match", title="python")
# {"bool": {"must_not": [...]}}
When you call the .query() method multiple times, the & operator will be used internally:
s = s.query().query()
print(s.to_dict())
# {"query": {"bool": {...}}}
If you want to have precise control over the query form, use the Q shortcut to directly construct the combined query:
q = Q('bool',
    must=[Q('match', title='python')],
    should=[Q(...), Q(...)],
    minimum_should_match=1
)
s = Search().query(q)
So you want something like:
q = Q('bool', should=[Q('match', **{key: value}) for key, value in my_dict.items()])
You can use should, as also mentioned by ifo20. Note that you most likely want to define the minimum_should_match parameter as well:
You can use the minimum_should_match parameter to specify the number or percentage of should clauses returned documents must match.
If the bool query includes at least one should clause and no must or filter clauses, the default value is 1. Otherwise, the default value is 0.
{
'query': {
'bool': {
'should': [
{
'match': {
'adhar_number': '123456789012'
}
},
{
'match': {
'pan_number': 'BGPPG4315B'
}
}
],
"minimum_should_match" : 1
}
}
}
Note also that should clauses contribute to the final score. I don't know how to avoid this, but you may not want it to affect what is meant to be pure OR logic.
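Putting both answers together, the final query body can be built with plain dicts so the shape is explicit (a sketch; the Q-object version is equivalent):

```python
my_dict = {
    "adhar_number": "123456789012",
    "pan_number": "BGPPG4315B",
}

# One match clause per field, OR'd together via `should`.
should_clauses = [{"match": {field: value}} for field, value in my_dict.items()]

query = {
    "query": {
        "bool": {
            "should": should_clauses,
            "minimum_should_match": 1,  # at least one clause must match
        }
    }
}
print(query)
```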

Pull from array which is in an array

I have a collection which contains a teams array and that contains a players array.
I would like to delete from the players array.
I think I know how to delete from an array, but I can't make it work.
At one point it deleted all the elements from the teams array.
Here is what the doc looks like:
"teams" : [
{
"_guid" : "5c5b3bc0-a957-11e5-b909-b7a1cbe2c8be",
"teamname" : "Ping-Win_team",
"_id" : ObjectId("567a68f6a7c726540b2d746b"),
"players" : [
ObjectId("567a68f6a7c726540b2d7469"),
ObjectId("567a68f7a7c726540b2d746c")
]
}
],
My attempt:
db.lobbies.update({ _id: ObjectId('567a68f6a7c726540b2d746a') }, { $pull: { 'teams': { 'players.$': ObjectId('567a68f7a7c726540b2d746c') }}})
Thanks for helping,
Akos
Apply the $pull operator together with the $ positional operator in your update. The $ positional operator identifies the correct element in the array without explicitly specifying its position, so your final update statement should look like:
db.lobbies.update(
{ "teams.players": ObjectId("567a68f7a7c726540b2d746c") },
{
"$pull": {
"teams.$.players": ObjectId("567a68f7a7c726540b2d746c")
}
}
)
If I understood your question correctly: you can't pull one of the player IDs that way, because:
The positional $ operator cannot be used for queries which traverse more than one array, such as queries that traverse arrays nested within other arrays, because the replacement for the $ placeholder is a single value
From: https://docs.mongodb.org/v3.0/reference/operator/update/positional/
Otherwise you will pull the whole item of the teams array that contains that specific player.
You can use the code below to delete one or multiple values in a nested array:
db.sessions.update(
{
"teams": {
$elemMatch: {
"players": {$in: [ObjectId("567a68f7a7c726540b2d746c")]}
}
}},
{
"$pull": {
"teams.$.players": {$in:[ObjectId("567a68f7a7c726540b2d746c")]}
}
})
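For intuition, here is what the $pull update does to the document, simulated in plain Python (a sketch, not MongoDB itself; the ObjectIds are shown as plain strings):

```python
target = "567a68f7a7c726540b2d746c"  # stands in for the ObjectId being pulled

doc = {
    "teams": [
        {
            "teamname": "Ping-Win_team",
            "players": ["567a68f6a7c726540b2d7469", "567a68f7a7c726540b2d746c"],
        }
    ]
}

# "teams.players": target matches the first team containing the ID;
# $pull on "teams.$.players" then removes it from that team's array only.
for team in doc["teams"]:
    if target in team["players"]:
        team["players"] = [p for p in team["players"] if p != target]
        break  # positional $ updates only the first matching array element

print(doc["teams"][0]["players"])
```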

Query for a list contained in another list in mongodb

I'm fairly new to Mongo, and while I can manage most basic operations with $in, $or, $all, etc., I can't make what I want work.
I'll state a simplified form of my problem. Part of my documents is a list of numbers, e.g.:
{_id:1,list:[1,4,3,2]}
{_id:2,list:[1]}
{_id:3,list:[1,3,4,6]}
I want a query that, given a list (let's call it L), returns every document whose entire list is contained in L.
For example, with L = [1,2,3,4,5], I want the documents with _id 1 and 2 to be returned. _id 3 mustn't be returned, since 6 isn't in L.
$in doesn't work because it would also return _id 3, and $all doesn't work either because it would only return _id 1.
I then thought of $where, but I can't seem to find out how to bind an external variable to the JS code. What I mean by that is, for example:
var L = [1,2,3,4,5];
db.collections('myCollection').find({$where: function(l){
    // return something using the list "l" here
}.bind(null, L)})
I tried to bind the list to the function as shown above, but to no avail.
I'd gladly appreciate any hints concerning this issue. Thanks.
There's a related question Check if every element in array matches condition with an answer with a nice approach for this scenario. It refers to an array of embedded documents but can be adapted for your scenario like this:
db.list.find({
"list" : { $not : { $elemMatch : { $nin : [1,2,3,4,5] } } },
"list.0" : { $exists: true }
})
i.e. the list must not have any element that is not in [1,2,3,4,5], and the list must exist with at least 1 element (assuming that's also a requirement).
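The logic of that filter, restated in plain Python (a sketch): keep a document when its list is non-empty and no element falls outside L.

```python
L = [1, 2, 3, 4, 5]
docs = [
    {"_id": 1, "list": [1, 4, 3, 2]},
    {"_id": 2, "list": [1]},
    {"_id": 3, "list": [1, 3, 4, 6]},
]

# $not/$elemMatch/$nin: no element may be outside L;
# "list.0": {$exists: true}: the list must be non-empty.
matches = [
    d["_id"]
    for d in docs
    if d["list"] and not any(x not in L for x in d["list"])
]
print(matches)
```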
You could try the aggregation framework, where you can make use of the set operators. In particular you would need the $setIsSubset operator, which returns true if all elements of the first set appear in the second set, including when the two sets are equal (i.e. it is not a strict-subset test).
For example:
var L = [1,2,3,4,5];
db.collections('myCollection').aggregate([
{
"$project": {
"list": 1,
"isSubsetofL": {
"$setIsSubset": [ "$list", L ]
}
}
},
{
"$match": {
"isSubsetofL": true
}
}
])
Result:
/* 0 */
{
"result" : [
{
"_id" : 1,
"list" : [
1,
4,
3,
2
],
"isSubsetofL" : true
},
{
"_id" : 2,
"list" : [
1
],
"isSubsetofL" : true
}
],
"ok" : 1
}
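The $setIsSubset stage corresponds to Python's set.issubset (a sketch; note that, like the operator, set semantics ignore duplicates and element order, and an empty list counts as a subset):

```python
L = [1, 2, 3, 4, 5]
docs = [
    {"_id": 1, "list": [1, 4, 3, 2]},
    {"_id": 2, "list": [1]},
    {"_id": 3, "list": [1, 3, 4, 6]},
]

# Equivalent of $setIsSubset: ["$list", L] followed by $match on true.
subset_ids = [d["_id"] for d in docs if set(d["list"]).issubset(L)]
print(subset_ids)
```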

Remove duplicate array objects mongodb

I have an array that contains duplicate values in BOTH fields; is there a way to remove one of the duplicate array items?
userName: "abc",
_id: 10239201141,
rounds: [
    {
        "roundId": "foo",
        "money": "123"
    }, // keep one of these
    { // keep one of these
        "roundId": "foo",
        "money": "123"
    },
    {
        "roundId": "foo",
        "money": "321" // not a duplicate
    }
]
I'd like to remove one of the first two, and keep the third, because its roundId and money pair is not duplicated in the array.
Thank you in advance!
Edit: I found:
db.users.ensureIndex({'rounds.roundId': 1, 'rounds.money': 1}, {unique: true, dropDups: true})
This doesn't help me. Can someone help? I've spent hours trying to figure this out.
The thing is, I ran my Node.js website on two machines, so it was pushing the same data twice. Knowing this, duplicate entries should be one index apart. I made a simple for loop that can detect duplicate data in my situation; how could I implement this with MongoDB so it removes the array object AT that array index?
for (var i in data) {
    var tempRounds = data[i]['rounds'];
    // compare each entry with the one immediately before it
    for (var ii = 1; ii < tempRounds.length; ii++) {
        var current = tempRounds[ii];
        var previous = tempRounds[ii - 1];
        if (current.roundId == previous.roundId && current.money == previous.money) {
            console.log("Found a match");
        }
    }
}
Use the aggregation framework to compute a deduplicated version of each document:
db.test.aggregate([
    { "$unwind" : "$rounds" },
    { "$group" : { "_id" : "$_id", "rounds" : { "$addToSet" : "$rounds" } } }, // use $first to carry other document fields along here
    { "$out" : "some_other_collection_name" }
])
Use $out to put the results in another collection, since aggregation cannot update documents. You can use db.collection.renameCollection with dropTarget to replace the old collection with the new deduplicated one. Be sure you're doing the right thing before you scrap the old data, though.
Warnings:
1: This does not preserve the order of elements in the rounds array. If you need to preserve order, you will have to retrieve each document from the database, deduplicate the array client-side, then update the document in the database.
2: The following two objects won't be considered duplicates of each other:
{ "roundId" : "foo", "money" : 123 }
{ "money" : 123, "roundId" : "foo" }
If you think you have mixed key orders, use a $project to enforce a key order between the $unwind stage and the $group stage:
{ "$project" : { "rounds" : { "roundId_" : "$rounds.roundId", "money_" : "$rounds.money" } } }
Make sure to change roundId -> roundId_ and money -> money_ in the rest of the pipeline and rename them back at the end, or rename them in another $project after the swap. I discovered that, if you do not give different names to the fields in the $project, it doesn't reorder them, even though key order is meaningful in an object in MongoDB:
> db.test.drop()
> db.test.insert({ "a" : { "x" : 1, "y" : 2 } })
> db.test.aggregate([
{ "$project" : { "_id" : 0, "a" : { "y" : "$a.y", "x" : "$a.x" } } }
])
{ "a" : { "x" : 1, "y" : 2 } }
> db.test.aggregate([
{ "$project" : { "_id" : 0, "a" : { "y_" : "$a.y", "x_" : "$a.x" } } }
])
{ "a" : { "y_" : 2, "x_" : 1 } }
Since the key order is meaningful, I'd consider this a bug, but it's easy to work around.
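For warning 1, the client-side, order-preserving deduplication can be sketched in Python, using the field names from the question:

```python
rounds = [
    {"roundId": "foo", "money": "123"},
    {"roundId": "foo", "money": "123"},  # duplicate of the first
    {"roundId": "foo", "money": "321"},  # not a duplicate
]

# Keep the first occurrence of each (roundId, money) pair, in original order.
seen = set()
deduped = []
for r in rounds:
    key = (r["roundId"], r["money"])  # a tuple key sidesteps dict key-order issues
    if key not in seen:
        seen.add(key)
        deduped.append(r)

print(deduped)
```

The deduplicated array can then be written back with an ordinary update on the document.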
