This question already has an answer here:
Firebase query - Find item with child that contains string
(1 answer)
Closed 4 years ago.
In Firebase, how can one run a query that finds matches containing a specific word in a string?
For example, querying all descriptions that contain the word "happy".
I am able to do this in JS, but that means I have to load the entire DB, which is over 300 items...
Cheers!
You don't have to download the entire DB for this (I assume you're using the Realtime Database rather than Firestore).
You can filter your data by combining orderByChild(), orderByKey(), or orderByValue() with query methods like startAt(), endAt(), and equalTo().
For example, if your list of nodes url is
https://mydb-xxxx.firebaseio.com/parentnode/childnodeslist
You can query in this way:
// Find all nodes whose property myString is "hello"
var ref = firebase.database().ref("parentnode/childnodeslist");
ref.orderByChild("myString").equalTo('hello').once('value').then(function(snapshot){
  console.log(snapshot.val());
});
See the query documentation and how to structure your data in Firebase.
EDIT (based on comments):
If you want to do a full-text search, you can:
1) Do it client-side
// assume "myString" is the property you want to search
var word = 'hello';
var listOfItems = [];
firebase.database().ref("parentnode/childnodeslist").once('value').then(function(snapshot){
  snapshot.forEach(function(child){
    var item = child.val();
    if (item.myString.indexOf(word) >= 0) {
      listOfItems.push({ id: child.key, word: item.myString });
    }
  });
});
2) Use a third-party tool like ElasticSearch (as suggested by @Frank-van-Puffelen) or Algolia
3) Use Cloud Functions
Related
Hi, I have two MongoDB collections. The first one returns JSON data (an array), and with that output I want to return the documents that match.
When I run console.log(req.bidder.myBids) I get the following output:
[{"productId":"3798b537-9c7b-4395-9e41-fd0ba39aa984","price":3010},{"productId":"3798b537-9c7b-4395-9e41-fd0ba39aa984","price":3020},{"productId":"4c4bd71c-6664-4d56-b5d3-6428fe1bed19","price":1040},{"productId":"4c4bd71c-6664-4d56-b5d3-6428fe1bed19","price":1050},{"productId":"4c4bd71c-6664-4d56-b5d3-6428fe1bed19","price":1060},{"productId":"4c4bd71c-6664-4d56-b5d3-6428fe1bed19","price":1070},{"productId":"4c4bd71c-6664-4d56-b5d3-6428fe1bed19","price":1090},{"productId":"4c4bd71c-6664-4d56-b5d3-6428fe1bed19","price":1100}]
The productId values contain duplicates. I want to remove the duplicates and then call a routine that finds all the matching products and outputs them as JSON.
So far I have this code, which only outputs one document, but I can't figure out how to add the array of productIds and then fetch all the corresponding products.
var agencyId = req.body.agencyId;
var productId = req.body.productId;
if (!validate.STRING(agencyId)) {
  res.apiError(messages.server.invalid_request);
} else {
  dbProduct.find({
    productId: { $in: ['3798b537-9c7b-4395-9e41-fd0ba39aa984', '4c4bd71c-6664-4d56-b5d3-6428fe1bed19'] }
  }).then(dbRes => {
    console.log(dbRes);
  });
}
Update: the code above now works with hard-wired productIds (I've updated it accordingly). I'm now looking at how to take the array data and substitute it for the hard-wired productIds.
The $in operator is what you want. See the docs here: https://docs.mongodb.com/manual/reference/operator/query/in/
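For what it's worth, a minimal sketch of how the deduplicated productIds from req.bidder.myBids could be fed into $in (variable and model names are taken from the question above):
// Dedupe the productIds from the bids array, then query them all at once with $in.
const productIds = [...new Set(req.bidder.myBids.map(bid => bid.productId))];
dbProduct.find({ productId: { $in: productIds } }).then(dbRes => {
  console.log(dbRes); // every product matching any of the deduplicated IDs
});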
This question already has answers here:
Cloud Firestore Case Insensitive Sorting Using Query
(3 answers)
Are Cloud Firestore queries still case sensitive?
(1 answer)
Closed 1 year ago.
To support case-insensitive search or any other canonicalization, do we need to write a separate field that contains the canonicalized version and query against that?
For example:
db.collection("users").where("name", "==", "Dan")
db.collection("users").where("name_lowercase", "==", "dan")
What I would do:
Before querying (maybe client-side): convert the search term into two or more variations (10 variations is the maximum). For example, the search term "dan" (a String) becomes the array ["dan", "DAN", "Dan"].
Then I would do an "in" query, searching for all of those variations in the same name field.
The "in" query type supports up to 10 equality (==) clauses combined with a logical OR. (documentation here)
This way, you keep only one "name" field and query it with the possible variations.
It would look like this:
let query_variations = ["dan", "DAN", "Dan"]; // TODO: write a function that converts the query string into this kind of Array
let search = await db.collection("users").where("name", "in", query_variations).get();
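A minimal sketch of the helper the TODO above refers to (the name caseVariations is my own; adjust the variations to whatever capitalizations your data actually contains):
// Build common case variations of a search term, capped at 10 to respect the "in" limit.
function caseVariations(term) {
  const lower = term.toLowerCase();
  const upper = term.toUpperCase();
  const capitalized = lower.charAt(0).toUpperCase() + lower.slice(1);
  return [...new Set([term, lower, upper, capitalized])].slice(0, 10);
}
let query_variations = caseVariations("dan"); // ["dan", "DAN", "Dan"]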
In short, yes.
This is because Cloud Firestore (and the Firebase Realtime Database, when indexing is enabled) indexes documents based on the values of each property in a document.
Rather than search through hundreds (if not thousands and thousands) of documents for matches, the index of the relevant property is queried for matching document IDs.
Consider the following "database" and its index based on the name in each document:
const documents = {
  "docId1": {
    name: "dan"
  },
  "docId2": {
    name: "dan"
  },
  "docId3": {
    name: "Dan"
  },
  "docId4": {
    name: "Dan"
  }
}

const nameIndex = {
  "dan": ["docId1", "docId2"],
  "Dan": ["docId3", "docId4"]
}
Instead of calling Object.entries(documents).filter(([id, data]) => data.name === "dan") on the entire list of documents, you can just ask the index using nameIndex["dan"], yielding the final result ["docId1", "docId2"] near-instantly, ready to be retrieved.
Continuing the same example, calling nameIndex["daniel"] gives undefined (no documents with that name), which can quickly be used to say that the data doesn't exist in the database.
Firestore also introduced composite indexes, which allow you to index across multiple properties such as "name" and "age", so you can quickly and efficiently find documents where the name is "Dan" and the age is also 42.
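For illustration, a hedged sketch of what that composite query would look like (collection and field names are assumptions taken from the example, and the matching composite index would still need to exist):
db.collection("users")
  .where("name", "==", "Dan")
  .where("age", "==", 42)
  .get()
  .then((snapshot) => {
    snapshot.forEach((doc) => console.log(doc.id, doc.data()));
  });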
Further reading: The Firebase documentation covers one solution for text-based search here.
This question already has an answer here:
using a variable in mongodb update
(1 answer)
Closed 2 years ago.
When I was using quick.db, I used to do this:
const db = require('quick.db')
const table = new db.table('table') // quick.db table
let guildID = msg.guild.id // snowflake generated by Discord; let's say it's equal to '1234567890'
table.set(`${guildID}.name`, `Cool Name`) // here, the key name depends on guildID: to get 'Cool Name', I would have to look up '1234567890.name'
And I am trying to do the same thing in MongoDB. In my view, it's a lot better and more complete than quick.db, but I still can't find how to do the same thing in Mongo.
I guess this is probably not possible, but I'd like to know if you have an idea of how I could replace this to easily get info about a guild from its corresponding guildID.
I found a solution thanks to a comment that sent me to this page.
Even though with this method I can't create a field whose name depends on a variable directly at the top level of the document, I can create one inside a nested object (nested fields are addressed by their names as strings, which is why I can insert a variable into that string):
collection.updateOne({id: "123"}, {$set: { ["object." + guildID + ".name"]: "Cool Name" }})
Then, "Cool Name" can be found with:
collection.find({id: "123"}, {"object.1234567890.name": 1}) // note: I don't remember how to get a specific field of a document, please correct me or delete this note if you know what's correct
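Regarding the note above, a hedged sketch of reading the dynamically named field back with the Node.js MongoDB driver, using the same computed key as the update (verify the projection syntax against your driver version):
// guildID is assumed to be '1234567890', as in the question.
const doc = await collection.findOne(
  { id: "123" },
  { projection: { ["object." + guildID + ".name"]: 1 } }
);
console.log(doc.object[guildID].name); // "Cool Name"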
For example, I have thousands of documents with the same structure:
{
"key_1":"value_1",
"key_2":"value_2",
"key_3":"value_3",
...
...
}
And I need to get, let's say, key_1, key_3, and key_23 from some set of documents with known IDs; for example, I need to process only 5 documents while my DB contains several thousand. Each time I have a different set of keys and document IDs. Is it possible to get that information in one request?
You can use a list function (see: this, this, and this).
Since you know the ids, you can then query _all_docs with the list function:
POST /{db}/_design/{ddoc}/_list/{func}/_all_docs?include_docs=true&columns=["key_1","key_2","key_3"]
Accept: application/json
Content-Length: {whatever}
{
"keys": [
"docid002",
"docid005"
]
}
The list function needs to look at documents, and send the appropriate JSON for each one. Not tested:
(function (head, req) {
  send('{"total_rows":' + head.total_rows + ',"offset":' + head.offset + ',"rows":[');
  var columns = JSON.parse(req.query.columns);
  var delim = '';
  var row;
  while (row = getRow()) {
    var doc = {};
    for (var i = 0; i < columns.length; i++) {
      var k = columns[i];
      doc[k] = row.doc[k];
    }
    row.doc = doc;
    send(delim + toJSON(row));
    delim = ',';
  }
  send(']}');
})
Whether this is a good idea, I'm not sure. If your documents are big and bandwidth savings are important, it might be.
Yes, that’s possible. Your question can be broken up into two distinct problems:
Getting only a part of each document (in your example: key_1, key_3, and key_23). This can be done using a view. A view is saved into a design document. See the wiki for more info on how to create views.
Retrieving only certain documents, defined by their IDs. When querying a view, you can specify not only a single ID (or rather key) but also an array of keys, which is what you need here (see the sketch below). Again, see the section on querying views in the wiki for explanations and examples.
Even though you only need a subset of values from each document, you may find that the system as a whole performs better if you just ask for the entire document and then select the values you need from the result.
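Putting the two together, a hedged sketch of what such a request might look like (the design document and view names are placeholders, and the view is assumed to emit the document ID as its key with the needed values):
POST /{db}/_design/{ddoc}/_view/{view}
Content-Type: application/json

{
  "keys": ["docid002", "docid005"]
}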
To get only the specific key/value pairs, you need to create a view whose entries have a compound key consisting of the doc ID and the item name, with the corresponding item as the value.
So your map function would look something like:
function(doc){
  for(var i = 1; i < doc.keysInDoc; i++){
    var k = "key_" + i;
    emit([doc._id, k], doc[k]);
  }
}
You can then use a multi-key lookup, with each key being of the form ["docid12345", "key_1"], ["docid56789", "key_23"], etc.
So a query like:
http://host:5984/db/_design/design/_view/view?keys=[["docid002","key_8"],["docid005","key_12"]]
will return
{"total_rows":84,"offset":67,"rows":[
{"id":"docid002","key":["docid002","key_8"],"value":"value d2_k8"},
{"id":"docid005","key":["docid005","key_12"],"value":"value d5_k12"}
]}
I have a CouchDB database which contains about 200,000 tweets; the keys are tweet IDs. I have a query which needs to retrieve all documents to look for some information. I'm using LightCouch to work with CouchDB in a Java web app. If I query all document rows like this:
List<JsonObject>tweets = dbClient.view("_all_docs").query(JsonObject.class);
and then loop through tweets, calling the following for each JsonObject:
JsonObject tweetJson = dbClient.find(JsonObject.class, tweet.get("id").toString().replaceAll("\"", ""));
to retrieve each tweet one by one, it takes an extremely long time for 200,000 documents. If I load all documents in a single query using includeDocs(true):
List<JsonObject>allTweets = dbClient.view("_all_docs").includeDocs(true).query(JsonObject.class);
it causes an OutOfMemoryError since the number of documents is too large. So how can I deal with this problem? I'm thinking about using limit(5000) to retrieve 5000 documents at a time and loop through the whole database, but I don't know how to write the loop so it continues with the next 5000 after the first 5000 docs. One possible solution is using startKey and endKey, but I'm confused about how to use them when the key is a tweet ID.
Use queryPage, but make sure to use a String as the key.
See: https://github.com/lightcouch/LightCouch/issues/26#event-122327174
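A hedged sketch of what paging with queryPage might look like (method names such as getResultList(), getNextParam(), and isHasNext() are assumptions based on the LightCouch Page API; verify them against your version and the linked issue):
String param = null; // null asks for the first page; later values come from getNextParam()
Page<JsonObject> page;
do {
    page = dbClient.view("_all_docs")
            .includeDocs(true)
            .queryPage(5000, param, JsonObject.class);
    for (JsonObject tweet : page.getResultList()) {
        // process each tweet here
    }
    param = page.getNextParam(); // keep it as a String, per the linked issue
} while (page.isHasNext());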
0.1.6 still seems to show this behaviour.
A workaround that I found for this goes something like this:
Changes changes = dbClient.changes()
        .since(null) // or .since(since) if you want an offset
        .includeDocs(true);

int size = 1;
getCursor("0"); // presumably your own helper for tracking the last sequence
while (size > 0) {
    ChangesResult resultSet = changes.limit(40000).getChanges();
    List<ChangesResult.Row> rowList = resultSet.getResults();
    for (ChangesResult.Row feed : rowList) {
        // instantiate your object via Gson here
        // ...
    }
    getCursor(resultSet.getLastSeq());
    size = rowList.size();
}