Paginating using AngularFire2

Suppose I have a data structure in the Firebase Realtime Database like:
{ "donors" : {
    "uid1" : { "name" : "x", "bloodGroup" : "A+", "location" : "some place" },
    "uid2" : { "name" : "y", "bloodGroup" : "A-", "location" : "some place" },
    ...
  }
}
Now, if I have millions of donor records like this, how can I filter them based on bloodGroup and location, fetching, say, 100 records from the server at a time, using AngularFire2?

I found this page, which was really helpful to me when writing queries against my Firebase data:
https://howtofirebase.com/collection-queries-with-firebase-b95a0193745d
A very simple example would be along the lines of:
this.donorsData = af.database.list('/donors', {
  query: {
    orderByChild: 'bloodGroup',
    equalTo: 'A+',
  }
});
I am not entirely sure how to fetch 100 records, then another 100. I am using DataTables in my app, which currently fetches all of my data and uses DataTables for pagination.
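To only pull a page at a time from the server, the same legacy AngularFire2 query object also accepts limitToFirst. Below is a minimal sketch, not a definitive implementation: the first page uses the AngularFire2 API from the snippet above, and because the Realtime Database cannot combine equalTo with startAt, the follow-up page is shown with the plain Firebase SDK, bounding the range with startAt(value, key) and endAt(value). lastKeyOfPreviousPage is a hypothetical variable holding the key of the last record already shown; also note the database can only order by one child per query, so filtering on both bloodGroup and location would need a composite child such as bloodGroup_location.
// First page: at most 100 donors with blood group A+ (legacy AngularFire2 query object)
this.donorsData = af.database.list('/donors', {
  query: {
    orderByChild: 'bloodGroup',
    equalTo: 'A+',
    limitToFirst: 100
  }
});

// Next page: bound the same range manually with the plain Firebase SDK,
// ask for one extra record, and drop the anchor key afterwards.
const nextPage = firebase.database().ref('/donors')
  .orderByChild('bloodGroup')
  .startAt('A+', lastKeyOfPreviousPage) // hypothetical: last key of the page already shown
  .endAt('A+')
  .limitToFirst(101);

nextPage.once('value', snapshot => {
  const donors = [];
  snapshot.forEach(child => {
    if (child.key !== lastKeyOfPreviousPage) {
      donors.push({ key: child.key, ...child.val() });
    }
  });
  // render donors (at most 100 new records)
});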

Related

Firebase Realtime Database Query: Filtering data based on the UserId signed in and a further unique identifier within the database

I have a query regarding an app I am trying to develop with Node.js, React, and the Firebase Realtime Database.
The app is for a school, and I am trying to write the correct code for filtering the data by course, based on the course that the student has signed up for.
In the Firebase Realtime Database, I have two structures, as per below:
- Courses
{
  "courseData" : [ {
    "course" : {
      "day" : "Tuesday",
      "duration" : "10 weeks",
      "language" : "German",
      "location" : "Online",
      "startdate" : "12th January",
      "term" : "January",
      "time" : "17.30-18.30",
      "timeofday" : "Evening"
    },
    "courseID" : "JRNGETNXXOLTUV",
    "dates" : {
      "class1" : "12/01/2021"
    }
  } ],
  "users" : {
    "kwvjUSgZKXXfxxxxxxxxxxxxxxxxxx" : {
      "courseID" : "JRNGETNXXOLTUV",
      "email" : "test#test.com",
      "username" : "Test"
    },
    "vXf4WcRGQcxxxxxxxxxxxxxxxxxxx" : {
      "courseID" : "JRNGETNXXOLTUV",
      "email" : "test2#test.com",
      "username" : "Test Test"
    }
  }
}
I have a courseID in both courseData and the users section of the Firebase Realtime Database.
At the moment, I can generate course data for a specific course when I manually insert the courseID, as you will see in the excerpts below.
Excerpt 1
filtercourse(courseID) {
  return function (coursedata) {
    return coursedata.courseID === courseID;
  };
}
....
Excerpt 2
<tbody>
  {this.state.courseData.filter(this.filtercourse('JANSPADBGOLWEE')).map((data, index) => (
    <tr key={index}>
      ...
Instead of manually inserting the courseID (in this case it's JANSPADBGOLWEE), I understand that I need to create a function where the courseData is filtered by courseID, based on courseData.courseID being equal to the signed-in user's courseID; however, this seems to be beyond me. Any help or advice here would be greatly appreciated.
I think you're looking for a Firebase query, which allows you to sort and filter data.
On Courses, you could get only the child nodes whose courseID has a specific value with:
let courses = firebase.database().ref().child("courseData");
let courseQuery = courses.orderByChild("courseID").equalTo("JANSPADBGOLWEE");
courseQuery.once("value").then((snapshot) => {
  snapshot.forEach((courseSnapshot) => {
    console.log(courseSnapshot.key, courseSnapshot.child("course/day").val());
  });
});
If you first need to look up the course ID for the current user, that'd be:
let users = firebase.database().ref().child("users");
if (!firebase.auth().currentUser) throw "No current user";
let uid = firebase.auth().currentUser.uid;
users.child(uid).once("value").then((userSnapshot) => {
  console.log(userSnapshot.val().courseID);
});
Note that this is a fairly standard way of loading data from Firebase, so I recommend reading some more of the documentation and taking a few tutorials to become self-sufficient with it.
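Putting the two together for your React component, a rough sketch (assuming the namespaced web SDK used above and the data layout from your question) would be:
const user = firebase.auth().currentUser;
if (!user) throw new Error("No current user");

const db = firebase.database();

// 1. Look up the signed-in user's courseID.
db.ref("users").child(user.uid).once("value")
  .then((userSnapshot) => {
    const courseID = userSnapshot.val().courseID;
    // 2. Query courseData for the entries with that courseID.
    return db.ref("courseData").orderByChild("courseID").equalTo(courseID).once("value");
  })
  .then((coursesSnapshot) => {
    const courses = [];
    coursesSnapshot.forEach((courseSnapshot) => {
      courses.push(courseSnapshot.val());
    });
    // e.g. this.setState({ courseData: courses }); inside the component
    console.log(courses);
  });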

Auto-increment a field value every time a doc is inserted in Elasticsearch

I have a requirement to generate a unique number (ARN) in this format:
DD/MM/YYYY/1, DD/MM/YYYY/2
and insert these into an Elasticsearch index.
The approach I am thinking of is to create an auto-increment field in the doc, use it to generate a new entry, and use the newly generated number to create the ARN and update the doc.
The doc structure that I am planning to use:
{ id: 1, arn: "17/03/2018/01" }
something like this.
How can I get an auto-increment field in Elasticsearch?
It can't be done in a single step. First you have to insert the record into the database, and then update the ARN with its id.
There is no auto-increment equivalent to, for example, a Hibernate id generator. You could use the Bulk API (if you have to save multiple documents at a time) and increase the _id and the ending of your ARN value programmatically.
Note: if you want to treat your id as a number, you should implement it yourself (in this example, I added a new field "my_id", because the _id of the documents is treated as a string).
POST /_bulk
{ "index" : { "_index" : "your_index", "_type" : "your_type", "_id" : "1" } }
{ "arn" : "2018/03/17/1", "my_id" : 1 }
{ "index" : { "_index" : "your_index", "_type" : "your_type", "_id" : "2" } }
{ "arn" : "2018/03/17/2", "my_id" : 2 }
Then, the next time you want to save new documents, you query for the maximum id, something like:
POST /my_index/my_type/_search?size=1
{
  "_source": ["my_id"],
  "sort": [
    { "my_id": { "order": "desc" } }
  ]
}
If your only requirement is that this ARN should be unique, you could also let Elasticsearch calculate your _id by simply not setting it. Then you could rely on some unique token generator (UUID.randomUUID().toString() if you work with Java). Pseudo-code follows:
String uuid = generateUUID() // depends on the programming language
String payload = "{ \"arn\" : \"" + uuid + "\" }" // concatenate the payload
String url = "http://localhost:9200/my_index" // your target index
executePost(url, payload) // implement the call with some HTTP client library
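For completeness, a minimal Node.js sketch of that idea (assumptions: Node 18+ for the built-in fetch and crypto.randomUUID, a local cluster, and the example index name my_index; newer Elasticsearch versions use the /_doc endpoint instead of a custom type):
import { randomUUID } from "node:crypto";

async function indexWithUniqueArn(): Promise<void> {
  // Build an ARN with the DD/MM/YYYY prefix and a UUID instead of a counter.
  const datePart = new Date().toLocaleDateString("en-GB"); // e.g. "17/03/2018"
  const arn = `${datePart}/${randomUUID()}`;

  // POST without an _id so Elasticsearch generates the document id itself.
  const response = await fetch("http://localhost:9200/my_index/_doc", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ arn }),
  });
  console.log(await response.json());
}

indexWithUniqueArn().catch(console.error);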

Insertion order of array elements in MongoDB

I am having trouble preserving insertion order with a bulk insert into MongoDB.
My application requires posting data continuously (via HTTP POST, once a second) to a server. On the server side, the HTTP POST is handled and this data is stored in a capped collection in MongoDB v2.4. The size of this capped collection is large (50 MB). The format of this data is JSON, and it has arrays in it like this:
{"Data":[{"Timestamp":"2014-08-02 13:38:18:852","Rabbit":[{"a":-0.0010663296561688185,"c":-0.0021321268286556005,"b":-0.0010663296561688185}],"Monkec":[{"a":17.511783599853516,"c":-0.42092469334602356,"b":-0.42092469334602356}]},{"Timestamp":"2014-08-02 13:38:18:858","Rabbit":[{"a":-0.0021321268286556005,"c":-0.0010663296561688185,"b":0.0010657970560714602}],"Monkec":[{"a":9.892329216003418,"c":-0.2339634746313095,"b":-0.2342628538608551}]},{"Timestamp":"2014-08-02 13:38:18:863","Rabbit":[{"a":-0.0010663296561688185,"c":-0.0021321268286556005,"b":0.0021315941121429205}],"Monkec":[{"a":9.702523231506348,"c":-0.24264541268348694,"b":-0.2148033082485199}]},{"Timestamp":"2014-08-02 13:38:18:866","Rabbit":[{"a":-0.0010663296561688185,"c":-0.0010663296561688185,"b":0.0010657970560714602}],"Monkec":[{"a":9.665101051330566,"c":-0.23366409540176392,"b":-0.2197430431842804}]},{"Timestamp":"2014-08-02 13:38:18:868","Rabbit":[{"a":-0.0021321268286556005,"c":-0.0010663296561688185,"b":0.0010657970560714602}],"Monkec":[{"a":9.693991661071777,"c":-0.2936892807483673,"b":-0.22857467830181122}]},{"Timestamp":"2014-08-02 13:38:18:872","Rabbit":[{"a":-0.0010663296561688185,"c":-0.0010663296561688185,"b":0.0010657970560714602}],"Monkec":[{"a":9.684710502624512,"c":-0.2296224981546402,"b":-0.13786330819129944}]},{"Timestamp":"2014-08-02 13:38:18:873","Rabbit":[{"a":-0.0021321268286556005,"c":-0.0010663296561688185,"b":0.0010657970560714602}],"Monkec":[{"a":9.67707633972168,"c":-0.31255003809928894,"b":-0.1902543604373932}]},{"Timestamp":"2014-08-02 13:38:18:875","Rabbit":[{"a":-0.0021321268286556005,"c":-0.0010663296561688185,"b":0}],"Monkec":[{"a":9.739496231079102,"c":-0.1899549812078476,"b":-0.18845809996128082}]},{"Timestamp":"2014-08-02 13:38:18:878","Rabbit":[{"a":-0.003197923768311739,"c":-0.0010663296561688185,"b":0.0010657970560714602}],"Monkec":[{"a":9.721234321594238,"c":-0.19205063581466675,"b":-0.17318984866142273}]},{"Timestamp":"2014-08-02 13:38:18:881","Rabbit":[{"a":-0.003197923768311739,"c":-0.003197923768311739,"b":0.0010657970560714602}],"Monkec":[{"a":9.78545093536377,"c":-0.2501298487186432,"b":-0.1953437775373459}]},{"Timestamp":"2014-08-02 13:38:18:882","Rabbit":[{"a":0,"c":-0.0010663296561688185,"b":0.0021315941121429205}],"Monkec":[{"a":9.686058044433594,"c":-0.21630020439624786,"b":-0.18247054517269135}]},{"Timestamp":"2014-08-02 13:38:18:884","Rabbit":[{"a":-0.0010663296561688185,"c":0,"b":0.0010657970560714602}],"Monkec":[{"a":9.67198657989502,"c":-0.18546432256698608,"b":-0.23156845569610596}]},{"Timestamp":"2014-08-02 13:38:18:887","Rabbit":[{"a":-0.0010663296561688185,"c":-0.0010663296561688185,"b":0.0010657970560714602}],"Monkec":[{"a":9.640103340148926,"c":-0.23276595771312714,"b":-0.25686585903167725}]},{"Timestamp":"2014-08-02 13:38:18:889","Rabbit":[{"a":-0.0010663296561688185,"c":0,"b":0}],"Monkec":[{"a":9.739346504211426,"c":-0.19130218029022217,"b":-0.22602996230125427}]},{"Timestamp":"2014-08-02 13:38:18:891","Rabbit":[{"a":-0.0021321268286556005,"c":-0.0010663296561688185,"b":0}],"Monkec":[{"a":9.716594696044922,"c":-0.22543121874332428,"b":-0.19728973507881165}]},{"Timestamp":"2014-08-02 13:38:18:898","Rabbit":[{"a":-0.0010663296561688185,"c":-0.0010663296561688185,"b":0.0010657970560714602}],"Monkec":[{"a":9.682914733886719,"c":-0.28680360317230225,"b":-0.1740879863500595}]},{"Timestamp":"2014-08-02 
13:38:18:904","Rabbit":[{"a":-0.0010663296561688185,"c":0,"b":0.0021315941121429205}],"Monkec":[{"a":9.693093299865723,"c":-0.20866607129573822,"b":-0.2586621046066284}]},{"Timestamp":"2014-08-02 13:38:18:907","Rabbit":[{"a":-0.0021321268286556005,"c":-0.0010663296561688185,"b":0}],"Monkec":[{"a":9.690997123718262,"c":-0.18681152164936066,"b":-0.23216719925403595}]},{"Timestamp":"2014-08-02 13:38:18:910","Rabbit":[{"a":-0.003197923768311739,"c":-0.0010663296561688185,"b":0.0010657970560714602}],"Monkec":[{"a":9.671688079833984,"c":-0.15388000011444092,"b":-0.2588118016719818}]},{"Timestamp":"2014-08-02 13:38:19:055","Rabbit":[{"a":-0.0010663296561688185,"c":-0.0010663296561688185,"b":0}],"Monkec":[{"a":9.689650535583496,"c":-0.23605911433696747,"b":-0.1989363133907318}]}],"Serial":"35689"}
I am inserting this into MongoDB (using the Node.js MongoClient driver) with a bulk insert command:
var length = 20;  // only doing 20 inserts for testing purposes
var inserted = 0; // declared here for completeness; counts completed bulk executes
for (var i = 0; i < length; i++) {
  var bulk = col.initializeUnorderedBulkOp();
  bulk.insert(data["Data"][i]); // data is my JSON data of interest
  bulk.execute(function(err) {
    if (err) {
      return cb(err);
    }
    if (++inserted == length) {
      cb(); // callback (not seen in this code snippet)
    }
  }); // end function
} // end of for loop
However, when I examine the entries in the database, they are not inserted in the order in which the data resides in the originating JSON array. My source data is in ascending Timestamp order, but a few entries in the mongodb capped collection are out of order. For instance, I see this:
{ "Timestamp" : "2014-08-02 13:38:18:910", "Rabbit" : [ { "a" : -0.003197923768311739, "c" : -0.0010663296561688185, "b" : 0.0010657970560714602 } ], "Monkec" : [ { "a" : 9.671688079833984, "c" : -0.15388000011444092, "b" : -0.2588118016719818 } ], "_id" : ObjectId("548e67a683946a5d25bc6d1a") }
{ "Timestamp" : "2014-08-02 13:38:18:884", "Rabbit" : [ { "a" : -0.0010663296561688185, "c" : 0, "b" : 0.0010657970560714602 } ], "Monkec" : [ { "a" : 9.67198657989502, "c" : -0.18546432256698608, "b" : -0.23156845569610596 } ], "_id" : ObjectId("548e67a683946a5d25bc6d13") }
{ "Timestamp" : "2014-08-02 13:38:18:904", "Rabbit" : [ { "a" : -0.0010663296561688185, "c" : 0, "b" : 0.0021315941121429205 } ], "Monkec" : [ { "a" : 9.693093299865723, "c" : -0.20866607129573822, "b" : -0.2586621046066284 } ], "_id" : ObjectId("548e67a683946a5d25bc6d18") }
So "Timestamp" : "2014-08-02 13:38:18:910" is stored before "Timestamp" : "2014-08-02 13:38:18:884", even though it is the other way around in the source JSON.
How can I ensure MongoDB inserts data in the correct order? I also tried non-bulk inserts (db.col.insert or db.col.insertOne), but I still get this inconsistency. Thank you.
If your queries aren't asking for any specific sorting/ordering, MongoDB makes no guarantees about the order in which documents will be returned.
How you insert your data is irrelevant. What you need to do is write your find query like this:
// Sort by ascending timestamp
db.my_collection.find({ ... }).sort({"TimeStamp": 1})
See http://docs.mongodb.org/manual/reference/method/cursor.sort/#cursor.sort for more information on how sorting works.
Of course, if you want to do that, you'll greatly benefit from adding an index on Timestamp to your collection (see http://docs.mongodb.org/manual/core/indexes/).
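For example, with the Node.js driver (a rough sketch; the collection name is just a placeholder), the index can be created once at startup:
// Ascending index on Timestamp so the .sort({"Timestamp": 1}) query stays fast
// as the capped collection grows.
db.collection("my_collection").createIndex({ Timestamp: 1 }, function (err, indexName) {
  if (err) return console.error(err);
  console.log("Created index:", indexName);
});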

Nodejs mongo return data with pagination information

I am using Node and Mongo with the native client.
I would like to add pagination to my application.
To implement pagination, I need my responses to always return a count alongside the data.
I would like to get something like:
{
  count : 111,
  data : [ { 'a' : 'only first item was requested' } ]
}
I can do this in mongo
> var guy = db.users.find({}).limit(1)
> guy.count()
11
> guy.toArray()
[
  {
    "_id" : ObjectId("5381a7c004fb02b10b557ee3"),
    "email" : "myEmail#guy.com",
    "fullName" : "guy mograbi",
    "isAdmin" : true,
    "password" : "fe20a1f102f49ce45d1170503b4761ef277bb6f",
    "username" : "guy",
    "validated" : true
  }
]
but when I do the same with the Node.js Mongo client, I get errors.
var cursor = collection.find().limit(1);
cursor.toArray( function(){ .. my callback .. });
cursor.count();
It seems that:
count is not defined on the cursor
once I have applied toArray to the cursor, I cannot use the cursor again
How, using Node.js, can I accomplish the same thing I can with mongo directly?
As others have said, if you want to have a total count of the items and then the data, you will need two queries; there is no other way. Why are you concerned with making two queries?
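A rough sketch of the two-query approach with the native driver (countDocuments assumes a reasonably recent driver; on older versions, collection.count() or cursor.count() plays the same role):
function findWithCount(collection, query, limit) {
  return Promise.all([
    collection.countDocuments(query),               // total number of matching docs
    collection.find(query).limit(limit).toArray()   // only the requested page
  ]).then(function (results) {
    return { count: results[0], data: results[1] };
  });
}

// usage
findWithCount(db.collection("users"), {}, 1).then(function (result) {
  console.log(result); // { count: 11, data: [ { ... first user ... } ] }
});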

Group by date in MongoDB

I'm running a blog-style web application on AppFog (ex Nodester).
It's written in Node.js + Express and uses the Mongoose framework to persist to MongoDB.
MongoDB is version 1.8, and I don't know whether AppFog is going to upgrade it to 2.2 or not.
Why this intro? Well, right now my "posts" are shown in a basic "paginated" visualization; I mean they're just picked up from Mongo, sorted by date descending, a page at a time. Here's a snippet:
Post
  .find({pubblicato: true})
  .populate("commenti")
  .sort("-dataInserimento")
  .skip(offset)
  .limit(archivePageSize)
  .exec(function(err, docs) {
    var result = {};
    result.postsArray = (!err) ? docs : [];
    result.currentPage = currentPage;
    result.pages = howManyPages;
    cb(null, result);
  });
Now, my goal is to GROUP BY 'dataInserimento' and show posts like a "diary", I mean:
1st page => 2012/10/08: I show 3 posts
2nd page => 2012/10/10: I show 2 posts (2012/10/09 has no posts, so I don't allow a white page)
3rd page => 2012/10/11: 35 posts and so on...
My idea is to first get the list of all dates with grouping (and maybe a count of posts for each day), then build the page links, and, when a page (date) is visited, query as above, adding the date as a parameter.
SOLUTIONS:
The aggregation framework would be perfect for that, but I can't get my hands on that version of Mongo right now.
Using .group() in some way, but the fact that it doesn't work in sharded environments does NOT excite me! :-(
Writing a MAP-REDUCE! I think this is the right way to go, but I can't imagine how map() and reduce() should be written.
Can you help me with a little example, please?
Thanks
EDIT:
peshkira's answer is correct; however, I don't know if I need exactly that.
I mean, I will have URLs like /archive/2012/10/01, /archive/2012/09/20, and so on.
On each page, it's enough to have the date to query for posts. But then I have to show "NEXT" or "PREV" links, so I need to know the next or previous day containing posts, if any. Maybe I can just query for posts with dates greater or smaller than the current one, and get the first result's date?
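Something like this is what I have in mind for the links (a rough sketch; startOfCurrentDay and endOfCurrentDay would be the bounds of the day currently being shown):
// Next day with posts: first post strictly after the day being shown.
Post
  .findOne({ pubblicato: true, dataInserimento: { $gt: endOfCurrentDay } })
  .sort("dataInserimento")
  .exec(function (err, nextPost) {
    var nextDate = nextPost ? nextPost.dataInserimento : null; // null => hide "NEXT"
  });

// Previous day with posts: last post strictly before the day being shown.
Post
  .findOne({ pubblicato: true, dataInserimento: { $lt: startOfCurrentDay } })
  .sort("-dataInserimento")
  .exec(function (err, prevPost) {
    var prevDate = prevPost ? prevPost.dataInserimento : null; // null => hide "PREV"
  });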
Assuming you have something similar to:
{
  "author" : "john doe",
  "title" : "Post 1",
  "article" : "test",
  "created" : ISODate("2012-02-17T00:00:00Z")
}
{
  "author" : "john doe",
  "title" : "Post 2",
  "article" : "foo",
  "created" : ISODate("2012-02-17T00:00:00Z")
}
{
  "author" : "john doe",
  "title" : "Post 3",
  "article" : "bar",
  "created" : ISODate("2012-02-18T00:00:00Z")
}
{
  "author" : "john doe",
  "title" : "Post 4",
  "article" : "foo bar",
  "created" : ISODate("2012-02-20T00:00:00Z")
}
{
  "author" : "john doe",
  "title" : "Post 5",
  "article" : "lol cat",
  "created" : ISODate("2012-02-20T00:00:00Z")
}
then you can use map reduce as follows:
Map
It just emits the date as the key and the post title as the value. You can change the title to the _id, which will probably be more useful to you. If you store a time with the date, you will want to use only the date part (without the time) as the key; otherwise Mongo will group by date and time, not only by date. In my test case all posts have the same time, 00:00:00, so it does not matter.
function map() {
  emit(this.created, this.title);
}
Reduce
It does nothing more than push all values for a key into an array, which is then wrapped in a result object, because Mongo does not allow an array to be the result of a reduce function.
function reduce(key, values) {
  var array = [];
  var res = { posts: array };
  values.forEach(function (v) { res.posts.push(v); });
  return res;
}
Execute
Using db.runCommand({mapreduce: "posts", map: map, reduce: reduce, out: {inline: 1}}) will output the following result:
{
  "results" : [
    {
      "_id" : ISODate("2012-02-17T00:00:00Z"),
      "value" : {
        "posts" : [
          "Post 2",
          "Post 1"
        ]
      }
    },
    {
      "_id" : ISODate("2012-02-18T00:00:00Z"),
      "value" : "Post 3"
    },
    {
      "_id" : ISODate("2012-02-20T00:00:00Z"),
      "value" : {
        "posts" : [
          "Post 5",
          "Post 4"
        ]
      }
    }
  ],
  ...
}
I hope this helps
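Since the blog uses Mongoose, roughly the same map/reduce can also be run from Node. A minimal sketch, assuming an older Mongoose version where Model.mapReduce is available, a Post model, and the created field from the example documents above:
var o = {};
o.map = function () { emit(this.created, this.title); };
o.reduce = function (key, values) {
  var res = { posts: [] };
  values.forEach(function (v) { res.posts.push(v); });
  return res;
};
o.out = { inline: 1 };

Post.mapReduce(o, function (err, results) {
  if (err) return console.error(err);
  console.log(results); // one entry per distinct "created" date
});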
