I'm new to Firebase and I'm still trying to learn how to get the latest child node in this RTDB. My NodeMCU will send new data periodically, so I'm trying to get the latest node when it's added, along with that node's value. Can you provide some sample code to help me understand? And if possible, please explain like I'm 5. Thank you and have a good day.
From what I understand you have an Arduino module that is going to be constantly introducing data into your database.
What you want is to be able to read the value shown in the image as MQ7 every time a new value is added.
If this is the case there are different ways to obtain it.
The first and most common way would be to use the Firebase child_added event. With this event you can handle the data every time something is added under your database reference.
Using this event you get the full set of values already stored under your reference, and with each addition that set is updated automatically (in real time).
Taking your image as an example, the query code would be something like this (JS):
dbRef.child("Sensor MQ7").on("child_added", (snap) => {
for (i in snap.val()) {
const value_MQ7 = snap.child(i).child("MQ7").val()
// Do what you want with the value
console.log(value_MQ7)
}
})
If you don't want to keep that set of every value stored under your reference, the better option is a query that returns only the value you are asking for, that is, the MQ7 value of the last object written under your Sensor MQ7 reference.
The query code would be something like this (JS):
const query = dbRef.child("Sensor MQ7").orderByKey().limitToLast(1);
query.get().then((snap) => {
  for (const i in snap.val()) {
    // Do what you want with the value
    const value_MQ7 = snap.child(i).child("MQ7").val();
    console.log(value_MQ7);
  }
});
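If you want both behaviors at once, real-time updates but only ever the newest node, the two ideas above can be combined. This is a sketch under the same assumed data layout as the snippets above:

// Listen only to the last key under the reference; child_added then fires
// once for the current newest node and again for each new node pushed later.
dbRef.child("Sensor MQ7").orderByKey().limitToLast(1)
  .on("child_added", (snap) => {
    for (const i in snap.val()) {
      const value_MQ7 = snap.child(i).child("MQ7").val();
      console.log(value_MQ7);
    }
  });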
Here is my problem:
I have a firestore collection that has a number of documents. There are about 500 documents generated/updated every hour and saved to the collection.
I would like to query the collection and set up a real-time snapshot listener for a subset of document IDs that are provided by the client.
I think maybe I could do something like this (this syntax is likely not correct... just trying to get a feel for whether it's even possible... but isn't the in operator limited to an array of 10 items?):
const subbedDocs = ["doc1","doc2","doc3","doc4","doc5"]
docsRef.where('docID', 'in', subbedDocs).onSnapshot((doc) => {
handleSnapshot(doc);
});
I'm sorry, that code probably doesn't make sense... I'm still trying to learn all the ins and outs of Firestore.
Essentially, what I am trying to do is take an array of IDs and set up an onSnapshot listener for those IDs. This list of IDs could be upwards of 40-50 items. Is this even possible? I am trying to avoid setting up a listener on the whole collection and filtering out things I am not "subscribed" to, as that seems wasteful from a resources perspective.
If you have the doc IDs in your array (it looks like you do), you can loop over them and start a listener for each one:
const subbedDocs = ["doc1", "doc2", "doc3", "doc4", "doc5"];
for (let i = 0; i < subbedDocs.length; i++) {
const docID = subbedDocs[i];
docsRef.doc(docID).onSnapshot((doc) => {
handleSnapshot(doc);
});
}
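One note on this approach: each onSnapshot call returns an unsubscribe function, so it is worth collecting them if you ever need to detach the listeners:

// Keep the unsubscribe functions so each listener can be detached later.
const unsubscribers = subbedDocs.map((docID) =>
  docsRef.doc(docID).onSnapshot((doc) => handleSnapshot(doc))
);
// Later, when the client unsubscribes:
// unsubscribers.forEach((unsub) => unsub());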
It would be better to listen to a single query for all the filtered docs at once, but if you want to listen to each of them with an explicit listener, this will do the trick.
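If you do want query listeners despite the 10-item cap on in, one hedged workaround is to split the IDs into chunks of 10 and attach one query listener per chunk. This sketch assumes the v8-style SDK used above, with the documentId() sentinel to match by document ID:

// Split the subscribed IDs into chunks of 10, the maximum `in` allows.
const chunks = [];
for (let i = 0; i < subbedDocs.length; i += 10) {
  chunks.push(subbedDocs.slice(i, i + 10));
}
// One query listener per chunk; 50 IDs means only 5 listeners.
const unsubs = chunks.map((chunk) =>
  docsRef
    .where(firebase.firestore.FieldPath.documentId(), 'in', chunk)
    .onSnapshot((querySnap) => {
      querySnap.forEach((doc) => handleSnapshot(doc));
    })
);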
As you've discovered, Firestore's in operator only allows up to 10 entries in the array. I'm also guessing you've added docID as a field in the document, since I don't believe 'docID' references the actual document ID.
I would not take this approach, because of the 10-entry limitation. What I would do instead is, as the client selects documents to follow, set a field (the same field in each document) to a unique ID for that client, so your query avoids the limitation entirely. You can allow an unlimited number of client listeners (up to the implementation limits of Firestore) if you add that client ID into an array (called something like "ListenerArray") as the client selects documents. Your query would look more like:
docsRef.where('ListenerArray', 'array-contains', clientID).onSnapshot((doc) => {
  handleSnapshot(doc);
});
array-contains checks a single value against every entry in a document's array field, without limit. Every client can mark any number of documents to subscribe to.
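For completeness, a hedged sketch of how the client might mark and unmark a document, again assuming the v8-style SDK:

// Subscribe: add this client's ID to the document's ListenerArray.
docsRef.doc(docID).update({
  ListenerArray: firebase.firestore.FieldValue.arrayUnion(clientID)
});

// Unsubscribe: remove it again.
docsRef.doc(docID).update({
  ListenerArray: firebase.firestore.FieldValue.arrayRemove(clientID)
});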
I'm working on a little web application that will crawl and update baseball standings by day and track teams' positions (among other things) over time.
I have an API I grab all of this from and a collection in MongoDB that stores all the team data and information for the current day. Right now I just run this manually but eventually it'll be automated to run at like 3am or whenever.
The API supplies a unique ID for each team that never changes. So I take in the team data from the API and pass it to a function that extracts the team data (there is other data in the response object I don't need), puts it into an object for replacement, and then, wherever that team ID exists in the collection, replaces its document in a bulkWrite.
async function currentStandings(db, team_standings, callback) {
  const current_standings = db.collection('current_standings');
  let replacePool = [];
  for (const single_team of team_standings.data.standing) {
    let replaceOnePusher = {
      replaceOne: {
        "filter": { "team_id": single_team.team_id },
        "replacement": single_team
      }
    };
    replacePool.push(replaceOnePusher);
  }
  await current_standings.bulkWrite(replacePool);
  callback();
}
However, when I execute this code for the first time each day, I get an error reading BulkWriteError: After applying the update, the (immutable) field '_id' was found to have been altered to _id: ObjectId('5f26e57b6831761ac840bf1d') (not the same ID every day), and if I look in Compass the data isn't updated. If I immediately run the script again, it goes through successfully without error, and refreshing the data in Compass shows the correct data.
Can someone explain to me what is going wrong here? This is actually my first time using MongoDB; I wanted to learn it, and this pet project seemed like a good place to start.
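For what it's worth, and hedged since the failing first run isn't shown here: this error usually means the replacement document carries an _id that differs from the _id of the document matched by the filter (the Node driver's insert helpers, for instance, mutate the objects you pass them by adding an _id). A defensive guard is to strip _id from the replacement before pushing it:

// Hedged sketch: drop any _id the API object may have picked up elsewhere,
// so the replacement can never alter the matched document's immutable _id.
const { _id, ...replacement } = single_team;
replacePool.push({
  replaceOne: {
    "filter": { "team_id": single_team.team_id },
    "replacement": replacement
  }
});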
I am developing a transaction workflow capsule, and I use the function transaction.retrieve() to get order data from the platform. But it returns only part of the order data.
MyReceipt is a structure that stores the order information; it is defined like this:
structure (MyReceipt) {
  description (order info)
  // properties
  features { activity }
}
And it is built as an output concept of the Commit action, like this:
action (CommitRequest) {
  type (Commit)
  description ()
  collect {
    // MyRequest
  }
  output (MyReceipt)
}
I try to get the data like this:
transaction.retrieve("bixby.MyCapsule.MyReceipt")
It is supposed to return all the MyReceipt data, but it returns only part of the receipt data. Is this the right way to get all the orders, or is there another way to get all the receipt data?
And I have found sample code that uses it like this to get the last receipt:
transaction.retrieve("bixby.MyCapsule.MyReceipt", "ALL", 1)
but it doesn't explain what the two parameters, "ALL" and 1, represent, and I want more details about the usage of this function.
Could you please tell me how to use transaction.retrieve() (or another function) to get all the historical receipt data, and how I can check the transaction data for a given user when I am trying to find the cause of an issue?
Copying the answer from dogethis. (Thanks, man! You did the hard work; I just took the credit.)
We have the DOC ready online here
Basically, "ALL" is the default and retrieves transaction data in every state, and 1 means only one record. The API page was not there before, so thanks for letting us know.
I think it's the 1 that is causing you not to get all the records, but note there is also a limit of 20...
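So, assuming the parameters behave as described above, a hedged sketch of pulling more history at once would be:

// "ALL" = include every transaction state; 20 = the record cap noted above.
const receipts = transaction.retrieve("bixby.MyCapsule.MyReceipt", "ALL", 20)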
Have fun with Bixby!
I am using Strapi for a prototype and I am running into the following issue. I have created a new content type, "Checklist", and added to it a 1-to-many relation property with the User model provided by the users-permissions plugin.
Then I wanted to add some custom logic in the lifecycle callbacks, beforeSave and beforeUpdate, from which I would like to access the user assigned to the Checklist.
The code looks like this:
const _moment = require('moment'); // _moment is assumed to be the moment library

var self = module.exports = {
  // Before saving a value.
  // Fired before an `insert` or `update` query.
  generateLabel: (model) => {
    var label = "";
    var day = _moment(model.date, _moment.ISO_8601).day();
    var month = _moment(model.date, _moment.ISO_8601).month();
    var year = _moment(model.date, _moment.ISO_8601).year();
    console.log(model);
    if (model.user) {
      label = `${model.user}-${year}-${month}-${day}`;
    } else {
      label = `unassigned-${year}-${month}-${day}`;
    }
    return label;
  }
};
I call the generateLabel method from the callback. It works, but model.user always returns undefined. It is a 1-n property. I can access the model.date property (one of the fields I created) without any issue, so I guess the problem is related to something I have to do to populate the user relation, but I am not sure how to proceed.
When I log the model object, the console displays what I guess is a complete mongoose object, but I am not sure where to go from there: if I try to access the properties I see in the console, I always get undefined.
Thanks in advance for your time. I am using the following versions:
strapi: 3.0.0-alpha.13.0.1
nodejs: v9.10.1
mongodb: 3.6.3
macos high sierra
I'm also running into a similar (or the same) issue. I think this has to do with the users-permissions plugin and having to go through it to access the User model. I also thought about trying to find the User associated with the ID of the newly created record; I'm trying to use afterCreate. Anyone who could shed some light on this would be great!
It's because relational attributes are not sent to the create function (see your Checklist service's add function).
Relations are handled in another function, updateRelations.
What you can do is send the values in Checklist.create().
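A hypothetical illustration of that suggestion; the exact controller and service signatures vary across Strapi alpha releases, so names like ctx.state.user and the service's add function are assumptions here:

// In the Checklist controller: merge the user id into the values before the
// service creates the record, so beforeSave can read model.user instead of
// waiting for updateRelations to wire the relation up afterwards.
create: async (ctx) => {
  const values = ctx.request.body;
  values.user = ctx.state.user.id; // user resolved by users-permissions
  return strapi.services.checklist.add(values);
}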
I am scraping a 90K-record database using JSON-RPC, and I am trying to put in some basic error checking. I want to start by scraping the database twice using two different settings and adding a prefix to the second scrape. This way I can check that the two settings are not producing different records (due to dropped updates, etc.). I wanted to implement the comparison using a view which compares each document from the first scrape with its twin produced by the second scrape and then emits the names of records with a difference between them.
However, I cannot quite figure out how to pull another doc into the view; everything I have read only discusses external docs using the emit() function, which is too late to permit me to compare them. In the example below, the lookup() function would grab the referenced document.
Is this just not possible?
function (doc) {
  if (doc._id.slice(0, 1) !== '$' && doc._id.slice(0, 1) !== '_') {
    // lookup() is the hypothetical function that would grab the other document
    var otherDoc = lookup('$test' + doc._id);
    if (otherDoc) {
      var keys = Object.keys(doc);
      var same = true;
      keys.forEach(function (key) {
        if ((key.slice(0, 1) !== '_') && (key.slice(0, 1) !== '$') && (key !== 'expires')) {
          // Object.equal stands in for a deep-equality helper
          if (!Object.equal(otherDoc[key], doc[key])) {
            same = false;
          }
        }
      });
      if (!same) {
        emit(doc._id, 1);
      }
    }
  }
}
Context
You are correct that this is not possible in CouchDB. The whole point of the map function is that it must be idempotent, otherwise you lose all the other nice benefits of a pre-calculated index.
This is why you cannot access external resources in the map function, whether they be other records or the clock. Any time you run a map you must always get the same result if you put the same record into it. Since there are no relationships between records in CouchDB, there is no way to guarantee this.
Solution
However, you can still achieve your end goal, just by different means. Some possibilities:
1. Assuming there is some meaningful numeric value in each doc, you could use a view to take the sum of all those values and group them by which import you did ({key: <batch id>, value: <meaningful number>}). Then compare the two numbers in your client or the browser to see if they match (see the sketch after this list).
2. A brute-force approach would be to use a view to pair up the docs that should match. Each doc is on a different row, but they're grouped by a common field. Then iterate through the entire index comparing the pairs. This would certainly be the quickest to code and doesn't depend on your application or data.
3. Implement a validation function to enforce a schema on your data. Just be warned that this will reduce your write throughput, since each written record will be piped out of Erlang and into the JS engine. Also, this is only applicable if you're worried about properly formed records rather than their precise content, which might not be the case.
4. Instead of having your different batch jobs create different docs, have them write into the same doc. The structure might look like this: { "_id": "something meaningful", "batch_one": { ..data.. }, "batch_two": { ..data.. } }. Then your validation function could compare them, or you could create a view that indexes all the docs that don't match. It all depends on where in your pipeline you want to do the error checking and correction.
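As a rough illustration of the first possibility (the field names batch_id and reading are assumptions, not from the question), the map half might look like this:

// Map: emit the meaningful numeric value keyed by which import produced it.
function (doc) {
  if (doc.batch_id && typeof doc.reading === 'number') {
    emit(doc.batch_id, doc.reading);
  }
}
// Reduce: the built-in _sum. Querying with group=true then yields one total
// per batch id, and the two totals can be compared client-side.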
Personally I like the last option best, but only if you don't plan to use the database as-is in production; i.e., you wouldn't want to carry around all that extra data in each record.
Hope that helps.
Cheers.