Do I have a local Firestore database? - node.js

I want to understand what kind of Firestore database is installed on my box.
The code is running on Node.js 9.
If I remove the internet for X minutes and put it back, I can see all the cached transactions going to Firestore (adds, updates, deletes).
If I add the firebase.firestore().enablePersistence() line after firebase.initializeApp(fbconfig), I get this error:
Error enabling offline persistence. Falling back to persistence
disabled: FirebaseError: [code=unimplemented]: This platform is either
missing IndexedDB or is known to have an incomplete implementation.
Offline persistence has been disabled.
Now, my question is: if I don't have persistence enabled, or can't have it, how come I still see internal transactions going on when I disconnect my device from the internet? Am I really reading this the right way?
To me, the fact that the console.log() inside the then() of batch.commit or transaction.update doesn't fire right away (only when the internet comes back) suggests that I have some kind of internal database persistence, don't you think?
Thanks in advance for your help.
UPDATE
When sendUpdate is called, it looks like the batch.commit() is executed, because I can see something going on in listenMyDocs(), but the console.log "Commit successfully!" is not shown until the internet comes back.
function sendUpdate(response) {
  const db = firebase.firestore();
  let batch = db.batch();
  let ref = db.collection('my-collection')
    .doc('my-doc')
    .collection('my-doc-collection')
    .doc('my-new-doc');
  batch.update(ref, { "variable": response.state });
  batch.commit().then(() => {
    console.log("Commit successfully!");
  }).catch((error) => {
    console.error("Commit error: ", error);
  });
}
function listenMyDocs() {
  const firebase = connector.getFirebase();
  const db = firebase.firestore()
    .collection('my-collection')
    .doc('my-doc')
    .collection('my-doc-collection');
  const query = db.where('var1', '==', "true")
    .where('var2', '==', false);
  query.onSnapshot(snapshot => {
    snapshot.docChanges().forEach(change => {
      if (change.type === 'added') {
        console.log('ADDED');
      }
      if (change.type === 'modified') {
        console.log('MODIFIED');
      }
      if (change.type === 'removed') {
        console.log('DELETED');
      }
    });
  });
}

the console.log "Commit successfully!" is not shown until the internet comes back
This is the expected behavior. Completion listeners fire once the data is committed on the server.
Local events may fire before completion, in an effort to allow your UI to update optimistically. If the server changes the behavior that the client raised events for (for example, if the server rejects a write), the client will fire reconciliatory events (so if an add was rejected, it will fire a change.type = 'removed' event once that is detected).
I am not entirely sure if this applies to batch updates though, and it might be tricky to test that from a Node.js script as those usually bypass the security rules.
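If you want to observe this latency compensation directly, Firestore exposes snapshot metadata that says whether an event came from the local cache or has been confirmed by the server. A minimal sketch against the question's collection, using the same namespaced API as the code above:

// Listen with metadata changes included, so we also get an event
// when the server later acknowledges a locally raised write.
const query = firebase.firestore()
  .collection('my-collection')
  .doc('my-doc')
  .collection('my-doc-collection');

query.onSnapshot({ includeMetadataChanges: true }, snapshot => {
  snapshot.docChanges({ includeMetadataChanges: true }).forEach(change => {
    // true while the write only exists in the local cache,
    // false once the backend has committed it.
    const pending = change.doc.metadata.hasPendingWrites;
    console.log(change.type, pending ? '(local)' : '(server)');
  });
});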

Related

Progress bar for express / react communicating with backend

I want to make a progress bar that tells the user where in the process of fetching the API my backend is. But it seems like every time I send a response it stops the request. How can I avoid this, and what should I google to learn more, since I didn't find anything online?
React:
const { data, error, isError, isLoading } = useQuery('posts', fetchPosts);
if (isLoading) { return <p>Loading..</p>; }
return (<>{data && <p>{data}</p>}</>);
Express:
app.get("/api/v1/testData", async (req, res) => {
  try {
    const info = req.query.info;
    const sortByThis = req.query.sortBy;
    if (info) {
      let yourMessage = "Getting Data";
      res.status(200).send(yourMessage);
      const valueArray = await fetchData(info);
      yourMessage = "Data retrieved, now sorting";
      res.status(200).send(yourMessage);
      const sortedArray = valueArray.filter((item) => item.value === sortByThis);
      yourMessage = "Sorting done, now creating geojson";
      res.status(200).send(yourMessage);
      const geojson = createGeoJson(sortedArray);
      res.status(200).send(geojson);
    } else {
      res.status(400).send();
    }
  } catch (err) {
    console.log(err);
    res.status(500).send();
  }
});
You can only send one response to a request in HTTP.
In case you want to have status updates using HTTP, the client needs to poll the server i.e. request status updates from the server. Keep in mind though that every request needs to be processed on the server side and will take resources away which are then not available for other (more important) requests from other clients. So don't poll too frequently.
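As a minimal polling sketch (the /api/v1/status endpoint, the jobId parameter and updateProgressBar are all hypothetical, just to show the shape):

// Client: poll a (hypothetical) status endpoint every 2 seconds.
function pollStatus(jobId) {
  const timer = setInterval(async () => {
    const res = await fetch('/api/v1/status?jobId=' + jobId);
    const { stage, done } = await res.json();
    updateProgressBar(stage); // your UI update hook
    if (done) clearInterval(timer); // stop polling when finished
  }, 2000);
}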
If you want to support long running operations using HTTP have a look at the following API design pattern.
Alternatively you could also use a WebSocket connection to push updates from the server to the client. I assume your computation on the backend will not take minutes and you want to update the client in real time, so WebSockets will probably be the best option for you. A WebSocket connection has, once established, considerably less overhead than sending HTTP requests/responses back and forth between client and server.
Have a look at this thread, which discusses the abovementioned and other possibilities.
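As a rough sketch of the WebSocket option, using the ws package on the server and reusing the question's fetchData/createGeoJson helpers (the port and the message format are made up):

// Server: push progress updates over one WebSocket connection
// instead of trying to send multiple HTTP responses.
const { WebSocketServer } = require('ws');
const wss = new WebSocketServer({ port: 8081 });

wss.on('connection', (ws) => {
  ws.on('message', async (msg) => {
    const { info, sortBy } = JSON.parse(msg);
    ws.send(JSON.stringify({ stage: 'Getting Data' }));
    const valueArray = await fetchData(info);
    ws.send(JSON.stringify({ stage: 'Data retrieved, now sorting' }));
    const sortedArray = valueArray.filter((item) => item.value === sortBy);
    const geojson = createGeoJson(sortedArray);
    ws.send(JSON.stringify({ stage: 'Done', geojson }));
  });
});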

Spikes in execution time for cloud functions?

I have a pretty dead simple cloud function that writes a single value to my real-time database. The code is at the bottom of this post.
Watching the logs, I'm finding that the execution time is highly inconsistent. Here's a screenshot:
You can see that it's as low as 3ms (great!) and as high as 579ms (very bad-- and I've seen it reach 1000ms). The result is very noticeable delays in my chatroom implementation, with messages sometimes being appended out of order from how they were sent. (i.e. "1" "2" "3" is being received as "2" "3" "1")
Why might execution time vary this wildly? Cold start vs warm start doesn't seem to apply since you can see these calls happened directly one after the other. I also can't find any documented limits on writes/sec for real-time db, unlike the 1 write/sec limit on firestore documents.
Here's the code:
import * as functions from 'firebase-functions';
import * as admin from 'firebase-admin';

admin.initializeApp();
const messagesRef = admin.database().ref('/messages/general');

export const sendMessageToChannel = functions.https.onCall(async (data, context) => {
  if (!context.auth) {
    throw new functions.https.HttpsError(
      'failed-precondition',
      'User must be logged-in.'
    );
  }
  try {
    await messagesRef.push({
      uid: context.auth.uid,
      displayName: data.displayName,
      body: data.body
    });
  } catch (error) {
    throw new functions.https.HttpsError('aborted', error);
  }
});
Edit: I've seen this similar question from two years ago, where the responder indicates that the tasks themselves have variable execution time.
Is that the case here? Does the real-time database have wildly variable write times (varying by ~330x, from 3ms to 1000ms!)?
That's something quite hard to control from the code alone.
You have a lot of steps going on there:
verifying the user's authentication
sending the message to the collection
trying to catch any possible errors
So you can't rely simply on response time to keep the messages in order.
Instead, you should set a server-side timestamp from the client side to trace that.
You can achieve this with the following piece of code:
try {
  message.createdAt = firebase.firestore.FieldValue.serverTimestamp(); // server-side timestamp
  ... // calls to functions
} catch (err) {
  console.log("Couldn't set timestamp or send to functions");
}
This way you set a server-side timestamp for the message before sending it to be saved, so your users can see when a message was registered (timestamp), saved (function call) and confirmed (200 response when sent).
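Note that the question uses the Realtime Database rather than Firestore; there the equivalent sentinel is ServerValue.TIMESTAMP. A minimal sketch of this approach, reusing the question's /messages/general path (renderMessage is a hypothetical UI hook):

// Inside the callable function: let the database stamp the commit time.
await messagesRef.push({
  uid: context.auth.uid,
  displayName: data.displayName,
  body: data.body,
  createdAt: admin.database.ServerValue.TIMESTAMP
});

// Client: render messages ordered by that server timestamp, so
// variable function latency no longer scrambles the order.
firebase.database().ref('/messages/general')
  .orderByChild('createdAt')
  .on('child_added', (snap) => renderMessage(snap.val()));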

What is the reason for using GET instead of POST in this instance?

I'm walking through the JavaScript demos of pg-promise-demo and I have a question about the route /api/users/:name.
Running this locally works, the user is entered into the database, but is there a reason this wouldn't be a POST? Is there some sort of advantage to creating a user in the database using GET?
// index.js
// --------
app.get('/api/users/:name', async (req, res) => {
  try {
    const data = (req) => {
      return db.task('add-user', async (t) => {
        const user = await t.users.findByName(req.params.name);
        return user || t.users.add(req.params.name);
      });
    };
  } catch (err) {
    // do something with error
  }
});
For brevity I'll omit the code for t.users.findByName(name) and t.users.add(name) but they use QueryFile to execute a SQL command.
EDIT: Update link to pg-promise-demo.
The reason is explained right at the top of that file:
IMPORTANT:
Do not re-use the HTTP-service part of the code from here!
It is an over-simplified HTTP service with just GET handlers, because:
This demo is to be tested by typing URL-s manually in the browser;
The focus here is on a proper database layer only, not an HTTP service.
I think it is pretty clear that you are not supposed to follow the HTTP implementation of the demo, only its database layer. The demo's purpose is to teach you how to organize a database layer in a large application, not how to develop HTTP services.
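For a real service you would indeed create the user with POST; a minimal sketch of what that route could look like with the demo's db.task pattern (the response shape here is made up):

// In production, creating a resource belongs behind POST: it is not
// cached or pre-fetched by crawlers, and the data travels in the
// request body rather than the URL.
app.post('/api/users', async (req, res) => {
  try {
    const user = await db.task('add-user', async (t) => {
      const found = await t.users.findByName(req.body.name);
      return found || t.users.add(req.body.name);
    });
    res.json({ success: true, data: user });
  } catch (err) {
    res.status(500).json({ success: false, error: err.message });
  }
});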

Looking for direction on a pouchdb error

error:"unauthorized"
id:"_design/db"
message:"You are not a db or server admin."
name:"unauthorized"
ok:true
reason:"You are not a db or server admin."
rev:"137-81fe83389359c1cfb50bf928f3558b81"
status:500
PouchDB is trying to push a design document after a full uninstall/reinstall of the app (so the local PouchDB should have been erased). I am guessing this is in the changes stream somewhere. But the weird part is that CouchDB is on revision 133, not 137.
How do I fix this? I tried a compact, but that didn't work. The only obvious answer I can think of is to manually make a bunch of revisions to the design doc on Couch, so that it's newer than 137.
I ran a search on the changes stream using this code
var http=require('http');
var url = "http:/server/db/_changes?style=all_docs";
http.get(url, function(res){
var body = '';
res.on('data', function(chunk){
body += chunk;
});
res.on('end', function(){
var test = JSON.parse(body);
test.results.forEach(function(item,index){
if (item.id==="_design/db"){
console.log(item);
}
});
});
}).on('error', function(e){
console.log("Got an error: ", e);
});
And I got 1 result, rev 133, the correct one. So where is PouchDB getting this from?
--Edit
Deleting the pouch database seems to fix it until the next app install.
The error status code is 500, which based on the documentation is:
500 - Internal Server Error
The request was invalid, either because the supplied JSON was invalid,
or invalid information was supplied as part of the request.
Also, the error message and reason mention that:
message:"You are not a db or server admin."
reason:"You are not a db or server admin."
I think the error might be caused by database admin and member permissions, because ordinary database member users/roles cannot PUT design docs; only database admin users/roles can.
You mentioned that:
It's really just because the phone has some future version of the
design doc ...
If the problem were with the revision, you should receive a 409 - Conflict error, NOT a 500 - Internal Server Error.
I'm not sure, just an idea.
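One common workaround when a non-admin client keeps trying to push design documents is to filter them out of the replication entirely. A minimal sketch (the database names are placeholders):

// Replicate to the remote, but skip design documents so an
// ordinary (non-admin) user never tries to PUT _design/db.
var localDB = new PouchDB('mydb');
var remoteDB = new PouchDB('http://server/db');

localDB.replicate.to(remoteDB, {
  live: true,
  retry: true,
  filter: function (doc) {
    return doc._id.indexOf('_design/') !== 0;
  }
});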
So it turns out Android now uses Google Drive to make backups of IndexedDB. This was causing the installed version of the app to keep getting a future version of the document after database rollbacks during testing.
The only way around it I found was to do this:
.on('denied', function (result) {
  if (result.doc.error === "unauthorized" && result.doc.id === "_design/db") {
    // catastrophic failure
    var DBDeleteRequest = window.indexedDB.deleteDatabase("_pouch_");
    DBDeleteRequest.onerror = function (event) {
      console.error("Error deleting database.");
      throw new Error("Error deleting database.");
    };
    DBDeleteRequest.onsuccess = function (event) {
      console.log("Database deleted successfully");
      window.location.reload(); // reload the app after purge
    };
  }
});
Even a pouchdb.destroy would not fully clear the problem. It's a bit of a nuke-from-orbit solution.

Choosing strategy to handle response from asynchronous API call

There is a web application which does not have its own database, but communicates with a remote one through an API. Performing a call to the API takes some time, and we do it asynchronously. The responsiveness of the application must be high from the user's point of view.
Let's assume the user is changing some data. To store the data we need to make a call. We start showing the new data right after making the call. But what can we do if the response to the call is unsuccessful? We need to restore the old values and show some kind of warning to the user. But the user may have left the page where the data were changed and be looking at a completely different page. What are the general patterns for handling such situations?
If you are using .NET 4.5 you can do this using async/await. If the web client you are calling provides an asynchronous API that returns a Task, you can simply await the call inside a try/catch block. The await will cause the method to return immediately, so the user will continue to see the old data while the call is executing. Once the web client call completes, the method "resumes" after the await and you can update your data.
If the web client call causes an exception, the method resumes in the catch block and you can display an error to the user.
public async Task CallAPI()
{
    try
    {
        var client = ...
        await client.CallAPI();
    }
    catch (Exception ex)
    {
        // show warning message
    }
}
If your web client does not provide an asynchronous API you can achieve the equivalent with the Task Parallel Library.
public void CallAPI1()
{
    Task.Factory.StartNew(() =>
    {
        var client = ...
        client.CallAPI();
    }).ContinueWith(t =>
    {
        if (t.Exception != null)
        {
            // display error
        }
        else
        {
            // update web page with the new data
        }
    },
    CancellationToken.None,
    TaskContinuationOptions.None,
    TaskScheduler.FromCurrentSynchronizationContext());
}
This article has more information on async/await. And this one has some of the best practices to follow.
