Firebase Cloud Function, getting a 304 error - node.js

I have a Firebase Cloud Function that resets a counter under every user's UID back to 0 every day. I have about 600 users and so far it's been working perfectly fine.
But today it's giving me a 304 error and not resetting the value. Here is a screenshot:
And here is the function code:
export const resetDailyQuestsCount = functions.https.onRequest((req, res) => {
  const ref = db.ref('users');
  ref.once('value').then(snap => {
    snap.forEach(item => {
      const uid = item.child('uid').val();
      ref.child(uid).update({ dailyQuestsCount: 0 }).catch(err => {
        res.status(500).send(err);
      });
    });
  }).catch(err => {
    res.status(500).send(err);
  });
  res.status(200).send('daily quest count reset');
});
Could this be my userbase growing too large? I doubt it, 600 is not that big.
Any help would be really appreciated! This is really affecting my users.

An HTTP function must only send a single response to the client. This means a single call to send(). Your function may attempt to send multiple responses to the client in the event that multiple updates fail. Your logging isn't complete enough to demonstrate this, but it's a very real possibility with what you've shown.
Also bear in mind that this function is very much not scalable, since it reads all of your users before processing them. For a large number of users, this presents memory problems. You should look into ways to limit the number of nodes read by your query in order to prevent future problems.
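For illustration, here is a minimal sketch of both fixes, waiting for all writes and sending exactly one response, and capping how many users are read per run; the async/await rewrite and the page size of 500 are assumptions, not the original code:
export const resetDailyQuestsCount = functions.https.onRequest(async (req, res) => {
  try {
    // Read a bounded page of users instead of the whole node (500 is an assumed size;
    // a full solution would page through with startAt()/limitToFirst()).
    const snap = await db.ref('users').orderByKey().limitToFirst(500).once('value');
    const updates: { [path: string]: number } = {};
    snap.forEach(item => {
      const uid = item.child('uid').val();
      updates[`${uid}/dailyQuestsCount`] = 0;
    });
    // One awaited multi-path update, then exactly one response.
    await db.ref('users').update(updates);
    res.status(200).send('daily quest count reset');
  } catch (err) {
    res.status(500).send(err);
  }
});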

Related

Spikes in execution time for cloud functions?

I have a pretty dead simple cloud function that writes a single value to my real-time database. The code is at the bottom of this post.
Watching the logs, I'm finding that the execution time is highly inconsistent. Here's a screenshot:
You can see that it's as low as 3ms (great!) and as high as 579ms (very bad-- and I've seen it reach 1000ms). The result is very noticeable delays in my chatroom implementation, with messages sometimes being appended out of order from how they were sent. (i.e. "1" "2" "3" is being received as "2" "3" "1")
Why might execution time vary this wildly? Cold start vs warm start doesn't seem to apply since you can see these calls happened directly one after the other. I also can't find any documented limits on writes/sec for real-time db, unlike the 1 write/sec limit on firestore documents.
Here's the code:
import * as functions from 'firebase-functions';
import * as admin from 'firebase-admin';

admin.initializeApp();
const messagesRef = admin.database().ref('/messages/general');

export const sendMessageToChannel = functions.https.onCall(async (data, context) => {
  if (!context.auth) {
    throw new functions.https.HttpsError(
      'failed-precondition',
      'User must be logged-in.'
    );
  }
  try {
    await messagesRef.push({
      uid: context.auth.uid,
      displayName: data.displayName,
      body: data.body
    });
  } catch (error) {
    throw new functions.https.HttpsError('aborted', error);
  }
});
Edit: I've seen this similar question from two years ago, where the responder indicates that the tasks themselves have variable execution time.
Is that the case here? Does the real-time database have wildly variable write times (varying by ~330x, from 3ms to 1000ms!)?
That's quite hard to control from the code alone.
You have a lot of steps going on there:
verifying the user's authentication
sending the message to the database
catching any possible errors
So you can't rely on response time alone to preserve message order.
Instead, you should set a server-side timestamp from the client when the message is created, and order by that.
You can achieve this with something like the following:
try {
  message.createdAt = firebase.firestore.FieldValue.serverTimestamp() // server-side timestamp
  ... // calls to functions
} catch(err) {
  console.log("Couldn't set timestamp or send to functions")
}
This way you set a server-side timestamp for the message before sending it to be saved, so your users can see when a message is registered (timestamp), saved (the function call) and confirmed (200 response).
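Since the question uses the Realtime Database rather than Firestore, a rough equivalent sketch (an assumption, not the answerer's code) would be to have the callable function stamp each message with a server timestamp and have the client order by it:
// Inside the callable function: let the RTDB server set the timestamp.
await messagesRef.push({
  uid: context.auth.uid,
  displayName: data.displayName,
  body: data.body,
  createdAt: admin.database.ServerValue.TIMESTAMP
});

// On the client: render in timestamp order instead of arrival order
// (renderMessage is a hypothetical UI helper).
firebase.database().ref('/messages/general')
  .orderByChild('createdAt')
  .on('child_added', snap => renderMessage(snap.val()));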

Nodejs proxy request coalescing

I'm running into an issue with my http-proxy-middleware setup. I'm using it to proxy requests to another service which, for example, might resize images.
The problem is that multiple clients might call the same endpoint at the same time and create a stampede on the original service. I'm now looking into a solution (what some services, e.g. Varnish, call request coalescing) that would call the service once, wait for the response, 'queue' the incoming requests with the same signature until the first is done, and then answer them all in one go. This is different from caching results: I want to prevent calling the backend multiple times simultaneously, not necessarily cache the results.
I'm trying to find out whether something like this goes by a different name, or whether I'm missing something that others have already solved, but I can't find anything.
Since the use case seems pretty basic for a reverse-proxy setup, I would have expected a lot of hits, but because the problem space is so generic my searches aren't turning up anything.
Thanks!
A colleague of mine helped me hack together my own answer. It's currently used as an Express middleware for specific GET endpoints: it hashes the request into a map and starts a single upstream request. Concurrent incoming requests with the same hash are added to that map entry and answered from the upstream request's callback, so the backend call is reused. This also means that if the first response is particularly slow, all coalesced requests are too.
This seemed easier than hacking it into http-proxy-middleware, but oh well, it got the job done :)
const axios = require('axios');

// Map of queryHash -> array of pending Express responses waiting on the same upstream call
const responses = {};

module.exports = (req, res) => {
  const queryHash = `${req.path}/${JSON.stringify(req.query)}`;
  if (responses[queryHash]) {
    console.log('re-using request', queryHash);
    responses[queryHash].push(res);
    return;
  }
  console.log('new request', queryHash);
  const axiosConfig = {
    method: req.method,
    url: `[the original backend url]${req.path}`,
    params: req.query,
    headers: {}
  };
  if (req.headers.cookie) {
    axiosConfig.headers.Cookie = req.headers.cookie;
  }
  responses[queryHash] = [res];
  axios.request(axiosConfig).then((axiosRes) => {
    // Answer every coalesced client with the single upstream result
    responses[queryHash].forEach((coalescingRequest) => {
      coalescingRequest.json(axiosRes.data);
    });
    responses[queryHash] = undefined;
  }).catch((err) => {
    responses[queryHash].forEach((coalescingRequest) => {
      coalescingRequest.status(500).json(false);
    });
    responses[queryHash] = undefined;
  });
};
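For context, a minimal way to wire this up as route middleware (the file name and route below are assumptions) could look like this:
// Usage sketch: identical concurrent GETs to the same path+query share one backend call.
const express = require('express');
const coalesce = require('./coalesce'); // the middleware above, assumed saved as coalesce.js

const app = express();
app.get('/images/*', coalesce);
app.listen(3000);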

Shopify API - Get all products (60k products) - Request timed out or socket hang up

I am trying to get all products, but I get "Request timed out" while fetching 60k products for an inventory management app.
I am using Node.js to loop over 200 pages, each page limited to 250 products. I limited my calls to 2 requests every 10 seconds (1 request per 5 seconds).
Sometimes I get these errors on some pages, sometimes not:
read ECONNRESET
Request timed out
socket hang up
Could anyone please tell me what the problem is? I would appreciate your help.
for (var i = 1; i <= totalPage; i++) {
  var promise = shopify.product.list({ limit: limit, page: i, fields: fields })
    .then(products => {
      // do something here when the products list arrives
      // loop through each product then save to DB
      // ShopifyModel.updateOne(.....)
    }).catch(error => {
      // sometimes it fires an error here
    });
}
I also tried to rewrite a function to get products of 1 page:
const request = require('request-promise');

var getProductOnePage = function (productUrl, headers, cb) {
  request.get(productUrl, { headers: headers, gzip: true })
    .then((ListProducts) => {
      console.log(" Got products list of one page");
      cb(ListProducts);
    })
    .catch(err => {
      // Got all errors here when trying to run it in a for loop, map or forEach with Promise.all
      console.log("Error Cant get product of 1 page: ", err.message);
    });
}
EDIT:
I found some problems similar to my case here:
https://github.com/request/request/issues/2047
https://github.com/twilio/twilio-node/issues/312
ECONNRESET and "Request timed out" errors are mostly due to network problems. Check that you have a stable internet connection.
If you're using the shopify-api-node package, use the autoLimit property. It will take care of rate limiting.
e.g.:
const shopify = new Shopify({
  shopName: shopName,
  apiKey: api_key,
  password: password,
  autoLimit: { calls: 2, interval: 1000, bucketSize: 30 }
});
Edit: Instead of writing then/catch inside a for loop, use async/await. Whether you implement a request-and-wait approach or not, the for loop will fire off all the requests at once; if you use await, it will process one request at a time.
let getProducts = async () => {
  for (var i = 1; i <= totalPage; i++) {
    try {
      let products = await shopify.product.list({ limit: limit, page: i, fields: fields });
      if (!products.length) {
        // all products have been fetched
        break;
      }
      // do your stuff here
    } catch (error) {
      console.log(error);
    }
  }
}
You have to understand the concept of rate limiting. With any public API like Shopify, you can only make so many calls before they put you on hold. So when you get a response back from Shopify, you can check the response header to see how many calls you have left. If that is zero and you try another request, you'll get back a 429.
So when you get a 0 for credits, or a 429 back, you can set yourself a little timeout and wait to make your next call.
If, on the other hand, you really are only doing 2 calls every 10 seconds as you say (it's not at all clear from your code how you do that, or why), and you're still getting timeouts, then your internet connection to Shopify is probably the problem.
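As an illustration of that header check, here is a minimal sketch; the X-Shopify-Shop-Api-Call-Limit header comes from the Shopify REST Admin API, but the helper itself and the 2-second delay are assumptions:
// Sketch: inspect Shopify's call-limit header and pause before the next request.
const axios = require('axios');

async function shopifyGet(url, params) {
  const res = await axios.get(url, { params });
  // Header looks like "32/40": 32 credits used out of a bucket of 40.
  const [used, bucket] = (res.headers['x-shopify-shop-api-call-limit'] || '0/40')
    .split('/').map(Number);
  if (bucket - used <= 1) {
    // Close to the limit: wait before the next call (delay value is an assumption).
    await new Promise(resolve => setTimeout(resolve, 2000));
  }
  return res.data;
}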

Problems making batch request in SPFx

Cheers guys and gals.
Having some problems with $batching requests to SP from SPFx.
Some background: The SP structure has one site collection with lots of subsites. Each subsite has a list whose name is identical on all subsites. I need to access all of those lists.
A normal SPHttpClient call gives me the URLs of all the sites. So far so good.
The plan was then to $batch the calls to get the data from the lists. Unfortunately, I only get an answer from one of the calls. The rest of the batched calls give me "InvalidClientQueryException". If I change the order of the calls, it seems like only the first call succeeds.
const spBatchCreationOptions: ISPHttpClientBatchCreationOptions = {
  webUrl: absoluteUrl
};
const spBatch: SPHttpClientBatch = spHttpClient.beginBatch(spBatchCreationOptions);

// Add three calls to the batch
const dan1 = spBatch.get("<endpoint1>", SPHttpClientBatch.configurations.v1);
const dan2 = spBatch.get("<endpoint2>", SPHttpClientBatch.configurations.v1);
const dan3 = spBatch.get("<endpoint3>", SPHttpClientBatch.configurations.v1);

// Execute the batch
spBatch.execute().then(() => {
  dan1.then((res1) => {
    return res1.json().then((res10) => {
      console.log(res10);
    });
  });
  dan2.then((res2) => {
    return res2.json().then((res20) => {
      console.log(res20);
    });
  });
  dan3.then((res3) => {
    return res3.json().then((res30) => {
      console.log(res30);
    });
  });
});
So in this case only the dan1 call succeeds. If, however, I change the second call to use the same endpoint as the first, they both succeed.
I can't really wrap my head around this, so if someone has any input it would be much appreciated.
//Dan
Make sure that all endpoints in one batch point to the same site. You cannot mix different sites within one batch; if you do, only the first call(s), those from the same site, will succeed.
To overcome that, you might switch to a search call to retrieve the information, which you can do against a single site URL.
See my blogpost on that for further information.
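For illustration, a single search REST call from SPFx could look roughly like this; the query text is a placeholder and the exact result shape depends on your OData settings:
// Sketch: one search call against a single site instead of a cross-site batch.
import { SPHttpClient, SPHttpClientResponse } from '@microsoft/sp-http';

const searchUrl = `${absoluteUrl}/_api/search/query?querytext='<your KQL query>'`;

spHttpClient.get(searchUrl, SPHttpClient.configurations.v1)
  .then((response: SPHttpClientResponse) => response.json())
  .then((results) => {
    console.log(results); // relevant rows are typically under PrimaryQueryResult.RelevantResults
  });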

Should I return an array or data one by one in Mongoose

I have a simple questionnaire app that I created for iOS; whenever the user taps play, it sends a request to a Node.js/Express server.
After the user picks an answer, it moves on to the next question.
I'm not sure which method to use to fetch the question(s):
Fetch all the data at once and present it to the user, i.e. an array
Fetch the data one by one as the user progresses to the next question, i.e. one document per call
API examples
// Fetch all the data at once
app.get('/api/questions', (req, res, next) => {
  Question.find({}, (err, questions) => {
    res.json(questions);
  });
});

// Fetch the data one by one
app.get('/api/questions/:id', (req, res, next) => {
  Question.findOne({ _id: req.params.id }, (err, question) => {
    res.json(question);
  });
});
The problem with approach 1 is that, say there are 200 questions, wouldn't it be slow for MongoDB to fetch them all at once, and possibly slow over the network?
The problem with approach 2 is that I can't quite imagine how to do it: every question is independent, and triggering the next API call feels awkward unless there is a counter or level stored with the question in MongoDB.
Just for the sake of clarity, this is the question schema in Mongoose:
const mongoose = require('mongoose');
const Schema = mongoose.Schema;

const QuestionSchema = new Schema({
  question: String,
  choice_1: String,
  choice_2: String,
  choice_3: String,
  choice_4: String,
  answer: String
});
I'd use Dave's approach, but I'll go a bit more into detail here.
In your app, create an array that will contain the questions, and also store a value for which question the user is currently on, call it index for example. You then have the following pseudocode:
index = 0
questions = []
Now that you have this, as soon as the user starts up the app, load 10 questions (see Dave's answer; use MongoDB's skip and limit for this) and add them to the array. Serve questions[index] to your user. As soon as the index reaches 8 (= 9th question), load 10 more questions via your API and add them to the array. This way, you will always have questions available for the user (see the sketch below).
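A rough sketch of that prefetch logic (in JavaScript for brevity, since the real client is iOS; fetchQuestions, pageSize and display are assumptions):
let index = 0;
let questions = [];
let nextPage = 0;
const pageSize = 10;

// fetchQuestions(page, size) is assumed to call GET /api/questions?pageNumber=...&pageSize=...
async function showNextQuestion() {
  if (index >= questions.length - 2) {
    // User is near the end of the buffer: prefetch the next page.
    const more = await fetchQuestions(nextPage, pageSize);
    questions = questions.concat(more);
    nextPage += 1;
  }
  display(questions[index]); // display() is a placeholder for the UI code
  index += 1;
}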
Very good question. I guess the answer depends on your future plans for this app.
If you are planning to have 500 questions, getting them one by one will require 500 API calls, which is not always the best option. On the other hand, fetching all of them at once will delay the response, depending on the size of each object.
So my suggestion is to use pagination: bring back 10 results, and when the user reaches the 8th question, update the list with the next 10.
This is common practice among mobile developers, and it also gives you the flexibility to adapt the next questions based on the user's previous responses, like an adaptive test.
EDIT
You can add pageNumber and pageSize query parameters to your request for fetching questions from the server, something like this:
myquestionbank.com/api/questions?pageNumber=10&pageSize=2
Receive these parameters on the server:
var pageOptions = {
  pageNumber: req.query.pageNumber || 0,
  pageSize: req.query.pageSize || 10
}
and provide these additional parameters when querying your database:
Question.find()
  .skip(pageOptions.pageNumber * pageOptions.pageSize)
  .limit(pageOptions.pageSize)
  .exec(function (err, questions) {
    if (err) {
      res.status(500).json(err);
      return;
    }
    res.status(200).json(questions);
  })
Note: start your pageNumber at zero (0). It's not mandatory, but that's the convention.
The skip() method allows you to skip the first n results. In the first request, pageNumber is zero, so the product (pageOptions.pageNumber * pageOptions.pageSize) is zero and no records are skipped.
For the next request (pageNumber=1) the product is 10, so it skips the first 10 results, which were already processed.
limit() caps the number of records returned in the result.
Remember that you'll need to update the pageNumber variable with each request (you can vary the limit too, but it is advisable to keep it the same across requests).
So all you have to do is request 10 (pageSize) more questions from the server as soon as the user reaches the second-to-last question, and append them to your array.
code reference : here.
You're right; in my opinion too, the first option is one to avoid. Fetching that much data is wasteful if it may never be used.
What you can do is expose a new API call:
app.get('/api/questions/getOneRandom', (req, res, next) => {
  Question.count({}, function (err, count) {
    console.log("Number of questions:", count);
    // pick a random offset between 0 and count - 1
    var random = Math.floor(Math.random() * count);
    Question.find({}, {}, { limit: 1, skip: random }, (err, questions) => {
      res.json(questions);
    });
  });
});
The skip: random option makes sure that a random question is fetched each time. This is just the basic idea of how to fetch a random question from all of your questions; you can add further logic to make sure the user doesn't get a question they have already answered in previous steps.
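For example, a minimal sketch of that extra logic, assuming the client sends the IDs it has already answered as an answeredIds query parameter (an assumption, not part of the original answer):
// Sketch: exclude already-answered questions via $nin (answeredIds is assumed to
// arrive as a comma-separated list of ObjectId strings).
app.get('/api/questions/getOneRandom', (req, res, next) => {
  const answeredIds = (req.query.answeredIds || '').split(',').filter(Boolean);
  const filter = { _id: { $nin: answeredIds } };
  Question.count(filter, (err, count) => {
    const random = Math.floor(Math.random() * count);
    Question.find(filter, {}, { limit: 1, skip: random }, (err, questions) => {
      res.json(questions);
    });
  });
});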
Hope this helps :)
You can use the concept of limit and skip in MongoDB.
When you hit the API for the first time, use limit=20 and skip=0, and increase your skip count every time you call the API again.
1st time => limit=20, skip=0
when you click next => limit=20, skip=20, and so on
app.get('/api/questions', (req, res, next) => {
  Question.find({}, {}, { limit: 20, skip: 0 }, (err, questions) => {
    res.json(questions);
  });
});