TypeScript: how to handle async functions inside setTimeout()? - node.js

I have the following timed function to periodically refetch credentials from an external API in a typical movie-fetching IMDB-clone app:
// This variable I pass later to Apollo Server, below all this code.
let tokenToBeUsedLater: string;

// Fetch credentials and schedule a refetch before they expire
const fetchAndRefreshToken = async () => {
  try {
    // fetchCredentials() sends an HTTP request to the external API:
    const { access_token, expires_in } = await fetchCredentials() as Credentials;
    tokenToBeUsedLater = access_token;

    // The returned token expires, so the timeout below is meant to recursively
    // loop this function to refetch fresh credentials shortly before expiry.
    // This timeout should not stop the app's execution, so I can't await it.
    // Have also tried the following with just setTimeout() and
    // no `return new Promise()`, but it throws a second, identical error.
    return new Promise(resolve => setTimeout(() => {
      resolve(fetchAndRefreshToken());
    }, expires_in - 60000));
  } catch (err) {
    throw new Error(err);
  }
};
fetchAndRefreshToken(); // <-- TypeScript error here
I have tried rewriting it in a thousand ways, but no matter what I do I get a "Promises must be handled appropriately or explicitly marked as ignored with the 'void' operator" error.
I can get rid of the error by:
Using .then().catch() when calling fetchAndRefreshToken(). Not ideal, since I don't want to mix it with async and try/catch.
Putting a void operator ahead of fetchAndRefreshToken(). Feels like cheating (but see the sketch below).
Putting await ahead of fetchAndRefreshToken(). This essentially breaks my app (it pauses execution while waiting for the token to expire, so users in the frontend can't search for movies).
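For completeness, here is a minimal sketch of how that void-operator option could look (my own illustration, not a confirmed fix), with the error handled inside the function so the un-awaited promise can never reject unhandled:
let tokenToBeUsedLater: string;

const fetchAndRefreshToken = async (): Promise<void> => {
  try {
    const { access_token, expires_in } = await fetchCredentials() as Credentials;
    tokenToBeUsedLater = access_token;
    // Schedule the next refresh without awaiting it; the recursive call is
    // explicitly marked as intentionally un-awaited with `void`.
    setTimeout(() => {
      void fetchAndRefreshToken();
    }, expires_in - 60000);
  } catch (err) {
    // Handle (or at least log) the error here instead of re-throwing,
    // so the floating promise never rejects unhandled.
    console.error('Failed to refresh credentials', err);
  }
};

void fetchAndRefreshToken(); // explicitly marked as not awaited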
Any idea about how to solve this?
Also, any suggested resources/topics to study about this? Because I had a similar issue yesterday and I still can't figure this one out despite having already solved the other one. Cheers =)

Related

How can I prevent my ServiceWorker from intercepting cross-origin requests?

I am trying to clean up some code on my application https://git.sequentialread.com/forest/sequentialread-password-manager
I am using a ServiceWorker to enable the application to run offline -- however, I noticed that the ServiceWorker is intercepting cross-origin requests to backblazeb2.com. The app makes these cross origin requests as a part of its normal operation.
You can see here how I am registering the ServiceWorker:
navigator.serviceWorker.register('/serviceworker.js', {scope: "/"}).then(
(reg) => {
...
And inside the serviceworker.js code, I manually avoid caching any requests to backblazeb2.com:
...
return fetch(event.request).then(response => {
const url = new URL(event.request.url);
const isServerStorage = url.pathname.startsWith('/storage');
const isVersion = url.pathname == "/version";
const isBackblaze = url.host.includes('backblazeb2.com');
const isPut = event.request.method == "PUT";
if(!isServerStorage && !isVersion && !isBackblaze && !isPut) {
... // cache the response
However, this seems silly; I wish there were a way to limit the ServiceWorker to only intercept requests for the current origin.
I already tried inserting the origin into the scope property during registration, but this didn't work:
navigator.serviceWorker.register('/serviceworker.js', {scope: window.location.origin}).then(
(reg) => {
...
It was behaving the same way. I am assuming that perhaps this is because there are CORS headers present on the responses from backblazeb2.com, making those requests "technically" within the "scope" of the current origin ?
One idea I had, I could serve a permanent redirect from / to /static/index.html and then configure the serviceworker with a scope of /static, meaning it would only cache resources in that folder. But that seems like an ugly hack I should not have to do.
Is there a clean and "correct" way to do this??
As far as I can tell, the answer is: you can't do this. The ServiceWorker API won't let you.
Someone explained it to me as: "the scope for the serviceworker limits where FROM the requests can be intercepted, not where TO the requests can be intercepted." So in other words, if I register a serviceworker at /app/, then a script loaded from / or /foo/ will be able to make requests without them being intercepted.
It turns out that actually what I REALLY needed was to understand how service worker error handling works.
In the old version of my app, when the fetch() promise rejected, my code would return null.
return fetch(event.request).then(response => {
...blahblahblah...
}).catch( e => {
....
return null;
});
This was bad news bears, and it was causing me to want to skip the serviceworker. What I didn't understand was: it's not the serviceworker's fault per se, so much as the fact that my serviceworker did not handle errors correctly. So the solution was to handle errors better.
This is what I did. I introduced a new route on the server called /error that always returns the string "serviceworker request failed".
http.HandleFunc("/error", func(response http.ResponseWriter, request *http.Request) {
response.WriteHeader(200)
fmt.Fprint(response, "serviceworker request failed")
})
And then I made sure to cache that endpoint when the service worker is installed.
self.addEventListener('install', event => {
event.waitUntil(clients.get(event.clientId).then(client => {
return caches.open(cacheVersion).then(cache => {
return cache.addAll([
'/',
'/error',
....
Finally, when the serviceworker fetch() promise rejects, I fall back to returning the cached version.
return fetch(event.request).then(response => {
...blahblahblah...
}).catch( e => {
....
return caches.match('/error');
});
I got the idea from the MDN serviceworker example project, which does a similar thing and simply returns a cached image of Darth Vader if the fetch() promise rejects.
This allowed me to gracefully handle these errors and retry instead of silently failing. I simply had to make sure that my code does the right thing when it encounters an http response that matches the literal string "serviceworker request failed".
const requestFailedBytes = app.sjcl.codec.bytes.fromBits(app.sjcl.codec.utf8String.toBits("serviceworker request failed"));
...
var httpRequest = new XMLHttpRequest();
....
httpRequest.onloadend = () => {
...
if(app.cryptoService.uint8ArrayEquals(new Uint8Array(httpRequest.response), requestFailedBytes)) {
reject(false);
return
}
The fetch event in the service worker has a property on the request called mode. This property lets you check whether the mode was set to "cors".
Here is an example of how to skip CORS requests:
self.addEventListener('fetch', (e) => {
  // Only handle GET requests
  if (e.request.method !== 'GET') {
    return;
  }
  // Serve the cached shell for navigations
  if (e.request.mode === 'navigate') {
    e.respondWith(caches.match('index.html'));
    return;
  }
  // Let cross-origin (CORS) requests pass straight through to the network
  if (e.request.mode === 'cors') {
    return;
  }
  // fetchFile() is the answerer's own helper that produces the response
  const response = this.fetchFile(e);
  e.respondWith(response);
});
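If the goal is to leave all cross-origin requests alone (not only the ones with mode set to "cors"), a common alternative, sketched here as an assumption rather than taken from the answers above, is to compare the request's origin with the worker's own origin and bail out early:
self.addEventListener('fetch', (event) => {
  const requestUrl = new URL(event.request.url);

  // Ignore anything that isn't same-origin; because respondWith() is never
  // called for these requests, the browser handles them normally.
  if (requestUrl.origin !== self.location.origin) {
    return;
  }

  // Same-origin requests go through the usual cache-then-network logic
  event.respondWith(
    caches.match(event.request).then(cached => cached || fetch(event.request))
  );
});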

Slow promise chain

I'm fairly new to node.js and Promises in general, although I think I get the gist of how they are supposed to work (I've been forced to use ES5 for a looooong time). I also have little in-depth knowledge of Cloud Functions (GCF), though again I do understand at a high level how they work.
I'm using GCF for part of my app, which is meant to receive an HTTP request, translate it, and send it onward to another endpoint. I need to use promises, as there are occasions when the originating HTTP request has multiple 'messages' sent at once.
So, my function works in terms of making sure the messages are sent in the correct order, but the subsequent messages are sent onward very slowly (the logs suggest around a 20-second gap between them actually being sent onward).
I'm not entirely sure why that is happening; I would've expected less than a couple of seconds' difference. Maybe it's something to do with GCF and not my code? Or maybe it is my code? Either way, I'd like to know if there's something I can do to speed it up, especially since it's supposed to send the message onward to a user in Google Chat.
(Before anyone comments on why it's request.body.body, I don't have control over the format of the incoming request)
exports.onward = (request, response) => {
  response.status(200).send();
  let bodyArr = request.body.body;
  // Chain promises over the multiple messages sent at once, stored in bodyArr
  bodyArr.reduce(async (previous, next) => {
    await previous;
    return process(next);
  }, Promise.resolve());
};
function process(body) {
  return new Promise((resolve, reject) => {
    // Obtain JWT from Google
    let jwtClient = new google.auth.JWT(
      privatekey.client_email,
      null,
      privatekey.private_key,
      ['https://www.googleapis.com/auth/chat.bot']
    );
    // Authorise JWT, reject promise or continue as appropriate
    jwtClient.authorize((err, tokens) => {
      if (err) {
        console.error('Google OAuth failure ' + err);
        reject();
      } else {
        let payload = copyPayload();
        setValues(payload, body); // Other function which sets payload values
        axios.post(url, payload, {
          headers: {
            'Content-Type': 'application/json',
            'Accept': 'application/json',
            'Authorization': 'Bearer ' + tokens.access_token
          },
        })
        .then(response => {
          // HTTP 2xx response received
          resolve();
        })
        .catch(error => {
          // Something bad happened
          reject(error);
        });
      }
    });
  });
}
EDIT: After testing the same thing again, the delay has gone down a bit, to around 3-6 seconds between promises. Given that the code didn't change, I suspect it's something to do with GCF?
By doing
exports.onward = (request, response) => {
response.status(200).send();
let bodyArr = request.body.body;
// Any work
};
you are incorrectly managing the life cycle of your Cloud Function: as a matter of fact, by doing response.status(200).send(); you are indicating to the Cloud Functions platform that your function successfully reached its terminating condition or state and that, consequently, the platform can shut it down. See here in the doc for more explanations.
Since you send this signal at the beginning of your Cloud Function, the platform may shut the function down before the asynchronous work is finished.
In addition, you are potentially generating some "erratic" behavior of the Cloud Function that makes it difficult to debug. Sometimes your Cloud Function is terminated before the asynchronous work is completed, for the reason explained above; but at other times, the platform does not terminate the function immediately and the asynchronous work has a chance to complete before the function is shut down.
So, you should send the response after all the work is completed.
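For example, a minimal rework of the handler above (a sketch only; it keeps the OP's process() unchanged) that awaits the whole chain before responding:
exports.onward = async (request, response) => {
  const bodyArr = request.body.body;
  // Process the messages strictly in order
  for (const body of bodyArr) {
    await process(body);
  }
  // Only signal completion once all the asynchronous work has finished
  response.status(200).send();
};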
If you want to immediately acknowledge the user that the work has been started, without waiting for it to be completed, you should use Pub/Sub: in your main Cloud Function, delegate the work to a Pub/Sub-triggered Cloud Function and then return the response.
If you want to acknowledge the user when the work is completed (i.e. when the Pub/Sub-triggered Cloud Function has finished), there are several options: send a notification, send an email, or write to a Firestore document that you watch from your app.
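A rough sketch of that Pub/Sub delegation (the topic name, message shape, and function names here are assumptions, not part of the original answer):
const { PubSub } = require('@google-cloud/pubsub');
const pubsub = new PubSub();

// HTTP-triggered function: hand the work off and acknowledge immediately
exports.onward = async (request, response) => {
  await pubsub.topic('onward-messages').publishMessage({ json: request.body.body });
  response.status(200).send();
};

// Pub/Sub-triggered function: does the slow work outside the HTTP request
exports.onwardWorker = async (message) => {
  const bodyArr = JSON.parse(Buffer.from(message.data, 'base64').toString());
  for (const body of bodyArr) {
    await process(body); // the OP's existing process()
  }
};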

Firebase cloud functions throws timeout exception but standalone works fine

I am trying to call a third party API using my Firebase cloud functions. I have billing enabled and all my other function are working fine.
However, I have one method that throws a Timeout exception when it tries to call the third-party API. The interesting thing is, when I run the same method from a standalone Node.js file, it works fine. But when I deploy it to Firebase or start the function locally, it shows a timeout error.
Following is my function:
exports.fetchDemo = functions.https.onRequest(async (req, response) =>
{
var res = {};
res.started = true;
await myMethod();
res.ended = true;
response.status(200).json({ data: res });
});
async function myMethod() {
var url = 'my third party URL';
console.log('Line 1');
const res = await fetch(url);
console.log('Line 2'); // never prints when run with cloud functions
var data = await res.text();
console.log(`Line 3: ${data}`);
}
Just now I also noticed that when I hit the same URL in the browser, it gives the following exception. So it seems to work only from the standalone Node script.
<errorDTO>
<code>INTERNAL_SERVER_ERROR</code>
<uid>c0bb83ab-233c-4fe4-9a9e-3f10063e129d</uid>
</errorDTO>
Any help will be appreciated...
It turned out that one of my colleagues had written a new function named fetch. I was not aware of it, so when my method called fetch, it was actually calling his function further down the file instead of the HTTP client I intended. I had just pulled the latest changes and did not notice he had added this function.
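In other words, the call was being shadowed. A hypothetical illustration (all names invented) of how a same-file function declaration captures every fetch call in the module:
async function myMethod() {
  // Intended to hit the third-party API with the real fetch...
  const res = await fetch('https://thirdparty.example.com/data');
  return res.text();
}

// ...but a helper with the same name was later added to the same file.
// Function declarations are hoisted, so every fetch() call in this module
// now resolves to this function instead of the intended HTTP client.
async function fetch(url: string): Promise<{ text: () => Promise<string> }> {
  // Completely different behaviour, e.g. returning a stub
  return { text: async () => `stub for ${url}` };
}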

How to pass arguments from Dart to Cloud Functions written in TypeScript, and why do I get error code UNAUTHENTICATED?

Dart function (passing token to sendToDevice):
Future<void> _sendNotification() async {
CloudFunctions functions = CloudFunctions.instance;
HttpsCallable callable = functions.getHttpsCallable(functionName: "sendToDevice");
callable.call({
'token': await FirebaseMessaging().getToken(),
});
}
The index.ts file where I have defined the sendToDevice method:
import * as functions from 'firebase-functions';
import * as admin from 'firebase-admin';
admin.initializeApp();
const fcm = admin.messaging();
export const sendToDevice = functions.firestore
.document('users/uid')
.onCreate(async snapshot => {
const payload: admin.messaging.MessagingPayload = {
notification: {
title: 'Dummy title',
body: `Dummy body`,
click_action: 'FLUTTER_NOTIFICATION_CLICK'
}
};
return fcm.sendToDevice(tokens, payload); // how to get tokens here passed from above function?
}
);
Questions:
How can I receive tokens passed from my Dart function _sendNotification to Typescript's sendToDevice function.
When I was directly passing tokens inside index.ts file, I was getting this exception:
[ERROR:flutter/lib/ui/ui_dart_state.cc(157)] Unhandled Exception: PlatformException(functionsError, Cloud function failed with exception., {code: UNAUTHENTICATED, details: null, message: UNAUTHENTICATED})
Can anyone please explain if I am supposed to authenticate something here? The command firebase login shows I am already signed in. I am very new to Typescript so please bear with these stupid questions.
Your Flutter-side code seems right; what's wrong is the Cloud Function.
The sendToDevice function is not a callable function. It is a Cloud Firestore trigger, which is only meant to be invoked automatically whenever a document matching users/{uid} is created.
Instead, you'll want to create a callable function; see below:
export const sendToDevice = functions.https
.onCall(async (data) => {
const { token } = data; // Data is what you'd send from callable.call
const payload: admin.messaging.MessagingPayload = {
notification: {
title: 'Dummy title',
body: `Dummy body`,
click_action: 'FLUTTER_NOTIFICATION_CLICK'
}
};
return fcm.sendToDevice(token, payload);
}
);
You have created a database trigger; what you should do instead is create a callable function, as shown below:
exports.sendToDevice = functions.https.onCall(async (data, context) => {
const payload: admin.messaging.MessagingPayload = {
notification: {
title: 'Dummy title',
body: `Dummy body`,
click_action: 'FLUTTER_NOTIFICATION_CLICK'
}
};
return await fcm.sendToDevice(data.token, payload);
});
There are a few things to mention here:
1st: The function used in getHttpsCallable must be an HTTPS-triggered function (reference here). Here we have a function triggered by a Firestore document create, so it won't work.
2nd: Your function does not declare any parameters, yet you call it with parameters. If you need an example of calling a Cloud Function with parameters, you can find one on pub.dev.
3rd: I don't have the possibility to play with it at the moment, but I think that if you implement an HTTPS-triggered function with a token parameter, you should be able to pass this parameter.
I hope this helps!
UPDATE:
According to the docs, an HTTPS-triggered function has to be created with functions.https. There is a nice example in the doc. To a function triggered this way you can add a request body, in which you can pass the needed data.
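For instance, a sketch of that functions.https.onRequest variant (the token field name and the response body are assumptions; it also assumes admin.initializeApp() has already been called, as in the OP's index.ts):
import * as functions from 'firebase-functions';
import * as admin from 'firebase-admin';

export const sendToDeviceHttp = functions.https.onRequest(async (req, res) => {
  const token = req.body.token; // token passed in the request body
  const payload: admin.messaging.MessagingPayload = {
    notification: {
      title: 'Dummy title',
      body: 'Dummy body',
      click_action: 'FLUTTER_NOTIFICATION_CLICK'
    }
  };
  await admin.messaging().sendToDevice(token, payload);
  res.status(200).json({ ok: true });
});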
This answer might not solve your problem, but it will give you a few things to try, and you'll learn along the way. Unfortunately I wasn't able to get the callable HTTPS function working with the emulator. I'll probably submit a GitHub issue about it soon. The Flutter app just keeps getting different types of undecipherable errors depending on the local URL I try.
It's good that you've fixed one of the problems: you were using a document trigger (onCreate) instead of an HTTPS callable. But now you're running an HTTPS callable, and the Flutter app needs to communicate with your functions directly. In the future, you could run the Functions emulator locally and do a lot of console.log'ing to understand whether it actually gets triggered.
I have a few questions/ things you can try:
Is your user logged in the flutter app? FirebaseAuth.instance.currentUser() will tell you.
Does this problem happen on both iOS and android?
Add some logs to your typescript function, and redeploy. Read the latest logs through StackDriver or in terminal, firebase functions:log --only sendToDevice. (sendToDevice is your callable function name)
Are you deploying to the cloud and testing with the latest deployment of your functions? You can actually test with a local emulator. On Android, the URL is 10.0.2.2:5001, as shown below. You also need to run adb reverse tcp:5001 tcp:5001 in the terminal. If you're on the cloud, then firebase login doesn't matter; I think your functions should already have the credentials.
To call the emulator https callable:
HttpsCallable callable = CloudFunctions.instance
.useFunctionsEmulator(origin: "http://10.0.2.2:5001")
.getHttpsCallable(functionName: "sendToDevice");
And iOS you need to follow the solution here.
One mistake I spotted: you should at least do return await fcm.sendToDevice(), where you wait for the promise to resolve, because otherwise the Cloud Function runtime may terminate your function before it resolves. Alternatively, for debugging, instead of returning sendToDevice in your Cloud Function, you could have saved it into a variable and console.log'd it. You would see it's actually a promise (or a Future, in Dart's terminology) that hadn't actually resolved.
const messagingDevicesResponse: admin.messaging.MessagingDevicesResponse = await fcm.sendToDevice(
token,
payload
);
console.log({ messagingDevicesResponse });
return;
Make the function public
The problem is associated with credentials. You can change the security policy of the Cloud Function and check whether the problem is fixed. See how to manage permissions on Cloud Functions here.

Shopify API - Get all products (60k products) Got Request Time Out or socket hang up

I am trying to get all products, but I get a Request timed out error while trying to fetch 60k products for an inventory management app.
I am using Node.js to loop through 200 pages, each page limited to 250 products. I limit my calls to 2 requests every 10 seconds (1 request per 5 seconds).
Sometimes I get these errors on some pages, sometimes not:
read ECONNRESET
Request timed out
socket hang up
Could anyone please tell me what the problem is? I would appreciate your help.
for (var i = 1; i <= totalPage; i++) {
  var promise = shopify.product.list({ limit: limit, page: i, fields: fields })
    .then(products => {
      // do something here when the products list arrives
      // loop through each product then save to DB
      // ShopifyModel.updateOne(.....)
    }).catch(error => {
      // sometimes an error is fired here
    });
}
I also tried to rewrite a function to get the products of one page:
const request = require('request-promise');

var getProductOnePage = function (Url_Page, headers, cb) {
  request.get(Url_Page, { headers: headers, gzip: true })
    .then((ListProducts) => {
      console.log("Got products list of one page");
      cb(ListProducts);
    })
    .catch(err => {
      // Got all errors here when put into a for loop / map / forEach with Promise.all
      console.log("Error: Can't get products of 1 page: ", err.message);
    });
}
EDIT:
I found some problem similar to my case here:
https://github.com/request/request/issues/2047
https://github.com/twilio/twilio-node/issues/312
ECONNRESET and Request timed out errors are mostly due to network problems. Check if you have a stable internet connection.
If you're using the shopify-api-node package, then use the autoLimit property. It will take care of rate limiting.
eg:
const shopify = new Shopify({
shopName: shopName,
apiKey: api_key,
password: password,
autoLimit : { calls: 2, interval: 1000, bucketSize: 30 }
});
Edit: Instead of writing then/catch inside a for loop, use async/await. Whether or not you implement a request-and-wait approach, the for loop as written will fire all the requests at once; but if you use await, it will process one request at a time.
let getProducts = async () => {
  for (var i = 1; i <= totalPage; i++) {
    try {
      let products = await shopify.product.list({ limit: limit, page: i, fields: fields });
      if (!products.length) {
        // all products have been fetched
        break;
      }
      // do your stuff here
    } catch (error) {
      console.log(error);
    }
  }
}
You have to understand the concept of rate limiting. With any public API like Shopify's, you can only make so many calls before they put you on hold. So when you get a response back from Shopify, you can check the header for how many requests you have left. If it is zero, you'll get a 429 back if you try another request.
So when you get 0 credits left, or a 429 back, you can set yourself a little timeout and wait before making your next call.
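A rough sketch of that back-off idea (not the answerer's code; it assumes axios and Shopify's X-Shopify-Shop-Api-Call-Limit header, which reports usage as e.g. "39/40"):
import axios from 'axios';

const sleep = (ms: number) => new Promise(resolve => setTimeout(resolve, ms));

async function getWithBackoff(url: string, headers: Record<string, string>) {
  // Retry until the request succeeds or fails with a non-rate-limit error
  while (true) {
    try {
      const res = await axios.get(url, { headers });
      const callLimit = String(res.headers['x-shopify-shop-api-call-limit'] || '0/40');
      const [used, limit] = callLimit.split('/').map(Number);
      if (limit - used <= 1) {
        await sleep(1000); // bucket is nearly empty: pause so it can refill
      }
      return res.data;
    } catch (err: any) {
      if (err.response && err.response.status === 429) {
        await sleep(2000); // rate limited: wait, then retry
        continue;
      }
      throw err; // anything else is a real error
    }
  }
}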
If, on the other hand, as you say, you are only doing 2 calls every 10 seconds (it's not at all clear from your code how you do that, or why) and you're still getting timeouts, then your Internet connection to Shopify is probably the problem.

Resources