Cannot reach API in async method of class - node.js

This is a node.js question.
I am trying to use the Contentful API, which is a CMS service.
This works:
const contentful = require("contentful");
const client = contentful.createClient({
  space: SPACE_ID,
  accessToken: ACCESS_TOKEN
});
client.getEntry("27xsh9l8AWraqFQwICeaCn").then((doc) => {
  console.log(doc);
});
However, when I use the exact same code like this:
A.js
module.exports = class Contentful {
  constructor() {
    this.produceTemplate();
  }
  async produceTemplate() {
    console.log("Fetching Contentful");
    var doc = await client.getEntry("27xsh9l8AWraqFQwICeaCn");
    console.log("done");
  }
};
B.js
class Interpreter {
  constructor(user) {
    let contentful = new Contentful();
    // This essentially waits until the contentful object successfully
    // fetches the data and allocates it to a variable called RootTemplate
    if (contentful.RootTemplate == null || contentful.RootTemplate == undefined) {
      console.log("Waiting for Contentful service..");
      while (contentful.RootTemplate == null || contentful.RootTemplate == undefined) {}
    }
  }
}
When I run the code, the output is
  Fetching Contentful
So the promise
  var doc = await client.getEntry("27xsh9l8AWraqFQwICeaCn")
never resolves. It gets stuck.
Why is this happening? Is the class blocking me? Or is there something wrong with the "contentful" library?
Note: I construct the Interpreter from my index.js. I did not include it because I think it is irrelevant.
Note 2: I create the client object in A.js with the exact same SPACE_ID and ACCESS_TOKEN. I cut them out while posting here to make the code more readable.
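(Aside: the empty while loop in B.js busy-waits on Node's single event-loop thread; while it spins, no pending promise callback can ever run, so the await inside produceTemplate can never complete. A minimal non-blocking sketch of the same idea, assuming the same client setup as above (the buildInterpreter helper is hypothetical), would await the fetch instead of polling:

// Sketch: expose the fetch as a promise instead of polling a property.
class Contentful {
  produceTemplate() {
    console.log("Fetching Contentful");
    // return the promise so callers can await it
    return client.getEntry("27xsh9l8AWraqFQwICeaCn");
  }
}

// Hypothetical caller replacing the busy-wait in B.js:
async function buildInterpreter(user) {
  const contentful = new Contentful();
  const rootTemplate = await contentful.produceTemplate();
  console.log("done");
  // ...construct the Interpreter with rootTemplate here...
})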

Related

How do I pass parameters to Apify BasicCrawler handleRequestFunction?

I'm trying to migrate an existing function to use it inside an Apify actor.
Originally, the function loads a given URL, reads its JSON response, and, according to some supplied parameters, extracts some data and returns an object with the results.
In case you ask: it's not scraping anything "final" at this point. Its results are temporary and will be used to create other URLs, which will then be scraped (with another crawler) for the actual, useful results.
The current function that executes the crawler is something like this:
let url = new URL('/content', someBaseURL);
url.searchParams.set('search', someKeyword);
const reqList = new apify.RequestList({
  sources: [{ url: url.toString() }]
});
await reqList.initialize();
const crawler = new apify.BasicCrawler({
  requestList: reqList,
  handleRequestFunction: reqHandler
});
// How do I set the inputs for reqHandler() here?
await crawler.run();
// How do I get the output from reqHandler() here?
And the reqHandler code is something like this:
async function reqHandler(options) {
  const response = await apify.utils.requestAsBrowser({
    url: options.request.url
  });
  // How do I read parameters from the caller here?
  let searchResults = JSON.parse(response.body);
  // ... result object creation logic goes here ...
  // How do I return a result to the caller here?
}
I am pretty new to this Apify thing and lost in the documentation.
Thanks for your help.
handleRequestFunction doesn't take any external input or produce any output. Simply use it as a closure and capture inputs from the surrounding code, or wrap it in a different function.
Normally we do it like this:
const context = {}; // put your inputs here
const crawler = new apify.BasicCrawler({
  requestList: reqList,
  handleRequestFunction: async () => {
    // use context here
    // output data
    await Apify.pushData(results);
  }
});
EDIT: I forgot to mention a use-case on how to pass input. You need to do it via the request.userData object when adding to a queue or a list.
// The same userData is available in a request list.
await requestQueue.addRequest({
  url: 'https://example.com',
  userData: { myInput: 'any-data' }
});

// Then in handleRequestFunction
handleRequestFunction: async ({ request }) => {
  const { myInput } = request.userData;
  // ...
}
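As for getting output back after crawler.run(): results pushed with Apify.pushData() land in the default dataset, which can be read afterwards. A sketch, assuming the SDK version used above exposes Apify.openDataset():

await crawler.run();
// Read back everything handleRequestFunction pushed via Apify.pushData()
const dataset = await Apify.openDataset();
const { items } = await dataset.getData();
console.log(items);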

Firebase Functions and Express: listen to firestore data live

I have a website that runs its frontend on Firebase Hosting and its server, written using node.js and Express, on Firebase Functions.
What I want is to have redirect links on my website, so I can map, for example, mywebsite.com/youtube to my YouTube channel. The way I create these links is from my admin panel, adding them to my Firestore database.
My data is roughly something like this: each document has createdAt (Date), creatorEmail (string), name (string), and url (string) fields.
The first way I approached this was by querying my Firestore database on every request, but that is heavily expensive and slow.
Another way I tried was setting up some kind of background listener on the Firestore database, which would always provide up-to-date data. Unfortunately, that did not work, because Firebase Functions suspends the main function when the current request execution ends.
Lastly, and most conveniently, I configured an API route which is called from my admin panel whenever any change happens to the data, and saves the new data to a JSON file. This worked locally but not in production, because apparently Firebase Functions runs on a read-only file system, so files can't be edited after they are deployed. After some research I found out that Firebase Functions allows writing to the tmp directory, so I went forward with that and deployed it. But again, Firebase Functions was resetting the tmp folder when a request execution ended.
Here is my API request code, which updates the utm_data.json file in the tmp directory:
// my firestore provider
const db = require('../db');
const fs = require('fs');
const os = require('os');
const mkdirp = require('mkdirp');

const updateUrlsAPI = (req, res) => {
  // we wanna get the utm list from firestore, and update the file
  // tmp/utm_data.json
  // query data from firestore
  db.collection('utmLinks').get().then(async function(querySnapshot) {
    try {
      // get the path to `tmp` folder depending on
      // the os running this program
      let tmpFolderName = os.tmpdir();
      // create `tmp` directory if not exists
      await mkdirp(tmpFolderName);
      let docsData = querySnapshot.docs.map(doc => doc.data());
      let tmpFilePath = tmpFolderName + '/utm_data.json';
      let strData = JSON.stringify(docsData);
      fs.writeFileSync(tmpFilePath, strData);
      res.send('200');
    } catch (error) {
      console.log("error while updating utm_data.json: ", error);
      res.send(error);
    }
  });
}
And this is my code for reading the utm_data.json file on an incoming request:
const readUrlsFromJson = (req, res) => {
  var url = req.path.split('/');
  // the url will be in the format of: 'mywebsite.com/routeName'
  var routeName = url[1];
  try {
    // read the file ../tmp/utm_data.json
    // {
    //   'createdAt': Date
    //   'creatorEmail': string
    //   'name': string
    //   'url': string
    // }
    // our [routeName] should match [name] of the doc
    let tmpFolderName = os.tmpdir();
    let tmpFilePath = tmpFolderName + '/utm_data.json';
    // read links list file and assign it to the `utms` variable
    let utms = require(tmpFilePath);
    if (!utms || !utms.length) {
      return undefined;
    }
    // find the link matching the routeName
    let utm = utms.find(utm => utm.name == routeName);
    if (!utm) {
      return undefined;
    }
    // if we found the doc,
    // then we'll redirect to the url
    res.redirect(utm.url);
  } catch (error) {
    console.error(error);
    return undefined;
  }
}
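One caveat worth flagging (a side note, not from the original post): require() caches whatever it loads, so after the first request this handler keeps returning the utm_data.json contents from that first read, even after updateUrlsAPI rewrites the file. Reading the file explicitly avoids the cache:

// Sketch: bypass Node's require cache so fresh writes are picked up.
let utms = JSON.parse(fs.readFileSync(tmpFilePath, 'utf-8'));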
Is there something I am doing wrong, and if not, what is an optimal solution for this case?
You can initialize the Firestore listener in global scope. From the documentation:
The global scope in the function file, which is expected to contain the function definition, is executed on every cold start, but not if the instance has already been initialized.
This should keep the listener active even after the function's execution has completed, for as long as that specific instance keeps running (which should be around ~30 minutes). Try refactoring the code as shown below:
import * as functions from "firebase-functions";
import * as admin from "firebase-admin";

admin.initializeApp();

let listener = false;
// Store all utmLinks in global scope
let utmLinks: any[] = [];

const initListeners = () => {
  functions.logger.info("Initializing listeners");
  admin
    .firestore()
    .collection("utmLinks")
    .onSnapshot((snapshot) => {
      snapshot.docChanges().forEach(async (change) => {
        functions.logger.info(change.type, "document received");
        switch (change.type) {
          case "added":
            utmLinks.push({ id: change.doc.id, ...change.doc.data() });
            break;
          case "modified": {
            const index = utmLinks.findIndex(
              (link) => link.id === change.doc.id
            );
            utmLinks[index] = { id: change.doc.id, ...change.doc.data() };
            break;
          }
          case "removed":
            utmLinks = utmLinks.filter((link) => link.id !== change.doc.id);
            break;
          default:
            break;
        }
      });
    });
  return;
};

// The HTTPS function
export const helloWorld = functions.https.onRequest(
  async (request, response) => {
    if (!listener) {
      // Cold start, no listener active
      initListeners();
      listener = true;
    } else {
      functions.logger.info("Listeners already initialized");
    }
    response.send(JSON.stringify(utmLinks, null, 2));
  }
);
This example stores all UTM links in an array in global scope, which won't be persisted across new instances, but you won't have to query each link for every request. The onSnapshot() listener will keep utmLinks updated.
If you want to persist this data permanently and prevent querying on every cold start, then you can try using a Google Compute Engine instance, which keeps running, unlike Cloud Functions instances, which eventually time out.

Dynamic Slash Command Options List via Database Query?

Background:
I am building a discord bot that operates as a Dungeons & Dragons DM of sorts. We want to store game data in a database and, during the execution of certain commands, query data from said database for use in the game.
All of the connections between our Discord server, our VPS, and the VPS's backend are functional, and we are now implementing slash commands, since traditional ! commands are losing support in April.
We are running into problems making the slash commands, though. We want to set them up to be as efficient as possible, which means no hard-coded choices for options. We want to build those choice lists from data in the database.
The problem we are running into is that we can't figure out the proper way to implement the fetch to the database within the SlashCommandBuilder.
Here is what we currently have:
const { SlashCommandBuilder } = require('@discordjs/builders');
const fetch = require('node-fetch');
const { REST } = require('@discordjs/rest');
const test = require('../commonFunctions/test.js');

var options = async function getOptions() {
  let x = await test.getClasses();
  console.log(x);
  return ['test', 'test2'];
}

module.exports = {
  data: new SlashCommandBuilder()
    .setName('get-test-data')
    .setDescription('Return Class and Race data from database')
    .addStringOption(option => {
      option.setName('class')
        .setDescription('Select a class for your character')
        .setRequired(true)
      for (let op of options()) {
        //option.addChoice(op, op);
      }
      return option
    }
    ),
  async execute(interaction) {
  },
};
This code produces the following error when we start the npm script for our bot on our server:
  options is not a function or its return value is not iterable
I thought that maybe the function wasn't properly defined, so I replaced its contents with a simple array return; the npm script then started without errors, and the values I had passed showed up in the server.
This leads me to think that the function call in the module.exports block immediately attempts to get the return value of the function, and since the function is async, it isn't ready yet and returns undefined, a promise, or something else that is not iterable.
Is there a proper way to implement the code as shown? Or is this way too complex for discord.js to handle?
Is there a proper way to implement the idea at all? For instance, creating a JSON object containing the option data, built and saved to a file at some point before this command is registered, with the code above just pulling in that file for the option choices?
Alright, I found a way. Ian Malcolm would be proud (LMAO).
Here is what I had to do, for those with similar issues:
I had to basically re-write our entire application. It sucks, I know, but it works, so who cares?
When you run the index file for your npm script, make sure that you do the following things.
Note: you can structure this however you want; this is just how I set up my js files.
Set up a function that will prepare the data you need. It needs to be an async function, as does everything downstream from this point relating to the creation and registration of the slash commands.
Create a js file to act as your application setup "module". "Module" because we're faking a real module by just using the module.exports method. No package.jsons needed.
In the setup file you will need two requires. The first is an as-yet non-existent data manager file; we'll create that next. The second is a require for node:fs.
Create an async function in your setup file called setup and add it to your module.exports like so:
  module.exports = { setup }
In your async setup function, or in a function that it calls, make a call to the function in your still as-yet non-existent data manager file. Use await so that the application doesn't proceed until something is returned. Here is what mine looks like; note that I am writing my data to a file to read in later because of my use case, you may or may not have to do the same for yours:
async function setup() {
  console.log('test');
  //build option choice lists
  let listsBuilt = await buildChoiceLists();
  if (listsBuilt) {
    return true;
  } else {
    return false;
  }
}

async function buildChoiceLists() {
  let classListBuilt = await buildClassList();
  return true;
}

async function buildClassList() {
  let classData = await classDataManager.getClassData();
  console.log(classData);
  classList = classData;
  await writeFiles();
  return true;
}

async function writeFiles() {
  fs.writeFileSync('./CommandData/classList.json', JSON.stringify(classList));
}
Before we finish off this file: if you want to store anything as a property in this file and then get it later on, you can do so. In order for the data to be returned properly, though, you will need to define a getter function in your exports. Here is an example:
var classList;

module.exports = {
  getClassList: () => classList,
  setup
};
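To illustrate why the getter is needed (a hypothetical snippet, not from the original answer): module.exports is evaluated when the file is first required, so exporting classList directly would snapshot its value, which is still undefined at that point, whereas the getter reads the current value on every call:

const app = require('./appSetup.js');

app.setup().then(() => {
  // reads classList as it is now, after setup() has populated it
  console.log(app.getClassList());
});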
So, with everything above you should have something that looks like this:
const classDataManager = require('./DataManagers/ClassData.js');
const fs = require('node:fs');

var classList;

async function setup() {
  console.log('test');
  //build option choice lists
  let listsBuilt = await buildChoiceLists();
  if (listsBuilt) {
    return true;
  } else {
    return false;
  }
}

async function buildChoiceLists() {
  let classListBuilt = await buildClassList();
  return true;
}

async function buildClassList() {
  let classData = await classDataManager.getClassData();
  console.log(classData);
  classList = classData;
  await writeFiles();
  return true;
}

async function writeFiles() {
  fs.writeFileSync('./CommandData/classList.json', JSON.stringify(classList));
}

module.exports = {
  getClassList: () => classList,
  setup
};
Next, that pesky non-existent DataManager file. For mine, each data type will have its own; you might want to combine them all into a single .js file for yours.
Same with the folder name: I called mine DataManagers. If you're combining them all into one, you could just call the file DataManager and leave it in the same folder as your appSetup.js file.
For the data manager file, all we really need is a function to get our data and return it in the format we want. I am using node-fetch; if you are using some other module for data requests, write your code as needed.
Instead of explaining everything, here are the contents of my file; not much has to be explained:
const fetch = require('node-fetch');

async function getClassData() {
  return new Promise((resolve) => {
    let data = "action=GetTestData";
    fetch('http://xxx.xxx.xxx.xx/backend/characterHandler.php', {
      method: 'post',
      headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
      body: data
    }).then(response => {
      response.json().then(res => {
        let status = res.status;
        let clsData = res.classes;
        let rcData = res.races;
        if (status == "Success") {
          let text = '';
          let classes = [];
          let races = [];
          if (Object.keys(clsData).length > 0) {
            for (let key of Object.keys(clsData)) {
              let cls = clsData[key];
              classes.push({
                "name": key,
                "code": key.toLowerCase()
              });
            }
          }
          if (Object.keys(rcData).length > 0) {
            for (let key of Object.keys(rcData)) {
              let rc = rcData[key];
              races.push({
                "name": key,
                "desc": rc.Desc
              });
            }
          }
          resolve(classes);
        }
      });
    });
  });
}

module.exports = {
  getClassData
};
This file contacts our backend PHP and requests data from it. It queries the data, then returns it, formatted into a JSON structure for later use with the option choices for the slash command.
Once all of your appSetup and data manager files are complete, we still need to create the commands and register them with the server. So, in your index file, add something similar to the following:
async function getCommands() {
  let cmds = await comCreator.appSetup();
  console.log(cmds);
  client.commands = cmds;
}

getCommands();
This should go at or near the top of your index.js file. Note that comCreator refers to a file we haven't created yet; you can name this require const whatever you wish. That's it for this file.
Now, the "comCreator" file. I named mine deploy-commands.js, but you can name it whatever you like. Once again, here are the full file contents; I will explain anything that needs explaining afterwards:
const { Collection } = require('discord.js');
const { REST } = require('@discordjs/rest');
const { Routes } = require('discord-api-types/v9');
const app = require('./appSetup.js');
const fs = require('node:fs');
const config = require('./config.json');

async function appSetup() {
  console.log('test2');
  let setupDone = await app.setup();
  console.log(setupDone);
  console.log(app.getClassList());
  return new Promise((resolve) => {
    const cmds = [];
    const cmdFiles = fs.readdirSync('./commands').filter(f => f.endsWith('.js'));
    for (let file of cmdFiles) {
      let cmd = require('./commands/' + file);
      console.log(file + ' added to commands!');
      cmds.push(cmd.data.toJSON());
    }
    const rest = new REST({ version: '9' }).setToken(config.token);
    rest.put(Routes.applicationGuildCommands(config.clientId, config.guildId), { body: cmds })
      .then(() => console.log('Successfully registered application commands.'))
      .catch(console.error);
    let commands = new Collection();
    for (let file of cmdFiles) {
      let cmd = require('./commands/' + file);
      commands.set(cmd.data.name, cmd);
    }
    resolve(commands);
  });
}

module.exports = {
  appSetup
};
Most of this is boilerplate for slash command creation, though I did combine the creation and registering of the commands into the same process. As you can see, we grab our command files, process them into a Collection, register that collection, and then resolve the promise with that variable.
You might have noticed that this resolved collection is what was used to set the client commands in the index.js file earlier.
config.json just contains the connection details for your Discord server app.
Finally, here is how I accessed the data we wrote when building the SlashCommandBuilder:
data: new SlashCommandBuilder()
  .setName('get-test-data')
  .setDescription('Return Class and Race data from database')
  .addStringOption(option => {
    option.setName('class')
      .setDescription('Select a class for your character')
      .setRequired(true)
    // (fs must be required at the top of this command file)
    let ops = [];
    let data = fs.readFileSync('./CommandData/classList.json', 'utf-8');
    ops = JSON.parse(data);
    console.log('test data class options: ' + ops);
    for (let op of ops) {
      option.addChoice(op.name, op.code);
    }
    return option
  }
  ),
Hopefully this helps someone in the future!

Synchronous mongoose request

Is it possible to process a db.model.find() query inside of a function context and retrieve the result without using callbacks or promises with the mongoose library?
I need to be sure whether a user exists while the controller is running, so I can't shrink the current scope down to a callback, due to the large number of similar operations (for example, communication with the database). Also, I'm trying to follow the MVC model in my project, so I want to keep the helper libs (modules) in separate files. That's why I don't want to use any callbacks or promises: they would complicate everything even more than it already is.
For example, how should I rewrite the following code so it executes successfully (if that's actually possible)? (You can ignore the login model and controller; they are included to show how complicated things would get if this code were rewritten using callbacks.)
user.js lib
var db = require('./lib/db');

class User {
  constructor(id) { //get user by id
    var result = db.models.user.findOne({ _id: id }); //unsupported syntax in reality :(
    if (!result || result._id != id)
      return false;
    else {
      this.userInfo = result;
      return result;
    }
  }
}

module.exports = User;
login model
var user = require('./lib/user');
var model = {};

model.checkUserLogged(function(req) {
  if (!req.user.id || req.user.id == undefined)
    return false;
  if (!(this.user = new user(req.user.id)))
    return false;
  else
    return true;
});

module.exports = model;
login controller
var proxy = require('express').Router();

proxy.all('/login', function(req, res) {
  var model = require('./models/login');
  if (!model.checkUserLogged()) {
    console.log('User is not logged in!');
    res.render('unlogged', model);
  } else {
    console.log('User exists in database!');
    res.render('logged_in', model);
  }
});
Generator functions/yield, async/await (ES2017), and anything else can be used, as long as it solves the problem without nesting.
Thanks in advance.
There are two things wrong here:
Mongoose methods can't be called synchronously (and anyway, calling a DB synchronously is not a good idea at all).
Neither async/await nor generators can be used in the constructor of an ES6 class. It is explained in this answer.
If you don't want nested code, an easy option could be to use async/await (at the time of writing, available in Node.js behind a flag, not recommended for production). Since Mongoose methods return promises, they can be used with async/await.
But as I said, you can not do that in the constructor, so it has to be somewhere else.
As an example you could do something like this:
var proxy = require('express').Router();
var db = require('./lib/db');

proxy.all('/login', async function(req, res) {
  const result = await db.models.user.findOne({ _id: req.user.id }).exec();
  if (!result) {
    console.log('User is not logged in!');
    return res.render('unlogged');
  }
  res.render('logged_in');
});
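Alternatively, if you want to keep the logic in the User class, a common workaround (a sketch, not part of the original answer) is a static factory method: the async work happens in the static method, which then constructs the instance with the fetched data:

// Sketch: an async factory instead of an async constructor.
var db = require('./lib/db');

class User {
  constructor(userInfo) {
    this.userInfo = userInfo;
  }
  static async findById(id) {
    const result = await db.models.user.findOne({ _id: id }).exec();
    return result ? new User(result) : null;
  }
}

module.exports = User;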
Old question, but I want to share a method for handling this that I didn't see in my first couple of searches.
I want to get data from a model, run some logic, and return the results from that logic. I need a promise wrapper around my call to the model.
Below is a slightly abstracted function that takes a model to run a mongoose/mongo query on, plus a couple of params to help it do some logic. It then resolves with the value expected by the promise, or rejects.
export function promiseFunction(aField: string, aValue, model: Model<ADocument, {}>): Promise<aType> {
  return new Promise<aType>((resolve, reject) => {
    model.findOne({ [aField]: aValue }, (err, theDocument) => {
      if (err) {
        reject(err.toString());
      } else {
        if (theDocument.someCheck === true) {
          // resolve, not a bare return: returning would leave the promise pending forever
          resolve(theDocument.matchingTypeField);
        } else {
          reject("there was an error of some type");
        }
      }
    });
  });
}
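A usage sketch (the model and field names are hypothetical), to be run inside an async function:

// Resolves with the matching field, or rejects with an error string.
const value = await promiseFunction("email", "user@example.com", UserModel);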

How to translate Kafka pub-sub semantics into a peekNext promise semantics for unit testing in NodeJS

While unit testing my NodeJS application, I'm trying to create a simple helper class that translates the Kafka pub-sub semantics into a simpler API suited for unit testing.
My idea is to be able to write mocha unit tests like this:
const testSubscriber = kafkaTestHelper.getTestSubscriber({ topic: 'test' });
return someKafkaProducer.sendAsync({ topic: 'test', message: randomWord })
  .then(() =>
    testSubscriber.next()
  ).then(msg => {
    msg.should.equal(randomWord);
  });
Of course I would also add helper methods such as
  testSubscriber.nextUntil(someFilter)
This is inspired by the AKKA.NET TestKit, which has a similar approach.
I have two questions:
Is this a reasonable approach, or is there a cleaner way to unit test application logic based on Kafka stream processing in NodeJS?
Can anybody post coding examples showing how to make testSubscriber work as I intend?
This might not be the most elegant solution, but it seems to work, at least in my initial testing. The trick is to create an ever-growing list of Promises whose resolver functions are kept by reference in an array called 'resolvers'. When a message comes in, the corresponding resolver is invoked with it. This way I can return promises to any unit test invoking next(), and it works transparently whether the message has already been delivered or will be delivered in the future.
I still feel like I'm reinventing the wheel here, so any comments would be greatly appreciated.
function TestSubscriber(consumer, initialMessageFilter) {
  this.consumer = consumer;
  let promiseBuffer = [];
  let resolvers = [];
  let resolveCounter = 0;
  let isStarted = false;
  const ensurePromiseBuffer = function() {
    if (promiseBuffer.length === 0 || resolveCounter >= resolvers.length) {
      const newPromise = new Promise(function(resolve, reject) {
        resolvers.push(resolve);
      });
      promiseBuffer.push(newPromise);
    }
  }
  const that = this;
  this.consumer.on('message', function(message) {
    if (!isStarted) {
      //Determine if we should start now.
      isStarted = initialMessageFilter === undefined || initialMessageFilter(message);
    }
    if (isStarted) {
      ensurePromiseBuffer();
      const resolver = resolvers[resolveCounter];
      resolver(message);
      resolveCounter++;
      that.consumer.commit(function(err, data) {
        if (err) {
          //Just log any errors here as we are running inside a unittest
          log.warn(err);
        }
      });
    }
  });
  this.next = function() {
    ensurePromiseBuffer();
    return promiseBuffer.shift();
  };
}
const cache = {};

module.exports = {
  getTestSubscriber: function({ topic }, initialMessageFilter) {
    if (!cache[topic]) {
      const consumer = kafka.getConsumer({ topic, groupId: GROUP_ID });
      cache[topic] = new TestSubscriber(consumer, initialMessageFilter);
    }
    return cache[topic];
  }
}
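The nextUntil(someFilter) helper mentioned in the question is not implemented above, but it can be layered on top of next(). A sketch, assuming the TestSubscriber shown here:

// Sketch: keep consuming messages until one matches the filter.
TestSubscriber.prototype.nextUntil = async function(filter) {
  let message;
  do {
    message = await this.next();
  } while (!filter(message));
  return message;
};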
