How to pass variable on button click in Inline Keyboard in Telegraf? - node.js

I found this solution that helps solve my problem:
function makeMenu(clientId){
  let clientButton = [Markup.callbackButton('📗 ' + clientId + ' Information', 'info-' + clientId)]
  return Markup.inlineKeyboard([clientButton])
}
bot.action(/^info(-.+)?$/, ctx => {
  console.log(ctx.match[1].split('-')[1])
})
But this is a poor solution, really a workaround, because I need to pass a long list of parameters, and Telegram's API limits callback data strings to 64 bytes.

One solution to the issue you're facing would be to put the large data in a database, pass an ID (or a reference to that data) as the callback data, and use the code you've posted.
An example would be:
function makeMenu(clientId){
  const id = storeDataToDB('info-' + clientId) // store the large data in the DB here and get back a short key
  let clientButton = [Markup.callbackButton('📗 ' + clientId + ' Information', 'info-' + id)]
  return Markup.inlineKeyboard([clientButton])
}
bot.action(/^info(-.+)?$/, ctx => {
  const data = getDataFromDB(ctx.match[1].split('-')[1]) // fetch the data from the DB and continue..
  console.log(data)
})
You could use Firebase, MongoDB, or any other DB (just make sure the ID adheres to the 64-byte limit imposed by Telegram).
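For illustration, here is a minimal sketch of what those two helpers could look like, using an in-memory Map in place of a real database; storeDataToDB and getDataFromDB are the hypothetical names from the example above, not part of any library:
// Minimal sketch: an in-memory Map standing in for a real DB.
const store = new Map()
let nextId = 0

function storeDataToDB(payload){
  const id = String(nextId++) // short key, well under Telegram's 64-byte limit
  store.set(id, payload)
  return id
}

function getDataFromDB(id){
  return store.get(id)
}
In production you would swap the Map for Firebase, MongoDB, Redis, etc., but the shape stays the same: write the payload, hand back a short key.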

Related

Get the number of documents before and after a Firestore trigger - Cloud Functions

I'm trying to get the number of documents in a collection before and after a document has been added, using Cloud Functions. The Node.js code I wrote is this:
exports.onShowcaseCreated = functions.firestore
  .document("Show/{document}")
  .onCreate((snapshot, context) => {
    const showcaseDict = snapshot.data();
    const uid = showcaseDict.uid;
    return db.collection("Showcases").where("uid", "==", uid).get()
      .then((showsnap) => {
        const numberOfShowcaseBefore = showsnap.size.before;
        const numberOfShowcaseAfter = showsnap.size.after;
        console.log(numberOfShowcaseBefore, numberOfShowcaseAfter);
        if (numberOfShowcaseBefore == 0 && numberOfShowcaseAfter == 1) {
          return db.collection("Users").doc(uid).update({
            timestamp: admin.firestore.Timestamp.now(),
          });
        }
      });
  });
But the console logs undefined undefined, so it seems this is not the right approach for getting the number of documents before and after a document has been added.
The before and after properties are only defined on the Change object that onUpdate and onWrite triggers receive, as defined here. The onCreate trigger you're using receives a plain snapshot of the newly created document, so there are no before and after properties there.
Reading data from the database gives you a QuerySnapshot object as defined here. As you can see, the size on that QuerySnapshot is just a number and does not have before or after properties.
So there's no way to determine the size before the event triggered with your approach. Any query you run in the code runs after the event was triggered, so it will give you the size at that moment.
To implement this use-case I'd recommend storing a count of the number of relevant documents in the database itself, and then triggering a Cloud Function when that document changes. Inside the Cloud Function code you can then read the previous and new value of the size from the change document that is passed in.
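A minimal sketch of that counter approach, assuming a hypothetical counters/showcases document whose count field is kept up to date elsewhere; the document path and field names here are assumptions, not part of the original question:
exports.onShowcaseCountChange = functions.firestore
  .document("counters/showcases")
  .onUpdate((change, context) => {
    // On update triggers the argument really is a Change object,
    // so the previous and new values are both available.
    const countBefore = change.before.data().count;
    const countAfter = change.after.data().count;
    console.log(countBefore, countAfter);
    if (countBefore === 0 && countAfter === 1) {
      // react to the first showcase being created
    }
    return null;
  });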

Poor Performance Writing to Firebase Realtime Database from Google Cloud Function

I have been developing a game where, when a user submits data, the client writes some data to a Firebase Realtime Database. I then have a Google Cloud Function which is triggered onUpdate. That function checks the submissions from various players in a particular game and, if certain criteria are met, writes an update to the DB which causes the clients to move to the next round of the game.
This is all working; however, I have found performance to be quite poor.
I've added logging to the function and can see that the function takes anywhere from 2-10ms to complete, which is acceptable. The issue is that the update is often written anywhere from 10-30 SECONDS after the function has returned the update.
To determine this, my function obtains the UTC epoch timestamp from just before writing the update, and stores this as a key with the value being the Firebase server timestamp.
I then manually checked the two timestamps to arrive at the time between return and database write.
The strange thing is that I have another cloud function which is triggered by an HTTP request, and I found that the update from that function is typically written 0.5-2 seconds after the function calls the DB update() API.
The difference between these two functions, aside from how they are triggered, is how the data is written back to the DB.
The onUpdate() function writes data by returning:
return after.ref.update(updateToWrite);
Whereas the HTTP request function writes data by calling the update API:
dbRef.update({
  // object to write
});
I've provided a slightly stripped out version of my onUpdate function here (same structure but sanitised function names):
exports.onGameUpdate = functions.database.ref("/games/{gameId}")
  .onUpdate(async (snapshot, context) => {
    console.time("onGameUpdate");
    let id = context.params.gameId;
    let after = snapshot.after;
    let updatedSnapshot = snapshot.after.val();
    if (updatedSnapshot.serverShouldProcess && !isFinished) {
      processUpdate().then((res) => {
        // some logic to check res, and if criteria met, move on to promises to determine update data
        determineDataToWrite().then((updateToWrite) => {
          // get current time
          var d = new Date();
          let triggerTime = d.getTime();
          // I use triggerTime as the key, and firebase.database.ServerValue.TIMESTAMP as the value
          if (updateToWrite["timestamps"] !== null && updateToWrite["timestamps"] !== undefined) {
            let timestampsCopy = updateToWrite["timestamps"];
            timestampsCopy[triggerTime] = firebase.database.ServerValue.TIMESTAMP;
            updateToWrite["timestamps"][triggerTime] = firebase.database.ServerValue.TIMESTAMP;
          } else {
            let timestampsObj = {};
            timestampsObj[triggerTime] = firebase.database.ServerValue.TIMESTAMP;
            updateToWrite["timestamps"] = timestampsObj;
          }
          // write the update to the database
          return after.ref.update(updateToWrite);
        }).catch((error) => {
          // error handling
        });
        // this is just here because ES Linter complains if there's no return
        return null;
      }).catch((error) => {
        // error handling
      });
    }
  });
I'd appreciate any help! Thanks :)
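One hedged observation about the snippet above: the promise chain started by processUpdate() is never returned from the onUpdate handler, so the Functions runtime may consider the invocation finished before the write happens, and background work on a "finished" instance can be delayed severely. A minimal sketch of returning the chain, reusing the asker's hypothetical helpers and omitting the sanitised isFinished check for brevity:
exports.onGameUpdate = functions.database.ref("/games/{gameId}")
  .onUpdate((change, context) => {
    const after = change.after;
    if (!after.val().serverShouldProcess) {
      return null;
    }
    // Returning the chain keeps the function alive until the write completes.
    return processUpdate()
      .then(() => determineDataToWrite())
      .then((updateToWrite) => after.ref.update(updateToWrite));
  });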

How to use Redis and Sequelize together

I'm using Redis for caching and Sequelize as my ORM.
I want to cache every query as a key and its result as the value.
Let me give you an example of how I'm trying to do it.
Imagine a user requests all of the blogs they created; normally we would write something like this:
blogs.findAll({where:{author:req.params.id}})
When I want to cache something like this, I add an attribute named model (for this example, model would be equal to blog), then I stringify this object and use it as the key. This way I can easily create the key and check whether the user's response is cached or not. But I don't want to rewrite the Redis check and the query-or-cache decision for every request, and I have 2 models now, so I wrote this piece of code:
for (m in models) {
  models[m].myFindAll = function (options = {}) {
    return new Promise(async function (resolve, reject) {
      try {
        const key = Object.assign({}, options);
        key.model = m;
        key.method = "findAll";
        var result = await redis.get(JSON.stringify(key));
        if (result) {
          resolve(JSON.parse(result));
        }
        result = await models[m].findAll(options);
        redis.set(JSON.stringify(key), JSON.stringify(result));
        resolve(result);
      } catch (err) {
        reject(err);
      }
    });
  };
}
As you can see, I have an object named models that contains every model I have.
First I added User, and after that I added Blog.
My problem is that when I try to use the myFindAll function on the User model, it won't work, because it tries to set key.model with the value of the variable m, which will be Blog at run time. I solved it by passing the model name as an argument to my function, but I don't want it that way; I think there should be a better way, but I can't find it. Isn't there some way of accessing the right model through this object?
Another thing: I tried to use libraries like sequelize-redis-cache and ..., but I wanted to do it my own way and don't want to use those libraries.
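For what it's worth, a minimal sketch of one way around the late-binding problem described above: because the loop variable m is not block-scoped, every closure sees its final value, so declaring it with let (or reading the model's own name via this inside the method) fixes it. This is a sketch against the same hypothetical models object, not the asker's final code:
// Block-scope the loop variable so each closure captures its own m.
for (let m in models) {
  models[m].myFindAll = async function (options = {}) {
    const key = Object.assign({}, options);
    // `this` is the model the method was called on, so its name also works:
    key.model = this.name; // or simply m, now safely captured per iteration
    key.method = "findAll";
    const cached = await redis.get(JSON.stringify(key));
    if (cached) {
      return JSON.parse(cached); // return early on a cache hit
    }
    const result = await this.findAll(options);
    await redis.set(JSON.stringify(key), JSON.stringify(result));
    return result;
  };
}
Note the early return on a cache hit: the original version resolved the cached value but still fell through and queried the database anyway.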

Javascript trigger action for form fields created using PDFTron WebViewer FormBuilder UI

I am currently evaluating WebViewer version 5.2.8.
I need to set some JavaScript function/code as an action for triggers like the calculate trigger, format trigger, and keystroke trigger through the WebViewer UI.
Please help me with how to configure JavaScript code for a form field trigger in the WebViewer UI.
Thanks in advance,
Syed
Sorry for the late response!
You will have to create the UI components yourself that will take in the JavaScript code. You can do something similar to what the FormBuilder demo does with just HTML and JavaScript. However, it may be better to clone the open source UI and add your own components.
As for setting the action, I would recommend trying out version 6.0 instead as there is better support for widgets and form fields in that version. However, we are investigating a bug with the field actions that will throw an error on downloading the document. You should be able to use this code to get it working first:
docViewer.on('annotationsLoaded', () => {
  const annotations = annotManager.getAnnotationsList();
  annotations.forEach(annot => {
    const action = new instance.Actions.JavaScript({ javascript: 'alert("Hello World!")' });
    // C for Calculate, and F for Format
    annot.addAction('K', action);
  });
});
Once the bug has been dealt with, you should be able to download the document properly.
Otherwise, you will have to use the full API and that may be less than ideal. It would be a bit more complicated with the full API and I would not recommend it if the above feature will be fixed soon.
Let me know if this helps or if you need more information about using the full API to accomplish this!
EDIT
Here is the code to do it with the full API! Since the full API works at a low level and very closely to the PDF specification, it does take a lot more to make it work. You do still have to update the annotations with the code I provided before, which I will include again.
docViewer.on('documentLoaded', async () => {
  // This part requires the full API: https://www.pdftron.com/documentation/web/guides/full-api/setup/
  const doc = docViewer.getDocument();
  // Get document from worker
  const pdfDoc = await doc.getPDFDoc();
  const pageItr = await pdfDoc.getPageIterator();
  while (await pageItr.hasNext()) {
    const page = await pageItr.current();
    // Note: this is a PDF array, not a JS array
    const annots = await page.getAnnots();
    const numAnnots = await page.getNumAnnots();
    for (let i = 0; i < numAnnots; i++) {
      const annot = await annots.getAt(i);
      const subtypeDict = await annot.findObj('Subtype');
      const subtype = await subtypeDict.getName();
      let actions = await annot.findObj('AA');
      // Check to make sure the annot is of type Widget
      if (subtype === 'Widget') {
        // Create the additional actions dictionary if it does not exist
        if (!actions) {
          actions = await annot.putDict('AA');
        }
        let calculate = await actions.findObj('C');
        // Create the calculate action (C) if it does not exist
        if (!calculate) {
          calculate = await actions.putDict('C');
          await Promise.all([calculate.putName('S', 'JavaScript'), calculate.putString('JS', 'app.alert("Hello World!")')]);
        }
        // Repeat for keystroke (K) and format (F)
      }
    }
    await pageItr.next();
  }
});
docViewer.on('annotationsLoaded', () => {
  const annotations = annotManager.getAnnotationsList();
  annotations.forEach(annot => {
    const action = new instance.Actions.JavaScript({ javascript: 'app.alert("Hello World!")' });
    // K for Keystroke, and F for Format
    annot.addAction('C', action);
  });
});
You can probably put them together under the documentLoaded event but once the fix is ready, you can delete the part using the full API.

Google Cloud Datastore, how to query for more results

Straight and simple, I have the following function, using the Google Cloud Datastore Node.js API:
fetchAll(query, result = [], queryCursor = null) {
  this.debug(`datastoreService.fetchAll, queryCursor=${queryCursor}`);
  if (queryCursor !== null) {
    query.start(queryCursor);
  }
  return this.datastore.runQuery(query)
    .then((results) => {
      result = result.concat(results[0]);
      if (results[1].moreResults === _datastore.NO_MORE_RESULTS) {
        return result;
      } else {
        this.debug(`results[1] = `, results[1]);
        this.debug(`fetch next with queryCursor=${results[1].endCursor}`);
        return this.fetchAll(query, result, results[1].endCursor);
      }
    });
}
The Datastore API object is in the variable this.datastore.
The goal of this function is to fetch all results for a given query, notwithstanding any limits on the number of items returned per single runQuery call.
I have not yet found any definite hard limits imposed by the Datastore API on this, and the documentation seems somewhat opaque on this point, but I noticed that I always get
results[1] = { moreResults: 'MORE_RESULTS_AFTER_LIMIT' },
indicating that there are still more results to be fetched, while results[1].endCursor remains stuck on a constant value that is passed on again in each iteration.
So, given some simple query that I plug into this function, I just keep running the query iteratively, setting the query start cursor (by doing query.start(queryCursor);) to the endCursor obtained in the result of the previous query. My hope is, obviously, to obtain the next batch of results on each successive query in this iteration. But I always get the same value for results[1].endCursor. My question is: why?
Conceptually, I cannot see a difference from this example given in the Google documentation:
// By default, google-cloud-node will automatically paginate through all of
// the results that match a query. However, this sample implements manual
// pagination using limits and cursor tokens.
function runPageQuery (pageCursor) {
let query = datastore.createQuery('Task')
.limit(pageSize);
if (pageCursor) {
query = query.start(pageCursor);
}
return datastore.runQuery(query)
.then((results) => {
const entities = results[0];
const info = results[1];
if (info.moreResults !== Datastore.NO_MORE_RESULTS) {
// If there are more results to retrieve, the end cursor is
// automatically set on `info`. To get this value directly, access
// the `endCursor` property.
return runPageQuery(info.endCursor)
.then((results) => {
// Concatenate entities
results[0] = entities.concat(results[0]);
return results;
});
}
return [entities, info];
});
}
(except for the fact that I don't specify a limit on the size of the query result myself, which I have also tried by setting it to 1000, and which does not change anything.)
Why does my code run into this infinite loop, stuck on each step at the same "endCursor"? And how do I correct this?
Also, what is the hard limit on the number of results obtained per call of datastore.runQuery()? I have not found this information in the Google Datastore documentation thus far.
Thanks.
Looking at the API documentation for the Node.js client library for Datastore there is a section on that page titled "Paginating Records" that may help you. Here's a direct copy of the code snippet from the section:
var express = require('express');
var app = express();

var NUM_RESULTS_PER_PAGE = 15;

app.get('/contacts', function (req, res) {
  var query = datastore.createQuery('Contacts')
    .limit(NUM_RESULTS_PER_PAGE);
  if (req.query.nextPageCursor) {
    query.start(req.query.nextPageCursor);
  }
  datastore.runQuery(query, function (err, entities, info) {
    if (err) {
      // Error handling omitted.
      return;
    }
    // Respond to the front end with the contacts and the cursoring token
    // from the query we just ran.
    var frontEndResponse = {
      contacts: entities
    };
    // Check if more results may exist.
    if (info.moreResults !== datastore.NO_MORE_RESULTS) {
      frontEndResponse.nextPageCursor = info.endCursor;
    }
    res.render('contacts', frontEndResponse);
  });
});
Maybe you can try using one of the other syntax options (instead of Promises). The runQuery method can take a callback function as an argument, and that callback's parameters include explicit references to the entities array and the info object (which has the endCursor as a property).
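For instance, a hedged sketch of the asker's fetchAll rewritten against that callback signature, as a method on the same service class and with the same hypothetical this.datastore and this.debug members as above:
// Sketch using the callback form of runQuery instead of Promises.
fetchAll(query, result = [], queryCursor = null, done) {
  if (queryCursor !== null) {
    query.start(queryCursor);
  }
  this.datastore.runQuery(query, (err, entities, info) => {
    if (err) {
      return done(err);
    }
    result = result.concat(entities);
    if (info.moreResults === this.datastore.NO_MORE_RESULTS) {
      return done(null, result); // all pages fetched
    }
    this.debug(`fetch next with queryCursor=${info.endCursor}`);
    this.fetchAll(query, result, info.endCursor, done);
  });
}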
And there are limits and quotas imposed on calls to the Datastore API as well. Here are links to official documentation that address them in detail:
Limits
Quotas
