Connection to Postgres (AWS RDS) with Google Firebase Functions [duplicate] - node.js

I am trying to follow the "Optimizing Networking" guidance for Firebase Cloud Functions, like here, with TypeScript:
const http = require('http');
const functions = require('firebase-functions');

const agent = new http.Agent({keepAlive: true});

export const getXXX = functions.https.onRequest((request, response) => {
  const req = http.request({
    host: 'localhost',
    port: 443,
    path: '',
    method: 'GET',
    agent: agent,
  }, res => {
    let rawData = '';
    res.setEncoding('utf8');
    res.on('data', chunk => { rawData += chunk; });
    res.on('end', () => {
      response.status(200).send(`Data: ${rawData}`);
    });
  });
  req.on('error', e => {
    response.status(500).send(`Error: ${e.message}`);
  });
  req.end();
});
but I keep getting:
error: connect ECONNREFUSED 127.0.0.1:443
I am not very familiar with TypeScript and JavaScript, so please help me.
Another question: when does the res.on('data') handler get triggered?

Turns out I need to be on a paid plan in order to make external HTTP requests from inside my function.

You can't access anything on "localhost" (127.0.0.1) in Cloud Functions. I suspect you meant to put a different host in there. Also, make sure your project is on the Blaze plan to enable outgoing connections to services not fully controlled by Google.
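For illustration, here is a rough sketch of the same function pointed at an external HTTPS host instead (api.example.com is just a placeholder host, not something from the question):

const https = require('https');
const functions = require('firebase-functions');

// Keep-alive agent so the TCP connection can be reused across invocations
const agent = new https.Agent({keepAlive: true});

export const getXXX = functions.https.onRequest((request, response) => {
  const req = https.request({
    host: 'api.example.com', // placeholder: the real external host goes here
    port: 443,
    path: '/',
    method: 'GET',
    agent: agent,
  }, res => {
    let rawData = '';
    res.setEncoding('utf8');
    res.on('data', chunk => { rawData += chunk; });
    res.on('end', () => response.status(200).send(`Data: ${rawData}`));
  });
  req.on('error', e => response.status(500).send(`Error: ${e.message}`));
  req.end();
});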

You can run Cloud Functions on localhost. All you need to do is run a local emulator of the Cloud services, which Google has provided! It's a really awesome tool and a great setup!
Follow these steps for the Firebase tool suite: https://firebase.google.com/docs/functions/local-emulator
Follow these steps for the Cloud tool suite: https://cloud.google.com/functions/docs/emulator
They are pretty similar.
You do not need the Blaze plan; you can use the "pay as you go" plan, which includes the free-tier quota ("Free usage from Spark plan included*", https://firebase.google.com/pricing).

Related

AWS Lambda, NodeJS, pg-promise library - High latency > 10000ms

I have an AWS Lambda function set up using Node.js which calls a Postgres database via the pg-promise library to retrieve data, then sends HTTPS GET requests, etc.; none of the rest is important here.
I was using the 'pg' library originally but ran into connection-closing and async issues, which is why I switched to pg-promise (which DID fix my other problem!). With the regular pg library I was getting the expected latency of <1000ms.
I have boiled down and redacted the code to just a simple query, and I am getting these response times from the last 3 test runs: 11790.78 ms, 11232.22 ms, 12002.04 ms. Every single time it is over 10000ms...
EDIT: Fixed code
const pgp = require('pg-promise')();
const https = require('https');
const xmlParser = require('xml2js').Parser();

const client = pgp({
  database: process.env.DATABASE,
  host: process.env.HOST,
  port: process.env.PORT,
  user: process.env.USERNAME,
  password: process.env.PASSWORD
});

exports.handler = function(event, context, callback) {
  client.one("SELECT period FROM pay WHERE company='XXX' ORDER BY moddate DESC LIMIT 1;")
    .then(function(data) {
      callback(null, {
        "statusCode": 200,
        "headers": {
          'Content-Type': 'application/json'
        },
        "body": data.period
      });
    })
    .catch(function(error) {
      console.error(error);
    });
};
As stated, I was having no problems with latency when using the 'pg' library, so I know there is no problem with the lambda-RDS postgres connection.
Does anyone have any idea why this is?
Thanks,
In the end I figured out what the problem was... leaving pg-promise to automatically shut down the connection pool is what was causing the latency.
Chaining a
.finally(pgp.end);
after the .catch gave me a 200ms response time.
Thanks everyone
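For context, a rough sketch of the handler from the question with that chained on (assuming native promise .finally support, i.e. Node 10+):

exports.handler = function(event, context, callback) {
  client.one("SELECT period FROM pay WHERE company='XXX' ORDER BY moddate DESC LIMIT 1;")
    .then(function(data) {
      callback(null, {
        "statusCode": 200,
        "headers": { 'Content-Type': 'application/json' },
        "body": data.period
      });
    })
    .catch(function(error) {
      console.error(error);
      callback(error); // report the failure to Lambda as well
    })
    .finally(pgp.end); // shut the pool down explicitly instead of waiting for the idle timeout
};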

How to get HTML source of HTTPS website in Node

I have the following code snippet which works with Google, but I noticed that trying to reach websites like Amazon, which force HTTPS, returns a 301 (Moved Permanently). I think the problem may be that I'm using the http package, but the https package confuses me. If anyone could help me out, that would be stupendous.
var http = require('http');

var vars = {
  host: "www.google.com",
  port: 80,
  path: "/index.html"
};

http.get(vars, function(res) {
  console.log(res.statusCode);
  res.setEncoding("utf8");
  res.on("data", function(data) {
    console.log(data);
  });
});
You can just use https.get(). But, for https, you have to use a different port (443). I prefer to just pass in the URL and let the library handle the default port for me:
const https = require('https');

https.get("https://www.google.com/index.html", function(res) {
  console.log(res.statusCode);
  res.setEncoding('utf8');
  res.on('data', function(data) {
    console.log(data);
  });
}).on('error', function(err) {
  console.log(err);
});
This may deliver the body in multiple 'data' events, so if you want the whole response you have to combine the chunks yourself.
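A minimal sketch of collecting the whole body before using it (same https.get call as above):

const https = require('https');

https.get("https://www.google.com/index.html", function(res) {
  let body = '';
  res.setEncoding('utf8');
  res.on('data', function(chunk) {
    body += chunk; // accumulate each chunk
  });
  res.on('end', function() {
    console.log(body); // the complete response body
  });
}).on('error', function(err) {
  console.log(err);
});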
Personally, I prefer to use a higher level library that is promise-based and makes lots of things simpler:
const got = require('got');

got("https://www.google.com/index.html").then(result => {
  console.log(result);
}).catch(err => {
  console.log(err);
});
Among many other features, the got() library will automatically collect the whole response for you, uses promises, will follow redirects, will automatically parse JSON results, will check the status and provide an error for 4xx and 5xx statuses, supports lots of authentication means, etc... It's just easier to use than the plain http/https libraries.
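For example, if the endpoint returns JSON, newer versions of got (10 and later, if I remember correctly) let you ask for the parsed body directly; a rough sketch against a hypothetical JSON endpoint:

const got = require('got');

(async () => {
  try {
    // .json() resolves with the parsed response body instead of the response object
    const data = await got('https://api.example.com/items').json();
    console.log(data);
  } catch (err) {
    console.log(err);
  }
})();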

This is a general question about Express.js running on Node.js inside a Docker container and in the cloud

I have built two docker images. One with nginx that serves my angular web app and another with node.js that serves a basic express app. I have tried to access the express app from my browser in two different tabs at the same time.
In one tab the angular dev server (ng serve) serves up the web page. In the other tab the docker nginx container serves up the web page.
While accessing the node.js express app at the same time from both tabs, the data starts to mix and mingle, and the results returned to both tabs are a mishmash of the two requests (one from each browser tab)...
I'll try to make this simpler by showing my express app code here...but to answer this question you may not even need to know what the code is at all...so maybe check the question as stated below the code first.
'use strict';

/***********************************
GOOGLE GMAIL AND OAUTH SETUP
***********************************/
const fs = require('fs');
const {google} = require('googleapis');
const gmail = google.gmail('v1');

const clientSecretJson = JSON.parse(fs.readFileSync('./client_secret.json'));
const oauth2Client = new google.auth.OAuth2(
  clientSecretJson.web.client_id,
  clientSecretJson.web.client_secret,
  'https://us-central1-labelorganizer.cloudfunctions.net/oauth2callback'
);

/***********************************
EXPRESS WITH CORS SETUP
***********************************/
const PORT = 8000;
const HOST = '0.0.0.0';

const express = require('express');
const cors = require('cors');
const cookieParser = require('cookie-parser');
const bodyParser = require('body-parser');

const whiteList = [
  'http://localhost:4200',
  'http://localhost:80',
  'http://localhost',
];

const googleApi = express();
googleApi.use(
  cors({
    origin: whiteList
  }),
  cookieParser(),
  bodyParser()
);

function getPageOfThreads(pageToken, userId, labelIds) {
  return new Promise((resolve, reject) => {
    gmail.users.threads.list(
      {
        'auth': oauth2Client,
        'userId': userId,
        'labelIds': labelIds,
        'pageToken': pageToken
      },
      (error, response) => {
        if (error) {
          console.error(error);
          reject(error);
        }
        resolve(response.data);
      }
    );
  });
}

async function getPages(nextPageToken, userId, labelIds, result) {
  while (nextPageToken) {
    let pageOfThreads = await getPageOfThreads(nextPageToken, userId, labelIds);
    console.log(pageOfThreads.nextPageToken);
    pageOfThreads.threads.forEach((thread) => {
      result = result.concat(thread.id);
    });
    nextPageToken = pageOfThreads.nextPageToken;
  }
  return result;
}

googleApi.post('/threads', (req, res) => {
  console.log(req.body);
  let threadIds = [];
  oauth2Client.credentials = req.body.token;

  let getAllThreadIds = new Promise((resolve, reject) => {
    gmail.users.threads.list(
      { 'auth': oauth2Client, 'userId': 'me', 'maxResults': 500 },
      (err, response) => {
        if (err) {
          console.error(err);
          reject(err);
        }
        if (response.data.threads) {
          response.data.threads.forEach((thread) => {
            threadIds = threadIds.concat(thread.id);
          });
        }
        if (response.data.nextPageToken) {
          getPages(response.data.nextPageToken, 'me', ['INBOX'], threadIds).then(result => {
            resolve(result);
          }).catch((err) => {
            console.error(err);
            reject(err);
          });
        } else {
          resolve(threadIds);
        }
      }
    );
  });

  getAllThreadIds
    .then((result) => {
      res.send({ threadIds: result });
    })
    .catch((error) => {
      res.status(500).send({ error: 'Request failed with error: ' + error });
    });
});

googleApi.get('/', (req, res) => res.send('Hello World!'));

googleApi.listen(PORT, HOST);
console.log(`Running on http://${HOST}:${PORT}`);
The angular app makes a simple request to the express app and waits for the reply...which it properly receives...but when I try to make two requests at the exact same time, data starts to get mixed together and results are given back to each browser tab from different accounts...
...and the question is... When running containers in the cloud, is this kind of thing an issue? Does one need to spin up a new container for each client that wants to actively connect to the express service so that their data doesn't get mixed?
...or is this an issue I am seeing because the express app is being accessed locally from inside my machine? If two machines with two different IP addresses tried to access this express server at the same time, would this sort of data mixing still be an issue, or would each get back its own set of results?
Is this why people use CaaS instead of IaaS solutions?
FYI: this is demo code and the data will not actually be going back to the consumer directly...plans are to have it placed into a database and then re-extracted from the database to download all of the metadata headers for each email.
-Thank you for your time
I can only clear up a small part of this question:
When running containers in the cloud is this kind of thing an issue?
No. Docker is not causing any of the quirky behaviour that you are describing.
Does one need to spin up a new container for each client?
A docker container can generally serve as many users as the application inside of it can. So as long as your application can handle a lot of users (and it should), you don't have to start the same application in multiple containers. That said, when you expect a very large number of customers, there are docker tools like Docker Compose, Docker Swarm and a lot of alternatives that will enable you to scale up later. For now, you don't need to worry about this at all.
I think I may have found out the issue with my code...and this is actually very important if you are using the node.js googleapis client library...
It is entirely necessary to create a new oauth2Client for each request that comes in
const oauth2Client = new google.auth.OAuth2(
  clientSecretJson.web.client_id,
  clientSecretJson.web.client_secret,
  'https://us-central1-labelorganizer.cloudfunctions.net/oauth2callback'
);
Problem:
When this oauth2Client is shared it is shared by each and every person that connects at the same time...So it is necessary to create a new one each and every time a user connects to my /threads endpoint so that they do not share the same memory space (i.e. access_token etc.) while the processing is done.
Setting the client secret etc. and creating the oauth2Client just once at the top and then simply resetting the token for each request leads to the conflicts mentioned above.
Solution:
For now simply moving the creation of this oauth2Client into each and every request that comes in makes this work properly.
Each client that connects to the service NEEDS to have their own newly created oauth2Client instance or these types of conflicts will occur...
...it's kind of a no-brainer, but I still find it odd that there is nothing about this in the docs, and their own examples (https://github.com/googleapis/google-api-nodejs-client) seem to show only one instance being created for the whole app...but those examples are snippets, so...
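For reference, a rough sketch of the shape of that fix (not the full handler, just the part that moves):

googleApi.post('/threads', (req, res) => {
  // Create a fresh OAuth2 client per request so concurrent users never
  // share credentials/access tokens through module-level state.
  const oauth2Client = new google.auth.OAuth2(
    clientSecretJson.web.client_id,
    clientSecretJson.web.client_secret,
    'https://us-central1-labelorganizer.cloudfunctions.net/oauth2callback'
  );
  oauth2Client.credentials = req.body.token;

  // ...the rest of the /threads handler stays the same, but every
  // gmail.users.threads.list call must be passed this per-request client.
});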

Hitting connection limits with Azure functions and Azure SQL (node.js)

I have a function app that serves a node.js API. We are hitting the 900 concurrent connections limit with tedious connected to Azure SQL and realize we should add connection pools (unless there is a better recommendation of course).
Azure Functions + Azure SQL + Node.js and connection pooling between requests? seems to answer our prayers, but we wanted to validate how you can use a single connection pool with Azure Functions.
Is the best practice to put "let pool = new ConnectionPool(poolConfig, connectionConfig);" above module.exports in all functions? Is that not creating a new pool every time an individual function is called?
Microsoft doesn't have clear documentation on this for node.js unfortunately so any help would be greatly appreciated!
To make the whole Function app share one single pool, we need to put the initialization part in a shared module. Christiaan Westerbeek posted a wonderful solution using mssql; there's not much difference between a Function app and a web app in this respect.
I recommend using mssql (which uses tedious and generic-pool internally) instead of tedious-connection-pool, which doesn't seem to have been updated for two years.
Put the connection code in poolConfig.js under a SharedLib folder.
const sql = require('mssql');

const config = {
  pool: {
    max: 50 // default: 10
  },
  user: '',
  password: '',
  server: '',
  database: '',
  options: {
    encrypt: true // For Azure SQL
  }
};

const poolPromise = new sql.ConnectionPool(config).connect().then(pool => {
  console.log('Connected to MSSQL');
  return pool;
}).catch(err => console.log('Database Connection Failed! Bad Config: ', err));

module.exports = {
  sql, poolPromise
};
Then load the module wherever you need to connect to SQL. We use await to get the ConnectionPool, so the function should be async (the default for v2 JS functions).
const { poolPromise } = require('../SharedLib/poolConfig');

module.exports = async function (context, req) {
  var pool = await poolPromise;
  var result = await pool.request().query("");
  ...
}
Note that if the Function app is scaled out to multiple instances, a new pool will be created for each instance as well.

How do I create an Alexa Skill that gets data from an HTTP/HTTPS API (using "Alexa Skills Kit" for Node.js on AWS Lambda)

I want to create a skill for Amazon Alexa that - when triggered by voice commands - gets some information from an API via a HTTPS request and uses the result as spoken answer to the Alexa user.
There is a little challenge here (especially for node.js newbies) due to the event-driven concept of node.js and the internals of the Alexa Skills Kit for Node.js. And getting hold of parameters from the user isn't that easy, either.
Can somebody provide some sample code to start with?
Preliminaries
To get started you need an Amazon account, and you must enable AWS for the account.
Then there is a nice step-by-step guide on the Amazon Website: https://developer.amazon.com/edw/home.html#/skills
It walks you step by step through the process of creating a "skill". A skill is the ability for Alexa to answer questions in natural language.
During this process you also create a Lambda function (select one of the demo script applications to create, and you get all required libraries automatically).
You can then edit the code in the web UI of the AWS Console.
The "skill" is automatically enabled on all your personal Alexa devices, like my Amazon Echo Dot at home.
Remember that you can look at the console output in the CloudWatch section of the AWS console.
The two things I had to understand (and that others may run into, too)
While I created my first Alexa Skill I was new to node.js, Lambda and the Alexa Skills SDK. So I ran into a few problems. I'd like to document the solutions here for the next person who runs into the same problem.
When you make an http get request in node.js using https.get() you need to provide a handler for the end callback like res.on('end', function(res) {});
The answer is sent back from the Lambda script to the Alexa Service when you call this.emit(':tell', 'blabla'); (this is what is used in the samples from AWS). But in the end-handler "this" isn't the right "this" anymore, you need to store the handle beforehand (I am doing this a little crookedly using mythis, I am sure there are smarter solutions, but it works).
I would have easily saved two hours of debugging had I had the following code snippet. :-)
The code
In my sample the lambda script already gets preformatted text from the API. If you call an XML/JSON or whatever API, you need to work with the answer a bit more.
'use strict';

const Alexa = require('alexa-sdk');
var https = require('https');

const APP_ID = ''; // TODO replace with your app ID (OPTIONAL).

const handlers = {
  'functionwithoutdata': function() {
    var responseString = '';
    var mythis = this;
    https.get('**YOURURL**?**yourparameters**', (res) => {
      console.log('statusCode:', res.statusCode);
      console.log('headers:', res.headers);
      res.on('data', (d) => {
        responseString += d;
      });
      res.on('end', function(res) {
        const speechOutput = responseString;
        console.log('==> Answering: ', speechOutput);
        mythis.emit(':tell', 'The answer is ' + speechOutput);
      });
    }).on('error', (e) => {
      console.error(e);
    });
  },
  'functionwithdata': function() {
    var mydata = this.event.request.intent.slots.mydata.value;
    console.log('mydata:', mydata);
    var responseString = '';
    var mythis = this;
    https.get('**YOURURL**?**yourparameters**&mydata=' + mydata, (res) => {
      console.log('statusCode:', res.statusCode);
      console.log('headers:', res.headers);
      res.on('data', (d) => {
        responseString += d;
      });
      res.on('end', function(res) {
        const speechOutput = responseString;
        console.log('==> Answering: ', speechOutput);
        mythis.emit(':tell', 'The answer is ' + speechOutput);
      });
    }).on('error', (e) => {
      console.error(e);
    });
  }
};

exports.handler = (event, context) => {
  const alexa = Alexa.handler(event, context);
  alexa.APP_ID = APP_ID;
  alexa.registerHandlers(handlers);
  alexa.execute();
};
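As an aside, the mythis workaround can probably be avoided by using an arrow function for the 'end' handler as well, since arrow functions don't rebind this; a sketch of just the relevant handler:

const handlers = {
  'functionwithoutdata': function() {
    let responseString = '';
    https.get('**YOURURL**?**yourparameters**', (res) => {
      res.on('data', (d) => { responseString += d; });
      // Arrow function: 'this' is still the intent handler here,
      // so no mythis variable is needed.
      res.on('end', () => {
        this.emit(':tell', 'The answer is ' + responseString);
      });
    }).on('error', (e) => console.error(e));
  }
};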
How to create an Amazon Alexa bot from scratch?
If you're looking for a way to create an Alexa voice-enabled bot then you're in the right place!
Let's create an Amazon Alexa bot from scratch using a node server running on our localhost, tunneled through ngrok.
Sign up for an Amazon developer account, if you don’t have one
Go to Alexa developer page
Go to the Alexa console
Click on Create skill
Give a name to the skill, I have named mine TestSkill and click on Next
Choose a model to add to your skill, I’ve selected custom for my experiments
Click on Create skill
This way you reach the Alexa skill dashboard
Provide an invocation name, I’ve named it “give me pizza” and click on Save Model
Click on the Endpoint
Now we need to provide the endpoint to the Alexa console, but first we need to set up an endpoint.
Creating a node server
Create a server which can accept POST requests on default location i.e. “/”.
There are so many techniques for creating a server, I personally prefer node.
I assume that you’ve node, npm and Visual studio code already installed
For the specific requirements of this tutorial, we will create a Hello World node app following the steps below:
Run npm init on a terminal and when asked for the package name Alexa
Follow the terminal wizard
Run cd Alexa
Run npm i express http --save and this will add the following entry to the package.json file:
"dependencies": {
  "express": "4.16.3",
  "http": "0.0.0"
}
In the package.json file, set the value of the main key to index.js
Add a file index.js on same level
Add the following code to the index.js file:
const express = require('express');
const app = express();

app.post('/', (req, res) =>
  res.send({
    version: '1.0',
    response: {
      shouldEndSession: false,
      outputSpeech: {
        type: 'SSML',
        text: 'Hello World!',
        ssml: 'Hello World!'
      }
    }
  }));

app.listen(8080, () => console.log('Example app listening on port 8080!'));
Set value of scripts to { "start": "node index.js" }
Run npm start on the terminal
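After these steps the relevant parts of package.json should look roughly like this (a sketch; your exact version numbers may differ):

{
  "name": "alexa",
  "main": "index.js",
  "scripts": {
    "start": "node index.js"
  },
  "dependencies": {
    "express": "4.16.3",
    "http": "0.0.0"
  }
}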
Tunnel your localhost
Add a tunnel to localhost on port 8080 (where the node server is running) using ngrok, following the steps below:
Download and install ngrok, if not already done
Run ngrok http 8080 on a terminal
Copy the SSL enabled forwarded link, in my case https://6d0d6e60.ngrok.io is the SSL link
Provide the link in the Enter URL field
Select HTTPS, and under SSL certificate type drop down select the 2nd option:
My development endpoint is a sub-domain of a domain that has a wildcard certificate from a certificate authority
Click Save Endpoints
Click on JSON editor and provide the following model:
{
  "interactionModel": {
    "languageModel": {
      "invocationName": "get me pizza",
      "intents": [
        {
          "name": "PizzaOrderIntent",
          "slots": [],
          "samples": [
            "Give me a pizza"
          ]
        },
        {
          "name": "AMAZON.FallbackIntent",
          "samples": [
            "I didn't understand"
          ]
        },
        {
          "name": "AMAZON.CancelIntent",
          "samples": [
            "cancel plz"
          ]
        },
        {
          "name": "AMAZON.HelpIntent",
          "samples": [
            "help"
          ]
        },
        {
          "name": "AMAZON.StopIntent",
          "samples": [
            "stop"
          ]
        }
      ],
      "types": []
    }
  }
}
Click on Save Model and click on Build Model
Once the skill model is built, we need to test it. Click on the Test tab and toggle ON "Test is enabled for this skill".
That’s it, you’ve created an Alexa bot connected to your locally running node project.
Questions? Comments? Do reach me at info#nordible.com
This is sample code (not mine) that uses the Alexa SDK, doesn't need AWS Lambda, and works with just Express on a pure Node.js server:
https://github.com/Glogo/alexa-skill-sample-nodejs-express
