Azure Function connect Azure PostgreSQL ETIMEDOUT, errno: -4039 - node.js

I have an Azure (AZ) Function that does two things:
validate the submitted info, involving 3rd-party packages.
when OK, call a PostgreSQL function at AZ to fetch a small set of data.
Testing with Postman, this AF on localhost responds in < 40 ms. Deployed to the cloud, with the URL changed to AZ and the same set of data, it took 30 seconds and returned Status: 500 Internal Server Error.
A search turned up an SO post suggesting I might need to bump my subscription to the expensive plan to avoid cold starts.
But more investigation, running parts 1 and 2 individually and combined, found:
the validation part alone runs perfectly at AZ, response time < 40 ms, just like local, which suggests cold start / npm installation is not the issue.
the pg function call always takes long and returns status: 500, regardless of whether it runs alone or after part 1; no data is returned.
Application Insights is enabled and I added a Diagnostic setting with:
FunctionAppLogs and AllMetrics selected
Send to Log Analytics workspace and Stream to an event hub selected
The following queries found no errors/exceptions:
requests | order by timestamp desc | limit 100 // success is "true", time taken 30 seconds, status = 500
traces | order by timestamp desc | limit 30 // success is "true", time taken 30 seconds, status = 500
exceptions | limit 30 // no data returned
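Since the exceptions table is empty, one more place worth checking (an assumption on my part, in case outbound calls are being tracked) is the dependencies table for failed calls:
dependencies | where success == false | order by timestamp desc | limit 30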
How complicated is my pg call? It's a standard connection, simple and short:
require('dotenv').config({ path: './environment/PostgreSql.env'});
const fs = require("fs");
const pgp = require('pg-promise')(); // () = taking default initOptions
const db = pgp(
{
    user: process.env.PGuser,
    host: process.env.PGhost,
    database: process.env.PGdatabase,
    password: process.env.PGpassword,
    port: process.env.PGport,
    ssl:
    {
        rejectUnauthorized: true,
        ca: fs.readFileSync("./environment/DigiCertGlobalRootCA.crt.pem").toString(),
    },
}
);
const pgTest = (nothing) =>
{
    return new Promise((resolve, reject) =>
    {
        const sql = 'select * from schema.test()'; // test() does a select from a 2-row narrow table.
        db.any(sql)
            .then
            (
                good => resolve(good),
                bad => reject({ status: 555, body: bad })
            );
    });
};
module.exports = { pgTest }
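One thing that can make this easier to debug: a blocked connection only surfaces as ETIMEDOUT after the OS-level timeout (~30 seconds, matching what I saw). pg, which pg-promise wraps, accepts a connectionTimeoutMillis option, so a sketch like this (same env-based settings as above) fails fast instead:
const db = pgp({
    user: process.env.PGuser,
    host: process.env.PGhost,
    database: process.env.PGdatabase,
    password: process.env.PGpassword,
    port: process.env.PGport,
    connectionTimeoutMillis: 5000, // give up after 5 s instead of ~30 s
});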
AF test1 is a standard httpTrigger with anonymous access:
const x1 = require("package1");
...
const xx = require("packagex");
const pgdb = require("db");
module.exports = function (context, req)
{
    try
    {
        pgdb.pgTest(1)
            .then
            (
                good => { context.res = { body: good }; context.done(); },
                bad => { context.res = { body: bad }; context.done(); }
            )
            .catch(err => { console.log(err); context.done(); }); // complete the function even on unexpected errors
    }
    catch (e)
    { context.res = { body: e }; context.done(); }
}
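Side note: the outer try/catch cannot catch rejections from inside the promise chain. An async version of the same trigger (just a sketch of the equivalent logic) keeps the error paths in one place, and async functions complete without calling context.done():
module.exports = async function (context, req)
{
    try
    {
        const good = await pgdb.pgTest(1);
        context.res = { body: good };
    }
    catch (bad)
    {
        context.res = { status: 500, body: bad };
    }
};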
Note:
AZ = Azure.
AZ pg doesn't require SSL.
pg connectivity method: public access (allowed IP addresses)
Postman tests on local F5 runs go against the same AZ pg database, all in the same region.
pgAdmin and psql both run fast against the same database.
AF deployment is a zip-file deployment; my understanding is that it uses the same configuration.
I'm new to Azure, but based on my experience, if it were a credential problem the error would come back right away.
Update 1: FunctionAppLogs | where TimeGenerated between ( datetime(2022-01-21 16:33:20) .. datetime(2022-01-21 16:35:46) )
Is it because my pg network access is set to Public access?

My AZ pg DB is a flexible server; its current Networking is Public access (allowed IP addresses), and I had added some firewall rules with client IP addresses. My assumption was that access is allowed from within AZ, but it's not.
Solution 1: simply check the box Allow public access from any Azure service within Azure to this server at the bottom of Settings -> Networking.
Solution 2: find all of the AF's outbound IPs and add them to the firewall rules under Settings -> Networking (see the CLI sketch below). The reason to add them all is that Azure selects an outbound IP randomly.
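For Solution 2, the outbound IPs can be listed with the Azure CLI; a sketch, with the resource group, function app, and server names as placeholders:
az functionapp show --resource-group <rg> --name <func-app> --query outboundIpAddresses --output tsv
az functionapp show --resource-group <rg> --name <func-app> --query possibleOutboundIpAddresses --output tsv
az postgres flexible-server firewall-rule create --resource-group <rg> --name <pg-server> --rule-name af-outbound-1 --start-ip-address <ip> --end-ip-address <ip>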

Related

How to call cloud function nearest to the user

I have a cloud function like this which has been set to run in multiple regions.
export const cloudFunction = functions
.region(["asia-south1", "us-central1", "europe-west1", "southamerica-east1"])
.https.onCall(async (data, context) => {});
How can I call the cloud function region nearest to the user, from any client-side framework?
The best solution is to use an HTTPS Load Balancer and to create a serverless NEG with your Cloud Functions. The HTTPS Load Balancer deploys an anycast IP, i.e. an IP known in the different PoPs (Points of Presence) of Google, and routes the request to the closest location (from the PoP). It's native and out of the box, nothing to code.
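A sketch of that setup with gcloud (names and region are placeholders; one serverless NEG is created per region and attached to the load balancer's backend service):
gcloud compute network-endpoint-groups create cloudfunction-neg --region=us-central1 --network-endpoint-type=serverless --cloud-function-name=cloudFunction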
Without a load balancer, you'll have to find the closest region based on the user's timezone/location yourself and specify the region on the client side, since each Cloud Function has its own URL containing the region. For example, one way would be:
import { DateTime } from 'luxon' // DateTime.local().offset below comes from luxon

const getClosestGcpRegion = () => {
  const regions = ['asia-south1', 'us-central1', 'europe-west1', 'southamerica-east1']
  const regionOffsets = {
    'asia-south1': '+05:30',
    'us-central1': '-06:00',
    'europe-west1': '+01:00',
    'southamerica-east1': '-03:00',
  }
  let closestRegion = ''
  let closestDiff = Number.MAX_SAFE_INTEGER
  for (const region of regions) {
    const offset = regionOffsets[region].split(':')
    const offsetMinutes = Number(offset[0]) * 60 + Number(offset[1])
    const offsetDiff = Math.abs(DateTime.local().offset - offsetMinutes)
    if (offsetDiff < closestDiff) {
      closestDiff = offsetDiff
      closestRegion = region
    }
  }
  console.log({ closestRegion })
  return closestRegion
}
export const functions = getFunctions(app, getClosestGcpRegion())
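Callable functions are then invoked through that regional Functions instance, e.g. with the Firebase v9 modular SDK (inside an async context):
import { httpsCallable } from 'firebase/functions'

const callNearest = httpsCallable(functions, 'cloudFunction')
const result = await callNearest({ /* payload */ })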
Alternatively, also check out the Global external HTTP(S) load balancer with Cloud Functions, which can help you achieve the same goal.

Ensuring Azure keyvault secrets are loaded to config (node-config) at application startup

I have a NodeJS application that uses Node-Config (https://www.npmjs.com/package/config) to load application configurations. What I'm trying to do is load secrets from Azure Keyvault into the config during startup, and ensure these are available before they are required (e.g. for connecting to databases).
I have no problem connecting to and retrieving values from the Keyvault, but I am struggling with the non-blocking nature of JS. The application startup process continues before the config values have finished loading (asynchronously) into the config.
One strategy could be to delay application launch to await the keyvault secrets loading (see How to await in the main during start up in node?).
Another would be to not load them into Config but instead modify the code wherever secrets are used, loading them asynchronously via promises.
It seems like this would be a common problem, so I am hoping someone here can provide examples or a design pattern of the best way of ensuring remote keyvault secrets are loaded during startup.
Thanks in advance for suggestions.
Rod
I have now successfully resolved this question.
A key point to note is setting process.env['ALLOW_CONFIG_MUTATIONS']=true;
Configs are immutable by default (they can't be changed after being initially set). Since async is going to resolve these later, it's critical that you adjust this setting. Otherwise you will see the asynchronous configs obtaining correct values from the keystore, but when you check them with config.get they will not have been set. This really should be added to the documentation at https://github.com/node-config/node-config/wiki/Asynchronous-Configurations
My solution: first, let's create a module for the Azure keystore client - azure-keyvault.mjs:
import { DefaultAzureCredential } from '@azure/identity';
import { SecretClient } from '@azure/keyvault-secrets';
// https://learn.microsoft.com/en-us/azure/developer/javascript/how-to/with-web-app/use-secret-environment-variables
if (
!process.env.AZURE_TENANT_ID ||
!process.env.AZURE_CLIENT_ID ||
!process.env.AZURE_CLIENT_SECRET ||
!process.env.KEY_VAULT_NAME
) {
throw Error('azure-keyvault - required environment vars not configured');
}
const credential = new DefaultAzureCredential();
// Build the URL to reach your key vault
const url = `https://${process.env.KEY_VAULT_NAME}.vault.azure.net`;
// Create client to connect to service
const client = new SecretClient(url, credential);
export default client;
In the config (node-config) files:
process.env['ALLOW_CONFIG_MUTATIONS']=true;
const asyncConfig = require('config/async').asyncConfig;
const defer = require('config/defer').deferConfig;
const debug = require('debug')('app:config:default');
// example usage debug(`\`CASSANDRA_HOSTS\` environment variable is ${databaseHosts}`);
async function getSecret(secretName) {
const client = (await import('../azure/azure-keyvault.mjs')).default;
const secret = await client.getSecret(secretName);
// dev: debug(`Get Async config: ${secretName} : ${secret.value}`);
return secret.value
}
module.exports = {
//note: defer just calculates this config at the end of config generation
isProduction: defer(cfg => cfg.env === 'production'),
database: {
// use asyncConfig to obtain promise for secret
username: asyncConfig(getSecret('DATABASE-USERNAME')),
password: asyncConfig(getSecret('DATABASE-PASSWORD'))
},
...
}
Finally, modify the application startup to resolve the async configs BEFORE config.get is called.
server.js
const { resolveAsyncConfigs } = require('config/async');
const config = require('config');
const P = require('bluebird');
...
function initServer() {
return resolveAsyncConfigs(config).then(() => {
// if you want to confirm the async configs have loaded
// try outputting one of them to the console at this point
console.log('db username: ' + config.get("database.username"));
// now proceed with any operations that will require configs
const client = require('./init/database.js');
// continue with bootstrapping (whatever you code is)
// in our case let's proceed once the db is ready
return client.promiseToBeReady().then(function () {
return new P.Promise(_pBootstrap);
});
});
}
I hope this helps others wishing to use config/async with remote keystores such as Azure. Comments or improvements on the above are welcome.
~ Rod

How to connect to Google Cloud SQL (PostgreSQL) from Cloud Functions?

I feel like I've tried everything. I have a cloud function that I am trying to connect to Cloud SQL (PostgreSQL engine). Before I do so, I pull connection string info from Secret Manager, set that up in a credentials object, and call a pg (package) pool to run a database query.
Below is my code:
Credentials:
import { Pool } from 'pg';
const credentials: sqlCredentials = {
"host":"127.0.0.1",
"database":"myFirstDatabase",
"port":"5432",
"user":"postgres",
"password":"postgres1!"
}
const pool: Pool = new Pool(credentials);
await pool.query(`select CURRENT_DATE;`).catch(error => console.error(`error in pool.query: ${error}`));
Upon running the cloud function with this code, I get the following error:
error in pool.query: Error: connect ECONNREFUSED 127.0.0.1:5432
I have attempted to update the host to the private IP of the Cloud SQL instance, and also to the Cloud SQL instance name in this environment, but to no avail. Any other ideas?
Through much tribulation, I figured out the answer. Given that there is NO documentation on how to solve this, I'm going to put the answer here in hopes that it will help hundreds. In fact, I'm setting a reminder in my phone right now to check this URL on November 24, 2025.
Solution: The host must be set to the Cloud SQL Unix socket path (the instance connection name, which uses the project ID):
/cloudsql/<googleProjectId>:<region>:<sql instanceName>
Ending code:
import { Pool } from 'pg';
const credentials: sqlCredentials = {
"host":"/cloudsql/my-first-project-191923:us-east1:my-first-cloudsql-inst",
"database":"myFirstDatabase",
"port":"5432",
"user":"postgres",
"password":"postgres1!"
}
const pool: Pool = new Pool(credentials);
await pool.query(`select CURRENT_DATE;`).catch(error => console.error(`error in pool.query: ${error}`));
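For reference, the value after /cloudsql/ is the instance connection name, which can be read with gcloud (instance name is a placeholder):
gcloud sql instances describe <instance-name> --format='value(connectionName)'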

Calling CosmosDB server from Azure Cloud Function

I am working on an Azure Cloud Function (runs on Node.js) that should return a collection of documents from my Azure Cosmos DB for MongoDB API account. It all works fine when I build and run the function locally, but it fails when I deploy it to Azure. This is the error: MongoNetworkError: failed to connect to server [++++.mongo.cosmos.azure.com:++++] on first connect ...
I am new to CosmosDB and Azure Cloud Functions, so I am struggling to find the problem. I looked at the Firewall and virtual networks settings in the portal and tried out different variations of the connection string.
As it seems to work locally, I assume it could be a configuration setting in the portal. Can someone help me out?
1. Set up the connection
I used the primary connection string provided by the portal.
import * as mongoClient from 'mongodb';
import { cosmosConnectionStrings } from './credentials';
import { Context } from '@azure/functions';
// The MongoDB Node.js 3.0 driver requires encoding special characters in the Cosmos DB password.
const config = {
url: cosmosConnectionStrings.primary_connection_string_v1,
dbName: "****"
};
export async function createConnection(context: Context): Promise<any> {
let db: mongoClient.Db;
let connection: any;
try {
connection = await mongoClient.connect(config.url, {
useNewUrlParser: true,
ssl: true
});
context.log('Do we have a connection? ', connection.isConnected());
if (connection.isConnected()) {
db = connection.db(config.dbName);
context.log('Connected to: ', db.databaseName);
}
} catch (error) {
context.log(error);
context.log('Something went wrong');
}
return {
connection,
db
};
}
2. The main function
The main function that executes the query and returns the collection.
const httpTrigger: AzureFunction = async function (context: Context, req: HttpRequest): Promise<void> {
context.log('Get all projects function processed a request.');
try {
const { db, connection } = await createConnection(context);
if (db) {
const projects = db.collection('projects')
const res = await projects.find({})
const body = await res.toArray()
context.log('Response projects: ', body);
connection.close()
context.res = {
status: 200,
body
}
} else {
context.res = {
status: 400,
body: 'Could not connect to database'
};
}
} catch (error) {
context.log(error);
context.res = {
status: 400,
body: 'Internal server error'
};
}
};
I had another look at the firewall and private network settings and read the official documentation on configuring an IP firewall. By default, the current IP address of your local machine is added to the IP whitelist. That's why the function worked locally.
Based on the documentation I tried all the options described below. They all worked for me. However, it still remains unclear why I had to manually perform an action to make it work. I am also not sure which option is best.
Set Allow access from to All networks
All networks (including the internet) can access the database (obviously not advised).
Add the inbound and outbound IP addresses of the cloud function project to the whitelist. This could be challenging if the IP addresses change over time; if you are on the consumption plan this will probably happen.
Check the Accept connections from within public Azure datacenters option in the Exceptions section
If you access your Azure Cosmos DB account from services that don't provide a static IP (for example, Azure Stream Analytics and Azure Functions), you can still use the IP firewall to limit access. You can enable access from other sources within Azure by selecting the Accept connections from within Azure datacenters option.
This option configures the firewall to allow all requests from Azure, including requests from the subscriptions of other customers deployed in Azure. The list of IPs allowed by this option is wide, so it limits the effectiveness of a firewall policy. Use this option only if your requests don't originate from static IPs or subnets in virtual networks. Choosing this option automatically allows access from the Azure portal because the Azure portal is deployed in Azure.
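For reference, the datacenters exception corresponds to adding the special address 0.0.0.0 to the account's IP range filter, which can also be scripted with the Azure CLI (resource group and account names are placeholders; flag availability may depend on CLI version):
az cosmosdb update --resource-group <rg> --name <cosmos-account> --ip-range-filter "0.0.0.0"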

VM firewall rules update

Is there an API to update firewall rules using NodeJS? An example would be really appreciated.
Requirement: I have a list of around 1700 trusted CDN IPs that should be allowed to access a specific VM in GCP on port 80.
As I understand it, we can have a maximum of 256 source IPs per firewall rule, so I can create and update 8 of them, tagged with the same name.
Question: can we do it using the NodeJS API?
This API doesn't return firewall rules.
The equivalent CLI commands are as below:
gcloud compute firewall-rules describe alltraffic
gcloud compute firewall-rules update alltraffic --source-ranges="14.201.176.140/32","14.201.176.144/32"
gcloud compute firewall-rules create ramtest1 --allow="tcp:80" --description="ramtest1" --source-ranges="205.251.192.0/19","52.95.174.0/24" --target-tags="tcp-111"
https://cloud.google.com/sdk/gcloud/reference/compute/firewall-rules/
I don't see an update command in the nodejs api:
https://cloud.google.com/nodejs/docs/reference/compute/0.10.x/Firewall#create
https://cloud.google.com/nodejs/docs/reference/compute/0.10.x/Compute#createFirewall
exports.run_process = async (req, res) => {
    const Compute = require('@google-cloud/compute');
    const compute = new Compute();
    const network = compute.network('default');
    const firewalls = (await network.getFirewalls())[0];
    for (const firewall of firewalls) {
        // console.log('firewall == ' + JSON.stringify(firewall));
        console.log('firewall = ' + firewall.metadata.name);
        if (firewall.metadata.name === 'alltraffic') {
            console.log(' xxxxxxxxxxxxxxxxxxxx changing all traffic xxxxxxxxxxxxxx ');
        }
    }
    return res.status(200).send('ok');
};
The code above lists the firewall rules. NFI why it's called a firewall when in the console it's called firewall rules; it's so confusing.
You should use the setMetadata function to update a firewall rule. For example, take this nodejs snippet which reads and updates the description of a firewall rule:
async function doit() {
    const Compute = require('@google-cloud/compute');
    const compute = new Compute();
    const f = compute.firewall('default-allow-10000');
    f.get().then(data => {
        const firewall = data[0];
        console.log('initial description: ' + firewall.metadata.description);
        const metadata = {
            description: 'new description for this rule'
        };
        return firewall.setMetadata(metadata);
    }).then(data => {
        console.log('description set');
        return compute.firewall('default-allow-10000').get();
    }).then(data => {
        const firewall = data[0];
        console.log('current description: ' + firewall.metadata.description);
    });
}
doit();
In my example, this gives the output of:
initial description: old description
description set
current description: new description for this rule
To see what exists on the metadata object, you should look at the definition of the Firewall resource in the REST API.
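Applied to the original question, the same setMetadata pattern should work for sourceRanges; a minimal sketch (the rule name, the helper, and chunking the ~1700 CIDRs into batches of at most 256 are my assumptions):
async function updateSourceRanges(ruleName, cidrs) {
    const Compute = require('@google-cloud/compute');
    const compute = new Compute();
    // patch only the sourceRanges field of the existing rule
    const data = await compute.firewall(ruleName).setMetadata({ sourceRanges: cidrs });
    console.log(ruleName + ' now allows ' + cidrs.length + ' ranges');
    return data;
}
// one rule per chunk of up to 256 CIDRs, e.g.:
// updateSourceRanges('ramtest1', trustedIps.slice(0, 256));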
