I have a cloud function like this which has been set to run in multiple regions.
export const cloudFunction = functions
  .region("asia-south1", "us-central1", "europe-west1", "southamerica-east1")
  .https.onCall(async (data, context) => {});
How can I call the cloud function in the region nearest to the user, from any client-side framework?
The best solution is to use an HTTPS Load Balancer and to create a serverless NEG with your Cloud Functions. The HTTPS Load Balancer deploys an anycast IP, that is, a single IP advertised from Google's different PoPs (Points of Presence), and routes each request to the closest location (from the PoP). It's native and out of the box, nothing to code.
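To make that concrete, here is a sketch of the wiring with gcloud, assuming the function from the question; the NEG and backend names (cf-neg-us-central1, cf-backend) are placeholders of mine. You create one serverless NEG per region and attach them all to a single global backend service behind the load balancer:

gcloud compute network-endpoint-groups create cf-neg-us-central1 \
    --region=us-central1 \
    --network-endpoint-type=serverless \
    --cloud-function-name=cloudFunction

gcloud compute backend-services create cf-backend --global

gcloud compute backend-services add-backend cf-backend \
    --global \
    --network-endpoint-group=cf-neg-us-central1 \
    --network-endpoint-group-region=us-central1

Repeat the NEG creation and add-backend steps for each of the four regions, then point the load balancer's URL map at cf-backend; the anycast IP does the geographic routing for you.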
You'll have to find the closest region based on the user's timezone/location yourself and specify the region on the client side, since without a load balancer each Cloud Function has its own URL containing the region. For example, one way would be:
import { DateTime } from 'luxon' // DateTime.local() below comes from Luxon
import { getFunctions } from 'firebase/functions'

const getClosestGcpRegion = () => {
  const regions = ['asia-south1', 'us-central1', 'europe-west1']
  const regionOffsets = {
    'asia-south1': '+05:30',
    'us-central1': '-06:00',
    'europe-west1': '+01:00',
  }

  let closestRegion = ''
  let closestOffset = Number.MAX_SAFE_INTEGER

  for (const region of regions) {
    // Convert the region's UTC offset to minutes. The sign of the hours
    // part must also be applied to the minutes part, otherwise negative
    // offsets with non-zero minutes (e.g. '-03:30') come out wrong.
    const [hours, minutes] = regionOffsets[region].split(':')
    const sign = hours.startsWith('-') ? -1 : 1
    const offsetMinutes = Number(hours) * 60 + sign * Number(minutes)

    // Compare against the browser's current UTC offset (also in minutes).
    const offsetDiff = Math.abs(DateTime.local().offset - offsetMinutes)
    if (offsetDiff < closestOffset) {
      closestOffset = offsetDiff
      closestRegion = region
    }
  }

  return closestRegion
}

// `app` is your initialized Firebase app.
export const functions = getFunctions(app, getClosestGcpRegion())
Alternatively, also check out Global external HTTP(S) load balancer with Cloud Functions, which can help you achieve the same goal.
I want to make sure I'm thinking about Cloud Tasks right conceptually, and I'm not sure that I am.
The examples I've been looking at seem to trigger a cloud function first, which then schedules a task, which then calls a cloud function again.
(Or at least this is what I'm understanding; I could be wrong.)
I'd like to set up something so that when a user clicks a button, it schedules a cloud task for some time in the future (anywhere from 1 minute to an hour and half). The cloud task then triggers the cloud function to upload the payload to the db.
I tried to set this up client side but I've been getting the error "You need to pass auth instance to use gRPC-fallback client in browser or other non-Node.js environments."
I don't want the user to have to authenticate if that's what this is saying (not sure why I'd have to do that for my use case).
This is the code that gives that error.
const {CloudTasksClient} = require('@google-cloud/tasks');
const client = new CloudTasksClient();
// import { Plugins } from '@capacitor/core';
// const { RemotePlugin } = Plugins;
const scheduleTask = async (seconds) => {
  async function createHttpTask() {
    const project = 'spiral-productivity';
    const queue = 'spiral';
    const location = 'us-west2';
    const url = 'https://example.com/taskhandler';
    const payload = 'Hello, World!';
    const inSeconds = 5;

    // Construct the fully qualified queue name.
    const parent = client.queuePath(project, location, queue);

    const task = {
      httpRequest: {
        httpMethod: 'POST',
        url,
      },
    };

    if (payload) {
      task.httpRequest.body = Buffer.from(payload).toString('base64');
    }

    if (inSeconds) {
      // The time when the task is scheduled to be attempted.
      task.scheduleTime = {
        seconds: inSeconds + Date.now() / 1000,
      };
    }

    // Send create task request.
    console.log('Sending task:');
    console.log(task);
    const request = {parent: parent, task: task};
    const [response] = await client.createTask(request);
    console.log(`Created task ${response.name}`);
  }

  createHttpTask();
  // [END cloud_tasks_create_http_task]
}
More recently I set up a service account and downloaded a .json file and all of that. But doesn't this mean my users will have to authenticate?
That's why I stopped. Maybe I'm on the wrong track, but if anyone wants to answer what I need to do to schedule a cloud task from the client side without making the user authenticate, it would be a big help.
As always, I'm happy to improve the question if anything isn't clear. Just let me know, thanks!
Yes.
Your understanding is mostly accurate. Cloud Tasks is a way to queue "tasks". The examples are likely using Cloud Functions as a stand-in for "some app" (a web app), analogous to your Node.js (web) app; i.e. your Node.js app can submit tasks to Cloud Tasks. To access Google Cloud Platform services (e.g. Cloud Tasks), you need to authenticate and authorize.
Since your app is the "user" of the GCP services, you're correct in using a Service Account.
See Application Default Credentials to understand authenticating (code) as a service account.
Additionally, see Controlling access to webapps.
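To sketch the flow for the button-click use case: the browser calls your own backend endpoint, and that endpoint (running as a service account) enqueues the task. Below is a minimal, hedged example, assuming a callable Cloud Function named scheduleTask plus the project/queue/location values from the question; the payload/seconds field names are my own:

const functions = require('firebase-functions');
const { CloudTasksClient } = require('@google-cloud/tasks');

// Runs server-side, so it picks up Application Default Credentials;
// the browser never sees a service account key.
const client = new CloudTasksClient();

exports.scheduleTask = functions.https.onCall(async (data, context) => {
  const project = 'spiral-productivity'; // values from the question
  const location = 'us-west2';
  const queue = 'spiral';

  const parent = client.queuePath(project, location, queue);
  const task = {
    httpRequest: {
      httpMethod: 'POST',
      url: 'https://example.com/taskhandler', // your task-handler endpoint
      headers: { 'Content-Type': 'application/json' },
      body: Buffer.from(JSON.stringify(data.payload || {})).toString('base64'),
    },
    // Delay by data.seconds (1 minute to 1.5 hours in the use case).
    scheduleTime: { seconds: Math.floor(Date.now() / 1000) + (data.seconds || 60) },
  };

  const [response] = await client.createTask({ parent, task });
  return { name: response.name };
});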
I have an Azure (AZ) Function that does two things:
validate submitted info involving 3rd-party packages.
when OK, call a PostgreSQL function at AZ to fetch a small set of data.
Testing with Postman, this AF's localhost response time is < 40 ms. Deployed to the cloud, with the URL changed to AZ and the same set of data, it took 30 seconds and got Status: 500 Internal Server Error.
Did a search and thought this SO post might be the case, i.e. that I need to bump my subscription to the expensive one to avoid cold starts.
But more investigation, running parts 1 and 2 individually and combined, found:
the validation part alone runs perfectly at AZ, response time < 40 ms, just like local, which suggests cold start/npm installation is not the issue.
the pg function call always takes long and returns status 500, whether it runs alone or after part 1; no data returned.
Application Insights is enabled and I added a Diagnostic setting with:
FunctionAppLogs and AllMetrics selected
Send to Log Analytics workspace and Stream to an event hub selected
The following queries found no errors/exceptions:
requests | order by timestamp desc |limit 100 // success is "true", time taken 30 seconds, status = 500
traces | order by timestamp desc | limit 30 // success is "true", time taken 30 seconds, status = 500
exceptions | limit 30 // no data returned
How complicated is my pg call? Standard connection, simple and short:
require('dotenv').config({ path: './environment/PostgreSql.env'});
const fs = require("fs");
const pgp = require('pg-promise')(); // () = taking default initOptions
const db = pgp({
  user: process.env.PGuser,
  host: process.env.PGhost,
  database: process.env.PGdatabase,
  password: process.env.PGpassword,
  port: process.env.PGport,
  ssl: {
    rejectUnauthorized: true,
    ca: fs.readFileSync("./environment/DigiCertGlobalRootCA.crt.pem").toString(),
  },
});
const pgTest = (nothing) => {
  return new Promise((resolve, reject) => {
    var sql = 'select * from schema.test()'; // test() does a select from a 2-row narrow table.
    db.any(sql)
      .then(
        good => resolve(good),
        bad => reject({ status: 555, body: bad })
      );
  });
};

module.exports = { pgTest };
AF test1 is a standard httpTrigger with anonymous access:
const x1 = require("package1");
...
const xx = require("packagex");
const pgdb = require("db");

module.exports = function (context) {
  try {
    pgdb.pgTest(1)
      .then(
        good => { context.res = { body: good }; context.done(); },
        bad => { context.res = { body: bad }; context.done(); }
      )
      .catch(err => { console.log(err); });
  }
  catch (e) {
    context.res = { body: e };
    context.done();
  }
};
Note:
AZ = Azure.
AZ pg doesn't require SSL.
pg connectivity method: public access (allowed IP addresses).
Postman tests on local F5 run against the same AZ pg database, all in the same region.
pgAdmin and psql both run fast against the same database.
AF deploy is a zip-file deployment; my understanding is that it uses the same configuration.
I'm new to Azure, but based on my experience, if it were a credentials problem it would fail right away.
Update 1: FunctionAppLogs | where TimeGenerated between ( datetime(2022-01-21 16:33:20) .. datetime(2022-01-21 16:35:46) )
Is it because my pg network access is set to Public access?
My AZ pg DB is a flexible server; its current Networking setting is Public access (allowed IP addresses), and I had added some firewall rules with client IP addresses. My assumption was that access is allowed from within AZ, but it's not.
Solution 1: simply check the box Allow public access from any Azure service within Azure to this server at the bottom of Settings -> Networking.
Solution 2: find all of the AF's outbound IPs and add them to the firewall rules under Settings -> Networking. The reason to add them all is that Azure selects an outbound IP at random.
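If you go the Solution 2 route, one way to list the candidate outbound IPs (an Azure CLI sketch; substitute your own resource group and app name) is the possibleOutboundIpAddresses property of the Function App:

az functionapp show --resource-group <your-group> --name <your-app> \
    --query possibleOutboundIpAddresses --output tsv

possibleOutboundIpAddresses is a superset of outboundIpAddresses, so adding all of them covers the random selection.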
How can I get the current size of a GKE node pool using the REST (or Node) API?
I'm managing my own worker pool using my Express app running on my cluster, and can set the size of the pool and track the success of the setSize operation, but I see no API for getting the current node count. The NodePool resource only includes the original node count, not the current count. I don't want to use gcloud or kubectl on one of my production VMs.
I could go around GKE and try to infer the size using the Compute Engine (GCE) API, but I haven't looked into that approach yet. Note that it seems difficult to get the node count even from Stackdriver. Has anyone found any workarounds to get the current node pool size?
The worker pool size can be retrieved from the Compute Engine API by getting the instance group associated with the node pool.
const { google } = require('googleapis')
const Compute = require('@google-cloud/compute')

const container = google.container('v1')
const compute = new Compute()

const projectId = 'project-12345'
const zone = 'us-central1-a'
const nodePoolId = 'worker-pool'
const clusterId = 'cluster-name'

async function authorize() {
  const auth = new google.auth.GoogleAuth({
    scopes: [ 'https://www.googleapis.com/auth/cloud-platform' ],
  })
  return auth.getClient()
}

const getNodePoolSize = async () => {
  const auth = await authorize()

  // Look up the node pool on the cluster to find its managed instance group.
  const clusterName = `projects/${projectId}/zones/${zone}/clusters/${clusterId}`
  const request = { name: clusterName, auth }
  const response = await container.projects.locations.clusters.get(request)
  const nodePool = response.data.nodePools.find(({ name }) => name === nodePoolId)
  const igName = nodePool.instanceGroupUrls[0].match(/.*\/instanceGroupManagers\/([a-z0-9-]*)$/)[1]

  // The instance group's size is the current node count.
  const instanceGroup = await compute.zone(zone).instanceGroup(igName).get()
  return instanceGroup[1 /* 0 is config, 1 is instance */].size
}
Note that this uses two different Node API mechanisms. We could use google.compute instead of @google-cloud/compute. Also, the two APIs authenticate differently: the former uses the authorize() method to get a client, while the latter authenticates via the default account set in environment variables.
Is there an API to update firewall rules using NodeJS? An example would be really appreciated.
Requirement: I have a list of around 1700 CDN trusted IPs to be allowed to access a specific VM in GCP on port 80.
As I understand it, we can have a maximum of 256 source IPs per firewall rule, so I can create and update 8 of them and tag them with the same name.
Question: can we do it using the NodeJS API? This API doesn't return firewall rules.
The equivalent CLI commands are below:
gcloud compute firewall-rules describe alltraffic
gcloud compute firewall-rules update alltraffic --source-ranges="14.201.176.140/32","14.201.176.144/32"
gcloud compute firewall-rules create ramtest1 --allow="tcp:80" --description="ramtest1" --source-ranges="205.251.192.0/19","52.95.174.0/24" --target-tags="tcp-111"
https://cloud.google.com/sdk/gcloud/reference/compute/firewall-rules/
I don't see an update command in the Node.js API:
https://cloud.google.com/nodejs/docs/reference/compute/0.10.x/Firewall#create
https://cloud.google.com/nodejs/docs/reference/compute/0.10.x/Compute#createFirewall
exports.run_process = async (req, res) => {
  const Compute = require('@google-cloud/compute');
  const compute = new Compute();
  const network = compute.network('default');
  const firewalls = (await network.getFirewalls())[0];

  for (const firewall of firewalls) {
    // console.log('firewall == ' + JSON.stringify(firewall));
    console.log('firewall = ' + firewall.metadata.name);
    if (firewall.metadata.name === 'alltraffic') {
      console.log(' xxxxxxxxxxxxxxxxxxxx changing all traffic xxxxxxxxxxxxxx ');
    }
  }

  return res.status(200).send('ok');
};
The code above lists the firewall rules. I have no idea why it's called a firewall in the API when in the console it's called a firewall rule; it's so confusing.
You should use the setMetadata function to update a firewall rule. For example, take this Node.js snippet, which reads and updates the description of a firewall rule:
async function doit() {
  const Compute = require('@google-cloud/compute');
  const compute = new Compute();

  const f = compute.firewall('default-allow-10000');
  f.get().then(data => {
    const firewall = data[0];
    console.log('initial description: ' + firewall.metadata.description);
    const metadata = {
      description: 'new description for this rule'
    };
    return firewall.setMetadata(metadata);
  }).then(() => {
    console.log('description set');
    return compute.firewall('default-allow-10000').get();
  }).then(data => {
    const firewall = data[0];
    console.log('current description: ' + firewall.metadata.description);
  });
}

doit();
In my example, this gives the output of:
initial description: old description
description set
current description: new description for this rule
To see what exists on the metadata object, you should look at the definition of the Firewall resource in the REST API.
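Applied to the original requirement (replacing the allowed source IPs), the same pattern should work, since sourceRanges is a field on the Firewall resource. A hedged sketch, reusing the rule name ramtest1 from the question; the CIDRs are examples:

const Compute = require('@google-cloud/compute');
const compute = new Compute();

// Overwrite the rule's source ranges with a new batch of up to 256 CIDRs.
compute.firewall('ramtest1')
  .setMetadata({ sourceRanges: ['205.251.192.0/19', '52.95.174.0/24'] })
  .then(data => {
    const operation = data[0];
    return operation.promise(); // wait for the update to complete
  })
  .then(() => console.log('source ranges updated'));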
I've been trying to retrieve the Resource with the path "/" (the root) from AWS API Gateway using the Node.js AWS SDK. I know the naïve solution would be to do it this way:
var AWS = require('aws-sdk');
var __ = require('lodash');
var Promise = require('bluebird');

var resources = [];
var apiGateway = Promise.promisifyAll(new AWS.APIGateway({apiVersion: '2015-07-09', region: 'us-west-2'}));

var _finishRetrievingResources = function (resources) {
  var orderedResources = __.sortBy(resources, function (res) {
    return res.path.split('/').length;
  });
  var firstResource = orderedResources[0];
};

var _retrieveNextPage = function (resp) {
  resources = resources.concat(resp.data.items);
  if (resp.hasNextPage()) {
    resp.nextPage().on('success', _retrieveNextPage).send();
  } else {
    _finishRetrievingResources(resources);
  }
};

var foo = apiGateway.getResources({restApiId: 'mah_rest_api_id'}).on('success', _retrieveNextPage).send();
However, does anybody know of an alternate method? I'd prefer to know that I'll always have to make at most one call rather than multiple.
PS: I know there are several optimizations that could be made (e.g. checking for the root path on every response); I really want to know if there's a single SDK call that could fix this.
There is no single call guaranteed to work, though one call can suffice if you have fewer than 500 resources. As a consolation prize, the best practice is to page with position so you don't accidentally miss resources when there are over 500. If there are fewer than 500 resources, this will work with one call:
https://github.com/andrew-templeton/cfn-api-gateway-restapi/blob/bd964408bcb4bc6fc8ec91b5e1f0387c8f11691a/index.js#L77-L102
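If you do end up over that limit, here is a hedged sketch of the position-based paging (the names are mine; limit: 500 is the documented maximum page size for getResources), in the same callback style as the question:

var AWS = require('aws-sdk');
var apiGateway = new AWS.APIGateway({ apiVersion: '2015-07-09', region: 'us-west-2' });

// Page through getResources until the root path '/' turns up.
function findRootResource(restApiId, position, callback) {
  var params = { restApiId: restApiId, limit: 500 };
  if (position) params.position = position;
  apiGateway.getResources(params, function (err, data) {
    if (err) return callback(err);
    var root = data.items.find(function (res) { return res.path === '/'; });
    if (root) return callback(null, root);
    if (data.position) return findRootResource(restApiId, data.position, callback);
    callback(new Error('root resource not found'));
  });
}

findRootResource('mah_rest_api_id', null, function (err, root) {
  if (err) throw err;
  console.log('root resource id: ' + root.id);
});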