Google Cloud Datastore slow (>800ms) with simple query from Compute Engine - Node.js

When I try to query the Google Cloud Datastore from a (micro) compute engine, it usually takes >800ms to get a reply. The best I got was 450ms, the worst was >3 seconds.
I was under the impression that latency should be much, much lower (like 20-80ms), so I'm guessing I'm doing something wrong.
This is the (Node.js) code I'm using to query (from a simple Datastore with just a single entity):
const Datastore = require('@google-cloud/datastore');
const projectId = '<my-project-id>';
const datastoreClient = Datastore({
    projectId: projectId
});
var query = datastoreClient.createQuery('Test').limit(1);
console.time('query');
query.run(function (err, test) {
    if (err) {
        console.log(err);
        return;
    }
    console.timeEnd('query');
});
Not sure if it's relevant, but my App Engine project is in the us-central region, as is the Compute Engine instance I'm running the query from.
UPDATE
After some more testing I found out that the default authentication (token?) that you get when using the Node.js library provided by Google expires after about 4 minutes.
So in other words: if you use the same process, but you wait 4 minutes or more between requests, query times are back to >800ms.
I also tried authenticating with a key file, and that seemed to do better: subsequent requests are still fast, and the initial request takes only about half the time (>400ms).
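For reference, a rough sketch of explicit key-file authentication with this library (the path is a placeholder):
// Rough sketch: pass a service-account key file explicitly instead of relying on default credentials.
const Datastore = require('@google-cloud/datastore');
const datastoreClient = Datastore({
    projectId: '<my-project-id>',
    keyFilename: '/path/to/service-account-key.json' // placeholder path
});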

The latency you see on your initial requests to the Datastore is most likely due to caching being warmed up. The Datastore uses a distributed architecture to manage scaling, which allows your queries to scale with the size of your result set. The more often you run the same query, the better prepared the Datastore is to serve it, and the more consistent your response times become.
If you want similar response times at low Datastore access rates, it is recommended to configure your own caching layer. Google App Engine provides Memcache, which is optimized for use with the Datastore. Since you are making requests from Compute Engine, you can use other third-party solutions such as Redis or Memcached.
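As a rough sketch (library versions, the key name, the TTL, and the Redis URL are assumptions), a Redis-backed cache in front of the Datastore query could look like this:
// Sketch: cache the query result in Redis for 60 seconds before hitting the Datastore again.
// Newer versions of the client export { Datastore }; Memcached would work just as well.
const { Datastore } = require('@google-cloud/datastore');
const { createClient } = require('redis'); // node-redis v4+

const datastore = new Datastore({ projectId: '<my-project-id>' });
const cache = createClient({ url: 'redis://localhost:6379' }); // placeholder Redis instance
const cacheReady = cache.connect();        // connect once per process

async function getTestEntity() {
    await cacheReady;

    const cached = await cache.get('test:first');
    if (cached) return JSON.parse(cached); // cache hit: no Datastore round trip

    const [entities] = await datastore.runQuery(datastore.createQuery('Test').limit(1));
    await cache.set('test:first', JSON.stringify(entities), { EX: 60 }); // 60-second TTL
    return entities;
}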

Related

How to create a Flutter Stream using MongoDB (watch collection?) with Firebase Cloud Function

I've been trying out MongoDB as the database for my Flutter project lately, since I want to migrate away from a pure Firebase database (some limitations in Firebase are an issue for my project, like the "in-array" limit of 10 for queries).
I already made some CRUD operation methods in some Firebase Cloud Functions using MongoDB. I'm now able to save data and display it as a Future in a Flutter app (a simple ListView of Users in a FutureBuilder).
My question is: how could I create a StreamBuilder using MongoDB and Firebase Cloud Functions? I saw some stuff about watching a collection and change streams, but nothing clear enough for me (usually I read a lot of examples or tutorials to understand).
Maybe some of you have some clues or a tutorial that I can read/watch to learn a little bit more about that subject?
For now, I have this as an example (a Node.js Cloud Function stored in Firebase), which obviously produces a Future in my Flutter app (not real-time):
const functions = require('firebase-functions');
const { MongoClient } = require('mongodb');

exports.getUsers = functions.https.onCall(async (data, context) => {
    const uri = "mongodb+srv://....";
    const client = new MongoClient(uri);
    await client.connect();
    var results = await client.db("myDB").collection("user").find({}).toArray();
    await client.close();
    return results;
});
What would you advise me to do to obtain a Stream instead of a Future, maybe using collection watching and change streams from MongoDB? Please provide an example if possible!
Thank you very much!
Cloud Functions are meant for short-lived operations, not for long-term listeners. It is not possible to create long-lived connections from Cloud Functions, neither to other services (such as you're trying to do to MongoDB here) nor from Cloud Functions back to the calling client.
Also see:
If I implement onSnapshot real-time listener to Firestore in Cloud Function will it cost more?
Can a Firestore query listener "listen" to a cloud function?
the documentation on Eventarc, which is the platform that allows you to build custom triggers. It'll be a lot more involved, though.
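For context, MongoDB's change stream API (the "watch" the question mentions) looks roughly like the sketch below, but it requires a long-lived Node process rather than a short-lived Cloud Function; the URI, database, and collection names are placeholders:
// Sketch: a change stream in a long-lived Node process (not a Cloud Function).
// Change streams require a replica set; MongoDB Atlas provides one.
const { MongoClient } = require('mongodb');

async function watchUsers() {
    const client = new MongoClient('mongodb+srv://<placeholder-uri>');
    await client.connect();

    const changeStream = client.db('myDB').collection('user').watch();
    changeStream.on('change', (change) => {
        // Push the change to connected clients here (e.g., over WebSockets or a realtime database).
        console.log('change event:', change.operationType);
    });
}

watchUsers().catch(console.error);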

Cloud Firestore big data error - Deadline Exceeded [duplicate]

I would like to load a collection that is ~30k records, i.e. load it via:
import * as admin from 'firebase-admin';

admin.initializeApp();
const db = admin.firestore();
let documentsArray: Array<{}> = [];
db.collection(collection)
    .get()
    .then(snap => {
        snap.forEach(doc => {
            documentsArray.push(doc);
        });
    })
    .catch(err => console.log(err));
This always throws a Deadline Exceeded error. I have searched for some sort of mechanism that would allow me to paginate through it, but I find it hard to believe that I can't query an amount this modest in one go.
I thought I might be hitting the limit because of my rather slow machine, but then I deployed a simple Express app to App Engine to do the fetching and still had no luck.
Alternatively, I could export the collection with gcloud beta firestore export, but it does not provide JSON data.
I'm not sure about Firestore, but on Datastore I was never able to fetch that much data in one shot; I'd always have to fetch pages of about 1000 records at a time and build the result up in memory before processing it. You said:
"I have searched for some sort of mechanism that would allow me to paginate through"
Perhaps you missed this page:
https://cloud.google.com/firestore/docs/query-data/query-cursors
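For illustration, a cursor-based pagination sketch with the Admin SDK (the 1000-document page size and the helper name are assumptions) could look like this:
// Sketch: page through a large collection with query cursors instead of one huge get().
const admin = require('firebase-admin');
admin.initializeApp();
const db = admin.firestore();

async function fetchAll(collection) {
    const documents = [];
    let lastDoc = null;

    while (true) {
        let query = db.collection(collection).orderBy('__name__').limit(1000); // 1000-doc pages
        if (lastDoc) query = query.startAfter(lastDoc);

        const snap = await query.get();
        if (snap.empty) break;

        snap.docs.forEach(doc => documents.push(doc));
        lastDoc = snap.docs[snap.docs.length - 1]; // cursor for the next page
    }
    return documents;
}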
In the end, the issue was that the machine processing the 30k records from Firestore was not powerful enough to get the data in time. Solved by using an n1-standard-4 GCE instance.

Why does a simple SQL query cause a significant slowdown in my Lambda function?

I built a basic node.js API hosted on AWS Lambda and served over AWS API Gateway. This is the code:
'use strict';

// Require and initialize outside of your main handler
const mysql = require('serverless-mysql')({
    config: {
        host     : process.env.ENDPOINT,
        database : process.env.DATABASE,
        user     : process.env.USERNAME,
        password : process.env.PASSWORD
    }
});

// Import the Dialogflow module from the Actions on Google client library.
const {dialogflow} = require('actions-on-google');

// Instantiate the Dialogflow client.
const app = dialogflow({debug: true});

// Handle the Dialogflow intent named 'trip name'.
// The intent collects a parameter named 'tripName'.
app.intent('trip name', async (conv, {tripName}) => {
    // Run your query
    let results = await mysql.query('SELECT * FROM tablename where field = ? limit 1', tripName);

    // Respond with the user's lucky number and end the conversation.
    conv.close('Your lucky number is ' + results[0].id);

    // Run clean up function
    await mysql.end();
});

// Set the DialogflowApp object to handle the HTTPS POST request.
exports.fulfillment = app;
It receives a parameter (a trip name), looks it up in MySQL and returns the result.
My issue is that the API takes more than 5 seconds to respond, which is slow.
I'm not sure why it's slow: the MySQL database is a powerful Amazon Aurora instance, and Node.js is supposed to be fast.
I tested the function from the same AWS region as the MySQL database (Mumbai) and it still times out, so I don't think it has to do with distance between regions.
The source of the slowness is carrying out any SQL query (even a dead simple SELECT); it does bring back the correct result, just slowly.
When I remove the SQL part it becomes blazing fast. I increased the memory for Lambda to the maximum and redeployed Aurora on a far more powerful instance.
Lambda functions will run faster if you configure more memory. The less the memory configured, the worse the performance.
This means that if you have configured your function to use 128 MB, it's going to run on very low-profile hardware.
On the other hand, if you configure it to use 3 GB, it will run on a very decent machine.
At 1792 MB, your function runs on hardware with a dedicated core, which will speed up your code significantly considering you are making IO calls (network requests, for example). You can see this information here.
There's no magic formula, though. You have to run a few tests and see what memory configuration best suits your application. I would start with 3 GB and decrease it in chunks of 128 MB until you find the right configuration.
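To automate those tests, the memory setting can also be changed between runs programmatically; a rough sketch with the AWS SDK for JavaScript (the function name and region are assumptions based on the question) might look like this:
// Sketch: adjust the function's memory between test runs with the AWS SDK (v2).
const AWS = require('aws-sdk');
const lambda = new AWS.Lambda({ region: 'ap-south-1' }); // Mumbai, as in the question

async function setMemory(megabytes) {
    await lambda.updateFunctionConfiguration({
        FunctionName: 'fulfillment', // placeholder function name
        MemorySize: megabytes        // e.g. start at 3008 and step down by 128
    }).promise();
}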

What is the best way to stream data in real time into BigQuery (using Node)?

I want to stream HTTP requests into BigQuery, in real time (or near real time).
Ideally, I would like to use a tool that provides an endpoint to stream HTTP requests to and allows me to write simple Node such that:
1. I can add the appropriate insertId so BigQuery can dedupe requests if necessary and
2. I can batch the data so I don't send a single row at a time (which would result in unnecessary GCP costs)
I have tried using AWS Lambda and Google Cloud Functions, but the necessary setup for this problem on those platforms far exceeds the needs of the use case here. I assume many developers have this same problem and there must be a better solution.
Since you are looking for a way to stream HTTP requests to BigQuery and also send them in batch to minimize Google Cloud Platform costs, you might want to take a look at the public documentation where this issue is explained.
You can also find a Node.js template on how to perform the stream insert into BigQuery:
// Imports the Google Cloud client library
const {BigQuery} = require('@google-cloud/bigquery');

/**
 * TODO(developer): Uncomment the following lines before running the sample.
 */
// const projectId = "your-project-id";
// const datasetId = "my_dataset";
// const tableId = "my_table";
// const rows = [{name: "Tom", age: 30}, {name: "Jane", age: 32}];

// Creates a client
const bigquery = new BigQuery({
    projectId: projectId,
});

async function insertRows() {
    // Inserts data into a table
    await bigquery
        .dataset(datasetId)
        .table(tableId)
        .insert(rows);
    console.log(`Inserted ${rows.length} rows`);
}
As for the batch part, the recommended batch size is 500 rows per request, although it can go up to 10,000. More information about the quotas and limits for streaming inserts can be found in the public documentation.
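A rough sketch of the batching side (the helper name is hypothetical; the 500-row chunk size follows the recommendation above):
// Sketch: split buffered rows into 500-row chunks before streaming them to BigQuery.
async function insertInBatches(bigquery, datasetId, tableId, rows, batchSize = 500) {
    for (let i = 0; i < rows.length; i += batchSize) {
        const batch = rows.slice(i, i + batchSize); // at most 500 rows per request
        await bigquery.dataset(datasetId).table(tableId).insert(batch);
    }
}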
You can make use of Cloud Functions. With their help, you can create your own API in Node.js, which can then be used for streaming data into BigQuery.
The target architecture for STREAM will be like this:
Pub/Sub subscriber (push type) -> Google Cloud Function -> Google BigQuery
You can use this API in batch mode as well, with the help of Cloud Composer (i.e. Apache Airflow) or Cloud Scheduler to schedule your API as per your requirements.
The target architecture for BATCH will be like this:
Cloud Scheduler / Cloud Composer -> Google Cloud Function -> Google BigQuery
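As a rough sketch of the streaming path (dataset, table, and function names are placeholders), a Pub/Sub-triggered background Cloud Function in Node.js might look like this:
// Sketch: Pub/Sub-triggered Cloud Function that writes each message into BigQuery.
const { BigQuery } = require('@google-cloud/bigquery');
const bigquery = new BigQuery();

exports.pubsubToBigQuery = async (message, context) => {
    // Pub/Sub message payloads arrive base64-encoded.
    const row = JSON.parse(Buffer.from(message.data, 'base64').toString());
    await bigquery.dataset('my_dataset').table('my_table').insert([row]); // placeholder ids
};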

Node.js API choking with concurrent connections

This is the first time I've used Node.js and Mongo, so please excuse any ignorance. I come from a PHP background. It was my understanding that Node.js scaled well because of the event-driven nature of it. As such, I built my API in node and have been testing it on a localhost. Today, I deployed it to my cloud server and everything works great, except...
As the requests start to pile up, they start to take a long time to fulfill. With just 2 clients connecting to the API, already I'm seeing 30sec+ page load times when both clients are trying to make several requests at once (which does sometimes happen).
Most of the work done by the API is either (a) reading/writing to MongoDB, which resides on a second server in the cloud, or (b) making requests to other APIs, websites, etc. and returning the results. Neither of these operations should be blocking, but I can imagine the problem being a bottleneck either on the MongoDB server (a) or on the external APIs (b).
Of course, I will have multiple application servers in the end, but I would expect each one to handle more than a couple concurrent clients without choking.
Some considerations:
1) I have some console.logs that I left in my Node code, and I have an SSH client open to monitor the cloud server. I suspect that this could cause a slowdown.
2) I use express, mongoose, Q, request, and a handful of other modules
Thanks for taking the time to help a node newb ;)
Edit: added some pics of performance graphs after some responses below...
EDIT: here's a typical callback -- it is called by the Express router, and it uses the Q module and OAuth to make a POST API call to Facebook:
post: function(req, links, images, callback)
{
    // removed some code that calculates the target (string) and params (obj) variables
    // the this.request function is a simple wrapper around the oauth.getProtectedResource function
    Q.ncall(this.request, this, target, 'POST', params)
        .then(function(res){
            callback(null, res);
        })
        .fail(callback).end();
},
EDIT: some "upsert" code
upsert: function(query, callback)
{
    var id = this.data._id,
        upsertData = this.data.toObject(),
        query = query || {'_id': id};

    delete upsertData._id;

    this.model.update(query, upsertData, {'upsert': true}, function(err, res, out){
        if(err)
        {
            if(callback) callback(new Errors.Database({'message':'the data could not be upserted','error':err, 'search': query}));
            return;
        }
        if(callback) callback(null);
    });
},
Admittedly, my knowledge of Q/promises is weak, but I think I have consistently implemented them in a way that does not block...
Your question has provided half of the relevant data: the technology stack. However, when debugging performance issues, you also need the other half of the data: performance metrics.
You're running some "cloud servers", but it's not clear what these servers are actually doing. Are they spiked on CPU? on Memory? on IO?
There are lots of potential issues. Are you running Express in production mode? Are you taking up too much IO on your MongoDB server? Are you legitimately downloading too much data? Did you get caught in an infinite Node.js loop? (It happens.)
I would like to provide better advice, but without knowing the status of the servers involved it's really impossible to start picking at any specific underlying technology. You may be a "Node newb", but basic server monitoring is pretty standard across programming languages.
Thank you for the extra details. I will reiterate the most important part of my comments above: where are these servers blocked?
CPU? (clearly not from your graph)
Memory? (doesn't seem likely here)
IO? (where are the IO graphs, what is your DB server doing?)
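As a rough starting point for those per-request metrics on the application side, a tiny Express timing middleware (a sketch, with placeholder logging) could look like this:
// Sketch: log how long each request spends in the app, to separate app latency
// from MongoDB / external-API latency.
var express = require('express');
var app = express();

app.use(function (req, res, next) {
    var start = Date.now();
    res.on('finish', function () {
        console.log(req.method + ' ' + req.url + ' - ' + (Date.now() - start) + 'ms');
    });
    next();
});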
