I've been stuck on this for the past few days. I've googled around, but nothing I've found fetches the results. The following code works on my local machine; all results are fetched in 4 ms.
In production, however, the same code throws a gateway timeout error.
The application is deployed to an OKD (OpenShift) cluster, where it runs as one of the pods, and it uses the okd-api node module.
Here, I'm fetching all the pods:
let fetchListarr=[];
aws_app.get('/List', (req,res) =>
{
try
{
Promise.all(promisesArray).then(values => {
// do stuff with values here
res.send(values)
})
.catch((err)=>{console.log(err)});
}
catch (e){console.log( e);}
});
// assuming the okd-api module exports the login() helper used below
const { login } = require('okd-api');
var WMArr = [];
var prom1 = new Promise(function(resolve, reject) {
let config = {
cluster:'my/url/to/openshift',
user: 'user',
password: 'password',
strictSSL: false
};
login(config)
.then(okd=>{
okd.namespace('namespace').pod.watch_all(pods=>{
pods.map((v)=> {
if(!WMArr.includes(v.object.metadata.labels.app))
{ let obj = {
TargetServiceName: v.object.metadata.labels.app,
Instance:
WMArr.lastIndexOf(v.object.metadata.labels.app) ===
WMArr.indexOf(v.object.metadata.labels.app)
? 1
: WMArr.lastIndexOf(v.object.metadata.labels.app) + 1,
Status: v.object.status.phase
};
fetchListarr.push(obj);
}
WMArr.push(v.object.metadata.labels.app);
});
})
// resolve after a fixed 5 s, whether or not the watch callback has populated fetchListarr yet
setTimeout(function() {
  resolve(fetchListarr);
}, 5000);
})
.catch(err=>{console.log(err)})
});
var promisesArray= [prom1];
Increasing the timeout hasn't helped.
Can anyone tell me whether the issue is with the code, or where I need to configure the timeout setting? (I'm new to deploying apps on OKD/OpenShift.)
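For what it's worth, one restructuring worth checking (a sketch only, reusing the okd-api login/watch_all calls from the code above, not a verified fix) is to build the list per request and resolve as soon as the watch delivers its first pod snapshot, rather than waiting on a fixed 5-second timer and module-level arrays that never reset between requests:
// Sketch only; config is the same cluster/user/password object shown above,
// hoisted to module scope, and the Instance value assumes "number of pods
// sharing the app label", which is what the lastIndexOf/indexOf logic above
// appears to be aiming for.
const { login } = require('okd-api');

function fetchPodList(config) {
  return new Promise((resolve, reject) => {
    login(config)
      .then(okd => {
        okd.namespace('namespace').pod.watch_all(pods => {
          const seen = [];
          const list = [];
          pods.forEach(v => {
            const app = v.object.metadata.labels.app;
            if (!seen.includes(app)) {
              list.push({
                TargetServiceName: app,
                Instance: pods.filter(p => p.object.metadata.labels.app === app).length,
                Status: v.object.status.phase
              });
            }
            seen.push(app);
          });
          resolve(list); // resolves on the first snapshot; later calls are no-ops
        });
      })
      .catch(reject);
  });
}

aws_app.get('/List', (req, res) => {
  fetchPodList(config)
    .then(list => res.send(list))
    .catch(err => {
      console.log(err);
      res.status(500).send('Failed to fetch pod list');
    });
});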
Related
I have a lambda function using Node 12.
I need to add a new connection to a Redis database hosted in AWS ElastiCache.
Both are in one private VPC and the security groups/subnets are configured properly.
Solution:
globals.js:
const redis = require('redis');
const redisClient = redis.createClient(
`redis://${process.env.REDIS_HOST}:${process.env.REDIS_PORT}/${process.env.REDIS_DB}`,
);
redisClient.on('error', (err) => {
console.log('REDIS CLIENT ERROR:' + err);
});
module.exports.globals = {
REDIS: require('../helpers/redis')(redisClient),
};
index.js (outside handler):
const { globals } = require('./config/globals');
global.app = globals;
const lambda_handler = (event, context, callback) => { ... }
exports.handler = lambda_handler;
helpers/redis/index.js:
const get = require('./get');
module.exports = (redisClient) => {
return {
get: get(redisClient)
};
};
helpers/redis/get.js:
module.exports = (redisClient) => {
return (key, cb) => {
redisClient.get(key, (err, reply) => {
if (err) {
cb(err);
} else {
cb(null, reply);
}
});
};
};
Function call:
app.REDIS.get(redisKey, (err, reply) => {
console.log(`REDIS GET: ${err} ${reply}`);
});
Problem:
When I increase the Lambda timeout to a value greater than the Redis timeout, I get this error:
REDIS CLIENT ERROR:Error: Redis connection to ... failed - connect ETIMEDOUT ...
Addition:
I tried quitting/closing the connection after each transaction:
module.exports = (redisClient) => {
return (cb) => {
redisClient.quit((err, reply) => {
if (err) {
cb(err);
} else {
cb(null, reply);
}
});
};
};
app.REDIS.get(redisKey, (err, reply) => {
console.log(`REDIS GET: ${err} ${reply}`);
if (err) {
cb(err);
} else {
if (reply) {
app.REDIS.quit(() => {
cb()
});
}
}
})
Error:
REDIS GET: AbortError: GET can't be processed. The connection is already closed.
Extra Notes:
I have to use callbacks, which is why I pass them in the examples above.
I'm using "redis": "^3.0.2".
It's not a configuration issue, as the cache was accessed hundreds of times in a short period of time before it started giving the timeout errors.
Everything works normally locally.
"It's not a configuration issue as the cache was accessed hundreds of times in a short period of time but then it started giving the timeout errors."
I think this is the origin of the issue: the Redis database has probably hit its size limit and can no longer process new data. Can you delete old data from it?
It's also possible that ElastiCache has a limit on new TCP client connections; if that limit is exhausted, new connections are refused with an error message similar to the one you mentioned.
If the Redis client in the Lambda function cannot establish a connection, the Lambda invocation fails and a new one is started. The new Lambda function opens one more connection to Redis, Redis cannot process it, another Lambda function is started, and so on.
So at some point we hit the limit on active Redis connections and the system is deadlocked.
I think you could temporarily stop all Lambda functions and scale up the ElastiCache Redis database.
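If the connection-exhaustion theory is right, one mitigation to try (a sketch, not from the original post; the env variable names are the ones from globals.js above, and the cap of 3 attempts is arbitrary) is the retry_strategy option of the redis v3 client, so a broken ElastiCache connection fails fast instead of every retried Lambda piling up more TCP connections:
const redis = require('redis');

const redisClient = redis.createClient({
  host: process.env.REDIS_HOST,
  port: process.env.REDIS_PORT,
  db: process.env.REDIS_DB,
  retry_strategy: options => {
    if (options.attempt > 3) {
      // Stop reconnecting and surface the error instead of holding more connections open.
      return new Error('Redis unreachable after 3 attempts');
    }
    return Math.min(options.attempt * 200, 1000); // back off, up to 1 second per retry
  }
});

redisClient.on('error', err => console.log('REDIS CLIENT ERROR:' + err));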
I am working on creating a zip of multiple files on the server and streaming it to the client while it is being created. Initially I was using ArchiverJs; it worked fine when I appended buffers to it, but it failed when I needed to add streams. After some discussion on GitHub, I switched to Node zip-stream, which started working fine, thanks to jntesteves. But as I deploy the code on GKE (Kubernetes), I start getting network failed errors for huge files.
Here is my sample code:
const ZipStream = require("zip-stream");
// modules used below: native https/http, plus the request package as a fallback
const https = require("https");
const http = require("http");
const request = require("request");
/**
 * @summary Add the readable stream provided by the https module into zipStreamer using the entry method
 */
const handleEntryCB = ({ readableStream, zipStreamer, fileName, resolve }) => {
  readableStream.on("error", error => {
    console.error("Error while listening readableStream : ", error);
    resolve("done");
  });
zipStreamer.entry(readableStream, { name: fileName }, error => {
if (!error) {
resolve("done");
} else {
console.error("Error while listening zipStream readableStream : ", error);
resolve("done");
}
});
};
/**
 * @summary Handle downloading of files using the native https and http modules, with the request module as a fallback
 */
const handleUrl = ({ elem, zipStreamer }) => {
return new Promise((resolve, reject) => {
let fileName = elem.fileName;
const url = elem.url;
//Used in most of the cases
if (url.startsWith("https")) {
https.get(url, readableStream => {
handleEntryCB({ readableStream, zipStreamer, url, fileName, resolve, reject });
});
} else if (url.startsWith("http")) {
http.get(url, readableStream => {
handleEntryCB({ readableStream, zipStreamer, url, fileName, resolve, reject });
});
} else {
const readableStream = request(url);
handleEntryCB({ readableStream, zipStreamer, url, fileName, resolve, reject });
}
});
};
const downloadZipFile = async (data, resp) => {
let { urls = [] } = data || {};
if (!urls.length) {
throw new Error("URLs are mandatory.");
}
//Output zip name
const outputFileName = `Test items.zip`;
console.log("Downloading using streams.");
//Initialize zip-stream instance
const zipStreamer = new ZipStream();
//Set headers to response
resp.writeHead(200, {
"Content-Type": "application/zip",
"Content-Disposition": `attachment; filename="${outputFileName}"`,
"Access-Control-Allow-Origin": "*",
"Access-Control-Allow-Methods": "GET, POST, OPTIONS"
});
//piping zipStreamer to the resp so that client starts getting response
//as soon as first chunk is added to the zipStreamer
zipStreamer.pipe(resp);
for (const elem of urls) {
await handleUrl({ elem, zipStreamer });
}
zipStreamer.finish();
};
app.post(restPrefix + "/downloadFIle", async (req, resp) => {
  try {
    const { data } = req.body || {};
    // downloadZipFile is async; await it so rejections are caught by this try/catch
    await downloadZipFile(data, resp);
  } catch (error) {
    console.error("[FileBundler] unknown error : ", error);
    if (resp.headersSent) {
      resp.end("Unknown error while archiving.");
    } else {
      resp.status(500).end("Unknown error while archiving.");
    }
  }
});
I tested with 7-8 files of ~4.5 GB each locally and it works fine, but when I tried the same on Google Kubernetes Engine, I got a network failed error.
After some more research, I increased the server timeout on k8s to 3000 seconds, and then it started working, but I guess increasing the timeout is not a good solution.
Is there anything I am missing at the code level, or can you suggest a good GKE deployment configuration for a server that downloads large files for many concurrent users?
I have been stuck on this for the past 1.5+ months. Please help!
Edit 1: I edited the timeout in the ingress, i.e. Network services -> Load balancing -> edit the timeout in the service.
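Separately from the ingress timeout, one guard worth adding to the streaming code above (a sketch; zipStreamer and resp are the variables from downloadZipFile, and it assumes Node 12.9+ for writableEnded): stop feeding the archive when the client disconnects, so aborted downloads of huge files don't keep running on the server.
// Inside downloadZipFile, after zipStreamer.pipe(resp):
resp.on("close", () => {
  if (!resp.writableEnded) {
    console.warn("[FileBundler] client disconnected, destroying zip stream");
    zipStreamer.destroy(); // zip-stream is a Node Transform stream, so destroy() is available
  }
});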
I have an AWS Lambda (Node.js) talking to an Aurora database. Both belong to the same VPC, with internet access enabled via a subnet. The RDS cluster also has an inbound rule that allows traffic from the VPC used for the Lambda (which should be the same VPC). To my surprise, I found that RDSDataService from the AWS SDK fails to connect to the database, whereas the mysql package works. The two code snippets follow.
I would very much like to use the AWS SDK, as that would reduce the deployment bundle size: I wouldn't have to include the mysql package in the bundle at all. Is there any way to achieve that?
Failed attempt to use RDSDataService
const AWS = require("aws-sdk");
const rdsData = new AWS.RDSDataService({
params: {
dbClusterOrInstanceArn: 'rds.cluster.arn',
awsSecretStoreArn: 'rds.cluster.secret.arn',
database: 'mydb'
},
endpoint: 'mydb.endpoint'
});
return new Promise((resolve, reject) => {
try {
rdsData.executeSql({
dbClusterOrInstanceArn: 'rds.cluster.arn',
awsSecretStoreArn: 'rds.cluster.secret.arn',
database: 'mydb',
sqlStatements: "select 1 + 1 as result;"
}, (err, data) => {
if (err) {
  reject(err);
  return; // don't fall through and also resolve with undefined data
}
const response = {
statusCode: 200,
body: JSON.stringify(data),
};
resolve(response);
});
} catch (er) {
reject(er);
}
});
Working implementation using mysql
const mysql = require('mysql');
const connection = mysql.createConnection({
host: 'mydb.endpoint',
user: 'user',
password: 'password',
port: 3306,
database: 'mydb',
debug: false
});
connection.connect(function (err) {
if (err) context.fail();
else {
connection.query('select 1 + 1 as result', function (error, results, fields) {
if (error) throw error;
resolve('The solution is: ' + JSON.stringify(results, undefined, 2));
});
}
});
connection.end();
As it turned out, Data API is not yet available for my region. The supported regions are listed here: https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/data-api.html#data-api.regions
I have an AWS Lambda that uses the Sequelize ORM to talk to AWS Aurora. It works fine the first time it's accessed, but after some unknown number of minutes the Lambda errors out with a Sequelize error saying access denied for user@ip.address.
async function connect() {
const signer = new AWS.RDS.Signer({
'region': region,
'username': dbUsername,
'hostname': dbEndpoint,
'port': dbPort
});
let token;
await signer.getAuthToken((error, result) => {
if (error) {
throw error;
}
token = result;
});
return token;
};
const sequelizeOptions = {
'host': dbEndpoint,
'port': dbPort,
'ssl': true,
'dialect': 'mysql',
'dialectOptions': {
'ssl': 'Amazon RDS',
'authSwitchHandler': (data, callback) => {
if (data.pluginName === 'mysql_clear_password') {
const password = token + '\0';
const buffer = Buffer.from(password);
callback(null, buffer);
}
}
},
pool: {
max: 5,
min: 0,
acquire: 30000,
idle: 10000
}
};
let token;
exports.create = async () => {
token = await connect();
return new Sequelize(dbName, dbUsername, token, sequelizeOptions);
}
exports.buildResponse = resultsArray => {
return {
"statusCode": 200,
headers: {
"Access-Control-Allow-Origin": "*",
"Access-Control-Allow-Credentials": true
},
"body": JSON.stringify(resultsArray),
"isBase64Encoded": false
};
};
reference: article
Posting as a more explicit answer than my previous comment.
Short answer
As you are reusing a token and db connection created outside of the lambda handler, one or both of those things is timing out.
Longer answer
Lambdas run in containers; those containers are re-used until they are killed due to inactivity or a code change, but once a container is running, only the code inside the handler function is executed on subsequent invocations.
This means that code outside of the handler function only runs when a new container is started (because there is no running container, or because a concurrent invocation arrives).
If code outside the handler creates something time-limited, like a DB connection or a token with an expiry, and the Lambda is invoked often enough that the container is never killed, time will simply run out.
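A minimal sketch of what that implies for the code in the question, assuming the create/buildResponse helpers shown above (the require path and the refresh interval are illustrative assumptions): build or rebuild the token and Sequelize instance inside the handler, so no invocation runs with an expired token. RDS IAM auth tokens are only valid for 15 minutes, so the cache below refreshes earlier than that.
// Hypothetical handler; `create` and `buildResponse` are the question's exports.
const { create, buildResponse } = require('./db'); // illustrative path
const TOKEN_TTL_MS = 10 * 60 * 1000; // refresh well before the 15-minute token expiry

let cached; // { sequelize, createdAt }, reused while the container stays warm

exports.handler = async (event) => {
  if (!cached || Date.now() - cached.createdAt > TOKEN_TTL_MS) {
    cached = { sequelize: await create(), createdAt: Date.now() };
  }
  const [results] = await cached.sequelize.query('SELECT 1 + 1 AS result');
  return buildResponse(results);
};
Closing the previous Sequelize instance before replacing it would be a further refinement.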
I am using the Couchbase Node.js SDK. In the basic sample that follows, I open a bucket to insert a single record. I have wrapped each method in a promise to force them to run one after another (sequentially), so I can measure each method's running time.
My OS: Ubuntu 16.04
'use strict';
const couchbase = require('couchbase');
const cluster = new couchbase.Cluster('couchbase://localhost');
const uuid = require('uuid/v4');
console.time('auth');
cluster.authenticate('administrator', 'adminadmin');
console.timeEnd('auth');
function open() {
return new Promise((resolve, reject) => {
console.time('open');
let bucket = cluster.openBucket('test', function (err) {
if (err) {
console.error(err);
reject(err);
}
resolve(bucket);
});
});
}
function insert(bucket, obj) {
return new Promise((resolve, reject) => {
console.time('upsert');
bucket.upsert(`uuid::${obj.name}`, obj, function (err, result) { // use the obj parameter rather than the outer blog variable
if (err) {
console.log(err);
reject(err);
}
resolve(bucket);
});
});
}
function dc(bucket) { // disconnect
return new Promise((resolve, reject) => {
console.time('dc');
bucket.disconnect();
resolve('ok');
});
}
// data to insert
let blog = {
id: uuid(),
name: 'Blog A',
posts: [
{
id: uuid(),
title: 'Post 1',
content: 'lorem ipsum'
}
]
};
open().then((bucket) => {
console.timeEnd('open');
insert(bucket, blog).then((bucket) => {
console.timeEnd('upsert');
dc(bucket).then((res) => {
console.timeEnd('dc');
console.log(res);
});
});
});
The output is:
auth: 0.237ms
open: 58117.771ms <--- this shows the problem
upsert: 57.006ms
dc: 0.149ms
ok
I ran sdk-doctor. It gave me two lines worth mentioning:
“WARN: Your connection string specifies only a single host. You should consider adding additional static nodes from your cluster to this list to improve your applications fault-tolerance”
“INFO: Failed to retreive cluster information (status code: 401)”
and the summary is:
Summary:
[WARN] Your connection string specifies only a single host. You should consider adding additional static nodes from your cluster to this list to improve your applications fault-tolerance
Would anyone please help?
According to this answer in the Couchbase forum, it seemed that my DNS servers were not configured properly.
It looks as though your DNS servers may be configured improperly. As part of the normal bootstrap procedure, we attempt to resolve SRV records for the hostname that is provided, it looks like you’re DNS servers may be timing out when trying to do this, causing a substantial delay when connecting. A quick way to test this theory is to add an additional hostname to your bootstrap list to disqualify the connection string from our DNS-SRV policy (for instance, use: couchbase://localhost,invalidhostname).
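The quick test from that quote, applied to the sample above (the second hostname is deliberately invalid; it only exists to opt the connection string out of the DNS-SRV policy):
// Adding a second hostname disqualifies the connection string from the
// SDK's DNS-SRV bootstrap lookup, which is what appeared to be timing out.
const cluster = new couchbase.Cluster('couchbase://localhost,invalidhostname');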