The SFTP transfer works locally with the same configuration.
The problem shows up in the Lambda logs, which I believe point at the configuration by giving me: Error: Timed out while waiting for handshake.
const config = {
  host: 'ftp.xxxx.at',
  port: '22',
  username: 'SFTP_xxx',
  password: 'xxx9fBcS45',
  MD5_Fingerprint: 'xxx:8f:5b:1a',
  protocol: "sftp",
  algorithms: {
    serverHostKey: ['ssh-rsa', 'ssh-dss']
  }
};
// get a file over SFTP and upload its contents to S3
const get = () => {
  sftp.connect(config).then(() => {
    return sftp.get('./EMRFID_201811210903.csv', null, 'utf8', null);
  }).then((stream) => {
    let body = '';
    stream.on('data', (chunk) => {
      body += chunk;
    });
    stream.on('end', () => {
      uploadRFIDsToS3(body);
      // close the connection
      sftp.end();
    });
  }).catch((err) => {
    console.log('catch err:', err);
  });
};
-
vpc:
  securityGroupIds:
    - sg-01c29be1d8fbxx59
  subnetIds:
    - subnet-007a88d9xxea434d
-
2019-02-18T13:53:51.121Z e688c7bd-24fc-45a1-a565-f2a4c313f846 catch err: { Error: Timed out while waiting for handshake
at Timeout._onTimeout (/var/task/node_modules/ssh2-sftp-client/node_modules/ssh2/lib/client.js:695:19)
at ontimeout (timers.js:482:11)
at tryOnTimeout (timers.js:317:5)
at Timer.listOnTimeout (timers.js:277:5) level: 'client-timeout' }
I added a VPC and security group in AWS and I still get the same error.
I've run out of ideas for how to fix it.
Increase the timeout value for the Lambda function. Memory allocation and timeout are among the settings you can configure for any Lambda function in AWS.
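For example, if the function is deployed with the Serverless Framework (which the vpc block above suggests), the timeout and memory can be raised in serverless.yml. The function name and values here are illustrative, not taken from the question:

```yaml
functions:
  getSftpFile:            # hypothetical function name
    handler: handler.get
    timeout: 60           # seconds; the default is 6
    memorySize: 512       # MB; the default is 1024
```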
I figured it out.
What was wrong was that Lambda was moving on to another function before the connection was established.
So I added await to the connection and to the other calls that should not interfere with each other, and that made it work.
To learn more about await, see this link:
https://javascript.info/async-await
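A minimal sketch of the difference (connect here is a stand-in for sftp.connect(config), not the real client):

```javascript
// Stand-in for an async connection such as sftp.connect(config).
const connect = () => Promise.resolve('connected');

// Without await: the function returns before the promise resolves,
// which is how the Lambda moved on without an established connection.
const withoutAwait = () => {
  let status = 'not connected yet';
  connect().then((s) => { status = s; });
  return status; // the .then callback has not run yet
};

// With await: execution pauses until the connection promise resolves.
const withAwait = async () => {
  return await connect();
};

console.log(withoutAwait());             // 'not connected yet'
withAwait().then((s) => console.log(s)); // 'connected'
```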
I'm trying to implement Firebase Functions v2; however, the runtime options I am passing into the onCall function do not seem to work. It is timing out after 60 seconds. Am I doing something wrong here?
const {onCall} = require("firebase-functions/v2/https");

const runtimeOpts = {
  timeoutSeconds: 500,
  memory: '8GB' as '8GB',
};

export const calculatev2 = onCall(runtimeOpts,
  async (event: any) => {
    try {
      // run calculations
      return ({calculations: 'the calculated stuffs'});
    } catch (err) {
      console.log(err.message);
      throw new functions.https.HttpsError('internal', 'Error in calculating', err);
    }
  }
);
⚠ functions: Your function timed out after ~60s. To configure this timeout, see
https://firebase.google.com/docs/functions/manage-functions#set_timeout_and_memory_allocation.
> Error: Function timed out.
> at Timeout._onTimeout (/Users/bic/.nvm/versions/node/v16.13.2/lib/node_modules/firebase-tools/lib/emulator/functionsEmulatorRuntime.js:634:19)
> at listOnTimeout (node:internal/timers:557:17)
> at processTimers (node:internal/timers:500:7)
I am using Node v14.17.0 and "ssh2-sftp-client": "^7.0.0" with the method fastPut (https://github.com/theophilusx/ssh2-sftp-client#sec-5-2-9).
Checking the remote files is okay, so the connection works.
My environment is WSL2 Ubuntu-20.04.
The problem I face is this error:
RuntimeError: abort(Error: fastPut: No response from server Local: /home/draganddrop/testi.txt Remote: Downloads/testi.txt). Build with -s ASSERTIONS=1 for more info.
at process.J (/home/draganddrop/node_modules/ssh2/lib/protocol/crypto/poly1305.js:20:53)
at process.emit (events.js:376:20)
at processPromiseRejections (internal/process/promises.js:245:33)
at processTicksAndRejections (internal/process/task_queues.js:96:32)
I have also tried sftp> put /home/draganddrop/testi.txt Downloads/testi.txt from the console, which works.
Code I am using:
let Client = require('ssh2-sftp-client');
let sftp = new Client();

let remotePath = 'Downloads/testi.txt';
let localPath = '/home/draganddrop/testi.txt';

const config = {
  host: 'XX.XX.XXX.XXX',
  port: '22',
  username: 'XXXXX',
  password: 'XXXXXX'
};

sftp.connect(config)
  .then(() => {
    sftp.fastPut(localPath, remotePath);
    //return sftp.exists(remotePath);
  })
  //.then(data => {
  //  console.log(data); // will be false or d, -, l (dir, file or link)
  //})
  .then(() => {
    sftp.end();
  })
  .catch(err => {
    console.error(err.message);
  });
I have no idea what causes this error; I've tried different paths and get either a bad-path error or this one. What could be the cause?
The reason for the problem is that the connection closes before fastPut finishes executing.
You call connect, and in the first .then the method fastPut runs asynchronously; the chain does not wait for it to finish and passes undefined to the next .then in the chain.
To solve the problem, you just need to return the promise you receive from fastPut:
sftp.connect(config)
  .then(() => sftp.fastPut(localPath, remotePath))
  .then((data) => { /* do something */ })
  .finally(() => sftp.end());
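The effect is easy to reproduce with stand-ins for the client calls (connect, fastPut, and end below are hypothetical mocks, not the ssh2-sftp-client API; fastPut is given a small delay to play the slow upload):

```javascript
// Hypothetical stand-ins for the client calls, logging their order into `log`.
const connect = (log) => Promise.resolve().then(() => { log.push('connect'); });
const fastPut = (log) => new Promise((resolve) =>
  setTimeout(() => { log.push('fastPut'); resolve(); }, 10));
const end = (log) => { log.push('end'); };

// Broken: fastPut's promise is dropped, so end() runs before the upload finishes.
const broken = (log) =>
  connect(log)
    .then(() => { fastPut(log); }) // promise not returned
    .then(() => end(log));

// Fixed: returning the promise makes the chain wait for the upload.
const fixed = (log) =>
  connect(log)
    .then(() => fastPut(log))
    .then(() => end(log));

const order = [];
fixed(order).then(() => console.log(order)); // [ 'connect', 'fastPut', 'end' ]
```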
I'm trying to connect to a PostgreSQL database using Knex.js, but I just can't get a connection to happen. The only exception I'm seeing is:
Error KnexTimeoutError: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call?
I built this simple test script to make sure it wasn't part of our program:
const knex = require("knex")({
  client: 'pg',
  connection: {
    host: 'localhost',
    port: 5432,
    database: 'postgres',
    user: 'postgres',
    password: 'password'
  },
  pool: {
    afterCreate: (conn, done) => {
      console.log("Pool created");
      done(false, conn);
    }
  },
  debug: true,
  acquireConnectionTimeout: 2000
});

console.log("A")
const a = knex.raw('select 1+1 as result').then(result => console.log("A Success", result)).catch(err => console.log("A Error", err));
console.log("B")
const b = knex.select("thing").from("testdata").then(data => console.log("B Success", data)).catch(err => console.log("B Error", err));
console.log("C")
const c = knex.transaction(trx => {
  trx.select("thing").from("testdata")
    .then(data => {
      console.log("C Success", data);
    })
    .catch(err => {
      console.log("C Error", err);
    });
})
  .catch(err => {
    console.log("C Error", err);
  });
console.log("waiting on query")
// Promise.all([a, b, c]).then(() => {
//   console.log("Destroying")
//   knex.destroy()
// })
This produces the following output:
A
B
C
waiting on query
A Error KnexTimeoutError: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call?
at Client_PG.acquireConnection (E:\DEV\work\niba-backend\node_modules\knex\lib\client.js:347:26)
at runNextTicks (internal/process/task_queues.js:58:5)
at listOnTimeout (internal/timers.js:520:9)
at processTimers (internal/timers.js:494:7) {
sql: 'select 1+1 as result',
bindings: undefined
}
B Error KnexTimeoutError: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call?
at Client_PG.acquireConnection (E:\DEV\work\niba-backend\node_modules\knex\lib\client.js:347:26) {
sql: undefined,
bindings: undefined
}
C Error KnexTimeoutError: Knex: Timeout acquiring a connection. The pool is probably full. Are you missing a .transacting(trx) call?
at Client_PG.acquireConnection (E:\DEV\work\niba-backend\node_modules\knex\lib\client.js:347:26)
at async Transaction.acquireConnection (E:\DEV\work\niba-backend\node_modules\knex\lib\transaction.js:213:28)
It never calls the afterCreate method. I have tried it against both our dev database, using settings that work for everyone else, and a locally running Postgres installation with every combination of settings I could come up with. I even passed this off to another member of the team and it worked fine, so something is wrong on my machine, but I have no idea what could be blocking it. I'm not seeing any connection attempts in the Postgres logs, and I can't seem to get any better error messages to work from.
If anyone can suggest things I can try, or ways to get more information out of Knex, I would really appreciate it.
I traced the issue down to the version of the 'pg' package we were using. We were on 7.18, and when I upgraded to the latest version (8.4) it started connecting. No idea why the 7.x version wasn't working.
I'm attempting to SSH into an EC2 instance from a Lambda function. I keep receiving a null value for my response every time I run the function, with my logs displaying the following error:
START RequestId: xxxx-xxxx-xxxx-xxxx-xxxx Version: $LATEST
2020-02-19T22:56:16.574Z xxxx-xxxx-xxxx-xxxx-xxxx INFO failed connection, boo
2020-02-19T22:56:16.576Z xxxx-xxxx-xxxx-xxxx-xxxx INFO Error: Timed out while waiting for handshake
at Timeout._onTimeout (/var/task/node_modules/ssh2/lib/client.js:687:19)
at listOnTimeout (internal/timers.js:531:17)
at processTimers (internal/timers.js:475:7) {
level: 'client-timeout'
}
With it being a timed-out error, I figured it was an issue with connecting to the instance rather than with my code, but the Lambda is part of the same VPC as the EC2 instance, along with a security group enabling SSH connections. I'm also able to SSH into the instance manually.
exports.handler = async (event) => {
  const fs = require('fs');
  const SSH = require('simple-ssh');

  const pemfile = 'xxx.pem';
  const user = 'ec2-user';
  const host = 'xx.xx.xx.xx';

  // all this config could be passed in via the event
  const ssh = new SSH({
    host: host,
    user: user,
    key: fs.readFileSync(pemfile)
  });

  let cmd = "ls";
  if (event.cmd == "long") {
    cmd += " -l";
  }

  let prom = new Promise(function(resolve, reject) {
    let ourout = "";
    ssh.exec('mkdir /home/ec2-user/ssh_success', {
      exit: function() {
        ourout += "\nsuccessfully exited!";
        resolve(ourout);
      },
      out: function(stdout) {
        ourout += stdout;
      }
    }).start({
      success: function() {
        console.log("successful connection!");
      },
      fail: function(e) {
        console.log("failed connection, boo");
        console.log(e);
      }
    });
  });

  const res = await prom;
  const response = {
    statusCode: 200,
    body: res,
  };
  return response;
};
I'm building a Lambda function that is supposed to save game feedback, like a performance grade, into my Postgres database, which is in AWS RDS.
I'm using Node.js with TypeScript, and the function is kind of working, but in a strange way.
I made an API Gateway so I can POST data to a URL for the Lambda to process and save. The thing is, when I POST the data, the function seems to process it until it reaches a maximum limit of connected clients, and then it seems to lose the other clients' data.
Another problem is that every time I POST data I get a response saying there was an Internal Server Error, with an 'X-Cache→Error from cloudfront' header. For a GET request I figured out it was giving me this response because the format of the response was incorrect, but in this case I fixed the response format and still get the problem...
Sometimes I get a timeout response.
My function's code:
import { APIGatewayEvent, Callback, Context, Handler } from "aws-lambda";
import { QueryConfig, Client, Pool, PoolConfig } from "pg";

export const insert: Handler = async (
  event: APIGatewayEvent,
  context: Context,
  cb: Callback
) => {
  // context.callbackWaitsForEmptyEventLoop = false;
  const config: PoolConfig = {
    user: process.env.PG_USER,
    host: process.env.PG_HOST,
    database: process.env.PG_DB,
    password: process.env.PG_PASS,
    port: parseInt(process.env.PG_PORT),
    idleTimeoutMillis: 0,
    max: 10000
  };
  const pool = new Pool(config);

  let postdata = event.body || event;
  console.log("POST DATA:", postdata);
  if (typeof postdata == "string") {
    postdata = JSON.parse(postdata);
  }

  let query: QueryConfig = <QueryConfig>{
    name: "get_all_questions",
    text:
      "INSERT INTO gamefeedback (gameid, userid, presenterstars, gamestars) VALUES ($1, $2, $3, $4);",
    values: [
      parseInt(postdata["game_id"]),
      postdata["user_id"],
      parseInt(postdata["presenter_stars"]),
      parseInt(postdata["game_stars"])
    ]
  };

  console.log("Before Connect");
  let con = await pool.connect();
  let res = await con.query(query);
  console.log("res.rowCount:", res.rowCount);
  if (res.rowCount != 1) {
    cb(new Error("Error saving the feedback."), {
      statusCode: 400,
      body: JSON.stringify({
        message: "Error saving data!"
      })
    });
  }
  cb(null, {
    statusCode: 200,
    body: JSON.stringify({
      message: "Saved successfully!"
    })
  });
  console.log("The End");
};
Then the CloudWatch log for the error about the max number of connected clients looks like this:
2018-08-03T15:56:04.326Z b6307573-9735-11e8-a541-950f760c0aa5 (node:1) UnhandledPromiseRejectionWarning: error: sorry, too many clients already
at u.parseE (/var/task/webpack:/node_modules/pg/lib/connection.js:553:1)
at u.parseMessage (/var/task/webpack:/node_modules/pg/lib/connection.js:378:1)
at Socket.<anonymous> (/var/task/webpack:/node_modules/pg/lib/connection.js:119:1)
at emitOne (events.js:116:13)
at Socket.emit (events.js:211:7)
at addChunk (_stream_readable.js:263:12)
at readableAddChunk (_stream_readable.js:250:11)
at Socket.Readable.push (_stream_readable.js:208:10)
at TCP.onread (net.js:607:20)
Can any of you guys help me with this strange problem?
Thanks
Well, for one thing you need to create the pool above the handler, like so:
const config: PoolConfig = {
  user: process.env.PG_USER,
  ...
};
const pool = new Pool(config);

export const insert: Handler = async (
  event: APIGatewayEvent,
  context: Context,
  cb: Callback
) => {
  // ...etc
The way you have it, you are creating a pool on every invocation. If you create the pool outside the handler, Lambda has a chance to share the pool between invocations of the same container.
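The difference is easy to see without a real database; in this sketch the counters stand in for pool creation (nothing here uses pg):

```javascript
// Module scope: runs once per container, like `const pool = new Pool(config);`.
let poolsCreatedAtModuleScope = 0;
poolsCreatedAtModuleScope += 1;

let poolsCreatedInHandler = 0;

// Simplified handler: creating the "pool" inside it repeats the work per invocation.
const handler = async (event) => {
  poolsCreatedInHandler += 1; // what `new Pool(config)` inside the handler does
  return { module: poolsCreatedAtModuleScope, perInvocation: poolsCreatedInHandler };
};

// Three invocations on a warm container:
handler({}); handler({}); handler({});
console.log(poolsCreatedAtModuleScope, poolsCreatedInHandler); // 1 3
```

On a warm container the module-scope pool is created once and reused, while the in-handler version opens a fresh set of connections per request, which is what exhausts Postgres with "sorry, too many clients already".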