Unable to connect to MongoDB Atlas cluster from Node.js

I am unable to connect to a MongoDB Atlas cluster from Node.js and am getting the following error:
{
error: 1,
message: 'Command failed: mongodump -h cluster0.yckk6.mongodb.net --port=27017 -d databaseName -p -u --gzip --archive=/tmp/file_name_2022-09-19T09-42-05.gz\n' +
'2022-09-19T14:42:08.931+0000\tFailed: error connecting to db server: no reachable servers\n'
}
Can anyone help me solve this problem? The following is my backup code:
function databaseBackup() {
    let backupConfig = {
        // MongoDB Connection URI
        mongodb: "mongodb+srv://<username>:<password>@cluster0.yckk6.mongodb.net:27017/databaseName?retryWrites=true&w=majority&authMechanism=SCRAM-SHA-1",
        s3: {
            accessKey: "SDETGGAKIA2GL", // AccessKey
            secretKey: "Asad23rdfdg2teE8lOS3JWgdfgfdgfg", // SecretKey
            region: "ap-south-1", // S3 bucket region
            accessPerm: "private", // S3 bucket privacy; since you'll be storing a database, private is HIGHLY recommended
            bucketName: "backupDatabase" // Bucket name
        },
        keepLocalBackups: false, // If true, it'll create a folder in the project root with the database's name and store backups in it; if false, it'll use the OS's temporary directory
        noOfLocalBackups: 5, // This will only keep the most recent 5 backups and delete all older backups from the local backup directory
        timezoneOffset: 300 // Timezone; it is assumed to be in hours if less than 16 and in minutes otherwise
    }

    // MBackup comes from the MongoDB-to-S3 backup package in use (its import is not shown in the post)
    MBackup(backupConfig).then(onResolve => {
        // When everything was successful
        console.log(onResolve);
    }).catch(onReject => {
        // When anything goes wrong!
        console.log(onReject);
    });
}
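As a sanity check (a minimal sketch, assuming the official mongodb driver is installed and the placeholders are filled in), connecting with the driver directly can confirm whether the SRV URI and the Atlas IP access list work at all, independent of the backup package:

// Minimal connectivity test with the official driver (sketch; replace the placeholders).
// Note that a mongodb+srv URI uses '@' before the host and takes no explicit port.
const { MongoClient } = require("mongodb");

const uri = "mongodb+srv://<username>:<password>@cluster0.yckk6.mongodb.net/databaseName?retryWrites=true&w=majority";

async function testConnection() {
    const client = new MongoClient(uri);
    try {
        await client.connect();
        await client.db("admin").command({ ping: 1 }); // round trip to confirm the cluster is reachable
        console.log("Connected to Atlas successfully");
    } finally {
        await client.close();
    }
}

testConnection().catch(console.error);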

Related

AWS SDK S3 V3 Client in NodeJS in Docker: "The specified key does not exist"

This problem has been driving me mad for 2 days now.
I am trying to run a NodeJS (NestJS) application in a Docker Container.
The application does some things with AWS SDK S3 (v3).
Code
To get the Client I use the following code:
private client = new S3Client({
    credentials: fromIni({
        profile: 'default',
        filepath: '~/.aws/credentials',
        configFilepath: '~/.aws/config',
    }),
    region: this.bucketRegion,
});
Then I try to get all S3 objects:
const command = new ListObjectsCommand({
    // eslint-disable-next-line @typescript-eslint/naming-convention
    Bucket: CONSTANTS.FILES.S3.BUCKET,
});
const filesInS3Response = await this.client.send(command);
const filesInS3 = filesInS3Response.Contents;
Error Message
When I start the Docker Container, and query this endpoint, I get the following error in docker-compose logs:
[Nest] 1 - 02/16/2023, 11:40:15 AM ERROR [ExceptionsHandler] The specified key does not exist.
NoSuchKey: The specified key does not exist.
    at deserializeAws_restXmlNoSuchKeyResponse (/usr/src/app/node_modules/@aws-sdk/client-s3/dist-cjs/protocols/Aws_restXml.js:6155:23)
    at deserializeAws_restXmlGetObjectAttributesCommandError (/usr/src/app/node_modules/@aws-sdk/client-s3/dist-cjs/protocols/Aws_restXml.js:4450:25)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async /usr/src/app/node_modules/@aws-sdk/client-s3/node_modules/@aws-sdk/middleware-serde/dist-cjs/deserializerMiddleware.js:7:24
    at async /usr/src/app/node_modules/@aws-sdk/client-s3/node_modules/@aws-sdk/middleware-signing/dist-cjs/middleware.js:14:20
    at async /usr/src/app/node_modules/@aws-sdk/client-s3/node_modules/@aws-sdk/middleware-retry/dist-cjs/retryMiddleware.js:27:46
    at async /usr/src/app/node_modules/@aws-sdk/client-s3/node_modules/@aws-sdk/middleware-logger/dist-cjs/loggerMiddleware.js:5:22
    at async AdminS3FilesService.showS3Files (/usr/src/app/dist/src/admin/admin_s3files.service.js:57:37)
Dockerfile
The relevant part from the Dockerfile:
RUN mkdir -p /root/.aws
COPY --from=builder /root/.aws/credentials /root/.aws/credentials
COPY --from=builder /root/.aws/config /root/.aws/config
RUN ls -la /root/.aws
RUN whoami
And when I look in the running Container, there is indeed a credentials and config file in the ~/.aws directory.
They look like:
(Credentials)
[default]
aws_access_key_id=AKIA3UHGDIBNT3MSM2WN
aws_secret_access_key=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
And config:
[profile default]
region=eu-central-1
Full code (NestJS)
@Injectable()
export class AdminS3FilesService {
    constructor(
        private readonly configService: ConfigService,
        private filesService: FilesService,
    ) {}

    private readonly logger = new Logger(AdminS3FilesService.name);
    private bucketRegion = this.configService.get('AWS_S3_REGION');

    private client = new S3Client({
        credentials: fromIni({
            profile: 'default',
            filepath: '~/.aws/credentials',
            configFilepath: '~/.aws/config',
        }),
        region: this.bucketRegion,
    });

    async showS3Objects(): Promise<any> {
        this.logger.log(
            `In showS3Objects with bucket [${CONSTANTS.FILES.S3.BUCKET}]`,
        );
        const messages: any[] = [];
        const command = new ListObjectsCommand({
            // eslint-disable-next-line @typescript-eslint/naming-convention
            Bucket: CONSTANTS.FILES.S3.BUCKET,
        });
        const filesInS3Response = await this.client.send(command);
        const filesInS3 = filesInS3Response.Contents;
        for (const f of filesInS3) {
            messages.push(
                `Bucket = ${CONSTANTS.FILES.S3.BUCKET}; Key = ${f.Key}; Size = ${f.Size}`,
            );
        }
        return {
            messages: messages,
        }; // <-- This is line 57 in the code
    }
}
I've tried many different things, such as renaming the profile (to something other than 'default'), leaving out the config file, and leaving out the filepath in the code (since ~/.aws/credentials is the default).
But no luck with any of that.
What am I doing wrong here?
Does anybody have AWS SDK S3 V3 running in a Docker Container (NodeJS/NestJS) and how did you do the credentials?
Hope somebody can help me.
Solution
Thanks to Frank I've found the solution:
Just ignore the fromIni method and specify the keys directly in the call to S3Client.
The method of specifying the keys in the call was not in the docs (at least, I haven't found it in the V3 docs).
Code:
private client = new S3Client({
    credentials: {
        accessKeyId: this.configService.get('AWS_S3_ACCESS_KEY_ID'),
        secretAccessKey: this.configService.get('AWS_S3_SECRET_ACCESS_KEY'),
    },
    region: this.bucketRegion,
});
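A related variant (a sketch, not from the original post): if AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_REGION are set as environment variables in the container, the SDK's default credential provider chain picks them up and no credentials option is needed at all:

// Hypothetical alternative: rely on the default provider chain reading the
// container's environment variables instead of passing credentials explicitly.
private client = new S3Client({ region: this.bucketRegion });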
The error message you're seeing suggests that the specified key does not exist in your S3 bucket. However, the code you've provided doesn't include any reference to a specific key or object in your bucket. Instead, you're simply trying to list all objects in the bucket.
The issue may be related to the credentials you're using to authenticate with AWS S3. Here are a few things you can try:
Check that the profile you're using in your credentials file has the necessary permissions to list objects in the S3 bucket. You can verify this in the AWS Management Console by navigating to the IAM service, selecting "Users" from the left-hand menu, and then selecting the user associated with the access key ID in your credentials file. From there, review the user's permissions and confirm they allow listing objects in the bucket.
Try providing your access key ID and secret access key directly in the S3Client constructor instead of using a profile. For example:
private client = new S3Client({
    credentials: {
        accessKeyId: 'YOUR_ACCESS_KEY_ID',
        secretAccessKey: 'YOUR_SECRET_ACCESS_KEY',
    },
    region: this.bucketRegion,
});
If this works, it may indicate an issue with your profile configuration.
Check that the region specified in your S3Client constructor matches the region of your S3 bucket.
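For example, a quick check of the bucket's actual region (a minimal sketch; it assumes GetBucketLocationCommand is imported from @aws-sdk/client-s3 and that your credentials allow s3:GetBucketLocation):

const location = await this.client.send(
    new GetBucketLocationCommand({ Bucket: CONSTANTS.FILES.S3.BUCKET }),
);
// LocationConstraint is empty/undefined for us-east-1, otherwise it is the region name
this.logger.log(`Bucket region: ${location.LocationConstraint || 'us-east-1'}`);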
Check that your Docker container is able to access your credentials file. You can try running a command inside the container to check if the file exists and is readable, for example:
docker exec -it CONTAINER_NAME ls -la /root/.aws/credentials
If the file isn't accessible, you may need to adjust the permissions on the file or the directory containing it.
I hope these suggestions help you solve the issue. Let me know if you have any further questions!
If you have confirmed that the credentials are correct and accessible in the container, the issue may be related to the way that you are setting the region. You are setting the region using the bucketRegion variable, which you are getting from the ConfigService. Make sure that the value of AWS_S3_REGION that you are getting from the ConfigService is correct.
You can also try setting the region directly in the S3 client constructor like this:
private client = new S3Client({
    credentials: fromIni({
        profile: 'default',
        filepath: '~/.aws/credentials',
        configFilepath: '~/.aws/config',
    }),
    region: 'eu-central-1',
});
Replace 'eu-central-1' with the actual region you are using.
If the issue still persists, you can try adding some debug logs to your code to see where the issue is happening. For example, you can log the response from await this.client.send(command) to see if it contains any helpful information.
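For example (a minimal sketch of that debug logging; $metadata is part of every AWS SDK v3 response and carries the HTTP status code and request id):

const filesInS3Response = await this.client.send(command);
// Log the raw response details before touching Contents
this.logger.debug(`S3 response metadata: ${JSON.stringify(filesInS3Response.$metadata)}`);
this.logger.debug(`Objects returned: ${filesInS3Response.Contents?.length ?? 0}`);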

Azure Function connecting to Azure PostgreSQL: ETIMEDOUT, errno: -4039

I have an Azure (AZ) Function that does two things:
1. validate submitted info involving 3rd-party packages.
2. when OK, call a PostgreSQL function at AZ to fetch a small set of data.
Testing with Postman, this AF's localhost response time is < 40 ms. Deployed to the cloud, with the URL changed to AZ and the same set of data, it took 30 seconds and returned Status: 500 Internal Server Error.
Did a search and thought this SO post might be the case: that I need to bump my subscription to the more expensive tier to avoid cold starts.
But more investigation, running parts 1 and 2 individually and combined, found:
the validation part alone runs perfectly at AZ, response time < 40 ms, just like local, which suggests cold start/npm installation is not an issue.
the pg function call always takes long and returns status 500, regardless of whether it runs alone or follows part 1; no data is returned.
Application Insights is enabled, and I added a Diagnostic setting with:
FunctionAppLogs and AllMetrics selected
Send to Log Analytics workspace and Stream to an event hub selected
The following queries found no errors/exceptions:
requests | order by timestamp desc |limit 100 // success is "true", time taken 30 seconds, status = 500
traces | order by timestamp desc | limit 30 // success is "true", time taken 30 seconds, status = 500
exceptions | limit 30 // no data returned
How complicated is my pg call? Standard connection, simple and short:
require('dotenv').config({ path: './environment/PostgreSql.env'});
const fs = require("fs");
const pgp = require('pg-promise')(); // () = taking default initOptions
const db = pgp(
    {
        user: process.env.PGuser,
        host: process.env.PGhost,
        database: process.env.PGdatabase,
        password: process.env.PGpassword,
        port: process.env.PGport,
        ssl:
        {
            rejectUnauthorized: true,
            ca: fs.readFileSync("./environment/DigiCertGlobalRootCA.crt.pem").toString(),
        },
    }
);

const pgTest = (nothing) =>
{
    return new Promise((resolve, reject) =>
    {
        var sql = 'select * from schema.test()'; // test() does a select from a 2-row narrow table.
        db.any(sql)
            .then
            (
                good => resolve(good),
                bad => reject({status: 555, body: bad})
            )
    });
}

module.exports = { pgTest }
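While debugging the 30-second hang, it can help to make a blocked connection fail fast; a sketch (pg-promise forwards standard node-postgres connection options, so connectionTimeoutMillis is passed through to the driver):

// Same connection object as above, with an explicit timeout added for debugging.
const db = pgp(
    {
        user: process.env.PGuser,
        host: process.env.PGhost,
        database: process.env.PGdatabase,
        password: process.env.PGpassword,
        port: process.env.PGport,
        connectionTimeoutMillis: 5000, // surface a blocked connection quickly instead of waiting ~30 s
        ssl:
        {
            rejectUnauthorized: true,
            ca: fs.readFileSync("./environment/DigiCertGlobalRootCA.crt.pem").toString(),
        },
    }
);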
AF test1 is a standard httpTrigger with anonymous access:
const x1 = require("package1");
...
const xx = require("packagex");
const pgdb = require("db");

module.exports = function(context)
{
    try
    {
        pgdb.pgTest(1)
            .then
            (
                good => {context.res = {body: good}; context.done();},
                bad => {context.res = {body: bad}; context.done();}
            )
            .catch(err => {console.log(err)})
    }
    catch(e)
    { context.res = {body: e}; context.done(); }
}
Note:
AZ = Azure.
AZ pg doesn't require SSL.
pg connectivity method: public access (allowed IP addresses)
Postman tests on local F5 run against the same AZ pg database, all in the same region.
pgAdmin and psql also run fast against the same database.
The AF deploy is a zip-file deployment; my understanding is that it uses the same configuration.
I'm new to Azure, but based on my experience, if it were about credentials it should come back right away.
Update 1, FunctionAppLogs | where TimeGenerated between ( datetime(2022-01-21 16:33:20) .. datetime(2022-01-21 16:35:46) )
Is it because my pg network access is set to Public access?
My AZ pg DB is a flexible server; its current Networking is Public access (allowed IP addresses), and I have added some firewall rules with client IP addresses. My assumption was that access is allowed from within AZ, but it's not.
Solution 1: simply check the box "Allow public access from any Azure service within Azure to this server" at the bottom of Settings -> Networking.
Solution 2: find all of the AF's outbound IPs and add them to the firewall rules under Settings -> Networking. The reason to add them all is that Azure selects an outbound IP randomly.

How to connect to Google Cloud SQL (PostgreSQL) from Cloud Functions?

I feel like I've tried everything. I have a cloud function that I am trying to connect to Cloud SQL (PostgreSQL engine). Before I do so, I pull connection string info from Secrets Manager, set that up in a credentials object, and call a pg (package) pool to run a database query.
Below is my code:
Credentials:
import { Pool } from 'pg';

const credentials: sqlCredentials = {
    "host": "127.0.0.1",
    "database": "myFirstDatabase",
    "port": "5432",
    "user": "postgres",
    "password": "postgres1!"
}

const pool: Pool = new Pool(credentials);
await pool.query(`select CURRENT_DATE;`).catch(error => console.error(`error in pool.query: ${error}`));
Upon running the cloud function with this code, I get the following error:
error in pool.query: Error: connect ECONNREFUSED 127.0.0.1:5432
I have attempted to update the host to the private IP of the Cloud SQL instance, and also to the Cloud SQL instance name in this environment, but to no avail. Any other ideas?
Through much tribulation, I figured out the answer. Given that there is NO documentation on how to solve this, I'm going to put the answer here in hopes that I can come back here in 2025 and see that it has helped hundreds. In fact, I'm setting a reminder in my phone right now to check this URL on November 24, 2025.
Solution: The host must be set as:
/cloudsql/<googleProjectName(notId)>:<region>:<sql instanceName>
Ending code:
import { Pool } from 'pg';

const credentials: sqlCredentials = {
    "host": "/cloudsql/my-first-project-191923:us-east1:my-first-cloudsql-inst",
    "database": "myFirstDatabase",
    "port": "5432",
    "user": "postgres",
    "password": "postgres1!"
}

const pool: Pool = new Pool(credentials);
await pool.query(`select CURRENT_DATE;`).catch(error => console.error(`error in pool.query: ${error}`));
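For completeness, the Secret Manager lookup mentioned at the top can feed the same credentials object. A minimal sketch (the secret name below is hypothetical and is assumed to hold the credentials as a JSON string):

import { SecretManagerServiceClient } from '@google-cloud/secret-manager';

// Hypothetical secret path; adjust to your project/secret names.
const secretName = 'projects/my-first-project-191923/secrets/sql-credentials/versions/latest';

async function getSqlCredentials(): Promise<sqlCredentials> {
    const client = new SecretManagerServiceClient();
    const [version] = await client.accessSecretVersion({ name: secretName });
    // payload.data is a Buffer in Node; parse it as the credentials JSON
    return JSON.parse(version.payload.data.toString());
}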

Timeout when trying to connect to redshift from node using node-redshift

I am trying to connect to Redshift from my Node.js code to run a COPY from S3 into Redshift.
I am using the node-redshift package for this, with the code below.
var Redshift = require('node-redshift');

var client = {
    user: 'awsuser',
    database: 'dev',
    password: 'zxxxx',
    port: '5439',
    host: 'redshift-cluster-1.xxxxxxxxxx.us-east-1.redshift.amazonaws.com',
};

var redshiftClient = new Redshift(client);

var pg_query = "copy test1 from 's3://aws-bucket/" + file_name + "' ACCESS_KEY_ID 'xxxxxxx' SECRET_ACCESS_KEY 'xxxxxxxxxx';";

redshiftClient.query(pg_query, {raw: true}, function (err1, pgres) {
    if (err1) {
        console.log('error here');
        console.error(err1);
    } else {
        // upload successful
        console.log('success');
    }
});
I have also tried using an explicit connect, but in either case I am getting the timeout error below:
Error: Error: connect ETIMEDOUT XXX.XX.XX.XX:5439
The Redshift cluster is assigned a role with S3 full access and also has the default security group attached.
Am I missing something here?
Make sure your cluster is publicly accessible. The cluster sits in a certain subnet; for that subnet, the security group's inbound rules in the VPC should have an entry allowing connections to your Redshift cluster on port 5439.
Only if your public IP is covered by those rules can you connect to the cluster.
Say you have SQL Workbench/J, which can connect to the Redshift cluster. If you are able to connect with this SQL client, you can ignore the above, because it means your IP can already reach the Redshift cluster via SQL Workbench/J.
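If the cluster looks correctly exposed but the timeout persists, a bare connection test with the pg package can separate networking issues from the node-redshift package (a sketch; it assumes pg is installed and reuses the same connection details):

const { Client } = require('pg');

const testClient = new Client({
    user: 'awsuser',
    database: 'dev',
    password: 'zxxxx',
    port: 5439,
    host: 'redshift-cluster-1.xxxxxxxxxx.us-east-1.redshift.amazonaws.com',
    connectionTimeoutMillis: 10000, // fail fast instead of hanging
});

testClient.connect()
    .then(() => { console.log('TCP connection and auth OK'); return testClient.end(); })
    .catch(err => console.error('connect failed:', err.message));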

InvalidAction: The action or operation requested is invalid. Verify that the action is typed correctly

I am using the AWS Cognito Service Provider to create and list user pool clients. I have a locally installed DynamoDB to store the additional data. But I am getting the above error in the callback. I looked a lot for the error's context but couldn't find anything.
const cognitoidentityserviceprovider = new AWS.CognitoIdentityServiceProvider();

cognitoidentityserviceprovider.listUserPoolClients(params, function(clientListError, clientListData) {
    console.log(clientListError)
    if (clientListError) {
        return res.json({
            status: false,
            message: 'Error Fetching Client Apps',
            data: clientListError
        })
    }
    return res.json({
        status: true,
        message: 'List fetch success',
        data: clientListData
    })
});
This is for fetching the user pool client apps. I am creating the user pool client in the same way, but I get the same "InvalidAction" error.
The error thrown was from DynamoDB, because I was connected to my local DB, which had no tables or data, and I was also not passing the credentials generated by the token manager. I removed the local DB URL from the config, passed the credentials from the token manager, and got the desired result.
I am facing the same issue but am unable to solve it. Can you guide me on this part:
I removed the local DB URL from the config and then passed the credentials from the token manager and I got the desired result.
I am configuring the DB this way:
static DB_CONFIG = AppConfig.ENVIRONMENT === 'localhost' ? { endpoint: 'http://localhost:8000', region: 'us-east-1' } : { region: 'us-east-1' };
which, in my case, is localhost, so the first object gets passed in.
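One way to apply that fix (a sketch, not from the original answers; it assumes the AWS SDK v2 setup shown above) is to scope the local endpoint to the DynamoDB client only, so the Cognito client never inherits it:

const AWS = require('aws-sdk');

// Only DynamoDB gets the local endpoint; Cognito keeps the real AWS endpoint,
// so its actions are no longer sent to DynamoDB Local (which answers InvalidAction).
const dynamoConfig = AppConfig.ENVIRONMENT === 'localhost'
    ? { endpoint: 'http://localhost:8000', region: 'us-east-1' }
    : { region: 'us-east-1' };

const dynamodb = new AWS.DynamoDB.DocumentClient(dynamoConfig);
const cognitoidentityserviceprovider = new AWS.CognitoIdentityServiceProvider({ region: 'us-east-1' });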
