Timeout when trying to connect to redshift from node using node-redshift - node.js

I am trying to connect to Redshift from my Node.js code to run a COPY command that loads data from S3 into Redshift.
I am using the node-redshift package for this, with the code below.
var Redshift = require('node-redshift');

var client = {
  user: 'awsuser',
  database: 'dev',
  password: 'zxxxx',
  port: '5439',
  host: 'redshift-cluster-1.xxxxxxxxxx.us-east-1.redshift.amazonaws.com',
};

var redshiftClient = new Redshift(client);

// file_name is defined elsewhere in the surrounding code. Note the closing
// quote and space after it, which the original string was missing.
var pg_query = "copy test1 from 's3://aws-bucket/" + file_name + "' " +
  "ACCESS_KEY_ID 'xxxxxxx' SECRET_ACCESS_KEY 'xxxxxxxxxx';";

redshiftClient.query(pg_query, {raw: true}, function (err1, pgres) {
  if (err1) {
    console.log('error here');
    console.error(err1);
  } else {
    // upload successful
    console.log('success');
  }
});
I have also tried connecting explicitly, but in every case I get the timeout error below:
Error: Error: connect ETIMEDOUT XXX.XX.XX.XX:5439
The Redshift cluster has a role attached that grants full S3 access, and it uses the default security group.
Am I missing something here?

Make sure your cluster is publicly accessible. The cluster sits in a particular subnet, and for that subnet the security group's inbound rules in the VPC must contain an entry allowing inbound connections to your Redshift cluster on port 5439, either from your IP or from all IPs (0.0.0.0/0) if you accept that risk.
Only if your public IP falls within that allowed set can you connect to the cluster.
Say you have SQL Workbench/J, which lets you connect to the Redshift cluster. If you are able to connect with this SQL client, you can ignore the above, because it means your IP can already reach the Redshift cluster.
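If you'd rather script that inbound rule than click through the console, a sketch like the following could work, assuming the AWS SDK for JavaScript v2; the security group ID and the IP address are placeholders to replace with your own:

var AWS = require('aws-sdk');
var ec2 = new AWS.EC2({ region: 'us-east-1' });

ec2.authorizeSecurityGroupIngress({
  GroupId: 'sg-0123456789abcdef0', // hypothetical: the security group attached to the cluster
  IpPermissions: [{
    IpProtocol: 'tcp',
    FromPort: 5439,
    ToPort: 5439,
    IpRanges: [{ CidrIp: '203.0.113.7/32' }] // example: your public IP as a /32
  }]
}, function (err, data) {
  if (err) console.error('could not add ingress rule', err);
  else console.log('ingress rule added');
});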

Related

Unable to connect Mongodb Atlas Cluster from Node js

I am unable to connect to a MongoDB Atlas cluster from Node.js and am getting the following error:
{
  error: 1,
  message: 'Command failed: mongodump -h cluster0.yckk6.mongodb.net --port=27017 -d databaseName -p -u --gzip --archive=/tmp/file_name_2022-09-19T09-42-05.gz\n' +
    '2022-09-19T14:42:08.931+0000\tFailed: error connecting to db server: no reachable servers\n'
}
Can anyone help me solve this problem? The following is my backup code:
// MBackup comes from the backup package in use, which this snippet does not
// show; presumably something like: var MBackup = require('s3-mongo-backup');
function databaseBackup() {
  let backupConfig = {
    mongodb: "mongodb+srv://<username>:<password>@cluster0.yckk6.mongodb.net:27017/databaseName?retryWrites=true&w=majority&authMechanism=SCRAM-SHA-1", // MongoDB connection URI
    s3: {
      accessKey: "SDETGGAKIA2GL", // AccessKey
      secretKey: "Asad23rdfdg2teE8lOS3JWgdfgfdgfg", // SecretKey
      region: "ap-south-1", // S3 bucket region
      accessPerm: "private", // S3 bucket privacy; since you'll be storing a database, private is HIGHLY recommended
      bucketName: "backupDatabase" // Bucket name
    },
    keepLocalBackups: false, // If true, creates a folder in the project root named after the database and stores backups there; if false, uses the OS temporary directory
    noOfLocalBackups: 5, // Keep only the most recent 5 backups and delete older ones from the local backup directory
    timezoneOffset: 300 // Timezone; assumed to be in hours if less than 16 and in minutes otherwise
  };
  MBackup(backupConfig).then(onResolve => {
    // When everything was successful
    console.log(onResolve);
  }).catch(onReject => {
    // When anything goes wrong!
    console.log(onReject);
  });
}
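One thing to check against the URI above: a mongodb+srv connection string separates the credentials from the host with @ and must not contain a port number (the port is resolved through DNS SRV records), so the driver will reject the :27017. A minimal sketch of the expected shape, reusing the cluster host from the question with placeholder credentials:

const username = encodeURIComponent('<username>'); // placeholder
const password = encodeURIComponent('<password>'); // placeholder; encoding handles special characters
const uri = `mongodb+srv://${username}:${password}@cluster0.yckk6.mongodb.net/databaseName?retryWrites=true&w=majority`;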

How to connect to Google Cloud SQL (PostgreSQL) from Cloud Functions?

I feel like I've tried everything. I have a Cloud Function that I am trying to connect to Cloud SQL (PostgreSQL engine). Before I do so, I pull connection string info from Secret Manager, set that up in a credentials object, and call a pg (node-postgres) Pool to run a database query.
Below is my code:
Credentials:
import { Pool } from 'pg';

const credentials: sqlCredentials = {
  "host": "127.0.0.1",
  "database": "myFirstDatabase",
  "port": "5432",
  "user": "postgres",
  "password": "postgres1!"
};

const pool: Pool = new Pool(credentials);
await pool.query(`select CURRENT_DATE;`).catch(error => console.error(`error in pool.query: ${error}`));
Upon running the cloud function with this code, I get the following error:
error in pool.query: Error: connect ECONNREFUSED 127.0.0.1:5432
I have attempted to update the host to the private IP of the Cloud SQL instance, and also to the Cloud SQL instance name in this environment, but to no avail. Any other ideas?
Through much tribulation, I figured out the answer. Given that there is NO documentation on how to solve this, I'm going to put the answer here in hopes that I can come back here in 2025 and see that it has helped hundreds. In fact, I'm setting a reminder in my phone right now to check this URL on November 24, 2025.
Solution: The host must be set as:
/cloudsql/<googleProjectName(notId)>:<region>:<sql instanceName>
Ending code:
import { Pool } from 'pg';

const credentials: sqlCredentials = {
  "host": "/cloudsql/my-first-project-191923:us-east1:my-first-cloudsql-inst",
  "database": "myFirstDatabase",
  "port": "5432",
  "user": "postgres",
  "password": "postgres1!"
};

const pool: Pool = new Pool(credentials);
await pool.query(`select CURRENT_DATE;`).catch(error => console.error(`error in pool.query: ${error}`));
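The question also mentions pulling the connection info from Secret Manager before building the credentials object. A minimal sketch of that step, assuming the @google-cloud/secret-manager client library and a JSON secret whose fields match the credentials object; the secret name here is hypothetical:

const { SecretManagerServiceClient } = require('@google-cloud/secret-manager');

async function loadCredentials() {
  const client = new SecretManagerServiceClient();
  // Hypothetical secret name; the payload is assumed to be a JSON object
  // with host, database, port, user, and password fields.
  const [version] = await client.accessSecretVersion({
    name: 'projects/my-first-project-191923/secrets/db-credentials/versions/latest',
  });
  return JSON.parse(version.payload.data.toString('utf8'));
}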

Calling CosmosDB server from Azure Cloud Function

I am working on an Azure Cloud Function (running on Node.js) that should return a collection of documents from my Azure Cosmos DB for MongoDB API account. It all works fine when I build and run the function locally, but it fails when I deploy it to Azure. This is the error: MongoNetworkError: failed to connect to server [++++.mongo.cosmos.azure.com:++++] on first connect ...
I am new to CosmosDB and Azure Cloud Functions, so I am struggling to find the problem. I looked at the Firewall and virtual networks settings in the portal and tried out different variations of the connection string.
As it seems to work locally, I assume it could be a configuration setting in the portal. Can someone help me out?
1. Set up the connection
I used the primary connection string provided by the portal.
import * as mongoClient from 'mongodb';
import { cosmosConnectionStrings } from './credentials';
import { Context } from '@azure/functions';

// The MongoDB Node.js 3.0 driver requires encoding special characters in the Cosmos DB password.
const config = {
  url: cosmosConnectionStrings.primary_connection_string_v1,
  dbName: "****"
};

export async function createConnection(context: Context): Promise<any> {
  let db: mongoClient.Db;
  let connection: any;
  try {
    connection = await mongoClient.connect(config.url, {
      useNewUrlParser: true,
      ssl: true
    });
    context.log('Do we have a connection? ', connection.isConnected());
    if (connection.isConnected()) {
      db = connection.db(config.dbName);
      context.log('Connected to: ', db.databaseName);
    }
  } catch (error) {
    context.log(error);
    context.log('Something went wrong');
  }
  return {
    connection,
    db
  };
}
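On the encoding note in the comment above, here is a quick sketch of percent-encoding the password when building a Cosmos DB connection string by hand; the account name, password, and string layout are placeholders (10255 is the usual Cosmos DB Mongo API port):

const user = 'myCosmosAccount';                   // placeholder account name
const password = encodeURIComponent('p@ss/w:rd'); // placeholder secret, percent-encoded
const url = `mongodb://${user}:${password}@${user}.mongo.cosmos.azure.com:10255/?ssl=true&replicaSet=globaldb`;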
2. The main function
The main function that executes the query and returns the collection.
import { AzureFunction, Context, HttpRequest } from '@azure/functions';
import { createConnection } from './connection'; // path to the helper above; adjust to your project layout

const httpTrigger: AzureFunction = async function (context: Context, req: HttpRequest): Promise<void> {
  context.log('Get all projects function processed a request.');
  try {
    const { db, connection } = await createConnection(context);
    if (db) {
      const projects = db.collection('projects');
      const res = await projects.find({});
      const body = await res.toArray();
      context.log('Response projects: ', body);
      connection.close();
      context.res = {
        status: 200,
        body
      };
    } else {
      context.res = {
        status: 400,
        body: 'Could not connect to database'
      };
    }
  } catch (error) {
    context.log(error);
    context.res = {
      status: 400,
      body: 'Internal server error'
    };
  }
};
I had another look at the firewall and virtual network settings and read the official documentation on configuring an IP firewall. By default, the current IP address of your local machine is added to the IP whitelist. That's why the function worked locally.
Based on the documentation I tried all the options described below. They all worked for me. However, it remains unclear why I had to perform a manual step to make it work, and I am also not sure which option is best.
1. Set "Allow access from" to "All networks". All networks (including the internet) can access the database (obviously not advised).
2. Add the inbound and outbound IP addresses of the cloud function project to the whitelist. This could be challenging if the IP addresses change over time; if you are on the consumption plan, this will probably happen.
3. Check the "Accept connections from within public Azure datacenters" option in the Exceptions section. The documentation explains:
If you access your Azure Cosmos DB account from services that don't provide a static IP (for example, Azure Stream Analytics and Azure Functions), you can still use the IP firewall to limit access. You can enable access from other sources within Azure by selecting the "Accept connections from within Azure datacenters" option.
This option configures the firewall to allow all requests from Azure, including requests from the subscriptions of other customers deployed in Azure. The list of IPs allowed by this option is wide, so it limits the effectiveness of a firewall policy. Use this option only if your requests don't originate from static IPs or subnets in virtual networks. Choosing this option automatically allows access from the Azure portal because the Azure portal is deployed in Azure.

node net.createServer get connection path

I am trying to cluster Socket.IO using net.createServer. All the examples use the source IP to decide which worker a connection goes to. However, I'm using 4 servers behind a load balancer that distributes IPs across the different servers.
So in the Node cluster I would like to use a unique id to route the connection to a specific worker.
Say that each user who wants to connect adds a parameter to the connection URL: ws://localhost/socket.io?id=xxyyzz
How can I get the connection URL in net.createServer?
Today's code, based on IP:
var server = net.createServer({ pauseOnConnect: true }, function (connection) {
  // We received a connection and need to pass it to the appropriate
  // worker. Get the worker for this connection's source IP and pass
  // it the connection.
  var remote = connection.remoteAddress;
  var local = connection.localAddress;
  var ip = (remote + local).match(/[0-9]+/g)[0].replace(/,/g, '');
  var wIndex = ip % num_processes;
  var worker = workers[wIndex];
  worker.send('sticky-session:connection', connection);
});
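One possible direction (a rough sketch under assumptions, not tested drop-in code): have the master read the first data chunk of each connection to recover the request URL, route on the id query parameter, and forward the consumed bytes to the worker so they are not lost. workers and num_processes are the same as in the snippet above:

var net = require('net');

var server = net.createServer({ pauseOnConnect: true }, function (connection) {
  connection.resume(); // let the first chunk flow so we can inspect the URL
  connection.once('data', function (chunk) {
    connection.pause(); // stop reading until the chosen worker takes over
    // The HTTP request line looks like: GET /socket.io/?id=xxyyzz HTTP/1.1
    var match = chunk.toString('utf8').match(/[?&]id=([^&\s]+)/);
    var key = match ? match[1] : (connection.remoteAddress || '');
    // Simple string hash to pick a worker index
    var hash = 0;
    for (var i = 0; i < key.length; i++) {
      hash = (hash * 31 + key.charCodeAt(i)) | 0;
    }
    var worker = workers[Math.abs(hash) % num_processes];
    // Forward the bytes already consumed along with the socket; the worker
    // must unshift them back onto the stream before handing it to Socket.IO.
    worker.send({ type: 'sticky-session:connection', head: chunk.toString('base64') }, connection);
  });
});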

Connection to Redis cluster failed

I have set up a Redis cluster in Google Compute Engine with the click-to-deploy option. Now I want to connect to this Redis server from my Node.js code using ioredis. Here is my code for connecting to a single Redis instance:
var Redis = require("ioredis");

var store = new Redis(6379, 'redis-ob0g'); // to store the keys
var pub = new Redis(6379, 'redis-ob0g');   // to publish a message to all workers
var sub = new Redis(6379, 'redis-ob0g');   // to subscribe to a message channel

var onError = function (err) {
  console.log('failed to connect to redis', err);
};

store.on('error', onError);
pub.on('error', onError);
sub.on('error', onError);
And it worked. Now I want to connect to Redis as a cluster, so I changed the code to:
/**
 * List of servers in the replica set.
 * @type {{port: number, host: string}[]}
 */
var nodes = [
  { port: port, host: hostMaster },
  { port: port, host: hostSlab1 },
  { port: port, host: hostSlab2 }
];

var store = new Redis.Cluster(nodes); // to store the keys
var pub = new Redis.Cluster(nodes);   // to publish a message to all workers
var sub = new Redis.Cluster(nodes);   // to subscribe to a message channel
Now it throws an error (screenshot omitted).
Here is my Redis cluster in my Google Compute console (screenshot omitted).
OK, I think there is a confusion here.
A Redis Cluster deployment is not the same thing as a number of standard Redis instances protected by Sentinel. They are two very different things.
The click-to-deploy option of GCE deploys a number of standard Redis instances protected by Sentinel, not Redis Cluster.
ioredis can handle both kinds of deployment, but you have to use the corresponding API. Here, you were trying to use the Redis Cluster API, resulting in this error (cluster-related commands are not enabled on standard Redis instances).
According to the ioredis documentation, you are supposed to connect with:
var redis = new Redis({
  sentinels: [
    { host: hostMaster, port: 26379 },
    { host: hostSlab1, port: 26379 },
    { host: hostSlab2, port: 26379 }
  ],
  name: 'mymaster'
});
Of course, check the Sentinel ports and the name of the master. ioredis will automatically manage the switch to a slave instance when the master fails, and Sentinel will ensure the slave is promoted to master just before.
Note that since you use pub/sub, you will need several Redis connections, along the lines of the sketch below.
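If you want to mirror the original three-connection layout (store, pub, sub) on top of the Sentinel API, a small sketch along these lines should work; the Sentinel ports and master name are still assumptions to verify against your deployment:

var Redis = require('ioredis');

function createSentinelConnection() {
  return new Redis({
    sentinels: [
      { host: hostMaster, port: 26379 },
      { host: hostSlab1, port: 26379 },
      { host: hostSlab2, port: 26379 }
    ],
    name: 'mymaster'
  });
}

var store = createSentinelConnection(); // to store the keys
var pub = createSentinelConnection();   // to publish messages
var sub = createSentinelConnection();   // dedicated subscriber connection (a subscribed connection cannot run other commands)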
