Unable to connect to redis cluster with node client, what am I doing wrong? - node.js

I have spun up an AWS ElastiCache Redis instance running in clustered mode, which currently has 1 shard and 2 nodes.
In order to connect to it from my local machine, I have opened an SSH tunnel via my SSH config file:
Host myRedisTunnel
HostName 1.2.3.4
...
LocalForward 6378 5.6.7.8:6379
The tunnel works; I can connect to my VPC successfully:
$ ssh myRedisTunnel
Welcome to Ubuntu 18.04.6 LTS (GNU/Linux 5.4.0-1060-aws x86_64)
...
I can connect to the Redis cluster locally via redis-cli after opening my tunnel, passing -c for clustered mode:
$ redis-cli -c -h localhost -p 6378
localhost:6378> ping
PONG
But when I try to use the redis package for Node.js it won't connect; it just times out. Am I missing some configuration settings, or is it physically impossible to connect to my remote Redis via a tunnel?
const { createCluster } = require('redis')
const client = createCluster({
rootNodes: [
{ url: 'redis://localhost:6378' }
]
})
await client.connect()
const res = await client.ping()
console.log({ res })
Error: Connection timeout
at Socket.<anonymous> (node_modules/@node-redis/client/dist/lib/client/socket.js:163:124)
at Object.onceWrapper (node:events:513:28)
at Socket.emit (node:events:394:28)
at Socket._onTimeout (node:net:486:8)
at listOnTimeout (node:internal/timers:557:17)
at processTimers (node:internal/timers:500:7)
I have tried several Node.js clients for Redis and all of them have timed out in the same way, so the issue has to be either a wrong setting in my Node.js Redis client configuration, or the fact that only one of the cluster's IP addresses is accessible via the tunnel; the rest of the cluster would not be reachable unless I open a tunnel for each node. I'm just at a loss for how to mock my production environment in development so I can write code.

As you've pointed out, you'll need to create a tunnel for each node in the cluster.
The clients use the CLUSTER NODES command to discover all the nodes in the cluster, and this command returns the actual IPs and ports of the nodes, not those of the tunnels. You can use ioredis with "NAT mapping" to solve that.
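A minimal sketch of that NAT mapping with ioredis (the internal node addresses and local tunnel ports below are placeholders; use the IPs and ports that CLUSTER NODES reports for your cluster, and open one SSH tunnel per node):

```javascript
// Map each node's internal address (as reported by CLUSTER NODES)
// to the local end of its SSH tunnel. These addresses are placeholders
// modeled on the question's setup; substitute your own.
const natMap = {
  '5.6.7.8:6379': { host: '127.0.0.1', port: 6378 },
  '5.6.7.9:6379': { host: '127.0.0.1', port: 6377 },
};

// With ioredis installed (npm install ioredis) and both tunnels open,
// the cluster client is pointed at one tunnelled entry point and the
// natMap option translates every discovered node address:
//
//   const Redis = require('ioredis');
//   const cluster = new Redis.Cluster(
//     [{ host: '127.0.0.1', port: 6378 }],
//     { natMap }
//   );
//   cluster.ping().then(console.log);
```

The key point is that redirections and topology discovery go through the map, so the client never tries to dial the cluster's private IPs directly.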

Related

Not able to connect to EC2 instance in AWS via python/psycopg2

I am having trouble connecting to AWS via Python. I have a macOS operating system.
What I did:
I created an EC2 instance, chose an operating system (Ubuntu), and installed PostgreSQL on the remote server. Then I created a security group with the following inbound rule:
Type: SSH
Protocol: TCP
Port: 22
Source: Custom, 0.0.0.0/0
Then I added another rule:
Type: PostgreSQL
Protocol: TCP
Port: 5432
Source: Custom, my_computer_ip_address 71.???.??.???/32
I added question marks just to hide the address, but it's in that format.
Now, AWS had me create a .pem file in order to query the database. I downloaded this .pem file to a secret location.
When I go to my local machine, open my terminal, and type:
ssh -i "timescale.pem" ubuntu@ec2-??-???-??-???.compute-1.amazonaws.com
I am able to connect. I also went to DBeaver and created a new connection, set up with an SSH tunnel and a public key reading the 'timescale.pem' file I created. I then go to the Main tab and type my username and password:
username: postgres
database: example
password: mycustompassword
and I am able to connect with no issues.
Now, when I go to Python with the psycopg2 library, I am just unable to connect at all. I have gone through all the examples here on Stack Overflow and none of them have helped. Here's what I am using to connect to AWS from Python:
import os
import psycopg2

path_to_secret_key = os.path.expanduser("~/timescale.pem")
conn = psycopg2.connect(
    dbname='example',
    user='postgres',
    password='pass123',
    host='ec2-??-??-??-???.compute-1.amazonaws.com',
    port='22',
    sslmode='verify-full',
    sslrootcert=path_to_secret_key)
I then get this error:
connection to server at "ec2-34-???-??-???.compute-1.amazonaws.com" (34.201.??.???), port 22 failed: could not read root certificate file "/Users/me/timescale.pem": no certificate or crl found
OK... then I switched ports to '5432' and got this error:
connection to server at "ec2-??-???-??-???.compute-1.amazonaws.com" (34.???.??.212), port 5432 failed: Connection refused
Is the server running on that host and accepting TCP/IP connections?
When I SSH into the instance and type netstat -nl | grep 5432, I get the following:
tcp 0 0 127.0.0.1:5432 0.0.0.0:* LISTEN
unix 2 [ ACC ] STREAM LISTENING 812450 /var/run/postgresql/.s.PGSQL.5432
Can someone please help? Thanks
The .pem file is for connecting to the EC2 instance over SSH, on port 22. PostgreSQL is running on port 5432. You don't use the .pem file for database connections, only for ssh connections. You need to change your Python script to use port='5432', to connect directly to the PostgreSQL service running on the EC2 instance.
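A sketch of the direct connection the answer describes (the hostname is a placeholder; this also assumes PostgreSQL on the instance is configured to listen beyond 127.0.0.1, which the netstat output above suggests it currently is not):

```python
def pg_conn_kwargs(host):
    """Connection parameters for a direct (non-tunnelled) connection.

    Note: the database port (5432), not the SSH port (22), and no
    sslrootcert -- the .pem key is only for SSH, never for PostgreSQL.
    """
    return {
        "dbname": "example",
        "user": "postgres",
        "password": "pass123",
        "host": host,   # the EC2 public hostname
        "port": 5432,
    }

# With psycopg2 installed, and the server set up for remote access
# (listen_addresses in postgresql.conf plus a pg_hba.conf rule for
# your IP), the connection would be:
#
#   import psycopg2
#   conn = psycopg2.connect(**pg_conn_kwargs("ec2-??-???-??-???.compute-1.amazonaws.com"))
```

If PostgreSQL stays bound to 127.0.0.1 only, the SSH-tunnel approach shown below is the alternative that works without changing the server configuration.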
This seems to work now:
import os

import paramiko
import psycopg2
from sshtunnel import SSHTunnelForwarder

path_to_secret_key = os.path.expanduser("~/timescale.pem")
mypkey = paramiko.Ed25519Key.from_private_key_file(path_to_secret_key)

# First argument is the SSH endpoint: (EC2 public hostname, port 22)
tunnel = SSHTunnelForwarder(
    ('ec2-??-???-??-???.compute-1.amazonaws.com', 22),
    ssh_username='ubuntu',
    ssh_pkey=mypkey,
    remote_bind_address=('localhost', 5432))
tunnel.start()

conn = psycopg2.connect(
    dbname='example_db',
    user='postgres',
    password='secret123',
    host='127.0.0.1',
    port=tunnel.local_bind_port)

Connect local postgres database from PGAdmin Container

I am trying to connect to my local Postgres database from pgAdmin running in a container, but it throws the following error:
Unable to connect to server: connection to server at "localhost" (127.0.0.1), port 5432 failed: Connection refused. Is the server running on that host and accepting TCP/IP connections?
The configuration I am using to create the server is as follows:
hostname/address = localhost
port = 5432
username = postgres
my OS configurations
OS = Linux- Ubuntu
Version = 18.04
PS: there are some existing questions about this on Stack Overflow, but no solution helped me.
Although it runs locally, the Docker container behaves like a different machine. You need to use the IP address of the host machine instead of localhost.
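A sketch of that on Linux with the default docker0 bridge (the IP is typically 172.17.0.1, but check yours; the postgresql.conf and pg_hba.conf values are illustrative):

```shell
# Find the host's address on the Docker bridge network; containers can
# reach the host's services through this IP (commonly 172.17.0.1).
ip addr show docker0 | awk '/inet / { print $2 }'

# In pgAdmin's "New Server" dialog, use that IP (not "localhost") as
# the host name/address. PostgreSQL on the host must also accept
# connections from the bridge network:
#   postgresql.conf:  listen_addresses = '*'     (or the bridge IP)
#   pg_hba.conf:      host all all 172.17.0.0/16 md5
```

After editing those files, restart PostgreSQL so the new bind address takes effect.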

Developing Node.js Applications Inside Docker Container

I'm trying to set up my local development environment to the point that all my Node.js applications are developed inside a docker container. Our team works on Linux, macOS, and Windows so this should help us limit some of the issues that we see due to this.
We're using Sails.js for our Node framework, and I'm not sure if the issue is in my Docker setup, or an issue with Sails itself.
Here's my docker run command, which almost works:
docker run --rm -it -p 3000:3000 --name my-app-dev -v $PWD:/home/app -w /home/app -u node node:latest /bin/bash
This almost works, but the application we're developing needs access to the machine's localhost for some database applications (MongoDB and SQL Server) and for a RabbitMQ instance. SQL Server is on port 1433 (running in Docker), RabbitMQ is on port 5672 (also running in Docker), and MongoDB is on 27017, but not running in Docker.
When I run that Docker command and then start the application, I get an error saying that the application cannot connect to those localhost ports, which makes sense from what I've read: by default the Docker container has its own localhost, which is where it tries to connect.
So, I added the following to the docker run command: --net=host, hoping to give the container access to my machine's localhost. This seems to get rid of the issue for RabbitMQ, but not MongoDB. There are two errors in the console for it:
2019-09-05 15:58:38.800 | error | error: Could not tear down the ORM hook. Error details: Error: Consistency violation: Attempting to tear down a datastore (`myMongoTable`) which is not currently registered with this adapter. This is usually due to a race condition in userland code (e.g. attempting to tear down the same ORM instance more than once), or it could be due to a bug in this adapter. (If you get stumped, reach out at http://sailsjs.com/support.)
at Object.teardown (/home/app/node_modules/sails-mongo/lib/index.js:390:19)
at /home/app/node_modules/waterline/lib/waterline.js:758:27
at /home/app/node_modules/waterline/node_modules/async/dist/async.js:3047:20
at eachOfArrayLike (/home/app/node_modules/waterline/node_modules/async/dist/async.js:1002:13)
at eachOf (/home/app/node_modules/waterline/node_modules/async/dist/async.js:1052:9)
at Object.eachLimit (/home/app/node_modules/waterline/node_modules/async/dist/async.js:3111:7)
at Object.teardown (/home/app/node_modules/waterline/lib/waterline.js:742:11)
at Hook.teardown (/home/app/node_modules/sails-hook-orm/index.js:246:30)
at Sails.wrapper (/home/app/node_modules/@sailshq/lodash/lib/index.js:3275:19)
at Object.onceWrapper (events.js:291:20)
at Sails.emit (events.js:203:13)
at Sails.emitter.emit (/home/app/node_modules/sails/lib/app/private/after.js:56:26)
at /home/app/node_modules/sails/lib/app/lower.js:67:11
at beforeShutdown (/home/app/node_modules/sails/lib/app/lower.js:45:12)
at Sails.lower (/home/app/node_modules/sails/lib/app/lower.js:49:3)
at Sails.wrapper [as lower] (/home/app/node_modules/@sailshq/lodash/lib/index.js:3275:19)
at whenSailsIsReady (/home/app/node_modules/sails/lib/app/lift.js:68:13)
at /home/app/node_modules/sails/node_modules/async/dist/async.js:3861:9
at /home/app/node_modules/sails/node_modules/async/dist/async.js:421:16
at iterateeCallback (/home/app/node_modules/sails/node_modules/async/dist/async.js:924:17)
at /home/app/node_modules/sails/node_modules/async/dist/async.js:906:16
at /home/app/node_modules/sails/node_modules/async/dist/async.js:3858:13
at /home/app/node_modules/sails/lib/app/load.js:261:22
at /home/app/node_modules/sails/node_modules/async/dist/async.js:421:16
at /home/app/node_modules/sails/node_modules/async/dist/async.js:1609:17
at /home/app/node_modules/sails/node_modules/async/dist/async.js:906:16
at /home/app/node_modules/sails/lib/app/load.js:186:25
at /home/app/node_modules/sails/node_modules/async/dist/async.js:3861:9
at /home/app/node_modules/sails/node_modules/async/dist/async.js:421:16
at iterateeCallback (/home/app/node_modules/sails/node_modules/async/dist/async.js:924:17)
at /home/app/node_modules/sails/node_modules/async/dist/async.js:906:16
at /home/app/node_modules/sails/node_modules/async/dist/async.js:3858:13
at afterwards (/home/app/node_modules/sails/lib/app/private/loadHooks.js:350:27)
at /home/app/node_modules/sails/node_modules/async/dist/async.js:3861:9
at /home/app/node_modules/sails/node_modules/async/dist/async.js:421:16
at iterateeCallback (/home/app/node_modules/sails/node_modules/async/dist/async.js:924:17)
at /home/app/node_modules/sails/node_modules/async/dist/async.js:906:16
at /home/app/node_modules/sails/node_modules/async/dist/async.js:3858:13
at /home/app/node_modules/sails/node_modules/async/dist/async.js:421:16
at iteratorCallback (/home/app/node_modules/sails/node_modules/async/dist/async.js:996:13)
at /home/app/node_modules/sails/node_modules/async/dist/async.js:906:16
at /home/app/node_modules/sails/lib/app/private/loadHooks.js:233:40
at processTicksAndRejections (internal/process/task_queues.js:75:11)
2019-09-05 15:58:38.802 | verbose | verbo: (The error above was logged like this because `sails.hooks.orm.teardown()` encountered an error in a code path where it was invoked without providing a callback.)
2019-09-05 15:58:38.808 | error | error: Failed to lift app: Error: Consistency violation: Unexpected error creating db connection manager:
MongoError: failed to connect to server [localhost:27017] on first connect [Error: connect ECONNREFUSED 127.0.0.1:27017
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1056:14) {
name: 'MongoError',
message: 'connect ECONNREFUSED 127.0.0.1:27017'
}]
at flaverr (/home/app/node_modules/flaverr/index.js:94:15)
at Function.module.exports.parseError (/home/app/node_modules/flaverr/index.js:371:12)
at Function.handlerCbs.error (/home/app/node_modules/machine/lib/private/help-build-machine.js:665:56)
at connectCb (/home/app/node_modules/sails-mongo/lib/private/machines/create-manager.js:130:22)
at connectCallback (/home/app/node_modules/sails-mongo/node_modules/mongodb/lib/mongo_client.js:428:5)
at /home/app/node_modules/sails-mongo/node_modules/mongodb/lib/mongo_client.js:335:11
at processTicksAndRejections (internal/process/task_queues.js:75:11)
at Object.error (/home/app/node_modules/sails-mongo/lib/index.js:268:21)
at /home/app/node_modules/machine/lib/private/help-build-machine.js:1514:39
at proceedToFinalAfterExecLC (/home/app/node_modules/parley/lib/private/Deferred.js:1153:14)
at proceedToInterceptsAndChecks (/home/app/node_modules/parley/lib/private/Deferred.js:913:12)
at proceedToAfterExecSpinlocks (/home/app/node_modules/parley/lib/private/Deferred.js:845:10)
at /home/app/node_modules/parley/lib/private/Deferred.js:303:7
at /home/app/node_modules/machine/lib/private/help-build-machine.js:952:35
at Function.handlerCbs.error (/home/app/node_modules/machine/lib/private/help-build-machine.js:742:26)
at connectCb (/home/app/node_modules/sails-mongo/lib/private/machines/create-manager.js:130:22)
at connectCallback (/home/app/node_modules/sails-mongo/node_modules/mongodb/lib/mongo_client.js:428:5)
at /home/app/node_modules/sails-mongo/node_modules/mongodb/lib/mongo_client.js:335:11
at processTicksAndRejections (internal/process/task_queues.js:75:11)
The first issue seems to be related to Sails.js and its sails-mongo ORM adapter. The second just seems to be an issue with connecting to the database. So I'm not sure if the first issue is a red herring and its underlying issue is the lack of database connection.
If anyone has any suggestions for how to run a Sails.js app inside a Docker container with access to the machine's localhost and MongoDB, I'd love some help with this!
Along with --network host in the docker run command, you need to define the host's IP in the connection properties and not localhost, since localhost in the container refers to the container itself. If you would like to make connection properties in the code consistent, you can have each developer set up a loopback alias in /etc/hosts, e.g. 127.0.0.1 my.host.com and set the connection properties to that host name ("my.host.com"), e.g. my.host.com:27017 for MongoDB.
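The loopback alias described above would look like this (the hostname is the answer's made-up example, not a real host):

```shell
# /etc/hosts on each developer machine: map the shared alias to loopback
# so every developer's connection config can use the same hostname.
# 127.0.0.1   my.host.com
#
# The app's connection strings then reference the alias, e.g.:
#   mongodb://my.host.com:27017/mydb
```

On a machine where the services run in Docker rather than on loopback, the alias can point at the bridge IP instead, and the application code stays unchanged.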
By default, Docker creates a bridge network and assigns any container attached and the host OS an IP address. Running ifconfig and searching for docker0 interface will show the IP address range that Docker uses for the network.
This is normally quite useful because it isolates running Docker containers from the local network, ensuring that only explicitly published ports are exposed and avoiding potential conflicts.
Sometimes though, there are cases where a Docker container might require access to services for the host.
There are two options to achieve this:
Obtain the host IP address from within the container:
# get the IP of the host computer from within a Docker container
/sbin/ip route | awk '/default/ { print $3 }'
You can attach the Docker container to the host network by running:
docker run --network="host"
If you are using docker-compose, you can add a network with host as the driver.
This will attach the container to the host network, which allows the Docker container to reach any services running on the host via localhost. It also works for any other Docker containers that are running on the host network or have their ports published to localhost.
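Applied to the run command from the question, the host-networking variant might look like this (Linux only; note that -p is ignored when --network host is used):

```shell
# Host networking: the container shares the host's network stack, so
# localhost:27017 (MongoDB), localhost:1433 (SQL Server), and
# localhost:5672 (RabbitMQ) inside the container are the host's ports.
docker run --rm -it \
  --network host \
  --name my-app-dev \
  -v "$PWD":/home/app -w /home/app -u node \
  node:latest /bin/bash

# Alternative without host networking: from inside the container,
# resolve the bridge gateway and point the app's DB config at it:
#   /sbin/ip route | awk '/default/ { print $3 }'
```

On macOS and Windows, --network host does not behave this way; there, `host.docker.internal` is the usual way to reach host services from a container.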

Cannot create redis cluster (Sorry, can't connect to node)

I'm trying to follow the redis cluster tutorial but whenever I try to run:
./redis-trib.rb create --replicas 1 127.0.0.1:7000 127.0.0.1:7001 \
127.0.0.1:7002 127.0.0.1:7003 127.0.0.1:7004 127.0.0.1:7005
I get the error:
[ERR] Sorry, can't connect to node 127.0.0.1:7000
The server is running and I can connect to port 7000 using
redis-cli -p 7000
What am I missing?
Turns out I had REDIS_URL set in .bashrc from a previous project. Apparently the redis gem was setting the password from that URL for ALL Redis connections (even though I was not using the URL for my cluster).
Thanks to soveran for pointing out this possibility in this question
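A quick way to check for the same problem before re-running redis-trib (assumes a Bourne-style shell):

```shell
# A stray REDIS_URL (or similar REDIS_* variable) in the environment
# can make Redis clients authenticate against the wrong server.
# List anything that is set:
env | grep '^REDIS' || echo "no REDIS_* variables set"

# Clear it for the current session (remove it from .bashrc to make
# the fix permanent):
unset REDIS_URL
```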

socket.io redis store on openshift

I'm trying to set up socket.io on Node.js to use RedisStore so I can communicate over pub/sub with multiple nodes on the OpenShift platform, but I can't manage to connect to the Redis server. I'm using this cartridge.
I tried to connect with
var pub = redis.createClient(process.env.OPENSHIFT_REDIS_DB_PORT,
process.env.OPENSHIFT_REDIS_DB_HOST);
but it doesn't work (and I found out why: createClient() only accepts IP addresses) and it falls back to the default port and host. Then I ran rhc port-forward:
$ rhc port-forward appname
Checking available ports ... done
Forwarding ports ...
Address already in use - bind(2) while forwarding port 8080. Trying local port 8081
To connect to a service running on OpenShift, use the Local address
Service Local OpenShift
--------------- --------------- ---- ----------------------------------------------
haproxy 127.0.0.1:8080 => 127.5.149.130:8080
haproxy 127.0.0.1:8081 => 127.5.149.131:8080
s_redis_db_host 127.0.0.1:54151 => blabla.appname.rhcloud.com:54151
Press CTRL-C to terminate port forwarding
So I thought I was doing it all wrong and that I had to set just the port, like this:
var pub = redis.createClient(process.env.OPENSHIFT_REDIS_DB_PORT);
but all I get is this
info: socket.io started
events.js:72
throw er; // Unhandled 'error' event
^
Error: Redis connection to 127.0.0.1:54151 failed - connect ECONNREFUSED
at RedisClient.on_error (/var/lib/openshift/532c3790e0b8cd9bb000006b/app-root/runtime/repo/node_modules/socket.io/node_modules/redis/index.js:149:24)
at Socket.<anonymous> (/var/lib/openshift/532c3790e0b8cd9bb000006b/app-root/runtime/repo/node_modules/socket.io/node_modules/redis/index.js:83:14)
at Socket.EventEmitter.emit (events.js:95:17)
at net.js:426:14
at process._tickCallback (node.js:415:13)
DEBUG: Program node server.js exited with code 8
I tried to connect via
telnet $OPENSHIFT_REDIS_DB_HOST $OPENSHIFT_REDIS_DB_PORT
And it works fine... Do you have any suggestions? Am I doing it wrong? (I'm still new to Redis and socket.io.)
(I omitted the rest of the code because I know it works; I have no problems on my local machine, I just can't get the connection...)
Thanks a lot
but it doesn't work (and I found out why: createClient() only accept IP addresses) and it fallback to the default port and host
It does support hostnames: createClient uses net.createConnection(port, host), which accepts a hostname.
The following code will help you find the issue:
console.log(process.env);
var pub = redis.createClient(process.env.OPENSHIFT_REDIS_DB_PORT,
process.env.OPENSHIFT_REDIS_DB_HOST, {auth_pass: process.env.OPENSHIFT_REDIS_DB_PASSWORD});
pub.on('error', console.log.bind(console));
pub.on('ready', console.log.bind(console, 'redis ready'));
Does your OpenShift Redis instance require AUTH?
I don't know if it is due to a recent change on OpenShift, but I think the problem is in the variable names, although they work for telnet.
You can try this:
var redisHost = process.env.OPENSHIFT_REDIS_HOST;
var redisPort = process.env.OPENSHIFT_REDIS_PORT;
var redisPass = process.env.REDIS_PASSWORD;
var client = redis.createClient( redisPort, redisHost );
client.auth( redisPass );
