While trying to connect to my Cloud SQL instance I am getting this error:
{
errorno: "ETIMEDOUT",
code: "ETIMEDOUT",
syscall: "connect",
fatal: true
}
This is the log from my Cloud SQL Proxy container:
2020/06/09 15:53:04 current FDs rlimit set to 1048576, wanted limit is 8500. Nothing to do here.
2020/06/09 15:53:04 using credential file for authentication; email=lion-db@estatelion-test-275318.iam.gserviceaccount.com
2020/06/09 15:53:04 Listening on 127.0.0.1:3306 for estatelion-test-275318:us-central1:estatelion
2020/06/09 15:53:04 Ready for new connections
But my Node.js application is unable to connect to it:
const config = require("config"); // node-config
const mysql = require("mysql");

const connection = mysql.createPool({
  connectionLimit: 10,
  host: config.get("database").host, // localhost, where the proxy listens
  user: config.get("database").user, // cloud proxy user for my Cloud SQL instance
  password: config.get("database").password, // password
  database: config.get("database").db_name, // database
});
Well, I found the solution to this.
I just updated the cluster version from 1.14.* (I don't remember the rest) to 1.15.11-gke.15 and it worked fine.
I think there is some bug in the default GKE version of the deployment.
Thanks everyone for your gracious time and effort. :-)
I am trying to connect to a LocalDB\MSSQLLocalDB SQL Express instance in my Node.js application, which uses Objection.js/Knex for the data layer.
When I try to run a migration it fails to connect to the database, but I can connect via SSMS without issue.
Here are my knexfile.js connection settings:
const { knexSnakeCaseMappers } = require('objection');

module.exports = {
  development: {
    client: 'mssql',
    useNullAsDefault: true,
    connection: {
      server: '(LocalDB)\\MSSQLLocalDB',
      user: 'localadmin',
      password: 'password$',
      database: 'database'
    },
    migrations: {
      directory: './migrations',
      tableName: 'knex_migrations'
    },
    seeds: {
      directory: './seeds'
    },
    ...knexSnakeCaseMappers()
  }
};
When I run a migration it gives the following error:
Failed to connect to (LocalDB)\MSSQLLocalDB:1433 - getaddrinfo ENOTFOUND (LocalDB)\MSSQLLocalDB
Error: Failed to connect to (LocalDB)\MSSQLLocalDB:1433 - getaddrinfo ENOTFOUND (LocalDB)\MSSQLLocalDB
at Connection.socketError (D:\Code\backend\node_modules\tedious\lib\connection.js:1393:28)
at D:\Code\backend\node_modules\tedious\lib\connection.js:1153:21
at GetAddrInfoReqWrap.callback (D:\Code\backend\node_modules\tedious\lib\connector.js:195:16)
at GetAddrInfoReqWrap.onlookupall [as oncomplete] (node:dns:77:17)
I believe this is an issue with the TCP/IP settings in SQL Express, but I cannot find an easy way to change those.
Are there any settings for the knexfile that will allow for a SQL Express database connection?
LocalDB only accepts named-pipe connections. Use a proper SQL Server Express instance and enable TCP connections.
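For reference, a minimal knexfile sketch for a full SQL Server Express instance reached over TCP rather than LocalDB. The server name, port, and TLS option here are assumptions; TCP/IP must first be enabled for the instance in SQL Server Configuration Manager.
// knexfile.js (sketch): SQL Server Express over TCP instead of LocalDB
module.exports = {
  development: {
    client: 'mssql',
    connection: {
      server: 'localhost',   // host running SQL Express (assumed)
      port: 1433,            // static TCP port configured for the instance (assumed)
      user: 'localadmin',
      password: 'password$',
      database: 'database',
      options: {
        trustServerCertificate: true // typical for local development certificates (assumed)
      }
    }
  }
};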
When I change the settings inside the new Bull Queue object, I get an error in the console. When running Bull locally, the application works perfectly fine; as soon as I change the credentials to Azure, I get the error below. When running locally I run redis-server, but not when using the Azure credentials.
I have tried the example tutorial on the Azure website with Node.js and the redis npm package, and the Azure Redis cache works perfectly fine, so I am left to believe that I am doing something wrong in the config. I have also tried adding "maxRetriesPerRequest" and "enableReadyCheck" to the redis object; however, they have had no effect. I also make sure I execute the done function within the process function.
const Queue = require('bull');

const queue = new Queue('sendQueue', {
  defaultJobOptions: { removeOnComplete: true },
  redis: {
    port: env.AZURE_REDIS_PORT,
    host: env.AZURE_REDIS_HOST,
    password: env.AZURE_REDIS_PASSWORD
  },
});
at Queue.<anonymous> (/Users/abc/Projects/Sean/dist/tasks/sendQueue.js:47:11)
at Queue.emit (events.js:208:15)
at Redis.emit (events.js:203:13)
at Redis.silentEmit (/Users/abc/Projects/Sean/node_modules/ioredis/built/redis/index.js:482:26)
at Socket.<anonymous> (/Users/abc/Projects/Sean/node_modules/ioredis/built/redis/event_handler.js:122:14)
at Object.onceWrapper (events.js:291:20)
at Socket.emit (events.js:203:13)
at emitErrorNT (internal/streams/destroy.js:91:8)
at emitErrorAndCloseNT (internal/streams/destroy.js:59:3)
at processTicksAndRejections (internal/process/task_queues.js:77:11)
Error: read ECONNRESET
at TCP.onStreamRead (internal/stream_base_commons.js:183:27)
Try adding a TLS configuration when using Azure Redis Cache; the servername should be the same value as the host. I did not manage to get a connection without it.
const Queue = require('bull');

var notificationQueue = new Queue('notifications', {
  redis: {
    port: Number(process.env.REDIS_PORT),
    host: process.env.REDIS_HOST,
    password: process.env.REDIS_PASS,
    tls: {
      servername: process.env.REDIS_HOST
    }
  }
});
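As a quick smoke test (a sketch reusing the notificationQueue above), you can confirm the queue actually reaches Azure Redis before wiring up processors; Bull exposes isReady() and emits 'error' on the queue:
notificationQueue.isReady()
  .then(() => console.log('Connected to Azure Redis'))
  .catch((err) => console.error('Redis connection failed:', err));
notificationQueue.on('error', (err) => {
  // Without the tls block, Azure Cache for Redis (SSL port 6380) resets the
  // connection and this handler fires with read ECONNRESET
  console.error('Queue error:', err);
});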
I have 3 servers which run a MongoDB replica set: 1 primary, 1 secondary, and 1 arbiter, and I am having trouble connecting to this replica set. I tested with a test.js file run both on localhost and on a spare server:
Connect from localhost, node 6.5.0: OK
Connect from localhost, node 10.15.1: FAILED
Connect from the spare server, node 6.5.0: OK
Connect from the spare server, node 10.15.1: OK
Here's my test.js file:
const MongoClient = require("mongodb").MongoClient;
const url = "mongodb://root:password" +
  "@mgdb1.mydomain,mgdb2.mydomain/vApp?replicaSet=rs0&authSource=admin";
console.log("Connecting...");
MongoClient.connect(url, (err, client) => {
  if (err != null) {
    console.log("Error:", err);
    return;
  }
  console.log("Connected.");
  process.exit();
});
The strange thing is that it shows an ECONNREFUSED error, but not at the IP of any of the 3 servers in the replica set; it is an IP in my ISP's range. So why does it fail in afterConnect? It shows TCPConnectWrap.afterConnect; does that mean the connection was already made?
The error is this way:
Connecting...
(node:1412) DeprecationWarning: current URL string parser is deprecated, and will be removed in a future version. To use the new parser, pass option { useNewUrlParser: true } to MongoClient.connect.
Error: { Error: connect ECONNREFUSED 125.235.4.59:27017
at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1104:14)
errno: 'ECONNREFUSED',
code: 'ECONNREFUSED',
syscall: 'connect',
address: '125.235.4.59',
port: 27017 }
Edit:
My current work-around is connecting directly to the primary server without replicaSet=rs0; however, this is not the desired approach.
I found the problem. When the replica set was created on a group of machines in the cloud, the member addresses were server names known only to other machines on the same LAN. On my localhost in the office, those server names do not resolve.
Work-around 1:
Connect only to the primary or secondary server
Work-around 2:
Edit the /etc/hosts file (or C:\Windows\System32\drivers\etc\hosts)
Point the names of the servers to their appropriate IPs (see the sketch below)
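A minimal hosts-file sketch (the IPs are documentation placeholders and the third host name, for the arbiter, is an assumption; substitute the public IPs and names of your own replica set members):
# /etc/hosts on the workstation
203.0.113.11  mgdb1.mydomain
203.0.113.12  mgdb2.mydomain
203.0.113.13  mgdb3.mydomain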
Using the DataStax Node.js driver, 'cassandra-driver', I am connecting to the database in my Node.js app server backend as:
const cassandra = require('cassandra-driver');
const client = new cassandra.Client({ contactPoints: [ '${azure_vm_ip}' ] });
Output log:
{ [Error: All host(s) tried for query failed. First host tried, azure_vm_ip:9042: Error: Connection timeout. See innerErrors.]
innerErrors:
{ 'azure_vm_ip:9042':
{ [Error: Connection timeout]
message: 'Connection timeout',
info: 'Cassandra Driver Error' } },
info: 'Represents an error when a query cannot be performed because no host is available or could be reached by the driver.',
message: 'All host(s) tried for query failed. First host tried, azure_vm_ip:9042: Error: Connection timeout. See innerErrors.' }
Questions:
1. Should I edit something in the default cassandra.yaml file? If so, what?
2. Should I do something with the firewall? If so, what?
3. Should I pass more options to new cassandra.Client({ contactPoints: [ '${azure_vm_ip}' ] })? If so, what?
Make sure you have set a rule for port 9042 in the NSG. Also check whether the same port is listening inside the VM.
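If the NSG rule is in place and the port is listening, a slightly more explicit client configuration can help narrow down timeouts. This is only a sketch: the data center name and timeout value are assumptions, and '${azure_vm_ip}' is the same placeholder used above.
const cassandra = require('cassandra-driver');
const client = new cassandra.Client({
  contactPoints: [ '${azure_vm_ip}' ],
  localDataCenter: 'datacenter1',          // required by driver v4.x; the name here is an assumption
  protocolOptions: { port: 9042 },         // must match the port opened in the NSG
  socketOptions: { connectTimeout: 10000 } // assumed value, just to fail predictably while debugging
});
client.connect()
  .then(() => console.log('Connected to Cassandra'))
  .catch((err) => console.error('Connection failed:', err));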
After successfully deploying the Ghost blogging platform, I tried to configure it to use MySQL instead of SQLite3, following the database section of their configuration page, which says:
Database
By default, Ghost comes configured to use an SQLite database, which
requires no configuration.
Alternatively Ghost can also be used with a MySQL database by changing
the database configuration. You must create a database and user first,
you can then change the existing sqlite config to something like:
database: {
    client: 'mysql',
    connection: {
        host     : '127.0.0.1',
        user     : 'your_database_user',
        password : 'your_database_password',
        database : 'ghost_db',
        charset  : 'utf8'
    }
}
So the setup is straightforward, but I'm still unable to connect Ghost to MySQL. The error I receive after starting the platform with npm start --production is:
> ghost#0.6.2 start /var/www/ghost
> node index
Migrations: Database initialisation required for version 003
Migrations: Creating tables...
Migrations: Creating table: posts
ERROR: connect ECONNREFUSED
Error: connect ECONNREFUSED
at errnoException (net.js:905:11)
at Object.afterConnect [as oncomplete] (net.js:896:19)
--------------------
at Protocol._enqueue (/var/www/ghost/node_modules/mysql/lib/protocol/Protocol.js:110:48)
at Protocol.handshake (/var/www/ghost/node_modules/mysql/lib/protocol/Protocol.js:42:41)
at Connection.connect (/var/www/ghost/node_modules/mysql/lib/Connection.js:98:18)
at /var/www/ghost/node_modules/knex/lib/dialects/mysql/index.js:105:16
at tryCatch2 (/var/www/ghost/node_modules/bluebird/js/main/util.js:53:21)
at Promise._resolveFromResolver (/var/www/ghost/node_modules/bluebird/js/main/promise.js:544:13)
at new Promise (/var/www/ghost/node_modules/bluebird/js/main/promise.js:84:37)
at Client_MySQL.acquireRawConnection (/var/www/ghost/node_modules/knex/lib/dialects/mysql/index.js:104:10)
at Object.create (/var/www/ghost/node_modules/knex/lib/pool.js:33:19)
at Object.Pool.createResource (/var/www/ghost/node_modules/knex/node_modules/generic-pool-redux/pool.js:288:12)
I'm not sure what could be wrong, since I have other applications using MySQL that work without any problems.
Thanks in advance.
Connection refused means a TCP connection was attempted but nothing was listening on that port, or the connection was explicitly denied. Unless you have explicitly enabled TCP support in MySQL (and have the correct IP/port), you should probably be using the local unix-domain socket instead.
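For example, if MySQL is only listening on its local socket, the Ghost database config can point at the socket instead of TCP. This is a sketch: the socket path shown is the Debian/Ubuntu default, so check your my.cnf for the actual location.
database: {
    client: 'mysql',
    connection: {
        socketPath: '/var/run/mysqld/mysqld.sock', // assumed default path
        user: 'your_database_user',
        password: 'your_database_password',
        database: 'ghost_db',
        charset: 'utf8'
    }
}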