Elasticsearch is not able to establish a connection on an Ubuntu machine - Node.js

I am trying to use Elasticsearch from my Express/Node.js app on Ubuntu 17.10, but it fails with the error shown below:
Error: Request error, retrying
HEAD http://localhost:9200/ => connect ECONNREFUSED 127.0.0.1:9200
My code for that is as shown below:
const connectElasticSearchClient = () => {
  const client = new elasticsearch.Client({
    hosts: ['localhost:9200'],
    maxRetries: 10,
    keepAlive: true,
    maxSockets: 10,
    minSockets: 10
  });
  client.ping({
    requestTimeout: 300000,
  }, function (error) {
    if (error) {
      console.error('elasticsearch cluster is down! ' + error);
    } else {
      console.log('Everything is ok');
    }
  });
};
I am running it on Node version 8.11.1. The version of the elasticsearch npm package is 13.3.1.

Related

node.js fix error: SSL routines:ssl3_get_record:wrong version number

I am trying to connect to an IMAP server, but using different packages I always get this error:
Error: 139844877236160:error:1408F10B:SSL routines:ssl3_get_record:wrong version number:../deps/openssl/openssl/ssl/record/ssl3_record.c:332:
I suspect this is due to some Node.js configuration. I am running the latest Node.js version in Docker from https://hub.docker.com/_/node .
How can I change the ssl3 version number?
const Imap = require("imap");
const imap = new Imap({
  user: "xx",
  password: "xx",
  host: "imap.ionos.com",
  port: 993,
  tls: true,
  authTimeout: 3000,
});
imap.once("error", function (err) {
  console.log(err);
});
imap.once("end", function () {
  console.log("Connection ended");
});
imap.connect();
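The OpenSSL "wrong version number" error is not about SSLv3 itself; it usually means a TLS handshake hit an endpoint answering in plaintext (or the reverse), i.e. a port/tls mismatch. A sketch of the two common node-imap configurations, with placeholder host and credentials; which one your server expects (993 vs 143) is an assumption to verify:

```javascript
// Both objects are illustrative node-imap configs; the host and credentials
// are placeholders. Pass one of them to `new Imap(config)`.

// Implicit TLS (port 993): the socket speaks TLS from the very first byte.
const implicitTls = {
  user: 'xx',
  password: 'xx',
  host: 'imap.example.com', // placeholder
  port: 993,
  tls: true,
};

// Plaintext + STARTTLS (port 143): connect in the clear, upgrade afterwards.
const startTls = {
  user: 'xx',
  password: 'xx',
  host: 'imap.example.com', // placeholder
  port: 143,
  tls: false,
  autotls: 'required',
};
```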

Getting 'Error: Unable to load PFX certificate' in a Node.js application while connecting to Elasticsearch

I am getting the error stack trace below while querying Elasticsearch from a Node.js app.
Error: Unable to load PFX certificate
at configSecureContext (node:internal/tls:304:15)
at Object.createSecureContext (node:_tls_common:113:3)
at Object.connect (node:_tls_wrap:1619:48)
at HttpsAgent.createConnection (node:https:136:22)
at HttpsAgent.createSocket (/Users/arshpreetsingh/Desktop/Desktop – MacBook Pro/Diagnotics Services/diagnostic-search/node_modules/agentkeepalive/lib/_http_agent.js:265:26)
at HttpsAgent.createSocket (/Users/arshpreetsingh/Desktop/Desktop – MacBook Pro/Diagnotics Services/diagnostic-search/node_modules/agentkeepalive/lib/agent.js:77:11)
at HttpsAgent.addRequest (/Users/arshpreetsingh/Desktop/Desktop – MacBook Pro/Diagnotics Services/diagnostic-search/node_modules/agentkeepalive/lib/_http_agent.js:239:10)
at new ClientRequest (node:_http_client:305:16)
at Object.request (node:https:317:10)
at Object.request (/Users/arshpreetsingh/Desktop/Desktop – MacBook Pro/Diagnotics Services/diagnostic-search/node_modules/agent-base/patch-core.js:25:22)
My Elasticsearch client looks like this.
import elasticsearch from 'elasticsearch';
import CONFIG from '../config/index';

const elasticClient = new elasticsearch.Client({
  host: CONFIG.es_url,
});

export default {
  search: async (...args) => {
    try {
      return await elasticClient.search(...args)
    } catch (error) {
      if (error.status == 400) {
        return { hits: { total: 0 } }
      }
      throw error
    }
  }
}
The Elasticsearch URL starts with https.
There is a way to solve this. On Node.js 14.x, add this config:
new elasticsearch.Client({
  host: CONFIG.es_url,
  ssl: { rejectUnauthorized: false }
});

Failed to connect ElastiCache from NodeJS server on Elastic Beanstalk

We have a Node.js server with Express on AWS Elastic Beanstalk and we are trying to connect it to ElastiCache (clustered Redis), but we get this error: Redis Client Connection Error ClusterAllFailedError: Failed to refresh slots cache. The error seems very common, as a lot of people are facing the same bug. To connect to ElastiCache, we are using an npm module named ioredis.
A lot of people recommend using the same VPC and security group for both ElastiCache and Elastic Beanstalk. We are already using the same VPC, and on Elastic Beanstalk we are using two security groups, one of which matches the security group of ElastiCache. For the default VPC, we have enabled All Traffic for the inbound and outbound rules, but we are still facing the same bug.
To connect to ElastiCache from the Node.js server, I am using the following code:
const Redis = require("ioredis");

exports.connect = () => {
  const client = new Redis.Cluster(
    ["xxxxx.xxxxx.clustercfg.use1.cache.amazonaws.com:6379"],
    {
      slotsRefreshTimeout: 10000,
      dnsLookup: (address, callback) => callback(null, address),
      redisOptions: {
        showFriendlyErrorStack: true,
        tls: {
          checkServerIdentity: (/*host, cert*/) => {
            // skip certificate hostname validation
            return undefined;
          },
        },
      },
    }
  );
  client.on("ready", () => {
    console.log("Redis Client Ready");
  });
  client.on("connect", () => {
    console.log("Redis Client Connected");
  });
  client.on("error", (error) => {
    console.log("Redis Client Connection Error", error);
  });
  client.on("reconnecting", () => {
    console.log("Redis Client Reconnecting");
  });
  client.on("end", () => {
    console.log("Redis Client Connection ended");
  });
  return client;
};
ElastiCache configuration (screenshot)
Default VPC security group with inbound and outbound rules (screenshot)
Elastic Beanstalk security group, same as default (screenshot)
Error information from Elastic Beanstalk (screenshot)
Versions:
Node.js running on 64bit Amazon Linux with platform version 4.15.1
NodeJS version: 12.18.3
ioredis version: 4.17.3
npm version: 6.14.6
express version: 4.17.1
UPDATE: I can access ElastiCache from Elastic Beanstalk if I SSH in and use redis-cli, but not via ioredis in the Node.js app running on Elastic Beanstalk.
I have a similar setup and eventually got it working. A few key points:
Elastic Beanstalk and ElastiCache have to be in the same VPC
ElastiCache's security group should have an inbound rule allowing traffic from Elastic Beanstalk
Here's code to connect:
import { RedisPubSub } from 'graphql-redis-subscriptions';
import Redis from 'ioredis';
import config from '../../config/env';

const options = {
  // AWS host will look like this: somecache-dev-ro.k6sjdj.ng.0001.use1.cache.amazonaws.com
  host: config.redis.host || 'localhost',
  port: config.redis.port || 6379,
  retryStrategy: (times: number): number => {
    // reconnect after
    return Math.min(times * 50, 2000);
  },
};

export const pubsub = new RedisPubSub({
  publisher: new Redis(options),
  subscriber: new Redis(options),
});
I was debugging a similar issue. To access Redis, I had to add tls: {} to the ioredis options:
{
  host: process.env.REDIS_HOST,
  port: process.env.REDIS_PORT,
  password: process.env.REDIS_PASSWORD,
  tls: {}
}
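For an ElastiCache cluster with in-transit encryption, the tls: {} flag usually has to be combined with the dnsLookup pass-through shown in the question, so the *.cache.amazonaws.com hostname survives slot discovery and still matches the server certificate. A minimal sketch of those options; the endpoint in the usage comment is a placeholder:

```javascript
// Sketch of ioredis Cluster options for an ElastiCache cluster with
// in-transit encryption. tls: {} enables TLS; dnsLookup passes the hostname
// through unresolved so the certificate issued for *.cache.amazonaws.com
// still matches the node names discovered during slot refresh.
const clusterOptions = {
  dnsLookup: (address, callback) => callback(null, address),
  redisOptions: {
    tls: {},
  },
};

// const client = new Redis.Cluster(
//   ['xxxxx.clustercfg.use1.cache.amazonaws.com:6379'], // placeholder endpoint
//   clusterOptions
// );
```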
You can simply create a connection:
const Redis = require("ioredis");

const client = new Redis(
  6379,
  "Configuration endpoint (xxx.xxxx.xxxcache.amazonaws.com)"
);
client.on("ready", () => {
  console.log("Redis Client Ready");
});
client.on("connect", () => {
  console.log("Redis Client Connected");
});
client.on("error", (error) => {
  console.log("Redis Client Connection Error", error);
});

How to handle error of closed MySQL connection in Node.js?

I've built a Node.js API that connects to a MySQL database. I use node-mysql2 as the driver. The API and the database run in separate Docker containers. At some point after deployment in a Kubernetes cluster, I get the following error:
Error: Can't add new command when connection is in closed state
at PromiseConnection.query (/usr/src/app/node_modules/mysql2/promise.js:92:22)
I wonder why this error happens and how to catch and handle it in Node.js. These are code snippets from my Node.js API:
const mysql = require('mysql2/promise')
...
async function main() {
  try {
    const client = await mysql.createConnection({
      host: DATABASE_HOST,
      port: DATABASE_PORT,
      user: DATABASE_USERNAME,
      password: DATABASE_PASSWORD,
      database: DATABASE_NAME
    })
    client.on('error', error => {
      process.stderr.write(`${error.code}\n`) // PROTOCOL_CONNECTION_LOST
      process.stderr.write(`An error occurred while connecting to the db: ${error.message}\n`)
      process.exit(1)
    })
  } catch (error) {
    process.stderr.write(`Error while creating db connection: ${error.code}\n`)
  }
  ...
}
...
main().catch(err => {
  process.stderr.write(`Error message: ${err.message}\n`)
  process.exit(1)
})
Do you have an idea how to handle this error?
Do you close the connection after you are finished with it?
client.end();
Have you also considered using a pool?
const pool = mysql.createPool({
  host: DATABASE_HOST,
  user: DATABASE_USERNAME,
  database: DATABASE_NAME,
  waitForConnections: true,
  connectionLimit: 10,
  queueLimit: 0
});
More info about pools: https://github.com/sidorares/node-mysql2#using-connection-pools
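Even with a pool, a query can still race against a server-side disconnect. A hedged sketch of a retry wrapper (queryWithRetry is a hypothetical helper, not part of mysql2) that re-runs a query once when the error looks like a lost connection:

```javascript
// Hypothetical retry wrapper, not part of mysql2: re-runs the query once
// when the driver reports a lost/closed connection, and rethrows anything
// else (syntax errors, constraint violations, ...).
async function queryWithRetry(runQuery, retries = 1) {
  try {
    return await runQuery();
  } catch (error) {
    const retriable =
      error.code === 'PROTOCOL_CONNECTION_LOST' ||
      /closed state/.test(error.message);
    if (retriable && retries > 0) {
      return queryWithRetry(runQuery, retries - 1);
    }
    throw error;
  }
}

// Usage with a pool (DATABASE_* settings as in the question):
// const [rows] = await queryWithRetry(() => pool.query('SELECT 1'));
```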

Connection pooling in elasticsearch

How can we configure connection pooling for Elasticsearch in Node.js, to handle instance failures and detect dead nodes? How can I customize the Transport, ConnectionPool, and Connection classes of the elasticsearch package in Node.js?
This feature is supported by the new RC1 client.
Here is an example:
'use strict'

// docker run -p 9200:9200 -p 9300:9300 --rm -e "discovery.type=single-node" docker.elastic.co/elasticsearch/elasticsearch:6.7.0
const { Client } = require('@elastic/elasticsearch')

const client = new Client({
  nodes: ['http://127.0.0.1:9200/'],
  requestTimeout: 2000,
  sniffInterval: 500,
  sniffOnStart: true,
  sniffOnConnectionFault: true
})

client.on('sniff', (err, req) => {
  console.log('sniff', err ? err.message : '', `${JSON.stringify(req.meta.sniff)}`)
})

setInterval(async () => {
  try {
    const info = await client.info()
    console.log(info.body.name)
  } catch (err) {
    console.log(err.message)
  }
}, 1500)
Note: install the v6 release candidate:
"@elastic/elasticsearch": "6.7.0-rc.1",
