I'm looking to keep my Elasticsearch client's connection alive. I've been using the Elastic client with great success for indexing and searching its datastore, but I want to be able to create a connection to my Elasticsearch nodes and preserve it, so that I don't need to continuously create a new connection every time I POST to it.
Having looked at the documentation, I see there's a keep-alive feature in the Swagger documentation, but I've created my client using Node.js and have had no luck finding any such feature.
My client looks something like this:
const client = new Client({
  auth: {
    username: 'aSecret',
    password: 'alsoASecret',
  },
  node: 'localhost:9000',
  maxRetries: 3,
  requestTimeout: 15000,
});
and my indexing call is very simple right now:
await client.index({
  index: 'my-datastore',
  refresh: true,
  body: eventData,
});
How can I keep my indexing connection alive so that I can send multiple events to my datastore without having to connect and reconnect?
There is a keepAlive config option in the Client's constructor.
And it's a boolean value. Reference:
keepAlive: Should the connections to the node be kept open forever? This behavior is recommended when you are connecting directly to Elasticsearch.
var elasticsearch = require('elasticsearch');
var client = new elasticsearch.Client({
  auth: {
    username: 'aSecret',
    password: 'alsoASecret',
  },
  node: 'localhost:9000',
  maxRetries: 3,
  requestTimeout: 15000,
  keepAlive: true
});
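The snippet above uses the legacy elasticsearch package. For the newer @elastic/elasticsearch client from the question, my understanding is that the transport already reuses sockets via a keep-alive HTTP agent; if you want to tune that behaviour, the Client constructor accepts an agent option that takes http.Agent settings. A minimal sketch, with illustrative values rather than recommended defaults:
const { Client } = require('@elastic/elasticsearch');

const client = new Client({
  node: 'http://localhost:9000',
  auth: { username: 'aSecret', password: 'alsoASecret' },
  maxRetries: 3,
  requestTimeout: 15000,
  // http.Agent options: keepAlive reuses sockets across requests
  agent: { keepAlive: true, maxSockets: 50 },
});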
I'm having some trouble connecting Node to the database. It keeps throwing an SSL error, and I've tried a lot of different videos and guides to see if anything works, but nothing does. Here is what I'm currently doing:
import sql from 'mssql'

const dbSettings = {
  user: 'admin',
  password: 'system',
  server: 'localhost',
  database: 'master',
  options: {
    trustedConnection: true,
    encrypt: true,
    trustServerCertificate: true,
  },
}
async function getConnection() {
  const pool = sql.connect(dbSettings)
  const result = await sql.query("SELECT 1")
  console.log(result)
}

getConnection()
I also tried this as well, but it didn't work either:
async function getConnection() {
  const pool = await sql.connect(dbSettings)
  const result = await pool.request().query("SELECT 1")
  console.log(result)
}
I also checked that SQL Server authentication is enabled alongside Windows authentication, and it is; I can log in to SQL Server with that info, but somehow it's having trouble creating the connection. By the way, this is the error message it is showing me:
node_modules\mssql\lib\tedious\connection-pool.js:70
err = new ConnectionError(err)
^
ConnectionError: Failed to connect to localhost:1433 - 186B0000:error:0A000102:SSL routines:ssl_choose_client_version:unsupported protocol:c:\ws\deps\openssl\openssl\ssl\statem\statem_lib.c:1986
Any tips or solutions you can give me to solve this problem would be really helpful. Thank you very much in advance.
EDIT
I noticed that the connection error only appears when I call the getConnection function; if I remove the call, it doesn't appear. However, I need to make sure that the connection was properly established and see the response from the database before moving on.
Change encrypt: true to encrypt: false.
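Applied to the dbSettings from the question, a minimal sketch of that change (same placeholder credentials as above):
const dbSettings = {
  user: 'admin',
  password: 'system',
  server: 'localhost',
  database: 'master',
  options: {
    trustedConnection: true,
    encrypt: false, // disable the TLS negotiation that was failing
    trustServerCertificate: true,
  },
}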
I have used knex with the oracledb client with the following config:
const database = knex({
  client: "oracledb",
  connection: {
    user: DB_USER,
    password: DB_PASS,
    host: DB_HOST,
    port: DB_PORT,
    database: DB_NAME,
  },
  debug: DEBUG_MODE,
  fetchAsString: ["number", "clob"],
});
When I execute any query it is auto-committed, and I want to disable that:
database("EMPLOYEES")
.where({ EMPLOYEE_ID: 100 })
.update({ EMAIL: "hi#example.com" })
To prevent autocommit, you need to start a transaction and run the queries that should not be auto-committed inside it (see the sketch below). If you disable autocommit globally, you will have no idea what state each DB session created by the pool is in.
Though if you don't care what result you may get, or you use just a single connection in that special case, disabling autocommit could work; in that case it can probably be set with knex.raw().
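A minimal sketch of the transaction approach with knex, using the table and values from the question (error handling kept to the essentials):
// Updates made through trx are only committed when the callback resolves.
await database.transaction(async (trx) => {
  await trx("EMPLOYEES")
    .where({ EMPLOYEE_ID: 100 })
    .update({ EMAIL: "hi#example.com" });
  // Throwing here (or calling trx.rollback()) rolls the update back instead.
});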
Today, the behavior of TypeORM (Postgres) for
getManager().query(...) and
getRepository().createQueryBuilder(...).getMany()
is to wait for a response indefinitely.
Is there a way to introduce a request timeout that I might've missed?
If this is not possible, does TypeORM expose the connection from its pool so that I can implement a timeout mechanism and close the DB connection manually?
To work with a specific connection from the pool, use createQueryRunner. There is no info about it in the docs, but it is documented in the API:
Creates a query runner used for perform queries on a single database connection.
Using query runners you can control your queries to execute using single database connection and
manually control your database transaction.
Usage example:
import { getConnection, EntityManager } from 'typeorm';

const foo = <T>(callback: (em: EntityManager) => Promise<T>): Promise<T> => {
  const connection = getConnection();
  const queryRunner = connection.createQueryRunner();
  return new Promise<T>(async (resolve, reject) => {
    try {
      // Reserve a single connection from the pool for this runner.
      await queryRunner.connect();
      // add logic for timeout here (see the sketch below)
      const res = await callback(queryRunner.manager);
      resolve(res);
    } catch (err) {
      reject(err);
    } finally {
      // Always return the connection to the pool.
      await queryRunner.release();
    }
  });
};
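One way to fill in the timeout placeholder is a Promise.race against a timer; the withTimeout helper and the 5000 ms limit below are illustrative, not part of TypeORM:
// Hypothetical helper: rejects if the wrapped promise takes longer than `ms`.
const withTimeout = <T>(promise: Promise<T>, ms: number): Promise<T> =>
  Promise.race([
    promise,
    new Promise<T>((_, reject) =>
      setTimeout(() => reject(new Error(`Query timed out after ${ms} ms`)), ms)
    ),
  ]);

// Inside foo: const res = await withTimeout(callback(queryRunner.manager), 5000);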
You can change the default behaviour on a per-connection basis by using either statement_timeout or query_timeout. You can read more about all possible configurations in the official node pg driver doc. What is the difference between a statement and a query?
A statement is any SQL command such as SELECT, INSERT, UPDATE, DELETE.
A query is a synonym for a SELECT statement.
How do you tell TypeORM to use these configurations? Add these parameters under the extra field in ormconfig.js:
{
  type: "postgres",
  name: "default",
  host: process.env.DB_HOST,
  port: 5432,
  username: process.env.DB_USER,
  password: process.env.DB_PASS,
  database: process.env.DB_NAME,
  synchronize: false,
  logging: false,
  entities: [
    "dist/entity/**/*.js"
  ],
  extra: {
    poolSize: 20,
    connectionTimeoutMillis: 2000,
    query_timeout: 1000,
    statement_timeout: 1000
  },
}
Note the use of poolSize here. This creates a connection pool of 20 connections for the application to use and reuse. connectionTimeoutMillis ensures that if all the connections inside the pool are busy executing statements/transactions, a new connection request against the pool will time out after connectionTimeoutMillis ms. More about connection pool configurations of pg-pool here.
From the documentation, you can use the maxQueryExecutionTime ConnectionOption:
maxQueryExecutionTime - If query execution time exceed this given max execution time (in milliseconds) then logger will log this query.
ConnectionOptions is the connection configuration you pass to createConnection or define in ormconfig.
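For example, a minimal sketch of adding it to the ormconfig.js shown above (the 1000 ms threshold is illustrative; note that this only logs slow queries, it does not cancel them):
{
  type: "postgres",
  // ...same connection options as above...
  maxQueryExecutionTime: 1000, // log any query that runs longer than 1 second
}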
I am having a kind of strange problem when trying to establish a single MongoDB connection to a DB with the MongoDB Node.js native driver, version 3.6.0.
My idea is to have just one connection open for the whole client session and reuse it in the different routes I have in my Express server. When I hit the getDataBase function for the first time, it creates two connections; after that, one of them closes and one stays idle, but when I use a route, the second one gets opened again (it is like one connection just stays there and does nothing).
I just want to have one connection open in my pool.
If you look at the commented code, I was testing with those options, but none of them worked for me.
PS: when I set socketTimeoutMS to 5000 ms, just one connection is created, but it auto-closes and reopens every 5000 ms, which is weird (it reopens itself even when I don't use the connection).
All of this happens when I set useUnifiedTopology to true (I can't set it to false because it is deprecated and the other topologies will be removed in the next version of the MongoDB Node.js driver).
Here is an image with the strange behaviour
The code is:
import { MongoClient, Db } from 'mongodb';
import { DB_URI } from '../config/config';

// This module works as a database singleton.
let db: Db;

export const getDataBase = async (id: string) => {
  try {
    if (db) {
      console.log('ALREADY CREATED');
      return db;
    } else {
      console.log('CREATING');
      let client: MongoClient = await MongoClient.connect(`${DB_URI}DB_${id}`, {
        useUnifiedTopology: true,
        /* minPoolSize: 1,
        maxPoolSize: 1,
        socketTimeoutMS: 180000,
        keepAlive: true,
        maxIdleTimeMS: 10000,
        useNewUrlParser: true,
        keepAlive: true,
        w: 'majority',
        wtimeout: 5000,
        serverSelectionTimeoutMS: 5000,
        connectTimeoutMS: 8000,
        appname: 'myApp',
        */
      });
      db = client.db();
      return db;
    }
  } catch (error) {
    console.log('DB Connection error', error);
  }
};
The driver internally creates one connection per known server for monitoring purposes. This connection is not used for application operations.
Hence, it is expected that you would get two connections established.
In the code below we can connect to MongoDB:
var options = {
  db: { native_parser: true },
  server: { poolSize: 5 },
  replset: { rs_name: 'myReplicaSetName' },
  user: 'myUserName',
  pass: 'myPassword'
}
mongoose.connect(uri, options);
Here poolSize is 5, so 5 parallel connections can be used to serve requests.
But I see that if we try to create a second connection, Node gives an error saying I'm trying to create a connection which is not closed. So at any one time only one connection can perform work for one application.
So what is the meaning of poolSize being 5, and how does it perform?
I need a solution and a way to increase the pool size when my system scales up.
Thanks in advance.
Mongoose (or rather the mongodb driver it uses) will automatically manage the number of connections to the MongoDB server. You should call mongoose.connect() just once.
If you need a larger number of connections, all you have to do is increase the poolSize property. However, since you're using a replica set, you should set replset.poolSize instead of server.poolSize:
var options = {
  db: { native_parser: true },
  replset: { rs_name: 'myReplicaSetName', poolSize: POOLSIZE },
  user: 'myUserName',
  pass: 'myPassword'
}
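Then connect once at application startup and reuse that connection everywhere; uri and POOLSIZE are placeholders, as in the snippets above:
// Call this once when the app boots; mongoose manages the pool internally.
mongoose.connect(uri, options);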