SQL Server connection problem using Sequelize with encrypt set to true - Node.js

I have been working on a Node.js project locally on my (Windows) machine which connects to a SQL Server database using Sequelize. It works fine on my local machine, but when I upload it to an Azure Web App I get the following error on each request:
Failed to connect to x.x.x.x:1433 - socket hang up (IP address replaced with X's by me)
The code is identical and I have checked that there are no IP restrictions on the database; it's totally open. I have also checked that the environment variables (username/password etc.) are identical and being loaded correctly in production. In fact, my logging shows that the connection attempt is identical.
The only thing that makes it work is removing the Sequelize/tedious option { encrypt: true }.
So, just to be clear: encrypted connections work from localhost but not from an Azure Web App.
I have literally no idea why this could be, so any suggestions would be helpful, even if they just point me in the right direction.
Here is my Sequelize setup. Obviously I've redacted the host IP address and login details, but they are the same in dev and production at the moment.
{
  'production': {
    'username': xxx,
    'password': xxx,
    'database': xxx,
    'host': xxx,
    'dialect': 'mssql',
    'dialectOptions': {
      'options': {
        'encrypt': true,
        'multipleStatements': true,
        'validateBulkLoadParameters': false
      }
    },
    'omitNull': true,
    'pool': {
      'max': 100,
      'min': 0,
      'acquire': 30000,
      'idle': 10000
    },
    'logging': false
  },
  'development': {
    'username': xxx,
    'password': xxx,
    'database': xxx,
    'host': xxx,
    'dialect': 'mssql',
    'dialectOptions': {
      'options': {
        'encrypt': true,
        'multipleStatements': true,
        'validateBulkLoadParameters': false,
        'debug': {
          'packet': true,
          'data': true,
          'payload': true,
          'token': true
        }
      }
    },
    'omitNull': true,
    'pool': {
      'max': 100,
      'min': 0,
      'acquire': 30000,
      'idle': 10000
    },
    'logging': false
  }
}
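
One way to narrow this down (my own suggestion, not something from the original post) is to take Sequelize out of the equation and try an encrypted connection with tedious directly from inside the Azure Web App, for example via the Kudu console. A minimal sketch, assuming tedious v8.3+ and hypothetical DB_* environment variable names:

// test-encrypt.js - hypothetical diagnostic script, run from the Web App console
const { Connection } = require('tedious');

const connection = new Connection({
  server: process.env.DB_HOST, // assumed variable names, substitute your own
  authentication: {
    type: 'default',
    options: { userName: process.env.DB_USER, password: process.env.DB_PASS },
  },
  options: {
    database: process.env.DB_NAME,
    port: 1433,
    encrypt: true, // the option that triggers the failure
  },
});

connection.on('connect', (err) => {
  if (err) {
    console.error('Encrypted connect failed:', err.message);
  } else {
    console.log('Encrypted connect OK');
  }
  connection.close();
});

connection.connect(); // explicit connect() is required in tedious v8.3+

If this fails the same way, the problem is in the network/TLS path from the Web App rather than in Sequelize itself.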

Related

Node-mssql not able to connect to the server but with tedious it connects

Currently I'm using the tedious package to connect to the database and do operations, but I would like to switch to node-mssql (it seems less messy).
The problem I'm getting is a connection timeout:
originalError: ConnectionError: Failed to connect to yyy:1433 in 15000ms
code: 'ETIMEOUT',
isTransient: undefined
}
My config with tedious:
const config = {
  server: process.env.HOST, // update me
  authentication: {
    type: 'default',
    options: {
      userName: process.env.USER, // update me
      password: process.env.PASS, // update me
    },
  },
  options: {
    // If you are on Microsoft Azure, you need encryption:
    database: process.env.DB,
    rowCollectionOnDone: true, // update me
  },
};
My config with mssql:
const configMssql = {
  user: process.env.USER,
  password: process.env.PASS,
  server: process.env.HOST, // update me
  database: process.env.DB,
  pool: {
    max: 10,
    min: 0,
    idleTimeoutMillis: 30000,
  },
  options: {
    encrypt: false, // for azure
    trustServerCertificate: false, // change to true for local dev / self-signed certs
  },
};
or
const configMssqlString = `Server=${process.env.HOST},1433;Database=${process.env.DB};User Id=${process.env.USER};Password=${process.env.PASS};Encrypt=false`;
I can't figure out what's wrong.
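
For what it's worth (not from the original post), a minimal end-to-end test of the mssql config above could look like this, assuming mssql v6+ with async/await:

const sql = require('mssql');

(async () => {
  try {
    // configMssql is the object defined above
    const pool = await sql.connect(configMssql);
    const result = await pool.request().query('SELECT 1 AS ok');
    console.log(result.recordset); // expected: [ { ok: 1 } ]
  } catch (err) {
    console.error('Connection failed:', err.code, err.message);
  } finally {
    await sql.close();
  }
})();

Running the same snippet with encrypt toggled in configMssql is a quick way to see whether the timeout is related to the encryption setting or to the network path.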

Migration with Sequelize CLI to DigitalOcean Postgres Database Throwing SSL Error

Connecting to my DigitalOcean database with Sequelize works fine when I'm not migrating. For example, attempting to create a new table works just fine; the code below successfully connects and creates a new table.
sequelize = new Sequelize(config.use_env_variable, config);
sequelize.authenticate().then(() => console.log('success')).catch((error) => console.log(error));
sequelize.define('test-table', {
  test_id: {
    type: Sequelize.INTEGER,
  },
});
sequelize.sync();
I have a CA certificate .crt file I downloaded from DigitalOcean that I'm passing in with the Sequelize options. My config.js looks like this:
development: {
  use_env_variable: 'postgresql://[digitalocean_host_url]?sslmode=require',
  ssl: true,
  dialectOptions: {
    ssl: {
      require: true,
      rejectUnauthorized: false,
      ca: fs.readFileSync(`${__dirname}/../.postgresql/root.crt`),
    },
  },
},
However, when I try to create tables using migrations with
npx sequelize-cli db:migrate
I receive the following output and error:
Parsed url postgresql://[digitalocean_host_url]?sslmode=require
ERROR: no pg_hba.conf entry for host [host], user [user], database [database], SSL off
Which is very strange, because SSL is working when I create a table using just Sequelize sync. I have a .sequelizerc file for the sequelize-cli configuration, which looks like this:
const path = require('path');
const env = process.env.NODE_ENV || 'development';
const config = require('./config/config')[env];

module.exports = {
  'config': path.resolve('config', 'config.js'),
  'url': config.use_env_variable,
  'options-path': path.resolve('config', 'sql-options.json')
}
Inside my sql-options.json I have the following:
{
  "use_env_variable": "postgresql://[digitalocean_host_url]?sslmode=require",
  "dialect": "postgres",
  "ssl": true,
  "dialectOptions": {
    "ssl": {
      "required": true,
      "rejectUnauthorized": true,
      "ca": "/../.postgresql/root.crt"
    }
  }
}
I've tried a lot of the advice from various resources, including the sequelize/cli repo. But none of it seems to work. Any advice would be helpful.
I had the same issue and the fix was to add the code below to the migrations config file, even though you already have it in the database connection file.
The following code goes in the config/config.js file for migrations.
production: {
  username: ****,
  password: ****,
  database: ****,
  host: ****,
  dialect: ****,
  port: ****,
  dialectOptions: {
    ssl: {
      require: true,
      rejectUnauthorized: false,
    },
  },
},
This is what my DB connection, which was working normally, looks like:
const sequelize = new Sequelize({
  host: ****,
  database: ****,
  username: ****,
  password: ****,
  dialect: ****,
  port: ****,
  dialectOptions: {
    ssl: {
      require: true,
      rejectUnauthorized: false,
    },
  },
});
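
Applied to the original setup, the same idea means putting the SSL options (including the CA file) into config/config.js itself, since that is the file sequelize-cli reads during db:migrate. A sketch under that assumption (the DATABASE_URL variable name is mine, not from the post):

// config/config.js - read by sequelize-cli
const fs = require('fs');
const path = require('path');

module.exports = {
  development: {
    // use_env_variable holds the *name* of an environment variable that
    // contains the connection URL, e.g. DATABASE_URL=postgresql://...?sslmode=require
    use_env_variable: 'DATABASE_URL',
    dialect: 'postgres',
    dialectOptions: {
      ssl: {
        require: true,
        rejectUnauthorized: false, // switch to true once the CA chain verifies cleanly
        ca: fs.readFileSync(path.join(__dirname, '../.postgresql/root.crt'), 'utf8'),
      },
    },
  },
};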

Connecting to (LocalDB)\MSSQLLocalDB with Sequelize

I have a question about Sequelize and SQL Server.
I can connect to my database with "localhost" or the name of my computer, but I can't with "(LocalDB)\MSSQLLocalDB". These are my connection parameters:
PASSWORD: "sw",
DB: "BusinessDB",
CONFIG: {
  host: '(LocalDB)\\MSSQLLocalDB',
  dialect: 'mssql',
  dialectOptions: {
    options: {
      encrypt: true,
    }
  },
  pool: {
    max: 5,
    min: 0,
    acquire: 30000,
    idle: 10000
  },
  define: {
    timestamps: false,
  }
}
And this is the error when I'm trying to connect with this config:
> Failed to connect to (LocalDB)\MSSQLLocalDB:1433 - getaddrinfo ENOTFOUND (LocalDB)\MSSQLLocalDB
Does someone have a solution for this? I searched on Google but couldn't find one.
Thanks
I found a solution with the help of the msnodesqlv8 module.
Now I'm using this configuration to connect to my DB:
dialect: 'mssql',
dialectModule: require('msnodesqlv8/lib/sequelize'),
bindParam: false,
/* logging: false, */
dialectOptions: {
  options: {
    connectionString: 'Driver={ODBC Driver 17 for SQL Server};Server=(LocalDB)\\MSSQLLocalDB;Database=MyDB;Trusted_Connection=yes;',
  },
},
define: {
  timestamps: false,
}
The driver version can be found in the ODBC Data Sources tool (type it into the Windows search bar).
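
For context (not part of the original answer), here is a sketch of how those options might plug into the Sequelize constructor, assuming Sequelize v5+ and the database name from the question:

const { Sequelize } = require('sequelize');

// Username/password are omitted because the connection string uses Trusted_Connection.
const sequelize = new Sequelize('BusinessDB', null, null, {
  dialect: 'mssql',
  dialectModule: require('msnodesqlv8/lib/sequelize'),
  dialectOptions: {
    options: {
      connectionString: 'Driver={ODBC Driver 17 for SQL Server};Server=(LocalDB)\\MSSQLLocalDB;Database=BusinessDB;Trusted_Connection=yes;',
    },
  },
  define: { timestamps: false },
});

sequelize.authenticate()
  .then(() => console.log('Connected to LocalDB'))
  .catch((err) => console.error('Connection failed:', err));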

Run Nightwatch.js tests against a remote Selenium server in a Kubernetes environment

I've created automated tests with Nightwatch-Cucumber, based on Nightwatch.js. I can start the tests on my local machine: the Selenium server starts locally and the tests are executed.
But now I want to integrate the existing tests into a Kubernetes environment. On my local machine I want to use Minikube, Helm, a Jenkins chart to start the tests, and a Selenium chart. But this setup is quite different from the local one. I want to start the tests from the Jenkins instance, and they should be executed against the running Selenium server provided by the Selenium chart. So I want to use such a "remote" Selenium server: not a local Selenium server that starts at runtime, but an already running Selenium server somewhere in the Kubernetes environment.
But how do I configure my nightwatch.conf.js to realize that scenario?
My current configuration looks like this:
const config = {
  output_folder: "reports",
  custom_commands_path: "commands",
  // custom_assertions_path: 'assertions',
  live_output: false,
  page_objects_path: "pageobjects",
  disable_colors: false,
  selenium: {
    start_process: true,
    server_path: seleniumServer.path,
    log_path: "",
    host: "127.0.0.1",
    port: 4444
  },
  test_settings: {
    default: {
      globals: {
        waitForConditionTimeout: 30000,
        waitForConditionPollInterval: 500
      },
      screenshots: {
        enabled: true,
        on_failure: true,
        path: "screenshots"
      },
      //launch_url: "http://localhost:8087",
      //selenium_port: 4444,
      //selenium_host: "127.0.0.1",
      desiredCapabilities: {
        browserName: "phantomjs",
        javascriptEnabled: true,
        acceptSslCerts: true,
        "phantomjs.binary.path": phantomjs.path
      }
    },
First, make sure your remote Selenium server is accessible (check the host IP and port).
Second, configure the following:
const config = {
  output_folder: "reports",
  custom_commands_path: "commands",
  // custom_assertions_path: 'assertions',
  live_output: false,
  page_objects_path: "pageobjects",
  disable_colors: false,
  selenium: {
    start_process: false, // turn this off and comment out all of the config below
    // server_path: seleniumServer.path,
    // log_path: "",
    // host: "127.0.0.1",
    // port: 4444
  },
  test_settings: {
    default: {
      globals: {
        waitForConditionTimeout: 30000,
        waitForConditionPollInterval: 500
      },
      screenshots: {
        enabled: true,
        on_failure: true,
        path: "screenshots"
      },
      launch_url: "http://localhost:8087",
      selenium_port: 4444, // the Selenium port checked in the first step
      selenium_host: "127.0.0.1", // the Selenium host checked in the first step
      desiredCapabilities: {
        browserName: "phantomjs",
        javascriptEnabled: true,
        acceptSslCerts: true,
        "phantomjs.binary.path": phantomjs.path
      }
    },
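
As a follow-up idea (my own sketch, not part of the answer), the host and port can be taken from environment variables so the same nightwatch.conf.js works both locally and inside the cluster. SELENIUM_HOST/SELENIUM_PORT and the "selenium-hub" service name are assumptions about a typical Selenium Helm chart, not values from the post:

// nightwatch.conf.js (excerpt) - remote Selenium endpoint injected via the environment
const SELENIUM_HOST = process.env.SELENIUM_HOST || "selenium-hub"; // Kubernetes service name (assumed)
const SELENIUM_PORT = Number(process.env.SELENIUM_PORT) || 4444;

const config = {
  selenium: {
    start_process: false, // never start a local Selenium process in the cluster
  },
  test_settings: {
    default: {
      selenium_host: SELENIUM_HOST,
      selenium_port: SELENIUM_PORT,
    },
  },
};

module.exports = config;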

What are the differences between the MongoDB "strategy" field values in a replica set?

From what I have read there are 3 options: "ping", "strategy", and a default of null.
Can someone explain the differences between them?
I am using secondaryPreferred, so which one should I pick, given that all my servers are in the same data center?
When I choose "ping" I see that most of the reads go to only one secondary (and I have more than one).
So what am I missing?
Here is my configuration:
dbOptions: {
  db: {
    readPreference: 'secondaryPreferred'
  },
  server: {
    socketOptions: {
      keepAlive: 1000,
      connectTimeoutMS: 30000
    },
    readPreference: 'secondaryPreferred',
    strategy: 'ping'
  },
  replset: {
    socketOptions: {
      keepAlive: 1000,
      connectTimeoutMS: 30000
    },
    rs_name: 'test',
    readPreference: 'secondaryPreferred',
    strategy: 'ping'
  }
}
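
Not part of the original question, but one knob that may explain the skew under "ping": as far as I understand, the legacy driver only considers secondaries whose ping time falls within a small window (secondaryAcceptableLatencyMS, default 15 ms) of the fastest member, so in a single data center one "closest" secondary can end up taking most reads. A sketch of widening that window, assuming a driver version that still supports these options:

replset: {
  socketOptions: {
    keepAlive: 1000,
    connectTimeoutMS: 30000
  },
  rs_name: 'test',
  readPreference: 'secondaryPreferred',
  strategy: 'ping',
  secondaryAcceptableLatencyMS: 50 // default is 15 ms; 50 is only illustrative
}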
