Unable to understand how to install libmongocrypt - Node.js

I am trying to set up a MongoDB connection in Node.js with the autoEncryption option, and of course the driver tries to connect to port 27020. I don't have the libmongocrypt service running, so the connection generates the following error:
ECONNREFUSED 127.0.0.1:27020
I am trying to implement manual encryption with the bypassAutoEncryption flag.
I am aware we have to use this library, but it appears to be a C library, and I am still clueless as to how I can set up libmongocrypt in my local environment.
OS: Windows 10
MONGO VERSION: 5.0
Any help would be appreciated! Thank you

I'm not familiar with Node itself, but here are some common details about this workflow (writing it as an answer since it's quite long):
libmongocrypt is a C library used by the driver; it is usually embedded in the driver (unless Node doesn't support that for some reason).
ECONNREFUSED 127.0.0.1:27020: this error means that a process required for automatic encryption, called mongocryptd, is not running. It is not the same as the libmongocrypt library (they are completely different things). You can launch this process by:
Launching it manually. The binary is located at SERVER_PATH\bin\mongocryptd.exe. Use this only as a quick check.
Setting autoEncryption.extraOptions.mongocryptdSpawnPath to the path of mongocryptd.exe (a minimal sketch follows below); you can find some details here.
It's worth mentioning that auto encryption (along with mongocryptd) is available only in the Enterprise server.
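A minimal sketch of the mongocryptdSpawnPath approach, assuming a default Windows install location (the path, uri, keyVaultNamespace, and kmsProviders values are placeholders, not from the original question):

const { MongoClient } = require('mongodb');

const client = new MongoClient(uri, {
  autoEncryption: {
    keyVaultNamespace: 'encryption.__keyVault', // placeholder namespace
    kmsProviders, // your KMS configuration
    extraOptions: {
      // Point the driver at the mongocryptd binary so it can spawn it itself.
      mongocryptdSpawnPath:
        'C:\\Program Files\\MongoDB\\Server\\5.0\\bin\\mongocryptd.exe', // placeholder path
    },
  },
});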

I also had the same problem, but my app runs in a Cloud Function (like AWS Lambda), so installing anything is not possible.
Although docs and forums say that Atlas supports auto-encryption, I couldn't make it work. So I tried explicit encryption, which works fine.
So you just need to specify bypassAutoEncryption attribute:
const secureClient = new MongoClient(connectionString, {
  useNewUrlParser: true,
  useUnifiedTopology: true,
  autoEncryption: {
    bypassAutoEncryption: true, // explicit encryption
    keyVaultNamespace,
    kmsProviders,
    // schemaMap: userSchema,
    // extraOptions,
  },
});
And encrypt the data yourself (which I find better, since I have more control):
const randomEnc = {
  algorithm: 'AEAD_AES_256_CBC_HMAC_SHA_512-Random',
  // keyId: [new Binary(Buffer.from(dataKey, 'base64'), 4)], // I also couldn't make this work
  keyAltName: 'demo-data-key',
};
const writeResult = await secureClient
  .db(db)
  .collection(coll)
  .insertOne({
    name: 'Jon Doe',
    ssn: await encryption.encrypt(241014209, randomEnc),
    bloodType: await encryption.encrypt('AB+', randomEnc),
    'key-id': 'demo-data-key',
    medicalRecords: await encryption.encrypt([{ weight: 180, bloodPressure: '120/80' }], randomEnc),
    insurance: {
      policyNumber: await encryption.encrypt(123142, randomEnc),
      provider: 'MaestCare',
    },
  });
Decryption is automatic; you don't need to do anything.
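For reference, here is a minimal sketch of how the encryption helper used above could be created, assuming the same keyVaultNamespace and kmsProviders that were passed to the client:

const { ClientEncryption } = require('mongodb-client-encryption');

// Explicit-encryption helper bound to the connected client.
const encryption = new ClientEncryption(secureClient, {
  keyVaultNamespace,
  kmsProviders,
});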

Related

Connecting Sequelize to MSSQL with Windows Authentication

I need help finding the right libraries to connect Sequelize to an MSSQL database using Windows Authentication.
I have a requirement for a client where I cannot use passwords to connect to the database on the server. Their required method is to connect to the MSSQL database using Windows Authentication.
The problem I have is that we are using Sequelize, and the only dialect using msnodesqlv8 (which supports Windows Authentication) that I was able to find is no longer maintained: https://www.npmjs.com/package/sequelize-msnodesqlv8
Tedious, the default driver for Sequelize's mssql dialect, does not support Windows Authentication without a password. It has an NTLM option, but that also requires a password.
Unfortunately, after a lot of searching for solutions, I was unable to find any. The only two viable options are to either build a dialect library for Sequelize using msnodesqlv8 or to create a custom version of the tedious driver using the sspi-client library.
I ended up with the latter approach, a custom version of the tedious driver with sspi-client (https://www.npmjs.com/package/@sregger/sspi-client), thanks to some legacy code samples and help from the Tediousjs community. One word of caution: if you are using sspi-client, Worker will not work. To use Worker, use the custom library https://www.npmjs.com/package/shinobi-worker, otherwise you will get the error "Module did not self-register".
I found the solution with this configuration:
const Sequelize = require('sequelize');

const sequelize = new Sequelize({
  database: 'DB_NAME',
  host: 'DB_HOST',
  dialect: 'mssql',
  dialectOptions: {
    authentication: {
      type: 'ntlm',
      options: {
        userName: 'DB_USER',
        password: 'DB_PASS',
        domain: 'MY_COMPANY_DOMAIN',
      },
    },
    options: {
      port: 1433,
      requestTimeout: 60000,
    },
  },
});
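You can verify connectivity before running queries (a sketch, assuming the sequelize instance above):

// Quick connectivity check for the NTLM configuration.
sequelize.authenticate()
  .then(() => console.log('Connected to MSSQL via Windows Authentication.'))
  .catch((err) => console.error('Connection failed:', err));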
More info: https://github.com/vishwasjois/sequelize/blob/988e754c6eef323b1a9dc11f5bee3fb535579da8/docs/upgrade-to-v5.md#dialect-specific
Hope this helps!

How do I manage Client-Side Encryption Data Keys using the Driver?

I'm trying to implement the automatic client-side field level encryption feature (available in MongoDB 4.2+ Enterprise) in my Node.js project.
There doesn't seem to be any documentation on how to use the 3.3 Node Driver (compatible with Mongo 4.2) to handle Data Keys.
This procedure is described here: https://docs.mongodb.com/manual/tutorial/manage-client-side-encryption-data-keys/ and says:
For guidance on data key management using a 4.2-compatible driver, see the documentation for that driver.
I tried searching the Node driver API docs (http://mongodb.github.io/node-mongodb-native/3.3/api/) for how to create and manage data keys, but I was unable to find any of the methods. The docs do describe how to configure the client to use automatic field level encryption, but this requires the data keys.
"There doesn't seem to be any documentation on how to use the 3.3 Node Driver (compatible with Mongo 4.2) to handle Data Keys."
You can see some usage snippets in the driver code base tests at node-mongodb-native/test/functional/client_side_encryption. This should be quite similar to how the mongo shell uses it. Following the mongo shell example Connect to a MongoDB Cluster with Automatic Client-Side Encryption Enabled, you can do the same with the MongoDB Node.js driver as below:
const mongoClient = new MongoClient("<ATLAS_URI_HERE>", {
  useNewUrlParser: true,
  useUnifiedTopology: true,
  autoEncryption: {
    keyVaultNamespace: "encryption.__dataKeys",
    kmsProviders: {
      local: {
        // BASE64-ENCODED-96-BYTE-LOCAL-KEY
        key: fs.readFileSync(MASTER_ENCRYPTION_KEY_PATH),
      },
    },
    schemaMap: {
      "hr.employees": {
        bsonType: "object",
        properties: {
          ssn: {
            encrypt: {
              bsonType: "string",
              algorithm: "AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic",
            },
          },
        },
      },
    },
  },
});
The example above uses kmsProviders with a locally managed keyfile; for AWS Key Management Service, see AWS KMS for kmsProviders.
See also the Client-Side Field Level Encryption Guide for an end-to-end procedure for configuring field level encryption using select MongoDB 4.2-compatible drivers (click on the Node.js tab to see the examples in Node.js).
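Since the question is specifically about managing data keys, here is a minimal sketch of creating one with the ClientEncryption helper from the mongodb-client-encryption package (MASTER_ENCRYPTION_KEY_PATH is reused from the example above; the keyAltNames value is a hypothetical label):

const fs = require('fs');
const { MongoClient } = require('mongodb');
const { ClientEncryption } = require('mongodb-client-encryption');

async function createDataKey() {
  const client = new MongoClient("<ATLAS_URI_HERE>", {
    useNewUrlParser: true,
    useUnifiedTopology: true,
  });
  await client.connect();

  const clientEncryption = new ClientEncryption(client, {
    keyVaultNamespace: "encryption.__dataKeys",
    kmsProviders: {
      local: { key: fs.readFileSync(MASTER_ENCRYPTION_KEY_PATH) },
    },
  });

  // createDataKey resolves with the new key's UUID (a BSON Binary, subtype 4),
  // which can then be referenced from the schemaMap or by its alternate name.
  const dataKeyId = await clientEncryption.createDataKey("local", {
    keyAltNames: ["demo-data-key"], // hypothetical alternate name
  });
  console.log("Created data key:", dataKeyId);
  await client.close();
}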

How to Move a Node-RED Project

I have a Node-RED project that was working fine. I backed up the .node-red folder and have now reinstalled Ubuntu and Node-RED on the same computer. I'm using the node-red-contrib-postgrestor node to access PostgreSQL. After installing all the required nodes and copying the flows.json and settings.js files to the .node-red folder, I cannot get the flow to authenticate with the PostgreSQL database.
I've edited the node properties, re-entered the user name / password info, and re-deployed. I've tried changing the credentialSecret property in the settings.js file to a new secret string, and I've also tried setting the property to false. The flows_cred.json file is either: (1) not created, (2) created with contents "{}", or (3) created with encrypted data. In none of these cases can I get the node to authenticate with the database.
I tried putting debugging statements in the postgrestor.js file, and it shows that the user name and password are "undefined".
function PostgrestorNode(config) {
  const node = this;
  RED.nodes.createNode(node, config);
  node.topic = config.topic;
  node.config = RED.nodes.getNode(config.postgresDB);
  node.on('input', (msg) => {
    node.warn(`user: ${node.config.user}, password: ${node.config.password}`);
    const query = mustache.render(config.query, {msg});
    const pool = new Pool({
      user: node.config.user,
      password: node.config.password,
      host: node.config.host,
      port: node.config.port,
      database: node.config.database,
      ssl: node.config.ssl,
      max: node.config.max,
      min: node.config.min,
      idleTimeoutMillis: node.config.idle
    });
    // ... (rest of the handler omitted)
  });
}
The Node-RED program was originally working on Ubuntu 16.04; the problem occurred when I upgraded to 18.04. I've since gone back and reinstalled 16.04, thinking that was the problem, but it doesn't work either way now.
I have read the following:
https://github.com/node-red/node-red/wiki/Design%3A-Encryption-of-credentials
The last sentence, "Note that once you set credentialSecret you cannot change its value.", may be my problem, but surely there has to be a way to change the logon credentials for the database.
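For reference, this is the setting I've been changing in settings.js (a sketch; the secret value is a placeholder):

// settings.js
module.exports = {
  // Must match the secret that originally encrypted flows_cred.json;
  // otherwise Node-RED cannot decrypt the stored credentials.
  credentialSecret: "my-original-secret",
  // ... other settings
};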
Any ideas would be much appreciated.
I have now discovered that the node-red-contrib-postgres node does not experience the connection problem with the database. There must be a specific issue with the node-red-contrib-postgrestor node.

Trying to insert data into BigQuery fails from container engine pod

I have a simple Node.js application that tries to insert some data into BigQuery. It uses the provided gcloud Node.js library.
The BigQuery client is created like this, according to the documentation:
google.auth.getApplicationDefault(function(err, authClient) {
  if (err) {
    return cb(err);
  }
  let bq = BigQuery({
    auth: authClient,
    projectId: "my-project"
  });
  let dataset = bq.dataset("my-dataset");
  let table = dataset.table("my-table");
});
With that I try to insert data into BigQuery:
table.insert(someRows).then(...)
This fails, because the BigQuery client returns a 403 telling me that the authentication is missing the required scopes. The documentation tells me to use the following snippet:
if (authClient.createScopedRequired &&
    authClient.createScopedRequired()) {
  authClient = authClient.createScoped([
    "https://www.googleapis.com/auth/bigquery",
    "https://www.googleapis.com/auth/bigquery.insertdata",
    "https://www.googleapis.com/auth/cloud-platform"
  ]);
}
This didn't work either, because the if statement never executes. I skipped the if and set the scopes every time, but the error remains.
What am I missing here? Why are the scopes always wrong regardless of the authClient configuration? Has anybody found a way to get this or a similar gcloud client library (like Datastore) working with the described authentication scheme on a Container Engine pod?
The only working solution I found so far is to create a JSON keyfile and provide it to the BigQuery client, but I'd rather create the credentials on the fly than have them next to the code.
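For reference, the keyfile variant looks roughly like this (a sketch; the path is a placeholder):

// Authenticate with an explicit service-account keyfile instead of the
// application-default credentials.
let bq = BigQuery({
  projectId: "my-project",
  keyFilename: "/secrets/bigquery-keyfile.json" // placeholder path
});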
Side note: the Node service works flawlessly without providing the auth option to BigQuery when running on a Compute Engine VM, because there the authentication is negotiated automatically by Google.
Baking JSON keyfiles into the images (containers) is a bad idea (security-wise, as you said).
You should be able to add these kinds of scopes to the Kubernetes cluster during its creation (they cannot be adjusted afterwards).
Take a look at the docs for the --scopes flag.
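For example, the scopes can be granted when the cluster is created (a sketch; the cluster name is a placeholder):

# Grant BigQuery access to the node VMs at cluster creation time.
gcloud container clusters create my-cluster \
  --scopes=https://www.googleapis.com/auth/bigquery,https://www.googleapis.com/auth/cloud-platform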

sails.js - I want to add DB connection dynamically after sails lift

During sails lift I don't yet have all the connection information for my DB.
Is there a way to either have config values dependent on promises or dynamically create a connection after sails lift has completed?
I would obviously have to add a policy or hook to handle requests to routes needing the model if it isn't available yet, but at this point I don't see how to even let Sails lift until I already know the connection info (it must be in the configs).
I'm hoping I'm missing a way to dynamically create connections and wire models to them.
Update: In Sails v1.0 / Waterline v0.13, this can be accomplished by accessing the stateless, underlying driver; e.g. sails.getDatastore().driver. This can be used in any database adapter that supports the new stateless driver interface, including MySQL, PostgreSQL, and MongoDB.
Prior to Sails v1.0, this was not officially supported in Sails or Waterline directly, but depending on your use case there are a couple of good solutions. If your use case is a handful of dynamic connections for the purpose of development (e.g. in an auto-reload plugin), and you're willing to live on the edge, you can take advantage of a private API as an immediate-term workaround: sails.hooks.orm.reload(). However, you definitely don't want to use that in production, since it literally flushes the entire ORM.
If you are going to be dealing with a larger number (say, more than 10 unique configurations) of runtime-dynamic datastore configurations during the lifetime of the running Node process, that's a different story. In that case, I would recommend using the relevant raw driver (e.g. https://github.com/felixge/node-mysql) to summon/release those dynamic connections from a pool directly via a service. You can still use your normal models in your app for connections which are static; you will just be best off implementing dynamic database connections separately in your service. For example, if you were building a hosted version of phpMyAdmin, you might use a lower-level NPM package to dynamically fetch information about users' tables, but you'd still probably want to have Account and Database models that refer to tables/collections stored in your own database.
A more integrated solution for Sails is in the works. This ability to tap into the raw connection lifecycle and access it from userland is a prerequisite for built-in transaction support, which is something we expect to land in Sails/Waterline some time in the second half of 2016. In the mean time, if you encapsulate your logic to summon/release connections via service methods as suggested above, you'll have a working solution for now and your business logic should be more or less future proof (when you upgrade, you'll just need to swap out the implementation in your service). Hope that helps!
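To make the service-based approach above concrete, here is a minimal sketch using the raw mysql driver (the service name and its methods are illustrative, not an official Sails API):

// api/services/DynamicDb.js
var mysql = require('mysql');
var pools = {};

module.exports = {
  // Summon (or reuse) a pool for a runtime-dynamic connection config.
  getPool: function (config) {
    var key = config.host + '/' + config.database;
    if (!pools[key]) {
      pools[key] = mysql.createPool(config);
    }
    return pools[key];
  },
  // Run a query against the dynamic datastore.
  query: function (config, sql, cb) {
    this.getPool(config).query(sql, cb);
  }
};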
Yes; two things in sails.js allow you to do this. One currently exists, and one is upcoming.
https://github.com/sgress454/sails-hook-autoreload. This module watches for config file changes on disk and will re-load your ORM models when a file changes.
I am working on this exact feature right now, and my plan is to publish my work at the end of next week. I agree that it will be very useful.
The API will allow you to define new Models and Connections in the database on the fly. sails.js lifecycle callbacks handle updating the ORM and adapters and so forth. It is event-based and will allow you to manually fire events to update the ORM/Connections, like so:
sails.emit('hook:dynamic-orm:reload')
Is this what you need?
I have found a workaround for a MySQL DB.
Important: in my case I will be switching databases, but all the databases have the same schema; the only differences are their names and the data they contain. Make sure to add any error handling you need.
In config/connections.js:
Disable pooling:
mysql_database: {
  adapter: 'sails-mysql',
  host: 'localhost',
  user: 'root', // optional
  password: '12345', // optional
  database: 'db1', // optional
  pool: false
},
Now navigate to node_modules/sails-mysql/lib/connections/spawn.js and add connectionObject.config = sails.SwitchDbConfig before the connection is created:

connectionObject.config = sails.SwitchDbConfig;
var conn = mysql.createConnection(connectionObject.config);
conn.connect(function (err) {
  afterwards(err, conn);
});
Now, finally, set sails.SwitchDbConfig from anywhere (a service, a controller, etc.) as:
sails.SwitchDbConfig = {
  pool: false,
  connectionLimit: 5,
  waitForConnections: true,
  adapter: 'sails-mysql',
  host: 'localhost',
  user: 'root',
  password: '12345',
  database: sails.DB_NAME,
  identity: 'mysql_database'
};
And lastly, if you find anything wrong or anything that needs to be updated, please ping me.
