Using DigitalOcean's managed Postgres database cluster and App Platform, I want to connect my Node.js app to my Postgres database.
At the moment, I'm getting the timeout error below. As part of debugging the misconfiguration, I want to verify that I'm using the bindable app environment variables correctly.
Here are some basic details. Using this information, can you help me to understand how to construct the bindable variable?
Database Cluster Name: demo
Database Pool Name: Demo
Example database Name: defaultdb
App Platform > App Name: demo1
I've tried an assortment of combinations, as shown below. When I run the echo $MY_VAR command, none of these values are interpolated; I just see the literal bindable variable syntax echoed back.
# Test using database cluster name only.
apps#client:~$ echo $TEST1
${demo.HOSTNAME}
# Test using _self, or the current context. I believe this is the app context.
apps#client:~$ echo $TEST2
${_self.HOSTNAME}
# Test using syntax of <db_cluster_name>.<db_pool_name>.<env_var>
apps#client:~$ echo $TEST3
${demo.Demo.HOSTNAME}
# Test using syntax of <app_name>.<db_pool_name>.<env_var>
apps#client:~$ echo $TEST4
${demo1.Demo.HOSTNAME}
Most of these are a bit ridiculous, and at this point I'm just experimenting with various combinations. Can you please help me understand the syntax that would output the database hostname when I run echo $DB_HOSTNAME from the DigitalOcean app console? Thank you.
Why am I trying to confirm the bindable variable syntax?
I'd like to use the app environment variables for databases. Given the connection timeout, I believe the issue might be that the CA certificate isn't available as part of the Postgres connection details. Since my bindable variable syntax isn't resolving, the CA cert isn't available when connecting to Postgres in my code.
The bindable variables should be populated in Node's environment variables and available through process.env.
// Sample Postgres configuration
const postgresConfig = {
  user: process.env.PG_USER,
  password: process.env.PG_PASSWORD,
  host: process.env.PG_HOST,
  database: process.env.PG_DATABASE,
  port: process.env.PG_PORT,
  ssl: {
    require: true,
    rejectUnauthorized: true,
    // The managed cluster's CA certificate, expected to arrive via a bindable variable.
    ca: process.env.PG_CA_CERT,
  },
};
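For completeness, this is a minimal sketch of how that configuration would be consumed, assuming the node-postgres (pg) package; the query is purely illustrative:

const { Pool } = require('pg');

// Hypothetical pool built from the config above; every value comes from process.env.
const pool = new Pool(postgresConfig);

async function ping() {
  // A trivial query just to confirm the connection and TLS settings work.
  const { rows } = await pool.query('SELECT NOW() AS now');
  console.log(rows[0].now);
  await pool.end();
}

ping().catch(console.error);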
Error message
When attempting to run a Postgres query, I get this error:
Error: connect ETIMEDOUT 10.x.y.z:25061
at TCPConnectWrap.afterConnect [as oncomplete] (node:net:1278:16) {
errno: -110,
code: 'ETIMEDOUT',
syscall: 'connect',
address: '10.x.y.x',
port: 25061
}
I see how it works now. When you are managing your App Platform application, under the Create menu button there is an option to "Create/Attach Database".
I didn't look there because I didn't think I would need to use the "Create" button to attach an existing database. It would be more intuitive to find this feature under the "Actions" menu; I fell victim to some peculiar UI design decisions.
Once the database is attached, all is well in the world.
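For anyone landing here later, a minimal sketch of what the resolved setup looks like, assuming the database component is attached to the app under the name demo and that app-level variables such as DB_HOSTNAME = ${demo.HOSTNAME} and PG_CA_CERT = ${demo.CA_CERT} are defined in the App Platform environment settings (the printed values below are hypothetical):

// Once the database is attached, the bindable variables resolve at deploy time
// and the interpolated values appear in process.env inside the container.
console.log(process.env.DB_HOSTNAME);
// e.g. demo-do-user-0000000-0.b.db.ondigitalocean.com (hypothetical hostname)
console.log((process.env.PG_CA_CERT || '').slice(0, 27));
// e.g. -----BEGIN CERTIFICATE-----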
Related
I'm attempting to connect to a new Aurora PostgreSQL instance with Babelfish enabled.
NOTE: I am able to connect to the instance using the pg library through the regular PostgreSQL endpoint on port 5432.
However, for this test, I am attempting to connect through the Babelfish TDS endpoint (port 1433) using the standard mssql package.
If I specify a database name (which I've verified is correct), I receive the error 'database "postgres" does not exist':
var config = {
  server: 'xxx.us-east-1.rds.amazonaws.com',
  database: 'postgres',
  user: 'xxx',
  password: 'xxx'
};
and the connection closes because the login fails.
If I omit the database property from the config, like:
var config = {
  server: 'xxx.us-east-1.rds.amazonaws.com',
  user: 'xxx',
  password: 'xxx'
};
It will connect. Also, I can use that connection to query basic things like SELECT CURRENT_TIMESTAMP and it works!
However, I can't access any tables.
If I run:
SELECT COUNT(1) FROM PERSON
I receive an error 'relation "person" does not exist'.
If I dot-notate it:
SELECT COUNT(1) FROM postgres.dbo."PERSON"
I receive an error "Cross DB query is not supported".
So, I can't connect to the specific database directly and if I connect without specifying a database, I can't cross-query to the table.
Has anyone done this yet?
Or, if not, any ideas on what to try next? I'm out of ideas.
Babelfish databases (that you connect to on port 1433) have nothing to do with PostgreSQL databases (port 5432). Essentially, all of Babelfish lives within a single PostgreSQL database (parameter babelfishpg_tsql.database_name).
You seem to have a single-db setup, because Cross DB query is not supported. With such a setup, you can only have a single database via port 1433 (apart from master and tempdb). You have to use CREATE DATABASE to create that single database (if it isn't already created; ask sys.databases).
I can't tell if it is supported to create a table in PostgreSQL (port 5432) and use it on port 1433 (the other way around is fine), but if so, you have to create it in a schema that you created with CREATE SCHEMA while connected on port 1433.
The answer was that I should be connecting to database "master".
Even though there is no database titled master in the instance, you can still connect to it.
Once connected, run the following:
select current_database();
It will show that you are connected to database "babelfish_db".
I don't know how that works or why a database would have an undocumented alias.
The bigger answer here is that cross-DB object references are not currently supported in Babelfish, outside your current SQL Server database.
This is currently being worked on. Stay tuned.
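For reference, a minimal sketch of the working connection described above, assuming the standard mssql package and the placeholder host and credentials from the question; the TLS options are assumptions and may need adjusting for your setup:

const sql = require('mssql');

const config = {
  server: 'xxx.us-east-1.rds.amazonaws.com',
  port: 1433,
  user: 'xxx',
  password: 'xxx',
  // Connect to "master" even though it doesn't appear as a database in the instance.
  database: 'master',
  options: { encrypt: true, trustServerCertificate: true }
};

async function main() {
  const pool = await sql.connect(config);
  // As noted above, this reports the underlying PostgreSQL database backing the T-SQL session.
  const result = await pool.request().query('select current_database() as db');
  console.log(result.recordset); // e.g. [ { db: 'babelfish_db' } ]
  await pool.close();
}

main().catch(console.error);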
I'm trying to use MongoDB's Client-Side Field Level Encryption feature with the community edition. I'm not interested in the auto-encryption feature. However, we need the auto-decryption feature, which, as per the docs, is possible in the community edition as well.
We generally use mongoose in our application, but I tried with the native Node.js driver as well. Here's the code I'm using to create the connection. It works fine if I comment out the autoEncryption object. Doing so allows me to encrypt manually, but that way we would also have to decrypt manually, which kind of defeats the purpose.
Some docs suggest adding bypassAutoEncryption: true along with an extraOptions object to the autoEncryption object. I've tried that as well, as seen below.
const secureClient = new MongoClient('mongodb://someUri', {
  useNewUrlParser: true,
  useUnifiedTopology: true,
  autoEncryption: {
    keyVaultNamespace,
    kmsProviders,
    bypassAutoEncryption: true,
    extraOptions: {
      // mongocryptdBypassSpawn: true,
      mongocryptdSpawnArgs: ["--pidfilepath=bypass-spawning-mongocryptd.pid", "--port", "27021"],
      mongocryptdURI: "mongodb://localhost:27021/db?serverSelectionTimeoutMS=1000"
    },
  }
});
My code works up to the point of generating the master key and data key and explicitly encrypting the data. Unfortunately, I haven't been able to set up auto-decryption. To configure the client with CSFLE options, autoEncryption has to be passed in the options.
But whenever I pass this option, I get the following exception:
(node:53721) UnhandledPromiseRejectionWarning: MongoServerSelectionError: connect ECONNREFUSED 127.0.0.1:27021
at Timeout._onTimeout (/Users/NiccsJ/ORI/code/testmongoEncryption/node_modules/mongodb/lib/sdam/topology.js:325:38)
at listOnTimeout (internal/timers.js:554:17)
at processTimers (internal/timers.js:497:7)
(Use `node --trace-warnings ...` to show where the warning was created)
I've followed almost all suggestions from the refs below. Surprisingly, the MongoDB Node.js driver documentation doesn't even mention bypassAutoEncryption. I just happened to stumble across the C++ and C# driver documentation (points 3 & 4 below), where I first found any reference to such an option:
https://github.com/mongodb/node-mongodb-native/blob/4ecaa37f72040ed8ace6eebc861b43ee9cb32a99/test/spec/client-side-encryption/tests/README.rst
https://github.com/Automattic/mongoose/issues/8167
http://mongocxx.org/mongocxx-v3/client-side-encryption/
https://mongodb.github.io/mongo-csharp-driver/2.11/reference/driver/crud/client_side_encryption/#explicit-encryption-and-auto-decryption
I was able to configure mongoShell with auto-decryption, which means my initial setup is not at fault. It also leads me to believe that there has to be a way to do it via code as well.
My stack:
nodeJS: > 14.7
mongoDB: 4.4
OS: macOS for dev, prod will be on AmazonLinux2
Drivers: mongoose, native-nodejs, mongodb-client-encryption
It's not clearly mentioned in the docs, but from what I've read, automatic decryption doesn't require the enterprise-only mongocryptd process.
As mentioned in the official MongoDB C driver documentation:
Although automatic encryption requires MongoDB 4.2 enterprise or a MongoDB 4.2 Atlas cluster, automatic decryption is supported for all users. To configure automatic decryption without automatic encryption, set bypass_auto_encryption=True in the options::auto_encryption class.
I believe the bypassAutoEncryption option was made for this very purpose.
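To make that concrete, here is a minimal sketch of explicit encryption with automatic decryption, assuming the mongodb and mongodb-client-encryption packages and a throwaway local KMS master key; the namespace, collection, and field values are only illustrative:

const { MongoClient } = require('mongodb');
const { ClientEncryption } = require('mongodb-client-encryption');
const crypto = require('crypto');

async function main() {
  const keyVaultNamespace = 'encryption.__keyVault';
  const kmsProviders = { local: { key: crypto.randomBytes(96) } }; // demo-only local master key

  // bypassAutoEncryption disables auto-encryption; per the driver docs quoted above,
  // reads through this client should still be auto-decrypted.
  const secureClient = new MongoClient('mongodb://localhost:27017', {
    autoEncryption: { keyVaultNamespace, kmsProviders, bypassAutoEncryption: true },
  });
  await secureClient.connect();

  const encryption = new ClientEncryption(secureClient, { keyVaultNamespace, kmsProviders });
  const keyId = await encryption.createDataKey('local');

  // Encrypt explicitly on write...
  const encryptedSsn = await encryption.encrypt('123-45-6789', {
    keyId,
    algorithm: 'AEAD_AES_256_CBC_HMAC_SHA_512-Deterministic',
  });
  const coll = secureClient.db('test').collection('people');
  await coll.insertOne({ name: 'alice', ssn: encryptedSsn });

  // ...and read back through the same client: ssn should come back auto-decrypted.
  console.log(await coll.findOne({ name: 'alice' }));

  await secureClient.close();
}

main().catch(console.error);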
Not exactly an answer, but this is the best resolution at the moment.
I reported this as a bug on the official JIRA.
Turns out, this apparently is a bug with the node-mongo-native library.
As per their comment, this should be fixed in the next release.
I'm trying to integrate my service with AWS Cassandra (Keyspaces) with the following config:
cassandra:
  default:
    advanced:
      ssl: true
      ssl-engine-factory: DefaultSslEngineFactory
      metadata:
        schema:
          enabled: false
      auth-provider:
        class: PlainTextAuthProvider
        username: "XXXXXX"
        password: "XXXXXX"
    basic:
      contact-points:
        - ${CASSANDRA_HOST:"127.0.0.1"}:${CASSANDRA_PORT:"9042"}
      load-balancing-policy:
        local-datacenter: "${CASSANDRA_DATA_CENTER}:datacenter1"
      session-keyspace: "keyspace"
Whenever I run the service, it fails to load with the following error:
Message: Could not reach any contact point, make sure you've provided valid addresses (showing first 1 nodes, use getAllErrors() for more): Node(endPoint=cassandra.eu-west-1.amazonaws.com/3.248.244.41:9142, hostId=null, hashCode=7296b27b): [com.datastax.oss.driver.api.core.DriverTimeoutException: [s0|control|id: 0x1f1c50a1, L:/172.17.0.3:54802 - R:cassandra.eu-west-1.amazonaws.com/3.248.244.41:9142] Protocol initialization request, step 1 (OPTIONS): timed out after 5000 ms]
There's very little documentation about the cassandra-micronaut library, so I'm not sure what I'm doing wrong here.
UPDATE:
For clarity, the values of our environment variables are as follows:
export CASSANDRA_HOST=cassandra.eu-west-1.amazonaws.com
export CASSANDRA_PORT=9142
export CASSANDRA_DATA_CENTER=eu-west-1
Note that even when I hard-coded the values into my application.yml, the problem continued.
I think you need to adjust your variables in this example. The common syntax for Apache Cassandra or Amazon Keyspaces is host:port. For Amazon Keyspaces the port is always 9142.
Try the following:
contact-points:
  - ${CASSANDRA_HOST}:${CASSANDRA_PORT}
or simply hard-code them at first:
contact-points:
  - cassandra.eu-west-1.amazonaws.com:9142
So this:
contact-points:
  - ${CASSANDRA_HOST:"127.0.0.1"}:${CASSANDRA_PORT:"9042"}
Doesn't match up with this:
Node(endPoint=cassandra.eu-west-1.amazonaws.com/3.248.244.41:9142,
Double-check which IP(s) and port Cassandra is broadcasting on (usually seen with nodetool status) and adjust the service to not look for it on 127.0.0.1.
I'm new to AWS, and I'm trying to deploy my local web app on AWS using ECR and ECS, but I got stuck when running a cluster; it throws an error about the PRISMA_CONFIG environment variable in the Prisma container.
In my local environment, I'm using Docker to build the app with Node.js, Prisma, and MongoDB, and it works fine.
Now on ECS, I created a task definition, and for the Prisma container I tried to copy the YAML config from my local docker-compose.yml file to make it work.
There is a field called "Environment variables"; I've entered the value there, but it's just not working: it throws the error while the cluster is running, and then the task stops.
The YAML spans multiple lines, but the input box supports a single-line string only.
The variable key is PRISMA_CONFIG,
and the following are the values I've already tried:
| port: 4466\n databases:\n default:\n connector: mongo\n uri: mongodb://prisma:prisma#mongo\n
| \nport: 4466 \ndatabases: \ndefault: \nconnector: mongo \nuri: mongodb://prisma:prisma#mongo
|\nport: 4466\n databases:\n default:\n connector: mongo\n uri: mongodb://prisma:prisma#mongo
\nport: 4466\n databases:\n default:\n connector: mongo\n uri: mongodb://prisma:prisma#mongo
port: 4466\n databases:\n default:\n connector: mongo\n uri: mongodb://prisma:prisma#mongo\n
And these are the errors:
Exception in thread "main" java.lang.RuntimeException: Unable to load Prisma config: java.lang.RuntimeException: No valid Prisma config could be loaded.
expected a comment or a line break, but found p(112)
expected chomping or indentation indicators, but found \(92)
I expected all containers to run without errors, but instead the container stopped after running for a minute.
Please help with this,
or suggest another way to deploy to AWS.
Thank you very much.
I've been looking for a similar solution to load the prisma config without the multiline string.
There are repositories that load the prisma environment variables separately without a prisma config:
Check out this repo for example:
https://github.com/akoenig/prisma-docker-compose/blob/master/.prisma.env
Here akoenig sets the following variables using an env_file. So, I'm assuming you can just pass these environment variables in separately to achieve what Prisma is looking for.
# CONTENTS OF env_file
PORT=4466
SQL_CLIENT_HOST_CLIENT1=database
SQL_CLIENT_HOST_READONLY_CLIENT1=database
SQL_CLIENT_HOST=database
SQL_CLIENT_PORT=3306
SQL_CLIENT_USER=root
SQL_CLIENT_PASSWORD=prisma
SQL_CLIENT_CONNECTION_LIMIT=10
SQL_INTERNAL_HOST=database
SQL_INTERNAL_PORT=3306
SQL_INTERNAL_USER=root
SQL_INTERNAL_PASSWORD=prisma
SQL_INTERNAL_DATABASE=graphcool
CLUSTER_ADDRESS=http://prisma:4466
SQL_INTERNAL_CONNECTION_LIMIT=10
SCHEMA_MANAGER_SECRET=graphcool
SCHEMA_MANAGER_ENDPOINT=http://prisma:4466/cluster/schema
#CLUSTER_PUBLIC_KEY=
BUGSNAG_API_KEY=""
ENABLE_METRICS=0
JAVA_OPTS=-Xmx1G
This is for a MySQL database, so you would need to tailor it to suit your values. But in theory you should just be able to pass these variables one by one into single variables in AWS's GUI.
I've also asked this question on the Prisma Slack channel and am waiting to see if they have other suggestions: https://prisma.slack.com/archives/CA491RJH0/p1569689413383000
Let me know how it goes.
Not an expert here, but have you set up the environment variable PRISMA_API_MANAGEMENT_SECRET? You would have defined the secret when you configured your Fargate instance.
Have a look at the following article:
https://www.prisma.io/tutorials/deploy-prisma-to-aws-fargate-ct14
I'm having trouble getting a connection established to my OracleDB that resides on a different system. From what I've learned from the Oracledb Node module Documentation, the connection setup should look like this:
oracledb.getConnection(
  {
    user          : "hr",
    password      : "welcome",
    connectString : "localhost/XE"
  },
  function (err, connection) {
    // fun querying goes here
  }
);
I've reviewed the oracledb module documentation; however, I cannot seem to find the syntax I need to follow when given certain variables. In order to establish a connection to the DB, I need to provide the following:
dbUserID: blah blah (maps to 'user' in Object),
dbPassword: Blahblahblah (maps to 'password' in Object),
oraclePort: (1521 as standard, but not sure where this goes in the object),
dbHostName: db.server.com (maps to 'localhost' in Object),
dbInstance: DBINSTANCENAME (not sure where this goes in object)
I'm fairly certain the hostname, port and instance are to be used in the 'connectString' section of the object, however I'm unsure how it should be formatted.
Any help and suggestions on how I might go about getting myself connected would be greatly appreciated!
EDIT: I'm also receiving this error when the server is started up for the first time: [Error: ORA-12162: TNS:net service name is incorrectly specified] I thought I had solved this error, but apparently I have not! Any suggestions would be appreciated.
Try the following:
oracledb.getConnection(
  {
    user          : "hr",
    password      : "welcome",
    connectString : "db.server.com:1521/DBINSTANCENAME"
  },
  function (err, connection) {
    // use the connection here
  }
);
Use 'lsnrctl services' on the DB server to check the service names available. The "Easy Connect" syntax uses the service name, not an instance name.
Check you have the right ports open.
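If you prefer promises, here is a minimal async/await sketch of the same connection, assuming the node-oracledb package and the placeholder host, credentials, and service name from the question:

const oracledb = require('oracledb');

async function run() {
  let connection;
  try {
    connection = await oracledb.getConnection({
      user: 'hr',
      password: 'welcome',
      // Easy Connect syntax: host[:port][/service_name]
      connectString: 'db.server.com:1521/DBINSTANCENAME',
    });
    // A trivial query to confirm the connection works.
    const result = await connection.execute('SELECT 1 FROM DUAL');
    console.log(result.rows);
  } finally {
    if (connection) {
      await connection.close();
    }
  }
}

run().catch(console.error);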