I am using mongoose version 4.13.6 and MongoDB from Compose, and below is my code for connecting to the Mongo database.
mongoose.createConnection('mongodb://[user]:[pass]@[host1]:[port1],[host2]:[port2]/dbname?ssl=true', {});
But when I run this I am getting the following error:
MongoError: no primary found in replicaset
I don't know why that is; can anyone help me with this?
So the short answer is this:
... all drivers are not equal and some make assumptions when multiple hosts are specified. For example, the Meteor/Node.js MongoDB driver sees two hosts and assumes it is talking to a replicaset. Upon connecting the driver asks which host is master and then errors out because neither of them are. The simple fix for this is to use one host in the URI ..
https://www.compose.com/articles/connecting-to-the-new-mongodb-at-compose/#drivingtoyourfirstdatabase
So when you create a connection, simply use one of the connection URIs for the database you want to connect to like:
var uri = "mongodb://<username>:<password>@[host]:[port]/<db_name>?ssl=true";
mongoose.createConnection(uri);
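For example, a minimal sketch of a single-host connection (all bracketed values are placeholders for your Compose deployment; the error/open handlers are only there to surface connection problems):
var mongoose = require('mongoose');

// Placeholders: use one host/port pair from your Compose deployment, not both.
var uri = 'mongodb://<username>:<password>@<host>:<port>/<db_name>?ssl=true';
var db = mongoose.createConnection(uri);

db.on('error', function (err) {
  console.error('connection error:', err);
});
db.once('open', function () {
  console.log('connected to Compose MongoDB');
});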
I'm attempting to connect to a new Aurora PostgreSQL instance with Babelfish enabled.
NOTE: I am able to connect to the instance using the pg library through the normal port 5432 (the regular PostgreSQL endpoint).
However, for this test, I am attempting to connect through the Babelfish TDS endpoint (1433) using the standard mssql package.
If I specify a database name (it is correct), I receive the error 'database "postgres" does not exist':
var config = {
  server: 'xxx.us-east-1.rds.amazonaws.com',
  database: 'postgres',
  user: 'xxx',
  password: 'xxx'
};
and the connection closes because the connection attempt fails.
If I omit the database property in the config, like:
var config = {
  server: 'xxx.us-east-1.rds.amazonaws.com',
  user: 'xxx',
  password: 'xxx'
};
It will connect. Also, I can use that connection to query basic things like SELECT CURRENT_TIMESTAMP and it works!
However, I can't access any tables.
If I run:
SELECT COUNT(1) FROM PERSON
I receive an error 'relation "person" does not exist'.
If I dot-notate it:
SELECT COUNT(1) FROM postgres.dbo."PERSON"
I receive an error "Cross DB query is not supported".
So, I can't connect to the specific database directly and if I connect without specifying a database, I can't cross-query to the table.
Has anyone done this yet?
Or, if not, any ideas on helping me figure out what to try next? I'm out of ideas.
Babelfish databases (that you connect to on port 1433) have nothing to do with PostgreSQL databases (port 5432). Essentially, all of Babelfish lives within a single PostgreSQL database (parameter babelfishpg_tsql.database_name).
You seem to have a single-db setup, because you get the error Cross DB query is not supported. With such a setup, you can only have a single database via port 1433 (apart from master and tempdb). You have to use CREATE DATABASE to create that single database (if it isn't already created; check sys.databases).
I can't tell if it is supported to create a table in PostgreSQL (port 5432) and use it on port 1433 (the other way around is fine), but if so, you have to create it in a schema that you created with CREATE SCHEMA while connected on port 1433.
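If you are not sure what that single database is called, one way to check from Node.js is to connect without a database property and query sys.databases. This is only a sketch using the same mssql package as the question; the endpoint and credentials are placeholders:
const sql = require('mssql');

// Placeholders: substitute your Babelfish TDS endpoint and credentials.
const config = {
  server: 'xxx.us-east-1.rds.amazonaws.com',
  user: 'xxx',
  password: 'xxx'
};

// Connect without naming a database, then list what Babelfish exposes.
sql.connect(config)
  .then(pool => pool.request().query('SELECT name FROM sys.databases;'))
  .then(result => console.log(result.recordset)) // e.g. master, tempdb, and the single user database
  .catch(err => console.error(err));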
The answer was that I should be connecting to database "master".
Even though there is no database titled master in the instance, you still do connect to it.
Once connected, run the following:
select current_database();
It will show that you are connected to the database "babelfish_db".
I don't know how that works or why a database would have an undocumented alias.
The bigger answer here is that cross-DB object references are not currently supported in Babelfish, outside your current SQL Server database.
This is currently being worked on. Stay tuned.
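For reference, putting that fix together, the working connection looks roughly like this (endpoint and credentials are placeholders; this is a sketch, not the exact configuration used):
const sql = require('mssql');

const config = {
  server: 'xxx.us-east-1.rds.amazonaws.com', // Babelfish TDS endpoint (port 1433)
  database: 'master',                        // connect to "master", not "postgres"
  user: 'xxx',
  password: 'xxx'
};

sql.connect(config)
  .then(pool => pool.request().query('select current_database();'))
  .then(result => console.log(result.recordset)) // reports the backing "babelfish_db" database
  .catch(err => console.error(err));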
I have created a Neo4j and GraphQL application with Neo4j 4.0. In my application, I use two Neo4j databases. These instances run in a Docker container on my PC. But when I try to run a query using the GraphQL Playground, the GraphQL server gives the following error:
"Could not perform discovery. No routing servers available. Known routing table: RoutingTable[database=default database, expirationTime=0, currentTime=1592037819743, routers=[], readers=[], writers=[]]"
I created the Neo4j driver instance and session instance as follows:
const driver = neo4j.driver(
  process.env.NEO4J_URI || "neo4j://localhost:7687",
  neo4j.auth.basic(
    process.env.NEO4J_USER,
    process.env.NEO4J_PASSWORD
  )
);

const session = driver.session({
  database: 'mydb',
});
I couldn't find any way to fix this issue. Can someone help me fix it? Thank you.
If you use a single server, please use bolt:// as the protocol. Then it will not ask the server for routing tables.
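For example, a minimal sketch of the driver setup from the question switched to bolt:// (the database name 'mydb' and the environment variables are taken from the question):
const neo4j = require('neo4j-driver');

// bolt:// talks to the single instance directly and skips routing-table discovery
const driver = neo4j.driver(
  process.env.NEO4J_URI || 'bolt://localhost:7687',
  neo4j.auth.basic(process.env.NEO4J_USER, process.env.NEO4J_PASSWORD)
);

const session = driver.session({ database: 'mydb' });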
I have made a cluster on the MongoDB website, whitelisted my IP address, added the user, and connected my application, but it is not giving me the 'Standard URI connection' option (maybe their website got updated); instead it is giving me one with srv. So when I use it in the mongoose.connect() function it gives me the error: "Invalid Uri, Must begin with mongodb://"...
I have been trying to connect to MongoDB for a long time, but it fails every time.
You have created a cluster on MongoDB Atlas, right?
If connecting from your application:
change the Node.js driver version to 2.2.12 or later on the "Connect your application" screen.
If using the mongo shell:
On the "Connect to Cluster0" page, select "I have the mongo shell installed", then select mongo shell version "3.4 or earlier". You will get the desired format.
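Once you pick the older driver version, Atlas shows a standard connection string roughly like the one below; the hosts, replica set name, and credentials here are placeholders, so copy the real values from the connect screen:
const mongoose = require('mongoose');

// Placeholder hosts and replica set name; use the exact string Atlas generates for your cluster.
const uri =
  'mongodb://<user>:<password>@cluster0-shard-00-00.mongodb.net:27017,' +
  'cluster0-shard-00-01.mongodb.net:27017,' +
  'cluster0-shard-00-02.mongodb.net:27017/<db_name>' +
  '?ssl=true&replicaSet=Cluster0-shard-0&authSource=admin';

mongoose.connect(uri);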
I wrote a Node web app and created a MongoDB database on my local system. I was using the following code to connect to the local MongoDB from Node.js:
var mongoose = require('mongoose');
mongoose.connect('mongodb://localhost/db_name'); //local
And everything was working fine on my local machine. So I went on and created an mLab account and created a database. But when I tried to run the code with the changed connection string, connections are still established, I believe, but the find and save requests are not invoking their callbacks, and no errors show up. All requests are timing out.
var mongoose = require('mongoose');
mongoose.connect("mongodb://user:pass@ds036789.mlab.com:36789/db_name"); //mlab
Another thing I noticed is that I cannot ping ds036789.mlab.com, but TCP connections succeed when I try the nc command:
nc -w 3 -v ds036789.mlab.com 36789
I even tried deploying to azure. Which doesn't work either. Any help is much appreciated. Thanks.
EDIT:
Not being able to ping was due to the fact that I used Azure hosting; that is expected. I also found out that I get this error while trying to connect:
connection error: { [MongoError: auth failed] name: 'MongoError', ok: 0, errmsg: 'auth failed', code: 18 }
Credentials are correct though.
From the error message it seems like you are using invalid auth details.
This most likely happens when you do not create a username and password for the individual database, i.e. db_name in your case.
Check your mLab account, create a username and password for the db_name database, and update your connection string.
According to the error information, as @Astro said, it seems to be caused by invalid auth credentials, i.e. the user/password that should be created for the database.
Did you create a new user for connecting to the database, rather than using your mLab account user? See the figures below.
Fig 1. A database user is required for connecting
Fig 2. Users list for the database
Hope it helps.
I figured out the issue; it wasn't an issue with the credentials. It was an issue with the mongoose version: the version I was using didn't support the authentication. I had to remove the package and reinstall the latest version with:
npm install mongoose@latest
Hope it helps someone. And thanks for the answers :)
I'm running py2neo 2.0.4 against a remote Neo4j 2.1.6 database. I'm able to connect to the database with some commands, but not with all of them.
Using the same connection URI in both cases:
This works fine.
test = self.graph_db.find_one('Node')
This does not.
test = self.graph_db.cypher.execute('MATCH (n) RETURN n LIMIT 1')
Regardless of the actual contents of the query, I get the same connection refused results.
With the help of my service provider for Neo4j, we were able to determine the error and a fix.
This is a known flaw in pre-2.2 Neo4j. To resolve this error, use the py2neo rewrite function:
py2neo.rewrite(('http', '0.0.0.0', 7474), ('https', {host}, {port}))