mongoose.connect("mongodb://localhost/Company")
According to the documentation, executing the above command connects to the Company database if it exists; otherwise the database is created first and then the connection is made.
My question is: where will this newly created database live, in the MongoDB data folder or in the Node.js application folder?
Your application is connecting to localhost, where you have a MongoDB server running on the default port (27017). If you were connected to a MongoDB cluster through a mongos process, you would have to check where the mongod processes (the database processes themselves) are running. Let's take the simple case where your mongod is running locally.
Assuming you have started your MongoDB instance with all default values, the database files are created in /data/db (\data\db on Windows).
This means that in your case, you should see the "Company" db files in this folder, something like:
/data/db/Company.0
/data/db/Company.ns
Let me give you some more information about this:
When you start your database you start a mongod process, which uses a parameter named dbpath (see http://docs.mongodb.org/manual/reference/configuration-options/#storage.dbPath) that defaults to /data/db.
You can override it to point to any existing folder, to adapt to your environment.
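For example (the path here is just an illustration; the folder must already exist), you could start mongod with a custom data directory like this:
mongod --dbpath /srv/mongodb/data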
So nothing is "created" inside your application, everything in done at the"mongodb" (database) level.
I had a Node application running on EC2, managed by Elastic Beanstalk.
Elastic Beanstalk removed an instance and recreated a new one due to some issue.
I had Mongo store its db in a separate Elastic Block Store volume, and I did re-attach the volume, mounted it, etc.
However, when I tried to start MongoDB using systemctl, I got various errors.
I tried --repair and chown-ing the data directory to the mongod user, and it finally started, but the user db was gone; the application re-created it and all collections are empty. However, I do see large collection-x-xxxxxxx.wt and index-x-xxxxx.wt files in the data directory.
What am I doing wrong?
Is there any way to recover the data?
PS: I did try the --repair before I saw the warning about how it would remove all corrupted data.
I was able to restore from the collection-x-xxxxx.wt files....
In my 'corrupted' Mongo data directory, there was a WiredTiger.wt.orig file and a WiredTiger.wt file.
If I tried to start mongod after removing the '.orig' extension, mongod would not start and showed errors like: WiredTiger.wt read error: failed to read 4096 bytes at offset 73728: WT_ERROR: non-specific WiredTiger error.
Searching for how to restore a corrupted 'WiredTiger' file, I came across this Medium article about repairing MongoDB after a corrupted WiredTiger file.
Steps from the article, as I followed them:
1. Stop mongod (the one with the empty collections).
2. Point mongod to a new data directory.
3. Start mongod and create a new db, plus new collections with the same names as the ones from the corrupted db.
4. Insert at least one dummy record into each of these collections.
5. Find the names of the collection*.wt files in this new location using db.<insert-collectionName>.stats(); look in the uri property of the output (see the sketch after this list).
6. Stop mongod.
7. Copy each collection-x-xxxxx.wt from the corrupted directory to the new directory and rename it to the corresponding one from step 5.
7.1. That is: say your collection named 'testCollection' had the wt collection file name collection-1-1111111.wt in the corrupted directory and the name collection-6-6666666.wt in the new directory; you will have to copy 'collection-1-1111111.wt' into the new directory and rename it to collection-6-6666666.wt.
7.2. To find the collection wt file name of, say, 'testCollection', you can open the collection-x-xxxxx.wt files in a text editor and scroll past the 'gibberish' to see your actual data matching the contents of 'testCollection'. (Mine is not encrypted at rest.)
8. Repeat the copy-rename step for all the collections you have.
9. Run repair in the new db path with the --repair switch; you can see mongo fixing stuff in the logs.
10. Start the db.
Once done, verify the collections, mongodump from the new db, mongorestore to a fresh db, and recreate the indexes.
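For reference, a rough sketch of step 5 and the final dump/restore (the collection name, db name, paths, and the identifier in the output are all illustrative). In the mongo shell:
db.testCollection.stats().wiredTiger.uri
// returns something like "statistics:table:collection-6-6666666"
And from the command line:
mongodump --db mydb --out /backup/dump
mongorestore --db mydb /backup/dump/mydb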
That article was a godsend; I can't believe it worked. Thank you, Ido Ozeri from Medium.
I'm attempting to connect to a new Aurora PostgreSQL instance with Babelfish enabled.
NOTE: I am able to connect to the instance using the pg library through the normal port 5432 (the regular PostgreSQL endpoint).
However, for this test, I am attempting to connect through the Babelfish TDS endpoint (port 1433) using the standard mssql package.
If I specify a database name (it is correct), I receive the error 'database "postgres" does not exist':
var config = {
  server: 'xxx.us-east-1.rds.amazonaws.com',
  database: 'postgres',
  user: 'xxx',
  password: 'xxx'
};
and the connection closes, since the connection attempt fails.
If I omit the database property from the config, like:
var config = {
  server: 'xxx.us-east-1.rds.amazonaws.com',
  user: 'xxx',
  password: 'xxx'
};
It will connect. Also, I can use that connection to query basic things like SELECT CURRENT_TIMESTAMP and it works!
However, I can't access any tables.
If I run:
SELECT COUNT(1) FROM PERSON
I receive an error 'relation "person" does not exist'.
If I dot-notate it:
SELECT COUNT(1) FROM postgres.dbo."PERSON"
I receive an error "Cross DB query is not supported".
So, I can't connect to the specific database directly, and if I connect without specifying a database, I can't cross-query to the table.
Has anyone done this yet?
Or, if not, any ideas on helping me figure out what to try next? I'm out of ideas.
Babelfish databases (that you connect to on port 1433) have nothing to do with PostgreSQL databases (port 5432). Essentially, all of Babelfish lives within a single PostgreSQL database (parameter babelfishpg_tsql.database_name).
You seem to have a single-db setup, because of the Cross DB query is not supported error. With such a setup, you can only have a single database via port 1433 (apart from master and tempdb). You have to use CREATE DATABASE to create that single database (if it isn't already created; check sys.databases).
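For example, while connected on port 1433, checking for and creating that database would look something like this (the database name is illustrative):
SELECT name FROM sys.databases;
CREATE DATABASE mydb;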
I can't tell if it is supported to create a table in PostgreSQL (port 5432) and use it on port 1433 (the other way around is fine), but if so, you have to create it in a schema that you created with CREATE SCHEMA while connected on port 1433.
The answer was that I should be connecting to database "master".
Even though there is no database titled master in the instance, you still do connect to it.
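With the mssql config from the question, that just means setting the database property to 'master' (server and credentials are placeholders, as before):
var config = {
  server: 'xxx.us-east-1.rds.amazonaws.com',
  database: 'master',
  user: 'xxx',
  password: 'xxx'
};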
Once connected, run the following:
select current_database();
This will show that you are actually connected to the database "babelfish_db".
I don't know how that works or why a database would have an undocumented alias.
The bigger answer here is that cross-DB object references are not currently supported in Babelfish, outside your current SQL Server database.
This is currently being worked on. Stay tuned.
Currently we have several databases in an HADR configuration, where the primary databases are on Linux server "A" and the standbys are all on Linux server "B". The DB2 version is 9.7.
We are attempting to relocate the primary database of one of these databases (with the intent of moving all of them later) to a new Linux server "C". Efforts to find something similar have only brought back results about HADR takeover, which is not what we are aiming to do.
Let's call this database MYDB.
I have taken the steps below, and while HADR will start between the two (by issuing the relevant start HADR commands, first on the standby and then on the primary), 'db2pd -db MYDB -hadr' shows them as disconnected, with 'S0000000.log' as the log file on the opposite end. The correct log is displayed locally.
STEPS TAKEN
Quiesced the database and then stopped HADR on the primary; confirmed on the standby that there was now a log gap, then stopped HADR on the standby and deactivated it.
Took an offline backup of the current primary database and sent it to the new server "C", where an identical version of DB2 is already set up.
Created a new database 'MYDB' on "C" and restored it from the backup sent over.
Updated the relevant database configurations:
On new server C :
db2 update db cfg for mydb using HADR_LOCAL_HOST C
db2 update db cfg for mydb using HADR_LOCAL_SVC hadr_mydb_c
On the existing standby B:
db2 update db cfg for mydb using HADR_REMOTE_HOST C
db2 update db cfg for mydb using HADR_REMOTE_SVC hadr_mydb_c
db2 update alternate server for database mydb using hostname c port 3700
'hadr_mydb_c' has been added to /etc/services on both 'B' and 'C' with a defined port of 3734.
'C' has been added to the hosts files on both 'B' and 'C'. Log locations etc. have been created to match the existing server 'A'.
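As a quick sanity check, the effective HADR settings can be dumped on each server and compared (the grep is just a convenience):
db2 get db cfg for mydb | grep HADR
HADR_LOCAL_HOST/HADR_LOCAL_SVC on each side should line up with HADR_REMOTE_HOST/HADR_REMOTE_SVC on the other.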
At this stage we have done a db2stop and db2start on the standby; then, when we issue 'db2 start hadr on db mydb as standby', we get confirmation that HADR has started.
On the new primary we issue 'db2 start hadr on db mydb as primary' and again get confirmation that HADR has started.
db2pd shows HADR as active, but the databases are not connected.
When I issue the following command on the standby instance to see HADR details, I can't see the MYDB database listed along with the other databases:
db2 "SELECT SUBSTR(DB_NAME, 1, 8) AS DBNAME, HADR_ROLE, HADR_STATE, HADR_SYNCMODE, HADR_CONNECT_STATUS, HADR_HEARTBEAT, HADR_TIMEOUT, HADR_LOG_GAP FROM TABLE (SNAP_GET_HADR (CAST (NULL AS VARCHAR(128)), 0)) AS T"
Luckily, by reverting the HADR config back to what it was previously, we can reconnect HADR between the old primary and standby. Any ideas on how best to proceed?
Use case: I am trying to write data from a Node.js process running locally (in a Docker container) to my locally running Postgres server (not in a Docker container). The Node.js process is able to connect to the server (setting the address to host.docker.internal solved that problem); however, when I attempt a simple SELECT * FROM contact LIMIT 1 query, this error is returned:
{"type":"postgres error","request":"SELECT * FROM contact",
"error":{
"name":"error","length":106,
"severity":"ERROR",
"code":"42P01",
"position":"15",
"file":"parse_relation.c",
"line":"1376",
"routine":"parserOpenTable"}}
The relation error suggests the table is not found, but I created this table using a Postgres client (Postico) and have been able to successfully query its contents with other pg clients as well.
I see multiple posts suggesting the sequelize db:migrate command, but would that be the right solution here?
I did not create a model or a migration; I created the table directly in the database. Is there something else I may be overlooking that is producing this error?
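One thing worth ruling out is that the containerized process is connecting to a different database (or schema) than the one Postico created the table in. A quick diagnostic sketch using the pg library (connection details are placeholders):

const { Client } = require('pg');

const client = new Client({
  host: 'host.docker.internal',
  port: 5432,
  user: 'xxx',
  password: 'xxx',
  database: 'xxx'
});

async function main() {
  await client.connect();
  // If these don't match where the table was created, that would explain the 42P01
  const res = await client.query('SELECT current_database(), current_schema()');
  console.log(res.rows[0]);
  await client.end();
}

main();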
I have made a cluster on the MongoDB website, whitelisted my IP address, added the user, and connected my application, but it is not giving me the 'Standard URI connection' option (maybe their website got updated); instead it is giving me one with srv. So when I used it in the mongoose.connect() function, it gave me the error "Invalid Uri, Must begin with mongodb://"...
I have been working on connecting to MongoDB for so long, but failing every time.
You have created a cluster on MongoDB Atlas, right?
If using a Node.js application:
change the Node.js driver version to 2.2.12 or later on the "Connect your application" screen.
If using the mongo shell:
On the "Connect to Cluster0" page, select "I have the mongo shell installed", then select mongo shell version "3.4 or earlier". You will get the desired mongodb:// format.