I had a Node application running on EC2, managed by Elastic Beanstalk.
Elastic Beanstalk terminated the instance and created a new one due to some issue.
I had Mongo store its DB on a separate Elastic Block Store volume, and I did re-attach the volume, mount it, etc.
However, when I tried to start mongodb using systemctl, I got various errors.
I tried --repair and chown'ing the data directory to mongod, and it finally started. But now the user DB was gone, the application re-created it, and all collections are empty; yet I do see large collection-x-xxxxxxx.wt and index-x-xxxxx.wt files in the data directory.
What am I doing wrong?
Is there any way to recover the data?
PS: I did try --repair before I saw the warning about how it would remove all corrupted data.
I was able to restore the data from the collection-x-xxxxx.wt files.
In my 'corrupted' mongo data directory, there was a WiredTiger.wt.orig file and WiredTiger.wt file.
If I tried to start mongod after removing the '.orig' extension, mongod would not start and showed errors like: WiredTiger.wt read error: failed to read 4096 bytes at offset 73728: WT_ERROR: non-specific WiredTiger error.
Searching for how to restore a corrupted 'WiredTiger' file, I came across a Medium article about repairing MongoDB after a corrupted WiredTiger file.
Steps from the article as I followed them:
1. Stop mongod (the one with the empty collections).
2. Point mongod to a new data directory.
3. Start mongod and create a new db, with new collections having the same names as the ones from the corrupted mongo db.
4. Insert at least one dummy record into each of these collections.
5. Find the names of the collection*.wt files in this new location using db.<collectionName>.stats(); look at the uri property of the output.
6. Stop mongod.
7. Copy the collection-x-xxxxx.wt files from the corrupted directory to the new directory and rename them to the corresponding ones from step 5.
7.1. I.e., say your collection named 'testCollection' had the .wt collection file name collection-1-1111111.wt in the corrupted directory and collection-6-6666666.wt in the new directory; you have to copy 'collection-1-1111111.wt' into the new directory and rename it to collection-6-6666666.wt.
7.2. To find which .wt file belongs to, say, 'testCollection', you can open the collection-x-xxxxx.wt files in a text editor and scroll past the 'gibberish' to see your actual data matching the documents from 'testCollection' (mine is not encrypted at rest).
8. Repeat the copy-and-rename step for all collections you have.
9. Run a repair on the new db path with the --repair switch; you can see mongo fixing stuff in the logs.
10. Start the db.
11. Once done, verify the collections, mongodump from the new db, mongorestore to a fresh db, and recreate the indexes.
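The steps above can be sketched as a shell session. Everything here is hypothetical: the directory paths and the collection-x-xxxxx.wt file names come from your own setup, and the mongod invocations are shown only as comments since they depend on your installation.

```shell
# Hypothetical directories -- substitute your own paths.
OLD=/tmp/corrupted-dbpath     # data directory holding the orphaned .wt files
NEW=/tmp/recovery-dbpath      # fresh data directory (step 2)
mkdir -p "$OLD" "$NEW"

# Placeholders so this sketch runs end-to-end; your real .wt files exist already.
touch "$OLD/collection-1-1111111.wt" "$OLD/collection-2-2222222.wt"

# Steps 3-5 happen inside the mongo shell against the NEW dbpath, e.g.:
#   mongod --dbpath "$NEW" --fork --logpath "$NEW/mongod.log"
#   mongo mydb --eval 'db.testCollection.insert({dummy: 1});
#                      print(db.testCollection.stats().wiredTiger.uri)'
# and yield one old-name:new-name pair per collection.
MAPPING="collection-1-1111111.wt:collection-6-6666666.wt
collection-2-2222222.wt:collection-7-7777777.wt"

# Steps 7-8: copy each corrupted-side file over its new-side counterpart.
echo "$MAPPING" | while IFS=: read -r old new; do
  cp "$OLD/$old" "$NEW/$new"
done

# Step 9: stop mongod, then repair against the NEW dbpath:
#   mongod --dbpath "$NEW" --repair
```

Keeping the whole old-to-new mapping in one place makes it harder to miss a collection when you have many of them.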
That article was a godsend; I can't believe it worked. Thank you, Ido Ozeri from Medium.
I have a problem as shown below:
$ prisma migrate dev --name "ok"
Error: P3006
Migration `2021080415559_order_linking` failed to apply cleanly to the shadow database.
Error code: P1014
Error:
The underlying table for model 'Order' does not exist.
How to fix it?
The solution:
It seems that this may be due to the migration files in the prisma folder.
I decided to delete the migration files and the whole folder containing them. I restarted the application, it generated a new file, and it worked.
*delete the migrations folder*
$ prisma generate
$ prisma migrate dev --name "ok"
*it works*
It looks like your migrations were corrupted somehow. There were probably changes to your database that were not recorded in the migration history.
You could try one of these:
If you're okay with losing the data in the database, try resetting it with prisma migrate reset.
Try running introspection with prisma introspect to capture any changes to the database before applying a new migration.
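As a sketch, the two options look like this. The commands are guarded since they assume a Prisma project in the current directory; the migration name is hypothetical, and on newer Prisma versions `prisma introspect` has been renamed `prisma db pull`.

```shell
MIGRATION_NAME="sync-after-introspect"   # hypothetical migration name

if command -v prisma >/dev/null 2>&1; then
  # Option 1: drop and recreate the database, replaying every migration.
  # DESTROYS DATA -- development environments only.
  prisma migrate reset || true

  # Option 2: pull the actual database state into schema.prisma first,
  # then record a fresh migration on top of it.
  prisma introspect || true
  prisma migrate dev --name "$MIGRATION_NAME" || true
fi
```

Option 2 is the safer route when the database holds data you care about, since it reconciles the schema before any new migration is generated.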
I have a Node.js/Express app holding a database with two main collections, named here first and second. This database and its collections must be imported into MongoDB Atlas. Following the instructions available at Atlas, I proceeded with the import of the collection first using the mongoimport method. However, following the exact same steps as for the first collection, the second can't be imported by any means.
I downloaded a Terminal log to check ALL the attempts I made for each one of them and tried to replicate the first success, but none worked. I even created a second account to watch it closer, but the behaviour is the same: first comes through, but second does not.
This is the code coming in (sensitive info replaced by CAPS words):
usermac$ mongoimport --host flixNewMongoDB-shard-0/flixnewmongodb-shard-00-00-da9ev.mongodb.net:27017,flixnewmongodb-shard-00-01-da9ev.mongodb.net:27017,flixnewmongodb-shard-00-02-da9ev.mongodb.net:27017 --ssl --username admin --password PASSWORD --authenticationDatabase ADMINNAME --db DBNAME --collection COLLECTIONNAME --type json --file COLLECTIONNAME.json
This is the error message I receive in the end of the process:
2019-11-10T14:26:35.686+0100 Failed: open COLLECTIONNAME.json: no such file or directory
2019-11-10T14:26:35.686+0100 0 document(s) imported successfully. 0 document(s) failed to import.
My first doubt is why the code worked on the first attempt but not the second, since all variables were the same. Could anyone here offer some insight? Thanks in advance.
The solution I found for me: following the advice I received from Support, I exported the second.json collection and imported it manually. Once Atlas and my database were linked, it went through. I was uncertain about doing that because I was afraid of leaving the file without a connection to the database, but that hasn't been the case.
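For what it's worth, the error itself ("open COLLECTIONNAME.json: no such file or directory") is local, not an Atlas problem: mongoimport resolves --file relative to the directory you run it from, so the likely difference between the two attempts is the working directory. A sketch with hypothetical paths (the import itself is guarded, and the host string would be the one from your Atlas connect dialog):

```shell
FILE=/tmp/exports/second.json                  # hypothetical path to your export
mkdir -p "$(dirname "$FILE")"
printf '{"name": "placeholder"}\n' > "$FILE"   # stand-in; your real export already exists

# Check the path first -- this is exactly the check mongoimport is failing:
test -f "$FILE" && echo "file found: $FILE"

# Then pass an absolute path instead of a bare file name:
if command -v mongoimport >/dev/null 2>&1; then
  mongoimport --host flixNewMongoDB-shard-0/flixnewmongodb-shard-00-00-da9ev.mongodb.net:27017 \
    --ssl --username admin --password PASSWORD --authenticationDatabase ADMINNAME \
    --db DBNAME --collection second --type json --file "$FILE" || true
fi
```

Alternatively, cd into the folder holding the export before running the same command with a bare file name.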
I already have Solr configured and running on port 8983. Initially I indexed all the data in MongoDB using the data import handler. But then I realized that for every update and new insertion to be indexed automatically, we need Mongo Connector. I followed these links: Mongo Connector and Usage-with-Solr.
I am getting stuck at this point:
python mongo_connector.py -m localhost:27017 -t http://localhost:8983/solr
It shows the error:
python: can't open file 'mongo_connector.py': [Errno 2] No such file or directory
How do I integrate a Solr collection named food with the MongoDB collection testfood such that new insertions and updates are automatically reflected in Solr?
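The "No such file or directory" error just means there is no mongo_connector.py in your current directory. mongo-connector is installed via pip and provides a mongo-connector command-line entry point, so you normally invoke that directly rather than the .py file. A guarded sketch under these assumptions (the db name 'test' is hypothetical; note mongo-connector also requires mongod to run as a replica set, and the target URL points at the food core):

```shell
# Install once (not run here):
#   pip install mongo-connector

NAMESPACE="test.testfood"   # assumption: testfood lives in the 'test' db

if command -v mongo-connector >/dev/null 2>&1; then
  # -m: MongoDB host, -t: Solr core URL, -d: doc manager, -n: namespace to sync
  mongo-connector -m localhost:27017 \
    -t http://localhost:8983/solr/food \
    -d solr_doc_manager \
    -n "$NAMESPACE" || true
fi
```

With -n restricted to a single namespace, only changes to that collection are pushed to Solr; leave it off to sync everything.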
I have created a dbfarm in MonetDB. Then I moved the directory of the dbfarm to another location, and the dbfarm stopped working. So I'm trying to fix that by deleting the old dbfarm and/or creating a new one.
The problem is that when I try to create a new dbfarm with
monetdbd start newDbfarm/
I get the error:
monetdbd: binding to stream socket port 50000 failed: Address already in use
How can I solve this?
I'm working with the latest MonetDB (MonetDB Oct2014 release).
Update
I've managed to somehow fix this by using
monetdbd set port=50001 newDbfarm/
before the
monetdbd start newDbfarm/
and then I always have to specify the port when using monetdb:
monetdb -p50001 create voc1
Is there a way to just delete the old dbfarm? Or change the default so it will always go to the new dbfarm?
You can stop monetdbd on the old dbfarm and start it on the new one (note these are monetdbd commands, managing the farm itself, not monetdb):
monetdbd stop oldDbfarm/
monetdbd start newDbfarm/
It might take a while for this to complete, especially if there are queries running.
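Putting it together, switching over for good might look like this (farm paths are hypothetical, and the commands are guarded since they need a MonetDB installation; monetdbd set edits the farm's properties while it is stopped, so you can also move the port back to the default):

```shell
DEFAULT_PORT=50000   # monetdbd's default listen port

if command -v monetdbd >/dev/null 2>&1; then
  monetdbd stop oldDbfarm/ || true                    # frees port 50000
  monetdbd set port=$DEFAULT_PORT newDbfarm/ || true  # undo the 50001 workaround
  monetdbd start newDbfarm/ || true
fi

# Once the old farm is stopped and no longer needed, its directory can
# simply be deleted:  rm -rf oldDbfarm/
```

With the new farm back on the default port, plain monetdb commands work again without the -p50001 flag.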
mongoose.connect("mongodb://localhost/Company")
On executing the above command, as per the documentation, if the Company database exists it will be connected to the Node.js app; otherwise the database will be created and then the connection is made.
My question is: where will this newly created database exist, in the MongoDB data folder or in the Node.js application folder?
So your application is connecting to localhost, where you have a MongoDB server running on the default port (27017). If you are connected to a MongoDB cluster through a mongos process, you will have to see where the mongod instances (the database processes themselves) are running. Let's take the simple case where your mongod is running locally.
Assuming you have started your MongoDB instance with all default values, the database files are created in /data/db ( \data\db on Windows ).
This means that in your case, you should see the "Company" db files in this folder, something like :
/data/db/Company.0
/data/db/Company.ns
Let's now give you more information about this:
When you start your database you start a mongod, which uses a parameter named dbPath (see http://docs.mongodb.org/manual/reference/configuration-options/#storage.dbPath ) that defaults to /data/db.
You can override it to any existing folder to adapt to your environment.
So nothing is created inside your application; everything is done at the MongoDB (database) level.
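A quick way to see this for yourself, with a hypothetical data folder (the mongod call is guarded since it depends on your installation; the storage engine decides the exact file names it creates there):

```shell
DBPATH=/tmp/my-mongo-data     # hypothetical location instead of /data/db
mkdir -p "$DBPATH"

if command -v mongod >/dev/null 2>&1; then
  # Start mongod against the custom folder; connecting with
  # mongoose.connect("mongodb://localhost/Company") and writing one
  # document will then materialize the Company files under $DBPATH.
  mongod --dbpath "$DBPATH" --fork --logpath "$DBPATH/mongod.log" || true
fi

# The same thing via the config file (/etc/mongod.conf):
#   storage:
#     dbPath: /tmp/my-mongo-data
```

Whatever folder you pick, it must exist and be writable by the mongod user before the server starts.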