Can't import a 2nd MongoDB collection to Atlas - node.js

I have a Node.js/Express app holding a database with two main collections, called first and second here. This database and its collections must be imported into MongoDB Atlas. Following the instructions available at Atlas, I imported the first collection using mongoimport. However, following exactly the same steps as with the first collection, the second can't be imported by any means.
I downloaded a Terminal log to check ALL the attempts I made for each of them and tried to replicate the first success, but nothing worked. I even created a second account to watch it more closely, but the behaviour is the same: first comes through, but second does not.
This is the command I ran (sensitive info replaced with CAPS words):
$usermac mongoimport --host flixNewMongoDB-shard-0/flixnewmongodb-shard-00-00-da9ev.mongodb.net:27017,flixnewmongodb-shard-00-01-da9ev.mongodb.net:27017,flixnewmongodb-shard-00-02-da9ev.mongodb.net:27017 --ssl --username admin --password PASSWORD --authenticationDatabase ADMINNAME --db DBNAME --collection COLLECTIONNAME --type json --file COLLECTIONNAME.json
This is the error message I receive at the end of the process:
2019-11-10T14:26:35.686+0100 Failed: open COLLECTIONNAME.json: no such file or directory
2019-11-10T14:26:35.686+0100 0 document(s) imported successfully. 0 document(s) failed to import.
My first doubt is why the command worked on the first attempt but not on the second, since all the variables were the same. Could anyone here offer some insight? Thanks in advance.
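The error itself points at the file path rather than at Atlas: mongoimport resolves --file against the current working directory, so a sketch like the one below (hypothetical paths; host and auth flags elided, staying exactly as in the command above) is the usual way past "no such file or directory":
```
# Check that the dump file is visible from the directory you run mongoimport in
cd /path/to/exports            # hypothetical folder holding COLLECTIONNAME.json
ls COLLECTIONNAME.json

# ...or pass an absolute path so the working directory no longer matters
mongoimport --host ... --ssl --username admin --password PASSWORD \
  --authenticationDatabase ADMINNAME --db DBNAME --collection COLLECTIONNAME \
  --type json --file /path/to/exports/COLLECTIONNAME.json
```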

The solution that worked for me: following the advice I received from Support, I exported the second collection as second.json and imported it manually. Once Atlas and my database were linked, it went through. I was hesitant to do that because I was afraid of leaving the file disconnected from the database, but that hasn't been the case.
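For reference, a minimal sketch of that export step, assuming the second collection still lives in the local database (names from the question, output path hypothetical):
```
# Dump the collection from the local database to a JSON file
mongoexport --db DBNAME --collection second --out /path/to/exports/second.json
# second.json can then be imported manually through the Atlas UI
```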

Related

MongoDB - Empty collections after EC2 crash

I had a Node application running on EC2, managed by Elastic Beanstalk.
Elastic Beanstalk removed an instance and recreated a new one due to some issue.
I had Mongo store its db in a separate Elastic Block Store volume, and I did re-attach the volume, mount it, etc.
However, when I tried to start mongodb using systemctl, I got various errors.
I tried --repair and chown'ing the data directory to mongod, and it finally worked, but now the user db was gone and the application re-created it, and all the collections are empty; yet I do see large collection-x-xxxxxxx.wt and index-x-xxxxx.wt files in the data directory.
What am I doing wrong?
Is there any way to recover the data?
PS: I did try the --repair before I saw the warning about how it would remove all corrupted data.
I was able to restore from the collection-x-xxxxx.wt files....
In my 'corrupted' mongo data directory, there were a WiredTiger.wt.orig file and a WiredTiger.wt file.
If I tried to start mongod after removing the '.orig' extension, mongod would not start and showed errors like WiredTiger.wt read error: failed to read 4096 bytes at offset 73728: WT_ERROR: non-specific WiredTiger error.
While searching for how to restore a corrupted WiredTiger file, I came across a Medium article about repairing MongoDB after a corrupted WiredTiger file.
Steps from the article as I followed them (a command-level sketch follows the list):
1. Stop mongod (the one with the empty collections).
2. Point mongod to a new data directory.
3. Start mongod and create a new db, with new collections using the same names as the ones from the corrupted mongo db.
4. Insert at least one dummy record into each of these collections.
5. Find the names of the collection*.wt files in this new location using db.<insert-collectionName>.stats(); look at the uri property of the output.
6. Stop mongod.
7. Copy the collection-x-xxxxx.wt files over from the corrupted directory to the new directory and rename them to the corresponding ones from step 5.
7.1. i.e. say your collection named 'testCollection' had the wt collection file name collection-1-1111111.wt in the corrupted directory and collection-6-6666666.wt in the new directory: you would copy 'collection-1-1111111.wt' into the new directory and rename it to collection-6-6666666.wt.
7.2. To find the collection wt file name of, say, 'testCollection', you can open the collection-x-xxxxx.wt files in a text editor and scroll past the 'gibberish' to see your actual data matching the documents from 'testCollection'. (Mine is not encrypted at rest.)
8. Repeat the copy-and-rename step for all the collections you have.
9. Run repair on the new db path with the --repair switch; you can see mongo fixing stuff in the logs.
10. Start the db.
11. Once done, verify the collections, mongodump from the new db, mongorestore to a fresh db, and recreate the indexes.
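A command-level sketch of the steps above, assuming the corrupted data lives in /data/corrupted and the fresh directory is /data/recovery (paths, db name, and .wt file names are all hypothetical; the real ones come from your own stats() output):
```
# Steps 1-2: stop the current mongod, then start one against a fresh data directory
sudo systemctl stop mongod
mkdir -p /data/recovery
mongod --dbpath /data/recovery --fork --logpath /data/recovery/mongod.log

# Steps 3-5: recreate the db and collections, seed a dummy doc, and read the
# backing .wt file name from the uri property of stats()
mongo userdb --eval 'db.testCollection.insert({dummy: true});
                     print(db.testCollection.stats().wiredTiger.uri)'
# prints e.g. statistics:table:collection-6-6666666

# Steps 6-8: stop the recovery mongod and swap in the old file under the new name
mongod --dbpath /data/recovery --shutdown
cp /data/corrupted/collection-1-1111111.wt /data/recovery/collection-6-6666666.wt

# Steps 9-10: repair, then start normally
mongod --dbpath /data/recovery --repair
mongod --dbpath /data/recovery --fork --logpath /data/recovery/mongod.log

# Step 11: dump the recovered collections and restore them into a fresh db
mongodump --db userdb --out /data/dump
mongorestore --db userdb_fresh /data/dump/userdb
```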
That article was a godsend; I can't believe it worked. Thank you, Ido Ozeri from Medium.

The underlying table for model 'Order' does not exist. Error code: P1014 (Prisma)

I have the following problem:
$ prisma migrate dev --name "ok"
Error: P3006
Migration `2021080415559_order_linking` failed to apply cleanly to the shadow database.
Error code: P1014
Error:
The underlying table for model 'Order' does not exist.
How to fix it?
The solution:
It seems that this may be due to the migration files in the prisma folder.
I decided to delete the migration files and the whole migrations folder with them. I restarted the application, it generated a new migration file, and it worked.
*delete the migrations folder*
$ prisma generate
$ prisma migrate dev --name "ok"
*it works*
It looks like your migrations were corrupted somehow. There were probably changes to your database that were not recorded in the migration history.
You could try one of these:
If you're okay with losing the data in the database, try resetting the database with prisma migrate reset.
Try running introspection with prisma introspect to capture any changes to the database before applying a new migration.
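Both options as commands, in the style of the question (note that prisma introspect was the command name in that Prisma version; newer releases call it prisma db pull):
```
# Option 1: drop and recreate the database, replaying all migrations (data is lost)
prisma migrate reset

# Option 2: pull the current database state into schema.prisma first,
# then create a fresh migration from the reconciled schema
prisma introspect
prisma migrate dev --name "ok"
```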

Writing to localhost Postgres returning infamous "42P01 parse_relation.c" error

Use-case: I am trying to write data from a Node.js process running locally (in a Docker container) to my locally running Postgres server (no Docker container). The Node.js process is able to connect to the server (setting the address to host.docker.internal solved that problem). However, when I attempt a simple "SELECT * FROM contact LIMIT 1" query, this error is returned:
{"type":"postgres error","request":"SELECT * FROM contact",
"error":{
"name":"error","length":106,
"severity":"ERROR",
"code":"42P01",
"position":"15",
"file":"parse_relation.c",
"line":"1376",
"routine":"parserOpenTable"}}
The relation error suggests the table is not found. However, I created this table using a Postgres client (Postico) and have been able to successfully query the table's contents with other pg clients as well.
I see multiple posts suggesting running the sequelize db:migrate command, but would this be the right solution here?
I did not create a model or a migration; I created the table directly in the database. Is there something else I may be overlooking that is producing this error?
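A frequent cause of 42P01 in this kind of setup is that the containerized process is connected to a different database (or schema) than the GUI client that created the table. A minimal node-postgres sketch, with hypothetical connection values, for checking exactly what the Node.js process sees:
```
// npm install pg
const { Client } = require('pg');

const client = new Client({
  host: 'host.docker.internal', // reach the host's Postgres from inside Docker
  port: 5432,
  user: 'postgres',             // hypothetical credentials
  password: 'postgres',
  database: 'mydb',             // must match the database Postico created the table in
});

async function main() {
  await client.connect();

  // Confirm which database and schema this connection actually lands in
  const ctx = await client.query(
    'SELECT current_database() AS db, current_schema() AS schema'
  );
  console.log(ctx.rows[0]);

  // Schema-qualifying the table name rules out search_path mismatches
  const res = await client.query('SELECT * FROM public.contact LIMIT 1');
  console.log(res.rows);

  await client.end();
}

main().catch(console.error);
```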

Issue with Sequelize migrations to Heroku

I have been sitting at this computer for 7 hours straight trying to deploy my app. There is a Node/Express backend serving up restricted JSON data at different API endpoints. In order to get access to this JSON data you have to have a token.
It all works fine and dandy on my local server during development. However, when I go to run the migrations on Heroku (using 'heroku run bash', then 'sequelize db:migrate'), I get an error saying "SyntaxError: Unexpected string", as shown below.
As you can see, when I run sequelize db:migrate:undo, it says that no executed migrations were found.
```
Sequelize [Node: 5.11.1, CLI: 2.4.0, ORM: 3.24.3, pg: ^6.1.0]
Loaded configuration file "config/config.json".
Using environment "production".
== 20160917224717-create-user: migrating =======
[SyntaxError: Unexpected string]
~ $ sequelize db:migrate:undo
Sequelize [Node: 5.11.1, CLI: 2.4.0, ORM: 3.24.3, pg: ^6.1.0]
Loaded configuration file "config/config.json".
Using environment "production".
No executed migrations found.
~ $
```
However, when I look in my Heroku database, I DO see that there is now 1 table. That table does not work, though, and I am still getting an error on form submit when creating a user. The error I get on form submit is:
message: "relation "users" does not exist"
name: "SequelizeDatabaseError"
What gives? This alleged 'Unexpected string' syntax error is not thrown when I run locally; the migrations run smooth as butter there. What could this be?
Thanks.
Either you have not updated your code on the server to match your local environment, or your database does not match. "Users relation" implies the users table is related to another table. Make sure the code is really updated (if the Sequelize code mentions this relation) AND that all the other tables are reproduced on the server.
Once I had a problem accessing a Postgres db on Heroku (provided as an add-on). Here is what I did (a consolidated sketch of the commands follows the list):
I installed SQL Shell (psql) on my laptop
I logged in to the db (using all the necessary credentials)
run: \dt to check the db schema => result: 'no relations', so the migration hadn't been done yet
run: sequelize db:migrate
run: \dt => schema is here - OK!
now: run the Heroku console
log in to your account
run: heroku restart -a=<my_app>
If the missing db migration was the only problem, it should be fine now.
You can also try resetting your db first (if your data isn't precious!) by running: heroku pg:reset -a=<my_app>
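The same sequence as a consolidated sketch (the app name is a placeholder, and heroku run assumes sequelize-cli is available in the app's dependencies):
```
# Connect to the Heroku Postgres add-on with psql (or use SQL Shell as above)
heroku pg:psql -a my_app
# In psql: \dt    -- 'no relations' means the migrations never ran

# Run the migrations against the production database
heroku run 'sequelize db:migrate' -a my_app

# Verify with \dt again, then restart the dynos
heroku restart -a my_app

# Optional and destructive: wipe the database and start over
heroku pg:reset DATABASE_URL -a my_app --confirm my_app
```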

Cannot fetch from or insert data into MongoDB using mongoose

I wrote a Node web app and created a MongoDB database on my local system. I was using the following code to connect to the local MongoDB from Node.js:
var mongoose = require('mongoose');
mongoose.connect('mongodb://localhost/db_name'); //local
And everything was working fine on my local machine. So I went ahead, created an mLab account, and created a database. But when I run the code with the connection string changed, the connection still seems to be established; however, the find and save requests never invoke their callbacks, and no errors show up. All requests are getting timed out.
var mongoose = require('mongoose');
mongoose.connect("mongodb://user:pass#ds036789.mlab.com:36789/db_name"); //mlab
Another thing I noticed is that I cannot ping ds036789.mlab.com, but TCP connections succeed when I try the nc command:
nc -w 3 -v ds036789.mlab.com 36789
I even tried deploying to Azure, which doesn't work either. Any help is much appreciated. Thanks.
EDIT:
Not being able to ping was due to the fact that I used Azure hosting; that is expected. I also found out that I get this error while trying to connect:
connection error: { [MongoError: auth failed] name: 'MongoError', ok: 0, errmsg: 'auth failed', code: 18 }
The credentials are correct, though.
From the error message it seems like you are using invalid auth details.
This most likely happens when you have not created a username and password for the individual database, i.e., db_name in your case.
Check your mLab account, create a username and password for the db_name database, and update your connection string.
According to the error information, as @Astro said, it seems to be caused by using an invalid auth user/password for the database.
Did you create a new database user for connecting to the database, rather than using your mLab account user? See the figures below.
Fig 1. A database user is required for connecting
Fig 2. Users list for the database
Hope it helps.
I figured out the issue: it wasn't a problem with the credentials, it was the mongoose version. The mongoose version I was using didn't support the authentication. I had to remove the package and reinstall the latest version with:
npm install mongoose@latest
Hope it helps someone. And thanks for the answers :)
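For anyone hitting the same silent timeouts, a small sketch (hypothetical credentials) that surfaces connection and auth errors instead of letting find/save callbacks hang:
```
// npm install mongoose@latest
var mongoose = require('mongoose');

// mLab-style connection string: the user/pass must be a database user
// created for db_name, not the mLab account login
mongoose.connect('mongodb://dbuser:dbpass@ds036789.mlab.com:36789/db_name');

var db = mongoose.connection;
db.on('error', function (err) {
  console.error('connection error:', err); // e.g. MongoError: auth failed
});
db.once('open', function () {
  console.log('connected');
});
```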
