Deleting a database and creating a database with the same name issue - node.js

I am using Node.js, Mongoose and a MongoDB database.
Sometimes I have seen that if I drop a database that contains some collections, and then run my Node app, which creates collections inside a database with the same name, the information that I want to save as a new entity does not show up correctly.
I should also mention that when I drop the database in the mongo shell, I also manually delete all the cache files inside c:/data/db (the default MongoDB data folder).
I checked the MongoDB documentation; it says that creating a database with the same name as a deleted database can cause problems. See https://docs.mongodb.com/manual/reference/method/db.dropDatabase/#db.dropDatabase

You answered your own question. The document you have linked describes the solution:
Warning: If you drop a database and create a new database with the same name, you must either restart all mongos instances, or use the flushRouterConfig command on all mongos instances before reading or writing to that database. This action ensures that the mongos instances refresh their metadata cache, including the location of the primary shard for the new database. Otherwise, the mongos may miss data on reads and may write data to a wrong shard.
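If this is a sharded cluster, the fix from that warning can be run from the mongo shell on each mongos router. A minimal sketch (flushRouterConfig is the command named in the linked docs; the database name below is a placeholder):

```
// Run on every mongos instance after dropping and recreating the database
db.adminCommand({ flushRouterConfig: 1 })
// Or flush only the cached metadata for one database:
db.adminCommand({ flushRouterConfig: "myDatabase" })
```

If you are running a standalone mongod rather than a sharded cluster, that warning does not apply; in that case the more likely cause is manually deleting files under c:/data/db, which should never be done, especially while mongod is running.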

Related

CouchDB and PouchDB are producing duplicate records when recreating same CouchDB from scratch with a script

We have a CouchDB database we are developing on. We have a script that, for development purposes, just recreates the CouchDB database from some JSON files from scratch every time we run it. All the data we insert into CouchDB from this script has unique _ids across all of the databases/documents.
The problem is that after we run this script, the PouchDB database in the browser starts creating duplicates when it synchronizes with CouchDB. This results in duplicate data both in PouchDB and CouchDB, because when PouchDB replicates back to CouchDB, CouchDB will then contain duplicate data too.
Does anyone know if there is any way to prevent this duplication from occurring? It was my understanding that the _id field was supposed to be unique, but that's clearly not the case, because both CouchDB and PouchDB are creating duplicate records with the same _id fields.

How to use a different database with the same table structure in a typed dataset XSD

I'm not sure how to explain myself well, so here is my scenario.
I have a database named "database1", and I have used a typed dataset in Visual Studio with more than 200 stored procedures added to the table adapters.
It is a desktop-based application. Now I want to deploy the same software at another school with the same database, but I have changed the database name.
When I generate the new database from a script, create all the stored procedures in it, and change the database name in the connection string, it doesn't work.
I'd recommend you don't change the database name, or if it's something specific to the first client (like ParisTechnicalCollegeDatabase), change it now to something generic (SchoolManager) so you never have to change it again. There's no specific problem I can think of regarding reusing a typed dataset on a different database: I do it daily on databases that aren't clones of each other. Ensure your second server is set up with a user and default schema that is specified in the connection string. The problem will either be a faulty connection string or incorrect database setup, not the fault of the dataset.
For more targeted help, post the error messages that appear when you try to run your app against the new database.
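For reference, this is the shape of App.config entry the answer assumes (the names SchoolManager and SERVER are placeholders, not from the original question; only Initial Catalog needs to change with the database name):

```
<connectionStrings>
  <add name="SchoolManager"
       connectionString="Data Source=SERVER;Initial Catalog=SchoolManager;Integrated Security=True"
       providerName="System.Data.SqlClient" />
</connectionStrings>
```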

How do I use flask Migrate and SQLAlchemy on my own database, and connect to a third party database that I do not wish to migrate?

I have discovered a way to connect to multiple databases with Flask through binds.
Facts leading up to question:
I have defined my own database Models with SQLAlchemy.
I use Flask-Migrate to generate my database.
I wish to connect to a third party database and use the SQLAlchemy ORM.
I wish to query the third party database using the SQLAlchemy ORM in a way such as: db.session.query(THIRDPARTYMODEL).filter_by(id=id)
I'm worried that once I define the new models and use Flask-Migrate, it will try to create a new migration and create the tables I already have.
I wish to use my own database and the third party database at the same time.
How can I connect to the third party database and query it using the SQLAlchemy ORM without worrying about Flask-Migrate trying to generate new tables on the third party database?
Flask-Migrate does not automatically migrate multiple databases; by default it only migrates the main database from Flask-SQLAlchemy.
The problem is that if you have additional models created that belong to another database, Alembic will consider them part of that main database, so it will write migrations for your main database that include the entire set of models.
To have things working the way you want, you will need to ensure those other models that do not belong in the main database are not imported when you invoke the manage.py db ... set of commands.
You may also tell Flask-Migrate and Alembic to support migrating multiple databases by using flask db init --multidb when initializing the migrations.
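If you do want Flask-Migrate to track more than one database, the multi-database layout is initialized and driven with the standard Flask-Migrate CLI commands ("initial" here is just an example migration message):

```
flask db init --multidb
flask db migrate -m "initial"
flask db upgrade
```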
I have the same issue, but I wasn't able to use the accepted include_object solution because that method is called too far into the migration on the non-migrated database. In my case, the user doesn't have write access, so the migration failed as soon as Alembic tried to make the alembic_version table on the remote DB. This happened before include_object was called on any of the tables.
Instead, because I wanted Alembic to completely ignore this database and all its tables, I simply put the read-only database in SQLALCHEMY_BINDS and then removed it in migrations/env.py before it was processed:
# ===Generated code===
bind_names = []
for name, url in current_app.config.get("SQLALCHEMY_BINDS").items():
    context.config.set_section_option(name, "sqlalchemy.url",
                                      url.replace('%', '%%'))
    bind_names.append(name)
# ===New code===
### Remove the 'remote' bind, which is a read-only database that we don't
### want to track migration changes on.
if 'remote' in bind_names:
    bind_names.remove('remote')

PouchDB - start local, replicate later

Does it create any major problems if we always create and populate a PouchDB database locally first, and then later sync/authenticate with a centralised CouchDB service like Cloudant?
Consider this simplified scenario:
1. You're building an accommodation booking service such as a hotel search or Airbnb
2. You want people to be able to favourite/heart properties without having to create an account, and will use PouchDB to store this list (i.e. the idea is to not break their flow by making them create an account when it isn't strictly necessary)
3. If users wish to opt in, they can later create an account and receive credentials for a "server side" database to sync with
At the point of step 3, once I've created a per-user CouchDB database server-side and assigned credentials to pass back to the browser for sync/replication, how can I link that up with the PouchDB data already created? i.e.
Can PouchDB somehow just reuse the existing database for this sync, therefore pushing all existing data up to the hosted CouchDB database, or..
Instead do we need to create a new PouchDB database and then copy over all docs from the existing (non-replicated) one to this new (replicated) one, and then delete the existing one?
I want to make sure I'm not painting myself into any corner I haven't thought of, before we begin the first stage, which is supporting non-replicated PouchDB.
It depends on what kind of data you want to sync from the server, but in general, you can replicate a pre-existing database into a new one with existing documents, just so long as those document IDs don't conflict.
So probably the best idea for the star-rating model would be to create documents client-side with IDs like 'star_<timestamp>' to ensure they don't conflict with anything. Then you can aggregate them with a map/reduce function.
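As a sketch of that ID scheme in client-side JavaScript (makeStarId and the document shape are illustrative assumptions, not part of the PouchDB API; the commented-out calls show where it would plug in):

```javascript
// Generate IDs like "star_1700000000000_hotel42" so locally created
// favourites cannot collide with server-side document IDs after replication.
function makeStarId(propertyId) {
  return 'star_' + Date.now() + '_' + propertyId;
}

const doc = {
  _id: makeStarId('hotel42'), // deterministic prefix, unique suffix
  type: 'star',
  propertyId: 'hotel42',
};

// With PouchDB, the same local database created before sign-up can later be
// pushed as-is to the per-user CouchDB database, e.g.:
//   const db = new PouchDB('favourites');
//   await db.put(doc);
//   db.replicate.to(remoteDatabaseUrl); // credentials issued at account creation
```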

Can't switch to database other than _system within Foxx

I'm trying to dynamically generate databases/collections on application startup, in case they do not exist yet. This works fine when working with the _system database. The thing is that it seems not to be allowed when trying to switch to other databases. From the docs:
db._useDatabase(name)
Changes the current database to the database specified by name. Note
that the database specified by name must already exist.
Changing the database might be disallowed in some contexts, for
example server-side actions (including Foxx).
Does this mean that Foxx applications can only create collections within the _system database? My manifest file sets the isSystem property to false. What is the reason for such a limitation?
A Foxx app runs in the context of a single database and has access to the collections there.
You can install Foxx apps in different databases.
E.g. you can run the following commands in arangosh to install the Foxx app "example" in the database "myDB":
db._useDatabase("myDB")
fm.install("example","/example")
Your app will then be reachable at (assuming standard configuration):
http://localhost:8529/_db/myDB/example
It is intentionally not possible to access a different database from within a Foxx app.
