I am using CouchDB and I want to POST data on localhost. I want to pass the data appended to the CouchDB URL so that, without opening CouchDB itself, the data is saved in the CouchDB database and true or false is returned. For example, to save a name I would use a URL like http://127.0.0.1:5984/address/_design/addressbook/index.html?name=lovesrivastava, pass it through localhost, and have the data saved in the CouchDB database. How can I do this?
That question is very hard to understand, but I think it might be this:
You have 2 CouchDB databases, one on the localhost, and another on a "server". You want to serve a CouchApp from the localhost, but you want saved documents to be saved on the server. You can't save directly from the browser to the server because that would be "cross domain".
Your idea to "pass through" the local database to the server is not the right approach. You always need to save your document back to the database where you got it.
What you need is "replication":
You save the document to your local CouchDB, and then "replicate" it up to the server.
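For example, here is a minimal sketch using CouchDB's /_replicate endpoint. The remote URL, the credentials in the target, and Node 18+ with global fetch are assumptions; adjust them to your setup, and add authentication for the local POST if your local CouchDB requires it.

```js
// Minimal sketch: push the local "address" database up to the server by
// POSTing a replication request to the local CouchDB. The target URL and
// credentials below are placeholders.
async function startReplication() {
  const res = await fetch('http://127.0.0.1:5984/_replicate', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      source: 'address',                                     // local database
      target: 'http://user:pass@example.com:5984/address',   // assumed remote URL
      continuous: true,                                      // keep pushing new documents
    }),
  });
  console.log(await res.json());                             // { ok: true, ... } on success
}

startReplication().catch(console.error);
```

With continuous: true the local CouchDB keeps pushing new documents to the server in the background; you can also save a document into the _replicator database instead, so the replication survives restarts.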
Related
I followed a tutorial and made a to-do list web app using Node.js, Express and EJS. After completing it, I used ngrok to make my localhost public so that I could share it with my friends. Now I've realized that the tasks I add to my to-do list are only kept by my server until I restart it or close it completely. I want to know where the tasks are stored on my local machine: does Node.js store them, or are they saved in the memory of my local machine? More importantly, if they are already being stored somewhere, why do I have to worry about using a database server like MongoDB or Firebase? I hope my question is clear.
If you store your data in an array or something similar in your code, then yes: the data is stored in the memory of your local machine, inside the Node.js process, and it is lost whenever that process restarts.
And you don't have to worry about a database if you don't need to store your data permanently.
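A minimal sketch of what such a tutorial app typically does (the /tasks route and the tasks variable are illustrative, not necessarily what your code uses):

```js
const express = require('express');
const app = express();
app.use(express.urlencoded({ extended: true }));

// The tasks live in this array, i.e. in the RAM of the Node.js process.
// They are gone as soon as the process is restarted or stopped.
const tasks = [];

app.post('/tasks', (req, res) => {
  tasks.push(req.body.task);   // stored in memory only, not on disk
  res.redirect('/tasks');
});

app.get('/tasks', (req, res) => res.json(tasks));

app.listen(3000);
```

A database like MongoDB or Firebase only becomes necessary once you want the tasks to survive restarts or to be shared across several server instances.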
I am using Node.js, Mongoose and a MongoDB database.
Sometimes I have seen that if I drop a database that contained some collections, and then run my Node.js app (which creates collections inside a database with the same name), the information that I want to save as a new entity does not show up correctly.
I should also mention that when I drop the database in the MongoDB shell, I also manually delete all the cache files inside c:/data/db (the default MongoDB data folder).
I checked the MongoDB documentation; it says that creating a database with the same name as a deleted database can cause problems: https://docs.mongodb.com/manual/reference/method/db.dropDatabase/#db.dropDatabase
You answered your own question. The document you linked describes the solution:
Warning: If you drop a database and create a new database with the same name, you must either restart all mongos instances, or use the flushRouterConfig command on all mongos instances before reading or writing to that database. This action ensures that the mongos instances refresh their metadata cache, including the location of the primary shard for the new database. Otherwise, the mongos may miss data on reads and may write data to a wrong shard.
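In practice that means the following (a hedged sketch for the mongo shell; it is only relevant if you are running a sharded cluster with mongos routers, which may not apply to a single local mongod):

```js
// Run in the mongo shell connected to EACH mongos instance after
// recreating a database with the same name as a dropped one.
db.adminCommand({ flushRouterConfig: 1 })
```

Also make sure mongod is stopped before you manually delete anything under c:/data/db; removing data files while the server is running can itself leave the database in an inconsistent state.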
I have a scenario where many (~1000 - 5000) databases are created dynamically in CouchDB, similar to the "one database per user" strategy. Whenever a user creates a document in any of those databases, I need to hit an existing API and update that document. This does not need to be synchronous; a short delay is acceptable. I have thought of two ways to solve this:
Option 1 (roughly sketched after the two lists below):
Continuously listen to the changes feed of the _global_changes database.
Get the name of the updated database from the feed.
Call that database's /{db}/_changes API with the stored seq (kept in Redis).
Fetch the changed document, call my external API, and update the document.
Option 2:
Continuously replicate all databases into a single database.
Listen to the /_changes feed of this database.
Fetch the changed document, call my external API, and update the document in the original database (I can easily keep track of which document originally belongs to which database).
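A rough sketch of option 1, using CouchDB's /_db_updates endpoint (which is backed by the _global_changes database) together with the per-database /_changes API. It assumes Node 18+ with global fetch and admin credentials on 127.0.0.1:5984, keeps the per-database seq in memory instead of Redis, and uses a placeholder callExternalApi():

```js
// Rough sketch of option 1, not production code. Error handling, retries
// and the Redis-backed seq storage are omitted.
const COUCH = 'http://127.0.0.1:5984';
const AUTH = {                                        // assumed admin credentials
  Authorization: 'Basic ' + Buffer.from('admin:secret').toString('base64'),
};
const lastSeq = new Map();                            // dbName -> last processed seq

// Placeholder for the existing external API mentioned above.
async function callExternalApi(doc) {
  return { ...doc, processed: true };
}

async function processDb(dbName) {
  const since = lastSeq.get(dbName) || 0;
  const res = await fetch(
    `${COUCH}/${encodeURIComponent(dbName)}/_changes?include_docs=true&since=${encodeURIComponent(since)}`,
    { headers: AUTH }
  );
  const body = await res.json();
  for (const change of body.results) {
    if (change.deleted || !change.doc) continue;
    const updated = await callExternalApi(change.doc);
    // The PUT must carry the current _rev, which include_docs=true provides.
    await fetch(`${COUCH}/${encodeURIComponent(dbName)}/${encodeURIComponent(updated._id)}`, {
      method: 'PUT',
      headers: { ...AUTH, 'Content-Type': 'application/json' },
      body: JSON.stringify(updated),
    });
  }
  lastSeq.set(dbName, body.last_seq);
}

async function listen() {
  // /_db_updates streams one JSON line per database event: { db_name, type, seq }.
  const res = await fetch(`${COUCH}/_db_updates?feed=continuous&heartbeat=30000`, { headers: AUTH });
  const decoder = new TextDecoder();
  let buffered = '';
  for await (const chunk of res.body) {
    buffered += decoder.decode(chunk, { stream: true });
    const lines = buffered.split('\n');
    buffered = lines.pop();                           // keep any partial line
    for (const line of lines) {
      if (!line.trim()) continue;                     // heartbeat newlines
      const event = JSON.parse(line);
      if (event.type === 'updated') await processDb(event.db_name);
    }
  }
}

listen().catch(console.error);
```

Note that writing the document back produces another entry in the changes feed, so you will want to mark processed documents (or filter them out) to avoid a loop. Regarding failures: if you only advance the stored seq (in Redis, as you planned) after the external API call has succeeded, a crashed worker can replay the missed changes on restart.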
Questions:
Does either of the above approaches make sense? Will it scale to 5000 databases?
How do I handle failures? It is critical that the API be hit for all documents.
Thanks!
I am working on a small project and I can upload data to a MongoDB database, but so far I have been unable to save an image to the same database. A friend of mine advised me to save a reference in the database instead, but at this stage I have not worked out how to do this. Any assistance would be greatly appreciated.
You can use the node-s3-uploader library from Turistforeningen, or build something similar yourself if you want to save to local disk. The idea behind it is: take the image file you received, transform it into multiple versions to optimize bandwidth, and keep a reference to the original version in MongoDB or MySQL.
When a client requests an image, send them the original link; depending on which version it needs, the client can derive the re-scaled link from the original one.
Assets like images should not be saved in the database system directly but on the local disk of your server; you then save just the link (or path) of that image in the database.
A database system like MongoDB or MySQL should store things that will be queried; that is why we use a database system in the first place. Things like binary files or images should just be saved to local disk, because the content of those files is unreadable and cannot be queried. But the name, path or URL of such a file can be queried, so that is what we normally save in the database system.
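For example, a minimal sketch with Express, multer and Mongoose (the Picture model, the uploads/ folder and the route names are illustrative, not a prescribed layout):

```js
const express = require('express');
const multer = require('multer');
const mongoose = require('mongoose');

// Illustrative model: we only store metadata and the path, not the image bytes.
const Picture = mongoose.model('Picture', new mongoose.Schema({
  name: String,
  path: String,          // reference to the file on disk
}));

const app = express();
const upload = multer({ dest: 'uploads/' });   // uploaded files land on local disk

app.post('/pictures', upload.single('image'), async (req, res) => {
  // req.file.path is where multer wrote the image on disk
  const picture = await Picture.create({
    name: req.file.originalname,
    path: req.file.path,
  });
  res.json(picture);     // the database document only holds the reference
});

mongoose.connect('mongodb://127.0.0.1:27017/smallproject')
  .then(() => app.listen(3000));
```

When you later need to show the image, you query the document, take its path (or a URL derived from it), and serve the file from disk, e.g. with express.static pointed at the uploads/ folder.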
I have a mobile app that stores data (fetched from a remote CouchDB instance) in a local SQLite database. Sometimes a sync request occurs, and I would like to know, for all items in the local database, whether their counterparts have changed on the server (i.e. whether their revision is newer than the one in the local database).
But how can I request only the specific items in the local database, without making hundreds of separate requests, and without fetching all the documents in the huge remote database?
The way I currently do it is: for each local document, I first get the HTTP header and download the body only if the header shows a newer revision than the local doc has. I worry that for users who store a lot of items locally, this will result in a lot of HTTP requests.
What about using the Change Notification API?
Your app could request all recent changes, and then check if any of the local files has changed on the server.
Adding a filter might help you get only the interesting changes.
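For example, here is a rough sketch against the remote CouchDB's HTTP API that uses the built-in _doc_ids filter on the _changes feed, so a single request covers exactly the documents your app holds locally. The URL, database name and the idea of storing the last seen update sequence locally are assumptions.

```js
// Rough sketch: ask the remote CouchDB which of "our" documents changed
// since the last sync, in one request, via the built-in _doc_ids filter.
// Requires Node 18+ (global fetch); URL and database name are placeholders.
async function changedDocs(localDocIds, lastSeq) {
  const res = await fetch(
    `https://example.com:5984/mydb/_changes?filter=_doc_ids&since=${encodeURIComponent(lastSeq)}&include_docs=true`,
    {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify({ doc_ids: localDocIds }),
    }
  );
  const body = await res.json();
  // body.results has one entry per changed document, with its new revision
  // (and, because of include_docs=true, the document body).
  return { changes: body.results, nextSeq: body.last_seq };
}
```

You can then compare each reported revision with the one stored in SQLite, update only the documents that really differ, and remember last_seq for the next sync; documents that have not changed since your stored sequence cost nothing, because the filtered feed simply does not report them.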