My apologies if this has been asked before, but I haven't been able to find the answer.
I have a small project written in Express.js (Node.js) that uses a local SQLite data file. I want to switch to MongoDB but would like to keep the data file within the project directory. Is that possible with MongoDB and, if so, can anyone offer any guidance?
Thank you,
Luis
mongod --dbpath myappdir/mongo
It's certainly possible. In your mongod.conf file, simply point the dbpath property at whatever directory you'd like:
dbpath = /path/to/your/project
Keep in mind that, unlike SQLite, MongoDB creates a number of files, and that number changes as your data changes.
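As a minimal sketch of that setup (the paths below are placeholders for a directory inside your project; adjust to taste):

# create a data directory inside the project
mkdir -p /path/to/your/project/mongo
# write a tiny config and start mongod against it
echo "dbpath = /path/to/your/project/mongo" > /path/to/your/project/mongod.conf
mongod --config /path/to/your/project/mongod.conf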
I am trying to understand a few things that arose from writing a COPY command in a SQL (.sql) file generated by Prisma. I understand that the file path given to COPY is interpreted on the machine where the database is running (absolute, or relative to the server process's working directory).
COPY table(value,other) FROM 'how/do/i/get/the/right/path' DELIMITER ',' CSV HEADER
Can someone explain how we would COPY from a CSV file when we have a hosted server and database (I believe they are usually on separate machines)? The file is part of my typical git repo. After reading, my understanding is that the file would actually need to be on the machine where the database is hosted; is that correct? I am not sure I have a full grasp on this. Do hosted DB servers also store files? I thought it would just be the database (I understand it's a machine that could technically have anything on it, but is that ever done?).
What would be the best way to access this file from the DB? Would I use psql to connect to my database and then SSH into the server? I think there are different solutions, like running a script that uses psql and the client-side variation \COPY.
What I ideally wanted was to have the file as part of my repo and, in a Prisma migration file, copy its contents into a database table. If my understanding above is incorrect, would someone be able to clarify how I would get the correct path into the command? I want it to be in a .sql file and run against the hosted database (assuming that would work, depending on clarity on the above points regarding where files live).
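For illustration, the client-side \COPY approach I mentioned might look something like this (just a sketch; DATABASE_URL and data/seed.csv are placeholders for my connection string and for the CSV path relative to wherever psql is run, e.g. the repo root):

# \copy reads the file on the client machine and streams it to the server
psql "$DATABASE_URL" -c "\copy table(value,other) FROM 'data/seed.csv' DELIMITER ',' CSV HEADER"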
Thanks!
Sources used:
https://www.postgresql.org/docs/current/sql-copy.html
https://www.cybertec-postgresql.com/en/copy-in-postgresql-moving-data-between-servers/
Basically I don't want to use an existing MongoDB database service like the official MongoDB cloud or whatever. How can I do what they do, but myself? Do I just include the database folder, along with all of the MongoDB executables, in my Node.js folder and call require("child_process").spawn("mongod.exe", /* insert params here */), or is there some kind of way to do this in the mongo module?
Also, do I need my own virtual machine to be able to do this, or can the above work on a standard Heroku Node.js application, for example?
Anyone?
Heroku's hosting solution has only ephemeral volumes, so you can't use it for a database. Any files you create are temporary and will be purged on a regular basis.
For example, when your application is idle, Heroku will de-provision that resource and clear out any data you've left there.
You can't use Heroku like this; you must use an external database service or one of their many add-on offerings.
I am working on a project using Node.js and Mongoose to access the database.
All my read and write functions work fine when using a local database, but when I switch to a database on a remote server, with the exact same databases and collections, the application does not do anything and I get a timeout.
Since I am very new to MongoDB, is there something I have missed?
Thanks for any advice.
Gabriel
Check whether you have set bindIp in the MongoDB configuration file.
Also, check the MongoDB version and platform you are using.
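For example, by default mongod often binds only to 127.0.0.1, which makes remote connections time out. As a rough sketch (the data path is a placeholder; 0.0.0.0 listens on all interfaces, so pair it with proper firewalling or authentication), you can override the bind address on the command line instead of via the bindIp config setting:

# allow connections from other hosts, not just localhost
mongod --bind_ip 0.0.0.0 --dbpath /path/to/data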
I'm a newbie with MongoDB, and I'm sorry if the question is not clear enough. What I mean is:
I have clustered GlusterFS volumes (configured on top of two CentOS boxes), which means the same data directory can be read from both boxes:
Let's call them CentOS-1 and CentOS-2.
I want to install the MongoDB server (mongod) on both CentOS boxes, but start (run) only one. (The other one, on CentOS-2, might be purposely stopped.)
The applications will then connect to the one that is currently active, on CentOS-1.
Here is where the main question comes in:
Let's say CentOS-1 goes down, I manually start the other MongoDB server (mongod on the other box, CentOS-2), and let all the applications connect to CentOS-2:
(1) Will everything still work?
(2) Will there be 'lock' issues as in MySQL?
(3) If it works, does it mean we can add any number of MongoDB servers (in stand-by mode), and whenever we swing over, there's no problem?
Note:
Only one server will be running at a time. It is not a case of the data store being accessed by multiple servers at once.
Thanks in advance for all opinions :)
Yes, you can. There won't be any problem moving the data files to a different server as long as you plan to use the same version of MongoDB and the same operating system. When you move the files, make sure to delete the mongod.lock file if it exists in the data directory.
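A rough sketch of that procedure (hostnames and paths are placeholders; stop mongod on the source box before copying):

# on the old server: stop mongod, then copy the whole data directory
scp -r /var/lib/mongo user@new-server:/var/lib/
# on the new server: remove any stale lock file and start mongod
rm -f /var/lib/mongo/mongod.lock
mongod --dbpath /var/lib/mongo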
GlusterFS is good for file replication between servers, but it's not a good idea to sync MongoDB data using GlusterFS.
Will everything still work?
Probably not.
Will there be 'lock' issues as in MySQL?
Yes, there will be. Check https://docs.mongodb.org/v3.0/faq/concurrency/. GlusterFS locks the file while it writes to Gluster volumes, and MongoDB data files may change frequently, which could cause problems.
You can consider MongoDB replication (https://docs.mongodb.org/manual/core/replication-introduction/) for your purpose.
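A minimal sketch of that approach (hostnames are placeholders; each mongod keeps its own local data directory instead of the shared Gluster volume):

# on each CentOS box, start mongod as a member of the same replica set
mongod --replSet rs0 --dbpath /var/lib/mongo
# then, from one box, initiate the set with both members
mongo --eval 'rs.initiate({_id: "rs0", members: [{_id: 0, host: "centos-1:27017"}, {_id: 1, host: "centos-2:27017"}]})'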
We have a Linux server for the database. Most of our data are under /var/. I would like to back up the entire directory to an external hard drive or to another Linux system, so that if something goes wrong I can replace the directory entirely. Since the directory has many files, I do not want to copy and paste every time; instead I would like to sync them.
Is there an easy way to do that? rsync can do it, but how do I avoid logging into the server every time? BTW, I have limited knowledge of Linux systems.
Any comments and suggestions are appreciated.
Bikesh
Rsyncing database files is not a recommended way of backing them up. I believe you have MySQL running on the server. In that case, you can take a full database dump on the server, using the steps mentioned in the following link:
http://www.microhowto.info/howto/dump_a_complete_mysql_database_as_sql.html#idp134080
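For instance, a full dump could be taken like this (a sketch; dbuser and the output path are placeholders):

# dump all databases into a single SQL file
mysqldump -u dbuser -p --all-databases > /var/backups/full_dump.sql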
Then sync these files to your backup server. You can use the rsync command for this purpose:
http://www.tecmint.com/rsync-local-remote-file-synchronization-commands/
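A typical invocation might look like this (a sketch; the local path, user, and host are placeholders):

# push the dump files to the backup server over SSH
rsync -avz /var/backups/ user@backup-server:/var/backups/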
Make sure that you have installed MySQL on the backup server too. You can also copy the MySQL configuration file /etc/my.cnf to the database backup server. If you need your backup database to always be up to date, you can set up MySQL replication. You can follow the guide below to do the same:
http://kbforlinux.blogspot.in/2011/09/setup-mysql-master-slave-replication.html