databases and data not saving ~Linux QubesOS~ - node.js

When I create a PostgreSQL database, create tables and columns, and even insert data into them, I can't restart my machine without losing the created databases and all the data.
I have tried changing a couple of things in the configuration file, but nothing helped.
I also have to reset the password for the postgres user every time I restart my machine. I mainly use MongoDB; I am just learning PostgreSQL so I can use it if I ever need to in the future. I am running a Linux machine with QubesOS, and I have a few problems like this using QubesOS. In every tutorial I watch, everybody uses a Mac, which seems good and all, kind of a mix between Windows and Linux, the best of both worlds: easy package installs and terminal control. But I don't want to trade my Linux machine for a Mac; I would much rather just fix these problems I am having with PostgreSQL on my Linux machine.

You ran into an important security feature of QubesOS: all data modifications are discarded on shutdown of a so-called "qube", which is reset to its original state.
But there is an exception for data kept in a few special directories.
If you convince your database packages to put their data into those directories, it will be preserved across reboots of your database qube.
Read this documentation for more information.
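For illustration, here is a minimal sketch, assuming a Debian-based AppVM where PostgreSQL keeps its data under /var/lib/postgresql (on Fedora templates it is /var/lib/pgsql): in an AppVM only /home, /usr/local and /rw persist across reboots, and the Qubes "bind-dirs" mechanism can persist additional paths.

# In the AppVM (not the template): persist PostgreSQL's data and config
sudo mkdir -p /rw/config/qubes-bind-dirs.d
sudo tee /rw/config/qubes-bind-dirs.d/50_postgresql.conf <<'EOF'
binds+=( '/var/lib/postgresql' )
binds+=( '/etc/postgresql' )
EOF
# Restart the qube so the bind mounts take effect; after that, the
# databases and the postgres password survive reboots.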

Related

Shipping Postgres inside a node server [duplicate]

If it's possible, I'm interested in being able to embed a PostgreSQL database, similar to SQLite. I've read that it's not possible, but I'm no database expert, so I want to hear it from you.
Essentially I want PostgreSQL without all the configuration and installation. If it's possible, tell me how.
Run postgresql in a background process.
Start a separate thread in your application that starts a postgresql server in local mode, either by binding it to localhost on some random free port or by using Unix sockets (does Windows support those?). That should be fairly easy, something like:
system("C:\Program Files\MyApplication\pgsql\postgres.exe -D C:\Documents and Settings\User\Local Settings\MyApplication\database -h 127.0.0.1 -p 12345");
and then just connect to 127.0.0.1:12345.
When your application quits, you can always send SIGTERM to the postgres process and then wait a few seconds for postgresql to quit (i.e. join the thread).
PS: You can also use pg_ctl to control your "embedded" database, even without threads: just do a "pg_ctl start" (with appropriate options) when starting the application and a "pg_ctl stop" when quitting it.
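A minimal sketch of the pg_ctl approach (the data directory and port are illustrative):

initdb -D /path/to/app/data                                      # one-time cluster setup
pg_ctl -D /path/to/app/data -w -o "-h 127.0.0.1 -p 12345" start
# ... application runs, connecting to 127.0.0.1:12345 ...
pg_ctl -D /path/to/app/data -m fast stop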
You cannot embed it, nor should you try.
For embedding you should use SQLite, as you mentioned, or the Firebird RDBMS.
Unless you do a major rewrite of the code, it is not possible to run Postgres embedded. Either run it as a separate process or use something else. SQLite is an excellent choice, but there are others: MySQL has an embedded version (see it at http://mysql.com/oem/), there are several Java choices, and on the Mac there is Core Data you can write to. Hell, you can even use FoxPro. What OS are you on, and what services do you need from the database?
You can't embed it as an in-process database like SQLite etc., but you can easily embed it into your application setup using Inno Setup at http://www.innosetup.org. Search their mailing list archive and you will find that someone did most of the work for you; all you have to do is grab the zipped distro, and you can easily have PostgreSQL installed when the user installs your app. You can then use the pg_hba.conf file to restrict the server to localhost only. Not a true embedded DB, but it would work.
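For the localhost restriction, the pg_hba.conf entries would look something like this (the auth method shown is illustrative), combined with listen_addresses = 'localhost' in postgresql.conf:

# pg_hba.conf: accept only loopback connections
host    all    all    127.0.0.1/32    md5
host    all    all    ::1/128         md5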
PostgreSQL is intended to run as a stand-alone server; it's probably possible to embed it if you hack at it hard and long enough, but it would be much easier to just run it as intended in a separate process.
HSQLDB (http://hsqldb.org/) is another db which is easily embedded. Requires Java, but is an excellent and often-used choice for Java applications.
Has anyone tried these on Mac OS X:
http://pagesperso-orange.fr/bruno.gaufier/xhtml/prod_postgresql.xhtml
http://www.macosxguru.net/article.php?story=20041119135924825
(Of course sqlite would be my embedded db of choice as well)
Well, I know this is a very old post, but in case anyone has this question nowadays: you can use containers running Postgres. Here's a post that could be helpful, doing something along this line using R:
https://rsangole.netlify.app/post/2021/08/07/docker-based-rstudio-postgres/?utm_source=pocket_mylist
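A minimal sketch of the container approach (the container name, password, volume name, and image tag are all illustrative):

docker run -d --name pg \
  -e POSTGRES_PASSWORD=secret \
  -p 5432:5432 \
  -v pgdata:/var/lib/postgresql/data \
  postgres:16
# The data lives in the named volume "pgdata", so it survives container restarts.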
Take a look at DuckDB: https://duckdb.org/docs/installation/. It is relatively new and still needs to mature, but it works pretty much like an embedded database ("in-process, serverless"), with bindings for several languages (Python, R, Java, ...).
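The DuckDB command-line shell gives a feel for the in-process model, since the whole database is a single file; a sketch (the file path is illustrative, and the shell is invoked like the sqlite3 shell):

duckdb /tmp/demo.duckdb "CREATE TABLE t (i INTEGER); INSERT INTO t VALUES (42);"
duckdb /tmp/demo.duckdb "SELECT * FROM t;"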

Cygwin intermittently loses its mapped drives in /cygdrive

So, I have a collection of Windows Server 2016 virtual machines that are used to run some tests in pairs. Before performing these tests, I copy a selection of scripts and files from the network onto each machine.
I'm basically using a selection of scripts that have existed around here since before my time, and whilst I would like to use other methods, so much of our infrastructure relies on these scripts that overhauling the system would be a colossal task.
First up, I sort out the mapped drives with:
net use X: \\network\location1 /user:domain\user password
net use Y: \\network\location2 /user:domain\user password
and so on
Soon after, I use rsync to copy files from a location in /cygdrive/y/somewhere to /cygdrive/c/somewhere_else.
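Something along these lines (a sketch; the exact paths and options are illustrative):

rsync -av /cygdrive/y/somewhere/ /cygdrive/c/somewhere_else/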
During the rsync, I will get errors that "files have vanished" (I'm currently unable to post the exact error; I will edit this later to include it). When I check what's currently in the /cygdrive directory, all I see is /cygdrive/c, and everything else has disappeared.
I've tried making a symbolic link to /cygdrive/y in a different location, I've tried including /persistent:yes on the net use command, and I've changed the power settings on the network card so it doesn't sleep. None of these work.
I'm currently looking into the settings for the virtual machines themselves at this point, but I have some doubts as we have other virtual windows machines that do not seem to have this issue.
Has anyone heard of anything similar, and/or does anyone know of a decent method to troubleshoot this?
Right, so I've been working on this all day and finally noticed a positive change, but since my systems are in VMware's vCloud, this may not work for everyone. It was simply a matter of having the VM turned off and upgrading the Virtual Hardware Version to the latest version. I have noticed with this, though, that upon a restart, one of the first messages that comes up mentions that the computer is "disabling group policies".
I did a bit of research into this and found out that Windows 8 and 10 (no mention of any Windows Server machines) both automatically update Group Policies in the background, disconnecting and reconnecting mapped drives to recreate them.
It's possible that changing the Group Policy drive-mapping action from "Replace" (which recreates the drive) to "Update" would fix this issue, and that the Virtual Hardware update happened to resolve it in a similar manner.

MongoDB: Is it possible to store the "Data Directory" on a GlusterFS Volume (across multiple VMs), so that a standby Mongo Server can use it when required?

I'm a newbie with MongoDB, and I'm sorry if the question is not clear enough. What I mean is:
I have clustered GlusterFS volumes (configured on top of 2 CentOS boxes), which means the same data directory can be read from both CentOS boxes.
Let's call them CentOS-1 and CentOS-2.
I want to install the MongoDB server (mongod) on both CentOS boxes, but start (run) only one. (The other one, on CentOS-2, might be purposely stopped.)
Then the applications will connect to the currently active one, on CentOS-1.
Here the main question comes in (please refer to the picture below):
Let's say CentOS-1 goes down and I manually start the other MongoDB server (mongod on the other box, CentOS-2) and let all the applications connect to CentOS-2:
(1) Will everything still work?
(2) Will there be 'lock' issues as in MySQL?
(3) If it works, does that mean we can add any number of MongoDB servers (in stand-by mode) and swing between them without problems?
Note:
Only one server at a time will be running; the data store is not being accessed by multiple servers at once.
Thanks for all opinions in advance :)
Yes, you can. There won't be any problem in moving the data files to a different server as long as you use the same version of MongoDB and the same operating system. When you move the files, make sure to delete the mongod.lock file if it exists in the data directory.
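A minimal sketch of that hand-off in this GlusterFS setup (paths are illustrative; removing the lock file should only be needed after an unclean shutdown):

# on CentOS-1: stop mongod cleanly
mongod --shutdown --dbpath /data/db
# on CentOS-2: the shared volume already holds the files
rm -f /data/db/mongod.lock
mongod --dbpath /data/db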
GlusterFS is good for file replication between servers, but it's not a good idea to sync MongoDB data using GlusterFS.
Will everything still be working?
Probably not.
Will there be 'lock' issues as in MySQL?
Yes, there will be. Check this: https://docs.mongodb.org/v3.0/faq/concurrency/. GlusterFS locks the file while it writes to Gluster volumes, and MongoDB data may change frequently, which could cause problems.
You can consider MongoDB replication (https://docs.mongodb.org/manual/core/replication-introduction/) for your purpose.
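For comparison, a minimal sketch of a replicated setup (the hostnames and replica set name are illustrative):

# on each CentOS box: run mongod as a replica set member
mongod --replSet rs0 --dbpath /data/db
# once, from a shell connected to one member:
mongo --host centos-1 --eval 'rs.initiate({_id: "rs0", members: [
  {_id: 0, host: "centos-1:27017"},
  {_id: 1, host: "centos-2:27017"}]})'
# MongoDB then replicates the data itself; add a third member or an
# arbiter if you want automatic failover.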

Migrate data from one server to another

I bought a new server and I want to move all the data (directories, subdirectories, users, passwords, etc.) from my old server to it.
Is there a way to do that?
Thanks,
Do you have physical access to both servers? If so, you can use the dd command to clone the disk from the old server onto the disk that is going into the new server.
In order to do this, though, both hard drives have to be installed in one of the servers.
You can also use netcat and dd to clone a disk over a network.
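A sketch of the netcat variant (device names, hostname, and port are illustrative; run this from rescue media so neither disk is mounted):

# on the new server (receiving side):
nc -l -p 9999 | dd of=/dev/sdb bs=64K status=progress
# on the old server (sending side):
dd if=/dev/sda bs=64K status=progress | nc newserver 9999
# (some netcat builds use 'nc -l 9999' without -p)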
For the directories and files, use an FTP client from your server if it allows you to; if not, just download all the content to your computer and upload it to the new server.
For the users and passwords, I guess they are in a database. Connect to the database using SSH, telnet, MysqlAdmin, or any RDBMS client and export a dump file, then log in to the new server's SQL system and import that dump file.
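If it's MySQL, the dump-and-restore step is along these lines (credentials are illustrative):

# on the old server:
mysqldump -u root -p --all-databases > dump.sql
# on the new server:
mysql -u root -p < dump.sql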
You should give more details about both servers so we can help you. For example, are they shared hosting or dedicated machines? What kind of access do you have to them? Their operating system would also help people answer you accurately.
In principle, yes.
If the hardware is similar (just more RAM and disk space, but the same CPU architecture and no special graphics card drivers), you might be able to copy every file and then install the boot loader once more (the boot loader config usually changes when the hard disk size changes).
Or you can create a list of all services that you use, determine which config files each one uses, and then just copy those. Ideally, you shouldn't copy them but compare the old and the new versions and merge them.
The most work-intensive way is to use a tool like Puppet. In a nutshell, Puppet lets you create install scripts for services (along with all the configuration that you need). So if you need to install a service again (new hardware, second server), you just tell Puppet to do it. On the plus side, your whole installation will be documented, too. If you ever wonder why something is the way it is, you can look into the Puppet files.
Of course, this approach takes a lot of time and discipline, so it might not be worth it in your case. Apply common sense.
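A minimal sketch of the Puppet approach, applied locally rather than via a puppet master (the package and service names are illustrative):

cat > site.pp <<'EOF'
# desired state: package installed, service running and enabled
package { 'nginx': ensure => installed }
service { 'nginx':
  ensure  => running,
  enable  => true,
  require => Package['nginx'],
}
EOF
sudo puppet apply site.pp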

Migrating MongoDB databases from one OS to another

I am building a web application in Python for which I need MongoDB. I have MongoDB installed on Mac OS X, and for my app I want to use a Linux VPS. I wanted to know whether I could migrate MongoDB collections from Mac to Linux. Does the endianness of the system cause problems? What else might? I am no expert in databases or operating systems. And if we can migrate, can someone point me towards a guide or procedure? Thanks in advance.
You can just run mongoexport, which will dump your database to a file in either JSON or CSV format.
Then, on your new machine, you can run mongoimport with the input file you got from mongoexport, and everything should be there.
mongoexport: http://www.mongodb.org/display/DOCS/mongoexport
mongoimport: http://www.mongodb.org/display/DOCS/Import+Export+Tools?focusedCommentId=4554852#ImportExportTools-mongoimport
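Concretely, that's something like this per collection (the database, collection, and file names are illustrative):

# on the Mac:
mongoexport --db mydb --collection users --out users.json
# on the Linux VPS:
mongoimport --db mydb --collection users --file users.json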
While exporting and re-importing certainly works, this will recreate all the indices from scratch in the new location. For complex indices this could take days.
I wouldn't be surprised if the binary files are compatible, so I would first try shutting down the original server and copying over the entire data directory to the new location. Make sure you are running the exact same version of the mongo server software (e.g. 2.0.x, both 64-bit, both official binaries from 10gen, and with the same configuration options).
I'm pretty sure this will start correctly, and all data AND indices will be ready to go. This is basically just taking a binary snapshot of your data files.
MongoDB has plenty of tools for exporting and importing databases. Check out:
http://www.mongodb.org/display/DOCS/Import+Export+Tools
