If it's possible, I'm interested in being able to embed a PostgreSQL database, similar to SQLite. I've read that it's not possible. I'm no database expert though, so I want to hear from you.
Essentially I want PostgreSQL without all the configuration and installation. If it's possible, tell me how.
Run PostgreSQL in a background process.
Start a separate thread in your application that starts a PostgreSQL server in local mode, either by binding it to localhost on some random free port or by using Unix-domain sockets (does Windows support those?). That should be fairly easy, something like:
system("C:\Program Files\MyApplication\pgsql\postgres.exe -D C:\Documents and Settings\User\Local Settings\MyApplication\database -h 127.0.0.1 -p 12345");
and then just connect to 127.0.0.1:12345.
When your application quits, you can always send a SIGTERM to the postgres process and then wait a few seconds for PostgreSQL to quit (i.e. join the thread).
PS: You can also use pg_ctl to control your "embedded" database, even without threads: just do a "pg_ctl start" (with appropriate options) when starting the application and a "pg_ctl stop" when quitting it.
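A minimal sketch of that approach, assuming a data directory you have already created with initdb (the path and port are placeholders):

    # on application startup; -w waits until the server is ready,
    # -o passes options through to the postgres process
    pg_ctl start -D /path/to/app/database -w -o "-h 127.0.0.1 -p 12345"

    # on application shutdown; "fast" disconnects clients and shuts down cleanly
    pg_ctl stop -D /path/to/app/database -m fast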
You cannot embed it, nor should you try.
For embedding you should use SQLite, as you mentioned, or the Firebird RDBMS.
Unless you do a major rewrite of the code, it is not possible to run Postgres embedded. Either run it as a separate process or use something else. SQLite is an excellent choice, but there are others. MySQL has an embedded version; see http://mysql.com/oem/. There are also several Java choices, and Mac has Core Data you can write to. Hell, you can even use FoxPro. What OS are you on, and what services do you need from the database?
You can't embed it as an in-process library the way you can with SQLite, but you can easily embed it into your application setup using Inno Setup (http://www.innosetup.org). Search their mailing list archive and you will find someone has done most of the work for you: all you have to do is grab the zipped distro, and you can easily have PostgreSQL installed when the user installs your app. You can then use the pg_hba.conf file to restrict the server to localhost only. Not a true embedded DB, but it would work.
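For illustration, a pg_hba.conf restricted to loopback connections could look like this (a sketch only; the all/all wildcards and the md5 method are assumptions, not something taken from that package):

    # TYPE  DATABASE  USER  ADDRESS       METHOD
    host    all       all   127.0.0.1/32  md5
    host    all       all   ::1/128       md5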
PostgreSQL is intended to run as a stand-alone server; it's probably possible to embed it if you hack at it hard and long enough, but it would be much easier to just run it as intended in a separate process.
HSQLDB (http://hsqldb.org/) is another db which is easily embedded. Requires Java, but is an excellent and often-used choice for Java applications.
Has anyone tried these on Mac OS X?
http://pagesperso-orange.fr/bruno.gaufier/xhtml/prod_postgresql.xhtml
http://www.macosxguru.net/article.php?story=20041119135924825
(Of course sqlite would be my embedded db of choice as well)
Well, I know this is a very, very old post, but if anyone has this question nowadays, I would refer to the following:
You can use containers running Postgres. Here's a post that could be helpful, doing something along these lines using R:
https://rsangole.netlify.app/post/2021/08/07/docker-based-rstudio-postgres/?utm_source=pocket_mylist
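If you just want a disposable Postgres next to your application, a container along these lines would do it (the image tag, password, volume name, and loopback-only port mapping are all assumptions to adapt):

    docker run -d --name app-postgres \
      -e POSTGRES_PASSWORD=secret \
      -p 127.0.0.1:5432:5432 \
      -v pgdata:/var/lib/postgresql/data \
      postgres:16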
Take a look at DuckDB (https://duckdb.org/docs/installation/). It is relatively new and still needs to mature, but it works pretty much like an embedded database ("in-process, serverless"), with bindings for several languages (Python, R, Java, ...).
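As a quick sketch of the in-process model, using the duckdb command-line shell (the file name is a placeholder; like sqlite3, the shell takes a database file and SQL directly, with no server to configure or manage):

    duckdb app.duckdb "CREATE TABLE IF NOT EXISTS t (id INTEGER); INSERT INTO t VALUES (42); SELECT * FROM t;"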
I have set up a live CentOS 7 that is booted via PXE if the client is connected to a specified network port.
Once the Linux system is booted, a small scripted logic checks whether a newer image version is available on a central host than the one already deployed on the client. This is done by comparing the contents of a versions file. If there is a newer version, the image should be deployed to the client; otherwise only parts of the image (qcow2 files) should be replaced, to save time.
Since the image is up to 1 TB, I do not want to apply the whole image in every case. It would also take too long.
On the client there is a volume group that consists of LVs of different sizes, plus "normal" partitions (like /dev/sda1).
Is there a way to deploy a whole partition structure using a CLI?
I have already figured out how to recover one disk out of the whole system.
But it would take a lot of effort to script around that to get the destination structure I want.
I found out that there is no way to run Clonezilla as a CLI (which I actually cannot understand; why does this not exist?). I tried to use parts of the Clonezilla live ISO with the command ocs-sr, but I got stuck somewhere and it always gives me an "unknown command" error.
For my case, the best thing would be something like:
clonezilla --restore /path/to/images/folder --dest /dev
which would apply all images in the image folder generated by Clonezilla to the client.
Any help highly appreciated.
I've found that using Clonezilla's preparation hook does the trick for me. You can use the ocs_prerun boot parameter, which runs a script before Clonezilla does anything.
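A sketch of what those boot parameters can look like (the NFS server, share, and script path are assumptions; ocs_prerun, ocs_prerun1, ... run in order before Clonezilla itself starts):

    ocs_prerun="mount -t nfs 192.168.1.10:/images /home/partimag"
    ocs_prerun1="bash /home/partimag/custom-restore.sh"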
If you are stuck with a company-hardened image, you can try this to set up an (Ubuntu) Linux with the needed programs on it.
When I create a PostgreSQL database, create tables and columns, and even insert data into the columns, I can't restart my machine without losing the created databases and all the data.
I have tried changing a couple of things in the configuration file, but nothing helped.
I also have to reset the password for the postgres user every time I restart my machine. I mainly use MongoDB; I am just learning PostgreSQL so I can use it if I ever need to in the future. I am running a Linux machine with QubesOS, and I have a few problems like this using QubesOS. In every tutorial I watch, everybody uses Macs. A Mac seems good and all, kind of a mix between Windows and Linux, the best of both worlds: easy package installs and terminal control. But I don't want to trade my Linux machine for a Mac; I would much rather just fix these problems I am having with PostgreSQL on my Linux machine.
You ran into an important security feature of QubesOS: all data modifications are discarded on shutdown of a so-called "qube", which is reset to its original state.
But there is an exception for data kept in some very special directories.
If you convince your database packages to put their data into these directories, it will be preserved across reboots of your database qube:
Read this documentation for more information.
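As a sketch of the bind-dirs mechanism that documentation describes (the data path is an assumption; Debian-based templates typically keep PostgreSQL data in /var/lib/postgresql, so adjust for yours), inside the qube, as root:

    mkdir -p /rw/config/qubes-bind-dirs.d
    cat > /rw/config/qubes-bind-dirs.d/50_user.conf <<'EOF'
    binds+=( '/var/lib/postgresql' )
    EOF
    # after the next restart of the qube, this directory is bind-mounted
    # from persistent storage and survives reboots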
I'm a newbie with MongoDB, and I'm sorry if the question is not clear enough. What I mean is:
I have clustered GlusterFS volumes (configured on top of two CentOS boxes), which means the same data directory can be read from both:
Let's call them CentOS-1 and CentOS-2.
I want to install MongoDB servers (mongod) on both CentOS boxes, but start (run) only one. (The other one, on CentOS-2, might be purposely stopped.)
The applications will then connect to the one that is currently active, on CentOS-1.
Here is where the main question comes in:
Let's say CentOS-1 goes down, and I manually start the other MongoDB server (mongod on the other box, CentOS-2) and let all the applications connect to CentOS-2:
(1) Will everything still work?
(2) Will there be 'lock' issues as in MySQL?
(3) If it works, does that mean we can add any number of MongoDB servers (in standby mode), and whenever they swing over, there's no problem?
Note:
Only one server at a time will be running; it's not as if the data store is being accessed by multiple servers.
Thanks for all opinions in advance :)
Yes, you can. There won't be any problem in moving the data files to a different server as long as you plan to use the same version of MongoDB and the same operating system. When you move the files, make sure to delete the mongod.lock file if it exists in the data directory.
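Roughly like this, assuming a hypothetical dbpath of /data/db (use whatever your mongod.conf actually points at):

    # on CentOS-2, before starting mongod against the shared data directory
    rm -f /data/db/mongod.lock    # remove the stale lock left by the old node
    mongod --dbpath /data/db --fork --logpath /var/log/mongod.log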
GlusterFS is good for file replication between various servers, but it's not a good idea to sync MongoDB data using GlusterFS.
Will everything still work?
Probably not.
Will there be 'lock' issues as in MySQL?
Yes, there will be. Check https://docs.mongodb.org/v3.0/faq/concurrency/. GlusterFS locks the file while it writes to Gluster volumes, and MongoDB data may change frequently, which could cause problems.
You can consider MongoDB replication (https://docs.mongodb.org/manual/core/replication-introduction/) for your purpose.
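A minimal sketch of what that looks like, assuming hostnames centos-1 and centos-2 and a replica set named rs0 (all placeholders; note that automatic failover needs a third member or an arbiter to form a majority):

    # start mongod on each box with the same replica set name
    mongod --replSet rs0 --dbpath /data/db --fork --logpath /var/log/mongod.log

    # then run once, from either box
    mongo --eval 'rs.initiate({_id: "rs0", members: [
      {_id: 0, host: "centos-1:27017"},
      {_id: 1, host: "centos-2:27017"}
    ]})'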
We have a Linux server for the database. Most of our data is under /var/. I would like to back up the entire directory to an external hard drive or to another Linux system, so that if something goes wrong I can replace the directory entirely. Since the directory has many files, I do not want to copy and paste every time; instead I would like to sync them.
Is there an easy way to do that? rsync can do it, but how do I avoid logging in to the server every time? BTW, I have limited knowledge of Linux systems.
Any comments and suggestions appreciated.
Bikesh
Rsyncing database files is not a recommended way of backing them up. I believe you have MySQL running on the server. In that case, you can take a full database dump on the server, using the steps mentioned in the following link:
http://www.microhowto.info/howto/dump_a_complete_mysql_database_as_sql.html#idp134080
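In short, something like this (the credentials and output path are placeholders):

    # dump every database into one SQL file; you'll be prompted for the password
    mysqldump -u root -p --all-databases > /var/backups/all-databases.sql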
Then sync these files to your backup server. You can use the rsync command for this purpose:
http://www.tecmint.com/rsync-local-remote-file-synchronization-commands/
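To answer the "avoid logging in every time" part: set up key-based SSH authentication once, and rsync will stop prompting for a password. The user, host, and paths are placeholders:

    # one-time setup
    ssh-keygen -t ed25519                  # accept the defaults
    ssh-copy-id backupuser@backuphost      # copies your public key to the server

    # afterwards this runs without a password prompt;
    # -a preserves permissions and times, -z compresses in transit
    rsync -az /var/backups/ backupuser@backuphost:/srv/db-backups/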
Make sure that you have MySQL installed on the backup server too. You can also copy the MySQL configuration file /etc/my.cnf to the backup server. If you need your database to always be up to date, you can set up MySQL replication. You can follow the guide below to do that:
http://kbforlinux.blogspot.in/2011/09/setup-mysql-master-slave-replication.html