Understanding COPY in PostgreSQL and database/server hosting - node.js

I am trying to understand a few things that arose from writing the COPY command in a SQL (.sql) file generated by Prisma. I understand that the filepath given to COPY is resolved (whether absolute or relative) on the machine where the database server is running, not on the client.
COPY table(value,other) FROM 'how/do/i/get/the/right/path' DELIMITER ',' CSV HEADER
Can someone explain how we would COPY from a CSV file when we have a hosted server and database (I believe they are usually on separate machines)? The file is part of my typical git repo. After reading, my understanding is that the file would actually need to be on the machine where the database is hosted. Is that correct? I am not sure I have a full grasp on this. Do hosted database servers also store files? I thought it would just be the database (I understand it's a machine that could technically have anything on it, but is that ever done?).
What would be the best way to access this file from the database? Would I use psql to connect to my database and then ssh into the server? I think there are different solutions, like running a script that invokes psql and uses the client-side variant \copy.
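For example, I imagine something like this in a script (the table, column, and file names mirror my placeholder above and are made up; as I understand it, \copy reads the file on the client machine where psql runs rather than on the server):
psql "$DATABASE_URL" -c "\copy table(value, other) FROM 'data/values.csv' WITH (FORMAT csv, HEADER)"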
What I wanted ideally was to have the file as part of my repo and, in a Prisma migration file, copy its contents into a database table. If the above was incorrect, could someone clarify how I would get the correct path into the command? I want it to be in a .sql file and run against the host (assuming that would work, depending on clarity on the above points regarding where files live).
Thanks!
Sources used:
https://www.postgresql.org/docs/current/sql-copy.html
https://www.cybertec-postgresql.com/en/copy-in-postgresql-moving-data-between-servers/

Related

List of Postgres Instances running on a Server

I am working as a Postgres DBA at my organization. We are currently running Postgres 9.5 on SUSE Linux servers. I wanted a specific solution. We have multiple SUSE Linux servers, and each server can host one or more Postgres databases. My requirement is to find the list of all the databases available on a particular server, irrespective of whether a database is up and running or shut down.
Is there any file or location where Postgres keeps a note of all the databases created on a server? Is there a way I can get the required details without connecting to the Postgres instance and without running any psql commands?
If not, what would be the best way to get the list? Any hints, solutions, and thoughts would help me resolve this issue.
Thanks a lot for the help in advance.
In PostgreSQL, a server process serves a database cluster, which physically resides in a data directory.
A PostgreSQL cluster contains several databases, all of which are shut down when the server process is shut down.
So your question can be split into two parts:
How to find out which database clusters are on a machine, regardless of whether they are started or not?
How to find out all databases inside a cluster even if the cluster is shut down?
There is no generic answer to the first question, since a data directory could in principle be everywhere and could belong to any user.
You could get an approximation with
find / -name PG_VERSION
but it is probably better to find a special solution that fits your setup (about which you didn't tell us).
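For example, to print the directories that contain a PG_VERSION file (a rough approximation; note that matches under base/ are per-database subdirectories rather than cluster data directories):
# discard errors for unreadable paths; each hit's directory is a candidate
find / -name PG_VERSION -exec dirname {} \; 2>/dev/null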
The second question is even harder.
To even get a list of numeric IDs, you would have to know all tablespace directories. Those in the default tablespace are the names of the subdirectories in the base subdirectory of the data directory.
In order to get the names of the databases you have to have the PostgreSQL server started, because it would be very hard to find the file that belongs to pg_database and parse it.
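As a sketch (the data directory path is made up): with the cluster shut down, the best you can get is the numeric OIDs, but once the server is started you can map them to names:
ls /var/lib/pgsql/data/base                                   # database OIDs (directory names)
psql -c "SELECT oid, datname FROM pg_database ORDER BY oid;"  # names, once the server is up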

File read/write on cloud (Heroku) using node.js

First of all, I am a beginner with node.js.
In node.js, when I use functions such as fs.writeFile(), the file is created and is visible in my repository. But when this same process is done on a cloud such as Heroku, no file is visible in the repository (cloned via git). I know the file is being created because I am able to read it, but I cannot view it. Why is this? Also, how can I view the file?
I had the same issue and found out that Heroku and other cloud services generally prefer that you don't write to their file system; everything you write/save is stored in an "ephemeral filesystem" that is discarded when the dyno restarts. It's like a ghost file system, really.
Usually you would want to use something like Redis for JSON files and the like, and Amazon S3 for bigger files such as MP3s.
It might also work if you rent a remote server, such as ECS, with a Linux system and a mounted storage volume.
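For what it's worth, a rough sketch of the S3 route from a shell (the bucket and file names are made up, and the AWS CLI must be configured with credentials; a Node app would normally do the same through the AWS SDK instead):
aws s3 cp output.json s3://my-app-bucket/output.json     # upload the generated file
aws s3 cp s3://my-app-bucket/output.json output.json     # fetch it back later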

Backing up entire Linux server on external hard drive or another cluster

We have a Linux server for the database. Most of our data are under /var/. I would like to back up the entire directory to an external hard drive or to another Linux system, so that if something goes wrong I can replace the directory entirely. Since the directory has many files, I do not want to copy and paste every time; instead I would like to sync them.
Is there an easy way to do that? rsync can do that, but how do I avoid logging in to the server every time? BTW, I have limited knowledge of Linux systems.
Any comments and suggestions are appreciated.
Bikesh
Rsyncing live database files is not a recommended way of backing them up. I believe you have MySQL running on the server. In that case, you can take a full database dump on the server, using the steps mentioned in the following link:
http://www.microhowto.info/howto/dump_a_complete_mysql_database_as_sql.html#idp134080
And then sync these dump files to your backup server. You can use the rsync command for this purpose:
http://www.tecmint.com/rsync-local-remote-file-synchronization-commands/
Make sure that you have installed MySQL on the backup server too. You can also copy the MySQL configuration file /etc/my.cnf to the database backup server. In case you require your database backup to always be up to date, you can set up MySQL replication. You can follow the guide mentioned below to do the same.
http://kbforlinux.blogspot.in/2011/09/setup-mysql-master-slave-replication.html
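Putting it together, a rough sketch of the dump-and-sync approach (the user name, host, and paths are made up; the SSH key is what lets rsync run unattended, which answers the question about not logging in every time):
# one-time setup: passwordless SSH so rsync does not prompt for a password
ssh-keygen -t ed25519
ssh-copy-id backupuser@backup-server
# dump all databases (add MySQL credentials as needed), then sync to the backup server
mysqldump --all-databases --single-transaction > /var/backups/all-databases.sql
rsync -avz /var/backups/ backupuser@backup-server:/backups/mysql/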

Specify a local MongoDB data file for a specific Node.js application

My apologies if this has been asked before but I haven't been able to find the answer.
I have a small project written in express.js (node.js) that uses a local SQLite data file. I want to switch to MongoDB but would like to keep the data file within the project directory. Is that possible with MongoDB and, if so, can anyone offer any guidance?
Thank you,
Luis
mongod --dbpath myappdir/mongo
It's certainly possible. In your mongod.conf file, simply point the dbpath property at whatever directory you'd like:
dbpath = /path/to/your/project
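Then start the server against that config file, for example (the path is made up):
mongod --config /path/to/mongod.conf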
Keep in mind that, unlike SQLite, MongoDB creates a number of files, and that number changes as your data changes.

Cloning PostgreSQL database

I want to have a clone of a PostgreSQL database. If I copy the entire data directory from one machine and replace another machine's data directory with it, will there be any problems? Both machines run the same OS (CentOS), btw.
Certainly, if you stop the server and then copy it, that's fine. If you don't, the cloned server will have to perform crash recovery, which isn't so good. Or just use pg_dumpall to produce a script to recreate the data on the new machine.
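A rough sketch of the pg_dumpall route (the host names are made up; run the second command against a freshly initialized cluster on the target):
pg_dumpall -h source-host -U postgres > cluster.sql    # dumps all databases, roles included
psql -h target-host -U postgres -f cluster.sql         # replay the script on the new machine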
You can invoke pg_start_backup() and then copy the data directory. Changes made during the copy are captured in the write-ahead log and are replayed during recovery after you run pg_stop_backup().
http://www.postgresql.org/docs/8.1/static/backup-online.html
See section 23.3.2, "Making a Base Backup".
I then think you can restore the files on another machine running an identical version of PostgreSQL under the same architecture.
Section 23.3.3, "Recovering with an On-line Backup", explains how to restore the backup you have made.
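As a rough sketch of that procedure (the paths and host are made up, and it assumes WAL archiving is configured as the linked docs describe; these are the function names from the era of this thread, and recent PostgreSQL releases automate the whole sequence with pg_basebackup):
psql -U postgres -c "SELECT pg_start_backup('clone');"        # mark the start of the base backup
rsync -a /var/lib/pgsql/data/ clone-host:/var/lib/pgsql/data/ # copy the data directory while live
psql -U postgres -c "SELECT pg_stop_backup();"                # finish; WAL covers in-copy changes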
