Writing a file to an Ubuntu 18.04 directory using the postgres query tool

I have a Python program that calls postgres queries and shows the results on screen. So far, Python and postgres have both run on Windows. Now I want to move the database to Ubuntu.
I therefore want to write a file from the postgres query tool to an Ubuntu 18.04 directory. Under Windows (the old setup), writing the file from the query tool works without complications. With the new postgres database on Ubuntu, I have added the postgres user to my personal user group (mm) and changed the permissions of the corresponding folder to 755. I even added a file temp.csv and changed its permissions to 755 as well. Still, when trying...
create table temp (test text);
insert into temp(test) VALUES ('Sample text');
copy (select * from temp) to '/home/mm/temp.csv';
... I get ...
ERROR: could not open file "/home/mm/temp.csv" for writing: Keine Berechtigung [= Permission denied]
SQL state: 42501
Hint: COPY TO instructs the PostgreSQL server process to write a file. You may want a client-side facility such as psql's \copy.
I could write to a samba directory with "free for all" permissions, but I don't want to do that for data security reasons.
Does anyone have an idea?
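The permission bits quoted above are the likely culprit: mode 755 grants write access to the owner only, so even after joining group mm, the server's postgres user still cannot write to temp.csv. A minimal sketch on a scratch file (the real target from the question is /home/mm/temp.csv):

```shell
# 755 = rwxr-xr-x: group members may read but not write. 664 adds group
# write, which is what a postgres server joined to group "mm" would need.
# (Scratch file for illustration only.)
f=/tmp/perm-demo.csv
touch "$f"
chmod 755 "$f"
stat -c '%a' "$f"        # -> 755 (no group write)
chmod 664 "$f"
mode=$(stat -c '%a' "$f")
echo "$mode"             # -> 664
```

Alternatively, the hint's suggestion sidesteps server-side permissions entirely: psql's `\copy (select * from temp) to '/home/mm/temp.csv'` runs on the client and writes the file as the invoking user, not as the server process.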

Related

Unable to restore database backup on Postgresql-10.0

I am using postgresql-9.4 (port 5432) and postgresql-10.0 (port 5433) on my Linux machine (RHEL 7.4). Postgresql-9.4 was installed from the yum repository and postgresql-10.0 was installed from source in a different partition.
I have taken a backup of a database (dtbase.backup) on postgresql-9.4 using its pg_dump and am trying to restore it on postgresql-10.0 using its pg_restore.
While doing this, I get the error below:
pg_restore: [archiver] unsupported version (1.13) in file header
I have checked different forums but have been unable to find a solution. Any help would be highly appreciated.
You'll have to upgrade your PostgreSQL v10 to 10.3 so that you have commit b8a2908f0ac735da68d49be2bce2d523e363f67b:
Avoid using unsafe search_path settings during dump and restore.
Historically, pg_dump has "set search_path = foo, pg_catalog" when
dumping an object in schema "foo", and has also caused that setting
to be used while restoring the object. This is problematic because
functions and operators in schema "foo" could capture references meant
to refer to pg_catalog entries, both in the queries issued by pg_dump
and those issued during the subsequent restore run. That could
result in dump/restore misbehavior, or in privilege escalation if a
nefarious user installs trojan-horse functions or operators.
This patch changes pg_dump so that it does not change the search_path
dynamically. The emitted restore script sets the search_path to what
was used at dump time, and then leaves it alone thereafter. Created
objects are placed in the correct schema, regardless of the active
search_path, by dint of schema-qualifying their names in the CREATE
commands, as well as in subsequent ALTER and ALTER-like commands.
Since this change requires a change in the behavior of pg_restore
when processing an archive file made according to this new convention,
bump the archive file version number; old versions of pg_restore will
therefore refuse to process files made with new versions of pg_dump.
Security: CVE-2018-1058
Your 9.4 installation already uses archive format 1.13, which your v10 installation does not yet understand.
Besides, you should always use pg_dump from the higher PostgreSQL version to upgrade a database.
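The version the error message complains about can be read straight off the dump file: a custom-format archive starts with the 5-byte magic PGDMP, followed by one byte each for the archive format's major and minor version. A sketch against a synthetic header (a real dump from a patched 9.4 would carry 1.13):

```shell
# Build an 8-byte stand-in for a custom-format dump header: magic "PGDMP"
# plus version bytes 1, 13, 0 (i.e. archive format 1.13).
printf 'PGDMP\001\015\000' > /tmp/fake.dump
magic=$(head -c 5 /tmp/fake.dump)
# read the two version bytes at offsets 5 and 6 as decimal numbers
ver=$(od -An -tu1 -j5 -N2 /tmp/fake.dump | awk '{print $1 "." $2}')
echo "$magic $ver"    # -> PGDMP 1.13
```

Running this against the real dtbase.backup (instead of the synthetic file) shows whether the archive really is format 1.13, i.e. whether the v10 pg_restore simply predates that format.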

Moving neo4j database from Windows to Ubuntu

I created a neo4j database using cypher queries through the browser and some Python (py2neo) routines.
Now I have to transfer this database to another neo4j instance on my Linux desktop.
What I did:
Zipped the contents of the folder default.graphdb.
Unzipped the contents of the zip file to data/graph.db in my Linux installation.
The user:pass of the database is also the same.
But when I go to the browser, I can't find any of that data. The directory does point to the folder that I extracted to (/home/goelakash/neo4j-community-2.3.0/data/graph.db).
How do I get that database?
EDIT - messages.log
https://drive.google.com/file/d/0B3JPglmAz1b5ak1vRWR5Z0p5UVE/view?usp=sharing
The data files should be located directly in data/graph.db. So check, e.g., whether there is a file called neostore.nodestore.db. If so, check the permissions - the system user running Neo4j needs full recursive read/write permission on the graph.db folder.
Also make sure that you're using the same version of Neo4j on Windows and Linux (or upgrade the store following the reference manual).
For more insight, attach the startup sequence from data/graph.db/messages.log.
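The check described above can be scripted. A sketch on a scratch directory (substitute the real path from the question, /home/goelakash/neo4j-community-2.3.0/data/graph.db; the chown line assumes Neo4j runs as a user named neo4j):

```shell
# The store files must sit directly in data/graph.db, not in a subfolder.
STORE=/tmp/neo4j-demo/data/graph.db
mkdir -p "$STORE"
touch "$STORE/neostore.nodestore.db"   # stand-in for a real store file
if [ -f "$STORE/neostore.nodestore.db" ]; then
  check="store files in place"
else
  check="missing - contents were probably unzipped one level too deep"
fi
echo "$check"
# give the Neo4j system user recursive access (assumed user name):
# chown -R neo4j:neo4j "$STORE"
```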

Restoring a MySQL database on a potentially different MySQL installation?

I have a broken installation of Ubuntu 14.04 - it won't boot, but I won't say any more about that because it's not really what I'm asking about. I have a MySQL database (created using v5.5) on the broken Ubuntu installation and I need that data. I can get at the raw MySQL database files by mounting the broken installation on another machine.
I actually need the database to be imported into a MySQL v5.1 installation. I tried copying the raw database files (e.g. the directory at /var/lib/mysql/dbname) into the same directory on the working OS installation. At first it seemed to work: I can see the database, I can use it and I can list the tables. But it turns out that even though I can see the tables in the db, any attempt to describe or use them in any way gives the 'table doesn't exist' error.
Ideally, I'd love to be able to use mysqldump and then import the database the proper way, but how can I get a dump of the database if it's not part of a MySQL installation (remember, I can't boot into the installation - it's broken)?
Of course, mysqldump is the most preferable solution, but if it's not possible to use that utility with the raw database files as input, then I'm willing to try anything that might work.
Of course, the first thing you should do is install the same version of MySQL as the original - if you're directly using the raw data files, keeping things as identical to the original as possible is a must! The same applies to paths: make sure the new installation and data files are placed in the same directory path as the original. Also copy the whole /var/lib/mysql, not just the per-database directory - for InnoDB tables, the .frm files alone are useless without the shared tablespace (ibdata1) and log files, which is the usual cause of the 'table doesn't exist' symptom you describe.
Once you have this, you can mysqldump the tables and use that to import into a clean, new installation.
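A sketch of the copy step, on scratch directories standing in for the mounted broken install's /var/lib/mysql: the point is to take the whole datadir with cp -a, so modes and timestamps survive and the shared InnoDB files come along, not just the dbname folder.

```shell
SRC=/tmp/mysql-demo-src; DST=/tmp/mysql-demo-dst
mkdir -p "$SRC/dbname" "$DST"
printf 'frm' > "$SRC/dbname/t.frm"   # table definition file
printf 'ibd' > "$SRC/ibdata1"        # shared InnoDB tablespace - must come along
cp -a "$SRC/." "$DST"                # -a preserves permissions and times
ls "$DST"                            # -> dbname  ibdata1
```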

How can I check whether the MySQL server is installed before installing the application in the Inno Setup compiler

I need an Inno Setup compiler script to check whether the MySQL server is installed before installing the application.
MySQL is quite different from other programs: it can also be used without installing it on the system, by running the required services from files extracted from the downloaded zip archives, which may be placed anywhere on the system... Tlama already mentioned this.
Here we have two cases to check.
Case 1: Is MySQL installed?
Use the DirExists function (to check whether the MySQL directory exists in Program Files); the MySQL directory path is {pf}\MySQL.
Use the FileExists function (to check whether the required MySQL files are on the user's system).
Query the registry for the MySQL registry key names:
HKEY_USERS\S-1-5-21-1707045092-1792370289-147592793-1000\Software\MySQL
HKEY_USERS\S-1-5-21-1707045092-1792370289-147592793-1000\Software\MySQL AB
HKEY_CURRENT_USER\Software\MySQL
HKEY_CURRENT_USER\Software\MySQL AB
and check whether these exist in the registry.
If everything exists, that's fine - go ahead with your application installation.
If not, check case 2 as well.
Case 2: Are there any files or directories named MySQL anywhere on the system, and are the required MySQL services running?
a) First check whether any file or folder named MySQL exists on the user's machine using the commands below; to execute them you can use the Exec function.
The following finds out whether MySQL (a file or directory) is on the C: drive - but not on the entire system:
C:\>tree |find "mysql" >filename
b) Then change to drives D:, E:, F:, and so on. The command below gives you all disk drives on the machine:
C:\>wmic logicaldisk get caption >filename
Then check each and every drive from that file:
C:\>D:
D:\>tree |find "mysql" >filename
Each time, use LoadStringFromFile into a string and check whether the string's length is zero.
If it is not zero, check whether the required services are running (you can skip some of the steps above for simplicity):
tasklist |find "required service of MySQL" >filename
If all drives are finished and nothing was found, no worries: simply prompt the user to download MySQL (use ITD, the Inno Tools Downloader), or else pack the MySQL MSI with your application - though that makes your installer bulky.
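The drive-by-drive loop in case 2 boils down to: enumerate the roots, search each for a "mysql" entry, and treat any non-empty result as a hit. A POSIX-shell analogy of that logic (scratch directories stand in for the drives; the real installer script would run tree | find per drive as shown above):

```shell
mkdir -p /tmp/drv-c/Windows /tmp/drv-d/mysql-5.7   # fake drive contents
found=no
for root in /tmp/drv-c /tmp/drv-d; do
  # non-empty grep output = a file or folder named mysql on this "drive"
  if ls "$root" | grep -qi mysql; then
    found=yes
  fi
done
echo "$found"    # -> yes
```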

How do I get the Sybase file path, transaction log, and installation details?

I need to retrieve the following details using a Sybase SQL query.
1) Database Data File Path
2) Database Transaction Log File Path
3) Path where SybaseSoftware Installed
4) Patch Installed on Sybase
Thanks.
That info is easy for a DBA to obtain in 30 seconds, or with a GUI admin tool in a few clicks.
Why do you want to obtain the details of the server installation via SQL? If you are a coder, you do not need that info to do your job; it is the domain of the DBA, and it changes as they administer the server. More important, the changes are transparent to the coder. Even if you did know it, it would not help or hinder you in your work.
Online Sybase Manuals
The "data and log file paths" in particular, are protected from direct access by developers (it is a secured ANSI SQL RDBMS).
Update
Evidently you did not bother to look up the manuals.
Open a session with the server so that you can execute SQL commands via "Sybase SQL Query". From your PC, run either isql (character-based) or DBISQL (GUI); both are on the Sybase PC installation CD, and you can also download them free.
Devices ("Data File Paths"):
sp_helpdevice
go
There are many Databases per server. There are many Devices per server. You will have to figure out (a) which Devices contain the Database you are interested in (b) Data Devices vs Log Devices.
sp_helpdb
go
Log Devices ("Database Transaction Log File Path")
(same as (1) )
"Sybase Installation" or $HOME directory (on the server). There are two methods; the first is much easier.
Via the host system:
Log into the host system of the server as the sybase user.
You are already located in the sybase $HOME directory.
It is the installation directory.
(The original installer may have created directory trees for each version or EBF ("patch level"), but that is easy to figure out using Unix/DOS commands.)
Via isql/DBISQL:
sp_configure "configuration file"
go
This will give you the path to the configuration file. It is almost always the file path to the $SYBASE or sybase $HOME directory. You can move up or sideways in the directory tree using Unix/DOS commands and figure it out from there.
The version of Sybase ASE is the only item on your list that is relevant to coders. It (including the current EBF ("patch level")) is obtained via:
SELECT @@version
go
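The step from the configuration-file path to the installation directory is just string surgery: strip the file name, then (commonly) one more directory level. A sketch with an illustrative path - real servers may lay the tree out differently:

```shell
# e.g. sp_configure "configuration file" reports something like this:
cfg='/opt/sybase/ASE-15_0/SERVER.cfg'        # illustrative path only
server_dir=$(dirname "$cfg")                 # /opt/sybase/ASE-15_0
sybase_home=$(dirname "$server_dir")         # /opt/sybase
echo "$sybase_home"
```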
