I am trying to set up a PostgreSQL server on Arch Linux, but the start command returns "logfile: Permission denied"

As the title says, I want to set up a PostgreSQL database (specifically for Metasploit to use, since I'm running my own install of Arch instead of Kali Linux). I have followed the instructions so far to the best of my ability, although I have no experience with PostgreSQL, or much with databases for that matter, and I am stuck on this error.
As you can see below, the database cluster was initialised successfully, but I can't get the server to start.
[postgres@rt metasploit]$ initdb --locale=C --encoding=UTF8 -D /var/lib/postgres/data --data-checksums
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.
The database cluster will be initialized with locale "C".
The default text search configuration will be set to "english".
Data page checksums are enabled.
creating directory /var/lib/postgres/data ... ok
creating subdirectories ... ok
selecting dynamic shared memory implementation ... posix
selecting default max_connections ... 100
selecting default shared_buffers ... 128MB
selecting default time zone ... Etc/GMT-2
creating configuration files ... ok
running bootstrap script ... ok
performing post-bootstrap initialization ... ok
syncing data to disk ... ok
initdb: warning: enabling "trust" authentication for local connections
You can change this by editing pg_hba.conf or using the option -A, or
--auth-local and --auth-host, the next time you run initdb.
Success. You can now start the database server using:
pg_ctl -D /var/lib/postgres/data -l logfile start
[postgres@rt metasploit]$ pg_ctl -D /var/lib/postgres/data -l logfile start
waiting for server to start..../bin/sh: line 1: logfile: Permission denied
stopped waiting
pg_ctl: could not start server
Examine the log output.
If this is a noob question, please just link me to the answer; as I said, I have little to no experience working with databases and just want to get it up and running. I tried searching for it myself but couldn't find much help.
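For what it's worth, the "Permission denied" comes from /bin/sh rather than PostgreSQL: -l logfile is a relative path, so the shell that pg_ctl spawns tries to open a file named logfile in the current directory (here, the metasploit directory), which the postgres user cannot write to. A minimal sketch of a workaround, assuming the data directory is writable by the postgres user:
pg_ctl -D /var/lib/postgres/data -l /var/lib/postgres/data/logfile start
# or change to a directory postgres can write to first:
cd /var/lib/postgres && pg_ctl -D /var/lib/postgres/data -l logfile start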

Related

Database 'neo4j' is unavailable. Cannot reset neo4j database

I have the Neo4j Community 4.1.1 service installed in the Ubuntu command line running on my Windows machine. I have been using Neo4j steadily for a month or two now, but recently it has stopped me from accessing the neo4j database. The Neo4j browser says:
Database 'neo4j' is unavailable. Run :sysinfo for more info.
I have tried uninstalling and reinstalling Neo4j, but that has not worked. I had previously played around with the default listen address, but with the reinstall all config data is back to normal. Running ./neo4j-community-4.1.1/bin/cypher-shell does not work. It says:
Unable to establish connection in 3000ms
If I run ./neo4j-community-4.1.1/bin/cypher-shell -a 192.168.0.19 it says:
Database 'neo4j' is unavailable
When I run ./neo4j-community-4.1.1/bin/neo4j-admin check-consistency --database=neo4j it also states:
2020-08-18 22:12:16.868+0000 WARN [o.n.c.ConsistencyCheckService] Index was dirty on startup which means it was not shutdown correctly and need to be cleaned up with a successful recovery. Index file: /home/thomp105/neo4j-community-4.1.1/data/databases/neo4j/neostore.relationshipgroupstore.db.id.
I would love to reset everything from scratch, but I am unsure how.
At this point I cannot even access the browser at localhost:7474; it hangs indefinitely trying to load.
I am truly stumped. Does anyone have advice on how to navigate this issue?
It's not easy to guess the issue without seeing your system, but could you try deleting your default database, i.e. removing neo4j physically from disk (e.g. rm -rf /home/thomp105/neo4j-community-4.1.1/data/databases/neo4j/), and then creating a database with a different name? Open neo4j.conf, search for dbms.active_database (which points at the default database), and change it to some other name.
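A sketch of that reset, assuming the tarball layout from the question; the replacement name fresh is illustrative, and note that in Neo4j 4.x the setting is called dbms.default_database (dbms.active_database is the 3.x name):
rm -rf /home/thomp105/neo4j-community-4.1.1/data/databases/neo4j/
rm -rf /home/thomp105/neo4j-community-4.1.1/data/transactions/neo4j/   # 4.x keeps per-database transaction logs here too
# then in conf/neo4j.conf:
# dbms.default_database=fresh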
I had this problem running on a Linux server. The server was up, but any query returned this error: Database 'neo4j' is unavailable. To troubleshoot, I ran sudo neo4j console and the problem went away. When I ran the console as user neo4j, the problem came back.
$ /usr/share/neo4j/bin/neo4j console
Directories in use:
home: /var/lib/neo4j
config: /etc/neo4j
logs: /var/log/neo4j
plugins: /var/lib/neo4j/plugins
import: /var/lib/neo4j/import
data: /var/lib/neo4j/data
certificates: /var/lib/neo4j/certificates
run: /var/run/neo4j
So I tried sudo chown -R neo4j:neo4j /var/lib/neo4j/data and the problem went away. Apparently, when I had done a restore of the database, I had run the Neo4j server as root; the system runs Neo4j as the user neo4j, which therefore couldn't read any of its data. An error like this would seem to warrant an easy-to-parse error message, but verbosity is not the Neo4j way.
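A quick way to spot this failure mode (illustrative commands, using the paths listed above):
sudo find /var/lib/neo4j/data ! -user neo4j    # lists anything not owned by the neo4j user
sudo chown -R neo4j:neo4j /var/lib/neo4j/data  # then reassert ownership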

PostgreSQL 10: postgres user's /sbin/nologin shell preventing initdb setup

I'm attempting to install postgresql 10 for the first time and need to run the initdb setup. Unfortunately, this fails and returns an error from the nologin shell.
server# /usr/pgsql-10/bin/postgresql-10-setup initdb
Initializing database ...
failed, see /var/lib/pgsql/10/initdb.log
server# cat /var/lib/pgsql/10/initdb.log
This account is currently not available.
I strace'd the command and verified that the su calls are probably what's failing: the postgres user's default shell is /sbin/nologin. None of the examples I've seen mention this as a possible issue, so how does this work on any other system by default? I suspect temporarily changing the login shell would work, but I want to understand the issue better, specifically from the application's end.
centos 7.8
selinux mode: permissive
postgresql 10
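One common workaround (a sketch, not taken from the thread; the data directory shown is the PGDG default) is to hand su an explicit shell, which sidesteps /sbin/nologin without changing the account:
sudo su -s /bin/sh postgres -c "/usr/pgsql-10/bin/initdb -D /var/lib/pgsql/10/data"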

Can you help me access Mac SMB share from Ubuntu using smbclient? (NT_STATUS_ACCESS_DENIED error)

I've been working on a file server product that uses smbclient to transfer files between client computers and the server. It's been working great so far with our LAMP (Ubuntu) server and Windows machines.
I'm currently trying to expand the setup to include Macs, but am having trouble with the server accessing the share on the Mac.
Here's my command and error (bracketed descriptions replace private info):
# smbclient //10.101.0.7/[share-file] -U [username]%[password] -c ls
WARNING: The "syslog" option is deprecated
NTLMSSP packet check failed due to short signature (0 bytes)!
NTLMSSP NTLM2 packet check failed due to invalid signature!
session setup failed: NT_STATUS_ACCESS_DENIED
Things I've tried:
✓ Accessing share using a Windows machine to ensure the share is setup properly - check! Works fine there.
✓ Invoking -S off or --signing=off in the command - no change.
✓ Just looking at the shares first using smbclient -L 10.101.0.7 -U [username]%[password] - same error.
✓ Googling for an answer - check! Several people with similar problems, but no working solutions so far.
The most promising thing I've seen so far involves compiling smbclient 4.4 from source and running it with no authentication (-U ""%""), but that seems like a temporary workaround based on a bug rather than a solid plan that will work long-term. (I'll try that next if I can't find any better ideas...)
Thanks for reading and trying to help!
Try adding --option="ntlmssp_client:force_old_spnego = yes" to the smbclient command as suggested on the samba-technical mailing list.
For me, this now lists shares on a Mac OSX server:
smbclient -U$user%$password -L $mac_host --option="ntlmssp_client:force_old_spnego = yes"
For mounting, you may need to add the nounix,sec=ntlmssp options as in
sudo mount -t cifs //$mac_host/$share $mountpoint -o nounix,sec=ntlmssp,username=$user,password=$password
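A quick sanity check after mounting (illustrative, reusing the same shell variables):
mount -t cifs      # list mounted cifs filesystems
ls "$mountpoint"   # confirm the share contents are visible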
On recent versions of MacOS (e.g. Monterey) it is necessary to do several configuration steps to enable smb access from Linux:
Open System Preferences.
Select Sharing.
Select File Sharing.
Ensure that the directory is listed in Shared Folders.
Right-click/two-finger click on the share directory.
Click on Advanced Options
Ensure Only allow SMB encrypted connections is checked.
Click OK
Click on Options
Click on the checkbox for Share files and folders using SMB.
Under Windows File Sharing ensure the appropriate user is checked.
Type the user's password in the 'Authenticate' dialog box and press 'OK'.
Click 'Done'.
You should now be able to connect from Linux to the MacOS share using the commands given by @mivk.

Can't Write to /sys/kernel/ to disable Transparent Huge Pages (THP) for MongoDB on OVH CentOS 7

My Issue
I am having trouble removing MongoDB warnings about Transparent Huge Pages (THP) on an OVH CentOS 7 installation, and the issue appears to be the inability to write to /sys/kernel/mm as root.
First, I realize the OVH kernel is customized, and I know many of you will say to go with a fresh non-customized kernel, but that's not an option right now. I need to solve this problem for the current OS.
MongoDB Warnings:
2016-03-09T00:31:45.889-0500 W CONTROL [initandlisten] Failed to probe "/sys/kernel/mm/transparent_hugepage": Permission denied
2016-03-09T00:31:45.889-0500 W CONTROL [initandlisten] Failed to probe "/sys/kernel/mm/transparent_hugepage": Permission denied
MongoDB is trying to read the transparent_hugepage files (below), but they do not exist:
/sys/kernel/mm/transparent_hugepage/enabled
/sys/kernel/mm/transparent_hugepage/defrag
Cannot Create the Files
All of the solutions I've seen involve creating the files and populating them with never, including the script in the MongoDB documentation. In all of the solutions, this is the key part:
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag
However, the files do not exist, and I cannot create anything under /sys/kernel/mm as root.
root@myhost [~]# echo never > /sys/kernel/mm/transparent_hugepage/enabled
-bash: /sys/kernel/mm/transparent_hugepage/enabled: No such file or directory
root@myhost [~]# mkdir -p /sys/kernel/mm/transparent_hugepage
mkdir: cannot create directory ‘/sys/kernel/mm/transparent_hugepage’: Operation not permitted
The owner and group of directory /sys/kernel/mm are root, and I have temporarily changed the permissions from 700 to 777, yet I still cannot create the directory as root.
Tuned Profile Also Doesn't Help
To be thorough, I have also created the custom Tuned profile (per instructions in MongoDB link above) and activated it, but it generates the error WARNING tuned.plugins.plugin_vm: Option 'transparent_hugepages' is not supported on current hardware.
Tuned Profile (/etc/tuned/no-thp/tuned.conf):
[main]
include=virtual-guest
[vm]
transparent_hugepages=never
Error in Tuned log:
WARNING tuned.plugins.plugin_vm: Option 'transparent_hugepages' is not supported on current hardware.
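For completeness, a custom profile like this is normally activated with tuned-adm (standard usage; these commands are not quoted in the question):
tuned-adm profile no-thp   # activate the custom profile
tuned-adm active           # confirm which profile is active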
Some Solution in MongoDB Itself?
It seems like the best solution would be to somehow explicitly configure MongoDB not to use THP so that it wouldn't have to check for the missing files, but I've seen nothing like this. If there is a way, even if it involves customizing MongoDB (and repeating after every update), I'm willing to do it.
Right now I have CentOS 7 installed on OVH. They use /boot/bzImage-3.14.32-xxxx-grs-ipv6-64, which implements grsecurity (https://grsecurity.net) and precludes access to some folders.
The simple solution to the MongoDB huge-pages warnings is to replace the kernel. The procedure for CentOS 7 is as follows:
Download the required kernel from the OVH FTP server (ftp://ftp.ovh.net/made-in-ovh/bzImage2) into the /boot folder.
Edit /etc/grub2.cfg:
# linux /boot/bzImage-3.14.32-xxxx-grs-ipv6-64 root=/dev/md1 ro net.ifnames=0
linux /boot/bzImage-4.8.17-xxxx-std-ipv6-64 root=/dev/md1 ro net.ifnames=0
Here I replaced the default bzImage-3.14.32-xxxx-grs-ipv6-64 with bzImage-4.8.17-xxxx-std-ipv6-64, which does not include grs.
Now, reboot and check that the new kernel is running:
[root@ns506846 ~]# uname -r
4.8.17-xxxx-std-ipv6-64

Error When Starting OmniEvents

I am attempting to install REDHAWK v1.8.2 on a fresh install of CentOS 6.4 32-bit, but I am unable to get omniNames and omniEvents to start.
sudo /sbin/service omniEvents stop
Stopping CORBA event service: omniEvents
sudo /sbin/service omniNames stop
Stopping omniNames [ OK ]
sudo /sbin/service omniNames start
Starting omniNames [ OK ]
sudo /sbin/service omniEvents start
Starting CORBA event service on port 11169: omniEvents: [25848]: Warning - failed to resolve initial reference 'NameService'. Exception: TRANSIENT
omniEvents.
I tried to verify whether omniNames was really running by calling the naming client, but got an error (see below), so it seems omniNames is not starting successfully.
nameclt list
Caught a TRANSIENT exception when trying to validate the type of the
NamingContext. Is the naming service running?
As part of the debugging process, I tried to kill the omniNames process and start it a different way (see below).
sudo killall omniNames
omniNames -start
Wed Nov 13 21:08:08 2013:
Starting omniNames for the first time.
Error: cannot create initial log file '/var/omninames/omninames-orion.log':
No such file or directory
You can set the environment variable OMNINAMES_LOGDIR to specify the
directory where the log files are kept.
I'm not sure why omniNames can't create the log file: I verified that the /var/omninames folder exists, and even starting omniNames as root yields the same error. Regardless, I set the log directory to my desktop to circumvent the error (see below).
export OMNINAMES_LOGDIR=/home/$USER/Desktop/logs
mkdir -p /home/$USER/Desktop/logs
omniNames -start
Wed Nov 13 21:09:17 2013:
Starting omniNames for the first time.
Wrote initial log file.
Read log file successfully.
Root context is IOR:010000002b00000049444c3a6f6d672e6f72672f436f734e616d696e672f4e616d696e67436f6e746578744578743a312e30000001000000000000005c000000010102000a00000031302e322e382e333500f90a0b0000004e616d6553657276696365000200000000000000080000000100000000545441010000001c00000001000000010001000100000001000105090101000100000009010100
Checkpointing Phase 1: Prepare.
Checkpointing Phase 2: Commit.
Checkpointing completed.
Even though it looks like omniNames successfully started, when I open another terminal window and call the naming client, I get the same error as before (see below).
nameclt list
Caught a TRANSIENT exception when trying to validate the type of the
NamingContext. Is the naming service running?
The only modification I made to the /etc/omniORB.cfg file was to add the InitRef lines (see below).
InitRef = NameService=corbaname::localhost
InitRef = EventService=corbaloc::localhost:1169/omniEvents
Also, I am not connected to the internet so my version of CentOS has not been updated from the base version, except for the boost libraries as recommended in Appendix J of the manual (http://sourceforge.net/projects/redhawksdr/files/redhawk-doc/1.9.0/REDHAWK_Manual_v1.9.0.pdf/download).
It looks like the issue is in your configuration: you've got the wrong port in your configuration file. It should be port 11169, but you've listed port 1169.
See: http://redhawksdr.github.io/Documentation/mainch2.html#x4-120002.6 for details.
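With the port corrected, the EventService line in /etc/omniORB.cfg would read:
InitRef = EventService=corbaloc::localhost:11169/omniEvents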
A few other observations and tricks regarding omniORB, in case this was not the issue:
Sometimes omniNames/omniEvents can get into a bad state. The fix is to delete the log files created by omniNames and omniEvents and restart the services. They are located at:
/var/lib/omniEvents/*
/var/omniNames/*
You'll need to be root to delete those files. I always forget where they are located and often run "locate omni | grep -i log" to remind myself, but you must do this as root since they are not visible to standard users.
While it should not matter, I've personally found that using 127.0.0.1 is more reliable than localhost. For some reason, using localhost in the configuration file within a VM has caused me problems in the past. Consider using 127.0.0.1 instead of localhost; this is what the current version of the REDHAWK manual recommends as well.
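Applying both fixes (port 11169 and 127.0.0.1) to the question's config would give, for example:
InitRef = NameService=corbaname::127.0.0.1
InitRef = EventService=corbaloc::127.0.0.1:11169/omniEvents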
You mentioned you are using REDHAWK v1.8.2. As an FYI, the latest version in the 1.8 series is currently v1.8.5, and 1.9.0 was also recently released.
Hopefully this gets you up and running!
