I have a CentOS 6.4 x64 server with the EPEL repo enabled. I installed CouchDB via the yum package manager (version 1.0.4) with no errors. I edited the /etc/couchdb/local.ini file with my port (default 5984) and server IP address. Whenever I run service couchdb start, it returns the OK message:
Starting couchdb: [ OK ]
However, if I run service couchdb status right after, I get this:
couchdb dead but pid file exists
and, of course, the server does not work.
The weird part is that service couchdb start always returns the success message, although the server never actually runs. Also, there are no log files created at all by CouchDB (my /var/log/couchdb/ folder is empty; I double-checked the CouchDB configuration files for the path).
When I delete the /var/run/subsys/couchdb.pid file, the service shows couchdb as not started, and when I try to start CouchDB again (service couchdb start), I get the success message again, and so on.
Any help will be greatly appreciated. :)
EDIT: I forgot to mention that when I run couchdb directly it works fine (giving me only this warning: "TODO: max is currently unsupported"), so it is just the service that doesn't work.
Maybe there are multiple CouchDB instances still running; you have to kill each and every one of them and then restart the service. Hope it works fine.
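A minimal cleanup sketch, assuming the PID-file path from the question (process names may differ, since CouchDB runs inside the Erlang VM):
# see whether any couchdb-related processes are still alive
ps aux | grep -i couch
# kill any stray instances (-f matches against the full command line)
sudo pkill -f couchdb
# remove the stale PID file so the init script's status check resets
sudo rm -f /var/run/subsys/couchdb.pid
# try the service again
sudo service couchdb start
sudo service couchdb status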
I have Neo4j Community 4.1.1 installed on the Ubuntu command line running on my Windows machine. I have been using Neo4j steadily for a month or two now, but recently it has prevented me from accessing the Neo4j database; it says this in the Neo4j Browser:
Database 'neo4j' is unavailable. Run :sysinfo for more info.
I have tried uninstalling Neo4j and reinstalling, but that has not worked either. I tried playing around with the default listen address previously, but now, after the reinstall, all config data is back to normal. Running ./neo4j-community-4.1.1/bin/cypher-shell does not work. It says:
Unable to establish connection in 3000ms
If I run ./neo4j-community-4.1.1/bin/cypher-shell -a 192.168.0.19 it says:
Database 'neo4j' is unavailable
When I run ./neo4j-community-4.1.1/bin/neo4j-admin check-consistency --database=neo4j it also states:
.2020-08-18 22:12:16.868+0000 WARN [o.n.c.ConsistencyCheckService] Index was dirty on startup which means it was not shutdown correctly and need to be cleaned up with a successful recovery. Index file: /home/thomp105/neo4j-community-4.1.1/data/databases/neo4j/neostore.relationshipgroupstore.db.id.
I would love to reset everything from scratch, but I am unsure how.
At this point I cannot even access the browser at localhost:7474. It hangs indefinitely trying to load.
I am truly stumped. Anyone have any advice on how I navigate this issue?
It's not easy to guess the issue without seeing your system, but could you try deleting your default database, i.e. neo4j, physically from the disk (e.g. rm -rf /home/thomp105/neo4j-community-4.1.1/data/databases/neo4j/), and then creating another database with a different name instead? Open neo4j.conf, search for dbms.default_database (dbms.active_database in Neo4j 3.x), which points at the default database, and change it to some other name.
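A minimal sketch of that config change, assuming the Neo4j 4.x setting name and a hypothetical new database name:
# in neo4j.conf (Neo4j 4.x; 3.x used dbms.active_database)
dbms.default_database=mynewdb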
I had this problem running on a Linux server. The server was up, but I got this error on any query: Database 'neo4j' is unavailable. To troubleshoot, I ran sudo neo4j console and the problem went away. When I ran the console as user neo4j, the problem came back.
$ /usr/share/neo4j/bin/neo4j console
Directories in use:
home: /var/lib/neo4j
config: /etc/neo4j
logs: /var/log/neo4j
plugins: /var/lib/neo4j/plugins
import: /var/lib/neo4j/import
data: /var/lib/neo4j/data
certificates: /var/lib/neo4j/certificates
run: /var/run/neo4j
So I tried sudo chown -R neo4j:neo4j /var/lib/neo4j/data and the problem went away. Apparently, when I'd done a restore of the database, I'd run the Neo4j server as root; the system runs Neo4j as the user neo4j, which therefore couldn't read any of its data. It seems an error like this would warrant an easy-to-parse error message, but verbosity is not the Neo4j way.
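A quick way to confirm this kind of mismatch, using the packaged layout printed above (user name and paths may differ on other installs):
# which user owns the data files?
sudo ls -l /var/lib/neo4j/data/databases
# which user is the server actually running as?
ps aux | grep neo4j
# if they differ, hand the data back to the service user
sudo chown -R neo4j:neo4j /var/lib/neo4j/data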
In some beta version of ArangoDB 3.4, my database crashed while I tried to add a view via arangosh. Because I was not able to start the database anymore, it was not possible to make a backup (database dump).
I wanted to install the newest ArangoDB 3.4.2.1 then, but that failed because my CPU was too old (no SSE 4.2 support). So I bought a new computer, set up a new Linux, copied the databases to /var/lib/arangodb3/databases, and started a new installation of ArangoDB, which even asked me whether the current databases should be upgraded. I confirmed that.
Unfortunately, it didn't find the databases in that directory, so I now have access only to the system database.
My question is: Can I recover the databases which are laying in /var/lib/arangodb3/databases somehow?
Do you have a copy of the "/var/lib/arangodb3" directory (which includes "databases" as a subfolder) as well? If so, copy the folder to a location on your new machine where ArangoDB 3.4.2.1 is installed. You also have to make sure to give the user arangodb access to this folder with the following command:
chown -R arangodb:arangodb /path/to/your/arangodb3RecoveryFolder
Next you can modify the arangod.conf (located at /etc/arangodb3/arangod.conf) to point to your recovery arangodb3 folder.
[database]
directory = /path/to/your/arangodb3RecoveryFolder
Then stop the arangodb3 service with sudo service arangodb3 stop, run sudo service arangodb3 upgrade to upgrade the database directory, and run sudo service arangodb3 start to start the service again.
You can check if the service is running by executing sudo service arangodb3 status. In case it is not working, have a look at potential error messages in the log file (/var/log/arangodb3/arangod.log).
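Putting the steps together, the whole sequence looks roughly like this (the recovery path is the placeholder used above):
sudo chown -R arangodb:arangodb /path/to/your/arangodb3RecoveryFolder
sudo service arangodb3 stop
sudo service arangodb3 upgrade
sudo service arangodb3 start
sudo service arangodb3 status
# if the service does not come up, inspect the log
sudo tail -n 50 /var/log/arangodb3/arangod.log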
I'm using CentOS 6.9. I have installed Redis using yum:
sudo yum update
sudo yum install redis
No errors were given during the installation.
I can start the Redis client using redis-cli. It gives me the prompt as expected:
127.0.0.1:6379>
However whenever I issue commands (e.g. PING or SET foo bar) it's giving the following error message:
(error) MISCONF Redis is configured to save RDB snapshots, but is currently not able to persist on disk. Commands that may modify the data set are disabled. Please check Redis logs for details about the error.
I've found the question "MISCONF Redis is configured to save RDB snapshots." and gone through it, but none of the advice there works.
The accepted answer there was to use CONFIG SET to change the directory where Redis stores data. I tried this with a non-root directory, CONFIG SET dir /home/andy, but it still gives me the same error message.
If I execute BGSAVE it says "Background saving started" but then attempting SET foo bar goes back to giving me the error above.
Other answers have discussed this being a permissions issue. However, I don't see how those apply, because I've tried starting Redis as both root and my own account (andy), and the same occurs.
I'm not sure if it's the same problem as described on the link or something else.
How can I further diagnose this? I am a PHP developer by trade, so this is not my area of expertise; however, I am trying to install Redis so I can use it with a PHP application which has its own interface to Redis.
It seems that the yum installation creates a redis user, and your Redis instance is running as this user. So even if you set dir to /home/andy, the redis user still doesn't have permission to write to andy's home directory.
Use ps aux | grep redis to get the user who's running Redis, and set dir to a directory that this user has write permission on.
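A minimal sketch of that fix, assuming the yum package's service user is named redis and using an illustrative directory:
# confirm which user the Redis server runs as
ps aux | grep redis-server
# give that user a directory it can write to (the path here is just an example)
sudo mkdir -p /var/lib/redis
sudo chown redis:redis /var/lib/redis
# then, inside redis-cli, point Redis at it and retry the save:
#   CONFIG SET dir /var/lib/redis
#   BGSAVE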
A quick fix for this error is to go into redis-cli and set the following:
127.0.0.1:6379> config set stop-writes-on-bgsave-error no
Note that this just tells Redis to ignore the failed background save; it doesn't fix the underlying persistence problem.
I solved the problem by executing the following commands:
$ redis-cli
> config set stop-writes-on-bgsave-error no
I am attempting to install REDHAWK v1.8.2 on a fresh install of CentOS 6.4 32 bit, but I am unable to get omniNames and omniEvents to start.
sudo /sbin/service omniEvents stop
Stopping CORBA event service: omniEvents
sudo /sbin/service omniNames stop
Stopping omniNames [ OK ]
sudo /sbin/service omniNames start
Starting omniNames [ OK ]
sudo /sbin/service omniEvents start
Starting CORBA event service on port 11169: omniEvents: [25848]: Warning - failed to resolve initial reference 'NameService'. Exception: TRANSIENT
omniEvents.
I tried to verify if omniNames was really running by calling the naming client, but got an error (see below), so it seems omniNames is not successfully starting.
nameclt list
Caught a TRANSIENT exception when trying to validate the type of the
NamingContext. Is the naming service running?
As part of the debugging process, I tried to kill the omniNames process and start it a different way (see below).
sudo killall omniNames
omniNames -start
Wed Nov 13 21:08:08 2013:
Starting omniNames for the first time.
Error: cannot create initial log file '/var/omninames/omninames-orion.log':
No such file or directory
You can set the environment variable OMNINAMES_LOGDIR to specify the
directory where the log files are kept.
I'm not sure why omniNames can't create the log file, because I verified that the /var/omninames folder actually exists, and even starting omniNames as root yields the same error. Regardless, I set the log directory to my desktop to circumvent the error (see below).
export OMNINAMES_LOGDIR=/home/$USER/Desktop/logs
mkdir -p /home/$USER/Desktop/logs
omniNames -start
Wed Nov 13 21:09:17 2013:
Starting omniNames for the first time.
Wrote initial log file.
Read log file successfully.
Root context is IOR:010000002b00000049444c3a6f6d672e6f72672f436f734e616d696e672f4e616d696e67436f6e746578744578743a312e30000001000000000000005c000000010102000a00000031302e322e382e333500f90a0b0000004e616d6553657276696365000200000000000000080000000100000000545441010000001c00000001000000010001000100000001000105090101000100000009010100
Checkpointing Phase 1: Prepare.
Checkpointing Phase 2: Commit.
Checkpointing completed.
Even though it looks like omniNames successfully started, when I open another terminal window and call the naming client, I get the same error as before (see below).
nameclt list
Caught a TRANSIENT exception when trying to validate the type of the
NamingContext. Is the naming service running?
The only modification I made in the /etc/omniORB.cfg file is to add the lines for InitRef (see below).
InitRef = NameService=corbaname::localhost
InitRef = EventService=corbaloc::localhost:1169/omniEvents
Also, I am not connected to the internet so my version of CentOS has not been updated from the base version, except for the boost libraries as recommended in Appendix J of the manual (http://sourceforge.net/projects/redhawksdr/files/redhawk-doc/1.9.0/REDHAWK_Manual_v1.9.0.pdf/download).
Looks like the issue is in your configuration: you've got the wrong port in your configuration file. It should be port 11169, but you've listed port 1169.
See: http://redhawksdr.github.io/Documentation/mainch2.html#x4-120002.6 for details.
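For reference, here are the InitRef lines from the question with just the EventService port corrected:
InitRef = NameService=corbaname::localhost
InitRef = EventService=corbaloc::localhost:11169/omniEvents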
A few other observations and tricks regarding omniORB, in case this was not the issue.
Sometimes omniNames/omniEvents can get into a bad state. The fix is to delete the log files created by omniNames and omniEvents and restart the services. They are located at:
/var/lib/omniEvents/*
/var/omniNames/*
You'll need to be root to delete those files. I always forget where they are located and often do a "locate omni | grep -i log" to remind myself, but you must do this as root, since they are not visible to standard users.
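A sketch of that reset, using the locations listed above:
# stop both services first
sudo /sbin/service omniEvents stop
sudo /sbin/service omniNames stop
# remove the stale state/log files (must be root)
sudo rm -f /var/lib/omniEvents/*
sudo rm -f /var/omniNames/*
# start omniNames before omniEvents, since omniEvents needs to resolve the NameService
sudo /sbin/service omniNames start
sudo /sbin/service omniEvents start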
While it should not matter, I've personally found that using 127.0.0.1 is more reliable than localhost; for some reason, using localhost in the configuration file within a VM has caused me problems in the past. Consider using 127.0.0.1 instead of localhost. This is what the current version of the REDHAWK Manual recommends as well.
You mentioned you are using REDHAWK v1.8.2. As an FYI, the latest REDHAWK version in the 1.8 series is currently v1.8.5, and 1.9.0 was also recently released.
Hopefully this gets you up and running!
I have an Oracle 11g XE instance running under Ubuntu Server. I tried changing the hostname of the server by modifying the hostname in /etc/hostname, /etc/hosts, tnsnames.ora, and listener.ora, but the oracle-xe instance fails to start after reboot. Any idea which configuration I am missing?
Sometimes Oracle starts with only certain services/functionalities not working properly... If that's the case and your Oracle instance partially failed to start, you can get some more information about running listeners by invoking the lsnrctl command-line utility and then using the status command.
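For example, from a shell on the server (output varies, but it shows whether a listener is up and which services it knows about):
lsnrctl status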
You can also look for clues in the Oracle log files under <oracle-install>/app/oracle/diag/tnslsnr/<hostname>/listener/alert/log.xml; you should definitely have one for your old hostname, and you might have another one created for your new hostname as well.
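If you don't know the install root, one way to locate those listener logs is a filesystem search using the path pattern above:
# may take a while; run as a user that can read the Oracle directories
find / -path '*diag/tnslsnr/*/listener/alert/log.xml' 2>/dev/null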
I had this and solved it: just rename your listener.ora and restart, and it will change the setting to the new hostname.
See my explanation here.
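A hedged sketch of that rename, assuming the default 11g XE install path (adjust to wherever your listener.ora actually lives):
# move the old listener config out of the way, then restart oracle-xe
sudo mv /u01/app/oracle/product/11.2.0/xe/network/admin/listener.ora \
        /u01/app/oracle/product/11.2.0/xe/network/admin/listener.ora.bak
sudo service oracle-xe restart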