I am trying to run ArangoDB 3.2 (recently updated) on my CentOS 6.9 server, but it does not work. What can I do to fix it?
[root@user ~]# arangod
2017-08-18T20:01:34Z [8419] INFO ArangoDB 3.2.1 [linux] 64bit, using jemalloc, VPack 0.1.30, RocksDB 5.6.0, ICU 58.1, V8 5.7.0.0, OpenSSL 1.0.1e-fips 11 Feb 2013
2017-08-18T20:01:34Z [8419] INFO using storage engine mmfiles
2017-08-18T20:01:34Z [8419] INFO {cluster} Starting up with role SINGLE
2017-08-18T20:01:34Z [8419] INFO {syscall} file-descriptors (nofiles) hard limit is 8192, soft limit is 8192
2017-08-18T20:01:34Z [8419] INFO Authentication is turned on (system only), authentication for unix sockets is turned on
2017-08-18T20:01:34Z [8419] ERROR error while opening database collections: got invalid indexes for collection '_fishbowl' (exception location: /var/lib/otherjenkins/workspace/RELEASE__BuildPackages/arangod/MMFiles/MMFilesCollection.cpp:2004). Please report this error to arangodb.com
2017-08-18T20:01:34Z [8419] FATAL cannot start database: got invalid indexes for collection '_fishbowl' (exception location: /var/lib/otherjenkins/workspace/RELEASE__BuildPackages/arangod/MMFiles/MMFilesCollection.cpp:2004). Please report this error to arangodb.com
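For anyone triaging this: '_fishbowl' is the system collection ArangoDB uses to cache the Foxx app-store index, so its contents are disposable. A possible recovery sketch follows; the data-directory path and the parameter.json layout are assumptions based on a default MMFiles install, so verify them on your system and back everything up before deleting anything.
# Sketch only: locate the damaged _fishbowl collection and set it aside.
service arangodb3 stop
# Find the collection directory whose parameter.json mentions _fishbowl:
grep -l '_fishbowl' /var/lib/arangodb3/databases/*/collection-*/parameter.json
# Move the matching directory out of the way (placeholder path), then restart;
# the Foxx store index can be re-fetched later:
# mv /var/lib/arangodb3/databases/database-XXX/collection-YYY /root/fishbowl.bak
service arangodb3 start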
I'm from a Linux server background and very new to JBoss. I'm trying to set up an IoT application server, which requires the JBoss service to provide
a web interface for the application server.
But when I check the JBoss server state it shows 'starting'; I need it to be 'running'.
# /opt/cgms/bin/jboss-cli.sh --connect controller=127.0.0.1 ":read-attribute(name=server-state)"
{
"outcome" => "success",
"result" => "starting"
}
I can see that the deployment fails when I start JBoss using the standalone.sh script. I've increased the deployment-timeout
to 6000 seconds in standalone.xml, but the deployment still fails with the following message in /opt/cgms/standalone/deployments/cgms.ear.failed:
"JBAS015052: Did not receive a response to the deployment operation within the allowed timeout period [6000 seconds].
Check the server configuration file and the server logs to find more about the status of the deployment."
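As an aside, the scanner timeout can also be raised from the CLI rather than by editing standalone.xml by hand. A sketch, assuming the stock deployment-scanner named "default" that EAP 6 ships with:
# Sketch: raise the deployment-scanner timeout via jboss-cli
/opt/cgms/bin/jboss-cli.sh --connect "/subsystem=deployment-scanner/scanner=default:write-attribute(name=deployment-timeout,value=6000)"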
Here are my JBoss setup details:
[root@app-server ~]# /opt/cgms/bin/jboss-cli.sh --connect
[standalone#localhost:9999 /] version
JBoss Admin Command-line Interface
JBOSS_HOME: /opt/cgms
JBoss AS release: 7.3.0.Final-redhat-14 "Janus"
JBoss AS product: EAP 6.2.0.GA
JAVA_HOME: null
java.version: 1.8.0_65
java.vm.vendor: Oracle Corporation
java.vm.version: 25.65-b01
os.name: Linux
os.version: 3.10.0-229.el7.x86_64
When I check server.log, it is stuck at:
# tailf /opt/cgms/server/cgms/log/server.log
624: app-server: Aug 12 2017 05:45:01.506 +0000: %IOTFND-6-UNSPECIFIED: %[ch=StdSchedulerFactory][sev=INFO][tid=MSC service thread 1-1]: Quartz scheduler 'CgnmsQuartz' initialized from an externally provided properties instance.
625: app-server: Aug 12 2017 05:45:01.506 +0000: %IOTFND-6-UNSPECIFIED: %[ch=StdSchedulerFactory][sev=INFO][tid=MSC service thread 1-1]: Quartz scheduler version: 2.2.1
It does not go any further from here.
I've tried with Java 1.7, but the standalone.sh script failed with a Java error:
java.lang.UnsupportedClassVersionError: com/cisco/cgms/loglayout/LogHandler : Unsupported major.minor version 52.0
at java.lang.ClassLoader.defineClass1(Native Method)
at java.lang.ClassLoader.defineClass(ClassLoader.java:800)
at org.jboss.modules.ModuleClassLoader.doDefineOrLoadClass(ModuleClassLoader.java:345)
at org.jboss.modules.ModuleClassLoader.defineClass(ModuleClassLoader.java:423)
at org.jboss.modules.ModuleClassLoader.loadClassLocal(ModuleClassLoader.java:261)
at org.jboss.modules.ModuleClassLoader$1.loadClassLocal(ModuleClassLoader.java:76)
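For context: class file version 52.0 is Java 8 bytecode, so a Java 7 JVM refuses to load these classes, which is why falling back to Java 1.7 cannot work here. A quick way to confirm a class's target version (the extracted .class path is illustrative):
# 52.0 = Java 8, 51.0 = Java 7, 50.0 = Java 6
javap -verbose LogHandler.class | grep 'major version'
# and confirm which JVM standalone.sh will actually pick up:
java -version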
Here are my server details:
OS - Red Hat Enterprise Linux Server release 7.1 (Maipo) - runs on Oracle VM VirtualBox
kernel - app-server 3.10.0-229.el7.x86_64 #1 SMP Thu Jan 29 18:37:38 EST 2015 x86_64 x86_64 x86_64 GNU/Linux
When I check netstat, ports 80 and 443 are listening.
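(For anyone reproducing the check, something along these lines; flags may differ between netstat versions:)
netstat -tlnp | grep -E ':(80|443)\s'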
Please help me fix this problem.
Well, it seems I've hit my first issue with my BigInsights image. Not a massive problem, but something to think about. My Ambari browser services page showed that the Kafka service was not running. I tried restarting it a number of times, but it kept failing, so I figured I'd best look into it a bit further. In this case the issue was on the Ambari master server, which has the most services running on it.
So the first course of action is to see if maybe Ambari is not making the call correctly:
[root@master ~]# kafka
Usage: /usr/bin/kafka {start|stop|status|clean}
[root@master ~]# kafka status
Kafka is not running.
[root@master ~]# kafka start
Starting Kafka succeeded with PID=15815.
[root@master ~]# kafka status
Kafka is not running.
Next I tried a clean start; not that I figured it would make much difference, but maybe there was an issue with the logs not allowing it to restart:
[root@master ~]# kafka clean
Removed the Kafka PID file: /var/run/kafka/kafka.pid.
Removed the Kafka OUT file: /var/log/kafka/kafka.out.
Removed the Kafka ERR file: /var/log/kafka/kafka.err.
[root@master ~]# kafka status
Kafka is not running. No pid file found.
[root@master ~]# kafka start
Starting Kafka succeeded with PID=15875.
[root@master-01 ~]# kafka status
Kafka is not running.
So let's take a proper look at the logs:
[root@master ~]# ls -ltr /var/log/kafka/
-<cut>-
-rw-r--r-- 1 kafka hadoop 6588 Aug 11 13:55 controller.log.2015-08-11-13
-rw-r--r-- 1 kafka hadoop 6000 Aug 11 13:59 server.log.2015-08-11-13
-rw-r--r-- 1 kafka hadoop 6588 Aug 11 14:55 controller.log
-rw-r--r-- 1 kafka hadoop 5700 Aug 11 14:56 server.log
-rw-r--r-- 1 root root 284 Aug 11 15:09 kafka.err
-rw-r--r-- 1 root root 522 Aug 11 15:09 kafka.out
-rw-r--r-- 1 kafka hadoop 707 Aug 11 15:09 kafkaServer-gc.log
Let's look at the .err and .out files:
[root@master ~]# cat /var/log/kafka/kafka.err
OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000c5330000, 986513408, 0) failed; error='Cannot allocate memory' (errno=12)
OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00000000c5330000, 986513408, 0) failed; error='Cannot allocate memory' (errno=12)
[root@master ~]# cat /var/log/kafka/kafka.out
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 986513408 bytes for committing reserved memory.
# An error report file with more information is saved as:
# /root/hs_err_pid15875.log
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (mmap) failed to map 986513408 bytes for committing reserved memory.
# An error report file with more information is saved as:
# /root/hs_err_pid16305.log
Ah, that's odd, as I asked for at least 4GB of memory for my VMs. Let's check:
[root@master ~]# cat /proc/meminfo
MemTotal: 1922260 kB
MemFree: 278404 kB
Buffers: 8600 kB
Cached: 43384 kB
Best get some more memory allocated!
The minimum you should normally install BigInsights with, as recommended by the IBM support pages, is 8GB, and this gives you rather an insight into why: at least 2GB of it goes just to running the installed services on the system, even before you start loading the DB and running queries.
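If adding memory to the VM is not immediately possible, one stopgap is to shrink the Kafka JVM heap below what is actually free. Apache Kafka's stock start scripts honor KAFKA_HEAP_OPTS; whether the BigInsights wrapper passes it through is an assumption worth verifying:
# Stopgap sketch: cap the Kafka heap (the failed ~986MB mmap above is
# roughly what a default 1GB heap tries to commit)
export KAFKA_HEAP_OPTS="-Xmx512M -Xms512M"
kafka start
kafka status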
I have a Debian Linux image running on Google Compute. I can successfully get Cassandra working with "sudo cassandra" or "sudo cassandra -f", but as soon as I log off it stops working. And when I try to run it as a service, it simply doesn't say anything and doesn't start either! I installed it using the apt-get package, v2.1.
I've tried sudo service cassandra start. It looks like it's doing something and then quits without any logs.
Please help me run this as a service. I can't even locate where the logs are stored when I run it as a service.
I ran into this issue recently, and as BrianC indicated, it can be an out-of-memory condition. In my case I could successfully start Cassandra with sudo cassandra -f but not with /etc/init.d/cassandra start.
For me, the last log entry in /var/log/cassandra/system.log when starting as a service was:
INFO [main] 2015-04-30 10:58:16,234 CassandraDaemon.java (line 248) Classpath: /etc/cassandra:/usr/share/cassandra/lib/antlr-3.2.jar:/usr/share/cassandra/lib/commons-cli-1.1.jar:/usr/share/cassandra/lib/commons-codec-1.2.jar:/usr/share/cassandra/lib/commons-lang3-3.1.jar:/usr/share/cassandra/lib/compress-lzf-0.8.4.jar:/usr/share/cassandra/lib/concurrentlinkedhashmap-lru-1.3.jar:/usr/share/cassandra/lib/disruptor-3.0.1.jar:/usr/share/cassandra/lib/guava-15.0.jar:/usr/share/cassandra/lib/high-scale-lib-1.1.2.jar:/usr/share/cassandra/lib/jackson-core-asl-1.9.2.jar:/usr/share/cassandra/lib/jackson-mapper-asl-1.9.2.jar:/usr/share/cassandra/lib/jamm-0.2.5.jar:/usr/share/cassandra/lib/jbcrypt-0.3m.jar:/usr/share/cassandra/lib/jline-1.0.jar:/usr/share/cassandra/lib/json-simple-1.1.jar:/usr/share/cassandra/lib/libthrift-0.9.1.jar:/usr/share/cassandra/lib/log4j-1.2.16.jar:/usr/share/cassandra/lib/lz4-1.2.0.jar:/usr/share/cassandra/lib/metrics-core-2.2.0.jar:/usr/share/cassandra/lib/netty-3.6.6.Final.jar:/usr/share/cassandra/lib/reporter-config-2.1.0.jar:/usr/share/cassandra/lib/servlet-api-2.5-20081211.jar:/usr/share/cassandra/lib/slf4j-api-1.7.2.jar:/usr/share/cassandra/lib/slf4j-log4j12-1.7.2.jar:/usr/share/cassandra/lib/snakeyaml-1.11.jar:/usr/share/cassandra/lib/snappy-java-1.0.5.jar:/usr/share/cassandra/lib/snaptree-0.1.jar:/usr/share/cassandra/lib/super-csv-2.1.0.jar:/usr/share/cassandra/lib/thrift-server-0.3.7.jar:/usr/share/cassandra/apache-cassandra-2.0.14.jar:/usr/share/cassandra/apache-cassandra-thrift-2.0.14.jar:/usr/share/cassandra/apache-cassandra.jar:/usr/share/cassandra/stress.jar:/usr/share/java/jna.jar::/usr/share/cassandra/lib/jamm-0.2.5.jar:/usr/share/cassandra/lib/jamm-0.2.5.jar
And nothing afterwards. If it is a memory problem, you should be able to verify this in your syslog. If it contains something like:
Apr 30 10:53:39 dev kernel: [1173246.957818] Out of memory: Kill process 8229 (java) score 132 or sacrifice child
Apr 30 10:53:39 dev kernel: [1173246.957831] Killed process 8229 (java) total-vm:634084kB, anon-rss:286772kB, file-rss:12676kB
Increase your RAM. In my case I increased it to 2GB and it started fine.
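If growing the VM is not an option, the other lever is to cap Cassandra's heap; MAX_HEAP_SIZE and HEAP_NEWSIZE are the stock cassandra-env.sh settings in the 2.x packages, and the values below are illustrative for a ~1GB VM:
# In /etc/cassandra/cassandra-env.sh, uncomment and set:
#   MAX_HEAP_SIZE="512M"
#   HEAP_NEWSIZE="128M"
# then restart the service:
sudo service cassandra start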
ArangoDB shows the following error after a computer reboot:
"FATAL Database upgrade check failed for 'mydatabase'. Please inspect the logs for any errors"
I had reinstalled ArangoDB, and on the first run it was OK, but when I rebooted the computer it didn't start anymore.
The log file contained these messages:
2014-07-18T14:49:46Z [6405] INFO ArangoDB 2.2.0 -- ICU 52.1, V8 3.16.14, OpenSSL 1.0.1 14 Mar 2012
2014-07-18T14:49:46Z [6405] INFO using default language 'en'
2014-07-18T14:49:46Z [6405] INFO loaded database '_system' from '/var/lib/arangodb/databases/database-70153'
2014-07-18T14:49:46Z [6405] INFO loaded database 'mydatabase' from '/var/lib/arangodb/databases/database-60101129'
2014-07-18T14:49:46Z [6405] INFO running WAL recovery
2014-07-18T14:49:46Z [6405] INFO dropping database 'mydatabase', directory '/var/lib/arangodb/databases/database-60101129'
2014-07-18T14:49:46Z [6405] INFO creating database 'mydatabase', directory '/var/lib/arangodb/databases/database-60101129'
2014-07-18T14:49:47Z [6405] INFO WAL recovery finished successfully
2014-07-18T14:49:47Z [6405] INFO using endpoint 'tcp://0.0.0.0:8529' for non-encrypted requests
2014-07-18T14:49:47Z [6405] INFO using default API compatibility: 20200
2014-07-18T14:49:47Z [6405] INFO JavaScript using startup '/usr/share/arangodb/js', modules '/usr/share/arangodb/js/server/modules;/usr/share/arangodb/js/common/modules;/usr/share/arangodb/js/node', actions '/usr/share/arangodb/js/actions', application '/var/lib/arangodb-apps'
2014-07-18T14:49:47Z [6405] FATAL Database upgrade check failed for 'mydatabase'. Please inspect the logs from any errors
Sorry for my bad English.
This is a bug in the start script under Unix. As a workaround, you can edit the file /etc/init.d/arangodb and replace the lines
$DAEMON -c $CONF --uid arangodb --gid arangodb --check-version
RETVAL=$?
by
RETVAL=0
This should solve the problem.
The /etc/init.d/arangodb now has this section to resolve your problem:
if [ "$RETVAL" -eq 0 ]; then
$DAEMON --uid arangodb --gid arangodb --pid-file "$PIDFILE" --temp.path "/var/tmp/arangod" --log.foreground-tty false --supervisor $#
RETVAL=$?
log_end_msg $RETVAL
else
log_failure_msg "database version check failed, maybe you need to run 'upgrade'?"
fi
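Note that if the version check fails because the data files genuinely need an upgrade (for example after a package update), bypassing the check only hides that. The usual fix, sketched here on the assumption that the 2.x tooling is installed as above, is to run the upgrade procedure:
/etc/init.d/arangodb stop
arangod --upgrade
/etc/init.d/arangodb start
# (some versions of the init script also accept '/etc/init.d/arangodb upgrade')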
I have made a JavaFX application which runs fine on Windows and Mac OS, but when I run it on Linux (Fedora) the application crashes the whole system, with the following log.
1) What is the reason for the crash on Linux?
2) What might be the possible solution to this crash?
A fatal error has been detected by the Java Runtime Environment:
SIGSEGV (0xb) at pc=0x00840e58, pid=2114, tid=2694839152
JRE version: Java(TM) SE Runtime Environment (7.0_51-b13) (build 1.7.0_51-b13)
Java VM: Java HotSpot(TM) Client VM (24.51-b03 mixed mode linux-x86)
Problematic frame: C [libc.so.6+0x2fe58] exit+0x38
Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again.
If you would like to submit a bug report, please visit: http://bugreport.sun.com/bugreport/crash.jsp
The crash happened outside the Java Virtual Machine in native code. See problematic frame for where to report the bug.
--------------- T H R E A D ---------------
Current thread (0xa0a8d800): JavaThread "JNativeHook Native Hook" [_thread_in_native, id=2306, stack(0xa01ff000,0xa0a00000)]
--------------- S Y S T E M ---------------
OS: Fedora release 14 (Laughlin)
uname: Linux 2.6.35.6-45.fc14.i686 #1 SMP Mon Oct 18 23:56:17 UTC 2010 i686
libc: glibc 2.12.90 NPTL 2.12.90
rlimit: STACK 8192k, CORE 0k, NPROC 1024, NOFILE 1024, AS infinity
load average: 20.56 6.52 4.06
/proc/meminfo:
MemTotal: 1013996 kB
MemFree: 112652 kB
Buffers: 4224 kB
Cached: 140000 kB
Memory: 4k page, physical 1013996k(112652k free), swap 1535996k(665220k free)
vm_info: Java HotSpot(TM) Client VM (24.51-b03) for linux-x86 JRE (1.7.0_51-b13), built on Dec 18 2013 18:49:34 by "java_re" with gcc 4.3.0 20080428 (Red Hat 4.3.0-8)
time: Mon Feb 10 16:29:44 2014 elapsed time: 15804 seconds
I am not including the whole log because it is too long to post. Please suggest a possible solution based on this exception log.
Please file a bug at https://github.com/kwhat/jnativehook with the entire crash log. Chances are the issue has already been fixed in the 1.2 trunk.
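If the crash log alone is not enough for the report, core dumps can be enabled first, exactly as the log itself suggests; a minimal sketch, with the jar name as a placeholder:
# enable core dumps in the current shell, then relaunch the app
ulimit -c unlimited
java -jar MyJavaFxApp.jar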