Cassandra - DataStax OpsCenter: Error provisioning cluster

I installed Cassandra on my local Mac, together with OpsCenter from DataStax. When I create a new cluster, it gives me the error:
"Error
Error provisioning cluster: Unable to login to some nodes. Please check your user/creds"
though I am very sure the "Node Credentials (sudo)" user name and password are entered correctly. Any ideas?
The server-side logs show:
"2015-06-02 17:00:48+0800 [] INFO: Sleeping before retrying ssh login.
2015-06-02 17:00:53+0800 [] There was a problem verifying an ssh login on 127.0.0.1
Traceback (most recent call last):
Failure: opscenterd.SecureShell.SshFailed: ssh to u'127.0.0.1' failed"
However, when I open a terminal and try ssh username@127.0.0.1, I can log in without a problem.
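To double-check, this is how I verify the login by hand; the -v flag shows which authentication method actually succeeds (username is a placeholder):
ssh -v username@127.0.0.1
On macOS, sshd also has to be enabled (Remote Login); assuming a default setup, this turns it on:
sudo systemsetup -setremotelogin on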

Running Derby as a server on Linux using JDK11

I am at my wits' end!
I have a minimal install of Ubuntu Server 18.04 and OpenJDK 11 (headless).
Downloaded to a local folder are the Java 9+ binaries for Derby (db-derby-10.15.2.0-bin).
Path and environment settings are all correct!
When I start the server with startNetworkServer -h 0.0.0.0, I get an error when doing a simple connect using the ij command-line tool:
ij> connect 'jdbc:derby://localhost:1527/dbname;create=true';
ERROR XJ041: DERBY SQL error: ERRORCODE: 40000, SQLSTATE: XJ041, SQLERRMC: Failed to create database 'dbname', see the next exception for details.::SQLSTATE: XBM01::SQLSTATE: XJ001
The derby.log file makes reference to:
java.security.AccessControlException: access denied ("java.lang.RuntimePermission" "getenv.SOURCE_DATE_EPOCH")
Looking further into this error, I learned that I somehow need a security policy. I found this website that seemed to be the answer to my problems: https://www.javacodegeeks.com/2020/04/apache-derby-database-jvm-security-policy.html
Following these pretty straightforward instructions, I get:
java.security.AccessControlException: access denied
org.apache.derby.shared.common.security.SystemPermission( "engine", "usederbyinternals" )
For the next person who has this strange problem (it seems to happen with some regularity), here's a simple workaround, copied from this FAQ page at Chalmers University of Technology:
Q: When we try to create a database in Derby and the database explorer in NetBeans, we get one or more of the following error(s):
An error occurred while creating the database:
java.sql.NonTransientConnectionException: DERBY SQL error: ERRORCODE: 40000, SQLSTATE: XJ041, SQLERRMC: ...
Caused by: java.security.AccessControlException: access denied ("java.lang.RuntimePermission" "getenv.SOURCE_DATE_EPOCH")
A: This is some kind of misconfiguration in the JVM: a very aggressive security policy that doesn't allow applications to read the SOURCE_DATE_EPOCH environment variable. The solution is to edit ~/.java.policy or [java.home]/lib/security/java.policy and add the following:
grant {
permission java.lang.RuntimePermission "getenv.SOURCE_DATE_EPOCH", "read";
};
If you are on Windows, you can read about where this policy file is supposed to be located here:
https://docs.oracle.com/javase/7/docs/technotes/guides/security/PolicyFiles.html
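Putting the two grants together, here is a minimal sketch of a policy file covering both AccessControlExceptions quoted above, plus one way to launch the network server with it; the server.policy file name and the /opt/derby install path are assumptions, not Derby defaults:
// server.policy -- minimal sketch, not a hardened configuration
grant {
permission java.lang.RuntimePermission "getenv.SOURCE_DATE_EPOCH", "read";
permission org.apache.derby.shared.common.security.SystemPermission "engine", "usederbyinternals";
};
Then start the server with the JVM pointed at that policy file:
java -Djava.security.policy=server.policy -jar /opt/derby/lib/derbyrun.jar server start -h 0.0.0.0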
Apache Derby is a database management system designed for a multi-user environment. When you execute the startNetworkServer -h 0.0.0.0 instruction, you are telling it by default to take certain security into account, which is why it does not let you make an insecure connection such as ij> connect 'jdbc:derby://172.16.17.31:1527/BBDD_server;create=true';
You are connecting without specifying a username and password, so you should either connect specifying username + password, or start the server without any security:
startNetworkServer -h 0.0.0.0 -noSecurityManager
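For the first option, the connection URL can carry the credentials as attributes (the user and password values here are placeholders):
ij> connect 'jdbc:derby://localhost:1527/dbname;create=true;user=someuser;password=somepass';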
More help:
https://db.apache.org/derby/docs/10.4/adminguide/tadminnetservopen.html
https://db.apache.org/derby/docs/10.4/adminguide/tadminnetservbasic.html

OpsCenter not loading. Lifecycle Manager not connecting to the cluster.

My OpsCenter always gets stuck at "Loading OpsCenter...". By the way, this is my first installation, and so far I have not gotten OpsCenter to run.
All three of these run normally:
nodetool status
service dse
service datastax-agent
I can reproduce it in both Google Chrome and Mozilla Firefox, both remotely and running on localhost.
opscenterd.log:
2017-04-12 15:20:15,877 [myclustername] WARN: These nodes reported this message, Nodes: ['10.35.21.207'] Message: HTTP request http://10.35.21.207:61621/connection-status? failed:
An error occurred while connecting: 107: Transport endpoint is not connected. (MainThread)
When using Lifecycle Manager, it sees the cluster name I picked but cannot connect. Here's what the log looks like when I attempt to start managing the unmanaged cluster:
[opscenterd] ERROR: Problem while calling ImportClusterIntoLifecycleManagerController (AgentCommunicationFailure): Cluster Import Failure: Unable to determine the DSE version for the specified cluster. Please verify that the Agents for this cluster are properly communicating with Opscenter.
File "/usr/share/opscenter/lib/py/twisted/internet/defer.py", line 1122, in _inlineCallbacks
result = result.throwExceptionIntoGenerator(g)
File "/usr/share/opscenter/lib/py/twisted/python/failure.py", line 389, in throwExceptionIntoGenerator
return g.throw(self.type, self.value, self.tb)
File "/usr/share/opscenter/jython/Lib/site-packages/opscenterd/WebServer.py", line 2598, in ImportClusterIntoLifecycleManagerController

chef-server-ctl reconfigure / Creating an admin user on Chef server

I am fairly new to Linux (and brand new to Chef) and I have run into an issue when setting up my Chef server. I am trying to create an admin user with the command
sudo chef-server-ctl user-create admin Admin Ladmin admin@example.com examplepass -f admin.pem
but I keep getting this error:
ERROR: Connection refused connecting...
ERROR: Connection refused connecting to https://127.0.0.1/users/, retry 5/5
ERROR: Network Error: Connection refused - Connection refused connecting to https://..., giving up
Check your knife configuration and network settings
I also noticed that when I ran chef-server-ctl I got this output:
[2016-12-21T13:24:59-05:00] ERROR: Running exception handlers
Running handlers complete
[2016-12-21T13:24:59-05:00] ERROR: Exception handlers complete
Chef Client failed. 0 resources updated in 01 seconds
[2016-12-21T13:24:59-05:00] FATAL: Stacktrace dumped to /var/opt/opscode/local-mode-cache/chef-stacktrace.out
[2016-12-21T13:24:59-05:00] FATAL: Please provide the contents of the stacktrace.out file if you file a bug report
[2016-12-21T13:24:59-05:00] FATAL: Chef::Exceptions::CannotDetermineNodeName: Unable to determine node name: configure node_name or configure the system's hostname and fqdn
I read that this error is due to a prerequisite mistake, but I'm uncertain what it means or how to fix it, so any input would be greatly appreciated.
Your server does not have a valid FQDN (aka full host name). You'll have to fix this before installing Chef server.
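A minimal sketch of that fix, assuming the FQDN you want is chef.example.com (substitute your own):
sudo hostnamectl set-hostname chef.example.com
echo "127.0.1.1 chef.example.com chef" | sudo tee -a /etc/hosts
hostname -f
sudo chef-server-ctl reconfigure
hostname -f should print chef.example.com before you rerun the reconfigure; the /etc/hosts entry is only needed if DNS does not already resolve the name.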

Spark standalone mode: failed on connection exception

I am running a Spark (1.2.1) standalone cluster on my virtual machine (Ubuntu 12.04). I can run examples such as als.py and pi.py successfully, but I can't run the wordcount.py example because a connection error occurs.
bin/spark-submit --master spark://192.168.1.211:7077 /examples/src/main/python/wordcount.py ~/Documents/Spark_Examples/wordcount.py
The error message is as below:
15/03/13 22:26:02 INFO BlockManagerMasterActor: Registering block manager a12:45594 with 267.3 MB RAM, BlockManagerId(0, a12, 45594)
15/03/13 22:26:03 INFO Client: Retrying connect to server: a11/192.168.1.211:9000. Already tried 4 time(s).
......
Traceback (most recent call last):
File "/home/spark/spark/examples/src/main/python/wordcount.py", line 32, in <module>
.reduceByKey(add)
File "/home/spark/spark/lib/spark-assembly-1.2.1 hadoop1.0.4.jar/pyspark/rdd.py", line 1349, in reduceByKey
File "/home/spark/spark/lib/spark-assembly-1.2.1-hadoop1.0.4.jar/pyspark/rdd.py", line 1559, in combineByKey
File "/home/spark/spark/lib/spark-assembly-1.2.1-hadoop1.0.4.jar/pyspark/rdd.py", line 1942, in _defaultReducePartitions
File "/home/spark/spark/lib/spark-assembly-1.2.1-hadoop1.0.4.jar/pyspark/rdd.py", line 297, in getNumPartitions
......
py4j.protocol.Py4JJavaError: An error occurred while calling o23.partitions.
java.lang.RuntimeException: java.net.ConnectException: Call to a11/192.168.1.211:9000 failed on connection exception: java.net.ConnectException: Connection refused
......
I am not using YARN or ZooKeeper, and all the virtual machines can connect to each other via SSH without a password. I also set SPARK_LOCAL_IP for the master and workers.
I think the wordcount.py example is accessing HDFS to read the lines of a file (and then count the words).
Something like:
sc.textFile("hdfs://<master-hostname>:9000/path/to/whatever")
Port 9000 is usually used for HDFS.
Please make sure that this file is accessible, or do not use HDFS for that example :).
I hope it helps.
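If HDFS is not actually running, one way to sidestep it is to point the example at a local file using a file:// URI instead; this is a sketch that assumes the file exists at the same path on every worker node (the path is a placeholder):
bin/spark-submit --master spark://192.168.1.211:7077 examples/src/main/python/wordcount.py file:///home/spark/Documents/Spark_Examples/words.txt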

OpsCenter Installation error: DNSLookupError: DNS lookup failed

I downloaded the tar.gz file and I'm trying to install OpsCenter, but I am getting the following error.
ERROR: Trying to download https://opscenter.datastax.com:443/definitions/5.0.1/version.md5 resulted in following error:
Traceback (most recent call last):
File "build/lib/python2.6/site-packages/opscenterd/Definitions.py", line 133, in getNewHash
DNSLookupError: DNS lookup failed: address 'opscenter.datastax.com' not found: [Errno -2] Name or service not known.
Do I need to have internet access to install OpsCenter?
This error should not prevent OpsCenter from working correctly; you just won't have update information (for cases when you're running outdated versions of Cassandra or OpsCenter).
OpsCenter does not require an internet connection.
You should be able to configure OpsCenter not to fetch the updated definition files. Refer to the OpsCenter configuration properties and look for the [definitions] auto_update property, which you can set to False.
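For example, in conf/opscenterd.conf (the exact path depends on how OpsCenter was installed; the section and property names come from the configuration properties mentioned above):
[definitions]
auto_update = False
Restart opscenterd afterwards for the change to take effect.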
