spark.deploy.zookeeper.url
The documentation only covers connections without a ZooKeeper password:
https://spark.apache.org/docs/latest/configuration.html#deploy
If ZooKeeper requires a password, how should Spark HA be set up?
Thank you.
I tried configuring it like this, but got this error:
-Dspark.deploy.zookeeper.url=test:test123@172.28.1.43:2181
2023-02-08 16:16:53,448 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=test:test123@172.28.1.43:2181 sessionTimeout=60000 watcher=org.apache.curator.ConnectionState@8eb0a94
2023-02-08 16:17:03,495 WARN zookeeper.ClientCnxn: Session 0x0 for server test:test123@172.28.1.43:2181, unexpected error, closing socket connection and attempting reconnect
java.lang.IllegalArgumentException: Unable to canonicalize address test:test123@172.28.1.43:2181 because it's not resolvable
at org.apache.zookeeper.SaslServerPrincipal.getServerPrincipal(SaslServerPrincipal.java:65)
at org.apache.zookeeper.SaslServerPrincipal.getServerPrincipal(SaslServerPrincipal.java:41)
at org.apache.zookeeper.ClientCnxn$SendThread.startConnect(ClientCnxn.java:1001)
at org.apache.zookeeper.ClientCnxn$SendThread.run(ClientCnxn.java:1060)
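For reference, the documented format of spark.deploy.zookeeper.url is a plain comma-separated host:port list with no credentials embedded, which is why the "test:test123@" prefix ends up being treated as part of an unresolvable hostname. A minimal sketch of the documented spark-env.sh setting, with zk1 and zk2 as hypothetical host names:

export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=zk1:2181,zk2:2181 -Dspark.deploy.zookeeper.dir=/spark"

Any ZooKeeper password (digest/SASL) would have to be configured on the ZooKeeper client side rather than inside this URL.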
Does anyone know how to fix this?
This is the error the service gives when trying to start:
Process terminated. Fatal error detected: RabbitMQ.Client.Exceptions.BrokerUnreachableException: None of the specified endpoints were reachable
RabbitMQ.Client.Exceptions.OperationInterruptedException: The AMQP operation was interrupted: AMQP close-reason, initiated by Library, code=541, text='Unexpected Exception', cla...
System.Net.Sockets.SocketException (104): Connection reset by peer
RabbitMQ log:
{handshake_error,opening,0, {amqp_error,access_refused, "access to vhost '/hostname' refused for user 'admin2'", 'connection.open'}}
What can be done?
Through the RabbitMQ web interface I made a new vhost and user and gave the user access to the vhost, but it did not help.
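One thing worth double-checking, since the broker log names the vhost '/hostname' and the user 'admin2': creating the vhost and user is not enough, the user also needs permissions on exactly that vhost, and the client's connection settings must request the same vhost name. A hedged sketch using rabbitmqctl with the names taken from the log:

rabbitmqctl add_vhost /hostname
rabbitmqctl set_permissions -p /hostname admin2 ".*" ".*" ".*"

If the client is actually meant to use the default vhost, setting the virtual host to "/" in the connection settings may also be worth trying.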
I am not able to connect to the MongoDB Atlas cluster that I have made. I entered the connection command given after I created the cluster and received the error below.
I am not able to find any solution to this problem. Please help me.
MongoDB shell version v4.2.0
Enter password: Cannot get console mode 6
connecting to: mongodb://cluster0-shard-00-01-jigfx.mongodb.net:27017,cluster0-shard-00-02-jigfx.mongodb.net:27017,cluster0-shard-00-00-jigfx.mongodb.net:27017/test?authSource=admin&compressors=disabled&gssapiServiceName=mongodb&replicaSet=Cluster0-shard-0&ssl=true
2019-09-03T17:07:19.299-0400 I NETWORK [js] Starting new replica set monitor for Cluster0-shard-0/cluster0-shard-00-01-jigfx.mongodb.net:27017,cluster0-shard-00-02-jigfx.mongodb.net:27017,cluster0-shard-00-00-jigfx.mongodb.net:27017
2019-09-03T17:07:19.300-0400 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to cluster0-shard-00-01-jigfx.mongodb.net:27017
2019-09-03T17:07:19.300-0400 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to cluster0-shard-00-02-jigfx.mongodb.net:27017
2019-09-03T17:07:19.300-0400 I CONNPOOL [ReplicaSetMonitor-TaskExecutor] Connecting to cluster0-shard-00-00-jigfx.mongodb.net:27017
2019-09-03T17:07:20.099-0400 I NETWORK [ReplicaSetMonitor-TaskExecutor] Confirmed replica set for Cluster0-shard-0 is Cluster0-shard-0/cluster0-shard-00-00-jigfx.mongodb.net:27017,cluster0-shard-00-01-jigfx.mongodb.net:27017,cluster0-shard-00-02-jigfx.mongodb.net:27017
2019-09-03T17:07:20.719-0400 I NETWORK [js] Marking host cluster0-shard-00-00-jigfx.mongodb.net:27017 as failed :: caused by :: Location40659:can't connect to new replica set master [cluster0-shard-00-00-jigfx.mongodb.net:27017], err: AuthenticationFailed: Missing expected field "pwd"
*** It looks like this is a MongoDB Atlas cluster. Please ensure that your IP whitelist allows connections from your network.
2019-09-03T17:07:21.522-0400 E QUERY [js] Error: Missing expected field "pwd" :
connect@src/mongo/shell/mongo.js:341:17
@(connect):2:6
2019-09-03T17:07:21.524-0400 F - [main] exception: connect failed
2019-09-03T17:07:21.524-0400 E - [main] exiting with code 1
The expected result is a prompt asking me for the password to connect to the cluster, but instead the shell instantly responds with "Cannot get console mode 6".
Try adding --password **** to the end of the command.
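As a rough sketch of that suggestion (the <username> and <password> placeholders are not from the original post), the same connection string can be kept and the credentials passed explicitly, which avoids the interactive prompt that was failing with "Cannot get console mode 6":

mongo "mongodb://cluster0-shard-00-01-jigfx.mongodb.net:27017,cluster0-shard-00-02-jigfx.mongodb.net:27017,cluster0-shard-00-00-jigfx.mongodb.net:27017/test?authSource=admin&compressors=disabled&gssapiServiceName=mongodb&replicaSet=Cluster0-shard-0&ssl=true" --username <username> --password <password>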
I'm trying to submit a Spark job to YARN (without HDFS) in HA mode.
For submitting I'm using org.apache.spark.deploy.SparkSubmit.
When I submit from the machine with the active Resource Manager, it works well. But if I try to submit from the machine with the standby Resource Manager, the job fails with this error:
DEBUG org.apache.hadoop.ipc.Client - Connecting to spark2-node-dev/10.10.10.167:8032
DEBUG org.apache.hadoop.ipc.Client - Connecting to /0.0.0.0:8032
org.apache.hadoop.ipc.Client - Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep
However, when I submit via the command line (spark-submit), it works well from both the active and the standby machine.
What could cause this problem?
P.S. I use the same parameters for both ways of submitting the job (org.apache.spark.deploy.SparkSubmit and the spark-submit command line), and the yarn.resourcemanager.hostname.rm_id properties are defined for all RM hosts.
The problem was the absence of yarn-site.xml from the classpath of the jar that calls SparkSubmit. Invoked this way, SparkSubmit does not take the YARN_CONF_DIR or HADOOP_CONF_DIR environment variables into account, so it cannot see yarn-site.xml.
One solution I found was to put yarn-site.xml on the jar's classpath, as sketched below.
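A rough illustration of that fix (the paths and class name here are placeholders, not the original setup): launch the jar that calls SparkSubmit with the Hadoop configuration directory prepended to the classpath so yarn-site.xml is visible:

java -cp /etc/hadoop/conf:my-submitter.jar com.example.MySubmitter

With yarn-site.xml on the classpath, the YARN client can read the yarn.resourcemanager.ha.* and yarn.resourcemanager.hostname.* properties and fail over to the active Resource Manager instead of falling back to the default 0.0.0.0:8032 address seen in the log.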
I have a 2-node Cassandra cluster with the datastax-agent up and running (one seed node), and nodetool status shows the nodes as healthy.
On a third node I have OpsCenter installed; the UI loads but shows a blank screen, and the log under /var/log complains 'No cassandra connection available for hostlist' with an invalid/unsupported version (log details pasted below). Any help is highly appreciated.
2017-02-25 06:33:06+0000 [CLUSTER_NAME] ERROR: Control connection failed to connect, shutting down Cluster: ('Unable to connect to any servers', {'SEED-IP': })
2017-02-25 06:33:06+0000 [CLUSTER_NAME] WARN: No cassandra connection available for hostlist ['SEED-IP'] . Retrying.
I was using an old version of OpsCenter that did not match the version of DSE. Here is the compatibility map for your reference.
Reference: DataStax docs
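If it helps anyone hitting the same thing: before consulting the compatibility map, you can check the versions actually in play, for example with nodetool version (the Cassandra version bundled with DSE) and dse -v (the DSE version); the OpsCenter version is shown in its web UI. These commands assume a DSE install and are only a suggestion, not part of the original answer.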
I'm trying to set up a Hawkular server on Windows 7. Unfortunately this server depends on a Cassandra DB. I've installed the newest version, and during Hawkular's start-up I got the following errors:
14:57:20,102 ERROR [org.hawkular.alerts.bus.log] (Thread-195 (ActiveMQ-client-global-threads-205387390)) HAWKALERT210009: Error accesing to DefinitionsService. Description: [java.lang.RuntimeException: Cassandra session is null]
14:57:19,843 WARN [org.hawkular.metrics.api.jaxrs.MetricsServiceLifecycle] (metricsservice-lifecycle-thread) HAWKMETRICS200004: [18] Retrying connecting to Cassandra cluster in [2]s...
14:57:19,839 ERROR [org.hawkular.alerts.engine.log] (EE-ManagedExecutorService-default-Thread-10) HAWKALERT220009: Definitions Service error in [Triggers]. Msg: [java.lang.RuntimeException: Cassandra session is null]
14:57:20,354 ERROR [org.hawkular.alerts.bus.log] (Thread-190 (ActiveMQ-client-global-threads-205387390)) HAWKALERT210009: Error accesing to DefinitionsService. Description: [com.datastax.driver.core.exceptions.NoHostAvailableException: All host(s) tried for query failed (tried: /127.0.0.1:9042 (com.datastax.driver.core.exceptions.InvalidQueryException: unconfigured table schema_keyspaces))]
14:57:22,104 INFO [org.hawkular.inventory.impl.tinkerpop] (ServerService Thread Pool -- 111) HAWKINV001000: Using graph provider: org.hawkular.inventory.impl.tinkerpop.provider.TitanProvider
14:57:22,360 INFO [org.hawkular.metrics.api.jaxrs.MetricsServiceLifecycle] (metricsservice-lifecycle-thread) HAWKMETRICS200002: Initializing metrics service
14:57:22,396 WARN [org.hawkular.metrics.api.jaxrs.MetricsServiceLifecycle] (metricsservice-lifecycle-thread) HAWKMETRICS200003: Could not connect to Cassandra cluster - assuming its not up yet: All host(s) tried for query failed (tried: /127.0.0.1:9042 (com.datastax.driver.core.exceptions.InvalidQueryException: unconfigured table schema_keyspaces))
14:57:22,397 WARN [org.hawkular.metrics.api.jaxrs.MetricsServiceLifecycle] (metricsservice-lifecycle-thread) HAWKMETRICS200004: [19] Retrying connecting to Cassandra cluster in [3]s...
14:57:23,102 WARN [org.hawkular.inventory.cdi] (ServerService Thread Pool -- 111) HAWKINV003501: Inventory backend failed to initialize in an attempt 10 of 15 with message: Could not instantiate implementation: com.thinkaurelius.titan.diskstorage.cassandra.thrift.CassandraThriftStoreManager.
14:57:25,398 INFO [org.hawkular.metrics.api.jaxrs.MetricsServiceLifecycle] (metricsservice-lifecycle-thread) HAWKMETRICS200002: Initializing metrics service
14:57:25,438 WARN [org.hawkular.metrics.api.jaxrs.MetricsServiceLifecycle] (metricsservice-lifecycle-thread) HAWKMETRICS200003: Could not connect to Cassandra cluster - assuming its not up yet: All host(s) tried for query failed (tried: /127.0.0.1:9042 (com.datastax.driver.core.exceptions.InvalidQueryException: unconfigured table schema_keyspaces))
The Cassandra DB is online and I can connect to localhost:9160 with the Cassandra CQL shell, but the Hawkular server cannot connect. Have I forgotten something?
From the log, the port Hawkular is trying to use is 9042. Make sure of these settings in your cassandra.yaml:
start_native_transport: true
native_transport_port: 9042
The port you are using (9160) is the Thrift port.
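As a quick check that the native transport is actually listening (the question mentions Windows 7, so this uses Windows tools; it is a generic suggestion, not part of the original answer):

netstat -an | findstr 9042

and, if cqlsh is available, cqlsh 127.0.0.1 9042 should connect once start_native_transport is enabled.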
I can only take a stab in the dark with such limited information.
However, you said you connected to localhost:9160, while the log mentions "127.0.0.1:9042".