I have configured and initialized Accumulo as described in the Accumulo User Manual.
In conf/accumulo-site.xml I set trace.user to accumulo_tracer, and I also created a system user with the same name. Additionally, I stored the password in the same file under the trace.token.property.password property.
After ./bin/start-all.sh, everything starts up fine, including the Accumulo UI, but the following error is displayed in logs/tracer_localhost.log.
2015-05-28 10:58:46,229 [watcher.MonitorLog4jWatcher] INFO : Enabled log-forwarding
2015-05-28 10:58:46,254 [server.Accumulo] INFO : tracer starting
2015-05-28 10:58:46,254 [server.Accumulo] INFO : Instance 48f5f9cf-f08d-4736-b504-335b044a2d88
2015-05-28 10:58:46,255 [server.Accumulo] INFO : Data Version 6
2015-05-28 10:58:46,255 [server.Accumulo] INFO : Attempting to talk to zookeeper
2015-05-28 10:58:46,430 [server.Accumulo] INFO : ZooKeeper connected and initialized, attempting to talk to HDFS
2015-05-28 10:58:46,430 [server.Accumulo] INFO : Connected to HDFS
2015-05-28 10:58:46,432 [watcher.MonitorLog4jWatcher] INFO : Changing monitor log4j address to localhost:4560
2015-05-28 10:58:46,433 [watcher.MonitorLog4jWatcher] INFO : Enabled log-forwarding
2015-05-28 10:58:46,510 [watcher.MonitorLog4jWatcher] INFO : Set watch for Monitor Log4j watcher
2015-05-28 10:58:46,638 [tracer.TraceServer] INFO : Waiting to checking/create the trace table.
org.apache.accumulo.core.client.AccumuloSecurityException: Error BAD_CREDENTIALS for user accumulo_tracer - Username or Password is Invalid
at org.apache.accumulo.core.client.impl.ServerClient.execute(ServerClient.java:65)
at org.apache.accumulo.core.client.impl.ConnectorImpl.<init>(ConnectorImpl.java:66)
at org.apache.accumulo.server.client.HdfsZooInstance.getConnector(HdfsZooInstance.java:156)
at org.apache.accumulo.tracer.TraceServer.<init>(TraceServer.java:201)
at org.apache.accumulo.tracer.TraceServer.main(TraceServer.java:303)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.accumulo.start.Main$1.run(Main.java:141)
at java.lang.Thread.run(Thread.java:745)
Caused by: ThriftSecurityException(user:accumulo_tracer, code:BAD_CREDENTIALS)
at org.apache.accumulo.core.client.impl.thrift.ClientService$authenticate_result$authenticate_resultStandardScheme.read(ClientService.java:15613)
at org.apache.accumulo.core.client.impl.thrift.ClientService$authenticate_result$authenticate_resultStandardScheme.read(ClientService.java:15591)
at org.apache.accumulo.core.client.impl.thrift.ClientService$authenticate_result.read(ClientService.java:15535)
at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:78)
at org.apache.accumulo.core.client.impl.thrift.ClientService$Client.recv_authenticate(ClientService.java:500)
at org.apache.accumulo.core.client.impl.thrift.ClientService$Client.authenticate(ClientService.java:486)
at org.apache.accumulo.core.client.impl.ConnectorImpl$1.execute(ConnectorImpl.java:69)
at org.apache.accumulo.core.client.impl.ConnectorImpl$1.execute(ConnectorImpl.java:66)
at org.apache.accumulo.core.client.impl.ServerClient.executeRaw(ServerClient.java:100)
at org.apache.accumulo.core.client.impl.ServerClient.execute(ServerClient.java:63)
... 10 more
2015-05-28 10:58:47,469 [server.Accumulo] WARN : System swappiness setting is greater than ten (60) which can cause time-sensitive operations to be delayed. Accumulo is time sensitive because it needs to maintain distributed lock agreement.
Any help or guidance on where I might have missed something would be very helpful!
Thanks in advance!
You don't need to create a system (OS) user with that name; what you need is an Accumulo user with that name. You can create one in the Accumulo shell as the (Accumulo) root user: $ACCUMULO_HOME/bin/accumulo shell -u root. You'll also need to grant that user permission to create tables; see help grant in the shell to learn how to set the CREATE_TABLE system permission for that user.
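For example, a minimal shell session might look like this (a sketch only: the instance name in the prompt is a placeholder, and the password you enter must match trace.token.property.password in accumulo-site.xml):
$ $ACCUMULO_HOME/bin/accumulo shell -u root
root@myinstance> createuser accumulo_tracer
Enter new password for 'accumulo_tracer': ********
Please confirm new password for 'accumulo_tracer': ********
root@myinstance> grant System.CREATE_TABLE -s -u accumulo_tracer
root@myinstance> userpermissions -u accumulo_tracer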
So here's what solved my issue: a fresh init!
During the init phase, I gave a password for the root user, which is the same password I was supposed to set in conf/accumulo-site.xml:
<property>
  <name>instance.secret</name>
  <value>your_own_secret_password</value>
  <description>A secret unique to a given instance that all servers must know in order to communicate with one another.
    Change it before initialization. To change it later use ./bin/accumulo org.apache.accumulo.server.util.ChangeSecret --old [oldpasswd] --new [newpasswd], and then update this file.
  </description>
</property>
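For anyone repeating this, a rough sketch of the fresh-init sequence (assumptions: the instance data lives under the default /accumulo directory in HDFS, and you are fine losing all existing Accumulo data; you may also need to clear the old instance from ZooKeeper or pick a new instance name):
./bin/stop-all.sh
hadoop fs -rm -r /accumulo        # destructive: removes the old instance data
./bin/accumulo init               # prompts for instance name and root password
./bin/start-all.sh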
After that, everything just worked like a charm. Hope that helped. Post any queries and I'd be happy to help.
Related
We are using a Confluent Community edition setup for Kafka. We currently have a requirement to configure ACLs around the cluster, so we have configured the ZooKeeper and broker nodes such that clients require SASL_PLAINTEXT authentication (username/password) to publish/subscribe to the cluster. This works perfectly without Schema Registry; however, Schema Registry fails to initialize, throwing the exception below, even though we have configured it to use a SASL_PLAINTEXT connection to the broker/ZooKeeper nodes. Is there anything I'm missing? Please help.
Please note we are using the allow.everyone.if.no.acl.found=true flag and currently we don't have any ACLs defined, so I don't think we need to set up any ACLs for the _schemas topic, which Schema Registry uses to initialize.
[2019-12-17 00:33:23,844] ERROR Error starting the schema registry (io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication:64)
io.confluent.kafka.schemaregistry.exceptions.SchemaRegistryInitializationException: Error initializing kafka store while initializing schema registry
at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.init(KafkaSchemaRegistry.java:212)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.initSchemaRegistry(SchemaRegistryRestApplication.java:62)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.setupResources(SchemaRegistryRestApplication.java:73)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryRestApplication.setupResources(SchemaRegistryRestApplication.java:40)
at io.confluent.rest.Application.createServer(Application.java:201)
at io.confluent.kafka.schemaregistry.rest.SchemaRegistryMain.main(SchemaRegistryMain.java:42)
Caused by: io.confluent.kafka.schemaregistry.storage.exceptions.StoreInitializationException: Timed out trying to create or validate schema topic configuration
at io.confluent.kafka.schemaregistry.storage.KafkaStore.createOrVerifySchemaTopic(KafkaStore.java:172)
at io.confluent.kafka.schemaregistry.storage.KafkaStore.init(KafkaStore.java:114)
at io.confluent.kafka.schemaregistry.storage.KafkaSchemaRegistry.init(KafkaSchemaRegistry.java:210)
... 5 more
Caused by: java.util.concurrent.TimeoutException
at org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:108)
at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:274)
at io.confluent.kafka.schemaregistry.storage.KafkaStore.createOrVerifySchemaTopic(KafkaStore.java:165)
... 7 more
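For context, the kind of Schema Registry configuration being described looks roughly like the following (a sketch with placeholder hostnames; the JAAS file path is an assumption, and the credentials in it must match a user the brokers accept):
# schema-registry.properties (illustrative)
kafkastore.bootstrap.servers=SASL_PLAINTEXT://broker1:9092,SASL_PLAINTEXT://broker2:9092
kafkastore.security.protocol=SASL_PLAINTEXT
kafkastore.sasl.mechanism=PLAIN
kafkastore.topic=_schemas
# The JAAS login config is passed to the JVM, e.g.:
# export SCHEMA_REGISTRY_OPTS="-Djava.security.auth.login.config=/etc/schema-registry/schema_registry.jaas"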
Regarding Kafka-ZooKeeper security using DIGEST-MD5 authentication, I am trying to rotate the credentials/passwords in both the server (ZooKeeper) and client (Kafka) JAAS config files.
We have a cluster of 3 ZooKeeper nodes and 3 Kafka broker nodes with the JAAS configuration files below.
kafka.conf
Client {
org.apache.zookeeper.server.auth.DigestLoginModule required
username="super"
password="password";
};
zookeeper.conf
Server {
org.apache.zookeeper.server.auth.DigestLoginModule required
user_super="password";
};
To rotate, we do a rolling restart of the server (ZooKeeper) instances after updating the credential (password). During the rolling restart, after updating the same credential/password for the super user on the client (Kafka) instances one at a time, we notice the following:
[2019-06-15 17:17:38,929] INFO [ZooKeeperClient] Waiting until connected. (kafka.zookeeper.ZooKeeperClient)
[2019-06-15 17:17:38,929] INFO [ZooKeeperClient] Connected. (kafka.zookeeper.ZooKeeperClient)
These INFO-level entries in the server logs eventually result in an unclean shutdown and restart of the broker, which impacts writes and reads for longer than expected. I have tried commenting out requireClientAuthScheme=sasl in ZooKeeper's zoo.cfg (https://cwiki.apache.org/confluence/display/ZOOKEEPER/Client-Server+mutual+authentication) to allow any client to authenticate to ZooKeeper, but with no success.
As an alternative approach, I also tried to update the credential/password in the JAAS config file dynamically using sasl.jaas.config, and I get the same exception documented in this JIRA (reference: https://issues.apache.org/jira/browse/KAFKA-8010).
Does anyone have any suggestions? Thanks in advance.
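One avenue that may avoid the window where brokers and ZooKeeper disagree on the credential (a sketch, not verified against this exact setup): the DigestLoginModule Server section accepts multiple user_<name> entries, so you can add a second user with the new password, roll ZooKeeper, repoint the brokers' Client section at the new username/password, roll the brokers, and only then remove the old entry. The name user_super2 below is a made-up placeholder:
Server {
org.apache.zookeeper.server.auth.DigestLoginModule required
user_super="old_password"
user_super2="new_password";
};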
There are two approaches to controlling logging: one via log4j.properties and the other by controlling it programmatically. I have tried both:
Via log4j.properties file:
# disable logging for spark libraries
log4j.additivity.org=false
log4j.additivity.org.apache=false
#log4j.logger.org.apache=ERROR, NOAPPENDER
log4j.logger.org=ERROR, NOAPPENDER
and programmatically:
org.apache.log4j.Logger logger = LogManager.getLogger(pkgName);
logger.setLevel(Level.ERROR);
I was able to suppress other logs, but there are a few INFO logs that are still getting printed:
INFO metastore: Connected to metastore.
INFO Hive: Registering function addfunc ca.nextpathway.hive.UDFToDate
and
INFO ContextHandler: Started o.s.j.s.ServletContextHandler@17f9344b{/static,null,AVAILABLE}
I want to suppress all INFO logs except for a few specific packages, but I am nowhere near that. If anyone knows what the problem could be here, please let me know.
Try using the below. This should work.
Logger.getLogger("org.apache.hadoop.hive").setLevel(Level.ERROR);
The code at https://github.com/apache/hive/blob/master/ql/src/java/org/apache/hadoop/hive/ql/metadata/Hive.java has a bug: it creates the log as below:
Logger LOG = LoggerFactory.getLogger("hive.ql.metadata.Hive");
So the regular filter with org.apache.hadoop.hive does not work. Instead, you have to use "hive.ql.metadata.Hive". For example:
org.apache.log4j.Logger.getLogger("hive.ql.metadata.Hive").setLevel(Level.WARN);
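If you'd rather keep this in log4j.properties than in code, the equivalent entry (same logger name) would be:
log4j.logger.hive.ql.metadata.Hive=WARN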
I am trying to connect to Cassandra from Pig.
Cassandra is installed on a different cluster, so I need to connect to it remotely from Pig.
I am referring to the following example link.
I am getting an error like:
Failed to parse: Can not retrieve schema from loader org.apache.cassandra.hadoop.pig.CqlStorage@1216d9bf
at org.apache.pig.parser.QueryParserDriver.parse(QueryParserDriver.java:198)
at org.apache.pig.PigServer$Graph.parseQuery(PigServer.java:1688)
at org.apache.pig.PigServer$Graph.access$000(PigServer.java:1421)
at org.apache.pig.PigServer.parseAndBuild(PigServer.java:354)
at org.apache.pig.PigServer.executeBatch(PigServer.java:379)
at org.apache.pig.PigServer.executeBatch(PigServer.java:365)
at org.apache.pig.tools.grunt.GruntParser.executeBatch(GruntParser.java:140)
at org.apache.pig.tools.grunt.GruntParser.processDump(GruntParser.java:769)
at org.apache.pig.tools.pigscript.parser.PigScriptParser.parse(PigScriptParser.java:372)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:198)
at org.apache.pig.tools.grunt.GruntParser.parseStopOnError(GruntParser.java:173)
at org.apache.pig.tools.grunt.Grunt.exec(Grunt.java:84)
at org.apache.pig.Main.run(Main.java:484)
at org.apache.pig.Main.main(Main.java:158)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:606)
at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
My Pig script is as follows:
A = LOAD 'cql://userName:password@mykeyspace/mycolumnfamily'
USING org.apache.cassandra.hadoop.pig.CqlStorage()
AS (user_id:long, fname:chararray, last_update_date:chararray, lname:chararray);
DUMP A;
Please let me know where we have to provide the IP of the system where Cassandra is installed.
Something I found by searching on the internet is http://www.datastax.com/dev/blog/cassandra-and-pig-tutorial
Querying Cassandra Using Pig
Starting the pig client through Datastax Enterprise.
This requires no setup beyond having started the cluster in Analytics mode.
(14:52:17)[~/BlogPosts/CassPig_Libraries]dse pig
2013-08-26 14:52:27,166 [main] INFO org.apache.pig.Main - Logging error messages to: /Users/russellspitzer/BlogPosts/CassPig_Libraries/pig_1377553947163.log
2013-08-26 14:52:27,421 [main] INFO org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - Connecting to hadoop file system at: cfs://127.0.0.1/
2013-08-26 14:52:27.488 java[64588:1503] Unable to load realm info from SCDynamicStore
2013-08-26 14:52:28,348 [main] INFO org.apache.pig.backend.hadoop.executionengine.HExecutionEngine - Connecting to map-reduce job tracker at: 127.0.0.1:8012
grunt>
Next we construct our pig commands, starting with loading our data from Cassandra. We’ll be using the cql:// url and the CqlStorage() connector. The format of the command is basically load ‘cql://keyspace/table’. More info on CQL3 and Pig.
grunt> libdata = load 'cql://libdata/libout' USING CqlStorage();
grunt> DESCRIBE libdata;
Set the following as environment variables (uppercase, underscored), or as Hadoop configuration variables (lowercase, dotted):
* PIG_INITIAL_ADDRESS or cassandra.thrift.address : initial address to connect to
* PIG_RPC_PORT or cassandra.thrift.port : the port thrift is listening on
* PIG_PARTITIONER or cassandra.partitioner.class : cluster partitioner
For example, against a local node with the default settings, you'd use:
export PIG_INITIAL_ADDRESS=localhost
export PIG_RPC_PORT=9160
export PIG_PARTITIONER=org.apache.cassandra.dht.Murmur3Partitioner
These properties can be overridden with the following if you use different clusters for input and output:
* PIG_INPUT_INITIAL_ADDRESS : initial address to connect to for reading
* PIG_INPUT_RPC_PORT : the port thrift is listening on for reading
* PIG_INPUT_PARTITIONER : cluster partitioner for reading
* PIG_OUTPUT_INITIAL_ADDRESS : initial address to connect to for writing
* PIG_OUTPUT_RPC_PORT : the port thrift is listening on for writing
* PIG_OUTPUT_PARTITIONER : cluster partitioner for writing
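To tie this back to the original question of where the Cassandra IP goes: it goes in PIG_INITIAL_ADDRESS (or cassandra.thrift.address). A sketch for a remote cluster, with a placeholder address (the partitioner must match whatever the remote cluster actually uses):
export PIG_INITIAL_ADDRESS=10.0.0.12    # placeholder: IP of a node in the remote Cassandra cluster
export PIG_RPC_PORT=9160
export PIG_PARTITIONER=org.apache.cassandra.dht.Murmur3Partitioner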
For more reference, see the URL below:
https://github.com/Stratio/stratio-cassandra/tree/master/examples/pig
Hope this helps!
I was trying to create a table inside Accumulo using the createtable command and found that it was getting stuck. I waited for around 20 minutes before cancelling the createtable command.
createtable test_table
I have one master and 2 tablet servers, and I found that my master and one of the tablet servers had died. I could not telnet to port 9997 of that particular tablet server, and I could not even telnet to port 29999 (master.port.client in accumulo-site.xml). When I looked at the tserver logs of the dead server, I saw the following entries.
2016-05-10 02:12:07,456 [zookeeper.DistributedWorkQueue] INFO : Got unexpected zookeeper event: None for /accumulo/be4f66be-1508-4314-9bff-888b56d9b0ce/recovery
2016-05-10 02:12:23,883 [zookeeper.ZooCache] WARN : Saw (possibly) transient exception communicating with ZooKeeper, will retry
org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /accumulo/be4f66be-1508-4314-9bff-888b56d9b0ce/tables
at org.apache.zookeeper.KeeperException.create(KeeperException.java:127)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.getChildren(ZooKeeper.java:1472)
at org.apache.accumulo.fate.zookeeper.ZooCache$1.run(ZooCache.java:210)
at org.apache.accumulo.fate.zookeeper.ZooCache.retry(ZooCache.java:162)
at org.apache.accumulo.fate.zookeeper.ZooCache.getChildren(ZooCache.java:221)
at org.apache.accumulo.core.client.impl.Tables.exists(Tables.java:142)
at org.apache.accumulo.server.tabletserver.LargestFirstMemoryManager.tableExists(LargestFirstMemoryManager.java:149)
at org.apache.accumulo.server.tabletserver.LargestFirstMemoryManager.getMemoryManagementActions(LargestFirstMemoryManager.java:175)
at org.apache.accumulo.tserver.TabletServerResourceManager$MemoryManagementFramework.manageMemory(TabletServerResourceManager.java:408)
at org.apache.accumulo.tserver.TabletServerResourceManager$MemoryManagementFramework.access$400(TabletServerResourceManager.java:318)
at org.apache.accumulo.tserver.TabletServerResourceManager$MemoryManagementFramework$2.run(TabletServerResourceManager.java:346)
at org.apache.accumulo.fate.util.LoggingRunnable.run(LoggingRunnable.java:35)
at java.lang.Thread.run(Thread.java:745)
2016-05-10 02:12:23,884 [zookeeper.ZooCache] WARN : Saw (possibly) transient exception communicating with ZooKeeper, will retry
org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /accumulo/be4f66be-1508-4314-9bff-888b56d9b0ce/tables/!0/conf/table.classpath.context
at org.apache.zookeeper.KeeperException.create(KeeperException.java:127)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1045)
at org.apache.accumulo.fate.zookeeper.ZooCache$2.run(ZooCache.java:264)
at org.apache.accumulo.fate.zookeeper.ZooCache.retry(ZooCache.java:162)
at org.apache.accumulo.fate.zookeeper.ZooCache.get(ZooCache.java:289)
at org.apache.accumulo.fate.zookeeper.ZooCache.get(ZooCache.java:238)
at org.apache.accumulo.server.conf.ZooCachePropertyAccessor.get(ZooCachePropertyAccessor.java:117)
at org.apache.accumulo.server.conf.ZooCachePropertyAccessor.get(ZooCachePropertyAccessor.java:103)
at org.apache.accumulo.server.conf.TableConfiguration.get(TableConfiguration.java:99)
at org.apache.accumulo.tserver.constraints.ConstraintChecker.classLoaderChanged(ConstraintChecker.java:93)
at org.apache.accumulo.tserver.tablet.Tablet.checkConstraints(Tablet.java:1225)
at org.apache.accumulo.tserver.TabletServer$8.run(TabletServer.java:2848)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.runAndReset(FutureTask.java:308)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.access$301(ScheduledThreadPoolExecutor.java:180)
at java.util.concurrent.ScheduledThreadPoolExecutor$ScheduledFutureTask.run(ScheduledThreadPoolExecutor.java:294)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
2016-05-10 02:12:23,887 [zookeeper.ZooReader] WARN : Saw (possibly) transient exception communicating with ZooKeeper
org.apache.zookeeper.KeeperException$SessionExpiredException: KeeperErrorCode = Session expired for /accumulo/be4f66be-1508-4314-9bff-888b56d9b0ce/tservers/accumulo.tablet.2:9997
at org.apache.zookeeper.KeeperException.create(KeeperException.java:127)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51)
at org.apache.zookeeper.ZooKeeper.exists(ZooKeeper.java:1045)
at org.apache.accumulo.fate.zookeeper.ZooReader.getStatus(ZooReader.java:132)
at org.apache.accumulo.fate.zookeeper.ZooLock.process(ZooLock.java:383)
at org.apache.zookeeper.ClientCnxn$EventThread.processEvent(ClientCnxn.java:522)
at org.apache.zookeeper.ClientCnxn$EventThread.run(ClientCnxn.java:498)
2016-05-10 02:12:24,252 [watcher.MonitorLog4jWatcher] INFO : Changing monitor log4j address to accumulo.master:4560
2016-05-10 02:12:24,252 [watcher.MonitorLog4jWatcher] INFO : Enabled log-forwarding
Even the master server's logs had the same stack trace. My ZooKeeper is running.
At first, I thought it was a disk issue, maybe there was no space, but that was not the case. I ran fsck on the accumulo instance.volumes and it returned the HEALTHY status.
Does anyone know what exactly happened and, if possible, how to avoid it?
EDIT: Even the tracer_accumulo.master.log had the same stack trace.
ZooKeeper session expirations occur when a thread inside the ZooKeeper client does not get run within the necessary time (by default, 30s) to maintain the session, which is in-memory state shared between the ZooKeeper client and server. There is no single explanation for this, but there are several common culprits:
JVM garbage collection pauses in the client. Accumulo should log a warning if it experienced a pause.
Lack of CPU time. If the host itself is overburdened, Accumulo might not have the cycles to run all of the tasks it needs to in a timely manner.
Lack of sockets/filehandles: Accumulo could be trying to connect to ZooKeeper but be unable to open new connections.
ZooKeeper might be rate-limiting connections as a denial-of-service prevention. Check the zookeeper logs for errors about dropping/denying new connections from a specific IP, and, if you see these errors, consider increasing maxClientCnxns in zoo.cfg.
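On that last point, the relevant knob is a one-line change in zoo.cfg (the default for maxClientCnxns is 60; the value below is just an example):
# zoo.cfg: allow more concurrent connections from a single client IP
maxClientCnxns=120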