Cannot start Cassandra with "bin/cassandra -f"

I have a problem using Cassandra: I can start it with "bin/cassandra", but I cannot start it with "bin/cassandra -f". Does anyone know the reason?
Here is the detailed output:
root@server1:~/cassandra# bin/cassandra -f
INFO 10:51:31,500 JNA not found. Native methods will be disabled.
INFO 10:51:31,740 DiskAccessMode 'auto' determined to be mmap, indexAccessMode is mmap
INFO 10:51:32,043 Deleted /var/lib/cassandra/data/system/LocationInfo-61-Data.db
INFO 10:51:32,044 Deleted /var/lib/cassandra/data/system/LocationInfo-62-Data.db
INFO 10:51:32,052 Deleted /var/lib/cassandra/data/system/LocationInfo-63-Data.db
INFO 10:51:32,053 Deleted /var/lib/cassandra/data/system/LocationInfo-64-Data.db
INFO 10:51:32,063 Sampling index for /var/lib/cassandra/data/system/LocationInfo-65-Data.db
INFO 10:51:32,117 Sampling index for /var/lib/cassandra/data/Keyspace1/Standard2-5-Data.db
INFO 10:51:32,118 Sampling index for /var/lib/cassandra/data/Keyspace1/Standard2-6-Data.db
INFO 10:51:32,120 Sampling index for /var/lib/cassandra/data/Keyspace1/Standard2-7-Data.db
INFO 10:51:32,131 Replaying /var/lib/cassandra/commitlog/CommitLog-1285869561954.log
INFO 10:51:32,143 Finished reading /var/lib/cassandra/commitlog/CommitLog-1285869561954.log
INFO 10:51:32,145 Creating new commitlog segment /var/lib/cassandra/commitlog/CommitLog-1286301092145.log
INFO 10:51:32,153 Standard2 has reached its threshold; switching in a fresh Memtable at CommitLogContext(file='/var/lib/cassandra/commitlog/CommitLog-1286301092145.log', position=121)
INFO 10:51:32,155 Enqueuing flush of Memtable-Standard2#1811560891(29 bytes, 1 operations)
INFO 10:51:32,156 Writing Memtable-Standard2#1811560891(29 bytes, 1 operations)
INFO 10:51:32,200 Completed flushing /var/lib/cassandra/data/Keyspace1/Standard2-8-Data.db
INFO 10:51:32,203 Compacting [org.apache.cassandra.io.SSTableReader(path='/var/lib/cassandra/data/Keyspace1/Standard2-5-Data.db'),org.apache.cassandra.io.SSTableReader(path='/var/lib/cassandra/data/Keyspace1/Standard2-6-Data.db'),org.apache.cassandra.io.SSTableReader(path='/var/lib/cassandra/data/Keyspace1/Standard2-7-Data.db'),org.apache.cassandra.io.SSTableReader(path='/var/lib/cassandra/data/Keyspace1/Standard2-8-Data.db')]
INFO 10:51:32,214 Recovery complete
INFO 10:51:32,214 Log replay complete
INFO 10:51:32,230 Saved Token found: 47408016217042861442279446207060121025
INFO 10:51:32,230 Saved ClusterName found: Test Cluster
INFO 10:51:32,231 Saved partitioner not found. Using org.apache.cassandra.dht.RandomPartitioner
INFO 10:51:32,250 LocationInfo has reached its threshold; switching in a fresh Memtable at CommitLogContext(file='/var/lib/cassandra/commitlog/CommitLog-1286301092145.log', position=345)
INFO 10:51:32,250 Enqueuing flush of Memtable-LocationInfo#1120194637(95 bytes, 2 operations)
INFO 10:51:32,251 Writing Memtable-LocationInfo#1120194637(95 bytes, 2 operations)
INFO 10:51:32,307 Completed flushing /var/lib/cassandra/data/system/LocationInfo-66-Data.db
INFO 10:51:32,316 Starting up server gossip
INFO 10:51:32,329 Compacted to /var/lib/cassandra/data/Keyspace1/Standard2-9-Data.db. 1670/1440 bytes for 6 keys. Time: 125ms.
INFO 10:51:32,366 Binding thrift service to /172.24.0.80:9160
INFO 10:51:32,369 Cassandra starting up...

I can't see any problem here. (-f is short for 'foreground': Cassandra stays attached to the terminal instead of daemonizing, and the output above ends with "Cassandra starting up...", which is what a normal successful start looks like.)
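To see the difference in practice, here is a minimal sketch (the log path assumes the default /var/log/cassandra location; adjust it to your installation):

# Foreground: the process stays attached to the terminal and logs there;
# stop it with Ctrl+C.
bin/cassandra -f

# Background (the default): the process detaches from the terminal;
# follow the log file instead.
bin/cassandra
tail -f /var/log/cassandra/system.log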

Related

How to check from the logs when Cassandra was started/restarted or shut down?

nodetool status shows some nodes as down, and the application is not able to reach the Cassandra nodes. When I connect to the Cassandra nodes and check the logs, there are some errors (in system.log and debug.log).
I don't understand how to check when Cassandra was started/restarted or shut down. Is there any way to check this from the logs? If so, which log, and how?
In the logs folder of your Cassandra installation you should find the system.log file. The default location for the log files is /var/log/cassandra.
When Cassandra starts the output looks like this:
INFO [main] 2018-08-18 19:10:27,162 YamlConfigurationLoader.java:89 - Configuration location: file:/home/20171127/.ccm/3113/node1/conf/cassandra.yaml
INFO [main] 2018-08-18 19:10:27,696 Config.java:495 - Node configuration:[allocate_tokens_for_keyspace=null; authenticator=AllowAllAuthenticator; ...]
INFO [main] 2018-08-18 19:10:27,698 DatabaseDescriptor.java:367 - DiskAccessMode 'auto' determined to be mmap, indexAccessMode is mmap
INFO [main] 2018-08-18 19:10:27,699 DatabaseDescriptor.java:425 - Global memtable on-heap threshold is enabled at 123MB
INFO [main] 2018-08-18 19:10:27,700 DatabaseDescriptor.java:429 - Global memtable off-heap threshold is enabled at 123MB
WARN [main] 2018-08-18 19:10:27,974 DatabaseDescriptor.java:550 - Only 30.922GiB free across all data volumes. Consider adding more capacity to your cluster or removing obsolete snapshots
INFO [main] 2018-08-18 19:10:28,025 RateBasedBackPressure.java:123 - Initialized back-pressure with high ratio: 0.9, factor: 5, flow: FAST, window size: 2000.
INFO [main] 2018-08-18 19:10:28,026 DatabaseDescriptor.java:729 - Back-pressure is disabled with strategy org.apache.cassandra.net.RateBasedBackPressure{high_ratio=0.9, factor=5, flow=FAST}.
INFO [main] 2018-08-18 19:10:28,265 JMXServerUtils.java:246 - Configured JMX server at: service:jmx:rmi://127.0.0.1/jndi/rmi://127.0.0.1:7100/jmxrmi
INFO [main] 2018-08-18 19:10:28,282 CassandraDaemon.java:473 - Hostname: 2VVg5
INFO [main] 2018-08-18 19:10:28,284 CassandraDaemon.java:480 - JVM vendor/version: OpenJDK 64-Bit Server VM/1.8.0_181
INFO [main] 2018-08-18 19:10:28,285 CassandraDaemon.java:481 - Heap size: 495.000MiB/495.000MiB
INFO [main] 2018-08-18 19:10:28,287 CassandraDaemon.java:486 - Code Cache Non-heap memory: init = 2555904(2496K) used = 4178816(4080K) committed = 4194304(4096K) max = 251658240(245760K)
INFO [main] 2018-08-18 19:10:28,289 CassandraDaemon.java:486 - Metaspace Non-heap memory: init = 0(0K) used = 18530200(18095K) committed = 19005440(18560K) max = -1(-1K)
INFO [main] 2018-08-18 19:10:28,289 CassandraDaemon.java:486 - Compressed Class Space Non-heap memory: init = 0(0K) used = 2092504(2043K) committed = 2228224(2176K) max = 1073741824(1048576K)
INFO [main] 2018-08-18 19:10:28,290 CassandraDaemon.java:486 - Par Eden Space Heap memory: init = 41943040(40960K) used = 41943040(40960K) committed = 41943040(40960K) max = 41943040(40960K)
INFO [main] 2018-08-18 19:10:28,291 CassandraDaemon.java:486 - Par Survivor Space Heap memory: init = 5242880(5120K) used = 5242880(5120K) committed = 5242880(5120K) max = 5242880(5120K)
INFO [main] 2018-08-18 19:10:28,293 CassandraDaemon.java:486 - CMS Old Gen Heap memory: init = 471859200(460800K) used = 669336(653K) committed = 471859200(460800K) max = 471859200(460800K)
When Cassandra shuts down properly, the log entries look like this:
INFO [StorageServiceShutdownHook] 2018-08-18 19:28:23,661 Gossiper.java:1559 - Announcing shutdown
INFO [StorageServiceShutdownHook] 2018-08-18 19:28:23,662 StorageService.java:2289 - Node /127.0.0.1 state jump to shutdown
INFO [StorageServiceShutdownHook] 2018-08-18 19:28:25,664 MessagingService.java:992 - Waiting for messaging service to quiesce
INFO [ACCEPT-/127.0.0.1] 2018-08-18 19:28:25,665 MessagingService.java:1346 - MessagingService has terminated the accept() thread
INFO [StorageServiceShutdownHook] 2018-08-18 19:28:25,729 HintsService.java:220 - Paused hints dispatch
Depending on how Cassandra was stopped, it is possible that there are no log entries showing the shutdown at all.
Other interesting files that you could look into are debug.log and gc.log.
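If you only want the start and stop events, a simple grep over system.log works; this is a sketch that uses markers visible in the excerpts above and assumes the default /var/log/cassandra location:

# Startup lines come from CassandraDaemon; clean shutdowns announce themselves.
grep -E 'CassandraDaemon|Announcing shutdown|state jump to shutdown' /var/log/cassandra/system.log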

First query to Cassandra tables through Thrift server takes too long

I am trying to query a Cassandra table through the Spark Thrift Server. I have set up my Spark cluster with one master and one worker on the same node.
I am starting the Thrift server with the following command, without any custom configuration:
$SPARK_HOME/sbin/start-thriftserver.sh --packages com.datastax.spark:spark-cassandra-connector_2.11:2.0.2 --conf spark.cassandra.connection.host=127.0.0.1 --master spark://<spark-master>:7077
I have created the following table in Cassandra, inserted no more than 10 records into it, and configured it in the Hive metastore:
CREATE TABLE IF NOT EXISTS places_for_research (
    research_id uuid,
    tenant_id uuid,
    country text,
    place_id uuid,
    PRIMARY KEY ((tenant_id, research_id), country, place_id)
);
Now when I query this table from beeline, the first execution takes around 19 seconds; subsequent executions take about half a second.
The following is the query I execute from beeline; it returns 2 records:
select * from places_for_research where tenant_id='340276cb-389b-4f57-a2cf-6ff5ec3e4d91' and research_id='95dafbe7-78d0-4509-9553-899dfaa7b858';
I am wondering what makes the first request take so long. How can I optimise the performance of the first request?
The following are the Thrift server logs, for reference:
17/11/03 20:12:50 INFO SparkExecuteStatementOperation: Running query 'select * from places_for_research where tenant_id='340276cb-389b-4f57-a2cf-6ff5ec3e4d91' and research_id='95dafbe7-78d0-4509-9553-899dfaa7b858'' with 9d9a5c7c-2766-48c3-ab58-348b461b6577
17/11/03 20:12:50 INFO SparkSqlParser: Parsing command: select * from places_for_research where tenant_id='340276cb-389b-4f57-a2cf-6ff5ec3e4d91' and research_id='95dafbe7-78d0-4509-9553-899dfaa7b858'
17/11/03 20:12:51 INFO HiveMetaStore: 2: get_table : db=default tbl=places_for_research
17/11/03 20:12:51 INFO audit: ugi=anonymous ip=unknown-ip-addr cmd=get_table : db=default tbl=places_for_research
17/11/03 20:12:51 INFO HiveMetaStore: 2: Opening raw store with implemenation class:org.apache.hadoop.hive.metastore.ObjectStore
17/11/03 20:12:51 INFO ObjectStore: ObjectStore, initialize called
17/11/03 20:12:51 INFO Query: Reading in results for query "org.datanucleus.store.rdbms.query.SQLQuery#0" since the connection used is closing
17/11/03 20:12:51 INFO MetaStoreDirectSql: Using direct SQL, underlying DB is DERBY
17/11/03 20:12:51 INFO ObjectStore: Initialized ObjectStore
17/11/03 20:12:52 INFO CatalystSqlParser: Parsing command: array<string>
17/11/03 20:12:52 INFO HiveMetaStore: 2: get_table : db=default tbl=places_for_research
17/11/03 20:12:52 INFO audit: ugi=anonymous ip=unknown-ip-addr cmd=get_table : db=default tbl=places_for_research
17/11/03 20:12:52 INFO CatalystSqlParser: Parsing command: array<string>
17/11/03 20:12:53 INFO ClockFactory: Using native clock to generate timestamps.
17/11/03 20:12:53 WARN NettyUtil: Found Netty's native epoll transport, but not running on linux-based operating system. Using NIO instead.
17/11/03 20:12:54 INFO Cluster: New Cassandra host /127.0.0.1:9042 added
17/11/03 20:12:54 INFO CassandraConnector: Connected to Cassandra cluster: Test Cluster
17/11/03 20:12:55 INFO CassandraSourceRelation: Input Predicates: [IsNotNull(tenant_id), IsNotNull(research_id), EqualTo(tenant_id,340276cb-389b-4f57-a2cf-6ff5ec3e4d91), EqualTo(research_id,95dafbe7-78d0-4509-9553-899dfaa7b858)]
17/11/03 20:12:55 INFO CassandraSourceRelation: Input Predicates: [IsNotNull(tenant_id), IsNotNull(research_id), EqualTo(tenant_id,340276cb-389b-4f57-a2cf-6ff5ec3e4d91), EqualTo(research_id,95dafbe7-78d0-4509-9553-899dfaa7b858)]
17/11/03 20:12:57 INFO CodeGenerator: Code generated in 652.925772 ms
17/11/03 20:12:57 INFO SparkContext: Starting job: run at AccessController.java:0
17/11/03 20:12:57 INFO DAGScheduler: Got job 0 (run at AccessController.java:0) with 1 output partitions
17/11/03 20:12:57 INFO DAGScheduler: Final stage: ResultStage 0 (run at AccessController.java:0)
17/11/03 20:12:57 INFO DAGScheduler: Parents of final stage: List()
17/11/03 20:12:57 INFO DAGScheduler: Missing parents: List()
17/11/03 20:12:57 INFO DAGScheduler: Submitting ResultStage 0 (MapPartitionsRDD[6] at run at AccessController.java:0), which has no missing parents
17/11/03 20:12:58 INFO MemoryStore: Block broadcast_0 stored as values in memory (estimated size 12.8 KB, free 366.3 MB)
17/11/03 20:12:58 INFO MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 6.3 KB, free 366.3 MB)
17/11/03 20:12:58 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 192.168.1.110:57001 (size: 6.3 KB, free: 366.3 MB)
17/11/03 20:12:58 INFO SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:996
17/11/03 20:12:58 INFO DAGScheduler: Submitting 1 missing tasks from ResultStage 0 (MapPartitionsRDD[6] at run at AccessController.java:0)
17/11/03 20:12:58 INFO TaskSchedulerImpl: Adding task set 0.0 with 1 tasks
17/11/03 20:12:58 INFO TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, 192.168.1.110, executor 0, partition 0, ANY, 8403 bytes)
17/11/03 20:13:00 INFO BlockManagerInfo: Added broadcast_0_piece0 in memory on 192.168.1.110:57005 (size: 6.3 KB, free: 366.3 MB)
17/11/03 20:13:05 INFO CassandraConnector: Disconnected from Cassandra cluster: Test Cluster
17/11/03 20:13:09 INFO TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 11709 ms on 192.168.1.110 (executor 0) (1/1)
17/11/03 20:13:09 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks have all completed, from pool
17/11/03 20:13:10 INFO DAGScheduler: ResultStage 0 (run at AccessController.java:0) finished in 11.734 s
17/11/03 20:13:10 INFO DAGScheduler: Job 0 finished: run at AccessController.java:0, took 12.189787 s
17/11/03 20:13:10 INFO CodeGenerator: Code generated in 63.249603 ms
17/11/03 20:13:10 INFO SparkExecuteStatementOperation: Result Schema: StructType(StructField(tenant_id,StringType,true), StructField(research_id,StringType,true), StructField(country,StringType,true), StructField(place_id,StringType,true))
Thanks.
The Spark Thrift Server is lazy, which means it doesn't actually start any machinery for running queries until the first query is launched. The delay you see is the actual starting up and requesting of remote resources. This will always take some non-zero amount of time, but you can hide it by having your Thrift server immediately queried with a dummy request after it starts up.
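As a sketch of that warm-up idea (the JDBC URL and port are the beeline defaults; adjust them to your deployment), you could fire a throwaway query against the table right after starting the server:

$SPARK_HOME/sbin/start-thriftserver.sh --packages com.datastax.spark:spark-cassandra-connector_2.11:2.0.2 \
  --conf spark.cassandra.connection.host=127.0.0.1 --master spark://<spark-master>:7077
# Dummy request: forces executor startup, code generation, and the
# Cassandra connection before the first real client arrives.
$SPARK_HOME/bin/beeline -u jdbc:hive2://localhost:10000 \
  -e 'SELECT * FROM places_for_research LIMIT 1;'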

Cassandra stopped working after nodetool repair

After running the "nodetool repair" command, the Cassandra node went down and did not start again.
INFO [main] 2016-10-19 12:44:50,244 ColumnFamilyStore.java:405 - Initializing system_schema.aggregates
INFO [main] 2016-10-19 12:44:50,247 ColumnFamilyStore.java:405 - Initializing system_schema.indexes
INFO [main] 2016-10-19 12:44:50,248 ViewManager.java:139 - Not submitting build tasks for views in keyspace system_schema as storage service is not initialized
Cassandra version 3.7
I turned the node back on and it's fine now, but it took too long to start (more than 30 minutes).
INFO [main] 2016-10-19 15:32:48,348 ColumnFamilyStore.java:405 - Initializing system_schema.indexes
INFO [main] 2016-10-19 15:32:48,354 ViewManager.java:139 - Not submitting build tasks for views in keyspace system_schema as storage service is not initialized
INFO [main] 2016-10-19 16:07:36,529 ColumnFamilyStore.java:405 - Initializing system_distributed.parent_repair_history
INFO [main] 2016-10-19 16:07:36,546 ColumnFamilyStore.java:405 - Initializing system_distributed.repair_history
Now I'm trying to figure out why it is so slow.

Unable to start Cassandra

I am trying to start Cassandra, so I ran:
sudo ./cassandra
and I came across this error:
Error: Exception thrown by the agent : java.net.MalformedURLException: Local host name unknown: java.net.UnknownHostException: node24.nise.local: node24.nise.local
So I did what was suggested in the related question "problem on starting cassandra" and changed the /etc/hosts file.
Then the startup process got stuck after this:
INFO 22:27:14,227 CFS(Keyspace='system', ColumnFamily='local') liveRatio is 33.904761904761905 (just-counted was 33.904761904761905). calculation took 110ms for 3 cells
INFO 22:27:14,260 Enqueuing flush of Memtable-local#726006040(84/840 serialized/live bytes, 4 ops)
INFO 22:27:14,262 Writing Memtable-local#726006040(84/2848 serialized/live bytes, 4 ops)
INFO 22:27:14,280 Completed flushing /var/lib/cassandra/data/system/local/system-local-jb-50-Data.db (116 bytes) for commitlog position ReplayPosition(segmentId=1401859631027, position=500327)
WARN 22:27:14,327 setting live ratio to maximum of 64.0 instead of Infinity
INFO 22:27:14,327 Enqueuing flush of Memtable-local#1689909512(10100/101000 serialized/live bytes, 259 ops)
INFO 22:27:14,328 CFS(Keyspace='system', ColumnFamily='local') liveRatio is 64.0 (just-counted was 64.0). calculation took 0ms for 0 cells
INFO 22:27:14,350 Writing Memtable-local#1689909512(10100/101000 serialized/live bytes, 259 ops)
INFO 22:27:14,386 Completed flushing /var/lib/cassandra/data/system/local/system-local-jb-51-Data.db (5278 bytes) for commitlog position ReplayPosition(segmentId=1401859631027, position=512328)
INFO 22:27:14,493 Node localhost/127.0.0.1 state jump to normal
No other line was printed after this. Can anyone help me understand exactly why this happened?
I was getting the same error. You just need to run this command in the terminal:
hostname localhost
(or the hostname of the machine where Cassandra is running). I believe this will solve your problem.
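A sketch of the same fix (node24.nise.local comes from the error above; substitute your own hostname and address):

# Set a resolvable hostname, or map the existing one in /etc/hosts,
# then start Cassandra again.
sudo hostname localhost
echo '127.0.0.1 node24.nise.local' | sudo tee -a /etc/hosts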
I think that after this statement:
INFO 22:27:14,493 Node localhost/127.0.0.1 state jump to normal
your server is running normally. To verify, run jps and check whether CassandraDaemon is running.
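For example (assuming jps from the JDK is on your PATH):

# A CassandraDaemon entry in the output means the server process is up.
jps -l | grep -i cassandra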

Hadoop Namenode startup failure due to InconsistentFSStateException

I am setting up a Hadoop (v1.1.1) cluster on Windows Azure. I am trying to launch the namenode process by using:
service hadoop-namenode start
However, I am consistently getting the following error, which appears to be associated with the storage directory being wiped when the VM reboots. I moved this directory out so it would not be deleted each time, but the error still occurs. Any help would be gratefully received.
STARTUP_MSG: Starting NameNode
STARTUP_MSG: host = master/10.77.42.61
STARTUP_MSG: args = []
STARTUP_MSG: version = 1.1.1
STARTUP_MSG: build = https://svn.apache.org/repos/asf/hadoop/common/branches/branch-1.1 -r 1411108; compiled by 'hortonfo' on Mon Nov 19 10:44:13 UTC 2012
************************************************************/
2012-12-13 09:38:54,102 INFO org.apache.hadoop.metrics2.impl.MetricsConfig: loaded properties from hadoop-metrics2.properties
2012-12-13 09:38:54,222 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source MetricsSystem,sub=Stats registered.
2012-12-13 09:38:54,230 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: Scheduled snapshot period at 60 second(s).
2012-12-13 09:38:54,230 INFO org.apache.hadoop.metrics2.impl.MetricsSystemImpl: NameNode metrics system started
2012-12-13 09:38:54,675 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source ugi registered.
2012-12-13 09:38:54,714 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source jvm registered.
2012-12-13 09:38:54,720 INFO org.apache.hadoop.metrics2.impl.MetricsSourceAdapter: MBean for source NameNode registered.
2012-12-13 09:38:54,804 INFO org.apache.hadoop.hdfs.util.GSet: VM type = 64-bit
2012-12-13 09:38:54,810 INFO org.apache.hadoop.hdfs.util.GSet: 2% max memory = 2.475 MB
2012-12-13 09:38:54,810 INFO org.apache.hadoop.hdfs.util.GSet: capacity = 2^18 = 262144 entries
2012-12-13 09:38:54,810 INFO org.apache.hadoop.hdfs.util.GSet: recommended=262144, actual=262144
2012-12-13 09:38:54,890 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: fsOwner=hdfs
2012-12-13 09:38:54,895 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: supergroup=supergroup
2012-12-13 09:38:54,895 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isPermissionEnabled=true
2012-12-13 09:38:54,915 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: dfs.block.invalidate.limit=100
2012-12-13 09:38:54,915 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: isAccessTokenEnabled=false accessKeyUpdateInterval=0 min(s), accessTokenLifetime=0 min(s)
2012-12-13 09:38:55,429 INFO org.apache.hadoop.hdfs.server.namenode.FSNamesystem: Registered FSNamesystemStateMBean and NameNodeMXBean
2012-12-13 09:38:55,465 INFO org.apache.hadoop.hdfs.server.namenode.NameNode: Caching file names occuring more than 10 times
2012-12-13 09:38:55,471 INFO org.apache.hadoop.hdfs.server.common.Storage: Cannot access storage directory /hadoop/name
2012-12-13 09:38:55,474 ERROR org.apache.hadoop.hdfs.server.namenode.FSNamesystem: FSNamesystem initialization failed.
org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /hadoop/name is in an inconsistent state: storage directory does not exist or is not accessible.
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:303)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.initialize(FSNamesystem.java:411)
at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.<init>(FSNamesystem.java:379)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:277)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:529)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1403)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1412)
2012-12-13 09:38:55,476 ERROR org.apache.hadoop.hdfs.server.namenode.NameNode: org.apache.hadoop.hdfs.server.common.InconsistentFSStateException: Directory /hadoop/name is in an inconsistent state: storage directory does not exist or is not accessible.
at org.apache.hadoop.hdfs.server.namenode.FSImage.recoverTransitionRead(FSImage.java:303)
at org.apache.hadoop.hdfs.server.namenode.FSDirectory.loadFSImage(FSDirectory.java:100)
Change the permissions of the directory that you have specified as the value of the "dfs.name.dir" property in your hdfs-site.xml file to 755, and also change the owner of this directory to the current user. By the way, were you able to do a successful format?
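A minimal sketch of those steps, assuming dfs.name.dir points at /hadoop/name (as in the log above) and the NameNode runs as the hdfs user (the log shows fsOwner=hdfs):

sudo mkdir -p /hadoop/name
sudo chown -R hdfs /hadoop/name
sudo chmod 755 /hadoop/name
# If the directory was wiped, it must be formatted before the NameNode
# will start. WARNING: formatting erases existing HDFS metadata.
sudo -u hdfs hadoop namenode -format
service hadoop-namenode start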
