I have been trying to query a Neo4j database using Python. The code works for the simplest of queries, but not for all of them. I don't get any exception, and I can't work out the root cause from the log files.
My code looks like this:
from neo4j.v1 import GraphDatabase, basic_auth
graph_url = "bolt://localhost:7687"
graph_username = "neo4j"
graph_password = "neo4j"
driver = GraphDatabase.driver(graph_url, auth=basic_auth(graph_username, graph_password))
session = driver.session()
query_simple = "CREATE (entity:n {name: 'john doe'})"
session.run(query_simple)
query = "LOAD CSV WITH HEADERS FROM 'http://data.neo4j.com/northwind/products.csv' AS row CREATE (n:Product) SET n = row n.unitPrice = toFloat(row.unitPrice), n.unitsInStock = toInt(row.unitsInStock), n.unitsOnOrder = toInt(row.unitsOnOrder), n.reorderLevel = toInt(row.reorderLevel), n.discontinued = (row.discontinued <> '0')"
session.run(query)
The simple query runs fine, but the other query doesn't. It's a sample query that works in the Neo4j browser on my localhost.
In the debug log files I am getting these two kinds of error logs:
2016-07-06 22:14:27.062+0000 ERROR [o.n.b.v.t.BoltProtocolV1] Failed to write response to driver
java.lang.NullPointerException
    at org.neo4j.bolt.v1.transport.ChunkedOutput.ensure(ChunkedOutput.java:156)
    at org.neo4j.bolt.v1.transport.ChunkedOutput.writeShort(ChunkedOutput.java:90)
    at org.neo4j.bolt.v1.packstream.PackStream$Packer.packStructHeader(PackStream.java:304)
    at org.neo4j.bolt.v1.messaging.PackStreamMessageFormatV1$Writer.handleFailureMessage(PackStreamMessageFormatV1.java:154)
    at org.neo4j.bolt.v1.messaging.msgprocess.MessageProcessingCallback.publishError(MessageProcessingCallback.java:48)
    at org.neo4j.bolt.v1.messaging.msgprocess.MessageProcessingCallback.completed(MessageProcessingCallback.java:98)
    at org.neo4j.bolt.v1.messaging.msgprocess.MessageProcessingCallback.completed(MessageProcessingCallback.java:31)
    at org.neo4j.bolt.v1.runtime.internal.SessionStateMachine.after(SessionStateMachine.java:823)
    at org.neo4j.bolt.v1.runtime.internal.SessionStateMachine.run(SessionStateMachine.java:655)
    at org.neo4j.bolt.v1.runtime.internal.concurrent.SessionWorkerFacade.lambda$run$3(SessionWorkerFacade.java:68)
    at org.neo4j.bolt.v1.runtime.internal.concurrent.SessionWorker.execute(SessionWorker.java:116)
    at org.neo4j.bolt.v1.runtime.internal.concurrent.SessionWorker.run(SessionWorker.java:77)
    at java.lang.Thread.run(Thread.java:745)
and
2016-07-06 20:52:20.588+0000 ERROR [o.n.b.t.SocketTransportHandler] Fatal error occurred when handling a client connection: Connection reset by peer
java.io.IOException: Connection reset by peer
    at sun.nio.ch.FileDispatcherImpl.read0(Native Method)
    at sun.nio.ch.SocketDispatcher.read(SocketDispatcher.java:39)
    at sun.nio.ch.IOUtil.readIntoNativeBuffer(IOUtil.java:223)
    at sun.nio.ch.IOUtil.read(IOUtil.java:192)
    at sun.nio.ch.SocketChannelImpl.read(SocketChannelImpl.java:380)
    at io.netty.buffer.PooledUnsafeDirectByteBuf.setBytes(PooledUnsafeDirectByteBuf.java:311)
    at io.netty.buffer.AbstractByteBuf.writeBytes(AbstractByteBuf.java:881)
    at io.netty.channel.socket.nio.NioSocketChannel.doReadBytes(NioSocketChannel.java:242)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:119)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:511)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysOptimized(NioEventLoop.java:468)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:382)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:354)
    at io.netty.util.concurrent.SingleThreadEventExecutor$2.run(SingleThreadEventExecutor.java:111)
    at java.lang.Thread.run(Thread.java:745)
I am using the Community Edition of Neo4j on my system, with Python 3.5.
Thanks in advance :)
Have you noted the section below in the file conf/neo4j.conf?
# Determines if Cypher will allow using file URLs when loading data using
# `LOAD CSV`. Setting this value to `false` will cause Neo4j to fail `LOAD CSV`
# clauses that load data from the file system.
#dbms.security.allow_csv_import_from_file_urls=true
Even after uncommenting the line above and restarting Neo4j, you may get another error, related to what's explained at: https://neo4j.com/developer/kb/explanation-of-error-load-csv-error-of-couldnt-load-the-external-resource/
You could also try downloading the CSV file, saving it into the import directory, and then using:
LOAD CSV WITH HEADERS FROM 'file:///products.csv' AS row ...
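Since session.run in the 1.x Python driver may not surface a server-side failure until the result is consumed or the session is closed, it can also help to consume the result explicitly, so that the real Cypher error is raised in Python instead of only appearing in the server's debug log. A minimal sketch (assuming products.csv has been copied into the import directory as suggested above):
from neo4j.v1 import GraphDatabase, basic_auth

driver = GraphDatabase.driver("bolt://localhost:7687",
                              auth=basic_auth("neo4j", "neo4j"))
session = driver.session()
query = """LOAD CSV WITH HEADERS FROM 'file:///products.csv' AS row
CREATE (n:Product)
SET n = row,
    n.unitPrice = toFloat(row.unitPrice),
    n.unitsInStock = toInt(row.unitsInStock),
    n.unitsOnOrder = toInt(row.unitsOnOrder),
    n.reorderLevel = toInt(row.reorderLevel),
    n.discontinued = (row.discontinued <> '0')"""
try:
    result = session.run(query)
    result.consume()  # force execution here so any server error is raised as an exception
finally:
    session.close()  # closing the session also flushes any pending failure to the client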
I am new to EMR and big data.
We have an EMR step that was working fine until last month; currently I am getting the below error.
--- Logging error ---
Traceback (most recent call last):
  File "/mnt/yarn/usercache/hadoop/appcache/application_1660495066893_0006/container_1660495066893_0006_01_000001/src.zip/src/source/Data_Extraction.py", line 59, in process_job_description
    df_job_desc = spark.read.schema(schema_jd).option('multiline',"true").json(self.filepath)
  File "/mnt/yarn/usercache/hadoop/appcache/application_1660495066893_0006/container_1660495066893_0006_01_000001/pyspark.zip/pyspark/sql/readwriter.py", line 274, in json
    return self._df(self._jreader.json(self._spark._sc._jvm.PythonUtils.toSeq(path)))
  File "/mnt/yarn/usercache/hadoop/appcache/application_1660495066893_0006/container_1660495066893_0006_01_000001/py4j-0.10.7-src.zip/py4j/java_gateway.py", line 1257, in __call__
    answer, self.gateway_client, self.target_id, self.name)
  File "/mnt/yarn/usercache/hadoop/appcache/application_1660495066893_0006/container_1660495066893_0006_01_000001/pyspark.zip/pyspark/sql/utils.py", line 63, in deco
    return f(*a, **kw)
  File "/mnt/yarn/usercache/hadoop/appcache/application_1660495066893_0006/container_1660495066893_0006_01_000001/py4j-0.10.7-src.zip/py4j/protocol.py", line 328, in get_return_value
    format(target_id, ".", name), value)
py4j.protocol.Py4JJavaError: An error occurred while calling o115.json.
: java.io.IOException: com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.SdkClientException: Unable to execute HTTP request: Remote host terminated the handshake
at com.amazon.ws.emr.hadoop.fs.s3n.Jets3tNativeFileSystemStore.list(Jets3tNativeFileSystemStore.java:421)
at com.amazon.ws.emr.hadoop.fs.s3n.S3NativeFileSystem.listStatus(S3NativeFileSystem.java:654)
at com.amazon.ws.emr.hadoop.fs.s3n.S3NativeFileSystem.listStatus(S3NativeFileSystem.java:625)
at com.amazon.ws.emr.hadoop.fs.EmrFileSystem.listStatus(EmrFileSystem.java:473)
    ...
These JSON files are present in S3. I downloaded some of the files to try to reproduce the issue locally; with a smaller set of data everything works fine, so I am unable to reproduce the EMR failure locally.
I also checked the application details in EMR for this step.
It reports an undefined status, with the below details:
Details: org.apache.spark.sql.Dataset.collectToPython(Dataset.scala:3285)
sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
java.lang.reflect.Method.invoke(Method.java:498)
py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
py4j.Gateway.invoke(Gateway.java:282)
py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
py4j.commands.CallCommand.execute(CallCommand.java:79)
py4j.GatewayConnection.run(GatewayConnection.java:238)
java.lang.Thread.run(Thread.java:750)
Spark session creation:
from pyspark.sql import SparkSession
from pyspark.conf import SparkConf

spark_builder = (
    SparkSession
    .builder
    .config(conf=SparkConf())
    .appName("test")
)
spark = spark_builder.getOrCreate()
I am not sure what suddenly went wrong with this step; please help.
Your error indicates a failed security protocol: various results from googling this message point to the remote side throttling or rejecting incoming TLS connections, which is why a backoff strategy is relevant here.
You can further try the suggestions for retrying with an exponential backoff strategy - here - and limiting your requests by utilising the AMID.
Additionally, you can check your DNS quotas to verify that nothing there is limiting you or exhausting your quota.
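A rough sketch of such a retry loop with exponential backoff (the read call is taken from your traceback; the attempt count and delays are illustrative, not prescriptive):
import random
import time

max_attempts = 5
for attempt in range(max_attempts):
    try:
        df_job_desc = spark.read.schema(schema_jd).option('multiline', "true").json(self.filepath)
        break  # success, stop retrying
    except Exception:
        if attempt == max_attempts - 1:
            raise  # give up after the final attempt
        # exponential backoff with jitter: ~1s, 2s, 4s, 8s plus up to 1s of noise
        time.sleep(2 ** attempt + random.random())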
Further, add your application environment details to check whether an outdated version might be causing this:
EMR Release version
Spark Versions
AWS SDK Version
AMI (Amazon Machine Image) versions
Java & JVM Details
Hadoop Details
The recommended environment would be AMI 2.x, EMR 5.3x, and the compatible SDKs for the same (preferably AWSS3JavaClient 1.11x).
More info about EMR releases can be found here
Additionally, please provide a clear snippet of how exactly you are reading your JSON files from S3: iteratively (one after the other), in bulk, or in batches; the sketch below illustrates the bulk vs. iterative distinction.
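A minimal illustration of that distinction (the bucket name and paths are hypothetical):
# bulk: pass every path to a single read, so Spark plans one job over all files
paths = ["s3://my-bucket/data/a.json", "s3://my-bucket/data/b.json"]  # hypothetical paths
df_bulk = spark.read.schema(schema_jd).option('multiline', "true").json(paths)

# iterative: one read per file, unioned afterwards
from functools import reduce
dfs = [spark.read.schema(schema_jd).option('multiline', "true").json(p) for p in paths]
df_iter = reduce(lambda a, b: a.union(b), dfs)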
References used -
https://github.com/aws/aws-sdk-java/issues/2269
javax.net.ssl.SSLHandshakeException: Remote host closed connection during handshake during web service communication
https://github.com/aws/aws-sdk-java/issues/1405
From your error message (...SdkClientException: Unable to execute HTTP request: Remote host terminated the handshake), it seems you've got a security protocol that is not accepted by the host, or the connection was closed on the service side before the SDK was able to perform the handshake. You should add a try/except block with some delay between retries to handle those errors:
import time

errors = 0
while errors < 5:
    try:
        df_job_desc = spark.read.schema(schema_jd).option('multiline', "true").json(self.filepath)
        break  # success: leave the loop instead of resetting the counter, which would retry forever
    except Exception:
        errors += 1
        time.sleep(1)  # small delay before the next attempt
I'm using Ubuntu 16 and I downloaded and installed the latest version of Android Studio (file android-studio-ide-181.5056338-linux.zip).
I'm running Android Studio behind a corporate proxy. I unzipped the file, launched studio.sh, and configured the proxy by following these links: https://developer.android.com/studio/intro/studio-config#proxy and https://www.jetbrains.com/help/idea/2018.1/settings-http-proxy.html
Note that I also configured my proxy authentication (both username and password) in the IDE settings.
When I check the connection with a plain HTTP URL (e.g. http://www.google.it) everything works fine; the check is OK. With a secure URL (e.g. https://www.google.it) I get a 407 error (authentication required).
Then I saved the settings and closed the IDE. Note that I haven't created any project yet; I just installed the IDE.
Now, every time I start the IDE, I see the following error in idea.log:
2018-10-24 10:21:40,985 [e-1136-b06] INFO - CompilerWorkspaceConfiguration - Available processors: 4
2018-10-24 10:21:48,630 [d thread 2] INFO - ateSettings.impl.UpdateChecker - Status: 407
com.intellij.util.io.HttpRequests$HttpStatusException: Status: 407. Status=407, Url=https://dl.google.com/android/studio/patches/updates.xml?uid=a40d2545-8f35-41af-bfbd-4bb898d2dc21&os=Linux+4.4.0-138-generic&build=AI-181.5540.7.32.5056338
at com.intellij.util.io.HttpRequests.openConnection(HttpRequests.java:512)
at com.intellij.util.io.HttpRequests.access$300(HttpRequests.java:49)
at com.intellij.util.io.HttpRequests$RequestImpl.getConnection(HttpRequests.java:278)
at com.intellij.util.io.HttpRequests$RequestImpl.getInputStream(HttpRequests.java:287)
at com.intellij.util.io.HttpRequests$RequestImpl.getReader(HttpRequests.java:305)
at com.intellij.util.io.HttpRequests$RequestImpl.getReader(HttpRequests.java:298)
at com.intellij.openapi.updateSettings.impl.UpdateChecker$checkPlatformUpdate$1.process(UpdateChecker.kt:176)
at com.intellij.openapi.updateSettings.impl.UpdateChecker$checkPlatformUpdate$1.process(UpdateChecker.kt:54)
at com.intellij.util.io.HttpRequests.lambda$doProcess$0(HttpRequests.java:403)
at com.intellij.util.net.ssl.CertificateManager.runWithUntrustedCertificateStrategy(CertificateManager.java:335)
at com.intellij.util.io.HttpRequests.doProcess(HttpRequests.java:403)
at com.intellij.util.io.HttpRequests.process(HttpRequests.java:383)
at com.intellij.util.io.HttpRequests.access$100(HttpRequests.java:49)
at com.intellij.util.io.HttpRequests$RequestBuilderImpl.connect(HttpRequests.java:252)
at com.intellij.openapi.updateSettings.impl.UpdateChecker.checkPlatformUpdate(UpdateChecker.kt:174)
at com.intellij.openapi.updateSettings.impl.UpdateChecker.doUpdateAndShowResult(UpdateChecker.kt:120)
at com.intellij.openapi.updateSettings.impl.UpdateChecker.access$doUpdateAndShowResult(UpdateChecker.kt:54)
at com.intellij.openapi.updateSettings.impl.UpdateChecker$updateAndShowResult$2.run(UpdateChecker.kt:97)
at com.intellij.openapi.progress.impl.CoreProgressManager$TaskRunnable.run(CoreProgressManager.java:750)
at com.intellij.openapi.progress.impl.CoreProgressManager$5.run(CoreProgressManager.java:434)
at com.intellij.openapi.progress.impl.CoreProgressManager.lambda$runProcess$1(CoreProgressManager.java:157)
at com.intellij.openapi.progress.impl.CoreProgressManager.registerIndicatorAndRun(CoreProgressManager.java:580)
at com.intellij.openapi.progress.impl.CoreProgressManager.executeProcessUnderProgress(CoreProgressManager.java:525)
at com.intellij.openapi.progress.impl.ProgressManagerImpl.executeProcessUnderProgress(ProgressManagerImpl.java:85)
at com.intellij.openapi.progress.impl.CoreProgressManager.runProcess(CoreProgressManager.java:144)
at com.intellij.openapi.application.impl.ApplicationImpl.lambda$null$10(ApplicationImpl.java:565)
at com.intellij.openapi.application.impl.ApplicationImpl$1.run(ApplicationImpl.java:305)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
2018-10-24 10:21:48,631 [d thread 2] INFO - ateSettings.impl.UpdateChecker - Connection failed (Status: 407). Please check network connection and try again.
So it seems that my proxy settings are not used during the update check. Note that I also tried plain HTTP connections, but the error remains.
How can I solve this issue?
Thank you
Angelo
Try changing your proxy settings as described in the official guide below, then test your connection again.
https://developer.android.com/studio/intro/studio-config#proxy
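If the IDE proxy settings alone don't fix the update check, note that recent JDKs disable Basic authentication for HTTPS tunneling by default, which matches this symptom (plain HTTP works, HTTPS fails with 407 behind an authenticated proxy). Assuming your proxy uses Basic authentication, a possible workaround is to re-enable it by adding this JVM option to bin/studio64.vmoptions (the exact file name and path depend on your installation), then restarting the IDE:
-Djdk.http.auth.tunneling.disabledSchemes=""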
I am trying to add a node to the cluster, but adding the new node fails with a broken pipe: Cassandra fails within 2 minutes of starting. I removed the node from the ring, and adding it back fails as well.
OS info: 4.4.0-59-generic #80-Ubuntu SMP x86_64 x86_64 x86_64 GNU/Linux.
This is the error I get on the node that I am trying to bootstrap (Cassandra version 2.2.7). I am getting a broken pipe exception:
ERROR [STREAM-OUT-/123.120.56.71] 2017-04-10 23:46:15,410 StreamSession.java:532 - Stream #cbb7a150-1e47-11e7-a556-a98ec456f4de Streaming error occurred
org.apache.cassandra.io.FSReadError: java.io.IOException: Broken pipe
at org.apache.cassandra.io.util.ChannelProxy.transferTo(ChannelProxy.java:144) ~[apache-cassandra-2.2.7.jar:2.2.7]
at org.apache.cassandra.streaming.compress.CompressedStreamWriter$1.apply(CompressedStreamWriter.java:91) ~[apache-cassandra-2.2.7.jar:2.2.7]
at org.apache.cassandra.streaming.compress.CompressedStreamWriter$1.apply(CompressedStreamWriter.java:88) ~[apache-cassandra-2.2.7.jar:2.2.7]
at org.apache.cassandra.io.util.BufferedDataOutputStreamPlus.applyToChannel(BufferedDataOutputStreamPlus.java:297) ~[apache-cassandra-2.2.7.jar:2.2.7]
at org.apache.cassandra.streaming.compress.CompressedStreamWriter.write(CompressedStreamWriter.java:87) ~[apache-cassandra-2.2.7.jar:2.2.7]
at org.apache.cassandra.streaming.messages.OutgoingFileMessage.serialize(OutgoingFileMessage.java:90) ~[apache-cassandra-2.2.7.jar:2.2.7]
at org.apache.cassandra.streaming.messages.OutgoingFileMessage$1.serialize(OutgoingFileMessage.java:48) ~[apache-cassandra-2.2.7.jar:2.2.7]
at org.apache.cassandra.streaming.messages.OutgoingFileMessage$1.serialize(OutgoingFileMessage.java:40) ~[apache-cassandra-2.2.7.jar:2.2.7]
at org.apache.cassandra.streaming.messages.StreamMessage.serialize(StreamMessage.java:47) ~[apache-cassandra-2.2.7.jar:2.2.7]
at org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.sendMessage(ConnectionHandler.java:389) ~[apache-cassandra-2.2.7.jar:2.2.7]
at org.apache.cassandra.streaming.ConnectionHandler$OutgoingMessageHandler.run(ConnectionHandler.java:361) ~[apache-cassandra-2.2.7.jar:2.2.7]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_101]
Caused by: java.io.IOException: Broken pipe
at sun.nio.ch.FileChannelImpl.transferTo0(Native Method) ~[na:1.8.0_101]
at sun.nio.ch.FileChannelImpl.transferToDirectlyInternal(FileChannelImpl.java:428) ~[na:1.8.0_101]
at sun.nio.ch.FileChannelImpl.transferToDirectly(FileChannelImpl.java:493) ~[na:1.8.0_101]
at sun.nio.ch.FileChannelImpl.transferTo(FileChannelImpl.java:608) ~[na:1.8.0_101]
at org.apache.cassandra.io.util.ChannelProxy.transferTo(ChannelProxy.java:140) ~[apache-cassandra-2.2.7.jar:2.2.7]
... 11 common frames omitted
INFO [STREAM-OUT-/123.120.56.71] 2017-04-10 23:46:15,424 StreamResultFuture.java:183 - Stream #cbb7a150-1e47-11e7-a556-a98ec456f4de Session with /123.120.56.71 is complete
WARN [STREAM-OUT-/123.120.56.71] 2017-04-10 23:46:15,425 StreamResultFuture.java:210 - Stream #cbb7a150-1e47-11e7-a556-a98ec456f4de Stream failed
This can be due to corrupted data, a wrong SSL configuration, schema disagreement, or network failures.
It looks like you have corrupted data or a schema disagreement, so try the following:
1) Remove all the data from your data and commitlog directories, then try to start the node.
2) If that doesn't help, try to start with auto_bootstrap: false in cassandra.yaml (see the line below). After the node starts and is up, run nodetool rebuild.
If it still fails, please attach all the errors here.
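For reference, the relevant line in cassandra.yaml for step 2 would look like this (set it back to true, or remove it, once the node has joined and the rebuild has finished):
auto_bootstrap: false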
I tried to add a new list to the Arabic plugin gazetteer.
I followed these steps:
create a new file "tags.lst" in the directory "GATE_Developer_8.1\plugins\Lang_Arabic\resources\gazetteer\"
append the "lists.def" file with the line: "tags.lst:tags::arabic"
When launching the GATE software, a window pops up with the following message:
Resource could not be created!
gate.creole.ResourceInstantiationException:
gate.util.GateRuntimeException: Error loading list: tags.lst:
java.io.IOException: The system cannot find the path specified.
Here is the full exception:
gate.creole.ResourceInstantiationException: gate.util.GateRuntimeException: Error loading list: tags.lst: java.io.IOException: The system cannot find the path specified
at gate.creole.gazetteer.LinearDefinition.load(LinearDefinition.java:281)
at gate.creole.gazetteer.DefaultGazetteer.init(DefaultGazetteer.java:119)
at gate.Factory.createResource(Factory.java:432)
at gate.gui.NewResourceDialog$4.run(NewResourceDialog.java:257)
at java.lang.Thread.run(Thread.java:745)
Caused by: gate.util.GateRuntimeException: Error loading list: tags.lst: java.io.IOException: The system cannot find the path specified
at gate.creole.gazetteer.LinearDefinition.add(LinearDefinition.java:527)
at gate.creole.gazetteer.LinearDefinition.load(LinearDefinition.java:276)
... 4 more
Caused by: gate.creole.ResourceInstantiationException: java.io.IOException: The system cannot find the path specified
at gate.creole.gazetteer.LinearDefinition.loadSingleList(LinearDefinition.java:199)
at gate.creole.gazetteer.LinearDefinition.loadSingleList(LinearDefinition.java:158)
at gate.creole.gazetteer.LinearDefinition.add(LinearDefinition.java:520)
... 5 more
Caused by: java.io.IOException: The system cannot find the path specified
at java.io.WinNTFileSystem.createFileExclusively(Native Method)
at java.io.File.createNewFile(File.java:1012)
at gate.creole.gazetteer.LinearDefinition.loadSingleList(LinearDefinition.java:188)
... 7 more
I would appreciate any help.
The problem was due to two major issues:
The file was not saved with UTF-8 encoding; this was resolved by using an online converter: http://www.motobit.com/util/charset-codepage-conversion.asp
The file contained special characters; these were removed with the following replaceAll regular expression [#|"|:]:
line = line.replaceAll("[#|\"|:]", " ");
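For anyone hitting the same issue, both fixes can also be scripted locally instead of using the online converter; here is a minimal Python sketch (the source encoding cp1256 is an assumption, use whatever encoding the file was actually saved in):
import re

# read the list file in its original (non-UTF-8) encoding -- assumed here to be cp1256
with open("tags.lst", "r", encoding="cp1256") as f:
    lines = f.readlines()

# strip the offending characters and rewrite the file as UTF-8
with open("tags.lst", "w", encoding="utf-8") as f:
    for line in lines:
        f.write(re.sub(r'[#"|:]', " ", line))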
I am trying to upgrade a Cassandra 2.1.0 cluster to 2.1.8 (latest release).
When I start the first node with the 2.1.8 runtime, I get an error and the node refuses to start.
This is the error's stack trace:
org.apache.cassandra.io.FSReadError: java.lang.NullPointerException
at org.apache.cassandra.db.ColumnFamilyStore.removeUnfinishedCompactionLeftovers(ColumnFamilyStore.java:642) ~[apache-cassandra-2.1.8.jar:2.1.8]
at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:302) [apache-cassandra-2.1.8.jar:2.1.8]
at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:524) [apache-cassandra-2.1.8.jar:2.1.8]
at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:613) [apache-cassandra-2.1.8.jar:2.1.8]
Caused by: java.lang.NullPointerException: null
at org.apache.cassandra.db.ColumnFamilyStore.removeUnfinishedCompactionLeftovers(ColumnFamilyStore.java:634) ~[apache-cassandra-2.1.8.jar:2.1.8]
... 3 common frames omitted
FSReadError in Failed to remove unfinished compaction leftovers (file: /home/nudgeca2/datas/data/main/segment-97b5ba00571011e49a928bffe429b6b5/main-segment-ka-15432-Statistics.db). See log for details.
at org.apache.cassandra.db.ColumnFamilyStore.removeUnfinishedCompactionLeftovers(ColumnFamilyStore.java:642)
at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:302)
at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:524)
at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:613)
Caused by: java.lang.NullPointerException
at org.apache.cassandra.db.ColumnFamilyStore.removeUnfinishedCompactionLeftovers(ColumnFamilyStore.java:634)
... 3 more
Exception encountered during startup: java.lang.NullPointerException
The cluster has 7 nodes and runs on AWS Linux EC2 instances.
The node I try to upgrade was stopped after a nodetool drain.
Then I tried to go back to the 2.1.0 runtime, but I now get a similar error.
I also tried to stop and start another node, and everything was OK; that node restarted without any problem.
I tried to touch the missing file (since the file is supposed to be removed anyway, I thought it might not need any specific content). Two other files triggered the same error, and I touched them as well. The node then fails further on, while trying to read these files.
Does anyone have an idea what I should do?
Thank you for any help.
It might be worth opening a Jira for that issue, so if nothing else, they can catch the NPE and provide a better error message.
It looks like it's trying to open:
file: /home/nudgeca2/datas/data/main/segment-97b5ba00571011e49a928bffe429b6b5/main-segment-ka-15432-Statistics.db
It's possible that it's trying to read that file because it finds the associated data file: (/home/nudgeca2/datas/data/main/segment-97b5ba00571011e49a928bffe429b6b5/main-segment-ka-15432-Data.db). Does that data file exist? I'd be tempted to move it out of the way, and see if it starts properly.