I am using Nutch 1.11 (released 7 December 2015) with the bin/crawl script to run my crawl. Everything works until it reaches the solrindex step, which pushes the data to the Solr search engine and fails with this error:
SolrIndexWriter
solr.server.type : Type of SolrServer to communicate with (default 'http' however options include 'cloud', 'lb' and 'concurrent')
solr.server.url : URL of the Solr instance (mandatory)
solr.zookeeper.url : URL of the Zookeeper URL (mandatory if 'cloud' value for solr.server.type)
solr.loadbalance.urls : Comma-separated string of Solr server strings to be used (mandatory if 'lb' value for solr.server.type)
solr.mapping.file : name of the mapping file for fields (default solrindex-mapping.xml)
solr.commit.size : buffer size when sending to Solr (default 1000)
solr.auth : use authentication (default false)
solr.auth.username : username for authentication
solr.auth.password : password for authentication
2016-01-28 02:49:41,422 INFO indexer.IndexerMapReduce - IndexerMapReduce: crawldb: nutchweb/crawldb
2016-01-28 02:49:41,425 INFO indexer.IndexerMapReduce - IndexerMapReduce: linkdb: nutchweb/linkdb
2016-01-28 02:49:41,425 INFO indexer.IndexerMapReduce - IndexerMapReduces: adding segment: nutchweb/segments/20160127234706
2016-01-28 02:49:41,652 WARN util.NativeCodeLoader - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
2016-01-28 02:49:42,586 WARN conf.Configuration - file:/tmp/hadoop-micky/mapred/staging/micky810285982/.staging/job_local810285982_0001/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
2016-01-28 02:49:42,587 WARN conf.Configuration - file:/tmp/hadoop-micky/mapred/staging/micky810285982/.staging/job_local810285982_0001/job.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
2016-01-28 02:49:42,751 WARN conf.Configuration - file:/tmp/hadoop-micky/mapred/local/localRunner/micky/job_local810285982_0001/job_local810285982_0001.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.retry.interval; Ignoring.
2016-01-28 02:49:42,752 WARN conf.Configuration - file:/tmp/hadoop-micky/mapred/local/localRunner/micky/job_local810285982_0001/job_local810285982_0001.xml:an attempt to override final parameter: mapreduce.job.end-notification.max.attempts; Ignoring.
2016-01-28 02:49:43,342 INFO anchor.AnchorIndexingFilter - Anchor deduplication is: off
2016-01-28 02:49:49,230 INFO indexer.IndexWriters - Adding org.apache.nutch.indexwriter.solr.SolrIndexWriter
2016-01-28 02:49:50,627 INFO solr.SolrMappingReader - source: content dest: content
2016-01-28 02:49:50,627 INFO solr.SolrMappingReader - source: title dest: title
2016-01-28 02:49:50,627 INFO solr.SolrMappingReader - source: host dest: host
2016-01-28 02:49:50,627 INFO solr.SolrMappingReader - source: segment dest: segment
2016-01-28 02:49:50,627 INFO solr.SolrMappingReader - source: boost dest: boost
2016-01-28 02:49:50,627 INFO solr.SolrMappingReader - source: digest dest: digest
2016-01-28 02:49:50,627 INFO solr.SolrMappingReader - source: tstamp dest: tstamp
2016-01-28 02:49:50,959 INFO solr.SolrIndexWriter - Indexing 250 documents
2016-01-28 02:49:50,960 INFO solr.SolrIndexWriter - Deleting 0 documents
2016-01-28 02:49:54,346 INFO solr.SolrIndexWriter - Indexing 250 documents
2016-01-28 02:50:06,471 WARN mapred.LocalJobRunner - job_local810285982_0001
java.lang.Exception: org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: Exception writing document id http://nutch.apache.org/apidocs/apidocs-1.1/overview-tree.html to the index; possible analysis error.
at org.apache.hadoop.mapred.LocalJobRunner$Job.runTasks(LocalJobRunner.java:462)
at org.apache.hadoop.mapred.LocalJobRunner$Job.run(LocalJobRunner.java:529)
Caused by: org.apache.solr.client.solrj.impl.HttpSolrServer$RemoteSolrException: Exception writing document id http://nutch.apache.org/apidocs/apidocs-1.1/overview-tree.html to the index; possible analysis error.
at org.apache.solr.client.solrj.impl.HttpSolrServer.executeMethod(HttpSolrServer.java:552)
at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:210)
at org.apache.solr.client.solrj.impl.HttpSolrServer.request(HttpSolrServer.java:206)
at org.apache.solr.client.solrj.request.AbstractUpdateRequest.process(AbstractUpdateRequest.java:124)
at org.apache.nutch.indexwriter.solr.SolrIndexWriter.write(SolrIndexWriter.java:134)
at org.apache.nutch.indexer.IndexWriters.write(IndexWriters.java:85)
at org.apache.nutch.indexer.IndexerOutputFormat$1.write(IndexerOutputFormat.java:50)
at org.apache.nutch.indexer.IndexerOutputFormat$1.write(IndexerOutputFormat.java:41)
at org.apache.hadoop.mapred.ReduceTask$OldTrackingRecordWriter.write(ReduceTask.java:493)
at org.apache.hadoop.mapred.ReduceTask$3.collect(ReduceTask.java:422)
at org.apache.nutch.indexer.IndexerMapReduce.reduce(IndexerMapReduce.java:356)
at org.apache.nutch.indexer.IndexerMapReduce.reduce(IndexerMapReduce.java:56)
at org.apache.hadoop.mapred.ReduceTask.runOldReducer(ReduceTask.java:444)
at org.apache.hadoop.mapred.ReduceTask.run(ReduceTask.java:392)
at org.apache.hadoop.mapred.LocalJobRunner$Job$ReduceTaskRunnable.run(LocalJobRunner.java:319)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:471)
at java.util.concurrent.FutureTask.run(FutureTask.java:262)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:745)
2016-01-28 02:50:07,330 ERROR indexer.IndexingJob - Indexer: java.io.IOException: Job failed!
at org.apache.hadoop.mapred.JobClient.runJob(JobClient.java:836)
at org.apache.nutch.indexer.IndexingJob.index(IndexingJob.java:145)
at org.apache.nutch.indexer.IndexingJob.run(IndexingJob.java:222)
at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
at org.apache.nutch.indexer.IndexingJob.main(IndexingJob.java:231)
I found that the error is:
Exception writing document id http://nutch.apache.org/apidocs/apidocs-1.1/overview-tree.html to the index; possible analysis error.
It seems that no one has encountered this error before. Please help.
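A hedged pointer on where to look: the RemoteSolrException above is raised on the Solr side, so the Solr server log normally contains the underlying cause. With a "possible analysis error" it is often one of the fields Nutch sends (per the SolrMappingReader lines: content, title, host, segment, boost, digest, tstamp) being missing from, or analyzed incompatibly in, the Solr schema. Assuming a default local Solr with the Schema API available (replace <core> with your core name), the fields Solr actually knows about can be listed with:
curl http://localhost:8983/solr/<core>/schema/fields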
Related
We are using a Spark standalone cluster with 3 ZooKeepers in HA mode. I am seeing this issue in zookeeper.log:
Exception causing close of session 0x0 due to java.io.IOException: Len error 1195725856
Closed socket connection for client /10.23...... (no session established for client)
The ZooKeeper leader keeps automatically failing over from one server to another, and following this the Spark master fails over as well.
Some clients are also being continually disconnected and reconnected with this error.
How can this be fixed?
Full Log:
[myid:3] - WARN [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn#354] - Exception causing close of session 0x0 due to java.io.IOException: Len error 1195725856
[myid:3] - INFO [NIOServerCxn.Factory:0.0.0.0/0.0.0.0:2181:NIOServerCnxn#1001] - Closed socket connection for client /10....... (no session established for client)
[myid:3] - ERROR [LearnerHandler-/159.1.......:LearnerHandler#562] - Unexpected exception causing shutdown while sock still open
[myid:3] - WARN [LearnerHandler-/159.1......:LearnerHandler#575] - ******* GOODBYE /159.1..... ********
[myid:3] - INFO [WorkerReceiver[myid=3]:FastLeaderElection#542] - Notification: 1 (n.leader), 0x29000000ed (n.zxid), 0xa (n.round), LOOKING (n.state), 1 (n.sid), 0x29 (n.peerEPoch), LEADING (my state)
[myid:3] - INFO [LearnerHandler-/159........:LearnerHandler#263] - Follower sid: 1 : info : org.apache.zookeeper.server.quorum.QuorumPeer$QuorumServer#e144103
[myid:3] - INFO [LearnerHandler-/159.......LearnerHandler#318] - Synchronizing with Follower sid: 1 maxCommittedLog=0x29000000ed minCommittedLog=0x2800000007 peerLastZxid=0x29000000ed
[myid:3] - INFO [LearnerHandler-/159.......:LearnerHandler#395] - Sending DIFF
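A note on the Len value in that log, because it decodes meaningfully: 1195725856 is 0x47455420, which is the ASCII bytes "GET " read as a big-endian 4-byte length prefix. ZooKeeper reads the first four bytes of a new client connection as a length field, so this usually means something is speaking HTTP to the ZooKeeper client port 2181 (a load-balancer health check or monitoring probe is a common culprit). A quick check in Python:
# The "Len error" value from the log, read back as four big-endian bytes,
# spells the start of an HTTP request line.
value = 1195725856
print(value.to_bytes(4, "big"))  # b'GET '
print(hex(value))                # 0x47455420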
How can I generate a nodejs-server with OAS 2.0 as input using openapi-generator?
I ran openapi-generator with two versions, 3.3.4 and 4.0.0.
The results of my execution are listed below.
■ In 3.3.4
java -jar openapi-generator-cli-3.3.4.jar generate -i petstore.json -g nodejs-server -o stub
[main] WARN o.o.c.ignore.CodegenIgnoreProcessor - Output directory does not exist, or is inaccessible. No file (.openapi-generator-ignore) will be evaluated.
[main] WARN o.o.c.languages.NodeJSServerCodegen -
=======================================================================================
Currently, Node.js server doesn't work as its dependency doesn't support OpenAPI Spec3.
For further details, see https://github.com/OpenAPITools/openapi-generator/issues/34
=======================================================================================
[main] INFO o.o.codegen.DefaultGenerator - Model Pets not generated since it's an alias to array (without property)
Exception in thread "main" java.lang.RuntimeException: Could not generate api file for 'Pets'
at org.openapitools.codegen.DefaultGenerator.generateApis(DefaultGenerator.java:651)
at org.openapitools.codegen.DefaultGenerator.generate(DefaultGenerator.java:891)
at org.openapitools.codegen.cmd.Generate.run(Generate.java:355)
at org.openapitools.codegen.OpenAPIGenerator.main(OpenAPIGenerator.java:62)
Caused by: java.lang.IllegalArgumentException: character to be escaped is missing
at java.util.regex.Matcher.appendReplacement(Matcher.java:809)
at java.util.regex.Matcher.replaceAll(Matcher.java:955)
at java.lang.String.replaceAll(String.java:2223)
at org.openapitools.codegen.languages.NodeJSServerCodegen.apiFilename(NodeJSServerCodegen.java:192)
at org.openapitools.codegen.DefaultGenerator.generateApis(DefaultGenerator.java:595)
... 3 more
■ In 4.0.0
java -jar openapi-generator-cli-4.0.0.jar generate -i petstore.json -g nodejs-server -o stub
[main] WARN o.o.c.ignore.CodegenIgnoreProcessor - Output directory does not exist, or is inaccessible. No file (.openapi-generator-ignore) will be evaluated.
[main] INFO o.o.codegen.DefaultGenerator - OpenAPI Generator: nodejs-server (server)
[main] INFO o.o.codegen.DefaultGenerator - Generator 'nodejs-server' is considered stable.
[main] WARN o.o.c.languages.NodeJSServerCodegen -
=======================================================================================
Currently, Node.js server doesn't work as its dependency doesn't support OpenAPI Spec3.
For further details, see https://github.com/OpenAPITools/openapi-generator/issues/34
=======================================================================================
[main] INFO o.o.codegen.DefaultGenerator - Model Pets not generated since it's an alias to array (without property) and `generateAliasAsModel` is set to false (default)
Exception in thread "main" java.lang.RuntimeException: Could not generate api file for 'Pets'
at org.openapitools.codegen.DefaultGenerator.generateApis(DefaultGenerator.java:666)
at org.openapitools.codegen.DefaultGenerator.generate(DefaultGenerator.java:922)
at org.openapitools.codegen.cmd.Generate.run(Generate.java:396)
at org.openapitools.codegen.OpenAPIGenerator.main(OpenAPIGenerator.java:60)
Caused by: java.lang.IllegalArgumentException: character to be escaped is missing
at java.util.regex.Matcher.appendReplacement(Matcher.java:809)
at java.util.regex.Matcher.replaceAll(Matcher.java:955)
at java.lang.String.replaceAll(String.java:2223)
at org.openapitools.codegen.languages.NodeJSServerCodegen.apiFilename(NodeJSServerCodegen.java:181)
at org.openapitools.codegen.DefaultGenerator.generateApis(DefaultGenerator.java:611)
... 3 more
The input OAS is the following:
https://raw.githubusercontent.com/OAI/OpenAPI-Specification/master/examples/v2.0/json/petstore.json
The execution environment is as follows:
・Windows 10
・Java 1.8.0_202
It works for me:
java -jar modules/openapi-generator-cli/target/openapi-generator-cli.jar generate -g nodejs-server -i https://raw.githubusercontent.com/OAI/OpenAPI-Specification/master/examples/v2.0/json/petstore.json -o /tmp/nodejs-server
[main] INFO o.o.codegen.DefaultGenerator - OpenAPI Generator: nodejs-server (server)
[main] INFO o.o.codegen.DefaultGenerator - Generator 'nodejs-server' is considered stable.
[main] WARN o.o.c.languages.NodeJSServerCodegen -
=======================================================================================
Currently, Node.js server doesn't work as its dependency doesn't support OpenAPI Spec3.
For further details, see https://github.com/OpenAPITools/openapi-generator/issues/34
=======================================================================================
[main] INFO o.o.codegen.DefaultCodegen - Skipped overwriting README.md as the file already exists in /tmp/java2//README.md
[main] INFO o.o.codegen.DefaultGenerator - Model Pets not generated since it's an alias to array (without property) and `generateAliasAsModel` is set to false (default)
[main] INFO o.o.codegen.AbstractGenerator - writing file /tmp/java2/service/PetsService.js
[main] INFO o.o.codegen.AbstractGenerator - writing file /tmp/java2/controllers/Pets.js
[main] INFO o.o.codegen.AbstractGenerator - writing file /tmp/java2/utils/writer.js
[main] INFO o.o.codegen.AbstractGenerator - writing file /tmp/java2/api/openapi.yaml
[main] INFO o.o.codegen.AbstractGenerator - writing file /tmp/java2/index.js
[main] INFO o.o.codegen.AbstractGenerator - writing file /tmp/java2/package.json
[main] INFO o.o.codegen.AbstractGenerator - writing file /tmp/java2/.openapi-generator/VERSION
But as mentioned in the warning, the nodejs-server generator no longer works as expected as one of its dependencies does not support OpenAPI spec v3.
Please refer to https://github.com/OpenAPITools/openapi-generator/issues/2828 for the latest development of creating a new NodeJS Express generator.
UPDATE (2019/09): we've added a new nodejs-express-server generator. Please refer to https://twitter.com/oas_generator/status/1160000504455319553 for more information.
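Assuming a sufficiently recent CLI jar (4.1.x or later), trying that new generator mirrors the commands above; the generator name below is the one referenced in the announcement:
java -jar openapi-generator-cli.jar generate -g nodejs-express-server -i petstore.json -o stub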
We have Titan set up with Cassandra as its storage.backend, on a cluster of 3 Cassandra nodes. We are seeing the following error when one of the Cassandra nodes fails or shuts down.
This is what gets logged. Any help on this would be really appreciated.
INFO CountingConnectionPoolMonitor:194 - AddHost: 10.76.54.7
ERROR ConnectionPoolMBeanManager:74 - com.netflix.MonitoredResources:type=ASTYANAX,name=KeyspaceTitanConnectionPool,ServiceType=connectionpool
ERROR ConnectionPoolMBeanManager:74 - com.netflix.MonitoredResources:type=ASTYANAX,name=ClusterTitanConnectionPool,ServiceType=connectionpool
INFO GraphDatabaseConfiguration:1518 - Generated unique-instance-id=3217005b14357-titan-m21e
INFO ConnectionPoolMBeanManager:53 - Registering mbean: com.netflix.MonitoredResources:type=ASTYANAX,name=ClusterTitanConnectionPool,ServiceType=connectionpool
INFO CountingConnectionPoolMonitor:194 - AddHost: 10.76.54.44
INFO CountingConnectionPoolMonitor:194 - AddHost: 10.76.54.57
INFO CountingConnectionPoolMonitor:194 - AddHost: 10.76.54.7
INFO ConnectionPoolMBeanManager:53 - Registering mbean: com.netflix.MonitoredResources:type=ASTYANAX,name=KeyspaceTitanConnectionPool,ServiceType=connectionpool
INFO CountingConnectionPoolMonitor:194 - AddHost: 10.76.54.44
INFO CountingConnectionPoolMonitor:194 - AddHost: 10.76.54.57
INFO CountingConnectionPoolMonitor:194 - AddHost: 10.76.54.7
INFO Backend:459 - Configuring index [search]
INFO plugins:151 - [Donald Ritter] loaded [], sites []
INFO ElasticSearchIndex:354 - Configured remote host: 10.76.54.15 : 9300
INFO ElasticSearchIndex:354 - Configured remote host: 10.76.54.24 : 9300
INFO ElasticSearchIndex:354 - Configured remote host: 10.77.170.171 : 9300
INFO Backend:176 - Initiated backend operations thread pool of size 16
INFO KCVSLog:730 - Loaded unidentified ReadMarker start time 2016-02-16T23:45:27.539Z into com.thinkaurelius.titan.diskstorage.log.kcvs.KCVSLog$MessagePuller#49f59d47
I'm trying to query a Titan DB 0.5.4 via mogwai, but when I run the following script I get the error: rexpro.exceptions.RexProScriptException: transaction is not open
I found the same question here.
P.S. There is no tag for mogwai.
script:
#!/usr/bin/env python3
from mogwai.connection import execute_query, setup
con = setup('127.0.0.1', graph_name="bio4j", username="re", password="re")
results = execute_query("2 * a",params={"a":2}, connection= con)
print(results)
results = execute_query("bio4j.E",params={}, connection= con)
print(results)
log:
$ ./bin/rexster.sh --start
0 [main] INFO com.tinkerpop.rexster.Application - .:Welcome to Rexster:.
93 [main] INFO com.tinkerpop.rexster.server.RexsterProperties - Using [/Users/Phoenix/Dropbox/Graph4Bio/Titan/rexhome/config/rexster.xml] as configuration source.
102 [main] INFO com.tinkerpop.rexster.Application - Rexster is watching [/Users/Phoenix/Dropbox/Graph4Bio/Titan/rexhome/config/rexster.xml] for change.
730 [main] INFO com.thinkaurelius.titan.graphdb.configuration.GraphDatabaseConfiguration - Generated unique-instance-id=0a69045d1736-AngryMac-local1
804 [main] INFO com.thinkaurelius.titan.diskstorage.Backend - Initiated backend operations thread pool of size 8
905 [main] INFO com.thinkaurelius.titan.diskstorage.log.kcvs.KCVSLog - Loaded unidentified ReadMarker start time Timepoint[1455128079919000 μs] into com.thinkaurelius.titan.diskstorage.log.kcvs.KCVSLog$MessagePuller#302c971f
908 [main] INFO com.tinkerpop.rexster.RexsterApplicationGraph - Graph [bio4j] - configured with allowable namespace [tp:gremlin]
932 [main] INFO com.tinkerpop.rexster.config.GraphConfigurationContainer - Graph bio4j - titangraph[berkeleyje:/Users/Phoenix/Dropbox/Graph4Bio/Bio4j/bio4j] loaded
939 [main] INFO com.tinkerpop.rexster.server.metrics.HttpReporterConfig - Configured HTTP Metric Reporter.
941 [main] INFO com.tinkerpop.rexster.server.metrics.ConsoleReporterConfig - Configured Console Metric Reporter.
2058 [main] INFO com.tinkerpop.rexster.server.HttpRexsterServer - HTTP/REST thread pool configuration: kernal[4 / 4] worker[8 / 8]
2060 [main] INFO com.tinkerpop.rexster.server.HttpRexsterServer - Using org.glassfish.grizzly.strategies.LeaderFollowerNIOStrategy IOStrategy for HTTP/REST.
2160 [main] INFO com.tinkerpop.rexster.server.HttpRexsterServer - Rexster Server running on: [http://localhost:8182]
2160 [main] INFO com.tinkerpop.rexster.server.RexProRexsterServer - Using org.glassfish.grizzly.strategies.LeaderFollowerNIOStrategy IOStrategy for RexPro.
2160 [main] INFO com.tinkerpop.rexster.server.RexProRexsterServer - RexPro thread pool configuration: kernal[4 / 4] worker[8 / 8]
2162 [main] INFO com.tinkerpop.rexster.server.RexProRexsterServer - Rexster configured with [DefaultSecurity].
2163 [main] INFO com.tinkerpop.rexster.server.RexProRexsterServer - RexPro Server bound to [0.0.0.0:8184]
2177 [main] INFO com.tinkerpop.rexster.server.ShutdownManager - Bound shutdown socket to /127.0.0.1:8183. Starting listener thread for shutdown requests.
152568 [Grizzly(2) SelectorRunner] INFO com.tinkerpop.rexster.protocol.EngineController - ScriptEngineManager has factory for: ECMAScript
152568 [Grizzly(2) SelectorRunner] INFO com.tinkerpop.rexster.protocol.EngineController - ScriptEngineManager has factory for: gremlin-groovy
152568 [Grizzly(2) SelectorRunner] INFO com.tinkerpop.rexster.protocol.EngineController - Registered ScriptEngine for: gremlin-groovy
152569 [Grizzly(2) SelectorRunner] INFO com.tinkerpop.rexster.protocol.EngineHolder - Initializing gremlin-groovy engine with additional imports.
153259 [Grizzly(2) SelectorRunner] INFO com.tinkerpop.rexster.protocol.EngineHolder - ScriptEngine initializing with a custom script
154074 [Grizzly(2) SelectorRunner] INFO com.tinkerpop.rexster.protocol.EngineController - ScriptEngineManager has factory for: Groovy
154076 [Grizzly(2) SelectorRunner] INFO com.tinkerpop.rexster.protocol.session.RexProSessions - RexPro Session created: a2b416ce-75ea-4ecb-9835-b287162c90cb
154354 [Grizzly(4)] INFO com.tinkerpop.rexster.protocol.session.RexProSessions - Try to destroy RexPro Session: a2b416ce-75ea-4ecb-9835-b287162c90cb
154355 [Grizzly(4)] INFO com.tinkerpop.rexster.protocol.session.RexProSessions - RexPro Session destroyed or doesn't otherwise exist: a2b416ce-75ea-4ecb-9835-b287162c90cb
154356 [Grizzly(5)] INFO com.tinkerpop.rexster.protocol.session.RexProSessions - RexPro Session created: 5b8a669f-615d-4f84-9d1e-2d10624347f0
154525 [Grizzly(7)] WARN com.tinkerpop.rexster.protocol.server.ScriptServer - Could not process script [bio4j.E] for language [groovy] on session [[B#6634722f] and request [[B#68f38099]
154527 [Grizzly(8)] INFO com.tinkerpop.rexster.protocol.session.RexProSessions - Try to destroy RexPro Session: 5b8a669f-615d-4f84-9d1e-2d10624347f0
154527 [Grizzly(8)] INFO com.tinkerpop.rexster.protocol.session.RexProSessions - RexPro Session destroyed or doesn't otherwise exist: 5b8a669f-615d-4f84-9d1e-2d10624347f0
154529 [Grizzly(1)] INFO com.tinkerpop.rexster.protocol.session.RexProSessions - Try to destroy RexPro Session: 00000000-0000-0000-0000-000000000000
154529 [Grizzly(1)] INFO com.tinkerpop.rexster.protocol.session.RexProSessions - RexPro Session destroyed or doesn't otherwise exist: 00000000-0000-0000-0000-000000000000
Maintainer of mogwai here.
What version of mogwai are you using? In 0.7.7 there is no return value from the setup method, and the connection object should not be passed around. In fact, when you call setup it creates a connection pool (a synchronous rexpro connection pool, since no concurrency option was specified). So in general, just call setup once for the life of your app, and you can then use execute_query anywhere without holding any references; see the corrected sketch after the configuration example below.
Also this message in particular stands out:
154525 [Grizzly(7)] WARN com.tinkerpop.rexster.protocol.server.ScriptServer - Could not process script [bio4j.E] for language [groovy] on session [[B#6634722f] and request [[B#68f38099]
Is your graph configured with the graph name "bio4j"? The default Titan graph name is "graph", and the default graph object name mogwai uses is "g". If you have a graph name of "bio4j", you wouldn't reference it directly; you'd use the graph object name associated with the transaction. Think of the graph name as a database name in a SQL database, and the graph object as the transactional reference to that database. This is configured in the XML configuration file when starting Titan, specifically:
<graphs>
<graph>
<graph-name>graph</graph-name>
....
</graph>
</graphs>
So assuming you changed that from "graph" to "bio4j" and left the default graph_obj_name in the setup function as "g", your query should read "g.E".
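Putting both points together, a minimal corrected sketch of the question's script (assuming mogwai 0.7.7, a rexster.xml graph-name of "bio4j", and the default graph object name "g"):
#!/usr/bin/env python3
from mogwai.connection import execute_query, setup

# In 0.7.7 setup() returns nothing; it creates a synchronous rexpro
# connection pool internally, so call it once and don't pass it around.
setup('127.0.0.1', graph_name="bio4j", username="re", password="re")

# Parameterized query; no connection argument is needed.
print(execute_query("2 * a", params={"a": 2}))

# Query edges via the graph object name "g", not the graph name "bio4j".
print(execute_query("g.E"))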
I am looking to export DB table content into a CSV file. For that I found that Apache Sqoop does it very fast, and it can also import into HDFS.
I was able to import into HDFS using Apache Sqoop from Oracle 11g. But when I tried the same for a Sybase database, the task never completes; it sits there as if nothing is happening.
The command I am using:
"import"
, "--driver", "com.sybase.jdbc4.jdbc.SybDriver"
, "--connect"
, "jdbc:sybase:Tds:192.168.1.151:5002?ServiceName=test1"
, "--username", "tejkiran", "--password", "passw0rd"
, "--table", "PIN"
, "-m", "1"
, "--target-dir", "/tmp/HELLO/tej01"
Log Details...
16:51:38.249 [main] WARN org.apache.sqoop.tool.SqoopTool - $SQOOP_CONF_DIR has not been set in the environment. Cannot check for additional configuration.
16:51:38.442 [main] DEBUG com.cloudera.sqoop.SqoopOptions - Generated nonce dir: /tmp/sqoop-hduser/compile/7c374682d26b0ab8b800a1bbbbccd726
16:51:38.547 [main] WARN org.apache.sqoop.tool.BaseSqoopTool - Setting your password on the command-line is insecure. Consider using -P instead.
16:51:38.586 [main] WARN org.apache.sqoop.ConnFactory - $SQOOP_CONF_DIR has not been set in the environment. Cannot check for additional configuration.
16:51:38.750 [main] DEBUG org.apache.sqoop.ConnFactory - Loaded manager factory: com.cloudera.sqoop.manager.DefaultManagerFactory
16:51:38.751 [main] DEBUG org.apache.sqoop.ConnFactory - Trying ManagerFactory: com.cloudera.sqoop.manager.DefaultManagerFactory
16:51:38.803 [main] INFO org.apache.sqoop.manager.SqlManager - Using default fetchSize of 1000
16:51:38.803 [main] DEBUG org.apache.sqoop.ConnFactory - Instantiated ConnManager org.apache.sqoop.manager.GenericJdbcManager#50246923
16:51:38.803 [main] INFO org.apache.sqoop.tool.CodeGenTool - Beginning code generation
16:51:40.279 [main] DEBUG org.apache.sqoop.manager.SqlManager - No connection paramenters specified. Using regular API for making connection.
16:51:41.147 [main] DEBUG org.apache.sqoop.manager.SqlManager - Using fetchSize for next query: 1000
16:51:41.164 [main] INFO org.apache.sqoop.manager.SqlManager - Executing SQL statement: SELECT t.* FROM PIN AS t WHERE 1=0
After this statement the log stops updating, and the job just hangs as if nothing is happening.