GraphFactory could not find gremlin.graph property configuration - python-3.x

Summary
While trying to start Gremlin Server with OrientDB, I got this error. GraphFactory message: GraphFactory could not find [org.apache.tinkerpop.gremlin.orientdb.OrientEmbeddedFactory]
Detail
I am using the following versions:
Gremlin : apache-tinkerpop-gremlin-server-3.3.1
OrientDB : orientdb-tp3-3.0.2
(to download the jar files I used bin/gremlin-server.sh -i com.orientechnologies orientdb-gremlin 3.0.2)
gremlinpython : 3.3.0
gremlin-server-orientdb.yaml file
host: localhost
port: 8182
scriptEvaluationTimeout: 30000
channelizer: org.apache.tinkerpop.gremlin.server.channel.WebSocketChannelizer
graphs: {
graph : conf/orientdb-empty.properties
}
scriptEngines: {
gremlin-groovy: {
plugins: { org.apache.tinkerpop.gremlin.server.jsr223.GremlinServerGremlinPlugin: {},
org.apache.tinkerpop.gremlin.orientdb.jsr223.OrientDBGremlinPlugin: {},
org.apache.tinkerpop.gremlin.jsr223.ImportGremlinPlugin: {classImports: [java.lang.Math], methodImports: [java.lang.Math#*]},
org.apache.tinkerpop.gremlin.jsr223.ScriptFileGremlinPlugin: {files: [../config/demodb.groovy]}}}}
serializers:
- { className: org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV3d0, config: { ioRegistries: [org.apache.tinkerpop.gremlin.orientdb.io.OrientIoRegistry] }} # application/vnd.gremlin-v3.0+gryo
- { className: org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV3d0, config: { serializeResultToString: true }} # application/vnd.gremlin-v3.0+gryo-stringd
- { className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerV3d0, config: { ioRegistries: [org.apache.tinkerpop.gremlin.orientdb.io.OrientIoRegistry] }} # application/json
processors:
- { className: org.apache.tinkerpop.gremlin.server.op.session.SessionOpProcessor, config: { sessionTimeout: 28800000 }}
- { className: org.apache.tinkerpop.gremlin.server.op.traversal.TraversalOpProcessor, config: { cacheExpirationTime: 600000, cacheMaxSize: 1000 }}
metrics: {
consoleReporter: {enabled: true, interval: 180000},
csvReporter: {enabled: true, interval: 180000, fileName: /tmp/gremlin-server-metrics.csv},
jmxReporter: {enabled: true},
slf4jReporter: {enabled: true, interval: 180000}}
strictTransactionManagement: false
maxInitialLineLength: 4096
maxHeaderSize: 8192
maxChunkSize: 8192
maxContentLength: 65536
maxAccumulationBufferComponents: 1024
resultIterationBatchSize: 64
writeBufferLowWaterMark: 32768
writeBufferHighWaterMark: 65536
authentication: {
authenticator: com.orientechnologies.tinkerpop.server.auth.OGremlinServerAuthenticator
}
ssl: {
enabled: false}
orientdb-empty.properties file
gremlin.graph=org.apache.tinkerpop.gremlin.orientdb.OrientEmbeddedFactory
I also tried with this:
gremlin.graph=org.apache.tinkerpop.gremlin.orientdb.OrientGraph
Stack trace
admin-12@admin:~/Documents/apache-tinkerpop-gremlin-server-3.3.1/bin$ ./gremlin-server.sh conf/gremlin-server-orientdb.yaml
[INFO] GremlinServer -
\,,,/
(o o)
-----oOOo-(3)-oOOo-----
[INFO] GremlinServer - Configuring Gremlin Server from conf/gremlin-server-orientdb.yaml
[INFO] MetricManager - Configured Metrics ConsoleReporter configured with report interval=180000ms
[INFO] MetricManager - Configured Metrics CsvReporter configured with report interval=180000ms to fileName=/tmp/gremlin-server-metrics.csv
[INFO] MetricManager - Configured Metrics JmxReporter configured with domain= and agentId=
[INFO] MetricManager - Configured Metrics Slf4jReporter configured with interval=180000ms and loggerName=org.apache.tinkerpop.gremlin.server.Settings$Slf4jReporterMetrics
[WARN] DefaultGraphManager - Graph [graph] configured at [conf/orientdb-empty.properties] could not be instantiated and will not be available in Gremlin Server. GraphFactory message: GraphFactory could not find [org.apache.tinkerpop.gremlin.orientdb.OrientEmbeddedFactory] - Ensure that the jar is in the classpath
java.lang.RuntimeException: GraphFactory could not find [org.apache.tinkerpop.gremlin.orientdb.OrientEmbeddedFactory] - Ensure that the jar is in the classpath
at org.apache.tinkerpop.gremlin.structure.util.GraphFactory.open(GraphFactory.java:63)
at org.apache.tinkerpop.gremlin.structure.util.GraphFactory.open(GraphFactory.java:104)
at org.apache.tinkerpop.gremlin.server.util.DefaultGraphManager.lambda$new$0(DefaultGraphManager.java:57)
at java.util.LinkedHashMap$LinkedEntrySet.forEach(LinkedHashMap.java:671)
at org.apache.tinkerpop.gremlin.server.util.DefaultGraphManager.<init>(DefaultGraphManager.java:55)
at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
at org.apache.tinkerpop.gremlin.server.util.ServerGremlinExecutor.<init>(ServerGremlinExecutor.java:80)
at org.apache.tinkerpop.gremlin.server.GremlinServer.<init>(GremlinServer.java:111)
at org.apache.tinkerpop.gremlin.server.GremlinServer.main(GremlinServer.java:325)
[INFO] ServerGremlinExecutor - Initialized Gremlin thread pool. Threads in pool named with pattern gremlin-*
Exception in thread "main" java.lang.IllegalStateException: java.lang.reflect.InvocationTargetException
at org.apache.tinkerpop.gremlin.groovy.engine.GremlinExecutor.initializeGremlinScriptEngineManager(GremlinExecutor.java:448)
at org.apache.tinkerpop.gremlin.groovy.engine.GremlinExecutor.<init>(GremlinExecutor.java:105)
at org.apache.tinkerpop.gremlin.groovy.engine.GremlinExecutor.<init>(GremlinExecutor.java:74)
at org.apache.tinkerpop.gremlin.groovy.engine.GremlinExecutor$Builder.create(GremlinExecutor.java:590)
at org.apache.tinkerpop.gremlin.server.util.ServerGremlinExecutor.<init>(ServerGremlinExecutor.java:128)
at org.apache.tinkerpop.gremlin.server.GremlinServer.<init>(GremlinServer.java:111)
at org.apache.tinkerpop.gremlin.server.GremlinServer.main(GremlinServer.java:325)
Update
[WARN] Slf4JLogger - An exceptionCaught() event was fired, and it reached at the tail of the pipeline. It usually means the last handler in the pipeline did not handle the exception.
java.lang.NullPointerException
at com.orientechnologies.tinkerpop.server.auth.OGremlinServerAuthenticator.authenticate(OGremlinServerAuthenticator.java:34)
at org.apache.tinkerpop.gremlin.server.auth.SimpleAuthenticator$PlainTextSaslAuthenticator.getAuthenticatedUser(SimpleAuthenticator.java:143)
at org.apache.tinkerpop.gremlin.server.handler.SaslAuthenticationHandler.channelRead(SaslAuthenticationHandler.java:103)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:335)
at io.netty.handler.codec.MessageToMessageDecoder.channelRead(MessageToMessageDecoder.java:102)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:356)
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:342)
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:335)

You are mixing a lot of different versions, so it's hard to say what will work. First of all, TinkerPop recommends that you match the version of the server to the version of the client, so if you use 3.3.1 on the server then you should try to use 3.3.1 on the client (in your case gremlin-python). Next, you are using orientdb-gremlin 3.0.2, which appears to be bound to TinkerPop 3.3.0:
https://github.com/orientechnologies/orientdb-gremlin/blob/3.0.2/driver/pom.xml
which means that for best results you should probably use 3.3.0 for both Gremlin Server and gremlin-python. While I mention all this about "matching versions", it is possible to mix versions, but matching will limit the things that can go wrong as you're just getting started, so I'd encourage you to start there.
As for your error, I think you installed the wrong dependencies. You should have done:
bin/gremlin-server.sh -i com.orientechnologies orientdb-gremlin-server 3.0.2
as orientdb-gremlin-server will bring in OrientEmbeddedFactory as well as the orientdb-gremlin dependencies. I also think that your orientdb-empty.properties file is missing some configuration options - see what is defaulted and what is not here.
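Once the server comes up cleanly, a minimal gremlinpython smoke test for the 3.3.x line looks like the sketch below. It assumes the default WebSocket endpoint on localhost:8182 and a traversal source named g bound on the server; adjust both to your setup.
# pip install gremlinpython==3.3.0  (matches TinkerPop 3.3.0, per the version advice above)
from gremlin_python.structure.graph import Graph
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection

# "g" must match a traversal source that the server binds at startup.
connection = DriverRemoteConnection('ws://localhost:8182/gremlin', 'g')
g = Graph().traversal().withRemote(connection)

print(g.V().limit(5).toList())  # any simple round trip proves the connection works
connection.close()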

Related

Not able to connect to Elasticsearch from docker container (node.js client)

I have set up an elasticsearch/kibana docker configuration and I want to connect to elasticsearch from inside a docker container using the @elastic/elasticsearch client for node. However, the connection is "timing out".
The project takes inspiration from Patrick Triest: https://blog.patricktriest.com/text-search-docker-elasticsearch/
However, I have made some modifications in order to connect kibana, use a newer ES image and the new elasticsearch node client.
I am using the following docker-compose file:
version: "3"
services:
api:
container_name: mp-backend
build: .
ports:
- "3000:3000"
- "9229:9229"
environment:
- NODE_ENV=local
- ES_HOST=elasticsearch
- PORT=3000
elasticsearch:
image: docker.elastic.co/elasticsearch/elasticsearch:7.5.1
container_name: elasticsearch
environment:
- node.name=elasticsearch
- cluster.name=es-docker-cluster
- discovery.type=single-node
- bootstrap.memory_lock=true
- "ES_JAVA_OPTS=-Xms512m -Xmx512m"
- "http.cors.allow-origin=*"
- "http.cors.enabled=true"
- "http.cors.allow-headers=X-Requested-With,X-Auth-Token,Content-Type,Content-Length,Authorization"
- "http.cors.allow-credentials=true"
ulimits:
memlock:
soft: -1
hard: -1
volumes:
- data01:/usr/share/elasticsearch/data
ports:
- 9200:9200
networks:
- elastic
kibana:
image: docker.elastic.co/kibana/kibana:7.5.1
ports:
- "5601:5601"
links:
- elasticsearch
networks:
- elastic
depends_on:
- elasticsearch
volumes:
data01:
driver: local
networks:
elastic:
driver: bridge
When building/bringing the container up, I am able to get a response from ES: curl -XGET "localhost:9200", "you know, for search"... And Kibana is running and able to connect to the index.
I have the following file located in the backend container (connection.js):
const { Client } = require("#elastic/elasticsearch");
const client = new Client({ node: "http://localhost:9200" });
/*Check the elasticsearch connection */
async function health() {
let connected = false;
while (!connected) {
console.log("Connecting to Elasticsearch");
try {
const health = await client.cluster.health({});
connected = true;
console.log(health.body);
return health;
} catch (err) {
console.log("ES Connection Failed", err);
}
}
}
health();
If I run it outside of the container then I get the expected response:
node server/connection.js
Connecting to Elasticsearch
{
cluster_name: 'es-docker-cluster',
status: 'yellow',
timed_out: false,
number_of_nodes: 1,
number_of_data_nodes: 1,
active_primary_shards: 7,
active_shards: 7,
relocating_shards: 0,
initializing_shards: 0,
unassigned_shards: 3,
delayed_unassigned_shards: 0,
number_of_pending_tasks: 0,
number_of_in_flight_fetch: 0,
task_max_waiting_in_queue_millis: 0,
active_shards_percent_as_number: 70
}
However, if I run it inside of the container:
docker exec mp-backend "node" "server/connection.js"
Then I get the following response:
Connecting to Elasticsearch
ES Connection Failed ConnectionError: connect ECONNREFUSED 127.0.0.1:9200
at onResponse (/usr/src/app/node_modules/@elastic/elasticsearch/lib/Transport.js:214:13)
at ClientRequest.<anonymous> (/usr/src/app/node_modules/@elastic/elasticsearch/lib/Connection.js:98:9)
at ClientRequest.emit (events.js:223:5)
at Socket.socketErrorListener (_http_client.js:415:9)
at Socket.emit (events.js:223:5)
at emitErrorNT (internal/streams/destroy.js:92:8)
at emitErrorAndCloseNT (internal/streams/destroy.js:60:3)
at processTicksAndRejections (internal/process/task_queues.js:81:21) {
name: 'ConnectionError',
meta: {
body: null,
statusCode: null,
headers: null,
warnings: null,
meta: {
context: null,
request: [Object],
name: 'elasticsearch-js',
connection: [Object],
attempts: 3,
aborted: false
}
}
}
So, I tried changing the client connection to (I read somewhere that this might help):
const client = new Client({ node: "http://172.24.0.1:9200" });
Then I am just "stuck" waiting for a response. Only one console.log of "Connecting to Elasticsearch"
I am using the following version:
"#elastic/elasticsearch": "7.5.1"
As you probably see, I do not have a full grasp of what is happening here... I have also tried to add:
links:
  - elasticsearch
networks:
  - elastic
To the api service, without any luck.
Does anyone know what I am doing wrong here? Thank you in advance :)
EDIT:
I did a "docker network inspect" on the network with *_elastic. There I see the following:
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.22.0.0/16",
"Gateway": "172.22.0.1"
}
]
},
Changing the client to connect to the "GateWay" Ip:
const client = new Client({ node: "http://172.22.0.1:9200" });
Then it works! I am still wondering why as this was just "trial and error" Is there any way to obtain this Ip without having to inspect the network?
In Docker, localhost (or the corresponding IPv4 address 127.0.0.1, or the corresponding IPv6 address ::1) generally means "this container"; you can't use that host name to access services running in another container.
In a Compose-based setup, the names of the services: blocks (api, elasticsearch, kibana) are usable as host names. The caveat is that all of the services have to be on the same Docker-internal network. Compose creates one for you and attaches containers to it by default. (In your example api is on the default network but the other two containers are on a separate elastic network.) Networking in Compose in the Docker documentation has some more details.
So to make this work, you need to tell your client code to honor the environment variable you're setting that points at Elasticsearch:
const esHost = process.env.ES_HOST || 'localhost';
const esUrl = 'http://' + esHost + ':9200';
const client = new Client({ node: esUrl });
In your docker-compose.yml file delete all of the networks: blocks to use the provided default network. (While you're there, links: is unnecessary and Compose provides reasonable container_name: for you; api can reasonably depends_on: [elasticsearch].)
Since we've provided a fallback for $ES_HOST, this also works in a host development environment: it defaults to localhost, which outside of Docker means "the current host", and so reaches the published port of the Elasticsearch container.
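Putting the pieces together, a trimmed docker-compose.yml along the lines described above would look something like this sketch (one default network, no links:, Elasticsearch environment shortened from the question; keep whichever entries you actually need):
version: "3"
services:
  api:
    build: .
    ports:
      - "3000:3000"
    environment:
      - ES_HOST=elasticsearch   # resolved via Compose's default network
    depends_on:
      - elasticsearch
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.5.1
    environment:
      - discovery.type=single-node
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ports:
      - "9200:9200"   # still published so host-side curl and debugging keep working
  kibana:
    image: docker.elastic.co/kibana/kibana:7.5.1
    ports:
      - "5601:5601"
    depends_on:
      - elasticsearch
With every container on the same default network, http://elasticsearch:9200 resolves from inside the api container, while localhost:9200 continues to work from the host.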

Prisma: getting "com.prisma.deploy.schema.InvalidProjectId: No service with name 'default' and stage 'default' found" error

I'm getting errors related to name 'default' and stage 'default' when initializing a new Prisma project.
Steps to reproduce:
Follow all the steps from the official guide strictly
Get the com.prisma.deploy.schema.InvalidProjectId: No service with name 'default' and stage 'default' found error when running prisma deploy
Get this error when performing a simple query from http://localhost:4466/graphql:
Query:
query {
  user {
    id
    name
  }
}
Response:
{
  "errors": [
    {
      "message": "Project not found: 'graphql_default'",
      "code": 3016,
      "requestId": "local:cjzs556h5000f0754vc6k36qd"
    }
  ]
}
Versions:
Connector: MongoDB
Prisma Server: 1.34.6
prisma CLI: prisma/1.34.6 (darwin-x64) node-v10.16.3
OS: OS X Mojave - 10.14.6
Logs from Docker:
$ docker logs hello-world_prisma_1
No log level set, defaulting to INFO.
[INFO] Cluster created with settings {hosts=[mongo:27017], mode=SINGLE, requiredClusterType=UNKNOWN, serverSelectionTimeout='30000 ms', maxWaitQueueSize=500}
[INFO] Exception in monitor thread while connecting to server mongo:27017
Exception opening socket
com.mongodb.MongoSocketOpenException: Exception opening socket
at com.mongodb.internal.connection.AsynchronousSocketChannelStream$OpenCompletionHandler.failed(AsynchronousSocketChannelStream.java:272)
at sun.nio.ch.Invoker.invokeUnchecked(Invoker.java:128)
at sun.nio.ch.Invoker$2.run(Invoker.java:218)
at sun.nio.ch.AsynchronousChannelGroupImpl$1.run(AsynchronousChannelGroupImpl.java:112)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.net.ConnectException: Connection refused
at sun.nio.ch.UnixAsynchronousSocketChannelImpl.checkConnect(Native Method)
at sun.nio.ch.UnixAsynchronousSocketChannelImpl.finishConnect(UnixAsynchronousSocketChannelImpl.java:252)
at sun.nio.ch.UnixAsynchronousSocketChannelImpl.finish(UnixAsynchronousSocketChannelImpl.java:198)
at sun.nio.ch.UnixAsynchronousSocketChannelImpl.onEvent(UnixAsynchronousSocketChannelImpl.java:213)
at sun.nio.ch.EPollPort$EventHandlerTask.run(EPollPort.java:293)
... 1 more
[INFO] Initializing workers...
[INFO] Obtaining exclusive agent lock...
[INFO] Obtaining exclusive agent lock... Successful.
[INFO] Successfully started 1 workers.
[INFO] No server chosen by com.mongodb.async.client.ClientSessionHelper$1@70a6c292 from cluster description ClusterDescription{type=UNKNOWN, connectionMode=SINGLE, serverDescriptions=[ServerDescription{address=mongo:27017, type=UNKNOWN, state=CONNECTING, exception={com.mongodb.MongoSocketOpenException: Exception opening socket}, caused by {java.net.ConnectException: Connection refused}}]}. Waiting for 30000 ms before timing out
[INFO] Opened connection [connectionId{localValue:2, serverValue:1}] to mongo:27017
[INFO] Monitor thread successfully connected to server with description ServerDescription{address=mongo:27017, type=STANDALONE, state=CONNECTED, ok=true, version=ServerVersion{versionList=[3, 6, 13]}, minWireVersion=0, maxWireVersion=6, maxDocumentSize=16777216, logicalSessionTimeoutMinutes=30, roundTripTimeNanos=16638401}
Server running on :4466
[INFO] Opened connection [connectionId{localValue:3, serverValue:2}] to mongo:27017
[INFO] Deployment worker initialization complete.
[Warning] Management authentication is disabled. Enable it in your Prisma config to secure your server.
{"key":"error/handled","requestId":"local:cjzs54qg500020754mbbzqni9","payload":{"exception":"com.prisma.deploy.schema.InvalidProjectId: No service with name 'default' and stage 'default' found","query":"\n query($name: String! $stage: String!) {\n project(name: $name stage: $stage) {\n name\n stage\n }\n }\n ","variables":"{\"name\":\"default\",\"stage\":\"default\"}","code":"4000","stack_trace":"com.prisma.deploy.schema.SchemaBuilderImpl.$anonfun$projectField$3(SchemaBuilder.scala:144)\\n scala.Option.getOrElse(Option.scala:121)\\n com.prisma.deploy.schema.SchemaBuilderImpl.$anonfun$projectField$2(SchemaBuilder.scala:144)\\n scala.util.Success.$anonfun$map$1(Try.scala:251)\\n scala.util.Success.map(Try.scala:209)\\n scala.concurrent.Future.$anonfun$map$1(Future.scala:288)\\n scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:29)\\n scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:29)\\n scala.concurrent.impl.CallbackRunnable.run(Promise.scala:60)\\n akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)\\n akka.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:91)\\n scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:12)\\n scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:81)\\n akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:91)\\n akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)\\n akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:44)\\n akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)\\n akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)\\n akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)\\n akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)","message":"No service with name 'default' and stage 'default' found"}}
[Debug] Initializing deployment worker for default_default
[Debug] Scheduling deployment for project default_default
[INFO] Opened connection [connectionId{localValue:4, serverValue:3}] to mongo:27017
[Debug] Applied migration for project default_default
Formatted [Warning]:
{
"key": "error/handled",
"requestId": "local:cjzs54qg500020754mbbzqni9",
"payload": {
"exception": "com.prisma.deploy.schema.InvalidProjectId: No service with name 'default' and stage 'default' found",
"query": "\n query($name: String! $stage: String!) {\n project(name: $name stage: $stage) {\n name\n stage\n }\n }\n ",
"variables": "{\"name\":\"default\",\"stage\":\"default\"}",
"code": "4000",
"stack_trace": "com.prisma.deploy.schema.SchemaBuilderImpl.$anonfun$projectField$3(SchemaBuilder.scala:144)\\n scala.Option.getOrElse(Option.scala:121)\\n com.prisma.deploy.schema.SchemaBuilderImpl.$anonfun$projectField$2(SchemaBuilder.scala:144)\\n scala.util.Success.$anonfun$map$1(Try.scala:251)\\n scala.util.Success.map(Try.scala:209)\\n scala.concurrent.Future.$anonfun$map$1(Future.scala:288)\\n scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:29)\\n scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:29)\\n scala.concurrent.impl.CallbackRunnable.run(Promise.scala:60)\\n akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)\\n akka.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:91)\\n scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:12)\\n scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:81)\\n akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:91)\\n akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)\\n akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:44)\\n akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)\\n akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)\\n akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)\\n akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)",
"message": "No service with name 'default' and stage 'default' found"
}
}
Formatted "query":
query($name: String! $stage: String!) {
  project(name: $name stage: $stage) {
    name
    stage
  }
}
Formatted "variables":
{ "name":"default", "stage":"default" }
Formatted stack trace:
com.prisma.deploy.schema.SchemaBuilderImpl.$anonfun$projectField$3(SchemaBuilder.scala:144)
scala.Option.getOrElse(Option.scala:121)
com.prisma.deploy.schema.SchemaBuilderImpl.$anonfun$projectField$2(SchemaBuilder.scala:144)
scala.util.Success.$anonfun$map$1(Try.scala:251)
scala.util.Success.map(Try.scala:209)
scala.concurrent.Future.$anonfun$map$1(Future.scala:288)
scala.concurrent.impl.Promise.liftedTree1$1(Promise.scala:29)
scala.concurrent.impl.Promise.$anonfun$transform$1(Promise.scala:29)
scala.concurrent.impl.CallbackRunnable.run(Promise.scala:60)
akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)
akka.dispatch.BatchingExecutor$BlockableBatch.$anonfun$run$1(BatchingExecutor.scala:91)
scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:12)
scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:81)
akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:91)
akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:40)
akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:44)
akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
P.S. It was actually running flawlessly a few days ago, but today I can't manage to make it work again!

JanusGraphException: Could not execute operation due to backend exception

I'm going crazy because of this exception when starting up JanusGraph.
It happened between restarts of the gremlin-server, without even touching the configuration files.
This error always appears at the very first startup of gremlin-server.
This is the stack trace from the logs:
6911 [main] INFO org.janusgraph.diskstorage.log.kcvs.KCVSLog - Loaded unidentified ReadMarker start time 2019-08-21T17:57:53.794212900Z into org.janusgraph.diskstorage.log.kcvs.KCVSLog$MessagePuller#51efb731
8148 [main] WARN org.janusgraph.diskstorage.locking.consistentkey.ConsistentKeyLocker - Skipping outdated lock on KeyColumn [k=0x 16-165-160-103-105- 30- 71-114- 97-112-104- 95- 78- 97-109-101- 95- 73-110-100-101-248, c=0x 0] with our rid ( 48- 97- 48- 48- 52- 98- 48- 49- 49- 54- 52- 50- 52- 45- 68- 69- 83- 75- 84- 79- 80- 45- 56- 67- 86- 72- 80- 57- 49- 49) but mismatched timestamp (actual ts 2019-08-21T17:57:54.934645Z, expected ts 2019-08-21T17:57:54.934645300Z)
8149 [main] ERROR org.janusgraph.graphdb.database.StandardJanusGraph - Could not commit transaction [1] due to storage exception in system-commit
org.janusgraph.core.JanusGraphException: Could not execute operation due to backend exception
at org.janusgraph.diskstorage.util.BackendOperation.execute(BackendOperation.java:56)
at org.janusgraph.diskstorage.keycolumnvalue.cache.CacheTransaction.persist(CacheTransaction.java:91)
at org.janusgraph.diskstorage.keycolumnvalue.cache.CacheTransaction.flushInternal(CacheTransaction.java:139)
at org.janusgraph.diskstorage.keycolumnvalue.cache.CacheTransaction.commit(CacheTransaction.java:196)
at org.janusgraph.diskstorage.BackendTransaction.commit(BackendTransaction.java:150)
at org.janusgraph.graphdb.database.StandardJanusGraph.commit(StandardJanusGraph.java:716)
at org.janusgraph.graphdb.transaction.StandardJanusGraphTx.commit(StandardJanusGraphTx.java:1380)
at org.janusgraph.graphdb.database.management.ManagementSystem.commit(ManagementSystem.java:246)
at org.janusgraph.graphdb.management.ConfigurationManagementGraph.createIndexIfDoesNotExist(ConfigurationManagementGraph.java:311)
at org.janusgraph.graphdb.management.ConfigurationManagementGraph.<init>(ConfigurationManagementGraph.java:81)
at org.janusgraph.graphdb.management.JanusGraphManager.lambda$new$0(JanusGraphManager.java:77)
at java.base/java.util.LinkedHashMap.forEach(LinkedHashMap.java:684)
at org.janusgraph.graphdb.management.JanusGraphManager.<init>(JanusGraphManager.java:74)
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
at org.apache.tinkerpop.gremlin.server.util.ServerGremlinExecutor.<init>(ServerGremlinExecutor.java:80)
at org.apache.tinkerpop.gremlin.server.GremlinServer.<init>(GremlinServer.java:122)
at org.apache.tinkerpop.gremlin.server.GremlinServer.<init>(GremlinServer.java:86)
at org.apache.tinkerpop.gremlin.server.GremlinServer.main(GremlinServer.java:345)
Caused by: org.janusgraph.diskstorage.locking.PermanentLockingException: Permanent locking failure
at org.janusgraph.diskstorage.locking.AbstractLocker.checkLocks(AbstractLocker.java:359)
at org.janusgraph.diskstorage.locking.consistentkey.ExpectedValueCheckingTransaction.checkAllLocks(ExpectedValueCheckingTransaction.java:175)
at org.janusgraph.diskstorage.locking.consistentkey.ExpectedValueCheckingTransaction.prepareForMutations(ExpectedValueCheckingTransaction.java:154)
at org.janusgraph.diskstorage.locking.consistentkey.ExpectedValueCheckingStoreManager.mutateMany(ExpectedValueCheckingStoreManager.java:72)
at org.janusgraph.diskstorage.keycolumnvalue.cache.CacheTransaction$1.call(CacheTransaction.java:94)
at org.janusgraph.diskstorage.keycolumnvalue.cache.CacheTransaction$1.call(CacheTransaction.java:91)
at org.janusgraph.diskstorage.util.BackendOperation.executeDirect(BackendOperation.java:68)
at org.janusgraph.diskstorage.util.BackendOperation.execute(BackendOperation.java:54)
... 20 more
Caused by: org.janusgraph.diskstorage.PermanentBackendException: Read 1 locks with our rid 48- 97- 48- 48- 52- 98- 48- 49- 49- 54- 52- 50- 52- 45- 68- 69- 83- 75- 84- 79- 80- 45- 56- 67- 86- 72- 80- 57- 49- 49 but mismatched timestamps; no lock column contained our timestamp (2019-08-21T17:57:54.934645300Z)
at org.janusgraph.diskstorage.locking.consistentkey.ConsistentKeyLocker.checkSeniority(ConsistentKeyLocker.java:528)
at org.janusgraph.diskstorage.locking.consistentkey.ConsistentKeyLocker.checkSingleLock(ConsistentKeyLocker.java:454)
at org.janusgraph.diskstorage.locking.consistentkey.ConsistentKeyLocker.checkSingleLock(ConsistentKeyLocker.java:118)
at org.janusgraph.diskstorage.locking.AbstractLocker.checkLocks(AbstractLocker.java:351)
... 27 more
8152 [main] ERROR org.janusgraph.graphdb.database.StandardJanusGraph - Could not commit transaction [1] due to exception
org.janusgraph.core.JanusGraphException: Could not execute operation due to backend exception
at org.janusgraph.diskstorage.util.BackendOperation.execute(BackendOperation.java:56)
at org.janusgraph.diskstorage.keycolumnvalue.cache.CacheTransaction.persist(CacheTransaction.java:91)
at org.janusgraph.diskstorage.keycolumnvalue.cache.CacheTransaction.flushInternal(CacheTransaction.java:139)
at org.janusgraph.diskstorage.keycolumnvalue.cache.CacheTransaction.commit(CacheTransaction.java:196)
at org.janusgraph.diskstorage.BackendTransaction.commit(BackendTransaction.java:150)
at org.janusgraph.graphdb.database.StandardJanusGraph.commit(StandardJanusGraph.java:716)
at org.janusgraph.graphdb.transaction.StandardJanusGraphTx.commit(StandardJanusGraphTx.java:1380)
at org.janusgraph.graphdb.database.management.ManagementSystem.commit(ManagementSystem.java:246)
at org.janusgraph.graphdb.management.ConfigurationManagementGraph.createIndexIfDoesNotExist(ConfigurationManagementGraph.java:311)
at org.janusgraph.graphdb.management.ConfigurationManagementGraph.<init>(ConfigurationManagementGraph.java:81)
at org.janusgraph.graphdb.management.JanusGraphManager.lambda$new$0(JanusGraphManager.java:77)
at java.base/java.util.LinkedHashMap.forEach(LinkedHashMap.java:684)
at org.janusgraph.graphdb.management.JanusGraphManager.<init>(JanusGraphManager.java:74)
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
at java.base/jdk.internal.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
at java.base/jdk.internal.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
at java.base/java.lang.reflect.Constructor.newInstance(Constructor.java:490)
at org.apache.tinkerpop.gremlin.server.util.ServerGremlinExecutor.<init>(ServerGremlinExecutor.java:80)
at org.apache.tinkerpop.gremlin.server.GremlinServer.<init>(GremlinServer.java:122)
at org.apache.tinkerpop.gremlin.server.GremlinServer.<init>(GremlinServer.java:86)
at org.apache.tinkerpop.gremlin.server.GremlinServer.main(GremlinServer.java:345)
Caused by: org.janusgraph.diskstorage.locking.PermanentLockingException: Permanent locking failure
at org.janusgraph.diskstorage.locking.AbstractLocker.checkLocks(AbstractLocker.java:359)
at org.janusgraph.diskstorage.locking.consistentkey.ExpectedValueCheckingTransaction.checkAllLocks(ExpectedValueCheckingTransaction.java:175)
at org.janusgraph.diskstorage.locking.consistentkey.ExpectedValueCheckingTransaction.prepareForMutations(ExpectedValueCheckingTransaction.java:154)
at org.janusgraph.diskstorage.locking.consistentkey.ExpectedValueCheckingStoreManager.mutateMany(ExpectedValueCheckingStoreManager.java:72)
at org.janusgraph.diskstorage.keycolumnvalue.cache.CacheTransaction$1.call(CacheTransaction.java:94)
at org.janusgraph.diskstorage.keycolumnvalue.cache.CacheTransaction$1.call(CacheTransaction.java:91)
at org.janusgraph.diskstorage.util.BackendOperation.executeDirect(BackendOperation.java:68)
at org.janusgraph.diskstorage.util.BackendOperation.execute(BackendOperation.java:54)
... 20 more
Caused by: org.janusgraph.diskstorage.PermanentBackendException: Read 1 locks with our rid 48- 97- 48- 48- 52- 98- 48- 49- 49- 54- 52- 50- 52- 45- 68- 69- 83- 75- 84- 79- 80- 45- 56- 67- 86- 72- 80- 57- 49- 49 but mismatched timestamps; no lock column contained our timestamp (2019-08-21T17:57:54.934645300Z)
at org.janusgraph.diskstorage.locking.consistentkey.ConsistentKeyLocker.checkSeniority(ConsistentKeyLocker.java:528)
at org.janusgraph.diskstorage.locking.consistentkey.ConsistentKeyLocker.checkSingleLock(ConsistentKeyLocker.java:454)
at org.janusgraph.diskstorage.locking.consistentkey.ConsistentKeyLocker.checkSingleLock(ConsistentKeyLocker.java:118)
at org.janusgraph.diskstorage.locking.AbstractLocker.checkLocks(AbstractLocker.java:351)
... 27 more
8153 [main] ERROR org.apache.tinkerpop.gremlin.server.util.ServerGremlinExecutor - Could not invoke constructor on class org.janusgraph.graphdb.management.JanusGraphManager (defined by the 'graphManager' setting) with one argument of class Settings
I'm on Windows 10; the version of JanusGraph I'm using is 0.4.0, and it's configured to use Cassandra 3.11.0 and Elasticsearch 6.7.2, both running on Docker (while I start gremlin-server manually).
I've already tried cleaning everything from Docker (containers, images and volumes).
I also tried running bin/janusgraph.sh clean, but with no luck.
Even downloading janusgraph-0.4.0-hadoop2.zip again and reconfiguring it from scratch does nothing.
It goes without saying that if I try to start up the server without touching the original configuration files, it works (but without the ConfigurationManagementGraph).
I don't understand how it could break without my touching anything.
This is the gremlin-server configuration:
host: 0.0.0.0
port: 8182
scriptEvaluationTimeout: 180000
# channelizer: org.apache.tinkerpop.gremlin.server.channel.WebSocketChannelizer
channelizer: org.janusgraph.channelizers.JanusGraphWebSocketChannelizer
graphManager: org.janusgraph.graphdb.management.JanusGraphManager
graphs: {
#graph: conf/gremlin-server/janusgraph-cql-es-server.properties,
ConfigurationManagementGraph: conf/gremlin-server/janusgraph-cql-es-server-configured.properties,
}
scriptEngines: {
gremlin-groovy: {
plugins: { org.janusgraph.graphdb.tinkerpop.plugin.JanusGraphGremlinPlugin: {},
org.apache.tinkerpop.gremlin.server.jsr223.GremlinServerGremlinPlugin: {},
org.apache.tinkerpop.gremlin.tinkergraph.jsr223.TinkerGraphGremlinPlugin: {},
org.apache.tinkerpop.gremlin.jsr223.ImportGremlinPlugin: {classImports: [java.lang.Math], methodImports: [java.lang.Math#*]},
org.apache.tinkerpop.gremlin.jsr223.ScriptFileGremlinPlugin: {files: [scripts/init.groovy]}}}}
serializers:
- { className: org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV3d0, config: { ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistry] }}
- { className: org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV3d0, config: { serializeResultToString: true }}
- { className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerV3d0, config: { ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistry] }}
# - { className: org.apache.tinkerpop.gremlin.driver.ser.GraphBinaryMessageSerializerV1 }
# - { className: org.apache.tinkerpop.gremlin.driver.ser.GraphBinaryMessageSerializerV1, config: { serializeResultToString: true }}
# Older serialization versions for backwards compatibility:
- { className: org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV1d0, config: { ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistry] }}
- { className: org.apache.tinkerpop.gremlin.driver.ser.GryoLiteMessageSerializerV1d0, config: {ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistry] }}
- { className: org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV1d0, config: { serializeResultToString: true }}
- { className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerGremlinV2d0, config: { ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistry] }}
- { className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerGremlinV1d0, config: { ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistryV1d0] }}
- { className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerV1d0, config: { ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistryV1d0] }}
processors:
- { className: org.apache.tinkerpop.gremlin.server.op.session.SessionOpProcessor, config: { sessionTimeout: 28800000 }}
- { className: org.apache.tinkerpop.gremlin.server.op.traversal.TraversalOpProcessor, config: { cacheExpirationTime: 600000, cacheMaxSize: 1000 }}
metrics: {
consoleReporter: {enabled: true, interval: 180000},
csvReporter: {enabled: true, interval: 180000, fileName: /tmp/gremlin-server-metrics.csv},
jmxReporter: {enabled: true},
slf4jReporter: {enabled: true, interval: 180000},
gangliaReporter: {enabled: false, interval: 180000, addressingMode: MULTICAST},
graphiteReporter: {enabled: false, interval: 180000}}
maxInitialLineLength: 4096
maxHeaderSize: 8192
maxChunkSize: 8192
maxContentLength: 65536
maxAccumulationBufferComponents: 1024
resultIterationBatchSize: 64
writeBufferLowWaterMark: 32768
writeBufferHighWaterMark: 65536
This is the JanusGraph configuration (I removed the comments for brevity):
gremlin.graph=org.janusgraph.core.ConfiguredGraphFactory
graph.graphname=ConfigurationManagementGraph
storage.backend=cql
storage.hostname=127.0.0.1
storage.cql.keyspace=janusgraph
cache.db-cache = true
cache.db-cache-time = 180000
cache.db-cache-size = 0.25
index.search.backend=elasticsearch
index.search.hostname=127.0.0.1
index.search.elasticsearch.client-only=true
This is the script Gremlin Server runs at startup:
// Open every graph registered with the ConfiguredGraphFactory and
// bind a traversal source for each one, keyed by graph name.
def globals = [:]

def getGraph() {
    def graphNames = ConfiguredGraphFactory.getGraphNames();
    def graphMaps = [:];
    for (graphName in graphNames) {
        def g = ConfiguredGraphFactory.open(graphName);
        graphMaps.put(graphName, g.traversal())
    }
    return graphMaps;
}

globals << getGraph()
In the docker-compose console I see no exceptions; the last thing logged before gremlin-server shows the error is the following line:
cassandra | INFO [MigrationStage:1] 2019-08-21 17:57:53,725 ColumnFamilyStore.java:430 - Initializing janusgraph.system_properties_lock_
Does anyone have any idea what might be causing this problem?
Thank you!
You cannot use JDK 11, but you can use JDK 8, or modify the source code itself.
reference
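For example, on Windows you could point the server at a JDK 8 install before starting it. This is only a sketch: the JDK path is a placeholder for wherever your JDK 8 actually lives, and the .bat launcher assumes your distribution ships one (use gremlin-server.sh from a POSIX shell otherwise):
:: Windows cmd: make gremlin-server pick up JDK 8 instead of JDK 11
set "JAVA_HOME=C:\Program Files\Java\jdk1.8.0_202"
set "PATH=%JAVA_HOME%\bin;%PATH%"
:: should now report 1.8.x
java -version
bin\gremlin-server.bat conf\gremlin-server\gremlin-server.yaml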

Good and reliable practice to get a Node.js client to connect to a gremlin-server (all running on the same server) for CRUD operations

[On Ubuntu 18.04]
I have (and am constrained to use) JanusGraph to create a persona repository. I will be using a graph DB largely because persona relationships are as important as persona attributes, and the structure needs to be fluid. I need to create a Node.js application (+ pug & stylus eventually) to do CRUD operations on the data in the graph DB, but I don't understand how to connect the client to the server. (Research has shown solutions for this problem, but only with different components in place: Neo4j, OrientDB, etc.)
I can start gremlin-server successfully, and I have tried creating a Gremlin Console connection such as the following:
/opt/janusgraph-0.2.2-hadoop2/bin/gremlin.sh /opt/janusgraph-0.2.2-hadoop2/conf/gremlin-server/gremlin-server.yaml
where the yaml looks like this:
(As well as localhost, I also tried the host's specific IP address, and both options with and without the square brackets, but with similar results)
host: [localhost]
port: 8182
scriptEvaluationTimeout: 30000
channelizer: org.apache.tinkerpop.gremlin.server.channel.WebSocketChannelizer
graphs: {
graph: conf/gremlin-server/janusgraph-cql-es-server.properties
}
plugins:
- janusgraph.imports
scriptEngines: {
gremlin-groovy: {
imports: [java.lang.Math],
staticImports: [java.lang.Math.PI],
scripts: [scripts/empty-sample.groovy]}}
serializers:
- { className: org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV1d0, config: { ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistry] }}
- { className: org.apache.tinkerpop.gremlin.driver.ser.GryoLiteMessageSerializerV1d0, config: {ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistry] }}
- { className: org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV1d0, config: { serializeResultToString: true }}
- { className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerGremlinV1d0, config: { ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistryV1d0] }}
- { className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerGremlinV2d0, config: { ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistry] }}
- { className: org.apache.tinkerpop.gremlin.driver.ser.GraphSONMessageSerializerV1d0, config: { ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistryV1d0] }}
processors:
- { className: org.apache.tinkerpop.gremlin.server.op.session.SessionOpProcessor, config: { sessionTimeout: 28800000 }}
- { className: org.apache.tinkerpop.gremlin.server.op.traversal.TraversalOpProcessor, config: { cacheExpirationTime: 600000, cacheMaxSize: 1000 }}
metrics: {
consoleReporter: {enabled: true, interval: 180000},
csvReporter: {enabled: true, interval: 180000, fileName: /tmp/gremlin-server-metrics.csv},
jmxReporter: {enabled: true},
slf4jReporter: {enabled: true, interval: 180000},
gangliaReporter: {enabled: false, interval: 180000, addressingMode: MULTICAST},
graphiteReporter: {enabled: false, interval: 180000}}
maxInitialLineLength: 4096
maxHeaderSize: 8192
maxChunkSize: 8192
maxContentLength: 65536
maxAccumulationBufferComponents: 1024
resultIterationBatchSize: 64
writeBufferLowWaterMark: 32768
writeBufferHighWaterMark: 65536
The stdout looks like this:
(o o)
-----oOOo-(3)-oOOo-----
plugin activated: janusgraph.imports
plugin activated: tinkerpop.server
plugin activated: tinkerpop.utilities
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/opt/janusgraph-0.2.2-hadoop2/lib/slf4j-log4j12-1.7.12.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/opt/janusgraph-0.2.2-hadoop2/lib/logback-classic-1.1.2.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
16:19:01 WARN org.apache.hadoop.util.NativeCodeLoader - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
plugin activated: tinkerpop.hadoop
plugin activated: tinkerpop.spark
plugin activated: tinkerpop.tinkergraph
Error in /opt/janusgraph-0.2.2-hadoop2/conf/gremlin-server/gremlin-server.yaml at [1: host: [localhost]] - No such property: localhost for class: groovysh_evaluate
Clearly, I've assumed or configured something incorrectly, even in the console-to-server yaml, and I don't know whether the same yaml file would work for a Node.js client connection.
I think I want the console connection so I can import the starter data from a graphml file, and thereafter create new entries, query, update and delete them from the front-end Node.js application.
What do I need to do?
The following doesn't create any connections from Gremlin Console to Gremlin Server:
/opt/janusgraph-0.2.2-hadoop2/bin/gremlin.sh /opt/janusgraph-0.2.2-hadoop2/conf/gremlin-server/gremlin-server.yaml
gremlin.sh does not take a Gremlin Server yaml file as an argument. Simply start bin/gremlin.sh and then write connect commands:
gremlin> :remote connect tinkerpop.server conf/remote.yaml
where remote.yaml points at Gremlin Server. You can read about this in detail in a variety of places, but I'll just point you to the TinkerPop reference documentation here.
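A minimal conf/remote.yaml for the setup above could look like this sketch (the serializer has to match one the server is configured with; here I reuse the Gryo + JanusGraphIoRegistry entry from your server yaml):
hosts: [localhost]
port: 8182
serializer: { className: org.apache.tinkerpop.gremlin.driver.ser.GryoMessageSerializerV1d0, config: { ioRegistries: [org.janusgraph.graphdb.tinkerpop.JanusGraphIoRegistry] }}
Once :remote connect succeeds, :remote console sends subsequent console input to the server; that session is also a reasonable place to do your one-off GraphML import before wiring up the Node.js front end against the same host and port.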

Why does web-component-tester time out in flight mode?

I've got a basic web-component-tester project which works fine when I'm online.
If I switch to flight mode, it seems to fail to connect to Selenium, and instead gives a largely useless error message after about a 60-second delay: "Error: Unable to connect to selenium".
Edit 2: I've narrowed the problem down in the following question, but I'd still like to know how to avoid it with web-component-tester:
Why does NodeJS request() fail on localhost in flight mode, but not 127.0.0.1? (Windows 10)
Edit: After some digging, it seems to be something to do with a DNS resolver somewhere beneath selenium-standalone failing while in flight mode, and not much to do with web-component-tester.
After inserting some debug logging into selenium-standalone, I tracked down the failure point to the check for whether Selenium is running. When online, this works fine, but when offline I get:
// check-started.js, logging the error inside the request() call:
Error: getaddrinfo ENOENT localhost:60435
at Object.exports._errnoException (util.js:1022:11)
at errnoException (dns.js:33:15)
at GetAddrInfoReqWrap.onlookup [as oncomplete] (dns.js:76:26)
The following seem to describe similar situations, but I don't see how to persuade selenium-standalone or web-component-tester to specify an IP address family, to even try the suggested solutions:
https://github.com/nodejs/node/issues/4825
https://github.com/nodejs/node/issues/10290
node.js http.request and ipv6 vs ipv4
My original text is below.
The full error log and wct.conf.json are below. I can supply package.json and bower.json too if it would help.
I'm on Windows 10.
wct.conf.json:
{
  "verbose": true,
  "plugins": {
    "local": {
      "skipSeleniumInstall": true,
      "browsers": ["chrome"]
    },
    "sauce": {
      "disabled": true
    }
  }
}
error log:
> color-curve@0.0.1 test C:\Users\Dave\projects\infinity-components\color-curve
> standard "**/*.html" && wct -l chrome
step: loadPlugins
step: configure
hook: configure
Expanded local browsers: [ 'chrome' ] into capabilities: [ { browserName: 'chrome',
version: '60',
chromeOptions:
{ binary: 'C:\\Program Files (x86)\\Google\\Chrome\\Application\\chrome.exe',
args: [Object] } } ]
configuration: { suites: [ 'test/index.html' ],
verbose: true,
quiet: false,
expanded: false,
testTimeout: 90000,
persistent: false,
extraScripts: [],
clientOptions: { root: '/components/', verbose: true },
compile: 'auto',
activeBrowsers: [ { browserName: 'chrome', version: '60', chromeOptions: [Object] } ],
browserOptions: {},
plugins:
{ local:
{ disabled: false,
skipSeleniumInstall: true,
browsers: [Object],
seleniumArgs: [] },
sauce: { disabled: true } },
registerHooks: [Function: registerHooks],
enforceJsonConf: false,
webserver:
{ hostname: 'localhost',
_generatedIndexContent: '<!doctype html>\n<html>\n <head>\n <meta charset="utf-8">\n <script>WCT = {"root":"/components/","verbose":true};</script>\n <script>window.__generatedByWct = true;</script>\n <script src="../web-component-tester/browser.js"></script>\n\n <script src="../web-component-tester/data/a11ySuite.js"></script>\n</head>\n <body>\n <script>\n WCT.loadSuites(["test/index.html"]);\n </script>\n </body>\n</html>\n' },
root: 'C:\\Users\\Dave\\projects\\infinity-components\\color-curve',
_: [],
origSuites: [ 'test/' ] }
hook: prepare
hook: prepare:selenium
Starting Selenium server for local browsers
INFO - Selenium build info: version: '3.0.1', revision: '1969d75'
INFO - Launching a standalone Selenium Server
INFO::main: Logging initialized @222ms
INFO - Driver class not found: com.opera.core.systems.OperaDriver
INFO - Driver provider com.opera.core.systems.OperaDriver registration is skipped:
Unable to create new instances on this machine.
INFO - Driver class not found: com.opera.core.systems.OperaDriver
INFO - Driver provider com.opera.core.systems.OperaDriver is not registered
INFO - Driver provider org.openqa.selenium.safari.SafariDriver registration is skipped:
registration capabilities Capabilities [{browserName=safari, version=, platform=MAC}] does not match the current platform WIN10
INFO:osjs.Server:main: jetty-9.2.15.v20160210
INFO:osjsh.ContextHandler:main: Started o.s.j.s.ServletContextHandler@100fc185{/,null,AVAILABLE}
INFO:osjs.ServerConnector:main: Started ServerConnector@2922e2bb{HTTP/1.1}{0.0.0.0:51126}
INFO:osjs.Server:main: Started @419ms
INFO - Selenium Server is up and running
Error: Unable to connect to selenium
