octoDNS can't add new zones to PowerDNS running with MySQL backend

I'm running PowerDNS with the MySQL backend.
However, when I try to add new zones using octoDNS, it fails with the error below. Ideally, octoDNS should create the zone's entry in the database and also create a SOA record for it.
Can anyone help, please?
$ octodns-sync --config=./config/dev.yaml --debug --doit
2019-12-17T06:53:45 [140647765260032] INFO Manager init: config_file=./config/dev.yaml
2019-12-17T06:53:45 [140647765260032] INFO Manager init: max_workers=1
2019-12-17T06:53:45 [140647765260032] INFO Manager init: max_workers=False
2019-12-17T06:53:45 [140647765260032] DEBUG Manager init: configuring providers
2019-12-17T06:53:45 [140647765260032] DEBUG PowerDnsProvider[powerdns] init: id=powerdns, host=localhost, port=8081, nameserver_values=None, nameserver_ttl=600
2019-12-17T06:53:45 [140647765260032] DEBUG PowerDnsProvider[powerdns] init: id=powerdns, apply_disabled=False, update_pcent_threshold=0.30, delete_pcent_threshold=0.30
2019-12-17T06:53:45 [140647765260032] DEBUG YamlProvider[config] init: id=config, directory=./config, default_ttl=3600, enforce_order=1
2019-12-17T06:53:45 [140647765260032] DEBUG YamlProvider[config] init: id=config, apply_disabled=False, update_pcent_threshold=0.30, delete_pcent_threshold=0.30
2019-12-17T06:53:45 [140647765260032] INFO Manager sync: eligible_zones=[], eligible_targets=[], dry_run=False, force=False
2019-12-17T06:53:45 [140647765260032] INFO Manager sync: zone=example.com.
2019-12-17T06:53:45 [140647765260032] INFO Manager sync: sources=['config'] -> targets=['powerdns']
2019-12-17T06:53:45 [140647765260032] DEBUG Manager sync: populating, zone=example.com.
2019-12-17T06:53:45 [140647765260032] DEBUG Manager configured_sub_zones: subs=dict_keys([])
2019-12-17T06:53:45 [140647765260032] DEBUG Zone init: zone=Zone<example.com.>, sub_zones=set()
2019-12-17T06:53:45 [140647765260032] DEBUG YamlProvider[config] populate: name=example.com., target=False, lenient=False
2019-12-17T06:53:45 [140647765260032] DEBUG Record init: zone.name=example.com., type= ARecord, name=
2019-12-17T06:53:45 [140647765260032] DEBUG YamlProvider[config] _populate_from_file: successfully loaded "./config/example.com.yaml"
2019-12-17T06:53:45 [140647765260032] INFO YamlProvider[config] populate: found 1 records, exists=False
2019-12-17T06:53:45 [140647765260032] DEBUG Manager sync: planning, zone=example.com.
2019-12-17T06:53:45 [140647765260032] INFO PowerDnsProvider[powerdns] plan: desired=example.com.
2019-12-17T06:53:45 [140647765260032] DEBUG Zone init: zone=Zone<example.com.>, sub_zones=set()
2019-12-17T06:53:45 [140647765260032] DEBUG PowerDnsProvider[powerdns] populate: name=example.com., target=True, lenient=True
2019-12-17T06:53:45 [140647765260032] DEBUG PowerDnsProvider[powerdns] _request: method=GET, path=zones/example.com.
2019-12-17T06:53:45 [140647765260032] DEBUG urllib3.connectionpool Starting new HTTP connection (1): localhost:8081
2019-12-17T06:53:45 [140647765260032] DEBUG urllib3.connectionpool http://localhost:8081 "GET /api/v1/servers/localhost/zones/example.com. HTTP/1.1" 404 9
2019-12-17T06:53:45 [140647765260032] DEBUG PowerDnsProvider[powerdns] _request: status=404
Traceback (most recent call last):
  File "/home/sam/dns/env/bin/octodns-sync", line 8, in <module>
    sys.exit(main())
  File "/home/sam/dns/env/lib/python3.5/site-packages/octodns/cmds/sync.py", line 39, in main
    dry_run=not args.doit, force=args.force)
  File "/home/sam/dns/env/lib/python3.5/site-packages/octodns/manager.py", line 315, in sync
    plans = [p for f in futures for p in f.result()]
  File "/home/sam/dns/env/lib/python3.5/site-packages/octodns/manager.py", line 315, in <listcomp>
    plans = [p for f in futures for p in f.result()]
  File "/home/sam/dns/env/lib/python3.5/site-packages/octodns/manager.py", line 56, in result
    return self.func(*self.args, **self.kwargs)
  File "/home/sam/dns/env/lib/python3.5/site-packages/octodns/manager.py", line 243, in _populate_and_plan
    plan = target.plan(zone)
  File "/home/sam/dns/env/lib/python3.5/site-packages/octodns/provider/base.py", line 51, in plan
    exists = self.populate(existing, target=True, lenient=True)
  File "/home/sam/dns/env/lib/python3.5/site-packages/octodns/provider/powerdns.py", line 174, in populate
    resp = self._get('zones/{}'.format(zone.name))
  File "/home/sam/dns/env/lib/python3.5/site-packages/octodns/provider/powerdns.py", line 46, in _get
    return self._request('GET', path, data=data)
  File "/home/sam/dns/env/lib/python3.5/site-packages/octodns/provider/powerdns.py", line 42, in _request
    resp.raise_for_status()
  File "/home/sam/dns/env/lib/python3.5/site-packages/requests/models.py", line 940, in raise_for_status
    raise HTTPError(http_error_msg, response=self)
requests.exceptions.HTTPError: 404 Client Error: Not Found for url: http://localhost:8081/api/v1/servers/localhost/zones/example.com.

I guess you're using PowerDNS 4.2+ with an old octoDNS; the behavior changed.
It used to return 422 for non-existing zones, but it returns 404 now.
octoDNS fixed this about 6 months ago, so it may already be fixed in 0.9.11.
If not, you can follow this code in master and patch it into the same file:
https://github.com/github/octodns/blame/master/octodns/provider/powerdns.py#L224
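
For reference, here is a minimal sketch of the idea behind that fix (illustrative names only, not the verbatim octodns source; the real provider also sends an X-API-Key header on every request):

import requests

def zone_exists(session, base_url, zone_name):
    # PowerDNS before 4.2 answered 422 ("Could not find domain") for an
    # unknown zone, while 4.2+ answers 404. Both simply mean the zone has
    # to be created on apply, so neither should bubble up as an error.
    resp = session.get('{}/zones/{}'.format(base_url, zone_name))
    if resp.status_code in (404, 422):
        return False
    resp.raise_for_status()
    return True

session = requests.Session()
print(zone_exists(session, 'http://localhost:8081/api/v1/servers/localhost', 'example.com.'))

With a missing zone treated as empty instead of raised, the plan proceeds and apply creates the zone (including its SOA record) via the API.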

Related

Configuring OpenTelemetry Collector to Export Zipkin traces

I'm attempting to configure the OpenTelemetry Collector in Kubernetes. I took the Jaeger all-in-one deployment, which is here: https://www.jaegertracing.io/docs/1.22/opentelemetry/, and ported it to Kubernetes running on my minikube.
The problem is I can't seem to get the OpenTelemetry Collector to receive the Jaeger traces and send them to my proxy container. My Jaeger all-in-one app seems to be working in my minikube instance. Traces are being sent through the HotROD app and I can view them in the Jaeger UI.
My OpenTelemetry Collector config looks like the following:
receivers:
  jaeger:
    protocols:
      grpc:
        endpoint: 0.0.0.0:14250
      thrift_compact:
        endpoint: 0.0.0.0:6831
      thrift_http:
        endpoint: 0.0.0.0:14268
  logging:
    loglevel: debug
exporters:
  zipkin:
    endpoint: "http://proxy.collector-agent.svc.cluster.local:80/v1/observations/api/v2/spans"
    insecure: true
  logging:
    loglevel: debug
processors:
  batch:
extensions:
  health_check:
  pprof:
    endpoint: :1888
  zpages:
    endpoint: :55679
service:
  extensions: [pprof, zpages, health_check]
  pipelines:
    traces:
      receivers: [jaeger]
      processors: [batch]
      exporters: [zipkin]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [logging]
It doesn't seem that the OpenTelemetry Collector is even receiving the Jaeger traces. The logs from the container are below:
dev-MacBook-Pro otel-agent % kubectl logs otel-collector-6c4db7687c-h9pm9
2021-03-10T16:53:39.394Z info service/service.go:411 Starting OpenTelemetry Collector... {"Version": "v0.22.0-7-gc8bc12e3", "GitHash": "c8bc12e3", "NumCPU": 2}
2021-03-10T16:53:39.404Z info service/service.go:593 Using memory ballast {"MiBs": 683}
2021-03-10T16:53:39.404Z info service/service.go:255 Setting up own telemetry...
2021-03-10T16:53:39.406Z info service/telemetry.go:102 Serving Prometheus metrics {"address": ":8888", "level": 0, "service.instance.id": "85884852-3e34-4b13-b24e-03d7e9f49868"}
2021-03-10T16:53:39.406Z info service/service.go:292 Loading configuration...
2021-03-10T16:53:39.409Z info service/service.go:303 Applying configuration...
2021-03-10T16:53:39.409Z info service/service.go:324 Starting extensions...
2021-03-10T16:53:39.409Z info builder/extensions_builder.go:53 Extension is starting... {"component_kind": "extension", "component_type": "health_check", "component_name": "health_check"}
2021-03-10T16:53:39.409Z info healthcheckextension/healthcheckextension.go:40 Starting health_check extension {"component_kind": "extension", "component_type": "health_check", "component_name": "health_check", "config": {"TypeVal":"health_check","NameVal":"health_check","Port":13133}}
2021-03-10T16:53:39.410Z info builder/extensions_builder.go:59 Extension started. {"component_kind": "extension", "component_type": "health_check", "component_name": "health_check"}
2021-03-10T16:53:39.410Z info builder/extensions_builder.go:53 Extension is starting... {"component_kind": "extension", "component_type": "zpages", "component_name": "zpages"}
2021-03-10T16:53:39.410Z info zpagesextension/zpagesextension.go:42 Register Host's zPages {"component_kind": "extension", "component_type": "zpages", "component_name": "zpages"}
2021-03-10T16:53:39.413Z info zpagesextension/zpagesextension.go:55 Starting zPages extension {"component_kind": "extension", "component_type": "zpages", "component_name": "zpages", "config": {"TypeVal":"zpages","NameVal":"zpages","Endpoint":"localhost:55679"}}
2021-03-10T16:53:39.413Z info builder/extensions_builder.go:59 Extension started. {"component_kind": "extension", "component_type": "zpages", "component_name": "zpages"}
2021-03-10T16:53:39.414Z info builder/exporters_builder.go:302 Exporter is enabled. {"component_kind": "exporter", "exporter": "zipkin"}
2021-03-10T16:53:39.414Z info service/service.go:339 Starting exporters...
2021-03-10T16:53:39.414Z info builder/exporters_builder.go:92 Exporter is starting... {"component_kind": "exporter", "component_type": "zipkin", "component_name": "zipkin"}
2021-03-10T16:53:39.414Z info builder/exporters_builder.go:97 Exporter started. {"component_kind": "exporter", "component_type": "zipkin", "component_name": "zipkin"}
2021-03-10T16:53:39.414Z info memorylimiter/memorylimiter.go:108 Memory limiter configured {"component_kind": "processor", "component_type": "memory_limiter", "component_name": "memory_limiter", "limit_mib": 1572864000, "spike_limit_mib": 536870912, "check_interval": 5}
2021-03-10T16:53:39.414Z info builder/pipelines_builder.go:203 Pipeline is enabled. {"pipeline_name": "traces/1", "pipeline_datatype": "traces"}
2021-03-10T16:53:39.414Z info service/service.go:352 Starting processors...
2021-03-10T16:53:39.414Z info builder/pipelines_builder.go:51 Pipeline is starting... {"pipeline_name": "traces/1", "pipeline_datatype": "traces"}
2021-03-10T16:53:39.414Z info builder/pipelines_builder.go:61 Pipeline is started. {"pipeline_name": "traces/1", "pipeline_datatype": "traces"}
2021-03-10T16:53:39.414Z info builder/receivers_builder.go:230 Receiver is enabled. {"component_kind": "receiver", "component_type": "jaeger", "component_name": "jaeger", "datatype": "traces"}
2021-03-10T16:53:39.414Z info builder/receivers_builder.go:105 Ignoring receiver as it is not used by any pipeline {"component_kind": "receiver", "component_type": "zipkin", "component_name": "zipkin", "receiver": "zipkin"}
2021-03-10T16:53:39.414Z info service/service.go:364 Starting receivers...
2021-03-10T16:53:39.414Z info builder/receivers_builder.go:70 Receiver is starting... {"component_kind": "receiver", "component_type": "jaeger", "component_name": "jaeger"}
2021-03-10T16:53:39.415Z info static/strategy_store.go:201 No sampling strategies provided or URL is unavailable, using defaults {"component_kind": "receiver", "component_type": "jaeger", "component_name": "jaeger"}
2021-03-10T16:53:39.415Z info builder/receivers_builder.go:75 Receiver started. {"component_kind": "receiver", "component_type": "jaeger", "component_name": "jaeger"}
2021-03-10T16:53:39.415Z info healthcheck/handler.go:128 Health Check state change {"component_kind": "extension", "component_type": "health_check", "component_name": "health_check", "status": "ready"}
2021-03-10T16:53:39.415Z info service/service.go:267 Everything is ready. Begin running and processing data.
Even when I send a ton of Jaeger traces, nothing ever seems to be received by the collector. Is there a way to debug further, or a configuration I'm missing? Any help would be greatly appreciated.
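
One thing worth checking: the startup log above shows the collector serving its own Prometheus metrics on :8888, and those include per-receiver span counters. A quick sketch (assumes you port-forward the collector pod first, e.g. kubectl port-forward <otel-collector-pod> 8888:8888; the pod name is from your environment):

import requests

# otelcol_receiver_accepted_spans / otelcol_receiver_refused_spans stay at
# zero if no traces are reaching the jaeger receiver at all, which narrows
# the problem down to the sender/Service wiring rather than the exporter.
metrics = requests.get('http://localhost:8888/metrics').text
for line in metrics.splitlines():
    if line.startswith('otelcol_receiver_') and 'spans' in line:
        print(line)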

Error starting YugabyteDB as shown in the docs

I ran yb-ctl create as specified at https://download.yugabyte.com/local#linux and ran into these errors
13:10 $ bin/yb-ctl create
Creating cluster.
Waiting for cluster to be ready.
Viewing file /net/dev-server-sanketh-3/share/yugabyte-data/node-1/disk-1/tserver.err:
/tmp/pkg1/yugabyte-2.0.7.0/bin/yb-tserver: error while loading shared libraries: libatomic.so.1: cannot open shared object file: No such file or directory
Viewing file /net/dev-server-sanketh-3/share/yugabyte-data/node-1/disk-1/master.err:
/tmp/pkg1/yugabyte-2.0.7.0/bin/yb-master: error while loading shared libraries: libatomic.so.1: cannot open shared object file: No such file or directory
Traceback (most recent call last):
  File "bin/yb-ctl", line 1968, in <module>
    control.run()
  File "bin/yb-ctl", line 1945, in run
    self.args.func()
  File "bin/yb-ctl", line 1707, in create_cmd_impl
    self.wait_for_cluster_or_raise()
  File "bin/yb-ctl", line 1552, in wait_for_cluster_or_raise
    raise RuntimeError("Timed out waiting for a YugaByte DB cluster!")
RuntimeError: Timed out waiting for a YugaByte DB cluster!
Viewing file /tmp/tmp3NIbj3:
2019-12-06 13:10:18,634 INFO: Starting master-1 with:
/tmp/pkg1/yugabyte-2.0.7.0/bin/yb-master --fs_data_dirs "/net/dev-server-sanketh-3/share/yugabyte-data/node-1/disk-1" --webserver_interface 127.0.0.1 --rpc_bind_addresses 127.0.0.1 --v 0 --version_file_json_path=/tmp/pkg1/yugabyte-2.0.7.0 --webserver_doc_root "/tmp/pkg1/yugabyte-2.0.7.0/www" --callhome_enabled=false --replication_factor=1 --yb_num_shards_per_tserver 2 --ysql_num_shards_per_tserver=2 --master_addresses 127.0.0.1:7100 --enable_ysql=true >"/net/dev-server-sanketh-3/share/yugabyte-data/node-1/disk-1/master.out" 2>"/net/dev-server-sanketh-3/share/yugabyte-data/node-1/disk-1/master.err" &
2019-12-06 13:10:18,658 INFO: Starting tserver-1 with:
/tmp/pkg1/yugabyte-2.0.7.0/bin/yb-tserver --fs_data_dirs "/net/dev-server-sanketh-3/share/yugabyte-data/node-1/disk-1" --webserver_interface 127.0.0.1 --rpc_bind_addresses 127.0.0.1 --v 0 --version_file_json_path=/tmp/pkg1/yugabyte-2.0.7.0 --webserver_doc_root "/tmp/pkg1/yugabyte-2.0.7.0/www" --callhome_enabled=false --tserver_master_addrs=127.0.0.1:7100 --yb_num_shards_per_tserver=2 --redis_proxy_bind_address=127.0.0.1:6379 --cql_proxy_bind_address=127.0.0.1:9042 --local_ip_for_outbound_sockets=127.0.0.1 --use_cassandra_authentication=false --ysql_num_shards_per_tserver=2 --enable_ysql=true --pgsql_proxy_bind_address=127.0.0.1:5433 >"/net/dev-server-sanketh-3/share/yugabyte-data/node-1/disk-1/tserver.out" 2>"/net/dev-server-sanketh-3/share/yugabyte-data/node-1/disk-1/tserver.err" &
2019-12-06 13:10:18,662 INFO: Waiting for master and tserver processes to come up.
2019-12-06 13:10:29,126 INFO: PIDs found: {'tserver': [None], 'master': [None]}
2019-12-06 13:10:29,127 ERROR: Failed waiting for master and tserver processes to come up.
^^^ Encountered errors ^^^
Could you please let me know how I can fix this?
Did you run ./bin/post_install.sh from the setup?
If yes, maybe you're missing apt-get install libatomic1?
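
A quick way to verify the fix took effect before retrying bin/yb-ctl create (a sketch; the package is libatomic1 on Debian/Ubuntu, libatomic on CentOS/RHEL):

import ctypes

# yb-master/yb-tserver fail at startup because the dynamic linker can't
# resolve libatomic.so.1; this reproduces the same lookup.
try:
    ctypes.CDLL('libatomic.so.1')
    print('libatomic.so.1 found; the servers should load it now')
except OSError:
    print('still missing; install it, e.g. sudo apt-get install libatomic1')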

How do I install swagger in a Sails.js 1.x no-frontend project?

I recently started a new Sails.js 1.2.3 project for building APIs. I installed the project with the --no-frontend option specified.
I have the following npm packages installed:
"sails-hook-orm": "^2.1.1",
"sails-hook-sockets": "^2.0.0",
"sails-hook-swagger-generator": "^2.8.2",
"sails-postgresql": "^1.0.2",
"sails-swagger": "^0.5.1",
"sails-util-micro-apps": "^1.1.1",
"swagger-ui-dist": "^3.23.11",
I created a swagger folder in my project, and when I run sails lift, the swagger.json file gets rebuilt (correctly, I think).
However, I cannot get swagger-ui configured to use the swagger.json file, and I can't get the sails.js project to display the swagger-ui docs.
Here's what I already tried:
Swagger Sails JS
Here are the results from running sails lift:
debug: hookPath: C:\Users\...\node_modules\sails-swagger\dist\api\hooks\swagger
debug: marlinspike (swagger): loading config from C:\Users\...\node_modules\sails-swagger\dist\config
debug: In route `/swagger/doc`:
debug: The `cors.origin` config has been deprecated.
debug: Please use `cors.allowOrigins` instead.
debug: (See http://sailsjs.com/config/security for more info.)
debug: In route `/swagger/doc`:
debug: The `cors.methods` config has been deprecated.
debug: Please use `cors.allowRequestMethods` instead.
debug: In route `/swagger/doc`:
debug: When specifying multiple allowable CORS origins, the allowOrigins setting
debug: should be an array of strings. We'll split it up for you this time...
debug: marlinspike (swagger): loading Services from C:\Users\...\node_modules\sails-swagger\dist\api\services...
warn: marlinspike (swagger): no Services found. skipping
debug: marlinspike (swagger): loading Models...
debug: marlinspike (swagger): loading Controllers...
debug: marlinspike (swagger): loading Policies...
warn: marlinspike (swagger): no Policies found. skipping
No tag for this identity 'status'
No tag for this identity ''
No tag for this identity ''
No tag for this identity ''
No tag for this identity ''
No tag for this identity ''
No tag for this identity ''
No tag for this identity ''
No tag for this identity ''
No tag for this identity ''
No tag for this identity ''
No tag for this identity ''
No tag for this identity ''
No tag for this identity ''
No tag for this identity ''
No tag for this identity ''
No tag for this identity 'swagger'
No tag for this identity 'swagger'
No tag for this identity 'swagger'
No tag for this identity 'swagger'
No tag for this identity 'swagger'
No tag for this identity 'swagger'
info: ·• Auto-migrating... (drop)
Swagger generated successfully
info: ✓ Auto-migration complete.
warn: Ignored attempt to bind route (/swagger/doc) to unknown action :: { cors:
{ allowOrigins: [ 'http://swagger.balderdash.io' ],
allowRequestMethods: 'GET,OPTIONS,HEAD',
allRoutes: true,
allowCredentials: true,
allowRequestHeaders: 'content-type',
allowResponseHeaders: '',
allowAnyOriginWithCredentialsUnsafe: false },
controller: 'SwaggerController',
action: 'doc' }
info:
info:                .-..-.
info:
info:    Sails              <|    .-..-.
info:    v1.2.3              |\
info:                       /|.\
info:                      / || \
info:                    ,'  |'  \
info:                 .-'.-==|/_--'
info:                 `--'-------'
info:    __---___--___---___--___---___--___
info:  ____---___--___---___--___---___--___-__
info:
info: Server lifted in `C:\Users\...`
info: To shut down Sails, press <CTRL> + C at any time.
info: Read more at https://sailsjs.com/support.
I want to run sails lift, have the swagger.json file updated, then have a route/path I can use to see the expected swagger ui.
Thanks in advance!

TypeError: must be str, not NoneType when running distributed locust with taurus

I am trying to create a configuration for a distributed Locust run. I have a .py script with defined tasks, and a simple Taurus configuration just to get it working:
execution:
  executor: locust
  master: true
  slaves: 1
  scenario: tns
  concurrency: 10
  ramp-up: 10s
  iterations: 100
  hold-for: 10s
scenarios:
  tns:
    script: /usr/src/app/scenarios/locust_scenarios/sample.py
reporting:
- module: final-stats
  dump-csv: test_result.csv
- module: console
- module: passfail
  criteria:
  - avg-rt>250ms for 30s, continue as failed
  - failures>5% for 5s, continue as failed
  - failures>50% for 10s, stop as failed
Then I start the Locust slave node:
python -m locust.main -f scenarios/locust_scenarios/sample.py --slave --master-host=localhost
and execute the test. Here is the log:
$ bzt -o modules.console.screen=gui locust_tests_execution_config.yaml
12:38:54 INFO: Taurus CLI Tool v1.12.0
12:38:54 INFO: Starting with configs: ['locust_tests_execution_config.yaml']
12:38:54 INFO: Configuring...
12:38:54 INFO: Artifacts dir: /Users/usr/Projects/load/2018-06-20_12-38-54.391229
12:38:54 WARNING: at path 'execution': 'execution' should be a list
12:38:54 INFO: Preparing...
12:38:54 WARNING: Module 'console' can be only used once, will merge all new instances into single
12:38:54 INFO: Starting...
12:38:54 INFO: Waiting for results...
12:38:55 WARNING: Please wait for graceful shutdown...
12:38:55 INFO: Shutting down...
12:38:56 INFO: Terminating process PID 54419 with signal Signals.SIGTERM (59 tries left)
12:38:57 INFO: Terminating process PID 54419 with signal Signals.SIGTERM (58 tries left)
12:38:57 ERROR: TypeError: must be str, not NoneType
  File "/Users/usr/.virtualenvs/stfw/lib/python3.6/site-packages/bzt/cli.py", line 250, in perform
    self.engine.run()
  File "/Users/usr/.virtualenvs/stfw/lib/python3.6/site-packages/bzt/engine.py", line 222, in run
    reraise(exc_info)
  File "/Users/usr/.virtualenvs/stfw/lib/python3.6/site-packages/bzt/six/py3.py", line 84, in reraise
    raise exc
  File "/Users/usr/.virtualenvs/stfw/lib/python3.6/site-packages/bzt/engine.py", line 204, in run
    self._wait()
  File "/Users/usr/.virtualenvs/stfw/lib/python3.6/site-packages/bzt/engine.py", line 243, in _wait
    while not self._check_modules_list():
  File "/Users/usr/.virtualenvs/stfw/lib/python3.6/site-packages/bzt/engine.py", line 230, in _check_modules_list
    finished = bool(module.check())
  File "/Users/usr/.virtualenvs/stfw/lib/python3.6/site-packages/bzt/modules/aggregator.py", line 635, in check
    for point in self.datapoints():
  File "/Users/usr/.virtualenvs/stfw/lib/python3.6/site-packages/bzt/modules/aggregator.py", line 401, in datapoints
    for datapoint in self._calculate_datapoints(final_pass):
  File "/Users/usr/.virtualenvs/stfw/lib/python3.6/site-packages/bzt/modules/aggregator.py", line 664, in _calculate_datapoints
    self._process_underlings(final_pass)
  File "/Users/usr/.virtualenvs/stfw/lib/python3.6/site-packages/bzt/modules/aggregator.py", line 649, in _process_underlings
    for data in underling.datapoints(final_pass):
  File "/Users/usr/.virtualenvs/stfw/lib/python3.6/site-packages/bzt/modules/aggregator.py", line 401, in datapoints
    for datapoint in self._calculate_datapoints(final_pass):
  File "/Users/usr/.virtualenvs/stfw/lib/python3.6/site-packages/bzt/modules/locustio.py", line 221, in _calculate_datapoints
    self.read_buffer += self.file.get_bytes(size=1024 * 1024, last_pass=final_pass)
12:38:57 INFO: Post-processing...
12:38:57 INFO: Test duration: 0:00:03
12:38:57 INFO: Test duration: 0:00:03
12:38:57 INFO: Artifacts dir: /Users/usr/Projects/load/2018-06-20_12-38-54.391229
12:38:57 WARNING: Done performing with code: 1
The Locust log shows that the slave was connected and ready to swarm.
What should I do to make it run?
Thanks
It seems there is a defect in the bzt library, based on this thread:
https://groups.google.com/forum/#!searchin/codename-taurus/locust%7Csort:date/codename-taurus/woBeH1JeBFo/pHhoGUSoAwAJ
There will be a fix in a new release:
https://github.com/Blazemeter/taurus/pull/871
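
The failing line in the traceback makes the root cause easy to see: bzt's locustio module does self.read_buffer += self.file.get_bytes(...), and get_bytes() can return None before the slave has produced any results. A tiny self-contained reproduction (illustrative, not the actual patch):

buf = ''
chunk = None  # what get_bytes() returned before any data arrived
try:
    buf += chunk
except TypeError as err:
    print(err)  # "must be str, not NoneType"
buf += chunk or ''  # the kind of defensive coercion that avoids the crash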

How to execute gremlin query with mogwai

I'm trying to query a Titan DB 0.5.4 via mogwai, but when I run the following script I get the error: rexpro.exceptions.RexProScriptException: transaction is not open
I found the same question here.
P.S. There is no tag for mogwai.
script:
#!/usr/bin/env python3
from mogwai.connection import execute_query, setup
con = setup('127.0.0.1', graph_name="bio4j", username="re", password="re")
results = execute_query("2 * a",params={"a":2}, connection= con)
print(results)
results = execute_query("bio4j.E",params={}, connection= con)
print(results)
log:
$ ./bin/rexster.sh --start
0 [main] INFO com.tinkerpop.rexster.Application - .:Welcome to Rexster:.
93 [main] INFO com.tinkerpop.rexster.server.RexsterProperties - Using [/Users/Phoenix/Dropbox/Graph4Bio/Titan/rexhome/config/rexster.xml] as configuration source.
102 [main] INFO com.tinkerpop.rexster.Application - Rexster is watching [/Users/Phoenix/Dropbox/Graph4Bio/Titan/rexhome/config/rexster.xml] for change.
730 [main] INFO com.thinkaurelius.titan.graphdb.configuration.GraphDatabaseConfiguration - Generated unique-instance-id=0a69045d1736-AngryMac-local1
804 [main] INFO com.thinkaurelius.titan.diskstorage.Backend - Initiated backend operations thread pool of size 8
905 [main] INFO com.thinkaurelius.titan.diskstorage.log.kcvs.KCVSLog - Loaded unidentified ReadMarker start time Timepoint[1455128079919000 μs] into com.thinkaurelius.titan.diskstorage.log.kcvs.KCVSLog$MessagePuller#302c971f
908 [main] INFO com.tinkerpop.rexster.RexsterApplicationGraph - Graph [bio4j] - configured with allowable namespace [tp:gremlin]
932 [main] INFO com.tinkerpop.rexster.config.GraphConfigurationContainer - Graph bio4j - titangraph[berkeleyje:/Users/Phoenix/Dropbox/Graph4Bio/Bio4j/bio4j] loaded
939 [main] INFO com.tinkerpop.rexster.server.metrics.HttpReporterConfig - Configured HTTP Metric Reporter.
941 [main] INFO com.tinkerpop.rexster.server.metrics.ConsoleReporterConfig - Configured Console Metric Reporter.
2058 [main] INFO com.tinkerpop.rexster.server.HttpRexsterServer - HTTP/REST thread pool configuration: kernal[4 / 4] worker[8 / 8]
2060 [main] INFO com.tinkerpop.rexster.server.HttpRexsterServer - Using org.glassfish.grizzly.strategies.LeaderFollowerNIOStrategy IOStrategy for HTTP/REST.
2160 [main] INFO com.tinkerpop.rexster.server.HttpRexsterServer - Rexster Server running on: [http://localhost:8182]
2160 [main] INFO com.tinkerpop.rexster.server.RexProRexsterServer - Using org.glassfish.grizzly.strategies.LeaderFollowerNIOStrategy IOStrategy for RexPro.
2160 [main] INFO com.tinkerpop.rexster.server.RexProRexsterServer - RexPro thread pool configuration: kernal[4 / 4] worker[8 / 8]
2162 [main] INFO com.tinkerpop.rexster.server.RexProRexsterServer - Rexster configured with [DefaultSecurity].
2163 [main] INFO com.tinkerpop.rexster.server.RexProRexsterServer - RexPro Server bound to [0.0.0.0:8184]
2177 [main] INFO com.tinkerpop.rexster.server.ShutdownManager - Bound shutdown socket to /127.0.0.1:8183. Starting listener thread for shutdown requests.
152568 [Grizzly(2) SelectorRunner] INFO com.tinkerpop.rexster.protocol.EngineController - ScriptEngineManager has factory for: ECMAScript
152568 [Grizzly(2) SelectorRunner] INFO com.tinkerpop.rexster.protocol.EngineController - ScriptEngineManager has factory for: gremlin-groovy
152568 [Grizzly(2) SelectorRunner] INFO com.tinkerpop.rexster.protocol.EngineController - Registered ScriptEngine for: gremlin-groovy
152569 [Grizzly(2) SelectorRunner] INFO com.tinkerpop.rexster.protocol.EngineHolder - Initializing gremlin-groovy engine with additional imports.
153259 [Grizzly(2) SelectorRunner] INFO com.tinkerpop.rexster.protocol.EngineHolder - ScriptEngine initializing with a custom script
154074 [Grizzly(2) SelectorRunner] INFO com.tinkerpop.rexster.protocol.EngineController - ScriptEngineManager has factory for: Groovy
154076 [Grizzly(2) SelectorRunner] INFO com.tinkerpop.rexster.protocol.session.RexProSessions - RexPro Session created: a2b416ce-75ea-4ecb-9835-b287162c90cb
154354 [Grizzly(4)] INFO com.tinkerpop.rexster.protocol.session.RexProSessions - Try to destroy RexPro Session: a2b416ce-75ea-4ecb-9835-b287162c90cb
154355 [Grizzly(4)] INFO com.tinkerpop.rexster.protocol.session.RexProSessions - RexPro Session destroyed or doesn't otherwise exist: a2b416ce-75ea-4ecb-9835-b287162c90cb
154356 [Grizzly(5)] INFO com.tinkerpop.rexster.protocol.session.RexProSessions - RexPro Session created: 5b8a669f-615d-4f84-9d1e-2d10624347f0
154525 [Grizzly(7)] WARN com.tinkerpop.rexster.protocol.server.ScriptServer - Could not process script [bio4j.E] for language [groovy] on session [[B#6634722f] and request [[B#68f38099]
154527 [Grizzly(8)] INFO com.tinkerpop.rexster.protocol.session.RexProSessions - Try to destroy RexPro Session: 5b8a669f-615d-4f84-9d1e-2d10624347f0
154527 [Grizzly(8)] INFO com.tinkerpop.rexster.protocol.session.RexProSessions - RexPro Session destroyed or doesn't otherwise exist: 5b8a669f-615d-4f84-9d1e-2d10624347f0
154529 [Grizzly(1)] INFO com.tinkerpop.rexster.protocol.session.RexProSessions - Try to destroy RexPro Session: 00000000-0000-0000-0000-000000000000
154529 [Grizzly(1)] INFO com.tinkerpop.rexster.protocol.session.RexProSessions - RexPro Session destroyed or doesn't otherwise exist: 00000000-0000-0000-0000-000000000000
Maintainer of mogwai here.
What version of mogwai are you using? In 0.7.7 there is no return value from the setup method, and the connection object should not be passed around. In fact, when you call setup it creates a connection pool (a synchronous rexpro connection pool, since no concurrency option was specified). So in general, just call setup once for the life of your app, and you can use execute_query without any references.
Also this message in particular stands out:
154525 [Grizzly(7)] WARN com.tinkerpop.rexster.protocol.server.ScriptServer - Could not process script [bio4j.E] for language [groovy] on session [[B#6634722f] and request [[B#68f38099]
Is your graph configured with a graph name of "bio4j"? The default Titan graph name is "graph" and the default graph object name mogwai uses is "g". If you have a graph name of "bio4j", you wouldn't reference it directly; you'd use the graph object name associated with the transaction. You can think of a graph name as a database name in a SQL database, and the graph object as the transactional reference to that database. This is configured in the XML configuration file when starting Titan, in particular:
<graphs>
  <graph>
    <graph-name>graph</graph-name>
    ....
  </graph>
</graphs>
So assuming you changed that from "graph" to "bio4j" and left the default graph_obj_name in the setup function as "g", then your query should read "g.E".
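
Putting both points together, the script from the question would become something like this (a sketch based on the 0.7.7 behavior described above):

from mogwai.connection import execute_query, setup

# setup() returns nothing in 0.7.7; it creates a module-level connection
# pool, so call it once and drop the connection= argument entirely.
setup('127.0.0.1', graph_name='bio4j', username='re', password='re')

print(execute_query('2 * a', params={'a': 2}))
print(execute_query('g.E'))  # graph object name "g", not the graph name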
