I am using Kong gateway 0.11.2 with Cassandra database version 2.2.7. I have written a custom plugin and am trying to install it manually. I have made the necessary changes in the kong.conf file, as per the official Kong documentation, to install the plugin. I ran the `kong migrations up` command, which executed successfully. After running the migrations, I am unable to start the Kong gateway, with the below error in the logs:
[error] 1#0: init_by_lua error: /usr/local/share/lua/5.1/kong/init.lua:149: [cassandra error] the current database schema does not match this version of Kong. Please run `kong migrations up` to update/initialize the database schema. Be aware that Kong migrations should only run from a single node, and that nodes running migrations concurrently will conflict with each other and might corrupt your database schema!
stack traceback:
[C]: in function 'assert'
/usr/local/share/lua/5.1/kong/init.lua:149: in function 'init'
init_by_lua:3: in main chunk
nginx: [error] init_by_lua error: /usr/local/share/lua/5.1/kong/init.lua:149: [cassandra error] the current database schema does not match this version of Kong. Please run `kong migrations up` to update/initialize the database schema. Be aware that Kong migrations should only run from a single node, and that nodes running migrations concurrently will conflict with each other and might corrupt your database schema!
stack traceback:
[C]: in function 'assert'
/usr/local/share/lua/5.1/kong/init.lua:149: in function 'init'
init_by_lua:3: in main chunk
Kong version: 0.11.2
Cassandra version: 2.2.7
Installing a custom plugin in kong
It's likely that the problem is related to this GitHub issue in the Kong project, #1294: Cassandra 3.x Support.
Cassandra 2.2 still uses the old system.schema_keyspaces table, while Cassandra 3.x+ uses the new system_schema.keyspaces approach.
Basically, it looks like that version of Kong requires Cassandra 3.x or higher.
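The schema-table move can be made concrete with a small sketch (a hypothetical helper, not Kong code): picking the right keyspace-metadata query based on the Cassandra release version.

```python
def keyspaces_query(release_version: str) -> str:
    """Return the query for listing keyspaces on a given Cassandra version.

    Cassandra 3.x moved schema metadata out of the legacy `system` tables
    (system.schema_keyspaces) into the new `system_schema` keyspace.
    """
    major = int(release_version.split(".")[0])
    if major >= 3:
        return "SELECT keyspace_name FROM system_schema.keyspaces"
    return "SELECT keyspace_name FROM system.schema_keyspaces"
```

A 2.2.7 node only has the legacy table, which is why a client built against the 3.x layout fails its schema check.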
Related
I am trying to query a graph hosted on Neo4j from Databricks using neo4j-spark-connector. I am using the following code to access the graph:
df = spark.read.format("org.neo4j.spark.DataSource")\
    .option("url", "bolt://<IP address>:7687")\
    .option("authentication.type", "basic")\
    .option("authentication.basic.username", "<username>")\
    .option("authentication.basic.password", "<password>")\
    .load()
This error is returned:
org.neo4j.driver.exceptions.ServiceUnavailableException: Unable to connect to 18.206.149.168:7687, ensure the database is running and that there is a working network connection to it.
Any idea what I am missing in my use of the connector here?
Details on versions/libraries:
Neo4j Browser version: 4.2.1-patch-3.0
Neo4j Server version: 4.4.8 (enterprise)
Installed neo4j-spark-connector on Databricks:
neo4j-contrib:neo4j-connector-apache-spark_2.12:4.0.1_for_spark_3
I have added the following lines to the Spark configuration:
spark.neo4j.bolt.password <password>
spark.databricks.passthrough.enabled true
spark.neo4j.bolt.url bolt://<IP address>:7687
spark.neo4j.bolt.user <username>
Unfortunately, the online documentation isn't up to date with the latest Neo4j releases (4+).
I am trying to follow the process explained in this blog, adapting it to the Neo4j version I am using.
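Since ServiceUnavailableException usually means the Spark driver could not open a TCP connection to the bolt port at all, a first step is to rule out plain network reachability from the Databricks cluster. A minimal sketch (host and port are placeholders):

```python
import socket


def port_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


# port_reachable("<IP address>", 7687) must be True before the connector
# can work; if it is False, check firewalls / security groups first.
```

If the port is unreachable from the Databricks workers, no connector option will fix it; the problem is network-level.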
I am using CircleCI for deployments with AKS. On AKS 1.11 the pipelines were working fine, but after the AKS upgrade to 1.14.6, applying the deployment and service object files fails.
When I deploy manually to the Kubernetes cluster there is no error, but when deploying through CircleCI (using version 2 of the CircleCI config) I get the following kind of errors:
error: SchemaError(io.k8s.api.extensions.v1beta1.DeploymentRollback): invalid object doesn't have additional properties
or this other kind of error appears:
error: SchemaError(io.k8s.api.core.v1.StorageOSVolumeSource): invalid object doesn't have additional properties
It's most likely that the version of kubectl used in CircleCI isn't supported with 1.14.6. Note that the kubectl version must be either 1.n, 1.(n+1), or 1.(n-1), where n is the minor version of the cluster. In this case your kubectl must be at least 1.13.x and at most 1.15.x.
Check out the Kubernetes version and version skew support policy for more details.
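The ±1 minor-version skew rule above can be sketched as a quick check (purely illustrative, not a kubectl feature):

```python
def kubectl_supported(cluster_minor: int, kubectl_minor: int) -> bool:
    """kubectl is supported within one minor version of the cluster (n-1..n+1)."""
    return abs(cluster_minor - kubectl_minor) <= 1


# For a 1.14.x cluster, kubectl 1.13, 1.14, and 1.15 are all supported;
# anything older or newer falls outside the skew policy.
```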
I am getting the error "No top file or external data matches found" when running the state.highstate command from the salt master.
I have created two minions: one on the same node where the salt master is present, and one on another host.
When I run the command salt '*' state.highstate, top.sls executes successfully on the node where the master is present, but fails on the other node with the above error.
Minion1 version (same node as the master): 2015.8.8 (CentOS)
Minion2 version: 0.17.5 (Ubuntu)
The issue is resolved for me now. It was caused by a version mismatch between the salt master and the minion: my salt-master was running version 0.17.5 while the minion was running a different version. I reinstalled the minion with the matching version and the issue went away.
I am currently trying to run through the example first application for Hyperledger Fabric here: http://hyperledger-fabric.readthedocs.io/en/release-1.1/write_first_app.html
I am unable to get past calling node invoke.js
Originally I was getting the same error as this question Error invoking chaincode using Node.js SDK [TypeError: Cannot read property 'getConnectivityState' of undefined]
But after reverting to grpc#1.9.1 I get the following result:
I am able to do everything up to the node query.js step, and that returns successfully, but I can't quite get past this.
Node version: 8.11.1
Docker Version:
fabric client section of package-lock.json
FYI: I am trying to run this on Windows 10, mostly using the Docker Toolbox bash or a separate Git Bash CLI.
Last piece of info!
Even though the invoke.js command fails with the above error, I can see that the PUT to CouchDB does go through and car10 has been successfully added.
If I check the docker logs for peer0.org1.example.com I see the following:
So did it actually work?
Following is the error I get when I run the command puppet agent -t on the Puppet agent. It happens because the Puppet server tries to reach v3 of the PuppetDB API instead of v4, although v3 is deprecated and ideally should not be called. I am not sure how to fix this.
All the configs are in place, as defined here: http://jurjenbokma.com/ApprenticesNotes/ar27s05.xhtml
Error: Could not retrieve catalog from remote server: Error 500 on SERVER: Server Error: Failed to submit 'replace facts' command for puppetmaster.test.org to PuppetDB at puppetmaster.test.org:8081: [404 ] <html><head><meta http-equiv="Content-Type" content="text/html;charset=ISO-8859-1"/><title>Error 404 </title></head><body><h2>HTTP ERROR: 404</h2><p>Problem accessing /v3/commands. Reason:<pre> Not Found</pre></p><hr /><i><small>Powered by Jetty://</small></i></body></html>
Warning: Not using cache on failed catalog
Error: Could not retrieve catalog; skipping run
I was following a tutorial for an older version, whereas the latest version (Puppet v4.x) needs different modules.
There is an interface between the Puppet master and PuppetDB which is responsible for making API calls to PuppetDB. The linked guide asks you to install
sudo puppet resource package puppetdb-terminus ensure=latest
which uses the /v3 API of PuppetDB, whereas for the latest version we need to install
sudo puppet resource package puppetdb-termini ensure=latest
which uses the /v4 API of PuppetDB...
And the problem is solved!