I am trying to launch Presto to query Hive on a RHEL machine.
But while launching the Presto server via "./launcher run", I am getting the following error:
com.google.inject.CreationException: Unable to create injector, see the following errors:
1) Configuration property 'http-server.http.port=8080 ' was not used
at io.airlift.bootstrap.Bootstrap.lambda$initialize$2(Bootstrap.java:235)
2) Error: Could not coerce value '8080 ' to int (property 'http-server.http.port') in order to call [public io.airlift.http.server.HttpServerConfig io.airlift.http.server.HttpServerConfig.setHttpPort(int)]
at io.airlift.http.server.HttpServerModule.configure(HttpServerModule.java:74)
3) Error: Invalid configuration property node.id: is malformed (for class io.airlift.node.NodeConfig.nodeId)
at io.airlift.node.NodeModule.configure(NodeModule.java:34)
3 errors
at com.google.inject.internal.Errors.throwCreationExceptionIfErrorsExist(Errors.java:466)
at com.google.inject.internal.InternalInjectorCreator.initializeStatically(InternalInjectorCreator.java:155)
at com.google.inject.internal.InternalInjectorCreator.build(InternalInjectorCreator.java:107)
at com.google.inject.Guice.createInjector(Guice.java:96)
at io.airlift.bootstrap.Bootstrap.initialize(Bootstrap.java:242)
at com.facebook.presto.server.PrestoServer.run(PrestoServer.java:116)
at com.facebook.presto.server.PrestoServer.main(PrestoServer.java:67)
A few things. You need to remove the trailing space after "http-server.http.port=8080".
Also, your node.id property is invalid. The node.id should match the following regex: "[A-Za-z0-9][_A-Za-z0-9-]*" (see https://github.com/airlift/airlift/blob/1e5694fb13ac6ca9cbdae1a1e60909c62fc7a64e/node/src/main/java/io/airlift/node/NodeConfig.java#L30).
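For reference, after those two fixes the relevant files (typically under etc/ in the Presto installation) might look like this. A minimal sketch: the port is taken from the question, while the node.id value, environment, and data directory are placeholder assumptions:

etc/config.properties:
coordinator=true
node-scheduler.include-coordinator=true
http-server.http.port=8080
discovery-server.enabled=true
discovery.uri=http://localhost:8080

etc/node.properties:
node.environment=production
node.id=presto-node-1
node.data-dir=/var/presto/data

Note that no line has trailing whitespace; as the '8080 ' coercion error shows, the property values are not trimmed.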
I have the same issue; in my case the error is "No factory for connector jmx".
Remove the trailing spaces from all of your /etc/* config files as well.
I am making a JDBC connection to a Denodo database using PySpark. The table that I am connecting to contains the "TIMESTAMP_WITH_TIMEZONE" datatype for 2 columns. Since Spark provides built-in JDBC connectivity for only a handful of databases, of which Denodo is not one, it is not able to recognize the "TIMESTAMP_WITH_TIMEZONE" datatype and hence cannot map it to any of its Spark SQL datatypes.
To overcome this I am providing my own custom schema (c_schema here), but this is not working either and I am getting the same error. Below is the code snippet.
c_schema = "game start date TIMESTAMP, game end date TIMESTAMP"
df = spark.read.jdbc("jdbc_url", "schema.table_name", properties={"user": "user_name", "password": "password", "customSchema": c_schema, "driver": "com.denodo.vdp.jdbc.Driver"})
Please let me know how I can fix this.
For anyone else facing this issue while connecting to Denodo using Spark: use the CAST function to convert the "TIMESTAMP_WITH_TIMEZONE" datatype into another datatype like String, Date, or Timestamp. I posted this question on the Denodo community page too and have attached its official response:
CAST("PLANNED START DATE" as DATE) as "PLANNED_START_DATE"
I am trying to query the replication health of all protected Azure VMs and break them down into three states: normal, warning, and critical. But I get an error running the code below:
AzureDiagnostics
| where replicationProviderName_s == "A2A"
| where isnotempty(name_s) and isnotnull(name_s)
| summarize hint.strategy=partitioned arg_max(TimeGenerated, *) by name_s
| project name_s , replicationHealth_s
| summarize count() by replicationHealth_s
| render piechart
The error is: 'where' operator: Failed to resolve column or scalar expression named 'replicationProviderName_S'
Please help me resolve the error.
The error means there's no column named replicationProviderName_s in the schema of the table/function named AzureDiagnostics.
What does the following return when you run it?
AzureDiagnostics | getschema
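If the column turns out to be missing from your workspace's AzureDiagnostics schema (for example, because no Site Recovery records have been ingested yet), one option is to guard the reference with column_ifexists(), which substitutes a default value when the column does not exist. A sketch based on the query above, assuming the remaining columns do exist:

AzureDiagnostics
| where column_ifexists("replicationProviderName_s", "") == "A2A"
| summarize count() by replicationHealth_s
| render piechart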
I'm trying to sink from one topic to Cassandra. That topic has a schema in the Schema Registry for Avro records created by a Kafka Streams application.
This is my connect-standalone.properties:
bootstrap.servers=localhost:9092
key.converter=org.apache.kafka.connect.json.JsonConverter
value.converter=org.apache.kafka.connect.json.JsonConverter
key.converter.schemas.enable=true
value.converter.schemas.enable=true
offset.storage.file.filename=/tmp/connect.offsets
offset.flush.interval.ms=10000
plugin.path=/usr/share/java
This is my cassandra-sink.properties:
connector.class=io.confluent.connect.cassandra.CassandraSinkConnector
cassandra.contact.points=ip.to.endpoint
cassandra.port=9042
cassandra.keyspace=my_keyspace
cassandra.write.mode=Update
tasks.max=1
topics=avro-sink
key.converter.schema.registry.url=http://localhost:8081
value.converter.schema.registry.url=http://localhost:8081
key.converter=org.apache.kafka.connect.storage.StringConverter
value.converter=org.apache.kafka.connect.avro.AvroConverter
name=cassandra-sink-connector
internal.key.converter=org.apache.kafka.connect.storage.StringConverter
internal.value.converter=org.apache.kafka.connect.avro.AvroConverter
transforms=createKey
transforms.createKey.fields=id,timestamp
transforms.createKey.type=org.apache.kafka.connect.transforms.ValueToKey
And this is the error:
[2019-03-15 16:32:05,238] ERROR Failed to create job for /etc/kafka/cassandra-sink.properties (org.apache.kafka.connect.cli.ConnectStandalone:108)
[2019-03-15 16:32:05,238] ERROR Stopping after connector error (org.apache.kafka.connect.cli.ConnectStandalone:119)
java.util.concurrent.ExecutionException: org.apache.kafka.connect.runtime.rest.errors.BadRequestException: Connector configuration is invalid and contains the following 1 error(s):
Invalid value org.apache.kafka.connect.avro.AvroConverter for configuration value.converter: Class org.apache.kafka.connect.avro.AvroConverter could not be found.
You can also find the above list of errors at the endpoint `/{connectorType}/config/validate`
at org.apache.kafka.connect.util.ConvertingFutureCallback.result(ConvertingFutureCallback.java:79)
at org.apache.kafka.connect.util.ConvertingFutureCallback.get(ConvertingFutureCallback.java:66)
at org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:116)
Caused by: org.apache.kafka.connect.runtime.rest.errors.BadRequestException: Connector configuration is invalid and contains the following 1 error(s):
Invalid value org.apache.kafka.connect.avro.AvroConverter for configuration value.converter: Class org.apache.kafka.connect.avro.AvroConverter could not be found.
You can also find the above list of errors at the endpoint `/{connectorType}/config/validate`
at org.apache.kafka.connect.runtime.AbstractHerder.maybeAddConfigErrors(AbstractHerder.java:423)
at org.apache.kafka.connect.runtime.standalone.StandaloneHerder.putConnectorConfig(StandaloneHerder.java:188)
at org.apache.kafka.connect.cli.ConnectStandalone.main(ConnectStandalone.java:113)
This broker is the one that can be found in https://github.com/confluentinc/cp-docker-images
Any ideas? Thanks!
I solved it by removing all of the converter properties, since on the Confluent Docker container the defaults are already Avro for the value and String for the key.
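For reference, the trimmed cassandra-sink.properties would then look something like this (the same values as above with the converter overrides dropped; a sketch, since the exact defaults depend on the Connect worker image):

name=cassandra-sink-connector
connector.class=io.confluent.connect.cassandra.CassandraSinkConnector
tasks.max=1
topics=avro-sink
cassandra.contact.points=ip.to.endpoint
cassandra.port=9042
cassandra.keyspace=my_keyspace
cassandra.write.mode=Update
transforms=createKey
transforms.createKey.type=org.apache.kafka.connect.transforms.ValueToKey
transforms.createKey.fields=id,timestamp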
First I was trying with this code to get all the properties of a custom-type file in my repository. The object with this ID is of type mcl_engineer. I am unable to read the custom-type file and am getting an exception.
CmisObject cmisObject = session.getObject(session.createObjectId("09027bd480031032"));
Document document = (Document) cmisObject;
System.out.println(document.getProperties());
I am also getting an exception while querying.
SELECT *
FROM cmis:document
WHERE cmis:objectId = '09027bd480031032'
While querying I get this error:
CMIS Exception: runtime Internal Server Error
Error Content: Status code: 500[runtime] Failed to get property info for type: mcl_engineer [E_CANT_INIT_PROPERTY_INFO_CACHE] Cant initialize PropertyInfoCache: mcl_engineer category. You must provide a table owner name qualification on your reference to table mcl_category. [DM_QUERY_E_REG_TABLE_QUAL] You must provide a table owner name qualification on your reference to table mcl_category. (CMIS with REST-Atom binding)
I am only getting the error for the custom-type file, not for the default one. So please help me out.
Thanks in advance.
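For what it's worth, DM_QUERY_E_REG_TABLE_QUAL is Documentum's way of saying that a registered table was referenced without its owner prefix. References to registered tables in DQL normally need qualification, conventionally via the repository owner alias, e.g.:

SELECT * FROM dm_dbo.mcl_category

Since the failure happens while the server initializes the PropertyInfoCache for mcl_engineer, the unqualified reference to mcl_category most likely lives server-side in that type's data dictionary (for instance a value-assistance query on its category attribute), which is an assumption worth verifying rather than something fixable in the client code above.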
I'm using Web API with Entity Framework 4.2 and the Sybase ASE connector.
This was working without issues, returning JSON, until I tried to add a new table.
return db.car
.Include("tires")
.Include("tires.hub_caps")
.Include("tires.hub_caps.colors")
.Include("tires.hub_caps.sizes")
.Include("tires.hub_caps.sizes.units")
.Where(c => c.tires == 13);
The above works without issues if the following line is removed:
.Include("tires.hub_caps.colors")
However, when that line is included, I am given the error:
""An error occurred while preparing the command definition. See the inner exception for details."
The inner exception reads:
"InnerException = {"Specified method is not supported."}"
"source = Sybase.AdoNet4.AseClient"
The following also results in an error:
List<car> cars = db.car.AsNoTracking()
.Include("tires")
.Include("tires.hub_caps")
.Include("tires.hub_caps.colors")
.Include("tires.hub_caps.sizes")
.Include("tires.hub_caps.sizes.units")
.Where(c => c.tires == 13).ToList();
The error is as follows:
An exception of type 'System.Data.EntityCommandCompilationException' occurred in System.Data.Entity.dll but was not handled in user code
Additional information: An error occurred while preparing the command definition. See the inner exception for details.
Inner exception: "Specified method is not supported."
This points to a fault with the Sybase ASE data connector.
I am using data annotations on all tables to control which fields are returned. On the colors table, I have tried the following annotations to limit the properties returned to just the key:
[JsonIgnore]
[IgnoreDataMember]
Any ideas what might be causing this issue?
Alternatively, if I keep colors in and remove,
.Include("tires.hub_caps.sizes")
.Include("tires.hub_caps.sizes.units")
then this works also. It seems that the Sybase ASE connector does not support cases where an Include statement forks from one object in two directions. Is there a way around this? The same issue occurs with Sybase ASE and the Progress data connector.
The issue does not occur in a standard ASP.NET MVC controller class; the problem is with serializing two one-to-many relationships on a single table to JSON.
The issue still occurs if lazy loading is turned on.
It seems to me that this is a bug with Sybase ASE that none of the connectors are able to work around.
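If that is the case, one possible workaround is to split the query so that no single command forks at hub_caps, and let EF's relationship fixup stitch the tracked entities back together. A sketch only; the key and foreign-key names (c.id, h.id, col.hub_cap_id) are assumptions, since the real model isn't shown:

// First query: everything except the colors branch.
var cars = db.car
    .Include("tires")
    .Include("tires.hub_caps")
    .Include("tires.hub_caps.sizes")
    .Include("tires.hub_caps.sizes.units")
    .Where(c => c.id == 13) // hypothetical key filter standing in for the question's Where
    .ToList();

// Second query: load the colors branch separately. The hub_caps are
// already tracked by the context, so EF attaches these colors to them.
var hubCapIds = cars.SelectMany(c => c.tires)
                    .SelectMany(t => t.hub_caps)
                    .Select(h => h.id)
                    .ToList();
db.colors.Where(col => hubCapIds.Contains(col.hub_cap_id)).ToList();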