I have a 3-node Cassandra cluster. I created a system key using dsetool createsystemkey 'AES/ECB/PKCS5Padding' 128 system_key, with proper permissions and ownership set on /etc/ and /etc/dse/conf/.
But when I create a table with encryption, I get the following error:
ConfigurationException: ErrorMessage code=2300 [Query invalid because
of configuration issue] message="Encryptor.create() threw an error:
java.lang.RuntimeException Failed to initialize Encryptor:
com.datastax.bdp.cassandra.crypto.KeyGenerationException:
java.io.IOException: Couldn't encrypt input"
Table schema
CREATE TABLE test (
    id text PRIMARY KEY,
    data text
) WITH compression = {
    'sstable_compression': 'Encryptor',
    'cipher_algorithm': 'AES/ECB/PKCS5Padding',
    'secret_key_strength': 128,
    'chunk_length_kb': 1
};
My DSE version: 4.8.4
I added an extra column xyz in CashSchemaV1. I am able to start the node with the H2 database, but it gives the following error when using PostgreSQL:
[ERROR] 14:52:11+0530 [main] internal.NodeStartupLogging.invoke - Exception during node startup: Incompatible schema change detected. Please run the node with database.initialiseSchema=true. Reason: Schema-validation: missing column [xyz] in table [contract_cash_states]
I followed https://docs.corda.r3.com/database-management.html#database-management-scripts
and added the xyz column in https://github.com/corda/corda/blob/master/finance/workflows/src/main/resources/migration/cash.changelog-init.xml:
<column name="pennies" type="BIGINT"/>
<column name="xyz" type="NVARCHAR(130)"/>
Then I added database migration scripts retrospectively to the existing CorDapp.
After this I tried to start the node, but I get the following error:
[ERROR] 14:52:11+0530 [main] internal.NodeStartupLogging.invoke - Exception during node startup: Incompatible schema change detected. Please run the node with database.initialiseSchema=true. Reason: Schema-validation: missing column [xyz] in table [contract_cash_states]
CashSchemaV1.kt https://github.com/corda/corda/blob/master/finance/contracts/src/main/kotlin/net/corda/finance/schemas/CashSchemaV1.kt
@Type(type = "corda-wrapper-binary")
var issuerRef: ByteArray,
@Column(name = "xyz")
var xyz: String
) : PersistentState()
}
Generated migration script cash-schema-v1.changelog-master.sql:
--liquibase formatted sql
--changeset R3.Corda.Generated:initial_schema_for_CashSchemaV1
create table contract_cash_states (
output_index int4 not null,
transaction_id varchar(64) not null,
ccy_code varchar(3) not null,
issuer_key_hash varchar(130) not null,
issuer_ref bytea not null,
owner_name varchar(255),
pennies int8 not null,
xyz varchar(255),
primary key (output_index, transaction_id)
);
create index ccy_code_idx on contract_cash_states (ccy_code);
create index pennies_idx on contract_cash_states (pennies);
The schema should be created with all the columns specified in CashSchemaV1.
Steps performed to add the extra column:
1) Added <column name="xyz" type="NVARCHAR(130)"/> in cash.changelog-init.xml
2) Added <addNotNullConstraint tableName="abc_states" columnName="xyz" columnDataType="NVARCHAR(130)"/> in cash.changelog-v1.xml
I built the CorDapp and then ran the node with this; the node started successfully.
Environment
presto 0.215
presto-cli 0.215
presto-jdbc 0.215
Hive Table created by Presto
CREATE TABLE hive.origin.test_part (
id int,
date_key int
)
WITH (
format = 'ORC',
partitioned_by = ARRAY['date_key'],
external_location = '/user/hive/warehouse/origin.db/test_part/'
)
Both Presto JDBC and the Presto CLI can INSERT INTO this table successfully.
The partition '20190122' did not exist before, and the insert succeeded, which means renaming the tmp directory to /user/hive/warehouse/origin.db/test_part/date_key=20190122 worked.
/user/hive/warehouse/origin.db/test_part/date_key=20190122/ in hdfs
But calling system.create_empty_partition() from the Presto CLI failed:
CALL system.create_empty_partition( schema_name => 'origin', table_name => 'test_part', partition_columns => ARRAY['date_key'], partition_values => ARRAY['20190121'])
Full error message
com.facebook.presto.spi.PrestoException: Failed to rename hdfs://datacenter1:8020/tmp/presto-hive/b87162e5-9e48-4d43-a0e7-ecf0994fe625/date_key=20190121 to hdfs://datacenter1:8020/user/hive/warehouse/origin.db/test_part/date_key=20190121: rename returned false
at com.facebook.presto.hive.metastore.SemiTransactionalHiveMetastore.renameDirectory(SemiTransactionalHiveMetastore.java:1787)
at com.facebook.presto.hive.metastore.SemiTransactionalHiveMetastore.access$2700(SemiTransactionalHiveMetastore.java:87)
at com.facebook.presto.hive.metastore.SemiTransactionalHiveMetastore$Committer.prepareAddPartition(SemiTransactionalHiveMetastore.java:1177)
at com.facebook.presto.hive.metastore.SemiTransactionalHiveMetastore$Committer.access$700(SemiTransactionalHiveMetastore.java:957)
at com.facebook.presto.hive.metastore.SemiTransactionalHiveMetastore.commitShared(SemiTransactionalHiveMetastore.java:885)
at com.facebook.presto.hive.metastore.SemiTransactionalHiveMetastore.commit(SemiTransactionalHiveMetastore.java:807)
at com.facebook.presto.hive.HiveMetadata.commit(HiveMetadata.java:1949)
at com.facebook.presto.hive.CreateEmptyPartitionProcedure.createEmptyPartition(CreateEmptyPartitionProcedure.java:126)
at java.lang.invoke.MethodHandle.invokeWithArguments(MethodHandle.java:627)
at java.lang.invoke.MethodHandle.invokeWithArguments(MethodHandle.java:649)
at com.facebook.presto.execution.CallTask.execute(CallTask.java:160)
at com.facebook.presto.execution.CallTask.execute(CallTask.java:60)
at com.facebook.presto.execution.DataDefinitionExecution.start(DataDefinitionExecution.java:168)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
/tmp/presto-hive/ in hdfs
So, does CALL system.create_empty_partition() use a different user to manipulate HDFS?
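A quick way to check that theory (a hedged diagnostic sketch, not part of Presto; the HDFS user name 'presto' and the Hadoop client dependency are assumptions) is to replay the exact rename from the error message with the plain Hadoop FileSystem API while impersonating that user:
import java.net.URI;
import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;

// Replays the rename from the error message as a given HDFS user to see
// whether it is rejected for permission reasons.
public class RenameCheck {
    public static void main(String[] args) throws Exception {
        UserGroupInformation ugi = UserGroupInformation.createRemoteUser("presto"); // assumed user
        ugi.doAs((PrivilegedExceptionAction<Void>) () -> {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(URI.create("hdfs://datacenter1:8020"), conf);
            Path src = new Path("/tmp/presto-hive/b87162e5-9e48-4d43-a0e7-ecf0994fe625/date_key=20190121");
            Path dst = new Path("/user/hive/warehouse/origin.db/test_part/date_key=20190121");
            System.out.println("rename returned " + fs.rename(src, dst));
            return null;
        });
    }
}
If the rename succeeds here but fails from Presto, the user or ACL setup differs; if it fails the same way, the problem is in HDFS permissions rather than Presto.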
This is failing due to a bug that prevents it from working with non-bucketed tables. It is fixed in the 301 release.
I am trying to fetch some data from Azure Data Lake into Azure SQL Data Warehouse, but I am unable to do it. I have followed the documentation:
https://learn.microsoft.com/en-us/azure/sql-data-warehouse/sql-data-warehouse-load-from-azure-data-lake-store
But I am getting this error when I try to create an external table. I have created another web/API app but still was not able to access the application. Here is the error I am facing:
EXTERNAL TABLE access failed due to internal error: 'Java exception raised on call to HdfsBridge_IsDirExist. Java exception message:
GETFILESTATUS failed with error 0x83090aa2 (Forbidden. ACL verification failed. Either the resource does not exist or the user is not authorized to perform the requested operation.). [0ec4b8e0-b16d-470e-9c98-37818176a188][2017-08-14T02:30:58.9795172-07:00]: Error [GETFILESTATUS failed with error 0x83090aa2 (Forbidden. ACL verification failed. Either the resource does not exist or the user is not authorized to perform the requested operation.). [0ec4b8e0-b16d-470e-9c98-37818176a188][2017-08-14T02:30:58.9795172-07:00]] occurred while accessing external file.'
Here is the script I am trying to get to work:
CREATE DATABASE SCOPED CREDENTIAL ADLCredential2
WITH
IDENTITY = '2ec11315-5a30-4bea-9428-e511bf3fa8a1#https://login.microsoftonline.com/24708086-c2ce-4b77-8d61-7e6fe8303971/oauth2/token',
SECRET = '3Htr2au0b0wvmb3bwzv1FekK88YQYZCUrJy7OB3NzYs='
;
CREATE EXTERNAL DATA SOURCE AzureDataLakeStore11
WITH (
TYPE = HADOOP,
LOCATION = 'adl://test.azuredatalakestore.net/',
CREDENTIAL = ADLCredential2
);
CREATE EXTERNAL FILE FORMAT TextFileFormat
WITH
( FORMAT_TYPE = DELIMITEDTEXT
, FORMAT_OPTIONS ( FIELD_TERMINATOR = '|'
, DATE_FORMAT = 'yyyy-MM-dd HH:mm:ss.fff'
, USE_TYPE_DEFAULT = FALSE
)
);
CREATE EXTERNAL TABLE [extccsm].[external_medication]
(
person_id varchar(4000),
encounter_id varchar(4000),
fin varchar(4000),
mrn varchar(4000),
icd_code varchar(4000),
icd_description varchar(300),
priority integer,
optional1 varchar(4000),
optional2 varchar(4000),
optional3 varchar(4000),
load_identifier varchar(4000),
upload_time datetime2,
xx_person_id varchar(4000),--Person ID is the ID we will use to represent the person uniquely throughout the process. This requires initial analysis to determine how to set it
xx_encounter_id varchar(4000),--Encounter ID is the ID that will represent the encounter uniquely throughout the process. This requires initial analysis to determine how to set it based on client data
mod_optional1 varchar(4000),
mod_optional2 varchar(4000),
mod_optional3 varchar(4000),
mod_optional4 varchar(4000),
mod_optional5 varchar(4000),
mod_loadidentifier datetime2
)
WITH
(
LOCATION='\testfiles\procedure_azure.txt000\',
DATA_SOURCE = AzureDataLakeStore11, --DATA SOURCE THE BLOB STORAGE
FILE_FORMAT = TextFileFormat, --TYPE OF FILE FORMAT
REJECT_TYPE = percentage,
REJECT_VALUE = 1,
REJECT_SAMPLE_VALUE = 0
);
Please tell me what's wrong here.
I can reproduce this but it's hard to narrow down exactly. I think it's to do with permissions. From the Azure portal:
Data Lake Store > yourDataLakeAccount > your folder > Access
From there, make sure your AD Application has Read, Write and Execute permission on the relevant files / folders. Start with one file initially. I can reproduce the error by assigning / unassigning the Execute permissions but need to repeat the steps to confirm. I'll retrace my steps but for now concentrate your search here. In my example, my Azure Active Directory Application is called adwAndPolybase; I've given it Read, Write and Execute. I also experimented with the Advanced and 'Apply to children' options.
I have the following table:
create table test(
    userId varchar,
    notifId timeuuid,
    notification varchar,
    time bigint,
    read boolean,
    primary key(userId, notifId)
) with clustering order by (notifId desc);
I am running the following query:
PreparedStatement pstmt = session.prepare("INSERT INTO notifications(userId, notifId, notification, time, read) VALUES(?, now(), ?, ?, ?)");
BoundStatement boundStatement = new BoundStatement(pstmt);
session.execute(boundStatement.bind("123", "hello", new Date().getTime(), false));
I am getting the following error:
Exception in thread "main" com.datastax.driver.core.exceptions.InvalidQueryException: Type error: cannot assign result of function now (type timeuuid) to notifid (type 'org.apache.cassandra.db.marshal.ReversedType(org.apache.cassandra.db.marshal.TimeUUIDType)')
at com.datastax.driver.core.exceptions.InvalidQueryException.copy(InvalidQueryException.java:35)
at com.datastax.driver.core.ResultSetFuture.extractCause(ResultSetFuture.java:277)
at com.datastax.driver.core.Session.toPreparedStatement(Session.java:281)
at com.datastax.driver.core.Session.prepare(Session.java:172)
at com.example.cassandra.SimpleClient.loadData(SimpleClient.java:130)
at com.example.cassandra.SimpleClient.main(SimpleClient.java:214)
Caused by: com.datastax.driver.core.exceptions.InvalidQueryException: Type error: cannot assign result of function now (type timeuuid) to notifid (type 'org.apache.cassandra.db.marshal.ReversedType(org.apache.cassandra.db.marshal.TimeUUIDType)')
at com.datastax.driver.core.ResultSetFuture.convertException(ResultSetFuture.java:307)
I am using Cassandra 1.2.2.
You should insert a timeuuid generated via the DataStax driver. In your case, since you are using a version 1 (time-based) UUID, you should use
UUIDs.timeBased()
You can find this in the following reference:
http://www.datastax.com/drivers/java/2.0/com/datastax/driver/core/utils/UUIDs.html
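For example, a minimal sketch of the insert from the question with a client-generated time-based UUID bound in place of now() (the contact point and keyspace name are assumptions; UUIDs is com.datastax.driver.core.utils.UUIDs from the Java driver):
import java.util.Date;
import com.datastax.driver.core.BoundStatement;
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.Session;
import com.datastax.driver.core.utils.UUIDs;

public class NotificationInsert {
    public static void main(String[] args) {
        Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build(); // assumed contact point
        Session session = cluster.connect("mykeyspace"); // assumed keyspace name

        // Generate the version 1 (time-based) UUID on the client and bind it,
        // instead of calling now() inside the prepared INSERT.
        PreparedStatement pstmt = session.prepare(
                "INSERT INTO notifications(userId, notifId, notification, time, read) VALUES(?, ?, ?, ?, ?)");
        BoundStatement bound = new BoundStatement(pstmt);
        session.execute(bound.bind("123", UUIDs.timeBased(), "hello", new Date().getTime(), false));

        cluster.shutdown(); // close() on newer driver versions
    }
}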
It looks like a bug (similar to or the same as https://issues.apache.org/jira/browse/CASSANDRA-5472), and you're using a fairly old version of Cassandra. I recommend upgrading to the latest 1.2.x release, retesting your scenario, and filing an issue if the problem still persists.
I am trying out Cassandra for the first time, running it locally as a simple session-management database. [Cassandra 2.0.4, CQL3, DataStax driver 2.0.0-rc2]
The following count query works fine when there is no data in the table:
select count(*) from session_data where app_name=? and account=? and last_access > ?
But after even a single row is inserted into the table, the query fails with the following error:
java.lang.AssertionError
at org.apache.cassandra.db.filter.ExtendedFilter$WithClauses.getExtraFilter(ExtendedFilter.java:258)
at org.apache.cassandra.db.ColumnFamilyStore.filter(ColumnFamilyStore.java:1719)
at org.apache.cassandra.db.ColumnFamilyStore.getRangeSlice(ColumnFamilyStore.java:1674)
at org.apache.cassandra.db.PagedRangeCommand.executeLocally(PagedRangeCommand.java:111)
at org.apache.cassandra.service.StorageProxy$LocalRangeSliceRunnable.runMayThrow(StorageProxy.java:1418)
at org.apache.cassandra.service.StorageProxy$DroppableRunnable.run(StorageProxy.java:1931)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
at java.lang.Thread.run(Thread.java:744)
Here is the schema I am using:
CREATE KEYSPACE session WITH replication= {'class': 'SimpleStrategy', 'replication_factor': 1};
CREATE TABLE session_data (
username text,
session_id text,
app_name text,
account text,
last_access timestamp,
created_on timestamp,
PRIMARY KEY (username, session_id, app_name, account)
);
create index sessionIndex ON session_data (session_id);
create index sessionAppName ON session_data (app_name);
create index lastAccessIndex ON session_data (last_access);
I am wondering if there is something wrong in the table definition/indexes or the query itself. Any help/insight would be greatly appreciated.
It looks like you're tripping over a bug in Cassandra. Here is the assertion and related comments in the Cassandra sources:
/*
* This method assumes the IndexExpression names are valid column names, which is not the
* case with composites. This is ok for now however since:
* 1) CompositeSearcher doesn't use it.
* 2) We don't yet allow non-indexed range slice with filters in CQL3 (i.e. this will never be
* called by CFS.filter() for composites).
*/
assert !(cfs.getComparator() instanceof CompositeType);
This code was modified between cassandra-2.0.4 and trunk as part of ticket CASSANDRA-5417, but it's not clear to me that the author was aware of this issue. The assertion was removed, but the comment was not. I would recommend submitting a bug report to the Cassandra project.