I am trying to create a table in a Cassandra DB using CQL3, and it throws the error below on the Cassandra server; the table is not created. Can anyone tell me what the reason behind it is?
Script:
use demodb1;
CREATE TABLE fishblogscolumnfamily (
    userid varchar,
    when timestamp,
    fishtype varchar,
    blog varchar,
    image blob,
    PRIMARY KEY (userid, when, fishtype)
);
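For comparison, here is the same schema with the when column renamed; this is just a guess on my part that the column name, rather than the types, might be what trips this build:
CREATE TABLE fishblogscolumnfamily2 (
    userid varchar,
    when_ts timestamp,
    fishtype varchar,
    blog varchar,
    image blob,
    PRIMARY KEY (userid, when_ts, fishtype)
);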
Error:
INFO 14:28:27,057 Create new ColumnFamily:
org.apache.cassandra.config.CFMetaData#70177017[cfId=1024,ksName=demodb1,cfName=fishblogscolumnfamily,cfType=Standard,comparator=org.apache.cassandra.db.marshal.CompositeType(org.apache.cassandra.db.marshal.DateType,org.apache.cassandra.db.marshal.UTF8Type,org.apache.cassandra.db.marshal.UTF8Type),subcolumncomparator=<null>,comment=,readRepairChance=0.1,dclocalReadRepairChance=0.0,replicateOnWrite=true,gcGraceSeconds=864000,defaultValidator=org.apache.cassandra.db.marshal.UTF8Type,keyValidator=org.apache.cassandra.db.marshal.UTF8Type,minCompactionThreshold=4,maxCompactionThreshold=32,keyAlias=java.nio.HeapByteBuffer[pos=0 lim=6 cap=6],columnAliases=[java.nio.HeapByteBuffer[pos=0 lim=4 cap=4], java.nio.HeapByteBuffer[pos=0 lim=8 cap=8]],valueAlias=<null>,column_metadata={java.nio.HeapByteBuffer[pos=0 lim=5 cap=5]=ColumnDefinition{name=696d616765, validator=org.apache.cassandra.db.marshal.BytesType, index_type=null, index_name='null', component_index=2}, java.nio.HeapByteBuffer[pos=0 lim=4 cap=4]=ColumnDefinition{name=626c6f67, validator=org.apache.cassandra.db.marshal.UTF8Type, index_type=null, index_name='null', component_index=2}},compactionStrategyClass=class org.apache.cassandra.db.compaction.SizeTieredCompactionStrategy,compactionStrategyOptions={},compressionOptions={sstable_compression=org.apache.cassandra.io.compress.SnappyCompressor},bloomFilterFpChance=<null>,caching=KEYS_ONLY]
ERROR 14:28:27,071 Error occurred during processing of message.
java.lang.NullPointerException
at org.apache.cassandra.utils.FastByteComparisons$LexicographicalComparerHolder$UnsafeComparer.compareTo(FastByteComparisons.java:223)
at org.apache.cassandra.utils.FastByteComparisons$LexicographicalComparerHolder$UnsafeComparer.compareTo(FastByteComparisons.java:110)
at org.apache.cassandra.utils.FastByteComparisons.compareTo(FastByteComparisons.java:41)
at org.apache.cassandra.utils.FBUtilities.compareUnsigned(FBUtilities.java:184)
at org.apache.cassandra.utils.ByteBufferUtil.compareUnsigned(ByteBufferUtil.java:89)
at org.apache.cassandra.db.marshal.BytesType.bytesCompare(BytesType.java:58)
at org.apache.cassandra.db.marshal.AsciiType.compare(AsciiType.java:48)
at org.apache.cassandra.db.marshal.AsciiType.compare(AsciiType.java:28)
at org.apache.cassandra.db.marshal.AbstractCompositeType.compare(AbstractCompositeType.java:80)
at org.apache.cassandra.db.marshal.AbstractCompositeType.compare(AbstractCompositeType.java:32)
at java.util.TreeMap.rbInsert(Unknown Source)
at java.util.TreeMap.put(Unknown Source)
at org.apache.cassandra.db.TreeMapBackedSortedColumns.addColumn(TreeMapBackedSortedColumns.java:95)
at org.apache.cassandra.db.AbstractColumnContainer.addColumn(AbstractColumnContainer.java:109)
at org.apache.cassandra.db.AbstractColumnContainer.addColumn(AbstractColumnContainer.java:104)
at org.apache.cassandra.config.ColumnDefinition.toSchema(ColumnDefinition.java:195)
at org.apache.cassandra.config.CFMetaData.toSchema(CFMetaData.java:1159)
I'm trying to deploy a Service Builder module on my Liferay portal, but I'm getting the error below.
Could this be caused by the fact that I already have that table in my database? The table was created manually as part of an older project, and now I'm trying to reuse that existing table.
Thanks in advance.
2019-07-24 12:09:47.770 ERROR [pipe-start 998][com_liferay_portal_upgrade_impl:97] Invocation to listener threw exception com.liferay.portal.kernel.upgrade.UpgradeException: Bundle com.xxxx.xxxx.service_1.0.0 [998] has invalid content in tables.sql:_create table dbo.news ( id_ INTEGER not null primary key,_ title VARCHAR(75) null,_ push_notification BOOLEAN,_ date_time VARCHAR(75) null,_ short_description VARCHAR(75) null,_ long_description VARCHAR(75) null,_ picture VARCHAR(75) null,_ type_ VARCHAR(75) null,_ tag VARCHAR(75) null_); [Sanitized]
at com.liferay.portal.spring.extender.internal.context.ParentModuleApplicationContextExtender$InitialUpgradeStep.upgrade(ParentModuleApplicationContextExtender.java:220)
at com.liferay.portal.upgrade.internal.executor.UpgradeExecutor$UpgradeInfosRunnable.run(UpgradeExecutor.java:159)
at com.liferay.portal.output.stream.container.internal.OutputStreamContainerFactoryTrackerImpl.runWithSwappedLog(OutputStreamContainerFactoryTrackerImpl.java:119)
at com.liferay.portal.upgrade.internal.executor.UpgradeExecutor.executeUpgradeInfos(UpgradeExecutor.java:110)
at com.liferay.portal.upgrade.internal.executor.UpgradeExecutor.execute(UpgradeExecutor.java:87)
at com.liferay.portal.upgrade.internal.release.osgi.commands.ReleaseManagerOSGiCommands$UpgradeInfoServiceTrackerMapListener.keyEmitted(ReleaseManagerOSGiCommands.java:485)
at com.liferay.portal.upgrade.internal.release.osgi.commands.ReleaseManagerOSGiCommands$UpgradeInfoServiceTrackerMapListener.keyEmitted(ReleaseManagerOSGiCommands.java:474)
at com.liferay.osgi.service.tracker.collections.internal.map.ServiceTrackerMapImpl$DefaultEmitter.emit(ServiceTrackerMapImpl.java:222)
at com.liferay.osgi.service.tracker.collections.map.PropertyServiceReferenceMapper.map(PropertyServiceReferenceMapper.java:43)
at com.liferay.osgi.service.tracker.collections.internal.map.ServiceTrackerMapImpl$ServiceReferenceServiceTrackerCustomizer.addingService(ServiceTrackerMapImpl.java:260)
at com.liferay.osgi.service.tracker.collections.internal.map.ServiceTrackerMapImpl$ServiceReferenceServiceTrackerCustomizer.addingService(ServiceTrackerMapImpl.java:248)
at org.osgi.util.tracker.ServiceTracker$Tracked.customizerAdding(ServiceTracker.java:943)
at org.osgi.util.tracker.ServiceTracker$Tracked.customizerAdding(ServiceTracker.java:1)
at org.osgi.util.tracker.AbstractTracked.trackAdding(AbstractTracked.java:256)
at org.osgi.util.tracker.AbstractTracked.track(AbstractTracked.java:229)
at org.osgi.util.tracker.ServiceTracker$Tracked.serviceChanged(ServiceTracker.java:903)
at org.eclipse.osgi.internal.serviceregistry.FilteredServiceListener.serviceChanged(FilteredServiceListener.java:109)
at org.eclipse.osgi.internal.serviceregistry.ServiceRegistry.publishServiceEventPrivileged(ServiceRegistry.java:891)
at org.eclipse.osgi.internal.serviceregistry.ServiceRegistry.publishServiceEvent(ServiceRegistry.java:804)
at org.eclipse.osgi.internal.serviceregistry.ServiceRegistrationImpl.register(ServiceRegistrationImpl.java:127)
at org.eclipse.osgi.internal.serviceregistry.ServiceRegistry.registerService(ServiceRegistry.java:228)
at org.eclipse.osgi.internal.framework.BundleContextImpl.registerService(BundleContextImpl.java:469)
at org.eclipse.osgi.internal.framework.BundleContextImpl.registerService(BundleContextImpl.java:487)
at org.eclipse.osgi.internal.framework.BundleContextImpl.registerService(BundleContextImpl.java:1004)
at com.liferay.portal.spring.extender.internal.context.ParentModuleApplicationContextExtender$ParentModuleApplicationContextExtension._processInitialUpgrade(ParentModuleApplicationContextExtender.java:613)
at com.liferay.portal.spring.extender.internal.context.ParentModuleApplicationContextExtender$ParentModuleApplicationContextExtension.start(ParentModuleApplicationContextExtender.java:571)
at org.apache.felix.utils.extender.AbstractExtender.createExtension(AbstractExtender.java:259)
at org.apache.felix.utils.extender.AbstractExtender.modifiedBundle(AbstractExtender.java:232)
at org.osgi.util.tracker.BundleTracker$Tracked.customizerModified(BundleTracker.java:488)
at org.osgi.util.tracker.BundleTracker$Tracked.customizerModified(BundleTracker.java:1)
at org.osgi.util.tracker.AbstractTracked.track(AbstractTracked.java:232)
at org.osgi.util.tracker.BundleTracker$Tracked.bundleChanged(BundleTracker.java:450)
at org.eclipse.osgi.internal.framework.BundleContextImpl.dispatchEvent(BundleContextImpl.java:908)
at org.eclipse.osgi.framework.eventmgr.EventManager.dispatchEvent(EventManager.java:230)
at org.eclipse.osgi.framework.eventmgr.ListenerQueue.dispatchEventSynchronous(ListenerQueue.java:148)
at org.eclipse.osgi.internal.framework.EquinoxEventPublisher.publishBundleEventPrivileged(EquinoxEventPublisher.java:230)
at org.eclipse.osgi.internal.framework.EquinoxEventPublisher.publishBundleEvent(EquinoxEventPublisher.java:137)
at org.eclipse.osgi.internal.framework.EquinoxEventPublisher.publishBundleEvent(EquinoxEventPublisher.java:129)
at org.eclipse.osgi.internal.framework.EquinoxContainerAdaptor.publishModuleEvent(EquinoxContainerAdaptor.java:191)
at org.eclipse.osgi.container.Module.publishEvent(Module.java:476)
at org.eclipse.osgi.container.Module.start(Module.java:467)
at org.eclipse.osgi.internal.framework.EquinoxBundle.start(EquinoxBundle.java:428)
at org.eclipse.osgi.internal.framework.EquinoxBundle.start(EquinoxBundle.java:447)
at org.eclipse.equinox.console.commands.EquinoxCommandProvider.start(EquinoxCommandProvider.java:243)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at org.apache.felix.gogo.runtime.Reflective.invoke(Reflective.java:139)
at org.apache.felix.gogo.runtime.CommandProxy.execute(CommandProxy.java:91)
at org.apache.felix.gogo.runtime.Closure.executeCmd(Closure.java:599)
at org.apache.felix.gogo.runtime.Closure.executeStatement(Closure.java:526)
at org.apache.felix.gogo.runtime.Closure.execute(Closure.java:415)
at org.apache.felix.gogo.runtime.Pipe.doCall(Pipe.java:416)
at org.apache.felix.gogo.runtime.Pipe.call(Pipe.java:229)
at org.apache.felix.gogo.runtime.Pipe.call(Pipe.java:59)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.sql.SQLSyntaxErrorException: unexpected token: DBO
at org.hsqldb.jdbc.JDBCUtil.sqlException(JDBCUtil.java:376)
at org.hsqldb.jdbc.JDBCUtil.sqlException(JDBCUtil.java:247)
at org.hsqldb.jdbc.JDBCStatement.fetchResult(JDBCStatement.java:1817)
at org.hsqldb.jdbc.JDBCStatement.executeUpdate(JDBCStatement.java:208)
at com.zaxxer.hikari.pool.ProxyStatement.executeUpdate(ProxyStatement.java:117)
at com.zaxxer.hikari.pool.HikariProxyStatement.executeUpdate(HikariProxyStatement.java)
at com.liferay.portal.dao.db.BaseDB.runSQL(BaseDB.java:294)
at com.liferay.portal.dao.db.BaseDB.runSQL(BaseDB.java:264)
at com.liferay.portal.dao.db.BaseDB.runSQLTemplateString(BaseDB.java:452)
at com.liferay.portal.dao.db.BaseDB.runSQLTemplateString(BaseDB.java:509)
at com.liferay.portal.spring.extender.internal.context.ParentModuleApplicationContextExtender$InitialUpgradeStep.upgrade(ParentModuleApplicationContextExtender.java:216)
... 59 more
Caused by: org.hsqldb.HsqlException: unexpected token: DBO
at org.hsqldb.error.Error.parseError(Error.java:101)
at org.hsqldb.ParserBase.unexpectedToken(ParserBase.java:815)
at org.hsqldb.ParserBase.checkIsIrregularCharInIdentifier(ParserBase.java:335)
at org.hsqldb.ParserDQL.checkIsSchemaObjectName(ParserDQL.java:115)
at org.hsqldb.ParserDQL.readNewSchemaObjectName(ParserDQL.java:5912)
at org.hsqldb.ParserTable.compileCreateTable(ParserTable.java:78)
at org.hsqldb.ParserDDL.compileCreate(ParserDDL.java:156)
at org.hsqldb.ParserCommand.compilePart(ParserCommand.java:236)
at org.hsqldb.ParserCommand.compileStatements(ParserCommand.java:91)
at org.hsqldb.Session.executeDirectStatement(Session.java:1227)
at org.hsqldb.Session.execute(Session.java:1018)
at org.hsqldb.jdbc.JDBCStatement.fetchResult(JDBCStatement.java:1809)
... 67 more
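For readability, here is the statement from tables.sql that the error message quotes (the sanitizer replaced its newlines with underscores):
create table dbo.news (
    id_ INTEGER not null primary key,
    title VARCHAR(75) null,
    push_notification BOOLEAN,
    date_time VARCHAR(75) null,
    short_description VARCHAR(75) null,
    long_description VARCHAR(75) null,
    picture VARCHAR(75) null,
    type_ VARCHAR(75) null,
    tag VARCHAR(75) null
);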
The fact that the table already exists is surely the issue. Service Builder deployments keep track of the "schema" version that is currently deployed, and if there is none, they create the tables using the SQL script.
In your case, I would suggest:
1. renaming, or exporting and deleting, your current table
2. deploying your new service
3. moving your data back into the newly created table
Once this is done, don't ever touch that data directly in the database again; let your new service handle all future interaction. In fact, it might even be better to import the old data through the service API.
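For example, the sequence could look roughly like this (a sketch assuming SQL Server, given the dbo prefix; the column list is taken from the error message above and must match your actual schema):
-- 1. Move the old table out of Service Builder's way.
EXEC sp_rename 'dbo.news', 'news_backup';

-- 2. Deploy the Service Builder module so it creates dbo.news itself.

-- 3. Copy the old rows into the newly created table.
INSERT INTO dbo.news (id_, title, push_notification, date_time,
                      short_description, long_description, picture, type_, tag)
SELECT id_, title, push_notification, date_time,
       short_description, long_description, picture, type_, tag
FROM dbo.news_backup;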
I am using a Cassandra trigger on a table. I am following the example and loading the trigger jar with 'nodetool reloadtriggers'. Then I am using the
'CREATE TRIGGER mytrigger ON ..'
command from cqlsh to create the trigger on my table.
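For reference, the full form of the command looks like this (the keyspace, table, and class names here are examples):
CREATE TRIGGER mytrigger ON mykeyspace.mytable USING 'com.example.AuditTrigger';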
When I add an entry to that table through cqlsh, my audit table is populated.
But when I call a method from within my Java application that persists an entry into my table using
'session.execute(BoundStatement)', I get this exception:
InvalidQueryException: table of additional mutation does not match primary update table
Why do the insertion into the table and the audit work when done directly with cqlsh, and why do they fail when doing pretty much exactly the same thing from the Java application?
I am using this as my AuditTrigger, heavily simplified (I left out all operations other than row insertion):
import java.util.*;
import org.apache.cassandra.config.*;
import org.apache.cassandra.db.*;
import org.apache.cassandra.db.partitions.*;
import org.apache.cassandra.db.rows.*;
import org.apache.cassandra.triggers.ITrigger;
import org.apache.cassandra.utils.UUIDGen;
import org.json.simple.JSONObject;

public class AuditTrigger implements ITrigger {
    // Loads the audit keyspace/table settings (implementation omitted here).
    private Properties properties = loadProperties();

    public Collection<Mutation> augment(Partition update) {
        String auditKeyspace = properties.getProperty("keyspace");
        String auditTable = properties.getProperty("table");
        CFMetaData metadata = Schema.instance.getCFMetaData(auditKeyspace, auditTable);
        PartitionUpdate.SimpleBuilder audit =
                PartitionUpdate.simpleBuilder(metadata, UUIDGen.getTimeUUID());

        // Walk the rows of the incoming partition update
        // (only row insertions are handled in this simplified version).
        UnfilteredRowIterator it = update.unfilteredIterator();
        while (it.hasNext()) {
            Unfiltered unfiltered = it.next();
            if (unfiltered.kind() != Unfiltered.Kind.ROW) {
                continue;
            }
            Row row = (Row) unfiltered;
            if (row.primaryKeyLivenessInfo().timestamp() != Long.MIN_VALUE) {
                // Row insertion
                JSONObject obj = new JSONObject();
                obj.put("message_id", update.metadata().getKeyValidator()
                        .getString(update.partitionKey().getKey()));
                audit.row().add("operation", "ROW INSERTION");
            }
        }

        audit.row().add("keyspace_name", update.metadata().ksName)
                .add("table_name", update.metadata().cfName)
                .add("primary_key", update.metadata().getKeyValidator()
                        .getString(update.partitionKey().getKey()));

        return Collections.singletonList(audit.buildAsMutation());
    }
}
It seems that with a BoundStatement the trigger fails:
session.execute(boundStatement);
whereas using a regular CQL query string works:
session.execute(query)
We use BoundStatement everywhere in our application, though, and cannot change that.
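To make the two call paths concrete, this is the difference, with a hypothetical table and values:
// Fails with "table of additional mutation does not match primary update table":
PreparedStatement prepared =
        session.prepare("INSERT INTO mykeyspace.mytable (id, value) VALUES (?, ?)");
session.execute(prepared.bind("key1", "value1"));

// Works:
session.execute("INSERT INTO mykeyspace.mytable (id, value) VALUES ('key1', 'value1')");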
Any help would be appreciated.
Thanks
I am trying to write a function that updates an existing shard mapping, but I get the following exception. I have tried different approaches and googled the error a lot, but no luck so far.
Microsoft.Azure.SqlDatabase.ElasticScale.ShardManagement.ShardManagementException: Store Error: Error 515, Level 16, State 2, Procedure __ShardManagement.spBulkOperationShardMappingsLocal, Line 98, Message: Cannot insert the value NULL into column 'LockOwnerId', table 'TEST-POS.__ShardManagement.ShardMappingsLocal'; column does not allow nulls. INSERT fails.
My Create Shard and Delete Shard functions work fine, but I get the above error when updating or creating a mapping.
Following is my code:
PointMapping<int> pointMapping;
bool mappingExists = _listShardMap.TryGetMappingForKey(9, out pointMapping);
if (mappingExists)
{
    var shardLocation = new ShardLocation(NewServerName, NewDatabaseName);
    Shard _shard;
    bool shardExists = _listShardMap.TryGetShard(shardLocation, out _shard);
    if (shardExists)
    {
        var token = _listShardMap.GetMappingLockOwner(pointMapping);
        var mappingUpdate = new PointMappingUpdate { Shard = _shard, Status = MappingStatus.Online };
        var newMapping = _listShardMap.UpdateMapping(
            _listShardMap.MarkMappingOffline(pointMapping), mappingUpdate, token);
    }
}
I get the same error whether or not I supply the token. I also tried supplying a token created with MappingLockToken.Create(), but then I get a different error saying that the correct token was not provided, which is expected, because the token is different:
_listShardMap.UpdateMapping(offlineMapping, mappingUpdate, MappingLockToken.Create());
Microsoft.Azure.SqlDatabase.ElasticScale.ShardManagement.ShardManagementException: Mapping referencing shard '[DataSource=cps-pos-test-1.database.windows.net Database=Live_MSA_Test_Cloud]' belonging to shard map 'ClientIDShardMap' is locked and correct lock token is not provided. Error occurred while executing procedure
I also checked the LockOwnerId in the [__ShardManagement].[ShardMappingsGlobal] table in the database; the ID there is 00000000-0000-0000-0000-000000000000.
I thought I was getting the NULL insertion error because the token ID is all zeros, so I manually updated it to 451a4da0-e3d4-42ac-bdc3-5b57022693d0 in the database with an update query. But that did not work, and I still get the same Cannot insert the value NULL into column 'LockOwnerId' error.
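One thing I am wondering about: as far as I understand, an all-zero LockOwnerId denotes an unlocked mapping, so if that is what I have, the matching token would presumably be MappingLockToken.NoLock rather than a freshly created one (untested sketch):
var newMapping = _listShardMap.UpdateMapping(offlineMapping, mappingUpdate, MappingLockToken.NoLock);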
I am also facing the same NULL error while creating a new mapping, and I do not see where in the code to provide a token when creating a mapping. Following is the code:
PointMappingCreationInfo<int> newMappingInfo = new PointMappingCreationInfo<int>(10, newShard, MappingStatus.Online);
var newMapping = _listShardMap.CreatePointMapping(newMappingInfo);
I have searched a lot on Google and downloaded some sample applications as well, but I have not been able to find a solution. I would highly appreciate any kind of help.
I successfully compiled the code example from http://www.lagomframework.com/documentation/1.0.x/ReadSide.html
It's about the read side of the CQRS schema.
There is only one problem: it doesn't run.
It looks like a configuration problem, and the official Lagom documentation is very incomplete at this point.
The error says:
java.util.concurrent.CompletionException: java.util.concurrent.ExecutionException: com.datastax.driver.core.exceptions.InvalidQueryException: unconfigured table postsummary
Alright, there's a line in the code that runs a Cassandra query, selecting from and inserting into a table named postsummary.
I thought the tables were auto-created by default. Anyway, in doubt, I simply added these lines to my application.conf:
cassandra-journal.keyspace-autocreate = true
cassandra-journal.tables-autocreate = true
Still no luck; the same error after restarting.
Maybe it has something to do with another error during startup, which says:
[warn] a.p.c.j.CassandraJournal - Failed to connect to Cassandra and initialize. It will be retried on demand. Caused by: ServiceLocator is not bound
I thought... alright, maybe it's trying to contact port 9042 (the default Cassandra port), while Lagom by default starts its embedded Cassandra on port 4000.
So I tried adding these lines to application.conf:
cassandra-journal.contact-points = ["127.0.0.1"]
cassandra-journal.port = 4000
lagom.persistence.read-side.cassandra.contact-points = ["127.0.0.1"]
lagom.persistence.read-side.cassandra.port = 4000
Still no luck; the same error.
Can anyone help me solve it? I need to get this example running; it's a crucial part of my CQRS study using Lagom.
Some ref.: https://github.com/lagom/lagom/blob/master/persistence/src/main/resources/reference.conf
By the way, I solved it by creating the tables inside the code, calling this method from the prepare method of the event processor:
private CompletionStage<Done> prepareTables(CassandraSession session) {
CompletionStage<Done> preparePostSummary = session.executeCreateTable(
"CREATE TABLE IF NOT EXISTS postsummary ("
+ "partition bigint, id text, title text, "
+ "PRIMARY KEY (id))"
).whenComplete((ok, err) -> {
if (err != null) {
System.out.println("Failed to create postsummary table, due to: " + err.getMessage());
}
});
CompletionStage<Done> prepareBlogEventOffset = session.executeCreateTable(
"CREATE TABLE IF NOT EXISTS blogevent_offset ("
+ "partition bigint, offset uuid, "
+ "PRIMARY KEY (offset))"
).whenComplete((ok, err) -> {
if (err != null) {
System.out.println("Failed to create blogevent_offset table, due to: " + err.getMessage());
}
});
return preparePostSummary.thenCompose(a -> prepareBlogEventOffset);
}
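For context, this is roughly how I wired it in (a sketch; it assumes the Lagom 1.0 CassandraReadSideProcessor API, where prepare returns the stored offset, and a selectOffset helper along the lines of the documentation example):
@Override
public CompletionStage<Optional<UUID>> prepare(CassandraSession session) {
    // Create the tables first, then load the stored offset (if any).
    return prepareTables(session).thenCompose(done -> selectOffset(session));
}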
Thanks!
Raka
I have a working example here, even though it does not use auto-created tables:
https://github.com/lagom/activator-lagom-cargotracker/blob/master/registration-impl/src/main/java/sample/cargotracker/registration/impl/CargoEventProcessor.java
Given below is the method I use to retrieve details from a DynamoDB table, but when I call this method it throws the exception "Unable to locate property for key attribute appointmentId". The primary key of this particular table is appointmentId, but I have also created a global secondary index on the patientId column, and I'm using that index in the query below to get the appointment details for a given patientID.
public async Task GetAppointmentByPatientID(int patientID)
{
    var context = CommonUtils.Instance.DynamoDBContext;
    PatientAppointmentObjectList.Clear();

    // Query the global secondary index on patientId instead of the table's primary key.
    DynamoDBOperationConfig config = new DynamoDBOperationConfig();
    config.IndexName = DBConstants.APPOINTMENT_PATIENTID_GSI;

    AsyncSearch<ScheduledAppointment> appQuery =
        context.QueryAsync<ScheduledAppointment>(patientID.ToString(), config);
    IEnumerable<ScheduledAppointment> appList = await appQuery.GetRemainingAsync();
    appList.Distinct().ToList().ForEach(i => PatientAppointmentObjectList.Add(i));

    if (PropertyChanged != null)
        this.OnPropertyChanged("PatientAppointmentObjectList");
}
It was a silly mistake. I had the hash key attribute of the table as "appointmentId" while the model object property was named AppointmentID. The mismatch in the case of the property name had confused the DynamoDB mapping.
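One way to avoid relying on name matching altogether is to pin the mapping with attributes; here is a sketch with a hypothetical model (the attribute name strings must match the table's key and index definitions exactly, including case):
using Amazon.DynamoDBv2.DataModel;

[DynamoDBTable("ScheduledAppointments")] // hypothetical table name
public class ScheduledAppointment
{
    // Explicitly map the property to the table's hash key attribute.
    [DynamoDBHashKey("appointmentId")]
    public string AppointmentID { get; set; }

    // Hash key of the GSI used by the query above (hypothetical index name).
    [DynamoDBGlobalSecondaryIndexHashKey("patientId-index")]
    public string PatientID { get; set; }
}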