Lagom framework / Persistent Read Side / Cassandra / DataStax / Table unconfigured - cassandra

I successfully compiled the code example from http://www.lagomframework.com/documentation/1.0.x/ReadSide.html
It's about the read side of CQRS.
There is only one problem: it doesn't run.
It looks like a configuration problem, and the official Lagom documentation is very incomplete at this point.
The error says:
java.util.concurrent.CompletionException: java.util.concurrent.ExecutionException: com.datastax.driver.core.exceptions.InvalidQueryException: unconfigured table postsummary
Alright, there are lines in the code that run Cassandra queries, selecting from and inserting into a table named postsummary.
I thought the tables were auto-created by default. Anyway, in doubt, I simply added these lines to my application.conf:
cassandra-journal.keyspace-autocreate = true
cassandra-journal.tables-autocreate = true
Still..., no luck, same error after restarting.
Maybe it has something to do with another error during startup, that says:
[warn] a.p.c.j.CassandraJournal - Failed to connect to Cassandra and initialize. It will be retried on demand. Caused by: ServiceLocator is not bound
I thought... alright, maybe it's trying to contact port 9042 (the default Cassandra port), while Lagom by default starts its embedded Cassandra on port 4000.
So I tried adding these lines in application.conf:
cassandra-journal.contact-points = ["127.0.0.1"]
cassandra-journal.port = 4000
lagom.persistence.read-side.cassandra.contact-points = ["127.0.0.1"]
lagom.persistence.read-side.cassandra.port = 4000
Still..., no luck, same error.
Can anyone help me solve it? I need to get this example running; it's a crucial part of my CQRS study using Lagom.
Some ref.: https://github.com/lagom/lagom/blob/master/persistence/src/main/resources/reference.conf
Btw, I solved it by creating the tables inside the code, calling this method from the prepare method of the event processor:
private CompletionStage<Done> prepareTables(CassandraSession session) {
    CompletionStage<Done> preparePostSummary = session.executeCreateTable(
            "CREATE TABLE IF NOT EXISTS postsummary ("
                    + "partition bigint, id text, title text, "
                    + "PRIMARY KEY (id))"
    ).whenComplete((ok, err) -> {
        if (err != null) {
            System.out.println("Failed to create postsummary table, due to: " + err.getMessage());
        }
    });
    CompletionStage<Done> prepareBlogEventOffset = session.executeCreateTable(
            "CREATE TABLE IF NOT EXISTS blogevent_offset ("
                    + "partition bigint, offset uuid, "
                    + "PRIMARY KEY (offset))"
    ).whenComplete((ok, err) -> {
        if (err != null) {
            System.out.println("Failed to create blogevent_offset table, due to: " + err.getMessage());
        }
    });
    return preparePostSummary.thenCompose(a -> prepareBlogEventOffset);
}
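For completeness, a sketch of how this can be chained from prepare(), following the shape of the BlogEventProcessor example in the linked docs (prepareWriteTitle, prepareWriteOffset, and selectOffset are that example's other helpers, not shown here):
@Override
public CompletionStage<Optional<UUID>> prepare(CassandraSession session) {
    // create the tables first, then run the rest of the preparation
    // chain from the documentation example
    return prepareTables(session).thenCompose(a ->
           prepareWriteTitle(session).thenCompose(b ->
           prepareWriteOffset(session).thenCompose(c ->
           selectOffset(session))));
}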
Thanks!
Raka

I have a working example here, even if it does not use auto-created tables:
https://github.com/lagom/activator-lagom-cargotracker/blob/master/registration-impl/src/main/java/sample/cargotracker/registration/impl/CargoEventProcessor.java

Related

"Warning: SQL contains join, recordset not updatable" when I use this SQL query with CDatabase in MFC

I have this function to query some records from my MDB / ACCDB database:
CString strQuery, strNumber;
if (m_dbDatabase.IsOpen())
{
    if (m_pRecords != nullptr)
    {
        strQuery.Format(_T("SELECT [Home Talks].*, [Public Talk Titles].[Theme] FROM [Home Talks] ")
                        _T("LEFT JOIN [Public Talk Titles] ON ")
                        _T("[Home Talks].[Talk Number] = [Public Talk Titles].[Talk Number] ")
                        _T("WHERE [Home Talks].[id] = %d"), iRecordID);
        if (m_pRecords->Open(CRecordset::dynaset, (LPCTSTR)strQuery))
        {
            // Do stuff
        }
        m_pRecords->Close();
    }
}
It works fine and always reads the data correctly. But I have noticed, in DEBUG x64 mode, the following warning in the Output log:
D:\a_work\1\s\src\vctools\VC7Libs\Ship\ATLMFC\Src\MFC\dbcore.cpp(2890) : AppMsg - Warning: SQL contains join, recordset not updatable
I thought I would just try CRecordset::readOnly, but then the Open method crashed. Is there any way to avoid this warning? Or is it just informational and to be ignored?

Cassandra Trigger Exception: InvalidQueryException: table of additional mutation does not match primary update table

I am using a Cassandra trigger on a table. I am following the example and loading the trigger jar with 'nodetool reloadtriggers'. Then I am using the
'CREATE TRIGGER mytrigger ON ..'
command from cqlsh to create the trigger on my table.
When I add an entry into that table directly, my audit table is populated.
But when I call a method from within my Java application, which persists an entry into my table using
'session.execute(BoundStatement)', I get this exception:
InvalidQueryException: table of additional mutation does not match primary update table
Why do the insertion into the table and the audit work when done directly with cqlsh, and why do they fail when doing pretty much exactly the same from the Java application?
I am using this as my AuditTrigger, very simplified (I left out all operations other than row insertion):
public class AuditTrigger implements ITrigger {
    private Properties properties = loadProperties();

    public Collection<Mutation> augment(Partition update) {
        String auditKeyspace = properties.getProperty("keyspace");
        String auditTable = properties.getProperty("table");
        CFMetaData metadata = Schema.instance.getCFMetaData(auditKeyspace, auditTable);
        PartitionUpdate.SimpleBuilder audit =
                PartitionUpdate.simpleBuilder(metadata, UUIDGen.getTimeUUID());
        // 'row' is obtained by iterating the partition's rows (elided in this simplification)
        if (row.primaryKeyLivenessInfo().timestamp() != Long.MIN_VALUE) {
            // Row insertion
            JSONObject obj = new JSONObject();
            obj.put("message_id", update.metadata().getKeyValidator()
                    .getString(update.partitionKey().getKey()));
            audit.row().add("operation", "ROW INSERTION");
        }
        audit.row().add("keyspace_name", update.metadata().ksName)
                .add("table_name", update.metadata().cfName)
                .add("primary_key", update.metadata().getKeyValidator()
                        .getString(update.partitionKey().getKey()));
        return Collections.singletonList(audit.buildAsMutation());
    }
}
It seems that with a BoundStatement the trigger fails:
session.execute(boundStatement);
Using a regular CQL query string works, though:
session.execute(query);
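To make the contrast concrete, here is a minimal repro sketch (DataStax Java driver 3.x API; the keyspace, table, and column names are placeholders, not the real ones):
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.PreparedStatement;
import com.datastax.driver.core.Session;

public class TriggerRepro {
    public static void main(String[] args) {
        try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
             Session session = cluster.connect()) {
            // plain CQL string: the trigger fires and the audit row is written
            session.execute("INSERT INTO ks.mytable (message_id) VALUES ('a')");
            // prepared + bound statement: fails with
            // InvalidQueryException: table of additional mutation does not match primary update table
            PreparedStatement ps = session.prepare("INSERT INTO ks.mytable (message_id) VALUES (?)");
            session.execute(ps.bind("b"));
        }
    }
}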
We are using BoundStatement everywhere within our application, though, and cannot change that.
Any help would be appreciated.
Thanks

ListShardMap.UpdateMapping throws an exception: LockOwnerId cannot be null

I have tried different approaches and googled the error a lot, but no luck so far.
I am trying to write a function that updates an existing shard mapping, but I get the following exception.
Microsoft.Azure.SqlDatabase.ElasticScale.ShardManagement.ShardManagementException: Store Error: Error 515, Level 16, State 2, Procedure __ShardManagement.spBulkOperationShardMappingsLocal, Line 98, Message: Cannot insert the value NULL into column 'LockOwnerId', table 'TEST-POS.__ShardManagement.ShardMappingsLocal'; column does not allow nulls. INSERT fails.
The Create Shard and Delete Shard functions I wrote are working fine, but I get the above error while updating or creating a mapping.
Following is my code:
PointMapping<int> pointMapping;
bool mappingExists = _listShardMap.TryGetMappingForKey(9, out pointMapping);
if (mappingExists)
{
    var shardLocation = new ShardLocation(NewServerName, NewDatabaseName);
    Shard _shard;
    bool shardExists = _listShardMap.TryGetShard(shardLocation, out _shard);
    if (shardExists)
    {
        var token = _listShardMap.GetMappingLockOwner(pointMapping);
        var mappingUpdate = new PointMappingUpdate { Shard = _shard, Status = MappingStatus.Online };
        var newMapping = _listShardMap.UpdateMapping(_listShardMap.MarkMappingOffline(pointMapping), mappingUpdate, token);
    }
}
I get the same error whether I supply the token or not. I also tried supplying a token via MappingLockToken.Create(), but then I get a different error saying that the correct token was not provided. That is to be expected, because the token is different.
_listShardMap.UpdateMapping(offlineMapping, mappingUpdate, MappingLockToken.Create());
Microsoft.Azure.SqlDatabase.ElasticScale.ShardManagement.ShardManagementException: Mapping referencing shard '[DataSource=cps-pos-test-1.database.windows.net Database=Live_MSA_Test_Cloud]' belonging to shard map 'ClientIDShardMap' is locked and correct lock token is not provided. Error occurred while executing procedure
I also checked the LockOwnerId in the [__ShardManagement].[ShardMappingsGlobal] table in the database, and the ID is 00000000-0000-0000-0000-000000000000.
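For reference, that check amounts to queries along these lines (column names assumed, not verified; note the failing procedure in the error message writes to the Local table on the shard itself, while the check above was against the Global table in the shard map manager database):
-- against the shard map manager database
SELECT MappingId, LockOwnerId FROM __ShardManagement.ShardMappingsGlobal;
-- against the shard database itself
SELECT MappingId, LockOwnerId FROM __ShardManagement.ShardMappingsLocal;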
I thought I was getting the NULL insertion error because the token ID is all zeros, so I manually updated it to 451a4da0-e3d4-42ac-bdc3-5b57022693d0 in the database with an update query. But that did not work, and I get the same Cannot insert the value NULL into column 'LockOwnerId' error.
I am also facing the same NULL error while creating a new mapping, and I do not see where in the code to provide a token when creating a mapping. Here is the code:
PointMappingCreationInfo<int> newMappingInfo = new PointMappingCreationInfo<int>(10, newShard, MappingStatus.Online);
var newMapping = _listShardMap.CreatePointMapping(newMappingInfo);
I have searched a lot on Google and downloaded some sample applications as well, but I have not been able to find a solution. I would highly appreciate any kind of help.

Transaction not getting completed after commit in Azure SQL Data Warehouse

I am trying out transactions using JDBC in Azure SQL Data Warehouse. The transaction is successfully processed, but afterwards a DDL command fails with the error Operation cannot be performed within a transaction.
Here is what I am trying to do.
connection.createStatement().execute("CREATE TABLE " + schema + ".transaction_table (id INT)");
connection.createStatement().execute("INSERT INTO " + schema + ".transaction_table (id) VALUES (1)");
connection.createStatement().execute("INSERT INTO " + schema + ".transaction_table (id) VALUES (2)");
// Transaction starts
connection.setAutoCommit(false);
connection.createStatement().execute("DELETE FROM " + schema + ".transaction_table WHERE id = 2");
connection.createStatement().execute("INSERT INTO " + schema + ".transaction_table (id) VALUES (10)");
connection.commit();
connection.setAutoCommit(true);
// Transaction ends
// Expect the next DDL command to succeed, but it does not
connection.createStatement().execute("CREATE TABLE " + schema + ".transaction_table_new (id INT)");
// Fails with `Operation cannot be performed within a transaction`
So, how can we close the transaction in Azure SQL Data Warehouse.
I tried to do it like this:
try {
    // This fails
    connection.createStatement().execute("CREATE TABLE " + schema + ".transaction_table_new (id INT)");
} catch (SQLServerException e) {
    if (e.getMessage().contains("Operation cannot be performed within a transaction")) {
        // This succeeds; somehow the transaction was closed, maybe because of the exception
        connection.createStatement().execute("CREATE TABLE " + schema + ".transaction_table_new (id INT)");
    }
}
SQL Data Warehouse expects the CREATE TABLE statement to be run outside of a transaction. By setting connection.setAutoCommit to true, you are forcing Java to run the execute within a transaction. I'm a bit weak on Java (it's been a while), but you should be able to run the second DDL statement by simply commenting out the setAutoCommit(true) line. This leaves the JDBC driver in an execute-only mode, so the execute() operation is not run within a transaction.
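In code, that suggestion would look roughly like this (a sketch only, reusing the connection and schema variables from the question):
connection.setAutoCommit(false);
connection.createStatement().execute("DELETE FROM " + schema + ".transaction_table WHERE id = 2");
connection.createStatement().execute("INSERT INTO " + schema + ".transaction_table (id) VALUES (10)");
connection.commit();
// no setAutoCommit(true) here, per the suggestion above
connection.createStatement().execute("CREATE TABLE " + schema + ".transaction_table_new (id INT)");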
It looks like we have to end the transaction manually, like this:
connection.setAutoCommit(false);
// Transaction statement 1
// Transaction statement 2
connection.commit();
connection.setAutoCommit(true);
connection.createStatement().execute("IF ##TRANCOUNT > 0 COMMIT TRAN");
This is because, with Azure SQL Data Warehouse, the JDBC connection.commit() doesn't appear to always issue a COMMIT: the driver keeps track of the transactions it is managing and decides to be "smart" about what it sends. So the manual COMMIT TRAN is executed to close any open transactions before executing DDL commands.
This is strange, as we don't have to do this for other warehouses or databases, but it works. And it is not documented.
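Wrapped up as a small helper, the workaround looks like this (a sketch based on the steps above; the class and method names are mine):
import java.sql.Connection;
import java.sql.SQLException;
import java.sql.Statement;

class DwTransactions {
    // Commits pending work and force-closes any transaction the driver
    // left open, so that DDL can run afterwards (workaround sketch).
    static void endTransaction(Connection connection) throws SQLException {
        connection.commit();             // commit the pending statements
        connection.setAutoCommit(true);  // back to auto-commit mode
        try (Statement st = connection.createStatement()) {
            st.execute("IF @@TRANCOUNT > 0 COMMIT TRAN"); // close anything still open
        }
    }
}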

SqlBulkCopy Failed to obtain column collation information for the destination table

I am getting this error when I try to write rows to a table via SqlBulkCopy and a DataTable object.
Before going any further, let me say that I am aware of the Microsoft KB article below. Every post out there regarding this error references that article. However, I DO NOT have dots in my table or schema name. The table exists in the default schema for the user account, so the table name alone should suffice.
http://support.microsoft.com/kb/944389
Here is the code which performs the bulk write operation:
SqlConnection cn = new SqlConnection(cs);
cn.Open();
SqlTransaction tr = cn.BeginTransaction();
try
{
    using (SqlBulkCopy copy = new SqlBulkCopy(cn, SqlBulkCopyOptions.Default, tr))
    {
        copy.DestinationTableName = CircCountTableName;
        copy.ColumnMappings.Add("CirculationRangeID", "CirculationRangeID");
        copy.ColumnMappings.Add("GeographyID", "GeographyID");
        copy.ColumnMappings.Add("CircCountModelID", "CircCountModelID");
        copy.ColumnMappings.Add("Monday", "Monday");
        copy.ColumnMappings.Add("Tuesday", "Tuesday");
        copy.ColumnMappings.Add("Wednesday", "Wednesday");
        copy.ColumnMappings.Add("Thursday", "Thursday");
        copy.ColumnMappings.Add("Friday", "Friday");
        copy.ColumnMappings.Add("Saturday", "Saturday");
        copy.ColumnMappings.Add("Sunday", "Sunday");
        copy.ColumnMappings.Add("DataSource", "DataSource");
        copy.ColumnMappings.Add("DataSourceID", "DataSourceID");
        copy.ColumnMappings.Add("CreateDate", "CreateDate");
        copy.ColumnMappings.Add("LastUpdateDate", "LastUpdateDate");
        copy.ColumnMappings.Add("LastUpdateUser", "LastUpdateUser");
        copy.WriteToServer(circCounts);
        tr.Commit();
    }
}
catch (Exception ex)
{
    tr.Rollback();
}
finally
{
    cn.Close();
}
Has anyone else encountered this problem when the cause was something other than dot notation? I suspect it's a permissions issue, but I'm not entirely convinced.
Thank you.
I have no idea why this would make a difference, but when I gave the account used to connect to the database the right to Grant the View Definition permission - under Database Properties / Permissions - the error went away.
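For reference, the UI step described above corresponds to T-SQL along these lines (the user name is a placeholder; possibly WITH GRANT OPTION, matching the "Grant" column in that dialog):
GRANT VIEW DEFINITION TO [bulk_load_user];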
