I am using Spring Integration's aggregator pattern to hold events while waiting for a completion event, storing them in a JdbcMessageStore. I have created the tables INT_MESSAGE, INT_MESSAGE_GROUP and INT_GROUP_TO_MESSAGE.
Sometimes the completion event never arrives; in that case I want to expire the group, discard its events and remove them from the tables. I don't want the tables to grow big unnecessarily.
I have specified the following configuration in the pipeline:
.expireGroupsUponCompletion(true)
.expireGroupsUponTimeout(true)
.groupTimeout(groupMessageTimeOut)
.sendPartialResultOnExpiry(false)
Would this ensure that if the completion event doesn't arrive within x minutes, the message group is expired, discarded to the null channel and removed from the tables?
Please suggest.
Your summary is correct. Both .expireGroupsUponCompletion(true) and .expireGroupsUponTimeout(true) remove the group from the store.
And sendPartialResultOnExpiry(false) does exactly what you are asking, as the framework code shows:
if (this.sendPartialResultOnExpiry) {
    if (this.logger.isDebugEnabled()) {
        this.logger.debug("Prematurely releasing partially complete group with key ["
                + correlationKey + "] to: " + getOutputChannel());
    }
    completeGroup(correlationKey, group, lock);
}
else {
    if (this.logger.isDebugEnabled()) {
        this.logger.debug("Discarding messages of partially complete group with key ["
                + correlationKey + "] to: "
                + (this.discardChannelName != null ? this.discardChannelName : this.discardChannel));
    }
    if (this.releaseLockBeforeSend) {
        lock.unlock();
    }
    group.getMessages()
            .forEach(this::discardMessage);
}
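By default the discarded messages go to the framework's nullChannel, so they simply disappear. If you ever want to observe them instead, you can wire a discard channel explicitly. A minimal sketch in the Java DSL (bean and channel names are illustrative; groupMessageTimeOut is the value from your own configuration):

@Bean
public IntegrationFlow aggregatingFlow(MessageGroupStore messageStore) {
    return f -> f
            .aggregate(a -> a
                    .messageStore(messageStore)
                    .expireGroupsUponCompletion(true)
                    .expireGroupsUponTimeout(true)
                    .groupTimeout(groupMessageTimeOut)
                    .sendPartialResultOnExpiry(false)
                    // expired groups are discarded here instead of the implicit nullChannel
                    .discardChannel("expiredEventsChannel"))
            .channel("completedGroups");
}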
Tell us, please, what made you confused about that configuration?
I'm having difficulty with Stripe's customers->search sometimes returning incorrect results.
If I search for a customer, add it when it does not exist, and then repeat, I find it adds the customer a second/third/fourth/etc. time, for a while. Meanwhile, on the Stripe dashboard, I can see the customer appearing multiple times.
If I wait a few minutes between the initial refreshes, it seems OK.
Perhaps there's a cache, or a lag, or something between adding a customer and the search finding it, but this is not very helpful. Am I wrong, or is there something I'm not doing?
The relevant PHP code is here:
$stripe = new \Stripe\StripeClient($stripe_secretkey);
$customerId = ""; // initialise so the comparison below is safe
echo "<LI>Searching for $customerNm...</LI>";
$json = $stripe->customers->search(['query' => 'name:\'' . $customerNm . '\'']);
foreach ($json->data as $key) {
    echo "<LI>Found id:[" . $key->id . "]";
    if (strcmp($key->name, $customerNm) == 0) {
        $customerId = $key->id;
        $email = $key->email;
        break;
    }
}
echo "<hr>";
if (strcmp($customerId, "") == 0) {
    echo "<LI>Result to search is <PRE style='margin-left:20px'>$json</PRE>";
    $email = str_replace(" ", "", "abc@$customerNm.com"); // build a placeholder email, spaces stripped
    $json = $stripe->customers->create(
        [
            'email' => $email,
            'name' => $customerNm,
        ]);
    $customerId = $json->id;
    echo "<UL>Created New STRIPE customer id: [$customerId]</UL>\n";
}
This code can be found running on a sandbox here**, where if you run the page multiple times it will keep creating customers [for a while].
(Note, this page will create a cookie "posterUID" which is used as the customer name; once it's created then it stays until expiry or manual deletion).
** The full source to this can be seen here
This is expected behavior per Stripe's Search documentation:
Don’t use search for read-after-write flows (for example, searching immediately after a charge is made) because the data won’t be immediately available to search. Under normal operating conditions, data is searchable in under 1 minute. Propagation of new or updated data could be more delayed during an outage.
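The usual workaround is not to search at all in a read-after-write flow: keep the Stripe customer id in your own datastore, keyed by your own identifier (the posterUID cookie in this case), and create the customer only when no stored id exists. A minimal sketch of that pattern, written here in Java with the stripe-java library for illustration (customerIdStore is a hypothetical local lookup; the PHP equivalent would be a table keyed by posterUID):

import com.stripe.exception.StripeException;
import com.stripe.model.Customer;
import com.stripe.param.CustomerCreateParams;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class CustomerLookup {
    // Hypothetical local store mapping our own key -> Stripe customer id.
    private final Map<String, String> customerIdStore = new ConcurrentHashMap<>();

    public Customer getOrCreate(String customerNm, String email) throws StripeException {
        String id = customerIdStore.get(customerNm);
        if (id != null) {
            return Customer.retrieve(id); // direct fetch: no search propagation lag
        }
        Customer created = Customer.create(
                CustomerCreateParams.builder()
                        .setName(customerNm)
                        .setEmail(email)
                        .build());
        customerIdStore.put(customerNm, created.getId()); // remember the id ourselves
        return created;
    }
}

(Stripe.apiKey must be set before these calls; an idempotency key on the create request is another belt-and-braces option against duplicates.)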
I am trying to use the deleteConfirmation function option, but I find that the default confirmation box pops up before I even get into the deleteConfirmation function. What am I missing?
In the code below I can set breakpoints and watch the data object being set up correctly with its new deleteConfirmMessage, but the basic jTable default delete confirmation box has already appeared and I never see an altered one.
$(container).jtable({
    title: tablename,
    paging: true,
    pageSize: 100,
    sorting: true,
    defaultSorting: sortvar + ' ASC',
    selecting: false,
    deleteConfirmation: function(data) {
        var defaultMessage = 'This record will be deleted - along with all its assignments!<br>Are you sure?';
        if (data.record.Item) { // deleting an item
            // Check whether item is in any preset lists
            var url = 'CampingTablesData.php?action=CheckPresets&Table=items';
            $.when(
                ReturnAjax(url, {'ID': data.record.ID}, MyError)
            ).done(
                function(retdata, status) {
                    if (status == 'success') {
                        if (retdata.PresetList) {
                            data.deleteConfirmMessage = 'Item is in the following lists: ' + retdata.PresetList + '<br>Do you still want to delete it?';
                        }
                    } else {
                        data.cancel = true;
                        data.cancelMessage = retdata.Message;
                    }
                }
            );
        } else {
            data.deleteConfirmMessage = defaultMessage;
        }
    },
    messages: {
        addNewRecord: 'Add new',
        deleteText: deleteTxt
    },
    actions: {
        listAction: function(postData, jtParams) {
            <list action code>
        },
        createAction: function(postData) {
            <create action code>
        },
        updateAction: 'CampingTablesData.php?action=update&Table=' + tablename,
        deleteAction: 'CampingTablesData.php?action=delete&Table=' + tablename
    },
    fields: tableFields // preset variable
});
==========
After further testing, the problem occurs only when deleting an item, when the code goes through the $.when().done() section. jTable does not wait for this Ajax call to complete before proceeding with the delete - how do I overcome this?
I don't think you can get your design to work. What does the A in Ajax stand for? Asynchronous! Synchronous Ajax has been deprecated for all sorts of good design and performance reasons.
You need to design your application to function asynchronously. Looking at your code, it feels like you are misusing the deleteConfirmation event.
Consider changing the default deleteConfirmation message to inform the user that the delete might not succeed if certain conditions are met. Say:
messages: {
    deleteConfirmation: "This record will be deleted - along with all its assignments, unless in a preset list. Do you wish to try to delete this record?"
},
Then on the server, check the preset lists, and if the record is not deletable, return an error message for jTable to display; a sketch of that server-side check follows below.
Depending on how dynamic your preset lists are, another approach might be to let the list function return an additional flag or code indicating which preset lists, if any, the item is already in; then your confirmation function can check this flag without further access to the server.
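For the server-side check, jTable only needs its standard JSON envelope back: {"Result":"OK"} on success, or {"Result":"ERROR","Message":"..."} to fail the action and show a message. The backend above is PHP, but purely for illustration here is the shape of such a delete handler as a Java servlet (findPresetLists and deleteItem are hypothetical helpers):

import java.io.IOException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class DeleteItemServlet extends HttpServlet {
    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws IOException {
        int id = Integer.parseInt(req.getParameter("ID"));
        String presetList = findPresetLists(id); // hypothetical: lists using this item
        resp.setContentType("application/json");
        if (presetList != null) {
            // jTable shows this message and does not remove the row
            resp.getWriter().write(
                    "{\"Result\":\"ERROR\",\"Message\":\"Item is in: " + presetList + "\"}");
        } else {
            deleteItem(id); // hypothetical: actually delete the row
            resp.getWriter().write("{\"Result\":\"OK\"}");
        }
    }

    private String findPresetLists(int id) { return null; /* query preset tables here */ }
    private void deleteItem(int id) { /* delete the record here */ }
}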
Thanks to MisterP for his observation and suggestions. I also considered his last approach, but ended up setting deleteConfirmation to false (so as not to generate a system prompt) and then writing a delete function that did not actually delete, but returned the information I needed to construct my own delete-confirmation message. Then a simple if (confirm(myMessage)) goes ahead and deletes with another Ajax call.
I am trying out transactions using JDBC in Azure SQL Data Warehouse. The transaction is processed successfully, but afterwards a DDL command fails with the error Operation cannot be performed within a transaction.
Here is what I am trying to do.
connection.createStatement().execute("CREATE TABLE " + schema + ".transaction_table (id INT)");
connection.createStatement().execute("INSERT INTO " + schema + ".transaction_table (id) VALUES (1)");
connection.createStatement().execute("INSERT INTO " + schema + ".transaction_table (id) VALUES (2)");
// Transaction starts
connection.setAutoCommit(false);
connection.createStatement().execute("DELETE FROM " + schema + ".transaction_table WHERE id = 2");
connection.createStatement().execute("INSERT INTO " + schema + ".transaction_table (id) VALUES (10)");
connection.commit();
connection.setAutoCommit(true);
// Transaction ends
// Next DDL command to succeed, but it does not
connection.createStatement().execute("CREATE TABLE " + schema + ".transaction_table_new (id INT)");
// Fails with `Operation cannot be performed within a transaction`
So, how can we close the transaction in Azure SQL Data Warehouse?
I tried to do it like this.
try {
    // This fails
    connection.createStatement().execute("CREATE TABLE " + schema + ".transaction_table_new (id INT)");
} catch (SQLServerException e) {
    if (e.getMessage().contains("Operation cannot be performed within a transaction")) {
        // This succeeds
        // Somehow the transaction was closed, maybe because of the exception
        connection.createStatement().execute("CREATE TABLE " + schema + ".transaction_table_new (id INT)");
    }
}
SQL Data Warehouse expects the CREATE TABLE statement to be run outside of a transaction. By calling connection.setAutoCommit(true), you are forcing Java to run the execute within a transaction. I'm a bit weak on Java (it's been a while), but you should be able to run the second DDL statement by simply commenting out the setAutoCommit(true) line. This will leave the JDBC driver in an execute-only mode and not run the execute() operation within a transaction.
It looks like we have to end the transaction manually, like this:
connection.setAutoCommit(false);
// Transaction statement 1
// Transaction statement 2
connection.commit();
connection.setAutoCommit(true);
connection.createStatement().execute("IF ##TRANCOUNT > 0 COMMIT TRAN");
This is because, for Azure SQL Data Warehouse, the JDBC connection.commit() doesn't appear to always issue the COMMIT. It keeps track of the transactions it's managing and decides to be "smart" about what it sends. So a manual COMMIT TRAN is executed to close any open transactions before executing DDL commands.
This is strange, as we don't have to do this for other warehouses or databases, but it works. And it is not documented.
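If you need this in several places, the workaround is easy to wrap in a helper; a minimal sketch (the method name endTransaction is ours, not part of any driver API):

// Commit, restore auto-commit, then force-close any transaction the
// driver may have left open before the next DDL statement runs.
private static void endTransaction(Connection connection) throws SQLException {
    connection.commit();
    connection.setAutoCommit(true);
    try (Statement st = connection.createStatement()) {
        st.execute("IF @@TRANCOUNT > 0 COMMIT TRAN");
    }
}

(java.sql.Connection, Statement and SQLException are the only imports needed.)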
I successfully compiled the code example from http://www.lagomframework.com/documentation/1.0.x/ReadSide.html
It's about the read side of the CQRS schema.
There is only one problem: it doesn't run.
It looks like a configuration problem, and the official Lagom documentation is very incomplete at this point.
The error says:
java.util.concurrent.CompletionException: java.util.concurrent.ExecutionException: com.datastax.driver.core.exceptions.InvalidQueryException: unconfigured table postsummary
Alright, there are lines in the code that run Cassandra queries, selecting from and inserting into a table named postsummary.
I thought the tables were auto-created by default. Anyway, being in doubt, I simply added these lines to my application.conf:
cassandra-journal.keyspace-autocreate = true
cassandra-journal.tables-autocreate = true
Still..., no luck, same error after restarting.
Maybe it has something to do with another error during startup, which says:
[warn] a.p.c.j.CassandraJournal - Failed to connect to Cassandra and initialize. It will be retried on demand. Caused by: ServiceLocator is not bound
I thought... alright, maybe it's trying to contact port 9042 (the default Cassandra port), while Lagom by default starts the embedded Cassandra on port 4000.
So I tried adding these lines in application.conf:
cassandra-journal.contact-points = ["127.0.0.1"]
cassandra-journal.port = 4000
lagom.persistence.read-side.cassandra.contact-points = ["127.0.0.1"]
lagom.persistence.read-side.cassandra.port = 4000
Still..., no luck, same error.
Can anyone help me solve it? I need to get this example running; it is a crucial part of my CQRS study using Lagom.
Some ref.: https://github.com/lagom/lagom/blob/master/persistence/src/main/resources/reference.conf
Btw, I solved it by creating the tables inside the code, calling this method from the prepare method of the event processor:
private CompletionStage<Done> prepareTables(CassandraSession session) {
    CompletionStage<Done> preparePostSummary = session.executeCreateTable(
            "CREATE TABLE IF NOT EXISTS postsummary ("
            + "partition bigint, id text, title text, "
            + "PRIMARY KEY (id))"
    ).whenComplete((ok, err) -> {
        if (err != null) {
            System.out.println("Failed to create postsummary table, due to: " + err.getMessage());
        }
    });
    CompletionStage<Done> prepareBlogEventOffset = session.executeCreateTable(
            "CREATE TABLE IF NOT EXISTS blogevent_offset ("
            + "partition bigint, offset uuid, "
            + "PRIMARY KEY (offset))"
    ).whenComplete((ok, err) -> {
        if (err != null) {
            System.out.println("Failed to create blogevent_offset table, due to: " + err.getMessage());
        }
    });
    return preparePostSummary.thenCompose(a -> prepareBlogEventOffset);
}
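For context, this is roughly how such a helper chains into the processor's prepare method; a sketch following the ReadSide documentation page linked above (prepareWriteTitle, prepareWriteOffset and selectOffset stand for the statement-preparation and offset-lookup helpers from that page):

@Override
public CompletionStage<Optional<UUID>> prepare(CassandraSession session) {
    // Create the tables first, then prepare statements and read the stored offset.
    return prepareTables(session)
            .thenCompose(a -> prepareWriteTitle(session))
            .thenCompose(a -> prepareWriteOffset(session))
            .thenCompose(a -> selectOffset(session));
}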
Thanks!
Raka
I have a working example here, even though it does not use auto-created tables:
https://github.com/lagom/activator-lagom-cargotracker/blob/master/registration-impl/src/main/java/sample/cargotracker/registration/impl/CargoEventProcessor.java
I found this code to enumerate a list of queues for a QueueManager.
It works, but I see a lot of system queues, and even channel names, in the list it provides. Is there some property I can test to see whether a queue is a "normal" user-defined queue?
ObjectType, QueueType and Usage seemed to always give the same values for every queue name.
// GET QueueNames - this worked on 07/19/2012 - but returned a lot of system queues,
// and it is unclear how to separate user queues from system queues.
PCFMessageAgent agent = new PCFMessageAgent(mqQMgr);
// Build the query request.
PCFMessage requestMessage = new PCFMessage(CMQCFC.MQCMD_INQUIRE_Q_NAMES);
requestMessage.AddParameter(MQC.MQCA_Q_NAME, "*");
// Send the request and retrieve the response.
PCFMessage[] responses = agent.Send(requestMessage);
// Retrieve the values requested from the response.
string[] queueNames = responses[0].GetStringListParameterValue(CMQCFC.MQCACF_Q_NAMES);
//string[] objType = responses[0].GetStringListParameterValue(CMQCFC.MQIACF_OBJECT_TYPE);
int loopCounter = 0;
foreach (string queueName in queueNames)
{
    loopCounter++;
    Console.WriteLine("QueueName=" + queueName);
    try
    {
        mqQueue = mqQMgr.AccessQueue(
            queueName,
            MQC.MQOO_OUTPUT // open queue for output
            + MQC.MQOO_INQUIRE // inquire required to get CurrentDepth
            + MQC.MQOO_FAIL_IF_QUIESCING); // but not if MQM stopping
        Console.WriteLine("QueueName=" + queueName +
            " CurrentDepth=" + mqQueue.CurrentDepth +
            " MaxDepth=" + mqQueue.MaximumDepth +
            " QueueType=" + mqQueue.QueueType +
            " Usage=" + mqQueue.Usage
        );
    }
    catch (MQException mex)
    {
        Console.WriteLine(mex.Message);
    }
}
For me, your sample code lists only queues and no other objects; but yes, it lists all queues. You can add another filter, requestMessage.AddParameter(MQC.MQIA_Q_TYPE, MQC.MQQT_MODEL);, to list only model queues. Other values available for MQC.MQIA_Q_TYPE are MQC.MQQT_LOCAL, MQC.MQQT_ALIAS, MQC.MQQT_CLUSTER and MQC.MQQT_REMOTE.
All system or predefined queue names begin with SYSTEM. So you could probably use that string to filter out predefined queues after listing. Also, if you look at a queue definition, there is a DEFTYPE attribute; system-defined queues have the value PREDEFINED. But I could not add a third parameter to filter queue names by DEFTYPE; I got reason code 3014.
HTH
As Shashi noted, you will only see queue names from that PCF command.
If you only want queue names that begin with PAYROLL, then change:
requestMessage.AddParameter(MQC.MQCA_Q_NAME, "*");
to
requestMessage.AddParameter(MQC.MQCA_Q_NAME, "PAYROLL.*");
Or add an if statement to exclude the queue names you do not want to see:
if (!queueName.StartsWith("SYSTEM."))
{
    // do something
}
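Combining the type filter with the SYSTEM. prefix check, the whole inquiry looks roughly like this in the Java PCF classes (com.ibm.mq.headers.pcf), which mirror the .NET API above almost one-to-one; exact constants can vary by MQ client version, so treat this as a sketch:

import com.ibm.mq.constants.CMQC;
import com.ibm.mq.constants.CMQCFC;
import com.ibm.mq.headers.pcf.PCFMessage;
import com.ibm.mq.headers.pcf.PCFMessageAgent;

// Inquire local queue names, then drop the predefined SYSTEM.* ones.
PCFMessageAgent agent = new PCFMessageAgent(mqQMgr);
PCFMessage request = new PCFMessage(CMQCFC.MQCMD_INQUIRE_Q_NAMES);
request.addParameter(CMQC.MQCA_Q_NAME, "*");
request.addParameter(CMQC.MQIA_Q_TYPE, CMQC.MQQT_LOCAL); // local queues only
PCFMessage[] responses = agent.send(request);
String[] queueNames = responses[0].getStringListParameterValue(CMQCFC.MQCACF_Q_NAMES);
for (String name : queueNames) {
    if (!name.startsWith("SYSTEM.")) {
        System.out.println("QueueName=" + name.trim()); // names come back blank-padded
    }
}
agent.disconnect();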