Is there a way to notify a listener to stop invoking events until further notice?
For example:
Let's assume a listener for a recursive structure like a SQL SELECT:
"SELECT * FROM TABLE_B b WHERE b.id IN (SELECT a.ID FROM TABLE_A a WHERE a.ID = 10)"
Now I want to send the sub-select to a secondary listener, but without the main listener processing the sub-select tokens.
Something along the lines of:
void SQLLISTENER::enterSubSelect(SQLParser::SubSelectContext *ctx) override {
    stopProcessingToken();  // Stop processing main listener
    SQLLISTENER *listener = new SQLLISTENER();
    antlr4::tree::ParseTreeWalker::DEFAULT.walk(listener, ctx);
}

void SQLLISTENER::exitSubSelect(SQLParser::SubSelectContext *ctx) override {
    startProcessingToken();  // Resume processing main listener
}
Does ANTLR support this type of token skipping?
Is there a more elegant way to handle processing of recursive sections?
A listener always fires both enter and exit events, for all nodes in the parse tree. You can indeed keep track of how many times a certain enter rule is invoked and base your logic on that. However, this gets cumbersome as the number of rules grows (and the number of checks grows with it).
It is perhaps better in your case to use a visitor: with a visitor you can decide whether you want it to dive into a certain (sub)tree or not. When you decide not to go into a (sub)tree, it is skipped entirely.
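A minimal sketch of that idea, assuming the grammar is called SQLParser, was generated with the -visitor option (so a SQLParserBaseVisitor base class exists) and has the subSelect rule from the question; recent ANTLR 4 C++ targets use std::any as the visitor return type (older ones use antlrcpp::Any):
class MainVisitor : public SQLParserBaseVisitor {
public:
    std::any visitSubSelect(SQLParser::SubSelectContext *ctx) override {
        // Handle the sub-select here (e.g. hand ctx to a secondary visitor or
        // walk it with a secondary listener)...
        // ...and deliberately do NOT call visitChildren(ctx), so this visitor
        // never descends into the sub-select's subtree.
        return {};
    }
};
Every rule you do not override falls through to the generated default, which simply calls visitChildren, so the rest of the tree is visited as usual.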
This appears, to me, to be a simple problem that is probably replicated all over the place: a very basic application of the MessageHandlerChain, probably using nothing more than out-of-the-box functionality.
Conceptually, what I need is this:
(1) Polled JDBC reader (sets parameters for integration pass)
|
V
(2) JDBC Reader (uses input from (1) to fetch data to feed through the channel)
|
V
(3) JDBC writer (writes data fetched by (2) to target)
|
V
(4) JDBC writer (writes additional data from the original parameters fetched in (1))
What I think I need is:
Flow:
From: JdbcPollingChannelAdapter (setup adapter)
Handler: messageHandlerChain
Handlers (
JdbcPollingChannelAdapter (inbound adapter)
JdbcOutboundGateway (outbound adapter)
JdbcOutboundGateway (cleanup gateway)
)
The JdbcPollingChannelAdapter does not implement the MessageHandler API, so I am at a loss how to read the actual data based on the setup step.
Since the JdbcOutboundGateway does not implement the MessageProducer API, I am at a bit of a loss as to what I need to use for the outbound adapter.
Are there OOB classes I should be using? Or do I need to somehow wrap the two adapters in BridgeHandlers to make this work?
Thanks in advance
EDIT (2)
Additional configuration problem
The setup adapter is pulling a single row back with two timestamp columns. They are being processed correctly by the "enrich headers" piece.
However, when the inbound adapter is executing, the framework is passing in java.lang.Object as parameters. Not String, not Timestamp, but an actual java.lang.Object as in new Object ().
It is passing the correct number of objects, but the content and datatypes are lost. Am I correct that the ExpressionEvaluatingSqlParameterSourceFactory needs to be configured?
Message:
GenericMessage [payload=[{startTime=2020-11-18 18:01:34.90944, endTime=2020-11-18 18:01:34.90944}], headers={startTime=2020-11-18 18:01:34.90944, id=835edf42-6f69-226a-18f4-ade030c16618, timestamp=1605897225384}]
SQL in the JdbcOutboundGateway:
Select t.*, w.operation as "ops" from ADDRESS t
Inner join TT_ADDRESS w
on (t.ADDRESSID = w.ADDRESSID)
And (w.LASTUPDATESTAMP >= :payload.from[0].get("startTime") and w.LASTUPDATESTAMP <= :payload.from[0].get("endTime") )
Edit: added the solution's Java DSL configuration
private JdbcPollingChannelAdapter setupAdapter; // select only
private JdbcOutboundGateway inboundAdapter; // select only
private JdbcOutboundGateway insertUpdateAdapter; // update only
private JdbcOutboundGateway deleteAdapter; // update only
private JdbcMessageHandler cleanupAdapter; // update only
setFlow(IntegrationFlows
.from(setupAdapter, c -> c.poller(Pollers.fixedRate(1000L, TimeUnit.MILLISECONDS).maxMessagesPerPoll(1)))
.enrichHeaders(h -> h.headerExpression("ALC_startTime", "payload.from[0].get(\"ALC_startTime\")")
.headerExpression("ALC_endTime", "payload.from[0].get(\"ALC_endTime\")"))
.handle(inboundAdapter)
.enrichHeaders(h -> h.headerExpression("ALC_operation", "payload.from[0].get(\"ALC_operation\")"))
.handle(insertUpdateAdapter)
.handle(deleteAdapter)
.handle(cleanupAdapter)
.get());
flowContext.registration(flow).id(this.getId().toString()).register();
If you would like to carry the original arguments down to the last gateway in your flow, you need to store those arguments in the headers, since after every step the payload of the reply message is going to be different and you won't have the original setup data there any more. That's first.
Second: if you deal with an IntegrationFlow and the Java DSL, you don't need to worry about a messageHandlerChain, since conceptually the IntegrationFlow is a chain by itself, but much more advanced.
I'm not sure why you need to use a JdbcPollingChannelAdapter to request data on demand according to an incoming message from the source at the beginning of your flow.
You definitely still need to use a JdbcOutboundGateway for just SELECT mode. The updateQuery is optional, so that gateway is just going to perform the SELECT and return the data for you in the payload of the reply message.
If your next two steps are just "write" and you don't care about the result, you can probably just take a look at a PublishSubscribeChannel with two JdbcMessageHandlers as subscribers to it. Without a provided Executor for the PublishSubscribeChannel they are going to be executed one by one, as sketched below.
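A rough sketch of that last suggestion, reusing names in the spirit of the question's DSL snippet; inboundGateway stands for the SELECT-only JdbcOutboundGateway, and insertUpdateHandler/deleteHandler for the two write-only JdbcMessageHandler beans, so those names and the SpEL header expressions are assumptions:
IntegrationFlow flow = IntegrationFlows
        .from(setupAdapter, c -> c.poller(Pollers.fixedRate(1000L, TimeUnit.MILLISECONDS).maxMessagesPerPoll(1)))
        .enrichHeaders(h -> h.headerExpression("startTime", "payload[0].get(\"startTime\")")
                .headerExpression("endTime", "payload[0].get(\"endTime\")"))
        .handle(inboundGateway)                                 // JdbcOutboundGateway, SELECT only
        .publishSubscribeChannel(s -> s
                .subscribe(f -> f.handle(insertUpdateHandler))  // JdbcMessageHandler, write only
                .subscribe(f -> f.handle(deleteHandler)))       // JdbcMessageHandler, write only
        .get();
If the two writers ever need to run in parallel instead of one after the other, the publish-subscribe channel can be given an Executor.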
I have multiple threads running a batch job. When each thread finishes it calls this method of mine:
private static readonly Object lockVar = new Object();

public void UserIsDone(int batchId, int userId)
{
    //Get the batch user
    var batchUser = context.ScheduledUsersBatchUsers.SingleOrDefault(x => x.User.Id == userId && x.Batch.Id == batchId);

    if (batchUser != null)
    {
        lock (lockVar)
        {
            context.ScheduledUsersBatchUsers.Remove(batchUser);
            context.SaveChanges();

            //Try to get the batch with the assumption it has no users left. If we do get the batch back, it means there are no users left.
            var dbBatch = context.ScheduledUsersBatches.SingleOrDefault(x => x.Id == batchId && !x.Users.Any());

            //So this must have been the last user, the batch is empty, so we fetch it and remove it.
            if (dbBatch != null)
            {
                context.ScheduledUsersBatches.Remove(dbBatch);
                context.SaveChanges();
            }
        }
    }
}
What this method does is very simple: it looks up the "BatchUser" to remove him from the queue, which it does. That part works swell.
However, after removing the user I want to check if that was the last user in the whole batch. But since this is multithreaded a race condition can happen.
So I put the removing of the batch user within a lock, after I remove the user, I check if the batch has no more batch users.
But here is my problem... even though I have a lock, and the query to get the "dbBatch" clearly requires it to have no users to return the object, I sometimes get it back with users, like so:
When I do get that, I also get the following error on SaveChanges()
However, at other times I get the dbBatch object back correctly with no children, like so:
And when I do, it all works great, no exceptions.
With the debugger I can catch the error by setting a breakpoint on the lock statement (see screenshot one). Then all threads get to the lock (while one goes in), and then I always get the error.
If I only have a breakpoint inside the if-statement it's more random.
With the lock in place, I don't see how this happens.
Update
I inject my context with Ninject, and this is my Ninject code:
kernel.Bind<MyContext>()
.To<MyContext>()
.InRequestScope()
.WithConstructorArgument("connectionStringOrName", "MyConnection");
kernel.Bind<DbContext>().ToMethod(context => kernel.Get<MyContext>()).InRequestScope();
Update 2
I also tried this solution https://msdn.microsoft.com/en-us/data/jj592904.aspx
But strangely I don't get a DbUpdateConcurrencyException; rather, I get a DbUpdateException whose InnerException is an OptimisticConcurrencyException.
But neither DbUpdateException nor OptimisticConcurrencyException contains an Entries property, so I can't do ex.Entries.Single().Reload();
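For reference, the pattern from that MSDN article looks roughly like this (a sketch of the "database wins" retry, not the exact sample); it never kicks in here because the exception surfaces as a DbUpdateException instead:
bool saved = false;
while (!saved)
{
    try
    {
        context.SaveChanges();
        saved = true;
    }
    catch (DbUpdateConcurrencyException ex)
    {
        // Reload the conflicting entity from the database and retry.
        ex.Entries.Single().Reload();
    }
}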
I'm also adding the exception in text form here.
The outer exception, of type DbUpdateException: {"An error occurred while saving entities that do not expose foreign key properties for their relationships. The EntityEntries property will return null because a single entity cannot be identified as the source of the exception. Handling of exceptions while saving can be made easier by exposing foreign key properties in your entity types. See the InnerException for details."}
The InnerException of type OptimisticConcurrencyException: {"Store update, insert, or delete statement affected an unexpected number of rows (0). Entities may have been modified or deleted since entities were loaded. See http://go.microsoft.com/fwlink/?LinkId=472540 for information on understanding and handling optimistic concurrency exceptions."}
I am using cqrs and ddd to build my application.
I have an account entity, a transaction entity and a transactionLine entity. A transaction contains multiple transactionLines. Each transactionLine has an amount and points to an account.
If a user adds a transactionLine to a transaction that already has a transactionLine pointing to the same account as the new one, I want to simply add the new transactionLine's amount to the existing one, preventing a transaction from having two transactionLines that point to the same account.
Ex:
Before command :
transaction
transactionLine1(amount=100, account=2)
transactionLine2(amount=50, account=1)
Command :
addNewTransaction(amount=25, account=1)
Desired result :
transaction
transactionLine1(amount=100, account=2)
transactionLine2(amount=75, account=1) // Add amount (50+25) instead of two different transactionLines
instead of
transaction
transactionLine1(amount=100, account=2)
transactionLine2(amount=50, account=1)
transactionLine3(amount=25, account=1) // Error, two different transactionLines point to the same account
But I wonder whether it is best to handle this in the command handler or the event handler.
If this case is handled by the command handler
Before command :
transaction
transactionLine1(amount=100, account=2)
transactionLine2(amount=50, account=1)
Command :
addNewTransaction(amount=25, account=1) // Detects the case
Dispatches event
transactionLineAmountChanged(transactionLine=2, amount=75)
AddTransactionLine command is received
Check if a transactionLine exists in the new transactionLine's transaction with the same account
If so, emit a transactionAmountChangedEvt event
Otherwise, emit a transactionAddedEvt event
Corresponding event handler handles the right event
If this case is handled by the event handler
Before command :
transaction
transactionLine1(amount=100, account=2)
transactionLine2(amount=50, account=1)
Command :
addNewTransaction(amount=25, account=1)
Dispatches event
transactionLineAdded(transactionLine=3, amount=25)
Handler // Detects the case
transactionLine2.amount = 75
AddTransactionLine command is received
TransactionLineAdded event is dispatched
TransactionLineAdded is handled
Check if the added transactionLine points to the same account as an existing transactionLine in this transaction
If so, just add the amount of the new transactionLine to the existing transactionLine
Otherwise, add a new transactionLine
Neither commands nor events should contain domain logic, only the domain should contain the domain logic. In your domain, aggregate roots represent transaction boundaries (not your transaction entities, but transactions for the logic). Handling logic within commands or events would bypass those boundaries and make your system very brittle.
The right place for that logic is the transaction entity.
So the best way would be:
The AddTransactionCommand handler finds the correct transaction entity and calls
Transaction.AddLine(...), which does the logic and publishes
a TransactionLineAddedEvent or a TransactionLineChangedEvent, depending on what happened.
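A minimal sketch of that shape in C#; the member names, the Publish helper, and the event constructors are illustrative assumptions, not code from the original post:
public class Transaction
{
    private readonly List<TransactionLine> _lines = new List<TransactionLine>();

    public void AddLine(int accountId, decimal amount)
    {
        var existing = _lines.FirstOrDefault(l => l.AccountId == accountId);
        if (existing != null)
        {
            // Same account already present: merge the amounts instead of adding a new line.
            existing.Amount += amount;
            Publish(new TransactionLineChangedEvent(Id, existing.Id, existing.Amount));
        }
        else
        {
            var line = new TransactionLine(accountId, amount);
            _lines.Add(line);
            Publish(new TransactionLineAddedEvent(Id, line.Id, accountId, amount));
        }
    }
}
The AddTransactionLine command handler then only loads the aggregate and calls transaction.AddLine(command.AccountId, command.Amount); the decision about which event to emit stays inside the domain.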
Think of commands and events as 'containers', DTOs of the data you are going to need in order to hydrate your AggregateRoots or send out to the world (events) for other Bounded Contexts to consume. That's it. Any other operation that is strictly related to your Domain has no place but your AggregateRoots, Entities and Value Objects.
You can add some 'validation' to your Commands, either by using DataAnnotations or your own implementation of a validate method.
public interface ICommand
{
    void Validate();
}

public class ChangeCustomerName : ICommand
{
    public string Name { get; set; }

    public void Validate()
    {
        if (Name == "No one")
        {
            throw new InvalidOperationException("Sorry Aria Stark... we need a name here!");
        }
    }
}
I am working on Oracle 10gR2.
And here is my problem -
I have a procedure, let's call it *proc_parent* (inside a package), which is supposed to call another procedure, let's call it *user_creation*. I have to call *user_creation* inside a loop, which reads some columns from a table; these column values are passed as parameters to the *user_creation* procedure.
The code is like this:
FOR i IN (SELECT community_id,
password,
username
FROM customer
WHERE community_id IS NOT NULL
AND created_by = 'SRC_GLOB'
)
LOOP
user_creation (i.community_id,i.password,i.username);
END LOOP;
COMMIT;
The *user_creation* procedure invokes a web service for some business logic and then, based on the response, updates a table.
I need to find a way to use multi-threading here, so that I can run multiple instances of this procedure to speed things up. I know I can use *DBMS_SCHEDULER* and probably *DBMS_ALERT*, but I am not able to figure out how to use them inside a loop.
Can someone guide me in the right direction?
Thanks,
Ankur
What you can do is submit lots of jobs at the same time. See Example 28-2, Creating a Set of Lightweight Jobs in a Single Transaction.
This fills a PL/SQL table with all the jobs you want to submit in one transaction, all at the same time. As soon as they are submitted (enabled) they will start running, as many as the system can handle, or as many as are allowed by a resource manager plan.
The overhead of lightweight jobs is very... minimal/light.
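A rough sketch of the idea, using plain PLSQL_BLOCK jobs rather than the lightweight-job array API from that example; the job-name prefix and the quoting of the arguments are assumptions (community_id is treated as numeric here):
BEGIN
   FOR i IN (SELECT community_id, password, username
               FROM customer
              WHERE community_id IS NOT NULL
                AND created_by = 'SRC_GLOB')
   LOOP
      DBMS_SCHEDULER.CREATE_JOB(
         job_name   => DBMS_SCHEDULER.GENERATE_JOB_NAME('USR_CRT_'),
         job_type   => 'PLSQL_BLOCK',
         job_action => 'BEGIN user_creation(' || i.community_id || ', '''
                       || i.password || ''', ''' || i.username || '''); END;',
         enabled    => TRUE);   -- the job starts as soon as it is enabled
   END LOOP;
END;
/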
I would like to close this question. DBMS_SCHEDULER as well as DBMS_JOB (though DBMS_SCHEDULER is preferred) can be used inside the loop to submit and execute the job.
For instance, here's some sample code, using DBMS_JOB, which can be invoked inside a loop:
...
FOR i IN (SELECT community_id,
password,
username
FROM customer
WHERE community_id IS NOT NULL
AND created_by = 'SRC_GLOB'
)
LOOP
   DBMS_JOB.SUBMIT(JOB  => jobnum,
                   WHAT => 'BEGIN user_creation(' || i.community_id || ', '''
                           || i.password || ''', ''' || i.username || '''); END;');
   COMMIT;
END LOOP;
A COMMIT after SUBMIT kicks off the job (and hence the procedure) in parallel. Note that the row values have to be concatenated into the WHAT string: the anonymous block is executed later in a separate session, where the loop variable i no longer exists.
I'm trying to create a pagination index view in CouchDB that lists the doc._id for every Nth document found.
I wrote the following map function, but the pageIndex variable doesn't reliably start at 1 - in fact it seems to change arbitrarily depending on the emitted value or the index length (e.g. 50, 55, 10, 25 - all start with a different file, though I seem to get the correct number of files emitted).
function(doc) {
if (doc.type == 'log') {
if (!pageIndex || pageIndex > 50) {
pageIndex = 1;
emit(doc.timestamp, null);
}
pageIndex++;
}
}
What am I doing wrong here? How would a CouchDB expert build this view?
Note that I don't want to use the "startkey + count + 1" method that's been mentioned elsewhere, since I'd like to be able to jump to a particular page or the last page (user expectations and all), I'd like to have a friendly "?page=5" URI instead of "?startkey=348ca1829328edefe3c5b38b3a1f36d1e988084b", and I'd rather CouchDB did this work instead of bulking up my application, if I can help it.
Thanks!
View functions (map and reduce) are purely functional. Side-effects such as setting a global variable are not supported. (When you move your application to BigCouch, how could multiple independent servers with arbitrary subsets of the data know what pageIndex is?)
Therefore the answer will have to involve a traditional map function, perhaps keyed by timestamp.
function(doc) {
if (doc.type == 'log') {
emit(doc.timestamp, null);
}
}
How can you get every 50th document? The simplest way is to add a skip=0, skip=50, or skip=100 parameter (together with limit=50). However, that is not ideal (see below).
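For instance (the database, design document, and view names here are made up), pages 1, 2 and 3 of 50 rows each would be:
GET /mydb/_design/logs/_view/by_timestamp?limit=50&skip=0
GET /mydb/_design/logs/_view/by_timestamp?limit=50&skip=50
GET /mydb/_design/logs/_view/by_timestamp?limit=50&skip=100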
A way to pre-fetch the exact IDs of every 50th document is a _list function which only outputs every 50th row. (In practice you could use Mustache.JS or another template library to build HTML.)
function() {
    var ddoc = this,
        pageIndex = 0,
        first = true,
        row;

    send("[");
    while(row = getRow()) {
        if(pageIndex % 50 == 0) {
            if(!first) { send(","); }   // keep the output valid JSON
            send(JSON.stringify(row));
            first = false;
        }
        pageIndex += 1;
    }
    send("]");
}
This will work for many situations; however, it is not perfect. Here are some considerations I have in mind, not necessarily showstoppers, but it depends on your specific situation.
There is a reason the pretty URLs are discouraged. What does it mean if I load page 1, then a bunch of documents are inserted within the first 50, and then I click to page 2? If the data is changing a lot, there is no perfect user experience, the user must somehow feel the data changing.
The skip parameter and example _list function have the same problem: they do not scale. With skip you are still touching every row in the view starting from the beginning: finding it in the database file, reading it from disk, and then ignoring it, over and over, row by row, until you hit the skip value. For small values that's quite convenient but since you are grouping pages into sets of 50, I have to imagine that you will have thousands or more rows. That could make page views slow as the database is spinning its wheels most of the time.
The _list example has a similar problem, however you front-load all the work, running through the entire view from start to finish, and (presumably) sending the relevant document IDs to the client so it can quickly jump around the pages. But with hundreds of thousands of documents (you call them "log" so I assume you will have a ton) that will be an extremely slow query which is not cached.
In summary, for small data sets you can get away with the page=1, page=2 form; however, you will bump into problems as your data set gets big. With the release of BigCouch, CouchDB is even better for log storage and analysis, so (if that is what you are doing) you will definitely want to consider how high to scale.