I have a question about the behavior of transactional() in the following example:
@Bean
IntegrationFlow myFlow(
        EntityManagerFactory entityManagerFactory,
        TransactionManager transactionManager
) {
    return IntegrationFlows.from(MY_CHANNEL)
            .routeToRecipients(route -> route
                    .recipientFlow(flow -> flow
                            .handle(Jpa.updatingGateway(entityManagerFactory)
                                    .namedQuery(DELETE_EVERYTHING)))
                    .recipientFlow(flow -> flow
                            .handle(Jpa.updatingGateway(entityManagerFactory)))
                    .transactional(transactionManager))
            .get();
}
The idea is that I'm first deleting the contents of a database table, and immediately after I'm filling that same table with new data. Will .transactional() in this example make sure that the first step (deletion) is only committed to the DB if the second step (inserting new data) is successful? What part of the documentation can I refer to for this behavior?
You need to read this documentation: https://docs.spring.io/spring-integration/docs/current/reference/html/transactions.html#transactions.
Your assumption is correct as long as both recipients are performed on the same thread and, therefore, sequentially.
The TransactionInterceptor is going to be applied to RecipientListRouter.handleMessage(), and that's where the fan-out to the recipients happens.
Do you have any problem with that configuration, since you have come to us with such a question?
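To illustrate the same-thread caveat, here is a hedged sketch (not from the question; the executor bean is hypothetical): routing a recipient through an executor channel moves its work to another thread, so that recipient would run outside the transaction that wraps RecipientListRouter.handleMessage().
@Bean
IntegrationFlow executorRecipientFlow(
        EntityManagerFactory entityManagerFactory,
        TransactionManager transactionManager,
        Executor someExecutor // hypothetical executor bean
) {
    return IntegrationFlows.from(MY_CHANNEL)
            .routeToRecipients(route -> route
                    .recipientFlow(flow -> flow
                            // hands the message off to another thread,
                            // so this recipient leaves the surrounding transaction
                            .channel(c -> c.executor(someExecutor))
                            .handle(Jpa.updatingGateway(entityManagerFactory)))
                    .transactional(transactionManager))
            .get();
}
Keep both recipients on the calling thread, as in your configuration, to get the all-or-nothing behavior.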
This appears, to me, to be a simple problem that is probably replicated all over the place: a very basic application of the MessageHandlerChain, probably using nothing more than out-of-the-box functionality.
Conceptually, what I need is this:
(1) Polled JDBC reader (sets parameters for integration pass)
|
V
(2) JDBC Reader (uses input from (1) to fetch data to feed through channel)
|
V
(3) JDBC writer (writes data fetched by (2) to target)
|
V
(4) JDBC writer (writes additional data from the original parameters fetched in (1))
What I think I need is
Flow:
    From: JdbcPollingChannelAdapter (setup adapter)
    Handler: messageHandlerChain
        Handlers (
            JdbcPollingChannelAdapter (inbound adapter)
            JdbcOutboundGateway (outbound adapter)
            JdbcOutboundGateway (cleanup gateway)
        )
The JdbcPollingChannelAdapter does not implement the MessageHandler API, so I am at a loss how to read the actual data based on the setup step.
Since the JdbcOutboundGateway does not implement the MessageProducer API, I am at a bit of a loss as to what I need to use for the outbound adapter.
Are there OOB classes I should be using? Or do I need to somehow wrap the two adapters in BridgeHandlers to make this work?
Thanks in advance
EDIT (2)
Additional configuration problem
The setup adapter is pulling a single row back with two timestamp columns. They are being processed correctly by the "enrich headers" piece.
However, when the inbound adapter executes, the framework passes in java.lang.Object as the parameters. Not String, not Timestamp, but an actual java.lang.Object, as in new Object().
It is passing the correct number of objects, but the content and datatypes are lost. Am I correct that the ExpressionEvaluatingSqlParameterSourceFactory needs to be configured?
Message:
GenericMessage [payload=[{startTime=2020-11-18 18:01:34.90944, endTime=2020-11-18 18:01:34.90944}], headers={startTime=2020-11-18 18:01:34.90944, id=835edf42-6f69-226a-18f4-ade030c16618, timestamp=1605897225384}]
SQL in the JdbcOutboundGateway:
Select t.*, w.operation as "ops" from ADDRESS t
Inner join TT_ADDRESS w
on (t.ADDRESSID = w.ADDRESSID)
And (w.LASTUPDATESTAMP >= :payload.from[0].get("startTime") and w.LASTUPDATESTAMP <= :payload.from[0].get("endTime") )
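Regarding the ExpressionEvaluatingSqlParameterSourceFactory question above, a minimal sketch (the parameter names are illustrative, not from the original configuration) of wiring one so each named parameter is resolved by a SpEL expression against the message:
ExpressionEvaluatingSqlParameterSourceFactory factory =
        new ExpressionEvaluatingSqlParameterSourceFactory();
Map<String, String> expressions = new HashMap<>();
// resolve each named parameter from the first row of the List<Map> payload
expressions.put("startTime", "payload[0]['startTime']");
expressions.put("endTime", "payload[0]['endTime']");
factory.setParameterExpressions(expressions);
inboundAdapter.setRequestSqlParameterSourceFactory(factory); // the JdbcOutboundGateway
With such a factory in place, the query itself can reference plain parameter names such as :startTime and :endTime.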
Edit: added the solution Java DSL configuration
private JdbcPollingChannelAdapter setupAdapter;      // select only
private JdbcOutboundGateway inboundAdapter;          // select only
private JdbcOutboundGateway insertUpdateAdapter;     // update only
private JdbcOutboundGateway deleteAdapter;           // update only
private JdbcMessageHandler cleanupAdapter;           // update only

setFlow(IntegrationFlows
        .from(setupAdapter, c -> c.poller(Pollers.fixedRate(1000L, TimeUnit.MILLISECONDS).maxMessagesPerPoll(1)))
        .enrichHeaders(h -> h.headerExpression("ALC_startTime", "payload.from[0].get(\"ALC_startTime\")")
                .headerExpression("ALC_endTime", "payload.from[0].get(\"ALC_endTime\")"))
        .handle(inboundAdapter)
        .enrichHeaders(h -> h.headerExpression("ALC_operation", "payload.from[0].get(\"ALC_operation\")"))
        .handle(insertUpdateAdapter)
        .handle(deleteAdapter)
        .handle(cleanupAdapter)
        .get());

flowContext.registration(flow).id(this.getId().toString()).register();
If you would like to carry the original arguments down to the last gateway in your flow, you need to store those arguments in the headers, since after every step the payload of the reply message is going to be different and you won't have the original setup data there any more. That's first.
Second: if you deal with IntegrationFlow and the Java DSL, you don't need to worry about a messageHandlerChain, since conceptually the IntegrationFlow is a chain by itself, only much more advanced.
I'm not sure why you need to use a JdbcPollingChannelAdapter to request data on demand according to the incoming message from the source at the beginning of your flow.
You definitely still need to use a JdbcOutboundGateway for just SELECT mode. The updateQuery is optional, so that gateway is just going to perform the SELECT and return the data for you in the payload of the reply message.
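A minimal sketch (the table and query are illustrative) of such a select-only gateway:
@Bean
public JdbcOutboundGateway selectOnlyGateway(DataSource dataSource) {
    // with no update query, the gateway only runs the SELECT and returns
    // the result as the payload of the reply message
    return new JdbcOutboundGateway(dataSource, null, "SELECT * FROM source_table WHERE id = :payload");
}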
If your next two steps are just writes and you don't care about the result, you can probably just take a look at a PublishSubscribeChannel with two JdbcMessageHandler subscribers. Without a provided Executor on the PublishSubscribeChannel they are going to be executed one by one.
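A hedged sketch (bean names and SQL are illustrative, not from your configuration) of that arrangement:
@Bean
public MessageChannel writeChannel() {
    // no Executor configured: subscribers are invoked sequentially on the calling thread
    return new PublishSubscribeChannel();
}

@Bean
public IntegrationFlow firstWriter(DataSource dataSource) {
    return IntegrationFlows.from("writeChannel")
            .handle(new JdbcMessageHandler(dataSource,
                    "INSERT INTO target_table (id) VALUES (:headers[id])"))
            .get();
}

@Bean
public IntegrationFlow secondWriter(DataSource dataSource) {
    return IntegrationFlows.from("writeChannel")
            .handle(new JdbcMessageHandler(dataSource,
                    "INSERT INTO audit_table (id) VALUES (:headers[id])"))
            .get();
}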
I want to update two database tables using the Hibernate EntityManager. Currently I am updating the 2nd table after verifying that the data has been updated in the 1st table.
My question is how to roll back the 1st table if the data is not updated in the 2nd table.
This is how I am updating an individual table.
try {
    wapi = getWapiUserUserAuthFlagValues(subject, UserId);
    wapi.setFlags((int) flags);
    entityManager.getTransaction().begin();
    entityManager.merge(wapi);
    entityManager.flush();
    entityManager.getTransaction().commit();
} catch (NoResultException nre) {
    wapi = new Wapi();
    wapi.setSubject(merchant);
    wapi.setUserId(UserId);
    wapi.setFlags((int) flags);
    entityManager.getTransaction().rollback();
}
Note - I am calling separate methods to update each table's data
Thanks
I got the solution. Basically, I was calling two methods to update the 2 DB tables, and those two methods are called from one method.
Ex - I am calling methods p and q from method r.
Initially I was calling the begin, merge, flush and commit EntityManager methods in both p and q.
Now I am calling begin and commit in r, and merge and flush in p and q.
So now my tables are getting updated together, and the rollback is also simple.
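A minimal sketch (the method names follow the example above; the Wapi entities are assumed to be prepared by the caller) of that restructuring:
// r owns the transaction; p and q only merge and flush,
// so a failure in q rolls back the work done in p as well
public void r(EntityManager em, Wapi first, Wapi second) {
    try {
        em.getTransaction().begin();
        p(em, first);
        q(em, second);
        em.getTransaction().commit();
    } catch (RuntimeException e) {
        em.getTransaction().rollback();
        throw e;
    }
}

private void p(EntityManager em, Wapi entity) {
    em.merge(entity);
    em.flush();
}

private void q(EntityManager em, Wapi entity) {
    em.merge(entity);
    em.flush();
}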
Hope it will help someone. I wasted my time on this, so perhaps it can save someone else's.
Thanks
I have a Spring Integration (5.0.2) flow that reads data from an HTTP endpoint and publishes Kafka messages using the (split) data from the HTTP response.
I would like to execute a "final" action before the flow completes, some sort of "try, catch, finally", but I'm not sure what the best way to achieve this is.
This is my code:
@Bean
public IntegrationFlow startFlow() {
    return IntegrationFlows.from(() -> new GenericMessage<>(""),
            c -> c.id("flow1")
                    .poller(Pollers.fixedDelay(period, TimeUnit.MILLISECONDS, initialDelay)
                            .taskExecutor(taskExecutor)))
            .handle(Http.outboundGateway("http://...")
                    .charset("UTF-8")
                    .expectedResponseType(String.class)
                    .httpMethod(HttpMethod.GET))
            .transform(new transformer())
            .split()
            .channel(CHANNEL_1)
            .controlBus()
            .get();
}
@Bean
public IntegrationFlow toKafka(KafkaTemplate<?, ?> kafkaTemplate) {
    return IntegrationFlows.from(CHANNEL_1)
            .handle(Kafka.outboundChannelAdapter(kafkaTemplate))
            // KAFKA SPECIFIC
            .get();
}
Essentially, when all the messages have been sent (note that I'm using .split), I need to call a Spring bean to update some data.
Thanks
If you talk about "when all the messages" after the splitter, then you need to take a look at an .aggregate(): https://docs.spring.io/spring-integration/docs/5.0.3.RELEASE/reference/html/messaging-routing-chapter.html#aggregator.
The splitter populates special sequence-detail headers into each split item, and an aggregator is able to gather them back into a single entity, by default using those headers from the received messages.
Since you talk about the process after sending to Kafka, you should make your CHANNEL_1 a PublishSubscribeChannel and have the mentioned .aggregate() as the last subscriber to this channel. The service to update some data should come right after this aggregator.
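A hedged sketch (the channel bean and the UpdateService are illustrative, not from the original code) of that arrangement:
@Bean
public MessageChannel channel1() {
    // the Kafka flow and the aggregating flow below both subscribe;
    // without an Executor they are called one after the other
    return new PublishSubscribeChannel();
}

@Bean
public IntegrationFlow finalizeFlow(UpdateService updateService) { // hypothetical service
    return IntegrationFlows.from("channel1")
            .aggregate() // correlates on the sequence headers populated by .split()
            .handle(m -> updateService.update(m.getPayload()))
            .get();
}
Once every split message has been published to Kafka and reached the aggregator, the group is released and the updating service is called exactly once.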
In spring-batch, data can be passed between various steps via the ExecutionContext. You can set the details in one step and retrieve them in the next. Do we have anything of this sort in spring-integration?
My use case is that I have to pick up a file from an FTP location, then split it based on certain business logic, and then process the parts. Depending on the file name, a client id would be derived. This client id would be used in the splitter, service activator and aggregator components.
With my newbie level of expertise in Spring, I could not find anything which helps me share state for a particular run. I wanted to know if spring-integration provides this state sharing in some way.
Please let me know if there is a way to do this in spring-integration.
In Spring Integration applications there is no single ExecutionContext for state sharing. Instead, as Gary Russell mentioned, each message carries all the information within its payload or its headers.
If you use the Spring Integration Java DSL and want to transport the clientId in a message header, you can use the enrichHeaders transformer. Being supplied with a HeaderEnricherSpec, it can accept a function which returns a dynamically determined value for the specified header. For your use case this might look like:
return IntegrationFlows
        .from(/* ftp source */)
        .enrichHeaders(e -> e.headerFunction("clientId", this::deriveClientId))
        ./* split, aggregate, etc. the file according to clientId */
where the deriveClientId method might look something like:
private String deriveClientId(Message<File> fileMessage) {
    String fileName = fileMessage.getHeaders().get(FileHeaders.FILENAME, String.class);
    String clientId = /* some other logic for deriving clientId from */ fileName;
    return clientId;
}
(The FILENAME header is provided by the FTP message source.)
When you need to access the clientId header somewhere in the downstream flow, you can do it the same way as the file name above:
String clientId = message.getHeaders().get("clientId", String.class);
But make sure that the message still contains such a header, as it could have been lost somewhere among the intermediate flow items. This is likely to happen if at some point you construct a message manually and send it further. In order not to lose any headers from the preceding message, you can copy them while building:
Message<PayloadType> newMessage = MessageBuilder
        .withPayload(payloadValue)
        .copyHeaders(precedingMessage.getHeaders())
        .build();
Please note that message headers are immutable in Spring Integration. This means you can't just add or change a header of an existing message. You should create a new message or use a HeaderEnricher for that purpose. Examples of both approaches are presented above.
Typically you convey information between components in the message payload itself, or often via message headers - see Message Construction and Header Enricher
I have followed several tutorials and actually have other activities running in Azure Data Factory. This one in particular, however, doesn't perform any action, and yet it never finishes processing. In the activity window attempts, it shows the status: Running (0% complete).
I am looking for the reason, and also trying to understand how one knows what is going on at this stage for the activity. Is there a way to debug these things? I will include the source code; I'm sure it's possible I am missing something:
public class MoveBlobsToSQLActivity : IDotNetActivity
{
    public IDictionary<string, string> Execute(
        IEnumerable<LinkedService> linkedServices, IEnumerable<Dataset> datasets,
        Activity activity, IActivityLogger logger)
    {
        logger.Write("Start");

        // Get extended properties
        DotNetActivity dotNetActivityPipeline = (DotNetActivity)activity.TypeProperties;
        string sliceStartString = dotNetActivityPipeline.ExtendedProperties["SliceStart"];

        // Get linked service details
        Dataset inputDataset = datasets.Single(dataset => dataset.Name == activity.Inputs.Single().Name);
        Dataset outputDataset = datasets.Single(dataset => dataset.Name == activity.Outputs.Single().Name);

        /*
            DO STUFF
        */

        logger.Write("End");
        return new Dictionary<string, string>();
    }
}
Update 1:
After finding this post and following the instructions in the GitHub repo, I was able to debug my activity.
It was erroring out here: Dataset inputDataset = datasets.Single(dataset => dataset.Name == activity.Inputs.Single().Name); I would have expected it to finish execution with an error, but in the debugger it kept going and going to the same result until the pipeline timed out. Weird.
I removed the error, but the pipeline still never finishes, although the debugger does now :(.
Update 2:
I'm not sure the data factory is using my code from the custom activity at all. I made changes: nothing. I deleted the zip file with the code: nothing, it just says the activity is running. Nothing seems to change even if the supposed code is no longer there. I am assuming it is cached somewhere.
Could you share the ADF runId with us? We can take a look at what happened there.
For your local test (#1), it also seems strange to me. It should not hang there.
Btw, I think re-deploying the pipeline will cancel the run and start a new run with the new properties. :)