After excluding
<exclusion>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream</artifactId>
</exclusion>
I am getting error:
java.lang.IllegalArgumentException: No poller has been defined for Annotation-based endpoint, and no
default poller is available within the context
I have just one QueueChannel (I also have a MessageAggregator - might that be acting as a queue?) and a few PublishSubscribeChannels.
Anyway, without the exclusion it works fine.
What could be the issue? Why does excluding the spring-cloud-stream dependency cause this?
Because Spring Cloud Stream creates a PollerMetadata bean with the PollerMetadata.DEFAULT_POLLER name for us, for its own purposes and for convenience, according to Spring Cloud conventions.
Spring Integration by itself doesn't make any such convenience assumptions and doesn't create a default poller for us.
You just need to go into your @Configuration and declare a PollerMetadata bean there with an appropriate trigger for the polling logic. Alternatively, you can configure a local poller on the MessageAggregator endpoint which consumes from the mentioned QueueChannel.
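For example, here is a minimal sketch, assuming Java configuration (the one-second PeriodicTrigger is just an illustrative choice):

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.scheduling.PollerMetadata;
import org.springframework.scheduling.support.PeriodicTrigger;

@Configuration
public class PollerConfiguration {

    // registers the default poller that Spring Cloud Stream would otherwise have provided
    @Bean(name = PollerMetadata.DEFAULT_POLLER)
    public PollerMetadata defaultPoller() {
        PollerMetadata pollerMetadata = new PollerMetadata();
        pollerMetadata.setTrigger(new PeriodicTrigger(1000)); // poll every second
        return pollerMetadata;
    }
}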
See docs for more info: https://docs.spring.io/spring-integration/docs/current/reference/html/messaging-endpoints.html#endpoint-pollingconsumer
How to handle/assemble segments of an MQ message when using the Spring Integration JMS Jms.messageDrivenChannelAdapter? I did find some reference on how to do it when using the MQ API:
https://medium.com/@marcus_j/consuming-segmented-ibm-mq-messages-in-java-cbdee4a9ad85
MQGetMessageOptions gmo = new MQGetMessageOptions();
// ask the queue manager to return only complete (reassembled) messages
gmo.options = MQC.MQGMO_ALL_SEGMENTS_AVAILABLE
        | MQC.MQGMO_SYNCPOINT
        | MQC.MQGMO_COMPLETE_MSG;
gmo.matchOptions = MQC.MQMO_NONE;
gmo.waitInterval = 10000; // wait up to 10 seconds for a message
MQMessage message = new MQMessage();
queue.get(message, gmo);
// Do your stuff with the message
message.clearMessage();
queue.close();
manager.disconnect();
Per my understanding, I will have to pass the appropriate MQGetMessageOptions to be able to ask the queue manager to reassemble the message if it has been segmented. I could not find any reference on how to pass these options when using Spring JMS.
Unfortunately IBM MQ does not support Message segmentation in JMS:
This feature is not supported on IBM® MQ for z/OS® or by applications using IBM MQ classes for JMS.
Spring Integration doesn't provide a channel adapter implementation for the IBM MQ API.
On one hand there is a JMS bridge API over IBM MQ: https://developer.ibm.com/components/ibm-mq/tutorials/mq-develop-mq-jms/
On the other hand (as you already noticed), Spring Integration provides channel adapters for JMS: https://docs.spring.io/spring-integration/docs/current/reference/html/jms.html#jms
So, I would first try to use the official IBM JMS client and then reach out to IBM support about the de-segmentation option in their JMS client.
There is indeed nothing Spring Integration can do here.
Although you can always write your own MessageProducerSupport to perform that IBM MQ-specific logic. You need to implement doStart() and call sendMessage() from there once you have finished assembling an IBM MQ message, and so on.
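For illustration, a rough sketch of such an adapter using the plain IBM MQ classes from the snippet above; the class name, the injected MQQueue and the single blocking get() are simplifications of mine (a real adapter would run the receive loop on an executor started in doStart() and stopped in doStop()):

import java.io.IOException;

import com.ibm.mq.MQException;
import com.ibm.mq.MQGetMessageOptions;
import com.ibm.mq.MQMessage;
import com.ibm.mq.MQQueue;
import com.ibm.mq.constants.CMQC;

import org.springframework.integration.endpoint.MessageProducerSupport;

public class MqSegmentReassemblingProducer extends MessageProducerSupport {

    private final MQQueue queue; // assumed to be opened against your queue manager elsewhere

    public MqSegmentReassemblingProducer(MQQueue queue) {
        this.queue = queue;
    }

    @Override
    protected void doStart() {
        MQGetMessageOptions gmo = new MQGetMessageOptions();
        // same flags as above: ask the queue manager for the reassembled message
        gmo.options = CMQC.MQGMO_ALL_SEGMENTS_AVAILABLE | CMQC.MQGMO_SYNCPOINT | CMQC.MQGMO_COMPLETE_MSG;
        try {
            MQMessage mqMessage = new MQMessage();
            this.queue.get(mqMessage, gmo);
            byte[] body = new byte[mqMessage.getDataLength()];
            mqMessage.readFully(body);
            // hand the reassembled payload over to the output channel of this producer
            sendMessage(getMessageBuilderFactory().withPayload(body).build());
        }
        catch (MQException | IOException e) {
            throw new IllegalStateException("Failed to receive IBM MQ message", e);
        }
    }
}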
Whilst it is true that you won't be able to use out-of-the-box code to apply the gmo options on a @JmsListener or a JmsTemplate, if you have included the MQ JMS Spring Boot Starter as a dependency -
<dependency>
<groupId>com.ibm.mq</groupId>
<artifactId>mq-jms-spring-boot-starter</artifactId>
<version>2.4.1</version>
</dependency>
Then you do have access to all the MQ classes that you use in your example. So you can craft your own objects and methods and use them in your Spring application, although they won't be very Spring JMS-like, and the connections won't be managed by Spring.
The difficulty is that get message options are applied at the get, and not on the connection. To override options set on the connection you could have created your own customised connection factory bean, but not for gmo options.
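For example, a hypothetical sketch of using the base MQ classes alongside the starter (the connection details are placeholders and exception handling is omitted; none of this is managed by Spring):

// placeholder connection details - replace with your queue manager's values
MQEnvironment.hostname = "localhost";
MQEnvironment.port = 1414;
MQEnvironment.channel = "DEV.APP.SVRCONN";
MQQueueManager manager = new MQQueueManager("QM1");
MQQueue queue = manager.accessQueue("DEV.QUEUE.1",
        MQC.MQOO_INPUT_AS_Q_DEF | MQC.MQOO_FAIL_IF_QUIESCING);
// ... then apply your MQGetMessageOptions on queue.get(...) exactly as in the snippet above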
Scenario: I have 3 Spring Cloud Stream apps
1st: unmarshals the XML payload into a JAXB object
2nd: converts the JAXB payload into our domain POJO
3rd: validates the domain object
I am trying to test the 3rd app. I have included the 1st and 2nd applications as test dependencies. I have added:
<dependency>
<groupId>org.springframework.cloud</groupId>
<artifactId>spring-cloud-stream-test-support</artifactId>
<scope>test</scope>
</dependency>
Now I have about 20 XML files with various validation scenarios. The first test works fine. I am able to pick up the expected message off the channel using:
final Message<PaymentInstruction> mceMessage =
(Message<PaymentInstruction>) collector.forChannel(
validationBindings.mce()).take();
The 2nd test that is run is where I have an issue. The test just sits at 'take'.
I have done some digging in the spring-integration-core-4.3.8.jar and have traced the issue to org.springframework.integration.dispatcher.AbstractDispatcher:
@Override
public synchronized boolean addHandler(MessageHandler handler) {
    Assert.notNull(handler, "handler must not be null");
    Assert.isTrue(this.handlers.size() < this.maxSubscribers, "Maximum subscribers exceeded");
    boolean added = this.handlers.add(handler);
    if (this.handlers.size() == 1) {
        this.theOneHandler = handler;
    }
    else {
        this.theOneHandler = null;
    }
    return added;
}
There is a handler that was already added for the first test, so it assigns null to 'this.theOneHandler'.
My options are:
Refactor the code in the other 2 projects so that I can do the unmarshalling and creation of my domain object without needing the Spring Cloud app code
I can create an individual unit test class per test case; however, I'd rather not go this route as the Spring Boot startup time is high and would be multiplied by the number of test cases
Do I have some missing configuration that would allow me to have these multiple handlers, or am I breaking the way Spring Cloud Stream is meant to be used?
Environment:
Java 8 Update 131
org.springframework.cloud:spring-cloud-dependencies:Dalston.RELEASE
org.springframework.boot:spring-boot-dependencies:1.5.2.RELEASE
theOneHandler is a dispatching optimization used when a channel has only one subscriber.
It sounds like you are binding all your services to the same channel, which is not what you want.
Bottom line is you can't do what you are trying to do because each of the services is using the same channel name (presumably input).
You would need to load each service in a separate application context and wire them together with bridges.
EDIT
You could also test them using an aggregated application.
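For example, a rough sketch using the aggregate application support available in that release train; UnmarshallerApp, ConverterApp and ValidatorApp are placeholder names for your three applications, and the first one is assumed to be bound as the source of the chain:

import org.springframework.boot.autoconfigure.SpringBootApplication;
import org.springframework.cloud.stream.aggregate.AggregateApplicationBuilder;

@SpringBootApplication
public class AggregatedApplication {

    public static void main(String[] args) {
        // chain the three apps in-process so messages flow directly
        // from one to the next without going through the binder
        new AggregateApplicationBuilder()
                .from(UnmarshallerApp.class)
                .via(ConverterApp.class)
                .to(ValidatorApp.class)
                .run(args);
    }
}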
I'm currently implementing a flow on a Spring Integration-based (ver. 3.0.1.RELEASE) application that requires to store messages on a JMS queue to be picked up later.
For that, I've been trying to use a Spring Integration JMS inbound channel adapter with a custom selector, and then picking up the message from the queue by changing the JMS selector of the JmsDestinationPollingSource to some matching ID included as a header property.
One of the requirements is that I cannot add a new service or a Java method, so I've been trying to sort it out using a Control Bus, but I keep receiving the same error when I send the message to set the messageSelector to something different.
Inbound Channel Adapter definition:
<int-jms:inbound-channel-adapter id="inboundAdapter"
        channel="inboundChannel"
        destination-name="bufferQueue"
        connection-factory="connectionFactory"
        selector="matchingID = 'NO VALUE'">
    <int:poller fixed-delay="1000"/>
</int-jms:inbound-channel-adapter>
Message:
@'inboundAdapter.source'.setMessageSelector("matchingID = 'VALUE'")
Error:
EvaluationException: The method 'public void org.springframework.integration.jms.JmsDestinationPollingSource.setMessageSelector(java.lang.String)' is not supported by this command processor. If using the Control Bus, consider adding @ManagedOperation or @ManagedAttribute.
Which, AFAIK, means that the JmsDestinationPollingSource class is not Control Bus manageable, as it's not passing the ControlBusMethodFilter.
Is this approach nonviable, or is there something I'm missing? Is there any way to set the selector dynamically using SI XML configuration files only?
First of all, it is strange to use a Java tool and not be allowed to write Java code...
But that is your choice, or as you said requirements.
Change the employer! ;-)
That's correct: the Control Bus allows only @ManagedOperation and @ManagedAttribute methods, and JmsDestinationPollingSource.setMessageSelector isn't one of them. We could make it so, but does it make much sense when we can reach the same goal with a slightly different approach?
<int:outbound-channel-adapter id="changeSelectorChannel"
        ref="inboundAdapter.source" method="setMessageSelector"/>
where the new selector expression should be the payload of the Message sent to this channel.
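For illustration only, a minimal Java sketch of what driving that channel looks like (the same message can, of course, be produced by any upstream XML flow instead; 'changeSelectorChannel' is the channel implicitly created by the id above, and 'applicationContext' is assumed to be available):

// hypothetical usage: the String payload becomes the argument of setMessageSelector(String)
MessageChannel changeSelectorChannel =
        applicationContext.getBean("changeSelectorChannel", MessageChannel.class);
changeSelectorChannel.send(MessageBuilder.withPayload("matchingID = 'VALUE'").build());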
I have a Spring WS endpoint as part of a Spring Integration project and I would like to access the SOAP header. When I add the SoapHeader to the method parameters, I get the following exception:
[10/05/16 05:00:05:005 PDT] localhost-startStop-1 DEBUG springframework.integration.util.MessagingMethodInvokerHelper.doWith(): Method [public com.bstonetech.ptms.integration.model.ws.external.contract.GetContractResponse com.bstonetech.ptms.integration.service.ws.GetContractEndpoint.getContract(com.bstonetech.ptms.integration.model.ws.external.contract.GetContractRequest,org.springframework.ws.context.MessageContext) throws java.lang.Exception] is not eligible for Message handling Found more than one parameter type candidate: [@org.springframework.ws.server.endpoint.annotation.RequestPayload com.bstonetech.ptms.integration.model.ws.external.contract.GetContractRequest] and [org.springframework.ws.context.MessageContext].
[10/05/16 05:00:05:005 PDT] localhost-startStop-1 WARN web.context.support.XmlWebApplicationContext.refresh(): Exception encountered during context initialization - cancelling refresh attempt
java.lang.IllegalArgumentException: Target object of type [class com.bstonetech.ptms.integration.service.ws.GetContractEndpoint] has no eligible methods for handling Messages.
The same error occurs when using MessageContext messageContext too.
I am obviously missing something. Any help would be appreciated.
Integration is as follows:
<oxm:jaxb2-marshaller id="contractMarshaller" context-path="com.bstonetech.ptms.integration.model.ws.external.contract"/>
<ws:inbound-gateway id="getContractWs" request-channel="inboundGetContractChannel" mapped-request-headers="fileId" mapped-reply-headers="fileId"
marshaller="contractMarshaller" unmarshaller="contractMarshaller"/>
<int:service-activator id="contractEndpoint" input-channel="inboundGetContractChannel" ref="getContractEndpoint"/>
The endpoint looks as follows:
@Endpoint
public class GetContractEndpoint {

    private static final String NAMESPACE_URI = "http://bstonetech.com/contract";

    @PayloadRoot(namespace = NAMESPACE_URI, localPart = "GetContractRequest")
    @ResponsePayload
    public GetContractResponse getContract(@RequestPayload GetContractRequest request, SoapHeader soapHeader) throws Exception {
        .....
    }
}
Sorry for the delay. We were busy with the Spring Integration 4.3.0.RC1 release :-).
Well, it looks like you have missed something and therefore ended up with a mix of concerns.
The Spring WS POJO method invocation annotations (@RequestPayload, @PayloadRoot) and SOAP-specific argument injection are really for the plain POJO case, when you have @Endpoint on the service and rely on @EnableWs or a similar Spring WS mechanism.
On the other hand, right, the Spring Integration WS module is fully based on the Spring WS project, but it aims to convert everything into the Spring Integration messaging model as early as possible. Therefore the result of the <ws:inbound-gateway> is a Message sent to the request-channel="inboundGetContractChannel". In your case the payload will already be unmarshalled to your domain object according to the JAXB mapping.
The <service-activator> can now deal only with the messaging infrastructure and knows nothing about SOAP. That is the general purpose of messaging: you can switch to a completely different source (e.g. a DB) but still use the same <service-activator>.
To support POJO method invocation there are useful annotations like @Payload, @Header, etc.
To be consistent and still provide some SOAP info, the AbstractWebServiceInboundGateway consults the DefaultSoapHeaderMapper and extracts the source.getSoapHeader() state as separate MessageHeaders. So, you still have access to the desired headers from the request.
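So, assuming the gateway maps the header you need (as with mapped-request-headers="fileId" above), a minimal sketch of a service-activator method that receives it as a plain message header instead of a SoapHeader argument:

import org.springframework.messaging.handler.annotation.Header;
import org.springframework.messaging.handler.annotation.Payload;

public class GetContractEndpoint {

    // invoked by the <int:service-activator>; no Spring WS annotations are needed here
    public GetContractResponse getContract(@Payload GetContractRequest request,
            @Header(name = "fileId", required = false) String fileId) {
        // the mapped SOAP header value arrives as a regular Spring Integration message header
        GetContractResponse response = new GetContractResponse();
        // ... populate the response from the unmarshalled request ...
        return response;
    }
}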
I have added a bean called metadataStore to my Spring Boot + Spring Integration application and expected that the FTP synchronization would be persisted and intact even after a server restart.
Nevertheless, my early tests suggest otherwise; if I start the server and let it pick up and process 3 test files and then restart the server, those same 3 files will be picked up and processed again - as if no persistent metadataStore was defined at all.
I wonder if I am missing some configuration details when setting up the datastore...
@Configuration
public class MetadataStoreConfiguration {

    @Bean
    public PropertiesPersistingMetadataStore metadataStore() {
        PropertiesPersistingMetadataStore metadataStore = new PropertiesPersistingMetadataStore();
        return metadataStore;
    }
}
Also, I see in the Spring Integration reference manual a short example on how to set up an idempotent receiver and metadata store. Is this what my implementation is lacking?
If so, and if I have to set this up like in the example, where would I define my metadataStore.get and metadataStore.put calls? The outbound adapter I am using doesn't provide me with an expression attribute... Here is my naive and incomplete attempt at this:
<int-file:inbound-channel-adapter id="inboundLogFile"
auto-create-directory="true"
directory="${sftp.local.dir}"
channel="fileInboundChannel"
prevent-duplicates="true">
<int:poller fixed-rate="${setup.inbound.poller.rate}"/>
</int-file:inbound-channel-adapter>
<int:filter id="fileFilter" input-channel="fileInboundChannel"
output-channel="idempotentServiceChannel"
discard-channel="fileDiscardChannel"
expression="@metadataStore.get(payload.name) == null"/>
This is the outbound adapter used in the example:
<int:outbound-channel-adapter channel="idempotentServiceChannel" expression="@metadataStore.put(payload.name, '')"/>
In my ftp outbound adapter I can't insert the above expression :(
<int-ftp:outbound-channel-adapter id="sdkOutboundFtp"
channel="idempotentServiceChannel"
session-factory="ftpsCachingSessionFactory"
charset="UTF-8"
auto-create-directory="true"
use-temporary-file-name="false"
remote-file-separator="/"
remote-directory-expression="${egnyte.remote.dir}"
* NO EXPRESSION POSSIBLE HERE *
mode="REPLACE">
</int-ftp:outbound-channel-adapter>
By default, the PropertiesPersistingMetadataStore only persists the state on a normal application shutdown; it's kept in memory until then.
In 4.1.2, we changed it to implement Flushable so users can flush the state at any time.
Consider using a FileSystemPersistentAcceptOnceFileListFilter in the local-filter on the inbound adapter instead of a separate filter element. See the documentation for more information.
Starting with version 4.1.5 this filter has an option flushOnUpdate to flush() the metadata store on every update.
Other metadata stores that use an external server (Redis, Mongo, Gemfire) don't need to be flushed.
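For example, a minimal sketch, assuming Spring Integration 4.1.5+ and that the filter bean is then referenced from the adapter's local-filter (or filter) attribute; the "logFiles-" key prefix is an arbitrary choice:

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.file.filters.FileSystemPersistentAcceptOnceFileListFilter;
import org.springframework.integration.metadata.PropertiesPersistingMetadataStore;

@Configuration
public class FileFilterConfiguration {

    @Bean
    public FileSystemPersistentAcceptOnceFileListFilter persistentFilter(
            PropertiesPersistingMetadataStore metadataStore) {
        FileSystemPersistentAcceptOnceFileListFilter filter =
                new FileSystemPersistentAcceptOnceFileListFilter(metadataStore, "logFiles-");
        // flush the backing properties file after every update so the state
        // also survives an abnormal shutdown
        filter.setFlushOnUpdate(true);
        return filter;
    }
}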