For my SFTP client project, I am using Spring Integration. We have different clients and have to connect to different SFTP servers, but all of the logic is the same, so I have abstracted it out into AbstractSFTPEndPoint. Each client-specific class implements getClientId(), which AbstractSFTPEndPoint uses to look up client-specific details like the SFTP credentials.
However, even though the logic is identical for all clients, I still have to implement a specific class for each client, mainly because we need a separate MessageSource for each one.
How can I get rid of this duplication?
public class SFTPEndPointForClientAAAA extends AbstractSFTPEndPoint {

    public String getClientId() {
        return "clientAAAA";
    }

    @Bean(name = "channelForClientAAAA")
    public QueueChannel inputFileChannel() {
        return super.inputFileChannel();
    }

    @ServiceActivator(inputChannel = "channelForClientAAAA", poller = @Poller(fixedDelay = "500"))
    public void serviceActivator(Message message) {
        super.serviceActivator(message);
    }

    @Bean(name = "messageSourceForClientAAAA")
    @InboundChannelAdapter(value = "channelForClientAAAA",
            poller = @Poller(fixedDelay = "50", maxMessagesPerPoll = "2"))
    public MessageSource messageSource() {
        return super.messageSource();
    }
}
Basically I have a bunch of SFTP hosts to connect to, applying the same logic to each. I want that to happen automatically, without having to implement a class for each SFTP host.
See the dynamic FTP sample. It uses XML, but the same techniques apply to Java configuration. It uses outbound adapters; inbound adapters are a little more complicated because you might need to hook them into a common context. There are links in the README for how to do that.
However, I recently answered a similar question for multiple IMAP mail adapters using Java configuration, and then a follow-up question.
You should be able to use the technique used there.
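For reference, here is a minimal sketch of that technique using the Java DSL's IntegrationFlowContext to register one inbound flow per SFTP host at runtime. The session-factory setup, directory names, and flow ids are assumptions for illustration, not taken from your project:

import java.io.File;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.integration.dsl.IntegrationFlow;
import org.springframework.integration.dsl.IntegrationFlows;
import org.springframework.integration.dsl.Pollers;
import org.springframework.integration.dsl.context.IntegrationFlowContext;
import org.springframework.integration.sftp.dsl.Sftp;
import org.springframework.integration.sftp.session.DefaultSftpSessionFactory;
import org.springframework.stereotype.Component;

@Component
public class SftpFlowRegistrar {

    @Autowired
    private IntegrationFlowContext flowContext;

    // Call once per client, e.g. while iterating over the client-specific credentials.
    public void registerClientFlow(String clientId, DefaultSftpSessionFactory sessionFactory) {
        IntegrationFlow flow = IntegrationFlows
                .from(Sftp.inboundAdapter(sessionFactory)
                                .localDirectory(new File("sftp-in/" + clientId)) // assumed local dir
                                .remoteDirectory("/outbox"),                     // assumed remote dir
                        e -> e.poller(Pollers.fixedDelay(50).maxMessagesPerPoll(2)))
                .handle(message -> {
                    // the common, client-agnostic logic goes here
                })
                .get();

        flowContext.registration(flow)
                .id(clientId + ".sftpFlow")
                .register();
    }
}

Each registration creates its own MessageSource, poller, and channel, so the per-client subclasses become unnecessary.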
How do we use a custom CqlSession in a Spring WebFlux application combined with the Spring starter for reactive Cassandra?
I am currently doing the following, which is working perfectly:
public class BaseCassandraConfiguration extends AbstractReactiveCassandraConfiguration {

    @Bean
    @NonNull
    @Override
    public CqlSessionFactoryBean cassandraSession() {
        final CqlSessionFactoryBean cqlSessionFactoryBean = new CqlSessionFactoryBean();
        cqlSessionFactoryBean.setContactPoints(contactPoints);
        cqlSessionFactoryBean.setKeyspaceName(keyspace);
        cqlSessionFactoryBean.setLocalDatacenter(datacenter);
        cqlSessionFactoryBean.setPort(port);
        cqlSessionFactoryBean.setUsername(username);
        cqlSessionFactoryBean.setPassword(passPhrase);
        return cqlSessionFactoryBean;
    }
}
However, I would like to use a custom session, something like:
CqlSession session = CqlSession.builder().build();
How do we tell this configuration to use it?
Thank you
Option 1:
If you are looking to completely override the auto-configured CqlSession bean, you can do so by providing your own CqlSession bean, i.e.:
@Bean
public CqlSession cassandraSession() {
    return CqlSession.builder().withClientId(MyClientId).build();
}
The downside of overriding the entire bean is that you lose the ability to configure the session via application properties, along with the defaults Spring Boot ships with.
Option 2:
If you want to keep the default values provided by Spring Boot and retain the ability to configure the session via application properties, you can use a CqlSessionBuilderCustomizer to apply specific customizations to the CqlSession. This can be achieved by defining a bean of that type, i.e.:
@Bean
public CqlSessionBuilderCustomizer myCustomiser() {
    return cqlSessionBuilder -> cqlSessionBuilder.withClientId(MyClientId);
}
My personal preference is Option 2, as it maintains the functionality provided by Spring Boot, which in my opinion results in an application that is easier to maintain over time.
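For example, with Option 2 the usual starter properties keep working alongside the customizer; a minimal sketch, assuming the standard spring-boot-starter-data-cassandra-reactive property names:

spring.data.cassandra.contact-points=127.0.0.1
spring.data.cassandra.port=9042
spring.data.cassandra.keyspace-name=my_keyspace
spring.data.cassandra.local-datacenter=datacenter1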
Taking the plunge into Java after a long time in .NET.
What I am looking for is an example of how to periodically download a file, read the text from it, and then take some action based on what was read, using the Spring Integration library and the annotation-based approach.
I want to pull a GTFS-formatted zip file from a transit provider. The provider publishes a simple text file with a timestamp to indicate the last publishing time.
Specifically the producers of the data publish a text file at:
https://someserver.com/gfts/published.txt
This file has a simple timestamp to indicate when their data file was last published.
Then there is the data:
https://someserver.com/gfts/schedule.zip
I have tried to find some examples of how to go about polling the "published" file. Basically I want to periodically download the file and check the timestamp to determine whether the schedule should be downloaded.
Most of the examples I have seen use the XML-based configuration with Spring, and I am barely holding on with the annotation-based approach. I have also seen examples of downloading a file using FTP / SFTP.
I need to use HTTP, and I also need to include Basic Authorization (in the header).
This is as far as I have gotten, and I am not sure how to go about wiring this up.
From the Spring Integration docs, this is how I am supposed to declare an outbound gateway (I think that is what I need?).
The question is: now what? Do I need the HttpRequestExecutingMessageHandler to save the stream (file) to a local file so I can read the contents and take some other action?
@Configuration
@EnableIntegration
public class GtfsConfiguration {

    @Bean
    public MessageChannel fileUpdateChannel() {
        return new DirectChannel();
    }

    @Bean
    @ServiceActivator(inputChannel = "fileUpdateChannel", poller = @Poller(fixedDelay = "5000"))
    public HttpRequestExecutingMessageHandler fileUpdateGateway() {
        HttpRequestExecutingMessageHandler handler =
                new HttpRequestExecutingMessageHandler("https://someserver.com/gtfs/raw/published.txt");
        handler.setHttpMethod(HttpMethod.GET);
        handler.setExpectedResponseType(byte[].class);
        return handler;
    }
}
If you need to download such a file periodically, you need to use a "fake" Inbound Channel Adapter, for example:
@Bean
@InboundChannelAdapter(value = "fileUpdateChannel",
        poller = @Poller(fixedDelay = "1000", maxMessagesPerPoll = "1"))
public MessageSource<String> downloadFileSchedule() {
    return () -> new GenericMessage<>("");
}
The @ServiceActivator for the HttpRequestExecutingMessageHandler is then going to be called every second. You don't need a @Poller on the @ServiceActivator; it isn't going to do anything by itself. Plus, your fileUpdateChannel is a DirectChannel, not a QueueChannel.
I don't think you need to save the downloaded file locally. I would even say that handler.setExpectedResponseType(String.class); is fully enough to get the file content as the reply message payload for downstream analysis.
The easiest way to configure a Basic Authorization is with the Apache Commons HTTP Client:
CredentialsProvider provider = new BasicCredentialsProvider();
UsernamePasswordCredentials credentials
= new UsernamePasswordCredentials("user1", "user1Pass");
provider.setCredentials(AuthScope.ANY, credentials);
HttpClient client = HttpClientBuilder.create()
.setDefaultCredentialsProvider(provider)
.build();
and use this in an HttpComponentsClientHttpRequestFactory, which you should then inject into the mentioned HttpRequestExecutingMessageHandler via its setRequestFactory(ClientHttpRequestFactory requestFactory) method.
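Putting that together, a minimal sketch of the handler wiring (the URL comes from your snippet; the HttpClient parameter assumes the Basic-Auth-enabled client above is registered as a bean, and the output channel name is an assumption):

@Bean
@ServiceActivator(inputChannel = "fileUpdateChannel")
public HttpRequestExecutingMessageHandler fileUpdateGateway(HttpClient client) {
    HttpRequestExecutingMessageHandler handler =
            new HttpRequestExecutingMessageHandler("https://someserver.com/gtfs/raw/published.txt");
    handler.setHttpMethod(HttpMethod.GET);
    handler.setExpectedResponseType(String.class);
    // inject the Basic-Auth-enabled Apache client from above
    handler.setRequestFactory(new HttpComponentsClientHttpRequestFactory(client));
    handler.setOutputChannelName("analyzeChannel"); // assumed channel for the timestamp check
    return handler;
}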
Currently, code similar to the following exists in one of our applications:
@Component
public class ProcessRequestImpl {

    private ExecutorService executorService;

    public void process(...) {
        // code to pre-process request
        executorService.execute(new Runnable() {
            public void run() {
                ProcessRequestImpl.this.doWork(...);
            }
        });
    }

    private void doWork(...) {
        // register in external file that request is being processed
        // call external service to handle request
    }
}
The intent of this is to create a queue of requests to the external service. The external service may take some time to process each incoming request. After it handles each one, it will update the external file to register that the specific request has been processed.
ProcessRequestImpl itself is stateless, in that all state is set in the constructor and there is no external access to that state. The process() method is called by another component in the application.
If this were to be implemented in a Spring Integration application, which of the following two approaches would be best recommended:
Keep the above code as is.
Extract doWork() into a separate endpoint, configure that endpoint to receive messages on a channel, and use configuration to achieve the multithreading in place of the executor service.
Some of the reasons we are looking at Spring Integration are as follows:
To remove the workflow logic from the code itself, so that the workflow and the chain of processing is evident on a higher level.
To simplify each class, enhancing readability and testability.
To avoid threading code if possible, and define that at a higher level of abstraction in configuration.
Given the sample code, could those goals be achieved using Spring Integration? Also, what would be an example of the DSL to achieve that?
Thanks
Something like this:
@Bean
public IntegrationFlow flow() {
    return IntegrationFlows.from(SomeGatewayInterface.class)
            .handle("someBean", "preProcess")
            .channel(MessageChannels.executor(someTaskExecutorBean()))
            .handle("someBean", "doWork")
            .get();
}
The argument passed to the gateway method becomes the payload handed to the preProcess method, which would return some object that becomes the message payload, which in turn becomes the parameter passed to doWork.
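Where SomeGatewayInterface might be as simple as the following sketch (the payload type is hypothetical; IntegrationFlows.from(Class) builds the gateway proxy for you):

public interface SomeGatewayInterface {

    // the argument becomes the message payload entering the flow
    void process(MyRequest request);
}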
I have a simple application that injects another component
@ComponentScan
@EnableAutoConfiguration
@Configuration
class Application {

    static void main(String[] args) {
        SpringApplication.run(Application, args)
    }

    @Bean
    AuthorizationServerTokenServices tokenServices() {
        return MY THING HERE
    }
}
I'd like a quick/minimal way to new this up and grab the bean Spring Boot wires up (tokenServices in this example). I'm trying to get at this to verify some configuration/settings/etc. using TestNG.
I should also say that I'm not using any XML to configure this (using Gradle/Groovy/Spring Boot).
You can easily introduce a conditional bean with the help of Spring profiles.
In your case the code would look like:
@Configuration
@Profile("tokenService")
public class TestTokenServiceConfig {

    @Primary
    @Bean
    AuthorizationServerTokenServices tokenServices() {
        //implementation
    }
}
The custom implementation you supply in this class will only be used by Spring when the tokenService profile is active. The use of @Primary is needed to make Spring use the specified bean instead of any others present in the application context.
Also note that since you are going to be using the custom service in a test environment, you could easily mock the implementation using Mockito (or whatever other mocking framework you prefer).
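For instance, a minimal sketch of such a mock inside the profile-guarded configuration above (assuming Mockito is on the test classpath):

@Primary
@Bean
AuthorizationServerTokenServices tokenServices() {
    // a Mockito stand-in; stub whatever behavior the tests need
    return Mockito.mock(AuthorizationServerTokenServices.class);
}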
And the actual integration test would be something like:
@RunWith(SpringJUnit4ClassRunner.class)
@SpringApplicationConfiguration(classes = Application.class)
@ActiveProfiles("tokenService")
class YourIntegrationTest {

    @Autowired
    AuthorizationServerTokenServices tokenServices;

    //test
}
Here are my parameters:
Simple NServiceBus Saga implementation using the default builder
In-house ORM on top of SQL Server
Multitenancy - I have two ASP.NET MVC 4 domains running on the same website, each with their own databases
We configure our ORM using a static method like so:
public class EndpointConfig : IConfigureThisEndpoint, IWantCustomInitialization {
    public void Init() {
        var bus = Configure.With()
            .AutofacBuilder()
            .UnicastBus().LoadMessageHandlers().DoNotAutoSubscribe()
            .XmlSerializer()
            .MsmqTransport().IsTransactional(true).PurgeOnStartup(false)
            .MsmqSubscriptionStorage()
            .Sagas().RavenSagaPersister().InstallRavenIfNeeded()
            .UseInMemoryTimeoutPersister()
            .CreateBus()
            .Start();

        SlenderConfiguration.Init(bus);
    }
}
public class SlenderConfiguration {
    private static ORMScope scope { get; set; }

    public static void Init(IBus bus)
    {
        ORMConfig.GetScope = () =>
        {
            var environment = "dev";
            if (bus.CurrentMessageContext.Headers.ContainsKey("Environment"))
                environment = bus.CurrentMessageContext.Headers["Environment"];

            if (scope == null)
                scope = new SlenderScope(ConfigurationManager.ConnectionStrings[environment].ConnectionString);

            return scope;
        };
    }
}
This works fine in our single-tenant Beta environment - it's fine for that static scope to get re-used because the environment header is always the same for a given deployment.
It's my understanding that this won't work for the multitenant situation described above, because NServiceBus will reuse threads across messages. The same scope would then be used, causing problems if the message was intended for a different environment.
What I think I want is a single scope per message, but I'm really not sure how to get there.
I've seen Unit Of Work Implementation for RavenDB, and the unit of work implementation in the full duplex sample, but I'm not sure that's the right path.
I've also seen the DependencyLifecycle enum, but I'm not sure how I can use that to resolve the scope given the way I have to set up the GetScope func.
Obviously I have no idea what's going on here. Any suggestions?
If you need to do something on a per-message basis, consider using message mutators (IMutateIncomingMessages) in addition to your unit-of-work management with some thread-static state.