How to pause a Spring Cloud Data Flow Source class from sending data to Kafka? - spring-integration

I am working on a Spring Cloud Data Flow application. Following is the code snippet:
@Bean
@InboundChannelAdapter(channel = TbeSource.PR1, poller = @Poller(fixedDelay = "2000"))
public MessageSource<Product> getProductSource(ProductBuilder dataAccess) {
    return new MessageSource<Product>() {
        @SneakyThrows
        @Override
        public Message<Product> receive() {
            System.out.println("calling method");
            return MessageBuilder.withPayload(dataAccess.getNext()).build();
        }
    };
}
In the above code, the getNext() method fetches data from the database and returns that object; once the data has been completely read, it returns null.
We can't return null to this MessageSource.
So are there any options available to pause and resume this Source connection class whenever we need to?
Has anyone faced / overcome this scenario?

First of all, you can just have a Supplier<Product> instead of that MessageSource, and your code would be as simple as this:
return () -> dataAccess.getNext();
A null result is valid here: no message is emitted in that case, and no error is raised, since the framework handles a null result properly.
You can still have idle functionality on that @InboundChannelAdapter when the result of the method call is null. For that you need to take a look at the SimpleActiveIdleReceiveMessageAdvice. See the docs for more info: https://docs.spring.io/spring-integration/docs/5.3.4.RELEASE/reference/html/core.html#simpleactiveidlereceivemessageadvice
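Putting the two suggestions together, a minimal sketch could look like the following. This is an assumption-laden illustration, not verified configuration: it presumes Spring Integration 5.3+, that your version supports Supplier beans with @InboundChannelAdapter, and it reuses the channel and bean names from the question.

@Bean
@InboundChannelAdapter(channel = TbeSource.PR1, poller = @Poller("activeIdlePoller"))
public Supplier<Product> productSupplier(ProductBuilder dataAccess) {
    return dataAccess::getNext; // a null result simply means "no message this poll"
}

@Bean
public PollerMetadata activeIdlePoller() {
    // the advice mutates this trigger's period depending on whether data arrived
    DynamicPeriodicTrigger trigger = new DynamicPeriodicTrigger(2000);
    SimpleActiveIdleReceiveMessageAdvice advice = new SimpleActiveIdleReceiveMessageAdvice(trigger);
    advice.setActivePollPeriod(2000);  // keep polling every 2s while getNext() returns data
    advice.setIdlePollPeriod(60000);   // back off to 1 minute after a null result
    PollerMetadata poller = new PollerMetadata();
    poller.setTrigger(trigger);
    poller.setAdviceChain(Collections.singletonList(advice));
    return poller;
}

With this shape the source is effectively "paused" while the table is exhausted: the poller keeps running, but at the slow idle rate, and speeds back up as soon as getNext() produces data again.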

Related

ArchUnit: Verify method only calls one outside method

In a Controller-Service-Datalayer architecture, I'm searching for a way to verify that my controller methods perform exactly one call to the service layer, like this:
@DeleteMapping(value = "/{id}")
public ResponseEntity<String> deleteBlubber(@PathVariable("id") long blubberId) {
    service.deleteBlubber(blubberId);
    return new ResponseEntity<>("ok", HttpStatus.OK);
}
This should not be allowed:
@DeleteMapping(value = "/{id}")
public ResponseEntity<String> deleteBlubber(@PathVariable("id") long blubberId) {
    service.deleteOtherStuffFirst(); // Opens first transaction
    service.deleteBlubber(blubberId); // Opens second transaction - DANGER!
    return new ResponseEntity<>("ok", HttpStatus.OK);
}
As you can see from the comments, the reason for this is to make sure that each request is handled in one transaction (that is started in the service layer), not multiple transactions.
It seems that ArchUnit can only check metadata of classes and methods, not what's actually going on inside a method. I would have to be able to count the calls to the service classes, which does not seem to be possible in ArchUnit.
Any idea if this might be possible? Thanks!
With JavaMethod.getMethodCallsFromSelf() you have access to all method calls of a given method. This can be used inside a custom ArchCondition like this:
methods()
    .that().areDeclaredInClassesThat().areAnnotatedWith(Controller.class)
    .should(new ArchCondition<JavaMethod>("call exactly one service method") {
        @Override
        public void check(JavaMethod item, ConditionEvents events) {
            List<JavaMethodCall> serviceCalls = item.getMethodCallsFromSelf().stream()
                    .filter(call -> call.getTargetOwner().isAnnotatedWith(Service.class))
                    .toList();
            if (serviceCalls.size() != 1) {
                String message = serviceCalls.stream()
                        .map(JavaMethodCall::getDescription)
                        .collect(joining(" and "));
                events.add(SimpleConditionEvent.violated(item, message));
            }
        }
    })
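To actually evaluate such a rule, you wrap the condition in an ArchRule and check it against your imported classes. A minimal sketch (the package name and the callExactlyOneServiceMethod variable, holding the ArchCondition from above, are placeholders):

// import the production classes and check the rule against them
JavaClasses classes = new ClassFileImporter().importPackages("com.example.app");
ArchRule rule = methods()
        .that().areDeclaredInClassesThat().areAnnotatedWith(Controller.class)
        .should(callExactlyOneServiceMethod); // the ArchCondition from the snippet above
rule.check(classes);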

Java: MQTT MessageProducerSupport to Flux

I have a simple MQTT client that outputs received messages via an IntegrationFlow:
public MqttPahoClientFactory mqttClientFactory() {
    DefaultMqttPahoClientFactory factory = new DefaultMqttPahoClientFactory();
    MqttConnectOptions options = new MqttConnectOptions();
    options.setServerURIs(new String[] { "tcp://test.mosquitto.org:1883" });
    factory.setConnectionOptions(options);
    return factory;
}

public MessageProducerSupport mqttInbound() {
    MqttPahoMessageDrivenChannelAdapter adapter = new MqttPahoMessageDrivenChannelAdapter(
            "myConsumer",
            mqttClientFactory(),
            "/test/#");
    adapter.setCompletionTimeout(5000);
    adapter.setConverter(new DefaultPahoMessageConverter());
    adapter.setQos(1);
    return adapter;
}

public IntegrationFlow mqttInFlow() {
    return IntegrationFlows.from(mqttInbound())
            .transform(p -> p + ", received from MQTT")
            .handle(logger())
            .get();
}

private LoggingHandler logger() {
    LoggingHandler loggingHandler = new LoggingHandler("INFO");
    loggingHandler.setLoggerName("siSample");
    return loggingHandler;
}
I need to pipe all received messages into a Flux for further processing, though.
public Flux<String> mqttChannel() {
    ...
    return mqttFlux;
}
How can I do that? The LoggingHandler receives all messages from the IntegrationFlow. Couldn't my Flux get its input in a similar fashion, by passing it somehow to IntegrationFlow's handle function?
The MQTT example code is taken from https://github.com/spring-projects/spring-integration-samples/blob/master/basic/mqtt/src/main/java/org/springframework/integration/samples/mqtt/Application.java
Attempt: Following Artem Bilan's advice, I'm now trying to use toReactivePublisher() to convert my inbound IntegrationFlow to a Flux.
public Flux<String> mqttChannel() {
    Publisher<Message<Object>> flow = IntegrationFlows.from(mqttInbound())
            .toReactivePublisher();
    Flux<String> mqttFlux = Flux.from(flow)
            .log()
            .map(i -> "TESTING: Received a MQTT message");
    return mqttFlux;
}
Running the example, I get the following error:
10:14:39.541 [MQTT Call: myConsumer] ERROR o.s.i.m.i.MqttPahoMessageDrivenChannelAdapter - Unhandled exception for GenericMessage [payload=OFF,26.70,65.00,663,-62,192.168.2.100,0.026,25,4,6,7,933,278,27,4,1,0,1580496218,730573600,1800000,1980000,1580496218,730573600,10800000,11880000, headers={mqtt_receivedRetained=true, mqtt_id=0, mqtt_duplicate=false, id=3f7565aa-ff4f-c389-d8a9-712d4f06f1cb, mqtt_receivedTopic=/083B7036697886C41D2DF2FD919143EE/MasterBedroom/Sensor/, mqtt_receivedQos=0, timestamp=1602231279537}]
Conclusion: as soon as the first message arrives, it is handled incorrectly and an exception is thrown.
Please read this doc: https://docs.spring.io/spring-integration/docs/5.3.2.RELEASE/reference/html/reactive-streams.html#reactive-streams
It is not clear what you would like to achieve with that "my flux" and how it should look, but for your current configuration there are a couple of solutions.
You can use a FluxMessageChannel, which is already a Publisher, so you can simply use Flux.from() and subscribe to it for consuming data produced by the mentioned MqttPahoMessageDrivenChannelAdapter.
Another way is to use toReactivePublisher() on the IntegrationFlowBuilder to expose the whole flow as a reactive Publisher source. In this case, of course, you can't use the LoggingHandler, because it is a one-way component and ends your flow exactly there. You may consider using a log() operator instead, though: https://docs.spring.io/spring-integration/docs/5.3.2.RELEASE/reference/html/dsl.html#java-dsl-log
By the way, the FluxMessageChannel is publish-subscribe, so you can have it in the flow for those logs and also use it externally for a Flux.from() subscription. All subscribers to this channel are going to get the same message.
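For illustration, a minimal sketch of the FluxMessageChannel variant; the wiring below is an assumption built on the beans from the question, not verified configuration:

// expose the channel as a bean so both the flow and the Flux can reach it
@Bean
public FluxMessageChannel mqttFluxChannel() {
    return new FluxMessageChannel();
}

@Bean
public IntegrationFlow mqttInFlow() {
    return IntegrationFlows.from(mqttInbound())
            .transform(p -> p + ", received from MQTT")
            .log("siSample")             // log() operator instead of the one-way LoggingHandler
            .channel(mqttFluxChannel())  // publish-subscribe: every subscriber sees each message
            .get();
}

public Flux<String> mqttChannel() {
    return Flux.from(mqttFluxChannel())
            .map(message -> message.getPayload().toString());
}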

How to read messages from MQs, e.g. ZeroMQ or RabbitMQ, using Spark Streaming?

As the Spark docs say, Kafka is supported as a data streaming source, but I use ZeroMQ and there is no ZeroMQUtils. So how can I use it, and more generally, what about other MQs? I am totally new to Spark and Spark Streaming, so I am sorry if the question is stupid. Could anyone give me a solution? Thanks.
BTW, I use Python.
Update: I finally did it in Java with a custom receiver. Below is my solution:
public class ZeroMQReceiver<T> extends Receiver<T> {

    private static final ObjectMapper mapper = new ObjectMapper();

    public ZeroMQReceiver() {
        super(StorageLevel.MEMORY_AND_DISK_2());
    }

    @Override
    public void onStart() {
        // Start the thread that receives data over a connection
        new Thread(this::receive).start();
    }

    @Override
    public void onStop() {
        // There is nothing much to do, as the thread calling receive()
        // is designed to stop by itself once isStopped() returns true
    }

    /** Create a socket connection and receive data until the receiver is stopped */
    private void receive() {
        String message = null;
        try {
            ZMQ.Context context = ZMQ.context(1);
            ZMQ.Socket subscriber = context.socket(ZMQ.SUB);
            subscriber.connect("tcp://ip:port");
            subscriber.subscribe("".getBytes());

            // Until stopped or the connection breaks, continue reading
            while (!isStopped() && (message = subscriber.recvStr()) != null) {
                List<T> results = mapper.readValue(message,
                        new TypeReference<List<T>>() {});
                for (T item : results) {
                    store(item);
                }
            }
            // Restart in an attempt to connect again when the server is active again
            restart("Trying to connect again");
        } catch (Throwable t) {
            // Restart if there is any other error
            restart("Error receiving data", t);
        }
    }
}
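For completeness, a minimal driver that wires this receiver into a streaming context could look like the sketch below; MyEvent is a placeholder for the concrete payload type:

public static void main(String[] args) throws InterruptedException {
    // local[2]: a receiver occupies one core, so at least two are needed locally
    SparkConf conf = new SparkConf().setAppName("zeromq-receiver").setMaster("local[2]");
    JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(1));
    JavaReceiverInputDStream<MyEvent> stream = jssc.receiverStream(new ZeroMQReceiver<MyEvent>());
    stream.print();
    jssc.start();
    jssc.awaitTermination();
}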
I assume you are talking about Structured Streaming.
I am not familiar with ZeroMQ, but an important point for Spark Structured Streaming sources is replayability (in order to ensure fault tolerance), which, if I understand correctly, ZeroMQ doesn't deliver out of the box.
A practical approach would be to buffer the data either in Kafka, using the KafkaSource, or as files in a directory (local FS/NFS, HDFS, S3), using the FileSource for reading. Cf. the Spark docs. If you use the FileSource, make sure not to append anything to an existing file in the FileSource's input directory, but to move complete files into the directory atomically.
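To illustrate the FileSource route, a hedged sketch; the input path and schema are assumptions, to be replaced with whatever your buffering process actually writes:

// read JSON files that a separate process moves atomically into the directory
SparkSession spark = SparkSession.builder().appName("mq-buffer-reader").getOrCreate();
StructType schema = new StructType().add("payload", DataTypes.StringType);
Dataset<Row> messages = spark.readStream()
        .format("json")
        .schema(schema)
        .load("/data/zeromq-buffer");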

Is it possible to make ServiceStack use an unbuffered response stream?

I want to send messages back to a client via a stream. I want the client to start processing these messages as soon as possible (before the server has completed the streaming on the server side).
I have implemented IStreamWriter and I have a service which returns the IStreamWriter implementation.
public class StreamingService : Service
{
    public object Any(MyStreamRequest request)
    {
        return new MyStreamWriter(request);
    }
}
Where MyStreamRequest is defined like this:
[DataContract]
public class MyStreamRequest : IReturn<Stream>
{
    [DataMember]
    public int HowManySecondsToProduceData { get; set; }
}
When I test my implementation in a self-hosted environment it works perfectly. However, when I host this in IIS, the Get call from the client
var client = new ProtoBufServiceClient("");
Stream stream = client.Get(new MyStreamRequest { HowManySecondsToProduceData = 20 });
does not return until the IStreamWriter.WriteTo call returns (20 seconds in the sample above). This prevents my client from processing the stream right away and will also cause failures in high-volume cases. I do call responseStream.Flush() inside my IStreamWriter.WriteTo implementation.
Does anybody have any insight into why this does not work in the IIS scenario but does in the self-hosted case? What do I need to do differently?
It seems a likely cause of this problem is that the ServiceStack response stream is set to use buffering. I cannot find a way to change this, though. Is it possible?
You just need to disable ASP.NET's response buffering:
public class NoBufferAttribute : RequestFilterAttribute
{
    public override void Execute(IHttpRequest req, IHttpResponse res, object requestDto)
    {
        var originalResponse = (System.Web.HttpResponse)res.OriginalResponse;
        originalResponse.BufferOutput = false;
    }
}
John
I found a solution myself, and it is quite simple: call IHttpResponse.Flush() inside the IStreamWriter.WriteTo implementation whenever you want to send data to the client. You get the IHttpResponse by calling base.Response inside the Service implementation.

Silverlight - limit application to one WCF call at a time

Silverlight can only send a certain number of simultaneous WCF requests at a time. I am trying to serialize the requests that a particular section of my application performs, because I don't need them to run concurrently.
The problem is as follows (summary below):
"WCF proxies in Silverlight applications use the SynchronizationContext of the thread from which the web service call is initiated to schedule the invocation of the async event handler when the response is received. When the web service call is initiated from the UI thread of a Silverlight application, the async event handler code will also execute on the UI thread."
http://tomasz.janczuk.org/2009/08/improving-performance-of-concurrent-wcf.html
Summary: basically, if you block the thread that initiates the async call, the completion handler will never be invoked.
I can't figure out a threading model that would give me what I want in a reasonable way.
My only other requirement is that I don't want the UI thread to block.
As far as I can see, what should work is for the UI thread to hand the calls off to a worker thread that queues them up as Action delegates and then uses an AutoResetEvent to execute one task at a time on yet another worker thread. There are two problems:
1) The thread that initiates the async call can't block, because then the completion handler will never run. In fact, if you put that thread into a wait loop, I've noticed the handler doesn't get called either.
2) You need a way to signal from the completed handler of the async call that it is done.
Sorry that was so long, thanks for reading. Any ideas?
I have used a class that I built myself to execute load operations sequentially. With the class you can register multiple load operations of different DomainContexts and then execute them one by one. You can provide an Action to the constructor of the class that gets called when all operations are finished (successfully or not).
Here's the code of the class. I think it's not complete and you will have to change it to match your expectations. Maybe it can help you in your situation.
public class DomainContextQueryLoader {

    private List<LoadOperation> _failedOperations;
    private Action<DomainContextQueryLoader> _completeAction;
    private List<QueuedQuery> _pendingQueries = new List<QueuedQuery>();

    public DomainContextQueryLoader(Action<DomainContextQueryLoader> completeAction) {
        if (completeAction == null) {
            throw new ArgumentNullException("completeAction", "completeAction is null.");
        }
        this._completeAction = completeAction;
    }

    /// <summary>
    /// Expose the count of failed operations
    /// </summary>
    public int FailedOperationCount {
        get {
            if (_failedOperations == null) {
                return 0;
            }
            return _failedOperations.Count;
        }
    }

    /// <summary>
    /// Expose an enumerator for all of the failed operations
    /// </summary>
    public IList<LoadOperation> FailedOperations {
        get {
            if (_failedOperations == null) {
                _failedOperations = new List<LoadOperation>();
            }
            return _failedOperations;
        }
    }

    public IEnumerable<QueuedQuery> QueuedQueries {
        get {
            return _pendingQueries;
        }
    }

    public bool IsExecuting {
        get;
        private set;
    }

    public void EnqueueQuery<T>(DomainContext context, EntityQuery<T> query) where T : Entity {
        if (IsExecuting) {
            throw new InvalidOperationException("Query cannot be queued, because execution of queries is in progress");
        }
        var loadBatch = new QueuedQuery() {
            Callback = null,
            Context = context,
            Query = query,
            LoadOption = LoadBehavior.KeepCurrent,
            UserState = null
        };
        _pendingQueries.Add(loadBatch);
    }

    public void ExecuteQueries() {
        if (IsExecuting) {
            throw new InvalidOperationException("Execution of queries is in progress");
        }
        if (_pendingQueries.Count == 0) {
            throw new InvalidOperationException("No queries are queued to execute");
        }
        IsExecuting = true;
        var query = DequeueQuery();
        ExecuteQuery(query);
    }

    private void ExecuteQuery(QueuedQuery query) {
        System.Diagnostics.Debug.WriteLine("Load data {0}", query.Query.EntityType);
        var loadOperation = query.Load();
        loadOperation.Completed += new EventHandler(OnOperationCompleted);
    }

    private QueuedQuery DequeueQuery() {
        var query = _pendingQueries[0];
        _pendingQueries.RemoveAt(0);
        return query;
    }

    private void OnOperationCompleted(object sender, EventArgs e) {
        LoadOperation loadOperation = sender as LoadOperation;
        loadOperation.Completed -= new EventHandler(OnOperationCompleted);
        if (loadOperation.HasError) {
            FailedOperations.Add(loadOperation);
        }
        if (_pendingQueries.Count > 0) {
            var query = DequeueQuery();
            ExecuteQuery(query);
        }
        else {
            IsExecuting = false;
            System.Diagnostics.Debug.WriteLine("All data loaded");
            if (_completeAction != null) {
                _completeAction(this);
                _completeAction = null;
            }
        }
    }
}
Update:
I've just noticed that you are not using WCF RIA Services, so maybe this class will not help you.
There are some options:
- You can take a look at Agatha-rrsl, either by inspecting its implementation or by just using it instead of pure WCF. The framework allows you to queue requests. You can read more here.
- Another option is to use the Reactive Extensions. There is a SO example here and more info here and here.
- You can try the Power Threading library from Jeffrey Richter. He describes it in his book CLR via C#. You can find the library here. This webcast gives you some info about it.
- You can always roll your own implementation. The yield statement is a good help here. Error handling makes it very difficult to get a solution right.
