cometd bayeux client can't get subscribed acknowledgement? - groovy

I am running the cometd-demo server locally with mvn jetty:run, as shown in the documentation at https://docs.cometd.org/current/reference/, and I am trying to subscribe and publish to a broadcast channel using the Groovy script shown below:
ClientSessionChannel.MessageListener mylistener = new Mylistener();
def myurl = "http://localhost:8080/cometd/"
MyHttpClient httpClient = new MyHttpClient();
httpClient.start()
Map<String, Object> options = new HashMap<String, Object>();
ClientTransport transport = new LongPollingTransport(options, httpClient);
BayeuxClient client = new BayeuxClient(myurl, transport)
println 'client started on URL : '+ client.getURL()
client.handshake(new ClientSessionChannel.MessageListener() {
    public void onMessage(ClientSessionChannel channel, Message message) {
        if (message.isSuccessful()) {
            println 'Handshake Message : ' + message
        }
    }
})
boolean handshakecheck = client.waitFor(1000, BayeuxClient.State.CONNECTED);
println 'Handshake check : '+ handshakecheck
client.batch(new Runnable() {
    public void run() {
        client.getChannel("/foo/hello").subscribe(
            new ClientSessionChannel.MessageListener() {
                public void onMessage(ClientSessionChannel channel, Message message) {
                    println "subscribed : " + message
                }
            })
    }
});
The program output:
client started on URL : http://localhost:8080/cometd/
Handshake Message : [minimumVersion:1.0, clientId:fv0ozxw8cb5e11vtlwpacm7afp, supportedConnectionTypes:[websocket, long-polling, callback-polling], advice:[reconnect:retry, interval:0, maxInterval:10000, timeout:20000], channel:/meta/handshake, id:1, version:1.0, successful:true]
Handshake check : true
Here I can't get the subscribed message printed as in the code, but the server log prints the following:
2018-02-12 20:30:32,687 qtp2069584894-17 [ INFO][examples.CometDDemoServlet] Monitored Subscribe from fv0ozxw8cb5e11vtlwpacm7afp,last=0,expire=0 for /foo/hello
Update 1:
Also, I can't subscribe with the callback method; I get the message [channel:/meta/subscribe, id:4, subscription:/foo/hello, error:403:denied_by_not_granting:create_denied, successful:false]. I don't know what I am doing wrong; I am just following the documentation steps. Thanks in advance.

The ClientSessionChannel.MessageListener that you pass to the subscribe(...) method is invoked whenever a message is published on channel /foo/hello.
Your program never publishes a message on that channel, so the listener is never invoked, and therefore subscribed is never printed by your code.
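For example, publishing anything on that channel from the same script would make the listener fire (a sketch; the payload map is arbitrary):
Map<String, Object> data = new HashMap<String, Object>();
data.put("greeting", "hello from the client");
client.getChannel("/foo/hello").publish(data);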
You also want to double check which version of the subscribe() method you want to use, as there are two.
The single-parameter version takes a listener, while the two-parameter version takes a listener and a callback.
Judging from your code, you want the subscribed log line to be in the callback, not in the listener, so you just need to change your code to use the two-parameter version of subscribe().
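For example, a sketch against the CometD 3.x style API your script already uses, where the callback is assumed to also be a ClientSessionChannel.MessageListener (newer CometD versions take a ClientSession.MessageListener callback instead, so check the javadoc of your version):
client.getChannel("/foo/hello").subscribe(
    new ClientSessionChannel.MessageListener() {
        public void onMessage(ClientSessionChannel channel, Message message) {
            // Listener: invoked for every message published on /foo/hello.
            System.out.println("message received : " + message);
        }
    },
    new ClientSessionChannel.MessageListener() {
        public void onMessage(ClientSessionChannel channel, Message message) {
            // Callback: invoked once with the /meta/subscribe reply (the acknowledgement you are after).
            System.out.println("subscribed : " + message);
        }
    });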
Also, pay attention to the fact that if the JVM exits at the end of your Groovy script, that client will be gone and will never receive any message.
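For a quick test you can simply keep the script alive for a while at the end and then disconnect cleanly (a sketch; the timings are arbitrary):
// Give the client some time to receive broadcast messages before the script ends.
Thread.sleep(5000);
client.disconnect();
client.waitFor(1000, BayeuxClient.State.DISCONNECTED);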

Related

Send a email when any Errors got in my errorChannel using Spring integration DSL

I am developing an API in Spring Integration using the Java DSL. This is how it works:
A JDBC polling adapter initiates the flow and gets some data from tables and sends it to the DefaultRequestChannel; from there the message is handled by / flows through various channels.
Now I am trying to:
1. Send an email if any errors (e.g. a connectivity issue, or a bad record found while polling the data) are detected in my error channel.
2. After sending the email to my support group, suspend my flow for 15 minutes and then resume automatically.
I tried creating a sendEmailChannel (a recipient of my errorChannel), but it didn't work for me, so I just created a transformer method like the one below.
This code is running fine, but is it good practice?
@Transformer(inputChannel = "errorChannel", outputChannel = "suspendChannel")
public Message<?> errorChannelHandler(ErrorMessage errorMessage) throws RuntimeException, MessagingException, InterruptedException {
    Exception exception = (Exception) errorMessage.getPayload();
    String errorMsg = errorMessage.toString();
    String subject = "API issue";
    if (exception instanceof RuntimeException) {
        errorMsg = "Run time exception";
        subject = "Critical Alert";
    }
    if (exception instanceof JsonParseException) {
        errorMsg = ....;
        subject = .....;
    }
    MimeMessage message = sender.createMimeMessage();
    MimeMessageHelper helper = new MimeMessageHelper(message);
    helper.setFrom(senderEmail);
    helper.setTo(receiverEmail);
    helper.setText(errorMsg);
    helper.setSubject(subject);
    sender.send(message);
    kafkaProducerSwitch.isKafkaDown());
    return MessageBuilder.withPayload(exception.getMessage())
            .build();
}
I am looking for a better way of handling the above logic, and also for any suggestions on how to suspend my flow for a few minutes.
You can definitely use the mail-sending channel adapter that Spring Integration provides out of the box to send those messages from the error channel: https://docs.spring.io/spring-integration/docs/5.1.5.RELEASE/reference/html/#mail-outbound. The Java DSL variant looks like this:
.handle(Mail.outboundAdapter("gmail")
        .port(smtpServer.getPort())
        .credentials("user", "pw")
        .protocol("smtp"))
The suspension can be done via a CompoundTriggerAdvice extension, where you check some AtomicBoolean bean in the beforeReceive() implementation to decide which trigger to activate. Such an AtomicBoolean can change its state in one more subscriber to that errorChannel, because this channel is a PublishSubscribeChannel by default. Don't forget to bring the state back to normal after you return false from beforeReceive(); that is enough to mark your system as normal at that moment, since it is only going to poll again after the 15 minutes.
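A rough sketch of that idea (my own naming, assuming a Spring Integration 5.x AbstractMessageSourceAdvice, where returning false from beforeReceive() skips that poll) is to keep an AtomicBoolean plus a resume timestamp in the advice and flip them from an errorChannel subscriber:
import java.util.concurrent.atomic.AtomicBoolean;

import org.springframework.integration.aop.AbstractMessageSourceAdvice;
import org.springframework.integration.core.MessageSource;
import org.springframework.messaging.Message;

public class SuspendPollingAdvice extends AbstractMessageSourceAdvice {

    private final AtomicBoolean suspended = new AtomicBoolean(false);
    private volatile long resumeAt;

    // Called from a subscriber on errorChannel once the alert mail has been sent.
    public void suspendFor(long millis) {
        this.resumeAt = System.currentTimeMillis() + millis;
        this.suspended.set(true);
    }

    @Override
    public boolean beforeReceive(MessageSource<?> source) {
        if (this.suspended.get()) {
            if (System.currentTimeMillis() < this.resumeAt) {
                return false;              // skip this poll: the flow stays suspended
            }
            this.suspended.set(false);     // suspension window is over: back to normal
        }
        return true;                       // let the poll proceed
    }

    @Override
    public Message<?> afterReceive(Message<?> result, MessageSource<?> source) {
        return result;                     // no post-processing of polled messages
    }
}
You would register this advice in the poller's advice chain on the JDBC polling adapter and call suspendFor(15 * 60 * 1000) from the errorChannel subscriber right after the mail is sent.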

How to update Blazor (hosted) upon socket receive event

Hello, I have a Blazor page in which I want to display a variable. This variable gets updated from another thread (a Task which receives data over a websocket), and I want to display it in a thread-safe manner:
Blazor Page
@page "/new"
@inherits NewBase
<button onclick="@(async () => await OnRunPressed())" class="control-button">Run</button>
NewValue: @socketString
public class NewBase : BlazorComponent
{
    [Inject] protected BenchService service { get; set; }
    protected CancellationTokenSource src = new CancellationTokenSource();
    protected string socketString;
    protected async Task OnRunPressed()
    {
        Task updateTask = Task.Run(async () =>
        {
            var buffer = new byte[1024];
            ClientWebSocket socket = new ClientWebSocket();
            await socket.ConnectAsync(new Uri("ws://localhost:8500/monitor"), CancellationToken.None);
            while (true)
            {
                await socket.ReceiveAsync(new ArraySegment<byte>(buffer), CancellationToken.None);
                this.socketString = Encoding.UTF8.GetString(buffer);
                this.StateHasChanged();
            }
        }, src.Token);
        await this.service.HitServerAsync(); //does some stuff while the above task works
        src.Cancel();
    }
}
Update
Thanks to @Dani, I now finally at least get an error:
blazor.server.js:16 POST http://localhost:8500/_blazor/negotiate 500 (Internal Server Error)
Error: Failed to start the connection: Error: Internal Server Error
You may be lacking StateHasChanged(); at the end of the OnRunPressed method
I guess this is a server-side Blazor, right ?
If not, then you should know that Mono on WASM is currently single-threaded...
There is no problem with calling StateHasChanged() after receiving data via websocket. It should all run. I have tested it (as server-side) and it runs without issues:
https://github.com/ctrl-alt-d/blazorTestingWebSocketsServerSide/tree/master
Also, I have tested it as client-side wasm, and there are several issues:
You are using ArrayPool, which is not a netstandard2.0 class.
WebSocket is not able to connect from wasm.

Service Bus Session ReceiveBatchAsync only receiving 1 message

I'm using a Service Bus queue with sessions enabled, and I'm sending 5 messages with the same SessionId. My receiving code uses AcceptMessageSessionAsync to get a session lock so that it will receive all the messages for that session. It then uses session.ReceiveBatchAsync to try to get all the messages for the session. However, it only seems to get the first message; then, when another attempt is made, it gets all the others. You should be able to see that there is a gap of almost a minute between the two batches, even though all these messages were sent at once:
Session started:AE8DC914-8693-4110-8BAE-244E42A302D5
Message received:AE8DC914-8693-4110-8BAE-244E42A302D5_1_08:03:03.36523
Session started:AE8DC914-8693-4110-8BAE-244E42A302D5
Message received:AE8DC914-8693-4110-8BAE-244E42A302D5_2_08:03:04.22964
Message received:AE8DC914-8693-4110-8BAE-244E42A302D5_3_08:03:04.29515
Message received:AE8DC914-8693-4110-8BAE-244E42A302D5_4_08:03:04.33959
Message received:AE8DC914-8693-4110-8BAE-244E42A302D5_5_08:03:04.39587
My code to process these is a function in a WebJob:
[NoAutomaticTrigger]
public static async Task MessageHandlingLoop(TextWriter log, CancellationToken cancellationToken)
{
    var connectionString = ConfigurationManager.ConnectionStrings["ServiceBusListen"].ConnectionString;
    var client = QueueClient.CreateFromConnectionString(connectionString, "myqueue");
    while (!cancellationToken.IsCancellationRequested)
    {
        MessageSession session = null;
        try
        {
            session = await client.AcceptMessageSessionAsync(TimeSpan.FromMinutes(1));
            log.WriteLine("Session started:" + session.SessionId);
            foreach (var msg in await session.ReceiveBatchAsync(100, TimeSpan.FromSeconds(5)))
            {
                log.WriteLine("Message received:" + msg.MessageId);
                msg.Complete();
            }
        }
        catch (TimeoutException)
        {
            log.WriteLine("Timeout occurred");
            await Task.Delay(5000, cancellationToken);
        }
        catch (Exception ex)
        {
            log.WriteLine("Error:" + ex);
        }
    }
}
This is called from my WebJob Main using:
JobHost host = new JobHost();
host.Start();
var task = host.CallAsync(typeof(Functions).GetMethod("MessageHandlingLoop"));
task.Wait();
host.Stop();
Why don't I get all my messages in the first call of ReceiveBatchAsync?
This was answered in the MSDN forum by Hillary Caituiro Monge: https://social.msdn.microsoft.com/Forums/azure/en-US/9a84f319-7bc6-4ff8-b142-4fc1d5f1e2fa/service-bus-session-receivebatchasync-only-receiving-1-message?forum=servbus
Service Bus does not guarantee you will receive the message count you specify in receive batch even if your queue has them or more. Having said that, you can change your code to try to get the 100 messages in the first call, but remember that your application should not assume that as guaranteed behavior.
Below this line of code: var client = QueueClient.CreateFromConnectionString(connectionString, "myqueue"); add client.PrefetchCount = 100;
The reason that you are getting only 1 message at all times in the first call is that when you accept a session it may also be getting 1 prefetched message with it. Then when you do receive batch, the SB client will give you that 1 message.
Unfortunately I found that setting the PrefetchCount didn't have an effect, but the reason given for only receiving one message seemed likely, so I accepted it as the answer.

How to be notified of a response message when using RabbitMQ RPC and ServiceStack

Under normal circumstances, messages with a response will be published to the response .inq queue; I understand that, and it's a nifty way to notify other parties that "something" has happened. But when using the RPC pattern, the response goes back to the temp queue and disappears. Is this correct? Is there a convenient way, short of publishing another message, to achieve this auto-notification behavior?
The Message Workflow docs describe the normal message workflow when calling a Service via ServiceStack.RabbitMQ:
Request / Reply
The Request/Reply pattern alters the default message flow by specifying its own ReplyTo address to change the queue where the response gets published, e.g.:
const string replyToMq = mqClient.GetTempQueueName();
mqClient.Publish(new Message<Hello>(new Hello { Name = "World" }) {
ReplyTo = replyToMq
});
IMessage<HelloResponse> responseMsg = mqClient.Get<HelloResponse>(replyToMq);
mqClient.Ack(responseMsg);
responseMsg.GetBody().Result //= Hello, World!
When using the Request/Reply pattern, no other message is published to any other RabbitMQ topic/queue; to alert other subscribers, the client would need to republish the message.
RabbitMqServer callbacks
Another way to find out when a message has been published or received is to use the PublishMessageFilter and GetMessageFilter callbacks on the RabbitMqServer and client, which let you inspect each message they send or receive, e.g.:
var mqServer = new RabbitMqServer("localhost")
{
PublishMessageFilter = (queueName, properties, msg) => {
//...
},
GetMessageFilter = (queueName, basicMsg) => {
//...
}
};

Multithreaded JMS Transaction enabled Consumer hungs up

My requirements are stated below:
I have to develop a wrapper service on top of a queue, so I was going through some message queues (ActiveMQ, Apollo, Kafka), and decided to proceed with ActiveMQ to match our use cases. Now the requirements are as follows:
1) A RESTful API through which different publishers will publish to the queue; the queue will be selected based on clientId.
2) Consumers will consume messages through a RESTful API and will consume messages in batches, say a consumer asks for something like "give me 10 messages from the queue".
Now the service should provide 10 messages if there are 10 messages, or fewer (or zero) accordingly. After receiving the messages, the client will process them and send back an acknowledgement through a different RESTful URI. Upon receiving that acknowledgement, the MQService should commit or roll back the messages on the queue.
In order to do this, in the MQService layer I have used a cache, where I'm keeping the JMS connection and session objects until the acknowledgement is received or the TTL expires.
In order to retrieve messages in batches and send them back to the client, I have created a multi-threaded consumer, so that for a 5-message batch request the service layer will create 5 threads, each having a different connection and session object (as stated in ActiveMQ multiple consumers on a queue, http://activemq.apache.org/multiple-consumers-on-a-queue.html).
Basic use-case:
MQ(BROKER)[A] --> Wrapper(MQService)[B] --> Client [C]
Note: [B] is a RESTful service having a JMS consumer implemented in it. It keeps the connection and session objects in a cache.
[C] requests [B] to give 3 messages.
[B] must fetch 3 messages if available in the queue, wrap them in batchmsgFormat and send them to [C].
[C] processes the messages and sends a success/failed acknowledgement to [B] through the /send-ack URI.
Upon receiving the ack from [C], [B] will commit the JMS session and close the session and connection objects. It will also evict them from the cache.
The above workflow is working fine with single-message fetching, but the queue hangs up on JMS MessageConsumer.receive() when trying to fetch messages with multiple consumers using multithreading. ...
Here is the JMS consumer code in the MQService layer:
----------------------------------------------
public BatchMessageFormat getConsumeMsg(final String clientId, final Integer batchSize) throws Exception {
    BatchMessageFormat batchmsgFormat = new BatchMessageFormat();
    List<MessageFormat> msgdetails = new ArrayList<MessageFormat>();
    List<Future<MessageFormat>> futuremsgdetails = new ArrayList<Future<MessageFormat>>();
    if (batchSize != null) {
        Integer msgCount = getMsgCount(clientId, batchSize);
        for (int batchconnect = 0; batchconnect < msgCount; batchconnect++) {
            FutureTask<MessageFormat> task = new FutureTask<MessageFormat>(new Callable<MessageFormat>() {
                @Override
                public MessageFormat call() throws Exception {
                    MessageFormat msg = consumeBatchMsg(clientId, batchSize);
                    return msg;
                }
            });
            futuremsgdetails.add(task);
            Thread t = new Thread(task);
            t.start();
        }
    }
    for (Future<MessageFormat> msg : futuremsgdetails) {
        msgdetails.add(msg.get());
    }
    batchmsgFormat.setMsgDetails(msgdetails);
    return batchmsgFormat;
}
Message fetching:
private MessageFormat consumeBatchMsg(String clientId, Integer batchSize) throws JMSException, IOException {
    MessageFormat msgFormat = new MessageFormat();
    Connection qC = ConnectionUtil.getConnection();
    qC.start();
    Session session = qC.createSession(true, -1);
    Destination destination = createQueue(clientId, session);
    MessageConsumer consumer = session.createConsumer(destination);
    Message message = consumer.receive(2000);
    if (message != null || message instanceof TextMessage) {
        TextMessage textMessage = (TextMessage) message;
        msgFormat.setMessageID(textMessage.getJMSMessageID());
        msgFormat.setMessage(textMessage.getText());
        CacheObject cacheValue = new CacheObject();
        cacheValue.setConnection(qC);
        cacheValue.setSession(session);
        cacheValue.setJmsQueue(destination);
        MQCache.instance().add(textMessage.getJMSMessageID(), cacheValue);
    }
    consumer.close();
    return msgFormat;
}
Acknowledgement and session closing:
public String getACK(String clientId, String msgId, String ack) throws JMSException {
    if (MQCache.instance().get(msgId) != null) {
        Connection connection = MQCache.instance().get(msgId).getConnection();
        Session session = MQCache.instance().get(msgId).getSession();
        Destination destination = MQCache.instance().get(msgId).getJmsQueue();
        MessageConsumer consumer = session.createConsumer(destination);
        if (ack.equalsIgnoreCase("SUCCESS")) {
            session.commit();
        } else {
            session.rollback();
        }
        session.close();
        connection.close();
        MQCache.instance().evictCache(msgId);
        return "Accepted";
    } else {
        return "Rejected";
    }
}
Has anyone worked on a similar scenario, or can you please throw some light on this? Is there any other way to implement this batch message fetching as well as the client failure handling?
Try setting the prefetch limit to 0, as below:
ConnectionFactory connectionFactory
= new ActiveMQConnectionFactory("tcp://localhost:61616?jms.prefetchPolicy.queuePrefetch=0");
I'll give a few pointers to help code this logic better.
I'm assuming you are using pure JMS 1.1 as much as possible. Ensure that you have one place where you get the connection from the pool or create a connection; you don't need to do that inside a thread, you can do it outside. Sessions, however, must be created inside a thread and shouldn't be shared. This will impact the logic in the consumeBatchMsg() function.
Secondly, it's simpler to use one thread to consume all the messages of the given batchSize. I see that you are using a transacted session, so you can do one commit after getting all the messages of the batch, for example as sketched below.
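A rough sketch of that single-threaded variant, reusing the helpers and types from your own code (ConnectionUtil, createQueue, MessageFormat and BatchMessageFormat are yours), with one transacted session for the whole batch:
private BatchMessageFormat consumeBatch(String clientId, int batchSize) throws JMSException {
    BatchMessageFormat batch = new BatchMessageFormat();
    List<MessageFormat> msgDetails = new ArrayList<MessageFormat>();
    Connection connection = ConnectionUtil.getConnection();   // your existing helper
    connection.start();
    // One transacted session, created and used by a single thread for the whole batch.
    Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
    MessageConsumer consumer = session.createConsumer(createQueue(clientId, session));
    try {
        for (int i = 0; i < batchSize; i++) {
            Message message = consumer.receive(2000);
            if (!(message instanceof TextMessage)) {
                break;                                         // queue drained, stop early
            }
            TextMessage textMessage = (TextMessage) message;
            MessageFormat msgFormat = new MessageFormat();
            msgFormat.setMessageID(textMessage.getJMSMessageID());
            msgFormat.setMessage(textMessage.getText());
            msgDetails.add(msgFormat);
        }
        batch.setMsgDetails(msgDetails);
        // In your design you would cache connection/session here (keyed by a batch id
        // rather than per message) and call session.commit()/rollback() only when the
        // client acknowledges the whole batch via /send-ack.
    } finally {
        consumer.close();
    }
    return batch;
}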
If you really want to take the complicated route of having multiple consumers on a queue (probably slightly better performance), you can use a CountDownLatch or CyclicBarrier set to batchSize as the trigger. Once all the threads have received their messages, they can commit and close the sessions in their respective threads. Never let a session instance escape the thread that created it.
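If you do go down that route, a minimal sketch of the latch idea (hypothetical wiring around your existing helpers; error handling trimmed) could look like this:
final CountDownLatch batchReceived = new CountDownLatch(batchSize);
ExecutorService pool = Executors.newFixedThreadPool(batchSize);
for (int i = 0; i < batchSize; i++) {
    pool.submit(new Runnable() {
        @Override
        public void run() {
            try {
                Connection connection = ConnectionUtil.getConnection();   // your helper
                connection.start();
                // Session and consumer never leave this thread.
                Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
                MessageConsumer consumer = session.createConsumer(createQueue(clientId, session));
                Message message = consumer.receive(2000);
                batchReceived.countDown();
                batchReceived.await();          // wait until every thread has its message
                // Hand the message over / cache the session here, then commit
                // (or defer the commit until the client's acknowledgement arrives).
                session.commit();
                consumer.close();
                session.close();
                connection.close();
            } catch (Exception e) {
                // log and roll back in real code
            }
        }
    });
}
pool.shutdown();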
