I have a Java application in which a Quartz task runs every 300 ms and tries to retrieve a number of messages, let's say 5, from an ActiveMQ queue; each requested bulk needs about 1 minute to be processed before the Quartz scheduler worker is freed.
All threads use the same connection, which is locked by a synchronized block around the receive method, like this:
protected List<?> receive(int bulkSize, String queueName, long receiveTimeoutMillis) {
    LinkedList<Object> messages = new LinkedList<>();
    // stopWatch was used but not declared in the original snippet;
    // assuming org.apache.commons.lang3.time.StopWatch here.
    StopWatch stopWatch = new StopWatch();
    stopWatch.start();
    try {
        QueueReceiver receiver = (QueueReceiver) this.receivers.get(queueName);
        if (receiver != null) {
            ObjectMessage message;
            int index = 1;
            do {
                message = (ObjectMessage) receiver.receive(receiveTimeoutMillis);
                if (message == null) {
                    break;
                }
                messages.add(message.getObject());
                if (LOGGER.isDebugEnabled()) {
                    LOGGER.debug("Message received from: " + receiver.getQueue().getQueueName()
                            + " jms message id: " + message.getJMSMessageID());
                }
                message.acknowledge();
                ++index;
            } while (index <= bulkSize);
            LOGGER.info("Consumed " + (index - 1) + " messages from " + queueName
                    + ", elapsed time: " + stopWatch.getTime() + " ms");
        } else {
            LOGGER.warn("Queue not found " + queueName);
        }
    } catch (Exception e) {
        LOGGER.warn("error in performing receive: " + e.getMessage(), e);
    }
    return messages;
}
receiveTimeoutMillis is always 150 ms, and the bulkSize is 5, as I said.
When I start my application, each thread using this method gets a bulk of 5 messages, but after a couple of minutes they all start to get only 0 to 2 messages, even though the queue is full of messages.
I don't have any clue why this is happening, so please give me any hint on what to check!
Notes
Even when I increase the timeout of the receive method, the receiver still doesn't get any messages.
In the log I see the following messages from each thread running the above code:
2019-01-29 14:59:23,893 INFO com.peer39.commons.pattern.jms.JMSService - [DefaultQuartzScheduler_Worker-2] Consumed 0 messages from TRANSLATOR, elapsed time: 150 ms
2019-01-29 14:59:24,329 INFO com.peer39.commons.pattern.jms.JMSService - [DefaultQuartzScheduler_Worker-2] Consumed 1 messages from TRANSLATOR, elapsed time: 282 ms
2019-01-29 14:59:24,500 INFO com.peer39.commons.pattern.jms.JMSService - [DefaultQuartzScheduler_Worker-15] Consumed 0 messages from TRANSLATOR, elapsed time: 150 ms
2019-01-29 14:59:24,793 INFO com.peer39.commons.pattern.jms.JMSService - [DefaultQuartzScheduler_Worker-15] Consumed 0 messages from TRANSLATOR, elapsed time: 150 ms
2019-01-29 14:59:25,097 INFO com.peer39.commons.pattern.jms.JMSService - [DefaultQuartzScheduler_Worker-18] Consumed 0 messages from TRANSLATOR, elapsed time: 150 ms
2019-01-29 14:59:25,403 INFO com.peer39.commons.pattern.jms.JMSService - [DefaultQuartzScheduler_Worker-18] Consumed 1 messages from TRANSLATOR, elapsed time: 153 ms
2019-01-29 14:59:25,693 INFO com.peer39.commons.pattern.jms.JMSService - [DefaultQuartzScheduler_Worker-15] Consumed 0 messages from TRANSLATOR, elapsed time: 150 ms
2019-01-29 14:59:25,996 INFO com.peer39.commons.pattern.jms.JMSService - [DefaultQuartzScheduler_Worker-15] Consumed 0 messages from TRANSLATOR, elapsed time: 150 ms
2019-01-29 14:59:26,300 INFO com.peer39.commons.pattern.jms.JMSService - [DefaultQuartzScheduler_Worker-15] Consumed 0 messages from TRANSLATOR, elapsed time: 150 ms
2019-01-29 14:59:26,595 INFO com.peer39.commons.pattern.jms.JMSService - [DefaultQuartzScheduler_Worker-15] Consumed 0 messages from TRANSLATOR, elapsed time: 150 ms
2019-01-29 14:59:26,898 INFO com.peer39.commons.pattern.jms.JMSService - [DefaultQuartzScheduler_Worker-15] Consumed 0 messages from TRANSLATOR, elapsed time: 150 ms
2019-01-29 14:59:27,193 INFO com.peer39.commons.pattern.jms.JMSService - [DefaultQuartzScheduler_Worker-15] Consumed 0 messages from TRANSLATOR, elapsed time: 150 ms
2019-01-29 14:59:27,496 INFO com.peer39.commons.pattern.jms.JMSService - [DefaultQuartzScheduler_Worker-15] Consumed 0 messages from TRANSLATOR, elapsed time: 150 ms
Here is the daemon that runs every 300 ms:
public class Daemon extends ParallelQuartzDaemon {

    @Override
    protected ICollectorAgent.ProcessStatus executeWork(JobExecutionContext jobExecutionContext,
                                                        Map<String, Double> properties,
                                                        Map<String, String> alerts) throws Exception {
        ICollectorAgent.ProcessStatus processStatus = ICollectorAgent.ProcessStatus.SUCCESS;
        List<FlowRequest> flowRequests = getFlowRequestsForTranslation();
        if (!flowRequests.isEmpty()) {
            // Do work! Takes 1-2 minutes.
        }
        return processStatus;
    }

    private List<FlowRequest> getFlowRequestsForTranslation() {
        return translatorContext.getTranslatorJMSService().getFlowRequestsToTranslate(5);
    }
}
and the JMS class:
public class TranslatorJMSService extends JMSService {
    public List<FlowRequest> getFlowRequestsToTranslate(int count) {
        final Long jmsReceiveTimeoutMillis = translatorConfiguration.getJmsReceiveTimeoutMillis();
        return (List<FlowRequest>) receive(count, queueName, jmsReceiveTimeoutMillis);
    }
}
And last, the receive method is the one shown above.
Thanks!
Related
My watch extension fetches many items from Core Data on a background thread, using this code (shortened):
coreDataSerialQueue.async {
backgroundManagedContext.performAndWait {
…
let buyItemFetchRequest: NSFetchRequest<CDBuyItem> = CDBuyItem.fetchRequest()
…
do {
let cdShoppingItems: [CDBuyItem] = try backgroundManagedContext.fetch(buyItemFetchRequest)
…
return
} catch let error as NSError {
…
return
}
}
}
This code crashes with the following log:
Event: cpu usage
Action taken: Process killed
CPU: 2 seconds cpu time over 4 seconds (59% cpu average), exceeding limit of 14% cpu over 15 seconds
CPU limit: 2s
Limit duration: 15s
CPU used: 2s
CPU duration: 4s
Duration: 3.57s
Duration Sampled: 0.00s
Steps: 1
Obviously, it takes too long.
My questions:
Is there any time limit for a coreData background thread?
If so, how could I modify my code to avoid it?
I'm trying to parse some logs with grok, but I'm having trouble when the log lines don't all look the same.
Say my log file looks like this:
[2017-02-03 19:15:51,112] INFO [Group Metadata Manager on Broker 1]: Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.GroupMetadataManager)
[2017-02-03 19:25:51,112] INFO [Group Metadata Manager on Broker 1]: Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.GroupMetadataManager)
[2017-02-03 19:26:20,605] INFO Rolled new log segment for \'omega-replica-sync-dev-8\' in 21 ms. (kafka.log.Log)
[2017-02-03 19:26:20,605] INFO Scheduling log segment 1 for log omega-replica-sync-dev-8 for deletion. (kafka.log.Log)
[2017-02-03 19:27:20,606] INFO Deleting segment 1 from log omega-replica-sync-dev-8. (kafka.log.Log)
My current Node.js code looks like this:
'use strict';
var nodegrok = require('node-grok');
var msg = '[2017-02-03 19:15:51,112] INFO [Group Metadata Manager on Broker 1]: Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.GroupMetadataManager)\n[2017-02-03 19:25:51,112] INFO [Group Metadata Manager on Broker 1]: Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.GroupMetadataManager)\n[2017-02-03 19:26:20,605] INFO Rolled new log segment for \'omega-replica-sync-dev-8\' in 21 ms. (kafka.log.Log)\n[2017-02-03 19:26:20,605] INFO Scheduling log segment 1 for log omega-replica-sync-dev-8 for deletion. (kafka.log.Log)\n[2017-02-03 19:27:20,606] INFO Deleting segment 1 from log omega-replica-sync-dev-8. (kafka.log.Log)'
console.log('message: ', msg);
var p2 = '\\[%{TIMESTAMP_ISO8601:timestamp}\\] %{LOGLEVEL:level} \\[%{DATA:message1}\\]: %{GREEDYDATA:message2}';
// Load the default patterns and compile the expression once, outside the loop.
var patterns = nodegrok.loadDefaultSync();
var pattern = patterns.createPattern(p2);

var lines = msg.toString().split('\n');
for (var i = 0; i < lines.length; i++) {
    console.log('line [i]:', lines[i]);
    console.log('pattern:', pattern.parseSync(lines[i]));
}
but the last three lines output null, since they are missing the third, bracketed part that the pattern expects.
line [i]: [2017-02-03 19:15:51,112] INFO [Group Metadata Manager on Broker 1]: Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.GroupMetadataManager)
pattern: { timestamp: '2017-02-03 19:15:51,112',
level: 'INFO',
message1: 'Group Metadata Manager on Broker 1',
message2: 'Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.GroupMetadataManager)' }
line [i]: [2017-02-03 19:25:51,112] INFO [Group Metadata Manager on Broker 1]: Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.GroupMetadataManager)
pattern: { timestamp: '2017-02-03 19:25:51,112',
level: 'INFO',
message1: 'Group Metadata Manager on Broker 1',
message2: 'Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.GroupMetadataManager)' }
line [i]: [2017-02-03 19:26:20,605] INFO Rolled new log segment for 'omega-replica-sync-dev-8' in 21 ms. (kafka.log.Log)
pattern: null
line [i]: [2017-02-03 19:26:20,605] INFO Scheduling log segment 1 for log omega-replica-sync-dev-8 for deletion. (kafka.log.Log)
pattern: null
line [i]: [2017-02-03 19:27:20,606] INFO Deleting segment 1 from log omega-replica-sync-dev-8. (kafka.log.Log)
pattern: null
How can you handle lines with varying formats in grok, then?
So here is one way I got to work: check whether the pattern matches with an if statement, and fall back to another pattern if it doesn't. But what if there are 6 potential log formats? Do I have to write 6 nested if statements? That doesn't sound efficient to me. Is there a better way?
'use strict';
var nodegrok = require('node-grok');
var msg = '[2017-02-03 19:15:51,112] INFO [Group Metadata Manager on Broker 1]: Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.GroupMetadataManager)\n[2017-02-03 19:25:51,112] INFO [Group Metadata Manager on Broker 1]: Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.GroupMetadataManager)\n[2017-02-03 19:26:20,605] INFO Rolled new log segment for \'omega-replica-sync-dev-8\' in 21 ms. (kafka.log.Log)\n[2017-02-03 19:26:20,605] INFO Scheduling log segment 1 for log omega-replica-sync-dev-8 for deletion. (kafka.log.Log)\n[2017-02-03 19:27:20,606] INFO Deleting segment 1 from log omega-replica-sync-dev-8. (kafka.log.Log)'
console.log('message: ', msg);
var p = '\\[%{TIMESTAMP_ISO8601:timestamp}\\] %{LOGLEVEL:level} \\[%{DATA:message1}\\]: %{GREEDYDATA:message2}';
var p2 = '\\[%{TIMESTAMP_ISO8601:timestamp}\\] %{LOGLEVEL:level} %{GREEDYDATA:message2}';
// Compile both patterns once, outside the loop.
var patterns = nodegrok.loadDefaultSync();
var pattern = patterns.createPattern(p);
var fallbackPattern = patterns.createPattern(p2);

var lines = msg.toString().split('\n');
for (var i = 0; i < lines.length; i++) {
    console.log('line [i]:', lines[i]);
    var result = pattern.parseSync(lines[i]);
    if (result == null) {
        console.log('patternf:', fallbackPattern.parseSync(lines[i]));
    } else {
        console.log('pattern:', result);
    }
}
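Rather than nesting one if per format, the candidate patterns can be kept in an array ordered most-specific first, returning the first match. Below is a minimal self-contained sketch of that idea using plain RegExps as stand-ins for the grok patterns; the `candidates` list and `parseFirst` helper are hypothetical names, but the same loop shape works with `patterns.createPattern(...)` and `parseSync(...)` from node-grok.

```javascript
'use strict';

// Candidate formats, most specific first. Each entry pairs a regex with
// the field names its capture groups map to (stand-ins for grok patterns).
var candidates = [
  { re: /^\[(.+?)\] (\w+) \[(.+?)\]: (.*)$/, fields: ['timestamp', 'level', 'message1', 'message2'] },
  { re: /^\[(.+?)\] (\w+) (.*)$/,            fields: ['timestamp', 'level', 'message2'] }
];

// Try each candidate in order and return the first match as an object,
// or null when no format applies.
function parseFirst(line) {
  for (var i = 0; i < candidates.length; i++) {
    var m = candidates[i].re.exec(line);
    if (m) {
      var out = {};
      for (var j = 0; j < candidates[i].fields.length; j++) {
        out[candidates[i].fields[j]] = m[j + 1];
      }
      return out;
    }
  }
  return null;
}

console.log(parseFirst('[2017-02-03 19:27:20,606] INFO Deleting segment 1 from log omega-replica-sync-dev-8. (kafka.log.Log)'));
```

With this shape, handling a sixth format means adding one more array entry instead of another nested branch.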
public static void main(String[] args) throws SchedulerException {
    // Configure the job using Quartz.
    JobDetail job = JobBuilder.newJob(TriggerJob.class).withIdentity("testJob").build();
    System.out.println("Job created....................");

    // Specify the running period of the job.
    CronTrigger trigger = TriggerBuilder.newTrigger()
            .withIdentity("triggerName", "groupName")
            .withSchedule(CronScheduleBuilder.cronSchedule("0 51 4 5 1/1 ?"))
            .build();
    // CronScheduleBuilder.dailyAtHourAndMinute(3, 30)
    System.out.println("getCronExpression() = " + trigger.getCronExpression());
    System.out.println("Trigger created.................");

    SchedulerFactory schedulerFactory = new StdSchedulerFactory();
    Scheduler sched = schedulerFactory.getScheduler();
    sched.start();
    sched.scheduleJob(job, trigger);
    System.out.println("Job scheduled...................");
    // Note: shutting down immediately stops the scheduler before the trigger can fire;
    // delay or remove this call if the job is expected to actually run.
    sched.shutdown();
}
public class TriggerJob implements Job {

    public void execute(JobExecutionContext arg0) throws JobExecutionException {
        JobKey jobKey = arg0.getJobDetail().getKey();
        System.out.println("jobKey = " + jobKey.toString());
        Calendar calendar = Calendar.getInstance();
        // Call the EHCache loading mechanism once every day.
        System.out.println("Job execution started on - " + calendar.getTime());
        // Write your logic here.
        System.out.println("****************************************************************************************");
        System.out.println(" Insert Records");
        System.out.println("****************************************************************************************");
        System.out.println("Job execution completed on - " + calendar.getTime());
    }
}
Console Output:
Job created....................
Trigger created.................
getCronExpression() = 0 50 3 4 * ?
219 [main] INFO org.quartz.impl.StdSchedulerFactory - Using default implementation for ThreadExecutor
234 [main] INFO org.quartz.simpl.SimpleThreadPool - Job execution threads will use class loader of thread: main
313 [main] INFO org.quartz.core.SchedulerSignalerImpl - Initialized Scheduler Signaller of type: class org.quartz.core.SchedulerSignalerImpl
313 [main] INFO org.quartz.core.QuartzScheduler - Quartz Scheduler v.2.1.7 created.
329 [main] INFO org.quartz.simpl.RAMJobStore - RAMJobStore initialized.
329 [main] INFO org.quartz.core.QuartzScheduler - Scheduler meta-data: Quartz Scheduler (v2.1.7) 'DefaultQuartzScheduler' with instanceId 'NON_CLUSTERED'
Scheduler class: 'org.quartz.core.QuartzScheduler' - running locally.
NOT STARTED.
Currently in standby mode.
Number of jobs executed: 0
Using thread pool 'org.quartz.simpl.SimpleThreadPool' - with 10 threads.
Using job-store 'org.quartz.simpl.RAMJobStore' - which does not support persistence. and is not clustered.
329 [main] INFO org.quartz.impl.StdSchedulerFactory - Quartz scheduler 'DefaultQuartzScheduler' initialized from default resource file in Quartz package: 'quartz.properties'
329 [main] INFO org.quartz.impl.StdSchedulerFactory - Quartz scheduler version: 2.1.7
329 [main] INFO org.quartz.core.QuartzScheduler - Scheduler DefaultQuartzScheduler_$_NON_CLUSTERED started.
Job scheduled...................
@monthly runs the job once a month, on the 1st, at 12:00am. In standard cron syntax this is equivalent to: 0 0 1 * *.
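The macro-to-field equivalence can be sketched as a small lookup table; the `expandCron` helper name below is illustrative, not part of any cron library.

```javascript
'use strict';

// Standard cron macros and the five-field expressions they expand to
// (minute hour day-of-month month day-of-week).
var cronMacros = {
  '@hourly':  '0 * * * *',
  '@daily':   '0 0 * * *',
  '@weekly':  '0 0 * * 0',
  '@monthly': '0 0 1 * *',  // 1st of every month at 12:00am
  '@yearly':  '0 0 1 1 *'
};

// Expand a macro to its five-field form; pass plain expressions through.
function expandCron(expr) {
  return cronMacros[expr] || expr;
}

console.log(expandCron('@monthly')); // → 0 0 1 * *
```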
Trying to poll an Azure Service Bus Queue using a WebJob written in Node.js. I created 2 WebJobs. The first is on demand and sends 10 unique messages to the queue. The second job is continuous and polls the queue for messages.
Encountering the following issues:
The polling is SLOW. It takes an average of about 10 minutes to receive 10 messages. See sample log details below. Basically unusable at this speed. All the delay is from getting a response from receiveQueueMessage. Response times vary from 0 seconds to ~120 seconds, with an average of 60 seconds.
The messages are being received in a random order. Not FIFO.
Sometimes messages are received twice, even though they are being read in ReceiveAndDelete mode (I have tried with no read mode parameter which should default to ReceiveAndDelete, with {isReceiveAndDelete:true} and with {isPeekLock:false} with the same results).
When the queue is empty, it should keep the receive request open for a day, but it always returns with a no message error after 230 seconds. According to the documentation the max is 24 days so I don't know where 230 seconds is coming from:
The maximum timeout for a blocking receive operation in Service Bus
queues is 24 days. However, REST-based timeouts have a maximum value
of 55 seconds.
Basically nothing works as advertised. What am I doing wrong?
Send Message Test Job:
var uuid = require('node-uuid');
var azure = require('azure');
var serviceBus = azure.createServiceBusService(process.env.busSearchConnectionString);
var messagesToSend = 10;
sendMessage(0);
function sendMessage(count)
{
var message = {
body: 'test message',
customProperties: {
message_number: count,
sent_date: new Date
},
brokerProperties: {
MessageId: uuid.v4() //ensure that service bus doesn't think this is a duplicate message
}
};
serviceBus.sendQueueMessage(process.env.busSearchQueueName, message, function(err) {
if (!err) {
console.log('sent test message number ' + count.toString());
} else {
console.error('error sending message: ' + err);
}
});
// Wait 5 seconds to ensure messages reach Service Bus in the correct order
// (count + 1 so that exactly messagesToSend messages, numbered 0-9, are sent).
if (count + 1 < messagesToSend) {
setTimeout(function(newCount) {
//send next message
sendMessage(newCount);
}, 5000, count+1);
}
}
Receive Message Continuous Job:
console.log('listener job started');
var azure = require('azure');
var serviceBus = azure.createServiceBusService(process.env.busSearchConnectionString);
listenForMessages(serviceBus);
function listenForMessages(serviceBus)
{
var start = process.hrtime();
var timeOut = 60*60*24; //long poll for 1 day
serviceBus.receiveQueueMessage(process.env.busSearchQueueName, {timeoutIntervalInS: timeOut, isReceiveAndDelete: true}, function(err, message) {
var end = process.hrtime(start);
console.log('received a response in %ds seconds', end[0]);
if (err) {
console.log('error requesting message: ' + err);
listenForMessages(serviceBus);
} else {
if (message !== null && typeof message === 'object' && 'customProperties' in message && 'message_number' in message.customProperties) {
console.log('received test message number ' + message.customProperties.message_number.toString());
listenForMessages(serviceBus);
} else {
console.log('invalid message received');
listenForMessages(serviceBus);
}
}
});
}
Sample Log Output:
[05/06/2015 21:50:14 > 8c2504: SYS INFO] Status changed to Running
[05/06/2015 21:50:14 > 8c2504: INFO] listener job started
[05/06/2015 21:51:23 > 8c2504: INFO] received a response in 1s seconds
[05/06/2015 21:51:23 > 8c2504: INFO] received test message number 0
[05/06/2015 21:51:25 > 8c2504: INFO] received a response in 2s seconds
[05/06/2015 21:51:26 > 8c2504: INFO] received test message number 4
[05/06/2015 21:51:27 > 8c2504: INFO] received a response in 1s seconds
[05/06/2015 21:51:27 > 8c2504: INFO] received test message number 7
[05/06/2015 21:51:28 > 8c2504: INFO] received a response in 0s seconds
[05/06/2015 21:51:29 > 8c2504: INFO] received test message number 9
[05/06/2015 21:51:49 > 8c2504: INFO] received a response in 20s seconds
[05/06/2015 21:51:49 > 8c2504: INFO] received test message number 1
[05/06/2015 21:53:35 > 8c2504: INFO] received a response in 106s seconds
[05/06/2015 21:53:35 > 8c2504: INFO] received test message number 1
[05/06/2015 21:54:26 > 8c2504: INFO] received a response in 50s seconds
[05/06/2015 21:54:26 > 8c2504: INFO] received test message number 5
[05/06/2015 21:54:35 > 8c2504: INFO] received a response in 9s seconds
[05/06/2015 21:54:35 > 8c2504: INFO] received test message number 9
[05/06/2015 21:55:28 > 8c2504: INFO] received a response in 53s seconds
[05/06/2015 21:55:28 > 8c2504: INFO] received test message number 2
[05/06/2015 21:57:26 > 8c2504: INFO] received a response in 118s seconds
[05/06/2015 21:57:26 > 8c2504: INFO] received test message number 6
[05/06/2015 21:58:28 > 8c2504: INFO] received a response in 61s seconds
[05/06/2015 21:58:28 > 8c2504: INFO] received test message number 8
[05/06/2015 22:00:35 > 8c2504: INFO] received a response in 126s seconds
[05/06/2015 22:00:35 > 8c2504: INFO] received test message number 3
[05/06/2015 22:04:25 > 8c2504: INFO] received a response in 230s seconds
[05/06/2015 22:04:25 > 8c2504: INFO] error requesting message: No messages to receive
[05/06/2015 22:08:16 > 8c2504: INFO] received a response in 230s seconds
[05/06/2015 22:04:25 > 8c2504: INFO] error requesting message: No messages to receive
And the issue was that the queue I was using was partitioned (the default option when creating a queue in the Azure portal). Once I created a new queue that was not partitioned, everything worked as expected without the lag (other than the weird 230-second timeout on a long-poll attempt). So basically the Node.js library doesn't work for partitioned queues, at all. Wasted many days figuring that one out. Will leave this here for others.
Switching off the partitioned flag of the Service Bus queue worked for me, too.
With the partitioned queue some messages had delays of more than 30 minutes.
A simple .NET web client could download all messages without any delays. However, as soon as Node.js was supposed to download messages, only the first message would be downloaded without problems; afterwards, delays showed up. Playing with the Node.js HTTP agent options (keepalive and socket timeout) did not improve the situation.
After stopping Node.js, I had to wait several minutes before the .NET client actually started working without problems. This was reproducible several times. I also found that the simple .NET web client program showed similar problems after being started and stopped several times in a row.
Anyway, your post showed me the solution: Turn off the partitioned flag :)
Try using AMQP to read the messages off the Azure Service Bus partitioned queue; this works for a partitioned topic/queue, and you don't even have to poll much.
const AMQPClient = require('amqp10').Client;
const Policy = require('amqp10').Policy;
const protocol = 'amqps';
const keyName = 'RootManageSharedAccessKey';
const sasKey = 'your_key_goes_here';
const serviceBusHost = 'namespace.servicebus.windows.net';
const uri = `${protocol}://${encodeURIComponent(keyName)}:${encodeURIComponent(sasKey)}@${serviceBusHost}`;
const queueName = 'partitionedQueueName';
const client = new AMQPClient(Policy.ServiceBusQueue);
client.connect(uri)
.then(() => Promise.all([client.createReceiver(queueName)]))
.spread((receiver) => {
console.log('--------------------------------------------------------------------------');
receiver.on('errorReceived', (err) => {
// check for errors
console.log(err);
});
receiver.on('message', (message) => {
console.log('Received message');
console.log(message);
console.log('----------------------------------------------------------------------------');
});
})
.error((e) => {
console.warn('connection error: ', e);
});
https://www.npmjs.com/package/amqp10
I have this Button click handler (MonoMac on OS X 10.9.3):
partial void OnDoButtonClick(NSObject sender)
{
DoButton.Enabled = false;
// Start animation
ProgressIndicator.StartAnimation(this);
ThreadPool.QueueUserWorkItem(_ => {
// Perform a task that last for about a second:
Thread.Sleep(1 * 1000);
// Stop animation:
InvokeOnMainThread(() => {
ProgressIndicator.StopAnimation(this);
DoButton.Enabled = true;
});
});
}
However, when I run the code and press the button, the main thread stops and the following error occurs:
(lldb) quit* thread #1: tid = 0x2bf20, 0x98fd9f7a libsystem_kernel.dylib`mach_msg_trap + 10, queue = 'com.apple.main-thread', stop reason = signal SIGSTOP
And, the following log is recorded in the system log:
2014/05/21 13:10:51.752 com.apple.debugserver-310.2[3553]: 1 +0.000001 sec [0de1/1503]: error: ::read ( 0, 0x107557a40, 1024 ) => -1 err = Connection reset by peer (0x00000036)
2014/05/21 13:10:51.752 com.apple.debugserver-310.2[3553]: 2 +0.000001 sec [0de1/0303]: error: ::ptrace (request = PT_THUPDATE, pid = 0x0ddc, tid = 0x1a03, signal = -1) err = Invalid argument (0x00000016)
2014/05/21 13:10:51.753 com.apple.debugserver-310.2[3553]: Exiting.
2014/05/21 13:11:05.000 kernel[0]: process <AppName>[3548] caught causing excessive wakeups. Observed wakeups rate (per sec): 1513; Maximum permitted wakeups rate (per sec): 150; Observation period: 300 seconds; Task lifetime number of wakeups: 45061
2014/05/21 13:11:05.302 ReportCrash[3555]: Invoking spindump for pid=3548 wakeups_rate=1513 duration=30 because of excessive wakeups
2014/05/21 13:11:07.452 spindump[3556]: Saved wakeups_resource.spin report for <AppName> version 1.2.1.0 (1) to /Library/Logs/DiagnosticReports/<AppName>_2014-05-21-131107_<UserName>-MacBook-Pro.wakeups_resource.spin
Extract from above: Maximum permitted wakeups rate (per sec): 150; Observation period: 300 seconds; Task lifetime number of wakeups: 45061
The problem does NOT happen if I remove the ProgressIndicator.StartAnimation(this); and ProgressIndicator.StopAnimation(this); lines.
Why is the main thread stopped by SIGSTOP?