What will happen if the size of the event data object is more than 4 MB when creating an EventDataBatch using EventDataBatch.TryAdd() in the Azure.Messaging.EventHubs.Producer namespace?
As per the MS documentation it returns true or false. Two exceptions are documented: ArgumentNullException, thrown when the EventData is null, and ObjectDisposedException, thrown when the batch has already been disposed.
What exception will it throw in case the event data object is more than 4 MB?
None; it will return false and the event data will not have been added. Because the method returns a boolean, there is no need to throw an exception when the size is too big. That is the whole idea of this method: to add event data and, in that process, check the total size in a safe way.
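The question concerns the .NET SDK, but the JavaScript SDK (@azure/event-hubs) exposes the same contract through tryAdd, which likewise returns false rather than throwing when an event does not fit. A minimal sketch of that pattern (the connection string and hub name are placeholders):

const { EventHubProducerClient } = require("@azure/event-hubs");

async function sendEvents(bodies) {
  const producer = new EventHubProducerClient("<connection-string>", "<event-hub-name>");
  let batch = await producer.createBatch();

  for (const body of bodies) {
    // tryAdd returns false instead of throwing when the event will not fit.
    if (!batch.tryAdd({ body })) {
      if (batch.count === 0) {
        // The event is too large even for an empty batch; it can never be sent.
        throw new Error("Event exceeds the maximum allowed batch size");
      }
      // The batch is merely full: send it and retry the event in a fresh batch.
      await producer.sendBatch(batch);
      batch = await producer.createBatch();
      if (!batch.tryAdd({ body })) {
        throw new Error("Event exceeds the maximum allowed batch size");
      }
    }
  }
  if (batch.count > 0) {
    await producer.sendBatch(batch);
  }
  await producer.close();
}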
So I have 3 Lambdas: one with an API event that triggers a Lambda that pulls down around 50,000 objects and pushes them all to a queue.
The second Lambda reads from the queue, 10 messages at a time, in a loop of 30 iterations: it reads, does some work, invokes the third Lambda, returns a promise, then reads again, for a total of 300 messages read while the Lambda executes.
The 3rd Lambda takes the information from the queue and hits another endpoint with it.
The issue is in that second Lambda. First I call a function that returns the number of messages in the queue, and if it's more than zero I read them. However, even if there are 20,000 messages in the queue, it often comes back with nothing, and I'm not sure why.
I have WaitTimeSeconds set to 20 for long polling. Any help would be greatly appreciated; the docs claim I can read up to 3,000 messages per second from a FIFO queue and I'm having trouble getting anywhere near that performance.
Here's the code:
const AWS = require('aws-sdk');

exports.handler = (event, context, callback) => {
  const sqs = new AWS.SQS({ region: process.env.AWS_REGION });

  getMessageCount(sqs)
    .then((messageCount) => {
      if (messageCount > 0) {
        // Read from the queue 30 times in series, 10 messages per read.
        return mapSeries(range(0, 30), getMessages(sqs))
          .then((messageRes) => {
            callback(null, messageRes);
          });
      }
      // Only signal completion when the queue reports no messages.
      callback(null, 'No more messages');
    })
    .catch((e) => {
      callback(e);
    });
};
getMessageCount makes a call to sqs.getQueueAttributes and returns a promise that resolves with the number of messages.
mapSeries lets the loop wait for the previous promise to be resolved or rejected before iterating; on each iteration it calls getMessages, which calls sqs.receiveMessage and invokes the 3rd Lambda with the data.
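Since those helpers aren't shown in the question, here is a rough sketch of what range and mapSeries might look like; these are assumed implementations inferred from the description above, not the originals:

// Assumed: builds [start, start + 1, ..., end - 1].
const range = (start, end) =>
  Array.from({ length: end - start }, (_, i) => start + i);

// Assumed: runs fn over items strictly one at a time, waiting for each
// promise to settle before starting the next iteration.
function mapSeries(items, fn) {
  return items.reduce(
    (chain, item) =>
      chain.then((results) => fn(item).then((res) => results.concat([res]))),
    Promise.resolve([])
  );
}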
Any perspective on this is appreciated, thank you!
As I understand your question, the problem lies with getting the number of messages in the queue. If you had included your getMessageCount(sqs) implementation as well, we could have determined which attributes you are trying to retrieve from SQS.
There are three attributes relevant to getting the message count in SQS:
ApproximateNumberOfMessages - Returns the approximate number of visible messages in the queue.
ApproximateNumberOfMessagesNotVisible - Returns the approximate number of messages that have not timed out and aren't deleted, i.e. messages in flight.
If you want to include the messages that are waiting to be added, you can consider the following attribute as well:
ApproximateNumberOfMessagesDelayed - Returns the approximate number of messages that are delayed and not yet available for reading.
By considering these attributes, you can get a much more accurate count from SQS.
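For reference, combining these attributes could look something like the sketch below; the queue URL parameter and the function shape are assumptions, since the original getMessageCount isn't shown:

const AWS = require('aws-sdk');
const sqs = new AWS.SQS({ region: process.env.AWS_REGION });

// Sums the three approximate counts into a single total.
function getMessageCount(queueUrl) {
  return sqs
    .getQueueAttributes({
      QueueUrl: queueUrl,
      AttributeNames: [
        'ApproximateNumberOfMessages',
        'ApproximateNumberOfMessagesNotVisible',
        'ApproximateNumberOfMessagesDelayed',
      ],
    })
    .promise()
    .then(({ Attributes }) =>
      Number(Attributes.ApproximateNumberOfMessages) +
      Number(Attributes.ApproximateNumberOfMessagesNotVisible) +
      Number(Attributes.ApproximateNumberOfMessagesDelayed));
}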
Also, if I may suggest: I implemented a similar system, but without checking the count. I retrieve 10 messages at a time via polling, process them, and delete them from the queue. As per your example, you can repeat this 30 times. If the getMessages(sqs) function returns an empty set, we can assume the queue is empty (this depends on whether you are using short polling or long polling). Either way, checking the number of messages at every step seems redundant. That holds for this example; it might differ for other use cases.
Read through the API documentation: https://docs.aws.amazon.com/AWSJavaScriptSDK/latest/AWS/SQS.html#receiveMessage-property
Parameters:
MaxNumberOfMessages — (Integer)
The maximum number of messages to return. Amazon SQS never returns more messages than this value (however, fewer messages might be returned). Valid values are 1 to 10. Default is 1.
Wrap your code in a while loop and anticipate the frequent case of 0 messages, since the service may return fewer messages than requested, including none at all.
Something like...
let messages = [];
while (messages.length < NUMBER_OF_MSGS_YOU_REALLY_WANT) {
  const result = await getSQSMessages(NUMBER_OF_MSGS_YOU_REALLY_WANT - messages.length);
  // Messages is undefined when the receive call returns nothing.
  if (result.Messages && result.Messages.length > 0) {
    messages = messages.concat(result.Messages);
  }
}
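Here, getSQSMessages is assumed to be a thin wrapper around receiveMessage with long polling enabled, something like:

// Assumed helper -- the queue URL environment variable is a placeholder.
function getSQSMessages(maxMessages) {
  return sqs
    .receiveMessage({
      QueueUrl: process.env.QUEUE_URL,
      // receiveMessage accepts at most 10 messages per call.
      MaxNumberOfMessages: Math.min(maxMessages, 10),
      WaitTimeSeconds: 20, // long polling
    })
    .promise();
}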
I have a strange issue with multi-threading. I want to print a table view, so I start a new thread which runs alongside a progress bar. Eventually this thread dies with memory errors, and I'm looking for the cause. Right now I get:
malloc: *** error for object 0x10000078c: Invalid signature for pointer dequeued from free list
*** set a breakpoint in malloc_error_break to debug
CaLister(27054,0x7fff73ea3300) malloc: *** error for object 0x60800043bcc0: Invalid pointer dequeued from free list
*** set a breakpoint in malloc_error_break to debug
But most of the time (the failure itself still happens rarely!) it just stops with no specific, though definitely memory-related, error. When I look in the debugger, my data are nil. Yet they cannot have been touched, since they are still available for display in my table view.
Now the question: are there any precautions I need to take so that data allocated in the main thread can be safely accessed from the detached thread?
Edit:
My print thread is dispatched like this (stripped code):
dispatch_async(dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_BACKGROUND, 0)) {
    self.performPrint(true)
}

func performPrint(async: Bool) {
    // First get the shared print info object so we know page sizes.
    // The shared print info object acts like a global variable.
    let sharedPrintInfo = NSPrintInfo.sharedPrintInfo()
    let printObject = PSPrint()

    // Allocate a new instance of NSView into the variable printPageView.
    let frame = NSRect(x: 0, y: 0,
                       width: sharedPrintInfo.paperSize.width - sharedPrintInfo.leftMargin - sharedPrintInfo.rightMargin,
                       height: sharedPrintInfo.paperSize.height - sharedPrintInfo.topMargin - sharedPrintInfo.bottomMargin)
    let basePrintPageView = PSPrintView(frame: frame)
    var printPageView: PSPrintView

    for pageNo in 1...paperDimensions.pages {
        // Set the option for the printView for what it should draw.
        paperDimensions.pageNo = pageNo
        printPageView = basePrintPageView.clone(pageNo)
        // Finally append the view to the PSPrint object.
        printObject.printViews.append(printPageView)
    }

    dispatch_async(dispatch_get_main_queue()) {
        printObject.printTheViews() // print all the views, each view being a 'page'.
    }
}
Within
printPageView = basePrintPageView.clone(pageNo)
the access to my table view (where I get the data to be printed) sometimes returns nil.
Edit 2: I just noticed that it's not the background thread that crashed, but the main thread :-/ Scratching my head even more, but I'll likely have to close this question.
I'm relatively new to Apex, but I have some questions about a batch job that I am creating. I want to make a query with a subquery (please see the code). Every Portal__c can have more than 200 Exporte__r child records.
global Database.QueryLocator start(Database.BatchableContext BC) {
String query = 'SELECT Id, Name, (SELECT Id FROM Exporte__r) FROM Portal__c';
return Database.getQueryLocator(query);
}
global void execute(Database.BatchableContext BC, List<Portal__c> scope) {
for (Portal__c portal : scope) {
// doesn't work -> First error: Aggregate query has too many rows for direct assignment, use FOR loop
// when using FOR loop -> System.QueryException: invalid query locator
//List<Export__c> relatedExports = portal.Exporte__r;
// grab all the related Export__c records using 'getSObjects' to avoid errors described above
Export__c[] relatedExports = portal.getSObjects('Exporte__r');
if (relatedExports != null) {
for (Export__c exp : relatedExports) {
// do something
}
}
}
}
I have the following questions:
1. If I use List<Export__c> relatedExports = portal.Exporte__r (which I commented out) to get the subquery records, I receive the error message "Aggregate query has too many rows for direct assignment, use FOR loop". The error message makes no sense to me, as the SOQL query has already been executed. Is there any explanation?
2. With the solution above, the maximum number of Export__c records received per Portal__c from the subquery is 199, although some Portal__c records have more than 200. Why is it limited to that number? It seems all records above 199 are simply ignored.
3. Is there any way to receive more than 199 records from a subquery? I have tried changing the batch size, but it seems to be independent of the number of records retrievable by the subquery. Any idea?
Many thanks!
As per the Salesforce doc http://www.salesforce.com/us/developer/docs/apexcode/Content/langCon_apex_loops_for_SOQL.htm
You might get a QueryException in a SOQL for loop with the message "Aggregate query has too many rows for direct assignment, use FOR loop". This exception is sometimes thrown when accessing a large set of child records of a retrieved sObject inside the loop, or when getting the size of such a record set. To avoid getting this exception, use a for loop to iterate over the child records, as follows.
Integer count = 0;
for (Contact c : returnedAccount.Contacts) {
    count++;
    // Do some other processing
}
I'm very new to ActionScript Workers, but I would like to know if this is possible.
From what I have read, ActionScript Workers (ASW) are like separate threads that can do more CPU intensive calculations without interrupting the Main thread (which is executing your main SWF file).
The only example I have really seen kicking around was one illustrating animation playing at a consistent rate while an ASW took care of loading or calculating some intensive math formulas.
Is the Sound API available for ActionScript Workers?
They certainly can! Check out my recent blog post on exactly this:
http://flexmonkey.blogspot.co.uk/2012/09/multi-threaded-sound-synthesis-in-flex.html#!/2012/09/multi-threaded-sound-synthesis-in-flex.html
After a fair bit of tinkering, I generate a byte array in a background worker and then write the data back to the SampleDataEvent's data property in the primordial thread (i.e. the user interface).
I write the data from the previous SampleDataEvent while the worker generates the data for the next one, so Flash Player is actually doing three tasks simultaneously: offering a responsive UI, playing a tone, and generating the next tone.
I'm going to go out on a limb and answer YES to this question.
The release notes have a list of "non-functional" APIs, and I don't see any sound-related classes in it.
The following APIs will not be available from within a background worker. Any attempt to construct an instance of any of these will throw an IllegalOperationError with the message "This feature is not available within this context"; the errorID will be the same in all instances, allowing developers to key off of this value.
flash.desktop.Clipboard // calling constructor will throw; calling generalClipboard will return null
flash.desktop.NativeDragManager // isSupported returns false
flash.desktop.Updater // isSupported returns false
flash.display.NativeMenu // isSupported returns false
flash.display.NativeWindow // isSupported returns false
flash.display.ToastWindow // can't access instance because stage.window will never be defined
flash.display.Window // can't access instance because stage.window will never be defined
flash.external.ExtensionContext // createExtensionContext() will always return null or throw an error
flash.external.ExternalInterface // available returns false
flash.html.* // HTMLLoader.isSupported returns false
flash.media.CameraRoll // supportsAddBitmapData and supportsBrowseForImage returns false
flash.media.CameraUI // isSupported returns false
flash.media.StageWebView // isSupported returns false
flash.net.drm.* // DRMManager.isSupported returns false
flash.printing.* // PrintJob.isSupported returns false
flash.security.XMLSignatureValidator // isSupported returns false
flash.system.IME // isSupported returns false
flash.system.SystemUpdater // calling constructor throws
flash.text.StageText // calling constructor throws
flash.ui.ContextMenu // isSupported returns false
flash.ui.GameInput // isSupported returns false
flash.ui.Mouse // all methods are no-ops; setting 'cursor' property is a no-op
I inherited a website which uses SubSonic 2.0 and gets an intermittent error of "Offset and length were out of bounds for the array". If we restart the app or recycle the app pool, the issue goes away. I suspect it has something to do with SubSonic caching the table schema, based on the error log below. Has anyone experienced this issue, and can you suggest a fix?
System.ArgumentException
Offset and length were out of bounds for the array or count is greater than the number of elements from index to the end of the source collection.
System.Exception: Exception of type 'System.Web.HttpUnhandledException' was thrown. ---> System.ArgumentException: Offset and length were out of bounds for the array or count is greater than the number of elements from index to the end of the source collection.
at System.Array.BinarySearch[T](T[] array, Int32 index, Int32 length, T value, IComparer`1 comparer)
at System.Collections.Generic.SortedList`2.IndexOfKey(TKey key)
at System.Collections.Generic.SortedList`2.ContainsKey(TKey key)
at SubSonic.DataService.GetSchema(String tableName, String providerName, TableType tableType)
at SubSonic.DataService.GetTableSchema(String tableName, String providerName)
at SubSonic.Query..ctor(String tableName)
at G05.ProductController.GetProductByColorName(Int32 productId, String colorName) in C:\Projects\G05\Code\BusinessLogic\ProductController.vb:line 514
Strange that it's intermittent. How are the objects being generated? Is it using the .abp file? If so, I'd recommend running the files through SubCommander to hard-generate the classes. That way the object generation is never executed in the production environment.