How can I fetch a large amount of data, say 1000 rows, without the UI getting stuck?
I tried this:
dispatch_async(dispatch_get_main_queue(), {
//here code
})
but when I execute the request with self.context.executeFetchRequest it crashes with fatal error: unexpectedly found nil while unwrapping an Optional value. (Inside the closure I also get an error and have to add self. in front of the call.) I also tried a background queue:
let queue:dispatch_queue_t = dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0)
dispatch_async(queue, { () -> Void in
//code
})
but I still get the same error.
I use NSFetchRequest, put the results into an NSArray, loop over them in a for loop, and inside the loop I sort the results into dictionaries.
1000 records is not very much for Core Data. Just fetch them on the main thread.
I would not advise "sorting results into dictionaries". You should think about how your app logic interacts with the data and simply fetch the objects you need from the Core Data persistent store.
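Something like this, roughly (Swift 1.x-era API to match your GCD calls; "Record", the predicate key and someCode are placeholder names):
let request = NSFetchRequest(entityName: "Record")
request.predicate = NSPredicate(format: "code == %@", someCode)   // someCode is a placeholder
request.fetchBatchSize = 50                                       // rows are faulted in lazily as you touch them
var error: NSError?
let results = context.executeFetchRequest(request, error: &error)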
For example, if you want to display 1000 rows in a table view, use `NSFetchedResultsController`, which is optimized for exactly this situation, so you will avoid memory and performance issues without any extra work.
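A rough sketch of that setup (again Swift 1.x-era API; the entity "Record" and sort key "name" are placeholders, and self is assumed to be a table view controller adopting NSFetchedResultsControllerDelegate):
let request = NSFetchRequest(entityName: "Record")
request.sortDescriptors = [NSSortDescriptor(key: "name", ascending: true)]
request.fetchBatchSize = 50

let frc = NSFetchedResultsController(fetchRequest: request,
    managedObjectContext: context,
    sectionNameKeyPath: nil,
    cacheName: nil)
frc.delegate = self            // change notifications for free
var error: NSError?
frc.performFetch(&error)       // rows are then available via frc.objectAtIndexPath(indexPath)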
If you really need threading with Core Data (which I doubt), I would advise not starting with GCD but using Core Data's own concurrency APIs, such as performBlock and private-queue child contexts. But most likely you won't have to worry about those.
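If you do end up needing it, a minimal sketch of a private-queue child context (names are illustrative; note that managed objects must stay on their context's queue, so hand only plain values back to the UI):
let backgroundContext = NSManagedObjectContext(concurrencyType: .PrivateQueueConcurrencyType)
backgroundContext.parentContext = context      // child of your main-queue context

backgroundContext.performBlock {
    let request = NSFetchRequest(entityName: "Record")
    var error: NSError?
    let results = backgroundContext.executeFetchRequest(request, error: &error)
    // work with results here, on the context's private queue
    dispatch_async(dispatch_get_main_queue()) {
        // hand only finished, lightweight values back to the UI
    }
}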
Finally, your error really refers to code that you have not posted. It has to do with Swift's optionals. For example, if you declare a variable as var variable: String? (or use an API that returns such a type), you can force-unwrap it with variable! if you are sure it is not nil. If it is nil, you will get exactly the crash above.
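A tiny illustration of that crash:
var name: String?              // currently nil
// let crash = name!           // fatal error: unexpectedly found nil while unwrapping an Optional value
if let unwrapped = name {      // safe: the body only runs when name is non-nil
    println(unwrapped)
}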
The following functions and fields are part of the same class in a Visual Studio DLL. Data is continuously being read and processed by the run function on a worker thread, while getPoints is called from a Qt app on a QTimer. I don't want to miss a single processed vector, but it seems some are being skipped, leading to jumpy data. What is the safest way for getPoints to return the most recently processed points?
If possible I'd like an answer that uses the C++ standard library, as I've been exploring mutexes, but that still seems to lead to jumpy data.
vector<float> points;
// std::mutex ioMutex;

// function running on a worker thread
void run() {
    while (running) {
        // ioMutex.lock();
        vector<byte> data = ReadData();
        points = processData(data);
        // ioMutex.unlock();
    }
}

// called from the Qt app on a QTimer
vector<float> getPoints() {
    return points;
}
I believe there is a mistake in your code. The while loop will consume all of the processor time and will not let other functions run properly. In Qt, in such continuous loops, it is usually a good habit to call the following, because it gives other processing a chance to run by servicing the event queue. If this DLL is written against Qt, please add the following inside the while loop:
QCoreApplication::processEvents();
The safest (and probably easiest) way to deliver your points data to the main thread is by calling qApp->postEvent() with an object of a custom QEvent subclass that contains your vector<float> as a member variable.
That will cause the event(QEvent *) method of whatever Qt object you specified as the first argument to postEvent() to be called from inside the main/GUI thread, so you can override that method to read the vector<float> out of the QEvent subclass and update the GUI with that data.
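A rough sketch of that idea (the receiver class, updatePlot() and PointsEventType are illustrative names, not from your code):
#include <QCoreApplication>
#include <QEvent>
#include <utility>
#include <vector>

// Custom event type carrying the processed points.
static const QEvent::Type PointsEventType = static_cast<QEvent::Type>(QEvent::User + 1);

class PointsEvent : public QEvent {
public:
    explicit PointsEvent(std::vector<float> pts)
        : QEvent(PointsEventType), points(std::move(pts)) {}
    std::vector<float> points;
};

// In the worker thread, after processData():
//     qApp->postEvent(receiverObject, new PointsEvent(points));
// Qt takes ownership of the event and delivers it on the receiver's (main) thread.

// In the receiver, a QObject that lives in the main/GUI thread:
// bool Receiver::event(QEvent *e) {
//     if (e->type() == PointsEventType) {
//         PointsEvent *pe = static_cast<PointsEvent *>(e);
//         updatePlot(pe->points);   // safe: we are on the GUI thread here
//         return true;
//     }
//     return QObject::event(e);
// }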
I have been writing a lot of Node.js recently, and that has forced me to attack some problems from a different perspective. I was wondering what patterns have developed for processing chunks of data sequentially (rather than in parallel) in an asynchronous environment, but I haven't been able to find anything directly relevant.
So to summarize the problem:
I have a list of data stored in an array format that I need to process.
I have to send this data to a service asynchronously, but the service will only accept a few items at a time.
The data must be processed sequentially to meet the restrictions of the service, meaning that making a number of parallel asynchronous requests is not allowed.
Working in this domain, the simplest pattern I've come up with is a recursive one. Something like
function processData(data, start, step, callback){
    if(start < data.length){
        var chunk = data.slice(start, start + step);   // take the next chunk
        queryService(chunk, start, step, function(e, d){
            // Assume no errors.
            // Could possibly do some matching between d and 'data' here to
            // update data with anything that the service may have returned.
            processData(data, start + step, step, callback);
        });
    }
    else {
        callback(data);
    }
}
Conceptually this should step through each chunk, but it feels unintuitive and complex. I feel like there should be a simpler way of doing this. Does anyone have a pattern they tend to follow when approaching this kind of problem?
My first thought would be to rely on object encapsulation: create an object that holds everything about what needs to be processed, what has been processed, and what is currently being processed, and have the callback simply call the object's next function, which in turn starts processing the next piece of data and updates the object. Essentially it works like an asynchronous for-loop.
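A minimal sketch of that idea, reusing your queryService signature (SequentialProcessor, position and done are illustrative names):
function SequentialProcessor(data, step, queryService, done) {
    this.data = data;
    this.step = step;
    this.position = 0;
    this.queryService = queryService;
    this.done = done;
}

SequentialProcessor.prototype.next = function () {
    var self = this;
    if (self.position >= self.data.length) {
        return self.done(self.data);                 // everything has been processed
    }
    var chunk = self.data.slice(self.position, self.position + self.step);
    self.queryService(chunk, self.position, self.step, function (e, d) {
        // merge d back into self.data here if needed
        self.position += self.step;
        self.next();                                 // move on to the next chunk
    });
};

// Usage: new SequentialProcessor(items, 10, queryService, onFinished).next();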
During a code review I was told to make sure that I call Dispose on all the query objects (LINQ to Entities) I use.
Is this really such a big memory leak? How can I fix it more elegantly? Right now I don't dispose of anything after I'm finished.
My code is the following:
var query = from x in listOfEntities where some_condition select x;
I think this query will not cause a memory leak. It's not the query that may cause the problem; it's probably the data context. Since the data context implements IDisposable you can do:
using (var ctx = new yourDataContext())
{
var query = from x in ctx.listOfEntities where some_condition select x;
}
The using statement ensures that the connection to the database is closed once execution leaves the block. It is equivalent to a try/finally block:
var ctx = new yourDataContext();
try
{
var query = from x in ctx.listOfEntities where some_condition select x;
}
finally
{
if(ctx != null)
ctx.Dispose();
}
EDIT (based on comments from @Fredrik Mörk): the code above is just to show the usage of the using statement with respect to the DataContext. To use the query results outside the using block, declare the variable outside the block and call ToList (or a similar method) inside it so that the query actually executes; you can then use the results later. Otherwise, because of deferred execution, the code would fail once the context has been disposed.
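A minimal sketch of that point, reusing the placeholder names yourDataContext, listOfEntities and some_condition from above (Entity is likewise a placeholder type):
List<Entity> results;
using (var ctx = new yourDataContext())
{
    results = (from x in ctx.listOfEntities
               where some_condition
               select x).ToList();   // forces execution while ctx is still alive
}
// results can be used safely here; the context has already been disposed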
Not specific to this situation, but whether a lack of calling Dispose will cause a leak or not is dependent on the inner workings of the disposable object. There are examples in the Framework of disposable objects that do little or nothing on dispose. Some implementations also only handle managed objects, which will be handled by the GC eventually anyway (so in this instance, you only lose deterministic disposal of objects, not necessarily causing any memory leaks).
The important part of the IDisposable contract is the convention that it brings to the table. You adhere to the contract regardless of the inner workings as the inner workings are open to change, the contract is not open to change. Although I'm not for blindly applying rules, I tend to always try to dispose items that I am using.
I've got a computation (CTR encryption) that requires results in a precise order.
For this I created a multithreaded design that calculates those results; in this case each result is a ByteBuffer. The calculation itself of course runs asynchronously, so the results may become available at any time and in any order. The "user" is a single-threaded application that consumes the results by calling a method, after which the ByteBuffers are returned to the pool of resources by said method - the management of resources is already handled (using a thread-safe stack).
Now the question: I need something that aggregates the results and makes them available in the right order. If the next result is not available, the method that the user called should block until it is. Does anyone know a good strategy or class in java.util.concurrent that can return asynchronously calculated results in order?
The solution must be thread safe. I would like to avoid third-party libraries, Thread.sleep() / Thread.wait(), and threading-related keywords other than "synchronized". Furthermore, the tasks may be given to e.g. an Executor in the correct order if that is required. This is for research, so feel free to use Java 1.6 or even 1.7 constructs.
Note: I've tagged these questions [jre] as I want to stay within the classes defined in the JRE, and [encryption] as somebody may already have had to deal with this, but the question itself is purely about Java and multithreading.
Use the executors framework:
ExecutorService executorService = Executors.newFixedThreadPool(5);
List<Future<ByteBuffer>> futures = executorService.invokeAll(listOfCallables);
for (Future<ByteBuffer> future : futures) {
//do something with future.get();
}
executorService.shutdown();
The listOfCallables will be a List<Callable<ByteBuffer>> that you have constructed to operate on the data. For example:
list.add(new SubTaskCalculator(1, 20));
list.add(new SubTaskCalculator(21, 40));
list.add(new SubTaskCalculator(41, 60));
(arbitrary ranges of numbers, adjust that to your task at hand)
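A hedged sketch of what such a SubTaskCalculator could look like; the class only exists in this answer as an example, and the real work inside call() (encrypting or processing your subrange) is up to you:
import java.nio.ByteBuffer;
import java.util.concurrent.Callable;

class SubTaskCalculator implements Callable<ByteBuffer> {
    private final int from;
    private final int to;

    SubTaskCalculator(int from, int to) {
        this.from = from;
        this.to = to;
    }

    public ByteBuffer call() {
        ByteBuffer result = ByteBuffer.allocate((to - from) * 16);   // size is illustrative
        // ...process the range [from, to) and fill the buffer...
        result.flip();
        return result;
    }
}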
.get() blocks until the result is complete, but at the same time other tasks are also running, so when you reach them, their .get() will be ready.
Returning results in the right order is trivial. As each result arrives, store it in an ArrayList, and once you have ALL the results, just sort the list. You could use a PriorityQueue to keep the results sorted at all times as they arrive, but there is no point in doing this, since you will not be making any use of the results before all of them have arrived anyway.
So, what you could do is this:
Declare a "WorkItem" class which contains one of your bytearrays and its ordinal number, so that they can be sorted by ordinal number.
In your work threads, do something like this:
// ...do work and produce a workItem...
synchronized (lockObject) {
    resultList.add(workItem);
    numberOfResults++;
    lockObject.notifyAll();
}
In your main thread, do something like this:
synchronized (lockObject) {
    while (numberOfResults != numberOfItems) {
        lockObject.wait();   // declared to throw InterruptedException
    }
}
Collections.sort(resultList);
// ...go ahead and use the results...
My new answer after gaining a better understanding of what you want to do:
Declare a "WorkItem" class which contains one of your bytearrays and its ordinal number, so that they can be sorted by ordinal number.
Make use of a java.util.PriorityQueue which is kept sorted by ordinal number. Essentially, all we care is that the first item in the priority queue at any given time will be the next item to process.
Each work thread stores its result in the PriorityQueue and issues a NotifyAll on some locking object.
The main thread waits on the locking object, and then if there are items in the queue, and if the ordinal of the (peeked, not dequeued) first item in the queue is equal to the number of items processed so far, then it dequeues the item and processes it. If not, it keeps waiting. If all of the items have been produced and processed, it is done.
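A minimal sketch of that approach, staying within the JRE and using only synchronized / wait / notifyAll (WorkItem, OrderedResults and the ordinal field are illustrative names, not from the question):
import java.nio.ByteBuffer;
import java.util.PriorityQueue;

// One calculated result plus its position in the overall sequence.
class WorkItem implements Comparable<WorkItem> {
    final long ordinal;
    final ByteBuffer buffer;

    WorkItem(long ordinal, ByteBuffer buffer) {
        this.ordinal = ordinal;
        this.buffer = buffer;
    }

    public int compareTo(WorkItem other) {
        return ordinal < other.ordinal ? -1 : (ordinal > other.ordinal ? 1 : 0);
    }
}

// Aggregates out-of-order results and hands them out strictly in order.
class OrderedResults {
    private final PriorityQueue<WorkItem> queue = new PriorityQueue<WorkItem>();
    private long nextOrdinal = 0;

    // Called by the worker threads, in any order.
    public synchronized void put(WorkItem item) {
        queue.add(item);
        notifyAll();
    }

    // Called by the single consumer; blocks until the next item in sequence is available.
    public synchronized ByteBuffer take() throws InterruptedException {
        while (queue.isEmpty() || queue.peek().ordinal != nextOrdinal) {
            wait();
        }
        nextOrdinal++;
        return queue.poll().buffer;
    }
}
Each worker wraps its finished ByteBuffer in a WorkItem carrying the ordinal assigned when the task was submitted, and the single consumer just loops calling take().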
Dear Community, I'd like to understand a small task that should help me improve the performance of my application.
I have an array of dictionaries (NSDictionary objects) in a singleton, with the keys:
code
country
specific
I need to retrieve the country and specific values from this array.
My first version of the application used a predicate, but I later found a lot of memory leaks and performance issues with that approach. The application was too slow, memory was not released quickly enough, usage climbed to around 1 GB, and it crashed.
My second version was a little more complicated. I filled the array in the singleton with one object per code and used the function you can see below.
-(void)codeIsSame:(NSArray *)codeForCheck;
{
    //@synchronized(self) {
    NSString *code = [codeForCheck objectAtIndex:0];
    if ([_code isEqualToString:code])
    {
        code = nil;
        NSUInteger queneNumberInt = [[codeForCheck objectAtIndex:1] intValue];
        NSLog(@"We match code:%@ country:%@ specific:%@ quene:%lu", _code, _country, _specific, queneNumberInt);
        [[ProjectArrays sharedProjectArrays].arrayDictionaryesForCountryCodesResult insertObject:_result atIndex:queneNumberInt];
    }
    code = nil;
    //}
    return;
}
The way I get the needed results is:
SEL selector = @selector(codeIsSame:);
[[ProjectArrays sharedProjectArrays].myCountrySpecificCodeListWithClass makeObjectsPerformSelector:selector withObject:codePlusQueueNumber];
This version works much better: no memory leaks and it is very fast, but it is too hard to debug. Sometimes I receive an empty result; I tried to synchronize the thread jobs, but it still does not work reliably. The main problem with this approach is that, for some strange reason, I sometimes have no result in my singleton array. I tried to debug it using the array index across the different threads, and it looks like the class simply misses an answer.
Core Data does not allow me to make a copy of the main MOC, and with a multithreaded design I can't use it directly (lock and unlock is not a good idea, and it produces too many errors in the lock/unlock part of the code).
Can anybody suggest what I could do better in this case? I need an approach that works reliably and is easy to code and understand.
My current solution uses an NSDictionary whose keys are the codes, and under each code I have a dictionary with country/specific. It works fine as well, but it doesn't solve the main task: how to use Core Data when you need access to the same data from many threads.
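For reference, the lookup is roughly like this (the property name countryInfoByCode, the sample code @"44" and the key names are illustrative only):
NSDictionary *byCode = [ProjectArrays sharedProjectArrays].countryInfoByCode;   // hypothetical property
NSDictionary *info   = byCode[@"44"];          // O(1) lookup, no predicate scan over the whole array
NSString *country    = info[@"country"];
NSString *specific   = info[@"specific"];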