Switching GPUImageFilter chain on the fly - gpuimage

I'm running a set of filters on a video stream (not using GPUImageVideoCamera, but processing a sample buffer) currently using GPUImageFilterPipeline.
To change my filters, I'm simply using:
[self.filterPipeline replaceAllFilters:self.warmFilterArray];
or [self.filterPipeline replaceAllFilters:self.coolFilterArray];
or [self.filterPipeline removeAllFilters];
I'm having a problem with crashes whenever I change filters. The crashes are inconsistent, but if I change filters too rapidly, I'm more likely to crash.
I suspect that it has something to do with the targets in the filter chain being abruptly removed. Any ideas on how to safely remove filters? Thanks

I solved the issue by using multiple pipelines with multiple inputs. I send the sample buffer to be processed to whichever input is appropriate, allowing me to avoid removing filters from a chain while it is processing!
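A rough sketch of that pattern in generic C++, for illustration only (Frame, FilterChain, and FilterRouter are stand-ins, not the GPUImage API): every pipeline is built once up front, and switching filters only changes which chain incoming buffers are routed to, so no chain is ever modified while a frame is in flight.

#include <atomic>
#include <cstddef>
#include <utility>
#include <vector>

struct Frame {};                  // stand-in for a sample buffer
struct FilterChain {              // stand-in for one pre-built pipeline
    void process(const Frame&) { /* run the fixed filter chain */ }
};

class FilterRouter {
public:
    explicit FilterRouter(std::vector<FilterChain*> chains)
        : chains_(std::move(chains)) {}

    // UI side: switching filters only swaps an index; no chain is mutated.
    void select(std::size_t i) { active_.store(i); }

    // Capture side: feed each buffer to whichever chain is currently active.
    void feed(const Frame& f) { chains_[active_.load()]->process(f); }

private:
    std::vector<FilterChain*> chains_;
    std::atomic<std::size_t> active_{0};
};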

Related

make node server wait for client input before continuing

I have a nodejs app that has a finite specific sequence of actions.
One of the actions is getting an array of images, sending it to a client, and displaying it for a manual human filtering.
After filtering was done, (say a button was pressed), I need the nodejs app to keep executing the sequence until it's done.
I've been wondering over and over how to perform such a thing (and, if possible, without the use of sockets).
I tried creating a boolean representing whether filtering was done, and using
while (!boolean), but the server seems to be busy running that loop, so it can't even handle the response which should update that same boolean.
Is there a better way?

NCA R12 with LoadRunner 12.02 - nca_get_top_window returns NULL

The connection is successfully established by nca_connect_server(), but when I try to capture the current open window using nca_get_top_window(), it returns NULL. Because of this, all subsequent requests fail.
It depends on how you obtained your script: whether it was recorded or written manually.
If the script was written manually, there is no guarantee that it can be replayed, since the sequence of API calls (and/or their parameters) may not be valid. If the script was recorded, there might be a missed correlation or something similar. A common way to spot the issue is to compare recording and replay behavior (by comparing the log files from these two stages; make sure you are using the fully extended kind of log files) to find out what goes wrong on replay and why, and how it diverges from the recording activity.

QSortFilterProxyModel filtering complete signal

I'm using a custom QSortFilterProxyModel to implement custom filtering for a QTableView by overriding filterAcceptsRow(). How can my application be notified when I change the filtering criteria and filterAcceptsRow() has been applied to the whole table?
Basically, I want to get a list of the visible items after filtering is applied. Currently I calculate this list with a custom function I've implemented in my model that iterates over the rows and collects the visible ones. This is inefficient, as two calls to this function will yield the same result if no filtering occurred in between.
All models should emit layoutAboutToBeChanged() and layoutChanged(), before and after they are sorted, filtered, or changed in any other way that could affect the view.
From my observations (in Qt 4.8), the layout*() signals will fire when sorting the proxy model, but not if you implement filtering. The docs also explicitly refer to the order of items that is meant by these signals, and filtering naturally doesn't change the order but only affects which rows are present.
In this case only the rows*(...) signals (inserted, removed, etc.) will fire, depending on what the filter just did. The downside is that if the filter is applied recursively (it usually is), these signals will fire in masses, so they are not useful to tie to a single resulting action.
To overcome this you could call invalidate() after setting the filters (not invalidateFilter(), by the way, since that one also won't fire the layout signals).
Since this re-applies both filtering and sorting (the latter wouldn't be necessary when filtering, but can't be avoided), the layout*() signals will fire after both have finished.
But then it would be better to handle the filter string/regExp/whatever on your own instead of using the base methods (like setFilterFixedString(...)) to set them, to at least avoid filtering twice - not much overhead anyway if you've already re-implemented filterAcceptsRow(...).
An alternative would be to emit a signal of your own when setting the sorting or filter, and to connect to it using Qt::QueuedConnection to make sure the slot is executed after filtering has finished. That's what I finally did (to update a register of the table), and as far as I can tell it works as expected.
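A rough sketch of that alternative (MyFilterProxy and updateVisibleRows are illustrative names, not Qt API): the proxy emits its own signal right after invalidate(), and the queued connection defers the slot until the re-filtering triggered in the current event has finished.

#include <QSortFilterProxyModel>

class MyFilterProxy : public QSortFilterProxyModel
{
    Q_OBJECT
public:
    void setMyFilter(const QString &pattern)
    {
        m_pattern = pattern;
        invalidate();             // re-applies filterAcceptsRow() to every row
        emit filteringChanged();  // handled later via the queued connection
    }

signals:
    void filteringChanged();

protected:
    bool filterAcceptsRow(int sourceRow, const QModelIndex &sourceParent) const override
    {
        const QModelIndex idx = sourceModel()->index(sourceRow, 0, sourceParent);
        return idx.data().toString().contains(m_pattern);  // custom criterion
    }

private:
    QString m_pattern;
};

// In the widget's setup code:
connect( proxy, &MyFilterProxy::filteringChanged,
         this, &MyWidget::updateVisibleRows, Qt::QueuedConnection );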
You can get a "filtering complete signal" by connecting the proxy model's QAbstractItemModel::rowsRemoved and QAbstractItemModel::rowsInserted signals to your own signal or slot.
connect( filterProxyModel_, &QAbstractItemModel::rowsRemoved,
         this, &MyWidget::onFilteringDone );
connect( filterProxyModel_, &QAbstractItemModel::rowsInserted,
         this, &MyWidget::onFilteringDone );
The other signals like layoutChanged or dataChanged are not emitted in case of filtering.

After saving Core Data, binding of NSManagedObject does not work

After saving the context, the UITableViewCell's data is not refreshed because the binding is not triggered for the UITextFields contained in the cell. Have you seen any similar side effect so far?
The problem was that I executed a very large number of operations/queries on the persistent store from observeValueForKeyPath:, and observeValueForKeyPath: was triggered too many times. Somehow the system in the background might just have become overloaded, and I think this produced the strange side effects.

Has anybody some advice on programming realtime audio synthesis?

I'm currently working on a personal project: creating a library for realtime audio synthesis in Flash. In short: tools to connect wave generators, filters, mixers, etc. with each other and supply the sound card with raw (realtime) data. Something like Max/MSP or Reaktor.
I already have some working stuff, but I'm wondering if the basic setup that I wrote is right. I don't want to run into problems later on that force me to change the core of my app (although that can always happen).
Basically, what I do now is start at the end of the chain, at the place where the (raw) sound data goes 'out' (to the sound card). To do that, I need to write chunks of bytes (ByteArrays) to an object, and to get such a chunk I ask whatever module is connected to my 'Sound Out' module to give me its chunk. That module makes the same request to the module connected to its input, and that keeps happening until the start of the chain is reached.
Is this the right approach? I can imagine running into problems if there's a feedback loop, or if there's another module with no output: if I were to connect a spectrum analyzer somewhere, that would be a dead end in the chain (a module with no outputs, just an input). In my current setup, such a module wouldn't work because I only start calculating from the sound-output module.
Has anyone experience with programming something like this? I'd be very interested in some thoughts about the right approach. (For clarity: I'm not looking for specific Flash implementations, which is why I didn't tag this question under flash or actionscript.)
I did a similar thing a while back, and I used the same approach as you do - start at the virtual line out, and trace the signal back to the top. I did this per sample though, not per buffer; if I were to write the same application today, I might choose per-buffer instead, because I suspect it would perform better.
The spectrometer was designed as an insert module, that is, it would only work if both its input and its output were connected, and it would pass its input to the output unchanged.
To handle feedback, I had a special helper module that introduced a 1-sample delay and would only fetch its input once per cycle.
Also, I think doing all your internal processing with floats, and thus arrays of floats as the buffers, would be a lot easier than byte arrays, and it would save you the extra effort of converting between integers and floats all the time.
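A minimal sketch of that pull model with float buffers, in generic C++ for illustration (Module, SineGen, and Gain are made-up names, not a real library): each module fills a buffer on request, pulling from its own input first, and the 'sound out' end drives the whole chain.

#include <cmath>
#include <vector>

struct Module {
    virtual ~Module() {}
    // Fill 'out' with the next chunk of samples, pulling inputs as needed.
    virtual void pull(std::vector<float>& out) = 0;
};

struct SineGen : Module {
    float phase = 0.0f;
    void pull(std::vector<float>& out) override {
        for (float& s : out) { s = std::sin(phase); phase += 0.05f; }
    }
};

struct Gain : Module {
    Module* input;
    float amount;
    Gain(Module* in, float g) : input(in), amount(g) {}
    void pull(std::vector<float>& out) override {
        input->pull(out);                  // ask upstream for its chunk first
        for (float& s : out) s *= amount;  // then process it
    }
};

// Driving the chain from the output end:
//   SineGen gen;
//   Gain quieter(&gen, 0.5f);
//   std::vector<float> buffer(512);
//   quieter.pull(buffer);  // one chunk, ready for the sound card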
In later versions you may have different packet rates in different parts of your network. One example would be if you extend it to transfer data to or from disk. Another example would be that low-data-rate control variables, such as one controlling echo delay, may later become a part of your network. You probably don't want to process control variables with the same frequency that you process audio packets, but they are still 'real time' and part of the function network. They may, for example, need smoothing to avoid sudden transitions.
As long as you are calling all your functions at the same rate, and all the functions take essentially constant time, your pull-the-data approach will work fine. There will be little to choose between pulling data and pushing. Pulling is somewhat more natural for playing audio, pushing is somewhat more natural for recording, but either works and ends up making the same calls to the underlying audio processing functions.
For the spectrometer you've got the issue of multiple sinks for data, but it is not a problem. Introduce a dummy link to it from the real sink. The dummy link can cause a request for data that is not honoured. As long as the dummy link knows it is a dummy and does not care about the lack of data, everything will be OK. This is a standard technique for reducing multiple sinks or sources to a single one.
With this kind of network you do not want to do the same calculation twice in one complete update. For example if you mix a high-passed and low-passed version of a signal you do not want to evaluate the original signal twice. You must do something like record a timer tick value with each buffer, and stop propagation of pulls when you see the current tick value is already present. This same mechanism will also protect you against feedback loops in evaluation.
So, those two issues of concern to you are easily addressed within your current framework.
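A sketch of the tick idea, continuing the illustrative pull-model shape from above (TickedModule and its members are made-up names): each module remembers which tick its cached buffer belongs to, so a second pull in the same update returns the cache instead of recomputing, and a feedback loop that re-enters the module simply receives last cycle's data.

#include <vector>

struct TickedModule {
    virtual ~TickedModule() {}

    void pull(std::vector<float>& out, long tick) {
        if (tick != lastTick) {
            lastTick = tick;       // mark first: if a feedback loop re-enters
                                   // us during compute(), it gets the old
                                   // cache, i.e. a one-buffer delay
            compute(cache, tick);  // pull inputs and refill the cache
        }
        out = cache;               // repeated pulls reuse the same result
    }

protected:
    // Subclasses pull their inputs (passing 'tick' along) and fill 'out'.
    virtual void compute(std::vector<float>& out, long tick) = 0;

private:
    std::vector<float> cache;
    long lastTick = -1;
};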
Rate matching where there are different packet rates in different parts of the network is where the problems with the current approach will start. If you are writing audio to disk then for efficiency you'll want to write large chunks infrequently. You don't want to be blocking your servicing of the more frequent small audio input and output processing packets during those writes. A single rate pulling or pushing strategy on its own won't be enough.
Just accept that at some point you may need a more sophisticated way of updating than a single rate network. When that happens you'll need threads for the different rates that are running, or you'll write your own simple scheduler, possibly as simple as calling less frequently evaluated functions one time in n, to make the rates match. You don't need to plan ahead for this. Your audio functions are almost certainly already delegating responsibility for ensuring their input buffers are ready to other functions, and it will only be those other functions that need to change, not the audio functions themselves.
The one thing I would advise at this stage is to be careful to centralise audio buffer allocation, noticing that buffers are like fenceposts. They don't belong to an audio function; they lie between the audio functions. Centralising the buffer allocation will make it easy to retrospectively modify the update strategy for different rates in different parts of the network.
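A minimal sketch of such centralised allocation (BufferPool is a made-up name): modules borrow buffers from one place and hand them back, so the block size, or later separate pools per rate, can be changed in a single spot without touching the audio functions.

#include <cstddef>
#include <utility>
#include <vector>

class BufferPool {
public:
    explicit BufferPool(std::size_t blockSize) : blockSize_(blockSize) {}

    // Hand out a recycled buffer if one is available, otherwise a fresh one.
    std::vector<float> acquire() {
        if (free_.empty()) return std::vector<float>(blockSize_);
        std::vector<float> b = std::move(free_.back());
        free_.pop_back();
        return b;
    }

    // Buffers come back here instead of being owned by any audio function.
    void release(std::vector<float> b) { free_.push_back(std::move(b)); }

private:
    std::size_t blockSize_;
    std::vector<std::vector<float>> free_;
};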
