Propagation delay in circuits

Which is more accurate for propagation delay: SPICE simulation, or calculation using Elmore delay (RC delay modeling)?

SPICE simulation is more accurate than Elmore delay modelling. This is discussed in the book CMOS VLSI Design by Weste and Harris, page 93, Section 2.6:
Blindly trusting one’s models
Models should be viewed as only approximations to reality, not reality itself, and used within
their limitations. In particular, simple models like the Shockley or RC models aren’t even close
to accurate fits for the I-V characteristics of a modern transistor. They are valuable for the
insight they give on trends (i.e., making a transistor wider increases its gate capacitance and
decreases its ON resistance), not for the absolute values they predict. Cutting-edge projects
often target processes that are still under development, so these models should only be
viewed as speculative. Finally, processes may not be fully characterized over all operating regimes;
for example, don’t assume that your models are accurate in the subthreshold region
unless your vendor tells you so. Having said this, modern SPICE models do an extremely good
job of predicting performance well into the GHz range for well-characterized processes and
models when using proper design practices (such as accounting for temperature, voltage, and
process variation).
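To illustrate the RC-modelling side of the question, here is a minimal sketch of the Elmore delay for a simple RC ladder; the component values below are made up for the example, and a SPICE run of the same netlist would give a more accurate (and somewhat different) number:

```python
# Hedged sketch: Elmore delay for a simple RC ladder (R1-C1, R2-C2, ...).
# For a ladder, the Elmore delay at the far node is sum_i C_i * (R_1 + ... + R_i).
def elmore_delay_ladder(resistances, capacitances):
    """Elmore delay (seconds) at the far end of an RC ladder."""
    assert len(resistances) == len(capacitances)
    delay = 0.0
    upstream_r = 0.0
    for r, c in zip(resistances, capacitances):
        upstream_r += r           # total resistance between the source and this node
        delay += upstream_r * c   # each capacitor sees all of the upstream resistance
    return delay

# Example: three identical segments of 1 kOhm and 10 fF each.
print(elmore_delay_ladder([1e3, 1e3, 1e3], [10e-15, 10e-15, 10e-15]))
# -> 6e-11 s (60 ps); SPICE would predict a somewhat different, more accurate value.
```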

Related

Determine fundamental frequency of voice recordings

I am using the command line tool aubiopitch to analyze voice recordings. My goal is to determine the fundamental frequency of the voice recorded. I know, of course, that the frequency varies – that's why I want to calculate an "average" in Hz over a 30-second recording.
My question: aubio uses different methods to determine the pitch of a recording: Schmitt trigger, harmonic comb, yin, yinfft, etc. Which one of those would be my preferred choice when dealing with pure human voice recordings (no background music, ambience, etc.)?
I would recommend using yinfast or yinfft (default). For a discussion of the algorithms, their parameters, and their performance, see Chapter 3 of this document.
Note that the median is better suited than the average in this case.
CREPE is good and outperforms many others since it uses a neural network for pitch prediction. It might be unstable in unseen conditions, though, and might not be very easy to plug in since it requires TensorFlow.
For a more traditional and lightweight solution you can try REAPER.
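If you want to do the same thing from Python rather than the aubiopitch command line tool, a minimal sketch using aubio's yinfft method and taking the median (as suggested above) could look like this; the file name, hop/buffer sizes, and the confidence and silence thresholds are assumptions you would tune for your recordings:

```python
import numpy as np
import aubio

HOP, BUF = 512, 2048
src = aubio.source("voice.wav", samplerate=0, hop_size=HOP)   # hypothetical file name
pitch_o = aubio.pitch("yinfft", BUF, HOP, src.samplerate)
pitch_o.set_unit("Hz")
pitch_o.set_silence(-40)            # skip quiet frames

pitches = []
while True:
    samples, read = src()
    f0 = pitch_o(samples)[0]
    if f0 > 0 and pitch_o.get_confidence() > 0.8:   # keep only voiced, confident frames
        pitches.append(f0)
    if read < HOP:
        break

print("median f0: %.1f Hz" % np.median(pitches))
```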

Does ray tracing do diffraction/interference/dispersion effects?

As I understand it, ray tracing as used in computer graphics is "geometrical optics" and no wave phenomena are taken into account.
Is there a way to include them anyway in an efficient way, or are there known tricks to fake these effects in a ray tracing algorithm? My intuitive answer would be no; wave-optical simulations are not fast enough for computer graphics purposes.
tiny update: Are there computer graphics ray tracing algorithms/implementations that can simulate white light dispersing on/through a prism?
I've never seen a graphics rendering software package that used anything other than Geometrical Optics for scene illumination, and I guess that's mainly because you don't visually notice many of the wave effects most of the time, so GO is good enough.
Some renderers use at least Physical Optics at the gathering step (when computing light returning to an observer) to account for certain phenomena but no creeping wave effects or interference there.
However there certainly are lots of computational electromagnetics software packages out there that use other models accounting for such effects, and specialized software for photonics where wave effects are really important.
Some of those packages use algorithms based on Geometrical Optics that are not too far from the classical ray tracing approach (adaptive beam tracing with beam subdivision based on scene geometry, shooting and bouncing rays, ...).
Some packages even take advantage of the parallel processing power of GPUs.
However, such algorithms are generally highly specialized for one kind of problem and don't scale to arbitrary wavelengths or scene sizes, because they have to make the boldest simplifying assumptions possible for a given class of problems to keep computations fast.
I worked on one algorithm that used ray tracing and took interference into account (among other things) to simulate radars used in automotive applications at interactive-ish speeds, but it could not be used for simulating anything else. There are also some proposals for taking diffraction and creeping-wave effects into account with ray tracing.
It's really a matter of knowing what you want to simulate and which features of the output you are interested in, then trading off performance and realism. The only realtime electromagnetics simulator that can take all wave effects into account at every wavelength for all scene sizes that I can think of is the real world. ;-)
Also don't forget that a lot of computer graphics techniques come from computational electromagnetics. There are lots of academic resources in this field regarding wave effects that are generally overlooked in CG, along with technical solutions to take those effects into account.
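Regarding the prism question: a common trick is spectral ray tracing, i.e. tracing the same geometric ray at several wavelengths and letting the index of refraction depend on wavelength, so each "colour" bends differently at the glass. A minimal sketch follows; the Cauchy coefficients are illustrative (roughly BK7-like glass) and the rest is plain Snell refraction:

```python
import numpy as np

def cauchy_index(wavelength_um, A=1.5046, B=0.0042):
    # Cauchy approximation n(lambda) = A + B / lambda^2; coefficients are illustrative.
    return A + B / wavelength_um ** 2

def refract(direction, normal, n1, n2):
    # Snell refraction of a ray direction at a surface whose unit normal points
    # towards the incident side. Returns None on total internal reflection.
    d = direction / np.linalg.norm(direction)
    n = normal / np.linalg.norm(normal)
    cos_i = -np.dot(n, d)
    eta = n1 / n2
    sin2_t = eta ** 2 * (1.0 - cos_i ** 2)
    if sin2_t > 1.0:
        return None
    cos_t = np.sqrt(1.0 - sin2_t)
    return eta * d + (eta * cos_i - cos_t) * n

# Trace the same incident ray at three wavelengths: each one refracts slightly differently.
incident = np.array([1.0, -0.2, 0.0])
surface_normal = np.array([0.0, 1.0, 0.0])
for wl in (0.45, 0.55, 0.65):                      # blue, green, red, in micrometres
    print(wl, refract(incident, surface_normal, 1.0, cauchy_index(wl)))
```

In a renderer you would accumulate the traced wavelengths back into RGB, which is enough to get a visible rainbow through a prism even though no actual wave physics is simulated.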

What are the calculations or assumptions for the core frequency to be used in supercomputers?

What calculations tell us that such and such a frequency should be used for the job, which may include weather forecasting or solving critical equations, i.e. all the kinds of work supercomputers do?
Core frequency is just one factor that governs the speed of a computer; others include cache sizes and speeds, inter-core and inter-module communication speeds, and so on.
Supercomputers today use regular CPUs, such as Xeon processors. The difference between a supercomputer and a regular desktop is the number of CPUs and the interconnections between the different CPUs and memory areas.
Modern CPUs have a lot of caching and branch prediction, which makes it hard to calculate the number of clock cycles required for a certain algorithm.

OpenCL GPU Audio

There's not much on this subject, perhaps because it isn't a good idea in the first place.
I want to create a realtime audio synthesis/processing engine that runs on the GPU. The reason for this is that I will also be using a physics library that runs on the GPU, and the audio output will be determined by the physics state. Is it true that the GPU only carries audio output and can't generate it? Would this mean a large increase in latency, if I were to read the data back on the CPU and output it to the soundcard? I'm looking for a latency between 10 and 20 ms between synthesis and playback.
Would the GPU accelerate synthesis by any worthwhile amount? I'm going to have a large number of synthesizers running at once, each of which I imagine could take up their own parallel process. AMD is coming out with GPU audio, so there must be something to this.
For what it's worth, I'm not sure that this idea lacks merit. If DarkZero's observation about transfer times is correct, it doesn't sound like there would be much overhead in getting audio onto the GPU for processing, even from many different input channels, and while there are probably audio operations that are not very amenable to parallelization, many are very VERY parallelizable.
It's obvious, for example, that computing sine values for 128 samples of output from a sine source could be done completely in parallel. Working in blocks of that size would permit a latency of only about 3 ms, which is acceptable in most digital audio applications. Similarly, the many other fundamental oscillators could be effectively parallelized. Amplitude modulation of such oscillators would be trivial. Efficient frequency modulation would be more challenging, but I would guess it is still possible.
In addition to oscillators, FIR filters are simple to parallelize, and a google search turned up some promising looking research papers (which I didn't take the trouble to read) that suggest that there are reasonable parallel approaches to IIR filter implementation. These two types of filters are fundamental to audio processing and many useful audio operations can be understood as such filters.
Wave-shaping is another task in digital audio that is embarrassingly parallel.
Even if you couldn't take an arbitrary software synth and map it effectively to the GPU, it is easy to imagine a software synthesizer constructed specifically to take advantage of the GPU's strengths, and avoid its weaknesses. A synthesizer relying exclusively on the components I have mentioned could still produce a fantastic range of sounds.
While marko is correct to point out that existing SIMD instructions can do some parallelization on the CPU, the number of inputs they can operate on at the same time pales in comparison to a good GPU.
In short, I hope you work on this and let us know what kind of results you see!
DSP operations on modern CPUs with vector processing units (SSE on x86/x64 or NEON on ARM) are already pretty cheap if exploited properly. This is particularly the case with filters, convolution, FFT and so on - which are fundamentally stream-based operations. These are the type of operations where a GPU might also excel.
As it turns out, soft synthesisers have quite a few operations in them that are not stream-like, and furthermore, the tendency is to process increasingly small chunks of audio at once to target low latency. These are a really bad fit for the capabilities of GPU.
The effort involved in using a GPU - particularly getting data in and out - is likely to far exceed any benefit you get. Furthermore, the capabilities of inexpensive personal computers - and also tablets and mobile devices - are more than enough for many digital audio applications. AMD seems to have a solution looking for a problem. For sure, the existing music and digital audio software industry is not about to start producing software that only targets a limited subset of hardware.
Typical transfer times for a few MB to/from the GPU are around 50 µs.
Latency is therefore not your problem; however, parallelizing an audio synthesizer on the GPU may be quite difficult. If you don't do it properly, the processing may take more time than the data copy.
If you are going to run multiple synthesizers at once, I would recommend running each synthesizer in a work-group and parallelizing the synthesis process across the available work-items. It will not be worthwhile to have each synthesizer in a single work-item, since it is unlikely you will have thousands of them.
http://arxiv.org/ftp/arxiv/papers/1211/1211.2038.pdf
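To make the one-synthesizer-per-work-group suggestion above concrete, here is a minimal, hypothetical PyOpenCL sketch (kernel and host names such as render_block are made up): each work-group renders one voice and each work-item computes one sample of a 128-sample block.

```python
import numpy as np
import pyopencl as cl

KERNEL = """
__kernel void render_block(__global const float *freqs,   // one frequency per voice
                           __global float *out,           // n_voices x block_size samples
                           const float sample_rate,
                           const int block_size)
{
    int voice  = get_group_id(0);    // which synthesizer voice (one per work-group)
    int sample = get_local_id(0);    // which sample within the block (one per work-item)
    float phase = 2.0f * M_PI_F * freqs[voice] * (float)sample / sample_rate;
    out[voice * block_size + sample] = sin(phase);
}
"""

ctx = cl.create_some_context()
queue = cl.CommandQueue(ctx)
prg = cl.Program(ctx, KERNEL).build()

n_voices, block_size, sr = 64, 128, 44100.0
freqs = np.linspace(110.0, 880.0, n_voices).astype(np.float32)
out = np.empty((n_voices, block_size), dtype=np.float32)

mf = cl.mem_flags
freqs_buf = cl.Buffer(ctx, mf.READ_ONLY | mf.COPY_HOST_PTR, hostbuf=freqs)
out_buf = cl.Buffer(ctx, mf.WRITE_ONLY, out.nbytes)

# Global size = voices * block size; local size = one block per work-group.
prg.render_block(queue, (n_voices * block_size,), (block_size,),
                 freqs_buf, out_buf, np.float32(sr), np.int32(block_size))
cl.enqueue_copy(queue, out, out_buf)
```

A real synthesizer would keep phase running across blocks and mix voices on the device, but the mapping of voices to work-groups and samples to work-items is the part the answer above describes.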
You might be better off using OpenMP for its lower initialization times.
You could check out the NESS project, which is all about physical modelling synthesis. They are using GPUs for audio rendering because the process involves simulating an acoustic 3D space for whichever given sound, and calculating what happens to that sound within the virtual 3D space (and apparently GPUs are good at working with this sort of data). Note that this is not realtime synthesis because it is so demanding of processing.

Massively parallel application: what about several 8 bits cores for non-vector IA applications?

I was thinking (oh god, it starts badly) about neural networks and how it is not possible to simulate them, because they require many atomic operations at the same time (meaning simultaneously); that's why neurons are fast: there are many of them doing the computing.
Our processors are 32 bits so they can handle a significantly wider range of values (whether floating point or integer), the frequency race is over, and manufacturers have started shipping multicore processors, requiring developers to implement multithreading in their applications.
I was also thinking about the most important difference between computers and brains: brains use a lot of neurons, while computers use precision at a high frequency. That's why it seems hard or impossible to simulate a real-time AI with the current processor model.
Since 32-bit/64-bit chips also take a great deal of transistors, and since AI doesn't require vector/floating-point precision, would it be a good idea to have many more 8-bit cores on a single processor, like 100 or 1000 for example, since they take much less room (I don't work at Intel or AMD so I don't know how they design their processors, it's just a wild guess), to plan for those kinds of AI simulations?
I don't think it would only serve AI research, though, because I don't know how web servers can really take advantage of 64-bit processors (strings use 8 bits); Xeon processors mostly differ in their cache size.
What you describe is already available by means of multimedia instruction sets. It turns out that computer graphics also needs many parallel operations on bytes or even half-bytes, so CPUs started growing vector operations (SSE, MMX, etc.); more recently, graphics processors have opened up to general-purpose computing (GPGPU).
I think you are mistaken in assuming that neuronal processing is not a vector operation: many AI neural networks rely heavily on vector and matrix operations.
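To make that last point concrete, a whole layer of artificial neurons is typically evaluated as one matrix-vector product, which is exactly what SIMD units and GPUs are built for; the sizes below are arbitrary, for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)
inputs = rng.standard_normal(256)             # activations from the previous layer
weights = rng.standard_normal((1000, 256))    # 1000 neurons, 256 inputs each
biases = rng.standard_normal(1000)

# All 1000 neurons "fire" at once: one matrix-vector product plus a nonlinearity,
# rather than 1000 independent scalar computations.
activations = np.maximum(0.0, weights @ inputs + biases)   # ReLU
print(activations.shape)                      # (1000,)
```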
