How large is the average delay between a key-press and its registration by the software (standard keyboard)?

I am currently helping someone with a reaction time experiment in which reaction times are measured on the keyboard. For this it might be important to know how much error could be introduced by the delay between the key-press and its processing in the software.
Here are some factors I have already found via Google:
The USB bus is polled at 125 Hz at minimum and 1000 Hz at maximum (depending on settings, see this link); see the rough sketch below for what this alone contributes.
There might be additional keyboard buffers in Windows that delay key-presses further, but I do not know the logic behind them.
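A rough back-of-the-envelope sketch of what the polling interval alone contributes, assuming those two rates and ignoring everything else in the chain:

```python
# Rough sketch: delay contributed by USB polling alone, assuming the
# 125 Hz / 1000 Hz rates above and nothing else in the chain.
for rate_hz in (125, 1000):
    interval_ms = 1000.0 / rate_hz
    # A key-press lands at a random point inside a polling interval,
    # so the expected extra delay is about half the interval.
    print(f"{rate_hz:>4} Hz: worst case {interval_ms:.1f} ms, "
          f"average ~{interval_ms / 2:.1f} ms")
```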
Unfortunately it is not possible to control the low-level logic of the experiment. The experiment is written in E-Prime, software that is often used for this kind of experiment. However, the company behind E-Prime also sells additional hardware that it advertises for precise reaction timing, so they seem to be aware of this effect (but do not say how large it is).
Unfortunately it is necessary to use a standard keyboard, so I need to find ways to reduce the latency.

Any latency from key presses can be attributed to the debounce routine (I usually use 30 ms to be safe) rather than to the processing algorithms themselves (unless you are only evaluating the first press).
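A minimal sketch of the "wait until stable" style of debounce that produces this latency; the 30 ms window is the value mentioned above, and the sample trace is made up:

```python
# Sketch of a "wait until stable" debounce, the usual source of this
# latency: the key is reported only after the contact has read closed
# continuously for DEBOUNCE_MS, so the report lags the physical press.
DEBOUNCE_MS = 30  # the "safe" window mentioned above

def debounced_report_time(samples):
    """samples: list of (time_ms, is_closed) pairs, sampled every 1 ms."""
    stable_since = None
    for t, closed in samples:
        if closed:
            if stable_since is None:
                stable_since = t
            if t - stable_since >= DEBOUNCE_MS:
                return t  # key officially reported as "down" here
        else:
            stable_since = None  # bounce: restart the stability window
    return None

# Made-up trace: the contact bounces at t=1 and t=3 ms, then stays closed.
trace = [(t, t not in (1, 3)) for t in range(60)]
print(debounced_report_time(trace))  # 34: reported ~34 ms after the press at t=0
```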

If you are running an experiment where millisecond timing is important you may want to use http://www.blackboxtoolkit.com/ to find sources of error.
Your needs also depend on the nature of your study. I've run RT experiments in E-Prime with a keyboard. Since any error should be consistent on average across participants, for some designs it is not a big problem. If you need to sync the data with something else (like eye tracking or EEG), or want to draw conclusions about RT where the specific magnitude matters, then E-Prime's serial response box (or another brand, though I have had compatibility issues between other brands' boxes and E-Prime in the past) is a must.


Is SRM in Google Optimize (Bayesian model) a thing?

Checking for Sample Ratio Mismatch is good for data quality.
But in Google Optimize I can't influence the sample size or do anything about it.
My problem is that out of 15 A/B tests I only got 2 experiments with no SRM.
(I used this tool: https://www.lukasvermeer.nl/srm/microsite/)
On the other hand, the Bayesian model supposedly deals with things like unequal sample sizes, so I would not need to worry about it, but opinions on this topic differ.
Is SRM really a problem in Google Optimize or can I ignore it?
SRM affects Bayesian experiments just as much as it affects Frequentist. SRM happens when you expect a certain traffic split, but end up with a different one. Google Optimize is a black box, so it's impossible to tell if the uneven sample sizes you are experiencing are intentional or not.
Lots of things can cause an SRM. For example, if your variation's JavaScript code has a bug in some browsers, those users may not be tracked properly. Another common cause is your variation increasing page load times: more people abandon the page and you see a smaller sample size than expected.
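For reference, the check behind tools like the linked SRM calculator is essentially a chi-squared goodness-of-fit test against the traffic split you configured. A minimal sketch with made-up counts:

```python
# Chi-squared check for Sample Ratio Mismatch: compare the observed user
# counts per variant against the split you configured (here 50/50).
from scipy.stats import chisquare

observed = [5150, 4850]                  # made-up users per variant
total = sum(observed)
expected = [total * 0.5, total * 0.5]    # the split you intended

stat, p_value = chisquare(observed, f_exp=expected)
# A very small p-value (p < 0.001 is a common SRM alert threshold)
# means the imbalance is unlikely to be chance alone.
print(f"chi2 = {stat:.2f}, p = {p_value:.4f}")  # chi2 = 9.00, p = 0.0027
```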
That lack of statistical rigor and transparency is one of the reasons I built Growth Book, which is an open source A/B testing platform with a Bayesian stats engine and automatic SRM checks for every experiment.

How can I synchronize two audio recordings *without* timestamps?

Let's say I have two separate recordings of the same concert (created on a user's phone and then uploaded to our server). These recordings are then aligned according to their creation timestamp. However, when these recordings are played together or quickly toggled between, it is revealed that their creation timestamps must be off because there is a perceptible delay.
Since the time stamp is not a reliable way to align these recordings, what is an alternative? I would really prefer not to have to learn about audio signal processing to solve this problem, but recognize this may be the only way. So, I guess my question is:
1. Can I get away with doing some kind of clock synchronization? Is that even possible if the internal device clocks are clearly off by an unknown amount? If yes, a general outline of how this would work and key words would be appreciated.
2. If #1 is not an option, I guess I need to learn about audio signal processing? Again, a general outline of how to tackle the problem from that angle and some key words would be appreciated.
There are two separate issues you need to deal with. Issue 1 is the alignment of the start times of the recordings. I doubt you can expect that both users pressed record at the exact same moment, and even if they did, they may be at different distances from the speakers, and it takes time for sound to travel. Aligning the start times by hand is pretty trivial; the human brain is good at comparing the similarities of sounds. Programmatically it's a different story. You might try something like cross-correlation, or ask over on dsp.stackexchange.com. There is no exact method, though.
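As a hedged sketch of the cross-correlation idea (assuming both recordings have already been decoded to mono float arrays at the same sample rate; the names are just for illustration):

```python
# Sketch: estimate the offset between two recordings by cross-correlation.
# Assumes a and b are mono float arrays at the same sample rate.
import numpy as np
from scipy.signal import correlate, correlation_lags

def estimate_offset_seconds(a, b, sample_rate):
    corr = correlate(a, b, mode="full")
    lags = correlation_lags(len(a), len(b), mode="full")
    best_lag = lags[np.argmax(corr)]
    # best_lag > 0: delay b by that many samples to line up with a;
    # best_lag < 0: delay a instead.
    return best_lag / sample_rate

# Toy check: b contains the same audio as a, but starting 0.5 s later.
sr = 8000
rng = np.random.default_rng(0)
a = rng.standard_normal(sr)                 # 1 s of "audio"
b = np.concatenate([np.zeros(sr // 2), a])  # same content, 0.5 s late
print(estimate_offset_seconds(a, b, sr))    # ~ -0.5: delay a by 0.5 s
```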
Issue 2 is that the clocks driving the A/D converters on the two devices are not going to run at exactly the same rate. So even if you synchronize the start times, the two recordings will eventually drift apart. The time it takes for the drift to become noticeable is a function of the difference between the two clock frequencies; if they are relatively close you may not notice it in a short recording. To counteract this you need to time-stretch one of the recordings, which increases or decreases its duration without affecting the pitch. There are plenty of audio recording apps that let you time-stretch, but they don't give you any help in figuring out by how much. Start by googling "time stretching", or again have a look at dsp.stackexchange.com.
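To get a feel for the scale of the drift, here is a back-of-the-envelope estimate; the 50 ppm clock mismatch is an assumed figure for consumer-grade oscillators, not a measured value:

```python
# Back-of-the-envelope clock-drift estimate. The 50 ppm mismatch is an
# assumed figure for consumer-grade oscillators, not a measured one.
mismatch_ppm = 50
recording_minutes = 60

drift_ms = recording_minutes * 60 * 1000 * mismatch_ppm / 1_000_000
print(f"~{drift_ms:.0f} ms of drift after {recording_minutes} minutes")
# ~180 ms after an hour: easily audible, hence the need to time-stretch
# one recording by a factor of roughly 1 +/- 50e-6.
```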
I realize neither of these are direct answers - rather suggestions.
Take a look at this document; it describes how you can align recordings using Sonic Visualizer (GPL) and a plugin.
I've not used it before, but found the document (and this question) when I was faced with a similar problem.

How does the Ableton Drum-To-MIDI function work?

I can't seem to find any information about the process Ableton uses to efficiently detect atonal percussion and convert it to MIDI. I assume feature extraction and onset detection algorithms are involved, but I'm intrigued as to which ones. I am particularly interested in how its efficiency is maintained for beatboxed input.
Cheers
Your guesses are as good as everyone else's - although they look plausible. The reality is that the way this feature is implemented in Ableton is a trade secret and likely to remain that way.
If I'm not mistaken Ableton licenses technology from https://www.zplane.de/ for these things.
I don't know exactly how the software assigns the different drum sounds, but the chapter in the Live manual, Convert Drums to New MIDI Track, says that it can only detect kick, snare and hi-hat. An important point is that they are identified by the transient markers. For a good result you should manually check and adjust them. The transient markers look like the warp markers, but are grey.
Compared to a kick and a snare, for example, a beatboxed input is likely to have less difference between the individual sounds and is therefore likely to be harder for Ableton to separate into distinct sounds (it depends on the beatboxer). In any case, some combination of frequency and amplitude, more specifically the envelope (attack, decay, sustain, release), as well as perhaps the different overtone combinations that account for differences in timbre, are the characteristics that would have to be evaluated in order to separate the kick, snare and hi-hat.
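To make the "frequency and amplitude" idea concrete, here is a hedged sketch of a generic spectral-flux onset detector; this is not Ableton's algorithm, just the standard kind of transient detection such a feature could be built on:

```python
# Hedged sketch of a generic spectral-flux onset detector; not Ableton's
# algorithm, just the standard kind of transient detection such a
# feature could be built on.
import numpy as np

def spectral_flux_onsets(y, sr, frame=1024, hop=512, threshold=0.3):
    window = np.hanning(frame)
    n_frames = 1 + (len(y) - frame) // hop
    mags = np.array([
        np.abs(np.fft.rfft(window * y[i * hop:i * hop + frame]))
        for i in range(n_frames)
    ])
    # Positive spectral flux: how much each frequency bin grew since the
    # previous frame; sudden growth across many bins marks a transient.
    flux = np.maximum(mags[1:] - mags[:-1], 0).sum(axis=1)
    flux /= flux.max() + 1e-12
    onsets = np.where(flux > threshold)[0] + 1
    return onsets * hop / sr  # onset times in seconds

# Toy input: two clicks in a second of silence, at ~0.09 s and ~0.54 s.
sr = 22050
y = np.zeros(sr)
y[[2000, 12000]] = 1.0
print(spectral_flux_onsets(y, sr))  # two estimates near the clicks
```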
Before this feature existed I used gates and hi/low pass filters to accomplish a similar task. So perhaps Ableton's solution is not as complicated as we might imagine.

Some questions regarding game loop, tick and real time programming

First I want to apologize for my approximate English, as I'm French. I'm currently making a real-time game in java, using LWJGL.
I have some questions regarding game loops:
I'm running the rendering routine in a thread. Is it a good idea? Usually, the rendering routine is fairly slow and should not slow down the world update (tick) routine, which is way more important. So I guess using a thread here seems like a good idea (minus the complications from using a thread).
In the world update routine, I'm updating a list of entities with the current time. Each entity can then compute their own deltaTime, corresponding to the last time they were updated. This differs from the usual update loop, which updates every entity in the list with the same deltaTime. This seemed appropriate because of the threaded rendering. Is it a good idea? Should I use the second method instead? If so, is the threaded rendering still needed? If so, do I have to add a maximum deltaTime?
In general, is it a good idea to have a maximum deltaTime?
Thanks for your time!
Is it a good idea? Separate threads are fairly advanced stuff, and I see no reason to use multithreading to begin with. None of the mobile games I have worked on so far have needed multiple threads, even though they are 'real-time'. Hardcore PC and console games are where multithreading really starts to come into play. Here is a link to a recent talk on the subject if you're interested: http://archive.assembly.org/2011/seminars/adventures-in-multithreaded-gameplay-coding.
Sounds like this could cause some strange behaviour if the physics are not handled in one go, though I'm not sure. For example, colliding an object that has already been updated to a new position with an object that is updated at another time: correcting that sort of situation may become problematic. Fast-moving collisions may need to be subdivided, which may be why you have the separate update thread, but why not calculate them all as happening at the same time?
'Variable timestep' and 'fixed timestep' are the options available. Most games at the moment seem to choose a 30 fps fixed timestep. The rendering has to be kept within that budget, so no catching up should be needed.
One problem with a variable timestep is that you are forced to pass deltaTime to all time-dependent code. A fixed timestep is handy because you can assume you are running at, say, 30 fps and use that value everywhere. It is the preferred method at the moment, as far as I know.
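For reference, the usual fixed-timestep structure is an accumulator loop: updates always advance the world by the same dt, and a clamp on frame time is one common answer to the "maximum deltaTime" question. A sketch (in Python for brevity; the structure maps directly to Java):

```python
# Sketch of a fixed-timestep loop: the world always advances by the same
# dt, and a clamp on frame time acts as the "maximum deltaTime".
import time

DT = 1.0 / 30.0          # fixed update step (30 updates per second)
MAX_FRAME_TIME = 0.25    # clamp huge hitches so updates can't spiral

def run(update, render, should_quit):
    previous = time.perf_counter()
    accumulator = 0.0
    while not should_quit():
        now = time.perf_counter()
        accumulator += min(now - previous, MAX_FRAME_TIME)
        previous = now
        while accumulator >= DT:
            update(DT)   # every entity sees the same, fixed dt
            accumulator -= DT
        render()         # render once per loop iteration
```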
Though this question is a few years old…
AFAIK,
Rendering is usually done on a separate processor (the GPU), so it is already effectively a separate thread. But draw commands must be processed by the graphics driver (which runs on the CPU) before being dispatched to the GPU, and some of that cost can be saved by multi-threading. Either way, you are responsible for managing synchronization between the logic and rendering threads.
Generally speaking, games are all about interactions between objects, and it is very hard to divide the state graph into fully separate partitions. As a result, the whole game state usually becomes a single graph, and that graph cannot be updated while it is being rendered. In this case, multi-threading gives you no benefit.
If you can keep a separate, immutable copy of the data needed for rendering, then you may gain some benefit from rendering in a separate thread; otherwise, I don't recommend it.
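A minimal sketch of that "immutable data for rendering" idea; the update thread publishes a frozen snapshot and the render thread only ever reads the latest one (names are illustrative):

```python
# Sketch of the "immutable snapshot" idea: the update thread publishes a
# frozen copy of whatever the renderer needs, and the render thread only
# ever reads the latest published copy, so live state is never shared.
import threading
from dataclasses import dataclass

@dataclass(frozen=True)
class RenderSnapshot:
    positions: tuple  # immutable copy of entity positions

class SnapshotExchange:
    def __init__(self):
        self._lock = threading.Lock()
        self._latest = RenderSnapshot(positions=())

    def publish(self, positions):  # called by the update thread each tick
        snap = RenderSnapshot(positions=tuple(positions))
        with self._lock:
            self._latest = snap

    def latest(self):              # called by the render thread each frame
        with self._lock:
            return self._latest
```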
In addition, you should consider GC if you truly want a real-time game. GC-related performance issues are usually the biggest obstacle to building real-time software.

Ubiquitous computing and magnetic interference

Imagine a car radio: do the electromagnetic fields the car passes through interfere with its processing? It is easy to understand that a strong field can corrupt stored data, but what about data that is currently being processed? Can it also be changed?
If so, how could you protect your code against this (with code-level protections only, not electrical ones)?
For the most robust mission-critical systems you use multiple processors and compare results. This is what we did with aircraft autopilots (autoland). We had three autopilots: one flying the aircraft and two checking it. If any one of the three disagreed, it was shut down.
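The comparison step itself can be as simple as a majority vote across the redundant channels; a toy sketch, illustrative rather than actual avionics code:

```python
# Toy sketch of the comparison step: accept the majority result and flag
# the channel that disagrees.
def vote(results):
    """results: the three values produced by the redundant computations."""
    for candidate in results:
        if results.count(candidate) >= 2:
            dissenters = [i for i, r in enumerate(results) if r != candidate]
            return candidate, dissenters  # value to use, channels to shut down
    return None, [0, 1, 2]                # no majority at all: fail safe

print(vote([42, 42, 41]))  # (42, [2]): channel 2 disagrees and is dropped
```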
You're referring to what Wikipedia calls soft errors. The traditional, industry-accepted work-around for this is through redundancy, as Jim C and fmsf noted.
Several years ago, our repair department's analysis showed an unacceptable number of returned units with single-bit errors in the battery-backed SRAM that held the firmware. Despite our efforts at root-cause analysis, we were unable to explain the source of the problem. At that point a hardware change was out of the question, so we needed a software-only solution to treat the symptom.
We wanted a reliable fix that we could implement simply and quickly, so we generated parity checks on blocks of code in the SRAM. We chose a block size that required very little additional storage for the parity data, yet provided enough redundancy to detect and correct any of the errors we'd seen and then some. It logs the errors it detects and indicates whether it can correct them, so we still know when bit errors occur in the field. So far, so good!
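A scheme in this spirit, sketched here for illustration only (this is not the actual production code), is two-dimensional parity per block: one parity byte across the block plus one parity bit per byte, which together can locate and flip a single flipped bit.

```python
# Hedged sketch of single-bit correction with two-dimensional parity,
# in the spirit of the block-parity scheme described above (not the
# poster's actual code).
def make_parity(block):
    column = 0
    rows = []
    for b in block:
        column ^= b                          # parity across bytes, per bit column
        rows.append(bin(b).count("1") & 1)   # parity of each byte (row)
    return column, rows

def check_and_correct(block, column, rows):
    new_column, new_rows = make_parity(block)
    col_diff = column ^ new_column
    bad_rows = [i for i, (a, b) in enumerate(zip(rows, new_rows)) if a != b]
    if col_diff == 0 and not bad_rows:
        return True, block                   # block is clean
    if bin(col_diff).count("1") == 1 and len(bad_rows) == 1:
        block[bad_rows[0]] ^= col_diff       # flip the single offending bit
        return True, block
    return False, block                      # multi-bit damage: detect only

# Toy run: corrupt one bit and watch it get repaired.
data = bytearray(b"firmware block")
col, rows = make_parity(data)
data[3] ^= 0b00001000                        # flip one bit in byte 3
ok, repaired = check_and_correct(data, col, rows)
print(ok, repaired == bytearray(b"firmware block"))  # True True
```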
Our product manager did some additional research out of curiosity and convinced himself that the culprit was cosmic radiation. We never proved it unequivocally, but he was satisfied that the number of errors seemed to agree with what would be expected based on the data he found. I'm just glad the returns have stopped.
I doubt you can.
Code that is changed won't run, so likely your program(s) will crash if you have this problem.
This is a hardware problem.
