How can capacitive touchscreens achieve such high touch-point recognition precision?

A capacitive touchscreen has only a limited number of sensors to detect where it has been touched; for instance, a 6-inch touchscreen may have merely 29 sensing channels * 16 driving channels. Yet it can recognize a touch point with display-pixel or even sub-pixel precision.
How can it do that?
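In practice (this isn't spelled out in the question itself), each intersection of a drive and sense line returns an analog capacitance reading, and a fingertip couples to several neighbouring nodes at once, so the controller can interpolate the touch coordinate between nodes, for example with a weighted centroid. A minimal sketch of that idea, with illustrative names, a hypothetical readout array, and a 16 x 29 grid assumed:
// Sketch: weighted-centroid interpolation over a mutual-capacitance node grid.
// Assumption: deltas[row][col] holds baseline-subtracted readings for a
// 16 (drive) x 29 (sense) panel; names, window size and pitch are illustrative.
function interpolateTouch(deltas: number[][], pitchMm: number): { x: number; y: number } | null {
  let peakRow = 0, peakCol = 0, peak = -Infinity;
  for (let r = 0; r < deltas.length; r++) {
    for (let c = 0; c < deltas[r].length; c++) {
      if (deltas[r][c] > peak) { peak = deltas[r][c]; peakRow = r; peakCol = c; }
    }
  }
  if (peak <= 0) return null; // no touch present

  // Weight the peak node and its neighbours by signal strength; because the
  // readings are analog, the centroid lands between nodes with sub-node precision.
  let sum = 0, sumR = 0, sumC = 0;
  for (let r = Math.max(0, peakRow - 1); r <= Math.min(deltas.length - 1, peakRow + 1); r++) {
    for (let c = Math.max(0, peakCol - 1); c <= Math.min(deltas[r].length - 1, peakCol + 1); c++) {
      const w = Math.max(0, deltas[r][c]);
      sum += w; sumR += w * r; sumC += w * c;
    }
  }
  return { x: (sumC / sum) * pitchMm, y: (sumR / sum) * pitchMm };
}
With clean analog readings, the centroid precision is limited by signal-to-noise ratio rather than by the 16 x 29 node count, which is how sub-pixel reporting becomes possible.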

Related

setting a virtual resolution in wayland

Is it possible to set a virtual resolution for a screen, meaning increasing its resolution beyond its native resolution (say, I've got a 1920x1080 screen; can I use it as if it were a 3840x2160 screen)?
With X it was easy, just xrandr --scale 2x2, but with Wayland I can't seem to find a way to do it...
The goal is to set up a multi-screen environment with one good screen and one bad screen; I need to double the resolution of the bad screen so that windows are about the same size on both screens.
I've read somewhere about multi-screen scaling, but couldn't find more information about it.
Thank you for your help
If you are using the Weston compositor, you can specify the "scale" factor in the weston.ini file under the [output] section; please refer to http://manpages.ubuntu.com/manpages/bionic/man5/weston.ini.5.html
scale=factor
The scaling multiplier applied to the entire output, in support of high resolution ("HiDPI" or "retina") displays, that roughly corresponds to the pixel ratio of the display's physical resolution to the logical resolution. Applications that do not support high resolution displays typically appear tiny and unreadable. Weston will scale the output of such applications by this multiplier, to make them readable. Applications that do support their own output scaling can draw their content in high resolution, in which case they avoid compositor scaling. Weston will not scale the output of such applications, and they are not affected by this multiplier.
An integer, 1 by default, typically configured as 2 or higher when needed, denoting the scaling multiplier for the output.
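For instance, to double the effective scale of one output, the corresponding weston.ini fragment would look something like this (the output name is a placeholder; use the name of your actual connector):
[output]
name=HDMI-A-1
scale=2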

Why do we use the term DPI for matters involving images on computers

I'm told that DPI and points are no longer relevant terminology when it comes to graphical displays on computer screens and mobile devices, yet we use the term "High DPI Aware", and in Windows you can set the various DPI levels (96, 120, 144, 192).
Here is my understanding of the various terms that are used in displaying images on computer monitors and devices:
DPI = number of dots in one linear inch. But DPI refers to printers and printed images.
Resolution = the number of pixels that make up a picture, whether it is printed on paper or displayed on a computer screen. Higher resolution provides the capability to display more detail. Higher DPI = higher resolution; however, resolution does not refer to size, it refers to the number of pixels in each dimension.
DPI Awareness = an app takes the DPI setting into account, making it possible for an application to behave as if it knew the real size of the pixels.
Points and Pixels: (There are 72 points per inch.)
At 300 DPI, there are 300 pixels per inch. So about 4.17 pixels = 1 point.
At 96 DPI there are 1.33 pixels in one point.
Is there a nice way to "crisply" describe the relationship between DPI, PPI, Points, and Resolution?
You are correct that DPI refers to the maximum amount of detail per unit of physical length.
Computer screens are devices that have a physical size, so we speak of the number of pixels per inch they have. Traditionally this value has been around 80 PPI, but now it can be up to 400 PPI.
The notion of "High DPI Aware" (e.g. Retina) is based on the fact that physical screen sizes don't change much over time (for example, there have been 10-inch tablets for more than a decade), but the number of pixels we pack into the screens is increasing. Because the size isn't increasing, it means the density - or the PPI - must be increasing.
Now, when we want to display an image on a screen that has more pixels than an older screen, we can either:
Map the old pixels 1:1 onto the new screen. The physical image is smaller due to the increased density. People start to complain about how small the icons and text are.
Stretch the old image and fill in the extra details. The physical image is the same size, but now there are more pixels to represent the content. For example, this results in font curves being smoother and photographs showing more fine details.
The term DPI (Dots Per Inch) to refer to device or image resolution came into common use well before the invention of printers that could print multiple dots per pixel. I remember using it in the 1970's. The term PPI was invented later to accommodate the difference, but the old usage still lingers in places such as Windows which was developed in the 1980's.
The DPI assigned in Windows rarely corresponds to the actual PPI of the screen. It's merely a way to specify the intended scaling of elements such as fonts.
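To make the relationships in the question concrete, here is a small conversion sketch (the function names are just illustrative; 72 points per inch is the standard definition, and the DPI value is whatever the display or OS reports):
// Points and inches are physical units; pixels are device units; the DPI/PPI
// value is the bridge between the two.
const POINTS_PER_INCH = 72;

function pointsToPixels(points: number, dpi: number): number {
  return points * dpi / POINTS_PER_INCH; // e.g. 12 pt at 96 DPI -> 16 px
}

function pixelsToPoints(pixels: number, dpi: number): number {
  return pixels * POINTS_PER_INCH / dpi;
}

// The numbers from the question: one point is ~4.17 px at 300 DPI and ~1.33 px at 96 DPI.
console.log(pointsToPixels(1, 300)); // ~4.17
console.log(pointsToPixels(1, 96));  // ~1.33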
DPI vs. resolution – What’s the difference?
The acronym dpi stands for dots per inch. Similarly, ppi stands for pixels per inch. So, why have two different acronyms for measuring roughly the same thing? Because there is a key difference between the two, and if you don't understand this difference it can have a negative impact on your digital signage project.
Part of the confusion between the two terms stems from the fact that many people who use them are lazy and tend to use the terms interchangeably. The simplest way of thinking about them is that one is digital (ppi) and represents what you see on the computer screen and the other is physical (dpi) for example, how an image appears when you print it out on a piece of paper.
I suggest you check this in-depth article on the technical details of this topic.
https://blog.viewneo.com/blog/72-dpi-resolution-vs-300-dpi-for-digital-solutions/

Capturing only pixels from Google Glass camera

I would like to capture only a few pixels from the Google Glass camera at regular intervals, to obtain color data over time. Is there a way, to save battery life, to capture only a few pixels rather than taking a full image every time and having to post-process it (which is much more intensive and battery-consuming)? Perhaps this is configured at the hardware level, and thus I cannot do such a thing.
As an alternative, I was hoping the light sensor would give RGB data, but it appears to be a monochromatic light level that is provided in units of lux.

What is Web Audio API's bit depth?

What is the bit depth of Web Audio API's audio context?
For example, if you want to create a custom curve to use with a WaveShaperNode, what is the appropriate Float32Array size?
I have seen developers using 65536, which corresponds to 16-bit audio, but I can't find any info in the spec.
Actually, internally the system uses Float32, which has a 23-bit significand. Using floating point avoids most clipping problems while providing good precision. This means there is technically little point in ever attempting to create a waveshaping curve larger than 8388608 (2^23) samples; but in reality, a 16-bit curve is pretty high-resolution (signal-to-noise is ~96 dB). A lot of the reason for 32-bit audio processing was to avoid clipping problems, not to improve the SNR of input/output; the use of floating point helps this dramatically. Incidentally, the WaveShaperNode specifically clips to [-1, +1] (most nodes don't).
So in short: just use a 16-bit-sized curve (65535), but make sure your signal is in the [-1, +1] range.
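As a sketch of how such a curve is actually built and attached (the tanh soft-clip shape here is just an illustrative transfer function, not something the answer above prescribes):
// Fill a 16-bit-sized curve and hand it to a WaveShaperNode.
const audioCtx = new AudioContext();
const shaper = audioCtx.createWaveShaper();
const N = 65535;                       // the curve size discussed above
const curve = new Float32Array(N);
for (let i = 0; i < N; i++) {
  const x = (i / (N - 1)) * 2 - 1;     // map index to the input range [-1, +1]
  curve[i] = Math.tanh(3 * x);         // illustrative soft-clipping shape
}
shaper.curve = curve;
// e.g. source.connect(shaper).connect(audioCtx.destination)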

How can I calculate audio dB level?

I want to calculate room noise level with the computer's microphone. I record noise as an audio file, but how can I calculate the noise dB level?
I don't know how to start!
All the previous answers are correct if you want a technically accurate or scientifically valuable answer. But if you just want a general estimation of comparative loudness, like if you want to check whether the dog is barking or whether a baby is crying and you want to specify the threshold in dB, then it's a relatively simple calculation.
Many wave-file editors have a vertical scale in decibels. There are no calibration or reference measurements involved, just a simple calculation:
dB = 20 * log10(amplitude)
The amplitude in this case is expressed as a number between 0 and 1, where 1 represents the maximum amplitude in the sound file. For example, if you have a 16-bit sound file, the amplitude can go as high as 32767. So you just divide the sample by 32767. (We work with absolute values, positive numbers only.) So if you have a wave that peaks at 14731, then:
amplitude = 14731 / 32767
          = 0.45
dB = 20 * log10(0.45)
   = -6.9
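A minimal sketch of this relative calculation (assuming the 16-bit PCM samples are already available as an array of integers; reading the audio file itself is out of scope here):
// Relative level of the loudest sample in a buffer of 16-bit PCM data.
// 0 dB corresponds to full scale (32767); everything quieter is negative.
function peakDb(samples: Int16Array): number {
  let peak = 0;
  for (const s of samples) {
    const a = Math.abs(s);
    if (a > peak) peak = a;
  }
  if (peak === 0) return -Infinity; // silence
  return 20 * Math.log10(peak / 32767);
}

// The worked example above: a peak of 14731 gives roughly -6.9 dB.
console.log(peakDb(Int16Array.from([120, -14731, 9800]))); // ≈ -6.94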
But there are very important things to consider, specifically the answers given by the others.
1) As Jörg W Mittag says, dB is a relative measurement. Since we don't have calibrations and references, this measurement is only relative to itself. And by that I mean that you will be able to see that the sound in the sound file at this point is 3 dB louder than at that point, or that this spike is 5 decibels louder than the background. But you cannot know how loud it is in real life, not without the calibrations that the others are referring to.
2) This was also mentioned by PaulR and user545125: Because you're evaluating according to a recorded sound, you are only measuring the sound at the specific location where the microphone is, biased to the direction the microphone is pointing, and filtered by the frequency response of your hardware. A few feet away, a human listening with human ears will get a totally different sound level and different frequencies.
3) Without calibrated hardware, you cannot say that the sound is 60 dB or 89 dB or whatever. All that this calculation can give you is how the peaks in the sound file compare to other peaks in the same sound file.
If this is all you want, then it's fine, but if you want to do something serious, like determine whether the noise level in a factory is safe for workers, then listen to Paul, user545125 and Jörg.
You do need reference hardware (i.e., a reference mic) to calculate noise level (dB SPL, or sound pressure level). One thing Radio Shack sells is a $50 dB SPL meter. If you're doing scientific calculations, I wouldn't use it. But if the goal is to get a general idea of a weighted measurement (dBA or dBC) of the sound pressure in a given environment, then it might be useful. As a sound engineer, I use mine all the time to see how much sound volume I'm generating while I mix. It's usually accurate to within 2 dB.
That's my answer. The rest is FYI stuff.
Jorg is correct that dB SPL is a relative measurement. All decibel measurements are. But you've implied a reference of 0 dB SPL, or 20 micropascals, scientifically agreed to be the quietest sound a human ear can detect (though, understandably, what a person can actually hear is very difficult to determine). This, according to Wikipedia, is about the sound of a flying mosquito from about 10 feet away (http://en.wikipedia.org/wiki/Decibel).
By assuming you don't understand decibels, I think Jorg is just trying to out-geek you. He clearly didn't give you a practical answer. :-)
Unweighted measurements (dB, instead of dBA or dBC) are rarely used, because most sound pressure is not detected by the human ear. In a given office environment, there is usually 80-100 dB SPL (sound pressure level). To give you an idea of exactly how much is not heard, in the U.S., occupational regulations limit noise exposure to 80 dBA for a given 8-hour work shift (80 dBA is about the background noise level of your average downtown street - difficult, but not impossible to talk over). 85 dBA is oppressive, and at 90, most people are trying to get away. So the difference between 80 dB and 80 dBA is very significant -- 80 dBA is difficult to talk over, and 80 dB is quite peaceful. :-)
So what is 'A' weighting? 'A' weighting compensates for the fact that we don't perceive lower frequency sounds as well as high frequency sounds (we hear 20 Hz to 20,000 Hz). There's a lot of low-end rumble that our ears/brains pretty much ignore. In addition, we're more sensitive to a certain midrange (1000 Hz to 4000 Hz). Most agree that this frequency range contains the sounds of consonants of speech (vowels happen at a much lower frequency). Imagine talking with just vowels. You can't understand anything. Thus, the ability of a human to be able to communicate (conventionally) rests in the 1kHz-5kHz bump in hearing sensitivity. Interestingly, this is why most telephone systems only transmit 300 Hz to 3000 Hz. It was determined that this was the minimal response needed to understand the voice on the other end.
But I think that's more than you wanted to know. Hope it helps. :-)
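Since 'A' weighting comes up throughout this discussion, here is a sketch of the standard closed-form A-weighting curve (the formula and constants come from the usual IEC 61672 approximation, not from the answer above):
// A-weighting gain in dB at frequency f (Hz); the +2.0 dB offset normalises
// the curve to roughly 0 dB at 1 kHz.
function aWeightingDb(f: number): number {
  const f2 = f * f;
  const ra =
    (12194 ** 2 * f2 * f2) /
    ((f2 + 20.6 ** 2) *
      Math.sqrt((f2 + 107.7 ** 2) * (f2 + 737.9 ** 2)) *
      (f2 + 12194 ** 2));
  return 20 * Math.log10(ra) + 2.0;
}

console.log(aWeightingDb(1000)); // ≈ 0 dB
console.log(aWeightingDb(100));  // ≈ -19 dB: low-frequency rumble is heavily discounted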
You can't easily measure absolute dB SPL, since your microphone and analogue hardware are not calibrated. You may be able to do an approximate calibration for a particular hardware set up but you would need to repeat this for every different microphone and hardware set up that you plan to support.
If you do have some kind of SPL reference source that you can use, then it gets easier:
use your reference source to generate a tone at a known dB SPL - measure this
measure the ambient noise
calculate noise level = 20 * log10 (V_noise / V_ref) + dB_ref
Of course this assumes that the frequency response of your microphone and audio hardware is reasonably flat and that you just want a flat (unweighted) noise figure. If you want a weighted (e.g. A-weight) noise figure then you'll have to do rather more processing.
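A sketch of those three steps in code (vRef and vNoise stand for linear signal levels measured from the two recordings, e.g. RMS sample values; the 94 dB figure below is only an example reference level):
// Estimate an absolute noise level from a known reference tone, following the
// steps above. vNoise and vRef are linear levels (e.g. RMS of the recorded
// samples); dbRef is the known SPL of the reference source.
function noiseLevelDb(vNoise: number, vRef: number, dbRef: number): number {
  return 20 * Math.log10(vNoise / vRef) + dbRef;
}

// Example: reference tone recorded at RMS 0.30 with a 94 dB SPL calibrator,
// ambient noise at RMS 0.012 on the same hardware and gain settings.
console.log(noiseLevelDb(0.012, 0.30, 94)); // ≈ 66 dB SPL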
According to Merchant et al. (section 3.2 in the appendix: "Measuring acoustic habitats", Methods in Ecology and Evolution, 2015), you can actually calculate absolute, calibrated SPL values using manufacturer specifications by subtracting a correction term S from your relative (scaled-to-maximum) SPL values:
S = M + G + 20*log10(1/Vadc) + 20*log10(2^Nbit-1)
where M is the sensitivity of the transducer (microphone) re 1 V/Pa, G is the gain applied by the user, Vadc is the zero-to-peak voltage, given by multiplying the RMS ADC voltage by a conversion factor of sqrt(2), and Nbit is the bit sampling depth.
The last term is necessary if your system scales the amplitude by its maximum.
The correction will be more accurate using end-to-end calibration with sound calibrators.
Note that the formula above is dependent on frequency, but you could apply it over a wider frequency range if your microphone has a flat frequency response.
You can't. dB is a relative unit, IOW it is a unit for comparing two measurements against each other. You can only say that measurement A is x dB louder than measurement B, but in your case you only have one measurement. Therefore, it simply isn't possible to calculate the dB level.
The short answer is: you cannot do sound level measurements with your laptop, nor with your cellphone, etc., for all the reasons outlined previously, plus the fact that your cellphone, laptop, etc. use compression algorithms to ensure that everything recorded is within the hardware's capability. So if, for example, you measure a sound and then run it through signal-processing software such as Head Artemis or LMS Test.Lab, the indicated sound pressure level will always be in the neighborhood of 80 dB(A) regardless of the true level. I can say this from having used cellphone or laptop audio to get an idea of a noise frequency spectrum while taking level measurements with a calibrated sound level meter. Interestingly, Radio Shack used to sell a microphone intended for speech input while videoconferencing that had very flat frequency response over a broad range, and only cost about $15.
I use a sound level calibrator. It produces 94 dB or 114 dB at 1 kHz, which is a frequency where the weighting filters share the same level.
With the calibrator at 114 dB, I adjust the mic gain to reach almost full-scale input, simply by watching a sound-card-based virtual oscilloscope. Now I know Vref at 114 dB.
I developed a simple software-based SPL meter that can be provided if needed. You can use REW too.
You have to know that PC hardware hardly reaches 60 dB of dynamic range, so when calibrated at 114 dB it won't read less than 54 dB, which is pretty high if you consider that sleeping is comfortable below 35 dBA. In this case you can calibrate at 94 dB and then measure down to 34 dB, but again you will hit PC and mic self-noise, which may prevent you from reaching such low levels. Anyway, once calibrated, measurements around 114 dB and 94 dB should read fine.
Note: the lab-standard pistonphone calibrator operates at 250 Hz.
Well! I used RobertT's method, but it always gave me an overflow exception, so I used int dB = -36 - (value * -1) instead and the exception went away. I don't know whether it actually yields a dB value; if you know, please comment on whether the code below gives a dB value or not.
VB.NET:
Dim dB As Integer = -36 - (9 * -1)
C#:
int dB = -36 - (9 * -1);
