ESD-protected workstation - How can I test my workstation for static?

I have just set up my first ESD workstation for my integrated circuits. It has a grounded two-layer mat and a wrist strap. Using a multimeter, I have tested the resistance of the mat cord, the conductive layer of the mat, the wrist cord, and the wrist strap, and I have checked the outlet with an outlet tester. How can I test that I am discharged and that there is no ESD from me to my circuit? And how can I test that the outlet's ground wire is a real ground and not just wired to neutral?
Thanks!
Update: I have now tested the ground wiring for an AC voltage difference against a gas pipe and a water pipe. Both show a small, stable voltage difference.

As far as ESD goes, it's all about relative charge. If your circuits are on the mat, and your wrist strap is tied to the mat (or the mat's ground), then you're all set: your potential, the mat's potential, and your electronics' potential will all be the same, so no charge will flow.
I assume you're living in the USA...
By code, ground and neutral are bonded at the main breaker box so that's not really a concern.
You can easily test your outlet with a voltmeter. Hot (small slot) to neutral (large slot) should read approximately 120 V. Hot to ground (roundish hole) should also read 120 V. Neutral to ground should read virtually nothing, though a volt or two is acceptable.


How can capacitive touchscreens achieve such high touch-point recognition precision?

A capacitive touchscreen has only a limited number of sensors to detect where it has been touched: for instance, merely 29 sensing channels × 16 driving channels on a 6-inch touchscreen. Yet it can recognize a touch point with display-pixel or even sub-pixel precision.
How can it do that?

Why were square waves used in old hardware instead of sine waves?

This answer here postulates that to actually generate a square wave (or any other abstract wave shape) you have to layer multiple sine waves on top of each other. Yet old hardware (Commodore, NES, etc.) lacked sine wave channels and instead relied heavily on square/pulse waves, triangle waves, noise, and sawtooth waves. I always assumed this was done because those waves are easier to generate than a simple sine wave. So, would generating these wave shapes not be computationally more expensive? Why was it done anyway?
This answer here postulates that to actually generate a square wave […] you have to layer multiple sine waves on top of each other.
Not really; it just describes how a square wave can be analyzed to prove certain facts about its sound, such as how much energy is in each frequency band. This is somewhat similar to how every integer can be factored into one or more prime factors (15 = 3 × 5), which is useful when analyzing algorithms but still doesn't change how we came up with the original number (maybe by counting 15 sheep).
Separating a "complex" wave into sinusoidal components is very useful mathematically, but it does not tell us the mechanism behind the wave's original creation.
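To make the analysis-versus-synthesis point concrete, here is a minimal numerical sketch (Python, function name my own): summing the odd sine harmonics with 1/k amplitudes approaches a square wave, even though no real square-wave oscillator works that way.

```python
import math

def square_partial_sum(t, n_harmonics, freq=1.0):
    """Partial Fourier sum of a unit square wave:
    (4/pi) * sum over odd k of sin(2*pi*k*freq*t)/k."""
    total = 0.0
    for k in range(1, 2 * n_harmonics, 2):  # odd harmonics 1, 3, 5, ...
        total += math.sin(2 * math.pi * k * freq * t) / k
    return 4.0 / math.pi * total

# Mid-plateau of the wave (t = 0.25, one period = 1.0):
print(square_partial_sum(0.25, 1))    # a single sine: 4/pi, about 1.273
print(square_partial_sum(0.25, 200))  # many harmonics: close to the +1 plateau
```

The reverse direction, actually building the wave this way in hardware, is exactly what the answer argues old sound chips never did.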
I always assumed this was done because those waves are easier to generate than a simple sine wave.
Your assumption here is correct. Starting with a digital circuit, the square wave is the easiest and cheapest waveform to create1: just turn a voltage on and off using a single transistor. It is also cheaper in a mass-market manufacturing context, because a sine wave generator (and even a sawtooth) made from analog electronics requires a lot of extra components in order not to drift with temperature, age, and humidity.
It is also arguably more useful in a synthesizer context than a single sine wave, because it has a lot of harmonics you can modify with a filter, as in the SID.
The next step on the complexity ladder is any ramp shape, like the triangle or sawtooth. While you can make these using analog electronics, even back in the early eighties they were typically implemented by a simple DAC driven by a digital counter. The rate of the counter determines how fast the waveform goes from 0 to MAX and thus determines the pitch.
Once you have a DAC in your computer, you could use it to generate a sine wave, but that requires either impossibly expensive real-time calculations or a large table of pre-calculated sine values, so it was rarely (if ever) done. When computers got a useful amount of RAM and bandwidth, they quickly switched to plain arbitrary samples and never looked back.
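As a rough illustration of the two approaches (a sketch of the idea, not any specific chip): a counter driving a DAC gives you a ramp essentially for free, while a sine needs either real-time math or a precomputed table.

```python
import math

def sawtooth_counter(n_samples, period):
    """A digital counter feeding a DAC: ramps 0..period-1 and wraps.
    How fast it wraps sets the pitch."""
    return [i % period for i in range(n_samples)]

# A sine requires a precomputed lookup table (here 256 entries, 8-bit signed
# range), which costs RAM that early machines did not have to spare.
SINE_TABLE = [int(127 * math.sin(2 * math.pi * i / 256)) for i in range(256)]

def sine_from_table(phase):
    return SINE_TABLE[phase % 256]

print(sawtooth_counter(8, 4))  # [0, 1, 2, 3, 0, 1, 2, 3]
```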
1) In fact, anything else is so much more complicated that today we just do everything using simple digital pulses and filter the result in various ways (PDM, PWM, delta-sigma).
My recollection is that a member of our team figured out we could generate sounds by turning something on and off quickly. This was the early 1980s, and unfortunately I don't remember the specifics. But I think a key point is that we were flipping a switch, not computing the data for those waves; the waves that resulted came from a "pulsed" action. This may account for some of the early sounds, but I think it's also speculative: I wasn't the one directly involved, and in theory this at best only explains square and pulse waves, not triangle or sawtooth waves. I'll be interested in what others come up with.

Clock domain crossing signals and Jitter requirement

I am reading the DVCon 2006 paper "Pragmatic Simulation-Based Verification of Clock Domain Crossing Signals and Jitter using SystemVerilog Assertions" by Mark Litterick. I am confused by some of the statements.
Page 2, Section 4.2: "Input data values must be stable for three destination clock edges."
The paper seems to imply positive edges, since that is what the property p_stability checks.
But the paper by Clifford Cummings (CDC design and verification techniques using SystemVerilog) gives this as 1.5x, which suggests two positive edges and one negative edge. Can someone confirm whether the paper meant positive edges?
Page 5, Section 6, Figure 11: the Synchronizer with Jitter Emulation allows a random delay of up to 3 clocks. For a single-bit input, how do we get a 3-clock delay? I can see that being useful for a multi-bit input where there is some skew, but not for a single bit.
property p_stability;
  @(posedge clk) // NOTE POSITIVE EDGE
  !$stable(d_in) |=> $stable(d_in)[*2];
endproperty
I can confirm the intent of the original statement is 3 positive edges; let me explain why. It is quite straightforward to identify the potential for a pulse that is two positive edges wide to be filtered: specifically, if the actual pulse (let's say a high level) violates the setup time for the first edge and the hold time for the second edge, then RTL simulation would see the signal as high for two clock edges, but it could be filtered out completely due to metastability. If the verification remains in the event-driven simulation domain, then a safe verification margin is to say we can (only) guarantee propagation if the signal is observed for 3 consecutive positive edges.
Now the reality, in the time domain rather than the event-driven domain, is that the pulse width must be strictly greater than the clock period plus the setup and hold times, which is more than two edges but less than three. But you would need a temporal check to validate that, not an event-based check.
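The time-domain condition can be sketched numerically. The helper below is hypothetical (not from either paper); it counts how many positive clock edges sample a pulse cleanly, meaning the pulse is high throughout that edge's setup/hold window. It illustrates that a pulse width strictly greater than period + setup + hold guarantees at least one clean sampling edge regardless of phase, which is the propagation guarantee described above.

```python
def clean_edges(pulse_start, pulse_width, period, t_setup, t_hold):
    """Count positive clock edges (at times k*period) for which the pulse
    is high throughout [edge - t_setup, edge + t_hold]."""
    count = 0
    k = 0
    while k * period < pulse_start + pulse_width:
        edge = k * period
        if pulse_start <= edge - t_setup and edge + t_hold <= pulse_start + pulse_width:
            count += 1
        k += 1
    return count

# Clock period 10, setup 1, hold 1: a width of 12.1 (> 10 + 1 + 1) is cleanly
# sampled at least once no matter where the pulse starts within a period.
worst = min(clean_edges(s / 10.0, 12.1, 10.0, 1.0, 1.0) for s in range(100))
print(worst)  # 1
```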
(for the second question, I need to go back to the paper myself)
Hope that helps,
Mark
I read both papers a while back. My understanding is that Clifford Cummings's statement is more accurate: an input pulse width greater than 1.5x the receiving clock period is the minimum requirement. This guarantees two positive sampling edges with some margin for setup and hold time.

PWM Current calculation and dependency on frequency

I am using a PIC16F877A to drive a solid-state relay connected to a 300 W starter motor (R = 50 milliohms, L = 50 mH).
I tried varying the frequency and duty cycle to reduce the inrush current. It worked: my current dropped to almost half.
I know that the average voltage for a PWM signal is V × duty cycle. But I am not driving the motor directly, only through a relay. Can anyone tell me a formula for calculating the current to the motor, for validation?
Regs,
cj
I think you would need a datasheet of the motor with its electrical and mechanical characteristics to determine the current. But that would still be a theoretical value: in the real world you have the wires, contacts and so on, which add additional resistance and will "help" to limit the starting current. But don't choose the wires too small, and use a fuse for safety reasons. This should help you choose the right wires: American Wire Gauge.
If it's a DC motor, there is a better and quite simple solution.
Because of mechanical wear and the limited switching frequency, you should preferably not use a relay. The better solution would be a field-effect transistor (FET) fitting the application, switching at a PWM frequency of about 20 kHz so it would not produce any annoying humming or whining sounds in the motor. Depending on the transistor, you will need a driver circuit for the FET to operate properly while dropping just a small amount of power (passive cooling might still be needed).
For a smooth start of the motor with a minimum of peak current, you should apply a linear duty-cycle sweep from 0 to 100%. The optimum duration of the sweep depends on the motor and the mechanical load; discover your optimum by trying different sweep durations.
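A minimal sketch of such a sweep (Python; all names and values are hypothetical, and it assumes averaged PWM behavior with a stalled rotor, i.e. no back-EMF, which is the worst case the V × duty-cycle estimate from the question applies to):

```python
def soft_start_profile(v_supply, r_motor, sweep_s, steps):
    """Linear duty-cycle sweep 0 -> 100%.
    Returns (time, duty, average voltage, rough stalled-rotor current) tuples.
    Assumes the PWM is fast enough that the winding sees only the average
    voltage, and ignores back-EMF (worst case at standstill)."""
    out = []
    for i in range(steps + 1):
        t = sweep_s * i / steps
        duty = i / steps
        v_avg = v_supply * duty      # average PWM voltage: V * duty cycle
        i_est = v_avg / r_motor      # stalled-rotor estimate, no back-EMF
        out.append((t, duty, v_avg, i_est))
    return out

# Example: 12 V supply, 50 milliohm winding, 2 s sweep in 4 steps.
for row in soft_start_profile(12.0, 0.05, 2.0, 4):
    print(row)
```

A real sweep would be much finer-grained, and the actual current will be lower once the motor spins up and develops back-EMF.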

How can I calculate audio dB level?

I want to measure the room noise level with the computer's microphone. I record the noise as an audio file, but how can I calculate the noise level in dB?
I don't know how to start!
All the previous answers are correct if you want a technically accurate or scientifically valuable answer. But if you just want a general estimate of comparative loudness (for example, to check whether the dog is barking or a baby is crying, with the threshold specified in dB), then it's a relatively simple calculation.
Many wave-file editors have a vertical scale in decibels. There is no calibration or reference measurement involved, just a simple calculation:
dB = 20 * log10(amplitude)
The amplitude in this case is expressed as a number between 0 and 1, where 1 represents the maximum amplitude in the sound file. For example, in a 16-bit sound file the amplitude can go as high as 32767, so you just divide the sample by 32767. (We work with absolute values, positive numbers only.) So if you have a wave that peaks at 14731, then:
amplitude = 14731 / 32767
= 0.44
dB = 20 * log10(0.44)
= -7.13
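The same calculation as a small Python sketch (function name my own):

```python
import math

def sample_to_db(sample, bits=16):
    """Level of one PCM sample in dB relative to full scale."""
    full_scale = 2 ** (bits - 1) - 1          # 32767 for 16-bit audio
    amplitude = abs(sample) / full_scale      # 0..1, absolute value only
    if amplitude == 0:
        return float("-inf")                  # silence: no finite dB value
    return 20 * math.log10(amplitude)

print(round(sample_to_db(14731), 2))  # about -6.94
```

(The -7.13 in the worked example above comes from rounding the amplitude to 0.44 before taking the logarithm; keeping full precision, 14731/32767 gives about -6.94 dB.)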
But there are very important things to consider, specifically the answers given by the others.
1) As Jörg W Mittag says, dB is a relative measurement. Since we don't have calibrations and references, this measurement is only relative to itself. And by that I mean that you will be able to see that the sound in the sound file at this point is 3 dB louder than at that point, or that this spike is 5 decibels louder than the background. But you cannot know how loud it is in real life, not without the calibrations that the others are referring to.
2) This was also mentioned by PaulR and user545125: Because you're evaluating according to a recorded sound, you are only measuring the sound at the specific location where the microphone is, biased to the direction the microphone is pointing, and filtered by the frequency response of your hardware. A few feet away, a human listening with human ears will get a totally different sound level and different frequencies.
3) Without calibrated hardware, you cannot say that the sound is 60 dB or 89 dB or whatever. All this calculation can give you is how the peaks in the sound file compare to other peaks in the same sound file.
If this is all you want, then it's fine, but if you want to do something serious, like determine whether the noise level in a factory is safe for workers, then listen to Paul, user545125 and Jörg.
You do need reference hardware (i.e., a reference mic) to calculate noise level (dB SPL, or sound pressure level). One thing Radio Shack sells is a $50 dB SPL meter. If you're doing scientific calculations, I wouldn't use it. But if the goal is to get a general idea of a weighted measurement (dBA or dBC) of the sound pressure in a given environment, then it might be useful. As a sound engineer, I use mine all the time to see how much sound volume I'm generating while I mix. It's usually accurate to within 2 dB.
That's my answer. The rest is FYI stuff.
Jorg is correct that dB SPL is a relative measurement. All decibel measurements are. But you've implied a reference of 0 dB SPL, or 20 micropascals, scientifically agreed to be the most quiet sound a human ear can detect (though, understandably, what a person can actually hear is very difficult to determine). This, according to Wikipedia, is about the sound of a flying mosquito from about 10 feet away (http://en.wikipedia.org/wiki/Decibel).
By assuming you don't understand decibels, I think Jorg is just trying to out-geek you. He clearly didn't give you a practical answer. :-)
Unweighted measurements (dB, instead of dBA or dBC) are rarely used, because most sound pressure is not detected by the human ear. In a given office environment, there is usually 80-100 dB SPL (sound pressure level). To give you an idea of exactly how much is not heard, in the U.S., occupational regulations limit noise exposure to 80 dBA for a given 8-hour work shift (80 dBA is about the background noise level of your average downtown street - difficult, but not impossible to talk over). 85 dBA is oppressive, and at 90, most people are trying to get away. So the difference between 80 dB and 80 dBA is very significant -- 80 dBA is difficult to talk over, and 80 dB is quite peaceful. :-)
So what is 'A' weighting? 'A' weighting compensates for the fact that we don't perceive lower frequency sounds as well as high frequency sounds (we hear 20 Hz to 20,000 Hz). There's a lot of low-end rumble that our ears/brains pretty much ignore. In addition, we're more sensitive to a certain midrange (1000 Hz to 4000 Hz). Most agree that this frequency range contains the sounds of consonants of speech (vowels happen at a much lower frequency). Imagine talking with just vowels. You can't understand anything. Thus, the ability of a human to be able to communicate (conventionally) rests in the 1kHz-5kHz bump in hearing sensitivity. Interestingly, this is why most telephone systems only transmit 300 Hz to 3000 Hz. It was determined that this was the minimal response needed to understand the voice on the other end.
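For reference, the A-weighting curve itself is a published formula (IEC 61672); here is a direct transcription in Python, with the attenuation expressed in dB relative to 1 kHz:

```python
import math

def a_weight_db(f):
    """IEC 61672 A-weighting at frequency f (Hz), in dB relative to 1 kHz."""
    f2 = f * f
    r_a = (12194.0 ** 2 * f2 ** 2) / (
        (f2 + 20.6 ** 2)
        * math.sqrt((f2 + 107.7 ** 2) * (f2 + 737.9 ** 2))
        * (f2 + 12194.0 ** 2)
    )
    return 20 * math.log10(r_a) + 2.00  # +2.00 dB normalizes to 0 at 1 kHz

print(round(a_weight_db(1000), 2))  # 0.0 by definition
```

The heavy negative values it returns at low frequencies are exactly the "low-end rumble" discount described above.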
But I think that's more than you wanted to know. Hope it helps. :-)
You can't easily measure absolute dB SPL, since your microphone and analogue hardware are not calibrated. You may be able to do an approximate calibration for a particular hardware set up but you would need to repeat this for every different microphone and hardware set up that you plan to support.
If you do have some kind of SPL reference source that you can use, then it gets easier:
use your reference source to generate a tone at a known dB SPL - measure this
measure the ambient noise
calculate noise level = 20 * log10 (V_noise / V_ref) + dB_ref
Of course this assumes that the frequency response of your microphone and audio hardware is reasonably flat and that you just want a flat (unweighted) noise figure. If you want a weighted (e.g. A-weight) noise figure then you'll have to do rather more processing.
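The steps above can be sketched as (Python, function and argument names hypothetical; both levels must be recorded with the same microphone and gain):

```python
import math

def noise_db_spl(v_noise_rms, v_ref_rms, db_ref):
    """Noise level in dB SPL, given a reference tone of known level
    recorded through the same signal chain."""
    return 20 * math.log10(v_noise_rms / v_ref_rms) + db_ref

# Example: a reference tone known to be 94 dB SPL records at 0.5 full scale,
# and the ambient noise records at 0.005 full scale:
print(round(noise_db_spl(0.005, 0.5, 94.0), 1))  # 54.0
```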
According to Merchant et al. (section 3.2 in the appendix of "Measuring acoustic habitats", Methods in Ecology and Evolution, 2015), you can actually calculate absolute, calibrated SPL values using manufacturer specifications by subtracting a correction term S from your relative (scaled-to-maximum) SPL values:
S = M + G + 20*log10(1/Vadc) + 20*log10(2^Nbit-1)
where M is the sensitivity of the transducer (microphone) re 1 V/Pa, G is the gain applied by the user, Vadc is the zero-to-peak ADC voltage (the rms ADC voltage multiplied by a conversion factor of sqrt(2)), and Nbit is the bit sampling depth.
The last term is necessary if your system scales the amplitude by its maximum.
The correction will be more accurate using end-to-end calibration with sound calibrators.
Note that the formula above is dependent on frequency, but you could apply it over a wider frequency range if your microphone has a flat frequency response.
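A direct transcription of that correction term (Python; argument names my own, with M and G in dB and Vadc in volts, as in the formula above):

```python
import math

def correction_term_s(m_db, g_db, v_adc, n_bit):
    """S = M + G + 20*log10(1/Vadc) + 20*log10(2**Nbit - 1),
    the correction from Merchant et al. (2015), section 3.2."""
    return (m_db + g_db
            + 20 * math.log10(1.0 / v_adc)
            + 20 * math.log10(2 ** n_bit - 1))

# Illustrative values only: a -36 dB re 1 V/Pa mic, 20 dB gain,
# a 1 V zero-to-peak ADC, and 16-bit samples.
print(round(correction_term_s(-36.0, 20.0, 1.0, 16), 2))
```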
You can't. dB is a relative unit; in other words, it is a unit for comparing two measurements against each other. You can only say that measurement A is x dB louder than measurement B, but in your case you only have one measurement. Therefore it simply isn't possible to calculate the dB level.
The short answer is: you cannot do sound level measurements with your laptop, nor with your cellphone, etc., for all the reasons outlined previously, plus the fact that your cellphone, laptop, etc. use compression algorithms to ensure that everything recorded is within the hardware's capability. So if, for example, you measure a sound and then run it through signal-processing software such as Head Artemis or LMS Test.Lab, the indicated sound pressure level will always be in the neighborhood of 80 dB(A) regardless of the true level. I can say this from having used cellphone or laptop audio to get an idea of a noise frequency spectrum while taking level measurements with a calibrated sound level meter. Interestingly, Radio Shack used to sell a microphone intended for speech input while videoconferencing that had a very flat frequency response over a broad range, and it only cost about $15.
I use a sound level calibrator. It produces 94 dB or 114 dB at 1 kHz, which is a frequency where the weighting filters share the same level. With the calibrator at 114 dB, I adjust the mic gain to reach almost full-scale input, simply by watching a sound-card-based virtual oscilloscope. Now I know Vref at 114 dB. I developed a simple software-based SPL meter that I can provide if needed; you can use REW too.
You have to know that PC hardware hardly reaches 60 dB of dynamic range, so calibrated at 114 dB it won't read less than 54 dB, which is pretty high if you consider that sleeping is comfortable below 35 dB(A). In that case you can calibrate at 94 dB and then measure down to 34 dB, but again you will hit PC and mic self-noise, which may prevent you from reaching such low levels. Anyway, once calibrated, measurements around 114 dB and 94 dB should read fine.
Note: the lab-standard pistonphone calibrator operates at 250 Hz.
Well! I used RobertT's method, but it always gave me an overflow exception. Then I used int dB = -36 - (value * -1) and the exception went away. I don't know whether this produces real dB values; if you know, please comment on whether the code below gives a dB value or not.
VB.NET:
Dim dB As Integer = -36 - (9 * -1)
C#:
int dB = -36 - (9 * -1);
