Is there a way to calculate/estimate the physical distance to a long-distance passive RFID tag when reading it with a tag reader? E.g. to determine the order of books in a shelf, or telling if one object is close or far away.
If the answer is 'No - not according to the standard', would it be possible to build a reader with this feature? (I guess the only way to achieve this would be to measure the time between call and response very precisely).
It is possible, but the achievable precision depends on a lot of factors: reader and tag performance, the quality of the software, and the resources you are willing to invest in that software (both time and people in R&D).
There are mainly two ways this can be achieved. The first relies on the RSSI, which is basically the received signal strength. The main difficulty with this indicator is that signal strength is influenced by many factors: reflections, whether the signal has to pass through a wooden cabinet or a wall, the quality of the tag, and so on.
The second uses the time at which the response to an inquiry is received (Time Difference of Arrival between tags). Since you know the propagation speed of the signal, you can estimate the distance given a very precise timer. The problem is that this, too, is influenced by many factors: the mean time the tag needs to complete a cycle (which you should know, and which should be the same for every tag used), and the precision of the timer, which is usually not built for this purpose.
Naturally, a combination of both should be employed for maximum precision, and both are actually used by companies that rely on these algorithms to provide RTLS (Real Time Location System) applications through triangulation and trilateration.
For further information you can check: RTLS, RSSI, TDOA, Trilateration (and Multilateration).
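To make the trilateration idea concrete, here is a minimal numpy sketch. It assumes you already have distance estimates for each reader (from RSSI or timing, which is the hard part discussed above); the reader positions and distances are made up for illustration.

```python
import numpy as np

def trilaterate(readers, distances):
    """Estimate a 2D tag position from three or more known reader positions
    and the distance estimated at each reader, via linear least squares."""
    readers = np.asarray(readers, dtype=float)
    d = np.asarray(distances, dtype=float)
    # Subtract the first circle equation from the others to get a linear system.
    A = 2 * (readers[1:] - readers[0])
    b = (d[0] ** 2 - d[1:] ** 2
         + np.sum(readers[1:] ** 2, axis=1) - np.sum(readers[0] ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

# Three readers at known positions, distances with a little measurement error;
# the tag is actually at roughly (5, 5).
print(trilaterate([(0, 0), (10, 0), (0, 10)], [7.1, 7.0, 7.2]))
```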
It is possible. As far as I know, the company below (I don't work there; I just happen to know someone who worked there a year ago) is working on such a technology:
http://www.lambda4.com/
It may not be possible with a single reader; however, if you have multiple receivers and reasonably clear lines of sight, estimating the distance becomes possible by looking at signal strengths. It's not trivial though, since the power radiated by an RFID tag is not isotropic (i.e. not uniform in all directions) due to the antenna design. If you have three receivers and a uniform source of RF, you can solve for the distance, but once you add in the antenna pattern and other factors like signal path attenuation and multipath, it becomes really hard, especially when there are multiple devices in the vicinity.
This is at least in part because RFID was not designed with an output pattern that helps with localization, such as a frequency chirp, short power bursts, or other modulation features that would allow estimating the time of flight of the signal from source to receiver and back.
The general equation for finding the distance to an RFID tag is the free-space path loss: Ploss = 20·log10[(4π·d)/λ]
In the case of UHF RFID, the equation for finding the gap, i.e. the distance from the reader to a passive tag, is
Pgap = 22.6 dB + Patt, where 22.6 dB is the loss term for the near field (λ = c/f ≈ 35 cm, with f the operating frequency) and Patt is the magnitude of the power attenuator, so that
22.6 + Patt = 20·log10[(4π·d)/λ]
In free space, the approximate distance to the RFID tag can then be obtained from this equation.
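As a small illustration, inverting that relation for d in Python. The 866 MHz centre frequency is an assumption (roughly the European UHF RFID band, giving λ ≈ 35 cm); plug in your own operating frequency and measured attenuation.

```python
import math

def distance_from_path_loss(path_loss_db, freq_hz=866e6):
    """Invert the free-space relation Ploss = 20*log10(4*pi*d / lambda)
    to estimate the distance d in metres."""
    c = 3.0e8                      # speed of light, m/s
    wavelength = c / freq_hz       # ~0.35 m in the UHF RFID band
    return (wavelength / (4 * math.pi)) * 10 ** (path_loss_db / 20)

# e.g. the 22.6 dB near-field term plus an attenuator setting of 10 dB (illustrative)
print(distance_from_path_loss(22.6 + 10))
```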
I am trying to build a system that, given an image of a car, can assess the percentage of damage and also find out which parts of the car are damaged.
Is there any possible way to do this using Python and OpenCV or TensorFlow?
The GitHub repositories I found that were relevant to my work are these
https://github.com/VakhoQ/damage-car-detector/tree/master/DamageCarDetector
https://github.com/neokt/car-damage-detective
But what they provide is a qualitative output (e.g. they say the car damage is high or low). I wanted a quantitative output (percentage of damage) along with the names of the individual parts that are damaged.
Is this possible ?
If so please help me out.
Thank you.
To extend the good answers given by Yves Daoust: this is not a trivial task, and you should not try to do it all at once with one single approach.
Ask yourself how a human with a comparable task, say an expert who inspects these cars at the end of a leasing contract, would proceed. Then formulate requirements and also restrictions for your system.
For instance, an expert first checks for visual defects and rates them. Then they may check technical issues that may well be hidden from optical sensors: whether the car is drivable, taking it for a short drive to judge whether the engine runs smoothly, whether the steering geometry is aligned (i.e. whether the car manages to stay in line), whether there are any minor vibrations that should not be there, and so on. They may also apply force, for example manually shaking the wheels to check whether the bearings are OK.
If you restrict your measurement system to a normal camera sensor, you are somewhat limited in what your system is able to deliver.
If you just want to spot cosmetic damage, i.e. classify scratches in paint and rims, I'd say a state-of-the-art machine vision application should be able to help you to some extent:
First you need to detect the scratches. Bear in mind that the visibility of scratches, especially in the field under changing conditions (sunlight), may range from very hard to impossible for a cheap sensor. To cope with reflections, for instance, a system might need polarizing filters, and special-effect paints may interfere with your optical system to the point that you cannot spot anything.
Secondly, after you detect the position and dimensions of these scratches in camera coordinates, you need to transform them into real-world coordinates to obtain the real dimensions of the scratches. It would also be of great use to know the exact location of the scratch on the car, which would require a digital twin of the car and is no longer a trivial matter.
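For the camera-to-world step, one common shortcut (if the scratch sits on a roughly planar panel and you can measure four reference points on it) is a planar homography. A minimal OpenCV sketch with made-up reference coordinates, purely to show the mechanics:

```python
import cv2
import numpy as np

# Four reference points on a (roughly planar) body panel, measured once:
# their pixel positions in the image and their real-world positions in mm.
img_pts   = np.float32([[120, 80], [820, 95], [800, 540], [110, 520]])
world_pts = np.float32([[0, 0], [900, 0], [900, 500], [0, 500]])  # mm

H = cv2.getPerspectiveTransform(img_pts, world_pts)

def to_world(px, py):
    """Map a detected scratch endpoint from pixel to panel coordinates (mm)."""
    p = np.float32([[[px, py]]])
    return cv2.perspectiveTransform(p, H)[0, 0]

# Approximate physical length of a scratch detected between two pixel endpoints:
a, b = to_world(300, 200), to_world(450, 230)
print(np.linalg.norm(a - b), "mm")
```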
After determining the extent of the scratch and its position on the car, you need to apply a cost model. Some car parts are easily fixable: a scratch in the bumper just means respraying the bumper. A scratch in the C-pillar, however, can easily mean repainting the whole rear quarter if it should not be noticeable anymore.
The same goes for bigger scratches and cracks: the optical detection model needs to be able to distinguish between scratches and cracks (which is very hard to do just by looking at them), and the cost model can then infer the cost, e.g. whether a bumper just needs a respray or a complete replacement (because it is cracked and not just scratched). This cost model may seem easy, but bear in mind that it needs to be adapted to every car you scan: a cheap-to-fix damage on one car body might be very hard to fix on a different one. I'd say this might even be harder than spotting the initial scratches, because you would need to obtain the construction plans and repair part lists (the repair handbooks and part lists are mostly accessible if you are a registered mechanic, but they may cost licensing fees) of every vehicle you want to quote.
You see, this is a very complex problem composed of multiple hard sub-problems. The easiest, and probably the best, way to approach it is bottom-up: start with a simple "scratch detector" that just spots scratches in paint (see the sketch below), then go from there, and you will quickly see what is possible and what is not.
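Purely as a starting point for such a bottom-up approach, here is a rough OpenCV sketch of a "scratch candidate" detector based on thin, elongated edge segments. All thresholds are guesses and would need tuning on real images; this is not a production detector.

```python
import cv2

def find_scratch_candidates(image_path):
    """Very rough scratch-candidate detector: look for thin, elongated
    edge segments in the paint. Real conditions (reflections, dirt,
    effect paints) will need far more robust preprocessing."""
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)
    edges = cv2.Canny(blurred, 50, 150)
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    candidates = []
    for c in contours:
        x, y, w, h = cv2.boundingRect(c)
        elongation = max(w, h) / max(1, min(w, h))
        # keep long, thin contours as scratch candidates
        if elongation > 5 and cv2.arcLength(c, False) > 40:
            candidates.append((x, y, w, h))
    return candidates
```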
I'm trying to create my own equalizer. I want to implement 10 IIR bandpass filters. I know the equations to calculate those but I read that for higher center frequencies (above 6000Hz) they should be calculated differently. Of course I have no idea how (and why). Or maybe it's all lies and I don't need other coefficients?
Source: http://cache.freescale.com/files/dsp/doc/app_note/AN2110.pdf
You didn't read closely enough; the application note says "f_s/8 (or 6000Hz)" because, for the purposes of that passage, the sampling rate is 48000 Hz.
However, that is a very narrow look at filters. Drawing the angles involved in equations 4, 5 and 6 of that application note into an s-plane diagram, this looks like it would make sense, but these aren't the only filter options there are. The point the AN makes is that these are simple formulas that approximate a "good" filter (designing an IIR is usually a bit more complex), and that they can only be used below f_s/8. I haven't tried to figure out what happens mathematically at higher frequencies, but I'd guess the filters simply aren't as nicely uniform anymore.
So my approach would simply be to use any filter design tool to calculate the coefficients for you. You could, for example, use Matlab's filter design tool, or GNU Radio's gr_filter_design, to give you IIRs. However, automatically designed IIRs will usually be longer than 3 taps, unless you know very well how to state your design requirements mathematically so that the algorithm does what you want.
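If you don't want to reach for Matlab or GNU Radio, scipy's filter design routines can do the same job. A minimal sketch, assuming a 48 kHz sample rate and ten octave-spaced centre frequencies (both are my assumptions, not from the question); iirpeak gives second-order peaking band-pass sections that are valid right up to Nyquist, unlike the small-angle approximations in the application note:

```python
import numpy as np
from scipy.signal import iirpeak, freqz

fs = 48000.0
centers = [31.25, 62.5, 125, 250, 500, 1000, 2000, 4000, 8000, 16000]  # Hz
Q = 4.0  # bandwidth = f0 / Q

# One second-order (b, a) section per band.
bands = [iirpeak(f0, Q, fs=fs) for f0 in centers]

# Inspect the magnitude response of the 8 kHz band, for example:
w, h = freqz(*bands[8], worN=4096, fs=fs)
print(w[np.argmax(np.abs(h))])   # should be close to 8000 Hz
```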
As much as I like the approach of using IIRs for audio equalization, where phase doesn't matter, I'd say the approach in the application note is not easy to understand unless you have a very solid background in filter/system theory. I'd suggest either studying some signal theory with an electrical engineering textbook, or simply accepting the coefficients as they are given on p. 28ff.
I am a bit stuck here, as I can't seem to find any algorithms for distinguishing whether a sound produced is a chord or a single note. I am working with the guitar.
Currently I am experimenting with taking the top 5 frequencies with the highest amplitudes and determining whether they are harmonics of the fundamental (the one with the highest amplitude) or not. I am working on the theory that single notes contain more harmonics than chords, but I am unsure whether this is the case.
Another thing I am considering is taking the amplitude values of the harmonics into account as well, and comparing the notes that make up the "supposed chord" against the FFT result.
Can you help me out here? It would be really appreciated. Currently, I am only working on Major and Minor chords first.
Thank you very much!
Chord recognition is still a research topic. A good solution might require some fairly sophisticated AI pattern matching techniques. The International Society for Music Information Retrieval seems to run an annual contest on automatic transcription type problems. You can look up the conference and research papers on what has been tried, and how well it works.
Also note that the fundamental pitch is not necessarily the frequency with the highest FFT amplitude result. With a guitar, it very often is not.
You need to think about it in terms of the way we hear sound. Looking only at the top 5 frequencies isn't going to do you any good.
You need to look at all frequencies whose amplitude is above (maximum amplitude)/sqrt(2), i.e. within 3 dB of the strongest peak, to determine the chord/not-chord aspect of the signal.
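As a rough sketch of how this could be combined with the harmonic test from the question (assuming numpy and a mono float signal; the tolerance is arbitrary, and real guitar recordings will be much messier):

```python
import numpy as np

def peak_frequencies(signal, fs, rel_threshold=1 / np.sqrt(2)):
    """Return all spectral peaks whose magnitude is within
    max_magnitude / sqrt(2), i.e. within -3 dB of the strongest bin."""
    spectrum = np.abs(np.fft.rfft(signal * np.hanning(len(signal))))
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    threshold = spectrum.max() * rel_threshold
    # local maxima above the threshold
    peaks = [i for i in range(1, len(spectrum) - 1)
             if spectrum[i] >= threshold
             and spectrum[i] > spectrum[i - 1]
             and spectrum[i] > spectrum[i + 1]]
    return freqs[peaks]

def looks_like_single_note(peak_freqs, tolerance=0.03):
    """Heuristic: if every strong peak is (nearly) an integer multiple of
    the lowest one, treat the sound as a single note rather than a chord."""
    if len(peak_freqs) < 2:
        return True
    f0 = min(peak_freqs)
    ratios = np.asarray(peak_freqs) / f0
    return bool(np.all(np.abs(ratios - np.round(ratios)) < tolerance * ratios))
```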
Is there level limiting somewhere in the digital audio chain?
I'm working on a tower defence game with OpenAL, and there will be many towers firing at the same time, all using the same sound effect (at least for now). My concern is that triggering too many sounds at the same time could lead to speakers being blown, or at the very least a headache for the user.
It seems to me that there should be a level limiter either in software, or at the sound card hardware level to prevent fools like me from doing this.
Can anyone confirm this, and if so, tell me where this limiter exists? Thanks!
As it is, you'd be lucky if the signal were simply clipped in software before it hit the DAC. You can easily implement this yourself. When I say "clipped", I mean that amplitudes exceeding the maximum are set to the maximum, rather than allowed to overflow, wrap, or produce other even less pleasant results. Clipping at this stage often sounds terrible, but the alternatives I mentioned sound worse.
There's actually a big consideration to account for here: are you rendering in float or int? If int, what is your headroom? With int, you could clip or overflow at practically any stage. With floating point, that will only happen as a serious design flaw. Of course, you'll usually have to convert to int eventually, when interfacing with the DAC/hardware. The DAC will limit the output because it handles signals within very specific limits; at worst, this will be the equivalent of (sampled) white noise at 0 dBFS (which can be an awful experience for the user). So the DAC serves as a limiter, although this stage only makes it significantly less probable that a signal will cause hearing or equipment damage.
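To make the float-versus-int point concrete, here is a minimal numpy sketch of clipping at the float-to-int16 boundary (the scaling and format are assumptions; adapt to whatever your audio API expects):

```python
import numpy as np

def float_to_int16(block):
    """Convert a float mix bus (nominally -1.0..1.0) to int16 samples,
    hard-clipping anything that exceeds full scale instead of letting
    the integer conversion overflow or wrap."""
    clipped = np.clip(block, -1.0, 1.0)
    return (clipped * 32767.0).astype(np.int16)

# Without the clip, a float sample of 1.5 would scale to about 49151, which
# does not fit in an int16 and would wrap or overflow -- the "worse than
# clipping" case described above.
```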
At any rate, you can easily avoid this, and I recommend you handle it yourself, since you're directly in control of the number of sounds and their amplitude. At worst, samples with peaks of 0 dBFS will all coincide at the same sample, and you'll need to multiply the summed signal by the reciprocal of the number of shots:
output[i] = numShots > 1 ? allThoseShots[i]*(1.0/numShots) : allThoseShots[i];
That's not ideal in many cases (because there will be an exaggerated ducking sound), so you should actually introduce a limiter in addition to an overall reduction for the number of simultaneous shots; then you can back off the shots signal by a smaller factor, since their peaks will not likely coincide at the same point in time. A simple limiter with ~10 ms of lookahead should prevent you from doing something awful. It is also a good idea to detect heavy limiting in debug mode; this catches upstream design issues.
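For illustration, here is a minimal numpy sketch of such a look-ahead limiter. The parameter values are arbitrary, and a production limiter would also smooth the attack and work block-wise; this only shows the idea of dropping gain ahead of a peak and recovering slowly.

```python
import numpy as np

def lookahead_limit(signal, fs, ceiling=0.98, lookahead_ms=10.0, release_ms=50.0):
    """Tiny brickwall limiter sketch: gain drops instantly when a peak is
    seen in the look-ahead window and recovers slowly afterwards, so the
    output never exceeds `ceiling`. Expects a float signal in -1.0..1.0."""
    la = max(1, int(fs * lookahead_ms / 1000.0))
    alpha = np.exp(-1.0 / (fs * release_ms / 1000.0))   # release smoothing
    padded = np.concatenate([np.abs(signal), np.zeros(la)])
    out = np.zeros(len(signal))
    gain = 1.0
    for n in range(len(signal)):
        peak = padded[n:n + la].max()                   # largest upcoming peak
        target = 1.0 if peak <= ceiling else ceiling / peak
        # instant attack, exponential release
        gain = target if target < gain else alpha * gain + (1.0 - alpha) * target
        out[n] = signal[n] * gain
    return out
```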
In any case, you should definitely consider appropriate gain compensation your responsibility; you never want to clip the output DAC. In fact, you want to leave some headroom (see: intersample peaks).
This one is probably for someone with some knowledge of music theory. Humans can identify certain characteristics of sounds such as pitch, frequency, etc. Based on these properties, we can compare one sound to another and get a measure of similarity. For instance, it is fairly easy to distinguish the sound of a piano from that of a guitar, even if both are playing the same note.
If we were to go about the same process programmatically, starting with two audio samples, what properties of the sounds could we compute and use for our comparison? On a more technical note, are there any popular APIs for doing this kind of stuff?
P.S.: Please excuse me if I've made any elementary mistakes in my question or I sound like a complete music noob. It's because I am a complete music noob.
There are two sets of properties.
The "Frequency Domain" -- the amplitudes of overtones in a specific sample. This is the amplitudes of each overtone.
The "Time Domain" -- the sequence of amplitude samples through time.
You can, using Fourier Transforms, convert between the two.
The time domain is what sound "is" -- a sequence of amplitudes. The frequency domain is what we "hear" -- a set of overtones and pitches that determine instruments, harmonies, and dissonance.
A mixture of the two -- frequencies varying through time -- is the perception of melody.
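A minimal numpy illustration of that relationship, using a synthetic 440 Hz tone with one overtone standing in for a real recording:

```python
import numpy as np

fs = 44100                                # samples per second
t = np.arange(fs) / fs                    # one second of audio
note = np.sin(2 * np.pi * 440 * t) + 0.5 * np.sin(2 * np.pi * 880 * t)

# Time domain: the raw amplitude sequence (`note` itself).
# Frequency domain: the amplitude of each overtone, via the FFT.
spectrum = np.abs(np.fft.rfft(note)) / len(note)
freqs = np.fft.rfftfreq(len(note), 1 / fs)

# The two strongest bins are the overtone and the fundamental.
print(freqs[np.argsort(spectrum)[-2:]])   # prints the 880 Hz and 440 Hz bins
```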
The Echo Nest has easy-to-use analysis APIs to find out all you might want to know about a piece of music.
You might find the analyze documentation (warning, pdf link) helpful.
Any and all properties of sound can be represented / computed - you just need to know how. One of the more interesting is spectral analysis / spectrogramming (see http://en.wikipedia.org/wiki/Spectrogram).
Any properties you want can be measured or represented in code. What do you want?
Do you want to test if two samples came from the same instrument? That two samples of different instruments have the same pitch? That two samples have the same amplitude? The same decay? That two sounds have similar spectral centroids? That two samples are identical? That they're identical but maybe one has been reverberated or passed through a filter?
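As one concrete example from that list, the spectral centroid (a rough "brightness" measure often used when comparing timbres) is only a few lines of numpy:

```python
import numpy as np

def spectral_centroid(samples, fs):
    """Amplitude-weighted mean frequency of the sample's spectrum."""
    spectrum = np.abs(np.fft.rfft(samples))
    freqs = np.fft.rfftfreq(len(samples), 1 / fs)
    return np.sum(freqs * spectrum) / np.sum(spectrum)

# A piano and a guitar playing the same note will usually differ in centroid,
# attack/decay envelope, and so on; e.g. compare two recordings a and b with:
# difference = abs(spectral_centroid(a, fs) - spectral_centroid(b, fs))
```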
Ignore all the arbitrary human-created terms that you may be unfamiliar with, and consider a simpler description of reality.
Sound, like anything else we perceive, is simply a spatio-temporal pattern, in this case of movement: of atoms (air particles, piano strings, etc.). Movement of objects leads to movement of air, which creates pressure waves in our ears that we interpret as sound.
Computationally, this is easy to model; however, because this movement can be any pattern at all, from violent random shaking to a highly regular oscillation, there is often no constant, identifiable "frequency". The shape of the moving object, the waves reverberating through it, and so on all create very complex patterns in the air, like the waves you'd see if you punched a pool of water.
The problem reduces to identifying common patterns and features of movement (at very high speeds). Because the patterns are arbitrary, you really need a system that learns to classify common patterns of movement (i.e. movement represented numerically in the computer) into conceptual buckets of some sort.