sensor programming [closed] - programming-languages

I have a question regarding sensor programming. I'm looking for a sensor that tells me, for example, whether a glass of water is more than half full. I've already googled this, but I can't find anything.
So my questions are:
Where can I buy such a sensor?
What programming language do I need to control such a sensor?
Thanks for any answers.
Update from comments below one of the answers
What I really need it for is a big container holding corn. I want the sensor to tell me as soon as the corn drops below a defined point in the container, so that I can calculate when I have to refill it.

Your sensor could be a level sensor. There are several principles on which level sensors work (see here), and some of them will work with granular solid material. For example, an ultrasonic range sensor could shoot a pulse at the surface of the corn, detect the reflection, and measure the round-trip time of flight (a minimal sketch of that arithmetic appears at the end of this answer).
... or it could be a proximity sensor, as somebody had suggested above.
... or it could be a weight sensor. Here's an application note on weighing vessels.
If you google "level sensor for grains", you may find something useful.
What language to use would depend on what you connect the sensor to. If it is connected to a microcontroller, the language would typically be C. If it is connected to a PC, it would depend a lot on the particular model of the sensor.
By the way, here's a web group dedicated to sensors.
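To make the ultrasonic idea concrete, here is a minimal sketch of the time-of-flight arithmetic, written in Python purely for illustration (on a bare microcontroller you would write the same logic in C, as noted above); the container depth and refill threshold are made-up example values.

```python
# Minimal sketch: convert an ultrasonic round-trip time into a fill level.
# The container depth and refill threshold below are made-up example values.

SPEED_OF_SOUND_M_S = 343.0   # speed of sound in air at roughly 20 °C
CONTAINER_DEPTH_M = 2.0      # distance from sensor to container bottom (assumed)
REFILL_THRESHOLD = 0.25      # refill when less than 25 % full (assumed)

def distance_from_echo(round_trip_s: float) -> float:
    """Distance to the corn surface: the pulse travels there and back."""
    return (round_trip_s * SPEED_OF_SOUND_M_S) / 2.0

def fill_fraction(round_trip_s: float) -> float:
    """0.0 = empty, 1.0 = full, clamped to that range."""
    distance = distance_from_echo(round_trip_s)
    fraction = 1.0 - (distance / CONTAINER_DEPTH_M)
    return max(0.0, min(1.0, fraction))

# Example: an 8 ms round trip means the surface is ~1.37 m away, i.e. ~31 % full.
if __name__ == "__main__":
    echo_time = 0.008
    level = fill_fraction(echo_time)
    print(f"Fill level: {level:.0%}, refill needed: {level < REFILL_THRESHOLD}")
```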

I would imagine you could use a mechanism similar to a car's fuel-tank sender: a float in the container with an attached arm and a magnet on it. Using a Hall-effect sensor, you can then observe the change in the Hall reading as the float rises or falls within the container.
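A rough sketch of how you might turn that Hall reading into a level, assuming the sensor is read through an ADC and you have calibrated readings at the empty and full positions; the ADC numbers below are hypothetical.

```python
# Map a raw Hall-sensor ADC reading to a fill fraction by linear interpolation
# between two calibration points. The ADC values below are hypothetical.

ADC_READING_EMPTY = 210   # reading with the float at the bottom (calibrated once)
ADC_READING_FULL = 790    # reading with the float at the top (calibrated once)

def level_from_adc(reading: int) -> float:
    """Return 0.0 (empty) .. 1.0 (full) for a raw ADC reading."""
    span = ADC_READING_FULL - ADC_READING_EMPTY
    fraction = (reading - ADC_READING_EMPTY) / span
    return max(0.0, min(1.0, fraction))

print(level_from_adc(500))   # 0.5 -> roughly half full
```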

"What I really need it for is a big container, in which is some corn."
Perhaps one of those sensors used to ensure a garage entryway is clear before an automatic garage door is allowed to close; they use an optical beam of light.

Do you know the size of the glass in question? You could just get a scale and work out how heavy the glass would be when it is half full of water. My guess is that you could find a sensor that can do this, and the code would most likely need to be written in C.
This guy seems to be having the same problems:
http://forums.makezine.com/comments.php?DiscussionID=6052
Good luck.
Also check out Arduino for microcontroller electronics.
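If you go the scale route, the check itself is just arithmetic; a tiny sketch with made-up weights:

```python
# Is the glass more than half full, judged by weight? All weights are example values.
GLASS_EMPTY_G = 250.0   # weight of the empty glass (measure this once)
WATER_FULL_G = 300.0    # weight of the water when the glass is completely full

def more_than_half_full(measured_g: float) -> bool:
    half_full_weight = GLASS_EMPTY_G + WATER_FULL_G / 2.0
    return measured_g > half_full_weight

print(more_than_half_full(420.0))   # True: 420 g is above the 400 g threshold
```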

Related

Baby Cry Sound detection [closed]

I want to write code to detect the sound of a baby crying. I am using Windows as the platform. At present I am able to get audio samples and their frequency plot (using an FFT), but I am not sure how to proceed from there.
I wanted to ask what steps I should follow to detect the baby cry sound given its time-frequency plot.
I have seen methods such as a median filter followed by an HMM used in speech recognition. But for simple sound detection, do I need such a sophisticated method?
I will be very grateful if you could help me.
Hidden Markov models are widely used in speech recognition, but since you don't really need to know what your baby is saying (next project: baby translator), I don't think they are what you need.
What you should probably do is look at a lot of spectrograms of crying babies and look for patterns. Or, even better, let your algorithm do this. What you do is calculate certain features of your sound called MFCCs (Mel-frequency cepstral coefficients).
You do this on, say, 1000 samples of crying sound, and then you have 1000 feature vectors.
Now, for each feature you calculate the mean and the standard deviation. This gives you a way to tell, for a random sample of baby sound, how different it is from the average crying sound.
This sounds very hard, but I know there are tools out there. Have a look at CMU Sphinx; you can probably train it to do this.
But either way, start by collecting baby-crying sounds ;) (but don't steal candy)
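Assuming Python with NumPy and librosa (just one convenient toolset, not the only option), here is a minimal sketch of the MFCC-plus-deviation idea described above; the folder name, file names and threshold are placeholders, not a tuned detector.

```python
# Sketch: build mean/std statistics of MFCCs over known crying clips, then flag
# a new clip as "cry-like" if its MFCCs stay close to those statistics.
# File paths and the threshold are placeholders, not a tuned detector.
import glob
import numpy as np
import librosa

def clip_mfcc(path, sr=16000, n_mfcc=13):
    """Average MFCC vector for one audio file."""
    audio, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=audio, sr=sr, n_mfcc=n_mfcc)  # shape (n_mfcc, frames)
    return mfcc.mean(axis=1)

# 1. Collect MFCC vectors for many known crying samples.
cry_vectors = np.array([clip_mfcc(p) for p in glob.glob("cry_samples/*.wav")])

# 2. Per-coefficient mean and standard deviation of the crying sounds.
cry_mean = cry_vectors.mean(axis=0)
cry_std = cry_vectors.std(axis=0) + 1e-9   # avoid division by zero

def looks_like_cry(path, threshold=2.0):
    """True if the clip's MFCCs stay within `threshold` std devs of the cry average."""
    z = np.abs(clip_mfcc(path) - cry_mean) / cry_std
    return z.mean() < threshold

print(looks_like_cry("unknown_clip.wav"))
```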

How text-to-audio software works [closed]

I want to create software that can convert readable text (non-English) to audio output.
From my searches so far, I have found that most existing audio readers sound too robotic and lack human-like speech effects.
I am looking for an algorithm or paper that can give me some idea of how to proceed with implementing such a thing.
or
Does anyone know how some of the world's best text-reader software works?
My expectations are:
Reduced Robotic-like sound, and more of Human-like sound
High Quality Output
Light weight, yet Fast process speed
Please edit this question if anyone thinks some points are missing on this aspect.
Some small steps that might give you a basic idea of what happens:
1. Create a dictionary of words, each word with its name and its recorded sound.
2. Create your own signal processor; this will let you add effects to the sound, for example a robotic or a female voice, or something else.
3. Parse the text you want to read into an array, splitting out each word and punctuation mark. E.g. "I want to die, this isn't a correct way to live." becomes {I : want : to : die : , : this : isn't : a : correct : way : to : live : .}
4. Use the punctuation to implement lifelike pauses: a short pause for "," and a longer pause for "." in your audio reader.
5. Use the words to look up the audio clips in the dictionary from point 1.
6. Play the whole array continuously, with a short pause between elements that works like the spaces between words.
I think these are the major steps. To make it faster you can use more advanced sound-processing tools to cache small chunks of sound data and add data on the fly while you modulate the sound signal.
I hope this helps you.
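A minimal sketch of steps 3 to 6 above, assuming Python and a hypothetical dictionary that maps each word to a prerecorded clip; the clip names, pause lengths and tokeniser are illustrative only, and actual playback is left as a stub.

```python
# Sketch of the parse-and-play idea above: split text into words and punctuation,
# look each word up in a (hypothetical) dictionary of recorded clips, and insert
# pauses for punctuation. Clip playback itself is left as a stub.
import re
import time

WORD_CLIPS = {"i": "i.wav", "want": "want.wav", "to": "to.wav", "live": "live.wav"}  # hypothetical
PAUSES = {",": 0.25, ";": 0.4, ".": 0.6, "?": 0.6, "!": 0.6}   # seconds, illustrative

def tokenize(text):
    """Split into words and punctuation marks, e.g. 'I want,' -> ['I', 'want', ',']."""
    return re.findall(r"[\w']+|[.,;?!]", text)

def play_clip(path):
    print(f"(playing {path})")          # stub: plug in a real audio library here

def speak(text, word_gap=0.08):
    for token in tokenize(text):
        if token in PAUSES:
            time.sleep(PAUSES[token])   # longer silence for punctuation
        else:
            clip = WORD_CLIPS.get(token.lower())
            if clip:
                play_clip(clip)
            time.sleep(word_gap)        # small gap between words, like spaces

speak("I want to live.")
```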
It would be nice if you could tell us what kind of app you will create (mobile, web, desktop) and also in what language you will develop it (PHP, Java, C++, etc.). If you search on Google, you will find a lot of plugins for websites that convert text to audio; you can download them and look at the code.
Also, it is hard to find an app that does not sound like a robot, and if you find one you will probably have to pay for it.
The "robotic" aspect of text to speech that you are concerned about is a matter of the quality of "prosody". This is an active research area. You could probably get a PhD for working on improving prosody in TTS systems. If you would like to read about current research you can try searching for "improving prosody in text to speech".
A big part of the problem is having an accurate model of speech prosody in a given language. The thesis "MeLos: Analysis and Modelling of Speech Prosody and Speaking Style" by Nicolas Obin (2012) contains a survey of the state of the art in speech prosody modelling. Or try searching for "text to speech prosody survey state of the art".

How or where can I get separate notes of an instrument for playback in my application? [closed]

I am looking to create a music creation application, and would like to allow the user to play the individual notes of an instrument. Is there a place online where I can find individual sound files that I may playback for each note, or is there a way of programmatically "generating" each pitch? I am not concerned with sound quality at this point in my development.
EDIT: I am still in the early stages of development. I want the app to be browser-based, using JavaScript or something similar, in a Linux development environment, if that is of any relevance. The notes will be played via an on-screen interface.
The University of Iowa's Electronic Music Studios has a very nice and complete archive of sampled instruments, with one musical note per file. You should also check out freesound, though that is a much more general-purpose sample sharing site.
There are plenty of places online to find sampled instruments. If you're not concerned about sound quality, some free soundfonts will most likely do the job.
For example, this site http://soundfonts.homemusician.net/ has pianos, basses, guitars, horn etc. (Google "free sf2" for more)
There are plenty of ways to generate (aka synthesise) tones as well.
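For example, generating a pitch programmatically only needs the standard equal-temperament formula (f = 440 × 2^((n − 69) / 12) for MIDI note n) plus a sine wave. The maths is the same in JavaScript's Web Audio API; the sketch below uses Python and the standard library only as an illustration, writing one note to a WAV file.

```python
# Generate a single sine-wave note and write it to a WAV file.
# Pitch follows equal temperament: f = 440 * 2 ** ((midi_note - 69) / 12).
import math
import struct
import wave

def write_note(path, midi_note=60, seconds=1.0, rate=44100, amplitude=0.3):
    freq = 440.0 * 2 ** ((midi_note - 69) / 12)   # MIDI 60 = middle C, ~261.63 Hz
    frames = bytearray()
    for i in range(int(seconds * rate)):
        sample = amplitude * math.sin(2 * math.pi * freq * i / rate)
        frames += struct.pack("<h", int(sample * 32767))   # 16-bit little-endian PCM
    with wave.open(path, "wb") as wav:
        wav.setnchannels(1)
        wav.setsampwidth(2)
        wav.setframerate(rate)
        wav.writeframes(bytes(frames))

write_note("middle_c.wav")
```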
If you don't mind MIDI files, you can get a free MIDI software piano and create your own files: C.mid, C#.mid, D.mid, etc.
Here's one with a quirky interface but there are many more:
http://download.cnet.com/MidiPiano/3000-2133_4-10542342.html
The easiest way to do this is to simply output MIDI messages to the synth built into every computer. There is no need to create MIDI files or use extra sound fonts.
You didn't mention what language you are using, so it is hard to suggest ways to get started. In all cases though, you'll want to read up a bit on what MIDI actually is.
Basically, MIDI is nothing but control data, commonly used with synthesizers. At a basic level, there are note-on, and note-off messages. There are many other kinds of messages too, such as pitch bend, control change, etc. MIDI supports 16 "channels", which are sent all down the same line, just with a different identifier.
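To make the note-on/note-off idea concrete, here is a small sketch using Python with the third-party mido library (and a backend such as python-rtmidi) purely as one way to send the messages; the note and velocity values are arbitrary, and whether a synth actually sounds depends on how your OS routes the MIDI output port.

```python
# Send a middle C note-on, wait one second, then a note-off, on MIDI channel 0.
# Requires the mido package (pip install mido python-rtmidi) and an available
# MIDI output port routed to a synth; this just takes the first port it finds.
import time
import mido

port_names = mido.get_output_names()
if not port_names:
    raise SystemExit("No MIDI output ports found")

with mido.open_output(port_names[0]) as port:
    port.send(mido.Message('note_on', channel=0, note=60, velocity=100))
    time.sleep(1.0)
    port.send(mido.Message('note_off', channel=0, note=60, velocity=0))
```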
A good utility (on Windows) for debugging MIDI messages (and getting a better idea of the protocol in general!) can be found here: http://www.midiox.com/

Looking for audio tapes/cassettes containing programs for the Sinclair ZX80 PC? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about a specific programming problem, a software algorithm, or software tools primarily used by programmers. If you believe the question would be on-topic on another Stack Exchange site, you can leave a comment to explain where the question may be able to be answered.
Closed 6 years ago.
Improve this question
OK, so back before the ice age I recall having a Sinclair ZX80 PC (with a TV as the display and a cassette tape player as the storage device).
Obviously, the programs on cassette tapes made a very distinct sound (er... noise) when playing the tape... I was wondering if someone still had those tapes?
The reason (and the reason this question is programming related) is that, IIRC, different languages made somewhat differently pitched noises, but I would like to run the tapes and listen myself to confirm whether that was really the case...
I have the tapes, but they've been stored in the garage at my parents' house and the last thirty years haven't been kind to them.
You can get images here though: http://www.zx81.nl/dload if that's any use. Perhaps there is a tool out there for converting from the bytes back to the audio ;)
Edit: Perhaps here: http://ldesoras.free.fr/prod.html#src_ay3hacking
On the ZX80, ZX81 and ZX Spectrum, tape output is achieved by the CPU toggling the output line level between a high state and a low state, and input is achieved by having the CPU watch an input line level. This very low level of operation was one of Sir Clive's cost-saving measures; rival machines like the BBC Micro had dedicated hardware for serialisation and deserialisation of data, so the CPU would just say "output 0xfe" and the hardware would make the relevant noises and raise an interrupt when it was ready for the next byte. The BBC Micro specifically implements the Kansas City Standard, whereas the Sinclair machines in every instance use whatever ad hoc format best fitted the constraints of the machine.
The effect of that is that while almost every other machine that uses tape has tape output that sounds much the same from one program to the next by necessity, programs on a Sinclair machine could choose to use whatever encoding they wanted, which is the principle around which a thousand speed loaders were written. It's therefore not impossible that different programs would output distinctively different sounds. Some even used the symmetry between the tape input and output to do crude digital sampling, editing and playback, though they were never more than novelties for obvious reasons.
That being said, the base units of the ZX80 and ZX81 contained just 1 KB of RAM, so it's quite likely that programmers would just use the ROM routines for reading and writing data, due to space constraints if nothing else. The sound differences would then just be down to the characteristic data, as suggested by slugster.
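Purely to illustrate the "CPU toggles the line level" principle described above, here is a sketch that turns bytes into a square-wave WAV using a different pulse length for 0 and 1 bits. This is not the actual ZX80/ZX81 encoding (whose exact pulse timings are not reproduced here); the sample levels and pulse lengths are made up.

```python
# Illustrative only: encode bytes as square-wave audio by toggling between a high
# and a low sample level, with different pulse lengths for 0 and 1 bits.
# This is NOT the real ZX80/ZX81 tape format, just the general principle.
import struct
import wave

RATE = 44100
HIGH, LOW = 20000, -20000        # 16-bit sample levels
SAMPLES_0, SAMPLES_1 = 40, 80    # half-period length per bit value (made up)

def byte_to_samples(value):
    samples = []
    for bit in range(7, -1, -1):            # most significant bit first
        half = SAMPLES_1 if (value >> bit) & 1 else SAMPLES_0
        samples += [HIGH] * half + [LOW] * half   # one full square-wave cycle per bit
    return samples

def write_tape(path, data):
    samples = []
    for value in data:
        samples += byte_to_samples(value)
    with wave.open(path, "wb") as wav:
        wav.setnchannels(1)
        wav.setsampwidth(2)
        wav.setframerate(RATE)
        wav.writeframes(b"".join(struct.pack("<h", s) for s in samples))

write_tape("demo_tape.wav", b"HELLO")
```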
I know these come up on auction sites like eBay quite frequently, if you want to buy them yourself. If you get someone else who owns one to listen, you are only going to get their subjective opinion :)
In any case, the language used to save a program would only be a secondary cause of the pitch changes; the sound is really related to the data. In other words, you could probably create a plain binary data file that sounded very similar to a BASIC program (the BASIC would have been saved as text, as it is interpreted).
I know the thread's old, but... I was playing about with something similar last night and I've got a WAV of an old ZX81 game if you're still interested. PM me and I'll post it somewhere.
You can use something like http://www.wintzx.fr/ or pick something from http://www.worldofspectrum.org/utilities.html#tzxtools to convert an emulator file to an audio file and then you can just play it on your PC. Some tools also allow you to play the file directly. Emulator files can be found at http://www.zx81.nl/files.html and many other places.

nVidia SLI Tricks [closed]

I'm optimizing a directx graphics application to take advantage of nVidia's SLI technology. I'm currently investigating some of the techniques mentioned in their 'Best Practices' web page, but wanted to know what advice/experience any of you have had with this?
Thanks!
This is not really an answer to your question, more of a comment on SLI.
My understanding is that SLI is only really a cost-effective means of gaining performance when you buy two cards right away, which few people actually do. Many people buy an SLI motherboard and card thinking it will give them a better upgrade path down the road, but the reality is that by the time you get to that point, it is going to be cheaper to buy a new, faster card, than it is to duplicate the one you already have just to get SLI going.
Just a thought before you pour too much energy into it. If you have a requirement to support SLI, then that's what you have to do. But personally, I would rather see optimization energy put towards non-SLI implementations.
The one thing SLI can do that two non-SLIed graphics cards can't is Nvidia Surround.
In some games this will allow you to play at 1080 × (3 × 1920), so you can play the game on three monitors as if they were one.
The disadvantages that I have found with SLI are:
A) It limits the number of monitors you can have running at once. For example, I have two GeForce GTX 560 Tis. When not using SLI I can have 4 independent monitors running; with SLI I can only have 2, or 3 monitors running in Surround.
B) Because running 3 monitors in Surround treats them as one large monitor, if you use the Windows snap-left/snap-right docking, the window will take up 1.5 monitors, which is not only annoying but also makes the feature almost useless.
Right now I turn on SLI when I am about to run a game, and when I am not gaming I have it set to "Activate all Displays" in the Nvidia Control Panel. However, you can't switch back and forth with applications like Chrome open, so before I launch a game I have to close everything... I'm still working on a better solution.
