I am trying to make an automated door which opens/closes on receiving commands via Bluetooth.
All I want the servo to do is:
Remain steady when the Arduino powers up. (Currently it rotates to a certain angle and comes back on power-up.)
Rotate from 0 degrees to 90 degrees and stop; on receiving another command, rotate from 90 degrees back to 0 degrees and stop.
This is my code:
else if (val == 'i') {
  myservo.write(0);
  delay(4000);
  for (pos = 0; pos <= 90; pos += 1) {
    myservo.write(pos);
    delay(15);
  }
}
else if (val == 'j') {
  myservo.write(0);
  delay(4000);
  for (pos = 90; pos >= 0; pos -= 1) {
    myservo.write(pos);
    delay(15);
  }
}
This is a characteristic of the electronics of RC servos. Supply the PWM signal to the servo before powering it up, or within a few milliseconds of power-up. If you want to keep using an Arduino, note that the bootloader waits a few seconds during which the servo has no signal; you can add a transistor that switches power to the servos, and turn them on as the last thing in your startup code. If you can program the microcontroller directly and remove the Arduino bootloader, the microcontroller should start executing your servo controls quickly enough to avoid a noticeable glitch. In either case, the servo will still jump to whatever position you command at start-up rather than stay where it is; you can save the last commanded position in EEPROM so the jump is less noticeable, but while unpowered the servo will move if mechanically loaded, so there may still be a jump. There is no way to tell an RC servo "stay in your current position".
Your val == 'i' and val == 'j' branches both move the servo quickly to zero before sweeping slowly from 0 to 90 or from 90 to 0. Remember the position you were last in, and sweep from that position to the desired one instead of jumping to zero first.
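The fix can be sketched like this (plain Python standing in for the Arduino sketch; `last_pos` plays the role of a variable you keep between commands, and the names are illustrative, not from the original code):

```python
def sweep(last_pos, target):
    """Return the sequence of positions to write to the servo,
    starting from where it already is instead of jumping to zero."""
    step = 1 if target >= last_pos else -1
    return list(range(last_pos, target + step, step))

# 'i' command: door was closed at 0, open to 90 -- no initial jump
open_steps = sweep(0, 90)     # [0, 1, ..., 90]
# 'j' command: door was open at 90, close to 0
close_steps = sweep(90, 0)    # [90, 89, ..., 0]
```

On the Arduino you would keep `last_pos` as a global, write each position with `myservo.write()` plus a short `delay()`, and update `last_pos` after the sweep.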
Mechanically, the sort of servo controlled by the Servo library is unlikely to be strong enough to open or close a normal door. If it's a door in a doll's house or a cat flap you would be OK; otherwise you should use a more powerful actuator, plus end stops and some kind of force sensor so you don't crush people.
I'm trying to synthesize sound on the Arduboy, which is a handheld gaming device with an AVR ATMega32u4 microcontroller and a speaker attached between its pins C6 and C7.
My plan is to use timer 4 to generate a high-frequency PWM signal on C7, and then use timer 3 to change timer 4's duty cycle. For a "hello world"-level program, I'm trying to read 3906 8-bit samples per second from PROGMEM.
First of all, to make sure my sample file is really in the format I think it is, I have used SoX to play it on a computer:
$ play -e unsigned-integer -r 3906 -b 8 sample2.raw
Here are the relevant parts of my code:
pub fn setup() {
    without_interrupts(|| {
        PLLFRQ::unset(PLLFRQ::PLLTM1);
        PLLFRQ::set(PLLFRQ::PLLTM0);

        TCCR4A::write(TCCR4A::COM4A1 | TCCR4A::PWM4A); // Set output C7 to high between 0x00 and OCR4A
        TCCR4B::write(TCCR4B::CS40);                   // Enable with clock divider of 1
        TCCR4C::write(0x00);
        TCCR4D::write(0x00);
        TC4H::write(0x00);
        OCR4C::write(0xff); // One full period = 256 cycles
        OCR4A::write(0x00); // Duty cycle = OCR4A / 256

        TCCR3B::write(TCCR3B::CS32 | TCCR3B::CS30); // Divide by 1024
        OCR3A::write(3u16);                         // 4 cycles per period => 3906 samples per second
        TCCR3A::write(0);
        TCCR3B::set(TCCR3B::WGM30);  // count up to OCR3A
        TIMSK3::set(TIMSK3::OCIE3A); // Interrupt on OCR3A match

        // Speaker
        port::C6::set_output();
        port::C6::set_low();
        port::C7::set_output();
    });
}
progmem_file_bytes! {
    static progmem SAMPLE = "sample2.raw"
}

// TIMER3_COMPA
#[no_mangle]
pub unsafe extern "avr-interrupt" fn __vector_32() {
    static mut PTR: usize = 0;
    // This runs at 3906 Hz, so at each tick we just replace the duty cycle of the PWM
    let sample: u8 = SAMPLE.load_at(PTR);
    OCR4A::write(sample);
    PTR += 1;
    if PTR == SAMPLE.len() {
        PTR = 0;
    }
}
The basic problem is that it just doesn't work: instead of hearing the audio sample, I just hear garbled noise from the speaker.
Note that it is not "fully wrong", there is some semblance of the intended operation. For example, I can hear that the noise has a repeating structure with the right length. If I set the duty cycle sample to 0 when PTR < SAMPLE.len() / 2, then I can clearly hear that there is no sound for half of my sample length. So I think timer 3 and its interrupt handler are certainly working as intended.
So this leaves me thinking either I am configuring timer 4 incorrectly, or I am misunderstanding the role of OCR4A and how the duty cycle needs to be set, or I could just have a fundamentally wrong understanding of how PWM-based audio synthesis is supposed to work.
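As a sanity check on the timer-3 arithmetic (assuming the stock 16 MHz system clock on the ATMega32u4), the interrupt rate works out as:

```python
F_CPU = 16_000_000   # assumed 16 MHz system clock
prescaler = 1024     # TCCR3B = CS32 | CS30
ocr3a = 3            # timer counts 0..OCR3A, i.e. 4 ticks per period

isr_rate = F_CPU / prescaler / (ocr3a + 1)
print(isr_rate)  # 3906.25, matching the 3906 samples/s of the raw file
```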
Irrespective of the code you have written above, your design seems to suffer from several flaws:
a) Speakers require alternating current to flow through them; the cone is driven by a voltage swinging between positive and negative. In your design, the output of your PWM is likely to be between 0 and 5 V. At a 50% duty cycle, that amounts to an average DC offset of 2.5 V constantly applied to your speaker. This could be fatal to it (the saving grace potentially being the current-limit point in b) below, i.e. not enough current). A common fix is to block the DC component by connecting the speaker through a series capacitor. Its value must be chosen so that it doesn't also filter out the low frequencies of the signal you want to reproduce.
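The coupling capacitor and the speaker's impedance form a first-order high-pass filter, so sizing it is a one-line calculation (illustrative values, not from the question):

```python
import math

def highpass_cutoff_hz(r_ohms, c_farads):
    # f_c = 1 / (2*pi*R*C) for a series-C, shunt-R high-pass
    return 1.0 / (2 * math.pi * r_ohms * c_farads)

# 8-ohm speaker with a 100 uF coupling cap: cutoff around 199 Hz,
# so everything below that is attenuated along with the DC offset.
print(round(highpass_cutoff_hz(8, 100e-6), 1))
```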
b) You shouldn't connect a speaker directly to your microcontroller pins, for several reasons:
Your microcontroller is likely the most expensive device on your board, and should be protected from malfunctions in attached devices (like short circuits) that could destroy it.
Your microcontroller can drive at most 40 mA per I/O pin, as described in the datasheet (29.1 Absolute Maximum Ratings). A typical speaker has an impedance between 8 Ohms and 32 Ohms, and if we consider a 50% duty cycle at the pin's limit, you are passing on average only about 20 mA through it. To drive your speaker efficiently, much more current would need to flow. That's why speakers are generally driven by an amplifier stage.
Speakers are devices with coils inside. As such, they store magnetic energy and yield it back to the circuit when the voltage drops, injecting reverse currents. These reverse currents can easily destroy transistor gates if not properly handled. In your case, luckily, the amount of reverse current is low, since you aren't driving the speaker hard, but you should avoid that situation at all times.
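Ohm's law makes the current mismatch concrete (nominal figures; in practice the pin would sag and current-limit long before this):

```python
v_supply = 5.0    # AVR I/O voltage
r_speaker = 8.0   # typical low-impedance speaker
i_pin_max = 0.040 # 40 mA absolute maximum per pin

i_demand = v_supply / r_speaker  # what the speaker would nominally draw
print(i_demand)  # 0.625 A -- more than 15x the 40 mA pin rating
```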
c) Using PWM to produce sound is awkward. Since it produces square waves, the output is full of harmonics that add a lot of parasitic noise to the sound. More importantly, the raw PWM output does not let you vary the voltage of the output, which is normally needed to reproduce sound properly.
To produce sound, you need a digital-to-analogue converter (DAC), then the proper circuitry to condition that signal and amplify it for a speaker. Unfortunately, the ATMega32u4 doesn't seem to have that feature, so you may want to pick another microcontroller.
For questions on electronic design, I suggest searching https://electronics.stackexchange.com.
My goal is to record audio using an electret microphone hooked into the analog pin of an esp8266 (12E) and then be able to play this audio on another device. My circuit is:
In order to check the output of the microphone I connected the circuit to the oscilloscope and got this:
In the "gif" above you can see the waves made by my voice when talking to microphone.
Here is my code on the ESP8266:
void loop() {
  sensorValue = analogRead(sensorPin);
  Serial.print(sensorValue);
  Serial.print(" ");
}
I would like to play the audio in Audacity to get an understanding of the result. Therefore, I copied the numbers from the serial monitor and pasted them into this Python code, which maps the data onto the (-1, 1) interval:
def mapPoint(value, currentMin, currentMax, targetMin, targetMax):
    currentInterval = currentMax - currentMin
    targetInterval = targetMax - targetMin
    valueScaled = float(value - currentMin) / float(currentInterval)
    return round(targetMin + (valueScaled * targetInterval), 5)

class mapper():
    def __init__(self, raws):
        self.raws = raws.split(" ")
        self.raws = [float(i) for i in self.raws]

    def mapAll(self):
        self.mappeds = [mapPoint(i, min(self.raws), max(self.raws), -1, 1) for i in self.raws]
        self.strmappeds = str(self.mappeds).replace(",", "").replace("]", "").replace("[", "")
        return self.strmappeds
This takes the string of numbers, maps them onto the target interval (-1, +1) and returns a space-separated string of data ready to import into Audacity (Tools > Sample Data Import, then select the text file containing the data). The result of importing data from almost 5 seconds of voice:
which plays back as only about half a second, and when I play it I hear unintelligible noise. I also tried lower sample rates, but there was only noise there, too.
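To double-check the mapping step in isolation, here is the same arithmetic applied to three made-up readings (a condensed version of the mapPoint function above, so this snippet runs on its own):

```python
def mapPoint(value, currentMin, currentMax, targetMin, targetMax):
    valueScaled = float(value - currentMin) / float(currentMax - currentMin)
    return round(targetMin + valueScaled * (targetMax - targetMin), 5)

readings = [float(x) for x in "400 512 600".split()]
mapped = [mapPoint(v, min(readings), max(readings), -1, 1) for v in readings]
print(" ".join(str(m) for m in mapped))  # -1.0 0.12 1.0
```

The mapping itself is linear and well-behaved, which points the finger at the sampling rather than at this step.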
The suspected causes for the problem are:
1- The ESP8266 doesn't have the capability to read the analog pin fast enough to return meaningful data (which is probably not the case, since its clock speed is 80-160 MHz).
2- The way the software gathers the data and outputs it is not optimal (reading in the loop, Serial.print, etc.).
3- The microphone circuit's output is too noisy (which might be, but as observed in the oscilloscope test, my voice clearly makes a difference in the output, and that difference was not audible in Audacity).
4- The way I mapped and prepared the data for the Audacity.
Is there something else I could try?
Are there similar projects out there? (To my surprise, I couldn't find anything documented transparently end-to-end!)
What can be the right way to do this? (since it can be a very useful and economic method for recording, transmitting and analyzing audio.)
There are many issues with your project:
You do not set a bias voltage on A0. The ADC can only measure voltages between ground and VCC. With the microphone removed from the circuit, the voltage at A0 should sit close to VCC/2. This is usually achieved with a voltage divider made of two resistors between VCC and GND, connected directly to A0, between the coupling cap and A0.
Also, your circuit looks odd... is the 47 uF cap connected directly to the 3.3 V rail? If so, you should connect it to pin 2 of the microphone instead. That would also mean that right now your ADC is only recording noise (a missing bias voltage will do that).
You do not pace your input, meaning that you do not have a constant sampling rate. That is a very important issue. I suggest you set yourself a realistic target that is well within the limits of the ADC and of the serial port. With the usual 8N1 framing (start bit, 8 data bits, stop bit), the transfer rate in bytes/sec of a serial port is about baud-rate / 10. At 9600 baud that's only about 960 bytes/sec, and once the samples are converted to text (up to 4-5 characters each), your transfer rate drops to roughly 200 samples per second. This issue needs to be addressed, and the maximum calculated, before you begin: the attainable overall sample rate is the minimum of the ADC's sample rate and the serial port's transfer rate.
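The budget can be worked out up front (8N1 framing assumed, samples printed as decimal text with a separator, as in the sketch above):

```python
baud = 9600
bits_per_byte = 10  # start + 8 data + stop (8N1)
bytes_per_sec = baud / bits_per_byte  # 960.0

chars_per_sample = 5  # e.g. "1023 " -- four digits plus a space
samples_per_sec = bytes_per_sec / chars_per_sample
print(samples_per_sec)  # 192.0 -- nowhere near audio rates
```

Sending raw binary bytes instead of text, or raising the baud rate, moves this ceiling up considerably.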
The way to grab samples depends a lot on your needs and what you are trying to do with this project, your audio bandwidth, resolution and audio quality requirements for the application and the amount of work you can put into it. Reading from a loop as you are doing now may work with a fast enough serial port, but the quality will always be poor.
The way this is usually done is with a timer interrupt starting the ADC measurement and an ADC interrupt grabbing the result and storing it in a small FIFO, while the main loop transfers data from this FIFO to the serial port alongside the chip's other tasks. This cannot be done directly with the Arduino libraries; you need to control the ADC directly.
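The ISR/main-loop split can be sketched host-side (plain Python for illustration; on the real chip `push` would run in the ADC interrupt and `pop` in the main loop):

```python
class SampleFifo:
    """Tiny ring buffer: the ISR pushes ADC results, the main loop pops them."""
    def __init__(self, size):
        self.buf = [0] * size
        self.head = 0   # next write position (ISR side)
        self.tail = 0   # next read position (main-loop side)
        self.count = 0

    def push(self, sample):  # called from the ADC interrupt
        if self.count == len(self.buf):
            return False     # overflow: the main loop is too slow
        self.buf[self.head] = sample
        self.head = (self.head + 1) % len(self.buf)
        self.count += 1
        return True

    def pop(self):           # called from the main loop
        if self.count == 0:
            return None
        sample = self.buf[self.tail]
        self.tail = (self.tail + 1) % len(self.buf)
        self.count -= 1
        return sample

f = SampleFifo(4)
for s in (10, 20, 30):
    f.push(s)
print(f.pop(), f.pop())  # 10 20 -- samples come out in order
```

On an MCU the same structure is written in C with `volatile` indices, so the interrupt and the main loop can share it safely.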
Here is a short checklist of things to do:
Get the full ESP8266 datasheet from Espressif. Look up the actual specs of the ADC, mainly the sample rates and resolutions available with your oscillator, and also its electrical constraints, at least its input voltage range and input impedance.
Once you know these numbers, set yourself a target; the math needed for a successful project needs input numbers. What is your application? Do you want to record audio or just detect a nondescript noise? What are the minimum requirements for things to work?
Look up in the Arduino documentation how to set up a timer interrupt and an ADC interrupt.
Look up in the datasheet which registers you'll need to access to configure and run the ADC.
Fix the voltage bias issue on the ADC input. Nothing can work before that's done, and you do not want to destroy your processor.
Make sure the input AC voltage (the 'swing' voltage) is large enough to give you the results you want. It is not unusual to have to amplify a mic signal (with an op-amp or a transistor), if only for impedance matching.
Then you can start writing code.
This may sound awfully complex for such a small task, but that's what the average day of an embedded programmer looks like.
[EDIT] Your circuit would work a lot better if you simply replaced the 47 uF DC-blocking capacitor with a series resistor. Its value should be in the 2.2k to 7.6k range, to keep the circuit impedance within the 10 kOhms or so needed by the ADC. This would ensure that the input voltage at A0 stays within the operating limits of the ADC (GND-3.3 V on the NodeMCU board, 0-1 V on the bare chip).
The signal may still be too weak for your application, though. What is the amplitude of the signal on your scope? How many bits of resolution does that range cover once converted by the ADC? For example, for a 0.1 V peak-to-peak signal (SIG = 0.1), an ADC range of 0-3.3 V (RNG = 3.3) and 10 bits of resolution (RES = 1024), you'll have
binary-range = RES * (SIG / RNG)
= 1024 * (0.1 / 3.3)
= 1024 * .03
= 31.03
A range of 31 means around log2(31) (~= 5) useful bits of resolution. Is that enough for your application?
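The same arithmetic in a couple of lines, so you can plug in your own scope reading:

```python
import math

def useful_bits(sig_vpp, adc_range_v, adc_steps):
    codes = adc_steps * (sig_vpp / adc_range_v)  # ADC codes covered by the swing
    return codes, math.log2(codes)

codes, bits = useful_bits(0.1, 3.3, 1024)
print(round(codes), round(bits, 2))  # 31 4.96 -- about 5 useful bits
```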
As a side note: the ADC will give you positive values with a DC offset. You will probably need to filter the digital output with a DC-blocking filter before playback: https://manual.audacityteam.org/man/dc_offset.html
On my Raspberry Pi, I need to run two motors with an L298N.
I can PWM the enable pins to change speeds, but I saw that the gpiozero robot library can make things a lot easier.
When using the gpiozero robot library, how can I alter the speeds of those motors by giving a signal to the enable pins?
I have exactly the same situation. You can of course program the motors separately, but it is nice to use the Robot class.
Looking into the gpiozero code, I found that in our case the left and right tuples can take a third element, which is the pin for PWM motor speed control (GPIO pins 12, 13, 18 and 19 have hardware PWM support). The first two output pins in the tuple are signalled 1, 0 for forward and 0, 1 for back.
So here is my line of code:
Initio = Robot(left=(4, 5, 12), right=(17, 18, 13))
Hope it works for you!
I have some interesting code on the stocks for controlling the robot's absolute position, so it can explore its environment.
To alter the speeds you need a PWM signal, which you can also generate directly with RPi.GPIO, without the gpiozero library.
To create a PWM instance:
p = GPIO.PWM(channel, frequency)
To start PWM:
p.start(dc) # where dc is the duty cycle (0.0 <= dc <= 100.0)
To change the frequency:
p.ChangeFrequency(freq) # where freq is the new frequency in Hz
To change the duty cycle:
p.ChangeDutyCycle(dc) # where 0.0 <= dc <= 100.0
To stop PWM:
p.stop()
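The direction-pin/enable-pin logic behind both answers can be captured in one small helper (a hardware-free sketch; the pin names are illustrative, and on the Pi the three outputs would go to IN1, IN2 and EN of the L298N):

```python
def l298n_outputs(speed):
    """Map a signed speed in [-1.0, 1.0] to (IN1, IN2, enable duty %)."""
    if not -1.0 <= speed <= 1.0:
        raise ValueError("speed must be within [-1, 1]")
    if speed >= 0:
        in1, in2 = 1, 0           # forward: IN1 high, IN2 low
    else:
        in1, in2 = 0, 1           # reverse: IN1 low, IN2 high
    duty = abs(speed) * 100.0     # PWM duty cycle on the enable pin
    return in1, in2, duty

print(l298n_outputs(0.5))    # (1, 0, 50.0) -- half speed forward
print(l298n_outputs(-0.25))  # (0, 1, 25.0) -- quarter speed reverse
```

With gpiozero's Robot class, the same mapping happens for you when you call methods like forward() with a speed argument between 0 and 1.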
I'm writing a "play sound" program on a Raspberry Pi with ALSA.
I call snd_pcm_writei every 1280 samples.
I wish to add a small LED and have it grow brighter while the sound is louder and darker while the sound is quieter.
My plan is: if there were a callback every short time period (e.g. 100 ms) in which I could get the instantaneous volume, I could control the LED from it.
On Android there's AudioTrack.setPositionNotificationPeriod; however, I've no idea how to do this under Linux with ALSA.
Could anyone give me some advice?
The playback function looks like this:
// nLeftFrameSize: Total sample number.
// hDevice: Play device handle (initialized beforehand).
// lpbyBuffer: Sample buffer.
while(nLeftFrameSize > 0){
nRes = (int)snd_pcm_writei(( snd_pcm_t*)hDevice, lpbyBuffer, 1280);
nLeftFrameSize -= 1280;
}
I've tried calculating the RMS in the while loop before snd_pcm_writei(), setting the LED brightness, and sleeping, to make sure the LED lights up while those 1280 samples are playing, but this makes the sound discontinuous.
So I'll create another thread for the LED control, where I can sleep without disturbing playback.
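The per-block volume computation itself is simple; here is the RMS-to-brightness mapping sketched in Python (the real program would do this in C inside the LED thread, once per 1280-sample block):

```python
import math

def block_rms(samples):
    """RMS of one block of signed 16-bit samples."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def led_brightness(samples, full_scale=32768.0):
    """Map RMS to a 0.0-1.0 LED duty cycle."""
    return min(block_rms(samples) / full_scale, 1.0)

silence = [0] * 1280
loud = [16384, -16384] * 640  # constant-amplitude square wave
print(led_brightness(silence))  # 0.0
print(led_brightness(loud))     # 0.5
```

At 44.1 kHz a 1280-sample block is about 29 ms, so updating the LED once per block already gives the ~100 ms responsiveness the question asks for.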
I recently designed a sound recorder on a Mac using AudioUnits. It was designed to behave like a video security system, recording continuously, with a graphics display of power levels for playback browsing.
I've noticed that every 85 minutes, distortion appears for about 3 minutes. After a day of elimination, it appears that the sound acquisition that occurs before the callback is called uses a circular buffer, and the callback's AudioUnitRender function extracts from this buffer at a slightly slower rate, which eventually causes the internal buffer's write pointer to wrap around and catch up with AudioUnitRender's reads. A duplex-operation test shows the latency ever increasing; after 85 minutes you hear about 200-300 ms of latency, and the noise begins as the rendered buffer frame contains a combination of segments from the end and the beginning of the buffer, i.e. long and short latencies. As the pointers drift apart, the noise disappears and you hear clean audio with the original short latency; then it repeats 85 minutes later. Even with low-impact callback processing this still happens. I've seen some posts regarding latency, but none regarding clashes. Has anyone seen this?
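A back-of-the-envelope check of the numbers above is consistent with a small clock-rate mismatch between the two devices:

```python
latency_s = 0.25     # ~200-300 ms of accumulated latency
period_s = 85 * 60   # built up over 85 minutes

drift_ppm = latency_s / period_s * 1e6
print(round(drift_ppm, 1))  # ~49 ppm -- a plausible mismatch between two
                            # consumer crystals rated at tens of ppm each
```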
OS X 10.9.5, Xcode 6.1.1
Code details:
//modes 1=playback, 2=record, 3=both
AudioComponentDescription outputcd = {0}; // 10.6 version
outputcd.componentType = kAudioUnitType_Output;
outputcd.componentSubType = kAudioUnitSubType_HALOutput; //allows duplex
outputcd.componentManufacturer = kAudioUnitManufacturer_Apple;
AudioComponent comp = AudioComponentFindNext(NULL, &outputcd);
if (comp == NULL) { printf("can't get output unit"); exit(-1); }
CheckError(AudioComponentInstanceNew(comp, au), "Couldn't open component for outputUnit");

// tell the input bus that it's an input, tell the output bus it's an output
if (mode==1 || mode==3) r = [self setAudioMode:*au :0]; //play
if (mode==2 || mode==3) r = [self setAudioMode:*au :1]; //rec

// register render callbacks
if (mode==1 || mode==3) [self setCallBack:*au :0];
if (mode==2 || mode==3) [self setCallBack:*au :1];
// if (mode==2 || mode==3) [self setAllocBuffer:*au];

// get default stream, change number of channels
AudioStreamBasicDescription audioFormat;
UInt32 k = sizeof(audioFormat);
r = AudioUnitGetProperty(*au,
                         kAudioUnitProperty_StreamFormat,
                         kAudioUnitScope_Output,
                         1,
                         &audioFormat,
                         &k);
audioFormat.mChannelsPerFrame = 1;
r = AudioUnitSetProperty(*au,
                         kAudioUnitProperty_StreamFormat,
                         kAudioUnitScope_Output,
                         1,
                         &audioFormat,
                         k);

//start
CheckError(AudioUnitInitialize(outputUnit), "Couldn't initialize output unit");
//record callback
OSStatus RecProc(void *inRefCon,
                 AudioUnitRenderActionFlags *ioActionFlags,
                 const AudioTimeStamp *inTimeStamp,
                 UInt32 inBusNumber,
                 UInt32 inNumberFrames,
                 AudioBufferList *ioData)
{
    myView *mv2 = (__bridge myView *)inRefCon;
    AudioBuffer buffer, buffer2;
    OSStatus status;

    buffer.mDataByteSize = inNumberFrames * 4; // buffer size
    buffer.mNumberChannels = 1;                // one channel
    buffer.mData = mv2->rdata;
    buffer2.mDataByteSize = inNumberFrames * 4; // buffer size
    buffer2.mNumberChannels = 1;                // one channel
    buffer2.mData = mv2->rdata2;

    AudioBufferList bufferList;
    bufferList.mNumberBuffers = 2;
    bufferList.mBuffers[0] = buffer;
    bufferList.mBuffers[1] = buffer2;

    status = AudioUnitRender(mv2->outputUnit, ioActionFlags, inTimeStamp, inBusNumber, inNumberFrames, &bufferList);
    [mv2 recproc:mv2->rdata :mv2->rdata2 :inNumberFrames]; // was mv->rdata (typo): fixed to mv2
    return noErr;
}
You seem to be using the HAL output unit to pull input. There is no guarantee that the input and output devices' sample rates are exactly locked; any slight drift in either device's sample rate can eventually cause a buffer underflow or overflow.
One solution might be to find and set an input device for a separate input audio unit instead of depending on the default output unit. Try a USB mic, for instance.
According to this article, https://www.native-instruments.com/forum/threads/latency-drift-problem-on-macbook.175551/, this problem appears to be a USB audio driver bug in Mavericks. I didn't find a kext replacement solution anywhere.
After making a sonar-type tester (one cycle of a 22 kHz square-wave click every 600 ms to the speaker, displaying the selected recorded frame number after the click), I could see the 3-4 samples per second of drift, along with the distortion/latency-drift reset, after 1.5 hrs. I then looked for a way to access the buffer pointers to stabilise the latency drift, but had no luck there either.
API latency queries also show no changes as it drifts.
I did find that you can reset the latency with AudioUnitStop then AudioUnitStart (on the same thread), but it worked only if just one audio-unit bus was active system-wide. Research also showed that the latency can be reset by toggling the hardware device's sample rate in Audio MIDI Setup; this is a bit aggressive and would be uncomfortable for some.
My design toggles the nominal sample rate (AudioObjectSetPropertyData with kAudioDevicePropertyNominalSampleRate) every 60 minutes (48000, then back to 44100), with the delay handled by waiting for the change notification in a callback.
This causes a 2-second gap in audio input and output every hour. Safari playing a YouTube video mutes, with a 1-2 second video freeze during that time; VLC showed the same, but its video remained smooth during the 2-second silence.
Like I said, it won't work for everyone, but I chose a system-wide 2-second mute every hour over a recording with 3 minutes of fuzzy audio every 1.5 hrs. It's been posted that upgrading to Yosemite fixes this, although some have also found crackling after going up to Yosemite.