- (IBAction)triggerSound {
    if (player == nil) {
        NSError *error;
        player = [[AVAudioPlayer alloc] initWithContentsOfURL:url error:&error];
    } else if ([player isPlaying]) {
        [player stop];
        player.currentTime = 0;
    }
    if (![player play]) {
        NSLog(@"fail");
    }
}
Here player is an AVAudioPlayer and this action is wired to a UIButton. The sound plays, and it can be played again after it finishes, but when I press the button again during playback, "fail" is logged and the sound cannot be played; no exception is thrown.
I am using an iPad with iOS 4.2.1.
I have noticed that the audio stream fails to play after calling stop (which, according to the reference, undoes what prepareToPlay and play do).
Instead of calling stop, setting currentTime to player.duration (the end of the stream) worked.
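A minimal sketch of that workaround, adapting the triggerSound action from the question (the exact restart sequence is my own illustration of the idea, not verified on every iOS version):
- (IBAction)triggerSound {
    if (player == nil) {
        NSError *error;
        player = [[AVAudioPlayer alloc] initWithContentsOfURL:url error:&error];
    } else if ([player isPlaying]) {
        // Jump to the end of the stream instead of calling -stop, so the
        // player's prepared state is left intact.
        player.currentTime = player.duration;
    }
    player.currentTime = 0; // rewind before (re)starting
    if (![player play]) {
        NSLog(@"fail");
    }
}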
I've read that we can now play custom sounds on the Apple Watch in watchOS 3.
According to Apple's announcement there apparently is a way, but I don't have an example to test it out: "3D spatial audio implemented using SCNAudioSource or SCNAudioPlayer. Instead, use playAudioSource:waitForCompletion: or the WatchKit sound or haptic APIs." Found here: https://developer.apple.com/library/prerelease/content/releasenotes/General/WhatsNewInwatchOS/Articles/watchOS3.html
Can someone post a simple example of this? I'm not using SceneKit in my app, as I don't need it, but if that's the only way to play a custom sound then I'd like to know the minimum code required to accomplish it. Preferably in Objective-C, but I'll take it in whatever shape; I'm OK with using SpriteKit if that's easier, too.
Here's what I have so far, but it doesn't work:
SCNNode *audioNode = [[SCNNode alloc] init];
SCNAudioSource *audioSource = [SCNAudioSource audioSourceNamed:@"mysound.mp3"];
SCNAudioPlayer *audioPlayer = [SCNAudioPlayer audioPlayerWithSource:audioSource];
[audioNode addAudioPlayer:audioPlayer];
SCNAction *play = [SCNAction playAudioSource:audioSource waitForCompletion:YES];
[audioNode runAction:play];
I can confirm that @ApperleyA's solution really works!
Here is the Swift version:
var _audioPlayer: AVAudioPlayerNode!
var _audioEngine: AVAudioEngine!

func playAudio() {
    if _audioPlayer == nil {
        _audioPlayer = AVAudioPlayerNode()
        _audioEngine = AVAudioEngine()
        _audioEngine.attach(_audioPlayer)
        let stereoFormat = AVAudioFormat(standardFormatWithSampleRate: 44100, channels: 2)
        _audioEngine.connect(_audioPlayer, to: _audioEngine.mainMixerNode, format: stereoFormat)
        do {
            if !_audioEngine.isRunning {
                try _audioEngine.start()
            }
        } catch {}
    }
    if let path = Bundle.main.path(forResource: "test", ofType: "mp3") {
        let fileUrl = URL(fileURLWithPath: path)
        do {
            let asset = try AVAudioFile(forReading: fileUrl)
            _audioPlayer.scheduleFile(asset, at: nil, completionHandler: nil)
            _audioPlayer.play()
        } catch {
            print("asset error")
        }
    }
}
This is Objective-C but can be translated into Swift.
I ended up using AVAudioEngine and AVAudioPlayerNode to play audio on the Apple Watch.
The gist of how to do this is as follows:
I call the following inside the init method of my AudioPlayer (an NSObject subclass that encapsulates the functionality):
_audioPlayer = [[AVAudioPlayerNode alloc] init];
_audioEngine = [[AVAudioEngine alloc] init];
[_audioEngine attachNode:_audioPlayer];
AVAudioFormat *stereoFormat = [[AVAudioFormat alloc] initStandardFormatWithSampleRate:44100 channels:2];
[_audioEngine connect:_audioPlayer to:_audioEngine.mainMixerNode format:stereoFormat];
if (!_audioEngine.isRunning) {
    NSError *error;
    [_audioEngine startAndReturnError:&error];
}
I have a cache set up so I don't recreate the AVAudioFile assets every time I want to play a sound, but you don't need one.
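If you do want a cache, a minimal sketch might look like this (the assetCache dictionary property is my own illustration, not part of the original code):
// assetCache is a hypothetical NSMutableDictionary property keyed by resource name.
AVAudioFile *asset = [self.assetCache objectForKey:key];
if (asset == nil) {
    NSError *error;
    NSURL *url = [NSURL fileURLWithPath:[[NSBundle mainBundle] pathForResource:key ofType:@"aifc"]];
    asset = [[AVAudioFile alloc] initForReading:url error:&error];
    if (asset != nil) {
        [self.assetCache setObject:asset forKey:key];
    }
}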
So next create an AVAudioFile object:
NSError *error;
NSBundle *appBundle = [NSBundle mainBundle];
NSURL *url = [NSURL fileURLWithPath:[appBundle pathForResource:key ofType:@"aifc"]];
AVAudioFile *asset = [[AVAudioFile alloc] initForReading:url error:&error];
Then play that file:
[_audioPlayer scheduleFile:asset atTime:nil completionHandler:nil];
[_audioPlayer play];
UPDATE: If the app goes to sleep or is put into the background, there is a chance the audio will stop playing or fade out. Activating an audio session prevents this.
NSError *error;
[[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayback error:&error];
if (error) {
    NSLog(@"AVAudioSession setCategory ERROR: %@", error.localizedDescription);
}
[[AVAudioSession sharedInstance] setActive:YES error:&error];
if (error) {
    NSLog(@"AVAudioSession setActive ERROR: %@", error.localizedDescription);
}
I didn't go over handling any errors, but this should work. Don't forget to #import <AVFoundation/AVFoundation.h> at the top of your implementation file.
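For reference, here is a rough sketch of how the encapsulating class mentioned above might be declared; the class name and method are my own illustration, not from the original code:
#import <AVFoundation/AVFoundation.h>

@interface AudioPlayer : NSObject {
    AVAudioPlayerNode *_audioPlayer;
    AVAudioEngine *_audioEngine;
}
// Hypothetical entry point: builds (or fetches) the AVAudioFile for the
// given resource name and schedules it on the player node, as shown above.
- (void)playSoundNamed:(NSString *)key;
@end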
This worked for me in the simulator
let soundPath = Bundle.main.path(forResource: "cheerMP3", ofType: "mp3")
let soundPathURL = URL(fileURLWithPath: soundPath!)
let audioFile = WKAudioFileAsset(url: soundPathURL)
let audioItem = WKAudioFilePlayerItem(asset: audioFile)
let audioPlayer = WKAudioFilePlayer(playerItem: audioItem)
if audioPlayer.status == .readyToPlay {
    audioPlayer.play()
} else {
    print("Not ready!!")
}
but only if I had a breakpoint at both audioPlayer.play() and after the last }.
dunqan, what did you put at the top of the file for the import statements? I wasn't able to include
import AVFoundation
without an error using Xcode 8.2.1
I need to perform some actions when the user starts touching the screen, moves a finger, and then ends the touch. Touch began works fine, and so does move, but touch end fires with a delay of 0.5-1 s. Here's the code:
- (id)init {
    if (self = [super init]) {
        // Add a listener for catching touch events; the selector receives the callbacks.
        [self addGestureListener:@selector(gestureCallback:)];
        CCScene *scene = [CCScene node];
        [scene addChild:self];
        [[CCDirector sharedDirector] runWithScene:scene];
    }
    return self;
}

- (UIPanGestureRecognizer *)addGestureListener:(SEL)selector {
    UIPanGestureRecognizer *recognizer = [[[UIPanGestureRecognizer alloc] initWithTarget:self action:selector] autorelease];
    [[[CCDirector sharedDirector] openGLView] addGestureRecognizer:recognizer];
    return recognizer;
}

- (void)gestureCallback:(UIPanGestureRecognizer *)recognizer {
    if (recognizer.state == UIGestureRecognizerStateBegan) {
        NSLog(@"start");
    } else if (recognizer.state == UIGestureRecognizerStateChanged) {
        NSLog(@"moved");
    } else if (recognizer.state == UIGestureRecognizerStateEnded) {
        NSLog(@"ended");
    }
}
In the log (last 2 lines) I see this:
2012-10-15 11:29:03.609 App[6169:c07] moved
2012-10-15 11:29:04.267 App[6169:c07] ended
Any ideas?
Have a look at the gesture recognizer's delaysTouchesEnded property. From Apple's docs:
When the value of this property is YES (the default) and the receiver is analyzing touch events, the window suspends delivery of touch objects in the UITouchPhaseEnded phase to the attached view. If the gesture recognizer subsequently recognizes its gesture, these touch objects are cancelled (via a touchesCancelled:withEvent: message). If the gesture recognizer does not recognize its gesture, the window delivers these objects in an invocation of the view's touchesEnded:withEvent: method. Set this property to NO to have touch objects in the UITouchPhaseEnded phase delivered to the view while the gesture recognizer is analyzing the same touches.
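Applied to the addGestureListener: method from the question, the change is a one-liner; a sketch, assuming the delayed touch delivery is indeed what you are seeing:
- (UIPanGestureRecognizer *)addGestureListener:(SEL)selector {
    UIPanGestureRecognizer *recognizer = [[[UIPanGestureRecognizer alloc] initWithTarget:self action:selector] autorelease];
    // Deliver UITouchPhaseEnded touches to the view immediately instead of
    // holding them back while the recognizer is still analyzing the gesture.
    recognizer.delaysTouchesEnded = NO;
    [[[CCDirector sharedDirector] openGLView] addGestureRecognizer:recognizer];
    return recognizer;
}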
I am only getting this delay on the simulator. I do not see the same delay when I am using a physical device.
I have a Game Center sandbox account and have tested my game, earned achievements, etc.
Now I've made some changes and want to test earning achievements again.
Do I have to create an entirely new sandbox account, or is there a way to reset my account?
The following code is from the Apple documentation:
- (void)resetAchievements
{
    // Clear all locally saved achievement objects.
    achievementsDictionary = [[NSMutableDictionary alloc] init];
    // Clear all progress saved on Game Center.
    [GKAchievement resetAchievementsWithCompletionHandler:^(NSError *error)
    {
        if (error != nil) {
            // Handle errors.
        }
    }];
}
Also have a look at Apple's sample project GKTapper.
// Reset all the achievements for the local player
- (void)resetAchievements
{
    [GKAchievement resetAchievementsWithCompletionHandler:^(NSError *error)
    {
        if (!error) {
            [storedAchievements release];
            storedAchievements = [[NSMutableDictionary alloc] init];
            // Overwrite any previously stored file.
            [self writeStoredAchievements];
        } else {
            // Error clearing achievements.
        }
    }];
}
I have a music button in the main UI of my iOS app, and I want that one button to act both as a stop button and as a button that plays the music again. My problem is that I can only stop the music; I can't play it again. Here is my code (Xcode 4). How can I define one button for both playing and stopping the music? I have assigned one button to both stopMusic and playMusic, but it doesn't work.
- (IBAction)stopMusic {
    self.playBgMusic.enabled = YES;
    [self.player stop];
}

- (IBAction)playMusic {
    self.playBgMusic.enabled = YES;
    [self.player play];
}

// Implement viewDidLoad to do additional setup after loading the view, typically from a nib.
- (void)viewDidLoad
{
    NSString *path = [[NSBundle mainBundle] pathForResource:@"music" ofType:@"mp3"];
    self.player = [[AVAudioPlayer alloc] initWithContentsOfURL:[NSURL fileURLWithPath:path] error:NULL];
    player.delegate = self;
    [player play];
    player.numberOfLoops = -1;
    [super viewDidLoad];
}
Rather than have a single button try to send two different messages (stopMusic, playMusic), have it send toggleMusic every time. The toggleMusic method can then stop or play the music based on the current state of the player.
- (IBAction)toggleMusic {
    if ([self.player isPlaying]) {
        [self.player stop];
    } else {
        [self.player play];
    }
    self.playBgMusic.enabled = YES;
}
I use Matt Gallagher's audio streamer for streaming radio stations, but how do I record the audio? Is there a way to get the downloaded packets into NSData and save them to an audio file in the Documents folder on the iPhone?
Thanks
Yes, there is, and I have done it. My problem is being able to play it back IN the same streamer (asked elsewhere); it will play back with the standard AVAudioPlayer in iOS. The code below saves the data to a file by writing it out in the streamer code.
This example is missing some error checks, but it will give you the main idea.
First, a call from the main thread to start and stop recording. This is in my view controller when someone presses record:
//---------------------------------------------------------
// Record button was pressed (toggle on/off).
// Writes a file to the documents directory using date and time for the name.
//---------------------------------------------------------
-(IBAction)recordButton:(id)sender {
    // Only start if the streamer is playing (self.streamer is my streamer instance).
    if ([self.streamer isPlaying]) {
        NSDate *currentDateTime = [NSDate date]; // get current date and time
        NSDateFormatter *dateFormatter = [[[NSDateFormatter alloc] init] autorelease];
        [dateFormatter setDateFormat:@"EEEE MMMM d yyyy 'at' HH:mm:ss"]; // 'yyyy' (calendar year), not 'YYYY' (week-based year)
        NSString *dateString = [dateFormatter stringFromDate:currentDateTime];
        self.isRecording = !self.isRecording; // toggle recording state BOOL
        if (self.isRecording)
        {
            // Start recording here.
            // Change the record button to show it is recording - this is an IBOutlet.
            [self.recordButtonImage setImage:[UIImage imageNamed:@"Record2.png"] forState:UIControlStateNormal];
            // Call AudioStreamer to start recording. It returns the file path.
            self.recordFilePath = [self.streamer recordStream:TRUE fileName:dateString]; // start file stream and get file path
        } else
        {
            // Stop recording here.
            // Change the button back.
            [self.recordButtonImage setImage:[UIImage imageNamed:@"Record.png"] forState:UIControlStateNormal];
            // Call streamer code to stop the recording. It also returns the file path again.
            self.recordFilePath = [self.streamer recordStream:FALSE fileName:nil]; // stop stream and get file path
            // Add to "recorded files" for selecting a recorded file later.
            // First, add channel, date, time.
            dateString = [NSString stringWithFormat:@"%@ Recorded on %@", self.model.stationName, dateString]; // used to identify the item in a list later
            // The dictionary will be used to hold the data on this recording for display elsewhere.
            NSDictionary *row1 = [[[NSDictionary alloc] initWithObjectsAndKeys:self.recordFilePath, @"path", dateString, @"dateTime", nil] autorelease];
            // Save the stream info in an array of recorded streams.
            if (self.model.recordedStreamsArray == nil) {
                self.model.recordedStreamsArray = [[NSMutableArray alloc] init]; // init the array
            }
            [self.model.recordedStreamsArray addObject:row1]; // dict for this recording
        }
    }
}
NOW, in AudioStreamer.m, I need to handle the record setup call from above:
- (NSString *)recordStream:(BOOL)record fileName:(NSString *)fileName
{
    // This will start/stop recording, and return the file path.
    if (record) {
        if (state == AS_PLAYING)
        {
            // Now open a file to save the data into.
            NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
            NSString *documentsDirectory = [paths objectAtIndex:0];
            // Will call this an mp3 file for now (this may need to change).
            NSMutableString *temp = [NSMutableString stringWithString:[documentsDirectory stringByAppendingFormat:@"/%@.mp3", fileName]];
            // Remove the ':' in the time string, and create a file name w/ time & date.
            [temp replaceOccurrencesOfString:@":" withString:@"" options:NSLiteralSearch range:NSMakeRange(0, [temp length])];
            self.filePath = temp; // file name is date/time generated
            NSLog(@"Stream Save File Open = %@", self.filePath);
            // Open the recording file stream output.
            self.fileStream = [NSOutputStream outputStreamToFileAtPath:self.filePath append:NO];
            [self.fileStream open];
            NSLog(@"recording to %@", self.fileStream);
            self.isRecording = TRUE;
            return (self.filePath); // if started, send back the file path
        }
        return (nil); // if not started, return nil for error checking
    } else {
        // We are done; close the stream.
        if (self.fileStream != nil) {
            [self.fileStream close];
            self.fileStream = nil;
        }
        NSLog(@"stop recording");
        self.isRecording = FALSE;
        return (self.filePath); // when stopping, return the file path
    }
}
LASTLY, we need to modify the data portion of the streamer to actually save the bytes. You need to modify the stream code in the method -(void)handleReadFromStream:(CFReadStreamRef)aStream eventType:(CFStreamEventType)eventType.
Scroll down in that method until you find:
@synchronized(self)
{
    if ([self isFinishing] || !CFReadStreamHasBytesAvailable(stream))
    {
        return;
    }

    //
    // Read the bytes from the stream
    //
    length = CFReadStreamRead(stream, bytes, kAQDefaultBufSize);
    if (length == -1)
    {
        [self failWithErrorCode:AS_AUDIO_DATA_NOT_FOUND];
        return;
    }
RIGHT after the length = line, add the following code:
    //
    // If recording, save the raw data to a file.
    //
    if (self.isRecording && length != 0) {
        //
        // Write the data to the file, looping until every byte read from
        // the stream has been written out.
        //
        NSInteger bytesWritten;
        NSInteger bytesWrittenSoFar = 0;
        do {
            bytesWritten = [self.fileStream write:&bytes[bytesWrittenSoFar] maxLength:length - bytesWrittenSoFar];
            NSLog(@"bytesWritten = %ld", (long)bytesWritten); // %ld with a cast, since bytesWritten is an NSInteger
            if (bytesWritten == -1) {
                [self.fileStream close];
                self.fileStream = nil;
                NSLog(@"File write error");
                break;
            } else {
                bytesWrittenSoFar += bytesWritten;
            }
        } while (bytesWrittenSoFar != length);
    }
Here are the .h declarations, added to the interface in AudioStreamer.h:
// for recording and saving a stream
NSString* filePath;
NSOutputStream* fileStream;
BOOL isRecording;
BOOL isPlayingFile;
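Note that the streamer code above accesses these via dot syntax (self.filePath, self.fileStream, self.isRecording), so matching property declarations are needed as well. A sketch, assuming manual reference counting as used elsewhere in this answer:
@property (nonatomic, copy) NSString *filePath; // path of the file being recorded
@property (nonatomic, retain) NSOutputStream *fileStream; // output stream for the recording
@property (nonatomic, assign) BOOL isRecording;
@property (nonatomic, assign) BOOL isPlayingFile;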
In your view controller you will need:
@property (nonatomic, assign) IBOutlet UIButton *recordButtonImage;
@property (nonatomic, assign) BOOL isRecording;
@property (nonatomic, copy) NSString *recordFilePath;
Hope this helps someone. Let me know if you have questions; I'm always happy to hear from someone who can improve this.
Also, someone asked about self.model.xxx. Model is a data object I created to let me easily pass around data that is used, and modified, by more than one object. I know global data is bad form, but there are times it just makes things easier. I pass the data model to each new object when called. I save an array of channels, the song name, the artist name, and other stream-related data inside the model. I also put any data I want to persist across launches there, like settings, and write the data model to a file each time a persistent value changes. In this example you can keep the data locally. If you need help with the model passing, let me know.
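For context, judging only from the properties this answer references (stationName, stationURL, recordedStreamsArray, playRecordedSong), the model's interface might look roughly like the following; the class name and exact types are my own guesses:
@interface DataModel : NSObject
@property (nonatomic, copy) NSString *stationName; // used to label recordings
@property (nonatomic, copy) NSString *stationURL; // stream URL, or path to a recorded file
@property (nonatomic, retain) NSMutableArray *recordedStreamsArray; // dictionaries describing saved recordings
@property (nonatomic, assign) NSTimeInterval playRecordedSong; // seconds into the recording to start playback
@end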
OK, here is how I play back the recorded file. When playing a file, the station URL contains the path to the file. self.model.playRecordedSong contains a time value for how many seconds into the stream I want to start playing. I keep a dictionary of song names and time indexes, so I can jump into the recorded stream at the start of any song. Use 0 to start from the beginning.
NSError *error;
NSURL *url = [NSURL fileURLWithPath:[NSString stringWithFormat:self.model.stationURL, [[NSBundle mainBundle] resourcePath]]];
// Get the file URL and then create an audio player if we don't already have one.
if (audioPlayer == nil) {
    // Set the seconds count to the proper start point (0, or some time into the stream).
    // This will be 0 for the start of the stream, or some value passed back if they picked a song.
    self.recordPlaySecondsCount = self.model.playRecordedSong;
    // Create a new player.
    audioPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:url error:&error];
    // Set self as the delegate so we can catch the end of the file.
    [audioPlayer setDelegate:self];
    // The audio player needs an NSTimeInterval. Get it from the seconds start point.
    NSTimeInterval interval = self.model.playRecordedSong;
    // Seek to the proper place in the file.
    audioPlayer.currentTime = interval;
}
audioPlayer.numberOfLoops = 0; // do not repeat
if (audioPlayer == nil) {
    NSLog(@"AVAudioPlayer error: %@", error);
    // I need to do more on the error of no player.
} else {
    [audioPlayer play];
}
I hope this helps you play back the recorded file.
Try this class; it offers a full solution for radio streaming, recording, and playback. You can find it on GitHub, and it is very easy to use.