I am trying to remove the audio from a video and I am using the SCRecorder class, but the audio still plays. Is there a way to remove audio from video using the SCRecorder class? I tried the following code in my project:
SCRecorder *recorder = [SCRecorder recorder]; // You can also use +[SCRecorder sharedRecorder]
SCAudioConfiguration *audio = recorder.audioConfiguration;
// Whether the audio should be enabled or not
audio.enabled = NO;
[_player play];
In the SCRecorder class you need to remove or comment out this block of code:
// if (self.audioConfiguration.enabled) {
//     if (_audioOutput == nil) {
//         _audioOutput = [[AVCaptureAudioDataOutput alloc] init];
//         [_audioOutput setSampleBufferDelegate:self queue:_audioQueue];
//     }
//
//     if ([session canAddOutput:_audioOutput]) {
//         [session addOutput:_audioOutput];
//         _audioOutputAdded = YES;
//     } else {
//         audioError = [SCRecorder createError:@"Cannot add audioOutput inside the sesssion"];
//     }
// }
You can find this code in the method below:
- (void)openSession:(void(^)(NSError *sessionError, NSError *audioError, NSError *videoError, NSError *photoError))completionHandler {
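If you would rather not edit the SCRecorder source, another option is to strip the audio track from the finished movie with plain AVFoundation. This is only a sketch, not SCRecorder API; the input/output URLs, the method name, and the export preset are assumptions you would adapt to your project:
#import <AVFoundation/AVFoundation.h>

// Build a composition that copies only the video track, then export it (no audio track is added).
- (void)exportVideoWithoutAudioFrom:(NSURL *)sourceURL to:(NSURL *)outputURL
{
    AVURLAsset *asset = [AVURLAsset URLAssetWithURL:sourceURL options:nil];
    AVAssetTrack *videoTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] firstObject];

    AVMutableComposition *composition = [AVMutableComposition composition];
    AVMutableCompositionTrack *compVideo =
        [composition addMutableTrackWithMediaType:AVMediaTypeVideo
                                 preferredTrackID:kCMPersistentTrackID_Invalid];
    NSError *error = nil;
    [compVideo insertTimeRange:CMTimeRangeMake(kCMTimeZero, asset.duration)
                       ofTrack:videoTrack
                        atTime:kCMTimeZero
                         error:&error];

    AVAssetExportSession *export =
        [[AVAssetExportSession alloc] initWithAsset:composition
                                         presetName:AVAssetExportPresetHighestQuality];
    export.outputURL = outputURL;
    export.outputFileType = AVFileTypeQuickTimeMovie;
    [export exportAsynchronouslyWithCompletionHandler:^{
        NSLog(@"Export finished with status %ld", (long)export.status);
    }];
}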
I've read that we can now play custom sounds on the Apple Watch in watchOS 3.
According to the announcement from Apple there apparently is a way, but I don't have an example to test it out: "3D spatial audio implemented using SCNAudioSource or SCNAudioPlayer. Instead, use playAudioSource:waitForCompletion: or the WatchKit sound or haptic APIs." Found here: https://developer.apple.com/library/prerelease/content/releasenotes/General/WhatsNewInwatchOS/Articles/watchOS3.html
Can someone post a simple example of this? I'm not using SceneKit in my app as I don't need it, but if that's the only way to play a custom sound then I'd like to know the minimum code required to accomplish this. Preferably in Objective-C, but I'll take it in whatever shape. I'm OK with using SpriteKit if that's easier.
Here's what I have so far but it doesn't work:
SCNNode * audioNode = [[SCNNode alloc] init];
SCNAudioSource * audioSource = [SCNAudioSource audioSourceNamed:@"mysound.mp3"];
SCNAudioPlayer * audioPlayer = [SCNAudioPlayer audioPlayerWithSource:audioSource];
[audioNode addAudioPlayer:audioPlayer];
SCNAction * play = [SCNAction playAudioSource:audioSource waitForCompletion:YES];
[audioNode runAction:play];
I can confirm that @ApperleyA's solution really works!
Here is the Swift version:
var _audioPlayer : AVAudioPlayerNode!
var _audioEngine : AVAudioEngine!
func playAudio() {
    if _audioPlayer == nil {
        _audioPlayer = AVAudioPlayerNode()
        _audioEngine = AVAudioEngine()
        _audioEngine.attach(_audioPlayer)

        let stereoFormat = AVAudioFormat(standardFormatWithSampleRate: 44100, channels: 2)
        _audioEngine.connect(_audioPlayer, to: _audioEngine.mainMixerNode, format: stereoFormat)

        do {
            if !_audioEngine.isRunning {
                try _audioEngine.start()
            }
        } catch {}
    }

    if let path = Bundle.main.path(forResource: "test", ofType: "mp3") {
        let fileUrl = URL(fileURLWithPath: path)
        do {
            let asset = try AVAudioFile(forReading: fileUrl)
            _audioPlayer.scheduleFile(asset, at: nil, completionHandler: nil)
            _audioPlayer.play()
        } catch {
            print("asset error")
        }
    }
}
This is Objective-C but can be translated into Swift.
I ended up using AVAudioEngine and AVAudioPlayerNode to play audio on the Apple Watch.
The gist of how to do this is as follows:
I call the following inside the init method of my AudioPlayer (it's an NSObject subclass to encapsulate the functionality):
_audioPlayer = [[AVAudioPlayerNode alloc] init];
_audioEngine = [[AVAudioEngine alloc] init];
[_audioEngine attachNode:_audioPlayer];
AVAudioFormat *stereoFormat = [[AVAudioFormat alloc] initStandardFormatWithSampleRate:44100 channels:2];
[_audioEngine connect:_audioPlayer to:_audioEngine.mainMixerNode format:stereoFormat];
if (!_audioEngine.isRunning) {
    NSError *error;
    [_audioEngine startAndReturnError:&error];
}
I have a cache set up so I don't recreate the AVAudioFile assets every time I want to play a sound, but you don't need to.
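As a rough sketch only (the dictionary, method name, and key type here are assumptions, not part of my original code), the cache can be as simple as a mutable dictionary keyed by resource name:
// Lazily created cache of AVAudioFile objects, keyed by resource name.
NSMutableDictionary<NSString *, AVAudioFile *> *_fileCache;

- (AVAudioFile *)cachedFileNamed:(NSString *)name ofType:(NSString *)type
{
    if (_fileCache == nil) {
        _fileCache = [NSMutableDictionary dictionary];
    }
    AVAudioFile *file = _fileCache[name];
    if (file == nil) {
        NSURL *url = [[NSBundle mainBundle] URLForResource:name withExtension:type];
        NSError *error = nil;
        file = [[AVAudioFile alloc] initForReading:url error:&error];
        if (file != nil) {
            _fileCache[name] = file; // keep it for the next play
        }
    }
    return file;
}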
So next create an AVAudioFile object:
NSError *error;
NSBundle* appBundle = [NSBundle mainBundle];
NSURL *url = [NSURL fileURLWithPath:[appBundle pathForResource:key ofType:@"aifc"]];
AVAudioFile *asset = [[AVAudioFile alloc] initForReading:url error:&error];
Then play that file:
[_audioPlayer scheduleFile:asset atTime:nil completionHandler:nil];
[_audioPlayer play];
UPDATE: If the app goes to sleep or is put in the background, there is a chance the audio will stop playing or fade out. Activating an audio session prevents this.
NSError *error;
[[AVAudioSession sharedInstance] setCategory:AVAudioSessionCategoryPlayback error:&error];
if (error) {
    NSLog(@"AVAudioSession setCategory ERROR: %@", error.localizedDescription);
}
[[AVAudioSession sharedInstance] setActive:YES error:&error];
if (error) {
    NSLog(@"AVAudioSession setActive ERROR: %@", error.localizedDescription);
}
I didn't go over handling any errors but this should work. Don't forget to #import <AVFoundation/AVFoundation.h> at the top of your implementation file.
This worked for me in the simulator
let soundPath = Bundle.main.path(forResource: "cheerMP3", ofType: "mp3")
let soundPathURL = URL(fileURLWithPath: soundPath!)
let audioFile = WKAudioFileAsset(url: soundPathURL)
let audioItem = WKAudioFilePlayerItem(asset: audioFile)
let audioPlayer = WKAudioFilePlayer.init(playerItem: audioItem)
if audioPlayer.status == .readyToPlay {
    audioPlayer.play()
} else {
    print("Not ready!!")
}
but only if I had a breakpoint at both audioPlayer.play() and after the last }.
dunqan, what did you put at the top of the file for the import statements? I wasn't able to include
import AVFoundation
without an error using Xcode 8.2.1.
During a call I try to switch the voice from the internal speaker to the loudspeaker on an iOS device using the pjsip 2.2 library. It returns TRUE as success, but physically it doesn't change the sound destination.
I use the following code:
- (BOOL)setLoud:(BOOL)loud {
    if (loud) {
        @try {
            pjmedia_aud_dev_route route = PJMEDIA_AUD_DEV_ROUTE_LOUDSPEAKER;
            pj_status_t pj_status = pjsua_snd_set_setting(PJMEDIA_AUD_DEV_CAP_OUTPUT_ROUTE,
                                                          &route, PJ_TRUE);
            if (pj_status == PJ_SUCCESS) {
                return YES;
            } else {
                return NO;
            }
        }
        @catch (NSException *exception) {
            return NO;
        }
    } else {
        @try {
            pjmedia_aud_dev_route route = PJMEDIA_AUD_DEV_ROUTE_EARPIECE;
            pj_status_t pj_status = pjsua_snd_set_setting(PJMEDIA_AUD_DEV_CAP_OUTPUT_ROUTE,
                                                          &route, PJ_TRUE);
            if (pj_status == PJ_SUCCESS) {
                return YES;
            } else {
                return NO;
            }
        }
        @catch (NSException *exception) {
            return NO;
        }
    }
}
Could you suggest how we can make this work?
With the introduction of iOS 7, you should now be using AVAudioSession to handle any audio management. It took me a long time to finally get this to work, but I finally figured out why my audio was not automatically routing to my iPhone speaker. The problem is that when you answer a call, pjsip automatically overrides the AVAudioSessionPortOverride I was performing before the call was answered. To tackle this problem, you simply have to override the output audio port AFTER answering the call.
To make my VoIP application work efficiently with the background mode, I decided to handle the audio routing in a custom callback method named on_call_state. This method, on_call_state, is called by pjsip when a call state has changed. As you can read here, http://www.pjsip.org/pjsip/docs/html/group__PJSIP__INV.htm, there are many different flags you can check for when a call state has changed. The states I used in this example are PJSIP_INV_STATE_CONNECTING and PJSIP_INV_STATE_DISCONNECTED.
PJSIP_INV_STATE_CONNECTING is called when an audio call connects to another peer.
PJSIP_INV_STATE_DISCONNECTED is called when an audio call ends with another peer.
static void on_call_state(pjsua_call_id call_id, pjsip_event *e)
{
    pjsua_call_info ci;

    PJ_UNUSED_ARG(e);

    pjsua_call_get_info(call_id, &ci);
    PJ_LOG(3,(THIS_FILE, "Call %d state=%.*s", call_id,
              (int)ci.state_text.slen,
              ci.state_text.ptr));

    if (ci.state == PJSIP_INV_STATE_CONNECTING) {
        BOOL success;
        AVAudioSession *session = [AVAudioSession sharedInstance];
        NSError *error = nil;

        success = [session setCategory:AVAudioSessionCategoryPlayAndRecord
                           withOptions:AVAudioSessionCategoryOptionMixWithOthers
                                 error:&error];
        if (!success) NSLog(@"AVAudioSession error setCategory: %@", [error localizedDescription]);

        success = [session overrideOutputAudioPort:AVAudioSessionPortOverrideSpeaker error:&error];
        if (!success) NSLog(@"AVAudioSession error overrideOutputAudioPort: %@", [error localizedDescription]);

        success = [session setActive:YES error:&error];
        if (!success) NSLog(@"AVAudioSession error setActive: %@", [error localizedDescription]);
    } else if (ci.state == PJSIP_INV_STATE_DISCONNECTED) {
        BOOL success;
        AVAudioSession *session = [AVAudioSession sharedInstance];
        NSError *error = nil;

        success = [session setActive:NO error:&error];
        if (!success) NSLog(@"AVAudioSession error setActive: %@", [error localizedDescription]);
    }
}
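For completeness, the callback has to be registered with pjsua before any call states will reach it. This is only a sketch of the usual registration step, assuming you already call pjsua_init somewhere in your setup code (the logging and media config arguments are left as placeholders):
// Register on_call_state in the pjsua callback table before calling pjsua_init().
pjsua_config cfg;
pjsua_config_default(&cfg);
cfg.cb.on_call_state = &on_call_state;
// ... fill in any other config fields you need, then:
// pjsua_init(&cfg, &log_cfg, &media_cfg);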
I'm using Apple's KMLViewer to load a KML file and display it in a MapView. There are over 50,000 lines of coordinates in the KML file which, of course, causes it to load slowly. In an attempt to speed things up, I'm trying to perform the parsing in another thread using GCD.
I have it working reasonably well as far as it is displaying properly and the speed is acceptable. However, I'm getting intermittent runtime errors when loading the map. I suspect it is because the way I have things laid out, the UI is being updated within the GCD block. Everything I'm reading says the UI should be updated in the main thread or else runtime errors can occur which are intermittent and hard to track down. Well, that's what I'm seeing.
The problem is, I can't figure out how to update the UI in the main thread. I'm still new to iOS programming so I'm just throwing things against the wall to see what works. Here is my code, which is basically Apple's KMLViewerViewController.m with some modifications:
#import "KMLViewerViewController.h"
@implementation KMLViewerViewController
- (void)viewDidLoad
{
[super viewDidLoad];
activityIndicator.hidden = TRUE;
dispatch_queue_t myQueue = dispatch_queue_create("My Queue",NULL);
dispatch_async(myQueue, ^{
// Locate the path to the route.kml file in the application's bundle
// and parse it with the KMLParser.
NSString *path = [[NSBundle mainBundle] pathForResource:@"BigMap" ofType:@"kml"];
NSURL *url = [NSURL fileURLWithPath:path];
kmlParser = [[KMLParser alloc] initWithURL:url];
[kmlParser parseKML];
dispatch_async(dispatch_get_main_queue(), ^{
// Update the UI
// Add all of the MKOverlay objects parsed from the KML file to the map.
NSArray *overlays = [kmlParser overlays];
[map addOverlays:overlays];
// Add all of the MKAnnotation objects parsed from the KML file to the map.
NSArray *annotations = [kmlParser points];
[map addAnnotations:annotations];
// Walk the list of overlays and annotations and create a MKMapRect that
// bounds all of them and store it into flyTo.
MKMapRect flyTo = MKMapRectNull;
for (id <MKOverlay> overlay in overlays) {
if (MKMapRectIsNull(flyTo)) {
flyTo = [overlay boundingMapRect];
} else {
flyTo = MKMapRectUnion(flyTo, [overlay boundingMapRect]);
}
}
for (id <MKAnnotation> annotation in annotations) {
MKMapPoint annotationPoint = MKMapPointForCoordinate(annotation.coordinate);
MKMapRect pointRect = MKMapRectMake(annotationPoint.x, annotationPoint.y, 0, 0);
if (MKMapRectIsNull(flyTo)) {
flyTo = pointRect;
} else {
flyTo = MKMapRectUnion(flyTo, pointRect);
}
}
// Position the map so that all overlays and annotations are visible on screen.
map.visibleMapRect = flyTo;
});
});
}
-(void)viewWillAppear:(BOOL)animated
{
[super viewWillAppear:animated];
activityIndicator.hidden = FALSE;
[activityIndicator startAnimating];
}
#pragma mark MKMapViewDelegate
- (MKOverlayView *)mapView:(MKMapView *)mapView viewForOverlay:(id <MKOverlay>)overlay
{
return [kmlParser viewForOverlay:overlay];
}
- (MKAnnotationView *)mapView:(MKMapView *)mapView viewForAnnotation:(id <MKAnnotation>)annotation
{
return [kmlParser viewForAnnotation:annotation];
}
- (void)mapViewDidFinishLoadingMap:(MKMapView *)mapView
{
[activityIndicator stopAnimating];
activityIndicator.hidden = TRUE;
}
@end
Suggestions?
I'm adding FBLoginView to my ViewController < FBLoginViewDelegate >:
FBLoginView *loginview = [[FBLoginView alloc] init];
loginview.frame = CGRectOffset(loginview.frame, 5, 5);
loginview.delegate = self;
[self.view addSubview:loginview];
[loginview sizeToFit];
All the necessary fields in plist (FacebookAppID, FacebookDisplayName, URL Schemes) are all set according to the tutorial. The facebook app is also configured according to the tutorial (bundle ID is set, Facebook login is enabled).
But the login still isn't performed. When I press "log in", I get redirected to the browser with the Facebook login, but when it's finished, I'm not logged in to the app (loginViewFetchedUserInfo:user: isn't called, and "log in" hasn't changed to "log out").
What can be the problem?
Everything worked after I implemented the following in the AppDelegate.m (taken from one of the official examples):
- (void)sessionStateChanged:(FBSession *)session
state:(FBSessionState) state
error:(NSError *)error
{
switch (state) {
case FBSessionStateOpen:
if (!error) {
// We have a valid session
//NSLog(#"User session found");
[FBRequestConnection
startForMeWithCompletionHandler:^(FBRequestConnection *connection,
NSDictionary<FBGraphUser> *user,
NSError *error) {
if (!error) {
self.loggedInUserID = user.id;
self.loggedInSession = FBSession.activeSession;
}
}];
}
break;
case FBSessionStateClosed:
case FBSessionStateClosedLoginFailed:
[FBSession.activeSession closeAndClearTokenInformation];
break;
default:
break;
}
[[NSNotificationCenter defaultCenter]
postNotificationName:FBSessionStateChangedNotification
object:session];
if (error) {
UIAlertView *alertView = [[UIAlertView alloc]
initWithTitle:#"Error"
message:error.localizedDescription
delegate:nil
cancelButtonTitle:#"OK"
otherButtonTitles:nil];
[alertView show];
}
}
/*
* Opens a Facebook session and optionally shows the login UX.
*/
- (BOOL)openSessionWithAllowLoginUI:(BOOL)allowLoginUI {
return [FBSession openActiveSessionWithReadPermissions:nil
allowLoginUI:allowLoginUI
completionHandler:^(FBSession *session,
FBSessionState state,
NSError *error) {
[self sessionStateChanged:session
state:state
error:error];
}];
}
/*
*
*/
- (void) closeSession {
[FBSession.activeSession closeAndClearTokenInformation];
}
/*
* If we have a valid session at the time of openURL call, we handle
* Facebook transitions by passing the url argument to handleOpenURL
*/
- (BOOL)application:(UIApplication *)application
openURL:(NSURL *)url
sourceApplication:(NSString *)sourceApplication
annotation:(id)annotation {
// attempt to extract a token from the url
return [FBAppCall handleOpenURL:url sourceApplication:sourceApplication];
}
You need to add the following to the app delegate:
- (BOOL)application:(UIApplication *)application
openURL:(NSURL *)url
sourceApplication:(NSString *)sourceApplication
annotation:(id)annotation {
// Call FBAppCall's handleOpenURL:sourceApplication to handle Facebook app responses
BOOL wasHandled = [FBAppCall handleOpenURL:url sourceApplication:sourceApplication];
// You can add your app-specific url handling code here if needed
return wasHandled;
}
You may need to implement the
- (BOOL)application:(UIApplication *)application handleOpenURL:(NSURL *)url
method. Be sure to call:
return [FBSession.activeSession handleOpenURL:url];
when applicable.
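Put together, a minimal sketch of that delegate method might look like the following (whether you route through FBSession or FBAppCall depends on your SDK version, so treat this as an assumption):
// Older-style application:handleOpenURL: delegate method that forwards the URL to the Facebook SDK.
- (BOOL)application:(UIApplication *)application handleOpenURL:(NSURL *)url {
    return [FBSession.activeSession handleOpenURL:url];
}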
@Sergey: Do you want to open the FBDialog in your native app or in the browser? If you want to open it in your native app, use FBSessionLoginBehaviorForcingWebView. Here is the code I am using:
NSArray *permission = [NSArray arrayWithObjects:kFBEmailPermission,kFBUserPhotosPermission, nil];
FBSession *session = [[FBSession alloc] initWithPermissions:permission];
[FBSession setActiveSession: [[FBSession alloc] initWithPermissions:permission] ];
[[FBSession activeSession] openWithBehavior:FBSessionLoginBehaviorForcingWebView completionHandler:^(FBSession *session, FBSessionState status, NSError *error) {
switch (status) {
case FBSessionStateOpen:
[self yourmethod];
break;
case FBSessionStateClosedLoginFailed: {
// prefer to keep decls near to their use
// unpack the error code and reason in order to compute cancel bool
NSString *errorCode = [[error userInfo] objectForKey:FBErrorLoginFailedOriginalErrorCode];
NSString *errorReason = [[error userInfo] objectForKey:FBErrorLoginFailedReason];
BOOL userDidCancel = !errorCode && (!errorReason || [errorReason isEqualToString:FBErrorLoginFailedReasonInlineCancelledValue]);
if(error.code == 2) {
UIAlertView *errorMessage = [[UIAlertView alloc] initWithTitle:@"FBAlertTitle"
message:@"FBAuthenticationErrorMessage"
delegate:nil
cancelButtonTitle:@"Ok"
otherButtonTitles:nil];
[errorMessage performSelectorOnMainThread:@selector(show) withObject:nil waitUntilDone:YES];
errorMessage = nil;
}
}
break;
// presently extension, log-out and invalidation are being implemented in the Facebook class
default:
break; // so we do nothing in response to those state transitions
}
}];
permission = nil;
Or, if you want to open it in the browser, use the following:
In your .h file:
#import <FacebookSDK/FacebookSDK.h> and add the FBLoginViewDelegate delegate.
In your .m file:
FBLoginView *loginview = [[FBLoginView alloc] init];
loginview.frame = CGRectOffset(loginview.frame, 5, 5);
loginview.delegate = self;
[self.view addSubview:loginview];
[loginview sizeToFit];
// use following delegate methods
- (void)loginViewShowingLoggedInUser:(FBLoginView *)loginView {
// first get the buttons set for login mode
}
- (void)loginViewFetchedUserInfo:(FBLoginView *)loginView
user:(id<FBGraphUser>)user {
// here we use helper properties of FBGraphUser to dot-through to first_name and
// id properties of the json response from the server; alternatively we could use
// NSDictionary methods such as objectForKey to get values from the my json object
NSLog(#"userprofile:%#",user);
}
- (void)loginViewShowingLoggedOutUser:(FBLoginView *)loginView {
//BOOL canShareAnyhow = [FBNativeDialogs canPresentShareDialogWithSession:nil];
}
- (void)loginView:(FBLoginView *)loginView handleError:(NSError *)error {
// see https://developers.facebook.com/docs/reference/api/errors/ for general guidance on error handling for Facebook API
// our policy here is to let the login view handle errors, but to log the results
NSLog(#"FBLoginView encountered an error=%#", error);
}
I use Matt Gallagher's audio streamer for streaming radio stations. But how do I record the audio? Is there a way to get the downloaded packets into NSData and save them to an audio file in the Documents folder on the iPhone?
Thanks
Yes, there is and I have done it. My problem is being able to play it back IN the same streamer (asked elsewhere). It will play back with the standard AVAudioPlayer in iOS. However, this will save the data to a file by writing it out in the streamer code.
This example is missing some error checks, but will give you the main idea.
First, a call from the main thread to start and stop recording. This is in my viewController when someone presses record:
//---------------------------------------------------------
// Record button was pressed (toggle on/off)
// writes a file to the documents directory using date and time for the name
//---------------------------------------------------------
-(IBAction)recordButton:(id)sender {
// only start if the streamer is playing (self.streamer is my streamer instance)
if ([self.streamer isPlaying]) {
NSDate *currentDateTime = [NSDate date]; // get current date and time
NSDateFormatter *dateFormatter = [[[NSDateFormatter alloc] init] autorelease];
[dateFormatter setDateFormat:@"EEEE MMMM d YYYY 'at' HH:mm:ss"];
NSString *dateString = [dateFormatter stringFromDate:currentDateTime];
self.isRecording = !self.isRecording; // toggle recording state BOOL
if (self.isRecording)
{
// start recording here
// change the record button to show it is recording - this is an IBOutlet
[self.recordButtonImage setImage:[UIImage imageNamed:#"Record2.png"] forState:0];
// call AudioStreamer to start recording. It returns the file pointer back
//
self.recordFilePath = [self.streamer recordStream:TRUE fileName:dateString]; // start file stream and get file pointer
} else
{
//stop recording here
// change the button back
[self.recordButtonImage setImage:[UIImage imageNamed:#"Record.png"] forState:0];
// call streamer code, stop the recording. Also returns the file path again.
self.recordFilePath = [self.streamer recordStream:FALSE fileName:nil]; // stop stream and get file pointer
// add to "recorded files" for selecting a recorderd file later.
// first, add channel, date, time
dateString = [NSString stringWithFormat:#"%# Recorded on %#",self.model.stationName, dateString]; // used to identify the item in a list laster
// the dictionary will be used to hold the data on this recording for display elsewhere
NSDictionary *row1 = [[[NSDictionary alloc] initWithObjectsAndKeys: self.recordFilePath, @"path", dateString, @"dateTime", nil] autorelease];
// save the stream info in an array of recorded Streams
if (self.model.recordedStreamsArray == nil) {
self.model.recordedStreamsArray = [[NSMutableArray alloc] init]; // init the array
}
[self.model.recordedStreamsArray addObject:row1]; // dict for this recording
}
}
}
Now, in AudioStreamer.m, we need to handle the record call made above:
- (NSString*)recordStream:(BOOL)record fileName:(NSString *)fileName
{
// this will start/stop recording, and return the file pointer
if (record) {
if (state == AS_PLAYING)
{
// now open a file to save the data into
NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
NSString *documentsDirectory = [paths objectAtIndex:0];
// will call this an mp3 file for now (this may need to change)
NSMutableString *temp = [NSMutableString stringWithString:[documentsDirectory stringByAppendingFormat:@"/%@.mp3", fileName]];
// remove the ':' in the time string, and create a file name w/ time & date
[temp replaceOccurrencesOfString:@":" withString:@"" options:NSLiteralSearch range:NSMakeRange(0, [temp length])];
self.filePath = temp; // file name is date time generated.
NSLog(#"Stream Save File Open = %#", self.filePath);
// open the recording file stream output
self.fileStream = [NSOutputStream outputStreamToFileAtPath:self.filePath append:NO];
[self.fileStream open];
NSLog(#"recording to %#", self.fileStream);
self.isRecording = TRUE;
return (self.filePath); // if started, send back the file path
}
return (nil); // if not started, return nil for error checking
} else {
// save the stream here to a file.
// we are done, close the stream.
if (self.fileStream != nil) {
[self.fileStream close];
self.fileStream = nil;
}
NSLog(#"stop recording");
self.isRecording = FALSE;
return (self.filePath); // when stopping, return nil
}
}
LASTLY, we need to modify the data portion of the streamer to actually save the bytes. You need to modify the stream code in the method -(void)handleReadFromStream:(CFReadStreamRef)aStream eventType:(CFStreamEventType)eventType.
Scroll down in that method until you find:
@synchronized(self)
{
if ([self isFinishing] || !CFReadStreamHasBytesAvailable(stream))
{
return;
}
//
// Read the bytes from the stream
//
length = CFReadStreamRead(stream, bytes, kAQDefaultBufSize);
if (length == -1)
{
[self failWithErrorCode:AS_AUDIO_DATA_NOT_FOUND];
return;
}
RIGHT after the length = line, add the following code:
//
// if recording, save the raw data to a file
//
if(self.isRecording && length != 0){
//
// write the data to a file
//
NSInteger bytesWritten;
NSInteger bytesWrittenSoFar;
bytesWrittenSoFar = 0;
do {
bytesWritten = [self.fileStream write:&bytes[bytesWrittenSoFar] maxLength:length - bytesWrittenSoFar];
NSLog(#"bytesWritten = %i",bytesWritten);
if (bytesWritten == -1) {
[self.fileStream close];
self.fileStream = nil;
NSLog(#"File write error");
break;
} else {
bytesWrittenSoFar += bytesWritten;
}
} while (bytesWrittenSoFar != length);
}
Here are the .h declarations:
Added to the interface for AudioStreamer.h
// for recording and saving a stream
NSString* filePath;
NSOutputStream* fileStream;
BOOL isRecording;
BOOL isPlayingFile;
In your view controller you will need:
@property (nonatomic, assign) IBOutlet UIButton *recordButtonImage;
@property (nonatomic, assign) BOOL isRecording;
@property (nonatomic, copy) NSString *recordFilePath;
Hope this helps someone. Let me know if you have questions; I'm always happy to hear from someone who can improve this.
Also, someone asked about self.model.xxx. Model is a data object I created to let me easily pass around data that is used, and modified, by more than one object. I know global data is bad form, but there are times it just makes things easier to access. I pass the data model to each new object when it is created. I save an array of channels, the song name, the artist name, and other stream-related data inside the model. I also put any data I want to persist across launches here, like settings, and write the data model to a file each time a piece of persistent data changes. In this example, you can keep the data locally. If you need help on passing the model around, let me know.
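As a rough illustration only (the class name here is hypothetical and this is not my actual model), such a data object can be a plain NSObject subclass that holds the shared state referenced above:
// Hypothetical shared data model passed to each controller/object that needs it (MRC style, matching the code above).
@interface StreamDataModel : NSObject

@property (nonatomic, copy)   NSString       *stationName;
@property (nonatomic, copy)   NSString       *stationURL;
@property (nonatomic, retain) NSMutableArray *recordedStreamsArray; // dictionaries describing each recording
@property (nonatomic, assign) NSTimeInterval  playRecordedSong;     // seconds into the recording to start at

@end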
OK, here is how I play back the recorded file. When playing a file, the station URL contains the path to the file. self.model.playRecordedSong contains a time value for how many seconds into the stream I want to start playing. I keep a dictionary of song names and time indexes, so I can jump into the recorded stream at the start of any song. Use 0 to start from the beginning.
NSError *error;
NSURL *url = [NSURL fileURLWithPath:[NSString stringWithFormat:self.model.stationURL, [[NSBundle mainBundle] resourcePath]]];
// get the file URL and then create an audio player if we don't already have one.
if (audioPlayer == nil) {
// set the seconds count to the proper start point (0, or some time into the stream)
// this will be 0 for start of stream, or some value passed back if they picked a song.
self.recordPlaySecondsCount = self.model.playRecordedSong;
//create a new player
audioPlayer = [[AVAudioPlayer alloc] initWithContentsOfURL:url error:&error];
// set self so we can catch the end of file.
[audioPlayer setDelegate: self];
// audio player needs an NSTimeInterval. Get it from the seconds start point.
NSTimeInterval interval = self.model.playRecordedSong;
// seek to the proper place in file.
audioPlayer.currentTime = interval;
}
audioPlayer.numberOfLoops = 0; // do not repeat
if (audioPlayer == nil) {
NSLog(@"AVAudioPlayer error: %@", error);
// I need to do more on the error of no player
} else {
[audioPlayer play];
}
I hope this helps you play back the recorded file.
Try this class; it has a complete solution for radio streaming, recording, and playback. You can find it on GitHub, and it is very easy to use.