Spotify session callback get_audio_buffer_stats

I'm trying to make a program with libspotify that collects audio data. I saw in the API that there is a callback, get_audio_buffer_stats, which has stutter and samples fields. I tried adding it to the program (I am just modifying the jukebox example), but it only ever prints 0 for stutter and samples, even when I turn off the wifi and wait for the song to stop playing. By adding the code, I mean that I made a callback function for it and added it to the session callbacks. Am I missing something? Can anyone help me get this callback to work? Thanks! My code is below:
static void get_audio_buffer_stats(sp_session *sess, sp_audio_buffer_stats *stats)
{
    pthread_mutex_lock(&g_notify_mutex);
    // log session data
    stuttervariable = stats->stutter;
    samplesvariable = stats->samples;
    printf("stutter, %d\n", stuttervariable);
    printf("samples, %d\n", samplesvariable);
    pthread_cond_signal(&g_notify_cond);
    pthread_mutex_unlock(&g_notify_mutex);
}
/**
* The session callbacks
*/
static sp_session_callbacks session_callbacks = {
    .logged_in = &logged_in,
    .notify_main_thread = &notify_main_thread,
    .music_delivery = &music_delivery,
    .metadata_updated = &metadata_updated,
    .play_token_lost = &play_token_lost,
    .log_message = NULL,
    .end_of_track = &end_of_track,
    .get_audio_buffer_stats = &get_audio_buffer_stats,
};

I think the idea with get_audio_buffer_stats is that you are supposed to tell libspotify whether you've suffered stuttering and how many samples are left in your buffer. When it calls get_audio_buffer_stats, it passes a pointer to a struct that you are supposed to fill in; your code reads from that struct instead, which is why you only ever see zeros. Presumably, if you tell libspotify that you're suffering stutter, it will try to send you a bit more data to keep your buffer fuller. By telling libspotify how full your buffer is, it can compensate for drift in your clock causing you to consume audio slightly faster or slower than it expects.
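For example, a correctly written callback might look like the following sketch (g_audiofifo and its stutter field are stand-ins modelled on the jukebox example's audio FIFO, not code taken from it):

static void get_audio_buffer_stats(sp_session *sess, sp_audio_buffer_stats *stats)
{
    /* libspotify is ASKING about our buffer state here:
     * fill the struct in, don't read from it. */
    pthread_mutex_lock(&g_audiofifo.mutex);
    stats->samples = g_audiofifo.qlen;     /* samples currently buffered */
    stats->stutter = g_audiofifo.stutter;  /* dropouts since the last call */
    g_audiofifo.stutter = 0;               /* reset for the next interval */
    pthread_mutex_unlock(&g_audiofifo.mutex);
}

The stutter count itself would be incremented by your playback code whenever the output device runs dry.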


dataTaskWithURL for dummies

I keep learning iOS development, but I still can't deal with HTTP requests.
It seems crazy, but everybody I talk to about synchronous requests doesn't understand me. OK, it's really important to keep as much as possible on a background queue to provide a smooth UI. But in my case I load JSON data from a server and I need to use this data immediately.
The only way I have achieved this is with semaphores. Is that okay? Or do I have to use something else? I tried NSOperation, but in fact I have too many little requests, and creating a class for each of them doesn't make for easy-to-read code.
func getUserInfo(userID: Int) -> User {
    var user = User()
    let linkURL = URL(string: "https://server.com")!
    let session = URLSession.shared
    let semaphore = DispatchSemaphore(value: 0)
    let dataRequest = session.dataTask(with: linkURL) { (data, response, error) in
        let json = JSON(data: data!)
        user.userName = json["first_name"].stringValue
        user.userSurname = json["last_name"].stringValue
        semaphore.signal()
    }
    dataRequest.resume()
    semaphore.wait(timeout: DispatchTime.distantFuture)
    return user
}
You wrote that people don't understand you, but on the other hand this reveals that you don't understand how asynchronous network requests work.
For example, imagine you are setting an alarm for a specific time.
Now you have two options for spending the intervening time.
Do nothing but sit in front of the alarm clock and wait until the alarm goes off. Have you ever done that? Certainly not, but this is exactly what you have in mind regarding the network request.
Do several useful things, ignoring the alarm clock until it rings. That is how asynchronous tasks work.
In terms of a programming language, you need a completion handler which is called by the network request when the data has been loaded. In Swift you use a closure for that purpose.
For convenience, declare an enum with associated values for the success and failure cases and use it as the value passed back in the completion handler:
enum RequestResult {
    case Success(User), Failure(Error)
}
Add a completion handler to your function, including the error case. It is highly recommended to always handle the error parameter of an asynchronous task. When the data task returns, it calls the completion closure, passing the user or the error depending on the situation.
func getUserInfo(userID: Int, completion: @escaping (RequestResult) -> ()) {
    let linkURL = URL(string: "https://server.com")!
    let session = URLSession.shared
    let dataRequest = session.dataTask(with: linkURL) { (data, response, error) in
        if error != nil {
            completion(.Failure(error!))
        } else {
            let json = JSON(data: data!)
            var user = User()
            user.userName = json["first_name"].stringValue
            user.userSurname = json["last_name"].stringValue
            completion(.Success(user))
        }
    }
    dataRequest.resume()
}
Now you can call the function with this code:
getUserInfo(userID: 12) { result in
    switch result {
    case .Success(let user):
        print(user)
        // do something with the user
    case .Failure(let error):
        print(error)
        // handle the error
    }
}
In practice, the point in time right after your semaphore's wait call and the point at the switch result line in the completion block are exactly the same.
Never use semaphores as an alibi for not dealing with asynchronous patterns.
I hope the alarm clock example clarifies how asynchronous data processing works and why it is much more efficient to get notified (active) rather than waiting (passive).
Don't try to force network connections to work synchronously. It invariably leads to problems. Whatever code is making the above call could potentially be blocked for up to 90 seconds (30 second DNS timeout + 60 second request timeout) waiting for that request to complete or fail. That's an eternity. And if that code is running on your main thread on iOS, the operating system will kill your app outright long before you reach the 90 second mark.
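(As an aside, those default timeouts can be tightened on the session so a stuck request fails fast; a minimal sketch, assuming NSURLSession:)

// Sketch: shorten the request timeout so a hung request errors out
// quickly instead of hanging for the full default interval.
NSURLSessionConfiguration *config = [NSURLSessionConfiguration defaultSessionConfiguration];
config.timeoutIntervalForRequest = 15;  // seconds; the default is 60
NSURLSession *session = [NSURLSession sessionWithConfiguration:config];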
Instead, design your code to handle responses asynchronously. Basically:
Create data structures to hold the results of various requests, such as obtaining info from the user.
Kick off those requests.
When each request comes back, check to see if you have all the data you need to do something, and then do it.
For a really simple example, if you have a method that updates the UI with the logged in user's name, instead of:
[self updateUIWithUserInfo:[self getUserInfoForUser:user]];
you would redesign this as:
[self getUserInfoFromServerAndRun:^(NSDictionary *userInfo) {
    [self updateUIWithUserInfo:userInfo];
}];
so that when the response to the request arrives, it performs the UI update action, rather than trying to start a UI update action and having it block waiting for data from the server.
If you need two things, say the userInfo and a list of books that the user has read, you could do:
[self getUserInfoFromServerAndRun:^(NSDictionary *userInfo) {
    self.userInfo = userInfo;
    [self updateUI];
}];
[self getBookListFromServerAndRun:^(NSDictionary *bookList) {
    self.bookList = bookList;
    [self updateUI];
}];
...
- (void)updateUI
{
    if (!self.bookList) return;
    if (!self.userInfo) return;
    ...
}
or whatever. Blocks are your friend here. :-)
Yes, it's a pain to rethink your code to work asynchronously, but the end result is much, much more reliable and yields a much better user experience.

SAPI 5 TTS Events

I'm writing to ask for some advice on a particular problem with the SAPI engine. I have an application that can speak both to the speakers and to a WAV file. I also need to be aware of some events, i.e. word boundary and end input.
m_cpVoice->SetNotifyWindowMessage(m_hWnd, TTS_MSG, 0, 0);
hr = m_cpVoice->SetInterest(SPFEI_ALL_EVENTS, SPFEI_ALL_EVENTS);
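(For reference, I believe the interest could be narrowed to just the two events I actually need using SAPI's SPFEI macro; a sketch, assuming the standard event IDs from sapi.h:)

// Only word-boundary and end-of-input-stream notifications:
ULONGLONG interest = SPFEI(SPEI_WORD_BOUNDARY) | SPFEI(SPEI_END_INPUT_STREAM);
hr = m_cpVoice->SetInterest(interest, interest);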
Just for testing, though, I added all events! When the engine speaks to the speakers, all events are triggered and sent to the m_hWnd window, but when I set the output to a WAV file, none of them are sent:
CSpStreamFormat fmt;
CComPtr<ISpStreamFormat> pOld;
m_cpVoice->GetOutputStream(&pOld);
fmt.AssignFormat(pOld);
SPBindToFile(file, SPFM_CREATE_ALWAYS, &m_wavStream, &fmt.FormatId(), fmt.WaveFormatExPtr());
m_cpVoice->SetOutput(m_wavStream, false);
m_cpVoice->Speak(L"Test", SPF_ASYNC, 0);
where file is a path passed as an argument.
Really, this code is taken from the TTS samples found in the SAPI SDK. The part setting the format seems a little obscure...
Can you help me find the problem? Or does anyone know a better way to write TTS to a WAV file? I cannot use managed code; it would be better to use the C++ version...
Thank you very much for your help.
EDIT 1
This seems to be a threading problem. Searching in the spuihelp.h file, which contains the SPBindToFile helper, I found that it uses the CoCreateInstance() function to create the stream. Maybe this is where the ISpVoice object loses its ability to send events in its creation thread.
What do you think about that?
I adopted an on-the-fly solution that I think should be acceptable in most cases. In fact, when you write speech to a file, the main event you want to be aware of is the "stop" event.
So... take a look at the class definition:
#define TTS_WAV_SAVED_MSG 5000
#define TTS_WAV_ERROR_MSG 5001

class CSpeech {
public:
    CSpeech(HWND);  // needed for the notifications
    ...
private:
    HWND m_hWnd;
    CComPtr<ISpVoice> m_cpVoice;
    ...
    std::thread* m_thread;
    void WriteToWave();
    void SpeakToWave(LPCWSTR, LPCWSTR);
};
I implemented the method SpeakToWave as follows:
// Global variables (***)
WCHAR tMsg[4096];
WCHAR tFile[MAX_PATH];
long tRate;
HWND tHwnd;
ISpObjectToken* pToken;

void CSpeech::SpeakToWave(LPCWSTR file, LPCWSTR msg) {
    // Copy the arguments into the globals, using for example wcscpy_s:
    wcscpy_s(tMsg, msg);
    wcscpy_s(tFile, file);
    tHwnd = m_hWnd;
    m_cpVoice->GetRate(&tRate);
    m_cpVoice->GetVoice(&pToken);
    if (m_thread == NULL)
        m_thread = new std::thread(&CSpeech::WriteToWave, this);
}
And now... take a look at the WriteToWave() method:
void CSpeech::WriteToWave() {
    // Create a new ISpVoice that exists only in this new thread,
    // so we need our own COM initialization:
    CoInitialize(NULL);
    CComPtr<ISpVoice> cpVoice;
    HRESULT hr = cpVoice.CoCreateInstance(CLSID_SpVoice);
    // Now set up the voice from the globals captured in SpeakToWave:
    cpVoice->SetRate(tRate);        // rate from global tRate
    cpVoice->SetVoice(pToken);      // voice token from global pToken
    // Set the output format (any concrete format will do; 22 kHz
    // 16-bit mono here) and bind the stream using tFile, as in the
    // code listed in my question:
    CComPtr<ISpStream> cpStream;
    CSpStreamFormat fmt;
    fmt.AssignFormat(SPSF_22kHz16BitMono);
    SPBindToFile(tFile, SPFM_CREATE_ALWAYS, &cpStream, &fmt.FormatId(), fmt.WaveFormatExPtr());
    cpVoice->SetOutput(cpStream, FALSE);
    hr = cpVoice->Speak(tMsg, SPF_PURGEBEFORESPEAK, 0);
Now, because we did not use the SPF_ASYNC flag, the call blocks; but because we are on a separate thread, the main thread can continue. After the Speak() method finishes, the new thread can continue as follows:
    cpStream.Release();
    cpVoice.Release();
    CoUninitialize();
    if (SUCCEEDED(hr))  // Speak went OK
        ::PostMessage(tHwnd, TTS_WAV_SAVED_MSG, 0, 0);
    else
        ::PostMessage(tHwnd, TTS_WAV_ERROR_MSG, 0, 0);
}
(***) OK, using global variables is not quite cool :) but I was going fast. Maybe passing the parameters through the std::thread constructor (with std::ref / std::reference_wrapper where references are wanted) would be more elegant, as sketched below!
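For instance, a sketch of that variant (std::thread copies its arguments, so the globals and buffer copies go away; the two-argument WriteToWave signature here is hypothetical):

// Start the worker with its own copies of the strings:
m_thread = new std::thread(&CSpeech::WriteToWave, this,
                           std::wstring(file), std::wstring(msg));
// ...with a matching method signature:
// void CSpeech::WriteToWave(std::wstring file, std::wstring msg);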
Obviously, when you receive one of the TTS messages, you need to clean up the thread before the next call! This can be done with a CSpeech::CleanThread() method like this:
void CSpeech::CleanThread() {
    m_thread->join();  // I prefer to be sure the thread has finished!
    delete m_thread;
    m_thread = NULL;
}
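For completeness, the owner window's handler for the two custom messages might look like this sketch (m_speech is a hypothetical CSpeech member of the window class):

// Inside the window procedure's message switch:
case TTS_WAV_SAVED_MSG:
    m_speech->CleanThread();   // join and delete the worker
    // ...notify the user that the WAV file is ready...
    break;
case TTS_WAV_ERROR_MSG:
    m_speech->CleanThread();
    // ...report the error...
    break;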
What do you think about this solution? Too complex?

How to stop (or terminate) MPI_Recv after some particular time when there is a deadlock in MPI?

I am trying to detect deadlocks in MPI.
Is there any method by which we can return from a function like MPI_Recv after a particular time?
MPI_Recv is a blocking function and will just sit there until it receives the data it is waiting for, so if you are looking to have it time out and error when things lock up, then I don't think it's the one for you.
You could look into using MPI_Irecv, which is the non-blocking version. You could then emulate the blocking behaviour of MPI_Recv using MPI_Wait or MPI_Test.
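(For illustration, a blocking receive and its non-blocking equivalent; buf, count, source, and tag stand in for your own arguments:)

/* A plain blocking receive... */
MPI_Recv(buf, count, MPI_INT, source, tag, MPI_COMM_WORLD, MPI_STATUS_IGNORE);

/* ...and the same thing split into its non-blocking halves: */
MPI_Request request;
MPI_Irecv(buf, count, MPI_INT, source, tag, MPI_COMM_WORLD, &request);
MPI_Wait(&request, MPI_STATUS_IGNORE);  /* blocks here, just like MPI_Recv */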
If you use a combination of MPI_Irecv and MPI_Test, you can make a snippet that waits to receive for a specified length of time, then errors if it hasn't. Rough example:
MPI_Request request;
int gotData = 0;
MPI_Irecv(..., &request); //start a receive request, non-blocking
time_t start_time = time(NULL); //get start time
MPI_Test(&request, &gotData, MPI_STATUS_IGNORE); //test, have we got it yet?
//loop until we have received, or taken too long
while (!gotData && difftime(time(NULL), start_time) < TIMEOUT_TIME) {
    //wait a bit.
    MPI_Test(&request, &gotData, MPI_STATUS_IGNORE); //test again
}
//By now we either have received the data, or taken too long, so...
if (!gotData) {
    //we must have timed out
    MPI_Cancel(&request);
    MPI_Request_free(&request);
    //throw an error
}

How to create an actionscript Object in Haxe

I am creating an ActionScript video player in Haxe, and to avoid the asyncError I am trying to create a custom Object. How do I do this in Haxe?
The client property specifies the object on which callback methods are invoked. The default object is the NetStream object being created. If you set the client property to another object, callback methods will be invoked on that other object.
Here is my code.
public function new()
{
    super();
    trace("video");
    // initialize net stream
    nc = new NetConnection();
    nc.connect(null);
    ns = new NetStream(nc);
    buffer_time = 2;
    ns.bufferTime = buffer_time;
    // Add video to stage
    myVideo = new flash.media.Video(640, 360);
    addChild(myVideo);
    // Add callback method for listening for NetStream metadata
    client = new Dynamic();
    ns.client = client;
    client.onMetaData = metaDataHandler;
}
public function playVideo(url:String)
{
    urlName = new String(url);
    myVideo.attachNetStream(ns);
    ns.play(urlName);
    ns.addEventListener(NetStatusEvent.NET_STATUS, netstat);
}

function netstat(stats:NetStatusEvent)
{
    trace(stats.info.code);
}

function metaDataHandler(infoObject:Dynamic)
{
    myVideo.width = infoObject.width;
    myVideo.height = infoObject.height;
}
You should probably do:
var client:Dynamic = {};
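That is, wired into the constructor from the question (a sketch):

// An anonymous structure works as the NetStream client in Haxe:
var client:Dynamic = {};
client.onMetaData = metaDataHandler;
ns.client = client;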
Forget the client object; it isn't necessary for playing FLVs or for handling async errors. For that, just add a listener to the NetStream for AsyncErrorEvent.ASYNC_ERROR.
I suggest you add a listener to the NetConnection and the NetStream for NetStatusEvent.NET_STATUS, and then trace out the event.info.code value within the listener.
You should first see the string "NetConnection.Connect.Success" coming from the NetConnection; when you play your video through the NetStream, you should see "NetStream.Play.StreamNotFound" if there's a problem loading the FLV. Otherwise you should see "NetStream.Play.Start".
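A sketch of those listeners, using the nc and ns fields from the question (flash.events.AsyncErrorEvent and flash.events.NetStatusEvent are the relevant imports):

// Swallow async errors and trace every status code:
ns.addEventListener(AsyncErrorEvent.ASYNC_ERROR, function(e:AsyncErrorEvent) {
    trace("async error: " + e.error);
});
nc.addEventListener(NetStatusEvent.NET_STATUS, function(e:NetStatusEvent) {
    trace("NetConnection: " + e.info.code);
});
ns.addEventListener(NetStatusEvent.NET_STATUS, function(e:NetStatusEvent) {
    trace("NetStream: " + e.info.code);
});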
Unless you're progressively streaming your FLV, you may not see any video playing until the file has finished loading. If the movie file is long, this may explain why your program is running without errors but isn't playing the movie. There are small test FLV files available online that you might wish to use while you track the problem down.
(ActionScript's FLV playback API is bizarre and haXe's documentation is rudimentary, so you're rightfully frustrated.)
This may be useful... http://code.google.com/p/zpartan/source/browse/zpartan/media/
You can see it being used here:
http://code.google.com/p/jigsawx/

Default Audio Output - Getting Device Changed Notification? (CoreAudio, Mac OS X, AudioHardwareAddPropertyListener)

I am trying to write a listener using the CoreAudio API for when the default audio output changes (e.g. a headphone jack is plugged in). I found sample code, although a bit old and using deprecated functions (http://developer.apple.com/mac/library/samplecode/AudioDeviceNotify/Introduction/Intro.html), but it didn't work. I rewrote the code the 'correct' way using the AudioObjectAddPropertyListener method, but it still doesn't seem to work. When I plug in a headphone, the function that I registered is not triggered. I'm a bit at a loss here... I suspect the problem may lie somewhere else, but I can't figure out where...
The Listener Registration Code:
OSStatus err = noErr;
AudioObjectPropertyAddress audioDevicesAddress = {
    kAudioHardwarePropertyDefaultOutputDevice,
    kAudioObjectPropertyScopeGlobal,
    kAudioObjectPropertyElementMaster
};
err = AudioObjectAddPropertyListener(kAudioObjectSystemObject, &audioDevicesAddress, coreaudio_property_listener, NULL);
if (err) trace("error on AudioObjectAddPropertyListener");
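(For reference, my listener has the standard AudioObjectPropertyListenerProc shape; a sketch of it:)

static OSStatus coreaudio_property_listener(AudioObjectID inObjectID,
                                            UInt32 inNumberAddresses,
                                            const AudioObjectPropertyAddress inAddresses[],
                                            void *inClientData)
{
    for (UInt32 i = 0; i < inNumberAddresses; i++) {
        if (inAddresses[i].mSelector == kAudioHardwarePropertyDefaultOutputDevice)
            printf("default output device changed\n");
    }
    return noErr;
}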
After a search on SourceForge for projects that use the CoreAudio API, I found the RtAudio project, and more importantly these lines:
// This is a largely undocumented but absolutely necessary
// requirement starting with OS-X 10.6. If not called, queries and
// updates to various audio device properties are not handled
// correctly.
CFRunLoopRef theRunLoop = NULL;
AudioObjectPropertyAddress property = { kAudioHardwarePropertyRunLoop,
kAudioObjectPropertyScopeGlobal,
kAudioObjectPropertyElementMaster };
OSStatus result = AudioObjectSetPropertyData( kAudioObjectSystemObject, &property, 0, NULL, sizeof(CFRunLoopRef), &theRunLoop);
if ( result != noErr ) {
    errorText_ = "RtApiCore::RtApiCore: error setting run loop property!";
    error( RtError::WARNING );
}
After adding this code I didn't even need to register a listener myself.
Try CFRunLoopRun(); it has the same effect, i.e. making sure the event loop that calls your listener is running.
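In a plain command-line tool, for example, that could look like this sketch (reusing the property address and listener names from the question):

// Register the listener, then run the loop so CoreAudio has a
// place to deliver the notification:
AudioObjectAddPropertyListener(kAudioObjectSystemObject, &audioDevicesAddress,
                               coreaudio_property_listener, NULL);
CFRunLoopRun();  // blocks; the listener fires on this run loop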
