My Action streams audio to a Google Home via a Media Response returned from my webhook. This works well, but not even changing the volume is possible while streaming!
Tested on the simulator and on a Google Home Mini. Only my Action's own intents are recognized.
My expectation was that the standard Google commands (volume up, volume down, asking the time...) would still work even while a running Action is streaming audio.
But Google does not understand the commands and cancels my Action after four unrecognized attempts.
Is this normal behavior? Hopefully not; it would be a big showstopper...
Users can now use voice commands to control the volume while your Action is using the Media Response.
Does any of you know a way to get the audio stream of a music platform and plug it into the Web Audio API?
I am building a music visualizer based on the Web Audio API. It currently reads sound from my computer's microphone and renders a real-time visualization. If I play music loud enough, my visualization works!
But now I'd like to move on and read only the sound coming from my computer, so that the visualization responds only to the music and not to other sounds such as people chatting.
I know I can buffer an MP3 file in that API and it would work perfectly. But in 2020, streaming music is very common, via Deezer, Spotify, SoundCloud, etc.
I know they all have APIs, but they usually offer an SDK where you cannot really do much more than "play" music; there is no easy access to the raw audio data. Maybe I am wrong, which is why I am asking for your help.
Thanks
The way to stream music into Web Audio is to use a MediaElementAudioSourceNode or a MediaStreamAudioSourceNode. However, these nodes will output silence (all zeros) unless you're allowed to access the data. This means you have to set the crossOrigin property correctly on your end, and the server also has to allow access through CORS.
A Google search will help with setting up CORS, but many sites won't allow access unless you have the right permissions; in that case you are out of luck.
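For what it's worth, here is a minimal sketch of that approach, assuming a placeholder stream URL whose server actually sends the CORS headers:

```typescript
// Minimal sketch: pipe a remote audio stream into an AnalyserNode for visualization.
// `streamUrl` is a placeholder; the server must send Access-Control-Allow-Origin headers.
const streamUrl = 'https://example.com/stream.mp3';

const audioCtx = new AudioContext();
const audioEl = new Audio();
audioEl.crossOrigin = 'anonymous'; // without this (and server-side CORS) the node outputs silence
audioEl.src = streamUrl;

const source = audioCtx.createMediaElementSource(audioEl);
const analyser = audioCtx.createAnalyser();
analyser.fftSize = 2048;

source.connect(analyser);
analyser.connect(audioCtx.destination); // keep the music audible while analysing

const bins = new Uint8Array(analyser.frequencyBinCount);
function draw(): void {
  analyser.getByteFrequencyData(bins); // feed these values into your visualizer
  requestAnimationFrame(draw);
}

// Most browsers require a user gesture before the AudioContext can start.
audioEl.play().then(() => { audioCtx.resume(); draw(); });
```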
I found a "no-code" workaround. At least on Ubuntu 18.04, I am able to tell Firefox to use my speakers as the "microphone" input.
You just have to select the right "mic" in the list when your browser asks for microphone permission.
That solution is very convenient since I do not need to write platform-specific binding code to access the audio stream.
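If you want to pair that trick with code: once the monitor/loopback device is selected (device labels such as "Monitor of ..." are platform-specific, so treat this as a sketch under that assumption), the stream plugs into Web Audio like any other microphone:

```typescript
// Sketch: treat the selected "microphone" (which can be the monitor of the speaker
// output on PulseAudio systems) as the input for the visualizer.
async function visualizeSystemAudio(): Promise<void> {
  // Labels are only populated after permission has been granted at least once, and
  // "Monitor of ..." naming is platform-specific; fall back to the default input otherwise.
  const devices = await navigator.mediaDevices.enumerateDevices();
  const monitor = devices.find(
    d => d.kind === 'audioinput' && d.label.toLowerCase().includes('monitor')
  );

  const stream = await navigator.mediaDevices.getUserMedia({
    audio: monitor ? { deviceId: { exact: monitor.deviceId } } : true,
  });

  const audioCtx = new AudioContext();
  const analyser = audioCtx.createAnalyser();
  audioCtx.createMediaStreamSource(stream).connect(analyser);
  // Intentionally not connected to audioCtx.destination, to avoid a feedback loop.

  const bins = new Uint8Array(analyser.frequencyBinCount);
  setInterval(() => {
    analyser.getByteFrequencyData(bins); // drive the visualization from these values
  }, 50);
}
```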
I'm trying to migrate a music-player Alexa Skill to Google Home, but I cannot find a pre-built music playback service (in Actions on Google or Dialogflow)... I want to stream music from my own music server (not from Spotify or Google Play Music).
I found a couple of examples using buildRichResponse and/or MediaObject, but these are not exactly a playback service.
Does anyone know whether Google Home has a multimedia playback service, or an easy way to build one?
Thx
The Assistant's Media response is the nearest parallel to Alexa's AudioPlayer, although there are clearly differences between the two:
Alexa's playback is done outside the context of a session / conversation, so once you start playback, only the playback controls are available. The Assistant's Media controls are part of a conversation, so you can fully handle anything the user might say.
One consequence of this is that Alexa treats the playback as the result of the skill, while the Assistant treats it as part of the Action.
Google only sends an event when media playback has finished, and doesn't give any indication of why it has finished. Alexa reports more of the controls and has more events describing the state of the playback.
This makes it fairly easy to "queue up" the next audio for Alexa, but that brings additional complexity for how to handle when the queue ends up being wrong at the last moment. The Assistant doesn't have any way to queue the next audio, so there ends up being a gap between the audio ending and the next beginning while the event is handled on the server.
Although the approaches are slightly different, both seem to offer a basic long-audio playback service.
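As a rough sketch of the Media response side (not a full playback service), a Dialogflow webhook using the actions-on-google Node.js client library could look roughly like this; the intent names, URLs, and titles are placeholders:

```typescript
import { dialogflow, MediaObject, Suggestions, Image } from 'actions-on-google';

const app = dialogflow();

app.intent('play_music', (conv) => {
  // Long-form audio needs a surface with the MEDIA_RESPONSE_AUDIO capability.
  if (!conv.surface.capabilities.has('actions.capability.MEDIA_RESPONSE_AUDIO')) {
    conv.close('Sorry, this device cannot play audio.');
    return;
  }
  conv.ask('Here is your stream.');
  conv.ask(new MediaObject({
    name: 'My station',                          // placeholder title
    url: 'https://example.com/audio/stream.mp3', // placeholder: your own music server
    description: 'Self-hosted audio',
    icon: new Image({ url: 'https://example.com/icon.png', alt: 'Station icon' }),
  }));
  conv.ask(new Suggestions(['Stop'])); // chips are required when the media response is not final
});

// Map a Dialogflow intent to the actions_intent_MEDIA_STATUS event to learn when
// playback has finished; this is the only playback event the Assistant reports.
app.intent('media_status', (conv) => {
  conv.ask('The stream has finished. Want to hear it again?');
});

export { app }; // wire `app` into your HTTPS endpoint (e.g. a Cloud Function)
```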
This doesn't sound like what you are trying to do, but if you are looking for something slightly more static, you can also look at the content actions that Google supports.
See https://github.com/Limag/aiplayer/ for an example that plays self-hosted MP3s. Unfortunately, even changing the volume is not recognized, and there seems to be no way to add this.
If you use Google Play Music, you can upload MP3s with a tiny helper application provided by Google. Google Play Music works well but has some other disadvantages; e.g. it is unusable for audiobooks, because playlists always start from the beginning.
I'm trying to understand the best possible ways (from a technical and a user-experience point of view) to test the user's camera, microphone, and speakers. Or does it really come down to letting the user select a device for each and testing them manually, i.e.:
- I see myself in the camera, so it's working
- My mic works because there's a visual indicator that tells me it's picking up sound
- My speakers work because there's a visual indicator that moves when I talk
Thanks!
- Jess
Assuming your app is network connected, it's often the case that people supply some sort of 'test call' functionality.
This lets your user make a call to a server which verifies that audio and video are received, and sends both audio and video back to the user so they can confirm that everything is getting through correctly.
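If you also want a purely local self-test along the lines the question describes, here is a hedged sketch; the element IDs ('preview', 'mic-level') are placeholders for your own UI:

```typescript
// Sketch of a local device self-test: camera preview plus a microphone level meter.
async function runDeviceTest(): Promise<void> {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });

  // Camera check: the user sees themselves in the preview element.
  const video = document.getElementById('preview') as HTMLVideoElement;
  video.srcObject = stream;
  await video.play();

  // Microphone check: drive a level meter from an AnalyserNode.
  const audioCtx = new AudioContext();
  const analyser = audioCtx.createAnalyser();
  audioCtx.createMediaStreamSource(stream).connect(analyser);
  const samples = new Uint8Array(analyser.fftSize);
  const meter = document.getElementById('mic-level') as HTMLProgressElement;
  setInterval(() => {
    analyser.getByteTimeDomainData(samples);
    let peak = 0;
    for (const s of samples) peak = Math.max(peak, Math.abs(s - 128));
    meter.value = peak / 128; // 0..1, matches the default <progress> max
  }, 100);

  // Speaker check: play a short tone and ask the user to confirm they heard it.
  const osc = audioCtx.createOscillator();
  osc.connect(audioCtx.destination);
  osc.start();
  osc.stop(audioCtx.currentTime + 0.5);
}
```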
I have been searching for the best practice for stopping playback on the local device when a Chromecast is selected. Right now I connect to an audio stream and it starts playing on the Chromecast fine, but it also keeps playing on my phone. I had hoped there was some kind of automatic switch that was supposed to occur. Is it up to me to manage all of this? If so, what are the best practices for starting/resuming playback when switching back and forth between the Chromecast and the device? It is a live stream, so there is no way to pause and pick up where it left off.
Are there certain callbacks that I need to watch for to make the switch?
Yes, it is your responsibility to manage the behavior of your app. Our UX Design Checklist outlines the flow we recommend; for example, when you start streaming to a Cast device, you stop the local playback. The details of how you stop playback locally depend on your application, but what you should use is the set of callbacks that the Android Cast SDK provides so you can learn about the success of your cast control commands and about state changes that happen on the receiver. These callbacks can tell you whether your application launch was successful, whether the media is playing or paused, and when the metadata for the media has changed. Look at our SDK documentation to see which ones are appropriate for your case. We also have a number of sample projects that do most of these tasks.
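The Android specifics live in the SDK docs, but as a hedged illustration of the callback pattern, this is roughly what it looks like with the Cast Web Sender (CAF) framework; the Android SDK exposes analogous listeners (e.g. SessionManagerListener), and the local-player calls below are placeholders:

```typescript
// Hedged sketch using the Cast Web Sender (CAF) framework; the Android Cast SDK
// offers analogous callbacks. `localPlayer` and `loadLiveStreamOnReceiver` are placeholders.
declare const cast: any;
declare const localPlayer: { play(): void; pause(): void };
declare function loadLiveStreamOnReceiver(): void;

const context = cast.framework.CastContext.getInstance();

// Stop local playback when a cast session starts; rejoin the live edge when it ends.
context.addEventListener(
  cast.framework.CastContextEventType.SESSION_STATE_CHANGED,
  (event: any) => {
    switch (event.sessionState) {
      case cast.framework.SessionState.SESSION_STARTED:
      case cast.framework.SessionState.SESSION_RESUMED:
        localPlayer.pause();          // stop playing on the phone
        loadLiveStreamOnReceiver();   // hand the stream URL to the receiver
        break;
      case cast.framework.SessionState.SESSION_ENDED:
        localPlayer.play();           // live stream: just resume at the live edge
        break;
    }
  }
);

// Observe receiver-side state changes (playing/paused, metadata, etc.).
const remotePlayer = new cast.framework.RemotePlayer();
const controller = new cast.framework.RemotePlayerController(remotePlayer);
controller.addEventListener(
  cast.framework.RemotePlayerEventType.IS_PAUSED_CHANGED,
  () => console.log('Receiver paused:', remotePlayer.isPaused)
);
```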
I've spent two days on this and have gotten nowhere. I'm trying to use [MPMusicPlayerController applicationMusicPlayer] to play audio chosen from the user's iPod library and have it run in the background as well as support remote events. Now getting the music actually playing is the easy part. Get the instance, pick the songs, assign the music queue and play. Done and done. BUT... a) I can't get it to play in the background, and b) even when in the foreground I can't get the remote control events to work at all!
And before you ask: yes, I have set the plist entries, the audio session category, the call to say I'm interested in getting remote events, and a first responder to listen for them. So please know that, yes, I've read every single document on the subject that I could find* (*a task I blame Apple for, for not being clear at all on this topic, nor having ANY example code for it!) and I've watched every one of the WWDC videos relating to it (even freezing the screen to copy the code exactly from their example...), so unless I've missed something not in this list, replying with any of those answers is not going to help.
One more thing... I am explicitly talking about using the MPMusicPlayerController which according to the docs, never uses an application session. It always uses the system session. (Maybe that in itself answers my question, but the docs don't clearly say that so I'm not sure, hence this question.)
That said, after two days, my thoughts are this:
When using the MPMusicPlayerController, regardless of what methods you call or what plist entries you set, your app will never run in the background. Period. If you use the ipodMusicPlayer instance, the music keeps playing, but that's because it's the iPod that's playing, not your app. If you use the applicationMusicPlayer instance instead, when going to the background your music stops. In both cases, your app is suspended.
Regardless of your using the ipodMusicPlayer or applicationMusicPlayer instances, all remote events go to the iPod application itself, not yours, even if you've explicitly asked for them. If you are using the applicationMusicPlayer instance and you use the remote to select 'Play', the iPod app receives the command so your audio ducks out and is interrupted and playback begins in the iPod app. If you've chosen the ipodMusicPlayer instead, then of course it doesn't matter as you have explicitly said you're basically just interested in remotely controlling the iPod app which again, is what actually receives the remote events.
The icon in the quick-switch controls at the bottom never changes to your app's icon because again, your app is never actually set up to receive the events. The iPod application is, which is why its icon does appear there.
So what I want to know is... am I wrong here? Has anyone successfully been able to use MPMusicPlayerController and intercept the remote events? While I'd prefer to use the applicationMusicPlayer with background music support so I don't muck with the user's iPod, the bigger thing is remote-control notifications, meaning if I have to use the ipodMusicPlayer and keep my app in the foreground to intercept those messages, so be it. It's ugly that way, but at least it's something.
Code examples, or at least explicit steps against one of the built-in app templates would be GREATLY appreciated. (Don't even need the implementation... just the steps. Hopefully that will appease the inevitable 'It's still under NDA' thing that people keep answering questions with.)
Mark
I solved it. The info is in my other question over here...
Stack Overflow: Play iPod music while receiving remote control events
...but the short version is that you have to use AVPlayer (but not AVAudioPlayer; no idea why that is!) with the asset URL from the MPMediaItem you got from the library, then set the audio session's category to Playback (do NOT enable mixing!) and add the appropriate keys to your Info.plist file telling the OS your app wants to support background audio.
This lets you play items from your iPod library (except Audible.com files, for some reason!) and still get remote events. Granted, you have to do more work, and since this is your own audio player it is separate from, and will interrupt, the iPod app (which may or may not be desirable; and again, don't enable mixing or the iPod app will hijack the remote-control events), but those are the breaks!
For anyone who wants to know: I found that to get audio playing in the background, you have to set the audio session's category to Playback, and then background audio works just fine. If you also want to play your own sounds at the same time, you have to mark the category as mixable. That solved the background-music part. But what I've found is that any time the iPod is playing, it doesn't seem possible for you to receive remote-control notifications.
Here's the updated thread...
How can you play music from the iPod app while still receiving remote control events in your app?
M