Control Actions-on-Google Media-Response (e.g. start at minute 3) - dialogflow-es

I want to develop a Google Action (ideally using Dialogflow), but it needs some features for which I couldn't find a solution, and I'm not sure whether they are even possible.
My use cases:
The Google Action starts an mp3. The user stops and exits the Action, and when they start the Action again, I would like to resume the mp3.
But I couldn't find a way to determine the offset at which the user stopped the mp3.
And even if I had this offset, I couldn't find a way to tell Google Assistant to play the mp3 starting at, e.g., minute 51.
I would be really surprised if the Google Action possibilities were so extremely restricted.
Can someone confirm that these use cases are not possible, or can someone give me a hint how to do it?
I only found this, which is restricted to starting an mp3 from the beginning:
https://developers.google.com/actions/assistant/responses#media_responses
Kind Regards
Stefan

To start an mp3 file at a certain point, you can try the SSML audio tag and its clipBegin attribute.
clipBegin - A TimeDesignation that is the offset from the audio source's beginning to start playback from. If this value is greater than or equal to the audio source's actual duration, then no audio is inserted.
https://developers.google.com/actions/reference/ssml
To use this, your mp3 file has to be hosted using HTTPS.
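A minimal sketch, assuming the mp3 is hosted over HTTPS (the URL below is a placeholder, and the exact TimeDesignation syntax is described in the SSML reference linked above; 3060s corresponds to minute 51 from the question):

<speak>
  <audio src="https://example.com/episode.mp3" clipBegin="3060s">
    <!-- Fallback text, spoken if the audio cannot be loaded -->
    Sorry, the audio could not be loaded.
  </audio>
</speak>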
Hope that this helps.

You could use Conversational Actions (instead of Dialogflow), where media responses allow using a start_offset:
....
"content": {
  "media": {
    "start_offset": "2.12345s",
    ...
For more details see
https://developers.google.com/assistant/conversational/prompts-media#MediaResponseProperties
Conversational Actions also seem to be the "newest" technology for Google Actions, or at least the most recently released.
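For illustration, a fuller media prompt sketched from that documentation might look like the following; the url, name and description are placeholders, and start_offset is a duration string, so minute 51 from the question would be "3060s". Check the linked reference for the exact set of supported fields.

{
  "content": {
    "media": {
      "media_type": "AUDIO",
      "start_offset": "3060s",
      "media_objects": [
        {
          "name": "Episode title (placeholder)",
          "description": "Episode description (placeholder)",
          "url": "https://example.com/episode.mp3"
        }
      ]
    }
  }
}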

Related

Add audio in dialog (Bixby)

This is my first question, new and fresh, hello guys.
As the title mentions, is there any workaround or way to add audio inside a dialog-speech-template? Since it doesn't support mp3, only wav, I found it hard to implement.
The audio I want comes from an API, so it's not possible for me to download the mp3 file and convert it ahead of time (the audio may change).
Is there a programmatic way to convert the mp3 audio to wav? I am pretty new to Bixby; I hope the elders here can help.
Unfortunately, Bixby SSML only supports certain wav formats. Please refer to SSML#AudioClip for details. There are also instructions there on how to convert audio using the ffmpeg tool.
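For reference, a conversion with ffmpeg typically looks something like the line below; the sample rate, channel count and bit depth here are only placeholders, so check the AudioClip documentation for the exact wav parameters Bixby expects:

ffmpeg -i input.mp3 -ar 16000 -ac 1 -c:a pcm_s16le output.wav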
To support the mp3 format, you can raise a Feature Request in our community. The forum is open to other Bixby developers who can upvote it, leading to more visibility within the community and with the Product Management team.

Alexa Skill Static Audio File implementation

I have been playing around with this project from git, and so far so good.
https://github.com/bespoken/streamer
I would like to enhance it to play a long-form static audio file when the user asks for it. For example, if the user says "Ask Streamer to play the National Anthem", I would like to play just that file. Does anyone have a good idea of the best way to implement this simple thing?
I tried a few approaches and I am having trouble getting the end result. For one, I do not want the static file's state to be saved in DynamoDB, but I still want the podcast information to be saved.
I added an intent for 'Anthem', and sample utterances for that intent. In the constants.js file, I added a new "STATIC_MODE" and tried to replicate how PLAY_MODE is implemented throughout.
Here is the issue I am running into: whenever I stop the Anthem file from playing and later invoke the podcast player, it starts playing the Anthem instead of the podcasts. I tried commenting out the saveState call in audioEventHandlers.js for the STATIC_MODE handler, yet when I try to play a podcast, it still plays the Anthem.
Any help would be appreciated!
This is probably bad, but I have never coded in JavaScript; I just tried to follow the git project and enhance the functionality to my liking.
I created the Streamer project that you reference. In the interest of providing a simpler example on how to use the Alexa AudioPlayer, I also created this project:
https://github.com/bespoken/super-simple-audio-player
I believe it does exactly what you requested: it simply plays a single, static audio file. I created it because I wanted to have a less complicated example to show people how the AudioPlayer works. Hope you find it helpful!
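In case it helps to see the moving part in isolation: whichever SDK you use, playing a static file ultimately comes down to returning an AudioPlayer.Play directive from the intent handler, roughly like this (the URL and token are placeholders; Alexa requires the audio to be hosted over HTTPS):

{
  "type": "AudioPlayer.Play",
  "playBehavior": "REPLACE_ALL",
  "audioItem": {
    "stream": {
      "url": "https://example.com/national-anthem.mp3",
      "token": "anthem-token",
      "offsetInMilliseconds": 0
    }
  }
}

Using REPLACE_ALL means the anthem replaces whatever is currently playing or queued (e.g. a podcast stream) rather than being enqueued behind it.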

Searching for Old YouTube Videos

I'm trying to find all of the YouTube videos created by IGN's channel during the month of February 2014. IGN currently has 118,000+ videos uploaded, so going back through all of them is not possible. I previously used the following Google search string and a custom date range to find them:
site:youtube.com ignentertainment
This doesn't work anymore for some reason. I'd be much obliged if anyone has any ideas of how to do this. I have no idea what an API is, but if there's a VERY simple way of using that to do what I want that can be explained briefly, I'm willing to go that route.
Thanks.
You can use Google to limit the period that it fetches search hits from.
Start by searching for "site:youtube.com ignentertainment" or simply "ignentertainment", then click the Tools button; you now get a new bar between the search bar and the results that can limit the time period, among other things.
Open the time-related options, choose to enter a custom date range, and you're all done.
Edit: oh, and the query site:youtube.com ignentertainment sure worked for me.
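Since you mentioned being open to the API route: the YouTube Data API v3 has a search endpoint that can filter a channel's videos by publish date. A rough request would look like the line below, where CHANNEL_ID is IGN's channel ID and YOUR_API_KEY is a key you create in the Google developer console (both are placeholders here):

https://www.googleapis.com/youtube/v3/search?part=snippet&channelId=CHANNEL_ID&type=video&publishedAfter=2014-02-01T00:00:00Z&publishedBefore=2014-03-01T00:00:00Z&maxResults=50&key=YOUR_API_KEY

Results are paginated, so you would follow the nextPageToken in each response to collect everything from that month.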

How to invoke "next track" and "star current track" from my Spotify App?

I just started writing a little Spotify App and can't figure out how to invoke the two functions next/star from JavaScript. I just need this simple functionality: from within my App (JavaScript), call a method that skips the current track and plays the next one (if there is one), OR call a method that "stars" (is that really a verb?) the current song.
Is this API DOC the only resource for building my own App? Thanks in advance for any hints on this!
UPDATE: Just found out how to SKIP: sp.trackPlayer.skipToNextTrack();
Unfortunately, how to "star" a track remains unknown.
UPDATE 2: GOT IT! : models.library.starredPlaylist.add(models.player.track); – yep that makes sense.
The correct way to star a track is indeed the function you wrote:
models.library.starredPlaylist.add(models.player.track);
trackPlayer is not a supported object and shouldn't be accessed by developers, since it's not versioned properly. This means that it may break in the future when we make updates to the platform bridge.
We recommend only using the documented classes on our developer website.
https://developer.spotify.com/technologies/apps/docs/beta/
The correct way to skip to the next track is to use:
models.player.next()
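Putting both calls together in one place; note that the require line below is how the models module was typically loaded in the legacy Apps API and may differ depending on the API version you're targeting:

// Legacy Spotify Apps API (import path is an assumption; adjust to your API version)
var sp = getSpotifyApi(1);
var models = sp.require('sp://import/scripts/api/models');

// Skip to the next track
models.player.next();

// Star the currently playing track
models.library.starredPlaylist.add(models.player.track);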

Seeking through a streamed MP3 file with HTML5 <audio> tag

Hopefully someone can help me out with this.
I'm playing around with a node.js server that streams audio to a client, and I want to create an HTML5 player. Right now, I'm streaming the audio from node using chunked encoding, and if you go directly to the URL, it works great.
What I'd like to do is embed this using the HTML5 <audio> tag, like so:
<audio src="http://server/stream?file=123">
where /stream is the endpoint for the node server to stream the MP3. The HTML5 player loads fine in Safari and Chrome, but it doesn't allow me to seek, and Safari even says it's a "Live Broadcast". In the headers of /stream, I include the file size and file type, and the response gets ended properly.
Any thoughts on how I could get around this? I certainly could just send the whole file at once, but then the player would wait until the whole thing is downloaded--I'd rather stream it.
Make sure the server accepts Range requests; you can check whether Accept-Ranges is in the response headers. In jPlayer this is a common issue in WebKit (particularly Chrome) browsers when it comes to progress and seeking functionality.
You might not be using jPlayer, but the Server Response information on the official website may be of some use.
http://www.jplayer.org/latest/developer-guide/#jPlayer-server-response
I had the same problem.
It's necessary to set some headers on the media file response, for example:
Accept-Ranges: bytes
Content-Length: 7437847
Content-Range: bytes 0-7437846/7437847
Then the audio tag will be able to seek.
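A minimal Node.js sketch of this, assuming the mp3 is available as a local file (the path and port are placeholders):

const http = require('http');
const fs = require('fs');

http.createServer((req, res) => {
  const filePath = '/path/to/audio.mp3'; // placeholder: wherever your mp3 lives
  const stat = fs.statSync(filePath);
  const range = req.headers.range;

  if (range) {
    // e.g. "bytes=1000-" means: serve from byte 1000 to the end of the file
    const [startStr, endStr] = range.replace(/bytes=/, '').split('-');
    const start = parseInt(startStr, 10);
    const end = endStr ? parseInt(endStr, 10) : stat.size - 1;

    res.writeHead(206, {
      'Content-Type': 'audio/mpeg',
      'Accept-Ranges': 'bytes',
      'Content-Range': 'bytes ' + start + '-' + end + '/' + stat.size,
      'Content-Length': end - start + 1
    });
    fs.createReadStream(filePath, { start: start, end: end }).pipe(res);
  } else {
    // No Range header: serve the whole file but advertise that ranges are accepted
    res.writeHead(200, {
      'Content-Type': 'audio/mpeg',
      'Accept-Ranges': 'bytes',
      'Content-Length': stat.size
    });
    fs.createReadStream(filePath).pipe(res);
  }
}).listen(8080);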
Have a look here: http://www.scottandrew.com/pub/html5audioplayer/. I used this and it plays while it is downloading the file. It waits a little bit for the buffer, but then plays. I never tried seeking, though, but I would start by trying to set the "aud.currentTime" in his code, if that can be done.
Good luck
Are you sending an Accept-Ranges (RFC 2616, Section 14.5) response header?
From what I understand, you want the player to allow the user to jump to parts of the audio/video that haven't buffered yet, something like what the Vimeo / YouTube players do. To be honest, I'm not sure if this is possible; I've looked at some examples of HTML5 media elements and they just didn't allow me to seek to unbuffered parts :(
If you want to seek within the buffered part, then it's not a problem. In fact, you're creating a problem for yourself here, as far as I understand: you want to stream the file, and this makes the player think you have some kind of live stream out there. If you just served the file directly, you wouldn't have this issue, because the player is able to start playing before it has loaded the whole file. This works fine with both audio and video elements, and I've confirmed this behaviour in both Chrome and FF :)
Hope this helps!
Perhaps this HTML5 audio player example will explain and demonstrate the new element and its .load, .play, .currentTime, etc. methods and properties.
I use an array of audio elements and can set the currentTime position, of course.
You can also use event handlers (e.g. 'loadeddata') to wait before allowing seeking.
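For example, a small sketch along those lines; the stream URL is the one from the question, and the 180-second offset is arbitrary:

var audio = new Audio('http://server/stream?file=123');
audio.addEventListener('loadeddata', function () {
  // Once enough data is available, jump to minute 3 and start playback
  audio.currentTime = 180;
  audio.play();
});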
ping and have fun with html5 :)
