One of my favorite Bixby commands is to play a song. Bixby then plays the music using the 'Melon' app, which I have set as the default player. Melon is a music streaming service like Spotify, so it streams songs rather than downloading music files.
Recently I've been working on a music recommendation app, and I want to build a Bixby capsule for it. Can I use this 'play a song' command in capsule development? For example, if I know the title of a song and the artist's name, can my capsule play the song using my phone's default player?
You can definitely do this!
This is one of the use cases for which the app-launch functionality (documentation) exists.
Please review the design guide (documentation) on when it is and isn't appropriate to use this functionality, to ensure that your users have the best experience with your capsule.
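For illustration only, here is a rough sketch of what the view side of this could look like, written in Bixby's view modeling language. The concept name, message, and payload URI are all hypothetical placeholders, not something from your capsule; the app-launch documentation covers the exact keys your case needs.

```
// Sketch of a result-view that hands off playback to another app.
// PlaySongResult and the melon:// URI are hypothetical placeholders.
result-view {
  match: PlaySongResult (this)
  message ("Playing it in your music app")
  app-launch {
    // Deep link that the default player app would handle
    payload-uri ("melon://play?title=Example&artist=Example")
  }
}
```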
Related
I am making a game in which I want to command the AI using spoken words.
For example, I might say 'go', and the AI bot moves a certain distance.
The problem is that I've been looking for an asset, and no provider will guarantee that this is possible.
What are the difficulties involved?
I am a programmer, so if someone suggests a way to handle it, I can implement it.
Should I keep a microphone listener on all the time, read the audio, and then pass it to some external SDK that can convert my voice to text?
These are the asset providers I have contacted:
https://www.assetstore.unity3d.com/en/#!/content/73036
https://www.assetstore.unity3d.com/en/#!/content/45168
https://www.assetstore.unity3d.com/en/#!/content/47520
and a few more.
If someone can explain the steps I need to follow, I will certainly try it.
I am currently using this external API for pretty much the same thing: https://api.ai/
It comes with a Unity SDK that works quite well:
https://github.com/api-ai/api-ai-unity-sample#apiai-unity-plugin
You have to connect an audio source to the SDK and tell it to start listening. It will then convert your voice audio to text, and can even detect pre-defined intents from the voice audio/text.
You can find all the steps for integrating the Unity plugin in the api.ai Unity SDK documentation on GitHub; a sketch of the underlying request is below.
EDIT: It's free too btw :)
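Under the hood, the plugin talks to api.ai's HTTP interface. If it helps to see that step outside Unity, here is a minimal Node.js sketch of the text-query call; the access token, session ID, and action name are placeholders:

```javascript
// Minimal sketch of an api.ai /query request (Node.js).
// YOUR_CLIENT_ACCESS_TOKEN and the action names are placeholders.
const https = require('https');

const body = JSON.stringify({
  query: 'go',          // text recognized from the player's voice
  lang: 'en',
  sessionId: 'player-1' // any stable id for the conversation
});

const req = https.request({
  hostname: 'api.api.ai',
  path: '/v1/query?v=20150910',
  method: 'POST',
  headers: {
    'Authorization': 'Bearer YOUR_CLIENT_ACCESS_TOKEN',
    'Content-Type': 'application/json',
    'Content-Length': Buffer.byteLength(body)
  }
}, (res) => {
  let data = '';
  res.on('data', (chunk) => { data += chunk; });
  res.on('end', () => {
    const result = JSON.parse(data).result;
    // result.action holds the matched intent's action, e.g. "move.forward"
    console.log(result.action, result.parameters);
  });
});

req.write(body);
req.end();
```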
If you want to recognize speech offline, without sending data to a server, try this plugin:
https://github.com/dimixar/unity3DPocketSphinx-android-lib
It uses CMUSphinx, an open-source speech recognition engine.
I'm new to Actions on Google and am currently doing R&D. I've created an audio skill on Alexa, and now I want the same for Google Assistant. But I have a few questions:
1- Can we return audio in a response? My audio files are about an hour long, so can we play them in our action? Alexa has an audio player; is there anything like that in Assistant?
2- I didn't find any SDK, but developers are talking about one, so there must be something. Kindly share the link.
Thanks in anticipation.
Update:
I believe the SDK is actions-on-google. I haven't explored it yet, but it's the SDK I found for creating actions with Node.js.
Link: actions-on-google
Actions support SSML, which provides playback of audio files: https://developers.google.com/actions/reference/ssml#support_for_ssml_elements
At the moment there is a 120-second maximum duration for all supported audio formats, but if your files are longer you can break them up and play the pieces in sequence.
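For example, an hour-long recording could be served as consecutive <audio> clips, each kept under the 120-second cap. A sketch of what such a response string might look like (the clip URLs are placeholders):

```javascript
// Sketch of an SSML response that plays pre-split clips in sequence.
// The clip URLs are placeholders; each file must be under 120 seconds.
const ssml = `<speak>
  <audio src="https://example.com/audio/part-001.mp3"></audio>
  <audio src="https://example.com/audio/part-002.mp3"></audio>
  <audio src="https://example.com/audio/part-003.mp3"></audio>
</speak>`;
```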
If you have your own NLU, you can use the Actions SDK. If you don't have your own NLU, then you can use API.AI to create an action.
A node.js client library is available for either of these options: https://github.com/actions-on-google/actions-on-google-nodejs
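For illustration, here is a minimal sketch of an API.AI webhook built with that client library. The intent action name ('play.audio'), clip URL, and port are placeholders, not part of the library; check the repository's README for the exact setup.

```javascript
// Sketch of an API.AI webhook using the actions-on-google client library.
// 'play.audio' and the clip URL are placeholders.
const express = require('express');
const bodyParser = require('body-parser');
const ApiAiApp = require('actions-on-google').ApiAiApp;

const server = express();
server.use(bodyParser.json());

server.post('/webhook', (request, response) => {
  const app = new ApiAiApp({ request, response });

  function playAudio(app) {
    // ask() keeps the conversation open; tell() would end it.
    app.ask('<speak><audio src="https://example.com/audio/part-001.mp3">' +
            '</audio>What would you like to hear next?</speak>');
  }

  const actionMap = new Map();
  actionMap.set('play.audio', playAudio);
  app.handleRequest(actionMap);
});

server.listen(8080);
```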
For any other developer questions, you should look at the actions documentation: https://developers.google.com/actions/develop/conversation
I have developed an iOS app that plays music from the user's music library simultaneously with audio files that are included in the app. I have hired a developer to make several changes to the app, one of which lets the user use their Spotify playlists instead of their "Music" playlists if they have a Spotify account. He tells me it is impossible to implement this functionality with the Spotify API.
Can someone please help me, as I really need this functionality to work?
Thank you.
This isn't possible with the Web API. However, take a look at CocoaLibSpotify, a Mac and iOS library that allows you to access a user's Spotify playlists and play tracks.
I'm using the Spotify API 11.1.60 for iOS and have successfully created an app that can download and play songs from Spotify, download and show cover art, and execute searches. But I can't find a way to crossfade songs; I can only play one song at a time.
Does anybody know how to crossfade songs?
I did not implement crossfade, but I think the only solution is to buffer your current song so that you can crossfade.
You have to update your music_delivery callback and the code you use to play the sound. The solution depends on the sound engine you're using (OpenAL, Core Audio, ...).
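Whatever engine you use, the core of the crossfade itself is mixing the buffered tail of the current song with the head of the next one under complementary gain ramps. A purely illustrative sketch of that mixing step (shown in JavaScript for brevity; in the app this logic would sit in the audio pipeline fed by music_delivery):

```javascript
// Sketch: linearly crossfade the end of one PCM buffer into the start
// of another. Both are Float32Arrays of mono samples; fadeLength is in
// samples. Real code would also handle channels and clipping.
function crossfade(outgoing, incoming, fadeLength) {
  const mixed = new Float32Array(fadeLength);
  for (let i = 0; i < fadeLength; i++) {
    const t = i / fadeLength; // ramps 0 -> 1 across the fade
    const tailSample = outgoing[outgoing.length - fadeLength + i];
    mixed[i] = tailSample * (1 - t) + incoming[i] * t;
  }
  return mixed;
}
```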
I want to play the video at the following URL in my iPhone application. Please help me figure out how to play it:
http://www.example.com/SomeVideoOrAnother.m4v
I am building an application in which I'll provide the user with a list of movies or advertisements. The user will select one and be redirected to enjoy the creative ad.
Judging by the broadness of your question, you don't seem to have done any background study of iOS.
I would recommend the Apple Documentation (Getting started) as a good place to start.
Also, see some of these books on Amazon.
To answer your question, there are a few frameworks to look into, for example AV Foundation.