Chatbot that plays audio as response - audio

I want to make a chatbot that answers my voice or text with an mp3 audio file. For example, if I say "hello, how are you?", I want a pre-recorded audio clip to play after I say that.
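One rough way to sketch this in Python, assuming the third-party SpeechRecognition and playsound packages and a pre-recorded hello.mp3 (these names are illustrative, not from the question), is to transcribe microphone input and play a matching clip:

import speech_recognition as sr   # pip install SpeechRecognition
from playsound import playsound   # pip install playsound

# Map recognised phrases to pre-recorded mp3 replies (paths are illustrative)
REPLIES = {
    "hello how are you": "hello.mp3",
}

recognizer = sr.Recognizer()
with sr.Microphone() as source:
    print("Say something...")
    audio = recognizer.listen(source)

try:
    text = recognizer.recognize_google(audio).lower()
    print("You said:", text)
    for phrase, mp3_path in REPLIES.items():
        if phrase in text:
            playsound(mp3_path)   # play the pre-recorded answer
            break
except sr.UnknownValueError:
    print("Could not understand the audio")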

Related

Is it possible to show the title and link of a song that started from queue while using lavalink.py

I am making a discord.py music bot using lavalink.py for music streaming. Right now, whenever you have a queue of songs it will play through them all, but in order to see which song is currently playing you need to use a command for it. I want the Discord bot to send an embed with the song name and link when the next song in the queue starts. I used to use youtube-dl for the music function but it has since stopped working. I'm not sure if I need to use the StartTrackEvent, and if I do, I'm not sure how to implement it so that it sends a message in the Discord channel. Also, my music code is in a cog, if that makes much of a difference.
(I'm new to SO, excuse me if my answer isn't super great)
You would not need an event to do this (assuming you're using the commands extension). You could make your queue command and use AudioTrack.title and AudioTrack.identifier to get the track title and YouTube identifier. Using the identifier you could link to the track with https://youtube.com/watch?v=<identifier>. An example of this would be:
@commands.command(name="current", description="Shows the current playing song.", usage="current", aliases=['np', 'nowplaying'])
async def current(self, ctx):
    # Look up this guild's Lavalink player and build an embed from the current track
    player = self.bot.lavalink.player_manager.get(ctx.guild.id)
    embed = discord.Embed(title=player.current.title, url=f"https://youtube.com/watch?v={player.current.identifier}")
    await ctx.send(embed=embed)
This would return an embed containing the title of the current track, with the embed title linking to the video on YouTube.

Add audio in dialog (Bixby)

This is my first question, new and fresh, hello guys.
As the title mentions, is there any workaround or way to add audio inside dialog-speech-template? Since it doesn't support mp3, only wav, I am finding it hard to implement.
The audio I want to play comes from an API, so it isn't possible for me to download the mp3 file and convert it ahead of time (as the audio may change).
Is there any programmatic way to convert the mp3 audio to wav? I am pretty new to Bixby, hope elders here can help.
Unfortunately, Bixby SSML only supports certain wav formats. Please refer to SSML#AudioClip for details. There are also instructions there on how to convert using the ffmpeg tool.
To support mp3 format, you can raise a Feature Request in our community. This forum is open to other Bixby developers who can upvote it, leading to more visibility within the community and with the Product Management team.
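As a rough sketch of the ffmpeg route (the exact sample rate, channel count and bit depth Bixby accepts are listed under SSML#AudioClip, so treat the flags below as placeholders), you could convert the mp3 your API returns to wav in a small service you control, for example from Python:

import subprocess

def mp3_to_wav(src_mp3: str, dst_wav: str) -> None:
    # Shell out to ffmpeg; the -ar/-ac/-sample_fmt values are illustrative,
    # use whatever the SSML#AudioClip documentation requires.
    subprocess.run(
        ["ffmpeg", "-y", "-i", src_mp3,
         "-ar", "24000",        # sample rate (check the AudioClip docs)
         "-ac", "1",            # mono
         "-sample_fmt", "s16",  # 16-bit PCM
         dst_wav],
        check=True,
    )

mp3_to_wav("reply.mp3", "reply.wav")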

Dialogflow Intent that will display additional text to that which is spoken

I have created a simple conversational flow in Dialogflow that accepts various questions and speaks pre-programmed replies, all defined in a series of intents. There are no external hooks etc.
When used on a screen-based device (e.g. a mobile phone) I want to display more text than is spoken (displayText), e.g.:
User: "What colour is the sky?"
Bot: "Blue" (spoken and displayed on screen). "At night it is black". (Additional information displayed on screen only.)
I want to do the same for each intent.
What is the simplest way of achieving that please? I would prefer to keep most of it in Dialogflow and to write the minimum amount of code possible.
It's OK, I found the solution, thanks. In Dialogflow intents, under Response there are two tabs, Default and Google Assistant. Under Google Assistant there is an option Customise audio output. When you select that you get two input fields, one for text and one for speech.
So to use the above example under intent training phrase I entered "What colour is the sky?"
Under Default Response I entered "Blue"
Under Google Assistant response, Text Output field I entered: "Blue. At night it is black."
Under Google Assistant response, Speech Output field I entered: "Blue".
It works perfectly in both Google Home (voice only) and Assistant on a mobile phone (it speaks "Blue" but displays "Blue. At night it is black.").
It doesn't even seem necessary to enter anything in Default Response. It works fine on Google Home and Assistant on the phone without it. Not sure about other platforms though.
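For reference, if you ever move from console-configured responses to a fulfillment webhook, the same speech/display split maps onto an Actions on Google simple response. A minimal sketch assuming a Flask endpoint (the route name is illustrative, and the payload structure is only an outline):

from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/webhook", methods=["POST"])
def webhook():
    # Speak "Blue", but display the longer text on screen-capable devices
    return jsonify({
        "payload": {
            "google": {
                "richResponse": {
                    "items": [{
                        "simpleResponse": {
                            "textToSpeech": "Blue",
                            "displayText": "Blue. At night it is black."
                        }
                    }]
                }
            }
        }
    })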

get newest additions to a youtube playlist using youtube-dl

I've got a collaborative YouTube playlist with some friends that we use when we get together to play games. The problem is that the internet connection where we get together is quite bad. So I made a little script where people can add songs over Bluetooth or by sending a YouTube link (youtube-dl then downloads the mp3 of that video from the currently selected link), but I wanted an easier method of adding videos to the offline playlist.
I want to use the collaborative playlist to determine which songs are to be downloaded, but I only want the newest additions to the playlist (since the last check/download). Is it possible to retrieve the latest YouTube playlist items in Linux bash?
Have a look at the video selection options. In particular, --download-archive can be used for this purpose.
Simply run youtube-dl --download-archive /path/to/the/archive/file playlist_url. This will download all new songs in the playlist. If your playlist is large, you can also use --playlist-end 42 to only consider the first 42 songs.
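If you would rather drive this from Python than from a shell one-liner, youtube-dl exposes the same options through its embedded API; a sketch along those lines (the archive path and playlist URL are placeholders):

import youtube_dl  # pip install youtube_dl

options = {
    "download_archive": "/path/to/the/archive/file",  # videos listed here are skipped
    "format": "bestaudio/best",
    "postprocessors": [{
        "key": "FFmpegExtractAudio",   # requires ffmpeg installed on the system
        "preferredcodec": "mp3",
    }],
}

with youtube_dl.YoutubeDL(options) as ydl:
    ydl.download(["https://www.youtube.com/playlist?list=PLAYLIST_ID"])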

Generating Subtitles from audio file using speech Framework iOS

In my app I play audio from a URL with the help of AVPlayer. Now I want to add subtitle support to it. iOS 10 introduced the Speech framework, which helps us recognize both real-time and recorded speech. According to Apple:
"You can perform speech transcription of both real-time and recorded audio. For example, you can get a speech recognizer and start simple speech recognition using code like this:
let recognizer = SFSpeechRecognizer()
let request = SFSpeechURLRecognitionRequest(url: audioFileURL)
recognizer?.recognitionTask(with: request, resultHandler: { (result, error) in
    print(result?.bestTranscription.formattedString)
})
Now I am looking for a way to get the subtitles, as strings, for the currently playing audio using this Speech framework, and to know which line of dialogue is currently playing so that I can show exactly the same string on the screen.
In the segments portion of the SFSpeechRecognition result you can selectively identify the subtitles you wish to show. To do this you need to go through the segments and filter for the specific text.
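A rough Swift sketch of that idea, reusing the recognizer and request from the snippet above: each SFTranscriptionSegment carries the recognised substring along with a timestamp and duration, which you can match against the player's current playback time (an outline, not a drop-in implementation):

recognizer?.recognitionTask(with: request) { result, error in
    guard let result = result else { return }
    for segment in result.bestTranscription.segments {
        // Each segment has the text plus its position within the audio file
        print(segment.substring, segment.timestamp, segment.duration)
    }
    // To show a "subtitle", find the segment whose time range contains
    // the player's current time and display segment.substring on screen.
}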
