I am new to the Web Audio API. I am working on a web app where users will select a video, and that video will contain 2 audio channels:
Ch 1: Original Version (Full Mix)
Ch 2: M&E version (Music & Effects, no dialogue)
I need to give the user the option to select which channel plays with the video.
I have read that this should be possible using the Web Audio API, but I have not been able to write working code for it.
Any help will be really appreciated.
Thanks in advance.
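One way to approach this, assuming the full mix and the M&E mix are delivered as the two channels of a single stereo track (an assumption, since the question doesn't say how the file is muxed), is to split the track with a ChannelSplitterNode and feed whichever channel the user picks to both speakers. A minimal sketch:

    // Assumes the full mix and M&E are the two channels of one stereo track.
    var audioCtx = new (window.AudioContext || window.webkitAudioContext)();
    var video = document.querySelector('video');

    var source = audioCtx.createMediaElementSource(video);
    var splitter = audioCtx.createChannelSplitter(2);
    var merger = audioCtx.createChannelMerger(2);

    source.connect(splitter);
    merger.connect(audioCtx.destination);

    // Route the chosen channel (0 = full mix, 1 = M&E) to both speakers.
    function selectChannel(channelIndex) {
      splitter.disconnect(); // drop the previous routing
      splitter.connect(merger, channelIndex, 0); // into left input
      splitter.connect(merger, channelIndex, 1); // into right input
    }

    selectChannel(0); // default: full mix

Note that if the two versions are instead carried as two separate audio tracks in the container (rather than two channels of one track), the Web Audio API cannot choose between them; you would need something like the HTMLMediaElement.audioTracks API, which has limited browser support.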
Can anyone please guide me, step by step, through watermarking/overlaying a video in Azure? I am new to Azure, so if possible please point me to a tutorial or video tutorial for this. I have uploaded an .mp4 video to Azure, streamed it, and can view it in Azure Media Player. Please guide me on watermarking or overlaying a video in Azure.
I also need to understand: Azure provides watermarking/overlay as a service, so is there a way to do watermarking directly through an Azure interface, without Visual Studio C# coding?
Thanks in advance.
Do you need to overlay an image onto a video, or do you want to overlay a video over another video? For the former case, the image will have to be overlaid on the input video during the encoding process. There is a basic example documented here. In that example, the output contains a single MP4 at 640x360 resolution, which is sufficient for delivery via progressive download. Since you need to stream your video, you should update the Codecs section in that example with additional video bitrates, such as the ones shown here.
You also mention needing to do this without writing code. If you have a PC, you can install and run AMS Explorer (https://aka.ms/amse). Browse to the input video you want to process, hit Ctrl+R, and you will see tabs to specify the encoding settings, plus others for advanced features including overlays.
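For reference, Media Encoder Standard presets are JSON files, and adding bitrates means adding entries to the H264Layers array inside the Codecs section. An abridged, illustrative fragment (the bitrate and resolution values are made up; see the linked example for the full set of required fields):

    "Codecs": [
      {
        "Type": "H264Video",
        "H264Layers": [
          { "Type": "H264Layer", "Bitrate": 3400, "Width": 1280, "Height": 720 },
          { "Type": "H264Layer", "Bitrate": 1500, "Width": 960,  "Height": 540 },
          { "Type": "H264Layer", "Bitrate": 650,  "Width": 640,  "Height": 360 }
        ]
      },
      { "Type": "AACAudio", "Bitrate": 128 }
    ]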
I am making a game where I want to command the AI using words I speak.
For example, I say "go" and the AI bot moves a certain distance.
The problem is that I have been looking for an asset, and no provider will guarantee me that this is possible.
What are the difficulties in doing it?
I am a programmer, so if someone suggests a way to handle it, I can implement it.
Should I keep a mic listener on all the time, read the audio, and then pass that audio to some external SDK that can convert my voice to text?
These are the asset providers I have contacted:
https://www.assetstore.unity3d.com/en/#!/content/73036
https://www.assetstore.unity3d.com/en/#!/content/45168
https://www.assetstore.unity3d.com/en/#!/content/47520
and a few more!
If someone just explains the steps I need to follow, then I can certainly try it.
I am currently using this external API for pretty much the same thing: https://api.ai/
It comes with a unity SDK that works quite well:
https://github.com/api-ai/api-ai-unity-sample#apiai-unity-plugin
You have to connect an audio source to the SDK and tell it to start listening. It will then convert your voice audio to text, and can even detect pre-selected intents from your voice audio/text.
You can find all the steps for integrating the Unity plugin in the api.ai Unity SDK documentation on GitHub.
EDIT: It's free too btw :)
If you want to recognize speech offline, without sending data to a server, you should try this plugin:
https://github.com/dimixar/unity3DPocketSphinx-android-lib
It uses the open-source speech recognition engine CMUSphinx.
I'm new to Actions on Google and am currently doing R&D. I've created an audio skill on Alexa, and now I want the same for Google Assistant as well. But I have a few questions:
1- Can we return audio in a response? My audio files are about 1 hour long, so can we play them in our action? On Alexa we have the AudioPlayer; is there anything like that in the Assistant?
2- I didn't find any SDK, but devs are talking about one, so there must be something. Kindly share the link.
Thanks in anticipation.
Update:
I believe the SDK is actions-on-google. I've not explored it yet, but it's the SDK I found for creating actions with Node.js.
Link: actions-on-google
Actions support SSML, which provides playback of audio files: https://developers.google.com/actions/reference/ssml#support_for_ssml_elements
At the moment there is a 120-second maximum duration for all the supported audio formats, but if your files are longer you can break them up and play the pieces in sequence.
If you have your own NLU, you can use the Actions SDK. If you don't have your own NLU, then you can use API.AI to create an action.
A Node.js client library is available for either of these options: https://github.com/actions-on-google/actions-on-google-nodejs
For any other developer questions, you should look at the actions documentation: https://developers.google.com/actions/develop/conversation
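Putting those pieces together, a rough fulfillment sketch using the Node.js client library with API.AI (the intent name, audio URL, and Express wiring are placeholders, not part of any official sample):

    // Hypothetical webhook: Express + actions-on-google (v1-era API).
    var ApiAiApp = require('actions-on-google').ApiAiApp;
    var express = require('express');
    var bodyParser = require('body-parser');

    var server = express();
    server.use(bodyParser.json());

    server.post('/webhook', function (request, response) {
      var app = new ApiAiApp({ request: request, response: response });

      function playEpisode(app) {
        // <audio> clips are capped at 120 seconds, so a one-hour recording
        // must be pre-split into segments and played one per turn.
        app.ask('<speak>Here is part one. ' +
                '<audio src="https://example.com/episode-part1.mp3"></audio> ' +
                'Say next to continue.</speak>');
      }

      var actionMap = new Map();
      actionMap.set('play.episode', playEpisode); // placeholder intent name
      app.handleRequest(actionMap);
    });

    server.listen(8080);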
I am trying to write a simple Chrome app to play a sequence of online pictures on my Chromecast device.
I have looked at some examples but couldn't find anything I could tweak to get the simple behavior I needed. Maybe someone here could help by providing directions or advice on getting started with developing something like that for Chromecast.
UPDATE:
To give you a better idea about the specifics, let me add some more details to my requirements.
It needs to be controlled from Chrome.
I want to pass a playlist with tens to hundreds of images so it can cycle through them in a loop.
After receiving the playlist, the Chromecast device should be able to continue on its own, without continuously asking for the next image.
This is actually similar to the Backdrop feature Google is planning to introduce, but I wanted to write something myself.
Thanks
If you don't want to develop your own Cast receiver, you can use the media namespace channel and the Styled Media Receiver to display one photo at a time:
https://developers.google.com/cast/docs/styled_receiver
You will have to add the logic to advance from photo to photo in your sender app.
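That sender-side logic can be a simple timer loop; here is a sketch using the Chrome sender API (the already-connected session object and the fixed interval are assumptions):

    // `session` is an already-connected chrome.cast.Session;
    // `urls` is the list of photo URLs to cycle through.
    function startSlideshow(session, urls, intervalMs) {
      var index = 0;
      function loadNext() {
        var mediaInfo = new chrome.cast.media.MediaInfo(urls[index], 'image/jpeg');
        session.loadMedia(new chrome.cast.media.LoadRequest(mediaInfo),
            function () { // on success, schedule the next photo
              index = (index + 1) % urls.length;
              setTimeout(loadNext, intervalMs);
            },
            function (e) { console.error('Load failed', e); });
      }
      loadNext();
    }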
If you are willing to develop your own custom receiver, then you can start with this Cast sample app:
https://github.com/googlecast/CastHelloText-android
It allows you to send messages to a custom receiver. You can use that to send the URLs of the photos and then you can add JavaScript logic in the receiver to play a slideshow.
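A custom receiver for this can be very small. Here is a sketch using the v2 Receiver API (the message namespace and the { urls, delay } payload shape are invented for illustration); because the whole playlist arrives in one message, the receiver then cycles on its own, which matches the requirement above:

    // Script in the receiver page; assumes the page loads the Cast receiver
    // SDK and contains an <img id="photo"> element.
    window.onload = function () {
      var manager = cast.receiver.CastReceiverManager.getInstance();
      var bus = manager.getCastMessageBus(
          'urn:x-cast:com.example.slideshow',      // invented namespace
          cast.receiver.CastMessageBus.MessageType.JSON);

      var timer = null;
      bus.onMessage = function (event) {
        var urls = event.data.urls;                // invented payload shape
        var delay = event.data.delay || 10000;
        var index = 0;
        clearInterval(timer);
        timer = setInterval(function () {
          document.getElementById('photo').src = urls[index];
          index = (index + 1) % urls.length;       // loop forever
        }, delay);
      };

      manager.start();
    };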
Just to let you know, I have tried various options and ended up writing custom receiver and Chrome sender applications. This was really straightforward and exactly what I wanted.
See the links above for guidance and also examples here.
I'm creating an app that reads a single podcast feed (unique to the app) and shows the episode titles in a LongListSelector. I can obtain the MP3 URI for each episode by parsing the RSS file. I'd like to add functionality that, when the user taps an item in the list, the URI is passed to an audio streamer and played like a music file.
I saw a tutorial on How to play background audio for Windows Phone, which points me to a project template for streaming audio.
I'm just wondering, is it still necessary to follow those steps and create a separate project, or is there a built-in API call in Windows Phone 8 that I can just pass my URI to and have it stream automatically?
Yes. If you need to use BackgroundAudioPlayer, it is necessary to create another project for the audio player agent and add a reference to it in your main project.
Through MediaPlayer you can play files from the media library or from IsolatedStorage.
So for your case it is necessary to follow those steps. Hope this helps.
Use the MediaElement control from the Windows Phone 8 toolbox.
You should be able to achieve this without any difficulty, but it will not run in the background.