I am developing an application to play 360° videos. Everything is working fine, but I want to include touch input to pan through the video along with the gyro input. Is there any way to include it? Does the VR SDK support this? Thanks in advance.
I don't think it's built into the SDK, but there is this:
https://developer.android.com/training/gestures/detector.html
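In case it helps someone, the core math is the same on any platform. A platform-neutral sketch in TypeScript (this is not the VR SDK's API; degreesPerPixel and the function names are made up for illustration): accumulate the drag deltas from the gesture detector into extra yaw/pitch offsets on top of the gyro orientation.

// Accumulate touch-drag deltas into yaw/pitch offsets that are added
// on top of the gyro orientation each frame.
const degreesPerPixel = 0.1; // tuning constant
let touchYaw = 0;   // horizontal pan, in degrees
let touchPitch = 0; // vertical pan, in degrees

function onDrag(dxPixels: number, dyPixels: number): void {
  touchYaw += dxPixels * degreesPerPixel;
  // Clamp pitch so the user can't flip the camera past the poles.
  touchPitch = Math.max(-90, Math.min(90, touchPitch + dyPixels * degreesPerPixel));
}

// Called every frame when orienting the 360° camera.
function cameraAngles(gyroYaw: number, gyroPitch: number) {
  return { yaw: gyroYaw + touchYaw, pitch: gyroPitch + touchPitch };
}

Feed onDrag with the distance values from the gesture detector's scroll callback and apply the combined angles to your camera.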
I am new to the Web Audio API. I am working on a web app where users will select a video, and that video will contain 2 channels:
Ch 1: Original Version (Full Mix)
Ch 2: M&E version (Music & Effects, no dialogue)
I need to give the user the option to select which channel to play for the video.
I have seen that this should be possible using the Web Audio API, but I am not able to write proper code for it.
Any help will be really appreciated.
Thanks in advance.
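Here's a minimal sketch of the usual Web Audio approach, assuming the two versions are delivered as the two channels of a single stereo track (the question doesn't say how they're encoded); the element id and the selectChannel helper are illustrative, not part of any SDK:

// Route either channel 0 (full mix) or channel 1 (M&E) of a <video>
// element to both speakers.
const videoEl = document.getElementById("player") as HTMLVideoElement;
const ctx = new AudioContext();

const source = ctx.createMediaElementSource(videoEl);
const splitter = ctx.createChannelSplitter(2);
const merger = ctx.createChannelMerger(2);
const gainCh0 = ctx.createGain(); // full mix
const gainCh1 = ctx.createGain(); // M&E

source.connect(splitter);
splitter.connect(gainCh0, 0); // channel 0 -> gainCh0
splitter.connect(gainCh1, 1); // channel 1 -> gainCh1
// Feed the active channel to both the left and right outputs.
gainCh0.connect(merger, 0, 0);
gainCh0.connect(merger, 0, 1);
gainCh1.connect(merger, 0, 0);
gainCh1.connect(merger, 0, 1);
merger.connect(ctx.destination);

// Let the user toggle between the two versions.
function selectChannel(ch: 0 | 1): void {
  gainCh0.gain.value = ch === 0 ? 1 : 0;
  gainCh1.gain.value = ch === 1 ? 1 : 0;
}
selectChannel(0); // start with the full mix

If the two versions are separate audio tracks rather than stereo channels, the routing is different, but the gain-node switching idea stays the same.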
I want to know if it's possible to stream a video from a PC to an Xbox and play it in the Movies & TV app.
For example, I want to reproduce this feature:
Does anybody know the protocol or the API used for this?
Thanks
Answering my own question: DLNA provides this feature. So I took a Node.js DLNA client library and played my video with it.
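The answer doesn't name the library, so take this as a sketch rather than the author's exact code. Using the dlnacasts npm package (one Node.js DLNA client; the media URL and title below are placeholders):

// npm install dlnacasts
const dlnacasts = require("dlnacasts")();

// "update" fires once for each DLNA renderer (e.g. the Xbox) discovered
// on the local network.
dlnacasts.on("update", (player: any) => {
  console.log(`Found renderer: ${player.name}`);
  player.play("http://192.168.1.10:8000/myvideo.mp4", {
    title: "My Video",
    type: "video/mp4",
  });
});

Note that the video has to be reachable over HTTP, so the PC also needs to serve the file (e.g. with a small static file server).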
I am making a game where I want to command the AI using words I speak.
For example, I say "go" and the AI bot moves a certain distance.
The question is: I am looking for an asset, and no provider will guarantee me that it is possible.
What are the difficulties in doing it?
I am a programmer, so if someone suggests a way to handle it, I can do it.
Should I keep the mic listener on all the time, read the audio, and then pass it to some external SDK which can convert my voice to text?
These are the asset providers I have contacted:
https://www.assetstore.unity3d.com/en/#!/content/73036
https://www.assetstore.unity3d.com/en/#!/content/45168
https://www.assetstore.unity3d.com/en/#!/content/47520
and a few more!
If someone just explains the steps I need to follow, I can try it for sure.
I am currently using this external API for pretty much the same thing: https://api.ai/
It comes with a Unity SDK that works quite well:
https://github.com/api-ai/api-ai-unity-sample#apiai-unity-plugin
You have to connect an audio source to the SDK and tell it to start listening. It will then convert your voice audio to text, and even detect pre-selected intents from your voice audio / text.
You can find all the steps for integrating the Unity plugin in the api.ai Unity SDK documentation on GitHub.
EDIT: It's free too btw :)
If you want to do recognition offline, without sending data to a server, try this plugin:
https://github.com/dimixar/unity3DPocketSphinx-android-lib
It uses CMUSphinx, an open-source speech recognition engine.
So I am using the react-native-audio package to play preloaded audio files and capture the user's recorded audio. What I would like to do is convert the audio into some sort of data for visualization and analysis. There seem to be several options for the web, but not much specifically for React Native. How would I achieve this? Thank you.
I've just bumped into this post. I am building a React Native waveform visualiser; the Android side is still a work in progress, but it's working on the iOS side.
It's pretty much a port of WaveForm on iOS, using Igor Shubin's solution.
You are very welcome to check out the code at https://github.com/juananime/react-native-audiowaveform
To try straight away:
npm install react-native-audiowaveform --save
Cheers!
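On the analysis half of the original question: the core computation these waveform libraries perform is library-independent. A minimal sketch, assuming you already have the decoded PCM samples as a Float32Array (extracting them from a recording is the platform-specific part), reduces them to a fixed number of peaks you can draw as bars:

// Reduce raw PCM samples to `bars` peak values in [0, 1].
function waveformPeaks(samples: Float32Array, bars: number): number[] {
  const bucketSize = Math.ceil(samples.length / bars);
  const peaks: number[] = [];
  for (let i = 0; i < samples.length; i += bucketSize) {
    let peak = 0;
    // Largest absolute amplitude in this bucket.
    for (let j = i; j < Math.min(i + bucketSize, samples.length); j++) {
      peak = Math.max(peak, Math.abs(samples[j]));
    }
    peaks.push(peak);
  }
  return peaks;
}

// Example: 100 bars from one second of noise test data at 44.1 kHz.
const test = new Float32Array(44100).map(() => Math.random() * 2 - 1);
console.log(waveformPeaks(test, 100).length); // 100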
I have an app that I have written using Xamarin.Forms. I wanted to know if there is any media class I could use to record audio as well as stream audio from a server. All the articles I have found on the web so far are platform-specific.
Thanks in advance.
There currently isn't any audio support in Xamarin.Forms. You will need to write platform-specific code for handling the audio and use the XF DependencyService (or something similar) to call it from your shared code.