How to get Unity to stream a camera for a Google Cardboard project?

I am trying to get Unity (version 5.0.1f1) to stream a live feed from a camera for a Google Cardboard project at my university. The live feed I plan to use comes from a single GoPro Hero 3 camera, whose image I plan to duplicate for each eye.
I just wanted to know whether this is feasible with Unity or any other program; any help would be greatly appreciated.
Thanks,
Matthew

I have not tried this, but if you can get the GoPro to act as a WebCam, you can try using a WebCamTexture in Unity.
Getting GoPro to act as a webcam:
https://www.youtube.com/watch?v=ltxPZuIC6mk
Be sure to read the comment by "webzkey", who explains how to do it.
Unity WebCamTexture:
http://docs.unity3d.com/ScriptReference/WebCamTexture.html
No guarantees this will work, but worth a shot.
Note: you do not necessarily have to duplicate the image. Just put the texture on a plane and place that in front of the camera at a comfortable distance.
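If the GoPro does show up as a webcam, a minimal C# sketch of that approach might look like the following. Treat it as an untested outline: the first-device selection and the plane setup are my assumptions, not anything from the Cardboard SDK.

```csharp
using UnityEngine;

// Attach this to a Plane (or Quad) placed in front of the Cardboard camera rig.
public class GoProFeed : MonoBehaviour
{
    private WebCamTexture webcamTexture;

    void Start()
    {
        // If the GoPro registers as a webcam, it should appear in
        // WebCamTexture.devices; here we simply take the first device.
        if (WebCamTexture.devices.Length == 0)
        {
            Debug.LogWarning("No webcam devices found.");
            return;
        }
        webcamTexture = new WebCamTexture(WebCamTexture.devices[0].name);
        GetComponent<Renderer>().material.mainTexture = webcamTexture;
        webcamTexture.Play();
    }

    void OnDisable()
    {
        // Release the camera when the object is disabled or destroyed.
        if (webcamTexture != null) webcamTexture.Stop();
    }
}
```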

Related

How to do composite sprites in Godot

I'm currently working on a 2D top-down role-playing game, and I want to implement an equipment system where the way you look changes depending on the gear you have equipped. My idea is to do this by attaching the armor and weapon sprites onto the base sprite as a composite sprite. I've been looking for tutorials on how I could implement this in Godot, but unfortunately the ones I find are only for Unity. Is this not possible in Godot? If it is, please tell me where I can learn it.

Vuforia/Android Studio - Working with VideoPlayback and image targets samples at the same time

I need to use Vuforia to implement AR in an Android app using Android Studio.
I was able to run the samples separately with no issues. My question is whether anyone knows how to use the video playback and image target samples at the same time while the camera is active.
For example, I have two images in my database, located in assets. When the first image is recognized, I need to play a video (video playback), and when the second image is recognized, another image is placed with AR above the target (image target).
I know this is a bit late, but perhaps it could be of assistance nevertheless. I cannot give you any code, but I can tell you for sure that there is no real problem in doing this; in fact, it is only a matter of correctly integrating two of Vuforia's samples. Once you have implemented the functions for drawing an image on a target and for playing a video on a target, you simply invoke the relevant function based on the target ID. It would be much easier to help with a specific difficulty once you have actually attempted the integration.
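The per-target dispatch described above can be sketched in plain Java like this. The `TargetDispatcher` class, the target names, and the string return values are all hypothetical stand-ins; in a real app the registered actions would call the render paths from the VideoPlayback and ImageTargets samples.

```java
import java.util.HashMap;
import java.util.Map;

public class TargetDispatcher {
    // Hypothetical per-target action; in a real integration this would
    // trigger the video-playback or image-target rendering code.
    public interface TargetAction { String render(); }

    private final Map<String, TargetAction> actions = new HashMap<>();

    public void register(String targetName, TargetAction action) {
        actions.put(targetName, action);
    }

    // Call once per frame for each trackable the tracker reports as active.
    public String onTargetFound(String targetName) {
        TargetAction action = actions.get(targetName);
        return action != null ? action.render() : "none";
    }

    public static void main(String[] args) {
        TargetDispatcher d = new TargetDispatcher();
        d.register("movie_poster", () -> "video");  // first image: play a video
        d.register("product_logo", () -> "model");  // second image: draw AR overlay
        System.out.println(d.onTargetFound("movie_poster"));
        System.out.println(d.onTargetFound("product_logo"));
    }
}
```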

How to capture a still image from a live video programmatically

When I try to take a screenshot of my desktop, I find that the area of the Windows Media Player window is empty, with nothing in it. I googled for a while and found that most video players use overlay surfaces for performance, and overlay surfaces cannot be captured. Some suggestions said to disable DirectDraw acceleration so that you can grab a still image from a live video, but once the player has launched, it is already using hardware acceleration; even if I disable it, the change does not take effect until I relaunch the player. My questions are: how can I capture an image from a live video without disabling DirectDraw acceleration? Or, how can I make the setting (disabling hardware acceleration) take effect without relaunching the video player?
I won't play the video with my program; I just want to take a still
image while it is played by a third-party player such as Windows Media
Player or RealPlayer, etc.
I want to do this programmatically, say
with C/C++ and DirectX, so I don't want to use any existing software
or tools.
No matter which player is in use, my program should capture it. I know some tools can do this, like CapTrue and Tencent QQ, so I think it is possible.
A workaround can be to use VLC to play your file; it provides a screenshot option directly.
AFAIK, this is an intentional "feature" in WMP, for content protection. If you need to have WMP, then you need a decent screen grabber. Unfortunately, the ones I know, like HyperSnap, are not free.
If you only want a screen grab of a frame, VLC is your friend, as @zdd said.
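For the VLC route, frame dumps can also be automated with its "scene" video filter, which saves every Nth decoded frame to disk. The exact option names below are my assumption for a typical VLC build; check `vlc --help` or the documentation for your version.

```shell
# Play the file and write one PNG every 25 decoded frames to /tmp/frames.
vlc input.avi --video-filter=scene --scene-format=png \
    --scene-path=/tmp/frames --scene-ratio=25
```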

Getting audio frequency from an AMR file

I am new to the J2ME development world.
I just want to know how to get the audio frequency from an audio recording application which stores its data in a .amr file.
Please help me; I have tried a lot, but I am stuck.
Any idea regarding this will be appreciated.
Thanks in advance.
I am going to add here what I have found on other sites that may be useful to you and to me (as a newbie):
http://www.developer.nokia.com/Community/Discussion/showthread.php?154169-Getting-Recorded-Audio-Frequency-in-J2ME
If you want the frequency of a sound in Hz, it is actually not a single value but a series of values as a function of time.
You will have to calculate the Fourier transform of the sound samples, which will give you the frequency content.
Read about the Fourier transform on Wikipedia to see how to calculate it and build a frequency graph.
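The analysis step described above can be sketched with a naive DFT in plain Java. This only illustrates the frequency estimation on raw PCM samples; decoding the .amr file to PCM first is a separate problem not shown here, and the class and method names are my own.

```java
public class DominantFrequency {

    // Helper: generate a test sine wave so the code can be tried out.
    public static double[] sine(double freq, int sampleRate, int n) {
        double[] s = new double[n];
        for (int t = 0; t < n; t++)
            s[t] = Math.sin(2 * Math.PI * freq * t / sampleRate);
        return s;
    }

    // Naive DFT: scan bins up to the Nyquist frequency and return the
    // frequency of the bin with the largest magnitude. A real tuner would
    // use an FFT, which computes the same result much faster.
    public static double dominantFrequency(double[] samples, int sampleRate) {
        int n = samples.length;
        double bestMag = -1;
        int bestBin = 1;
        for (int k = 1; k < n / 2; k++) {           // skip the DC bin (k = 0)
            double re = 0, im = 0;
            for (int t = 0; t < n; t++) {
                double angle = 2 * Math.PI * k * t / n;
                re += samples[t] * Math.cos(angle);
                im -= samples[t] * Math.sin(angle);
            }
            double mag = re * re + im * im;
            if (mag > bestMag) { bestMag = mag; bestBin = k; }
        }
        return (double) bestBin * sampleRate / n;
    }

    public static void main(String[] args) {
        // Analyse 1024 samples of a 440 Hz tone at an 8 kHz sample rate.
        double f = dominantFrequency(sine(440.0, 8000, 1024), 8000);
        System.out.println("Dominant frequency ~ " + Math.round(f) + " Hz");
    }
}
```

Note that the result is quantized to bins of sampleRate/n Hz, so a longer sample block gives finer frequency resolution.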
http://www.developer.nokia.com/Community/Discussion/showthread.php?95262-Frequency-Analysis-in-J2ME-MMAPI
This forum thread says something about the FFT (fast Fourier transform) and about analysing recorded AMR sound rather than processing a live stream, and it provides three links about the FFT; have a look at them.
Look at the site mobile-tuner.com. (I'm new too; in fact, I know nothing about Java.)
But the site says that the phones with the tuner function enabled are S60 phones. I was trying to write a guitar tuner program, and since my phone is a Nokia 5310 XpressMusic, which is S40, I gave up.
So good luck to you.
Note: javax.microedition.media.control.RecordControl
I don't know too much, but I have a hunch that the RecordControl class is related to the audio frequency function in J2ME, and that the frequency analysis itself falls under "sound processing".

TI-99 speech effect?

I want to make a program that takes recorded speech and transforms it so it sounds like it's coming from a Texas Instruments TI-99. Do you have any good ideas or resources for how to go about that?
Most of those old speech synthesizers were built directly in-chip. Perhaps you could find a software synthesizer that sounds like the chip, but if you really want the original sound, you would have to simulate the chip itself (I don't know if that's a simple matter; perhaps the chip internals aren't published).
I only know because I burnt out a number of the Radio Shack speech synthesizer ICs before I managed to get a SP0256-AL2 working.
If you're more of a do-it-yourself type of guy, you need to find out which IC actually drove the speech synthesis in a TI-99, and then build the chip up on a breadboard. That's what I was trying to do back then; I managed to get the chip to speak, but lost patience after I fried my third chip due to a mis-wiring issue when I attempted to attach it to my PC's parallel port. I think this was the book I was using back then, but there's no cover art featured, so it's hard to know for sure.
If you are familiar with how to use ROM images, there seems to be a gentleman who has managed to reverse engineer the ROM image out of a SP0256-AL2. Look here for the image and the kindly granted permission to do the work and distribute the results.
You could start with open source that does something similar: Adding Robotic/Vocoder effect to your song using Audacity
