Cardboard headtracking with no gyro - google-cardboard

I have an LG G3 phone which, after some testing, turns out not to have a gyroscope (only an accelerometer). I've been testing Cardboard with it and have run into some issues.
Sometimes the camera suddenly jumps up to 90 degrees in any direction from where I was looking, and at its worst this can happen every 10 seconds or so (usually it's about every 30 seconds). I did test the accelerometer output and it didn't seem inaccurate enough to make the camera jump that much. I've looked around and found a couple of other users reporting the same issue.
This issue is present not only in the Unity Cardboard SDK demo but also in some VR apps, though a couple of apps I've tried work perfectly fine (I can't remember which ones right now, but one was a roller coaster VR app). The issue is really apparent in the Cardboard Labs app.
This jumpiness doesn't just destroy the immersion; it also induces disorientation as well as nausea when the jumping gets really bad. I had a hard time finishing the Cardboard Labs tests because of this...
So, last but not least: can the headtracking code be optimized for phones without a gyro so that these experiences can be improved? If not on Google's side of the SDK, is there anything I can do to the SDK to help minimize this effect?

OK, so after some testing I seem to have it fixed now.
The reason seems to be that I have rooted my device and often fiddle with its CPU frequency, which somehow messes up the motion tracking. This is easily fixed by rebooting with stock clocks.
I'm not sure whether the motion sensor polling returns incorrect data when it's read too fast or whether the CPU simply can't keep up, but either way I seem to be stuck with stock clocks if I'm going to play VR games. I'm leaving this question here for those who might have the same issue.
EDIT: After some more testing the issue reappeared after a while. I'm guessing an app or service might be the problem here, because after a restart it's fixed again. I'll post more when I've tested it further.
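A note for anyone who wants to experiment while this remains unfixed: the jumps sound like raw accelerometer noise reaching the orientation estimate, so smoothing the readings before they are used for tracking may reduce them. Below is a minimal, self-contained Java sketch of an exponential low-pass filter; the class name, the alpha value, and the sample data are my own illustration and are not part of the Cardboard SDK.

/**
 * Minimal exponential low-pass filter for 3-axis accelerometer samples.
 * This is NOT part of the Cardboard SDK; it only illustrates how raw
 * readings could be smoothed before they feed into head tracking.
 */
public class AccelLowPassFilter {
    private final float alpha;                      // 0..1, smaller = smoother but laggier
    private final float[] smoothed = new float[3];
    private boolean initialized = false;

    public AccelLowPassFilter(float alpha) {
        this.alpha = alpha;
    }

    /** Feed one raw (x, y, z) sample, get the smoothed values back. */
    public float[] filter(float[] raw) {
        if (!initialized) {
            System.arraycopy(raw, 0, smoothed, 0, 3);
            initialized = true;
        } else {
            for (int i = 0; i < 3; i++) {
                smoothed[i] = smoothed[i] + alpha * (raw[i] - smoothed[i]);
            }
        }
        return smoothed.clone();
    }

    public static void main(String[] args) {
        AccelLowPassFilter filter = new AccelLowPassFilter(0.15f);
        // A spike like the third sample is the kind of reading that causes sudden camera jumps.
        float[][] samples = { {0f, 0f, 9.81f}, {0f, 0f, 9.79f}, {4f, -3f, 12f}, {0f, 0f, 9.80f} };
        for (float[] s : samples) {
            float[] f = filter.filter(s);
            System.out.printf("raw=(%.2f, %.2f, %.2f) -> smoothed=(%.2f, %.2f, %.2f)%n",
                    s[0], s[1], s[2], f[0], f[1], f[2]);
        }
    }
}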

Related

Recognize specific ringtone

What I want is to get a signal at my Raspberry Pi at home when I'm not at home, so I can e.g. wake up my PC. I always have an old phone lying around that I never really use. So I thought: I can call that phone, a specific MP3 ringtone plays, and my Raspberry Pi listens for and recognizes the ringtone, which is the signal. I can pretty much choose whatever ringtone I want (hopefully not too long a one), but it has to be recognizable by the Raspberry Pi and distinguishable from other sounds. Ideally I can play random music at home and the signal will only trigger on the specific ringtone I chose.
So I'm at the very beginning of the project and I have a lot of questions. Is this even feasible? How do I listen for the ringtone? Should I use a normal microphone, or could I e.g. trigger a GPIO pin as long as a specific frequency is played? What kind of ringtone should I use to be as distinguishable as possible? And how do I create the software to recognize the sound?
I know this is a lot and I don't expect a step-by-step solution. But maybe you have some hints to point me in the right direction?
If someone has a similar problem, I found a solution. First I had to choose between a mostly hardware solution and a mostly software solution. The hardware solution is to filter for specific frequencies. This seems to be pretty hard with normal band-pass filters if you want narrow bands. There are also components that can do it; I now know of the NE567. But that component only reacts to one frequency and draws quite a lot of power. To recognize a ringtone, several of these components are needed, which means more power consumption. Additionally, this solution is pretty inflexible.
So I went for the software solution. Now I have an Arduino Uno that reads an amplified electret microphone signal on an analog input pin. The data is collected and simultaneously analysed with an FFT algorithm. I then check for a dominant frequency, if there is one, and save it in an array. Every time I get a new data point I compare the array against the pattern of my ringtone and calculate a score for the match. If the score is high enough, the ringtone is "found" and I can trigger my event.
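To make the matching step more concrete, here is a rough sketch of the idea in Java rather than the Arduino's C++; the pattern, tolerance, and threshold values are made-up examples, not the actual code on the Uno. The idea is to keep a rolling buffer of the most recent dominant frequencies and check how many entries agree with the known ringtone pattern within a tolerance.

import java.util.ArrayDeque;
import java.util.Deque;

/**
 * Rough illustration of matching a sequence of dominant frequencies (Hz)
 * against a known ringtone pattern. Not the actual Arduino code; the
 * pattern, tolerance and threshold below are made-up example values.
 */
public class RingtoneMatcher {
    private static final double[] PATTERN = {880, 880, 1320, 1760, 1320, 880}; // hypothetical ringtone
    private static final double TOLERANCE_HZ = 40;     // how far a bin may drift and still count
    private static final double MATCH_THRESHOLD = 0.8; // fraction of pattern entries that must agree

    private final Deque<Double> recent = new ArrayDeque<>();

    /** Feed the dominant frequency of the latest FFT window; returns true when the pattern is found. */
    public boolean addSample(double dominantHz) {
        recent.addLast(dominantHz);
        if (recent.size() > PATTERN.length) {
            recent.removeFirst();
        }
        if (recent.size() < PATTERN.length) {
            return false;
        }
        int hits = 0;
        int i = 0;
        for (double f : recent) {
            if (Math.abs(f - PATTERN[i++]) <= TOLERANCE_HZ) {
                hits++;
            }
        }
        return (double) hits / PATTERN.length >= MATCH_THRESHOLD;
    }

    public static void main(String[] args) {
        RingtoneMatcher matcher = new RingtoneMatcher();
        double[] stream = {440, 880, 885, 1310, 1750, 1330, 875}; // some noise, then the ringtone
        for (double f : stream) {
            System.out.println("dominant " + f + " Hz -> match=" + matcher.addSample(f));
        }
    }
}

On a microcontroller the same loop runs over fixed-point FFT output instead of doubles, but the scoring logic is the part that matters.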
I'm actually pretty pleased with the solution, because it works quite well even with the phone a few feet away from the microphone. I thought I would need to put the microphone almost directly next to the phone to get good results, but I don't have to. It's still a little sensitive, because the sound volume shouldn't be too high or too low, but with the right volume settings it works over quite a large area when the phone is in the same room. It actually works better with some space between microphone and phone, because the phone's radio emissions during the call disturb the circuit quite a lot. There is also the problem that other noises block the ringtone recognition. I could compensate for that in my algorithm, but I had almost used up all the resources of the Arduino, so I had to keep the algorithm simple. In my case I don't have a noisy environment, so this is not a problem for me. Another plus is that my event was never triggered by another sound, and it seems almost impossible that this could happen by accident.
So it is feasible, and I think it's actually quite an elegant solution. I also thought about vibration detection, or even directly using the vibration motor's signal, but I have no control over the vibration function of that old phone. I can, however, choose the ringtone for every contact, so I only gave the "magic" ringtone to myself, and thus the event can only be triggered by me. I will say that writing the software was kind of hard with the Arduino's limitations. Because I need the data in real time, I have limited time for the calculation. I had to limit the incoming data and can therefore only listen to frequencies up to 10 kHz. But the ringtone recognition is still possible, and I think it was worth the effort. :)

Gesture Recognition for my Grandma (Kinect) Linux

I'm looking into making a project with the Kinect to allow my Grandma to control her TV without being daunted by the remote. So I've been looking into basic gesture recognition. The aim is to, say, turn the TV volume up by sending the right IR code to the TV when the program detects that the right hand is being "waved."
The problem is, no matter where I look, I can't seem to find a Linux based tutorial which shows how to do something as a result of a gesture. One other thing to note is that I don't need to have any GUI apart from the debug window as this will slow my program down a fair bit.
Does anybody know of something, somewhere, that will let me constantly check for a hand gesture in a loop and, when one is detected, control something, without the need for any GUI at all, and on Linux? :/
I'm happy to go for any language but my experience revolves around Python and C.
Any help will be accepted with great appreciation.
Thanks in advance
Matt
In principle, this concept is great, but the number of features a remote offers is going to be hard to replicate with a set of gestures that an older person can memorize. They will probably be even less incentivized to do this (learning new things sucks) if they already have a solution (the remote), even though they really love you. I'm just warning you.
I recommend you use OpenNI and NITE. Note that the current version of OpenNI (2) does not have Kinect support. You need to use OpenNI 1.5.4 and look for the SensorKinect093 driver. There should be some gesture code that works for that (googling OpenNI Gesture yields a ton of results). If you're using something that expects OpenNI 2, be warned that you may have to write some glue code.
The basic control set would be Volume +/-, Channel +/-, Power on/off. But that will be frustrating if she wants to go from Channel 03 to 50.
I don't know how low-level you want to go, but a really, REALLY simple gesture recognizer could look at horizontal and vertical swipes of the right hand exceeding a velocity threshold (averaged over a few frames). Be warned: detected skeletons can get really wonky when people are sitting (that's actually a bit of what my PhD is on).
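To make the velocity-threshold idea concrete, here is a small self-contained Java sketch; the hand positions, window size, and threshold are hypothetical, and in a real setup the positions would come from the OpenNI/NITE skeleton tracker rather than a simulated loop.

/**
 * Very simple swipe detector: watches the right hand's horizontal velocity,
 * averaged over the last few frames, and fires when it exceeds a threshold.
 * The position source is hypothetical; in practice it would come from the
 * OpenNI/NITE skeleton tracker.
 */
public class SwipeDetector {
    private static final int WINDOW = 5;                  // frames to average over
    private static final double THRESHOLD_MM_PER_S = 900; // made-up swipe speed threshold

    private final double[] vx = new double[WINDOW];
    private int index = 0;
    private int filled = 0;
    private Double lastX = null;
    private Long lastTimeMs = null;

    /** Feed the right hand's x position (mm) and a timestamp (ms); returns +1/-1 on a swipe, 0 otherwise. */
    public int update(double handX, long timeMs) {
        if (lastX != null) {
            double dt = (timeMs - lastTimeMs) / 1000.0;
            if (dt > 0) {
                vx[index] = (handX - lastX) / dt;
                index = (index + 1) % WINDOW;
                filled = Math.min(filled + 1, WINDOW);
            }
        }
        lastX = handX;
        lastTimeMs = timeMs;
        if (filled < WINDOW) return 0;
        double avg = 0;
        for (double v : vx) avg += v;
        avg /= WINDOW;
        if (avg > THRESHOLD_MM_PER_S) return +1;   // swipe right -> e.g. volume up
        if (avg < -THRESHOLD_MM_PER_S) return -1;  // swipe left  -> e.g. volume down
        return 0;
    }

    public static void main(String[] args) {
        SwipeDetector detector = new SwipeDetector();
        // Simulated ~30 fps hand positions moving quickly to the right.
        double x = 0;
        for (int frame = 0; frame < 10; frame++) {
            int gesture = detector.update(x, frame * 33L);
            System.out.println("frame " + frame + " x=" + x + " gesture=" + gesture);
            x += 40; // roughly 1200 mm/s
        }
    }
}

The detected +1/-1 result is where you would fire the IR code, with some cooldown so one swipe does not trigger several volume steps.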

HTML5 web audio getting stuck (buffering issue?)

I am working on an audio website for a weekly radio show and I keep running into the same problem: the audio files, which are up to an hour long, keep getting stuck.
I have tested several different players, both the Flash-based Wimpy Player and HTML5 players such as Audio5js, jPlayer, and Pickle Player.
I have also tried encoding the audio at different bit rates (8, 24, 64, 128), but the sound files keep getting stuck. Not always, but often enough to be a serious problem.
The file starts playing, but some way in (anywhere from a few seconds to almost the end of the one-hour show) it just stops, and the only way to keep playing is to reload the file. To me it seems like a buffering problem.
I don't understand why.
If anyone has ever had a similar problem, please tell me what I am missing.
I found the answer to this one myself.
After double-, triple-, and quadruple-checking everything, it hit me: the site was hosted by GoDaddy, and I know that's not the best quality you can expect from them. (Actually they are really bad in more than one way. Never go with GoDaddy if you can avoid it. Just my personal opinion.)
We changed the hosting and the problem was gone.
Too bad it cost me days of work for nothing.

LWJGL starts to run at low FPS on Display

I'm having a problem with LWJGL. I have a simple game and everything works fine. My main loop calculates when it should render and update the game, and it stays at a constant 59-60 FPS. The problem seems to come from OpenGL. After a random amount of time my whole game starts to run at very low FPS. My game loop still reports 60 updates per second, but what I see on screen doesn't match it. I'm guessing I'm overloading OpenGL. I'm clearing the color buffer bit and the depth buffer (though I don't use depth). Is there anything more I need to clear?
It's kind of tough to say what may be wrong with your program without actually looking at the code. Clearing the screen is one thing, but it really shouldn't have the biggest impact, so unfortunately I can't tell you more without additional information.
Could it be a problem with slow hardware? This sounds like a trivial "I have a slow graphics card" or "I have a lot of things open in the background" kind of problem. Also note that on most laptops, if you shake the machine the hard drive can lock up for a few seconds, causing stuttering.
As Andrew said you can't really pinpoint this sort of problem without code.
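For comparison, a bare-bones LWJGL 2 style main loop that clears both buffers and caps the framerate with Display.sync looks roughly like the sketch below; the window size and structure are placeholders of mine, not the asker's code. If your real loop differs substantially from this shape, that difference is the first place to look.

import org.lwjgl.LWJGLException;
import org.lwjgl.opengl.Display;
import org.lwjgl.opengl.DisplayMode;
import org.lwjgl.opengl.GL11;

/**
 * Bare-bones LWJGL 2 style loop, only for comparison with the asker's code.
 * Clearing the color and depth buffers each frame is normally all that is
 * needed; Display.sync(60) caps the loop at roughly 60 FPS.
 */
public class MinimalLoop {
    public static void main(String[] args) throws LWJGLException {
        Display.setDisplayMode(new DisplayMode(800, 600));
        Display.create();

        while (!Display.isCloseRequested()) {
            GL11.glClear(GL11.GL_COLOR_BUFFER_BIT | GL11.GL_DEPTH_BUFFER_BIT);

            // ... update and render the game here ...

            Display.update();   // swap buffers and process window messages
            Display.sync(60);   // sleep so the loop runs at ~60 FPS
        }
        Display.destroy();
    }
}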

2D platformers: why make the physics dependent on the framerate?

"Super Meat Boy" is a difficult platformer that recently came out for PC, requiring exceptional control and pixel-perfect jumping. The physics code in the game is dependent on the framerate, which is locked to 60fps; this means that if your computer can't run the game at full speed, the physics will go insane, causing (among other things) your character to run slower and fall through the ground. Furthermore, if vsync is off, the game runs extremely fast.
Could those experienced with 2D game programming help explain why the game was coded this way? Wouldn't a physics loop running at a constant rate be a better solution? (Actually, I think a physics loop is used for parts of the game, since some of the entities continue to move normally regardless of the framerate. Your character, on the other hand, runs exactly [fps/60] as fast.)
What bothers me about this implementation is the loss of abstraction between the game engine and the graphics rendering, which depends on system-specific things like the monitor, graphics card, and CPU. If, for whatever reason, your computer can't handle vsync, or can't run the game at exactly 60fps, it'll break spectacularly. Why should the rendering step in any way influence the physics calculations? (Most games nowadays would either slow down the game or skip frames.) On the other hand, I understand that old-school platformers on the NES and SNES depended on a fixed framerate for much of their control and physics. Why is this, and would it be possible to create a platformer in that vein without the framerate dependency? Is there necessarily a loss of precision if you separate the graphics rendering from the rest of the engine?
Thank you, and sorry if the question was confusing.
There is no reason why physics should depend on the framerate; this is clearly bad design.
I once tried to understand why people do this. I was doing a code review for a game written by another team in the company, and I didn't see it at first, but they used a hardcoded value of 17 all over their code. When I ran the game in debug mode with the FPS shown, I saw it: the FPS was exactly 17! I looked over the code again and then it was clear: the programmers had assumed the game would always run at a constant 17 FPS. If the FPS was greater than 17, they slept to bring it back down to exactly 17. Of course, they did nothing if the FPS was smaller than 17, and then the game just went crazy (for example, when played at 2 FPS while driving a car in the game, the game kept alerting me: "Too Fast! Too Fast!").
So I wrote an email asking why they hardcoded this value and used it in their physics engine, and they replied that this way they kept the engine simpler. I replied again: OK, but if we run the game on a device that can't manage 17 FPS, your game engine behaves very oddly, not as expected. They said they would fix the issue by the next code review.
After 3 or 4 weeks I got a new version of the source code, so I was really curious to find out what they had done with the FPS constant. The first thing I did was search the code for 17, and there were only a couple of matches, but one of them was not something I wanted to see:
final static int FPS = 17;
So they had removed all the hardcoded 17s from the code and used the FPS constant instead. Their motivation: now if they need to put the game on a device that can only do 10 FPS, all they need to do is set that FPS constant to 10 and the game will run smoothly.
In conclusion, sorry for writing such a long message, but I wanted to emphasize that the only reason anyone would do such a thing is bad design.
Here's a good explanation on why your timestep should be kept constant: http://gafferongames.com/game-physics/fix-your-timestep/
Additionally, depending on the physics engine, the system may become unstable when the timestep changes. This is because some of the data that is cached between frames is timestep-dependent. For example, the starting guess for an iterative solver (which is how constraints are solved) may be far off from the answer. I know this is true for Havok (the physics engine used by many commercial games), but I'm not sure which engine SMB uses.
There was also an article in Game Developer Magazine a few months ago illustrating how a jump with the same initial velocity but different timesteps reached different maximum heights at different frame rates. There was a supporting anecdote from a game (Tony Hawk?) where a certain jump could be made in the NTSC version of the game but not the PAL version (since the framerates are different). Sorry I can't find the issue at the moment, but I can try to dig it up later if you want.
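For reference, the fixed-timestep loop described in the article linked above boils down to accumulating elapsed frame time and stepping the physics in constant-sized chunks, so rendering speed no longer affects the simulation. A compressed Java sketch follows; the step size, the toy jump physics, and the sleep that stands in for rendering are placeholder values of mine, not anything from Super Meat Boy.

/**
 * Compressed fixed-timestep loop in the style of "Fix Your Timestep".
 * Physics always advances in constant DT steps no matter how fast or
 * slow rendering is; update() and render() are placeholders.
 */
public class FixedTimestepLoop {
    private static final double DT = 1.0 / 60.0; // physics step in seconds

    private double positionY = 0;
    private double velocityY = 10; // a simple jump, just to have visible state

    private void update(double dt) {
        velocityY -= 9.81 * dt;
        positionY += velocityY * dt;
    }

    private void render() {
        System.out.printf("y = %.3f%n", positionY);
    }

    public void run() throws InterruptedException {
        double accumulator = 0;
        long previous = System.nanoTime();

        for (int frame = 0; frame < 120; frame++) { // a real game loops until quit
            long now = System.nanoTime();
            accumulator += (now - previous) / 1_000_000_000.0;
            previous = now;

            // Step physics zero or more times, always with a constant timestep.
            while (accumulator >= DT) {
                update(DT);
                accumulator -= DT;
            }

            render();          // rendering rate no longer affects the simulation
            Thread.sleep(5);   // stand-in for rendering / vsync work
        }
    }

    public static void main(String[] args) throws InterruptedException {
        new FixedTimestepLoop().run();
    }
}

With this structure, a slow machine simply renders fewer frames while the jump still reaches the same height, which is exactly the property the magazine article was pointing at.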
They probably needed to get the game done quickly enough and decided that they would cover sufficient user base with the current implementation.
Now, it's not really that hard to retrofit framerate independence if you think about it during development, but I suppose they could have gone down some steep holes.
I think it's unnecessary, and I've seen it before (some early 3D-hardware game did the same thing, where the game went faster if you looked at the sky and slower if you looked at the ground).
It just sucks. Bug the developers about it and hope that they patch it, if they can.

Resources