Racing game in J2ME (2D) - java-me

I'm developing a top-view racing game. The roads are straight, with a few curves. How do I tilt the road image to render a turn?
J2ME has no method to rotate an image by an arbitrary angle (only 90-degree steps via Sprite transforms). I referred to and tried the approach from
the following link, but on the device the response time is very slow.
How can I make this gameplay work? Any references or links?
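For reference, the built-in 90-degree rotation looks like the sketch below; anything in between requires per-pixel manipulation, which is what is slow on the device. A common workaround (assumed here, not taken from the linked page) is to pre-render each needed angle as a separate image at build time instead of rotating at runtime.

import javax.microedition.lcdui.Image;
import javax.microedition.lcdui.game.Sprite;

public class RoadTiles {
    // MIDP 2.0 rotates an Image in 90-degree steps without per-pixel work.
    public static Image rotate90(Image road) {
        return Image.createImage(road, 0, 0,
                road.getWidth(), road.getHeight(), Sprite.TRANS_ROT90);
    }
}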

Related

3D audio representation of objects on a map

OK, I'm going to try to explain the issue I am having in a few lines.
I am a blind game developer, and as such I make games using only audio and not graphics, so other blind people can play.
I am trying to build a binaural audio scene in a game. However, the sounds that play only represent an object's center position on the screen, x/2 and y/2.
This works fine for people, cars, and other small objects. However, when there is a door or a wall that occupies 5 squares, or the whole x or y extent in the case of walls, I am clueless as to how to represent that in sound.
Does anyone have any ideas?
I don't think this has anything to do with the sound library I use; rather, it's to do with maths or geometry, etc.
I thought about creating multiple copies of the sound, one for each position, but people say that's a really bad idea.
So what do you suggest?
Thanks.
I would create a set of audio filters that get applied to the sound of any object to indicate attributes like its size and whether it is moving towards or away from the point of view.
Experiment with defining these filters to modulate the volume up and down, or perhaps to modulate the frequency of the object's audio.
In effect, each filter applies an agreed-upon audio texture to the object's sound. A sketch of the volume-modulation idea follows.
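For example, a tremolo-style filter can scale sample amplitude with a low-frequency oscillator, so a large object might get a slower, deeper modulation than a small one. This is a minimal sketch on raw PCM samples; the class and parameter names are illustrative, not from any particular audio library:

public class TremoloFilter {
    // Illustrative names; not from any particular audio library.
    private final double lfoHz;      // modulation rate, e.g. slower for bigger objects
    private final double depth;      // 0 = no effect, 1 = full amplitude swing
    private final double sampleRate; // samples per second of the PCM data

    public TremoloFilter(double lfoHz, double depth, double sampleRate) {
        this.lfoHz = lfoHz;
        this.depth = depth;
        this.sampleRate = sampleRate;
    }

    // Applies the modulation in place to mono PCM samples in [-1, 1].
    public void process(float[] samples) {
        for (int i = 0; i < samples.length; i++) {
            double lfo = Math.sin(2.0 * Math.PI * lfoHz * i / sampleRate);
            // Map the LFO from [-1, 1] to a gain in [1 - depth, 1].
            double gain = 1.0 - depth * 0.5 * (1.0 - lfo);
            samples[i] *= gain;
        }
    }
}

A wall-sized object could then get a distinct depth/rate, so the listener learns to associate that texture with large objects.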

Render loop vs. explicitly calling update method

I am working on a 3D simulation program using OpenGL which uses a render loop with a fixed framerate to keep the screen updated as the world changes. Standard procedure really, and for a typical video game this is certainly the best approach (I originally took this code from an OpenGL game tutorial). But for me, the 3D scene will not be changing as rapidly and unpredictably as in a computer game. The 3D scene itself can change from time to time, but in general it won't change between render calls (it's more of a visualisation tool for geometric problems). The user will be able to control the position/orientation of the camera, but there will be times when the camera won't move for several seconds or minutes (potentially hundreds of render calls). Since the 3D scene is likely to be static for the majority of the time, I wonder if I really need a continuous render loop...?
My thinking is that I will remove the automatic render loop and instead explicitly call my update method when either:
The 3D scene changes (very rare)
The camera moves (somewhat rare)
As I will be using this largely for research purposes, the scene/camera is likely to stay in one state for several minutes at a time and it seems silly to be continuously updating the frame buffer when it's not changing.
My question then is, is this a good approach? All the online tutorials for 3D graphics rendering seem to deal with game design but that's not really my requirement. In other words, what are the pros and cons of using a render loop vs. manually calling "update()" whenever something changes?
Thanks
There's no problem with this approach; in fact many 3D apps, like 3ds Max, use explicit rendering. You just pick what is better for your needs: in most games the scene changes every frame, so it's better to have an update loop, but if you were making, say, a chess game with no animated UI, you could render explicitly, only when the scene changes.
For apps with rare changes, like 3ds Max or Blender, it is better to call rendering only on change. This way you save CPU/GPU time, but also power, and your PC doesn't heat up as much.
With explicit rendering you can also use performance tricks, like drawing a simplified scene while the camera moves, for better performance. When the camera stops, you render the full scene once more in the background and replace the low-quality rendering with the new one. A minimal sketch of the render-on-demand pattern is below.
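As a concrete illustration, here is a minimal sketch of the dirty-flag pattern in Java; the names are placeholders, and in a real app the waiting would typically be handled by your windowing toolkit's event loop rather than wait/notify.

public class OnDemandRenderer {
    private final Object lock = new Object();
    private boolean dirty = true; // render the first frame

    // Call this whenever the scene or the camera changes.
    public void requestRender() {
        synchronized (lock) {
            dirty = true;
            lock.notifyAll();
        }
    }

    // Runs on the render thread instead of a fixed-rate loop.
    public void renderLoop(Runnable drawScene) throws InterruptedException {
        while (true) {
            synchronized (lock) {
                while (!dirty) {
                    lock.wait(); // sleep until something changes
                }
                dirty = false;
            }
            drawScene.run(); // redraw exactly once per batch of changes
        }
    }
}

Hooking requestRender() into the camera and scene-change handlers gives exactly the two triggers listed in the question.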

What are the common reasons for games made with Phaser slowing down for 2-3 seconds in the middle of gameplay?

I wrote a game for desktop using Phaser. I followed all their guidelines regarding freeing memory and destroying objects after completing a state, but I can't understand why the game jerks for 2-3 seconds at a time throughout gameplay (especially the tile sprite). I want to know what the other reasons might be.
From my experience, there are a few things I've noticed that make a Phaser game slow, especially on mobile devices.
tileSprite: as you mention, it is very slow, and to be honest I don't know why. I created a blank game and tested it: FPS = 60. Then I drew a simple tile sprite:
game.add.tileSprite(0, 0, worldWidth, worldHeight, key);
FPS = 30!
So I replaced it with one big sprite and tested again: FPS = 45 to 50. That's OK, I can live with that.
Bitmap fonts: also heavy; don't use them a lot.
Loops inside the update function also drop the FPS.
p2 physics: calling a lot of collide functions with a lot of bodies is costly (destroy a physics body as soon as you are done with it).
Particle systems: even a simple particle effect can reduce the FPS by more than 10.
Phaser is nice and easy, but the performance side needs a lot of work.
EDIT
I tested the Pixi tile sprite and it is fast as a leopard: FPS = 60 and sometimes more than that. I recommend using the Pixi tile sprite.
Profile it using Chrome and see. If it's a function, that will show it. If it's lagging while rendering, it will show spikes during paint operations. It could be anything, though: garbage collection, audio decoding (a common hidden frame-rate killer), things you thought were destroyed but weren't really, excessive texture loads on the GPU, and so on.

How to get unity to stream a camera for a Google Cardboard project?

I am trying to get Unity (version 5.0.1f1) to stream a live feed from a camera for a Google Cardboard project at my university. The live feed I plan to use comes from a single GoPro Hero 3 camera, whose image I plan to duplicate.
I just wanted to know if this is a feasible idea with Unity or any other program; any help would be greatly appreciated.
Thanks,
Matthew
I have not tried this, but if you can get the GoPro to act as a WebCam, you can try using a WebCamTexture in Unity.
Getting GoPro to act as a webcam:
https://www.youtube.com/watch?v=ltxPZuIC6mk
Be sure to read the comment by "webzkey", who explains how to do it.
Unity WebCamTexture:
http://docs.unity3d.com/ScriptReference/WebCamTexture.html
No guarantees this will work, but worth a shot.
Note: you do not necessarily have to duplicate the image. Just put the texture on a plane and place that in front of the camera at a comfortable distance.

Request for an approach recommendation for object identification

Equipment
Windows 7, OpenCV 2.3.1, Visual Studio C++ 2010 Express, and, eventually, any digital video cameras needed, lenses (?)
Project
I want to build a machine to identify characteristics of the flight of a baseball my son hits to the outfield (length, direction, height, etc.) in real time.
Solution description
I will have two fixed digital video cameras observe the flight of the ball and will analyze those video streams with OpenCV to locate and track the ball.
OpenCV methods
There are three methods I've read about and/or seen to identify a ball:
circle detection from edges
circle detection of blobs in a color range (orange ball and tennis ball examples)
moving circle blob detection by frame differencing (car and people identifying and tracking examples)
I have done the first (cvtColor, GaussianBlur, Canny, HoughCircles), but only well enough to get it to work against certain color backgrounds. I started the second, but before spending days making it work I realized I don't know what the best approach is. It seems to someone with no image analysis experience (me) that with method 1 my PC could have difficulty finding the right edges, since the weather and background will change from game to game. Method 2 could be difficult because there may be several blobs in the foreground (players' white uniforms, bases) and the background (white lettering or backgrounds on signs) that are also baseball white, and because the ball's white would change as the sun went down or the ball got dirty. I think method 3 (sketched below) is the best way to go, but I don't want to spend a lot of time making it work (my early attempts failed) only to learn its shortcomings for tracking a baseball after I had it functioning.
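To make method 3 concrete, here is a minimal sketch of frame differencing followed by contour filtering, written against OpenCV's Java bindings (the equivalent C++ functions carry the same names); every threshold in it is a guess that would need tuning on real footage:

import java.util.ArrayList;
import java.util.List;

import org.opencv.core.Core;
import org.opencv.core.Mat;
import org.opencv.core.MatOfPoint;
import org.opencv.core.MatOfPoint2f;
import org.opencv.core.Point;
import org.opencv.core.Size;
import org.opencv.imgproc.Imgproc;

public class BallByFrameDiff {
    // Finds the most circular moving blob between two consecutive grayscale
    // frames; returns null if nothing plausible moved. The threshold and
    // area bounds are guesses to tune, not known-good values.
    public static Point findMovingBall(Mat prevGray, Mat currGray) {
        Mat diff = new Mat();
        Core.absdiff(prevGray, currGray, diff);      // pixels that changed
        Imgproc.GaussianBlur(diff, diff, new Size(5, 5), 0);
        Imgproc.threshold(diff, diff, 25, 255, Imgproc.THRESH_BINARY);

        List<MatOfPoint> contours = new ArrayList<MatOfPoint>();
        Imgproc.findContours(diff, contours, new Mat(),
                Imgproc.RETR_EXTERNAL, Imgproc.CHAIN_APPROX_SIMPLE);

        Point best = null;
        double bestScore = 0;
        for (MatOfPoint c : contours) {
            double area = Imgproc.contourArea(c);
            if (area < 10 || area > 2000) continue;  // reject noise and players
            Point center = new Point();
            float[] radius = new float[1];
            Imgproc.minEnclosingCircle(new MatOfPoint2f(c.toArray()), center, radius);
            // Circularity: contour area vs. area of its enclosing circle.
            double circularity = area / (Math.PI * radius[0] * radius[0]);
            if (circularity > bestScore) {
                bestScore = circularity;
                best = center;
            }
        }
        return best;
    }
}

Run per camera, the two 2D tracks could then be triangulated into the 3D trajectory.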
The question
Which of 1-3 or 4, 5, 6 (I'm sure there are other methods you know of that I don't) is the most appropriate approach in OpenCV to learn characteristics about the 3D flight (distance, height, direction, etc.) of a baseball hit to the outfield?
(I'm expecting to need to write the code myself but I wouldn't turn down portions of the program that are sent to me.)
Thanks for any advice.
