I'm building an application with the Kinect SDK 1.8, OpenGL, and DirectShow.
My objective is to extract the human with the Kinect background removal API and add an AVI video's frames as the background, using DirectShow's IVMR9WindowlessControl9::GetCurrentImage. Then I draw about 1000 partially transparent square textures to make a fire (particle dynamics).
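For reference, I grab the video frame like this (error handling trimmed; windowlessControl is my IVMR9WindowlessControl9 pointer):

BYTE* dib = NULL;
HRESULT hr = windowlessControl->GetCurrentImage(&dib);  // returns a packed DIB
if (SUCCEEDED(hr))
{
    // ... copy the pixel bits out of the DIB into my background buffer ...
    CoTaskMemFree(dib);  // the caller owns the returned buffer
}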
I fill the background with the video frame based on alpha: for every pixel of the extracted image whose alpha is 0, I copy the corresponding video frame pixel into the output.
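The compositing itself is a per-pixel loop, roughly like this (buffer names are illustrative; I assume 32-bit BGRA with alpha in the fourth byte):

for (int i = 0; i < width * height; ++i)
{
    const BYTE alpha = playerPixels[i * 4 + 3];             // BGRA: alpha is byte 3
    const BYTE* src  = (alpha == 0) ? &videoPixels[i * 4]   // background: video frame
                                    : &playerPixels[i * 4]; // foreground: extracted human
    memcpy(&outPixels[i * 4], src, 4);
}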
When the Kinect doesn't detect a human, so the video frame fills the whole background, the fire comes out very well.
The problem is this: when the Kinect detects a human, and the video frame fills the background everywhere except the human area, the fire still appears, but only a few particles make up the flame. Sometimes particles that were hidden while the human was detected show up and then disappear a moment later.
Once the Kinect loses the human again, the hidden particles reappear and everything runs very well.
I want to create a mobile game using Android Studio.
It's a pixel-art game, and I'd like to know how to handle the app/image resolution so that the images neither end up too small nor lose quality.
I tried changing the size in the layout XML, but it comes out different on the device.
I simply used an ImageView component: the image loaded, but not the way I wanted. When I stretched it, it lost quality; the source image is quite small, as it is a pixel-art game.
I'm writing a DirectX 11 OpenXR app. OpenXR creates the DX11 swapchain with a format of DXGI_FORMAT_R8G8B8A8_UNORM_SRGB (the device doesn't support non-sRGB formats).
The problem is that I need to render a video texture, and IMFMediaEngine::TransferVideoFrame() won't work unless the texture uses a non-sRGB format such as DXGI_FORMAT_R8G8B8A8_UNORM. If I use the UNORM format for the video frame texture, the transfer works.
In the headset, however, the colors of that texture are off: they are too bright. I suspect that converting them from sRGB to linear inside the pixel shader would make them look the way they are supposed to, but I'm not sure what is actually happening.
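For reference, the conversion I have in mind is the standard sRGB transfer function; a plain C++ version of the per-channel math I would replicate in the pixel shader:

#include <cmath>

// IEC 61966-2-1 sRGB-to-linear transfer function, applied per channel.
float SrgbToLinear(float s)
{
    return (s <= 0.04045f) ? s / 12.92f
                           : std::pow((s + 0.055f) / 1.055f, 2.4f);
}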
I'm a bit confused about how to handle this situation: OpenXR creates the sRGB swapchain and does the presenting, but I have to sample from UNORM textures.
Can anyone provide guidance?
I have followed the RajawaliVuforia tutorial and integrated Rajawali with Vuforia CloudReco. I can get the 3D model to show, but it is not positioned at the center of the target image, and if I move the camera closer or higher, the model drifts out of the target image. Can someone let me know what the issue could be?
Vuforia passes the position (Vector3) and orientation (Quaternion) to Rajawali, which uses them to position and rotate the model. This can interfere with animations applied to the model: if you're animating the model or setting its position manually, you'll get unpredictable results, because the position is set twice on each frame.
The way to fix this is to put your 3D model in a container (an empty BaseObject3D). Vuforia's position and orientation are then applied to the container rather than to your model, so you can animate the model without getting unpredictable results.
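The pattern is plain scene-graph indirection. A tiny generic sketch (C++ with made-up types for illustration, not Rajawali's API):

#include <cstdio>

// The tracker writes only the container's transform and the animation writes
// only the model's local transform, so neither overwrites the other.
struct Node { float y; };   // a single axis stands in for a full transform

int main()
{
    Node container = { 0.0f };   // empty parent: receives the Vuforia pose
    Node model     = { 0.0f };   // child: animated independently

    for (int frame = 0; frame < 3; ++frame)
    {
        container.y = 1.5f;          // tracker pose (stubbed as a constant)
        model.y     = 0.1f * frame;  // animation, never touched by the tracker
        std::printf("world y = %.2f\n", container.y + model.y);  // composed
    }
    return 0;
}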
I have drawn to a Canvas and I want to know how to get the color of a pixel on that Canvas.
Create a mutable Image the same size as your Canvas. Then, whatever operations you perform on the Canvas's Graphics object, perform the same ones on the Image's Graphics object.
Finally, get the pixel data from the Image using getRGB(); it will match what is on the Canvas.
Unfortunately, you can't do it directly. The Graphics class, which is used to draw to a Canvas, is for painting only; it can't give you any information about the pixels.
If you are targeting a platform that supports the NokiaUI API, you can use DirectGraphics#getPixels to read pixel data. On mobile platforms with graphics-accelerator hardware, reading pixels back tends to be slow, so use this very sparingly.
I am currently working on a game in SDL which has destructible terrain. At the moment the terrain is one large (5000×500, for testing) randomly generated bitmap.
Each frame, the main surface is cleared and the terrain bitmap is blitted onto it. The current resolution is 1200×700, so at most points roughly 1200×500 pixels of terrain are visible.
Now the problem: the FPS is already dropping! I thought a single bitmap shouldn't have any noticeable effect, but I'm already down to ~24 FPS.
Why is blitting & drawing a bitmap of that size so slow?
Am I taking the wrong approach to destructible terrain?
How do games like Worms do this? Their FPS seems really high, even though there are clearly a lot of pixels being drawn.
Whenever you initialize a surface, do it the following way:
#include <SDL/SDL.h>
#include <SDL/SDL_image.h>   /* IMG_Load() comes from the SDL_image library */

SDL_Surface* mySurface;
SDL_Surface* tempSurface;

/* IMG_Load() handles JPG/PNG; for plain BMPs, SDL_LoadBMP() works as well. */
tempSurface = IMG_Load("./path/to/image/image.jpg_or_whatever");

/* Convert once to the display's pixel format so later blits are cheap. */
mySurface = SDL_DisplayFormat(tempSurface);
SDL_FreeSurface(tempSurface);
SDL_DisplayFormat() converts the pixel format of your surface to the format the video surface uses. If you don't do this up front, SDL performs the conversion every time the surface is blitted.
And always remember: blit only the parts that are actually visible to the player.
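For example, a camera-clipped blit would look something like this (the surface and camera variables are illustrative):

SDL_Rect src = { cameraX, cameraY, 1200, 700 };  /* visible window into the terrain */
SDL_Rect dst = { 0, 0, 0, 0 };                   /* w/h are ignored by SDL_BlitSurface */
SDL_BlitSurface(terrainSurface, &src, screenSurface, &dst);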
That's my first guess as to why you're having performance problems. Post your code or ask more specific questions if you want more tips. Good luck with your game.
If you redraw the whole screen every frame, you will always get a bad FPS. Redraw only the parts of the screen that have changed. You can also try SDL_HWSURFACE to get a hardware surface, but it isn't supported on every graphics card.
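A minimal sketch of both ideas (SDL 1.2 API; the rectangle and the terrain surface are illustrative):

/* Ask for a hardware surface at startup; SDL falls back to software if needed. */
SDL_Surface* screen = SDL_SetVideoMode(1200, 700, 32, SDL_HWSURFACE);

/* Each frame, push only the rectangles that actually changed. */
SDL_Rect dirty = { 64, 64, 32, 32 };   /* e.g. one damaged terrain chunk */
SDL_BlitSurface(terrain, &dirty, screen, &dirty);
SDL_UpdateRects(screen, 1, &dirty);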
2D in SDL is pretty slow and there isn't much you can do to make it faster (on Windows, at least, it uses GDI for drawing by default). Your options are:
Go OpenGL and start using textured quads for sprites (see the sketch after this list).
Try SFML. It provides a hardware-accelerated 2D environment.
Use SDL 1.3. Grab a source snapshot; it is unstable and still under development, but hardware-accelerated 2D is supposed to be one of its main selling points.
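For the OpenGL route, the textured-quad idea boils down to this (immediate-mode GL for brevity; texId is assumed to be an already-loaded texture, and x, y, w, h are the sprite's screen rectangle):

glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, texId);
glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f); glVertex2f(x,     y);      /* top-left */
    glTexCoord2f(1.0f, 0.0f); glVertex2f(x + w, y);      /* top-right */
    glTexCoord2f(1.0f, 1.0f); glVertex2f(x + w, y + h);  /* bottom-right */
    glTexCoord2f(0.0f, 1.0f); glVertex2f(x,     y + h);  /* bottom-left */
glEnd();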