Kinect & Processing - Convert position of joint to mouse x and mouse y?

I'm currently using an Xbox Kinect model 1414 and Processing 2.2.1. I'm hoping to use the right hand as a mouse to guide a character across the screen.
I managed to draw an ellipse that follows the right hand joint on a Kinect skeleton. How can I figure out the position of that joint so that I can use it in place of mouseX and mouseY if needed?
Below is the code that will track your right hand and draw a red ellipse over it:
import SimpleOpenNI.*;
SimpleOpenNI kinect;
void setup()
{
// instantiate a new context
kinect = new SimpleOpenNI(this);
kinect.setMirror(!kinect.mirror());
// enable depthMap generation
kinect.enableDepth();
// enable skeleton generation for all joints
kinect.enableUser();
smooth();
noStroke();
// create a window the size of the depth information
size(kinect.depthWidth(), kinect.depthHeight());
}
void draw()
{
// update the camera...must do
kinect.update();
background(0);
// draw depth image (optional) - it must be drawn after background(), or it gets cleared
image(kinect.depthImage(), 0, 0);
// check if the skeleton is being tracked for user 1 (the first user detected)
if (kinect.isTrackingSkeleton(1))
{
int joint = SimpleOpenNI.SKEL_RIGHT_HAND;
// draw a dot on their joint, so they know what's being tracked
drawJoint(1, joint);
// (currently unused) reference points in world space
PVector point1 = new PVector(-500, 0, 1500);
PVector point2 = new PVector(500, 0, 700);
}
}
///////////////////////////////////////////////////////
void drawJoint(int userID, int jointId) {
// make a vector to store the joint position
PVector jointPosition = new PVector();
// put the position of the tracked joint into that vector
kinect.getJointPositionSkeleton(userID, jointId, jointPosition);
// convert the detected hand position to "projective" coordinates that will match the depth image
PVector convertedJointPosition = new PVector();
kinect.convertRealWorldToProjective(jointPosition, convertedJointPosition);
// and display it
fill(255, 0, 0);
float ellipseSize = map(convertedJointPosition.z, 700, 2500, 50, 1);
ellipse(convertedJointPosition.x, convertedJointPosition.y, ellipseSize, ellipseSize);
}
//////////////////////////// Event-based Methods
void onNewUser(SimpleOpenNI curContext, int userId)
{
println("onNewUser - userId: " + userId);
println("\tstart tracking skeleton");
curContext.startTrackingSkeleton(userId);
}
void onLostUser(SimpleOpenNI curContext, int userId)
{
println("onLostUser - userId: " + userId);
}
Any links or help would be much appreciated, thanks!

In your case I would recommend that you use the coordinates of the right hand joint. This is how you get them:
foreach (Skeleton skeleton in skeletons) {
Joint RightHand = skeleton.Joints[JointType.HandRight];
double rightX = RightHand.Position.X;
double rightY = RightHand.Position.Y;
double rightZ = RightHand.Position.Z;
}
Be aware that we are working in 3 dimensions, so you will have x, y and z coordinates.
FYI: you will have to insert these lines of code in the SkeletonFrameReady event handler.
If you still want the circle around it, have a look at the Skeleton Basics WPF example in the Kinect SDK.
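For illustration, here is a minimal sketch of how this could look inside that handler (hedged: it assumes a started KinectSensor field named sensor with the skeleton stream enabled, and the CoordinateMapper call requires Kinect for Windows SDK 1.6 or later):
private void SensorSkeletonFrameReady(object sender, SkeletonFrameReadyEventArgs e)
{
    using (SkeletonFrame frame = e.OpenSkeletonFrame())
    {
        if (frame == null) return;
        Skeleton[] skeletons = new Skeleton[frame.SkeletonArrayLength];
        frame.CopySkeletonDataTo(skeletons);
        foreach (Skeleton skeleton in skeletons)
        {
            if (skeleton.TrackingState != SkeletonTrackingState.Tracked) continue;
            // grab the right hand joint and map it to 2D depth-image coordinates,
            // which you can then use in place of mouseX/mouseY
            SkeletonPoint hand = skeleton.Joints[JointType.HandRight].Position;
            DepthImagePoint p = sensor.CoordinateMapper.MapSkeletonPointToDepthPoint(
                hand, DepthImageFormat.Resolution640x480Fps30);
            int handX = p.X;
            int handY = p.Y;
        }
    }
}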
Does this help you?

It's slightly unclear what you're trying to achieve.
If you simply need the position of the hand in 2D screen coordinates, the code you posted already includes this:
kinect.getJointPositionSkeleton() retrieves the 3D coordinates
kinect.convertRealWorldToProjective() converts them to 2D screen coordinates.
If you want to be able to swap between Kinect-tracked hand coordinates and mouse coordinates, you can store the PVector used in the 2D conversion in a variable visible to the whole sketch, and update it from the Kinect skeleton when it is being tracked or from the mouse otherwise:
import SimpleOpenNI.*;
SimpleOpenNI kinect;
PVector user1RightHandPos = new PVector();
float ellipseSize;
void setup()
{
// instantiate a new context
kinect = new SimpleOpenNI(this);
kinect.setMirror(!kinect.mirror());
// enable depthMap generation
kinect.enableDepth();
// enable skeleton generation for all joints
kinect.enableUser();
smooth();
noStroke();
// create a window the size of the depth information
size(kinect.depthWidth(), kinect.depthHeight());
}
void draw()
{
// update the camera...must do
kinect.update();
background(0);
// draw depth image (optional) - it must be drawn after background(), or it gets cleared
image(kinect.depthImage(), 0, 0);
// check if the skeleton is being tracked for user 1 (the first user detected)
if (kinect.isTrackingSkeleton(1))
{
updateRightHand2DCoords(1, SimpleOpenNI.SKEL_RIGHT_HAND);
ellipseSize = map(user1RightHandPos.z, 700, 2500, 50, 1);
}else{//if the skeleton isn't tracked, use the mouse
user1RightHandPos.set(mouseX,mouseY,0);
ellipseSize = 20;
}
//draw ellipse regardless of the skeleton tracking or mouse mode
fill(255, 0, 0);
ellipse(user1RightHandPos.x, user1RightHandPos.y, ellipseSize, ellipseSize);
}
///////////////////////////////////////////////////////
void updateRightHand2DCoords(int userID, int jointId) {
// make a vector to store the joint position
PVector jointPosition = new PVector();
// put the position of the tracked joint into that vector
kinect.getJointPositionSkeleton(userID, jointId, jointPosition);
// convert the detected hand position to "projective" coordinates that will match the depth image
user1RightHandPos.set(0,0,0);//reset the 2D hand position before OpenNI conversion from 3D
kinect.convertRealWorldToProjective(jointPosition, user1RightHandPos);
}
//////////////////////////// Event-based Methods
void onNewUser(SimpleOpenNI curContext, int userId)
{
println("onNewUser - userId: " + userId);
println("\tstart tracking skeleton");
curContext.startTrackingSkeleton(userId);
}
void onLostUser(SimpleOpenNI curContext, int userId)
{
println("onLostUser - userId: " + userId);
}
Optionally, you can use a boolean to swap between mouse/Kinect mode when testing.
If you need the mouse coordinates simply to test without having to get in front of the Kinect all the time, I recommend having a look at the RecorderPlay example (via Processing > File > Examples > Contributed Libraries > SimpleOpenNI > OpenNI > RecorderPlay). OpenNI has the ability to record a scene (including depth data), which makes testing much simpler: record an .oni file with the most common interactions you're aiming for, then re-use the recording while developing.
All it takes to use the .oni file is a different constructor signature for SimpleOpenNI:
kinect = new SimpleOpenNI(this,"/path/to/yourRecordingHere.oni");
One caveat to keep in mind: the depth is stored at half the resolution (so the coordinates will need to be doubled to be on par with the realtime version).

Related

How to draw an OCX to a CBitmap (MFC, c++)

How can I draw an OCX (I do have the sources) to a CBitmap object or something alike?
Background: my client creates PDF documents, and part of these documents is output from an OCX. The PDF library interface has a method to put an image from a CBitmap object onto the PDF page.
So what I want to do is let the program create a CBitmap object, pass it to the OCX to let it draw its content onto it, and then pass the CBitmap to the PDF library to get it into the document.
So the main question is: how do I draw my OCX into a CBitmap object?
I'm using Visual C++, Windows, MFC/ATL.
Thanks a lot
Actually I didn't manage to render the OCX to a CBitmap directly (just got a black box drawn), but rendering into an ATL::CImage and making a CBitmap out of it worked:
ATL::CImage* CPrnBasePrinter::DrawBeamerToImage(CSPCListView* pListViewWithBeamer, const CRect& rect4Beamer)
{
ASSERT(pListViewWithBeamer != nullptr);
auto* pRetVal = new CImage();
pRetVal->Create(rect4Beamer.Width(), rect4Beamer.Height(), 24);
HDC hdcImage = pRetVal->GetDC();
//Draw Control to CImage
pListViewWithBeamer->DrawBeamerToDC(HandleToLong(hdcImage),
rect4Beamer.left, rect4Beamer.top, rect4Beamer.right, rect4Beamer.bottom);
pRetVal->ReleaseDC();
return pRetVal;
}
void CPrnBasePrinter::DrawImageFromCImage(
const ATL::CImage* pImage, const CRect& rect) const
{
CBitmap* pbmp2Print = CBitmap::FromHandle(*pImage);
// Get the size of the bitmap
BITMAP bmpInfo;
pbmp2Print->GetBitmap(&bmpInfo);
//virtual - Draws the CBitmap to an Printer-DC or a PDF-Document
DrawImageFromLoadedBitmap(pbmp2Print, &bmpInfo, rect);
}
void CPrnBasePrinter::Draw()
{
//m_pListviewDataSource is an OCX capable of drawing itself into a given DC
ATL::CImage* pBeamerImage = DrawBeamerToImage(m_pListviewDataSource, CRect(0, 0, 100, 50));
if (pBeamerImage != nullptr){
DrawImageFromCImage(pBeamerImage, CRect(0, 0, 100, 50));
delete pBeamerImage;
}
}

Color of texture skybox unity

I'm working with google cardboard in unity.
In my main scene I have a skybox with an image as texture.
How can I get the color of the pixel I'm looking at?
The skybox is an element of mainCamera, which is a child of "Head".
I also put GvrReticle as a child of Head; is it useful for my purpose?
Thanks
Basically you wait for the end of the frame so that the camera has rendered. Then you read the rendered data into a texture and get the center pixel.
Edit: be aware that if you have a UI element rendered in the center, it will show the UI element's color, not the color behind it.
using System.Collections;
using UnityEngine;

// wrapped in a MonoBehaviour so the snippet compiles on its own (class name is arbitrary)
public class CenterPixelReader : MonoBehaviour
{
    private Texture2D tex;
    public Color center;

    void Awake()
    {
        StartCoroutine(GetCenterPixel());
    }

    private void CreateTexture()
    {
        tex = new Texture2D(1, 1, TextureFormat.RGB24, false);
    }

    private IEnumerator GetCenterPixel()
    {
        CreateTexture();
        while (true)
        {
            yield return new WaitForEndOfFrame();
            // read the single pixel at the screen center from the frame buffer
            tex.ReadPixels(new Rect(Screen.width / 2f, Screen.height / 2f, 1, 1), 0, 0);
            tex.Apply();
            center = tex.GetPixel(0, 0);
        }
    }
}
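For reference, a minimal (hypothetical) consumer of that value, assuming the snippet above is saved as CenterPixelReader.cs: attach both scripts to objects in the scene, assign the reference in the Inspector, and read the sampled color.
using UnityEngine;

// hypothetical example script: logs the color under the screen center each frame
public class GazeColorLogger : MonoBehaviour
{
    public CenterPixelReader reader; // assign in the Inspector

    void Update()
    {
        Debug.Log("Center pixel color: " + reader.center);
    }
}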

Phaser BitmapData Sprite Immovable on Drag

I am a noob with Phaser games in general, but I am trying to make a Scrabble-like game.
I made my tiles as BitmapData and added them to sprites. I want to be able to drag and drop them on my Scrabble board, but not be able to place one tile in a spot where another tile already is.
For debugging purposes I've just been trying to get the individual tiles to respect each other's physics when you drag and drop. The behavior I want is, when dragging, to bump into another tile with that tile "holding its ground" and the dragged tile unable to cross on top of the other tile. Currently the dragged tile just goes on top of other tiles. I've looked at a lot of examples and feel that my issue may be caused either by my sprite being made with BitmapData or by something around the drag event...
function makeTile (tileIndex, tile) {
var bmd = game.add.bitmapData(canvasZoom, canvasZoom);
// draw to the canvas context
bmd.ctx.beginPath();
bmd.ctx.rect(0, 0, canvasZoom, canvasZoom);
bmd.ctx.fillStyle = '#efefef';
bmd.ctx.fill();
bmd.ctx.fillStyle = '#234234';
bmd.ctx.font="20px Georgia";
bmd.ctx.fillText(tile.tileName, 7,23);
// use the bitmap data as the texture for the sprite
var tileSprite = game.make.sprite((tileIndex * canvasZoom) + canvasZoom, canvasZoom, bmd);
game.physics.arcade.enable([tileSprite]);
tileHandGroup.add(tileSprite);
tileSprite.inputEnabled = true;
var bounds = new Phaser.Rectangle(canvasZoom, canvasZoom, spriteWidth * canvasZoom, (spriteHeight * canvasZoom) + (canvasZoom * 2));
tileSprite.input.boundsRect = bounds;
tileSprite.name = 'tileSpriteName' + tileIndex;
tileSprite.input.enableDrag(true);
tileSprite.input.enableSnap(canvasZoom, canvasZoom, true, false);
tileSprite.immovable = true;
tileSprite.body.moves = false;
tileSprite.events.onDragStart.add(onDragStart, this);
tileSprite.events.onDragStop.add(onDragStop, this);
}
function firstTileHand() {
tileHandGroup = game.add.physicsGroup(Phaser.Physics.ARCADE);
game.physics.enable(tileHandGroup, Phaser.Physics.ARCADE);
tileHandGroup.enableBody = true;
tileHandGroup.name = 'tileHandGroup';
for (var i = 0; i < tiles.length; i++)
{
makeTile(i, tiles[i]);
}
}

C#/XNA/HLSL - Applying a pixel shader on 2D sprites affects the other sprites on the same render target

Background information
I have just started learning HLSL and decided to test what I have learned from the Internet by writing a simple 2D XNA 4.0 bullet-hell game.
I have written a pixel shader in order to change the color of bullets.
Here is the idea: the original texture of the bullet is mainly black, white and red. With the help of my pixel shader, bullets can be much more colorful.
But I'm not sure how and when the shader is applied to the spriteBatch in XNA 4.0, or when its effect ends. This may be the cause of the problem.
There were pass.Begin() and pass.End() in XNA 3.x, but pass.Apply() in XNA 4.0 confuses me.
In addition, this is the first time I have used a render target, which may also cause problems.
Symptom
It works, but only if there are bullets of the same color in the bullet list.
If bullets of different colors are rendered, it produces wrong colors.
It seems that the pixel shader is not applied to the bullet texture, but to the renderTarget, which contains all the rendered bullets.
For example:
Here I have some red bullets and blue bullets. The last created bullet is a blue one. It seems that the pixel shader has added blue to the red ones, making them blue-violet.
If I continuously create bullets, the red bullets appear to switch between red and blue-violet. (I believe the blue ones are also switching, but it's less obvious.)
Code
Since I am new to HLSL, I don't really know what I have to provide.
Here is everything I think might be related to the problem.
C# - Enemy bullet (or just Bullet):
protected SpriteBatch spriteBatch;
protected Texture2D texture;
protected Effect colorEffect;
protected Color bulletColor;
... // And some unrelated variables
public EnemyBullet(SpriteBatch spriteBatch, Texture2D texture, Effect colorEffect, BulletType bulletType, ...) // plus other data, like velocity
{
this.spriteBatch = spriteBatch;
this.texture = texture;
this.colorEffect = colorEffect;
if(bulletType == BulletType.ARROW_S)
{
bulletColor = Color.Red; // The bullet will be either red
}
else
{
bulletColor = Color.Blue; // or blue.
}
}
public void Update()
{
... // Update positions and other properties, but not the color.
}
public void Draw()
{
colorEffect.Parameters["DestColor"].SetValue(bulletColor.ToVector4());
int l = colorEffect.CurrentTechnique.Passes.Count();
for (int i = 0; i < l; i++)
{
colorEffect.CurrentTechnique.Passes[i].Apply();
spriteBatch.Draw(texture, Position, sourceRectangle, Color.White, (float)Math.PI - rotation_randian, origin, Scale, SpriteEffects.None, 0.0f);
}
}
C# - Bullet manager:
private Texture2D bulletTexture;
private List<EnemyBullet> enemyBullets;
private const int ENEMY_BULLET_CAPACITY = 10000;
private RenderTarget2D bulletsRenderTarget;
private Effect colorEffect;
...
public EnemyBulletManager()
{
enemyBullets = new List<EnemyBullet>(ENEMY_BULLET_CAPACITY);
}
public void LoadContent(ContentManager content, SpriteBatch spriteBatch)
{
bulletTexture = content.Load<Texture2D>(@"Textures\arrow_red2");
bulletsRenderTarget = new RenderTarget2D(spriteBatch.GraphicsDevice, spriteBatch.GraphicsDevice.PresentationParameters.BackBufferWidth, spriteBatch.GraphicsDevice.PresentationParameters.BackBufferHeight, false, SurfaceFormat.Color, DepthFormat.None);
colorEffect = content.Load<Effect>(@"Effects\ColorTransform");
colorEffect.Parameters["ColorMap"].SetValue(bulletTexture);
}
public void Update()
{
int l = enemyBullets.Count();
for (int i = 0; i < l; i++)
{
if (enemyBullets[i].IsAlive)
{
enemyBullets[i].Update();
}
else
{
enemyBullets.RemoveAt(i);
i--;
l--;
}
}
}
// This function is called before Draw()
public void PreDraw()
{
// spriteBatch.Begin() is called outside this class, for reference:
// spriteBatch.Begin(SpriteSortMode.Immediate, null);
spriteBatch.GraphicsDevice.SetRenderTarget(bulletsRenderTarget);
spriteBatch.GraphicsDevice.Clear(Color.Transparent);
int l = enemyBullets.Count();
for (int i = 0; i < l; i++)
{
if (enemyBullets[i].IsAlive)
{
enemyBullets[i].Draw();
}
}
spriteBatch.GraphicsDevice.SetRenderTarget(null);
}
public void Draw()
{
// Before this function is called,
// GraphicsDevice.Clear(Color.Black);
// is called outside.
spriteBatch.Draw(bulletsRenderTarget, Vector2.Zero, Color.White);
// spriteBatch.End();
}
// This function will be responsible for creating new bullets.
public EnemyBullet CreateBullet(EnemyBullet.BulletType bulletType, ...)
{
EnemyBullet eb = new EnemyBullet(spriteBatch, bulletTexture, colorEffect, bulletType, ...);
enemyBullets.Add(eb);
return eb;
}
HLSL - Effects\ColorTransform.fx
float4 DestColor;
texture2D ColorMap;
sampler2D ColorMapSampler = sampler_state
{
Texture = <ColorMap>;
};
struct PixelShaderInput
{
float2 TexCoord : TEXCOORD0;
};
float4 PixelShaderFunction(PixelShaderInput input) : COLOR0
{
float4 srcRGBA = tex2D(ColorMapSampler, input.TexCoord);
float fmax = max(srcRGBA.r, max(srcRGBA.g, srcRGBA.b));
float fmin = min(srcRGBA.r, min(srcRGBA.g, srcRGBA.b));
float delta = fmax - fmin;
float4 originalDestColor = float4(1, 0, 0, 1);
float4 deltaDestColor = originalDestColor - DestColor;
float4 finalRGBA = srcRGBA - (deltaDestColor * delta);
return finalRGBA;
}
technique Technique1
{
pass ColorTransform
{
PixelShader = compile ps_2_0 PixelShaderFunction();
}
}
I would appreciate it if anyone can help solve the problem. (Or optimize my shader. I really know very little about HLSL.)
In XNA 4 you should pass the effect directly to the SpriteBatch, as explained on Shawn Hargreaves' Blog.
That said, it seems to me like the problem is that after rendering your bullets to bulletsRenderTarget, you then draw that render target using the same spriteBatch with the last effect still active. That would explain why the entire image is painted blue.
A solution would be to use two Begin()/End() passes of SpriteBatch, one with the effect and one without, as sketched below. Or just don't use a separate RenderTarget to begin with, which seems pointless in this case.
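As a rough sketch of the two-pass suggestion (untested; it reuses the names from your code and the standard XNA 4.0 Begin overload, assumes EnemyBullet.Draw() no longer calls pass.Apply() itself, and omits the render-target switching):
// pass 1: draw the bullets with the effect attached to the batch;
// in Immediate mode each Draw is issued right away, so setting
// DestColor before each Draw still works per bullet
spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend,
    null, null, null, colorEffect);
foreach (EnemyBullet bullet in enemyBullets)
{
    if (bullet.IsAlive) bullet.Draw();
}
spriteBatch.End();

// pass 2: draw the render target with a plain batch so no effect touches it
spriteBatch.Begin();
spriteBatch.Draw(bulletsRenderTarget, Vector2.Zero, Color.White);
spriteBatch.End();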
I'm also very much a beginner with pixel shaders, so just my 2c.

Managed DirectX Postprocessing Fragment Shader rendering problem

I'm using Managed DirectX 2.0 with C# and I'm attempting to apply a fragment shader to a texture built by rendering the screen to a texture using the RenderToSurface helper class.
The code I'm using to do this is:
RtsHelper.BeginScene(RenderSurface);
device.Clear(ClearFlags.Target | ClearFlags.ZBuffer, Color.White, 1.0f, 0);
//pre-render shader setup
preProc.Begin(FX.None);
preProc.BeginPass(0);
//mesh drawing
mesh.DrawSubset(j);
preProc.CommitChanges();
preProc.EndPass();
preProc.End();
RtsHelper.EndScene(Filter.None);
This renders to my surface, RenderSurface, which is attached to a Texture object called RenderTexture.
Then I call the following code to render the surface to the screen, applying a second shader, "PostProc", to the rendered texture. This shader combines color values on a per-pixel basis and transforms the scene to grayscale. I'm following the tutorial here: http://rbwhitaker.wikidot.com/post-processing-effects
device.BeginScene();
{
using (Sprite sprite = new Sprite(device))
{
sprite.Begin(SpriteFlags.DoNotSaveState);
postProc.Begin(FX.None);
postProc.BeginPass(0);
sprite.Draw(RenderTexture, new Rectangle(0, 0, WINDOWWIDTH, WINDOWHEIGHT), new Vector3(0, 0, 0), new Vector3(0, 0, 0), Color.White);
postProc.CommitChanges();
postProc.EndPass();
postProc.End();
sprite.End();
}
}
device.EndScene();
device.Present();
this.Invalidate();
However, all I see is the original scene as rendered to the texture, unmodified by the second shader.
The FX file is below in case it's important.
//------------------------------ TEXTURE PROPERTIES ----------------------------
// This is the texture that Sprite will try to set before drawing
texture ScreenTexture;
// Our sampler for the texture, which is just going to be pretty simple
sampler TextureSampler = sampler_state
{
Texture = <ScreenTexture>;
};
//------------------------ PIXEL SHADER ----------------------------------------
// This pixel shader will simply look up the color of the texture at the
// requested point, and turns it into a shade of gray
float4 PixelShaderFunction(float2 TextureCoordinate : TEXCOORD0) : COLOR0
{
float4 color = tex2D(TextureSampler, TextureCoordinate);
float value = (color.r + color.g + color.b) / 3;
color.r = value;
color.g = value;
color.b = value;
return color;
}
//-------------------------- TECHNIQUES ----------------------------------------
// This technique is pretty simple - only one pass, and only a pixel shader
technique BlackAndWhite
{
pass Pass1
{
PixelShader = compile ps_1_1 PixelShaderFunction();
}
}
Fixed it. I was using the wrong flags for the post-processing shader initialization.
It was:
sprite.Begin(SpriteFlags.DoNotSaveState);
postProc.Begin(FX.None);
should be:
sprite.Begin(SpriteFlags.DoNotSaveState);
postProc.Begin(FX.DoNotSaveState);
Presumably this is because without FX.DoNotSaveState the effect saves the device state on Begin() and restores it on End(), and since the sprite batch is only flushed at sprite.End(), the pixel shader has already been unbound by the time the sprites are actually drawn.
