How do I draw a straight line on a UIImageView in Xamarin using C#? I need to draw the line and also calculate its length.
Please help me. Thanks in advance.
You can extend UIImageView and draw a line inside a method like so (note that UIGraphics.GetCurrentContext() only returns a valid context while the view is drawing, so call this from the view's Draw override):

public void DrawLine()
{
    CGContext context = UIGraphics.GetCurrentContext();
    context.SetLineWidth(4);
    UIColor.Clear.SetFill();
    UIColor.Black.SetStroke();

    currentPath = new CGPath();
    currentPath.AddLines(points.ToArray());

    context.AddPath(currentPath);
    context.DrawPath(CGPathDrawingMode.Stroke);
}
points is a List of PointF objects, and currentPath is a CGPath field on the view.
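As for the length of the line: that is just the Euclidean distance between the line's endpoints. A minimal sketch, assuming points holds the start and end point of your line (LineLength is a hypothetical helper, not part of any API):

// Hypothetical helper on the same view; assumes 'points' holds the
// line's start and end point in view coordinates.
double LineLength()
{
    PointF start = points[0];
    PointF end = points[points.Count - 1];
    float dx = end.X - start.X;
    float dy = end.Y - start.Y;
    return Math.Sqrt(dx * dx + dy * dy);
}

The result is in points (view coordinates), not in any physical unit.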
How can I draw an OCX (I do have the sources) to a CBitmap object or something alike?
Background: My client creates PDF documents, and part of these documents is output from an OCX. The PDF-lib interface has a method to put an image from a CBitmap object onto the PDF page.
So what I want to do is let the program create a CBitmap object, pass that to the OCX to let it draw its content onto it, and then pass the CBitmap to the PDF library to get it into the document.
So the main question is: how do I draw my OCX into a CBitmap object?
I'm using Visual C++, Windows, MFC/ATL.
Thanks a lot
Actually, I didn't manage to render the OCX to a CBitmap directly (I just got a black box drawn), but rendering into an ATL::CImage and making a CBitmap out of it worked:
ATL::CImage* CPrnBasePrinter::DrawBeamerToImage(CSPCListView* pListViewWithBeamer, const CRect& rect4Beamer)
{
    ASSERT(pListViewWithBeamer != nullptr);

    auto* pRetVal = new CImage();
    pRetVal->Create(rect4Beamer.Width(), rect4Beamer.Height(), 24);
    HDC hdcImage = pRetVal->GetDC();

    // Draw control to CImage
    pListViewWithBeamer->DrawBeamerToDC(HandleToLong(hdcImage),
        rect4Beamer.left, rect4Beamer.top, rect4Beamer.right, rect4Beamer.bottom);

    pRetVal->ReleaseDC();
    return pRetVal;
}
void CPrnBasePrinter::DrawImageFromCImage(
    const ATL::CImage* pImage, const CRect& rect) const
{
    CBitmap* pbmp2Print = CBitmap::FromHandle(*pImage);

    // Get the size of the bitmap
    BITMAP bmpInfo;
    pbmp2Print->GetBitmap(&bmpInfo);

    // Virtual - draws the CBitmap to a printer DC or a PDF document
    DrawImageFromLoadedBitmap(pbmp2Print, &bmpInfo, rect);
}
void CPrnBasePrinter::Draw()
{
    // m_pListviewDataSource is an OCX capable of drawing itself into a given DC
    ATL::CImage* pBeamerImage = DrawBeamerToImage(m_pListviewDataSource, CRect(0, 0, 100, 50));

    if (pBeamerImage != nullptr) {
        DrawImageFromCImage(pBeamerImage, CRect(0, 0, 100, 50));
        delete pBeamerImage;
    }
}
The solution for centering any subview within its parent is usually simple; however, it doesn't seem to work in my case.
I'm working with a UICollectionView and have added a Header class programmatically. I have this constructor, where I also try to center the label within the screen:
[Export("initWithFrame:")]
public Header(System.Drawing.RectangleF frame) : base(frame)
{
label = new UILabel
{
Frame = new System.Drawing.RectangleF(frame.Size.Width / 2, 50, 200, 50),
BackgroundColor = UIColor.Clear,
TextColor = UIColor.White,
Font = UIFont.FromName("HelveticaNeueLTStd-ThCn", 35f),
Text = DateTime.Now.ToString("Y")
};
AddSubview(label);
}
And I initialize the class inside the UICollectionViewSource's constructor like this:
public MyCollectionViewDataSource(MainController mainController, DateTime currentDate)
{
    try
    {
        controller = mainController;
        new Header(new RectangleF(0, 0, (float)mainController.View.Frame.Size.Width, 200));
    }
    catch (Exception ex)
    {
        Console.WriteLine(ex.Message + ex.StackTrace);
    }
}
What exactly am I missing? This usually works in other instances but seems to fail here.
This is what it looks like:
I found an explanation in iOS Layout Gotchas by Adam Kemp which helped me resolve this issue.
The first solution
One very common mistake I made was putting the layout code in the constructor instead of in its rightful place: the LayoutSubviews override.
Giving the label its frame in the constructor assumes a size that is fixed at construction time, but the actual size may change later depending on the screen size.
The second solution
He explains that:
Frame sets the position of a view within its parent while Bounds is in the coordinate system of the view itself (not its parent).
So, to center the UILabel, I used Bounds and Center together, and this worked for me.
[Export("initWithFrame:")]
public Header(CGRect bounds) : base(bounds)
{
label = new UILabel
{
BackgroundColor = UIColor.Clear,
TextColor = UIColor.White,
Font = UIFont.FromName("HelveticaNeueLTStd-ThCn", 35f),
Text = DateTime.Now.ToString("Y"),
TextAlignment = UITextAlignment.Center
};
rectangle = bounds;
AddSubview(label);
}
public override void LayoutSubviews()
{
base.LayoutSubviews();
label.Bounds = new CGRect (rectangle.Size.Width / 2, 50, 200, 50);
label.Center = new PointF((float)rectangle.Size.Width/2,50);
}
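For comparison, a sketch of the equivalent Frame-based layout (my own illustration, not from the linked article): for a plain label, Center = (Width/2, 50) with a 200x50 size is effectively the same as a frame whose origin is offset from the center by half the size:

public override void LayoutSubviews()
{
    base.LayoutSubviews();
    // Frame origin = Center - half the size: (Width/2 - 100, 50 - 25)
    label.Frame = new CGRect(rectangle.Size.Width / 2 - 100, 25, 200, 50);
}

Either way, the important part is that the layout is recomputed in LayoutSubviews, so it stays correct when the parent's size changes.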
I'm currently using an Xbox Kinect model 1414 and Processing 2.2.1. I'm hoping to use the right hand as a mouse to guide a character through the screen.
I managed to draw an ellipse that follows the right-hand joint on a Kinect skeleton. How can I figure out the position of that joint so that I can replace mouseX and mouseY with it if needed?
Below is the code that will track your right hand and draw a red ellipse over it:
import SimpleOpenNI.*;

SimpleOpenNI kinect;

void setup()
{
    // instantiate a new context
    kinect = new SimpleOpenNI(this);
    kinect.setMirror(!kinect.mirror());
    // enable depthMap generation
    kinect.enableDepth();
    // enable skeleton generation for all joints
    kinect.enableUser();
    smooth();
    noStroke();
    // create a window the size of the depth information
    size(kinect.depthWidth(), kinect.depthHeight());
}
void draw()
{
    // update the camera...must do
    kinect.update();
    // draw depth image...optional
    image(kinect.depthImage(), 0, 0);
    background(0);
    // check if the skeleton is being tracked for user 1 (the first user detected)
    if (kinect.isTrackingSkeleton(1))
    {
        int joint = SimpleOpenNI.SKEL_RIGHT_HAND;
        // draw a dot on the joint, so they know what's being tracked
        drawJoint(1, joint);

        PVector point1 = new PVector(-500, 0, 1500);
        PVector point2 = new PVector(500, 0, 700);
    }
}
///////////////////////////////////////////////////////
void drawJoint(int userID, int jointId) {
    // make a vector to store the joint position
    PVector jointPosition = new PVector();
    // put the position of the requested joint into that vector
    kinect.getJointPositionSkeleton(userID, jointId, jointPosition);
    // convert the detected position to "projective" coordinates that will match the depth image
    PVector convertedJointPosition = new PVector();
    kinect.convertRealWorldToProjective(jointPosition, convertedJointPosition);
    // and display it
    fill(255, 0, 0);
    float ellipseSize = map(convertedJointPosition.z, 700, 2500, 50, 1);
    ellipse(convertedJointPosition.x, convertedJointPosition.y, ellipseSize, ellipseSize);
}
//////////////////////////// Event-based Methods

void onNewUser(SimpleOpenNI curContext, int userId)
{
    println("onNewUser - userId: " + userId);
    println("\tstart tracking skeleton");
    curContext.startTrackingSkeleton(userId);
}

void onLostUser(SimpleOpenNI curContext, int userId)
{
    println("onLostUser - userId: " + userId);
}
Any links or help would be much appreciated, thanks!
In your case I would recommend using the coordinates of the right-hand joint. This is how you get them:
foreach (Skeleton skeleton in skeletons)
{
    Joint rightHand = skeleton.Joints[JointType.HandRight];
    double rightX = rightHand.Position.X;
    double rightY = rightHand.Position.Y;
    double rightZ = rightHand.Position.Z;
}
Be aware that we are looking at three dimensions, so you will have an x, y and z coordinate.
FYI: You will have to insert these lines of code in the SkeletonFrameReady event handler.
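For context, a minimal sketch of such a handler, assuming the Kinect for Windows SDK 1.x (the sensor wiring shown in the comments is a hypothetical example):

private Skeleton[] skeletons = new Skeleton[0];

private void SensorSkeletonFrameReady(object sender, SkeletonFrameReadyEventArgs e)
{
    using (SkeletonFrame frame = e.OpenSkeletonFrame())
    {
        if (frame == null) return; // no new skeleton data this frame
        skeletons = new Skeleton[frame.SkeletonArrayLength];
        frame.CopySkeletonDataTo(skeletons);
    }
    // now iterate over 'skeletons' as shown above
}

// Hypothetical wiring during initialization:
// sensor.SkeletonStream.Enable();
// sensor.SkeletonFrameReady += SensorSkeletonFrameReady;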
If you still want the circle around it, have a look at the Skeleton-Basics WPF example in the Kinect SDK samples.
Does this help you?
It's slightly unclear what you're trying to achieve.
If you simply need the position of the hand in 2D screen coordinates, the code you posted already includes this:
kinect.getJointPositionSkeleton() retrieves the 3D coordinates
kinect.convertRealWorldToProjective() converts them to 2D screen coordinates.
If you want to be able to swap between Kinect-tracked hand coordinates and mouse coordinates, you can store the PVector used in the 2D conversion as a sketch-wide variable, which you update from the Kinect skeleton if it is being tracked, or from the mouse otherwise:
import SimpleOpenNI.*;

SimpleOpenNI kinect;

PVector user1RightHandPos = new PVector();
float ellipseSize;

void setup()
{
    // instantiate a new context
    kinect = new SimpleOpenNI(this);
    kinect.setMirror(!kinect.mirror());
    // enable depthMap generation
    kinect.enableDepth();
    // enable skeleton generation for all joints
    kinect.enableUser();
    smooth();
    noStroke();
    // create a window the size of the depth information
    size(kinect.depthWidth(), kinect.depthHeight());
}
void draw()
{
    // update the camera...must do
    kinect.update();
    // draw depth image...optional
    image(kinect.depthImage(), 0, 0);
    background(0);
    // check if the skeleton is being tracked for user 1 (the first user detected)
    if (kinect.isTrackingSkeleton(1))
    {
        updateRightHand2DCoords(1, SimpleOpenNI.SKEL_RIGHT_HAND);
        ellipseSize = map(user1RightHandPos.z, 700, 2500, 50, 1);
    } else { // if the skeleton isn't tracked, use the mouse
        user1RightHandPos.set(mouseX, mouseY, 0);
        ellipseSize = 20;
    }
    // draw the ellipse regardless of skeleton-tracking or mouse mode
    fill(255, 0, 0);
    ellipse(user1RightHandPos.x, user1RightHandPos.y, ellipseSize, ellipseSize);
}
///////////////////////////////////////////////////////
void updateRightHand2DCoords(int userID, int jointId) {
    // make a vector to store the joint position
    PVector jointPosition = new PVector();
    // put the position of the requested joint into that vector
    kinect.getJointPositionSkeleton(userID, jointId, jointPosition);
    // convert the detected position to "projective" coordinates that will match the depth image
    user1RightHandPos.set(0, 0, 0); // reset the 2D hand position before the OpenNI conversion from 3D
    kinect.convertRealWorldToProjective(jointPosition, user1RightHandPos);
}
//////////////////////////// Event-based Methods

void onNewUser(SimpleOpenNI curContext, int userId)
{
    println("onNewUser - userId: " + userId);
    println("\tstart tracking skeleton");
    curContext.startTrackingSkeleton(userId);
}

void onLostUser(SimpleOpenNI curContext, int userId)
{
    println("onLostUser - userId: " + userId);
}
Optionally, you can use a boolean to swap between mouse and Kinect mode when testing.
If you need the mouse coordinates simply to test without having to get in front of the Kinect all the time, I recommend having a look at the RecorderPlay example (via Processing > File > Examples > Contributed Libraries > SimpleOpenNI > OpenNI > RecorderPlay). OpenNI has the ability to record a scene (including depth data), which makes testing simpler: record an .oni file with the most common interactions you're aiming for, then reuse the recording while developing.
All it would take to use the .oni file is using a different constructor signature for OpenNI:
kinect = new SimpleOpenNI(this,"/path/to/yourRecordingHere.oni");
One caveat to keep in mind: the depth is stored at half the resolution, so the coordinates will need to be doubled to be on par with the realtime version.
I want to have a background texture with 3 rectangles, and I want to create an animation with them (see the attached texture image).
The first rectangle is cut the proper way, but the two others are cut in a dumb way (see the "proper way" and the two "dumb way" screenshots).
Here is my code.
public class MainMenu implements Screen {

    Texture background_main;
    TextureRegion[] background_textures;
    Animation background_animation;
    SpriteBatch batch;
    TextureRegion current_frame;
    float stateTime;
    BobDestroyer game;
    OrthographicCamera camera;

    public MainMenu(BobDestroyer game) {
        this.game = game;
    }

    @Override
    public void show() {
        camera = new OrthographicCamera();
        camera.setToOrtho(true, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
        batch = new SpriteBatch();
        background_main = new Texture(Gdx.files.internal("main_menu_screen/Background.png"));
        background_textures = new TextureRegion[3];
        for (int i = 0; i < 3; i++) {
            background_textures[i] = new TextureRegion(background_main, 0, 0 + 72 * i, 128, 72 + 72 * i);
        }
        background_animation = new Animation(2f, background_textures);
    }

    @Override
    public void render(float delta) {
        Gdx.gl.glClear(GL20.GL_COLOR_BUFFER_BIT | GL20.GL_DEPTH_BUFFER_BIT);
        stateTime += Gdx.graphics.getDeltaTime();
        current_frame = background_animation.getKeyFrame(stateTime, true);
        batch.begin();
        batch.draw(current_frame, 0, 0, Gdx.graphics.getWidth(), Gdx.graphics.getHeight());
        batch.end();
    }
}
If I understand you correctly, you are trying to create 3 TextureRegions of the same width/height? If so, your issue may be with:
new TextureRegion(background_main,0, 0+72*i, 128, 72+72*i)
I think you'd want:
new TextureRegion(background_main,0, 0+72*i, 128, 72)
since 128x72 is the width/height (not the x/y position) of each TextureRegion, and presumably you want them all the same height (72) rather than varying heights (72+72*i).
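For reference, that constructor is TextureRegion(texture, x, y, width, height), where the last two arguments are a size rather than a second corner, so with the corrected call each strip is a 128x72 region starting at y = 72 * i.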
Background information
I have just started learning HLSL and decided to test what I have learned from the Internet by writing a simple 2D XNA 4.0 bullet-hell game.
I have written a pixel shader in order to change the color of bullets.
Here is the idea: the original texture of the bullet is mainly black, white and red. With the help of my pixel shader, bullets can be much more colorful.
But I'm not sure how and when the shader is applied to the spriteBatch in XNA 4.0, and when its effect ends. This may be the cause of the problem.
There were pass.Begin() and pass.End() in XNA 3.x, but pass.Apply() in XNA 4.0 confuses me.
In addition, this is the first time I have used a render target, which may also cause problems.
Symptom
It works, but only if all bullets in the bullet list are the same color.
If bullets of different colors are rendered, it produces the wrong colors.
It seems that the pixel shader is not applied to the bullet texture, but to the renderTarget, which contains all the rendered bullets.
For example:
Here I have some red bullets and blue bullets. The last created bullet is a blue one. It seems that the pixel shader has added blue to the red ones, making them blue-violet.
If I continuously create bullets, the red bullets appear to switch between red and blue-violet. (I believe the blue ones are also switching, but it's less obvious.)
Code
Since I am new to HLSL, I don't really know what I need to provide.
Here is everything that I believe may be related to the problem.
C# - Enemy bullet (or just Bullet):
protected SpriteBatch spriteBatch;
protected Texture2D texture;
protected Effect colorEffect;
protected Color bulletColor;
... // And some unrelated variables

public EnemyBullet(SpriteBatch spriteBatch, Texture2D texture, Effect colorEffect, BulletType bulletType, ...) // and other data, like velocity
{
    this.spriteBatch = spriteBatch;
    this.texture = texture;
    this.colorEffect = colorEffect;

    if (bulletType == BulletType.ARROW_S)
    {
        bulletColor = Color.Red;  // The bullet will be either red
    }
    else
    {
        bulletColor = Color.Blue; // or blue.
    }
}
public void Update()
{
    ... // Update positions and other properties, but not the color.
}

public void Draw()
{
    colorEffect.Parameters["DestColor"].SetValue(bulletColor.ToVector4());

    int l = colorEffect.CurrentTechnique.Passes.Count();
    for (int i = 0; i < l; i++)
    {
        colorEffect.CurrentTechnique.Passes[i].Apply();
        spriteBatch.Draw(texture, Position, sourceRectangle, Color.White, (float)Math.PI - rotation_randian, origin, Scale, SpriteEffects.None, 0.0f);
    }
}
C# - Bullet manager:
private Texture2D bulletTexture;
private List<EnemyBullet> enemyBullets;
private const int ENEMY_BULLET_CAPACITY = 10000;
private RenderTarget2D bulletsRenderTarget;
private Effect colorEffect;
...

public EnemyBulletManager()
{
    enemyBullets = new List<EnemyBullet>(ENEMY_BULLET_CAPACITY);
}

public void LoadContent(ContentManager content, SpriteBatch spriteBatch)
{
    bulletTexture = content.Load<Texture2D>(@"Textures\arrow_red2");
    bulletsRenderTarget = new RenderTarget2D(spriteBatch.GraphicsDevice, spriteBatch.GraphicsDevice.PresentationParameters.BackBufferWidth, spriteBatch.GraphicsDevice.PresentationParameters.BackBufferHeight, false, SurfaceFormat.Color, DepthFormat.None);
    colorEffect = content.Load<Effect>(@"Effects\ColorTransform");
    colorEffect.Parameters["ColorMap"].SetValue(bulletTexture);
}
public void Update()
{
    int l = enemyBullets.Count();
    for (int i = 0; i < l; i++)
    {
        if (enemyBullets[i].IsAlive)
        {
            enemyBullets[i].Update();
        }
        else
        {
            enemyBullets.RemoveAt(i);
            i--;
            l--;
        }
    }
}
// This function is called before Draw()
public void PreDraw()
{
    // spriteBatch.Begin() is called outside this class, for reference:
    // spriteBatch.Begin(SpriteSortMode.Immediate, null);
    spriteBatch.GraphicsDevice.SetRenderTarget(bulletsRenderTarget);
    spriteBatch.GraphicsDevice.Clear(Color.Transparent);

    int l = enemyBullets.Count();
    for (int i = 0; i < l; i++)
    {
        if (enemyBullets[i].IsAlive)
        {
            enemyBullets[i].Draw();
        }
    }

    spriteBatch.GraphicsDevice.SetRenderTarget(null);
}
public void Draw()
{
    // Before this function is called,
    // GraphicsDevice.Clear(Color.Black);
    // is called outside.
    spriteBatch.Draw(bulletsRenderTarget, Vector2.Zero, Color.White);
    // spriteBatch.End();
}

// This function will be responsible for creating new bullets.
public EnemyBullet CreateBullet(EnemyBullet.BulletType bulletType, ...)
{
    EnemyBullet eb = new EnemyBullet(spriteBatch, bulletTexture, colorEffect, bulletType, ...);
    enemyBullets.Add(eb);
    return eb;
}
HLSL - Effects\ColorTransform.fx
float4 DestColor;

texture2D ColorMap;
sampler2D ColorMapSampler = sampler_state
{
    Texture = <ColorMap>;
};

struct PixelShaderInput
{
    float2 TexCoord : TEXCOORD0;
};

float4 PixelShaderFunction(PixelShaderInput input) : COLOR0
{
    float4 srcRGBA = tex2D(ColorMapSampler, input.TexCoord);

    // delta measures the saturation of the source pixel:
    // 0 for black/white/grey pixels, larger for strongly colored ones.
    float fmax = max(srcRGBA.r, max(srcRGBA.g, srcRGBA.b));
    float fmin = min(srcRGBA.r, min(srcRGBA.g, srcRGBA.b));
    float delta = fmax - fmin;

    // Shift the saturated (originally red) areas towards DestColor,
    // leaving the black and white parts of the texture untouched.
    float4 originalDestColor = float4(1, 0, 0, 1);
    float4 deltaDestColor = originalDestColor - DestColor;
    float4 finalRGBA = srcRGBA - (deltaDestColor * delta);

    return finalRGBA;
}

technique Technique1
{
    pass ColorTransform
    {
        PixelShader = compile ps_2_0 PixelShaderFunction();
    }
}
I would appreciate it if anyone could help solve the problem. (Or optimize my shader; I really know very little about HLSL.)
In XNA 4 you should pass the effect directly to the SpriteBatch, as explained on Shawn Hargreaves' Blog.
That said, it seems to me like the problem is that after rendering your bullets to bulletsRenderTarget, you then draw that render target using the same spriteBatch with the last effect still active. That would explain why the entire image is painted blue.
A solution would be to use two Begin()/End() passes of SpriteBatch, one with the effect and one without, as sketched below. Or just don't use a separate RenderTarget to begin with, which seems pointless in this case.
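A minimal sketch of that two-pass idea, assuming SpriteSortMode.Immediate so that draws are issued as they happen (the variable names are the ones from the bullet manager above):

// Pass 1: render the bullets to the render target with the effect active.
spriteBatch.GraphicsDevice.SetRenderTarget(bulletsRenderTarget);
spriteBatch.GraphicsDevice.Clear(Color.Transparent);
spriteBatch.Begin(SpriteSortMode.Immediate, BlendState.AlphaBlend, null, null, null, colorEffect);
// ... set DestColor and draw each bullet here ...
spriteBatch.End();
spriteBatch.GraphicsDevice.SetRenderTarget(null);

// Pass 2: draw the finished render target with no effect bound.
spriteBatch.Begin();
spriteBatch.Draw(bulletsRenderTarget, Vector2.Zero, Color.White);
spriteBatch.End();

Note that even in immediate mode you may still need to re-Apply() the current effect pass after changing DestColor between bullets, since parameter changes only take effect when a pass is applied.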
I'm also very much a beginner with pixel shaders, so just my 2c.