BlackBerry - how to merge two images - graphics

I want to merge two images in BlackBerry. One image is a big image and the other is a small one. The position of the small image will be defined by the developer. What are the possible ways?

You can use the Graphics class to draw multiple Bitmaps onto it at different offsets. Look into the Graphics.drawBitmap function. We use something like:
graphics.drawBitmap(x1, y1, icon.getWidth(), icon.getHeight(), icon, 0, 0);
Where the graphics object is the one passed to the paint method we override and icon is the large image. Then determine the x/y position for the smaller image and use the same method on the graphics object:
graphics.drawBitmap(x2, y2, mark.getWidth(), mark.getHeight(), mark, 0, 0);
Where mark is the smaller image.
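If you need the result as a single merged Bitmap rather than just painting both in paint, here is a minimal sketch, assuming big and small are already-loaded Bitmaps and (x2, y2) is the developer-chosen offset; Graphics.create(Bitmap) is the OS 5.0+ way to get an offscreen Graphics (older OS versions used the new Graphics(Bitmap) constructor):
Bitmap merged = new Bitmap(big.getWidth(), big.getHeight());
Graphics g = Graphics.create(merged); // draw into the offscreen bitmap
g.drawBitmap(0, 0, big.getWidth(), big.getHeight(), big, 0, 0); // base image at the origin
g.drawBitmap(x2, y2, small.getWidth(), small.getHeight(), small, 0, 0); // overlay at the chosen offset
// 'merged' now holds both images and can be drawn or encoded as needed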
Hope this helps :)

Related

How to combine multiple image as single image in UWP

I have multiple images (around 40), and I need to combine them into a single image. I have referred to the link below,
Combine two Images into one new Image
but I didn't find anything like Graphics in UWP. How to achieve this requirement?
Make a RenderTargetBitmap and draw the images into it.
There is an example showing how to draw an image to a render target bitmap.
And you can save the render target bitmap to a file; see http://jamescroft.co.uk/blog/how-to/storing-a-uwp-inkcanvas-drawing-as-an-image-in-a-storagefile/
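A minimal sketch of that approach, assuming the images to combine are already laid out as children of a XAML panel named imagePanel (a hypothetical name) and that outputFile is a StorageFile created for the result:
var renderBitmap = new RenderTargetBitmap();
await renderBitmap.RenderAsync(imagePanel); // rasterize the panel and every image inside it
var pixels = await renderBitmap.GetPixelsAsync(); // raw BGRA8 pixel buffer
using (var stream = await outputFile.OpenAsync(FileAccessMode.ReadWrite))
{
    var encoder = await BitmapEncoder.CreateAsync(BitmapEncoder.PngEncoderId, stream);
    encoder.SetPixelData(BitmapPixelFormat.Bgra8, BitmapAlphaMode.Premultiplied,
                         (uint)renderBitmap.PixelWidth, (uint)renderBitmap.PixelHeight,
                         96, 96, pixels.ToArray()); // ToArray() comes from System.Runtime.InteropServices.WindowsRuntime
    await encoder.FlushAsync();
}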

Kivy rotation during movement

I'm struggling with how to properly implement simultaneous movement and rotation using Kivy (in Python, not kv lang). My goal is to rotate an object so it's facing its destination, then move it towards the destination using Animation. Using the code below I typically get movement relative to the rotated angle instead of relative to my general playing area. For example, the animation without rotation might move an image to point [1, 1], whereas with a rotation of 180° the animation moves the image to [-1, -1]. The image is properly rotated in this scenario, meaning it's facing the right way but going the wrong way.
My understanding is that the push/pop matrix functions should provide the appropriate context for the animation rather than the rotated element's context. Because the PopMatrix instruction happens in canvas.after, it seems to have no effect - my animation is completed before the original matrix is restored.
I'm lacking some key piece of information here that's causing a lot of headache. Can someone explain why the code below causes an image to move to (-1,-1) rather than the (1,1) indicated, and how to properly implement this?
I threw this code together as an example, my game code is far more complex. That's why I'm hoping for an explanation rather than a direct solution for my code. Thanks.
with self.image.canvas.before:
    PushMatrix()
    self.rot = Rotate()
    self.rot.axis = (0, 0, 1)
    self.rot.origin = self.center
    self.rot.angle = 180
with self.image.canvas.after:
    PopMatrix()
self.anim = Animation(pos=(1, 1), duration=1)
self.anim.start(self)
self.image.pos = self.pos
self.image.size = self.size
In case others are interested in how to make this work consistently - I've found that setting origin and angle on each frame, along with binding the image widget to any pos change on its parent, will ensure the widget moves with its parent and in the proper direction. I implemented that like this:
Instantiate the image like this:
with self.image.canvas.before:
    PushMatrix()
    self.rot = Rotate()
    self.rot.axis = (0, 0, 1)
    self.rot.origin = self.center
    self.rot.angle = 0
with self.image.canvas.after:
    PopMatrix()
Bind it like this:
self.bind(pos=self.binding)

def binding(self, *args):
    self.image.center = self.center
    self.image.size = self.size
On each frame, call a function that does something like the following:
self.rot.origin = self.center
self.rot.angle = getangle()  # you can use a set angle or generate a new angle every frame
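One way to arrange that per-frame call is Kivy's Clock; a sketch, assuming the scheduling happens in the widget's __init__ and update_rotation wraps the two lines above:
from kivy.clock import Clock
Clock.schedule_interval(self.update_rotation, 0)  # an interval of 0 runs the callback every frame

def update_rotation(self, dt):
    self.rot.origin = self.center
    self.rot.angle = getangle()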
Rotate effectively changes the coordinate system used by the entire canvas, so after you've rotated by 180 degrees the position [1, 1] really is in the opposite direction to what it was before, as far as any canvas instruction is concerned.
I don't know what self.image is (maybe an Image widget?), but presumably whatever you see is something like a Rectangle drawn on its canvas, whose pos and size match those of the widget. When you update that pos and size, the Rectangle is positioned according to the current coordinate system, which is in the rotated frame.
Thinking about it, I'm not sure if there's a neat way to combine Rotate instructions with Kivy's high level widget coordinates in quite this way. Of course you can work around it in various ways, such as by accounting for the rotation when setting the position of the Rectangle, but that's kind of fiddly, and inconvenient when working with prebuilt widgets. You can also look at what the Scatter widget does to enable arbitrary transformations.
If you just want to rotate by 180 degrees, you can instead adjust the image being displayed, either before displaying it or by adjusting the tex_coords of the Rectangle to change the displayed orientation. However, this won't work for arbitrary rotations, which it looks like you may want.
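To illustrate the Scatter route mentioned above, here is a minimal sketch (arrow.png and the target position are placeholders): the Scatter carries the rotation in its own transform, so animating its pos is still interpreted in the parent's coordinates rather than the rotated frame.
from kivy.app import App
from kivy.animation import Animation
from kivy.uix.floatlayout import FloatLayout
from kivy.uix.image import Image
from kivy.uix.scatter import Scatter

class RotateAndMoveApp(App):
    def build(self):
        root = FloatLayout()
        scatter = Scatter(size_hint=(None, None), size=(100, 100),
                          do_translation=False, do_rotation=False, do_scale=False)
        scatter.add_widget(Image(source='arrow.png', size=scatter.size))
        scatter.rotation = 180  # face the destination
        root.add_widget(scatter)
        # pos here is the Scatter's position in its parent, not the rotated frame
        Animation(pos=(400, 400), duration=1).start(scatter)
        return root

RotateAndMoveApp().run()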

Applying rotation to Graphics object

I have very little experience programming with graphics objects. I am currently tasked with exporting a document (a .tiff image) with redacted annotations. The redacted annotation is just a black rectangle object. I am able to get the x coordinate, y coordinate, width and height properties through the .XMP data. There is also a property called rotation. This is where I am getting stuck: applying the rotation.
So, imagine a document with a redaction on it blacking out the first paragraph. Then, using a tool in the editor, the user rotates the document so that it is now lying on its side. The client is able to render the redaction correctly because we are using the Atalasoft controls to get and display annotations. Now we have a web service that will retrieve that image with redactions. We are not able to use the Atalasoft controls in this service due to licensing issues, so we just extract the .XMP data from the .tiff image and manually draw the redactions. The problem is, if the user rotates the document when the redaction is already on it, I am having a hard time getting the redaction to rotate correctly (due to my lack of knowledge of graphics programming). If I do not apply any rotation, the redaction is displayed where it was BEFORE the document was rotated, thus redacting the wrong area of the document.
Here is what I have tried:
Dim rectangle As New Rectangle(xCoordinate, yCoordinate, width, height)
graphics.RotateTransform(rotation)
graphics.FillRectangle(Brushes.Black, rectangle)
When I do this, the redaction does not show up at all on the final document. I have read that I may need to call the following before applying the rotation:
graphics.TranslateTransform(x,y)
But I have no idea what I should be passing in as x and y. It seems like I just need to get the rotation to apply from the upper left corner of the rectangle, but I have yet to figure out a way to properly do this.
Thank you so much for any help or pushes in the right direction!
EDIT 1:
I have also tried this (taken from How can I rotate an RectangleF at a specific degree using Graphics object?).
Dim rectangle As New Rectangle(xCoordinate, yCoordinate, width, height)
Using rotationMatrix As New Matrix
    rotationMatrix.RotateAt(rotation, New PointF(rectangle.Left + (rectangle.Width / 2), rectangle.Top + (rectangle.Height / 2)))
    graphics.Transform = rotationMatrix
    graphics.FillRectangle(Brushes.Black, rectangle)
    graphics.ResetTransform()
End Using
This does rotate the rectangle, but it ends up in the wrong spot, so it is not redacting the correct portion of the document. Once again, when I display the document without any rotation transform, it looks like the redaction simply needs to be rotated using the upper-left corner as an axis, but I'm not quite sure how to accomplish that.
Figured it out. Here is how I am rotating a rectangle using the upper left-hand corner as the axis:
Dim rectangle As New Rectangle(xCoordinate, yCoordinate, width, height)
Using rotationMatrix As New Matrix
    rotationMatrix.RotateAt(rotation, New PointF(rectangle.Left, rectangle.Top))
    graphics.Transform = rotationMatrix
    graphics.FillRectangle(Brushes.Black, rectangle)
    graphics.ResetTransform()
End Using
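For reference, the TranslateTransform approach asked about earlier amounts to the same thing; a sketch using the same xCoordinate, yCoordinate, width, height and rotation values: move the origin to the rectangle's upper-left corner, rotate about it, then draw the rectangle at (0, 0).
graphics.TranslateTransform(xCoordinate, yCoordinate) ' move the origin to the redaction's upper-left corner
graphics.RotateTransform(rotation)                    ' rotate about that corner
graphics.FillRectangle(Brushes.Black, New Rectangle(0, 0, width, height))
graphics.ResetTransform()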

overlap one image over another

I want to add an image and some text on another image and create a single image. I have managed to add the text but am not able to figure out how to add the image. Any help?
This snippet assumes you have UIImages named bottomImage, for the base image to be drawn upon, and topImage, which will be drawn ON (above) the bottomImage. xpos, ypos are floats describing the target x, y (top-left) position where topImage will be drawn, and targetSize is the size at which topImage will be drawn on bottomImage.
...
UIGraphicsBeginImageContext( bottomImage.size );//create a new image context to draw offscreen
[bottomImage drawInRect:CGRectMake(0,0,bottomImage.size.width,bottomImage.size.height)];//draw bottom image first, at original size
[topImage drawInRect:CGRectMake(xpos,ypos,targetSize.width,targetSize.height) blendMode:kCGBlendModeNormal alpha:1];//draw the image to be overlayed second, at wanted location and size
UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();//get newly drawn image context into a UIImage
UIGraphicsEndImageContext();//stop drawing to the context
return newImage;//return/use the newly created image
This is not thread-safe - creating a UIImage this way on a background thread is not recommended.
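Since the question also mentions text, one possible addition (assuming overlayText is an NSString holding that text) is to draw it into the same context before UIGraphicsGetImageFromCurrentImageContext() is called:
[overlayText drawAtPoint:CGPointMake(xpos, ypos + targetSize.height)
          withAttributes:@{ NSFontAttributeName : [UIFont systemFontOfSize:14.0],
                            NSForegroundColorAttributeName : [UIColor whiteColor] }];//draws the text just below the overlaid image (iOS 7+ API)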

Coordinate system and sprite transformation

I'm using AndEngine to create a physics simulation via Box2D.
The bodies are created through PhysicsFactory using Sprites.
My idea is to procedurally position these sprites, following this pattern:
basically one central sprite, which represents my world coordinate center, and a series of cloned sprites created by rotating the base sprite around the world center (the "X" inside the circle).
I've tried to use the OpenGL way inside AndEngine (translate, rotate, back-translate):
super(stamiRadious, 0, image); // stamiRadious is the distance from the radix (world center) to the "petal" attach point
this.setRotationCenter(0, 0);
this.setRotation((float) Math.toDegrees(angleRad));
this.setPosition(this.getX()+radixX, this.getY()+radixY);
but I failed: the results are not right (wrong final position, and wrong Box2D body properties, as if the sprite were much larger than the image).
I believe part of the problem lies in my interpretation of setRotation and setRotationCenter, and in general in my understanding of the AndEngine coordinate system plus the Box2D coordinate system.
Any thoughts/links to doc/explanation?
Once you have created a physics representation (Body) of a Sprite, you should be very careful about how you modify the Sprite! Usually you don't modify the Sprite at all anymore, but instead modify the Body, by calling:
someBody.setTransform(); // Note that positions must be divided by PhysicsConstants.PIXEL_TO_METER_RATIO_DEFAULT!
Hope that helped :)
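A minimal sketch of how that could look for the "petal" placement, assuming petalBody is the Body created via PhysicsFactory for a cloned sprite, radixX/radixY is the world center in pixels, and stamiRadious is the attach distance in pixels; the registered PhysicsConnector then keeps the Sprite in sync with the Body:
float petalX = radixX + (float) (stamiRadious * Math.cos(angleRad));
float petalY = radixY + (float) (stamiRadious * Math.sin(angleRad));
petalBody.setTransform(
        petalX / PhysicsConstants.PIXEL_TO_METER_RATIO_DEFAULT,
        petalY / PhysicsConstants.PIXEL_TO_METER_RATIO_DEFAULT,
        angleRad); // Box2D positions are in meters and angles in radians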
