Ignoring touches on transparent areas in cocos2d-x (Visual Studio 2012)

I have a 480x800-pixel image with an icon in one corner that I need to place. I want to ignore all touches on the transparent areas and detect only touches on the area where the icon is.
I found a solution to this problem on SO, but it only shows the code to use. Since I am a beginner and don't know much about cocos2d, I need to know exactly where to put that code, so a step-by-step answer would be appreciated.
Cocos2d 2.0 - Ignoring touches to transparent areas of layers/sprites

Do not use glReadPixels, because it is affected by bugs in Android drivers. Instead, translate the CCTouch to a CCPoint in node coordinates using convertTouchToNodeSpace, and read the image pixel at that point.
Create a CCImage from the file that contains the semi-transparent picture and read one pixel at the tap point; it should be {0,0,0,0} for a transparent area.
Don't forget to check that the tap is not outside the picture. The data in CCImage::getData() is stored row by row, so for RGBA8888 data the pixel starts at unsigned index = (y * imageWidth + x) * 4, with the alpha byte at index + 3.
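A minimal sketch of where that code could live (cocos2d-x 2.x), assuming the layer is registered for targeted (one-by-one) touches and keeps the icon sprite in m_icon and a CCImage in m_iconImage loaded from the same RGBA8888 file; the sprite is assumed untransformed and the same size as the image:

bool MyLayer::ccTouchBegan(CCTouch* touch, CCEvent* event)
{
    CCPoint p = m_icon->convertTouchToNodeSpace(touch);
    CCSize size = m_icon->getContentSize();

    // Reject taps outside the picture.
    if (p.x < 0 || p.y < 0 || p.x >= size.width || p.y >= size.height)
        return false;

    // Node space has its origin at the bottom left, while CCImage data is
    // stored top to bottom, so flip the y coordinate.
    unsigned x = (unsigned)p.x;
    unsigned y = (unsigned)(size.height - p.y - 1);

    unsigned index = (y * m_iconImage->getWidth() + x) * 4;  // 4 bytes per RGBA pixel
    unsigned char alpha = m_iconImage->getData()[index + 3];

    return alpha > 0;  // claim the touch only on opaque pixels
}

Returning false lets touches on transparent areas fall through to whatever is underneath.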

Related

How can I get the color of a pixel on screen with Node.js or C?

I am trying to get the color of a pixel on my screen using node.js. I want it to be returned in RGB format, e.g. (255, 0, 0). My current solution is to use screenshot-desktop to screenshot my entire screen in JPG format, decode it to get the raw pixel data, and get the color of a given pixel. However, this lags out my entire computer for 1-2 seconds as it is taking the screenshot. This is unusable as I would like to do this multiple times per second. So my question is: How can I get the color of a given pixel on the screen, without taking a full screenshot?
I am using Linux with X11. There is an X11 library for node.js, so I assume I should use that to get the pixel color, I'm just not sure how. If you could show me how to do it in C then I can easily use node.js to do the same thing.
Thanks!
Oh my gosh, I just figured it out after posting this. I was using robotjs for reading the mouse position and I totally forgot it can do screen stuff too! So, the solution would be:
var robot = require('robotjs');
var color = robot.getPixelColor(x, y);
X11 solution using the x11 node library (I am the author):
1. Query the window tree with QueryTree, starting at the root window.
2. Get each child's geometry using the GetGeometry request.
3. If your point is not inside any child, use the current window id and fetch a 1x1 image at that point: GetImage(format, currentWindow, x, y, 1, 1, planeMask) (2 for format and 0xffffffff for the plane mask should work). Make sure you calculate the relative x, y position as you traverse the window tree.
4. If a child window covers your point, query that window's children and repeat. Note that QueryTree returns windows in bottom-to-top stacking order, so make sure you pick the last one covering your point.
Once you have the 1x1 image from the topmost window under your point, the buffer should contain only the color bytes for that pixel; the RGB order and bit masks may depend on red_mask, green_mask and blue_mask from display.screen[0].depths[visual].
If you cache the topmost window between requests and only restart from the root when it no longer matches, this solution can be much more performant than the one using robotjs (although much more low-level and complicated). Good luck!
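For reference, here is the same single-pixel read in plain C with Xlib, since the question mentioned a C example would do. It is a simplified sketch that samples the root window directly, skipping the window-tree walk described above; compile with -lX11:

#include <X11/Xlib.h>
#include <X11/Xutil.h>
#include <stdio.h>

int main(void)
{
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy) return 1;

    int x = 100, y = 200;  /* the point to sample */
    XImage *img = XGetImage(dpy, DefaultRootWindow(dpy),
                            x, y, 1, 1, AllPlanes, ZPixmap);
    unsigned long p = XGetPixel(img, 0, 0);

    /* Assumes the common 24-bit TrueColor layout; check the visual's
       red/green/blue masks for anything else, as noted above. */
    printf("(%lu, %lu, %lu)\n", (p >> 16) & 0xff, (p >> 8) & 0xff, p & 0xff);

    XDestroyImage(img);
    XCloseDisplay(dpy);
    return 0;
}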

How can I force the imagemagick module of Node.js to output one single image only?

I am using the imagemagick module with Node.js:
im = require('imagemagick');
The imagemagick module uses the imagemagick command line tools.
I use the convert method to crop an image
im.convert([image_path, '-crop', '200x150', '-gravity', 'center', target_path],
function(err, stdout){}
);
This results in two images: one with the cropped image area, and a second with the image garbage I tried to get rid of.
How can I force imagemagick to output one image file only?
Per the imagemagick documentation for cropping, which is admittedly a little obtuse (emphasis added):
The width and height of the geometry argument give the size of the image that remains after cropping, and x and y in the offset (if present) gives the location of the top left corner of the cropped image with respect to the original image.
...
If the x and y offsets are present, a single image is generated, consisting of the pixels from the cropping region.
...
If the x and y offsets are omitted, a set of tiles of the specified geometry, covering the entire input image, is generated.
... so, you just need to specify your x and y offsets as part of your geometry argument, like so: 200x150-100-75
Notice that I've specified -100 and -75 for the upper left corner of your crop region. That's because you set your gravity to center, and imagemagick appears to interpret the offset relative to the gravity point; I don't see exactly how it behaves when you choose center. So you may have to play around with this one a bit, or omit the gravity and use the actual offset from the top left corner of your original image.
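So the call from the question might become something like this (a sketch; the -100-75 offsets mirror the example above and may need adjusting once you've settled the gravity question):

im.convert([image_path, '-crop', '200x150-100-75', target_path],
  function(err, stdout){ /* a single cropped image at target_path */ }
);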
I had to use the +delete parameter to remove the last image from the image sequence.
im.convert([image_file.path, '-crop', geometry, '+delete', thumb_path], ...

Incorrect coordinates retrieved from image using ABBYY OCR SDK

I'm trying to process an image with the ABBYY OCR SDK using the sample code from this question, but I can't get the coordinates right for a specific word, say "OCR", in the screenshot below.
I want to draw an overlay (a yellow rectangle over the word "OCR"), but sometimes the rectangle is placed very far away from the actual word.
The XML you get is synthesised according to this schema.
For each recognized character it will contain an instance of charParams element as shown in the answer you linked to. The element will contain the coordinates in page pixels - the same XML also contains a page element:
<page width="..." height="..." resolution="..." originalCoords="...">
where the image width and height are stored. So l and r for each charParams element are in the range 0..width-1 of the corresponding page, and t and b are in the range 0..height-1 of the corresponding page.
Also, it's worth mentioning explicitly that all coordinates are in pixels - they do not depend on the resolution. This is why, whenever you try to highlight anything on an image, you have to take zoom into account - the image will likely not be displayed as-is by your device software, but will be downscaled, so you have to map page coordinates onto your zoomed-out image coordinates and highlight accordingly.
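For example, a sketch of that mapping in C (all names are illustrative: pageWidth and pageHeight come from the page element, and the displayed size is whatever your viewer scales the image to):

typedef struct { float x, y, w, h; } ViewRect;

/* Map a charParams box (l, t, r, b in page pixels) to coordinates on a
   scaled-down display of the same page. */
ViewRect pageBoxToView(int l, int t, int r, int b,
                       int pageWidth, int pageHeight,
                       float displayedWidth, float displayedHeight)
{
    float sx = displayedWidth / pageWidth;
    float sy = displayedHeight / pageHeight;
    ViewRect v = { l * sx, t * sy, (r - l) * sx, (b - t) * sy };
    return v;
}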
Have you checked the DPI of the original image? Also check the documentation to make sure the OCR engine is using the same DPI and not returning coordinates in points or some other measurement system.
It could also be that the rectangle you are drawing in iOS is based not on pixels but on some other measurement system.
You just need to work through the process, testing as you go, and work out where the problem is coming from. It is most likely a uniform scaling issue: the distance from the actual word will be proportional to the distance of the word from the top left of the page.

What is the proper way of drawing label in OpenGL Viewport?

I have a multi-viewport OpenGL modeler application. It has three different viewports: perspective, front and top. Now I want to paint a label for each viewport, and I am not succeeding in doing it.
What is the best way to print a label for each different perspective?
EDIT: The result
Here is the result of my attempt:
I don't understand why the perspective viewport's label got scrambled like that. Also, I actually want to draw it in the upper left corner. How do I accomplish this? I think it wants a 3D coordinate... is that right? Here is my code for drawing the label:
glColor3f(1,0,0);
glDisable(GL_DEPTH_TEST);
glDepthMask(GL_FALSE);
glRasterPos2f(0,0);
glPushAttrib(GL_LIST_BIT); // Pushes The Display List Bits
glListBase(base - 32); // Sets The Base Character to 32
glCallLists(strlen("Perspective"), GL_UNSIGNED_BYTE, "Perspective"); // Draws The Display List Text
glPopAttrib();
I use the code from here http://nehe.gamedev.net/data/lessons/lesson.asp?lesson=13
thanks
For each viewport, switch into a projection that lets you supply coordinates in viewport space, disable depth testing (glDisable(GL_DEPTH_TEST)) and depth writes (glDepthMask(GL_FALSE)), and draw the text using one of the usual methods for drawing text in OpenGL (texture-mapped fonts, rendering the full text into a texture and drawing that, or drawing glyphs as actual geometry).
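A sketch of that setup with the fixed-function pipeline, reusing the NeHe-style display-list font from the question; the viewport rectangle (vx, vy, vw, vh) and the font's display-list base are assumptions:

#include <GL/gl.h>
#include <string.h>

void drawViewportLabel(int vx, int vy, int vw, int vh, GLuint base, const char *label)
{
    glViewport(vx, vy, vw, vh);    // the viewport being labelled
    glMatrixMode(GL_PROJECTION);
    glPushMatrix();
    glLoadIdentity();
    glOrtho(0, vw, 0, vh, -1, 1);  // one unit now equals one pixel
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();
    glLoadIdentity();

    glDisable(GL_DEPTH_TEST);
    glDepthMask(GL_FALSE);

    glColor3f(1, 0, 0);
    glRasterPos2i(10, vh - 20);    // upper left corner, 20 px down from the top
    glPushAttrib(GL_LIST_BIT);
    glListBase(base - 32);
    glCallLists(strlen(label), GL_UNSIGNED_BYTE, label);
    glPopAttrib();

    glDepthMask(GL_TRUE);
    glEnable(GL_DEPTH_TEST);
    glMatrixMode(GL_PROJECTION);
    glPopMatrix();
    glMatrixMode(GL_MODELVIEW);
    glPopMatrix();
}

If the raster position is set while a perspective projection is active, the text can land in odd places or get clipped, which may be what produced the scrambled label in your screenshot; setting the ortho projection before glRasterPos2i is the important part.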
Along with @datenwolf's excellent answer, I'd add just one bit of advice: rather than drawing the label in the viewport, it's usually easier (and often looks better) to draw the label just outside the viewport. This avoids the label covering anything in the viewport, and makes it easy to get nice, cleanly anti-aliased text (which you can normally do in OpenGL as well, but it's more difficult).
If you decide you need to draw the text inside the viewport anyway, I'll add one minor detail to what @datenwolf said: since you generally want your text anti-aliased (even if the rest of the picture isn't), you'll want to draw the label after all the other geometry of the picture itself. If you haven't turned on anti-aliasing otherwise, you'll want to turn it on for drawing the text.

How does drawImage work?

I've been trying to draw a "game over" image but for some reason the game crashes when the image is supposed to be drawn.
I've read the API for Java ME but I don't really understand what anchors are.
I want the image to be drawn over the whole screen.
if (mGameStatus == STATUS_GAME_OVER)
{
    mGraphics.drawImage(mImgMgr.getGameOver(), 0, 0, Graphics.TOP | Graphics.LEFT);
}
In order to help you, it would be useful if you posted the exception that is thrown and causes your application to crash.
Anchors
The anchor specifies which point of the image's bounding box is placed at the given x, y coordinates.
For example, say I want to place an image at x=10 and y=20. If I use the anchors TOP and LEFT, the upper left corner of the image will be placed at (10, 20). With TOP and RIGHT, the upper right corner of the image will be placed at (10, 20).
This is described in the Java ME API, which makes a similar comparison with strings. The same applies to images, except for BASELINE, as images don't have one.
http://download.oracle.com/javame/config/cldc/ref-impl/midp2.0/jsr118/javax/microedition/lcdui/Graphics.html#anchor
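A quick sketch of the difference inside a Canvas, assuming img has already been loaded (for a full-screen image like yours, TOP | LEFT at 0,0 is what you want):

protected void paint(Graphics g) {
    // TOP|LEFT pins the image's upper left corner to (0, 0),
    // so a screen-sized image covers the whole screen.
    g.drawImage(img, 0, 0, Graphics.TOP | Graphics.LEFT);

    // HCENTER|VCENTER would pin the image's centre to the given point instead.
    g.drawImage(img, getWidth() / 2, getHeight() / 2,
                Graphics.HCENTER | Graphics.VCENTER);
}

Drawing both on top of each other is just for illustration; in practice you'd pick one anchor combination.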
