How to light up a sprite in cocos2d?

I already know how to darken a CCSprite object by setting:
sprite.color = ccc3(x, x, x); // x is a value less than 255
(As far as I know, this maps directly to OpenGL, so it's easy to achieve.)
But when it comes to lightening, my current solution is to add another mask sprite (the same shape, but all white), change its blendFunc to { GL_SRC_ALPHA, GL_ONE }, and overlay it onto the target. Besides all the extra code, this requires a separate mask image for every sprite that needs to be lit up.
Is there a way to lighten a sprite as easily as darkening it?

It's not as easy as setColor, but in Cocos2d 2.x, with its OpenGL ES 2.0 support, you can achieve this by using custom shaders. You can get started here:
http://www.raywenderlich.com/10862/how-to-create-cool-effects-with-custom-shaders-in-opengl-es-2-0-and-cocos2d-2-x
You may also try inverting the sprite's darker color to get a lighter one.
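For reference, a minimal fragment-shader sketch of the brightening idea (GLSL ES 2.0). It assumes cocos2d 2.x's default varying/uniform names (v_texCoord, CC_Texture0) and a u_brightness uniform that you declare and set yourself from your CCGLProgram setup, so treat it as a starting point rather than drop-in code:

#ifdef GL_ES
precision lowp float;
#endif

varying vec2 v_texCoord;
uniform sampler2D CC_Texture0;
uniform float u_brightness;   // 1.0 = unchanged, > 1.0 lightens, < 1.0 darkens

void main()
{
    vec4 color = texture2D(CC_Texture0, v_texCoord);
    // Scale only the color channels; clamp keeps values in range when brightening.
    gl_FragColor = vec4(clamp(color.rgb * u_brightness, 0.0, 1.0), color.a);
}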

Related

Godot: How can I x-ray through tilemaps around me?

So I'm looking to create the effect of having a bubble around my player: when he enters a hidden area (hidden by tilemaps), the bubble activates and essentially has an x-ray effect. I can see the background, the ground, and all the items inside the area; I just can't see the blocks themselves.
So pretty much going from this
To this
And as I go further in, more gets revealed.
I have no idea what to even begin searching for, so any direction would be greatly appreciated.
First of all, I want to get something out of the way: making things appear when they are near the player is easy; you use a light and a shader. Making things disappear when they are near the player with that approach is impossible in 2D (3D has flags_use_shadow_to_opacity).
This is the plan: we are going to create a texture that will work as a mask for what to show and what not to show. Then we will use that mask texture with a shader to make a material that selectively disappears. To create that texture, we are going to use a Viewport, so we can get a ViewportTexture from it.
The Viewport setup is like this:
Viewport
├ ColorRect
└ Sprite
Set the Viewport with the following properties (you can also set them from code, as sketched after this list):
Size: give it the window size (the default is 1024 by 600)
Hdr: disable
Disable 3D: enable
Usage: 2D
Update mode: Always
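If you prefer to configure the Viewport from a script instead of the Inspector, this is a hedged sketch of the equivalent assignments (Godot 3.x property names, run from a _ready() on the Viewport node):

func _ready() -> void:
    size = OS.window_size                               # Size: the window size
    hdr = false                                         # Hdr: disable
    disable_3d = true                                   # Disable 3D: enable
    usage = Viewport.USAGE_2D                           # Usage: 2D
    render_target_update_mode = Viewport.UPDATE_ALWAYS  # Update mode: Always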
For the Sprite you want a grayscale texture, perhaps with transparency. It will be the shape you want to reveal around the player.
And for the ColorRect you want to set the background color as either black or white. Whatever is the opposite of the color on the Sprite.
Next, you are going to attach a script to the Viewport. It has to deal with two concerns:
Move the Sprite to match the position of the player. That looks like this:
extends Viewport

export var target_path:NodePath

func _process(_delta:float) -> void:
    var target := get_node_or_null(target_path) as Node2D
    if target == null:
        return

    $Sprite.position = target.get_global_transform_with_canvas().origin
And you are going to set the target_path to reference the player avatar.
In this code target.get_global_transform_with_canvas().origin gives us the position of the target node (the player avatar) on the screen, and we place the Sprite to match.
Handle window resizes. That looks like this:
func _ready():
    # warning-ignore:return_value_discarded
    get_tree().get_root().connect("size_changed", self, "_on_size_changed")

func _on_size_changed():
    size = get_tree().get_root().size
In this code we connect to the "size_changed" signal of the root Viewport (the one associated with the window) and change the size of this Viewport to match.
The next thing is the shader. Go to your TileMap or whatever you want to make disappear and add a shader material. This is the code for it:
shader_type canvas_item;

uniform sampler2D mask;

void fragment()
{
    COLOR.rgb = texture(TEXTURE, UV).rgb;
    COLOR.a = texture(mask, SCREEN_UV).r;
}
As you can see, the first line of fragment() sets the red, green, and blue channels to match the texture the node already has, while the alpha channel is set to one of the channels of the mask texture (the red one in this case).
Note: The above code will make whatever is in the black parts fully invisible, and whatever is in the white parts fully visible. If you want to invert that, change COLOR.a = texture(mask, SCREEN_UV).r; to COLOR.a = 1.0 - texture(mask, SCREEN_UV).r;.
We, of course, need to set that mask texture. After you add that code, there should be a shader param under the shader material called "Mask"; set it to a new ViewportTexture, and point it to the Viewport we set up before.
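If you'd rather do that assignment from code than from the Inspector, a hedged one-liner (the "$TileMap" and "$Viewport" node paths are illustrative and must match your scene; the param name matches the mask uniform in the shader):

# Point the TileMap material's "mask" uniform at the mask Viewport's texture.
$TileMap.material.set_shader_param("mask", $Viewport.get_texture())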
And we are done.
I tested this with this texture from publicdomainvectors.org:
Plus some tiles from Kenney. They are all, of course, under public domain.
This is how it looks:
Experiment with different textures for different results. You can also add a shader to the Sprite for extra effect, for example some ripples, by giving the Sprite a shader material with code like this:
shader_type canvas_item;

void fragment()
{
    float width = SCREEN_PIXEL_SIZE.x * 16.0;
    COLOR = texture(TEXTURE, vec2(UV.x + sin(UV.y * 32.0 + TIME * 2.0) * width, UV.y));
}
So you get this result:
There is an instant where the above animation stutters; that is because I didn't cut the loop perfectly, and it's not an issue in game. The animation also has far fewer frames per second than the game would.
Addendum: A couple of things I want to add:
You can create the mask texture by other means. I have a couple of other answers where I cover some of this:
How can I bake 2D sprites in Godot at runtime? where we use blit_rect. You might also be interested in blit_rect_mask.
Godot repeating breaks script where we are using lockbits.
I wrote a shader that outputs on the alpha channel here. Other options include:
Using BackBufferCopy.
Discarding fragments (see the sketch below).
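For that last option, a hedged sketch of what discarding fragments could look like with the same canvas_item setup (the 0.5 threshold is an arbitrary cutoff):

shader_type canvas_item;

uniform sampler2D mask;

void fragment()
{
    // Drop the fragment entirely where the mask is dark,
    // instead of fading it out via the alpha channel.
    if (texture(mask, SCREEN_UV).r < 0.5) {
        discard;
    }
    COLOR = texture(TEXTURE, UV);
}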

PyQt: Obtain all pixels inside QPolygon

In PyQt 5, is there a way to obtain all pixel positions that would be modified by a call to QPainter.drawPolygon for a QPainter object constructed with some QImage as an argument without actually drawing the polygon? Ideally I would like to obtain separate sets of pixel positions for the polygon's border and for all pixels inside the polygon.
Just like @ekhumoro said, QPolygon is a subclass of QVector (that is, a QList). However, in PyQt this is exposed as a Python sequence rather than a QList, and I got runtime errors when trying to iterate over it directly, because the data lives inside the QPolygon object and there is no getter. In this case the PyQt solution is not very efficient: you need to iterate over each pixel of the image, create a QPoint with the pixel coordinates, and check whether the QPolygon contains that point via the containsPoint method. There aren't many implementation details to it, but consider the following code snippet.
# assuming PyQt5
from PyQt5.QtCore import QPoint, Qt
from PyQt5.QtGui import QPolygon

array_qpoints = [] # this array will have all the QPoints
polygon = QPolygon([
    QPoint(140, 234),
    QPoint(126, 362),
    QPoint(282, 409),
    QPoint(307, 273),
    QPoint(307, 233),
])
# let's consider a 640x480 image
for x in range(640):
    for y in range(480):
        point = QPoint(x, y)
        if polygon.containsPoint(point, Qt.FillRule.OddEvenFill):
            array_qpoints.append(point)
You can get the coordinates of each pixel by calling the x() and y() methods for each element in array_qpoints.
for point in array_qpoints:
    x = point.x()
    y = point.y()
    # do what you want with the information
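If the image is large, a hedged refinement of the same brute-force idea is to scan only the polygon's bounding rectangle instead of every pixel of the image (same polygon and array_qpoints as above):

# Restrict the scan to the polygon's bounding box.
rect = polygon.boundingRect()
for x in range(rect.left(), rect.right() + 1):
    for y in range(rect.top(), rect.bottom() + 1):
        point = QPoint(x, y)
        if polygon.containsPoint(point, Qt.FillRule.OddEvenFill):
            array_qpoints.append(point)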
I'm posting this answer for others who visit this question and are looking for a solution by code. Since it's been several years, if you've found a better solution, please post :)

Appropriate use of CCTextureCache

I'm currently creating a CCSprite like this:
CCSprite *s = [CCSprite spriteWithFile:@"image.png"];
This sprite is the background image of a CCLayer that's used relatively often. Is the following use of CCTextureCache more efficient?
CCTexture2D *t = [[CCTextureCache sharedTextureCache] addImage:@"image.png"];
CCSprite *s = [CCSprite spriteWithTexture:t];
No. Internally, all methods that use an image as a texture (not just CCSprite) will add the texture to the CCTextureCache.
The only reason why you would want to use addImage directly is when you want to pre-load certain textures so that the first appearance of a node using that texture won't cause a lag during gameplay.
First of all, if you look at the code of the spriteWithFile: method, you will see that it adds the image to the texture cache anyway if it cannot find it there.
The second thing you must know is that if you store your art in atlases to reduce memory usage (for example, a 2048x2048-pixel atlas with 20 different pictures), spriteWithTexture: will create a sprite with the whole huge atlas (2048x2048 pixels) as its texture.
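If your art is in an atlas, a hedged sketch of the usual alternative (the "atlas.plist"/"atlas.png" pair and the frame name "image.png" are illustrative; the .plist is the kind produced by a texture-packing tool):

// Load the atlas once; its frames end up in CCSpriteFrameCache
// (and the atlas texture in CCTextureCache).
[[CCSpriteFrameCache sharedSpriteFrameCache] addSpriteFramesWithFile:@"atlas.plist"];

// This sprite only draws its own frame's region of the atlas,
// not the whole 2048x2048 texture.
CCSprite *s = [CCSprite spriteWithSpriteFrameName:@"image.png"];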

How to make basic line segments in LWJGL/OpenGL

I am in the process of learning LWJGL and also OpenGL. I have done the tutorials on quads, and have also successfully drawn polygons on a display. I am trying to draw lines using the same methods, but the lines are not created, or they are made invisible, possibly with a pixel width of 0? I have googled for an answer or a tutorial, but so far all of them seem to claim that I am doing the right thing. My method is as follows:
private void drawLine(Point point, Point point2) {
    GL11.glColor3f(0.0f, 1.0f, 0.2f);
    GL11.glBegin(GL11.GL_LINE);
    GL11.glVertex2d(point.getX(), point.getY());
    GL11.glVertex2d(point2.getX(), point2.getY());
    GL11.glEnd();
}
I also tried to put this one in the middle, but no effect.
GL11.glLineWidth(3.8f);
As stated in the comments, the answer was that GL11.GL_LINE is not a valid primitive constant here. GL11.GL_LINE_STRIP, however, works like a charm.
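For completeness, a hedged sketch of the corrected method (same Point class with getX()/getY() as in the question; GL11.GL_LINES treats each pair of vertices as one segment, and GL11.GL_LINE_STRIP would work just as well for only two points):

private void drawLine(Point point, Point point2) {
    GL11.glLineWidth(3.8f);                 // optional: width in pixels, set outside glBegin/glEnd
    GL11.glColor3f(0.0f, 1.0f, 0.2f);
    GL11.glBegin(GL11.GL_LINES);            // valid primitive constant for line segments
    GL11.glVertex2d(point.getX(), point.getY());
    GL11.glVertex2d(point2.getX(), point2.getY());
    GL11.glEnd();
}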

How can I draw a bitmap rotated in wxPython?

How do I draw a bitmap to a DC, while rotating it by a specified angle?
I agree with Al - he deserves the answer, but this (admittedly untested) code fragment should do what you asked for:
def write_bmp_to_dc_rotated(dc, bitmap, angle):
    '''
    Rotate a bitmap and write it to the supplied device context.
    The angle is in radians.
    '''
    img = bitmap.ConvertToImage()
    img_centre = wx.Point(img.GetWidth() // 2, img.GetHeight() // 2)
    img = img.Rotate(angle, img_centre)
    dc.DrawBitmap(img.ConvertToBitmap(), 0, 0)
One thing to note though from the docs:
...using wxImage is the preferred way to load images in wxWidgets, with the exception of resources...
Was there a particular reason to load it as a bitmap rather than a wx.Image?
I'm not sure that this is the best way of doing it, but one option would be to convert it to a wx.Image with ConvertToImage and then use its Rotate function. You could then (if necessary) convert it back with ConvertToBitmap.
I couldn't see an obvious function that could be used to apply a coordinate transform to the drawing context (DC), but there may be one in there somewhere...
Hope that helps.
A better way would be to use a graphics context if you want a generic rotation, e.g. to rotate a bitmap, text, or any other drawing path:
gc = wx.GraphicsContext.Create(dc)  # a plain wx.GCDC has no Rotate(); use the graphics context itself
gc.Rotate(angle)                    # angle is in radians
gc.DrawText("anurag", 100, 100)
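And for the original bitmap question, a hedged sketch along the same lines (the coordinates and 45-degree angle are illustrative):

import math

gc = wx.GraphicsContext.Create(dc)
gc.Translate(100, 100)                  # move the origin to where the bitmap should be drawn
gc.Rotate(math.radians(45))             # rotate the coordinate system (radians)
gc.DrawBitmap(bitmap, 0, 0, bitmap.GetWidth(), bitmap.GetHeight())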
