Sprite effect on SKShapeNode

I am trying to make an effect with Sprite Kit on the Mac on SKShapeNodes (a normal node would be fine too) that looks something like this:
http://firewall.com.pl/wp-content/uploads/2013/05/mailstore-cloud-edition-en.png
a circular node surrounded by a soft glow effect, not fully opaque. Any ideas? I also want the "glow" to take on different colors. The only idea I have right now would be a round, white PNG image whose intensity fades out at the edges until it is completely transparent, and then tint that with a color blend factor. But I wonder if there is a simpler way to do this.
If any of you have a good idea, I would be very grateful.
Regards
Thomas

That looks more like a lens flare, but here is something that may get you started: try this code, then fill with black. Of course you will have to adjust the glow and the circle to fit your needs.
- (void)createGlowingCircles {
    SKShapeNode *ball = [[SKShapeNode alloc] init];
    CGMutablePathRef myPath = CGPathCreateMutable();
    CGPathAddArc(myPath, NULL, 0, 0, 20, 0, M_PI * 2, YES);
    ball.path = myPath;
    CGPathRelease(myPath); // the node retains the path, so release our reference
    ball.lineWidth = 0.1;
    ball.glowWidth = 15.5; // adjust for more glow effect
    // add fill and stroke for the black dot inside the glow
    ball.position = CGPointMake(200, 200);
    [self addChild:ball];
}
This is a piece of code from an example project I did...
From there you could add a filter! I hope this gets you started. For the lines that connect the black spots you could figure out some advanced physics magic; look up SKPhysicsJointFixed etc. Good luck!
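For anyone working in Swift, here is a minimal sketch of the same idea (the helper name and colours are illustrative, not from the original project). The glow appears to be drawn with the node's stroke colour, so changing strokeColor changes the glow colour, and alpha keeps the whole node from being fully opaque.

import SpriteKit

// A small helper that builds a dark dot surrounded by a coloured, soft glow.
// The glow is rendered around the stroke, using the node's stroke colour.
func makeGlowingCircle(radius: CGFloat, glowColor: SKColor) -> SKShapeNode {
    let ball = SKShapeNode(circleOfRadius: radius)
    ball.fillColor = .black        // the solid dot in the middle
    ball.strokeColor = glowColor   // also drives the glow colour
    ball.lineWidth = 0.1
    ball.glowWidth = 15.5          // widen for a stronger glow
    ball.alpha = 0.9               // keep the whole node slightly transparent
    return ball
}

// Usage inside an SKScene, e.g. in didMove(to:):
// let node = makeGlowingCircle(radius: 20, glowColor: .cyan)
// node.position = CGPoint(x: 200, y: 200)
// addChild(node)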

Related

Why isn't this mask working in Phaser?

I must be missing something...why isn't this working? Instead of clipping to the circle the entire 800x800 backdrop image is displayed...
var mask;
var img;

function preload() {
    game.load.image('back', 'backdrop.jpg');
}

function create() {
    img = game.add.image(game.world.centerX, game.world.centerY, 'back').anchor.setTo(0.5);
    mask = game.add.graphics(0, 0);
    mask.beginFill(0xffffff);
    mask.drawCircle(game.world.centerX, game.world.centerY, 600);
    img.mask = mask;
}
jsfiddle here
Disclaimer: I have no formal experience in phaser.io
I was able to fix this in your fiddle by changing
img = game.add.image(game.world.centerX, game.world.centerY,'back').anchor.setTo(0.5);
to
img = game.add.image(0, 0, 'back');
JSFiddle Fork
I would assume that placing the image at the centerX, centerY position results in the mask being offset from the image. Hopefully someone with more experience than I have can explain the specifics here, but I will research further and update my answer as I figure out the why to go along with the how.
Update
Okay, so I've done some digging through the documentation. First, you want to use img = game.add.image(0, 0, 'back'); because the x and y parameters in this case dictate the upper-left origin of the image, not its center. By using game.world.centerX and game.world.centerY you are trying to throw the background image toward the center of the canvas even though the canvas is the same size as the image.
Using .anchor.setTo(0.5), from what I can gather, attempts to set the anchor point at which the image originates to the center position. However, when you remove this anchor, suddenly the mask works, even though it is not showing correctly (because the position of the background image is incorrect).
Theory - by anchoring the image, I believe it is no longer possible to apply a mask to it. From all the experimenting I've done, having an anchor set on the background image prevents it from being masked, so the mask is simply added as a child of img and placed at its center, which is why you are seeing the white circle instead of the circle properly masking the image.
It appears I was mistaken about the fluency of the API when trying to chain that last function call... if I break it up:
img = game.add.image(game.world.centerX, game.world.centerY, 'back');
img.anchor.setTo(0.5);
it now works!
Fiddle Here

How to move a UIImage 100px to the right only

In Xamarin iOS, how can I simply move an image 100px to the right from its current location? I know that I am supposed to use Bounds, but I can't get IntelliSense to really provide anything helpful. I have googled it and there isn't much that I can find.
Assuming you mean a UIImageView, you can use the Offset(dx, dy) method on its Frame property. If your UIImageView is called imageView:
var frame = imageView.Frame;
frame.Offset(100,0); // offset 100px horizontal, 0 px vertical
imageView.Frame = frame; // set the frame of the image to the new position.
Note that you must copy the frame into a separate variable first. That is, imageView.Frame.Offset(100,0) will not work, because Frame returns a copy of the underlying CGRect struct, so offsetting that copy in place never reaches the view.
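For comparison, here is a small sketch of the same 100 px move in native Swift/UIKit (not Xamarin); offsetBy(dx:dy:) returns a new CGRect, so assigning the result back avoids the value-type copy pitfall entirely. The imageView parameter is just an assumed, existing image view.

import UIKit

// For comparison: the same 100 pt horizontal move in native Swift/UIKit.
// `imageView` is assumed to be an existing, already laid-out UIImageView.
func moveRight(_ imageView: UIImageView) {
    imageView.frame = imageView.frame.offsetBy(dx: 100, dy: 0)
}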

Fade between colors with THREE.js

I have a simple cube in THREE.js:
var cubeMaterial = new THREE.MeshLambertMaterial({ color: 0xCC0000 });
var cube = new THREE.Mesh(
    new THREE.CubeGeometry(100, 100, 100),
    cubeMaterial);
cube.position.set(0.7, 1.95, -0.1);
cube.scale.x = cube.scale.y = cube.scale.z = 0.002;
scene.add(cube);
Any suggestions on how I can change the color of the material on the fly? What I want to achieve is a smooth fade (for instance from red to green), and I'd like to be able to drive the fade dynamically. My guess is that the scene needs to be continuously re-rendered in the render loop, and that the color should then be nudged each frame so it gradually fades to the target color. But I'm not really sure how to do that in code...
Thanks in advance!
Anders
You can use TWEEN.js: https://github.com/sole/tween.js/
The usual approach is to interpolate material.color a little toward the target colour on every frame of the render loop; TWEEN.js can drive that interpolation for you. There is a good solution to your question in this Stack Overflow question: How to tween between two colours using three.js?

Merging a UIImageView previously rotated by gesture with another one. WYS is not WYG

I'm going crazy trying to merge two UIImageViews.
The situation:
a background UIImageView (userPhotoImageView)
an overlaid UIImageView (productPhotoImageView) that can be stretched, pinched and rotated
I call my function on the UIImages, but I can take the coordinates and the new stretched size from the UIImageViews containing them (they are synthesized in my class):
- (UIImage *)mergeImage:(UIImage *)bottomImg withImage:(UIImage *)topImg;
Maybe the simplest way would be rendering the layer of the top UIImageView into the new CGContextRef like this:
[bottomImg drawInRect:CGRectMake(0, 0, bottomImg.size.width, bottomImg.size.height)];
[productPhotoImageView.layer renderInContext:ctx];
But this way I lose the rotation effect previously applied by the gestures.
A second way would be to apply an affine transformation to the UIImage to reproduce the gesture effects and then draw it in the context like this:
UIImage * scaledTopImg = [topImg imageByScalingProportionallyToSize:productPhotoView.frame.size];
UIImage * rotatedScaledTopImg = [scaledTopImg imageRotatedByDegrees:ANGLE];
[rotatedScaledTopImg drawAtPoint:CGPointMake(productPhotoView.frame.origin.x, productPhotoView.frame.origin.y)];
The problem with this second approach is that I'm not able to get the exact final rotation (the ANGLE parameter that should be filled in the code above) accumulated since the user started interacting, because the rotation gesture is reset to 0 after being applied, so each callback delivers only a delta from the current rotation.
For sure the easiest way would be the first one, freezing the two UIImageViews exactly as they are displayed at that moment, but I still haven't found any way to do it.
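As an aside, on current iOS a sketch of that "freeze them as displayed" idea could use drawHierarchy(in:afterScreenUpdates:), which snapshots views with their gesture transforms already applied. Here containerView is a hypothetical common superview holding both image views:

import UIKit

// A sketch of the "freeze what is on screen" idea: snapshot a container view
// (and everything inside it, transforms included) into a single UIImage.
// `containerView` is a hypothetical superview holding both image views.
func snapshot(of containerView: UIView) -> UIImage {
    let renderer = UIGraphicsImageRenderer(bounds: containerView.bounds)
    return renderer.image { _ in
        _ = containerView.drawHierarchy(in: containerView.bounds, afterScreenUpdates: true)
    }
}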
OK, basically there is another workaround for all this crazy merging stuff, but it's definitely not an elegant solution. To avoid handling any kind of affine transformation, just capture the screen image and then crop it.
CGImageRef screenImage = UIGetScreenImage();
// either grab the whole application frame...
CGRect fullRect = [[UIScreen mainScreen] applicationFrame];
CGImageRef fullCGImage = CGImageCreateWithImageInRect(screenImage, fullRect);
// ...or crop the screenshot down to just the area covered by the two views
CGRect cropRect = CGRectMake(x, y, width, height);
CGImageRef saveCGImage = CGImageCreateWithImageInRect(screenImage, cropRect);
Told you it wasn't elegant, but for someone it could be useful.
Great, that was helpful, even if a bit too workaround-y ;)
So, since I see it's pretty hard to find code examples for merging pictures after a manipulation, here goes mine; I hope it can be helpful:
- (UIImage *)mergeImage:(UIImage *)bottomImg withImage:(UIImage *)topImg {
    // First pass: rotate the (already scaled) top image around its own center.
    UIImage *scaledTopImg = [topImg imageByScalingProportionallyToSize:productPhotoView.frame.size];
    UIGraphicsBeginImageContext(scaledTopImg.size);
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    CGContextTranslateCTM(ctx, scaledTopImg.size.width * 0.5f, scaledTopImg.size.height * 0.5f);
    CGFloat angle = atan2(productPhotoView.transform.b, productPhotoView.transform.a);
    CGContextRotateCTM(ctx, angle);
    [scaledTopImg drawInRect:CGRectMake(-scaledTopImg.size.width * 0.5f, -(scaledTopImg.size.height * 0.5f), scaledTopImg.size.width, scaledTopImg.size.height)];
    UIImage *newImage = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();

    // Second pass: draw the background, then the rotated overlay on top of it.
    UIGraphicsBeginImageContext(bottomImg.size);
    [bottomImg drawInRect:CGRectMake(0, 0, bottomImg.size.width, bottomImg.size.height)];
    [newImage drawInRect:CGRectMake(productPhotoView.frame.origin.x, productPhotoView.frame.origin.y, newImage.size.width, newImage.size.height)];
    UIImage *newImage2 = UIGraphicsGetImageFromCurrentImageContext();
    UIGraphicsEndImageContext();
    return newImage2;
}
Since I have a background image, it was easier to first create a context just to apply the transformation to the overlay view, then save it and draw it on top of the bottom layer.
Take care of the CTM translation before rotating the overlay view, or it will rotate around an axis placed at {0,0}, whereas I want to rotate my image around its center.
Regards
I had the same problem, but only with rotation.
I use gesture recognizers for move, scale and rotation.
At the beginning, when I saved the image, the scale was right and the position was right, but the rotation was never applied to the saved image.
Now it works.
If you want to know the rotation angle, I get it with:
CGFloat angle = atan2(overlay.transform.b, overlay.transform.a);
And rotation with:
CGContextRotateCTM(context, angle);
If you have a different approach please let me know.
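For completeness, here is a minimal Swift sketch that combines the two ideas above, assuming a uniform pinch scale and that the overlay view's coordinates line up with the bottom image; the names bottom, overlay and overlayView are illustrative, not from the original code:

import UIKit

// Draw the overlay image on top of the bottom image, reproducing the rotation
// and uniform scale that the gesture recognizers left in overlayView.transform.
func merge(bottom: UIImage, overlay: UIImage, overlayView: UIView) -> UIImage {
    let renderer = UIGraphicsImageRenderer(size: bottom.size)
    return renderer.image { ctx in
        bottom.draw(in: CGRect(origin: .zero, size: bottom.size))

        let cg = ctx.cgContext
        let t = overlayView.transform
        let angle = atan2(t.b, t.a)                 // accumulated rotation
        let scale = sqrt(t.a * t.a + t.b * t.b)     // accumulated uniform scale
        let size = CGSize(width: overlayView.bounds.width * scale,
                          height: overlayView.bounds.height * scale)

        // Rotate around the overlay's centre, not around {0,0}.
        cg.translateBy(x: overlayView.center.x, y: overlayView.center.y)
        cg.rotate(by: angle)
        overlay.draw(in: CGRect(x: -size.width / 2, y: -size.height / 2,
                                width: size.width, height: size.height))
    }
}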

How to get text in a CATextLayer to be clear

I've made a CALayer with an added CATextLayer and the text comes out blurry. In the docs, they talk about "sub-pixel antialiasing", but that doesn't mean much to me. Anyone have a code snippet that makes a CATextLayer with a bit of text that is clear?
Here's the text from Apple's documentation:
Note: CATextLayer disables sub-pixel antialiasing when rendering text.
Text can only be drawn using sub-pixel antialiasing when it is
composited into an existing opaque background at the same time that
it's rasterized. There is no way to draw subpixel-antialiased text by
itself, whether into an image or a layer, separately in advance of
having the background pixels to weave the text pixels into. Setting
the opacity property of the layer to YES does not change the rendering
mode.
The second sentence implies that one can get good looking text if one composites it into an existing opaque background at the same time that it's rasterized. That's great, but how do I composite it and how do you give it an opaque background and how do you rasterize it?
The code they use in their example of a Kiosk Menu is as such: (It's OS X, not iOS, but I assume it works!)
NSInteger i;
for (i = 0; i < [names count]; i++) {
    CATextLayer *menuItemLayer = [CATextLayer layer];
    menuItemLayer.string = [self.names objectAtIndex:i];
    menuItemLayer.font = @"Lucida-Grande";
    menuItemLayer.fontSize = fontSize;
    menuItemLayer.foregroundColor = whiteColor;
    [menuItemLayer addConstraint:[CAConstraint
        constraintWithAttribute:kCAConstraintMaxY
                     relativeTo:@"superlayer"
                      attribute:kCAConstraintMaxY
                         offset:-(i * height + spacing + initialOffset)]];
    [menuItemLayer addConstraint:[CAConstraint
        constraintWithAttribute:kCAConstraintMidX
                     relativeTo:@"superlayer"
                      attribute:kCAConstraintMidX]];
    [self.menuLayer addSublayer:menuItemLayer];
} // end of for loop
Thanks!
EDIT: Adding the code that I actually used that resulted in blurry text. It's from a related question I posted about adding a UILabel rather than a CATextLayer but getting a black box instead. http://stackoverflow.com/questions/3818676/adding-a-uilabels-layer-to-a-calayer-and-it-just-shows-black-box
CATextLayer* upperOperator = [[CATextLayer alloc] init];
CGColorSpaceRef space = CGColorSpaceCreateDeviceRGB();
CGFloat components1[4] = {1.0, 1.0, 1.0, 1.0};
CGColorRef almostWhite = CGColorCreate(space,components1);
CGFloat components2[4] = {0.0, 0.0, 0.0, 1.0};
CGColorRef almostBlack = CGColorCreate(space,components2);
CGColorSpaceRelease(space);
upperOperator.string = [NSString stringWithFormat:@"13"];
upperOperator.bounds = CGRectMake(0, 0, 100, 50);
upperOperator.foregroundColor = almostBlack;
upperOperator.backgroundColor = almostWhite;
upperOperator.position = CGPointMake(50.0, 25.0);
upperOperator.font = @"Helvetica-Bold";
upperOperator.fontSize = 48.0f;
upperOperator.borderColor = [UIColor redColor].CGColor;
upperOperator.borderWidth = 1;
upperOperator.alignmentMode = kCAAlignmentCenter;
[card addSublayer:upperOperator];
[upperOperator release];
CGColorRelease(almostWhite);
CGColorRelease(almostBlack);
EDIT 2: See my answer below for how this got solved. sbg.
Short answer — You need to set the contents scaling:
textLayer.contentsScale = [[UIScreen mainScreen] scale];
A while ago I learned that when you have custom drawing code, you have to check for the retina display and scale your graphics accordingly. UIKit takes care of most of this, including font scaling.
Not so with CATextLayer.
My blurriness came from having a .zPosition that was not zero, that is, I had a transform applied to my parent layer. By setting this to zero, the blurriness went away, and was replaced by serious pixelation.
After searching high and low, I found that you can set .contentsScale for a CATextLayer and you can set it to [[UIScreen mainScreen] scale] to match the screen resolution. (I assume this works for non-retina, but I haven't checked - too tired)
After including this for my CATextLayer the text became crisp. Note - it's not necessary for the parent layer.
And the blurriness? It comes back when you're rotating in 3D, but you don't notice it because the text starts out clear and while it's in motion, you can't tell.
Problem solved!
Swift
Set the text layer to use the same scale as the screen.
textLayer.contentsScale = UIScreen.main.scale
(Before and after screenshots omitted.)
Before setting shouldRasterize, you should:
1. set the rasterizationScale of the base layer you are going to rasterize
2. set the contentsScale property of any CATextLayers, and possibly other types of layers (it never hurts to do it)
If you don't do #1, the Retina version of the sublayers will look blurry, even for normal CALayers.
- (void)viewDidLoad {
    [super viewDidLoad];

    CALayer *mainLayer = [[self view] layer];
    [mainLayer setRasterizationScale:[[UIScreen mainScreen] scale]];

    CATextLayer *messageLayer = [CATextLayer layer];
    [messageLayer setForegroundColor:[[UIColor blackColor] CGColor]];
    [messageLayer setContentsScale:[[UIScreen mainScreen] scale]];
    [messageLayer setFrame:CGRectMake(50, 170, 250, 50)];
    [messageLayer setString:(id)@"asdfasd"];
    [mainLayer addSublayer:messageLayer];
    [mainLayer setShouldRasterize:YES];
}
First off I wanted to point out that you've tagged your question with iOS, but constraint managers are only available on OSX, so I'm not sure how you're getting this to work unless you've been able to link against it in the simulator somehow. On the device, this functionality is not available.
Next, I'll just point out that I create CATextLayers often and never have the blurring problem you're referring to, so I know it can be done. In a nutshell, this blurring occurs because you are not positioning your layer on a whole pixel. Remember that when you set the position of a layer, you use float values for x and y. If those values have digits after the decimal point, the layer will not be positioned on a whole pixel and will therefore show this blurring effect, the degree of which depends upon the actual values. To test this, just create a CATextLayer and explicitly add it to the layer tree, ensuring that your position parameter is on a whole pixel. For example:
CATextLayer *textLayer = [CATextLayer layer];
[textLayer setBounds:CGRectMake(0.0f, 0.0f, 200.0f, 30.0f)];
[textLayer setPosition:CGPointMake(200.0f, 100.0f)];
[textLayer setString:@"Hello World!"];
[[self menuLayer] addSublayer:textLayer];
If your text is still blurry, then there is something else wrong. Blurred text on your text layer is an artifact of incorrectly written code and not an intended capability of the layer. When adding your layers to a parent layer, you can just coerce the x and y values to the nearest whole pixel and it should solve your blurring problem.
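A tiny Swift sketch of that whole-pixel coercion; the fractional position value is purely hypothetical:

import UIKit

// A small sketch of the "coerce to a whole pixel" advice.
let textLayer = CATextLayer()
textLayer.bounds = CGRect(x: 0, y: 0, width: 200, height: 30)
textLayer.string = "Hello World!"

// `computedPosition` stands in for whatever fractional value your layout produces.
let computedPosition = CGPoint(x: 200.37, y: 99.6)
textLayer.position = CGPoint(x: computedPosition.x.rounded(),
                             y: computedPosition.y.rounded())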
You should do two things; the first was mentioned above:
Extend CATextLayer and set the opaque and contentsScale properties to properly support the Retina display, then render with anti-aliasing enabled for text.
+ (TextActionLayer *)layer
{
    TextActionLayer *layer = [[TextActionLayer alloc] init];
    layer.opaque = TRUE;
    CGFloat scale = [[UIScreen mainScreen] scale];
    layer.contentsScale = scale;
    return [layer autorelease];
}

// Render after enabling anti-aliasing for text
- (void)drawInContext:(CGContextRef)ctx
{
    CGRect bounds = self.bounds;
    CGContextSetFillColorWithColor(ctx, self.backgroundColor);
    CGContextFillRect(ctx, bounds);
    CGContextSetShouldSmoothFonts(ctx, TRUE);
    [super drawInContext:ctx];
}
If you came searching here for a similar issue with a CATextLayer on OS X: after much banging my head against the wall, I got sharp, clear text by doing:
text_layer.contentsScale = self.window!.backingScaleFactor
(I also set the view's background layer's contentsScale to the same value.)
This is faster and easier: you just need to set contentsScale
CATextLayer *label = [[CATextLayer alloc] init];
[label setFontSize:15];
[label setContentsScale:[[UIScreen mainScreen] scale]];
[label setFrame:CGRectMake(0, 0, 50, 50)];
[label setString:@"test"];
[label setAlignmentMode:kCAAlignmentCenter];
[label setBackgroundColor:[[UIColor clearColor] CGColor]];
[label setForegroundColor:[[UIColor blackColor] CGColor]];
[self addSublayer:label];
