Is there a way to draw a CGContextDrawRadialGradient as an oval instead of a perfect circle? - quartz-graphics

I need a radial gradient in the shape of an oval or ellipse, and it seems like CGContextDrawRadialGradient can only draw a perfect circle. I've been drawing to a square context, then copying/drawing into a rectangular context.
Any better way to do this?
Thanks!

The only way I've found to do this is as Mark F suggested, but I think the answer needs an example to be easier to understand.
Draw an elliptical gradient in a view in iOS (and using ARC):
- (void)drawRect:(CGRect)rect {
    CGContextRef ctx = UIGraphicsGetCurrentContext();
    // Create gradient
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGFloat locations[] = {0.0, 1.0};
    UIColor *centerColor = [UIColor orangeColor];
    UIColor *edgeColor = [UIColor purpleColor];
    NSArray *colors = [NSArray arrayWithObjects:(__bridge id)centerColor.CGColor, (__bridge id)edgeColor.CGColor, nil];
    CGGradientRef gradient = CGGradientCreateWithColors(colorSpace, (__bridge CFArrayRef)colors, locations);
    // Scaling transformation and keeping track of the inverse
    CGAffineTransform scaleT = CGAffineTransformMakeScale(2.0, 1.0);
    CGAffineTransform invScaleT = CGAffineTransformInvert(scaleT);
    // Extract the Sx and Sy elements from the inverse matrix
    // (See the Quartz documentation for the math behind the matrices)
    CGPoint invS = CGPointMake(invScaleT.a, invScaleT.d);
    // Transform center and radius of gradient with the inverse
    CGPoint center = CGPointMake((self.bounds.size.width / 2) * invS.x, (self.bounds.size.height / 2) * invS.y);
    CGFloat radius = (self.bounds.size.width / 2) * invS.x;
    // Draw the gradient with the scale transform on the context
    CGContextScaleCTM(ctx, scaleT.a, scaleT.d);
    CGContextDrawRadialGradient(ctx, gradient, center, 0, center, radius, kCGGradientDrawsBeforeStartLocation);
    // Reset the context
    CGContextScaleCTM(ctx, invS.x, invS.y);
    // Continue to draw whatever else ...
    // Clean up the memory used by Quartz
    CGGradientRelease(gradient);
    CGColorSpaceRelease(colorSpace);
}
Placed in a view with a black background, this produces an orange-to-purple elliptical gradient.

You can change the transform of the context to draw an ellipse (for example, apply CGContextScaleCTM(context, 2.0, 1.0) just before calling CGContextDrawRadialGradient() to draw an elliptical gradient that's twice as wide as it is high). Just remember to apply the inverse transform to your start and end points, though.
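For illustration, here is a condensed sketch of that suggestion (ctx, gradient, center, and radius stand for whatever you already have in your drawing code). Bracketing the scaled drawing in CGContextSaveGState/CGContextRestoreGState is an alternative to applying the inverse scale manually, as the longer example above does:
CGContextSaveGState(ctx);
CGContextScaleCTM(ctx, 2.0, 1.0);                  // twice as wide as it is high
CGPoint c = CGPointMake(center.x / 2.0, center.y); // apply the inverse of the scale to the center
CGContextDrawRadialGradient(ctx, gradient, c, 0, c, radius, kCGGradientDrawsBeforeStartLocation);
CGContextRestoreGState(ctx);                       // remove the scale for any later drawing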

Related

How to stretch geometry so its bounding box precisely fits the screen in Three.js

I am looking for a way to stretch a geometry (with all vertex z values = 0) so that it fills the visible screen (an HTML canvas element).
So far I have worked out how to fit the geometry to the screen with the following code, which basically adjusts camera.z so the geometry fits the height of the canvas:
geometry.computeBoundingBox();
const bbox = geometry.boundingBox;
const geometryCenter = bbox.getCenter(new THREE.Vector3());
const geometrySize = bbox.getSize(new THREE.Vector3());
const cameraZ = getZFromGeometrySize(camera.fov, geometrySize);
const scale = getScaleFromZ(height, camera.fov, cameraZ);
const zoomTransform = d3.zoomIdentity
    .translate(width * 0.5, height * 0.5)
    .scale(scale);
zoom.transform(canvasSelection, zoomTransform);
camera.position.set(geometryCenter.x, geometryCenter.y, cameraZ);
camera.updateProjectionMatrix();
with the following function definitions:
function getZFromGeometrySize(fov, geometrySize) {
    const maxSize = Math.max(geometrySize.x, geometrySize.y);
    const halfFOVRadians = toRadians(fov * 0.5);
    return maxSize / (2 * Math.tan(halfFOVRadians));
}
function getScaleFromZ(height, fov, z) {
    const halfFOVRadians = toRadians(fov * 0.5);
    return height / (2 * Math.tan(halfFOVRadians) * z);
}
This, however, uses the camera position so that the geometry fits the view. What I am looking for instead is a way to stretch the geometry itself so that its bounding box precisely fits the screen, ideally with some predefined padding.
Since this is not a matter of camera settings, I need to manipulate the vertex values to stretch the geometry horizontally. How can I achieve this? I want to retain how the vertex values relate to the underlying data.
I assume this would be a function of the canvas dimensions (width, height), the geometry coordinates, and the camera settings, returning new geometry coordinates? Any hint is appreciated.
A short answer to this question: set the camera's aspect ratio to 1.0.
This will work if the geometry bounds are already in clip space, i.e. [-1, 1]. If not, they have to be converted to clip space first, as sketched below.
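A rough sketch of that idea follows (the helper name stretchGeometryToClipSpace and the padding value are illustrative, not from the original post): map the geometry's XY bounding box onto clip space by translating and scaling the vertices, then render with the camera's aspect forced to 1.0.
function stretchGeometryToClipSpace(geometry, padding = 0.05) {
    geometry.computeBoundingBox();
    const bbox = geometry.boundingBox;
    const center = bbox.getCenter(new THREE.Vector3());
    const size = bbox.getSize(new THREE.Vector3());
    // Map the bounding box to [-1 + padding, 1 - padding] in x and y,
    // scaling each axis independently so the box fills the screen.
    const sx = 2 * (1 - padding) / size.x;
    const sy = 2 * (1 - padding) / size.y;
    geometry.translate(-center.x, -center.y, 0); // center the geometry on the origin first
    geometry.scale(sx, sy, 1);                   // then stretch each axis independently
    return { sx, sy, center };                   // keep the mapping so vertices can be related back to the data
}
// Usage (sketch): camera.aspect = 1.0; camera.updateProjectionMatrix();
// const mapping = stretchGeometryToClipSpace(geometry);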

Adding direction arrows to MKPolyline

I'm trying to add direction arrows to an MKPolyline, and I've almost got it, but for some reason some of the arrows are chopped off. I'm not great at drawing with Core Graphics, so my guess is it's something in there. Anyone have any pointers on how to deal with the clipped arrows?
The following code does the drawing in the drawMapRect:zoomScale:inContext: method of a subclassed MKPolylineRenderer:
MKMapPoint prevMapPoint = mapPoints[0];
MKMapPoint mapPoint = mapPoints[1];
for (NSUInteger i = 1; i < pointCount; i++) {
    mapPoint = mapPoints[i];
    CGPoint prevCGPt = [self pointForMapPoint:prevMapPoint];
    CGPoint cgPoint = [self pointForMapPoint:mapPoint];
    CGFloat bearing = atan2(cgPoint.y - prevCGPt.y, cgPoint.x - prevCGPt.x) - M_PI;
    // Get the other two corners of the triangle
    CGFloat arrowAngle = degreesToRadians(40.0);
    CGPoint pt2 = PointAtBearingFromPoint(cgPoint, bearing + arrowAngle / 2, arrowLength);
    CGPoint pt3 = PointAtBearingFromPoint(cgPoint, bearing - arrowAngle / 2, arrowLength);
    // Draw the triangle and fill it
    CGContextBeginPath(context);
    CGContextMoveToPoint(context, cgPoint.x, cgPoint.y); // go to the tip of the triangle
    CGContextAddLineToPoint(context, pt2.x, pt2.y);
    CGContextAddLineToPoint(context, pt3.x, pt3.y);
    CGContextClosePath(context);
    CGContextFillPath(context);
    prevMapPoint = mapPoint; // remember the previous point for the next arrow's bearing
}
(Also in the drawMapRect:zoomScale:inContext: method there is a for loop that decides where to put each arrow and populates the mapPoints array, but I'm assuming that isn't the problem.)

Scroll MKMapView Underneath Circle Drawn on MKMapView

UPDATE
I need to draw a circle onto an MKMapView, something where I can get the radius of the circle and its center coordinate. However, I would also like the circle to be a subview of the MKMapView, so that the map view can scroll underneath the circle, updating its center coordinate as the map moves and updating its radius as the map is zoomed in and out.
Does anyone know how I might be able to accomplish this?
This is the original wording of the question:
I've drawn a circle onto an MKMapView using the code below:
- (void)viewDidLoad
{
    [super viewDidLoad];
    self.locationManager = [[CLLocationManager alloc] init];
    self.locationManager.delegate = self;
    self.region = [MKCircle circleWithCenterCoordinate:self.locationManager.location.coordinate radius:kViewRegionDefaultDistance];
    [self.mapView addOverlay:self.region];
}
- (MKOverlayRenderer *)mapView:(MKMapView *)mapView rendererForOverlay:(id<MKOverlay>)overlay
{
    MKCircleRenderer *region = [[MKCircleRenderer alloc] initWithOverlay:overlay];
    region.strokeColor = [UIColor blueColor];
    region.fillColor = [[UIColor blueColor] colorWithAlphaComponent:0.4];
    return region;
}
This works and produces a circle on the map view. However, when I scroll the map view, the circle moves with it. I would like the circle to remain stationary and have the map view scroll underneath the circle.
It is important to note that I will need to get the center coordinate and radius of the circle in order to create a region. For that reason, I cannot simply draw a UIView on top of the MKMapView, as I would have no way to get the radius in meters of the UIView.
I solved it!
Step 1:
I created a UIView and added it as a subview of the map view, making sure to center the UIView on the map view. This matters because you will use the centerCoordinate property of the MKMapView to calculate the radius.
self.region = [[UIView alloc] initWithFrame:centerOfMapViewFrame];
self.region.contentMode = UIViewContentModeRedraw;
self.region.userInteractionEnabled = NO;
self.region.alpha = 0.5;
self.region.layer.cornerRadius = widthOfView/2;
self.region.backgroundColor = [UIColor blueColor];
[self.mapView addSubview:self.region];
The image below shows the circular UIView added as a subview to the mapView.
Step 2:
Calculate the radius based on the center coordinate of the map view and the edge coordinate of the UIView.
CLLocationCoordinate2D edgeCoordinate = [self.mapView convertPoint:CGPointMake((CGRectGetWidth(self.region.bounds)/2), 0) toCoordinateFromView:self.region]; //self.region is the circular UIView
CLLocation *edgeLocation = [[CLLocation alloc] initWithLatitude:edgeCoordinate.latitude longitude:edgeCoordinate.longitude];
CLLocation *centerLocation = [[CLLocation alloc] initWithLatitude:self.mapView.centerCoordinate.latitude longitude:self.mapView.centerCoordinate.longitude];
CGFloat radius = [edgeLocation distanceFromLocation:centerLocation]; //is in meters
The image below shows an annotation on the edgeLocation and on the centerLocation.
Swift 4.2/5 adaptation
let edgeCoordinate = mapView.convert(CGPoint(x: overlayView.bounds.width / 2, y: 0), toCoordinateFrom: overlayView)
let edgeLocation: CLLocation = .init(latitude: edgeCoordinate.latitude, longitude: edgeCoordinate.longitude)
let centerLocation: CLLocation = .init(latitude: mapView.centerCoordinate.latitude, longitude: mapView.centerCoordinate.longitude)
let radius = edgeLocation.distance(from: centerLocation)
// do something with the radius
where overlayView is the circular UIView you created to represent the radius.

equivalent to gl_FragCoord in glsl vertex shader

I'm trying to get the screen position of a vertex, in pixels, inside a vertex shader.
I saw some other posts here, but I can't find an answer that works for me.
This is what I've got in my vertex shader:
#version 400
layout (location = 0) in vec3 inPosition;
uniform mat4 MVP; // modelViewProjection
uniform vec2 window;
void main()
{
    // vertex in screen space
    vec2 fake_frag_coord = (MVP * vec4(inPosition, 1.0)).xy;
    float X = (fake_frag_coord.x * window.x / 2.0) + window.x;
    float Y = (fake_frag_coord.y * window.y / 2.0) + window.y;
}
It's not working very well, and I know it's a strange thing to do inside a vertex shader, but I want to multiply my vertex offset by a 2D texture, so I need to find which pixel the vertex is on top of in order to multiply by the corresponding pixel of the texture.
Thanks!
Luiz
I have corrected your vertex shader with proper terms, and shown you the exact sequence of transformations that actually happens when GL computes gl_FragCoord (window-space).
#version 400
layout (location = 0) in vec4 inPosition; // Always use vec4, it makes life easier!
uniform mat4 MVP; // modelViewProjection
uniform vec2 window;
void main()
{
    // Vertex in clip space
    vec4 fake_frag_coord = (MVP * inPosition);                          // Range: [-w,w]^4
    // Vertex in NDC space
    fake_frag_coord.xyz /= fake_frag_coord.w;                           // Rescale: [-1,1]^3
    fake_frag_coord.w = 1.0 / fake_frag_coord.w;                        // Invert W
    // Vertex in window space
    fake_frag_coord.xyz = fake_frag_coord.xyz * vec3(0.5) + vec3(0.5);  // Rescale: [0,1]^3
    fake_frag_coord.xy *= window;                                       // Scale and bias for viewport
    // Assume depth range: [0,1] --> no need to adjust fake_frag_coord.z
    [...]
}
Texture coordinates and window-space coordinates are very different things, however. Generally you need normalized coordinates for traditional texture fetches, meaning you want the coordinates in the range [0,1].
Luckily, window space and texture space share the same origin convention, (0,0) = bottom-left, so you can simply leave out the line below to get the appropriate texture coordinates:
fake_frag_coord.xy *= window; // Scale and Bias for Viewport
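For the asker's original goal of offsetting each vertex by a texture, here is a minimal sketch of how those normalized coordinates would then be used; the sampler name displacementMap is an assumption and stands for whatever 2D texture the application binds:
// Inside main(), after computing fake_frag_coord as above (without the viewport scale):
vec2 texCoord = fake_frag_coord.xy;                // already in [0,1] after the 0.5 scale/bias
vec4 offset = texture(displacementMap, texCoord);  // displacementMap: a sampler2D uniform (hypothetical name)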
I think Andon M. Coleman's answer is fine. However, I'd like to point out a more general issue with the approach discussed in the question: there might be no meaningful screen-space position for a vertex at all.
The vertex might lie outside the viewing frustum. This will not be a problem if the vertices you draw are guaranteed to lie inside the frustum, or if you are drawing only points.
But it will fail if you have primitives intersecting the near plane. You might think that in such a case you just get some coordinates outside [-1,1] in NDC space, and that if you use them to assign some output value for the vertex, the clipping stage will sort it out. But that assumption is wrong. You might get values which are perfectly inside [-1,1] in NDC space even for vertices which lie outside the frustum, and vertices which actually lie behind the camera will appear as if they were in front of it. No subsequent clipping stage can fix this.
The only way to get this right would be to actually carry out the clipping operation before doing the divide by w, and that is something you don't want to do in a vertex shader.
If you want to get this working on the JS side of things, this is how I adapted Andon M. Coleman's reply:
var winW = window.innerWidth;
var winH = window.innerHeight;
camera.updateProjectionMatrix();
// Not sure about the order of these! I was using orthographic camera so it didn't matter but double check the order if it doesn't work!
var MVP = camera.projectionMatrix.clone().multiply(camera.matrixWorldInverse);
// position to vertex clip-space (position is assumed to be a THREE.Vector4 with w = 1)
var fake_frag_coord = position.applyMatrix4(MVP); // Range: [-w,w]^4
// vertex to NDC-space
fake_frag_coord.x = fake_frag_coord.x / fake_frag_coord.w; // Rescale: [-1,1]^3
fake_frag_coord.y = fake_frag_coord.y / fake_frag_coord.w; // Rescale: [-1,1]^3
fake_frag_coord.z = fake_frag_coord.z / fake_frag_coord.w; // Rescale: [-1,1]^3
fake_frag_coord.w = 1.0 / fake_frag_coord.w; // Invert W
// Vertex in window-space
fake_frag_coord.x = fake_frag_coord.x * 0.5;
fake_frag_coord.y = fake_frag_coord.y * 0.5;
fake_frag_coord.z = fake_frag_coord.z * 0.5;
fake_frag_coord.x = fake_frag_coord.x + 0.5;
fake_frag_coord.y = fake_frag_coord.y + 0.5;
fake_frag_coord.z = fake_frag_coord.z + 0.5;
// Scale and Bias for Viewport (We want the window coordinates, so no need for this)
fake_frag_coord.x = fake_frag_coord.x / winW;
fake_frag_coord.y = fake_frag_coord.y / winH;

OpenTK circle rotation

I'm working on my first project using OpenTK. I'm creating a virtual arcball for 3D model rotation. It works fine, but I need to add a circle that won't rotate with the model. This circle should visualize the arcball.
My code to achieve rotation is:
private void SetCamera()
{
    GL.MatrixMode(MatrixMode.Modelview);
    Matrix4 scale = Matrix4.Scale(magnification / diameter);
    Matrix4 translation1 = Matrix4.CreateTranslation(-center);
    Matrix4 rotation = Matrix4.CreateFromAxisAngle(axisOfRotation, angleOfRotation * (float)numericSensitivity.Value);
    Matrix4 translation2 = Matrix4.CreateTranslation(0.0f, 0.0f, -1.5f);
    if (rotationChanged)
    {
        oldRotation *= rotation;
        rotationChanged = false;
    }
    modelview = translation1 * scale * oldRotation * translation2;
    GL.LoadMatrix(ref modelview);
}
So I would like to ask if there is some way to draw a circle that is unaffected by this rotation (i.e. it stays at the same position on the screen).
If I understand your question correctly, then all you need to do is set the modelview matrix back to the identity before you draw your circle. You can easily do that using the PushMatrix() and PopMatrix() functions. Something like this:
//Draw normal things
GL.MatrixMode(MatrixMode.Modelview);
GL.PushMatrix();
GL.LoadIdentity();
//Draw un-rotated circle
GL.PopMatrix();
PushMatrix() saves the current matrix onto a stack, and PopMatrix() pops the top matrix off of that stack. This means PopMatrix() will take you back to your normal rotated frame of reference after you're done with the circle.
