DirectX 11 omnidirectional (point light) shadow mapping - shadows projected in the wrong place

I am implementing omnidirectional shadow mapping in C++ with DirectX 11, following the algorithm from the book "HLSL Development Cookbook" by Doron Feinstein. However, when my screen resolution (and everything that depends on it) differs from the shadow map resolution, the shadows end up in the wrong place and are projected incorrectly. How can I fix this?
XMMATRIX* PointLight::GetCubeViewProjection()
{
XMMATRIX lightProjection, positionMatrix, spotView, toShadow;
RebuildWorldMatrixPosition();
XMFLOAT3 worldPosition = this->GetWorldPosition();
positionMatrix = XMMatrixTranslation(-worldPosition.x, -worldPosition.y, -worldPosition.z);
lightProjection = XMMatrixPerspectiveFovLH(XM_PIDIV2, 1.0f, SHADOW_NEAR_PLANE, m_radius);
// Cube +X
spotView = XMMatrixRotationY(XM_PI + XM_PIDIV2);
toShadow = positionMatrix * spotView * lightProjection;
m_cubeViewProjection[0] = XMMatrixTranspose(toShadow);
// Cube -X
spotView = XMMatrixRotationY(XM_PIDIV2);
toShadow = positionMatrix * spotView * lightProjection;
m_cubeViewProjection[1] = XMMatrixTranspose(toShadow);
// Cube +Y
spotView = XMMatrixRotationX(XM_PIDIV2);
toShadow = positionMatrix * spotView * lightProjection;
m_cubeViewProjection[2] = XMMatrixTranspose(toShadow);
// Cube -Y
spotView = XMMatrixRotationX(XM_PI + XM_PIDIV2);
toShadow = positionMatrix * spotView * lightProjection;
m_cubeViewProjection[3] = XMMatrixTranspose(toShadow);
// Cube +Z
toShadow = positionMatrix * lightProjection;
m_cubeViewProjection[4] = XMMatrixTranspose(toShadow);
// Cube -Z
spotView = XMMatrixRotationY(XM_PI);
toShadow = positionMatrix * spotView * lightProjection;
m_cubeViewProjection[5] = XMMatrixTranspose(toShadow);
return m_cubeViewProjection;
}
cbuffer WorldMatrixBuffer : register( b0 )
{
matrix worldMatrix;
};
//vertex shadow gen shader
float4 PointShadowGenVS(float4 Pos : POSITION) : SV_Position
{
Pos.w = 1.0f;
return mul(Pos, worldMatrix);
}
//geometry shadow gen shader
cbuffer ShadowMapCubeViewProj : register( b0 )
{
float4x4 cubeViewProj[6] : packoffset(c0);
};
struct GS_OUTPUT
{
float4 Pos: SV_POSITION;
uint RTIndex : SV_RenderTargetArrayIndex;
};
[maxvertexcount(18)]
void PointShadowGenGS(triangle float4 InPos[3] : SV_Position, inout TriangleStream<GS_OUTPUT> OutStream)
{
for (int iFace = 0; iFace < 6; ++iFace)
{
GS_OUTPUT output;
output.RTIndex = iFace;
for (int v = 0; v < 3; ++v)
{
output.Pos = mul(InPos[v], cubeViewProj[iFace]);
OutStream.Append(output);
}
OutStream.RestartStrip();
}
}
//point light pixel shader
float PointShadowPCF(float3 toPixel)
{
float3 toPixelAbs = abs(toPixel);
float z = max(toPixelAbs.x, max(toPixelAbs.y, toPixelAbs.z));
float depth = (lightPerspectiveValues.x * z + lightPerspectiveValues.y) / z;
return pointShadowMapTexture.SampleCmpLevelZero(PCFSampler, toPixel, depth).x;
}
float shadowAttenuation = PointShadowPCF(worldPosition - lightPosition);
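Note that lightPerspectiveValues is not declared in the snippet above; it holds the two entries of the light's projection matrix that map a view-space distance to the depth stored in the cube map. A minimal sketch of how it can be filled on the CPU side, assuming the same projection used in GetCubeViewProjection():
// For XMMatrixPerspectiveFovLH, _33 = f / (f - n) and _43 = -n * f / (f - n),
// so NDC depth = (_33 * z + _43) / z, which is exactly the formula used in PointShadowPCF.
XMMATRIX lightProjection = XMMatrixPerspectiveFovLH(XM_PIDIV2, 1.0f, SHADOW_NEAR_PLANE, m_radius);
XMFLOAT4X4 proj;
XMStoreFloat4x4(&proj, lightProjection);
XMFLOAT2 lightPerspectiveValues(proj._33, proj._43); // uploaded to the lighting constant buffer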

The problem was an incorrect viewport. The viewport used when rendering the shadow map must have the same width and height as the shadow map itself; I had created it incorrectly.
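For illustration, a minimal sketch of the shadow-pass viewport setup (SHADOW_MAP_SIZE and m_shadowCubeDSV are assumed names, not taken from the original code):
D3D11_VIEWPORT shadowViewport = {};
shadowViewport.TopLeftX = 0.0f;
shadowViewport.TopLeftY = 0.0f;
shadowViewport.Width = static_cast<float>(SHADOW_MAP_SIZE);  // shadow map width, not the back buffer width
shadowViewport.Height = static_cast<float>(SHADOW_MAP_SIZE); // shadow map height, not the back buffer height
shadowViewport.MinDepth = 0.0f;
shadowViewport.MaxDepth = 1.0f;
// Bind before rendering the cube shadow map, then restore the screen-sized viewport for the main pass.
context->RSSetViewports(1, &shadowViewport);
context->OMSetRenderTargets(0, nullptr, m_shadowCubeDSV);    // depth-only pass into the cube map array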

Related

at::Tensor to UIImage

I have a PyTorch model and am trying to run it on iOS. I have the following code:
at::Tensor tensor2 = torch::from_blob(imageBuffer2, {1, 1, 256, 256}, at::kFloat);
c10::InferenceMode guard;
auto output = _impl.forward({tensor1, tensor2});
torch::Tensor tensor_img = output.toTuple()->elements()[0].toTensor();
My question is: how can I convert tensor_img to a UIImage?
I found this function in the PyTorch documentation:
- (UIImage*)convertRGBBufferToUIImage:(unsigned char*)buffer
withWidth:(int)width
withHeight:(int)height {
char* rgba = (char*)malloc(width * height * 4);
for (int i = 0; i < width * height; ++i) {
rgba[4 * i] = buffer[3 * i];
rgba[4 * i + 1] = buffer[3 * i + 1];
rgba[4 * i + 2] = buffer[3 * i + 2];
rgba[4 * i + 3] = 255;
}
size_t bufferLength = width * height * 4;
CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, rgba, bufferLength, NULL);
size_t bitsPerComponent = 8;
size_t bitsPerPixel = 32;
size_t bytesPerRow = 4 * width;
CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
if (colorSpaceRef == NULL) {
NSLog(#"Error allocating color space");
CGDataProviderRelease(provider);
return nil;
}
CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault | kCGImageAlphaPremultipliedLast;
CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;
CGImageRef iref = CGImageCreate(width,
height,
bitsPerComponent,
bitsPerPixel,
bytesPerRow,
colorSpaceRef,
bitmapInfo,
provider,
NULL,
YES,
renderingIntent);
uint32_t* pixels = (uint32_t*)malloc(bufferLength);
if (pixels == NULL) {
NSLog(#"Error: Memory not allocated for bitmap");
CGDataProviderRelease(provider);
CGColorSpaceRelease(colorSpaceRef);
CGImageRelease(iref);
return nil;
}
CGContextRef context = CGBitmapContextCreate(pixels,
width,
height,
bitsPerComponent,
bytesPerRow,
colorSpaceRef,
bitmapInfo);
if (context == NULL) {
NSLog(#"Error context not created");
free(pixels);
}
UIImage* image = nil;
if (context) {
CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, width, height), iref);
CGImageRef imageRef = CGBitmapContextCreateImage(context);
if ([UIImage respondsToSelector:@selector(imageWithCGImage:scale:orientation:)]) {
float scale = [[UIScreen mainScreen] scale];
image = [UIImage imageWithCGImage:imageRef scale:scale orientation:UIImageOrientationUp];
} else {
image = [UIImage imageWithCGImage:imageRef];
}
CGImageRelease(imageRef);
CGContextRelease(context);
}
CGColorSpaceRelease(colorSpaceRef);
CGImageRelease(iref);
CGDataProviderRelease(provider);
if (pixels) {
free(pixels);
}
return image;
}
@end
If I understand correctly, that function converts an unsigned char* buffer to a UIImage. I think I need to convert my tensor_img to an unsigned char*, but I don't understand how to do that.
The first snippet is the Torch bridge and the second is the UIImage helper, which I call from Swift. Anyway, I resolved the issue myself, so this can be closed. Code example:
for (int i = 0; i < 3 * width * height; i++) {
[results addObject:@(floatBuffer[i])];
}
NSMutableData* data = [NSMutableData dataWithLength:sizeof(float) * 3 * width * height];
float* buffer = (float*)[data mutableBytes];
for (int j = 0; j < 3 * width * height; j++) {
buffer[j] = [results[j] floatValue];
}
return buffer;
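For completeness, one possible way to get from tensor_img to the unsigned char* buffer that convertRGBBufferToUIImage expects, using the LibTorch C++ API (a sketch assuming tensor_img is a 1 x 3 x H x W float tensor with values in [0, 1]):
// Reorder to H x W x 3 interleaved RGB, scale to 0-255 and convert to bytes.
at::Tensor rgb = tensor_img.squeeze(0)     // 3 x H x W
                     .permute({1, 2, 0})   // H x W x 3
                     .mul(255.0f)
                     .clamp(0, 255)
                     .to(at::kByte)
                     .contiguous();
const int height = static_cast<int>(rgb.size(0));
const int width = static_cast<int>(rgb.size(1));
uint8_t* rgbBuffer = rgb.data_ptr<uint8_t>();  // valid only while `rgb` stays alive
// The buffer can then be handed to the existing helper:
//   [self convertRGBBufferToUIImage:rgbBuffer withWidth:width withHeight:height]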

Simple 3D Shape faces not rendering as expected - OpenGL ES in Android Studio

I am trying to make a rotating octahedron display correctly. I have successfully rendered other shapes such as a cube and a tetrahedron, but I am having some difficulty with this one.
Here is the simple obj file I am using:
v 0 -1 0
v 1 0 0
v 0 0 1
v -1 0 0
v 0 1 0
v 0 0 -1
#
f 1 2 3
f 4 1 3
f 5 4 3
f 2 5 3
f 2 1 6
f 1 4 6
f 4 5 6
f 5 2 6
My code is as follows:
class Shape(context: Context) {
private var mProgram: Int = 0
// Use to access and set the view transformation
private var mMVPMatrixHandle: Int = 0
//For Projection and Camera Transformations
private var vertexShaderCode = (
// This matrix member variable provides a hook to manipulate
// the coordinates of the objects that use this vertex shader
"uniform mat4 uMVPMatrix;" +
"attribute vec4 vPosition;" +
//"attribute vec4 vColor;" +
//"varying vec4 vColorVarying;" +
"void main() {" +
// the matrix must be included as a modifier of gl_Position
// Note that the uMVPMatrix factor *must be first* in order
// for the matrix multiplication product to be correct.
" gl_Position = uMVPMatrix * vPosition;" +
//"vColorVarying = vColor;"+
"}")
private var fragmentShaderCode = (
"precision mediump float;" +
"uniform vec4 vColor;" +
//"varying vec4 vColorVarying;"+
"void main() {" +
//" gl_FragColor = vColorVarying;" +
" gl_FragColor = vColor;" +
"}")
internal var shapeColor = arrayOf<FloatArray>(
//front face (grey)
floatArrayOf(0f, 0f, 0f, 1f), //black
floatArrayOf(0f, 0f, 1f, 1f),
floatArrayOf(0f, 1f, 0f, 1f),
floatArrayOf(1f, 0f, 0f, 1f), // red
floatArrayOf(1f, 1f, 0f, 1f),
floatArrayOf(1f, 0f, 1f, 1f),
floatArrayOf(1f, 0f, 1f, 1f),
floatArrayOf(0f, 1f, 1f, 1f)
)
private var mPositionHandle: Int = 0
private var mColorHandle: Int = 0
// var objLoader = ObjLoader(context, "tetrahedron.txt")
// var objLoader = ObjLoader(context, "cube.txt")
var objLoader = ObjLoader(context, "octahedron.txt")
var shapeCoords: FloatArray
var numFaces: Int = 0
var vertexBuffer: FloatBuffer
var drawOrder: Array<ShortArray>
lateinit var drawListBuffer: ShortBuffer
init {
//assign coordinates and order in which to draw them (obtained from obj loader class)
shapeCoords = objLoader.vertices.toFloatArray()
drawOrder = objLoader.faces.toTypedArray()
numFaces = objLoader.numFaces
// initialize vertex byte buffer for shape coordinates
val bb = ByteBuffer.allocateDirect(
// (# of coordinate values * 4 bytes per float)
shapeCoords.size * 4
)
bb.order(ByteOrder.nativeOrder())
vertexBuffer = bb.asFloatBuffer()
vertexBuffer.put(shapeCoords)
vertexBuffer.position(0)
// create empty OpenGL ES Program
mProgram = GLES20.glCreateProgram()
val vertexShader = loadShader(
GLES20.GL_VERTEX_SHADER,
vertexShaderCode
)
val fragmentShader = loadShader(
GLES20.GL_FRAGMENT_SHADER,
fragmentShaderCode
)
// add the vertex shader to program
GLES20.glAttachShader(mProgram, vertexShader)
// add the fragment shader to program
GLES20.glAttachShader(mProgram, fragmentShader)
// creates OpenGL ES program executables
GLES20.glLinkProgram(mProgram)
}
var vertexStride = COORDS_PER_VERTEX * 4 // 4 bytes per vertex
fun draw(mvpMatrix: FloatArray) { // pass in the calculated transformation matrix
for (face in 0 until numFaces) {
// Add program to OpenGL ES environment
GLES20.glUseProgram(mProgram)
// get handle to vertex shader's vPosition member
mPositionHandle = GLES20.glGetAttribLocation(mProgram, "vPosition")
// get handle to fragment shader's vColor member
mColorHandle = GLES20.glGetUniformLocation(mProgram, "vColor")
// Enable a handle to the cube vertices
GLES20.glEnableVertexAttribArray(mPositionHandle)
// Prepare the cube coordinate data
GLES20.glVertexAttribPointer(
mPositionHandle, COORDS_PER_VERTEX,
GLES20.GL_FLOAT, false,
vertexStride, vertexBuffer
)
GLES20.glUniform4fv(mColorHandle, 1, shapeColor[face], 0)
// get handle to shape's transformation matrix
mMVPMatrixHandle = GLES20.glGetUniformLocation(mProgram, "uMVPMatrix")
// Pass the projection and view transformation to the shader
GLES20.glUniformMatrix4fv(mMVPMatrixHandle, 1, false, mvpMatrix, 0)
// initialize byte buffer for the draw list
var dlb = ByteBuffer.allocateDirect(
// (# of coordinate values * 2 bytes per short)
drawOrder[face].size * 2
)
dlb.order(ByteOrder.nativeOrder())
drawListBuffer = dlb.asShortBuffer()
drawListBuffer.put(drawOrder[face])
drawListBuffer.position(0)
GLES20.glDrawElements(
GLES20.GL_TRIANGLES,
dlb.capacity(),
GLES20.GL_UNSIGNED_SHORT,
drawListBuffer //position indices
)
}
// Disable vertex array
GLES20.glDisableVertexAttribArray(mMVPMatrixHandle)
}
companion object {
// number of coordinates per vertex in this array
internal var COORDS_PER_VERTEX = 3
}
}
class MyGLRenderer1(val context: Context) : GLSurfaceView.Renderer {
private lateinit var mShape: Shape
@Volatile
var mDeltaX = 0f
@Volatile
var mDeltaY = 0f
@Volatile
var mTotalDeltaX = 0f
@Volatile
var mTotalDeltaY = 0f
private val mMVPMatrix = FloatArray(16)
private val mProjectionMatrix = FloatArray(16)
private val mViewMatrix = FloatArray(16)
private val mRotationMatrix = FloatArray(16)
private val mAccumulatedRotation = FloatArray(16)
private val mCurrentRotation = FloatArray(16)
private val mTemporaryMatrix = FloatArray(16)
override fun onDrawFrame(gl: GL10?) {
// Redraw background color
GLES20.glClear(GLES20.GL_COLOR_BUFFER_BIT or GLES20.GL_DEPTH_BUFFER_BIT)
val scratch = FloatArray(16)
// Create a rotation transformation for the square
Matrix.setIdentityM(mRotationMatrix, 0)
Matrix.setIdentityM(mCurrentRotation, 0)
Matrix.rotateM(mCurrentRotation, 0, mDeltaX, 0.0f, 1.0f, 0.0f)
// Matrix.rotateM(mCurrentRotation, 0, mDeltaY, 1.0f, 0.0f, 0.0f)
// Multiply the current rotation by the accumulated rotation, and then set the accumulated
// rotation to the result.
Matrix.multiplyMM(
mTemporaryMatrix,
0,
mCurrentRotation,
0,
mAccumulatedRotation,
0
)
System.arraycopy(mTemporaryMatrix, 0, mAccumulatedRotation, 0, 16)
// Rotate the cube taking the overall rotation into account.
Matrix.multiplyMM(
mTemporaryMatrix,
0,
mRotationMatrix,
0,
mAccumulatedRotation,
0
)
System.arraycopy(mTemporaryMatrix, 0, mRotationMatrix, 0, 16)
// Set the camera position (View matrix)
Matrix.setLookAtM(mViewMatrix, 0, 2f, 2f, -5f, 0f, 0f, 0f, 0f, 1.0f, 0.0f)
//Calculate the projection and view transformation
Matrix.multiplyMM(mMVPMatrix, 0, mProjectionMatrix, 0, mViewMatrix, 0)
// Combine the rotation matrix with the projection and camera view
// Note that the mMVPMatrix factor *must be first* in order
// for the matrix multiplication product to be correct.
Matrix.multiplyMM(scratch, 0, mMVPMatrix, 0, mRotationMatrix, 0)
gl?.glDisable(GL10.GL_CULL_FACE)
// Draw shape
mShape.draw(scratch)
}
override fun onSurfaceChanged(gl: GL10?, width: Int, height: Int) {
GLES20.glViewport(0, 0, width, height);
val ratio: Float = width.toFloat() / height.toFloat()
// this projection matrix is applied to object coordinates
// in the onDrawFrame() method
Matrix.frustumM(mProjectionMatrix, 0, -ratio, ratio, -1.0f, 1.0f, 3.0f, 7.0f)
}
override fun onSurfaceCreated(gl: GL10?, config: EGLConfig?) {
GLES20.glEnable(GLES20.GL_DEPTH_TEST)
// initialize a square
mShape = Shape(context)
// Initialize the accumulated rotation matrix
Matrix.setIdentityM(mAccumulatedRotation, 0)
}
}
fun loadShader(type: Int, shaderCode: String): Int {
return GLES20.glCreateShader(type).also { shader ->
GLES20.glShaderSource(shader, shaderCode)
GLES20.glCompileShader(shader)
}
}
class ObjLoader(context: Context, file: String) {
var numFaces: Int = 0
var vertices = Vector<Float>()
var normals = Vector<Float>()
var textures = Vector<Float>()
var faces = mutableListOf<ShortArray>()
init {
val reader: BufferedReader
val isr = InputStreamReader(context.assets.open(file))
reader = BufferedReader(isr)
var line = reader.readLine()
// read file until EOF
while (line != null) {
val parts = line.split((" ").toRegex()).dropLastWhile({ it.isEmpty() }).toTypedArray()
when (parts[0]) {
"v" -> {
var part1 = parts[1].toFloat()
var part2 = parts[2].toFloat()
var part3 = parts[3].toFloat()
// vertices
vertices.add(part1)
vertices.add(part2)
vertices.add(part3)
}
"vt" -> {
// textures
textures.add(parts[1].toFloat())
textures.add(parts[2].toFloat())
}
"vn" -> {
// normals
normals.add(parts[1].toFloat())
normals.add(parts[2].toFloat())
normals.add(parts[3].toFloat())
}
"f" -> {
// faces: vertex/texture/normal
faces.add(shortArrayOf(parts[1].toShort(), parts[2].toShort(), parts[3].toShort()))
println("dbg: points are "+ parts[1]+" "+parts[2]+" "+parts[3])
}
}
line = reader.readLine()
}
numFaces = faces.size
}}
The shape produced can be seen in the following screenshots. On the black surface it also looks as if some sort of z-fighting is taking place: the black triangle flickers red and yellow.
Sometimes the following shapes are produced, flickering in and out of existence, in different colours:
Any help is much appreciated, thanks in advance.
Edit:
I have managed to make the vertices plot correctly thanks to the answer below; however, the flickering is still happening. I really appreciate the help.
Array indices start at 0, but Wavefront (.obj) face indices start at 1, so subtract 1 from each index when parsing a face. Change
faces.add(shortArrayOf(parts[1].toShort(), parts[2].toShort(), parts[3].toShort()))
to
faces.add(shortArrayOf(
    (parts[1].toInt() - 1).toShort(),
    (parts[2].toInt() - 1).toShort(),
    (parts[3].toInt() - 1).toShort()))

Plotting a discrete-time signal shows amplitude modulation

I'm trying to render a simple discrete-time signal using a canvas element. However, the representation seems to be inaccurate: as you can see in the code snippet, the signal appears to be amplitude modulated once the frequency rises above a certain threshold, even though it is still well below the Nyquist limit of 50 Hz (assuming a sampling rate of 100 Hz in this example).
For very low frequencies like 5Hz it looks perfectly fine.
How would I go about rendering this properly? And does it work for more complex signals (say, the waveform of a song)?
window.addEventListener('load', () => {
const canvas = document.querySelector('canvas');
const frequencyElem = document.querySelector('#frequency');
const ctx = canvas.getContext('2d');
const renderFn = t => {
const signal = new Array(100);
const sineOfT = Math.sin(t / 1000 / 8 * Math.PI * 2) * 0.5 + 0.5;
const frequency = sineOfT * 20 + 3;
for (let i = 0; i < signal.length; i++) {
signal[i] = Math.sin(i / signal.length * Math.PI * 2 * frequency);
}
frequencyElem.innerText = `${frequency.toFixed(3)}Hz`
render(ctx, signal);
requestAnimationFrame(renderFn);
};
requestAnimationFrame(renderFn);
});
function render(ctx, signal) {
const w = ctx.canvas.width;
const h = ctx.canvas.height;
ctx.clearRect(0, 0, w, h);
ctx.strokeStyle = 'red';
ctx.beginPath();
signal.forEach((value, i) => {
const x = i / (signal.length - 1) * w;
const y = h - (value + 1) / 2 * h;
if (i === 0) {
ctx.moveTo(x, y);
} else {
ctx.lineTo(x, y);
}
});
ctx.stroke();
}
@media (prefers-color-scheme: dark) {
body {
background-color: #333;
color: #f6f6f6;
}
}
<canvas></canvas>
<br/>
Frequency: <span id="frequency"></span>
It looks right to me. At higher frequencies, when the peak falls between two samples, the sampled points can be a lot lower than the peak.
If the signal only has frequencies < Nyquist, then the signal can be reconstructed from its samples. That doesn't mean that the samples look like the signal.
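As a rough worked example (assuming the 100 Hz sampling rate from the question): at 23 Hz the phase advances by 2*pi*23/100 ≈ 1.45 rad per sample, so the nearest sample can land up to ≈ 0.72 rad away from a true peak, where the sine has already fallen to cos(0.72) ≈ 0.75 of its amplitude. As the samples drift in and out of phase with the peaks, the height of the drawn peaks rises and falls, which is the apparent amplitude modulation you are seeing.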
As long as your signal is oversampled by about 2x or more, you can draw it pretty accurately by using cubic interpolation between the sample points. See, for example, Catmull-Rom interpolation here: https://en.wikipedia.org/wiki/Cubic_Hermite_spline
You can use the bezierCurveTo method in HTML Canvas to draw these interpolated curves. If you need to use lines, then you should find any maximum or minimum points that occur between samples and include those in your path.
I've edited your snippet to use the bezierCurveTo method with Catmull-Rom interpolation below:
window.addEventListener('load', () => {
const canvas = document.querySelector('canvas');
const frequencyElem = document.querySelector('#frequency');
const ctx = canvas.getContext('2d');
const renderFn = t => {
const signal = new Array(100);
const sineOfT = Math.sin(t / 1000 / 8 * Math.PI * 2) * 0.5 + 0.5;
const frequency = sineOfT * 20 + 3;
for (let i = 0; i < signal.length; i++) {
signal[i] = Math.sin(i / signal.length * Math.PI * 2 * frequency);
}
frequencyElem.innerText = `${frequency.toFixed(3)}Hz`
render(ctx, signal);
requestAnimationFrame(renderFn);
};
requestAnimationFrame(renderFn);
});
function render(ctx, signal) {
const w = ctx.canvas.width;
const h = ctx.canvas.height;
ctx.clearRect(0, 0, w, h);
ctx.strokeStyle = 'red';
ctx.beginPath();
const dx = w/(signal.length - 1);
const dy = -(h-2)/2.0;
const c = 1.0/2.0;
for (let i=0; i < signal.length-1; ++i) {
const x0 = i * dx;
const y0 = h*0.5 + signal[i]*dy;
const x3 = x0 + dx;
const y3 = h*0.5 + signal[i+1]*dy;
let x1,y1,x2,y2;
if (i>0) {
x1 = x0 + dx*c;
y1 = y0 + (signal[i+1] - signal[i-1])*dy*c/2;
} else {
x1 = x0;
y1 = y0;
ctx.moveTo(x0, y0);
}
if (i < signal.length-2) {
x2 = x3 - dx*c;
y2 = y3 - (signal[i+2] - signal[i])*dy*c/2;
} else {
x2 = x3;
y2 = y3;
}
ctx.bezierCurveTo(x1,y1,x2,y2,x3,y3);
}
ctx.stroke();
}
@media (prefers-color-scheme: dark) {
body {
background-color: #333;
color: #f6f6f6;
}
}
<canvas></canvas>
<br/>
Frequency: <span id="frequency"></span>

Flip shape with image attached using RaphaelJS

I managed to create a hexagon with Raphael, gave it a stroke and a fill image, and then (with help from Stack Overflow) managed to create a flip animation on the hexagon.
But I am facing a problem: when the hexagon flips, the background image does not flip with it. How can I make the hexagon shape flip and, as it is flipping, show the background image flipping too?
Here is what I currently have:
function polygon(x, y, size, sides, rotate) {
var self = this;
self.centrePoint = [x,y];
self.size = size;
self.sides = sides;
self.rotated = rotate;
self.sizeMultiplier = 50;
self.points = [];
for (i = 0; i < sides; i++) {
self.points.push([(x + (self.size * self.sizeMultiplier) * (rotate ? Math.sin(2 * 3.14159265 * i / sides) : Math.cos(2 * 3.14159265 * i / sides))), (y + (self.size * self.sizeMultiplier) * (rotate ? Math.cos(2 * 3.14159265 * i / sides) : Math.sin(2 * 3.14159265 * i / sides)))]);
}
self.svgString = 'M' + self.points.join(' ') + ' L Z';
}
$(document).ready(function() {
var paper = Raphael(0, 0, 450, 450);
var path1 = new polygon(100, 100, 2, 6, false);
var hex1 = paper.path(path1.svgString);
hex1.node.id = "hex1";
hex1.attr("fill", "url('http://i49.tinypic.com/23ma7pt.jpg')");
hex1.attr("stroke", "#aceace");
/* flip animation */
hex1.click(function() {
hex1.animate({transform: "S0,1"},500,'easeIn', function()
{
hex1.attr("fill","url('http://i49.tinypic.com/23ma7pt.jpg')");
hex1.animate({transform: "S-1,1"},500,'easeOut');
});
});
});

Distance between two rectangles

How can I find the distance between two rectangles? Intersecting rectangles should return a distance of 0.
Here's a quick function for calculating the distance between two CGRects, returning the result as a CGSize:
CGSize CGSizeDistanceBetweenRects(CGRect rect1, CGRect rect2)
{
if (CGRectIntersectsRect(rect1, rect2))
{
return CGSizeMake(0, 0);
}
CGRect mostLeft = rect1.origin.x < rect2.origin.x ? rect1 : rect2;
CGRect mostRight = rect2.origin.x < rect1.origin.x ? rect1 : rect2;
CGFloat xDifference = mostLeft.origin.x == mostRight.origin.x ? 0 : mostRight.origin.x - (mostLeft.origin.x + mostLeft.size.width);
xDifference = MAX(0, xDifference);
CGRect upper = rect1.origin.y < rect2.origin.y ? rect1 : rect2;
CGRect lower = rect2.origin.y < rect1.origin.y ? rect1 : rect2;
CGFloat yDifference = upper.origin.y == lower.origin.y ? 0 : lower.origin.y - (upper.origin.y + upper.size.height);
yDifference = MAX(0, yDifference);
return CGSizeMake(xDifference, yDifference);
}
On a slightly related note, here's how to compute the distance between the centers of two given CGRects:
CGFloat CGRectGetDistanceBetweenCenters( CGRect rect1, CGRect rect2 )
{
CGPoint center1 = CGPointMake( CGRectGetMidX( rect1 ), CGRectGetMidY( rect1 ) );
CGPoint center2 = CGPointMake( CGRectGetMidX( rect2 ), CGRectGetMidY( rect2 ) );
CGFloat horizontalDistance = ( center2.x - center1.x );
CGFloat verticalDistance = ( center2.y - center1.y );
CGFloat distance = sqrt( ( horizontalDistance * horizontalDistance ) + ( verticalDistance * verticalDistance ) );
return distance;
}
Here's a Swift version of the accepted answer:
extension CGRect {
func distance(from rect: CGRect) -> CGSize {
if intersects(rect) {
return CGSize(width: 0, height: 0)
}
let mostLeft = origin.x < rect.origin.x ? self : rect
let mostRight = rect.origin.x < self.origin.x ? self : rect
var xDifference = mostLeft.origin.x == mostRight.origin.x ? 0 : mostRight.origin.x - (mostLeft.origin.x + mostLeft.size.width)
xDifference = CGFloat(max(0, xDifference))
let upper = self.origin.y < rect.origin.y ? self : rect
let lower = rect.origin.y < self.origin.y ? self : rect
var yDifference = upper.origin.y == lower.origin.y ? 0 : lower.origin.y - (upper.origin.y + upper.size.height)
yDifference = CGFloat(max(0, yDifference))
return CGSize(width: xDifference, height: yDifference)
}
}
