I'm trying to use the match_template function from the ArrayFire library, but I don't know how to find the X and Y coordinates of the best matching value.
I was previously using the imageproc library for this, which has a find_extremes function that returns the coordinates to me. How would you do the same using the ArrayFire lib?
My example using imageproc
let template = image::open("connect.png").unwrap().to_luma8();
let screenshot = image::open("screenshot.png").unwrap().to_luma8();
let matching_probability = imageproc::template_matching::match_template(&screenshot, &template, MatchTemplateMethod::CrossCorrelationNormalized);
let positions = find_extremes(&matching_probability);
println!("{:?}", positions);
Extremes { max_value: 0.9998113, min_value: 0.42247093,
max_value_location: (843, 696), min_value_location: (657, 832) }
My example using ArrayFire
let template: Array<u8> = arrayfire::load_image(String::from("connect.png"), true);
let screenshot: Array<u8> = arrayfire::load_image(String::from("screenshot.png"), true);
let template_gray = rgb2gray(&template, 0.2126, 0.7152, 0.0722);
let screen_gray = rgb2gray(&screenshot, 0.2126, 0.7152, 0.0722);
let matching_probability = arrayfire::match_template(&screen_gray, &template_gray, arrayfire::MatchType::LSAD);
af_print!("{:?}", matching_probability);
139569.0469 140099.2500 139869.8594 140015.7969 140680.9844 141952.5781 142602.7344 142870.7188...
From here I don't have any idea how to get the best matching pixel coordinates.
ArrayFire doesn't provide an "extremum" function, but separate min and max families of functions.
The ones that provide index information are prefixed with i.
imin_all and imax_all return the min and max values with their indexes respectively, wrapped in a tuple.
You can derive the pixel position from the value index and the array dimensions, knowing that ArrayFire is column major.
let template: Array<u8> = arrayfire::load_image(String::from("connect.png"), true);
let screenshot: Array<u8> = arrayfire::load_image(String::from("screenshot.png"), true);
let template_gray = rgb2gray(&template, 0.2126, 0.7152, 0.0722);
let screen_gray = rgb2gray(&screenshot, 0.2126, 0.7152, 0.0722);
let matching_probability = arrayfire::match_template(&screen_gray, &template_gray, arrayfire::MatchType::LSAD);
let (min, _, min_idx) = imin_all(&matching_probability);
let (max, _, max_idx) = imax_all(&matching_probability);
let dims = matching_probability.dims();
let [_, height, _, _] = dims.get();
let px_x_min = min_idx as u64 / height;
let px_y_min = min_idx as u64 % height;
let px_x_max = max_idx as u64 / height;
let px_y_max = max_idx as u64 % height;
af_print!("{:?}", matching_probability);
println!("Minimum value: {} is at pixel ({},{}).",min, px_x_min, px_y_min);
println!("Maximum value: {} is at pixel ({},{}).", max, px_x_max, px_y_max);
I would like to remove the selected color from my UISegmentedControl. I know tintColor can do this, but that also removes the font color with it. Using kCTForegroundColorAttributeName also removes both.
Side note: I made a UIView and placed it above the selected segment to show the selected state. I thought this would look better. I'm trying to branch out and make my own custom controls.
public let topLine = UIView()
override func awakeFromNib() {
    super.awakeFromNib()
    self.removeBorders()
    setFont()
    addTopLine()
}

func setFont() {
    let font = UIFont(name: FontTypes.avenirNextUltraLight, size: 22.0)!
    let textColor = UIColor.MyColors.flatWhite
    let attribute = [kCTFontAttributeName: font]
    self.setTitleTextAttributes(attribute, for: .normal)
}

func addTopLine() {
    topLine.backgroundColor = UIColor.MyColors.flatWhite
    let frame = CGRect(x: 7,
                       y: -5,
                       width: Int(self.frame.size.width)/2,
                       height: 2)
    topLine.frame = frame
    self.addSubview(topLine)
}

struct FontTypes {
    static let avenirNextRegular = "AvenirNext-Regular"
    static let avenirLight = "Avenir-Light"
    static let avenirNextUltraLight = "AvenirNext-UltraLight"
}
tintColor is attached to:
the background colour of the selected segment,
the text colour of the unselected segments, and
the border colour of the UISegmentedControl.
So, if you are going to change tintColor to white, then the background colour and the text colour are both gone.
You need to set the selected/unselected text attributes like below:
mySegment.tintColor = .white
let selectedAtrribute = [NSAttributedStringKey.foregroundColor: UIColor.red, NSAttributedStringKey.font: UIFont.systemFont(ofSize: 16)]
mySegment.setTitleTextAttributes(selectedAtrribute as [NSObject : AnyObject], for: UIControlState.selected)
let unselected = [NSAttributedStringKey.foregroundColor: UIColor.black, NSAttributedStringKey.font: UIFont.systemFont(ofSize: 16)]
mySegment.setTitleTextAttributes(unselected as [NSObject : AnyObject], for: UIControlState.normal)
This snippet can be used for drawing CGGlyphs with a CGContext.
//drawing
let coreGraphicsFont = CTFontCopyGraphicsFont(coreTextFont, nil)
CGContextSetFont(context, coreGraphicsFont);
CGContextSetFontSize(context, CTFontGetSize(coreTextFont))
CGContextSetFillColorWithColor(context, Color.blueColor().CGColor)
CGContextShowGlyphsAtPositions(context, glyphs, positions, length)
But how do I obtain the CGGlyphs from a Swift string which contains emoji symbols like flags or accented characters?
let string = "swift: \u{1F496} \u{65}\u{301} \u{E9}\u{20DD} \u{1F1FA}\u{1F1F8}"
Neither of these approaches shows the special characters, even though they are correctly printed to the console. Note that this first approach returns NSGlyph, but CGGlyphs are required for drawing.
var progress = CGPointZero
for character in string.characters
{
    let glyph = font.glyphWithName(String(character))
    glyphs.append(CGGlyph(glyph))
    let advancement = font.advancementForGlyph(glyph)
    positions.append(progress)
    progress.x += advancement.width
}
or this second approach which requires casting to NSString:
var buffer = Array<unichar>(count: length, repeatedValue: 0)
let range = NSRange(location: 0, length: length)
(string as NSString).getCharacters(&buffer, range: range)
glyphs = Array<CGGlyph>(count: length, repeatedValue: 0)
CTFontGetGlyphsForCharacters(coreTextFont, &buffer, &glyphs, length)
//glyph positions
advances = Array<CGSize>(count: length, repeatedValue: CGSize.zero)
CTFontGetAdvancesForGlyphs(ctFont, CTFontOrientation.Default, glyphs, &advances, length)
positions = []
var progress = CGPointZero
for advance in advances
{
    positions.append(progress)
    progress.x += advance.width
}
Some of the characters are drawn as empty boxes with either approach. Kinda stuck here, hoping you can help.
Edit:
Using CTFontDrawGlyphs renders the glyphs correctly, but setting the font, size and text matrix directly before calling CGContextShowGlyphsAtPositions draws nothing. I find that rather odd.
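For reference, a minimal sketch of the CTFontDrawGlyphs call that does render (in current Swift syntax; it assumes a CGContext named context plus the glyphs and positions arrays built above, and UIColor/Helvetica are just placeholders):
import CoreText
import UIKit

// CTFontDrawGlyphs takes the font explicitly, so no CGContextSetFont /
// CGContextSetFontSize calls are needed before drawing.
let ctFont = CTFontCreateWithName("Helvetica" as CFString, 24, nil)
context.setFillColor(UIColor.blue.cgColor)
CTFontDrawGlyphs(ctFont, glyphs, positions, glyphs.count, context)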
If you generate glyphs yourself, you also need to perform font substitution yourself. When you use Core Text or TextKit to lay out and draw the text, they perform font substitution for you. For example:
let richText = NSAttributedString(string: "Hello😀→")
let line = CTLineCreateWithAttributedString(richText)
print(line)
Output:
<CTLine: 0x7fa349505f00>{run count = 3, string range = (0, 8), width = 55.3457, A/D/L = 15/4.6875/0, glyph count = 7, runs = (
<CTRun: 0x7fa34969f600>{string range = (0, 5), string = "Hello", attributes = <CFBasicHash 0x7fa3496902d0 [0x10e85a7b0]>{type = mutable dict, count = 1,
entries =>
2 : <CFString 0x1153bb720 [0x10e85a7b0]>{contents = "NSFont"} = <CTFont: 0x7fa3496182f0>{name = Helvetica, size = 12.000000, matrix = 0x0, descriptor = <CTFontDescriptor: 0x7fa34968f860>{attributes = <CFBasicHash 0x7fa34968f8b0 [0x10e85a7b0]>{type = mutable dict, count = 1,
entries =>
2 : <CFString 0x1153c16c0 [0x10e85a7b0]>{contents = "NSFontNameAttribute"} = <CFString 0x1153b4700 [0x10e85a7b0]>{contents = "Helvetica"}
}
>}}
}
}
<CTRun: 0x7fa3496cde40>{string range = (5, 2), string = "\U0001F600", attributes = <CFBasicHash 0x7fa34b11a150 [0x10e85a7b0]>{type = mutable dict, count = 1,
entries =>
2 : <CFString 0x1153bb720 [0x10e85a7b0]>{contents = "NSFont"} = <CTFont: 0x7fa3496c3eb0>{name = AppleColorEmoji, size = 12.000000, matrix = 0x0, descriptor = <CTFontDescriptor: 0x7fa3496a3c30>{attributes = <CFBasicHash 0x7fa3496a3420 [0x10e85a7b0]>{type = mutable dict, count = 1,
entries =>
2 : <CFString 0x1153c16c0 [0x10e85a7b0]>{contents = "NSFontNameAttribute"} = <CFString 0x11cf63bb0 [0x10e85a7b0]>{contents = "AppleColorEmoji"}
}
>}}
}
}
<CTRun: 0x7fa3496cf3e0>{string range = (7, 1), string = "\u2192", attributes = <CFBasicHash 0x7fa34b10ed00 [0x10e85a7b0]>{type = mutable dict, count = 1,
entries =>
2 : <CFString 0x1153bb720 [0x10e85a7b0]>{contents = "NSFont"} = <CTFont: 0x7fa3496cf2c0>{name = PingFangSC-Regular, size = 12.000000, matrix = 0x0, descriptor = <CTFontDescriptor: 0x7fa3496a45a0>{attributes = <CFBasicHash 0x7fa3496a5660 [0x10e85a7b0]>{type = mutable dict, count = 1,
entries =>
2 : <CFString 0x1153c16c0 [0x10e85a7b0]>{contents = "NSFontNameAttribute"} = <CFString 0x11cf63230 [0x10e85a7b0]>{contents = "PingFangSC-Regular"}
}
>}}
}
}
)
}
We can see here that Core Text recognized that the default font (Helvetica) doesn't have glyphs for the emoji or the arrow, so it split the line into three runs, each with the needed font.
The Core Text Programming Guide says this:
Most of the time you should just use a CTLine object to get this information because one font may not encode the entire string. In addition, simple character-to-glyph mapping will not get the correct appearance for complex scripts. This simple glyph mapping may be appropriate if you are trying to display specific Unicode characters for a font.
Your best bet is to use CTLineCreateWithAttributedString to generate glyphs and choose fonts. Then, if you want to adjust the position of the glyphs, use CTLineGetGlyphRuns to get the runs out of the line, and then ask the run for the glyphs, the font, and whatever else you need.
If you want to handle font substitution yourself, I think you're going to want to look into "font cascading".
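For illustration, a rough sketch of that run-by-run approach (assuming a CGContext named context is already configured for text drawing):
import CoreText
import UIKit

let richText = NSAttributedString(string: "Hello😀→")
let line = CTLineCreateWithAttributedString(richText)

// Each run uses a single font, chosen by Core Text's font substitution.
for run in CTLineGetGlyphRuns(line) as! [CTRun] {
    let glyphCount = CTRunGetGlyphCount(run)
    var glyphs = [CGGlyph](repeating: 0, count: glyphCount)
    var positions = [CGPoint](repeating: .zero, count: glyphCount)
    CTRunGetGlyphs(run, CFRangeMake(0, 0), &glyphs)       // length 0 = the whole run
    CTRunGetPositions(run, CFRangeMake(0, 0), &positions)

    // The (possibly substituted) font is stored in the run's attributes.
    let attributes = CTRunGetAttributes(run) as! [NSAttributedString.Key: Any]
    let runFont = attributes[.font] as! CTFont

    // Adjust the positions here if needed, then draw the glyphs with that font.
    CTFontDrawGlyphs(runFont, glyphs, positions, glyphCount, context)
}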
I'm coloring each side of a box with different color using materials property. The code works and the box is beautifully colored. The documentation states the following:
For geometries with multiple elements, you can use the materials
property to attach different materials to each element.
I'm testing the number of geometry elements of a box (cube). The result is 1. I'm a little confused about the meaning of a geometry element. Why can I use the materials property to attach different materials if the box only has 1 geometry element?
//creating a box
let box = SCNBox(width: 40, height: 40, length: 40, chamferRadius: 0)
boxNode.geometry = box
scene.rootNode.addChildNode(boxNode)
boxNode.position = SCNVector3Make(0, -90, 0)
boxNode.rotation = SCNVector4Make(1, 1, 0, 1)
//setting up materials
let mat1 = SCNMaterial()
mat1.diffuse.contents = UIColor.redColor()
let mat2 = SCNMaterial()
mat2.diffuse.contents = UIColor.blueColor()
let mat3 = SCNMaterial()
mat3.diffuse.contents = UIColor.greenColor()
let mat4 = SCNMaterial()
mat4.diffuse.contents = UIColor.yellowColor()
let mat5 = SCNMaterial()
mat5.diffuse.contents = UIColor.blackColor()
let mat6 = SCNMaterial()
mat6.diffuse.contents = UIColor.orangeColor()
box.materials = [mat1,mat2,mat3,mat4,mat5,mat6]
//checking the number of geometry elements
let i = box.geometryElementCount
println("Number of geometry elements: \(i)")
animateBox()
The documentation for SCNGeometry / SCNMaterial is correct.
But SCNBox will automatically generate from 1 to 6 geometry elements depending on the number of materials you assign to it. And this is done just before rendering, so depending on when you ask for the number of geometry elements, you may get different results.
Only SCNBox does this. Other primitives and other geometries don't have such a dynamic number of geometry elements.
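For illustration, a small sketch of how the timing shows up, using the box and materials from the question above:
let box = SCNBox(width: 40, height: 40, length: 40, chamferRadius: 0)
print(box.geometryElementCount)   // typically 1 right after creation

box.materials = [mat1, mat2, mat3, mat4, mat5, mat6]
print(box.geometryElementCount)   // may still report 1 here...

// ...but once the box has actually been prepared for rendering (for example by
// an SCNView displaying it, or via SCNSceneRenderer's prepare(_:shouldAbortBlock:)),
// it is expected to report 6 elements, one per material.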
I'm having a problem with understanding scenekit geometery.
I have the default cube from Blender, and I export as collada (DAE), and can bring it into scenekit.... all good.
Now I want to see the vertices for the cube. In the DAE I can see the following for the "Cube-mesh-positions-array",
"1 1 -1 1 -1 -1 -1 -0.9999998 -1 -0.9999997 1 -1 1 0.9999995 1 0.9999994 -1.000001 1 -1 -0.9999997 1 -1 1 1"
Now what I'd like to do in scenekit, is get the vertices back, using something like the following:
SCNGeometrySource *vertexBuffer = [[cubeNode.geometry geometrySourcesForSemantic:SCNGeometrySourceSemanticVertex] objectAtIndex:0];
If I process the vertexBuffer (I've tried numerous methods of looking at the data), it doesn't seem correct.
Can somebody explain what "SCNGeometrySourceSemanticVertex" is giving me, and how to extract the vertex data properly? What I'd like to see is:
X = "float"
Y = "float"
Z = "float"
I was also investigating the following class/method, which looked promising (some good data values here), but the data from gmpe appears empty. Is anybody able to explain what the data property of "SCNGeometryElement" contains?
SCNGeometryElement *gmpe = [theCurrentNode.geometry geometryElementAtIndex:0];
Thanks, assistance much appreciated,
D
The geometry source
When you call geometrySourcesForSemantic: you are given back an array of SCNGeometrySource objects with the given semantic (in your case, the sources for the vertex data).
This data could have been encoded in many different ways, and multiple sources can use the same data with a different stride and offset. The source itself has a bunch of properties to let you decode the data, for example:
dataStride
dataOffset
vectorCount
componentsPerVector
bytesPerComponent
You can use combinations of these to figure out which parts of the data to read and make vertices out of them.
Decoding
The stride tells you how many bytes you should step to get to the next vector, and the offset tells you how many bytes from the start of that vector you should skip before getting to the relevant parts of the data for that vector. The number of bytes you should read for each vector is componentsPerVector * bytesPerComponent.
Code to read out all the vertices for a single geometry source would look something like this
// Get the vertex sources
NSArray *vertexSources = [geometry geometrySourcesForSemantic:SCNGeometrySourceSemanticVertex];
// Get the first source
SCNGeometrySource *vertexSource = vertexSources[0]; // TODO: Parse all the sources
NSInteger stride = vertexSource.dataStride; // in bytes
NSInteger offset = vertexSource.dataOffset; // in bytes
NSInteger componentsPerVector = vertexSource.componentsPerVector;
NSInteger bytesPerVector = componentsPerVector * vertexSource.bytesPerComponent;
NSInteger vectorCount = vertexSource.vectorCount;
SCNVector3 vertices[vectorCount]; // A new array for vertices
// for each vector, read the bytes
for (NSInteger i=0; i<vectorCount; i++) {
    // Assuming that bytes per component is 4 (a float)
    // If it was 8 then it would be a double (aka CGFloat)
    float vectorData[componentsPerVector];

    // The range of bytes for this vector
    NSRange byteRange = NSMakeRange(i*stride + offset, // Start at current stride + offset
                                    bytesPerVector);   // and read the length of one vector

    // Read into the vector data buffer
    [vertexSource.data getBytes:&vectorData range:byteRange];

    // At this point you can read the data from the float array
    float x = vectorData[0];
    float y = vectorData[1];
    float z = vectorData[2];

    // ... Maybe even save it as an SCNVector3 for later use ...
    vertices[i] = SCNVector3Make(x, y, z);

    // ... or just log it
    NSLog(@"x:%f, y:%f, z:%f", x, y, z);
}
The geometry element
This will give you all the vertices but won't tell you how they are used to construct the geometry. For that you need the geometry element that manages the indices for the vertices.
You can get the number of geometry elements for a piece of geometry from the geometryElementCount property. Then you can get the different elements using geometryElementAtIndex:.
The element can tell you if the vertices are used as individual triangles or as a triangle strip. It also tells you the bytes per index (the indices may have been ints or shorts), which will be necessary to decode its data.
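For example, a rough Swift sketch of reading an element's index data, assuming triangle primitives stored as 16-bit indices:
import SceneKit

func logTriangles(of geometry: SCNGeometry) {
    let element = geometry.geometryElement(at: 0)
    guard element.primitiveType == .triangles, element.bytesPerIndex == 2 else { return }
    element.data.withUnsafeBytes { (raw: UnsafeRawBufferPointer) in
        let indices = raw.bindMemory(to: UInt16.self)
        for t in 0..<element.primitiveCount {
            // Three consecutive indices describe one triangle.
            print("triangle \(t): \(indices[3 * t]), \(indices[3 * t + 1]), \(indices[3 * t + 2])")
        }
    }
}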
Here is an extension method for when the data isn't contiguous (the vector size isn't equal to the stride), which can be the case when the geometry is loaded from a DAE file. It also doesn't use the copyBytes function.
extension SCNGeometry{
    /**
     Get the vertices (3d points coordinates) of the geometry.
     - returns: An array of SCNVector3 containing the vertices of the geometry.
     */
    func vertices() -> [SCNVector3]? {
        let sources = self.sources(for: .vertex)
        guard let source = sources.first else{return nil}
        let stride = source.dataStride / source.bytesPerComponent
        let offset = source.dataOffset / source.bytesPerComponent
        let vectorCount = source.vectorCount
        return source.data.withUnsafeBytes { (buffer : UnsafePointer<Float>) -> [SCNVector3] in
            var result = Array<SCNVector3>()
            for i in 0...vectorCount - 1 {
                let start = i * stride + offset
                let x = buffer[start]
                let y = buffer[start + 1]
                let z = buffer[start + 2]
                result.append(SCNVector3(x, y, z))
            }
            return result
        }
    }
}
The Swift Version
The Objective-C version and this are essentially identical.
let planeSources = _planeNode?.geometry?.geometrySourcesForSemantic(SCNGeometrySourceSemanticVertex)
if let planeSource = planeSources?.first {
    let stride = planeSource.dataStride
    let offset = planeSource.dataOffset
    let componentsPerVector = planeSource.componentsPerVector
    let bytesPerVector = componentsPerVector * planeSource.bytesPerComponent

    let vectors = [SCNVector3](count: planeSource.vectorCount, repeatedValue: SCNVector3Zero)
    let vertices = vectors.enumerate().map({
        (index: Int, element: SCNVector3) -> SCNVector3 in
        var vectorData = [Float](count: componentsPerVector, repeatedValue: 0)
        let byteRange = NSMakeRange(index * stride + offset, bytesPerVector)
        planeSource.data.getBytes(&vectorData, range: byteRange)
        return SCNVector3Make(vectorData[0], vectorData[1], vectorData[2])
    })

    // You have your vertices, now what?
}
Here's a Swift 5.3 version, based on the other answers, that also supports a bytesPerComponent different from 4 (though it's untested for sizes other than 4):
extension SCNGeometrySource {
    var vertices: [SCNVector3] {
        let stride = self.dataStride
        let offset = self.dataOffset
        let componentsPerVector = self.componentsPerVector
        let bytesPerVector = componentsPerVector * self.bytesPerComponent

        func vectorFromData<FloatingPoint: BinaryFloatingPoint>(_ float: FloatingPoint.Type, index: Int) -> SCNVector3 {
            assert(bytesPerComponent == MemoryLayout<FloatingPoint>.size)
            let vectorData = UnsafeMutablePointer<FloatingPoint>.allocate(capacity: componentsPerVector)
            defer {
                vectorData.deallocate()
            }
            let buffer = UnsafeMutableBufferPointer(start: vectorData, count: componentsPerVector)
            let rangeStart = index * stride + offset
            self.data.copyBytes(to: buffer, from: rangeStart..<(rangeStart + bytesPerVector))
            return SCNVector3(
                CGFloat.NativeType(vectorData[0]),
                CGFloat.NativeType(vectorData[1]),
                CGFloat.NativeType(vectorData[2])
            )
        }

        let vectors = [SCNVector3](repeating: SCNVector3Zero, count: self.vectorCount)
        return vectors.indices.map { index -> SCNVector3 in
            switch bytesPerComponent {
            case 4:
                return vectorFromData(Float32.self, index: index)
            case 8:
                return vectorFromData(Float64.self, index: index)
            case 16:
                return vectorFromData(Float80.self, index: index)
            default:
                return SCNVector3Zero
            }
        }
    }
}
// Call this function: _ = vertices(node: mySceneView.scene!.rootNode)
// I got the volume in Swift 4.2 with this function:
func vertices(node:SCNNode) -> [SCNVector3] {
    let planeSources1 = node.childNodes.first?.geometry
    let planeSources = planeSources1?.sources(for: SCNGeometrySource.Semantic.vertex)
    if let planeSource = planeSources?.first {
        let stride = planeSource.dataStride
        let offset = planeSource.dataOffset
        let componentsPerVector = planeSource.componentsPerVector
        let bytesPerVector = componentsPerVector * planeSource.bytesPerComponent

        let vectors = [SCNVector3](repeating: SCNVector3Zero, count: planeSource.vectorCount)
        let vertices = vectors.enumerated().map({
            (index: Int, element: SCNVector3) -> SCNVector3 in
            let vectorData = UnsafeMutablePointer<Float>.allocate(capacity: componentsPerVector)
            let nsByteRange = NSMakeRange(index * stride + offset, bytesPerVector)
            let byteRange = Range(nsByteRange)
            let buffer = UnsafeMutableBufferPointer(start: vectorData, count: componentsPerVector)
            planeSource.data.copyBytes(to: buffer, from: byteRange)
            return SCNVector3Make(buffer[0], buffer[1], buffer[2])
        })

        var totalVolume = Float()
        var x1 = Float(),x2 = Float(),x3 = Float(),y1 = Float(),y2 = Float(),y3 = Float(),z1 = Float(),z2 = Float(),z3 = Float()
        var i = 0
        while i < vertices.count{
            x1 = vertices[i].x;
            y1 = vertices[i].y;
            z1 = vertices[i].z;
            x2 = vertices[i + 1].x;
            y2 = vertices[i + 1].y;
            z2 = vertices[i + 1].z;
            x3 = vertices[i + 2].x;
            y3 = vertices[i + 2].y;
            z3 = vertices[i + 2].z;
            totalVolume +=
                (-x3 * y2 * z1 +
                  x2 * y3 * z1 +
                  x3 * y1 * z2 -
                  x1 * y3 * z2 -
                  x2 * y1 * z3 +
                  x1 * y2 * z3);
            i = i + 3
        }
        totalVolume = totalVolume / 6;
        volume = "\(totalVolume)"
        print("Volume Volume Volume Volume Volume Volume Volume :\(totalVolume)")
        lbl_valume.text = "\(clean(String(totalVolume))) cubic mm"
    }
    return []
}
With swift 3.1 you can extract vertices from SCNGeometry in a much faster and shorter way:
func vertices(node:SCNNode) -> [SCNVector3] {
    let vertexSources = node.geometry?.sources(for: SCNGeometrySource.Semantic.vertex)
    if let vertexSource = vertexSources?.first {
        let count = vertexSource.data.count / MemoryLayout<SCNVector3>.size
        return vertexSource.data.withUnsafeBytes {
            [SCNVector3](UnsafeBufferPointer<SCNVector3>(start: $0, count: count))
        }
    }
    return []
}
...
Today I noticed that this doesn't work correctly on macOS. This happens because on iOS SCNVector3 is built with Float, while on macOS it is built with CGFloat (only Apple could make something this simple such a pain). So I had to tweak the code for macOS, but it won't work as fast as on iOS.
func vertices() -> [SCNVector3] {
    let vertexSources = sources(for: SCNGeometrySource.Semantic.vertex)
    if let vertexSource = vertexSources.first {
        let count = vertexSource.vectorCount * 3
        let values = vertexSource.data.withUnsafeBytes {
            [Float](UnsafeBufferPointer<Float>(start: $0, count: count))
        }
        var vectors = [SCNVector3]()
        for i in 0..<vertexSource.vectorCount {
            let offset = i * 3
            vectors.append(SCNVector3Make(
                CGFloat(values[offset]),
                CGFloat(values[offset + 1]),
                CGFloat(values[offset + 2])
            ))
        }
        return vectors
    }
    return []
}
For someone like me who wants to extract face data from an SCNGeometryElement.
Note that I only consider the triangle primitive type and index sizes of 2 or 4 bytes.
void extractInfoFromGeoElement(NSString* scenePath){
    NSURL *url = [NSURL fileURLWithPath:scenePath];
    SCNScene *scene = [SCNScene sceneWithURL:url options:nil error:nil];
    SCNGeometry *geo = scene.rootNode.childNodes.firstObject.geometry;
    SCNGeometryElement *elem = geo.geometryElements.firstObject;
    NSInteger componentOfPrimitive = (elem.primitiveType == SCNGeometryPrimitiveTypeTriangles) ? 3 : 0;
    if (!componentOfPrimitive) {//TODO: Code deals with triangle primitive only
        return;
    }
    for (int i=0; i<elem.primitiveCount; i++) {
        void *idxsPtr = NULL;
        int stride = 3*i;
        if (elem.bytesPerIndex == 2) {
            short *idxsShort = malloc(sizeof(short)*3);
            idxsPtr = idxsShort;
        }else if (elem.bytesPerIndex == 4){
            int *idxsInt = malloc(sizeof(int)*3);
            idxsPtr = idxsInt;
        }else{
            NSLog(@"unknown index type");
            return;
        }
        [elem.data getBytes:idxsPtr range:NSMakeRange(stride*elem.bytesPerIndex, elem.bytesPerIndex*3)];
        if (elem.bytesPerIndex == 2) {
            NSLog(@"triangle %d : %d, %d, %d\n",i,*(short*)idxsPtr,*((short*)idxsPtr+1),*((short*)idxsPtr+2));
        }else{
            NSLog(@"triangle %d : %d, %d, %d\n",i,*(int*)idxsPtr,*((int*)idxsPtr+1),*((int*)idxsPtr+2));
        }
        //Free
        free(idxsPtr);
    }
}
The Swift 3 version:
// `plane` is some kind of `SCNGeometry`
let planeSources = plane.geometry.sources(for: SCNGeometrySource.Semantic.vertex)
if let planeSource = planeSources.first {
    let stride = planeSource.dataStride
    let offset = planeSource.dataOffset
    let componentsPerVector = planeSource.componentsPerVector
    let bytesPerVector = componentsPerVector * planeSource.bytesPerComponent

    let vectors = [SCNVector3](repeating: SCNVector3Zero, count: planeSource.vectorCount)
    let vertices = vectors.enumerated().map({
        (index: Int, element: SCNVector3) -> SCNVector3 in
        let vectorData = UnsafeMutablePointer<Float>.allocate(capacity: componentsPerVector)
        let nsByteRange = NSMakeRange(index * stride + offset, bytesPerVector)
        let byteRange = Range(nsByteRange)
        let buffer = UnsafeMutableBufferPointer(start: vectorData, count: componentsPerVector)
        planeSource.data.copyBytes(to: buffer, from: byteRange)
        return SCNVector3Make(buffer[0], buffer[1], buffer[2])
    })

    // Use `vertices` here: vertices[0].x, vertices[0].y, vertices[0].z
}
OK, here is another Swift 5.5 version based on Oliver's answer.
extension SCNGeometry{
/**
Get the vertices (3d points coordinates) of the geometry.
- returns: An array of SCNVector3 containing the vertices of the geometry.
*/
func vertices() -> [SCNVector3]? {
let sources = self.sources(for: .vertex)
guard let source = sources.first else{return nil}
let stride = source.dataStride / source.bytesPerComponent
let offset = source.dataOffset / source.bytesPerComponent
let vectorCount = source.vectorCount
return source.data.withUnsafeBytes { dataBytes in
let buffer: UnsafePointer<Float> = dataBytes.baseAddress!.assumingMemoryBound(to: Float.self)
var result = Array<SCNVector3>()
for i in 0...vectorCount - 1 {
let start = i * stride + offset
let x = buffer[start]
let y = buffer[start + 1]
let z = buffer[start + 2]
result.append(SCNVector3(x, y, z))
}
return result
}
}
}
To use it, you simply create a standard shape from which you can extract the vertices and rebuild the indices.
let g = SCNSphere(radius: 1)
let newNode = SCNNode(geometry: g)
let vectors = newNode.geometry?.vertices()
var indices:[Int32] = []
for i in stride(from: 0, to: vectors!.count, by: 1) {
    indices.append(Int32(i))
    indices.append(Int32(i+1))
}
return self.createGeometry(
    vertices: vectors!, indices: indices,
    primitiveType: SCNGeometryPrimitiveType.line)
The createGeometry extension can be found here
It draws this...