I want to write a Collada 1.4 exporter for exporting a skeleton. I need to extend the Collada format to carry some additional information:
bone tail (relative to joint location)
bone roll (relative to longitudinal axis)
bone connect state (bone tail connected to parent joint)
The best I can think of is to use an <extra> element and add a tool-specific profile, but the documentation is not clear about how to do this exactly.
Here is what I guess is a correct working example:
<node id="Armature" name="Armature" type="NODE">
<matrix sid="transform">
1 0 0 0.1151489
0 1 0 0.01073149
0 0 1 1.730716
0 0 0 1</matrix>
<node id="A" name="A" sid="A" type="JOINT">
<matrix sid="transform">1 0 0 0 0 0 -1 0 0 1 0 0 0 0 0 1</matrix>
<node id="B" name="B" sid="B" type="JOINT">
<matrix sid="transform">
-0.7919466 -0.4411024 0.422196 0
8.9407e-8 0.6914554 0.7224193 1
-0.6105905 0.5721177 -0.5475955 0
0 0 0 1
</matrix>
<extra>
<technique profile="blender">
<!-- =============================== -->
<!-- Bone tail offset from bone head -->
<!-- =============================== -->
<float_array name="tail" sid="tail" count="3">
0.0 0.0 1.0
</float_array>
</technique>
<technique_common/>
</extra>
</node>
</node>
</node>
However, I have some questions:
Is this an acceptable method to provide the bone tail information?
Can I use a technique_common element as an alternative to defining the profile?
Do I have to define an empty technique_common element, or can I skip it?
Is it OK to use sid="tail" for every bone tail in the entire Collada file?
Is there a less verbose way to define the bone tail information?
Can I use the profile in a more specific way? For example:
<node>
  ...
  <extra>
    <technique profile="blender">
      <tail type="float_array" connect="true">0 0 1.0</tail>
      <roll type="float">0</roll>
    </technique>
    <technique_common/>
  </extra>
</node>
I am not sure if I am allowed to define new elements (tail, roll, connect) as shown in the example above. Can I do this?
I am also aware that the importer must know about the extra data (i.e. support the blender profile) to find the bone tail info. So if a tool does not know the blender profile, it won't recognize the additional bone information, but that is OK for me.
I can try to answer your questions.
If you read about the <extra> tag in the Collada specification, you will see that you can write any XML-conformant data under <technique> or <technique_common>. So for your first question: yes, it is acceptable. If you have a schema defined for your XML elements, you can give it with an xmlns attribute on the technique.
Yes, you can use the <technique_common> element, but it is better to give the profile information in your case. I would prefer <technique> with the profile information, as you have already done.
You don't need to put a <technique_common> element as a child of <extra> if you have no information to give there.
A sid should be unique within the document, so reusing sid="tail" everywhere is not OK. You could number the tails consecutively instead.
Bone tail information is more about the semantics. I cannot answer that.
I think the way you did it in the second example is better than the first. But you are the one using it for data exchange, so you should think about how the importer will interpret this data. I cannot help you further without knowing the internal structure of the importer.
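For illustration, here is a minimal sketch in Python (xml.etree) of how an importer that knows the blender profile might pick up the tail/roll/connect data from your second example. The <tail> and <roll> element names and the connect attribute are the ones you proposed, not part of any standard, and namespace handling and error checking are deliberately simplified:

# Sketch: read the blender-profile bone extras proposed above from a .dae file.
# Assumes the <tail>/<roll> elements and "connect" attribute from the question;
# uses the {*} namespace wildcard (Python 3.8+) and does no error checking.
import xml.etree.ElementTree as ET

def read_bone_extras(dae_path):
    bones = {}
    for node in ET.parse(dae_path).iter():
        # only joint nodes carry the extra bone data
        if node.tag.split("}")[-1] != "node" or node.get("type") != "JOINT":
            continue
        extra = node.find("{*}extra")
        if extra is None:
            continue
        for tech in extra.findall("{*}technique"):
            if tech.get("profile") != "blender":
                continue
            tail = tech.find("{*}tail")
            roll = tech.find("{*}roll")
            bones[node.get("id")] = {
                "tail": [float(v) for v in tail.text.split()] if tail is not None else None,
                "connect": tail is not None and tail.get("connect") == "true",
                "roll": float(roll.text) if roll is not None else 0.0,
            }
    return bones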
I've been exploring a Python library for SVG parsing named svgelements, and it uses an unusual concept I can't find in any SVG docs; neither the Dolphin file browser nor Firefox nor GIMP can render SVG files that use it. A z in the path d attribute is parsed as a coordinate and passed to Path to create a curve or line ending at z_point (the end of the last move operation). So z is used with the L, Q, T, C and S operations in place of a coordinate.
Is this something standard for SVG? And why can't many other apps process it?
I've looked at the code for path d parsing:
https://github.com/meerk40t/svgelements/blob/master/svgelements/svgelements.py#L408
There is a part where z is processed as a number.
The library is implementing something that is actually part of the SVG 2 specification: the segment-completing close path operation. There is an (apparently failing) test in the Chromium test suite that exemplifies what is meant. It gives the test path element:
<path d="M 10 10 z m 20 70 h 10 v 10 h -10 l z M 70 30 q 20 0 20 20 t -20 20 t -20 -20 T z" />
To make it clear: since SVG 1.0 the z command closes any path in a straight line. This variant makes it possible to define the closing segment as a curved line.
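If you just want to see what svgelements makes of one of these sub-paths, here is a quick sketch (assuming, as the linked source suggests, that the library's Path class accepts a d-string and iterates over the parsed segments):

# Sketch: let svgelements parse a segment-completing close path and list the
# segments it produces. Path accepting a d-string is an assumption based on
# the library source linked in the question.
from svgelements import Path

p = Path("M 70 30 q 20 0 20 20 t -20 20 t -20 -20 T z")
for segment in p:
    print(segment)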
Unfortunately, that part of the specification looks a bit like a dead end. This issue of the W3C SVG working group from August says:
The spec for the Segment-completing close path operation command was added 4 years ago and hasn't been implemented by any browser yet.
(https://svgwg.org/svg2-draft/paths.html#PathDataClosePathCommand). Currently it only exists in the spec, and as a failing wpt test.
consider removing it from the spec?
So far, it seems, there has been no further discussion.
I've run into a problem rendering the following SVG path with various SVG libraries:
<path d="M19.35 10.04c-.68-3.45-3.71-6.04-7.35-6.04-2.89 0-5.4 1.64-6.65 4.04-3.01.32-5.35 2.87-5.35 5.96 0 3.31 2.69 6 6 6h13c2.76 0 5-2.24 5-5 0-2.64-2.05-4.78-4.65-4.96zm-2.35 2.96l-5 5-5-5h3v-4h4v4h3z"/>
Specifically, you can see something odd about this block:
4.04-3.01.32-5.35
This fixes it:
4.04-3.01+0.32-5.35
... as does this:
4.04-3.01 0.32-5.35
My reading of the SVG spec suggests the original path is invalid, but since the icon comes right out of Google's material design icons (https://github.com/google/material-design-icons) - and there's many similar "errors", I'm a little suspect of my reading of the BNF.
Can anyone offer a second opinion?
4.04-3.01.32-5.35 is valid. The SVG path specification grammar says that we're processing this
curveto-argument comma-wsp? curveto-argument-sequence
The ? after comma-wsp means 0 or 1 of those; in this case we have 0.
Tracing through the BNF, we end up in the part that parses the number before any exponent character, i.e.
digit-sequence? "." digit-sequence
Once we've seen one full stop we can't see another unless we see an exponent, so the second full stop must be part of something else, i.e. the next number.
So the above character sequence corresponds to the values: 4.04 -3.01 .32 -5.35
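You can reproduce that tokenization with a short Python sketch; the regular expression below is my own approximation of the SVG number production, not something taken from a library:

import re

# Approximation of the SVG path grammar's "number": optional sign, then
# digits.digits, .digits or digits, followed by an optional exponent.
NUMBER = re.compile(r'[+-]?(?:\d+\.\d*|\.\d+|\d+)(?:[eE][+-]?\d+)?')

print(NUMBER.findall("4.04-3.01.32-5.35"))
# ['4.04', '-3.01', '.32', '-5.35']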
I'm utilizing the Bravura music font.
Here's its font-face definition:
<font-face
font-family="Bravura"
font-weight="400"
font-stretch="normal"
units-per-em="2048"
panose-1="5 6 0 0 0 0 0 0 0 0"
ascent="1638"
descent="-410"
bbox="-889 -4080 4749 4120"
underline-thickness="102"
underline-position="-102"
unicode-range="U+0020-1D1DD"
/>
I'm trying to wrap my head around font metrics. I've studied the explanation on this site, but I'm still unclear.
My goal is to translate the glyphs into a properly scaled SVG path using an SVG symbol viewBox attribute.
So the EM square (which is an imaginary square enclosing each glyph) is 2048x2048 units (defined by units-per-em). A unit is 1/72 of an inch. My monitor DPI is 96x96
Converting this to pixels: 2048 * 96/72 = 2730 2/3 x 2730 2/3
(Let me know if I'm off here)
So each glyph should natively fit into a 2730 2/3 x 2730 2/3 pixel square?
How do the bounding box numbers fit into this process? Are the bbox units in glyph units as well (1/72 in)?
Should the bbox value be put directly into the viewBox attribute of a symbol?
Do I need to consider the ascent and descent values?
Here is a jsfiddle somewhat demonstrating my issue: http://jsfiddle.net/1wqa384u/5/
Any resources or help appreciated.
The em box encompasses the ascent and descent. Notice that ascent - descent = 1638 - (-410) = 2048.
As for your main question, I think you are confusing yourself a bit. The viewBox tells the browser how to scale the symbol to fit the size specified by the <use> that references it.
So if I understand what you want correctly, your symbol viewBox should just be "0 0 2048 2048".
You should then be able to draw it at, say 12pt, by referencing it like so:
<use xlink:href="#mysymbol" x="100" y="100" width="12pt" height="12pt"/>
You shouldn't have to worry about doing your own DPI conversion.
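If you still want to sanity-check the numbers yourself, the conversion is just a linear scale from font units to the rendered size. A rough Python sketch (the 96 DPI figure is an assumption about your display, and the helper name is made up for illustration):

UNITS_PER_EM = 2048  # from the font-face definition above
DPI = 96             # assumed display resolution

def units_to_px(units, font_size_pt):
    # 1 pt = 1/72 inch, so the font size in pixels is font_size_pt * DPI / 72,
    # and one font unit maps to font_size_px / UNITS_PER_EM pixels.
    font_size_px = font_size_pt * DPI / 72
    return units * font_size_px / UNITS_PER_EM

print(units_to_px(2048, 12))  # the whole em square at 12pt: 16.0 px
print(units_to_px(1638, 12))  # the ascent at 12pt: about 12.8 px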
I am trying to use OSG (OpenSceneGraph) to display some cubes on the screen.
On some runs it works perfectly, but sometimes it does not display anything and just prints this in the virtual console:
CullVisitor::apply(Geode&) detected NaN,
depth=nan, center=(0 0 0),
matrix={
-1 0 0 0
0 0 1 0
0 1 0 0
-nan -nan -nan -nan
}
The reason why it sometimes works and other times doesn't is probably that the cubes are positioned randomly, and some positions apparently do not work.
The question is: what does this mean, and how do I avoid it?
Note: you may be tempted to downvote this question right away, but please note that Google only provides miserably useless results and I see no way of solving this problem other than asking for help.
Did you search your code for the usual list of suspects? See:
http://en.wikipedia.org/wiki/NaN#Operations_generating_NaN
It's also possible you're trying to cull your scene before an object is fully initialized (no position yet) - the fix would be to not add it to your scene until you've initialized it. But we're really just guessing unless you post some of your relevant code.
The point is that the view matrix is not correctly initialized.
Perform a check and, if the view matrix is invalid, replace it with the identity matrix:
// if the view matrix is invalid (NaN), use the identity
osg::ref_ptr<osg::Camera> camera = _viewer->getCamera();
if (camera->getViewMatrix().isNaN())
camera->setViewMatrix(osg::Matrix::identity());
Anyone know how to convert latitude, longitude in degrees to define a BBOX where SRS=EPSG:27700?
I am trying to call a WMS service with a URL like following (not a real link):
http://mysecretmappingserver.com/wms?user=myuser&pwd=mypassword&VERSION=1.1.1&REQUEST=GetMap&LAYERS=ap25cm&STYLES=&SRS=EPSG:27700&BBOX=229096,231675,229296,231875&width=400&height=400
Any language would be fine; C# preferable.
Spacedman has been trying to help me, but I can't seem to get Proj4Net to work for me - all me, I'm sure - but if someone knows either Proj4Net or the math involved, that might be better...
You need an interface to the PROJ.4 projections library in your language of choice. In R, for example, it's in the rgdal package:
Here are some points (1,1 and 2,2 degrees) in epsg:4326:
> pts
SpatialPoints:
coords.x1 coords.x2
[1,] 1 1
[2,] 2 2
Coordinate Reference System (CRS) arguments: +init=epsg:4326
and voila:
> spTransform(pts,CRS("+init=epsg:27700"))
SpatialPoints:
coords.x1 coords.x2
[1,] 734005.9 -5416918
[2,] 845270.7 -5305999
Coordinate Reference System (CRS) arguments: +init=epsg:27700
Proj.4 docs here:
http://trac.osgeo.org/proj/
Since this is OSGB, a better example would probably be in the UK; here's a point in central London:
> pts = SpatialPoints(cbind(-0.109863,51.460852),proj4string=CRS("+init=epsg:4326"))
> spTransform(pts,CRS("+init=epsg:27700"))
SpatialPoints:
coords.x1 coords.x2
[1,] 531407.1 175235.8
Coordinate Reference System (CRS) arguments: +init=epsg:27700
+proj=tmerc +lat_0=49 +lon_0=-2 +k=0.9996012717 +x_0=400000
+y_0=-100000 +ellps=airy +datum=OSGB36 +units=m +no_defs
+towgs84=446.448,-125.157,542.060,0.1502,0.2470,0.8421,-20.4894
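Since any language is fine: the same transformation in Python with pyproj (not part of the original R answer, just an alternative sketch) gives essentially the same numbers:

# Sketch: WGS84 lon/lat to OSGB / British National Grid (EPSG:27700) in Python.
from pyproj import Transformer

transformer = Transformer.from_crs("EPSG:4326", "EPSG:27700", always_xy=True)
easting, northing = transformer.transform(-0.109863, 51.460852)
print(easting, northing)  # roughly 531407, 175236, matching the R output above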
I recommend ogr2ogr, which among other things can convert between projections. I have it installed on my Mac, and there are bindings to Python and many other languages. You can also use it on the command line. The homepage is http://www.gdal.org/ogr2ogr.html