Is this a valid SVG path?

I've run into a problem rendering the following SVG path with various svg libraries:
<path d="M19.35 10.04c-.68-3.45-3.71-6.04-7.35-6.04-2.89 0-5.4 1.64-6.65 4.04-3.01.32-5.35 2.87-5.35 5.96 0 3.31 2.69 6 6 6h13c2.76 0 5-2.24 5-5 0-2.64-2.05-4.78-4.65-4.96zm-2.35 2.96l-5 5-5-5h3v-4h4v4h3z"/>
Specifically, you can see something odd about this block:
4.04-3.01.32-5.35
This fixes it:
4.04-3.01+0.32-5.35
... as does this:
4.04-3.01 0.32-5.35
My reading of the SVG spec suggests the original path is invalid, but since the icon comes right out of Google's material design icons (https://github.com/google/material-design-icons), and there are many similar "errors", I'm a little suspicious of my reading of the BNF.
Can anyone offer a second opinion?

4.04-3.01.32-5.35 is valid. The SVG path specification grammar says that at this point we're processing:
curveto-argument comma-wsp? curveto-argument-sequence
The ? after comma-wsp means 0 or 1 of those. In this case we have 0.
Tracing through the BNF, we end up in the part about parsing numbers prior to any exponent character, i.e.
digit-sequence? "." digit-sequence.
Once we've seen one full stop we can't see any more unless we see an exponent, so the second full stop must be part of something else, i.e. the next number.
So the above character sequence corresponds to the values: 4.04 -3.01 .32 -5.35
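If you want to see this rule in action, here is a quick sketch in Python. The regex is a simplification of the full BNF (it ignores arc flags and exponent corner cases), but it captures the "a second full stop starts a new number" behaviour described above:

import re

# Simplified SVG number pattern: optional sign, then either ".digits" or
# "digits[.digits]", with an optional exponent. A second '.' cannot be
# absorbed into the current number, so it starts the next one.
NUMBER = re.compile(r"[+-]?(?:\d*\.\d+|\d+\.?)(?:[eE][+-]?\d+)?")

print(NUMBER.findall("4.04-3.01.32-5.35"))
# ['4.04', '-3.01', '.32', '-5.35']

A real parser follows the grammar command by command rather than scanning for numbers, but the number-splitting behaviour is the same.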

Is there an SVG path d convention for using z instead of coordinate with other segments?

I've been exploring a Python library for SVG parsing named svgelements, and there is an unusual concept I can't find in any SVG docs; neither the Dolphin file browser, Firefox, nor GIMP can render SVG files that use it. There is a z in the path d data that gets parsed as a coordinate and passed to Path to create the curve or line with z_point (the end of the last move operation). So z is used with the L, Q, T, C and S operations to replace a coordinate.
Is this something standard in SVG? And why can't many other apps process it?
I've looked at this code for path d parsing:
https://github.com/meerk40t/svgelements/blob/master/svgelements/svgelements.py#L408
There's a part where z is processed as a number.
The library is implementing something that is actually part of the SVG 2 specification: the segment-completing close path operation. There is an (apparently failing) test in the Chromium test suite that exemplifies what is meant. It gives the test path element:
<path d="M 10 10 z m 20 70 h 10 v 10 h -10 l z M 70 30 q 20 0 20 20 t -20 20 t -20 -20 T z" />
To make it clear: since SVG 1.0 the z command closes a subpath with a straight line. This variant makes it possible to define the closing segment as a curved line.
Unfortunately, that part of the specification looks a bit like a dead end. This W3C SVG working group issue from August says:
The spec for the Segment-completing close path operation command was added 4 years ago and hasn't been implemented by any browser yet.
(https://svgwg.org/svg2-draft/paths.html#PathDataClosePathCommand). Currently it only exists in the spec, and as a failing wpt test.
consider removing it from the spec?
So far, it seems there has been no further discussion.
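For experimenting, you can feed the Chromium test path to svgelements itself and inspect how it interprets the segment-completing close. A minimal sketch, assuming a recent svgelements release still implements this (the behaviour may change, since the spec text itself is in question):

# Let svgelements interpret a segment-completing close path;
# most other parsers will reject or ignore the trailing "T z".
from svgelements import Path

path = Path("M 70 30 q 20 0 20 20 t -20 20 t -20 -20 T z")
for segment in path:
    print(segment)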

Debug manifest .mpd

After activating CORS on my web server, I ran my manifest through the DASH conformance web tool and found several errors that I cannot interpret. At the moment I am not able to get ABR behavior. Can I ask for help in understanding how to fix this?
https://allibrante.com/live/manifest.mpd
Below are some of the logs reported by the DASH conformance web tool; for more detail it is better to run the manifest through their web tool.
Thanks a lot!
error: moov-1:trak-1:mdia-1:minf-1:stbl-1:stsd-1
SampleDescription sdType must be 'mp4v', 'avc1', 'encv', 'hev1', 'hvc1', or 'vp09' ('mp4v', 'avc1', 'encv', 'hev1', 'hvc1', 'vp09')
Warning: Unknown atom found "avcC": video sample descriptions would not normally contain this
Warning: Unknown atom found "pasp": video sample descriptions would not normally contain this
Brand 'lmsg' not found as a compatible brand for the last segment (number 3); violates Section 3.2.3. of Interoperability Point DASH264: If the MPD#type is equal to "dynamic" or if it includes MPD#profile attribute includes "urn:mpeg:dash:profile:isoff-live:2011", then: if the Media Segment is the last Media Segment in the Representation, this Media Segment shall carry the 'lmsg' compatibility brand
tfdt base media decode time 1658.000000 not equal to accumulated decode time 0.000000 for track 1 for the first fragment of the movie. This software does not handle incomplete presentations. Applying correction.
error:
Buffer underrun conformance error: first (and only one reported here) for sample 1 of run 1 of track fragment 1 of fragment 1 of track id 1 (sample absolute file offset 1356, fragment absolute file offset 860, bandwidth: 7591)
-
'tkhd' alternateGroup must be 0 not 1
Validate_ES_Descriptor: ES_ID should be 0 not 2 in media tracks
WARNING: unknown sample table atom 'sgpd'
WARNING: unknown mvex atom 'trep'
WARNING: unknown/unexpected atom 'meta'
Brand 'lmsg' not found as a compatible brand for the last segment (number 3); violates Section 3.2.3. of Interoperability Point DASH264: If the MPD#type is equal to "dynamic" or if it includes MPD#profile attribute includes "urn:mpeg:dash:profile:isoff-live:2011", then: if the Media Segment is the last Media Segment in the Representation, this Media Segment shall carry the 'lmsg' compatibility brand
tfdt base media decode time 1657.984000 not equal to accumulated decode time 0.000000 for track 2 for the first fragment of the movie. This software does not handle incomplete presentations. Applying correction.
error:
grouping_type roll in sbgp is not found for any sgpd in moof number 1
error:
grouping_type roll in sbgp is not found for any sgpd in moof number 1
error:
grouping_type roll in sbgp is not found for any sgpd in moof number 1
The cause of the majority of your problems is spelled out in the error message: This software does not handle incomplete presentations. You are trying to validate a live stream, and this tool does not currently have that capability.
With respect to the sample description issue, it looks like the validator does not recognise avc3 content (i.e. where the parameter sets are inband rather than in the initialisation segment). I would consider this a bug and suggest you raise an issue at https://github.com/Dash-Industry-Forum/Conformance-and-reference-source/issues.
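If you want to confirm what your initialisation segment actually declares, a small hand-rolled ISO-BMFF box walk is enough. A minimal Python sketch (the file name init.mp4 is a placeholder for your own init segment):

import struct

CONTAINERS = {b"moov", b"trak", b"mdia", b"minf", b"stbl"}

def sample_entry_types(data, offset=0, end=None):
    """Walk ISO-BMFF boxes and yield the fourccs of the stsd sample entries."""
    end = len(data) if end is None else end
    while offset + 8 <= end:
        size, box_type = struct.unpack_from(">I4s", data, offset)
        header = 8
        if size == 1:                      # 64-bit largesize follows the header
            size = struct.unpack_from(">Q", data, offset + 8)[0]
            header = 16
        elif size == 0:                    # box extends to the end of the data
            size = end - offset
        if size < header:
            break                          # malformed box; stop rather than loop
        if box_type in CONTAINERS:
            yield from sample_entry_types(data, offset + header, offset + size)
        elif box_type == b"stsd":
            # full box: version/flags (4 bytes) + entry_count (4 bytes), then entries
            entry = offset + header + 8
            while entry + 8 <= offset + size:
                entry_size, entry_type = struct.unpack_from(">I4s", data, entry)
                if entry_size < 8:
                    break
                yield entry_type.decode("ascii", "replace")
                entry += entry_size
        offset += size

with open("init.mp4", "rb") as f:          # placeholder path
    print(list(sample_entry_types(f.read())))   # e.g. ['avc1'] or ['avc3']

An 'avc3' entry means the parameter sets travel in the media segments themselves, which appears to be exactly the packaging the validator is tripping over.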

J2ME - PNG created in PhotoShop not displaying in emulator

My midlet is showing some images fine, but not others.
They are all 8-bit PNGs, but the ones that aren't displaying are the ones I have created myself in Photoshop.
So I am thinking maybe my Photoshop (CS6) settings are wrong...
PNG-8, Selective, Diffusion, Colors: 256, Dither: 100%, Matte: None, Web
Snap: 0%, Convert to sRGB: ticked, Width: 48, Height: 48, Percent: 100%,
Quality: Bicubic.
I've experimented with a few of these settings, but to no avail.
Any ideas?
There is a similar problem here, but it is the opposite of mine in that Photoshop mends things in that case rather than breaking them...
My code is...
image = Image.createImage("/img/loading1.png");
...and here is my stack trace:
java.io.EOFException
at javax.imageio.stream.ImageInputStreamImpl.readFully(
ImageInputStreamImpl.java:353)
at java.io.DataInputStream.readUTF(DataInputStream.java:609)
at javax.imageio.stream.ImageInputStreamImpl.readUTF(ImageInputStreamImpl.java:332)
at com.sun.kvem.png.PNGImageReader.parse_iTXt_chunk(PNGImageReader.java:447)
at com.sun.kvem.png.PNGImageReader.readMetadata(PNGImageReader.java:650)
at com.sun.kvem.png.PNGImageReader.readImage(PNGImageReader.java:1312)
at com.sun.kvem.png.PNGImageReader.read(PNGImageReader.java:1582)
at com.sun.kvem.midp.GraphicsBridge.loadImage(GraphicsBridge.java:2602)
at com.sun.kvem.midp.GraphicsBridge.createImageFromData(GraphicsBridge.java:2511)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:601)
at com.sun.kvem.sublime.MethodExecution.process(MethodExecution.java:42)
at com.sun.kvem.sublime.SublimeExecutor.processRequest(SublimeExecutor.java:63)
at javax.microedition.lcdui.Image.createImage(Image.java:315)
The image in question does exist - both in the project and in the jar that is built.
Here is the image in question:
According to the crash log, the PNG decoder in J2ME fails inside the non-critical chunk iTXt:1
> com.sun.kvem.png.PNGImageReader.readMetadata
> com.sun.kvem.png.PNGImageReader.parse_iTXt_chunk
> javax.imageio.stream.ImageInputStreamImpl.readUTF
> java.io.DataInputStream.readUTF
According to libpng documentation, the text part of an iTXt chunk must be valid UTF8:
... The remaining chunk data is the main UTF-8 text, either zlib-compressed or not, according to the compression flag. Since its length can be determined from the chunk length, it is not null-terminated. As with the other two text chunks, newlines should be represented by single line-feed characters (decimal 10), and all other control characters (1-9, 11-31, and 127-159) are discouraged.
and so normally this would indicate that the stream read is not valid UTF8 text - it contains 'raw' bytes higher than the plain ASCII range 0..127 that do not conform to UTF8 rules.
I found that not to be the case in the sample image. There is only one set of consecutive bytes that form a UTF8 code sequence, and it is a valid one:
<?xpacket begin="EFBBBF" id=" ..
(the "EFBBBF" stands for 3 data bytes in hexadecimal notation: the UTF-8 encoded byte order mark). I first suspected this was the error:
If the BOM character appears in the middle of a data stream, Unicode says it should be interpreted as a "zero-width non-breaking space" (inhibits line-breaking between word-glyphs). In Unicode 3.2, this usage is deprecated in favour of the "Word Joiner" character, U+2060.[1] This allows U+FEFF to be only used as a BOM.
(http://en.wikipedia.org/wiki/Byte_order_mark)
.. and so a fully conforming UTF8 reader should inspect its bytes and throw a UTFDataFormatException when it encounters a BOM anywhere other than as the very first value. Surprisingly, this does not seem to be the problem! First of all, there is no indication that any of the readUTF sources do anything more than verify whether each UTF8 code is valid on its own, irrespective of its value. There are lots of 'invalid' Unicode code points (values that do not represent a valid Unicode character or instruction), but it appears to me they are all silently ignored. I also noticed the common readUTF functions only implement a small subset of UTF8/Unicode (see, e.g., Modified UTF-8 in Oracle's documentation).
So the problem lies elsewhere. Another clue is that the error thrown is not UTFDataFormatException but EOFException, indicating the reader ran out of bytes before it had read the number it was promised.
(warning: pure conjecture follows)
Looking at a source of DataInputStream, I find this snippet of code:
public final static String readUTF(DataInput in) throws IOException {
    int utflen = in.readUnsignedShort();
followed by a loop to read utflen bytes (not "Unicode characters"). This is wrong for an iTXt chunk, as it does not have a 'first word' to indicate its length. The number of bytes in the plain text can be derived from the chunk length (which is, per PNG convention, the total data length excluding the length long word, the iTXt signature itself, and the final CRC32 code) minus the length of the zero-terminated keyword name, language, and "translated keyword" strings, and the two bytes which indicate compression of the full plain text.
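To make that layout concrete, here is a minimal Python sketch of how an iTXt body is actually split up, following the field order in the PNG specification (note there is no length word anywhere for readUTF to find):

import zlib

def parse_itxt(body: bytes):
    """Split an iTXt chunk body as the PNG spec lays it out; the text length is
    simply whatever is left after the null-terminated fields, not a 16-bit prefix."""
    keyword, rest = body.split(b"\x00", 1)
    compression_flag = rest[0]             # 0 = uncompressed, 1 = zlib-compressed
    # rest[1] is the compression method; always 0 (zlib) when the flag is set
    language_tag, rest = rest[2:].split(b"\x00", 1)
    translated_keyword, text = rest.split(b"\x00", 1)
    if compression_flag:
        text = zlib.decompress(text)
    return (keyword.decode("latin-1"), language_tag.decode("ascii"),
            translated_keyword.decode("utf-8"), text.decode("utf-8"))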
As a work-around, remove the iTXt chunks from your PNG images. The data itself -- XMP Metadata -- is most likely not interesting at all for your purposes (but feel free to read what benefits Adobe thinks it has). And if your workflow does not use it, it's just a useless hunk of uncompressed text, taking up 814 bytes of the total of 981 bytes in your sample image -- a whopping 83%!
You can use an external utility to remove extraneous data chunks; the command line for the popular pngcrush, for example, is
pngcrush -rem alla -rem text InputFile.png OutputFile.png
(from en.wikipedia.org/wiki/Pngcrush).
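If you'd rather not rely on an external tool, the same clean-up is only a few lines of Python. A minimal sketch (file names are placeholders, and it assumes a well-formed PNG):

import struct

def strip_text_chunks(src_path, dst_path, drop=(b"iTXt", b"tEXt", b"zTXt")):
    """Copy a PNG, omitting ancillary text chunks (a DIY alternative to pngcrush)."""
    with open(src_path, "rb") as f:
        data = f.read()
    out = [data[:8]]                      # the 8-byte PNG signature
    pos = 8
    while pos < len(data):
        length, chunk_type = struct.unpack_from(">I4s", data, pos)
        chunk_end = pos + 12 + length     # 4 length + 4 type + data + 4 CRC
        if chunk_type not in drop:
            out.append(data[pos:chunk_end])
        pos = chunk_end
    with open(dst_path, "wb") as f:
        f.write(b"".join(out))

strip_text_chunks("loading1.png", "loading1_clean.png")   # placeholder file names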
Or directly from Photoshop: if you save a PNG 'the usual way' with the "Save As" menu option, the metadata goes in and there is no checkbox to get rid of it. If you use "Save for Web & Devices" instead, you get a large dialog with a lot of handy options, such as a drop down list labelled "Metadata".
Choosing "All" I got an even larger file; my version of Photoshop creates a massive 3K chunk of XMP Metadata, including a 2K totally empty 'filler' block...
Selecting "Copyright" or "None" finally got rid of all the crud (presumably because I did not fill in any copyright information), and then you get a nice 169 bytes long PNG, in which the only metadata is that software used is called "Adobe ImageReady".
1 Which is kind of ironic. Per PNG specifications,
.. A decoder encountering an unknown chunk in which the ancillary bit is 1 can safely ignore the chunk and proceed to display the image.
(source)
This "ancillary bit" is the 5th bit of the first byte of the chunk ID: 0 (uppercase) = critical, 1 (lowercase) = ancillary, i.e., if the first character of the chunk ID is a capital, a PNG reader must read and interpret its data correctly, and if it's not, it can be skipped silently.
So technically, the writers of J2ME could safely have ignored this entire chunk. But they messed it up: they attempt to read it, and the code crashes in any program that merely tries to read the image data of a PNG which happens to contain an iTXt chunk.

OpenSceneGraph CullVisitor::apply(Geode&) detected NaN

I am trying to use OSG for displaying some cubes on the screen.
On some runs it works perfectly, but sometimes it does not display anything and just prints this in the virtual console:
CullVisitor::apply(Geode&) detected NaN,
depth=nan, center=(0 0 0),
matrix={
-1 0 0 0
0 0 1 0
0 1 0 0
-nan -nan -nan -nan
}
The reason it sometimes works and other times doesn't is probably that the cubes are positioned randomly, and some positions apparently do not work.
The question is:
what does it mean and how do I avoid it?
Note: you may be tempted to downvote this question right away, but please note that Google only provides miserably useless results and I see no way of solving this problem other than asking for help.
Did you search your code for the usual list of suspects?
see:
http://en.wikipedia.org/wiki/NaN#Operations_generating_NaN
It's also possible you're trying to cull your scene before an object is fully initialized (no position yet) - the fix would be to not add it to your scene until you've initialized it. But we're really just guessing unless you post some of your relevant code.
The point is that the view matrix is not correctly initialized.
Perform a check and, if the view matrix is invalid, replace it with the identity matrix:
// if the view matrix is invalid (NaN), use the identity
osg::ref_ptr<osg::Camera> camera = _viewer->getCamera();
if (camera->getViewMatrix().isNaN())
camera->setViewMatrix(osg::Matrix::identity());

modx Decrement a TV to obtain 0

I need my [[+idx]] TV to start at 0 instead of 1, so I tried this:
[[+idx:decr]] or [[+idx:substract=1]] but it gives me -1 (minus one).
Does anyone know another way to obtain 0?
Thank you
Using this in a chunk for getImageList works (at least for me):
[[+idx:decr]]
It gives: 0,1,2,3 ....
P.S. Using MODX Revo 2.3.1.
Set your template variable's default value to 0 when you create the variable.
What are you trying to do? Your question is vague at best.
UPDATE
OK - what I think will work for you is to write a snippet to do the math... wherever you call [[+idx]], instead call a snippet:
[[!FixIDX? &itemindex=`[[+idx]]`]]
Then in your FixIDX snippet just do the math with PHP and return the corrected index. Though perhaps a custom output modifier would be the better way to go: http://rtfm.modx.com/display/revolution20/Input+and+Output+Filters+(Output+Modifiers)
Though looking at the docs, your code should certainly work - I see no reason for it not to.
