After activating CORS on my web server, I ran my manifest through the DASH conformance web tool and got several errors that I cannot interpret; at the moment I am not able to get ABR behaviour. Can I ask for help understanding how to fix this?
https://allibrante.com/live/manifest.mpd
Below is some of the log reported by the DASH conformance web tool; for more detail it is better to run the manifest through the tool itself.
Thanks a lot!
error: moov-1:trak-1:mdia-1:minf-1:stbl-1:stsd-1
SampleDescription sdType must be 'mp4v', 'avc1', 'encv', 'hev1', 'hvc1', or 'vp09' ('mp4v', 'avc1', 'encv', 'hev1', 'hvc1', 'vp09')
Warning: Unknown atom found "avcC": video sample descriptions would not normally contain this
Warning: Unknown atom found "pasp": video sample descriptions would not normally contain this
Brand 'lmsg' not found as a compatible brand for the last segment (number 3); violates Section 3.2.3 of Interoperability Point DASH264: If the MPD#type is equal to "dynamic" or if it includes MPD#profile attribute includes "urn:mpeg:dash:profile:isoff-live:2011", then: if the Media Segment is the last Media Segment in the Representation, this Media Segment shall carry the 'lmsg' compatibility brand
tfdt base media decode time 1658.000000 not equal to accumulated decode time 0.000000 for track 1 for the first fragment of the movie. This software does not handle incomplete presentations. Applying correction.
error:
Buffer underrun conformance error: first (and only one reported here) for sample 1 of run 1 of track fragment 1 of fragment 1 of track id 1 (sample absolute file offset 1356, fragment absolute file offset 860, bandwidth: 7591)
'tkhd' alternateGroup must be 0 not 1
Validate_ES_Descriptor: ES_ID should be 0 not 2 in media tracks
WARNING: unknown sample table atom 'sgpd'
WARNING: unknown mvex atom 'trep'
WARNING: unknown/unexpected atom 'meta'
Brand 'lmsg' not found as a compatible brand for the last segment (number 3); violates Section 3.2.3 of Interoperability Point DASH264: If the MPD#type is equal to "dynamic" or if it includes MPD#profile attribute includes "urn:mpeg:dash:profile:isoff-live:2011", then: if the Media Segment is the last Media Segment in the Representation, this Media Segment shall carry the 'lmsg' compatibility brand
tfdt base media decode time 1657.984000 not equal to accumulated decode time 0.000000 for track 2 for the first fragment of the movie. This software does not handle incomplete presentations. Applying correction.
error:
grouping_type roll in sbgp is not found for any sgpd in moof number 1
error:
grouping_type roll in sbgp is not found for any sgpd in moof number 1
error:
grouping_type roll in sbgp is not found for any sgpd in moof number 1
The cause of the majority of your problems is spelled out in the error message: "This software does not handle incomplete presentations." You are trying to validate a live stream, and this tool does not currently have that capability.
With respect to the sample description issue, it looks like the validator does not recognise avc3 content (i.e. where the parameter sets are carried in-band rather than in the initialisation segment). I would consider this a bug and suggest you raise an issue at https://github.com/Dash-Industry-Forum/Conformance-and-reference-source/issues.
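If it helps to confirm what the packager actually produced, below is a minimal sketch (my own, not part of the conformance tool) that walks a DASH initialisation segment and prints the sample-entry codes found in each stsd box; 'avc1' means the parameter sets live in the initialisation segment, 'avc3' means they are carried in-band. It assumes 32-bit box sizes and ignores 64-bit ones.

import struct, sys

CONTAINERS = {b"moov", b"trak", b"mdia", b"minf", b"stbl"}

def sample_entries(buf, pos=0, end=None):
    # Yield the four-character codes of the sample entries found in stsd boxes.
    end = len(buf) if end is None else end
    while pos + 8 <= end:
        size, box = struct.unpack(">I4s", buf[pos:pos + 8])
        if size < 8:                         # size == 1 (64-bit box) is not handled in this sketch
            break
        if box in CONTAINERS:
            yield from sample_entries(buf, pos + 8, pos + size)
        elif box == b"stsd":
            count = struct.unpack(">I", buf[pos + 12:pos + 16])[0]   # entry_count, after version/flags
            p = pos + 16
            for _ in range(count):
                esize, etype = struct.unpack(">I4s", buf[p:p + 8])
                yield etype.decode("ascii", "replace")
                p += esize
        pos += size

if __name__ == "__main__":
    data = open(sys.argv[1], "rb").read()    # pass the initialisation segment of the failing representation
    print(list(sample_entries(data)))        # e.g. ['avc1'] or ['avc3']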
I have a source MP4 file with a duration of 17 seconds (for example).
When I convert the video to Apple HLS using AWS MediaConvert, I get an m3u8 file with a duration of 18 seconds.
I mean the #EXTINF:18 tag in the m3u8.
I use ABR mode.
The SegmentControl settings are the defaults:
{
  "OutputGroups": [
    {
      "Name": "Apple HLS",
      "OutputGroupSettings": {
        "Type": "HLS_GROUP_SETTINGS",
        "HlsGroupSettings": {
          "SegmentLength": 10,
          "MinSegmentLength": 0,
          "TargetDurationCompatibilityMode": "LEGACY",
          "SegmentLengthControl": "GOP_MULTIPLE",
          "SegmentControl": "SEGMENTED_FILES"
        }
      }
    }
  ]
}
How do I fix it? I have tried changing various HlsGroupSettings, but the result remains the same.
Thanks for your post. MediaConvert has a default setting to use whole integers for manifest durations. This means that if the source asset has even one extra frame of video or audio, the service will add a whole second to the segment duration, which may be why your output appears to be one second longer than expected. You can change this to a floating-point duration under "HLS Output Group / Advanced / Manifest duration format". Try this and you may find the last segment is only slightly longer than expected.
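If you are submitting jobs through the API or an SDK rather than the console, that console control corresponds, as far as I understand the job schema, to the ManifestDurationFormat field of HlsGroupSettings. A hedged boto3 sketch of where the field goes (endpoint discovery and the rest of the job body abbreviated):

import boto3

# MediaConvert uses an account-specific endpoint, discovered once via describe_endpoints.
mc = boto3.client("mediaconvert")
endpoint = mc.describe_endpoints()["Endpoints"][0]["Url"]
mc = boto3.client("mediaconvert", endpoint_url=endpoint)

hls_group_settings = {
    "SegmentLength": 10,
    "MinSegmentLength": 0,
    "SegmentLengthControl": "GOP_MULTIPLE",
    "SegmentControl": "SEGMENTED_FILES",
    "ManifestDurationFormat": "FLOATING_POINT",   # default is "INTEGER", which rounds up to whole seconds
}
# This dict goes into OutputGroups[].OutputGroupSettings.HlsGroupSettings of the
# Settings structure passed to mc.create_job(Role=..., Settings=...).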
You can ensure the source asset is exactly XX seconds long by using the "Input clips" feature to specify a specific start and end timecode (HH:MM:SS:FF).
For the widest compatibility with streaming players we recommend using 1 second as the minimum segment duration. Very short segments (<1s) sometimes get skipped by some players or flagged by stream quality checking products. If a few extra frames of source content are found to exist, they will get added to the previous segment.
When measuring durations, be sure to check the actual media track durations and not just the file header metadata. Utilities such as ffprobe or mediainfo (use the --full flag) are helpful for this. The pts_time for each frame will indicate when it is supposed to start. The pkt_duration_time will indicate the duration of each frame.
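As a rough way to script that check, the sketch below shells out to ffprobe (assumed to be on PATH) and sums the per-frame durations of the first video stream; note that newer ffprobe builds report the field as duration_time rather than pkt_duration_time.

import subprocess

def video_track_duration(path: str) -> float:
    # Sum per-frame durations for the first video stream as reported by ffprobe.
    out = subprocess.run(
        ["ffprobe", "-v", "error", "-select_streams", "v:0",
         "-show_entries", "frame=pkt_duration_time",   # use 'duration_time' on newer ffprobe builds
         "-of", "csv=p=0", path],
        capture_output=True, text=True, check=True,
    ).stdout
    return sum(float(v) for v in out.split() if v and v != "N/A")

# e.g. compare against the 17 s you expect:
# print(video_track_duration("source.mp4"))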
Good morning,
With my Nikon DSLR I always shoot raw. I'd like to import my NEFs into an import folder and copy them from there with a name prefixed by the date taken. In Lazarus I've created a class named TMyNef, and I've managed to retrieve the date taken, ISO, shutter speed, f-stops, exposure program, maker, model and focal length. I'm struggling with the offset to the embedded JPG. I need this because the Lazarus TImage component doesn't support the NEF format, but it can display JPGs perfectly. Every NEF has an embedded JPG (the one you see on the back of the camera), hidden in the PreviewTag of the NEF. Basically, a NEF is almost the same as a TIFF. A TIFF uses IFDs to store meta-information. So I studied lots of sites about TIFF, Exif, ExifTool (Phil Harvey) and DCRaw (Dave Coffin), and I've used Exiftool.exe to analyse my NEFs. I've even called "Exiftool.exe -b" from within my program, but it's too slow.
In short, it comes down to:
Open the file
Read the header
Search for tag 0x8769 (the Exif tag)
If not found -> exit
Go to the offset where the Exif IFD starts
Search for tag 0x927C (the MakerNotes)
If not found -> exit
Go to the offset where the MakerNotes IFD starts
Read the MakerNotes header
Search for tag 0x0011, the Nikon Preview IFD
If not found -> exit
Go to the Nikon Preview IFD
Search for tags 0x0201 (start of the embedded JPG) and
tag 0x0202 (size of the embedded JPG)
If not found -> exit
Go to the start of the embedded JPG
Read JPGSIZE bytes from the file and copy them to a TMemoryStream
called Fms
Call Image1.Picture.LoadFromStream(Fms)
Summarized:
Open the NEF, search tag 0x8769, search 0x927C, search 0x0011, search 0x0201.
"Go to" means, in this case: move the file pointer to the given offset.
Unfortunately, I haven't been able to retrieve the offset of the embedded JPG. My program gives entirely different values from Phil Harvey's ExifTool.
Jumping to "Phil's offset" and reading until 0xFFD9 (or reading the number of bytes given in tag 0x0202) displays a perfect picture; going to the offset my program found and reading until 0xFFD9, Lazarus says: unsupported picture format.
In the various IFDs I found the metadata I needed, so I collected that along the way.
Does anyone know a way to retrieve the correct offset to the embedded JPG in a NEF? Or is there another way to display a NEF via the TImage component in Lazarus?
Thanks in advance and best regards,
Martin
(Intel i5, 8 GB, Windows 10, Nikon codec loaded, Lazarus 1.8)
Offsets within Exif structures are usually relative to the beginning of the TIFF header in front of the first Exif tag. This is true for JPEG and TIFF files; I'm not sure about NEF.
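To make that concrete, below is a minimal Python sketch (not Lazarus, with hypothetical file and helper names) of the walk described in the question, with the offset bases made explicit. One extra assumption, based on how ExifTool documents recent ("format 3") Nikon maker notes: the MakerNote starts with a "Nikon\0" signature plus four version bytes and then embeds its own TIFF header, and offsets inside it (including the Preview-IFD tags 0x0201/0x0202) are relative to that embedded header rather than to the start of the file. That difference in base would explain getting a different offset than ExifTool reports; verify against a real file before relying on it.

import struct

def read_ifd(f, base, ifd_offset, fmt):
    # Read one IFD and return {tag: raw 4-byte value/offset field}.
    # 'base' is the file offset of the TIFF header that this IFD's offsets are relative to.
    f.seek(base + ifd_offset)
    (count,) = struct.unpack(fmt + "H", f.read(2))
    entries = {}
    for _ in range(count):
        tag, typ, n = struct.unpack(fmt + "HHI", f.read(8))
        entries[tag] = f.read(4)        # inline value if it fits in 4 bytes, otherwise an offset
    return entries

def u32(raw, fmt):
    return struct.unpack(fmt + "I", raw)[0]

with open("DSC_0001.NEF", "rb") as f:                        # hypothetical file name
    order = f.read(2)                                        # b"II" = little endian, b"MM" = big endian
    fmt = "<" if order == b"II" else ">"
    magic, ifd0_off = struct.unpack(fmt + "HI", f.read(6))   # magic is 42, as in any TIFF
    ifd0 = read_ifd(f, 0, ifd0_off, fmt)                     # in a NEF the outer TIFF header is at file offset 0
    exif_ifd = read_ifd(f, 0, u32(ifd0[0x8769], fmt), fmt)   # Exif IFD pointer, relative to the outer header

    mn_off = u32(exif_ifd[0x927C], fmt)                      # MakerNote position (relative to the outer header)
    f.seek(mn_off)
    assert f.read(6) == b"Nikon\x00"                         # assumed "format 3" maker note
    mn_base = mn_off + 10                                    # 6-byte signature + 4 version bytes -> embedded TIFF header
    f.seek(mn_base + 4)                                      # skip its byte-order mark and magic
    (mn_ifd0_off,) = struct.unpack(fmt + "I", f.read(4))     # assumes the embedded header uses the same byte order
    mn_ifd = read_ifd(f, mn_base, mn_ifd0_off, fmt)          # MakerNote IFD; offsets relative to mn_base

    preview_ifd = read_ifd(f, mn_base, u32(mn_ifd[0x0011], fmt), fmt)   # Nikon Preview IFD
    jpg_start = mn_base + u32(preview_ifd[0x0201], fmt)      # absolute file offset of the embedded JPG
    jpg_size = u32(preview_ifd[0x0202], fmt)
    f.seek(jpg_start)
    jpg_bytes = f.read(jpg_size)                             # this is what would go into the TMemoryStream / TImage
    print(hex(jpg_start), jpg_size, jpg_bytes[:2] == b"\xff\xd8")   # SOI marker check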
The most recent HTTP Live Streaming spec (draft 16) omits the FRAME-RATE attribute from the EXT-X-STREAM-INF tag.
The following link shows the diff between the two latest versions of the spec (drafts 15 and 16):
draft-pantos-http-live-streaming-15.txt
draft-pantos-http-live-streaming-16.txt
[https://www.ietf.org/rfcdiff?url1=draft-pantos-http-live-streaming-15&url2=draft-pantos-http-live-streaming-16]
Note that in section 4.3.4.2, "EXT-X-STREAM-INF", the FRAME-RATE attribute is present in 15 but not in 16, yet there is no mention of why it was omitted. Is it now deprecated? Can it still be used? Should players ignore a FRAME-RATE attribute if one is specified? If my playlist uses the FRAME-RATE attribute, can it still be used, or will I need to change my playlists and remove it?
Since I was curious about this too, I contacted the draft author, and he kindly provided the information.
The EXT-X-STREAM-INF optional attribute FRAME-RATE is not deprecated or removed, but it was published by mistake before being fully validated.
It has since been validated, so we can expect it to return in a future version of the protocol. It will be used to allow devices that don't support higher frame rates to skip the respective streams without needing to fetch a media segment beforehand.
For now:
To support forward compatibility, when parsing Playlists, Clients MUST:
ignore any unrecognized tags.
ignore any Attribute/value pair with an unrecognized AttributeName.
ignore any tag containing an attribute/value pair of type enumerated-string whose AttributeName is recognized but whose AttributeValue is not recognized, unless the definition of the attribute says otherwise.
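As a toy illustration of those rules (the KNOWN set below is just an example subset, not the real attribute list of any player), a client parser can simply drop attribute/value pairs it does not recognise, so a FRAME-RATE attribute in your playlist is harmless to clients that predate it:

import re

ATTR_RE = re.compile(r'([A-Z0-9-]+)=("[^"]*"|[^,]*)')
KNOWN = {"BANDWIDTH", "CODECS", "RESOLUTION", "AUDIO", "SUBTITLES"}  # illustrative subset only

def parse_stream_inf(line: str) -> dict:
    # Parse an EXT-X-STREAM-INF attribute list, silently ignoring unrecognized attributes.
    attrs = dict(ATTR_RE.findall(line.split(":", 1)[1]))
    return {k: v.strip('"') for k, v in attrs.items() if k in KNOWN}

line = '#EXT-X-STREAM-INF:BANDWIDTH=1280000,FRAME-RATE=59.940,RESOLUTION=1280x720'
print(parse_stream_inf(line))   # FRAME-RATE is dropped, per the MUSTs quoted above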
I've run into a problem rendering the following SVG path with various SVG libraries:
<path d="M19.35 10.04c-.68-3.45-3.71-6.04-7.35-6.04-2.89 0-5.4 1.64-6.65 4.04-3.01.32-5.35 2.87-5.35 5.96 0 3.31 2.69 6 6 6h13c2.76 0 5-2.24 5-5 0-2.64-2.05-4.78-4.65-4.96zm-2.35 2.96l-5 5-5-5h3v-4h4v4h3z"/>
specifically, you can see something odd about this block:
4.04-3.01.32-5.35
this fixes it:
4.04-3.01+0.32-5.35
... as does this:
4.04-3.01 0.32-5.35
My reading of the SVG spec suggests the original path is invalid, but since the icon comes straight out of Google's material design icons (https://github.com/google/material-design-icons) and there are many similar "errors", I'm a little suspicious of my reading of the BNF.
Can anyone offer a second opinion?
4.04-3.01.32-5.35 is valid. The SVG path specification grammar says that we're processing this
curveto-argument comma-wsp? curveto-argument-sequence
The ? after comma-wsp means 0 or 1 of those. In this case we have 0.
Tracing through the BNF, we end up in the part that is about parsing numbers prior to any exponent character, i.e.
digit-sequence? "." digit-sequence.
Once we've seen one full stop, we can't see any more unless we see an exponent, so the second full stop must be part of something else, i.e. the next number.
So the above character sequence corresponds to the values: 4.04 -3.01 .32 -5.35
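To make the lexing concrete, here is a small sketch using a simplified number pattern (not the full path grammar): once a number has a '.', a second '.' or a sign that is not part of an exponent ends it and starts the next number.

import re

# Simplified SVG number: optional sign, digits with at most one '.', optional exponent.
NUMBER = re.compile(r'[+-]?(?:\d*\.\d+|\d+\.?)(?:[eE][+-]?\d+)?')

print(NUMBER.findall("4.04-3.01.32-5.35"))
# -> ['4.04', '-3.01', '.32', '-5.35']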
To all those who have experience with the CRF++ toolkit (see http://crfpp.sourceforge.net/):
Below is the error message that pops up when trying to execute the CRF++ training program:
CRF++: Yet Another CRF Tool Kit
Copyright (C) 2005-2009 Taku Kudo, All rights reserved.
encoder.cpp(280) [feature_index.open(templfile, trainfile)] feature_index.cpp(86) [max_size == size] inconsistent column size: 21 20 train.data
I'm not sure how to interpret the error message.
There are 20 features in my training file and the 21st token is the class value.
I have created the CRF++ template file as per the instructions on the site.
It looks like a training-data format issue; make sure the number of columns is consistent across all sentences.
I got this error today and found that the CRF++ toolkit uses the tab character (\t) as the default column separator, whereas my training data file used a single space, which led to the error.
Some points to check:
1. Check that you have a newline after each sentence.
2. Check that your column values do not contain any spaces.
The error suggests that the number of columns is not the same in all rows. Your maximum number of columns is 21, and that should be consistent throughout the training file, but crf_learn finds 20 columns somewhere in your train.data training file. So locate that row and remove or repair it.
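If it helps to locate the offending row, here is a quick sketch (the file name is assumed; CRF++ splits columns on whitespace, so adjust the separator if your data is tab-delimited):

# Report every non-empty line whose column count differs from the first one seen.
expected = None
with open("train.data", encoding="utf-8") as f:
    for lineno, raw in enumerate(f, 1):
        line = raw.rstrip("\n")
        if not line.strip():                 # blank lines separate sentences -- that is fine
            continue
        cols = len(line.split())             # use line.split("\t") for strictly tab-separated data
        if expected is None:
            expected = cols
        elif cols != expected:
            print(f"line {lineno}: {cols} columns (expected {expected})")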