I have an original video coded in several resolutions with its associated bitrates:
CanalMetroLivinglab3DTV-HD-Musica_384x216.mp4
CanalMetroLivinglab3DTV-HD-Musica_640x360.mp4
CanalMetroLivinglab3DTV-HD-Musica_720x406.mp4
CanalMetroLivinglab3DTV-HD-Musica_1280x720.mp4
CanalMetroLivinglab3DTV-HD-Musica_1920x1080.mp4
I used GPAC MP4Box to split these contents into segments and to create the MPD file as follows:
MP4Box -dash 1000 -rap -segment-name %s_ -out CanalMetroLivinglab3DTV-HD-Musica.mpd ..\..\Dem3DTV\CanalMetroLivinglab3DTV-HD-Musica\CanalMetroLivinglab3DTV-HD-Musica_384x216.mp4 ..\..\Dem3DTV\CanalMetroLivinglab3DTV-HD-Musica\CanalMetroLivinglab3DTV-HD-Musica_640x360.mp4 ..\..\Dem3DTV\CanalMetroLivinglab3DTV-HD-Musica\CanalMetroLivinglab3DTV-HD-Musica_720x406.mp4 ..\..\Dem3DTV\CanalMetroLivinglab3DTV-HD-Musica\CanalMetroLivinglab3DTV-HD-Musica_1280x720.mp4 ..\..\Dem3DTV\CanalMetroLivinglab3DTV-HD-Musica\CanalMetroLivinglab3DTV-HD-Musica_1920x1080.mp4
I obtained the following manifest file (it isn't complete):
<MPD xmlns="urn:mpeg:dash:schema:mpd:2011" minBufferTime="PT1.500000S" type="static" mediaPresentationDuration="PT0H0M45.02S" profiles="urn:mpeg:dash:profile:full:2011">
<ProgramInformation moreInformationURL="http://gpac.sourceforge.net">
<Title>
CanalMetroLivinglab3DTV-HD-Musica.mpd generated by GPAC
</Title>
</ProgramInformation>
<Period duration="PT0H0M45.02S">
<AdaptationSet segmentAlignment="true" bitstreamSwitching="true" maxWidth="1920" maxHeight="1080" maxFrameRate="25" par="16:9" lang="eng">
<AudioChannelConfiguration schemeIdUri="urn:mpeg:dash:23003:3:audio_channel_configuration:2011" value="2"/>
<ContentComponent id="1" contentType="video"/>
<ContentComponent id="2" contentType="audio"/>
<SegmentList>
<Initialization sourceURL="CanalMetroLivinglab3DTV-HD-Musica_init.mp4"/>
</SegmentList>
<Representation id="1" mimeType="video/mp4" codecs="avc3.640028,mp4a.40.2" width="384" height="216" frameRate="25" sar="1:1" audioSamplingRate="44100" startWithSAP="1" bandwidth="479651">...</Representation>
<Representation id="2" mimeType="video/mp4" codecs="avc3.640028,mp4a.40.2" width="640" height="360" frameRate="25" sar="1:1" audioSamplingRate="44100" startWithSAP="1" bandwidth="930072">...</Representation>
<Representation id="3" mimeType="video/mp4" codecs="avc3.640028,mp4a.40.2" width="720" height="408" frameRate="25" sar="1:1" audioSamplingRate="44100" startWithSAP="1" bandwidth="1123428">...</Representation>
<Representation id="4" mimeType="video/mp4" codecs="avc3.640028,mp4a.40.2" width="1280" height="720" frameRate="25" sar="1:1" audioSamplingRate="44100" startWithSAP="1" bandwidth="2470344">...</Representation>
<Representation id="5" mimeType="video/mp4" codecs="avc3.640028,mp4a.40.2" width="1920" height="1080" frameRate="25" sar="1:1" audioSamplingRate="44100" startWithSAP="1" bandwidth="4327645">
<SegmentList timescale="1000" duration="1001">
<SegmentURL media="CanalMetroLivinglab3DTV-HD-Musica_1920x1080_1.m4s"/>
<SegmentURL media="CanalMetroLivinglab3DTV-HD-Musica_1920x1080_2.m4s"/>
<SegmentURL media="CanalMetroLivinglab3DTV-HD-Musica_1920x1080_3.m4s"/>
<SegmentURL media="CanalMetroLivinglab3DTV-HD-Musica_1920x1080_4.m4s"/>
...
<SegmentURL media="CanalMetroLivinglab3DTV-HD-Musica_1920x1080_42.m4s"/>
<SegmentURL media="CanalMetroLivinglab3DTV-HD-Musica_1920x1080_43.m4s"/>
<SegmentURL media="CanalMetroLivinglab3DTV-HD-Musica_1920x1080_44.m4s"/>
<SegmentURL media="CanalMetroLivinglab3DTV-HD-Musica_1920x1080_45.m4s"/>
</SegmentList>
</Representation>
</AdaptationSet>
</Period>
</MPD>
However, I have several questions:
First, I think there should be an initialization segment for each representation, listed in the first position of its SegmentList.
How can I do it?
Second, in my case all segments (.m4s), the initialization segment (.mp4) and the manifest file (.mpd) are stored in the same location on a server. Given this, is the BaseURL element necessary?
In other DASH sequences that I have seen, all the segments of each representation are stored in a separate folder together with their associated initialization segment and a manifest file for that representation, and then there is a global MPD. Which MP4Box parameters do I have to use to produce this layout?
Thank you!
First Question: It seems that GPAC is using only one initialization segment that contains the initialization information for all representations. Therefore a SegmentList element with the Initialization element is present on the AdaptationSet level. According to the MPEG-DASH standard, this technique is used to express default values, and all SegmentList elements inside the Representation elements inherit the attributes and elements from the higher-level SegmentList. Basically this means that each SegmentList at the Representation level already (implicitly) contains that initialization segment.
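If you do want an explicit initialization segment listed first in each Representation's SegmentList, the MPD would look roughly like the sketch below (the per-resolution init file name is hypothetical; by default MP4Box writes the single shared _init.mp4 shown above):
<Representation id="5" mimeType="video/mp4" codecs="avc3.640028,mp4a.40.2" width="1920" height="1080" frameRate="25" bandwidth="4327645">
<SegmentList timescale="1000" duration="1001">
<!-- hypothetical per-Representation init segment name -->
<Initialization sourceURL="CanalMetroLivinglab3DTV-HD-Musica_1920x1080_init.mp4"/>
<SegmentURL media="CanalMetroLivinglab3DTV-HD-Musica_1920x1080_1.m4s"/>
<SegmentURL media="CanalMetroLivinglab3DTV-HD-Musica_1920x1080_2.m4s"/>
...
</SegmentList>
</Representation>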
Second Question: If no BaseURL element is present at the MPD level, then all requests are resolved relative to the MPD location. So if the MPD is located in the same place as the segments, there is no need for a BaseURL. This also makes it more convenient when you move the content from one folder to another, because you do not need to modify the MPD, i.e., change the BaseURL.
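Should you later host the segments somewhere else, a single BaseURL at the MPD level is enough to redirect all relative segment URLs; for example (the host and path below are purely illustrative):
<MPD xmlns="urn:mpeg:dash:schema:mpd:2011" type="static" ...>
<!-- example host/path only; replace with your own -->
<BaseURL>http://cdn.example.com/dash/CanalMetroLivinglab3DTV-HD-Musica/</BaseURL>
...
</MPD>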
Third Question: This is possible, and other services do structure their content that way using SegmentTemplate, while providing MPDs for all individual Representations; this makes it easier to remove or add Representations. With MP4Box you can use the -segment-name flag and enable a subdirectory for each Representation with, e.g., $RepresentationID$/CanalMetroLivinglab3DTV-HD-Musica_$Number$.m4s. Anyway, there is no need to do it like that. I would strongly recommend using SegmentTemplate, as it makes your MPD more compact (fewer bytes, which reduces startup delay); this is possible with MP4Box and the -url-template flag, as sketched below. By the way, your content as generated with MP4Box seems valid, at least from the MPEG-DASH MPD perspective. You can always check whether your MPD is valid with the MPD Validator from the University of Klagenfurt.
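A command along these lines should produce a template-based MPD with one subdirectory per Representation (a sketch; whether -segment-name expands $RepresentationID$ depends on your GPAC release, so check MP4Box -h dash):
# assumption: this MP4Box build supports $RepresentationID$ in -segment-name
MP4Box -dash 1000 -rap -url-template -segment-name $RepresentationID$/CanalMetroLivinglab3DTV-HD-Musica_ -out CanalMetroLivinglab3DTV-HD-Musica.mpd CanalMetroLivinglab3DTV-HD-Musica_384x216.mp4 CanalMetroLivinglab3DTV-HD-Musica_640x360.mp4 CanalMetroLivinglab3DTV-HD-Musica_720x406.mp4 CanalMetroLivinglab3DTV-HD-Musica_1280x720.mp4 CanalMetroLivinglab3DTV-HD-Musica_1920x1080.mp4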
In my .mpd file I have a lang="en" attribute. Sadly, in VideoJS this is not displayed as "English"; instead it is displayed as-is, and the player simply shows "en" and "de", for example, when it comes to language selection. Now I would like to know whether there is a track_name-like object that I can set in the MPD, similar to what I can do in an HLS m3u8 playlist, e.g.:
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="a-4",NAME="English",CHANNELS=2,AUTOSELECT="YES",DEFAULT="NO",LANGUAGE="en",URI="a-aac-en-mp4a.40.2_128000/master.m3u8"
NAME="English" is what gets displayed in VideoJS and LANGUAGE="en" indicated the actual real language. What's the equivalent in Dash/mpd here?
Thanks in advance
After some more searching I gave up finding the relevant information myself and asked on the DASH-IF Slack channel, where I finally got a response that solved my problem.
In MPD manifests there is no direct attribute such as track or track_name. Instead, you can simply add a Label element, like so:
<?xml version="1.0" encoding="utf-8"?>
<MPD xmlns="urn:mpeg:dash:schema:mpd:2011" id="master.mpd" minBufferTime="PT1.500S" type="static" mediaPresentationDuration="PT0H22M11.000S" profiles="urn:mpeg:dash:profile:isoff-main:2011">
<ProgramInformation moreInformationURL="https://strics.io">
<Copyright>Blablablallala ...</Copyright>
</ProgramInformation>
<Period duration="PT0H22M11.000S">
<AdaptationSet contentType="audio" mimeType="audio/mp4" segmentAlignment="true" lang="de" maxBandwidth="64000" minBandwidth="25600">
<Label>Deutsch</Label>
<Representation id="0_a" codecs="mp4a.40.5">
<AudioChannelConfiguration schemeIdUri="urn:mpeg:dash:23003:3:audio_channel_configuration:2011" value="2"></AudioChannelConfiguration>
<BaseURL>a-aac_he_v2-de-mp4a.40.5_64000/</BaseURL>
<SegmentList timescale="1000" startNumber="0" duration="9000">
<Initialization sourceURL="init-a-aac_he_v2-de-mp4a.40.5_64000.m4s"></Initialization>
<SegmentURL media="f-0000.m4s" duration="9.002667"></SegmentURL>
<SegmentURL media="f-0001.m4s" duration="9.002667"></SegmentURL>
<SegmentURL media="f-0002.m4s" duration="9.002667"></SegmentURL>
...
I have seen examples where it is possible to display a bitmap as a vertical profile in Google Earth. However, I have not been able to find any KML/KMZ examples of this. Does anyone have a simple example of how to do this?
Does it involve using the DAE (Collada) file format too?
One method to do this would be to use a KML PhotoOverlay. PhotoOverlays are designed to place landscape photographs vertically in the world so that they can be viewed with the Earth terrain and imagery as a matching background, and you could use that technique to place images like these on vertical planes. There is a basic tool in Earth Pro to create PhotoOverlays (Add menu >> Photo), or you can create them manually or programmatically by writing the appropriate KML (reference links below), though it can get pretty complex with all the placement and field-of-view parameters. Also note that PhotoOverlays work in Earth Pro (Earth v7.x), but do not currently work in Earth for web & mobile (Earth v9.x).
You could also do this using 3D models (yes, Collada-based), where you have a model representing your vertical plane(s) and the images as textures on the model. 3D models also only work in Earth Pro at this time. Which technique is easier will depend on the tools and skills you have available.
https://developers.google.com/kml/documentation/photos
https://developers.google.com/kml/documentation/kmlreference#photooverlay
Here is my PhotoOverlay example:
<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2" xmlns:gx="http://www.google.com/kml/ext/2.2">
<Document id="7">
<visibility>0</visibility>
<PhotoOverlay id="8">
<name>PhotoOverlay Test</name>
<visibility>0</visibility>
<Camera>
<longitude>-122.3599987260313</longitude>
<latitude>47.62949781133496</latitude>
<altitude>100</altitude>
<heading>-90</heading>
<tilt>90</tilt>
<roll>0</roll>
</Camera>
<Icon id="10">
<href>foo.png</href>
</Icon>
<ViewVolume>
<leftFov>-45</leftFov>
<rightFov>45</rightFov>
<bottomFov>-45</bottomFov>
<topFov>45</topFov>
<near>20000</near>
</ViewVolume>
</PhotoOverlay>
</Document>
</kml>
Here is my KML example using Collada.
You have to prepare your_prepared_square.dae and your_prepared.png in the same folder. The square in the model is 1 m x 1 m; the model is scaled up from this base size.
<?xml version="1.0" encoding="UTF-8"?>
<kml xmlns="http://www.opengis.net/kml/2.2" xmlns:gx="http://www.google.com/kml/ext/2.2">
<Document id="1">
<Placemark id="2">
<name>My billboard</name>
<Model id="3">
<altitudeMode>absolute</altitudeMode>
<Location>
<longitude>140.037</longitude>
<latitude>36.84895</latitude>
<altitude>500</altitude>
</Location>
<Orientation>
<heading>90</heading>
<tilt>0</tilt>
<roll>0</roll>
</Orientation>
<Scale>
<x>1</x>
<y>10000</y>
<z>3500</z>
</Scale>
<Link id="4">
<href>your_prepared_square.dae</href>
</Link>
<ResourceMap>
<Alias>
<targetHref>your_prepared.png</targetHref>
<sourceHref>mapping.png</sourceHref>
</Alias>
</ResourceMap>
</Model>
</Placemark>
</Document>
</kml>
mapping.png is the texture filename in the Collada model, and it has to be defined in the model; the following fragment is part of your_prepared_square.dae. You don't need to prepare mapping.png itself, because the KML ResourceMap replaces it with your_prepared.png.
:
:
<library_images>
<image id="texture" name="texture">
<init_from>mapping.png</init_from>
</image>
</library_images>
:
:
An image showing the result of this method was attached to the original answer.
I am trying to override the default picture converter in Nuxeo.
By default, Nuxeo provides the following OOTB converters:
Thumbnail
Small
Medium
Large
Original
I want to reduce the converters to
Thumbnail
Original
Here is the configuration that I have tried.
Created a Multi-Module Contribution using the nuxeo-cli utility.
Steps followed to create the contribution:
$>nuxeo bootstrap multi-module
$>nuxeo bootstrap contribution
The target component used for the contribution is org.nuxeo.ecm.platform.picture.ImagingComponent
$>nuxeo bootstrap package
Added the following extension to the OSGI-INF/picture-conversion-core-contrib.xml file:
<?xml version="1.0"?>
<component name="org.nuxeo.ecm.platform.picture.ImagingComponent.default.config.override">
<require>
org.nuxeo.ecm.platform.picture.ImagingComponent.default.config
</require>
<extension target="org.nuxeo.ecm.platform.picture.ImagingComponent" point="pictureConversions">
<pictureConversion chainId="Image.Blob.Resize" description="Thumbnail size" id="Thumbnail" maxSize="100" order="0" rendition="true"/>
<pictureConversion chainId="Image.Blob.Resize" description="Original jpeg image" id="OriginalJpeg" order="100" rendition="true"/>
</extension>
</component>
I want to keep only two picture conversions, hence I am adding only the Thumbnail and OriginalJpeg conversions.
After creating the package, I install it on the Nuxeo server with the following command.
$>nuxeoctl mp-install /path/to/dir/sample_picture_converter-package-1.0-SNAPSHOT.zip
Even though the component is installed correctly on the Nuxeo server, the server is still converting the images with the default formats (i.e. Thumbnail, Small, Medium, Large and Original).
What are the steps to override a default contribution in Nuxeo without Nuxeo studio?
Cross-posted on the Nuxeo forum.
We need to disable the default picture conversions explicitly in OSGI-INF/picture-conversion-core-contrib.xml. The updated OSGi configuration is given below.
<?xml version="1.0"?>
<component name="org.nuxeo.ecm.platform.picture.ImagingComponent.default.config.override">
<require>
org.nuxeo.ecm.platform.picture.ImagingComponent.default.config
</require>
<extension target="org.nuxeo.ecm.platform.picture.ImagingComponent" point="pictureConversions">
<pictureConversion chainId="Image.Blob.Resize" description="Thumbnail size" id="Thumbnail" maxSize="100" order="0" rendition="true"/>
<pictureConversion chainId="Image.Blob.Resize" description="Original jpeg image" id="OriginalJpeg" order="100" rendition="true"/>
<pictureConversion chainId="Image.Blob.Resize" id="Small" enabled="false" />
<pictureConversion chainId="Image.Blob.Resize" id="Medium" enabled="false" />
<pictureConversion chainId="Image.Blob.Resize" id="FullHD" enabled="false" />
</extension>
</component>
Answered by LaraGranite on Nuxeo forum
I would like to change the graphic symbolizer in a GeoServer style from SVG to PNG, GIF or JPG. However, the symbol doesn't display when I view the layer in OpenLayers; the SVG symbol works fine.
Here is my code:
<?xml version="1.0" encoding="UTF-8"?>
<StyledLayerDescriptor xmlns="http://www.opengis.net/sld" xmlns:ogc="http://www.opengis.net/ogc" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" version="1.1.0" xmlns:xlink="http://www.w3.org/1999/xlink" xsi:schemaLocation="http://www.opengis.net/sld http://schemas.opengis.net/sld/1.1.0/StyledLayerDescriptor.xsd" xmlns:se="http://www.opengis.net/se">
<NamedLayer>
<se:Name>map_seed_source_site </se:Name>
<UserStyle>
<se:Name>map_seed_source_site </se:Name>
<se:FeatureTypeStyle>
<se:Rule>
<se:Name>Single symbol </se:Name>
<se:PointSymbolizer>
<se:Graphic>
<se:ExternalGraphic>
<se:OnlineResource xlink:type="simple" xlink:href="http://www.opendevelopmentcambodia.net/wp-content/themes/opendata/images/seed-sources-legend.gif" />
<se:Format>image/gif</se:Format>
</se:ExternalGraphic>
<se:Size>100 </se:Size>
</se:Graphic>
</se:PointSymbolizer>
</se:Rule>
</se:FeatureTypeStyle>
</UserStyle>
</NamedLayer>
</StyledLayerDescriptor>
When you tried, did you also change this line?
from
<se:Format>image/gif</se:Format>
to
<se:Format>image/png</se:Format>
I'm also not sure whether you can use the <Size> tag with non-vector graphic files. So try changing the GIF to a PNG and also removing the size; the SLD will then use the original size of the PNG. I'm not 100% sure this helps, but it is worth a try. A modified symbolizer might look like the sketch below.
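For example, the PointSymbolizer could be reduced to something like this (a sketch; the .png URL is hypothetical and assumes a PNG version of the legend image exists on that server):
<se:PointSymbolizer>
<se:Graphic>
<se:ExternalGraphic>
<!-- hypothetical PNG version of the legend image -->
<se:OnlineResource xlink:type="simple" xlink:href="http://www.opendevelopmentcambodia.net/wp-content/themes/opendata/images/seed-sources-legend.png" />
<se:Format>image/png</se:Format>
</se:ExternalGraphic>
</se:Graphic>
</se:PointSymbolizer>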
I'm using Mapnik 0.7.1 and TileLite with OpenLayers. I would like to define two rules under one style for a shapefile layer (within layers-shapefiles.xml.inc); however, liteserv.py will not start if I do this. I also tried two styles for this layer (one rule per style), with the same result. Here is the style block:
<Style name="wilderness_boundaries">
<Rule>
&maxscale_zoom0;
&minscale_zoom9;
<PolygonSymbolizer>
<CssParameter name="fill">#72B656</CssParameter>
</PolygonSymbolizer>
<LineSymbolizer>
<CssParameter name="stroke">grey</CssParameter>
<CssParameter name="stroke-width">1.0</CssParameter>
<CssParameter name="stroke-dasharray">4,2</CssParameter>
</LineSymbolizer>
</Rule>
<Rule>
&maxscale_zoom10;
&minscale_zoom20;
<PolygonSymbolizer>
<CssParameter name="fill">#72B656</CssParameter>
</PolygonSymbolizer>
<LineSymbolizer>
<CssParameter name="stroke">grey</CssParameter>
<CssParameter name="stroke-width">1.0</CssParameter>
<CssParameter name="stroke-dasharray">4,2</CssParameter>
</LineSymbolizer>
<TextSymbolizer name="NAME" fontset_name="book-fonts" size="8" dy="2" fill="grey" halo_radius="1" />
</Rule>
</Style>
and the corresponding layer definition:
<Layer name="wilderness_boundaries" status="on" srs="&srs4326;">
<StyleName>wilderness_boundaries</StyleName>
<Datasource>
<Parameter name="type">shape</Parameter>
<Parameter name="file">&world_boundaries;/wilderness_EPSG4326.shp</Parameter>
<Parameter name="encoding">latin1</Parameter>
</Datasource>
</Layer>
If I execute generate_image.py with this configuration, I get a segmentation fault.
As you can see, I am trying to show the wilderness area polygons at all zoom levels, but label them only at zoom level 10 and above. Multiple rules are allowed for a given style in osm.xml, so what am I missing?
Thanks,
John
A segfault indicates you forgot to run generate_xml.py. In other words, the stylesheet still includes files in the inc/ directory containing %(something)s placeholders that need to be filled in. libxml2, the library Mapnik uses under the hood, has a silly bug that causes segfaults when entities cannot be loaded.
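For reference, the usual workflow with the OSM Mapnik tools is roughly the following (a sketch; the available options differ between checkouts, so run ./generate_xml.py --help first — the --dbname, --symbols and --world_boundaries flags are assumptions here, not verified against your version):
# option names are assumptions; confirm with ./generate_xml.py --help
./generate_xml.py --dbname gis --symbols ./symbols/ --world_boundaries ./world_boundaries/
./generate_image.py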