I am having some trouble with vector images in Magnolia.
Not everywhere, but images that go through the imaging module don't give a result.
This makes sense to me, as resizing/optimizing is what this module does, and that isn't applicable to a vector format (e.g. *.svg).
Is there a way I can configure the imaging module to 'just give back the image' if it's an SVG image?
All documentation seems to point to:
NoOpAssetRenderer
The module magnolia-dam-core provides NoOpAssetRenderer, which can be used to define a global AssetRenderer. By default, NoOpAssetRenderer only wraps the original Asset.
But I can't figure out how to configure this.
A concrete example: my thumbnails during asset selection render nothing.
The URL for such a thumbnail is:
<base-url>/.imaging/thumbnail/dam/<asset-name>.svg
Is there anyone who has made this work?
I use Cloudinary for my blog site (built with Node.js).
My problem is that when I upload an image from an iPhone, it rotates 90 degrees. I have already tried angle: "ignore", but this does not seem to work. I think it has to do with the EXIF information. How do I get rid of it, or am I using the wrong Cloudinary parameters?
(It also does not work when I include a_ignore in the URL.)
Here is the upload code:
let result = await cloudinary.uploader.upload(req.file.path, {resource_type: type, angle:"ignore"});
I'll start by recommending that you remove angle: "ignore" from the upload call parameters and upload again; you are then probably hitting one of the following cases:
The original may not have any embedded rotation metadata, in which case there's no way to know what the correct rotation is.
The original may have the rotation metadata, and the delivery URL was for the original, which by default returns the original image as-is, without any Cloudinary processing. So far so good; however, at that point it's up to the client/device to parse the metadata and render the image according to it, and unfortunately there are indeed clients that ignore the rotation metadata when rendering images.
The image was both rotated manually AND the metadata wasn't stripped, which may result in an extra (unnecessary) rotation.
Once you've checked which case you think you're in, here are some possible fixes (a code sketch follows the list):
For case #1: if you have access to the original version of the image, with its rotation metadata intact, try uploading that one instead.
For case #2: on delivery, instead of using the original image's delivery URL, use a derived version of it (e.g., add q_auto to the URL as a transformation). Using any of Cloudinary's transformations will automatically optimize the image before delivery; importantly for this case, it will also rotate the image according to the rotation metadata (assuming it has any) and, last but not least, strip the metadata, so the image will always be shown with the intended rotation.
For case #3: a possible fix is indeed to add angle: "ignore", which, as mentioned previously, is recommended as a delivery transformation (a_ignore in the URL) rather than as part of the upload parameters.
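For illustration, here is a minimal sketch of cases #2 and #3 on the delivery side, using the same Node SDK as the upload snippet above; the public ID "sample" and the cloud name are placeholders:

// Build delivery URLs that apply transformations instead of serving the original.
const cloudinary = require("cloudinary").v2;

// Case #2: a derived version; q_auto bakes in the EXIF rotation and strips metadata.
const fixedUrl = cloudinary.url("sample", {
  transformation: [{ quality: "auto" }],
  secure: true,
});
// -> https://res.cloudinary.com/<cloud>/image/upload/q_auto/sample

// Case #3: explicitly ignore the rotation metadata at delivery time.
const ignoredUrl = cloudinary.url("sample", {
  transformation: [{ angle: "ignore" }],
  secure: true,
});
// -> https://res.cloudinary.com/<cloud>/image/upload/a_ignore/sample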
If you could share the original image here, I'd be happy to take a closer look and offer solutions. If privacy is a concern, share it with Cloudinary's support team, who will be happy to assist.
I am using the Vimeo Depth Player (https://github.com/vimeo/vimeo-depth-player/) for volumetric videos (only as a hobby, out of curiosity), and I'd like to know more about the parameters used in the video description (such as in this video: https://vimeo.com/279527916). I searched for them but wasn't able to find a description of any of the supported parameters.
Does anyone here know where to find such a description?
Thanks!
Unfortunately, this JSON config is not publicly documented anywhere right now, except for the source code that parses it.
If you are using Depthkit to do a volumetric capture, it automatically generates this configuration for you, so you don't have to worry about what it means.
https://docs.depthkit.tv/docs/the-depthkit-workflow
The point of this config is to mathematically describe how the subject was captured, e.g. how far the subject is from the camera. Without all of this, you won't be able to properly reconstruct the volumetric capture.
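To give a feel for what that means, here is a purely illustrative sketch; the field names are hypothetical, and the only authoritative reference for the real keys is the player's source code:

// Hypothetical example only -- these field names are illustrative, not the real schema.
// The idea: camera intrinsics plus a depth range let the player turn each
// depth-map pixel back into a 3D point.
const volumetricConfig = {
  nearClip: 0.5,                           // closest distance (meters) encoded in the depth map
  farClip: 3.0,                            // farthest distance (meters) encoded in the depth map
  focalLength: { x: 366.0, y: 366.0 },     // depth camera focal length, in pixels
  principalPoint: { x: 256.0, y: 212.0 },  // optical center of the depth image
};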
I am working on a project where users can interact with a map via mouse click to see more details about an area. It is a Perth Metropolitan Area map, generated from a PDF using an online "PDF to SVG converter".
When I looked at the generated SVG code, it was so huge that I couldn't understand all of it, so I did some research to see if I could find a simpler version of the map. I see there are various options for constructing SVG, detailed below.
Shapefiles: creating maps based on real-world data. I thought this was a good option to go with, but the problem I observed is that we would depend on GIS tools and open databases where GIS data is available. That is too heavy for our requirement.
GeoJSON / TopoJSON: I see this is a simple way to represent maps in plain text, but I could not figure out a way to generate the required JSON files (see the sketch after this list). After exploring this further, I understood these technologies depend on GIS data / shapefiles.
Inkscape: a UI editor to draw SVG. It just generates a lot of SVG code again.
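For what it's worth, here is a minimal sketch of what rendering GeoJSON to SVG paths could look like with d3-geo; the suburbs.json file and its name property are assumptions:

// Sketch: turn GeoJSON features into SVG path data with d3-geo.
// "suburbs.json" is a hypothetical GeoJSON FeatureCollection of map areas.
const { geoMercator, geoPath } = require("d3-geo");
const geojson = require("./suburbs.json");

// Fit a Mercator projection to an 800x600 viewport around the features.
const projection = geoMercator().fitSize([800, 600], geojson);
const path = geoPath(projection);

// One <path> per feature; each can get its own click handler in the browser.
const svgPaths = geojson.features
  .map(f => `<path d="${path(f)}" data-name="${f.properties.name}"/>`)
  .join("\n");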
After reviewing all of the above, I'm thinking maybe I should learn to write my own SVG map.
Can somebody advise whether I am heading in the right direction, or are there any simpler approaches to creating a map like this Perth Metropolitan Area map?
Thanks in advance.
Is there any way to protect your sprites in EaselJS?
Currently it is too easy to download the sprites.
In Chrome, just go to the console -> Resources, like this.
I did some research before writing this answer and found this topic.
That could be very nice. Also, we don't need to save the slices in a JSON file as he suggested, if we have a shuffle seed.
But I didn't find anything in Node.js (back-end) to do this image shuffling.
I tried Node GM, but it looks too complicated to composite one image on top of another with (w, h, x, y, offsetX, offsetY).
I know there will always be a way to "hack" the resource, but at least this offers some difficulty.
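Not something I can vouch for end-to-end, but here is a minimal sketch of the slice-and-shuffle idea using sharp instead of gm; the seeded PRNG is simplistic and purely for illustration:

// Sketch: cut a spritesheet into tiles and reassemble them in seeded-shuffle
// order, so the served image is scrambled and the client can unscramble it
// with the same seed.
const sharp = require("sharp");

// Tiny deterministic PRNG-based shuffle, for illustration only.
function seededShuffle(array, seed) {
  const a = array.slice();
  for (let i = a.length - 1; i > 0; i--) {
    seed = (seed * 9301 + 49297) % 233280; // linear congruential step
    const j = Math.floor((seed / 233280) * (i + 1));
    [a[i], a[j]] = [a[j], a[i]];
  }
  return a;
}

async function scramble(input, output, cols, rows, seed) {
  const { width, height } = await sharp(input).metadata();
  const tw = Math.floor(width / cols);
  const th = Math.floor(height / rows);

  // Extract every tile as a buffer.
  const tiles = [];
  for (let y = 0; y < rows; y++) {
    for (let x = 0; x < cols; x++) {
      tiles.push(await sharp(input)
        .extract({ left: x * tw, top: y * th, width: tw, height: th })
        .toBuffer());
    }
  }

  // Reassemble the shuffled tiles onto a blank transparent canvas.
  const composites = seededShuffle(tiles, seed).map((buf, i) => ({
    input: buf,
    left: (i % cols) * tw,
    top: Math.floor(i / cols) * th,
  }));
  await sharp({ create: { width: tw * cols, height: th * rows,
                          channels: 4, background: { r: 0, g: 0, b: 0, alpha: 0 } } })
    .composite(composites)
    .png()
    .toFile(output);
}

// Example: scramble("sprites.png", "sprites-scrambled.png", 4, 4, 12345);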
One of the simple approaches is to encode images to base64, store them as part of the JavaScript, and decode them at runtime. See:
Convert and insert Base64 data to Canvas in Javascript
But obviously this will increase the download size.
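A minimal sketch of the idea; the base64 string is a stand-in for a real encoded sprite, and "gameCanvas" is a placeholder canvas id:

// Sketch: ship the sprite as a base64 string inside the JS bundle and decode
// it at runtime, instead of requesting a plain .png the user can grab.
const SPRITE_B64 = "iVBORw0KGgo..."; // truncated placeholder

const img = new Image();
img.onload = () => {
  // Hand the decoded image to EaselJS as a Bitmap once it's ready.
  const stage = new createjs.Stage("gameCanvas");
  stage.addChild(new createjs.Bitmap(img));
  stage.update();
};
img.src = "data:image/png;base64," + SPRITE_B64;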
Personally, I would not go this route for "normal" applications or games unless it is really justified or imposed on me as an external requirement. For example, one can easily extract assets from an Android APK, but this does not seem to be an area of concern for most developers.
The user's browser downloads those images whether you want it to or not; otherwise, it couldn't display them.
At any given time, any user can right-click on any image on the site and click Save As. You can't stop it, and you shouldn't try.
If you don't want people downloading your work, don't put it on the public facing internet.
I want to build a web application, and I am looking at the tools I will have to use.
I want to use a real-time map.
I'm thinking about:
TileMill: to get .png tiles that form the background of my maps, or get data from a website as .shp files to build layers for this in Mapnik.
Mapnik: build layers with the data I want to add to my map, then put the layers together and generate a map.
TileStache: generate tiles for my application.
OpenLayers: display my map with tiles in a browser (see the sketch after this list).
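As a rough idea of that last step, here is a minimal sketch in the OpenLayers 3+ style; the TileStache URL and layer name ("mymap") are assumptions:

// Sketch: display tiles served by TileStache in the browser with OpenLayers.
const map = new ol.Map({
  target: "map", // id of a <div> in the page
  layers: [
    new ol.layer.Tile({
      source: new ol.source.XYZ({
        url: "http://localhost:8080/mymap/{z}/{x}/{y}.png", // hypothetical endpoint
      }),
    }),
  ],
  view: new ol.View({
    center: ol.proj.fromLonLat([2.35, 48.85]), // placeholder lon/lat
    zoom: 12,
  }),
});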
Once my map is displayed, I'd like to add interactivity. For example, when you hover over a line or a circle (a town / an event), it shows you the attributes of that object.
But the lines and circles will be integrated directly into the Mapnik map, so I need to add some JavaScript to make them dynamic and open a pop-up. How do I do this? Using the OpenLayers JavaScript libraries, or Node.js?
What is your advice on the question / on the way I want to use these tools?
Thanks a lot!
I'm in a similar situation, so I don't know the answer, but from what I've been able to figure out I think you're on the right track.
I started off using the Mapbox approach, which simplifies things as long as your data is static. You use TileMill not only to generate your PNG tiles (once you've used Carto to do some nice styling) but also to import your data sets.
TileMill can export your TileJSON and UTFGrid files with the PNG tiles, all packaged up and ready to use. Mapbox will then host all that stuff for you, and you can use their mapbox.js library (an extension of Leaflet) to bring it all together in the browser, with full interactivity. Opening popups is something you'd do in JavaScript in the browser, and if you mean info windows (the overlay window associated with a map point), that is a call to the Leaflet API.
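For instance, a minimal Leaflet sketch of the popup part; the div id, coordinates, and attribute text are placeholders:

// Sketch: open a popup showing an object's attributes via the Leaflet API.
const map = L.map("map").setView([48.85, 2.35], 10);

L.circleMarker([48.85, 2.35], { radius: 8 })
  .addTo(map)
  .bindPopup("Name: Exampleville<br>Type: town")        // shown on click
  .on("mouseover", function () { this.openPopup(); });  // or on hover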
If you're happy to create your layers and import your data offline, this approach seems really simple and powerful; Mapbox will even render tiles using multiple layers overlaid, so for example you can see your circles on top of a satellite image, merged into a single PNG.
The problem really comes in when your data needs to be live and you therefore can't prepare it all ahead of time in TileMill. I'm still trying to figure this all out, but it does seem as though a combination of TileStache and Mapnik would be able to serve up the TileJSON, GeoJSON, and UTFGrid files you'd need, as well as the tiles themselves, in the way you've outlined in the question.
You might also want PostGIS and GeoDjango (or similar) behind the scenes to hold and manage your live data.
As I said, I'm still trying to get my full stack working, so I can't vouch for this 100%, but if your data is gathered upfront, then I'd definitely recommend the TileMill route for simplicity's sake.
I hope that's a help!