I use Cloudinary for my blog site (built with Node.js).
My problem is that when I upload an image from an iPhone, it is rotated 90 degrees. I have already tried angle: "ignore", but this does not seem to work. I think it has to do with the EXIF information. How do I get rid of it, or am I using the wrong Cloudinary parameters?
(It also does not work when I include a_ignore in the URL.)
Here is the upload code:
let result = await cloudinary.uploader.upload(req.file.path, { resource_type: type, angle: "ignore" });
I'll start by recommending that you remove angle: "ignore" from the upload parameters and try the upload again. You are then probably experiencing one of the following cases:
The original may not have any embedded rotation metadata, in which case there's no way to know what the correct rotation is.
The original may have the rotation metadata, and the delivery URL was that of the original, which by default serves the original image, as is, without any Cloudinary processing. So far so good; however, at that point it's up to the client/device to parse the metadata and render the image according to it, and unfortunately there are indeed clients that "ignore" the rotation metadata when rendering images.
The image was both rotated manually AND the metadata wasn't stripped, which may result in an extra (unnecessary) rotation.
Once you've worked out which case applies to you, here are some possible fixes:
For case #1 - If you have access to the original un-rotated version of the image, with its metadata intact, try uploading that one instead.
For case #2 - On delivery, instead of using the original image's delivery URL, use a derived version of it (e.g., add q_auto to the URL as a transformation). Applying any of Cloudinary's transformations will automatically optimize the image before delivery, but, importantly for this case, it will also rotate the image according to the rotation metadata (assuming there is any) and, last but not least, strip the metadata, so the image will always be shown with the intended rotation.
For case #3 - The usual fix here is indeed to add angle: "ignore", but, as mentioned previously, as a delivery transformation (a_ignore in the URL) rather than as part of the upload parameters.
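A minimal sketch of both delivery-side options using the Node SDK (the public ID "sample" is a placeholder, not from the question):

```js
const cloudinary = require("cloudinary").v2;

// Case #2: any transformation (e.g. q_auto) makes Cloudinary apply the
// rotation metadata and strip it before delivery
const autoUrl = cloudinary.url("sample", { quality: "auto" });
// -> .../image/upload/q_auto/sample

// Case #3: a_ignore applied as a delivery transformation
const ignoreUrl = cloudinary.url("sample", { angle: "ignore" });
// -> .../image/upload/a_ignore/sample
```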
If you can share the original image here, I'll be happy to take a closer look and offer solutions. If privacy is an issue, share it with Cloudinary's support team, who will be happy to assist.
I'm trying to implement my own URL preview service, and I thought the following strategy for figuring out a more or less accurate preview image would be enough (sketched in code after the list):
look for og:image first
if no og:image was found, look for a link with rel="image_src" next
if nothing is found, look for the first image in the body with a preferable aspect ratio, larger than an assumed size (e.g. > 50x50)
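A minimal sketch of that strategy in Node, assuming the axios and cheerio packages (step 3 only checks width/height attributes here, which is a simplification):

```js
const axios = require("axios");
const cheerio = require("cheerio");

async function previewImage(url) {
  const { data: html } = await axios.get(url);
  const $ = cheerio.load(html);

  // 1. og:image meta tag
  const og = $('meta[property="og:image"]').attr("content");
  if (og) return og;

  // 2. <link rel="image_src">
  const linkSrc = $('link[rel="image_src"]').attr("href");
  if (linkSrc) return linkSrc;

  // 3. first <img> in the body declared larger than 50x50
  const img = $("body img").toArray().find((el) => {
    const w = parseInt($(el).attr("width") || "0", 10);
    const h = parseInt($(el).attr("height") || "0", 10);
    return w > 50 && h > 50;
  });
  return img ? $(img).attr("src") : null;
}
```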
But it looks like there are sites which don't fall under any of these rules yet still have a nice preview generated by Slack or Facebook. 500px is one of them - any hint where its preview images come from?
I want to use the REST Google Photos API to download original photos or videos from Google Photos, and I found there is no way to achieve it with the "baseUrl".
I have checked the following pages, but there is no definitive answer:
https://issuetracker.google.com/issues/112096115
https://issuetracker.google.com/issues/80149160
So is there indeed a way to get the original photos and videos, or will there be one?
So I just read through the issue tracker answers you provided, and I noticed that one reply was to add '=d' to the baseUrl.
So for example: GET https://lh3.googleusercontent.com/lr/AGb3...HG2n=d
However, the addition of '=d' will not give you the original file! I tested it. The quality and resolution of the image seem to match the original, but some information, like EXIF metadata (geolocation), is missing. As a result, the file size is also smaller than the original. This makes it unusable for backup synchronization, where I want the original file.
Actually, I expect Google to give me automated access to my own original data. It looks like that is currently not the case.
I'm afraid there are currently only two options for getting the original photos:
Manual download from Google Photos
Manual download via Google Takeout
Very disappointing!
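For completeness, a minimal sketch of fetching media bytes via the baseUrl in Node, assuming axios and a valid (unexpired) baseUrl from the API; as noted above, the result is not byte-identical to the original:

```js
const axios = require("axios");
const fs = require("fs");

async function downloadMediaItem(baseUrl, dest) {
  // "=d" requests the full-resolution bytes ("=dv" is the video equivalent),
  // but EXIF/location metadata is still stripped
  const res = await axios.get(`${baseUrl}=d`, { responseType: "arraybuffer" });
  fs.writeFileSync(dest, Buffer.from(res.data));
}
```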
I am trying to fetch user photos from the Instagram API.
The URL I am querying is:
https://api.instagram.com/v1/users/self/media/recent?access_token={access_token}
Note that I am only using an access token and there's no need to register my app. I only want to get user photos. The thing is, it returns the original images and not the square ones. Why? All I want is to get all photos in the same size, but it returns the original sizes. I tried standard_resolution and every solution I came across, but there's more to it than I thought: the photos even have white lines, which look awful on my website.
P.S. I want to get all photos, all of them the same size and without any white lines. Guys, I know Instagram changes these kinds of things quite often, but maybe you've found a solution to my problem.
The Instagram URL in standard resolution is always square:
https://scontent.cdninstagram.com/t51.2885-15/s640x640/sh0.08/e35/20759276_1125417337622604_960083034999095296_n.jpg
If you want to get the original image from this, then use the following (changing the URL pattern):
https://scontent.cdninstagram.com/t51.2885-15/20759276_1125417337622604_960083034999095296_n.jpg
For an image whose original resolution is below 640, the standard resolution introduces white lines.
In that case, to get the original square image, we need to go lower than 640x640.
This is the URL pattern for 320x320:
https://scontent.cdninstagram.com/t51.2885-15/s320x320/e35/20759276_1125417337622604_960083034999095296_n.jpg
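A small helper that applies these URL rewrites (a sketch only: CDN path segments like s640x640, sh0.08, and e35 are undocumented and may change at any time):

```js
function instagramImageUrl(standardUrl, size) {
  // Strip the size (sNxN), sharpen (shN.NN) and e35 segments from the path
  const original = standardUrl.replace(/\/(s\d+x\d+|sh[\d.]+|e\d+)(?=\/)/g, "");
  if (!size) return original; // bare path -> original image

  // Re-insert the requested square size plus the e35 segment
  return original.replace("/t51.2885-15/", `/t51.2885-15/s${size}x${size}/e35/`);
}

// instagramImageUrl(stdUrl)      -> original image
// instagramImageUrl(stdUrl, 320) -> 320x320 square
```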
Is there any way to protect your sprites in EaselJS?
Currently it is too easy to download the sprites.
In Chrome, just open the console and go to Resources.
I did some research before posting this and found this topic.
That could be very nice. Also, we wouldn't need to save the slices in a JSON file like he said, if we have a shuffle seed.
But I didn't find anything in Node.js (back-end) to do this image shuffling.
I tried Node GM, but it looks too complicated to composite one image on top of another with (w, h, x, y, offsetX, offsetY).
I know there will always be a way to "hack" the resources, but at least this offers some difficulty.
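For what it's worth, a minimal sketch of that slice-shuffle idea in Node, assuming the sharp library rather than Node GM (the tile size, seed handling, and key function are illustrative choices, not a standard):

```js
const sharp = require("sharp");

// Tiny deterministic key so the same seed reproduces (and can undo) the shuffle
function key(seed, i) {
  return ((seed + i + 1) * 2654435761) % 4294967296;
}

async function shuffleSprite(src, dest, tile, seed) {
  const { width, height } = await sharp(src).metadata();
  const cols = Math.floor(width / tile);
  const rows = Math.floor(height / tile);

  // Cut the sheet into tile x tile slices
  const tiles = [];
  for (let i = 0; i < cols * rows; i++) {
    tiles.push(await sharp(src)
      .extract({
        left: (i % cols) * tile,
        top: Math.floor(i / cols) * tile,
        width: tile,
        height: tile,
      })
      .toBuffer());
  }

  // Seed-derived permutation, then paste each slice at its shuffled position
  const order = tiles.map((_, i) => i).sort((a, b) => key(seed, a) - key(seed, b));
  const composites = order.map((srcIdx, destIdx) => ({
    input: tiles[srcIdx],
    left: (destIdx % cols) * tile,
    top: Math.floor(destIdx / cols) * tile,
  }));

  await sharp({
    create: {
      width: cols * tile,
      height: rows * tile,
      channels: 4,
      background: { r: 0, g: 0, b: 0, alpha: 0 },
    },
  })
    .composite(composites)
    .png()
    .toFile(dest);
}
```

The client would derive the same permutation from the seed and draw each slice back at its original position.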
One of the simpler approaches is to encode the images to base64, store them as part of the JavaScript, and decode them at runtime. See:
Convert and insert Base64 data to Canvas in Javascript
But obviously this will increase download size.
Personally, I would not go this route for "normal" applications or games, unless it is really justified or put on me as an external requirement. For example, one can easily extract assets from an Android APK, but this does not seem to be an area of concern for most developers.
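A minimal sketch of the base64 approach with EaselJS, assuming an existing stage (the base64 string is truncated for illustration):

```js
// Sprite stored inside the script as a base64 string
const SPRITE_B64 = "iVBORw0KGgoAAAANSUhEUg..."; // truncated

const img = new Image();
img.src = "data:image/png;base64," + SPRITE_B64;
img.onload = function () {
  // Once decoded, use it like any other EaselJS bitmap
  const bitmap = new createjs.Bitmap(img);
  stage.addChild(bitmap);
  stage.update();
};
```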
The user's browser downloads those images whether you want it to or not; otherwise, it wouldn't be able to display them.
At any given time, any user can just right-click on any image on the site and click Save As; you can't stop it, and you shouldn't try.
If you don't want people downloading your work, don't put it on the public facing internet.
I'm looking to build a feature into an Angular.js web app that allows a user to paste a URL to an eCommerce site like Amazon or Zappos and retrieve the main product image from that page. My plan is to POST the URL to my Express API and handle the image retrieval on the server.
My initial plan was to download the raw HTML, parse it with htmlparser, select all the image elements with soupselect, and retrieve their src attributes. Ideally I would like a solution that works across any site, rather than hardcoding values for a particular retailer's site (using specific known CSS class names). One assumption I made was that the largest image on the page would likely be the main product image, so with this logic I decided to sort the images by file size. My idea was to make an HTTP HEAD request for each image's src URL and determine its size from the Content-Length header. So far this approach has worked well, but I would really like to avoid making so many HTTP requests, even if they are only HEAD requests.
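A minimal sketch of that HEAD-request approach, assuming axios on the Express side:

```js
const axios = require("axios");

// Rank candidate image URLs by Content-Length using HEAD requests only
async function largestByContentLength(srcUrls) {
  const sized = await Promise.all(
    srcUrls.map(async (src) => {
      const res = await axios.head(src);
      return { src, bytes: parseInt(res.headers["content-length"] || "0", 10) };
    })
  );
  return sized.sort((a, b) => b.bytes - a.bytes)[0].src;
}
```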
I feel there is a better way of doing this. Would it be easier to use something like PhantomJS to load the entire page and parse it that way? I was trying to make this work as quickly as possible, and thus avoid downloading all of the images. Does anyone have any suggestions?
I would think the best image to use isn't the one with the largest file size, but the one that is displayed largest on the page. PhantomJS might be able to help you determine that: load the page, but instruct PhantomJS not to load images, then pick the image element whose computed dimensions are biggest. This will only work if the page uses CSS or width and height attributes on the img to give it dimensions.
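A minimal sketch of that idea as a PhantomJS script (the URL is a placeholder; with image loading off, dimensions can only come from CSS or attributes, as noted):

```js
var page = require("webpage").create();
page.settings.loadImages = false; // parse the page without downloading images

page.open("http://example.com/product", function () {
  var biggest = page.evaluate(function () {
    var imgs = document.getElementsByTagName("img");
    var best = null;
    var bestArea = 0;
    for (var i = 0; i < imgs.length; i++) {
      var rect = imgs[i].getBoundingClientRect();
      if (rect.width * rect.height > bestArea) {
        bestArea = rect.width * rect.height;
        best = imgs[i].src;
      }
    }
    return best;
  });
  console.log(biggest);
  phantom.exit();
});
```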
Alternatively, you could send the image URLs back to the client, and have the client fetch the images and figure out which is biggest. That limits the number of requests your server has to make, and it allows the user to quickly pick a different image if the largest isn't the best.
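A client-side sketch of that alternative, comparing each candidate's natural dimensions once it loads:

```js
// Browser-side: fetch each candidate image and pick the largest by pixel area
function biggestImage(urls) {
  return Promise.all(urls.map(function (src) {
    return new Promise(function (resolve) {
      var img = new Image();
      img.onload = function () {
        resolve({ src: src, area: img.naturalWidth * img.naturalHeight });
      };
      img.onerror = function () {
        resolve({ src: src, area: 0 });
      };
      img.src = src;
    });
  })).then(function (results) {
    results.sort(function (a, b) { return b.area - a.area; });
    return results[0].src;
  });
}
```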