Gravatar highest resolution

I show a Gravatar image for users on my site. How can I know the best (highest) resolution to use, i.e. what the "s" parameter should be?
https://secure.gravatar.com/avatar/?s=250
Of course it depends on the user's image, but Gravatar must know the resolution of the original image and could advise me on the best (highest) size.

It seems the maximum size changed drastically in the last 9 months:
You may request images anywhere from 1px up to 2048px, however note that many users have lower resolution images, so requesting larger sizes may result in pixelation/low-quality images.
(quoted from the Gravatar Image Requests documentation)

I don't think it is possible. I did some research on the matter because I have the same need, and my conclusion is that Gravatar was designed for websites that show all avatars at the same (small) size; if that size is too big for some source images, the automatic upscaling is considered acceptable.
They should include a "?s=native" to get the native size.

This is what Gravatar writes about the resolution of the avatars:
You may request images anywhere from 1px up to 2048px, however note that many users have lower resolution images, so requesting larger sizes may result in pixelation/low-quality images.
The highest resolution of the default image would be 2048px. :)
Read more about Gravatar images (including the default image) on https://en.gravatar.com/site/implement/images/
EDIT: You will see the picture cannot get any bigger than 2048px x 2048px even if you set s=3000. :)
EDIT 2: Apparently the maximum size was changed from 512px to 2048px.
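For completeness, the s parameter is just appended to the avatar URL, which is built from an MD5 hash of the trimmed, lowercased email address. A minimal TypeScript/Node sketch (the email address is a placeholder):

import { createHash } from "crypto";

// Build a Gravatar URL for a given email and pixel size (1..2048).
function gravatarUrl(email: string, size: number): string {
  const hash = createHash("md5")
    .update(email.trim().toLowerCase())
    .digest("hex");
  return `https://secure.gravatar.com/avatar/${hash}?s=${size}`;
}

// Requesting the maximum; Gravatar upscales smaller originals to fit.
console.log(gravatarUrl("someone@example.com", 2048));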

Related

Efficiently get single pixels from large images in AWS Lambda

I would like to implement a Lambda in AWS which receives pixel coordinates (x/y) as input, retrieves that pixel's RGB value from one image, and then does something with it.
The catch is that the image is very large: 21600x10800 pixels (a 684 MB TIFF file).
Many of the image's pixels will likely never be accessed (it's a world map, so it includes e.g. oceans, for which no Lambda calls will happen), but I don't know in advance which pixels will be needed.
The result of the Lambda will be persisted so that the image operation is only done once per pixel.
My main concern is that I would like to avoid large unnecessary processing time and costs. I expect multiple calls per second of the Lambda. The naive way would be to throw the image into an S3 bucket and read it in the Lambda to get one pixel, but I would think that would make each Lambda invocation very heavy. I could build a custom solution such as storing the rows separately, but I was wondering if there is some set of technologies that handles this more elegantly.
Right now I am using Node.js 14.x, but that's not a strong requirement.
The image is in TIFF format, but I could convert it to another image format beforehand if needed (just not in the Lambda's answer, as that is even bigger).
How can I design this Lambda efficiently?
As I said in the comments, I think Lambda is the wrong solution unless your traffic is very bursty. If you have continuous traffic with "multiple calls per second," it will be more cost-effective to use an alternate technology, such as EC2 or ECS. And these give you far more control over storage.
However, if you're set on using Lambda, then I think the best solution is to store the file on an EFS volume and mount that filesystem onto your Lambda. In my experiments, it takes roughly 150 ms for a Lambda to read an arbitrary byte from a file on EFS, using Python and the mmap module.
Of course, if your TIFF library attempts to read the file into memory before performing any operations, this is moot. The TIFF format is designed so that shouldn't be necessary, but some libraries take the easy way out (because in most cases you're displaying or transforming the entire image). You may need to pre-process your file, to produce a raw byte format, in order to be able to make single-pixel gets.
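Parsifal's experiment used Python and mmap; the equivalent positional read in Node would look roughly like the sketch below. It assumes the file has already been pre-processed into a raw, headerless RGB byte dump (as suggested above); the mount path and 3-bytes-per-pixel layout are illustrative assumptions:

import { open } from "fs/promises";

const WIDTH = 21600;       // image width in pixels, from the question
const BYTES_PER_PIXEL = 3; // assumed raw RGB layout, no header

// Read a single pixel's RGB from a raw byte file on the mounted EFS volume.
async function readPixel(x: number, y: number): Promise<{ r: number; g: number; b: number }> {
  const fh = await open("/mnt/efs/worldmap.rgb", "r"); // hypothetical mount path
  try {
    const offset = (y * WIDTH + x) * BYTES_PER_PIXEL;
    const buf = Buffer.alloc(BYTES_PER_PIXEL);
    await fh.read(buf, 0, BYTES_PER_PIXEL, offset); // positional read; never loads the whole file
    return { r: buf[0], g: buf[1], b: buf[2] };
  } finally {
    await fh.close();
  }
}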
Thanks everyone for the useful information!
After some testing I settled on the solution from luk2302's comment, with 100x100 pixel sub-images hosted on S3, but I can't flag a comment as the solution. My tests showed that the Lambda takes around 110 ms to access a pixel (from the now only ~4 KB files), which I think is quite sufficient for my expectations. (The very first request took about 1 s, but now even requests for sub-images which have never been touched before are answered within 110 ms.)
Parsifal's solution is what I originally envisioned as ideal, in order to really read only the relevant data (the open question being which image library actually avoids loading the entire file), but I don't have the means to examine the filesystem aspect more closely to see whether it has more potential. In my case the requests are indeed very much burst-driven (with long periods of expected inactivity after the bursts), so for the moment I will stay with a Lambda, but I will keep the mentioned alternatives in mind.
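For anyone implementing the tile approach, a sketch of the Lambda handler: it assumes the image was pre-split into raw, headerless RGB tiles named by tile coordinates (the bucket name, key scheme, and raw format are my assumptions; the accepted setup used small image files, which would additionally need decoding):

import { S3Client, GetObjectCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({});
const TILE = 100;          // 100x100 pixel sub-images, as in the accepted approach
const BYTES_PER_PIXEL = 3; // assumed raw RGB tiles

export async function handler(event: { x: number; y: number }) {
  const { x, y } = event;
  // Which tile holds this pixel, and where the pixel sits inside that tile.
  const key = `tiles/${Math.floor(x / TILE)}_${Math.floor(y / TILE)}.rgb`;
  const res = await s3.send(new GetObjectCommand({ Bucket: "my-worldmap-tiles", Key: key }));
  const bytes = await res.Body!.transformToByteArray(); // a ~30 KB object at most
  const offset = ((y % TILE) * TILE + (x % TILE)) * BYTES_PER_PIXEL;
  return { r: bytes[offset], g: bytes[offset + 1], b: bytes[offset + 2] };
}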

PageSpeed Insights LCP with picture tag

We use PageSpeed Insights to measure the performance of our website (Drupal 7 with the Picture module for lazy loading).
In the mobile results we got the message that the LCP (Largest Contentful Paint) was too high (4.5 s), and the following code is shown:
<img class=" lazyloaded" data-src="https://www.interelectronix.com/sites/default/files/styles/view_einspaltig_abfallend/public/image_contenttype/impactinator_glas_ik10_4.jpg?itok=YxPF9YZf&timestamp=1559550873" alt="ABNT NBR IEC 62262 " title="" src="https://www.interelectronix.com/sites/default/files/styles/view_einspaltig_abfallend/public/image_contenttype/impactinator_glas_ik10_4.jpg?itok=YxPF9YZf&timestamp=1559550873">
If we have a look at the Chrome developer tools, we see in the network tab that it is not the image in the code (https://www.interelectronix.com/sites/default/files/styles/view_einspaltig_abfallend/public/image_contenttype/impactinator_glas_ik10_4.jpg) that is delivered (the image in the code has about 110 KB file size), but an image with a lower resolution (which has about 47 KB file size).
We then replaced the delivered image (47 KB) with an image of 14 KB file size.
But the PageSpeed Insights values don't change. It's always the same 4.5 s.
Does PageSpeed Insights use the image in the code for calculating the value?
And what can we do to get a faster result?
LCP is about when the largest item on the page is painted; it has nothing to do with the size of that item in KB (other than that a smaller file will load faster).
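You can watch which element the browser currently treats as the LCP candidate with a PerformanceObserver. A quick sketch using the standard web API, runnable in the page or the DevTools console:

// Log each LCP candidate as the browser reports it.
const po = new PerformanceObserver((list) => {
  for (const entry of list.getEntries()) {
    // entry.element (where exposed) is the DOM node being measured.
    console.log("LCP candidate at", entry.startTime, "ms:", entry);
  }
});
po.observe({ type: "largest-contentful-paint", buffered: true });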
What you want is for the "above the fold" content (content visible without scrolling) to be fully painted in less than 2.5 seconds, ideally 1.5 seconds.
To achieve this, you need to make sure that all of the CSS needed for "above the fold" content is inlined within your HTML (known as "inlining critical CSS").
Doing this will also fix "Eliminate Render-Blocking Resources", as everything needed to render the page is loaded with the first request. It will also help with Cumulative Layout Shift, as things won't "bounce" around the page while CSS styles are loaded.
Images above the fold should not be lazy-loaded; instead let them load normally, as they need to render as fast as possible. You may also want to ensure that the background image you use for the logo has the relevant CSS inlined as well, as otherwise it will load in late (better yet, convert the logo to an inline SVG to save an unneeded request and some page weight).
Finally, I noticed you use a video background; it is unlikely you will get top scores, as this uses a lot of bandwidth. I would suggest replacing the video background with a static image on mobile to save the massive overhead associated with a video background.
By all means move the video further down the page and lazy-load it in, but perhaps allow users to start the video manually.
Allowing a user to decide whether to play a video helps both people who have low data allowances and people who have ADHD, autism, etc., who may find a moving image distracting.
Anyway, I have gone off on a tangent a bit. To fix a late LCP, basically make sure that all above-the-fold assets have priority and are as lightweight as possible.
You may find this article explaining LCP useful, as well as this article on how to optimise LCP, to understand what you need to look at.

TextNoteType text_size minimal value

I am writing a Revit plugin.
I need to change the text_size of a TextNoteType inside a ViewPlan, based on user input.
The TextNoteType height has limited lower and upper values.
I want to validate the user input to respect these limits and avoid an error message from Revit, so how can I find, read, or compute the minimal and maximal values of a TextNoteType height?
Thanks in advance
Luc
I see no such limitations in the Text Note Type Properties.
How do you see those in the user interface?
What error message do you observe?
0.2 millimetres seems ridiculously small to me, and 400 millimetres ridiculously large.
In that case, I would say you can easily solve this problem by adding your own limits that are perfectly sensible, e.g., 2 mm minimum and 40 mm maximum. These sensible limits will then also protect the user from hitting the ridiculous limits imposed by Revit.
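The clamping itself is trivial. A minimal sketch (in TypeScript for brevity; an actual Revit add-in would implement the same check in C# against the Revit API, and the 2 mm / 40 mm bounds are the suggested sensible limits, not values read from Revit):

// Suggested sensible limits for the text height, in millimetres.
const MIN_TEXT_MM = 2;
const MAX_TEXT_MM = 40;

// Clamp the user's requested text height into the sensible range
// before handing it to the API, so Revit never sees an invalid value.
function clampTextSize(requestedMm: number): number {
  return Math.min(MAX_TEXT_MM, Math.max(MIN_TEXT_MM, requestedMm));
}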

Large number of dataURIs compared to images

I'm trying to compare the performance of using data URIs against a large number of separate images. What I've done is set up two tests:
Regular Images (WPT)
Base64 (WPT)
Both pages are exactly the same, other than "how" these images/resources are being served. I've run a WebPageTest against each (noted above as WPT), and it looks like the average load time for base64 is a lot faster, but the cached view of the regular version is faster. I've implemented HTML5 Boilerplate's .htaccess to make sure resources are properly gzipped, but as you can see I'm getting an F for base64 for not caching static resources (which I'm not sure is right or not). What I'm ultimately trying to figure out here is which is the better way to go (assuming, for argument's sake, there would be that many resources on a single page). Some things I know:
The GET request for base64 is big
There's 1 resource for base64 compared to 300-odd for the regular version (which is the bigger downside here: the GET request size or the number of resources?). The thing to remember about the regular version is that only so many resources can be loaded in parallel due to browser restrictions, whereas with base64 you're really only waiting until the HTML can be read, so nothing is loaded other than the page itself.
Really appreciate any help - thanks!
For comparison I think you need to run a test with the images sharded across multiple hostnames.
Another option would be to sprite the images into logical sets.
If you're going to go down the BASE64 route, then perhaps you need to find a way to cache them on the client.
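One way to cache them on the client is localStorage: build the data URI once and replay it from storage on later visits. A rough modern sketch in TypeScript (the URL, key naming, and FileReader-based encoding are illustrative, not something from the tests above):

// Fetch an image once, store it as a data URI, and reuse it on repeat views.
async function cachedDataUri(url: string): Promise<string> {
  const cacheKey = `img:${url}`;
  const hit = localStorage.getItem(cacheKey);
  if (hit) return hit; // served from the client-side cache

  const blob = await (await fetch(url)).blob();
  const dataUri = await new Promise<string>((resolve) => {
    const reader = new FileReader();
    reader.onload = () => resolve(reader.result as string); // "data:image/...;base64,..."
    reader.readAsDataURL(blob);
  });
  localStorage.setItem(cacheKey, dataUri); // note: localStorage quotas (~5 MB) apply
  return dataUri;
}

Usage would be along the lines of img.src = await cachedDataUri("/img/interior-flooring.png").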
If these are the images you're planning on using then there's plenty of room for optimisation in them, for example: http://yhmags.com/profile-test/img_scaled15/interior-flooring.jpg
I converted this to a PNG and ran it through ImageOptim, and it came out at 802 bytes (vs 1.7 KB for the JPG).
I'd optimise the images and then re-run the tests, including one with multiple hostnames.

How does Nike's website do this Flash effect when the user selects a choice

I was wondering how Nike's website makes the change you can see when selecting a color or a sole. At first I thought they were only using images, and when the user picked a color that part was simply replaced, but when I selected a different sole I noticed it didn't change like an image; it looked a bit more as if it was being rendered. Does anybody happen to know how this is made, or where I can get further info about creating this effect? :)
It's hard to know for sure, but my guess would be that they're using a rendering service similar to that provided by Adobe's Scene7.
It's a product that is used to colorize/customize a base product image based on user choices.
If you're interested in using the service, I'd suggest signing up for their weekly webinar. I attended one a while back and was very impressed with their offering. They showed the Converse site (which had functionality almost identical to the Nike site) as a demo.
A lot of these tools are built out in Flash using a variety of techniques:
1) You can use Flash's BitmapData object to directly shift the hues of the pixels in your item. This is probably the simplest technique but often limits you to simple color transformations.
2) You can pre-render transparent PNGs (or photos, I guess) containing the various patterns or textures you would want to show on your object and have them dynamically added to your stage at runtime. This, I think, offers the highest fidelity but means you need all of your items rendered upfront.
3) You can create 3D COLLADA files and load them via a library like Papervision3D, then dynamically change the texture at runtime. This is the most memory-intensive technique and tends to result in far worse fidelity, but in exchange you get a full 3D object that you can view in space.
I'm sure there are other techniques but those are the top 3 I can think of. I hope that helps!
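As a rough modern analogue of technique 1, here is pixel-level tinting with the HTML canvas API in TypeScript, similar in spirit to Flash's BitmapData manipulation (the function name and tint parameters are illustrative, not how Nike actually does it):

// Tint an image's pixels channel-by-channel, like a simple BitmapData color transform.
function tintImage(img: HTMLImageElement, tint: { r: number; g: number; b: number }): HTMLCanvasElement {
  const canvas = document.createElement("canvas");
  canvas.width = img.width;
  canvas.height = img.height;
  const ctx = canvas.getContext("2d")!;
  ctx.drawImage(img, 0, 0);

  // Read the raw pixels, scale each channel by the tint, and write them back.
  const data = ctx.getImageData(0, 0, canvas.width, canvas.height);
  const px = data.data;
  for (let i = 0; i < px.length; i += 4) {
    px[i] = Math.min(255, px[i] * tint.r);         // red
    px[i + 1] = Math.min(255, px[i + 1] * tint.g); // green
    px[i + 2] = Math.min(255, px[i + 2] * tint.b); // blue
    // px[i + 3] is alpha, left untouched
  }
  ctx.putImageData(data, 0, 0);
  return canvas;
}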
