Loading only relevant section of image into RAM - node.js

My website has a viewer that loads huge 200 MB pictures (millions × millions of pixels).
Yet on screen the client never sees more than, say, 1600×900 pixels at a time.
Loading the full image takes a huge toll on the browser; my tab often crashes when I try to load a picture that is too big.
How do I go about serving only the visible part of the image (i.e. serving a section of the image by coordinates)?
If the image was scrolled left by 100px, I want to serve only the new 100px, not the entire 1600×900 section that includes them. That means the client needs to know which sections it already has and request only the part it lacks.
I would also like to unload parts of the image from the client after a certain period of time.
(Yes, very similar to the way Google Maps works.)
Any tips on what to read, where to look, or what approach I should attempt?
I am using node.js with graphicsmagick.
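
A common approach, and the one map viewers use, is to cut the image into a grid of fixed-size tiles: the client requests only the tiles its viewport touches, keeps track of the tiles it already holds (so scrolling 100px left fetches just the tiles newly entering view), and drops tiles it has not displayed for a while. As a minimal server-side sketch, assuming the express and gm packages, here is a hypothetical tile endpoint that crops tiles out of the source on demand; in production you would pre-cut and cache the tiles rather than crop per request:

    // Hypothetical tile endpoint: GET /tiles/:x/:y returns one 256x256 PNG tile
    // cropped out of the huge source image. Requires the `express` and `gm`
    // packages plus a GraphicsMagick install; paths and sizes are placeholders.
    const express = require('express');
    const gm = require('gm');

    const TILE = 256;                      // tile edge in pixels (assumption)
    const SOURCE = '/data/huge-image.png'; // the 200 MB source (assumption)

    const app = express();

    app.get('/tiles/:x/:y', (req, res) => {
      const x = parseInt(req.params.x, 10) * TILE; // grid coords -> pixel offsets
      const y = parseInt(req.params.y, 10) * TILE;

      gm(SOURCE)
        .crop(TILE, TILE, x, y) // width, height, x-offset, y-offset
        .stream('png', (err, stdout) => {
          if (err) return res.status(500).end();
          res.type('png');
          stdout.pipe(res);
        });
    });

    app.listen(3000);

On the client, each tile becomes an absolutely positioned <img> keyed by its (x, y) grid coordinates, which makes the "which sections do I already have" bookkeeping a simple map lookup, and unloading old tiles is just removing those elements after a timeout.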

Related

How to reduce web page response time to a minimum in Django?

Currently pages take about 1.5 s to load if they contain images, and pages with image, audio and video files take around 2 to 2.5 s. What I want to know is whether there is a way to bring that loading time to a minimum.
I am using Django and Django templates to build this web application.
Since you are talking about images, audio and video, I would first experiment with the size and quality of the images and run some tests. For audio and video you might want to use the attribute preload="none"; that is handled in the template. Another option is to load images/audio/video on scroll, or with paging; it is not good practice to send everything at once, e.g. show 10-15 media items per page (see the sketch below).
On the view side, check the queries you are executing and the data structures you are using (memory consumption matters, so avoid huge lists and JSON blobs). The same goes for custom filters, if applicable.
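
As an illustration of loading media on scroll (client-side and framework-agnostic, so it works the same behind Django), here is a small sketch using IntersectionObserver; the data-src attribute and the lazy class name are assumptions:

    // Lazy-load images as they scroll into view: the real URL lives in
    // data-src and is promoted to src only when the element nears the viewport.
    const observer = new IntersectionObserver((entries, obs) => {
      for (const entry of entries) {
        if (!entry.isIntersecting) continue;
        const img = entry.target;
        img.src = img.dataset.src; // start the actual download
        obs.unobserve(img);        // each image only needs loading once
      }
    }, { rootMargin: '200px' });   // begin loading slightly before visible

    document.querySelectorAll('img.lazy[data-src]').forEach(img => observer.observe(img));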

First Contentful Paint measured at almost 6 s although content is visible in the second frame

I did everything I could to optimize my WordPress site and got rid of most of the recommendations from PageSpeed Insights. I use the WP Rocket caching plugin, Optimole image optimization and the Cloudflare CDN.
The Google PageSpeed Insights score got somewhat better, but the results, especially on mobile, are still far from good, although all of the recommendations that were there in the beginning, and that I could address without custom coding and without breaking the site, are now gone.
There is one thing that strikes me as odd about the PageSpeed Insights results: First Contentful Paint is measured at something between 5 and 6 seconds, although the screenshots of the page that Google presents clearly show contentful paint in the second frame already. See image 1.
Any ideas on this?
The only remaining points in your suggestions are 1. Remove unused CSS and 2. Defer non-critical resources (I think; the text is in German).
Point 2 affects time to first paint the most.
The screenshots you see in PSI are not in real time.
Also, there is a slight discrepancy (a bug?) between the screenshots and actual performance: PSI uses a simulated slow-down of the page rather than an applied slowdown, so it loads the page at full speed and then adjusts the figures to account for the bandwidth and round-trip time to the server caused by higher latency.
If you run a Lighthouse audit (Google Chrome -> F12 -> Audits -> run audit) with throttling set to 'applied' rather than 'simulated', you will see it takes about 5 seconds before a meaningful paint (a programmatic version is sketched below).
Lighthouse is the engine that powers PageSpeed Insights now, so the same information should be in both.
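
For reference, a sketch of running that audit from Node with applied throttling, assuming the lighthouse and chrome-launcher npm packages (exact option names can vary between Lighthouse versions):

    // Run Lighthouse with *applied* (DevTools) throttling instead of the
    // simulated throttling PSI uses, then print the FCP it measured.
    const lighthouse = require('lighthouse');
    const chromeLauncher = require('chrome-launcher');

    (async () => {
      const chrome = await chromeLauncher.launch({ chromeFlags: ['--headless'] });
      const { lhr } = await lighthouse('https://example.com', {
        port: chrome.port,
        onlyCategories: ['performance'],
        throttlingMethod: 'devtools', // applied slowdown; 'simulate' is the default
      });
      console.log('First Contentful Paint:',
        lhr.audits['first-contentful-paint'].displayValue);
      await chrome.kill();
    })();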
With regards to your site speed: you have a load of blank SVGs being loaded for some reason, and your CSS and JS files need combining (do it manually; plugins don't tend to do a good job here) to reduce the number of requests your site makes. The browser can only make about 8 requests at a time, and on 4G the round-trip latency to your server adds up quickly: 40 files at 8 per round means 5 round trips, which at 100 ms latency is 500 ms of dead time waiting for responses (worked through in the snippet below).
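
The round-trip arithmetic above, written out (the connection limit and latency figures are the same assumptions used in the answer):

    // Dead time spent waiting on latency when requests queue behind the
    // browser's per-host connection limit.
    const files = 40;       // CSS/JS files the page requests
    const concurrent = 8;   // simultaneous requests the browser allows
    const latencyMs = 100;  // round-trip time to the server on 4G

    const rounds = Math.ceil(files / concurrent); // 5 batches of requests
    console.log(`${rounds} round trips x ${latencyMs} ms = ${rounds * latencyMs} ms of dead time`);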

Google Earth crashed or still loading?

How do you know whether the (free) Google Earth app is still loading data or has crashed for some reason?
I'm loading a huge KML file: 100% CPU usage, and it never stops processing.
Are there any limits on the size of the KML file being displayed?
How many MB of KML can the Google Earth desktop application show without crashing?
I don't think there are any file size limitations in Google Earth when using .kml files. There are, however, limits on the size of images, so if your KML is trying to load images (such as screen overlays) then perhaps your problem lies there.
This is related to the specifications of your computer; you can find the maximum resolution of images you can import by looking under 'About'. I am not sure where to find info about the maximum file size of an image.
Next, you can try creating a .kmz out of the .kml. A KMZ is simply a compressed file, the same as a .zip. Learn more about it:
http://code.google.com/apis/kml/documentation/kmzarchives.html
Also, you can try breaking the file up into smaller ones, either by using network links (see the sketch at the end of this answer)
http://code.google.com/apis/kml/documentation/kmlreference.html#networklink
or Regions
http://code.google.com/apis/kml/documentation/regions.html
By breaking the file up into smaller ones, you might also discover which part or element of the KML is causing trouble, if there is some kind of format error.
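
To illustrate the network-link route, here is a hypothetical sketch that generates a lightweight parent KML whose NetworkLink entries pull in the split-up chunks on demand; the helper name and chunk URLs are made up, and the element structure follows the KML reference linked above:

    // Hypothetical helper: build a small parent KML that references the
    // split-up chunk files via NetworkLink, so Google Earth fetches them lazily.
    function buildParentKml(chunkUrls) {
      const links = chunkUrls.map(url =>
        `  <NetworkLink>\n    <Link><href>${url}</href></Link>\n  </NetworkLink>`
      ).join('\n');
      return `<?xml version="1.0" encoding="UTF-8"?>\n` +
        `<kml xmlns="http://www.opengis.net/kml/2.2">\n<Document>\n${links}\n</Document>\n</kml>`;
    }

    // Example: three chunk files hosted next to the parent document.
    console.log(buildParentKml([
      'http://example.com/chunk1.kml',
      'http://example.com/chunk2.kml',
      'http://example.com/chunk3.kml',
    ]));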

Loading website images faster

Is it possible to make a website background image load faster than it does now? My website's background image is 1258×441 pixels and 656 KB in file size. It takes too long to load the complete background image when accessing my website. Is there any way, other than compressing it (the image is already compressed), to improve its loading speed?
Choose one of the following (2 is my main suggestion, along with 3):
1. Compress the image even more; because it is a background, quality matters less. Users do not focus on the background as much as they do on the content.
2. Since the background is partially covered by content, you can paint the part of the background that is never visible (behind content) black (or any other solid color). This will make the image compress more nicely, sparing some space.
3. Save the image with progressive JPEG compression. That will make the background display at gradually higher quality as the image loads (see the sketch after this list).
4. Get rid of the background image. (Obvious.) :)
5. Today's web speeds are high; don't change anything.
ALSO: If your PNG image has any repeating parts, you can slice your image into three parts and save a lot of space.
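
For option 3, if a Node toolchain is available, progressive JPEGs can be produced with graphicsmagick (the library mentioned in the first question); a minimal sketch with placeholder file names:

    // Re-encode a JPEG with progressive (interlaced) scans so the browser can
    // render a coarse version of the background early. Requires the `gm`
    // package and a GraphicsMagick install.
    const gm = require('gm');

    gm('background.jpg')
      .interlace('Line') // 'Line' interlacing produces a progressive JPEG
      .quality(70)       // option 1: compress harder, backgrounds can take it
      .write('background-progressive.jpg', (err) => {
        if (err) throw err;
        console.log('wrote progressive background image');
      });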
The speed of loading the background image is determined (ignoring latency) by the bandwidth of your connection and the size of the image. So if you have, let's say, 128 KB/s and an image of 4096 KB, you need at least
4096 / 128 = 32 s
for it to load.
Since you can't change the bandwidth, the only thing you can do is reduce the size of the picture; that is, if you can't compress it more, lower the resolution.
If you don't want to lose detail, you could stack different layers of background on your website at different qualities, the better ones over the worse ones. While your page is loading, the low-quality images load fast and you get some background; over time the better-quality images load and the background improves.
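
A minimal sketch of that layering idea, assuming a small placeholder (bg-small.jpg) and the full-quality file (bg-full.jpg) as hypothetical names:

    // Show a small, fast-loading background immediately, then swap in the
    // full-quality image once the browser has finished downloading it.
    document.body.style.backgroundImage = "url('bg-small.jpg')"; // loads fast

    const full = new Image();
    full.onload = () => {
      document.body.style.backgroundImage = "url('bg-full.jpg')";
    };
    full.src = 'bg-full.jpg'; // download starts now, swap happens when done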
Load your images into a database and fetch them whenever you need them. The benefit of this? The database is loaded once you initialize it and can retrieve the information whenever required; it is fast compared to other techniques.

JavaME - LWUIT images eat up all the memory

I'm writing a MIDlet using LWUIT, and images seem to eat up incredible amounts of memory. All the images I use are PNGs packed inside the JAR file. I load them using the standard Image.createImage(URL) method. The application has a number of forms, each with a couple of labels and buttons, and I am fairly certain that only the active form is kept in memory (I know it isn't very trustworthy, but Runtime.freeMemory() seems to confirm this).
The application worked well at 240x320 resolution, but moving it to 480x640 and using appropriately larger images for the UI started causing out-of-memory errors. What the application does, among other things, is download remote images, and it seems to work fine until it gets to this point. After downloading a couple of PNGs and returning to the main menu, the out-of-memory error appears. Naturally, I looked into the amount of memory the main menu uses, and it was pretty shocking. It's just two labels with images and four buttons, but each button has three images used for style.setIcon, setPressedIcon and setRolloverIcon. The images range in size from 15 to 25 KB, yet after removing two of the three images from every button (8 images in total), Runtime.freeMemory() showed a stunning 1 MB decrease in memory usage.
The way I see it, I either have a whole lot of memory leaks (which I don't think I do, but memory leaks aren't exactly known to be easily tracked down), I am doing something terribly wrong with image handling or there's really no problem involved and I just need to scale down.
If anyone has any insight to offer, I would greatly appreciate it.
Mobile devices are usually very low on memory. So you have to use some tricks to conserve and use memory.
We had the same problem at a project of ours and we solved it like this.
For downloaded images:
Make a cache where you put your images. If you need an image, check whether it is in the cache map; if it isn't, download it and put it there; if it is, use it. If memory is full, remove the oldest image in the cache map and try again (see the sketch below).
For other resource images:
Keep them in memory only for as long as they are visible; once they are off screen, break the reference and the GC will do the cleanup for you.
Hope this helps.
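
The cache-with-eviction idea is language-agnostic; here is a minimal sketch of a least-recently-used variant (written in JavaScript for brevity, with a hypothetical download() standing in for the platform's image fetch):

    // Least-recently-used image cache: bounded size, oldest entry evicted first.
    // `download(url)` is a hypothetical async function returning a decoded image.
    const MAX_ENTRIES = 16;
    const cache = new Map(); // Map keeps insertion order, so oldest comes first

    async function getImage(url) {
      if (cache.has(url)) {
        const img = cache.get(url);
        cache.delete(url); // re-insert to mark as most recently used
        cache.set(url, img);
        return img;
      }
      const img = await download(url); // cache miss: fetch it
      if (cache.size >= MAX_ENTRIES) {
        const oldest = cache.keys().next().value;
        cache.delete(oldest); // evict the least recently used entry
      }
      cache.set(url, img);
      return img;
    }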
There are a few things that might be happening here:
You might have seen the memory used before garbage collection, which doesn't correspond to the actual memory used by your app.
Some third-party code you are running might be pooling some internal data structures to minimize allocation. While pooling is a viable strategy, it can sometimes look like a leak. In that case, look for an API to 'close' or 'dispose' of the objects you don't need.
Finally, you might really have a leak. In this case you need to get more details on what's going on in the emulator VM (though keep in mind that it is not necessarily the same as the phone VM).
Make sure that your emulator uses JRE 1.6 as the backing JVM. If you need it to use the runtime libraries from an earlier JDK, use -Xbootclasspath:<path-to-rt.jar>.
Then, after your application gets into the state you want to inspect, run %JAVA_HOME%\bin\jmap -dump:format=b,file=heap.bin <pid> (if you don't know the ID of your process, use jps).
Now you've got a dump of the JVM heap. You can analyze it with jhat (comes with the JDK, a bit difficult to use) or with a third-party profiler (my preference is YourKit; it's commercial, but they offer time-limited eval licenses).
I had a similar problem with LWUIT on Java DTV. Did you try flushing the images when you no longer need them (getAWTImage().flush())?
Use EncodedImage and resource files when possible (resource files use EncodedImage by default; read the Javadoc for details). The other comments are also correct that you need to actually observe the amount of memory; even high-RAM Android/iOS devices run out of memory pretty fast with multiple images.
Avoid scaling, which effectively eliminates the EncodedImage.
Did you consider that loading the same image from the JAR many times may create many separate image objects (with identical contents) instead of reusing one instance per image? This is my first guess.
