Google Earth crashed or still loading? - kml

How do you know whether the (free) Google Earth app is still loading data or has crashed for some reason?
I'm loading a huge KML file; CPU usage is at 100%, but it never stops processing.
Are there any limits on the size of the KML file being displayed?
How many megabytes of KML can the Google Earth PC application show without crashing?

I don't think there are any file size limitations in Google Earth when using .kml files. There are, however, limits on the size of images, so if your KML is trying to load images (such as screen overlays), perhaps your problem lies there.
These limits depend on the specifications of your computer; you can find the maximum resolution of images you can import by looking under 'About'. I am not sure where to find info about the maximum file size of an image.
Next, you can try creating a .kmz out of the .kml. A KMZ is simply a compressed file, the same as a .zip; learn more about it here:
http://code.google.com/apis/kml/documentation/kmzarchives.html
You can also try breaking the file up into smaller ones, either by using network links
http://code.google.com/apis/kml/documentation/kmlreference.html#networklink
or Regions
http://code.google.com/apis/kml/documentation/regions.html
By breaking the file up into smaller ones, you might also discover which part/element of the KML is causing trouble, if you have some kind of format error.
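For example, a minimal parent document that pulls in one of the smaller files through a network link could look like this (the file name and URL are placeholders):

    <?xml version="1.0" encoding="UTF-8"?>
    <kml xmlns="http://www.opengis.net/kml/2.2">
      <Document>
        <NetworkLink>
          <name>Chunk 1</name>
          <Link>
            <href>http://example.com/chunk1.kml</href>
          </Link>
        </NetworkLink>
      </Document>
    </kml>

Google Earth then fetches each chunk separately, so a chunk that fails to load points you straight at the broken part of your data.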

Related

Resize image when uploading to server or when serving from server to client?

My website uses many images. Even on a slow day, users will upload hundreds of new images.
I'm trying to figure out the best practice for manipulating image sizes.
This project uses Node.js with the gm module for manipulating images, but I don't think this question is Node- or gm-specific.
I came up with several strategies, but I can't decide which is best, and I am not sure whether I am missing an obvious best-practice strategy.
Please enlighten me with your thoughts and experience.
Option 1: Resize the file with gm on every client request.
Option 1 pros:
If I run a gm operation every time I serve a file, I can control the size, quality, compression, filters and so on whenever I need to.
On the server I only save one full-quality, full-size version of each file, saving storage space.
Option 1 cons:
gm is very resource intensive, which means I will be taxing my RAM for every single image served to every single client.
It also means I will always be working from a big file, which makes things even worse.
I will always have to fetch the file from my storage (in my case S3) to the server, then manipulate it, then serve it. This seems to create redundant bandwidth usage.
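To make option 1 concrete, here is a minimal sketch with gm and Node's http module (the width parameter and paths are made up, and the master file is assumed to already be on local disk rather than on S3):

    // Option 1 sketch: run GraphicsMagick on every single request.
    var gm = require('gm');
    var http = require('http');
    var url = require('url');

    http.createServer(function (req, res) {
      // e.g. GET /photo.jpg?w=300 -- any size, any time
      var width = parseInt(url.parse(req.url, true).query.w, 10) || 800;
      gm('originals/photo.jpg')      // always the big, full-quality master
        .resize(width)
        .stream(function (err, stdout) {
          if (err) { res.statusCode = 500; return res.end(); }
          res.writeHead(200, { 'Content-Type': 'image/jpeg' });
          stdout.pipe(res);          // one gm process per image served
        });
    }).listen(3000);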
Option 2: Resize the file on first upload and keep multiple sizes of the file on the server.
Option 2 pros:
I will only have to use gm on uploads.
Serving the files will require almost no resources.
Option 2 cons:
I will use more storage because I will be saving multiple versions of the same file (i.e. full, large, medium, small, x-small) instead of only one.
I will be limited to the sizes that were created when the user uploaded their image.
Not flexible: if in the future I decide I need an additional size (x-x-small, for instance), I will have to run a script that processes every image in my storage to create the new version.
Option 3: Use option 2 to process files on upload, but keep a resize module for serving file sizes that don't have a stored version in my storage.
Option 3 pros:
I will be able to reduce resource usage significantly when serving files in a selection of set sizes.
Option 3 cons:
I would still use more storage, as in option 2 versus option 1.
I will still have to process files when serving them in cases where the file size I want doesn't exist.
Option 4: I do not create multiple versions of files on upload. I do resize the images when I serve them, but whenever an image size is requested, that version of the file is saved in my storage, so for future requests I will not have to process the image again.
Option 4 pros:
I will only use storage for the versions I use.
I can add a new file size whenever I need one; it will be created automatically, on demand, if it doesn't already exist.
Resources are used heavily only once per file size.
Option 4 cons:
Files that are only accessed once will be both resource intensive AND storage intensive: I will access the file, see that the size version I need doesn't exist, create the new version, spend the resources, and save it to my storage, wasting storage space on a file that will only be used once. (Note: I can't know in advance how many times a file will be used.)
I will have to check if the file already exists for every request.
So,
Which would you choose? Why?
Is there a better way than the ways I suggested?
The solution depends heavily on how your resources are used. If utilisation is intensive, option 2 is by far the better choice. If not, option 1 could also work nicely.
From a qualitative point of view, option 4 is of course the best. But for simplicity and automation, option 2 is much better.
Because simplicity matters, I suggest mixing options 2 and 4: you keep a fixed list of sizes (e.g. large, medium, small), but instead of processing them on upload, you generate them when first requested, as in option 4.
That way, in the worst case, you end up at the option 2 solution.
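A minimal sketch of that mix with gm, using the local filesystem in place of S3 (the size list, paths, and helper name are all made up):

    // Hybrid of options 2 and 4: fixed list of sizes, each generated lazily
    // on first request and reused afterwards.
    var fs = require('fs');
    var gm = require('gm');

    var SIZES = { small: 200, medium: 600, large: 1200 }; // whitelist, as in option 2

    function getResized(id, sizeName, callback) {
      if (!SIZES[sizeName]) return callback(new Error('unknown size'));
      var cached = 'cache/' + id + '-' + sizeName + '.jpg';
      fs.stat(cached, function (err) {
        if (!err) return callback(null, cached);   // generated on an earlier request
        gm('originals/' + id + '.jpg')             // the full-size master
          .resize(SIZES[sizeName])                 // resize once, as in option 4
          .write(cached, function (writeErr) {
            callback(writeErr, cached);            // later requests hit the cache
          });
      });
    }

After the first request for a given size, the per-request cost drops to one stat call plus a file read, which is the option 2 behaviour.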
My final word would be that you should also use the <img> and/or <canvas> elements in your website to perform the final sizing, so that small scaling adjustments are not done on the server side.
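For instance (sizes and paths made up), serve the nearest stored size and let the browser close the gap:

    <!-- the 600px "medium" version is scaled by the browser to fit a 480px slot -->
    <img src="/images/123-medium.jpg" width="480" alt="user photo">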

Loading only relevant section of image into RAM

On my website I have a viewer that loads huge 200 MB pictures (millions by millions of pixels).
Yet on screen the client never sees more than, say, 1600x900 pixels at a time.
Loading the full image takes a huge toll on the browser; in fact, my tab often crashes when I try to load a picture that is too big.
How do I go about serving only the visible part of the image? (serving a section of image by coordinates)
If the image was scrolled left by 100px, I want to serve the new 100px only, not the entire 1600x900 section that includes those 100px. Meaning, that the client needs to know what section it already has, and only requests for the new section that it lacks.
I would like to unload parts of the image from the client after a certain period of time.
(Yes, very similar to the way google maps work)
Any tips on what to read, where to look, or which approach to attempt?
I am using node.js with graphicsmagick.
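One possible starting point, since node.js and graphicsmagick are already in the stack: cut the source image into fixed-size tiles once, then let the client request only the tiles covering its viewport, exactly as map servers do. A minimal sketch (tile size and paths are assumptions):

    // Cut one tile out of the huge source image; run this for every
    // (col, row) pair once, e.g. at upload time.
    var gm = require('gm');

    var TILE = 256; // the usual map-tile size; pick what suits your viewer

    function makeTile(src, col, row, callback) {
      gm(src)
        .crop(TILE, TILE, col * TILE, row * TILE) // width, height, x offset, y offset
        .write('tiles/' + col + '_' + row + '.png', callback);
    }

On the client, keep track of which tiles are already loaded, request only the missing ones as the user scrolls, and drop tiles that have been off screen for a while. Viewers such as Leaflet and OpenSeadragon implement exactly this pattern and are worth reading.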

using jpeg as a memory mapped file

I have a JPEG image at 360x288 resolution, stored on my server. The image can be updated at random times, but in a sequential manner.
I upload the image via a web service as a byte array, then save it as an image/jpeg file on my hard drive.
And it continues...
Now, I am curious whether using a memory-mapped file would improve efficiency, as I have read somewhere (I can't recall where) that an image is by default already a memory-mapped file?
Any info would be great...

Loading website images faster

Is it possible to make a website background image load faster than it currently does? My website's background image is 1258x441 pixels and 656 KB in size. It takes too long to load the complete background image when accessing my website. Is there any way, other than compression (the image is already compressed), to improve its loading speed?
Choose one of the following (2 is my main suggestion, along with 3):
1. Compress the image even more; since it is a background, quality doesn't matter that much. Users do not focus on the background as much as they do on the content.
2. Since the background is partially covered by content, you can paint the part of the background that is not visible (behind the content) black (or any other solid color). This will make the image compress more nicely, sparing some space.
3. Save the image with progressive JPEG compression, so the background displays at gradually increasing quality as it loads (a sketch follows below).
4. Get rid of the background image (obvious). :)
5. Today's connection speeds are high; don't change anything.
ALSO: If your PNG image has any repeating parts, you can slice your image into three parts and spare a lot of space.
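For suggestion 3, if GraphicsMagick is available (it appears elsewhere in these questions), re-encoding as a progressive JPEG is a one-liner; here via the Node gm module, with file names made up:

    // Re-encode the background as a progressive JPEG (suggestion 3),
    // recompressing a little at the same time (suggestion 1).
    var gm = require('gm');

    gm('background.jpg')
      .interlace('Line')   // 'Line' interlacing = progressive JPEG
      .quality(70)
      .write('background-progressive.jpg', function (err) {
        if (err) console.error(err);
      });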
The time to load the background image is determined (ignoring latency) by the bandwidth of your connection and the size of the image. So if you have, let's say, 128 KB/s and an image of size 4096 KB, you need at least
4096 KB / 128 KB/s = 32 s
for it to load.
Since you can't change the bandwidth, the only thing you can do is change the size of the picture. That is, if you can't compress it more, lower the resolution.
If you don't want to lose quality, you could put several layers of background in your website at different qualities, the better ones over the worse ones. Then while your page is loading, the low-quality images load fast and you get some background; over time the better-quality images load and the background improves.
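A simple way to get that effect with plain JavaScript (file names are made up; the CSS initially points at the tiny version):

    // CSS starts with: body { background-image: url('background-tiny.jpg'); }
    // Swap in the full-quality version once it has downloaded in the background.
    var full = new Image();
    full.onload = function () {
      document.body.style.backgroundImage = "url('background-full.jpg')";
    };
    full.src = 'background-full.jpg';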
Load your images into a database and fetch them whenever required. The benefit of this? The database is loaded once when you initialize it and can retrieve the data whenever required; it is fast compared to other techniques.

JavaME - LWUIT images eat up all the memory

I'm writing a MIDlet using LWUIT, and images seem to eat up incredible amounts of memory. All the images I use are PNGs and are packed inside the JAR file. I load them using the standard Image.createImage(URL) method. The application has a number of forms, each with a couple of labels and buttons; however, I am fairly certain that only the active form is kept in memory (I know it isn't very trustworthy, but Runtime.freeMemory() seems to confirm this).
The application has worked well at 240x320 resolution, but moving it to 480x640 and using appropriately larger images for the UI started causing out-of-memory errors. What the application does, among other things, is download remote images. It seems to work fine until it gets to this point. After downloading a couple of PNGs and returning to the main menu, the out-of-memory error appears. Naturally, I looked into the amount of memory the main menu uses, and it was pretty shocking. It's just two labels with images and four buttons. Each button has three images used for style.setIcon, setPressedIcon and setRolloverIcon. The images range in size from 15 to 25 KB, but after removing two of the three images used for every button (so 8 images in total), Runtime.freeMemory() showed a stunning 1 MB decrease in memory usage.
The way I see it, I either have a whole lot of memory leaks (which I don't think I do, but memory leaks aren't exactly known to be easily tracked down), I am doing something terribly wrong with image handling or there's really no problem involved and I just need to scale down.
If anyone has any insight to offer, I would greatly appreciate it.
Mobile devices are usually very low on memory, so you have to use some tricks to conserve it.
We had the same problem on a project of ours, and we solved it like this:
For downloaded images:
Make a cache for your images. When you need an image, check whether it is in the cache map: if it isn't, download it and put it there; if it is, use it. If memory is full, remove the oldest image in the cache map and try again.
For other resource images:
Keep them in memory only as long as they are visible; once they are not, break the reference and the GC will do the cleanup for you.
Hope this helps.
There are a few things that might be happening here:
You might have seen the memory used before garbage collection, which doesn't correspond to the actual memory used by your app.
Some third-party code you are running might be pooling some internal data structures to minimize allocation. While pooling is a viable strategy, it can sometimes look like a leak. In that case, look for an API to 'close' or 'dispose' of the objects you don't need.
Finally, you might really have a leak. In this case you need to get more details on what's going on in the emulator VM (though keep in mind that it is not necessarily the same as the phone VM).
Make sure that your emulator uses JRE 1.6 as the backing JVM. If you need it to use the runtime libraries from an earlier JDK, use -Xbootclasspath:<path-to-rt.jar>.
Then, after your application gets into the state you want to examine, run %JAVA_HOME%\bin\jmap -dump:format=b,file=heap.bin <pid> (if you don't know your process ID, use jps).
Now you've got a dump of the JVM heap. You can analyze it with jhat (which comes with the JDK, but is a bit difficult to use) or with a third-party profiler (my preference is YourKit; it's commercial, but they offer time-limited evaluation licenses).
I had a similar problem with LWUIT at Java DTV. Did you try flushing the images when you don't need them anymore (getAWTImage().flush())?
Use EncodedImage and resource files when possible (resource files use EncodedImage by default; read the javadoc). The other comments are also correct that you need to actually observe the amount of memory used; even high-RAM Android/iOS devices run out of memory pretty fast with multiple images.
Avoid scaling, which effectively eliminates the EncodedImage benefit.
Have you considered that loading the same image from the JAR many times might be creating many separate image objects (with identical contents) instead of reusing one instance per individual image? That is my first guess.
