Background:
I have a requirement to show a picture representation of storage hardware (configured from smaller hardware pieces). I am using the svg.js library to compose a storage hardware SVG image from 100-500 smaller JPG images.
Problem:
I am seeing a performance lag: the page is unresponsive for around 30-40 seconds when a big configuration uses more than 400 smaller images to compose the SVG. In fact only 15 distinct JPG images are downloaded from the server; they are very small, around 600 KB combined, and the total download time is around 3 seconds, but the page takes 30-40 seconds to become fully responsive.
Around 80 KB of DOM is generated for this SVG image.
Example of HTML representation of SVG: https://ibb.co/0jYgBBk
The reason I am using SVG instead of canvas is that I have some minor interaction with the image once it is loaded, like adding and removing shapes on the SVG (for example, highlighting a particular piece of hardware).
Is there any solution to improve the performance?
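One common mitigation for this kind of lag is to embed each of the 15 distinct JPGs only once inside `<defs>` and stamp the 400+ placements as lightweight `<use>` references, so the browser parses and decodes 15 images instead of 400. Below is a minimal sketch of generating such markup; the file names, sizes, and positions are hypothetical placeholders, not taken from the question.

```python
# Sketch: build an SVG that defines each distinct jpg once in <defs>
# and places every occurrence with a <use> reference.
# Note: modern browsers accept href on <use>; older ones need xlink:href.

def build_svg(placements, size=(1000, 800)):
    """placements: list of (image_href, x, y) tuples."""
    distinct = sorted({href for href, _, _ in placements})
    index = {href: i for i, href in enumerate(distinct)}
    # Each distinct image is defined exactly once.
    defs = "".join(
        f'<image id="img{i}" href="{href}" width="100" height="100"/>'
        for i, href in enumerate(distinct)
    )
    # Every placement is just a reference, not a full <image> element.
    uses = "".join(
        f'<use href="#img{index[href]}" x="{x}" y="{y}"/>'
        for href, x, y in placements
    )
    return (f'<svg xmlns="http://www.w3.org/2000/svg" '
            f'width="{size[0]}" height="{size[1]}">'
            f'<defs>{defs}</defs>{uses}</svg>')

svg = build_svg([("piece_a.jpg", 0, 0), ("piece_a.jpg", 100, 0),
                 ("piece_b.jpg", 200, 0)])
```

svg.js exposes the same pattern directly via its `symbol()`/`use()` API, so the deduplication can be done without hand-building markup. Highlighting a piece then means adding a shape on top of (or a class to) the relevant `<use>` node.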
Related
I am loading big, raw data files with Python. Each is a collection of images (a video stream) that I want to display in an interface. As of now I am embedding a matplotlib graph and using the imshow() command, but it is very slow.
Reading the data itself is fast, but splitting it into a numpy array already takes 8 seconds for a 14 MB file. We have 50 GB files; that would take 8 hours. It's probably not the biggest problem, though.
The real problem is displaying the images. Let's say all images of the 14 MB file are in RAM (I'm assuming Python keeps them there, which is also my problem with Python: you don't know what is actually happening). Right now I am replotting the image every time and then redrawing the canvas, and that seems to be the bottleneck. Is there any way to reduce it?
Images are usually 680x480 (but variable) with a variable datatype, usually uint8. The interface is a GUI with a slider bar you can drag to get to a certain frame. An additional feature will be a play button that steps through the frames in near real time. It's a Windows application.
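The 8-second splitting step can often be avoided entirely by memory-mapping the raw file, so each frame is read lazily from disk only when the slider lands on it. A minimal sketch under the assumption of fixed-size uint8 frames (the 480x680 shape and the file path are illustrative; a real stream with variable sizes would need a frame index):

```python
import numpy as np

# Assumed fixed frame geometry (height, width) for the raw stream.
FRAME_H, FRAME_W = 480, 680
FRAME_BYTES = FRAME_H * FRAME_W

def open_stream(path):
    """Memory-map a raw uint8 stream as (n_frames, h, w) without
    copying the whole file into RAM."""
    mm = np.memmap(path, dtype=np.uint8, mode="r")
    n_frames = mm.size // FRAME_BYTES
    # Slicing before reshape drops any trailing partial frame.
    return mm[: n_frames * FRAME_BYTES].reshape(n_frames, FRAME_H, FRAME_W)

# frames[i] only touches that frame's pages on disk, so a slider
# callback can stay cheap: update the existing artist with
# im.set_data(frames[i]) and redraw, instead of calling imshow() again.
```

On the display side, the usual matplotlib fix is exactly that last comment: create the AxesImage once with imshow(), then call set_data() on it per frame (optionally with blitting) rather than replotting, since re-running imshow() rebuilds the artist every time.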
Let's say I am creating an icon in Illustrator, which should be saved as an SVG. If I create a large, nice vector icon, say 800x800 pixels, I can always scale it as I want, and it's "easier" to work with. The problem is that when this icon is saved, the file size is big.
If I instead make a small icon in Illustrator, on a document sized to the proportions the icon will actually be used at, say 20x20, the file size is much smaller.
So what is good practice when working with icon sizes?
Thanks!
You are mistaken. A vector graphic that is 800x800 should have more or less the same file size as a vector graphic that is 20x20, because the shapes that make up the file are defined mathematically and can be rendered at any size.
If your files have significantly different sizes, you have probably not created a purely vector file; it likely contains bitmap images. Bitmap file sizes grow as the image dimensions grow.
However, in answer to your primary question: it doesn't really matter what the page size is when creating vector icons. It could be either of the two sizes you mentioned, or any other size. Whatever dimensions you choose, the icon should render nicely at any final size you need.
I have a series of ~300 high-resolution images (~0.5 gigapixel each) which I want embedded as PhotoOverlays in Google Earth. I have them in either of two formats: ~250 MB GeoTIFFs (georeferenced and warped) and ~100 MB JPGs (which I can localize in GE with explicit coordinates). These images cover very small areas (~100 m^2). Ultimately, I will want to share the images online.
Are the file sizes big enough to need Image Pyramids?
If so, is gdal_retile an appropriate tool to produce the pyramids and the KML?
I am working on a classic RPG that requires a pixelated style of graphics. I want to do this by making a small image and scaling it up. However, when I do this, it gets fuzzy. Is there any way to scale it while keeping a crisp edge for every pixel, or do I just need to make a bigger image?
You cannot scale an image up and expect it to stay crisp if it wasn't made at a big enough resolution in the first place. In your case you would have to make a bigger image and scale it down to produce the small one.
If you do not use the large image all the time, however, you should consider keeping two versions of the same image (one small, one large) for optimization's sake.
I'm curating a web site (Joomla, as it happens) and I notice that every (JPEG) image file uploaded is stored in a series of 'sizes', of which the largest is ~25 times the original size (9 KB -> 240 KB), just to support display in a larger viewport, I assume. Is there any practical way, either with JPEG transforms, other common web image formats, or any other wacky idea, to build image files with larger pixel dimensions while retaining approximately the same file size as the original?
Why not simply resize the images with CSS/HTML? If you simply resize them without any quality improvement, you are wasting bandwidth by sending the bigger image.
If you are looking to generate lower resolution thumbnails from a single large image, use JPEGMini. Alternatively, Photoshop has an excellent "Save for web and devices" feature that dramatically reduces the size of images with customizable loss.