I am loading large raw data files with Python. Each file is a collection of images (a video stream) that I want to display on an interface. As of now I am embedding a matplotlib graph and drawing with the imshow() command. However, it is very slow.
The fast part is reading the data itself, but splitting it into a NumPy array already takes 8 seconds for a 14 MB file. We have 50 GB files; at that rate it would take about 8 hours. It's probably not the biggest problem, though.
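For what it's worth, if the raw file is just fixed-size frames stored back to back, most of that splitting cost can be avoided with a memory map; a self-contained sketch (the frame dimensions and the small fake file here are made up, point it at the real file in practice):

```python
import numpy as np, tempfile, os

# Assumed layout: concatenated uint8 frames, 480 rows x 640 cols x 3 channels.
frame_shape = (480, 640, 3)
frame_bytes = int(np.prod(frame_shape))

# Create a small fake "raw stream" of 5 frames so the sketch is self-contained;
# in practice you would point this at the real 50 GB file.
path = os.path.join(tempfile.mkdtemp(), "stream.raw")
np.arange(5 * frame_bytes, dtype=np.uint8).tofile(path)

# memmap views the file lazily instead of copying it all into RAM up front
raw = np.memmap(path, dtype=np.uint8, mode="r")
n_frames = raw.size // frame_bytes
frames = raw[: n_frames * frame_bytes].reshape((n_frames,) + frame_shape)
```

Indexing `frames[i]` then only touches the pages of the file that frame actually lives in.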
The real problem is displaying the images. Let's say all the images of the 14 MB file are in RAM (I'm assuming Python keeps them there, which is also my problem with Python: you don't know what the hell is happening). Right now I am replotting the image every time and then redrawing the canvas, and that seems to be the bottleneck. Is there any way to reduce this bottleneck?
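A common way to reduce that bottleneck is to call imshow() once and then only swap the pixel data on the existing artist instead of replotting; a minimal sketch (made-up random frames, and the Agg backend is forced only so the sketch runs headless, a GUI backend works the same way):

```python
import matplotlib
matplotlib.use("Agg")          # headless backend just for this sketch
import matplotlib.pyplot as plt
import numpy as np

frames = np.random.randint(0, 256, size=(10, 480, 640), dtype=np.uint8)

fig, ax = plt.subplots()
# Create the AxesImage artist exactly once
im = ax.imshow(frames[0], cmap="gray", vmin=0, vmax=255)

for frame in frames[1:]:
    im.set_data(frame)         # reuse the artist instead of calling imshow() again
    fig.canvas.draw_idle()     # request a redraw; blitting can speed this up further
```

Fixing vmin/vmax also avoids matplotlib rescanning each frame for its range.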
Images are usually 680*480 pixels (but the size varies) with a variable datatype, usually uint8. The interface is a GUI with a slider bar that you can drag to get to a certain frame. An additional feature will be a play button that steps through the frames in near real time. It is a Windows application.
I'm working on some code to select and export geodata based on a bounding box. The data I want to select comes from 2 separate layers in a huge File GDB (16 GB) covering the entire Netherlands. I use a bounding box to avoid reading the entire dataset before making a selection.
This method works great when applied to a GeoPackage, but with a File Geodatabase the processing time is far longer (0.2 s vs. 300 s for a 200x200 metre selection). The File GDB I'm using has a spatial index set for the layers I'm reading. I'm using GeoPandas to read and select. Below you'll find an example for the layer 'Adres':
import geopandas as gpd

def ImportGeodata(FilePath, BoundingBox):
    # Read only the features that intersect the bounding box
    importBag = gpd.read_file(FilePath, layer='Adres', bbox=BoundingBox)
    importBag['mergeid'] = importBag['identificatie']
    return importBag
Am I overlooking something? Or is this a limitation when importing from a huge File GDB? I can't find an obvious mistake here. For now the workaround is another script that imports the layers I need and dumps them into a GeoPackage. The problem is that this runs for 3 to 4 hours (the resulting GeoPackage is almost 6 GB). I don't want to keep doing that, but it would be necessary about once a month in order to process a new version of the dataset.
Curious what you guys come up with.
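For context, the 0.2 s vs. 300 s gap is roughly the difference between a spatial index that is consulted and one that is ignored by the reader. The mechanism can be illustrated with a toy grid index in plain Python (nothing here is GeoPandas API; the points and cell size are made up):

```python
import random

random.seed(0)
# 50,000 random points over a 10 km x 10 km extent
points = [(random.uniform(0, 10000), random.uniform(0, 10000)) for _ in range(50000)]
bbox = (500.0, 500.0, 700.0, 700.0)   # a 200 x 200 selection, like the question

def brute_force(pts, box):
    # No index: every single point is tested against the box
    xmin, ymin, xmax, ymax = box
    return [p for p in pts if xmin <= p[0] <= xmax and ymin <= p[1] <= ymax]

# Toy spatial index: hash each point into a 1 km grid cell once...
cell = 1000.0
grid = {}
for p in points:
    grid.setdefault((int(p[0] // cell), int(p[1] // cell)), []).append(p)

def indexed(idx, box):
    # ...then only scan the handful of cells the bbox overlaps
    xmin, ymin, xmax, ymax = box
    hits = []
    for cx in range(int(xmin // cell), int(xmax // cell) + 1):
        for cy in range(int(ymin // cell), int(ymax // cell) + 1):
            for p in idx.get((cx, cy), []):
                if xmin <= p[0] <= xmax and ymin <= p[1] <= ymax:
                    hits.append(p)
    return hits
```

Both functions return the same selection, but the indexed one inspects a tiny fraction of the data. Separately, if you are on a recent GeoPandas, reading with `engine="pyogrio"` instead of the default fiona engine is often much faster on File GDBs and may be worth a try.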
Background:
I have a requirement to show a picture representation of storage hardware (configured from smaller hardware pieces). I am using the svg.js library to compose a storage-hardware SVG image from 100-500 smaller JPG images.
Problem:
I am seeing a performance lag: the page is unresponsive for around 30-40 seconds when a big configuration uses more than 400 smaller images to compose the SVG. In fact only 15 distinct JPG images are downloaded from the server, and they are very small: around 600 KB combined, with a download time of about 3 seconds for all of them. Yet the page takes 30-40 seconds to become fully responsive.
Around 80 KB of DOM is generated for this SVG image.
Example of HTML representation of SVG: https://ibb.co/0jYgBBk
The reason I am using SVG instead of canvas is that I have some minor interaction with the image once it is loaded, like adding and removing shapes on the SVG (for example, highlighting a particular piece of hardware).
Is there any solution to improve the performance?
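One common mitigation is to embed each of the 15 distinct JPGs only once, inside `<defs>`, and place the 400+ copies as lightweight `<use>` references, building the whole fragment off-DOM and inserting it in a single operation. A sketch of the resulting markup, generated in Python here just to show the structure (file names, tile size and grid layout are all made up):

```python
# Hypothetical inputs: 15 distinct tiles, ~400 placements referencing them.
tiles = [f"tile{i}.jpg" for i in range(15)]
placements = [(f"tile{i % 15}.jpg", (i % 20) * 32, (i // 20) * 32) for i in range(400)]

# Each JPG appears exactly once, inside <defs>...
defs = "".join(
    f'<image id="img-{name}" href="{name}" width="32" height="32"/>'
    for name in tiles
)
# ...and every placement is a cheap <use> node, not another <image>.
uses = "".join(f'<use href="#img-{name}" x="{x}" y="{y}"/>' for name, x, y in placements)
svg = f'<svg xmlns="http://www.w3.org/2000/svg"><defs>{defs}</defs>{uses}</svg>'
```

In svg.js the same idea is the symbol()/use() pair; inserting the finished fragment once avoids triggering hundreds of separate layout/reflow passes.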
I am experimenting with video classification using Keras on Cloud ML Engine. My dataset consists of video sequences saved as separate images (e.g. seq1_frame1.png, seq1_frame2.png, ...) which I have uploaded to a GCS bucket.
I use a CSV file referencing the start and end frames of the different subclips, and a generator which feeds batches of clips to the model. The generator is responsible for loading frames from the bucket, reading them as images, and concatenating them into NumPy arrays.
My training is fairly long, and I suspect the generator is my bottleneck due to the numerous reading operations.
In the examples I found online, people usually save pre-formatted clips as TFRecord files directly to GCS. I feel this solution isn't ideal for very large datasets, as it implies duplicating the data, even more so if we decide to extract overlapping subclips.
Is there something wrong with my approach? And more importantly, is there a "gold standard" for using large video datasets for machine learning?
PS: I explained my setup for reference, but my question is not bound to Keras, generators or Cloud ML.
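For reference, the generator described above can be sketched as follows. `load_frame` is a stand-in for the GCS image read and is injected as a parameter so the sketch stays self-contained; the file-name pattern and shapes are assumptions:

```python
import numpy as np

def clip_generator(clips, load_frame, batch_size=2):
    """clips: list of (prefix, start, end) rows, e.g. parsed from the CSV."""
    batch = []
    for prefix, start, end in clips:
        # Stack the frames of one subclip into a (T, H, W, C) array
        frames = [load_frame(f"{prefix}_frame{i}.png") for i in range(start, end)]
        batch.append(np.stack(frames))
        if len(batch) == batch_size:
            yield np.stack(batch)   # (batch, T, H, W, C)
            batch = []

# Stand-in loader: returns a blank frame instead of reading from GCS.
fake_load = lambda name: np.zeros((64, 64, 3), dtype=np.uint8)
batches = list(clip_generator([("seq1", 0, 4), ("seq2", 0, 4)], fake_load))
```

Every yielded batch triggers `batch_size * T` separate object reads in this shape, which is why the read path dominates.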
With this kind of problem you are almost always going to be trading time for space; you just have to work out which is more important.
In theory, for every frame you have height*width*3 bytes, assuming 3 colour channels. One possible way to save space is to use only one channel (probably green, or, better still, convert your complete dataset to greyscale). That would reduce your full-size video data to a third of the size. Colour data in video tends to be stored at lower resolution than luminance data anyway, so it might not affect your training, but that depends on your source files.
As you probably know, PNG is a lossless image compression format. Every time you load one, the generator has to decompress it first and then concatenate it to the clip. You could save even more space by using a different compression codec, but that would mean every clip needs full decompression and would probably add to your time. You're right that the repeated decompression takes time, and saving the video uncompressed would take up quite a lot of space. There are places you could save space, though:
reduce to greyscale (or green scale as above)
temporally subsample frames (do you need EVERY consecutive frame, or could you sample every second one?)
do you use whole frames or just patches? Can you crop or rescale the video sequences?
are you using optical flow? It's pretty processor-intensive; consider it as a pre-processing step too, so you only have to do it once per clip (again, this trades space for time)
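The first two points above can be sketched with NumPy. The weights are the standard Rec. 601 luma coefficients; the clip dimensions are made up:

```python
import numpy as np

# Fake RGB clip: 16 frames of 120 x 160 pixels
clip = np.random.randint(0, 256, size=(16, 120, 160, 3), dtype=np.uint8)

# Greyscale via Rec. 601 luma weights: one channel instead of three
grey = (clip @ np.array([0.299, 0.587, 0.114])).astype(np.uint8)

# Temporal subsampling: keep every second frame.
# Combined, this clip is now one sixth of the original byte count.
subsampled = grey[::2]
```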
What is the suggested (optimal) image size to work with the Face API? I can't find anything about this.
It looks like images should be neither too small nor too large. Is there any recommendation on how to prepare them before training the model?
Thanks.
This may help from the "Add Face" documentation:
JPEG, PNG, GIF (the first frame), and BMP format are supported. The allowed image file size is from 1KB to 4MB.
"targetFace" rectangle should contain one face. Zero or multiple faces will be regarded as an error. If the provided "targetFace" rectangle is not returned from Face - Detect, there’s no guarantee to detect and add the face successfully.
Out of detectable face size (36x36 - 4096x4096 pixels), large head-pose, or large occlusions will cause failures.
Adding/deleting faces to/from a same face list are processed sequentially and to/from different face lists are in parallel.
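A small local pre-check based on the limits quoted above can reject bad inputs before calling the service. This helper is not part of the Face API, just an assumed guard:

```python
def face_image_ok(file_size_bytes, width, height):
    """Rough local guard against the 'Add Face' limits quoted above."""
    # Allowed file size is 1 KB to 4 MB
    if not (1 * 1024 <= file_size_bytes <= 4 * 1024 * 1024):
        return False
    # A detectable face is at least 36x36 px, so a smaller image cannot qualify.
    # (The 4096 px upper bound applies to the face rectangle, not the whole image.)
    return width >= 36 and height >= 36
```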
I am doing some studies on eye vascularization; my project involves a machine which can detect the different blood vessels in the retinal membrane at the back of the eye. What I am looking for is a way to segment the picture and analyze each segment on its own. The segmentation consists of six squares which I want to analyze separately for the density of white pixels.
I would be very thankful for any kind of input; I am pretty new to the programming world and I actually just have a bare concept of how it should work.
Thanks and Cheerio
Sam
[Concept drawing: OCTA picture]
You could probably accomplish this by using NumPy to load the image and split it into sections. You could then analyze the sections using scikit-image or OpenCV (though this could be difficult to get working). To view the image, you can either save it to a file using NumPy, or use matplotlib to open it in a new window.
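A sketch of that idea with NumPy alone. A fake binary image stands in for the scan, and the 2 x 3 grid is an assumption about how the six squares are laid out:

```python
import numpy as np

# Fake binary (black/white) scan; in practice, load the real image here
img = (np.random.rand(300, 300) > 0.5).astype(np.uint8) * 255

rows, cols = 2, 3                                # six squares in a 2 x 3 grid
h, w = img.shape[0] // rows, img.shape[1] // cols

densities = []
for r in range(rows):
    for c in range(cols):
        tile = img[r * h:(r + 1) * h, c * w:(c + 1) * w]   # slice out one square
        densities.append((tile == 255).mean())             # fraction of white pixels
```

Each entry of `densities` is the white-pixel density of one square, between 0 and 1.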
First of all, please note that in image processing "segmentation" describes the process of grouping neighbouring pixels by context.
https://en.wikipedia.org/wiki/Image_segmentation
What you want to do can be done in various ways.
The most common way is by using ROIs or AOIs (region/area of interest). That's basically some geometric shape like a rectangle, circle, polygon or similar defined in image coordinates.
The image processing is then restricted to only process pixels within that region. So you don't slice your image into pieces but you restrict your evaluation to specific areas.
Another way, as you suggested, is to cut the image into pieces and process them one by one. Those sub-images are usually also created using ROIs.
A third option which is rather limited but sufficient for simple tasks like yours is accessing pixels directly using coordinate offsets and several nested loops.
Just google "python image processing" in combination with "library" "roi" "cropping" "sliding window" "subimage" "tiles" "slicing" and you'll get tons of information...