I'm resizing (downsampling) an image of unknown size to a fixed target size.
For example:
The first image is 45x45.
The second image is 57x57.
The output size must be configurable, and each output pixel should take the maximum value of its input region (max pooling).
I've tried skimage.measure.block_reduce, which works well, but it doesn't let me hit an exact output size.
I've tried cv2.resize, but it has no max-pooling option.
Is there a function or library that can do this?
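One option (my suggestion, not something mentioned in the question) is adaptive max pooling from PyTorch, which pools to an exact output size even when the input dimensions are not a multiple of it. A minimal sketch, assuming the image is a 2D NumPy array and PyTorch is installed:
import numpy as np
import torch
import torch.nn.functional as F

def max_pool_to_size(image, out_size):
    # adaptive_max_pool2d splits the input into roughly equal bins and takes
    # the max of each bin, so any input size works for any output size
    t = torch.from_numpy(image.astype(np.float32)).unsqueeze(0).unsqueeze(0)  # (1, 1, H, W)
    pooled = F.adaptive_max_pool2d(t, out_size)
    return pooled.squeeze().numpy()

# e.g. a 45x45 and a 57x57 image both reduced to 16x16
a = max_pool_to_size(np.random.rand(45, 45), 16)
b = max_pool_to_size(np.random.rand(57, 57), 16)
print(a.shape, b.shape)  # (16, 16) (16, 16)
If you want to stay with NumPy/skimage only, an alternative is to pad the image up to the nearest multiple of the target size and then call block_reduce with np.max, at the cost of slightly uneven pooling bins near the border.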
Are the input images resized to a fixed width and height during training in Detectron2? If yes, please explain why. Thanks!
Yes, they are. The reason is that large images cannot fit into memory; even with an 8 GB GPU it is not possible to train on high-resolution images.
Besides, there is a trade-off between image size and batch size: the larger the image, the smaller the batch size.
In Detectron2, you can change the minimum image size in the configuration like this:
from detectron2.config import get_cfg
cfg = get_cfg()
# minimum image size for the train set
cfg.INPUT.MIN_SIZE_TRAIN = (800,)
# maximum image size for the train set
cfg.INPUT.MAX_SIZE_TRAIN = 1333
# minimum image size for the test set
cfg.INPUT.MIN_SIZE_TEST = 800
# maximum image size for the test set
cfg.INPUT.MAX_SIZE_TEST = 1333
The minimum size applies to the shorter edge of the image, and the other edge is scaled accordingly (capped by the maximum size), so images are not distorted.
You can also set multiple image sizes, so that during training one is selected randomly per image, like this:
cfg.INPUT.MIN_SIZE_TRAIN = (600, 900, 900)
cfg.INPUT.MIN_SIZE_TRAIN_SAMPLING = "choice"
For more information, see here.
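To make the effect of these settings concrete, here is a small sketch of the shortest-edge resize logic (my own illustration of what Detectron2's default ResizeShortestEdge augmentation roughly does, not Detectron2 code):
def resize_shortest_edge(h, w, min_size=800, max_size=1333):
    # scale so the shorter edge becomes min_size, then cap the longer edge at max_size
    scale = min_size / min(h, w)
    if max(h, w) * scale > max_size:
        scale = max_size / max(h, w)
    return round(h * scale), round(w * scale)

print(resize_shortest_edge(480, 640))    # (800, 1067): shorter edge hits 800
print(resize_shortest_edge(1080, 1920))  # (750, 1333): capped by the 1333 limit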
In YOLO v2, the network randomly chooses a new size every 10 epochs. How does this happen in darknet? I'm using Ubuntu 18.04.
I think the network size is changed every 10 iterations (not epochs).
In your cfg file, check the random flag.
random = 1 means YOLO changes the network size every 10 iterations; this helps increase precision by training the network on different resolutions.
According to the YOLO paper:
However, since our model only uses convolutional and pooling layers it can be resized on the fly. We want YOLOv2 to be robust to running on images of different sizes so we train this into the model. Instead of fixing the input image size we change the network every few iterations. Every 10 batches our network randomly chooses a new image dimension size. Since our model downsamples by a factor of 32, we pull from the following multiples of 32: {320, 352, ..., 608}. Thus the smallest option is 320 × 320 and the largest is 608 × 608. We resize the network to that dimension and continue training.
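The mechanism described above is easy to sketch (an illustration only, not darknet's actual implementation): every 10 batches, draw a new input resolution from the multiples of 32 between 320 and 608.
import random

def pick_network_size(batch_index, current_size, step=10, stride=32,
                      min_size=320, max_size=608):
    # every `step` batches, pick a new resolution that is a multiple of `stride`
    if batch_index % step != 0:
        return current_size
    return random.choice(range(min_size, max_size + 1, stride))

size = 416
for batch in range(1, 31):
    size = pick_network_size(batch, size)
    # ... resize the network input to (size, size) and train on this batch ...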
I see this warning continuously in Cassandra's debug.log:
WARN [SharedPool-Worker-2] 2018-05-16 08:33:48,585 BatchStatement.java:287 - Batch of prepared statements for [test, test1] is of size 6419, exceeding specified threshold of 5120 by 1299.
In this message:
6419 - input payload (batch) size, in bytes
5120 - threshold size, in bytes
1299 - number of bytes above the threshold
As per this Cassandra-related ticket, https://github.com/krasserm/akka-persistence-cassandra/issues/33, I see that it is due to the increase in input payload size, so I increased commitlog_segment_size_in_mb in cassandra.yaml to 60 MB and we are no longer seeing this warning.
Is this warning harmful? Will increasing commitlog_segment_size_in_mb affect performance in any way?
This is not directly related to the commit log size, and I wonder why changing it made the warning disappear...
The batch size threshold is controlled by the batch_size_warn_threshold_in_kb parameter, which defaults to 5 KB (5120 bytes).
You can increase this parameter to a higher value, but you really need a good reason for using batches - it would help to understand the context in which you use them...
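For context, if a batch is only being used to group unrelated writes, individual asynchronous statements are usually a better fit than one large batch. A minimal sketch with the DataStax Python driver; the keyspace comes from the warning above, while the table and column names are made up for illustration:
from cassandra.cluster import Cluster

cluster = Cluster(["127.0.0.1"])   # contact point is a placeholder
session = cluster.connect("test")

insert = session.prepare("INSERT INTO events (id, payload) VALUES (?, ?)")

# fire the writes asynchronously instead of packing them into one large batch
futures = [session.execute_async(insert, (i, "payload-%d" % i)) for i in range(100)]
for f in futures:
    f.result()  # block until each write completes and surface any errors

cluster.shutdown()
Batches still make sense when all statements target the same partition and need to be applied together; in that case keep them small so they stay under the warn threshold.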
commitlog_segment_size_in_mb sets the block size of the commit log segments, which is also the granularity used for commit log archiving and point-in-time backup; archiving is only active if you have configured archive_command or restore_command in commitlog_archiving.properties.
The default size is 32 MB.
As per the book Expert Apache Cassandra Administration:
you must ensure that the value of commitlog_segment_size_in_mb is at least twice the value of max_mutation_size_in_kb (after converting MB to KB).
For reference, see this error:
Mutation of 17076203 bytes is too large for the maxiumum size of 16777216
(max_mutation_size_in_kb defaults to half of commitlog_segment_size_in_mb, so with the default 32 MB segment size the mutation limit is 16 MB = 16777216 bytes, which is exactly the limit in that message.)
I need strategic help to get going.
I need to animate a large network of THREE.LineSegments (about 150k segments, static) with custom (vertex) colors (RGBA). For each segment I have 24 measured values per day (hourly) over 7 days [so, with two vertices per segment, 5.04 × 10^7 measured values, or a vec4 color buffer of 2.016 × 10^8 floats, roughly 770 MB as a Float32Array].
The animation steps through each hour of each day in 2.5-second steps and needs to apply an interpolated color to each segment on a per-frame basis (via a time delta). To be able to apply an alpha value to the vertex colors I need to use a THREE.ShaderMaterial with a vec4 color attribute.
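As a quick sanity check on the numbers above (assuming two vertices per segment and one RGBA vec4 per vertex per hour), the arithmetic works out like this:
segments = 150_000
vertices = segments * 2          # THREE.LineSegments uses two vertices per segment
hours = 24 * 7                   # hourly values over 7 days
values = vertices * hours        # 50,400,000 measured values
floats = values * 4              # one RGBA vec4 per value -> 201,600,000 floats
size_mb = floats * 4 / 1024**2   # Float32Array: 4 bytes per float -> about 769 MB
print(values, floats, size_mb)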
What I can't get my head around is how best to handle that amount of data per vertex. Some ideas are to:
calculate the RGBA values in the render loop (interpolating between the current hour's color array and the next hour's via a time delta) and update the color buffer attribute [I expect a massive drop in framerate]
have currentColor and nextColor buffer attributes (current hour and next hour), upload both anew to the GPU at every step (2.5 s), and do the interpolation in the shader (with an additional time-delta uniform) [seems the best option to me]
upload all data to the GPU initially, either as multiple custom attributes or as one very large buffer array, and do the iteration and interpolation in the shader [might be even better if possible; I know I can set the offset at which the buffer array is read for each vertex, but I'm not sure it works the way I think it does...]
do something in between, like uploading one day's data as a chunk instead of either all the data or hourly data
Do the scenario and the ideas make sense? If so:
What would be the most suitable way of doing this?
I appreciate any additional advice.
[Screenshot of DataStax DevCenter]
How can I increase response window size (highlighted in image) from 300 to 1000 in DataStax DevCenter?
Notice at the very top of your image, it says "with limit" followed by a text box containing the number "300." Try increasing that to 1000.
Also, how many glusr_ids are you specifying in your IN clause? Judging by the size of the window, it looks like a lot. Multi-key queries are considered an anti-pattern because of all the extra network traffic they create. That might be why it's taking 3384 ms to return just 300 rows.
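If the list of keys really is large, a common alternative (sketched here with the DataStax Python driver; the column name glusr_id comes from your query, but the keyspace and table names are placeholders) is to issue one query per key concurrently and merge the results client-side:
from cassandra.cluster import Cluster
from cassandra.concurrent import execute_concurrent_with_args

cluster = Cluster(["127.0.0.1"])          # contact point is a placeholder
session = cluster.connect("my_keyspace")  # keyspace name is a placeholder

# one query per partition key instead of a single large IN (...) clause
select = session.prepare("SELECT * FROM my_table WHERE glusr_id = ?")

glusr_ids = [101, 102, 103]  # your actual list of ids goes here
results = execute_concurrent_with_args(
    session, select, [(gid,) for gid in glusr_ids], concurrency=50)

rows = []
for success, result in results:
    if success:
        rows.extend(result)

cluster.shutdown()
This way each query goes to the replicas that actually own the key instead of making a single coordinator fan the whole IN list out.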