I tried to capture an image from GPUImageStillCamera and got an image with dimensions 2592 x 1936 and a size of 5.1 MB (taken on an iPad mini). This is too large for my app. How can I reduce the image dimensions while capturing?
As I said in response to this GitHub issue you created, use -forceProcessingAtSize: on the first filter in your filter chain to lock it to a given size. If that size is larger than the default video size, you might want to do this right before you capture your photo, then set it back to 0 (using an unrestricted size) after the photo is captured.
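A minimal sketch of that sequence, assuming a typical chain where stillCamera feeds firstFilter (both names are placeholders for your own setup):

[firstFilter forceProcessingAtSize:CGSizeMake(640.0, 480.0)];

[stillCamera capturePhotoProcessedUpToFilter:firstFilter withCompletionHandler:^(UIImage *processedImage, NSError *error) {
    // processedImage is now capped at 640 x 480
    // restore unrestricted sizing for subsequent frames
    [firstFilter forceProcessingAtSize:CGSizeZero];
}];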
So my goal is to add CVPixelBuffers to my AVAssetWriter / AVAssetWriterInputPixelBufferAdaptor at very high speed. My previous solution used CGContextDrawImage, but that is very slow (0.1 s per draw). The reason seems to be color matching and conversion, but that's another question, I think.
My current solution is to read the bytes of the image directly, skipping the draw call. I do this:
CGImageRef cgImageRef = [image CGImage];
CGImageRetain(cgImageRef);
CVPixelBufferRef pixelBuffer = NULL;

// grab the image's backing bytes without drawing
CGDataProviderRef dataProvider = CGImageGetDataProvider(cgImageRef);
CGDataProviderRetain(dataProvider);
CFDataRef da = CGDataProviderCopyData(dataProvider);

// wrap those bytes in a pixel buffer (no copy is made,
// so da must outlive the buffer)
CVPixelBufferCreateWithBytes(NULL,
                             CGImageGetWidth(cgImageRef),
                             CGImageGetHeight(cgImageRef),
                             kCVPixelFormatType_32BGRA,
                             (void *)CFDataGetBytePtr(da),
                             CGImageGetBytesPerRow(cgImageRef),
                             NULL,
                             0,
                             NULL,
                             &pixelBuffer);

[writerAdaptor appendPixelBuffer:pixelBuffer withPresentationTime:presentTime];
-- releases here --
This works fine in the Simulator and inside a regular app. But when I run the code inside the SpringBoard process, it comes out as the images below. Running outside the sandbox is a requirement; this is meant for jailbroken devices.
I have tried playing around with, e.g., pixel format types, but it mostly comes out as differently corrupted images.
The proper image/video file looks fine:
But this is what I get in the broken state:
Answering my own question, as I think I found the answer(s). The resolution difference was a simple coding error: I wasn't using the device bounds on the later captures.
As for the color issues: in short, the CGImages I got when running outside the sandbox used 8 bytes per pixel (16 bits per channel), while the images I get when running inside the sandbox use 4. So I was simply writing the wrong data into the buffer.
So, instead of slapping all of the bytes from the larger image into the smaller buffer, I loop through the pixel buffer row by row, byte by byte, and pick the RGBA values for each pixel. I essentially had to skip every other byte of the source image to get the right data into the right place in the buffer, as in the sketch below.
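A minimal sketch of that loop, assuming the source CGImage is 64-bit RGBA (16 bits per channel), the destination was created with CVPixelBufferCreate as kCVPixelFormatType_32BGRA, and the variables from the snippet above; which of the two bytes per channel carries the useful data depends on the image's byte order:

CVPixelBufferLockBaseAddress(pixelBuffer, 0);
const uint8_t *src = CFDataGetBytePtr(da);
uint8_t *dst = (uint8_t *)CVPixelBufferGetBaseAddress(pixelBuffer);
size_t width  = CGImageGetWidth(cgImageRef);
size_t height = CGImageGetHeight(cgImageRef);
size_t srcBytesPerRow = CGImageGetBytesPerRow(cgImageRef);        // 8 bytes per pixel
size_t dstBytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer); // 4 bytes per pixel

for (size_t y = 0; y < height; y++) {
    const uint8_t *srcPx = src + y * srcBytesPerRow;
    uint8_t *dstPx = dst + y * dstBytesPerRow;
    for (size_t x = 0; x < width; x++) {
        // keep one byte per 16-bit channel and swizzle RGBA -> BGRA
        dstPx[0] = srcPx[4]; // B
        dstPx[1] = srcPx[2]; // G
        dstPx[2] = srcPx[0]; // R
        dstPx[3] = srcPx[6]; // A
        srcPx += 8;
        dstPx += 4;
    }
}
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);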
I'm building a Raspberry Pi Zero W security camera and am attempting to integrate motion detection using Node.js. Images are taken with the Pi camera module at 8 megapixels (3280 x 2464 pixels, roughly 5 MB per image).
On a Pi Zero, resources are limited, so loading an entire image from file into Node.js may limit how fast I can capture and then evaluate large photographs. Surprisingly, I can capture about two 8 MP images per second in a background time-lapse process, and I hope to keep capturing full-size images roughly once per second. One thing that could help is extracting the embedded thumbnail from the large image (the thumbnail size is customizable in the raspistill application).
Do you have thoughts on how I could quickly extract the thumbnail without loading the full image into Node.js? So far I've found a partial answer here. I'm guessing I would manage this through a buffer somehow?
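One approach, sketched here untested: the EXIF block holding the thumbnail that raspistill embeds (its size is set with the -th width:height:quality option) lives in the JPEG's APP1 segment, which the format caps at 64 KB, so reading only the first 64 KB of the file is enough. This sketch assumes the exif-parser npm package; extractThumbnail is a made-up helper name:

const fs = require('fs');
const exifParser = require('exif-parser');

// read just the first 64 KB -- the APP1 (EXIF) segment cannot be larger --
// instead of loading the whole multi-megabyte image
function extractThumbnail(path, callback) {
    fs.open(path, 'r', (err, fd) => {
        if (err) return callback(err);
        const head = Buffer.alloc(65536);
        fs.read(fd, head, 0, head.length, 0, (readErr) => {
            fs.close(fd, () => {});
            if (readErr) return callback(readErr);
            const result = exifParser.create(head).parse();
            if (!result.hasThumbnail('image/jpeg')) {
                return callback(new Error('no embedded thumbnail'));
            }
            callback(null, result.getThumbnailBuffer());
        });
    });
}

// example: feed the small thumbnail to motion detection instead of the full image
extractThumbnail('/home/pi/timelapse/latest.jpg', (err, thumb) => {
    if (!err) fs.writeFileSync('/tmp/thumb.jpg', thumb);
});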
I am trying to get pairs of images out of a Minoru stereo webcam, currently through OpenCV on Linux.
It works fine when I force a low resolution:
import cv2

left = cv2.VideoCapture(0)
left.set(cv2.cv.CV_CAP_PROP_FRAME_WIDTH, 320)
left.set(cv2.cv.CV_CAP_PROP_FRAME_HEIGHT, 240)

right = cv2.VideoCapture(1)
right.set(cv2.cv.CV_CAP_PROP_FRAME_WIDTH, 320)
right.set(cv2.cv.CV_CAP_PROP_FRAME_HEIGHT, 240)

while True:
    _, left_img = left.read()
    _, right_img = right.read()
    ...
However, I'm using the images to create depth maps, and a bigger resolution would be good. But if I leave the default, or force the resolution to 640x480, I hit errors:
libv4l2: error turning on stream: No space left on device
I have read about USB bandwidth limitations, but:

- this happens on the first iteration (the first read() from right)
- I don't need anywhere near 60 or even 30 FPS, but I couldn't manage to reduce the requested FPS via VideoCapture parameters (if that makes sense)
- adding sleeps doesn't seem to help, even between the left/right reads
- strangely, if I do a lot of processing inside the while loop, I start noticing lag: things happening in the real world show up much later in the images read. This suggests there is a buffer somewhere that can, and does, accumulate quite a few images
I tried a workaround of creating and releasing a separate VideoCapture for each image read, but this is a bit too slow overall (under 1 FPS), and more importantly, the images are too far out of sync for stereo matching.
I'm trying to understand why this fails in order to find solutions. It looks as if v4l allocates a single, global, too-small buffer that is somehow shared by the two capture objects.
Any help would be appreciated.
I had the same problem and found this answer: https://superuser.com/questions/431759/using-multiple-usb-webcams-in-linux
Since both Minoru cameras report their format as 'YUYV' (uncompressed), this is likely a USB bandwidth issue. I lowered the frame rate to 20 FPS (it didn't work at 24) and I can now see both 640x480 images.
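For reference, a sketch of that change (property names are for OpenCV 3+; older builds use the cv2.cv.CV_CAP_PROP_* constants):

import cv2

def open_camera(index, width=640, height=480, fps=20):
    cap = cv2.VideoCapture(index)
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, width)
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, height)
    cap.set(cv2.CAP_PROP_FPS, fps)  # request the lower rate before the first read
    return cap

left = open_camera(0)
right = open_camera(1)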
I'm trying to use mogrify to decrease the quality of an image, ultimately to decrease its file size, but rather than decreasing, the size is increasing. I'm using the following command:
mogrify -quality 20% 1.png
The file size goes from 2.5 MB to 4 MB. Any idea?
PNG is a lossless format, so the "quality" setting never discards pixel data; it only controls how the pixels are compressed.
The mogrify documentation confirms this: for PNG, the quality value is read as two digits, where the tens digit sets the zlib compression level (0-9) and the ones digit selects the PNG row filter.
So -quality 20 requests zlib compression level 2 with no row filtering, which compresses considerably less aggressively than the default of 75 (level 7 with adaptive filtering); that is why your file grows. (If you want to verify what happened, inspect your before and after images with a tool such as pngcheck.)
As to your goal: it is unclear whether you want to decrease the image dimensions in pixels, the file size on disk, or both. For the first, you can use -resize. For the second, try a PNG-recompressing tool such as pngcrush. For both, use the first method and then the second.
Another option may be to lower the number of color components, for example from 24-bit RGB to indexed color. Finally, you can always convert the image from PNG to JPEG, after which you can experiment with the "quality" parameter as intended.
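For example (pngcrush is a separate tool, not part of ImageMagick):

mogrify -resize 50% 1.png            # halve the pixel dimensions, in place
pngcrush -brute 1.png 1-crushed.png  # recompress losslessly
convert 1.png PNG8:1-indexed.png     # reduce to 256 indexed colors
convert 1.png -quality 80 1.jpg      # switch to lossy JPEG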
I want to grab a single frame from my webcam with Node.js and OpenCV. The first frame captured is as expected. But if I move in front of the camera and take a second one, I get an image that was obviously taken right after the first one and doesn't show my movement. I have to take 5 pictures to see the movement. Searching the net gave me a hint that the problem is a camera buffer that holds about 4 images (OS dependent).
Here is an example of someone having the same issue:
http://opencvarchive.blogspot.de/2010/05/opencv-arm-linux-servo-frame-delay.html
At the moment I'm using a workaround: I capture 5 images in a loop and then save the last one to disk. That way the buffer is drained and the genuinely current image is kept.
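A minimal sketch of that workaround, assuming the peterbraden/node-opencv binding (captureFresh is a made-up helper name):

const cv = require('opencv');

const camera = new cv.VideoCapture(0);
const STALE_FRAMES = 4; // the driver appears to queue about 4 old frames

// read and discard the queued frames, then pass the fresh one on
function captureFresh(callback, remaining = STALE_FRAMES) {
    camera.read((err, frame) => {
        if (err) return callback(err);
        if (remaining > 0) return captureFresh(callback, remaining - 1);
        callback(null, frame);
    });
}

captureFresh((err, frame) => {
    if (!err) frame.save('./snapshot.jpg');
});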
Does anyone know a better solution? Taking five images instead of one takes too much time for my application...
Thanks in advance! :)