I crawled images from the Google Image Search window, but the images are too small, so I want to increase their size.
I increased the size using PIL, but the resulting picture is broken (the image quality is too low).
How can I increase the image size with good quality?
I used PIL this way:
from PIL import Image
im = Image.open('filename')
im_new = im.resize((500, 500))  # plain resize; no resampling filter specified
im_new.save('filename2')
No, I think you may have misunderstood the real problem.
The images you got are just thumbnails, so they contain very little information. Efforts to improve the image quality with some algorithm will be hard-pressed to make a difference; probably only machine-learning tricks (super-resolution) could make the photos a little nicer.
In my opinion, what you need to do is get the original images behind the Google search results rather than use the thumbnails. You can do this with a bit more analysis of the image search results. Good luck :)
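That said, if you must work with the thumbnails, a better resampling filter at least smooths the blocky artifacts of a plain resize, though it cannot recover detail the thumbnail never had. A minimal Pillow sketch, reusing the filenames from the question:

from PIL import Image

im = Image.open('filename')
# LANCZOS is Pillow's highest-quality resampling filter; it gives a smoother
# enlargement but cannot invent detail that is missing from the thumbnail.
im_new = im.resize((500, 500), Image.LANCZOS)
im_new.save('filename2')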
I tried applying Tesseract OCR to an image, but before applying OCR I want to improve the quality of the image so that OCR accuracy increases.
How do I detect image brightness and increase or decrease the brightness of the image as required?
How do I detect image sharpness?
There is no easy way to do that, but if your images are similar to each other, you can define a "correct" brightness and adjust it on unprocessed images.
But what is "correct" brightness? You can use histogram matching for this; see the figure below. Once you establish your correct histogram, you can calibrate other images to it.
(Richard Szeliski, Computer Vision: Algorithms and Applications)
For sharpness, I think you can use an autofocus-style method: check the contrast histogram of the image and define what counts as sharp for you.
Basic recommendation: using grayscale images works better for OCR.
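A minimal Python sketch of both ideas, using OpenCV and scikit-image (the filenames are placeholders, and the reference image stands in for whatever you decide counts as "correct"):

import cv2
from skimage.exposure import match_histograms

# Grayscale is usually the better input for OCR.
img = cv2.imread('page.png', cv2.IMREAD_GRAYSCALE)
ref = cv2.imread('reference.png', cv2.IMREAD_GRAYSCALE)  # a "correct" image

# Brightness: compare mean intensity against the reference to decide
# whether to brighten or darken.
print('brightness:', img.mean(), 'vs reference:', ref.mean())

# Calibrate the image to the reference histogram (the idea from Szeliski).
matched = match_histograms(img, ref)

# Sharpness: variance of the Laplacian is a common autofocus-style measure;
# a low value suggests a blurry image (the cut-off is yours to choose).
sharpness = cv2.Laplacian(img, cv2.CV_64F).var()
print('sharpness:', sharpness)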
You can use BRISQUE image quality assessment to score image quality; it's available as a library. Check it out: a lower score means better quality.
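One implementation is the brisque package on PyPI; below is a hedged sketch assuming its BRISQUE class and score() method (the API has varied between versions, so check the one you install, including whether it expects RGB input):

# pip install brisque
import cv2
from brisque import BRISQUE

img = cv2.cvtColor(cv2.imread('photo.png'), cv2.COLOR_BGR2RGB)
scorer = BRISQUE(url=False)  # url=False: we pass a numpy array, not a URL
score = scorer.score(img)    # lower score = better perceptual quality
print(score)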
I just want to register UAV images side by side using the Pix4D software, but the problem is I don't have any GCPs. How do I get a mosaic where all the images are stitched properly?
Images in Pix4D are typically geo-referenced (tagged with data from INS/GPS and IMU). If this is not the case, there is no way for Pix4D to know where your images are in space.
An alternative to geo-referenced images is to create Manual Tie Points (MTPs). This is a tedious process and has an effect on accuracy and image distortion. Check out the Pix4D tutorials on YouTube or the Pix4D FAQ for how to tag your images with MTPs.
Hope that helps, and good luck.
So I am trying to extract text from an image. As the quality and size of the image are not good, it gives inaccurate results. I tried a few enhancements and other things with PIL, but they only worsened the quality of the image.
Can someone suggest some image enhancements to get better results? A few examples of images:
In the provided example image, the text is visually of quite good quality, so how is it that OCR gives inaccurate results?
To illustrate the conclusions given later in this answer, let's run the given image through Tesseract. Below is the result of Tesseract OCR:
"fhpgearedmomrs©gmachom"
Now let's resize the image four times and apply thresholding to it. I did the resizing and thresholding manually in GIMP, but with an appropriate resizing method and threshold value it can certainly be automated with PIL, so that the enhancement produces an image similar to the enhanced image I got:
Running the improved image through Tesseract OCR gives the following text:
"fhpgearedmotors©gmail.com"
This demonstrates that enlarging an image can help achieve 100% accuracy on the provided text-image example.
It may seem weird that enlarging an image helps achieve better OCR accuracy, BUT ... OCR was developed to convert scans of printed media to text, and by design it expects images of text at around 300 dpi. This explains why some OCR programs don't resize the text themselves to improve their results and do badly on small fonts: they expect a higher dpi resolution, which can be achieved by enlarging the image.
Here is an excerpt from the Tesseract FAQ on github.com proving the statement above:
[There is a minimum text size for reasonable accuracy. You have to consider resolution as well as point size. Accuracy drops off below 10pt x 300dpi, rapidly below 8pt x 300dpi. A quick check is to count the pixels of the x-height of your characters. (X-height is the height of the lower case x.) At 10pt x 300dpi x-heights are typically about 20 pixels, although this can vary dramatically from font to font. Below an x-height of 10 pixels, you have very little chance of accurate results, and below about 8 pixels, most of the text will be "noise removed".]
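A minimal PIL sketch of the automation described above (the LANCZOS filter and the threshold of 128 are assumptions to tune per image):

from PIL import Image

im = Image.open('source.png').convert('L')  # work in grayscale
# Enlarge 4x, as in the manual GIMP experiment above.
im = im.resize((im.width * 4, im.height * 4), Image.LANCZOS)
# Global threshold to pure black and white; 128 is a placeholder value.
im = im.point(lambda p: 255 if p > 128 else 0)
im.save('enhanced.png')  # feed this to Tesseract, e.g. via pytesseract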
I am working on a classic RPG that requires a pixelated style of graphics. I want to do this by making a small image and scaling it up. However, when I do this, it gets fuzzy. Is there any way to scale it while keeping a crisp edge for every pixel, or do I just need to make a bigger image?
You cannot scale an image and expect it to keep a crisp look if it wasn't made at a big enough resolution in the first place. In your case, you would have to make a bigger image and scale it down to produce the small image, as sketched below.
If you do not use the large image all the time, however, you should consider keeping two versions of the same image (one small, one large) for optimization's sake.
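A minimal Pillow sketch of that two-versions workflow (the filenames and the 4x factor are placeholder assumptions):

from PIL import Image

large = Image.open('sprite_large.png')  # authored at full resolution
# Downscale to the small in-game version; NEAREST avoids blurring the pixels.
small = large.resize((large.width // 4, large.height // 4), Image.NEAREST)
small.save('sprite_small.png')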
I'm using the Kinect SDK in C++ to generate an image of points near a plane in space, with the goal of using them as touches. I've attached a 3x-scale image of the result of that process, so that's all gravy.
My question is how best to use OpenCV to generate blobs, frame to frame, from this image (and images like it) to use as touches. Here's what I've tried in my ProcessDepth callback, where img is a monochrome cv::Mat of the touch image and out is an empty cv::Mat.
std::vector<std::vector<cv::Point>> contours;  // these three live as class members in my code
std::vector<cv::Moments> mu;
std::vector<cv::Point2f> mc;

cv::Canny(img, out, 100, 200, 3);
cv::findContours(out, contours, cv::RETR_TREE, cv::CHAIN_APPROX_SIMPLE, cv::Point(0, 0));

mu.resize(contours.size());
mc.resize(contours.size());
for (size_t i = 0; i < contours.size(); i++) {
    mu[i] = cv::moments(contours[i], true);  // spatial moments of each contour
}
for (size_t i = 0; i < contours.size(); i++) {
    // centroid = (m10/m00, m01/m00)
    mc[i] = cv::Point2f(mu[i].m10 / mu[i].m00, mu[i].m01 / mu[i].m00);
}
(I'd post more code, but VMware is being difficult about letting me copy-paste out of it; if you want more, just ask.)
At this point I think I should get centers of mass for the blobs in a frame; in practice, though, that's not what happens. I either get errors when contours.size() returns greater than 0, or, with a bit of tinkering, moments that seem really weird, say containing large negative numbers. So my questions are as follows:
Does anyone have recommendations on how to turn the image below into blob data with a good result, as far as the flags in findContours are concerned?
Do I even need to bother with Canny or thresholding since I already have a monochrome image, and if I use Canny, is the aperture size of 3 too large for the number of pixels I'm dealing with?
Will findContours work on images of this size? (160-ish by 90-ish, though that's fairly arbitrary; smallish, more generally.)
Are the OpenCV functions async? I get lots of invalid-address errors if my images and the contour vector don't exist as members of the application class. (I'm the first to admit I'm not a particularly talented C++ programmer.)
Is there a simpler way to go from the image to a series of points corresponding to touches?
For reference, I'm cribbing from some examples in my OpenCV download, and this example.
Let me know if you need any other information to better answer, and I'll try to provide it. Thanks!