I am developing an application that uses the Google Vision API, and I have a question about the color properties.
Is the color shown in the properties with the highest percentage the dominant color? And how does that work?
I ask because the color with the highest percentage does not seem accurate.
Which feature are you using?
I suggest you provide a sample image, the sample request, and the responses you get, so we know more about what could be improved.
The texture characteristics of fire are mainly manifested in edges and colors. I want to control the generation so that different colors appear on the same edge, or different edges appear under the same color, but it is difficult to achieve such one-to-many translation tasks. I have been searching the net for a long time with no luck. Please help, or try to give some ideas on how to achieve this.
I tried applying Tesseract OCR to an image, but before applying OCR I want to improve the quality of the image so that OCR accuracy increases.
How can I detect image brightness and increase or decrease it as required?
How can I detect image sharpness?
There is no easy way to do that, but if your images are similar to each other, you can define a "correct" brightness and adjust unprocessed images toward it.
But what is "correct" brightness? You can use a histogram to match it: establish a reference histogram, then calibrate other images against it. (The figure originally attached here showed such a reference histogram.)
Richard Szeliski, Computer Vision: Algorithms and Applications
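To make the histogram idea concrete, here is a minimal NumPy sketch (a stand-in for the equivalent OpenCV calls; the function names are illustrative, not from any library). It measures mean brightness and remaps an image's intensities onto a reference image's distribution via CDF matching:

```python
import numpy as np

def mean_brightness(gray):
    """Average intensity of a grayscale image (0 = black, 255 = white)."""
    return float(gray.mean())

def match_histogram(source, reference):
    """Remap source intensities so their distribution matches the reference,
    by aligning the two cumulative histograms (CDF matching)."""
    src_values, src_counts = np.unique(source.ravel(), return_counts=True)
    ref_values, ref_counts = np.unique(reference.ravel(), return_counts=True)
    src_cdf = np.cumsum(src_counts) / source.size
    ref_cdf = np.cumsum(ref_counts) / reference.size
    # For each source intensity, pick the reference intensity at the same CDF level.
    mapped = np.interp(src_cdf, ref_cdf, ref_values)
    # Every pixel value occurs in src_values, so searchsorted finds its exact slot.
    idx = np.searchsorted(src_values, source.ravel())
    return mapped[idx].reshape(source.shape).astype(np.uint8)
```

Once you have a "correct" reference image, calibrating a new image is just `match_histogram(new_img, reference_img)`; its mean brightness will move toward the reference's.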
I think you can use an autofocus-style measure: check the contrast histogram of the image and define what counts as sharp for your case.
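One common autofocus-style sharpness measure is the variance of the Laplacian: blurred images have weak edges, so the response variance drops. A minimal NumPy sketch (in OpenCV the equivalent one-liner is `cv2.Laplacian(img, cv2.CV_64F).var()`):

```python
import numpy as np

def laplacian_variance(gray):
    """Sharpness score: variance of the 4-neighbour Laplacian response.
    Blurry images have weak edges, so the score comes out low."""
    g = gray.astype(np.float64)
    # Discrete Laplacian over the interior pixels.
    lap = (-4.0 * g[1:-1, 1:-1]
           + g[:-2, 1:-1] + g[2:, 1:-1]
           + g[1:-1, :-2] + g[1:-1, 2:])
    return float(lap.var())
```

There is no universal threshold: compute the score on a few images you consider sharp versus blurry and pick a cutoff for your data.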
As a basic recommendation: grayscale images usually work better for OCR.
You can use the BRISQUE image quality assessment to score image quality; it is available as a library. Check it out: a lower score means better quality.
I am an undergraduate student working on detecting defects on the surface of an object in a digital image, using image processing techniques. I am planning to use the OpenCV library for the image processing functions. Currently I am trying to decide which defect detection algorithm to use. This is one of my very first projects in this field, so any help would be appreciated. The reference image with a defect (missing teeth in the gear) that I am currently working with is linked below ("defective gear image").
defective gear image
Get the convex hull of the gear (which is a polygon) and shrink it slightly so that it crosses the teeth. Make sure that the centroid of the gear is the fixed point of the shrink.
Then sample the pixels along the hull, preferably at equidistant points (divide the perimeter by a multiple of the number of teeth). The unwrapped profile will look like a dashed line, with missing dashes corresponding to missing teeth, and the problem is reduced to 1D.
You can also try a polar unwarping, making the outline straight, but you will need an accurate location of the center.
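As a rough sketch of the 1-D reduction described above, assuming the gear centroid and a sampling radius that crosses the teeth are already known (shown here with NumPy on a binary mask; the helper names are hypothetical, and a circle stands in for the shrunken hull):

```python
import numpy as np

def ring_profile(mask, center, radius, n_samples=720):
    """Sample a binary mask along a circle around `center`.
    Returns the 'unwrapped' 1-D on/off profile of that ring."""
    angles = np.linspace(0.0, 2.0 * np.pi, n_samples, endpoint=False)
    xs = np.round(center[0] + radius * np.cos(angles)).astype(int)
    ys = np.round(center[1] + radius * np.sin(angles)).astype(int)
    return mask[ys, xs]

def count_teeth(profile):
    """Count the 'on' runs (dashes) in a circular profile: one run per tooth.
    A healthy gear gives the full tooth count; gaps reveal missing teeth."""
    prev = np.roll(profile, 1)  # wrap around so the ring is circular
    rising_edges = (profile == 1) & (prev == 0)
    return int(np.sum(rising_edges))
```

Comparing `count_teeth(...)` against the expected number of teeth flags the defect; the positions of the missing dashes tell you which teeth are gone.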
In Azure there are two options to detect text in an image: handwriting recognition or OCR. But I don't get a confidence score, and I need to pick the best text result.
Which engine handles this best, based on spelling or on a confidence score?
I tried the Azure API too, and sometimes it does not give the expected results. What score value are you getting? Also, I would suggest comparing this with the Google Vision API.
I'm working on image stabilization using optical flow.
The algorithm I've used is as follows: first I find good features to track with OpenCV's cvGoodFeaturesToTrack, and then I estimate the optical flow with cvCalcOpticalFlowPyrLK.
Now I want to stabilize the video sequence, for which I think I need to take the average of the optical flow vectors.
I'm working on a real-time application, so I can't use either SIFT or SURF.
The problem is that I don't know how to take the average.
Can anyone show me what to do?
Regards
You don't need to average anything. Optical flow returns the positions of the "good features to track" in the second image. Transform the second image so that these features coincide with the features in the first image (use GetPerspectiveTransform).
I'll probably write an article on this soon on my website http://aishack.in/
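A sketch of what that transform amounts to, in plain NumPy: GetPerspectiveTransform solves an 8x8 linear system for the homography that maps 4 point correspondences exactly. Given 4 tracked features' positions in the second frame and their positions in the first, you can solve for the homography and warp the second frame with it. The helper names below are illustrative:

```python
import numpy as np

def perspective_transform(src, dst):
    """Solve for the 3x3 homography H mapping 4 src points to 4 dst points
    (the same linear system cv2.getPerspectiveTransform solves).
    Each correspondence (x, y) -> (u, v) contributes two equations."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y])
        b.extend([u, v])
    h = np.linalg.solve(np.array(A, dtype=float), np.array(b, dtype=float))
    return np.append(h, 1.0).reshape(3, 3)  # fix H[2,2] = 1

def apply_h(H, pts):
    """Apply homography H to an (N, 2) array of points."""
    homog = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = homog @ H.T
    return mapped[:, :2] / mapped[:, 2:]
```

For stabilization, pick 4 well-spread tracked features, compute H from their positions in frame 2 back to their positions in frame 1, then warp frame 2 with it (`cv2.warpPerspective` in modern OpenCV) so the features line up again.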