How to solve soft-margin SVM using the Lagrangian dual problem?

I don't know how to use the Lagrangian dual problem to solve a soft-margin SVM. Can anyone help me solve this example with step-by-step calculations:
X_1=(0,2) | y_1=+1
X_2=(0,1) | y_2=-1
X_3=(0,0) | y_3=+1
Thank you!
P.S.
I don't want to use a kernel; I just want to know how the soft-margin SVM is computed with Lagrange multipliers.
I attached a picture of the soft-margin SVM formulation; at the end you can see the Lagrangian dual problem for the soft-margin SVM. [image: soft-margin SVM formulation]
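For this dataset the dual can also be solved numerically to check any hand calculation. A minimal sketch with SciPy's SLSQP solver, assuming C = 1 (the question does not fix C). Note that with these labels the points are not linearly separable (x2 lies between x1 and x3 with the opposite label), so one multiplier hits the box bound:

```python
import numpy as np
from scipy.optimize import minimize

X = np.array([[0.0, 2.0], [0.0, 1.0], [0.0, 0.0]])
y = np.array([1.0, -1.0, 1.0])
C = 1.0  # assumed value; the question leaves C unspecified

# Dual: max  sum(a) - 1/2 a^T Q a,  with  Q_ij = y_i y_j <x_i, x_j>
Yx = y[:, None] * X
Q = Yx @ Yx.T

def neg_dual(a):
    # minimize the negated dual objective
    return 0.5 * a @ Q @ a - a.sum()

cons = {"type": "eq", "fun": lambda a: a @ y}  # sum_i a_i y_i = 0
bnds = [(0.0, C)] * 3                          # 0 <= a_i <= C
res = minimize(neg_dual, np.zeros(3), method="SLSQP",
               bounds=bnds, constraints=cons)
alpha = res.x

w = (alpha * y) @ X
# b from any support vector with 0 < a_i < C (here a_1 and a_3)
sv = (alpha > 1e-6) & (alpha < C - 1e-6)
b = np.mean(y[sv] - X[sv] @ w)
```

At the optimum alpha comes out as (0.5, 1, 0.5): alpha_2 = C marks the margin violator, w = (0, 0) and b = 1, i.e. the classifier predicts +1 everywhere and absorbs x2 through its slack variable.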

Related

How to increase the detection speed of YOLOv3, as it takes 6 seconds to identify the correct class?

Tried converting to grayscale, but that didn't help much:
ImageGray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
Good question. Try:
resizing the images to a smaller size, i.e. reducing the dataset resolution;
running inference on a GPU.
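As a sketch of the first suggestion: the cheapest possible downsample is plain striding, though in practice cv2.resize with INTER_AREA interpolation gives better quality. The 416x416 frame size here is just an assumption for illustration:

```python
import numpy as np

def downsample(img, factor=2):
    """Crude downsample by striding; for real use, prefer
    cv2.resize(img, None, fx=1/factor, fy=1/factor,
               interpolation=cv2.INTER_AREA)."""
    return img[::factor, ::factor]

frame = np.zeros((416, 416, 3), dtype=np.uint8)  # hypothetical frame
small = downsample(frame, 2)  # quarter the pixel count
```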

Mask generation for segmentation

I have 255 images of this kind:
Example of delineation
I have to generate a mask for each of them automatically. For example, set all pixels inside the blue outline to zero and all others to one.
Does someone know a useful Python tool to achieve this task? Note that I know the exact position of the blue outline in the image above.
Thank you for your help!
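Since the outline coordinates are known, the usual tools are cv2.fillPoly or skimage.draw.polygon applied to the polygon. As a dependency-light sketch, the same mask can be rasterized with an even-odd ray-casting test in plain NumPy (the square polygon below is a hypothetical stand-in for the real outline):

```python
import numpy as np

def polygon_mask(shape, polygon):
    """Boolean mask, True inside the polygon (even-odd rule).
    polygon is a list of (x, y) vertices."""
    h, w = shape
    ys, xs = np.mgrid[0:h, 0:w]
    inside = np.zeros(shape, dtype=bool)
    n = len(polygon)
    for i in range(n):
        x0, y0 = polygon[i]
        x1, y1 = polygon[(i + 1) % n]
        if y0 == y1:          # horizontal edge: never crosses a ray
            continue
        # rows whose horizontal ray crosses this edge
        cond = (ys < y0) != (ys < y1)
        xcross = x0 + (ys - y0) * (x1 - x0) / (y1 - y0)
        inside ^= cond & (xs < xcross)
    return inside

# outline pixels -> 0, everything else -> 1, as asked in the question
mask = np.ones((10, 10), dtype=np.uint8)
poly = [(2, 2), (8, 2), (8, 8), (2, 8)]  # hypothetical outline
mask[polygon_mask(mask.shape, poly)] = 0
```

With OpenCV installed, the two middle lines collapse to `cv2.fillPoly(mask, [np.array(poly)], 0)`.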

How to calculate precision, recall, F1 score for my caffe model

Framework: Caffe
Architecture: Mobilenet-SSD
Dataset: [Caltech Pedestrian Detection Dataset][1]
I know the formulas for precision, recall, F1 score, and accuracy; the problem is automating their computation.
Manually I could calculate everything, but that isn't feasible for the ~10k images in the test dataset: looking at a single image I can tell what is a false positive and so on, but I'm not sure how to do this programmatically.
My model might detect a person while the bounding box comes out a little bigger (the coordinates are a bit off), whereas the test-set labels are accurate, so even though the detection happens, the coordinates don't match exactly.
How do I handle this to calculate precision, recall, and accuracy? If this is not the correct approach, please propose the correct one.
Hmmm, good question. I think you should first define what you actually want to measure. F1 score, precision, and recall are easy: decide whether a person has been detected or not, or whether you have a false positive, and calculate things as usual; scikit-learn can do that for you.
Now, about the bounding boxes, areas, and coordinates: you should use a different metric. I recommend mAP (mean Average Precision). Check out this link and feel free to read more about it on the internet. Good luck with your model!
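The coordinate-mismatch problem is exactly what an IoU (intersection-over-union) threshold handles: a detection counts as a true positive if it overlaps some not-yet-matched ground-truth box with IoU above a threshold (0.5 is the conventional choice). A simplified per-image sketch, ignoring the confidence ranking that full mAP also needs:

```python
def iou(a, b):
    """IoU of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def match_detections(dets, gts, thr=0.5):
    """Greedy matching: a detection is a TP if it overlaps an unused
    ground-truth box with IoU >= thr, else an FP; leftover GTs are FNs."""
    used, tp, fp = set(), 0, 0
    for d in dets:
        best, best_iou = None, thr
        for i, g in enumerate(gts):
            if i not in used and iou(d, g) >= best_iou:
                best, best_iou = i, iou(d, g)
        if best is None:
            fp += 1
        else:
            tp += 1
            used.add(best)
    return tp, fp, len(gts) - len(used)
```

Summing TP/FP/FN over all ~10k images then gives precision = TP/(TP+FP) and recall = TP/(TP+FN) without any manual inspection.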
An alternative option is to use the following gist, confusionMatrix_convnet_test_BatchMode(VeryFast).py, to build the confusion matrix using sklearn.

How does filter2D work in image processing on an RGB image, and how can I calculate the output matrix manually?

I'm a newbie in image processing. I recently tried filter2D to reduce noise in an RGB image, and it works well, but I don't understand how it operates on the image matrix. Can anybody explain to me how it works manually?
These are the input matrix and the output matrix I get:
Input Image Matrix
Output Image Matrix
Thanks for your help. :)
As a short answer, filtering an image means applying a filter (or kernel) to it, i.e., convolving the image with this kernel. You take each pixel of the image and consider a neighbourhood around it; you apply the kernel to the neighbourhood by multiplying each pixel of the neighbourhood with the corresponding kernel coefficient and summing all these values.
For a single pixel, this can be summarized by this figure (source):
For example, by setting all the coefficients to 1/N (where N is the number of elements in your kernel), you compute the average intensity of your neighbourhood.
You can see https://en.wikipedia.org/wiki/Multidimensional_discrete_convolution for more information about image convolutions.
OpenCV's documentation gives some practical examples of image smoothing.
Hope it helps
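The sliding-window sum described above can be written out directly. A sketch in plain NumPy, assuming zero padding at the borders (cv2.filter2D actually reflects the border by default, performs correlation rather than flipped convolution, and applies the same kernel to each colour channel independently):

```python
import numpy as np

def correlate2d_same(img, kernel):
    """What filter2D does at each pixel: slide the (unflipped) kernel
    over the image and sum the elementwise products. Zero padding keeps
    the output the same size as the input."""
    kh, kw = kernel.shape
    ph, pw = kh // 2, kw // 2
    padded = np.pad(img.astype(float), ((ph, ph), (pw, pw)))
    out = np.empty(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(padded[i:i + kh, j:j + kw] * kernel)
    return out

img = np.array([[1, 2, 3],
                [4, 5, 6],
                [7, 8, 9]], dtype=float)
box = np.full((3, 3), 1 / 9)       # averaging kernel (all 1/N)
out = correlate2d_same(img, box)
# centre pixel: mean of the whole 3x3 neighbourhood = 5.0
```

For an RGB image, run the same function on each channel separately, which is exactly what filter2D does internally.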

Reducing / Enhancing known features in an image

I am a microbiology student new to computer vision, so any help will be extremely appreciated.
This question involves microscope images that I am trying to analyze. The goal is to count the bacteria in an image, but first I need to pre-process the image to enhance any bacteria that are not fluorescing very brightly. I have thought about several different techniques, like enhancing the contrast or sharpening the image, but they aren't exactly what I need.
I want to reduce the noise (the black spaces) to 0 on the RGB scale and enhance the green spaces. I originally wrote a for loop in OpenCV with threshold limits to change each pixel, but I know there is a better way.
Here is an example I made in Photoshop of the original image vs. what I want.
Original Image and enhanced Image.
I need to learn to do this in a python environment so that I can automate this process. As I said I am new but I am familiar with python's OpenCV, mahotas, numpy etc. so I am not exactly attached to a particular package. I am also very new to these techniques so I am open to even if you just point me in the right direction.
Thanks!
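The per-pixel loop can indeed be replaced by vectorized NumPy masking. A minimal sketch on a tiny hypothetical array; the threshold of 20 and the x3 gain are assumptions to be tuned against real images, and note that with cv2.imread the channels come back as BGR, so index 1 is still green:

```python
import numpy as np

# hypothetical image; in practice: img = cv2.imread("cells.png")
img = np.zeros((4, 4, 3), dtype=np.uint8)
img[1, 1] = (10, 200, 10)   # bright bacterium
img[2, 2] = (5, 40, 5)      # dim bacterium
img[0, 0] = (8, 3, 8)       # background noise

g = img[:, :, 1].astype(float)
dark = g < 20                # assumed noise threshold on the green channel
out = img.copy()
out[dark] = 0                # background pixels -> 0 on all channels
out[~dark, 1] = np.clip(g[~dark] * 3, 0, 255).astype(np.uint8)  # boost green
```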
You can have a look at histogram equalization; it would emphasize the green and compress the black range. There is an OpenCV tutorial here. Afterwards you can experiment with different thresholding mechanisms to best isolate the bacteria.
Use TensorFlow:
create your own dataset with images of bacteria and their positions stored in accompanying text files (the bigger the dataset, the better);
create a positive and a negative set of images;
update the default TensorFlow example with your images;
make sure you have a bunch of convolution layers;
train and test.
TensorFlow is well suited to such tasks, and you don't need to worry about different intensity levels.
I initially tried histogram equalization but did not get the desired results, so I used adaptive thresholding with the mean filter (note the input must be single-channel):
th = cv2.adaptiveThreshold(img, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY_INV, 3, 2)
Then I applied a median filter:
median = cv2.medianBlur(th, 5)
Finally I applied morphological closing with an elliptical kernel (in the original snippet the 3 was passed positionally, where it lands on the dst argument of morphologyEx rather than iterations):
k1 = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
closed = cv2.morphologyEx(median, cv2.MORPH_CLOSE, k1, iterations=3)
This page will help you modify this result however you want.
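Once a clean binary mask like the one above is in hand, the original goal, counting the bacteria, reduces to counting connected components. cv2.connectedComponents (or scipy.ndimage.label) does this directly; here is a self-contained pure-NumPy BFS sketch of the same idea:

```python
import numpy as np
from collections import deque

def count_blobs(binary):
    """Count 4-connected components of True pixels via BFS flood fill."""
    h, w = binary.shape
    seen = np.zeros_like(binary, dtype=bool)
    count = 0
    for i in range(h):
        for j in range(w):
            if binary[i, j] and not seen[i, j]:
                count += 1                    # new blob found
                q = deque([(i, j)])
                seen[i, j] = True
                while q:                      # flood-fill the blob
                    y, x = q.popleft()
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny, nx] and not seen[ny, nx]):
                            seen[ny, nx] = True
                            q.append((ny, nx))
    return count
```

Feed it the thresholded mask, e.g. `count_blobs(closed > 0)` with the result of the pipeline above.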
