Particle swarm optimization algorithm written in PyTorch

I want to develop code in PyTorch to build the basic particle swarm optimization algorithm.
Could anyone help me?
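For what it's worth, here is a minimal sketch of the canonical global-best PSO update written with torch tensors, vectorised over the swarm. The swarm size, inertia weight w, and the c1/c2 coefficients are common textbook defaults, and the sphere function at the bottom is just a stand-in objective; none of this is tied to any particular paper or library.

```python
import torch

def pso(objective, dim=2, n_particles=30, iters=200,
        w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Minimise `objective`, a function mapping an (n, dim) tensor to (n,) values."""
    lo, hi = bounds
    pos = torch.empty(n_particles, dim).uniform_(lo, hi)   # particle positions
    vel = torch.zeros(n_particles, dim)                    # particle velocities
    pbest = pos.clone()                                    # per-particle best position
    pbest_val = objective(pos)                             # per-particle best value
    gbest = pbest[pbest_val.argmin()].clone()              # global best position

    for _ in range(iters):
        r1 = torch.rand(n_particles, dim)
        r2 = torch.rand(n_particles, dim)
        # Standard velocity update: inertia + cognitive pull + social pull.
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = (pos + vel).clamp(lo, hi)
        val = objective(pos)
        improved = val < pbest_val                         # which particles improved
        pbest[improved] = pos[improved]
        pbest_val[improved] = val[improved]
        gbest = pbest[pbest_val.argmin()].clone()
    return gbest, pbest_val.min()

# Example: minimise the sphere function f(x) = sum(x**2); the optimum is 0 at the origin.
best_x, best_f = pso(lambda x: (x ** 2).sum(dim=1))
```

Since the updates are plain tensor arithmetic, the same code runs unchanged on GPU if you create the tensors on a CUDA device; there is no autograd involved, as PSO is gradient-free.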

Related

PySpark vs. Gradient-Boosted Trees (with predict_proba) Integration

I want to build a GBT-based binary-classification model in PySpark that can produce prediction probabilities (mandatory), preferably a state-of-the-art GBT variant like XGBoost.
All I can find are unmaintained, non-official, unstable packages which I find really hard to install and operate.
Can you please help me find a solution?

What type of CNN will be suitable for underwater image processing?

The primary objective (my assigned work) is to do an image segmentation for the underwater images using a convolutional neural network. The camera shots taken from the underwater structure will have poor image quality due to severe noise and bad light exposure. In order to achieve higher classification accuracy, I want to do an automatic image enhancement for the images (see the attached file). So, I want to know, which CNN architecture will be best to do both tasks. Please kindly suggest any possible solutions to achieve the objective.
What do you need to segment? It'd be nice to see some labels of the segmentation.
You may not need to enhance the images: if your whole dataset has the same amount of noise, the network will generalize properly.
Regarding CNN architectures, it depends on the constraints you have on processing power and accuracy. If processing power is not a constraint, go with something like Mask R-CNN; its repository is a good starting point.
Be mindful that it's a bit of a complex architecture, so inference times might be a bit too high (but real time is doable depending on your GPU).
Another, simpler family of architectures is the FCN (Fully Convolutional Network), which is basically your usual CNN, but with the fully connected layers replaced by convolutional layers.
The advantage of these FCNs is that they are really easy to implement and modify, since you can go from simple architectures (FCN-AlexNet) to more complex and more accurate ones (FCN-VGG, FCN-ResNet).
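As a concrete illustration of that FC-to-conv replacement (a PyTorch sketch, not taken from any of the architectures named above; the channel counts and spatial sizes are made up):

```python
import torch
import torch.nn as nn

# A toy "classification head": pooled features -> fully connected layer.
fc = nn.Linear(256, 10)             # 256 input channels, 10 classes

# The FCN equivalent: a 1x1 convolution carrying the same weights. Instead of
# one class vector per image it produces a coarse per-location class score map,
# which an FCN then upsamples to a full-resolution segmentation.
conv = nn.Conv2d(256, 10, kernel_size=1)
conv.weight.data = fc.weight.data.view(10, 256, 1, 1).clone()
conv.bias.data = fc.bias.data.clone()

x = torch.randn(1, 256, 8, 8)       # an 8x8 feature map from the conv backbone
scores = conv(x)                    # shape (1, 10, 8, 8): 10 class scores per location

# Sanity check: on a 1x1 feature map both heads compute exactly the same thing.
y = torch.randn(1, 256)
out_fc = fc(y)
out_conv = conv(y.view(1, 256, 1, 1)).view(1, 10)
```

This is why an FCN can accept inputs of any size: nothing in the network fixes the spatial dimensions once the dense layers are gone.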
Also, you don't mention a framework; there are many to choose from, and the choice depends on your familiarity with languages. Most of them can be used from Python:
TensorFlow
PyTorch
MXNet
But if you are a beginner, try starting with a GUI-based one. NVIDIA DIGITS is a great starting point and really easy to configure; it's based on Caffe, so it's fairly fast when deploying and can easily be integrated with accelerators like TensorRT.

Convolutional Neural Network in Spark

I'm trying to implement a Convolutional Neural Network algorithm on Spark and I wanted to ask two questions before moving forward.
I need to implement my code such that it is highly integrated with Spark and also follows the principles of machine learning algorithms in Spark. I found that Spark ML is the established home for machine learning code, and it has a specific foundation that all of the implemented algorithms follow. Also, the implemented algorithms offload their heavy mathematical operations to third-party libraries such as BLAS to do the calculations fast.
Now I wanted to ask:
1) Is ML the right place to start? By following the ML structure, will my code be highly integrable with the rest of the Spark ML ecosystem?
2) Am I right about the bottom layer of the ML code, where the processing is offloaded to another mathematical library? Does that mean I can replace that layer to do the heavy processing in a customized fashion?
Would appreciate any suggestions.
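On question 1, the structure Spark ML expects is the Estimator/Transformer pattern: an Estimator's fit() trains on a DataFrame and returns a fitted Model, which is a Transformer whose transform() appends prediction columns. Sketched below in plain Python so it runs standalone; the real base classes are `pyspark.ml.Estimator` and `pyspark.ml.Transformer` (which additionally handle Params and Pipeline integration), and all class names here are illustrative, not actual Spark APIs:

```python
# Plain-Python mock of the pyspark.ml Estimator/Transformer contract.

class Transformer:
    def transform(self, dataset):
        raise NotImplementedError

class Estimator:
    def fit(self, dataset):
        """Train on the dataset and return a fitted Transformer (a Model)."""
        raise NotImplementedError

class CNNModel(Transformer):
    """The fitted model: holds learned weights, applies them row by row."""
    def __init__(self, weights):
        self.weights = weights

    def transform(self, dataset):
        # In real Spark ML this would append a prediction column to a DataFrame.
        return [(row, sum(self.weights)) for row in dataset]

class CNNEstimator(Estimator):
    """The untrained algorithm: fit() would run the heavy, BLAS-backed training."""
    def fit(self, dataset):
        weights = [0.1, 0.2]   # stand-in for the actual optimisation
        return CNNModel(weights)

model = CNNEstimator().fit([1, 2, 3])
predictions = model.transform([1, 2, 3])
```

A custom algorithm that subclasses the real base classes this way can be dropped into a `pyspark.ml.Pipeline` alongside the built-in stages, which is what "integrable with the ecosystem" amounts to in practice.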

Caffe vs Theano MNIST example

I'm trying to learn (and compare) different deep learning frameworks; for the time being, they are Caffe and Theano.
http://caffe.berkeleyvision.org/gathered/examples/mnist.html
and
http://deeplearning.net/tutorial/lenet.html
I followed the tutorials to run those frameworks on the MNIST dataset. However, I notice quite a difference in terms of accuracy and performance.
For Caffe, the accuracy builds up extremely fast, to ~97%. In fact, it only takes 5 minutes to finish the program (using a GPU), with a final accuracy on the test set of over 99%. How impressive!
However, Theano does much worse. It took me more than 46 minutes (using the same GPU) just to achieve 92% test performance.
I'm confused, as there should not be such a big difference between frameworks running roughly the same architecture on the same dataset.
So my question is: is the accuracy number reported by Caffe the percentage of correct predictions on the test set? If so, is there any explanation for the discrepancy?
Thanks.
The examples for Theano and Caffe are not exactly the same network. Two key differences which I can think of are that the Theano example uses sigmoid/tanh activation functions, while the Caffe tutorial uses the ReLU activation function, and that the Theano code uses normal minibatch gradient descent while Caffe uses a momentum optimiser. Both differences will significantly affect the training time of your network. And using the ReLU unit will likely also affect the accuracy.
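To see how much the optimiser choice alone can matter, compare the two update rules on an ill-conditioned toy loss (a plain-Python sketch; the learning rate, momentum coefficient, and quadratic objective are illustrative, not the tutorials' actual settings):

```python
import numpy as np

def grad(x):
    # Gradient of f(x) = 0.5 * (x1^2 + 100 * x2^2), an ill-conditioned
    # quadratic standing in for the kind of loss surface where momentum helps.
    return np.array([x[0], 100.0 * x[1]])

lr, mu, steps = 0.009, 0.9, 200
x0 = np.array([1.0, 1.0])

# Plain minibatch gradient descent (what the Theano tutorial uses):
# the step size is capped by the steepest direction, so progress along
# the shallow direction is painfully slow.
x = x0.copy()
for _ in range(steps):
    x = x - lr * grad(x)

# Momentum (what the Caffe example's solver uses): the velocity accumulates
# past gradients, damping oscillation in the steep direction and
# accelerating the shallow one.
v, xm = np.zeros(2), x0.copy()
for _ in range(steps):
    v = mu * v - lr * grad(xm)
    xm = xm + v
```

After the same number of steps, the momentum iterate ends up orders of magnitude closer to the optimum than plain gradient descent, which is consistent with the large gap in wall-clock time to a given accuracy.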
Note that Caffe is a deep learning framework which already has ready-to-use functions for many commonly used things like the momentum optimiser. Theano, on the other hand, is a symbolic maths library which can be used to build neural networks. However, it is not a deep learning framework.
The Theano tutorial you mentioned is an excellent resource to understand how exactly convolutional and other neural networks work on a basic level. However, it will be cumbersome to implement all the state-of-the-art tweaks. If you want to get state-of-the-art results quickly you are better off using one of the existing deep learning frameworks. Apart from Caffe, there are a number of frameworks based on Theano. I know of keras, blocks, pylearn2, and my personal favourite lasagne.

Which kernel is to be used for Face detection using SVM?

I'm working on a face detection algorithm which extracts Haar-like features and then classifies faces and non-faces using an SVM. I'll be implementing the whole algorithm, including the SVM, in C, because I have to run the code on a Stretch SCP board.
I have a lot of doubts regarding which kernel is most suitable for the face-detection problem; is it linear, RBF, or something else?
I already extracted Haar features and tried to classify them using libsvm and liblinear, but didn't get appropriate results.
Please suggest which kernel should be used and what parameters should be considered.
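For what it's worth, a common approach is to try a linear kernel first (it needs only C tuned and evaluates cheaply in plain C at runtime, one dot product per window) and only move to RBF, with C and gamma grid-searched together, if the linear boundary is clearly insufficient. A quick scikit-learn comparison sketch on synthetic stand-in data (the feature dimensions and parameter grids are illustrative, not tuned for real Haar features):

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.svm import SVC

# Synthetic stand-in for Haar-feature vectors (faces vs. non-faces).
X, y = make_classification(n_samples=600, n_features=50, n_informative=10,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Linear kernel: only C to tune.
linear = GridSearchCV(SVC(kernel="linear"),
                      {"C": [0.01, 0.1, 1, 10]}).fit(X_train, y_train)

# RBF kernel: C and gamma must be tuned together, typically on a log grid.
rbf = GridSearchCV(SVC(kernel="rbf"),
                   {"C": [0.1, 1, 10],
                    "gamma": ["scale", 0.001, 0.01]}).fit(X_train, y_train)

lin_acc = linear.score(X_test, y_test)
rbf_acc = rbf.score(X_test, y_test)
```

Whichever kernel wins on held-out data, remember to scale the features consistently between training and your C implementation; libsvm's poor results are very often a scaling or C/gamma issue rather than the wrong kernel.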
