How to assign labels to images using label propagation? - scikit-learn

My understanding is that part of the data is unlabeled, and those samples have their labels set to -1.
Label propagation in scikit-learn then assigns labels to them, via
labelpropagation.fit(X_feature, y_class), where X_feature holds the image features (color, HOG, GIST, SIFT).
Questions:
Is my understanding right?

Yes. In scikit-learn, unlabelled data is indicated with the label -1. This may involve:
using actual unlabelled data (in the case of semi-supervised learning), or
using the test/dev split of the dataset as unlabelled data (to verify model performance).
In either case, the data is appended to the training set with its labels set to -1.
The label propagation algorithm then propagates labels from the labelled nodes to these unlabelled nodes.
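A minimal sketch of that workflow (toy features and toy label values, not the asker's actual data): unlabelled samples carry the label -1, and LabelPropagation fills those labels in during fit.

```python
# Minimal sketch: X_feature stands in for stacked image descriptors
# (e.g. color/HOG/GIST/SIFT vectors); -1 marks the unlabelled images.
import numpy as np
from sklearn.semi_supervised import LabelPropagation

rng = np.random.RandomState(0)
X_feature = rng.rand(10, 5)                                  # 10 images, 5-D toy features
y_class = np.array([0, 1, 0, 1, -1, -1, -1, -1, -1, -1])     # -1 = unlabelled

label_propagation = LabelPropagation()
label_propagation.fit(X_feature, y_class)

# Labels propagated onto the formerly unlabelled samples:
print(label_propagation.transduction_[y_class == -1])
```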

Related

Can I create a color image with a GAN consisting of only FC layers?

I understand that in order to create a color image, the three-channel information of the input data must be maintained inside the network. However, the data must be flattened to pass through a linear layer. If so, can a GAN consisting of only FC layers generate only black-and-white images?
Your fully connected network can generate whatever you want, even three-channel outputs. However, the question is: does it make sense to do so? Flattening your input inherently loses the spatial and feature consistency that is naturally available when the data is represented as an RGB map.
Remember that an RGB image can be thought of as a 3-element feature describing each spatial location of a 2D image. In other words, each of the three channels gives additional information about a given pixel, and treating these channels as separate entities is a loss of information.
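To make the first point concrete, here is a minimal sketch (illustrative layer sizes, assuming Keras, not a tuned GAN architecture) of a generator built only from Dense layers that still emits a three-channel image by reshaping its flat output at the end:

```python
# Minimal sketch (illustrative sizes): Dense layers produce H * W * 3 values,
# which are then reshaped into an RGB image tensor.
from tensorflow import keras
from tensorflow.keras import layers

latent_dim, H, W = 100, 28, 28

generator = keras.Sequential([
    keras.Input(shape=(latent_dim,)),
    layers.Dense(256, activation="relu"),
    layers.Dense(512, activation="relu"),
    layers.Dense(H * W * 3, activation="tanh"),  # one value per pixel per RGB channel
    layers.Reshape((H, W, 3)),                   # back to an image-shaped tensor
])
generator.summary()
```

Whether such a generator produces convincing colour images is a separate question; as noted above, the fully connected layers themselves have no notion of the spatial structure that convolutional layers exploit.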

Best way to work with rotated model grids (MF6) in Flopy?

I want to model a system that consists of glaciofluvial sediment deposited in a NE-SW orientation (this is also the direction of groundwater flow), using an MF6 DISV grid through Flopy. In order to reduce the risk of numerical instability, I want to rotate the model grid 45 degrees counter-clockwise. However, rotating the model grid displaces it relative to model-pertinent information, such as digital elevation maps, boundary conditions, and observation data.
What is the recommended approach for working with rotated model grids? Is it possible to rotate the grid, refine the grid, and then rotate it back to its original position before appending boundary conditions and elevations? Or is the modeller expected to rotate all other data as well (i.e. boundary conditions, observation data, etc.)? If the latter is true, that would mean a lot of extra work for a complex model.
Ultimately, I want to perform parameter estimation / history matching using PEST/PEST++. What could be the consequence of having a rotated model grid when performing model calibration?
Very curious to hear your opinions and recommendations.
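For reference, one common way a rotated grid is expressed in Flopy is as coordinate metadata attached to the model grid, rather than by transforming the grid arrays themselves. A minimal sketch (toy DIS grid rather than the asker's DISV grid, and assuming a reasonably recent Flopy version) might look like this:

```python
# Minimal sketch: the grid itself stays rectangular; a 45-degree rotation and
# an origin offset are stored as coordinate metadata, so real-world cell
# coordinates can be compared against DEMs, boundary conditions and observations.
import flopy

sim = flopy.mf6.MFSimulation(sim_name="rot_demo", sim_ws=".")
tdis = flopy.mf6.ModflowTdis(sim)
gwf = flopy.mf6.ModflowGwf(sim, modelname="rot_demo")
dis = flopy.mf6.ModflowGwfdis(gwf, nlay=1, nrow=10, ncol=10,
                              delr=100.0, delc=100.0, top=10.0, botm=0.0)

grid = gwf.modelgrid
# xoff/yoff are an illustrative real-world origin; angrot is in degrees
# counter-clockwise, matching the rotation described in the question.
grid.set_coord_info(xoff=550000.0, yoff=6780000.0, angrot=45.0)

print(grid.xcellcenters[0, :3])  # real-world x-coordinates of the first few cells
```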

Gaussian Mixture Models for pixel clustering

I have a small set of aerial images where the different terrains visible in each image have been labelled by human experts. For example, an image may contain vegetation, river, rocky mountains, farmland, etc. Each image may have one or more of these labelled regions. Using this small labelled dataset, I would like to fit a Gaussian mixture model for each of the known terrain types. After this is complete, I would have N GMMs, one for each of the N terrain types I might encounter in an image.
Now, given a new image, I would like to determine, for each pixel, which terrain it belongs to by assigning the pixel to the most probable GMM.
Is this the correct line of thought? And if yes, how can I go about clustering an image using GMMs?
It's not clustering if you use labelled training data!
You can, however, easily reuse the labelling step of GMM clustering.
For this, compute the prior probabilities, means, and covariance matrices (and invert the covariances). Then classify each pixel of the new image by the maximum probability density (weighted by the prior probabilities) under the multivariate Gaussians estimated from the training data.
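A minimal sketch of that classification step (toy pixel features and illustrative terrain names), using scikit-learn's GaussianMixture and its score_samples log-densities rather than hand-rolled covariance inverses:

```python
# Minimal sketch: fit one GaussianMixture per labelled terrain, then assign each
# pixel of a new image to the terrain with the highest prior-weighted density.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.RandomState(0)
# Training pixels per terrain: label -> (n_pixels, n_features) array (toy data)
train = {"water": rng.normal(0.2, 0.05, (500, 3)),
         "vegetation": rng.normal(0.6, 0.10, (500, 3))}

gmms, log_priors = {}, {}
n_total = sum(len(v) for v in train.values())
for label, pixels in train.items():
    gmms[label] = GaussianMixture(n_components=2, random_state=0).fit(pixels)
    log_priors[label] = np.log(len(pixels) / n_total)

# New image flattened to (n_pixels, n_features)
new_pixels = rng.normal(0.4, 0.2, (1000, 3))
labels = list(gmms)
# score_samples gives per-pixel log-density; add the log prior and take the argmax
scores = np.stack([gmms[l].score_samples(new_pixels) + log_priors[l] for l in labels], axis=1)
assigned = np.array(labels)[scores.argmax(axis=1)]
print(assigned[:10])
```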
Intuitively, your thought process is correct. If you already have the labels, that makes this a lot easier.
For example, let's pick a well-known non-parametric algorithm such as k-Nearest Neighbors: https://en.wikipedia.org/wiki/K-nearest_neighbors_algorithm
In this algorithm, for each new pixel you find the k closest labelled pixels, where closeness is determined by some distance function (usually Euclidean). You then assign the new pixel the most frequently occurring label among those neighbours.
I am not sure whether you are looking for a specific algorithm recommendation, but KNN would be a good algorithm to start testing this kind of exercise with. I saw you tagged sklearn; scikit-learn has a very good KNN implementation that I suggest you read up on.
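A minimal sketch of the KNN route (toy pixel features and illustrative labels), using scikit-learn's KNeighborsClassifier:

```python
# Minimal sketch: each new pixel gets the majority label of its k nearest
# labelled training pixels (Euclidean distance by default).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.RandomState(0)
X_train = np.vstack([rng.normal(0.2, 0.05, (500, 3)),    # e.g. "water" pixels
                     rng.normal(0.6, 0.10, (500, 3))])   # e.g. "vegetation" pixels
y_train = np.array(["water"] * 500 + ["vegetation"] * 500)

knn = KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)

new_pixels = rng.normal(0.4, 0.2, (1000, 3))             # new image, flattened
print(knn.predict(new_pixels)[:10])
```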

Keras: Zero-padding the input data. What to do with the target data?

I have ECG data and sleep annotations as the target. The data is recorded in sessions. For one case, I want to use each session as an input sample. Therefore, I need to zero-pad the input data so that all samples have the same length/dimension.
What do I do with the target data?
Do I also "zero-pad" the targets to the same length/dimension? I could use a new state (e.g. 666, as 0 is already in use), which would then not be considered by applying a masking layer to the zero-padded input data.
Or do I just leave the target as it is?
Thanks for your help
After some thought, I came to the conclusion that you have to pad the target as well. Since the samples apparently have different lengths, you need padding to be able to create a tensor with fixed dimensions.
I would create a new label for the mask_value. For these labels there is no training, as the corresponding timesteps are skipped and their weights are set to zero via the masking.
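A minimal sketch of that idea (toy shapes and random data, assuming TensorFlow/Keras): both the inputs and the targets are padded, the padded target steps get a reserved extra class, and a Masking layer tells Keras to skip the zero-padded timesteps in the recurrent layer and in the loss.

```python
# Minimal sketch: pad inputs with 0.0 and targets with an extra "padding" class;
# the Masking layer masks the padded timesteps so they do not contribute to training.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.preprocessing.sequence import pad_sequences

n_classes = 5                      # real sleep stages 0..4
pad_class = n_classes              # extra label reserved for padded target steps

# Two sessions of different length: per-step ECG features and sleep-stage targets
x_sessions = [np.random.rand(300, 1), np.random.rand(200, 1)]
y_sessions = [np.random.randint(0, n_classes, 300),
              np.random.randint(0, n_classes, 200)]

X = pad_sequences(x_sessions, padding="post", dtype="float32", value=0.0)
y = pad_sequences(y_sessions, padding="post", value=pad_class)

model = keras.Sequential([
    keras.Input(shape=(X.shape[1], X.shape[2])),
    layers.Masking(mask_value=0.0),                                  # skip zero-padded steps
    layers.LSTM(32, return_sequences=True),
    layers.TimeDistributed(layers.Dense(n_classes + 1, activation="softmax")),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.fit(X, y, epochs=1, verbose=0)   # padded timesteps are masked out of the loss
```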

Different blurFilter.texelSpacingMultiplier for different regions in image GPUImageCannyEdgeDetection filter

I want to set a different blurFilter.texelSpacingMultiplier for different regions of the image in the GPUImageCannyEdgeDetection filter. Is there a way to do that?
The texelSpacingMultiplier is defined as a uniform in the fragment shaders used for this operation. That will remain constant across the image.
If you wish to have this vary across parts of the image, you will need to create a custom version of this operation and its sub-filters that takes in a per-pixel value for the multiplier.
Probably the easiest way to do this would be to have your per-pixel values for the multiplier be encoded into a texture that would be input as a secondary image. This texture could be read from within the fragment shaders and the decoded value from the RGBA input converted into a floating point value to set this multiplier per-pixel. That would allow you to create a starting image (drawn or otherwise) that would be used as a mask to define how this is applied.
It will take a little effort to do this, since you will need to rewrite several of the sub-filters used to construct the Canny edge detection implementation here, but the process itself is straightforward.
