Can anyone please explain why a 1D convolutional neural network sometimes performs well on tabular data (better than a DNN)? I have seen this in some published papers (although the reason for using a 1D CNN is not given), in Kaggle competitions, and in Stack Overflow questions about the input shape of a 1D CNN for tabular data (e.g. Preparing feeding data to 1D CNN). I know we use 1D CNNs for sequence data such as time series and NLP, but what is the intuitive idea behind using a 1D CNN for tabular data? Why does it work? Is it due to a spatial correlation between features?
A large problem with tabular data is that it is not structured: there is often no meaningful relationship within the ordering of the columns. I believe that when applying a 1D CNN to tabular data you first have a linear layer that then feeds into the 1D CNN layer. This lets the model effectively re-order and mix the columns, creating a more structured representation. Patterns within the data can then be found by the 1D CNN and the remaining dense layers.
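A minimal PyTorch sketch of that idea (the class name TabularCNN, the layer sizes, and the feature count are placeholders I made up for illustration, not taken from any particular paper or kernel):

```python
import torch
import torch.nn as nn

class TabularCNN(nn.Module):
    """Hypothetical sketch: a linear layer mixes/re-orders the raw columns,
    then a 1D convolution looks for local patterns in that learned ordering."""
    def __init__(self, n_features, n_classes):
        super().__init__()
        self.mix = nn.Linear(n_features, 128)            # learned mixing of the columns
        self.conv = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=5, padding=2),  # slide a small kernel over the 128 "positions"
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(16),
        )
        self.head = nn.Linear(16 * 16, n_classes)        # remaining dense layer

    def forward(self, x):                # x: (batch, n_features)
        x = self.mix(x)                  # (batch, 128)
        x = x.unsqueeze(1)               # (batch, 1, 128) -> add channel dim for Conv1d
        x = self.conv(x)                 # (batch, 16, 16)
        return self.head(x.flatten(1))

model = TabularCNN(n_features=30, n_classes=2)
print(model(torch.randn(8, 30)).shape)   # torch.Size([8, 2])
```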
I am not sure this is really an answer, and the question is not quite a question either... but maybe I can at least help to explain.
This is not a general property of CNNs and/or DNNs. It is very specific to the structure of the input data.
CNNs are suited to data that contain structures/patterns with translational symmetries. "Convolution" means mapping many different sub-parts of the input data ("windows" of any dimension) onto the same "kernel" network. Thus the network can learn a pattern universally, independent of where in the input the window is located.
I think it is even misleading to contrast CNNs and DNNs at all. DNNs are complex multi-layer networks; convolutional layers are typical substructures/layers of DNNs.
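A toy illustration of that weight sharing (the numbers are arbitrary): the same three kernel weights are applied to every window, so the same pattern produces the same response wherever it occurs.

```python
import torch
import torch.nn as nn

# One Conv1d layer with a single kernel of width 3: the SAME 3 weights
# are applied to every window of the input.
conv = nn.Conv1d(in_channels=1, out_channels=1, kernel_size=3, bias=False)

x = torch.zeros(1, 1, 10)
x[0, 0, 2:5] = torch.tensor([1., 2., 1.])    # a small "pattern" starting at position 2
y_early = conv(x)

x_shifted = torch.roll(x, shifts=4, dims=2)  # the same pattern, shifted to position 6
y_late = conv(x_shifted)

# The response is identical, just shifted: translation equivariance from weight sharing.
print(y_early[0, 0, 2], y_late[0, 0, 6])
```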
I am working on a multi-task regression model. However, the ground-truth labels of the different tasks are on different scales, so I wonder whether it is necessary to normalize the targets; otherwise the MSE of the large-scale tasks will be far bigger than that of the others. The figure below shows part of my targets. You can clearly see that columns like ASA_m2_c have much higher values than some of the others.
First, I have already tried some weighted-loss techniques to balance how much the model focuses on each task during gradient backpropagation, but the results were not good.
Second, I have seen a tremendous amount of discussion about normalizing the input data, but hardly anything specifically about normalizing the labels. This is partly because most people's problems are single-task classification. I know PyTorch provides a convenient way to normalize vision datasets with transforms.Normalize, but that also operates on the inputs rather than the labels.
Similar questions: https://forums.fast.ai/t/normalizing-your-dataset/49799
https://discuss.pytorch.org/t/ground-truth-label-normalization/26981/19
PyTorch - How should you normalize individual instances
Moreover, it might help to give some details of my model architecture: the input is first fed into a feature extractor, and then several heads ("generators") use the shared representation from that extractor to predict the different targets.
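For reference, what I am currently considering is simply standardizing each target column and inverting the transform at prediction time, roughly like this (the values below are just placeholders standing in for my real targets):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Placeholder targets: two regression tasks on very different scales
# (e.g. a column like ASA_m2_c in the thousands vs. one between 0 and 1).
y_train = np.column_stack([
    np.random.uniform(0, 5000, size=200),   # large-scale task
    np.random.uniform(0, 1, size=200),      # small-scale task
])

# Fit the scaler on the training targets only, so every task contributes
# comparably to a plain (unweighted) MSE loss.
y_scaler = StandardScaler()
y_train_scaled = y_scaler.fit_transform(y_train)

# ... train the multi-head model on y_train_scaled ...

# At prediction time, map the network output back to the original scales.
y_pred_scaled = y_train_scaled[:5]                  # stand-in for model output
y_pred = y_scaler.inverse_transform(y_pred_scaled)
```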
I've been working on a Multi-Task Learning problem where one head has an output of ~500 and another between 0 and 1.
I've tried Uncertainty Weighting, but in vain. So I'd be grateful if you could give me a little clue about your findings (if there has been any progress).
Thanks.
I am new to data science, and while going through one of the Kaggle blogs I saw that the author uses both scale and fit on the dataset. I tried to understand the difference by going through the documentation, but I was not able to.
It's hard to understand the source of your confusion without any code. Inside the link you provided, the data is first scaled with sklearn.preprocessing.scale(), and then a sklearn.ensemble.GradientBoostingRegressor is fit to it.
So the scaling operation transforms data such that all the features are represented on the same scale, and the fitting operation trains the model with the said data.
From your question it sounds like you thought these two operations were mutually exclusive, or somehow equivalent, but they are actually consecutive logical steps.
In general, before a model is trained, the data is preprocessed in some way (with .scale() in this case), and then the model is trained. In sklearn, the .fit() methods do the training (fitting a function/model to the data).
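In code, the two steps look roughly like this (I'm using toy data here since I don't have the blog's dataset):

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.preprocessing import scale

# Toy data standing in for the blog's dataset.
X, y = make_regression(n_samples=200, n_features=5, random_state=0)

# Step 1: preprocessing -- scale() standardizes each feature (zero mean, unit variance).
X_scaled = scale(X)

# Step 2: training -- fit() learns the model parameters from the scaled data.
model = GradientBoostingRegressor().fit(X_scaled, y)
print(model.predict(X_scaled[:3]))
```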
Hope it makes sense!
Scaling is a data-normalization technique used when different features take values on very different ranges, e.g. one feature has values from 1 to 10 while another has values from 1,000 to 10,000.
Whereas fit is the function that actually starts your model training.
Scaling is a transformation of the data, a method used to normalize the range of the independent variables or features. The fit method is the training step.
I don't have much experience with training neural networks. I have a 4-variable vector as input and, correspondingly, a 3-variable vector as output. I want to create a neural network that takes these inputs and outputs, which have some unknown (possibly nonlinear) correlation between them, and train it, so that when I feed it previously unseen data it predicts the correlated output.
I was wondering,
What type of model should I use in such scenarios? A restricted Boltzmann machine, regression, a GAN, etc.?
Which library is easiest to learn and implement such a model in, e.g. TensorFlow, PyTorch, etc.?
If images were involved, which can be processed as FFT arrays, would the model change?
I did find this answer, but I am not satisfied with it.
Please let me know if there are any functions or other points you think I should know about. Any help is much appreciated.
A multilayer perceptron is a good place to start.
Keras is the highest level/easiest to use library I have used.
If you are working with images or spatially structured data a convolutional neural network will probably work best.
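A rough Keras sketch of such an MLP for your 4-in/3-out case (the layer sizes and the synthetic data are arbitrary placeholders):

```python
import numpy as np
from tensorflow import keras

# Toy stand-in data: 4 input variables, 3 correlated output variables.
X = np.random.rand(1000, 4)
y = np.column_stack([X[:, 0] + X[:, 1], X[:, 2] * X[:, 3], X.sum(axis=1)])

model = keras.Sequential([
    keras.Input(shape=(4,)),
    keras.layers.Dense(32, activation="relu"),   # hidden layers learn the (possibly nonlinear) mapping
    keras.layers.Dense(32, activation="relu"),
    keras.layers.Dense(3),                       # 3 linear outputs for regression
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=20, batch_size=32, verbose=0)

print(model.predict(X[:2]))   # predictions for new inputs work the same way
```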
I would like to use scikit-learn's svm.SVC() estimator to perform classification tasks on multi-dimensional time series - that is, on time series where the points in the series take values in R^d, where d > 1.
The issue with doing this is that svm.SVC() will only take ndarray objects of dimension at most 2, whereas the dimension of such a dataset would be 3. Specifically, the shape of a given dataset would be (n_samples, n_features, d).
Is there a workaround available? One simple solution would just be to reshape the dataset so that it is 2-dimensional, however I imagine this would lead to the classifier not learning from the dataset properly.
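For concreteness, the reshape I have in mind is just flattening each series into one long row, roughly like this (with random placeholder data):

```python
import numpy as np
from sklearn.svm import SVC

# Placeholder data: 100 series, 50 points each, with values in R^d for d = 3.
n_samples, n_features, d = 100, 50, 3
X = np.random.rand(n_samples, n_features, d)
y = np.random.randint(0, 2, size=n_samples)

# Flatten each series into a single row of n_features * d values.
X_2d = X.reshape(n_samples, -1)        # shape (100, 150)

clf = SVC(kernel="rbf").fit(X_2d, y)
print(clf.predict(X_2d[:5]))
```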
Without any further knowledge about the data, reshaping is the best you can do. Feature engineering is a very manual art that depends heavily on domain knowledge.
As a rule of thumb: if you don't really know anything about the data, throw in the raw data and see if it works. If you have an idea what properties of the data may be beneficial for classification, try to work them into features.
Say we want to classify swiping patterns on a touch screen. This closely resembles your data: We acquired many time series of such patterns by recording the 2D position every few milliseconds.
In the raw data, each time series is characterized by n_timepoints * 2 features. We can use that directly for classification. If we have additional knowledge we can use that to create additional/alternative features.
Let's assume we want to distinguish between zig-zag and wavy patterns. In that case smoothness (however that is defined) may be a very informative feature that we can add as a further column to the raw data.
On the other hand, if we want to distinguish between slow and fast patterns, the instantaneous velocity may be a good feature. However, the velocity can be computed as a simple difference along the time axis. Even linear classifiers can model this easily so it may turn out that such features, although good in principle, do not improve classification of raw data.
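As a toy sketch of what I mean (the trajectory below is random stand-in data):

```python
import numpy as np

# Hypothetical swipe: shape (n_timepoints, 2), the 2D position every few milliseconds.
positions = np.cumsum(np.random.randn(100, 2), axis=0)

# Raw features: just flatten the trajectory -> n_timepoints * 2 values.
raw_features = positions.ravel()

# Instantaneous velocity: a simple difference along the time axis.
velocity = np.diff(positions, axis=0)            # (n_timepoints - 1, 2)
speed = np.linalg.norm(velocity, axis=1)         # scalar speed per step

# One engineered summary feature, e.g. the mean speed, appended to the raw data.
features = np.concatenate([raw_features, [speed.mean()]])
```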
If you have lots and lots of data (say an internet full of good examples), deep learning neural networks can learn features automatically to some extent, but let's say this is rather advanced. In the end, most practical applications come down to trial and error. See what features you can come up with and try them out in practice. And beware the overfitting gremlin.
I am a newbie to convolutional neural nets... so this may be an ignorant question.
I have followed many examples and tutorials on the MNIST example in TensorFlow. In the CNN examples, all authors talk about using the 'input filters' to run in the CNN, but no one that I can find mentions WHERE they come from. Can anyone tell me where these come from? Or are they magically obtained from the input images?
Thanks! Chris
This is an image that one professor uses, but he does not explain whether he made the filters himself or whether TensorFlow somehow extracts them automatically.
Disclaimer: I am not an expert, more of an enthusiast.
To cut a long story short: filters are the CNN equivalent of weights, and all a neural network essentially does is learn their optimal values.
It does this by iterating through a training dataset, making predictions, comparing them to the label/value assigned to each training example (usually an image in the case of a CNN), and adjusting the weights to minimize the error function (the difference between the predicted value and the actual value).
The initial values of the filters/weights do not matter that much; although they might affect the speed of convergence to a small degree, I believe they are usually assigned random values.
It is the job of the neural network to figure out the optimal weights, not of the person implementing it.
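A small Keras illustration of that point (the layer sizes and the random "images" are arbitrary): the filters start as random numbers and are simply updated by training.

```python
import numpy as np
from tensorflow import keras

# One conv layer: its "filters" are just the layer's weights,
# created with random initial values and then learned during training.
conv = keras.layers.Conv2D(filters=8, kernel_size=3)
model = keras.Sequential([
    keras.Input(shape=(28, 28, 1)),
    conv,
    keras.layers.Flatten(),
    keras.layers.Dense(10, activation="softmax"),
])

kernels_before = conv.get_weights()[0].copy()    # shape (3, 3, 1, 8): eight random 3x3 filters
print(kernels_before.shape)

# One short dummy training run on random "images" and labels.
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
x = np.random.rand(64, 28, 28, 1)
y = np.random.randint(0, 10, size=64)
model.fit(x, y, epochs=1, verbose=0)

# The filters have moved away from their random starting values.
print(np.abs(conv.get_weights()[0] - kernels_before).max())
```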