DenseNet outputs only zeros - conv-neural-network

I am learning about DenseNet and image reconstruction from speckle patterns.
I took speckle images from my experiment and ran them through the DenseNet, and it worked: I successfully reconstructed the original image from the speckle pattern.
But I ran into a problem after running more tests.
I trained two models on the same data from scratch. One model works well, but the other outputs only zeros.
I don't understand why these results come out.
Could you please explain why the model predicts zeros, or how I can check the weights?
I looked at these links:
https://github.com/shuailizju/IDiffNet
https://github.com/liuzhuang13/DenseNet
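For reference, here is a minimal sketch of how one might inspect a trained Keras model's weights and raw predictions to check whether they have collapsed to zero; the model path and input shape are placeholders, assuming a TensorFlow/Keras setup similar to IDiffNet.

    import numpy as np
    import tensorflow as tf

    # Placeholder path: load the trained speckle-reconstruction model.
    model = tf.keras.models.load_model("densenet_speckle.h5")

    # Check each layer's weights for all-zero or vanishing values.
    for layer in model.layers:
        for w in layer.get_weights():
            print(f"{layer.name}: shape={w.shape}, "
                  f"mean={w.mean():.3e}, std={w.std():.3e}, "
                  f"all_zero={np.all(w == 0)}")

    # Also check the raw prediction statistics on one speckle image.
    # Placeholder input: replace with a real speckle image of the right shape.
    speckle_batch = np.random.rand(1, 256, 256, 1).astype("float32")
    pred = model.predict(speckle_batch)
    print("prediction min/max/mean:", pred.min(), pred.max(), pred.mean())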

Related

PyTorch CNN gives different results

I am trying to make a face recognition model with PyTorch. My model performs well, with both training and validation loss close to 0. The problem is that when I test it using the same input, it gives me different results each time; the original post shows this in two prediction screenshots, where the left side is the true label and the right one is the prediction output. I set a random seed and ran model.eval() before feeding the input to the model.
Does anyone know how to solve this?
Please advise
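For reference, this is the kind of seeding / model.eval() / torch.no_grad() setup the question describes; a minimal sketch with a tiny placeholder model and random input, not the asker's actual network.

    import random
    import numpy as np
    import torch

    def set_seed(seed: int = 42):
        # Seed every RNG that could affect the forward pass.
        random.seed(seed)
        np.random.seed(seed)
        torch.manual_seed(seed)
        torch.cuda.manual_seed_all(seed)
        # Ask cuDNN for deterministic kernels (can be slower).
        torch.backends.cudnn.deterministic = True
        torch.backends.cudnn.benchmark = False

    set_seed(42)

    # Placeholder for the trained face-recognition model.
    model = torch.nn.Sequential(
        torch.nn.Flatten(),
        torch.nn.Linear(3 * 224 * 224, 10),
    )
    model.eval()            # disables dropout, uses running batch-norm stats

    with torch.no_grad():   # no autograd bookkeeping during inference
        inputs = torch.randn(1, 3, 224, 224)   # placeholder input tensor
        logits = model(inputs)
        pred = logits.argmax(dim=1)
        print(pred)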

Which model should I use? - Multi label classification

I am a newbie in data science, so my question might be basic.
I have a dataset. The 1st column is people's comments about issues (as text), and the 2nd column is the class/label of that failure (as text). There are many failure types in my 2nd column.
I want to train a model so that when another comment describing an issue is entered, the model classifies the failure.
Can I use a Keras Sequential model, or should I use a different model? If you can share a link related to my question, I would appreciate it.
You can use a Keras Sequential model for sure. As a beginner, try Dense layers; you can also use Convolutional Neural Networks for this.
Also, try the tensorflow.keras.preprocessing.text Tokenizer to map each word to an integer so the model can work with the text, as in the sketch below.
For more information, search for text classification tutorials and the Tokenizer documentation.
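A minimal sketch of that idea; the comments, labels, vocabulary size, and layer sizes below are made-up placeholders.

    import numpy as np
    import tensorflow as tf
    from tensorflow.keras.preprocessing.text import Tokenizer
    from tensorflow.keras.preprocessing.sequence import pad_sequences

    # Toy data: comments and their failure labels (placeholders).
    comments = ["pump is leaking oil", "screen shows error code 42", "motor overheats quickly"]
    labels = ["leak", "software", "overheating"]

    # Turn words into integer ids, then pad to a fixed length.
    tokenizer = Tokenizer(num_words=5000, oov_token="<OOV>")
    tokenizer.fit_on_texts(comments)
    X = pad_sequences(tokenizer.texts_to_sequences(comments), maxlen=20)

    # Turn string labels into integer class ids.
    label_to_id = {lab: i for i, lab in enumerate(sorted(set(labels)))}
    y = np.array([label_to_id[lab] for lab in labels])

    # A simple Sequential text classifier.
    model = tf.keras.Sequential([
        tf.keras.layers.Embedding(input_dim=5000, output_dim=32),
        tf.keras.layers.GlobalAveragePooling1D(),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(len(label_to_id), activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    model.fit(X, y, epochs=5, verbose=0)

    # Classify a new comment.
    new = pad_sequences(tokenizer.texts_to_sequences(["oil leaking from the pump"]), maxlen=20)
    print(model.predict(new).argmax(axis=1))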

Creating input data for BERT modelling - multiclass text classification

I'm trying to build a Keras model to classify text into 45 different classes. I'm a little confused about preparing my data in the input format required by Google's BERT model.
Some blog posts insert data as a tf dataset with input_ids, segment ids, and mask ids, as in this guide, but then some only go with input_ids and masks, as in this guide.
Also in the second guide, it notes that the segment mask and attention mask inputs are optional.
Can anyone explain whether or not those two are required for a multiclass classification task?
If it helps, each row of my data can consist of any number of sentences within a reasonably sized paragraph. I want to be able to classify each paragraph/input to a single label.
I can't seem to find many guides/blogs about using BERT with Keras (TensorFlow 2) for a multiclass problem; many of the ones I do find are for multi-label problems.
I guess it is too late to answer, but I had the same question. I went through the Hugging Face code and found that if attention_mask and token_type_ids (segment ids) are None, then by default the model attends to all tokens and all segments are given id 0.
If you want to check it out, you can find the code here
Let me know if this clarifies it or you think otherwise.
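A small sketch of what that looks like with the Hugging Face transformers API; the checkpoint name and num_labels=45 are assumptions chosen to match the question.

    from transformers import BertTokenizer, TFBertForSequenceClassification

    tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
    model = TFBertForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=45)

    enc = tokenizer(["An example paragraph to classify."],
                    padding=True, truncation=True, return_tensors="tf")

    # Full call: input_ids, attention_mask and token_type_ids all supplied.
    out_full = model(input_ids=enc["input_ids"],
                     attention_mask=enc["attention_mask"],
                     token_type_ids=enc["token_type_ids"])

    # Minimal call: only input_ids. With attention_mask/token_type_ids left as None,
    # the model attends to every token and treats everything as segment 0, which is
    # fine for single-sequence inputs (masks do matter once you pad batches).
    out_min = model(input_ids=enc["input_ids"])

    print(out_full.logits.shape, out_min.logits.shape)  # (1, 45) each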

extracting gradients for a model, reversing them and updating the weights in Keras

I'm trying to do domain-adversarial training using the gradient-reversal procedure. I have a deep-learning model architecture consisting of 5 dense layers. I want to extract the gradients, reverse them, and then use them to update the weights. I am not sure how to extract the gradients or how to apply them to update the weights. I have gone through some example code, but I am still unsure about using the Keras backend. Any help with some toy code or a worked example with an explanation would be much appreciated.
Thank you,
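For reference, a gradient-reversal layer is commonly implemented in TensorFlow 2 / Keras with tf.custom_gradient. Below is a minimal sketch; the layer sizes and the two-head (label/domain) setup are assumptions for illustration, not the asker's exact architecture.

    import tensorflow as tf

    @tf.custom_gradient
    def grad_reverse(x):
        # Forward pass: identity. Backward pass: flip the sign of the gradient.
        def grad(dy):
            return -dy
        return tf.identity(x), grad

    class GradientReversal(tf.keras.layers.Layer):
        """Keras layer wrapping the reversed-gradient op."""
        def call(self, inputs):
            return grad_reverse(inputs)

    # Tiny domain-adversarial sketch: a shared feature extractor feeding a label
    # head directly and a domain head through the gradient-reversal layer.
    inputs = tf.keras.Input(shape=(100,))
    features = tf.keras.layers.Dense(64, activation="relu")(inputs)
    label_out = tf.keras.layers.Dense(10, activation="softmax", name="label")(features)
    domain_branch = GradientReversal()(features)
    domain_out = tf.keras.layers.Dense(1, activation="sigmoid", name="domain")(domain_branch)

    model = tf.keras.Model(inputs, [label_out, domain_out])
    model.compile(optimizer="adam",
                  loss={"label": "sparse_categorical_crossentropy",
                        "domain": "binary_crossentropy"})
    model.summary()

During training, the feature extractor then receives the negated domain-classifier gradient, which is the gradient-reversal trick; the standard Keras optimizer applies the update, so there is no need to modify the weights by hand.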

Wrong predictions from MNIST Keras model

I am new to neural networks, so I tried my first neural network, which is pretty close to the one on the Keras learn page, given below:
https://github.com/aakarsh1011/Neural-Network/blob/master/MNSIT%20classification.ipynb
Kindly look at the end, where I read in a random image and try to predict it: the image is a bag, but when trained with epochs=5 the model predicts it as a sandal.
Is something wrong with my code or labeling?
UPDATE - Being new to the field, I didn't know the importance of epochs, so I asked this question. I was afraid of over-fitting the model or training it too much. But there is no definite way to decide this; it's all trial and error. GOOD LUCK!
First of all, as far as I can see, your code is correct. Your model predicting the wrong item can be caused by the model not being trained for long enough. I would highly recommend that you set epochs=100, and you will be able to see the model's accuracy rise. You should generally try to give your model as many epochs as possible for training; it will simply take some time. Try out some different numbers of epochs to find one that doesn't take too long but still gives an acceptable result.
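One way to take the guesswork out of picking the number of epochs is an EarlyStopping callback. Below is a minimal sketch assuming a Fashion-MNIST-style setup like the linked notebook (the bag/sandal labels suggest Fashion-MNIST); the layer sizes are assumptions, not the asker's exact model.

    import tensorflow as tf

    # Fashion-MNIST, normalized to [0, 1].
    (x_train, y_train), (x_test, y_test) = tf.keras.datasets.fashion_mnist.load_data()
    x_train, x_test = x_train / 255.0, x_test / 255.0

    # A small classifier similar to the Keras tutorial model.
    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])

    # Let EarlyStopping decide when more epochs stop helping, instead of guessing.
    early_stop = tf.keras.callbacks.EarlyStopping(monitor="val_loss",
                                                  patience=3,
                                                  restore_best_weights=True)
    model.fit(x_train, y_train,
              validation_split=0.1,
              epochs=100,            # upper bound; training usually stops earlier
              callbacks=[early_stop])
    print(model.evaluate(x_test, y_test))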
