Concatenation of two models' feature vectors for fusion - python-3.x

I've trained two VGG16 models, each on a different dataset.
I want to concatenate these two models to predict a new image.
So my question is: how can I concatenate these two models?
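One common way to do this (a sketch, not a drop-in solution: the file names, input size, feature layer, and number of classes below are assumptions about your setup) is to cut each trained model at its penultimate layer, concatenate the two feature vectors over a shared input, and train a small fusion head on top:

from tensorflow import keras
from tensorflow.keras import layers

num_classes = 5  # placeholder for your label count

model_a = keras.models.load_model('vgg16_dataset_a.h5')  # hypothetical paths
model_b = keras.models.load_model('vgg16_dataset_b.h5')

# Cut each model just before its classifier to get a feature extractor.
feat_a = keras.Model(model_a.input, model_a.layers[-2].output, name='feat_a')
feat_b = keras.Model(model_b.input, model_b.layers[-2].output, name='feat_b')
feat_a.trainable = False  # keep the learned features frozen
feat_b.trainable = False

# One shared image input, two feature vectors, fused by concatenation,
# with a new classification head trained on top.
inp = keras.Input(shape=(224, 224, 3))
fused = layers.Concatenate()([feat_a(inp), feat_b(inp)])
out = layers.Dense(num_classes, activation='softmax')(fused)

fusion_model = keras.Model(inp, out)
fusion_model.compile(optimizer='adam', loss='categorical_crossentropy',
                     metrics=['accuracy'])

You then fit fusion_model on images labelled for your new prediction task; only the new Dense head is trained.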

Related

Aggregate tensorflow keras models

Let's say I have trained 300 different models that all share the same architecture; only the weights differ.
Now, rather than calling my 300 models one after the other, how can I aggregate them into effectively one model, so that a single input X is fed in once and the results of all 300 models come out in one go?
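One way to get there (a hedged sketch, assuming the models are already loaded into a list and all accept the same input shape; the names below are placeholders) is to share a single input across every submodel, so one forward pass returns all predictions stacked together:

import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

def aggregate_models(models, input_shape):
    inp = keras.Input(shape=input_shape)
    outs = []
    for i, m in enumerate(models):
        # Rename submodels to avoid duplicate-name errors when they are
        # combined in one graph (touches a private attribute; pragmatic).
        m._name = f'submodel_{i}'
        outs.append(m(inp))
    # Stack to shape (batch, n_models, n_outputs).
    stacked = layers.Lambda(lambda ts: tf.stack(ts, axis=1))(outs)
    return keras.Model(inp, stacked)

A call like aggregate_models(models, (224, 224, 3)).predict(x) then evaluates all 300 models in a single pass.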

Training features with different dimensions

I have two types of features that should be concatenated and trained together. The features are as follows:
Images with dimensions (277, 217, 3)
A vector of real numbers with shape (40, 1)
What I did was: after training the model on all the images, I extracted the output of predict() for every sample, which gives a feature vector, and then concatenated this vector with the real-number features. But the problem is that when I trained on the predict() outputs alone, the result was lower than the base model's result.
So instead I am trying to add the extra features after the convolutional layers while the model is training (see the sketch below).
Thanks in advance!!
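A minimal sketch of that idea, using the shapes from the question (the layer sizes, the flattened (40,) vector, and the binary output are assumptions): a two-input model whose image branch is a small CNN, with the numeric vector concatenated onto the pooled conv features before the dense layers, so everything trains end to end:

from tensorflow import keras
from tensorflow.keras import layers

image_in = keras.Input(shape=(277, 217, 3))
vector_in = keras.Input(shape=(40,))  # the (40, 1) features, flattened

x = layers.Conv2D(32, 3, activation='relu')(image_in)
x = layers.MaxPooling2D()(x)
x = layers.Conv2D(64, 3, activation='relu')(x)
x = layers.GlobalAveragePooling2D()(x)

# Inject the real-number features after the convolutional layers.
merged = layers.Concatenate()([x, vector_in])
x = layers.Dense(128, activation='relu')(merged)
out = layers.Dense(1, activation='sigmoid')(x)  # assumed binary target

model = keras.Model([image_in, vector_in], out)
model.compile(optimizer='adam', loss='binary_crossentropy',
              metrics=['accuracy'])
# model.fit([images, vectors], labels, ...)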

Average weights from two .h5 files in keras

I have trained two models on different datasets and saved the weights of each model as ModelA.h5 and ModelB.h5.
I want to average these weights, save the result as a new file called ModelC.h5, and load it into the same model architecture.
How do I do it?
Weights from models trained on different datasets can't just be averaged like this. By analogy: train one person to classify 1000 images into 5 classes, then train another person to classify a different 1000 images into the same 5 classes. Now you want to merge them into one person.
Rather, what you can do is build an ensemble of both networks. There are multiple ways to ensemble the predictions of the two models: max voting, averaging or weighted averaging, bagging and boosting, etc. Ensembling combines weak classifiers into one stronger classifier.
You can refer to this link to read more about the different types of ensemble: Link
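As a minimal sketch of the averaging approach, assuming both files contain full saved models with softmax outputs over the same 5 classes (if only weights were saved, rebuild the architecture and call load_weights() instead):

import numpy as np
from tensorflow import keras

model_a = keras.models.load_model('ModelA.h5')
model_b = keras.models.load_model('ModelB.h5')

def ensemble_predict(x, weight_a=0.5):
    # Weighted average of the two models' class probabilities.
    probs = weight_a * model_a.predict(x) + (1 - weight_a) * model_b.predict(x)
    return np.argmax(probs, axis=1)

Setting weight_a closer to 1.0 trusts ModelA more; 0.5 gives a plain average.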

Best tool for text representation to deep learning

I want to ask: what is the best tool for preparing my text for deep learning?
What is the difference between Word2Vec, GloVe, Keras, LSA...?
You should use a pre-trained embedding to represent a sentence as a vector or a matrix. There are many sources of pre-trained embeddings, trained on different datasets (for instance, all of Wikipedia). These embeddings can have different dimensionality, but normally each word is represented with 100 or 300 dimensions. The links below point to collections of them, and a loading sketch follows.
Pre-trained embeddings
Pre-trained embeddings 2
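As a sketch of the usual recipe (the GloVe file name and the toy corpus are placeholders; in practice you would download glove.6B.100d.txt and fit the Tokenizer on your own texts): read the vectors from disk, build an embedding matrix aligned with your vocabulary, and load it into a frozen Keras Embedding layer:

import numpy as np
from tensorflow.keras.initializers import Constant
from tensorflow.keras.layers import Embedding
from tensorflow.keras.preprocessing.text import Tokenizer

texts = ['a toy corpus', 'for the sketch']  # replace with your data
tokenizer = Tokenizer()
tokenizer.fit_on_texts(texts)
word_index = tokenizer.word_index

embedding_dim = 100
embeddings = {}
with open('glove.6B.100d.txt', encoding='utf-8') as f:
    for line in f:
        word, *vec = line.split()
        embeddings[word] = np.asarray(vec, dtype='float32')

# Row i of the matrix holds the vector for the word with index i.
embedding_matrix = np.zeros((len(word_index) + 1, embedding_dim))
for word, i in word_index.items():
    if word in embeddings:
        embedding_matrix[i] = embeddings[word]

embedding_layer = Embedding(len(word_index) + 1, embedding_dim,
                            embeddings_initializer=Constant(embedding_matrix),
                            trainable=False)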

Use models from Keras Applications without pretrained weights

Keras Applications provides implementations of some of the most popular model architectures, with weights pretrained on some of the most popular datasets. These predefined models are very handy for transfer learning on problems similar to the datasets the models were trained on.
But what if I have a very different problem and want to train the models entirely on the new dataset? How can I use the models in Applications for training from scratch on my own dataset, if I don't have pretrained weights?
You can pass None as the weights argument, for instance with the InceptionV3 architecture:
keras.applications.inception_v3.InceptionV3(include_top=False, weights=None, input_shape=(img_width, img_height, 3))
include_top=False leaves out the final classification layers so you can attach your own top layers for your task.
weights=None (the Python None, not the string 'None') means the model starts from randomly initialized weights; if you want to train starting from the ImageNet weights instead, set weights='imagenet'.
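Putting the answer together, a sketch of training InceptionV3 from scratch with a custom classification head (img_width, img_height, and num_classes are placeholders for your dataset):

from tensorflow import keras
from tensorflow.keras import layers

img_width, img_height, num_classes = 299, 299, 10  # placeholders

base = keras.applications.InceptionV3(
    include_top=False,
    weights=None,  # Python None, not the string 'None'
    input_shape=(img_width, img_height, 3),
)

x = layers.GlobalAveragePooling2D()(base.output)
outputs = layers.Dense(num_classes, activation='softmax')(x)

model = keras.Model(base.input, outputs)
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])
# model.fit(train_data, epochs=..., validation_data=val_data)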
