I have been trying to find a good way to do change detection on some images. I found the SNUNet paper with their GitHub repo https://github.com/likyoo/Siam-NestedUNet, but when I try to use it I always get a blank segmentation map as output. What did I do wrong?
Here is a link to my Colab: https://colab.research.google.com/drive/1h77eFWXUqTroShuVHueCrejp3F-t9wpm?usp=sharing
Thanks
I want to hand-write a framework to perform inference for a given neural network. The network is quite complicated, so to make sure my implementation is correct, I need to know exactly how the inference process is done on the device.
I tried to use torchviz to visualize the network, but what I got seems to be the backpropagation compute graph, which is really hard to understand.
Then I tried to convert the PyTorch model to the ONNX format, following the instructions enter link description here, but when I tried to visualize it, it seems that the original layers of the model had been split into very small operators.
I just want to get a result like this.
How can I get this? Thanks!
Have you tried saving the model with torch.save (https://pytorch.org/tutorials/beginner/saving_loading_models.html) and opening it with Netron? The last view you showed looks like a view from the Netron app.
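A minimal sketch of that suggestion, using a torchvision model as a stand-in for your own network:

```python
import torch
import torchvision

# Stand-in model; replace with your own network.
model = torchvision.models.resnet18()
model.eval()

# Save the whole module (not just the state_dict) so the file
# contains the model structure; Netron can then open the .pt file.
torch.save(model, "model_full.pt")
```

Then open model_full.pt in Netron (https://netron.app) to browse the layers.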
You can also try the torchview package, which provides several features that are especially useful for large models. For instance, you can set the display depth (the depth in the nested hierarchy of modules).
It is also based on the forward pass.
github repo
Disclaimer: I am the author of the package.
Note: the accepted input format for the tool is a PyTorch model.
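A minimal sketch of how it can be used (resnet18 is just a stand-in for your own model):

```python
import torchvision
from torchview import draw_graph  # pip install torchview

model = torchvision.models.resnet18()

# depth controls how far the graph expands into nested submodules.
model_graph = draw_graph(model, input_size=(1, 3, 224, 224), depth=2)

# visual_graph is a graphviz object; render it to a file,
# or just display model_graph.visual_graph in a notebook.
model_graph.visual_graph.render("model_graph", format="png")
```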
I'm new here, so please be kind and teach me if I did not provide all the information you need :)
I would like to compare the Edge TPU with other edge devices such as Myriad. I would like to select one object detection model and one image segmentation model. Looking at the following link, which lists the supported operations, I noticed that YOLOv3 cannot be compiled for the Edge TPU because it includes LeakyRelu.
https://coral.withgoogle.com/docs/edgetpu/models-intro/
For image segmentation, I'd like to use DeepLab, but I still don't know whether the operations included in DeepLab v3+, such as atrous convolution or the feature pyramid network, are supported.
I'd appreciate it if someone could tell me which models are usable on the Edge TPU. Are there any image segmentation models?
Have you already found the link below?
https://github.com/tensorflow/models/blob/master/research/deeplab/g3doc/quantize.md
"mobilenetv2_coco_voc_trainaug_8bit":
deeplabv3_mnv2_pascal_train_aug_8bit/frozen_inference_graph.pb
This model can be converted to a TFLite FlatBuffer, and it can also be compiled for the Edge TPU with edgetpu_compiler.
Note: the edgetpu_api environment has been updated. You can find the details below.
https://coral.withgoogle.com/news/updates-07-2019/
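A hedged sketch of that conversion path using the TF 1.x converter API. The input/output tensor names and input shape below are assumptions; inspect the frozen graph (for example with Netron) to find the real ones for your .pb file:

```python
import tensorflow as tf  # TF 1.x; use tf.compat.v1.lite in TF 2.x

# Assumed tensor names -- verify against the actual frozen graph.
converter = tf.lite.TFLiteConverter.from_frozen_graph(
    "deeplabv3_mnv2_pascal_train_aug_8bit/frozen_inference_graph.pb",
    input_arrays=["ImageTensor"],           # assumed input tensor name
    output_arrays=["SemanticPredictions"],  # assumed output tensor name
    input_shapes={"ImageTensor": [1, 513, 513, 3]},
)
tflite_model = converter.convert()

with open("deeplab_mnv2_8bit.tflite", "wb") as f:
    f.write(tflite_model)

# Then compile for the Edge TPU on the command line:
#   edgetpu_compiler deeplab_mnv2_8bit.tflite
```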
Yes. There are prepackaged segmentation models and code examples showing how to use them.
Here they are: https://coral.ai/models/
Please share if you know where to find something similar for Movidius-based VPU devices.
Here you can find all supported layers for the Edge TPU: https://coral.ai/docs/edgetpu/models-intro/#supported-operations.
For Conv2D it says "Must use the same dilation in x and y dimensions." So implementing a version of DeepLab v3+ is possible on the Edge TPU.
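A small sketch of what that Conv2D constraint means in practice, shown with a Keras layer (the filter count and kernel size are placeholders):

```python
import tensorflow as tf

# The Edge TPU requires the same dilation rate in the x and y
# dimensions, so use a symmetric rate like (2, 2), not (1, 2).
atrous_conv = tf.keras.layers.Conv2D(
    filters=256,
    kernel_size=3,
    dilation_rate=(2, 2),  # equal dilation in both dimensions
    padding="same",
)
```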
I am working on a project for face recognition with photos taken by cameras. I have to use a virtual machine with Spark and Deeplearning4j.
The problem is that I haven't found a suitable algorithm and code for creating the neural network.
What is the difference between VGG16, Keras, and DataVec, and when should we use each of them?
All the things you asked about can be found with a quick Google search; still, here are some links to point you in the right direction:
VGG16
Keras
In short: VGG16 is a convolutional network architecture, Keras is a deep learning library, and DataVec is Deeplearning4j's data vectorization (ETL) library, so they are not directly comparable.
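To make the distinction concrete, a minimal sketch of loading the VGG16 architecture through Keras (Deeplearning4j's model zoo ships a VGG16 as well):

```python
from tensorflow.keras.applications import VGG16

# VGG16 is the architecture; Keras is the library providing a
# pretrained implementation of it.
model = VGG16(weights="imagenet")  # downloads ImageNet weights on first use
model.summary()
```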
I am trying to develop a machine-learning-based image classification system using scikit-learn; what I am trying to do is multi-class classification. The biggest problem I am facing with scikit-learn is how to load the data. Then I came across one of the examples, face_recognition.py, which uses fetch_lfw_people to fetch data from the internet. I can see that this example actually does multi-class classification, but I was unable to find documentation on it. I have some questions: what does fetch_lfw_people do, and what does this function load into lfw_people? Also, I saw some text files in the data folder; is the code reading those text files? My main intention is to load my own set of image data, but I am unable to do it with fetch_lfw_people: when I change the path to my image folder via data_home and set funneled=False, I get errors. I hope I get some answers here.
First things first: you can't directly give images as input to your classifier. You have to extract some features from your images, or you can load your images with OpenCV and use the resulting NumPy arrays as input to your classifier.
I would suggest you read some basics of image classification first, such as how to train a classifier.
Coming to your question about the fetch_lfw_people function: it downloads the pre-processed Labeled Faces in the Wild (LFW) image data. If you are training on your own images, you first have to convert your image data to numerical features.
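A minimal sketch of the "load with OpenCV, feed NumPy arrays to the classifier" approach mentioned above. The folder layout (one subfolder per class) and the 64x64 size are assumptions; adapt them to your own dataset:

```python
import glob
import cv2  # pip install opencv-python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

X, y = [], []
# Assumed layout: dataset/<class_name>/*.jpg, one subfolder per class.
for label, folder in enumerate(sorted(glob.glob("dataset/*"))):
    for path in glob.glob(folder + "/*.jpg"):
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        img = cv2.resize(img, (64, 64))
        X.append(img.flatten())  # raw pixels as a crude feature vector
        y.append(label)

X, y = np.array(X), np.array(y)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)

clf = SVC(kernel="linear").fit(X_train, y_train)
print("accuracy:", clf.score(X_test, y_test))
```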
I am trying to develop a spam detector application using an SVM classifier, but I am not able to find any input data. Can anyone please suggest what kind of input data I should use and where I could find it? I tried Google but didn't find a satisfactory answer.
The Stanford machine learning course (ml-class.org) has a lab (no. 6) where you build a spam filter using support vector machines. The dataset is supplied.
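A minimal sketch of such an SVM spam filter in scikit-learn. The tiny inline dataset is a placeholder; substitute a real corpus such as the course's supplied dataset or a public one like SpamAssassin:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.svm import LinearSVC
from sklearn.pipeline import make_pipeline

# Placeholder data: 1 = spam, 0 = ham.
texts = ["win a free prize now", "meeting at 10am tomorrow",
         "cheap pills online", "lunch on friday?"]
labels = [1, 0, 1, 0]

# TF-IDF features feeding a linear SVM.
clf = make_pipeline(TfidfVectorizer(), LinearSVC())
clf.fit(texts, labels)

print(clf.predict(["free pills, win now"]))  # expect [1]
```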