Sensor design parameters to be found - sensors

Could you please help me find the following for the circuit below:
Sensitivity
Interfacing circuit
Energy harvesting model
Sensor
This would help me figure out how to solve these types of questions.

Related

visualization for output of topic modelling

For topic modelling I use NMF (non-negative matrix factorisation). Now I want to visualise the output. Can someone suggest visualisation techniques for topic modelling?
Check LDAvis if you're using R, or pyLDAvis if you're using Python. It was developed for LDA, but I guess it also works for NMF, by treating one factor matrix as the topic-word matrix and the other as the topic proportions in each document.
http://nbviewer.jupyter.org/github/bmabey/pyLDAvis/blob/master/notebooks/pyLDAvis_overview.ipynb
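If you go the pyLDAvis route with NMF, the generic pyLDAvis.prepare entry point accepts the two factor matrices directly. Here is a minimal sketch of what I mean; the toy corpus, normalisation and parameter choices are illustrative assumptions, so treat it as a starting point rather than a recipe:

    # Sketch: visualising scikit-learn NMF output with pyLDAvis.
    # Corpus, component count and normalisation are placeholder assumptions.
    import numpy as np
    import pyLDAvis
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import NMF

    docs = ["the cat sat on the mat", "dogs chase cats", "stock prices fell today"]

    vectorizer = CountVectorizer()
    dtm = vectorizer.fit_transform(docs)        # document-term matrix

    nmf = NMF(n_components=2, random_state=0)
    doc_topic = nmf.fit_transform(dtm)          # W: documents x topics
    topic_term = nmf.components_                # H: topics x terms

    # pyLDAvis.prepare expects probability-like rows, so normalise both factors
    # (the small epsilon just guards against an all-zero row).
    doc_topic_dists = doc_topic / (doc_topic.sum(axis=1, keepdims=True) + 1e-12)
    topic_term_dists = topic_term / (topic_term.sum(axis=1, keepdims=True) + 1e-12)

    vis = pyLDAvis.prepare(
        topic_term_dists=topic_term_dists,
        doc_topic_dists=doc_topic_dists,
        doc_lengths=np.asarray(dtm.sum(axis=1)).ravel(),
        vocab=vectorizer.get_feature_names_out(),
        term_frequency=np.asarray(dtm.sum(axis=0)).ravel(),
    )
    pyLDAvis.save_html(vis, "nmf_topics.html")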
I highly recommend topicwizard https://github.com/x-tabdeveloping/topic-wizard
(full disclosure: it was written by me)
It's a highly interactive dashboard for visualizing topic models, where you can also name topics and see relations between topics, documents and words.

CNTK for waveform input?

I want to use neural networks to classify periodic signals coming from a sensor. I've only done image work before with CNTK. I suppose it's a bit like NLP in that a continuous waveform is the input -- but in my case it won't be audio, but something else. Can somebody point me to how I might get started on this? Thanks!
Could you check whether the following links, in order, help?
https://cntk.ai/pythondocs/Manual_How_to_feed_data.html#Comma-separated-values-(CSV)
https://github.com/Microsoft/CNTK/issues/2199
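If it helps, here is roughly how I'd frame the problem while following those pages: cut the waveform into fixed-length windows, treat each window as one dense feature vector (one CSV row), and start with a plain feed-forward classifier before trying anything sequence-aware. The window size, class count and random placeholder data below are assumptions, not anything specific to your sensor:

    import numpy as np
    import cntk as C

    window = 256        # samples per training window (assumed framing; adjust to your signal)
    num_classes = 3     # however many signal classes you have

    # Placeholder data: each row is one fixed-length window cut from the raw waveform,
    # i.e. what one row of the CSV format in the first link would give you.
    X = np.random.randn(1000, window).astype(np.float32)
    y = np.eye(num_classes, dtype=np.float32)[np.random.randint(num_classes, size=1000)]

    features = C.input_variable(window)
    labels = C.input_variable(num_classes)

    # Start simple: a plain feed-forward classifier over each window.
    model = C.layers.Sequential([
        C.layers.Dense(128, activation=C.relu),
        C.layers.Dense(64, activation=C.relu),
        C.layers.Dense(num_classes)
    ])(features)

    loss = C.cross_entropy_with_softmax(model, labels)
    error = C.classification_error(model, labels)
    learner = C.sgd(model.parameters, C.learning_parameter_schedule(0.01))
    trainer = C.Trainer(model, (loss, error), [learner])

    # Minimal training loop over minibatches of 32 windows.
    for start in range(0, len(X), 32):
        trainer.train_minibatch({features: X[start:start + 32],
                                 labels: y[start:start + 32]})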

How can you train GATE (General Architecture for Text Engineering) Developer with training data, i.e. data that is already annotated?

I am looking for ways to train GATE: not just running the application, but training it with data that has already been annotated (not just plain documents). I would really appreciate it if anybody could help me. Thanks :)

What are the obstacles in today's object detection?

I am new to computer vision, and I am now doing some research on object detection. I have read the papers on Faster R-CNN and R-FCN, and also YOLO. It seems the biggest problem is speed? Also, all of them use image data only. Are there any models that combine text and image data, so that information from text can help detection when the training data is small? For example, when the training data is small the model cannot tell dogs and cats apart clearly, but it can tell there is a bone near the object; if the model learns from text that an object near a bone is most likely a dog, it could then identify the object. Does this kind of algorithm exist? I haven't found any; I hope you can help me. Thanks a lot.
It seems you have mostly referred to research on deep networks for object detection. Prior to the success of deep networks, researchers were looking into the possibility of using text together with image features to implement ideas similar to yours. You might want to refer to papers from ACM Multimedia and IEEE TMM, especially those before 2014.
The problem was that those approaches could not perform as well as the simplest of the deep networks that use only images. There is some work on combining both images and text, such as this paper. I am sure at least some researchers are already working on this.

Gender Detection by audio

I've been searching everywhere for some form of gender detection that works by reading frequency data from an audio file. I've had no luck finding a program that can do that, or even anything that can output the audio data so I can write a basic program to read and manipulate it to determine the gender of the speaker.
Do any of you know where I can find something to help me with this?
To reiterate, I basically want a program that, when a person talks into a microphone, reports the gender of the speaker with a fair amount of accuracy. My full plan is to also have a speech-to-text feature, so the program will write out what the speaker said and give some extremely basic demographics about the speaker.
*Preferably in a common scripting language that's cross-platform or Linux-supported.
Though this is an old question, if someone is still interested in doing gender detection from audio: you can do this by extracting MFCC (mel-frequency cepstral coefficient) features and modelling them with a GMM (Gaussian mixture model).
You can follow this tutorial, which implements exactly that and evaluates it on a gender-labelled subset extracted from Google's AudioSet.
https://appliedmachinelearning.wordpress.com/2017/06/14/voice-gender-detection-using-gmms-a-python-primer/
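Here is a rough sketch of that MFCC + GMM pipeline using librosa and scikit-learn (the tutorial above may use different libraries; the directory layout, sample rate and model sizes below are placeholder assumptions):

    # Sketch: one GMM per gender over pooled MFCC frames; classify by average log-likelihood.
    import glob
    import numpy as np
    import librosa
    from sklearn.mixture import GaussianMixture

    def mfcc_features(path, n_mfcc=13):
        """Load an audio file and return its frame-wise MFCC matrix (frames x coefficients)."""
        y, sr = librosa.load(path, sr=16000)
        return librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc).T

    def train_gmm(wav_paths, n_components=16):
        """Fit one GMM on all MFCC frames pooled from one speaker group."""
        frames = np.vstack([mfcc_features(p) for p in wav_paths])
        gmm = GaussianMixture(n_components=n_components, covariance_type='diag', max_iter=200)
        return gmm.fit(frames)

    # One model per class, trained on labelled recordings (paths are placeholders).
    male_gmm = train_gmm(glob.glob('train/male/*.wav'))
    female_gmm = train_gmm(glob.glob('train/female/*.wav'))

    def predict_gender(path):
        """Score a recording under each model and pick the higher average log-likelihood."""
        frames = mfcc_features(path)
        scores = {'male': male_gmm.score(frames), 'female': female_gmm.score(frames)}
        return max(scores, key=scores.get)

    print(predict_gender('test/unknown.wav'))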
You're going to want to look into formant detection and linear predictive coding. Here's a paper that has some signal flow diagrams that could be ported over to scipy/numpy.
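If you want to experiment with the LPC route in Python, here is a rough numpy/librosa sketch of pulling formant estimates out of a single voiced frame (the file name, frame position, LPC order and pre-emphasis value are illustrative guesses, not taken from that paper):

    import numpy as np
    import librosa

    y, sr = librosa.load('voice_sample.wav', sr=8000)   # placeholder file name

    # Take one short voiced frame, window it, and pre-emphasise to flatten the spectrum.
    frame = y[2000:2000 + 512] * np.hamming(512)
    frame = np.append(frame[0], frame[1:] - 0.97 * frame[:-1])

    # LPC order of sr/1000 + 2 is a common rule of thumb.
    a = librosa.lpc(frame, order=int(sr / 1000) + 2)

    # Formant estimates come from the angles of the complex roots of the LPC polynomial.
    roots = [r for r in np.roots(a) if np.imag(r) > 0]
    freqs = sorted(np.arctan2(np.imag(roots), np.real(roots)) * (sr / (2 * np.pi)))
    print("Estimated formants (Hz):", [round(f) for f in freqs[:3]])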

Resources