Clock Drift in WSNs

Does anyone know how to simulate clock drift in a WSN using Castalia?
I would like to know if anyone has already done this type of analysis and, if so, what parameters were analyzed and compared.

You can already simulate a very simple model of clock drift in Castalia using the sigmaCPUClockDrift parameter of the ResourceManager.
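For a sense of what that parameter does, here is a toy Python sketch of a constant-rate drift model: each node draws a rate error once from a zero-mean Gaussian, so local clocks diverge linearly. This is a conceptual illustration, not Castalia's actual code, and the 30 ppm sigma is an assumption:

```python
import random

SIGMA_DRIFT = 30e-6  # assumed 30 ppm standard deviation, in the spirit of sigmaCPUClockDrift

class DriftingClock:
    """Toy model: each node draws a constant rate error once from a
    zero-mean Gaussian, so its local time diverges linearly."""
    def __init__(self, sigma=SIGMA_DRIFT):
        self.drift = random.gauss(0.0, sigma)

    def local_time(self, true_time):
        # A node with rate error d reads (1 + d) * t at true time t.
        return (1.0 + self.drift) * true_time

# Two nodes disagree more and more as true time advances.
a, b = DriftingClock(), DriftingClock()
for t in (1.0, 60.0, 3600.0):
    print(f"t={t:>7.1f}s  offset={a.local_time(t) - b.local_time(t):+.6f}s")
```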
Others have enhanced Castalia with more complex clock models. See the following links:
Work by Thomas Schmid
Work by Federico Ferrari

Related

How to capture keyboard input during runtime in Verilog?

I've been trying to find a way to capture keyboard input during runtime simulation of my Verilog code. Is this even possible?
I have taken a look at resources like asic-world and the Quick Reference for Verilog found on Google, but found nothing regarding a way to take keyboard inputs.
There seems to be a fundamental misunderstanding here about the difference between using a hardware description language to simulate a design versus using that same description to implement a design in actual hardware. It's like drawing a picture of a pinwheel, blowing on that picture, and expecting the pinwheel to start turning.
You can certainly build a 3-D model of that pinwheel, simulate the force of the wind on that model and watch it turn, and then send that model to a 3-D printer to get your pinwheel. I suppose you could put wind sensors in front of your monitor and write a program that converts a value from the sensor into a value used in the simulation. The point is, the simulator has no knowledge that the value came from someone blowing on the monitor; it just sees a parameter value change.
Unless you are designing the keyboard hardware yourself and simulating that, there really is not much point in taking keyboard input from a computer and using that to stimulate your design in simulation. The operating system has already abstracted away the keyboard hardware and provides you with a string of character codes. The reason you are simulating in the first place is to verify the functionality of your design. If you find a problem, you are going to want to replay the exact same stimulus until you fix your problem.
Just like the pinwheel example, I do know it's possible to set up a program that reads keyboard input and provides it as stimulus to a simulation. But that involves inter-process communication (IPC) and specific tool knowledge to set up.
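If you do want keyboard-derived stimulus, a more practical pattern is record-and-replay: capture the keystrokes once with a small script, dump them to a file, and have the testbench read that file back deterministically. A minimal sketch under that assumption (the file name and one-code-per-line hex format are placeholders):

```python
# Record-once, replay-forever: capture a line of keyboard input and
# dump each character's ASCII code as hex, one word per line, so a
# Verilog testbench can load it with $readmemh.
def record_stimulus(path="keys.hex"):
    line = input("type stimulus, then press Enter: ")
    with open(path, "w") as f:
        for ch in line:
            f.write(f"{ord(ch):02x}\n")
    print(f"wrote {len(line)} character codes to {path}")

if __name__ == "__main__":
    record_stimulus()
```

On the simulation side, the testbench would load the file into a memory with $readmemh("keys.hex", stim) and drive one code per clock cycle, which keeps the stimulus repeatable without any live IPC.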

How to get power estimation using XPower

I have been working on a class project using Verilog. I had to create a circuit and then calculate the power the circuit uses. I have been trying to do this with XPower Analyzer: I followed the instructions to create the VCD file, then compiled and synthesized the code using Xilinx ISE 14.7. Everything goes well until the results show up: I get 0 power consumption from the clock. I tried constraining the clock, which only increased the dynamic power from 0 to 0.009, but no luck with the clock. I also tried XPower on my personal computer and in my university computer lab, so I don't think it is a software bug.
Moreover, I have tried different designs, such as a simple ALU, a register, etc. Nonetheless, I still get the same power result.
More information:
The testbench runs well and does what I want.
I declare the clock like: module toptrafficlight(clock, rst, output);
I have constrained the clock to 20 ns.
Timing phase = 0 after synthesis (not sure what this means).
Warnings:
HDLCompiler:413 - Line 86: Result of 5-bit expression is truncated to fit in 4-bit target.
PhysDesignRules:372 - Gated clock. Clock net main_gated_clk is sourced by a combinatorial pin. This is not good design practice. Use the CE pin to control the loading of data into the flip-flop.
Power result from XPower Analyzer (screenshot)
My questions are:
Is there a way to set up the clock? I think this might be the cause of the problem.
Is there anything else that needs to be done besides generating the VCD file and synthesizing the code?
Any other ideas, examples, or tutorials?
The screenshot shows that the design is very small, so it's not a big surprise for the clock power to be smaller than 1 mW. Xilinx also provides an Excel sheet for power estimation; it can be used for a quick tryout to see what circumstances make the clock power significant.
Xilinx Power Estimator (XPE)
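As a rough sanity check, the standard dynamic-power formula P = αCV²f shows how little power a small clock tree draws. A minimal sketch where every number is an illustrative assumption, not a value taken from the question:

```python
# Back-of-the-envelope dynamic power, P = alpha * C * V^2 * f, to show
# why a tiny design's clock tree can sit well under 1 mW.
C_clk = 5e-12   # ~5 pF of total clock-net capacitance (assumed)
V     = 1.2     # core voltage in volts (assumed)
f     = 50e6    # 50 MHz clock, i.e. the 20 ns period constraint
alpha = 1.0     # the clock net toggles every cycle

P = alpha * C_clk * V**2 * f
print(f"clock dynamic power ~ {P * 1e3:.2f} mW")  # ~0.36 mW
```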

How does the Ableton Drum-To-MIDI function work?

I can't seem to find any information regarding the process Ableton uses to efficiently detect atonal percussion and convert it into MIDI. I assume feature extraction and onset detection algorithms are executed, but I'm intrigued as to which algorithms. I am particularly interested in how its efficiency is maintained for a beatboxed input.
Cheers
Your guesses are as good as everyone else's, although they look plausible. The reality is that the way this feature is implemented in Ableton is a trade secret and likely to remain that way.
If I'm not mistaken, Ableton licenses technology from https://www.zplane.de/ for these things.
I don't know exactly how the software assigns the different drum sounds, but the Live manual chapter Convert Drums to New MIDI Track says that it can only detect kick, snare, and hi-hat. An important point is that the sounds are identified by the transient markers; for a good result you should manually check and adjust them. The transient markers look like the warp markers, but are grey.
Compared to a kick and a snare, for example, a beatboxed input is likely to have less difference between the individual sounds and is therefore likely to be harder for Ableton to separate into its component sounds (depending on the beatboxer). In any case, some combination of frequency and amplitude, more specifically the envelope (attack, decay, sustain, release), as well as perhaps the different overtone combinations that account for differences in timbre, are the characteristics that would have to be evaluated in order to separate the kick, snare, and hi-hat.
Before this feature existed, I used gates and high-/low-pass filters to accomplish a similar task, so perhaps Ableton's solution is not as complicated as we might imagine.
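As a rough illustration of that gates-plus-filters idea, here is a sketch that band-limits the signal, smooths an envelope, and gates it per band. The band edges, threshold, and hold time are assumptions, not Ableton's values:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def band_onsets(x, sr, lo, hi, thresh=0.2, hold=0.05):
    """Return sample indices where the band's envelope crosses thresh,
    with a retrigger guard of `hold` seconds."""
    sos = butter(4, [lo, hi], btype="bandpass", fs=sr, output="sos")
    env = np.abs(sosfilt(sos, x))                 # rectified band signal
    win = max(1, int(0.01 * sr))                  # ~10 ms smoothing window
    env = np.convolve(env, np.ones(win) / win, mode="same")
    env /= env.max() + 1e-12                      # normalize to [0, 1]
    onsets, last = [], -np.inf
    for i in np.flatnonzero(env > thresh):
        if i - last > hold * sr:                  # gate: ignore retriggers
            onsets.append(int(i))
            last = i
    return onsets

# Assumed band edges: kick in the lows, snare in the mids, hi-hat up top.
# kicks  = band_onsets(x, sr, 40, 120)
# snares = band_onsets(x, sr, 150, 400)
# hats   = band_onsets(x, sr, 5000, 12000)
```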

Onset to Beat Detection?

How do you determine which onsets are beats? I am using spectral flux for note onset detection and a running mean for peak-picking/thresholding.
I am only working with guitar, so there is no percussion present to help with this. Any ideas?
Thanks!
EDIT: Wow...just realized this question is 3 years old...sorry to resurrect an old post.
My Master's thesis was on beat detection, and the main advantage of my method over other published beat detection methods was resolution, in both the time domain and the frequency (beat) domain. You can find my thesis here. What it basically boils down to (after a lot of filtering) is a comb-filter convolution. My code is an adaptation of this project, which contains Matlab files so you can see how it works.
My code (both the C++ version and the Matlab port) is not publicly available due to possible copyright issues with my university, but if you email me at dberm22[at]gmail[dot]com, I'd be more than willing to ahem::discuss my work with you.
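For readers who want the flavor of the comb-filter idea without reading the thesis, here is a generic sketch that scores candidate tempos by averaging an onset-strength envelope at beat-spaced frames and keeps the best tempo and phase. This is a textbook illustration, not the thesis code; the frame rate and BPM range are assumptions:

```python
import numpy as np

def comb_tempo(onset_env, fps=100, bpm_range=(60, 180)):
    """Score each candidate BPM by summing the onset envelope at
    beat-spaced frames; return the best (bpm, phase) pair."""
    onset_env = np.asarray(onset_env, dtype=float)
    n = len(onset_env)
    best = (None, None, -np.inf)              # (bpm, phase, score)
    for bpm in range(bpm_range[0], bpm_range[1] + 1):
        period = 60.0 * fps / bpm             # frames per beat
        for phase in range(int(period)):
            idx = np.arange(phase, n, period).astype(int)
            score = onset_env[idx].mean()      # comb aligned to this grid
            if score > best[2]:
                best = (bpm, phase, score)
    return best[0], best[1]
```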
Try using a beat tracking algorithm. Beat tracking is a distinct problem from onset detection.
I think there's a good algorithm in the Queen Mary plugin set for Sonic Visualiser. The plugins are open source, so you can have a look at the code to figure out how they work.
Or do a search on Google Scholar for "beat tracking". There are a number of effective approaches. Dan Ellis' is a good one to start with: it's intuitive, and there's code available in Matlab and Java.
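If you work in Python, librosa's default beat tracker follows Ellis' dynamic-programming formulation, so it is a quick way to try the approach before digging into the paper (the audio file name below is a placeholder):

```python
import librosa

# Load audio, estimate tempo, and track beat positions.
y, sr = librosa.load("guitar.wav")
tempo, beat_frames = librosa.beat.beat_track(y=y, sr=sr)
beat_times = librosa.frames_to_time(beat_frames, sr=sr)
print("estimated tempo (BPM):", tempo)
print("first beats (s):", beat_times[:8])
```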

Note Onset Detection Dynamic Thresholding

So I am working on note onset detection. I have implemented the method described here: Note onset detection
However, I am having trouble with the 'static' nature of that method: what I am looking for is a way to make the thresholding 'dynamic', but I am struggling to find suitable solutions.
Aside from that, instead of using the amplitude value itself as the basis for passing the threshold, I am working on using the 'difference' between two successive amplitude values, to know whether the signal increased or decreased and by how much. This is what I'm using currently.
Is anyone willing to help, or has anyone worked with this kind of problem? Thank you!
Additionally, does anyone by any chance have a PDF of this paper: http://www.mendeley.com/research/methods-detecting-impulsive-noise-speech-audio-signals-14/
Volume compression is a form of AGC (automatic gain control), and AGC can be done dynamically. There are plenty of close-to-real-time AGC algorithms to be found in search results, although a bit of delay is required if you want an AGC attack that's smoother than a step function.
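Concretely, a common way to make the threshold 'dynamic' is to compare the detection function against a running median of its own neighborhood, scaled and offset, which behaves like a slow AGC on the threshold. A minimal sketch where delta, lam, and the window size are tuning assumptions; the detection function could be your frame-to-frame difference:

```python
import numpy as np
from scipy.ndimage import median_filter

def pick_onsets(odf, delta=0.05, lam=1.0, w=16):
    """Return frame indices that are local maxima of the onset
    detection function and exceed a running median-based threshold."""
    odf = np.asarray(odf, dtype=float)
    # Adaptive threshold: offset plus scaled moving median.
    thresh = delta + lam * median_filter(odf, size=2 * w + 1)
    peaks = []
    for n in range(1, len(odf) - 1):
        if odf[n] > thresh[n] and odf[n - 1] <= odf[n] > odf[n + 1]:
            peaks.append(n)
    return peaks
```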
