Can the RISC-V simulators estimate the energy consumption of a Rocket chip?
For instance, is there a way to produce traces that can be fed to McPAT?
To estimate Rocket Chip's energy consumption, we use Chisel's Verilog backend to generate RTL, which we feed into CAD tools for gate-level simulation.
The simulators provided by Berkeley (QEMU, Rocket Chip, Spike) do not currently support interfacing with McPAT, but adding that would be a great community contribution for those without access to CAD tools or those who want to simulate at a higher speed.
Some speech-to-text services, like Google Speech-to-Text, offer speaker differentiation via diarization which attempts to identify and separate multiple speakers on a single audio recording. This is often needed when multiple speakers are in a meeting room sharing a single microphone.
Is there an algorithm and implementation to calculate the correctness of speaker separation?
This would be used in conjunction with Word Error Rate which is often used to test correctness of baseline transcription.
The commonly used approach for this appears to be the Diarization Error Rate (DER), defined by NIST for its Rich Transcription (NIST-RT) evaluations.
A newer evaluation metric is the Jaccard Error Rate (JER) introduced in DIHARD II: The Second DIHARD Speech Diarization Challenge.
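For reference, DER is typically computed from durations: the total time of missed speech, false-alarm speech, and speaker confusion, divided by the total scored speech time (the notation below is informal, not tied to any particular toolkit):

    DER = (T_missed + T_false_alarm + T_confusion) / T_total_scored_speech

JER is defined analogously but is computed per speaker using the Jaccard index of the reference and system speaking times, then averaged across speakers.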
Two projects for measuring these include:
https://github.com/nryant/dscore
https://github.com/wq2012/SimpleDER
DER is referenced in these papers:
A Comparison of Neural Network Feature Transforms for Speaker Diarization
The ICSI RT-09 Speaker Diarization System
On my system, using my USB microphone, I've found that the audio level that works best with CMU Sphinx is about 20% of the maximum. This gives me 75% voice recognition accuracy. If I amplify this digitally I get far worse recognition accuracy (25%). Why is this? What is the recommended audio level for Sphinx? [Also I am using 16,000 samples/sec, 16-bit.]
The pocketsphinx decoder uses channel amplitude normalization. The initial normalization value is indeed configured for about a 20% audio level inside the model (the -cmninit parameter in feat.params). However, the level is updated as you decode, so it only affects the first utterance. If you decode properly in continuous mode, the level should not matter. Do not restart the recognizer for every utterance; let it adapt to the noise and audio level.
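For example, the acoustic model's feat.params might contain a line like the one below (the value shown is purely illustrative - check your own model, and note that depending on the front-end version -cmninit takes either a single number or a short comma-separated vector):

    -cmninit 40,3,-1

You can also pass -cmninit on the command line to override the model's default, but with continuous decoding the running estimate quickly takes over anyway.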
I am using a PIC16F877A to drive a solid state relay connected to a 300 W starter motor (R = 50 milliohms, L = 50 mH).
I tried varying the frequency and duty cycle to reduce the inrush current. It worked: the current dropped to almost half.
I know that the average voltage for a PWM signal is V x duty cycle. But I am not driving the motor directly, only through a relay. Can anyone tell me a formula for calculating the current into the motor, for validation?
Regards,
cj
I think you would need a datasheet of the motor with its electrical and mechanical characteristics to determine the current. But that would still be a theoretical value. In the real world you will have the wires, contacts and so on, which add additional resistance and will "help" to limit the starting current. But don't choose the wires too small, and use a fuse for safety reasons. This should help you to choose the right wires: American Wire Gauge
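As a rough back-of-the-envelope estimate of my own (not from a datasheet): while the rotor is stalled there is no back-EMF, so the motor looks like an R-L load and the current rises as i(t) = (V/R)(1 - e^(-tR/L)). With R = 50 milliohms and L = 50 mH the time constant L/R is about 1 s, and the steady stall current is V/R - roughly 240 A if we assume a 12 V supply. If the switch runs fast enough that the current never falls to zero between pulses, PWM at duty cycle D gives an average stall current of roughly D x V/R, which fits your observation that halving the duty cycle roughly halved the current. Once the motor spins up, the back-EMF subtracts from the applied voltage and the current drops accordingly.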
If it's a DC motor, there is a better and quite simple solution.
Because of mechanical wear and the limited switching frequency, you should better not use a relay. The better solution would be a field effect transistor (FET) fitting your application, switching at a PWM frequency of about 20 kHz so that it does not produce any annoying humming or whining sounds in the motor. Depending on the transistor you will need a driver circuit for the FET to operate well, dropping just a small amount of power (passive cooling might still be needed).
For a smooth start of the motor with a minimum of peak current you should apply a linear duty cycle sweep from 0 to 100%. The optimum duration of the sweep depends on the motor and the mechanical load. Discover your optimum by trying different sweep durations.
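A minimal soft-start sketch along those lines, assuming a PIC16F877A with its CCP1 PWM module, a 20 MHz crystal, and the XC8 compiler (the pin, PWM frequency, and one-second sweep are illustrative choices, not requirements):

    #include <xc.h>

    #define _XTAL_FREQ 20000000UL              /* assumed 20 MHz crystal */

    void pwm_init(void)
    {
        TRISCbits.TRISC2 = 0;                  /* CCP1 output pin (RC2) as output */
        PR2 = 249;                             /* ~20 kHz PWM period at 20 MHz, Timer2 prescale 1:1 */
        CCP1CON = 0x0C;                        /* CCP1 in PWM mode */
        CCPR1L = 0;                            /* start at 0% duty */
        T2CON = 0x04;                          /* Timer2 on, prescale 1:1 */
    }

    void pwm_set_duty(unsigned int d)          /* d = 0..1000 for 0..100% with PR2 = 249 */
    {
        CCPR1L = d >> 2;                                   /* upper 8 bits of the 10-bit duty */
        CCP1CON = (CCP1CON & 0xCF) | ((d & 0x03) << 4);    /* lower 2 bits of the duty */
    }

    void soft_start(void)
    {
        unsigned int d;
        /* linear duty sweep 0 -> 100% over roughly 1 s (1000 steps x 1 ms) */
        for (d = 0; d <= 1000; d++) {
            pwm_set_duty(d);
            __delay_ms(1);
        }
    }

The same sweep works whether the switch is the solid state relay or the FET suggested above, as long as the switch can actually follow the PWM frequency.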
This is as simple and specific as I can make it, so please try to help me out.
By this, I mean I want to:
1) Input an audio track (analog)
2) Convert it to a digital output using the microcontroller's ADC
3) Have the microcontroller's/board's timer sample the data at selected intervals
4) Tell the board to take the "sampled audio track" and now sample it at a rate of 2B (B meaning the highest frequency)
F = frequency
F (Hz = 1/s), e.g. 100 Hz = 100 cycles/sec
Sampling period T = 1/(2B)
Example problem: highest frequency = 1000 Hz, so T = 1/(2 x 1000 Hz) = 1/2000 s = 5x10^-4 s per sample, i.e. a sampling period of 0.5 ms (2000 samples/sec)
5) Spit it back at the board's ADC and convert it back to analog, so that the output is a perfect reconstruction of the initial audio track.
Using Fourier analysis, I will determine the highest frequency in the track, and hence the rate at which to sample it.
In theory it sounds easy enough and straightforward, but what I need is to program this in C and use my MSP430 chip/Experimenter's Board to sample the track.
I'm going to be using Texas Instruments CCS and Octave for my programming and debugging. This is the board that I will be using.
Questions:
Is C the right language for this? Can I get any examples of how to sample the track at the Nyquist frequency using C? What code in C will tell the board to use the ADC component? And any recommended information that is similar or that will help me on this project.
I don't fully understand what you want to do, but I'll answer your specific questions.
Yes, C is the right language for this.
You should probably look at application code on the Texas Instruments website to see how to interact with the ADC. You can start with the example code listed at the bottom of the page you linked to. It has C code that shows how to use the ADC.
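As an illustration only (a sketch, not TI's example code), a single polled conversion on a device with the ADC10 module might look roughly like this; parts with ADC12 instead, such as the FG4618 on the larger Experimenter's Board, use different register names (ADC12CTL0 and so on):

    #include <msp430.h>

    /* Read one 10-bit sample from analog input A0 (polled, no interrupts). */
    unsigned int adc_read(void)
    {
        ADC10CTL0 &= ~ENC;                          /* configuration changes require ENC = 0 */
        ADC10CTL0 = SREF_0 | ADC10SHT_2 | ADC10ON;  /* Vcc reference, sample-and-hold time, ADC on */
        ADC10CTL1 = INCH_0;                         /* select input channel A0 */
        ADC10CTL0 |= ENC | ADC10SC;                 /* enable and start the conversion */
        while (ADC10CTL1 & ADC10BUSY)               /* wait until the conversion completes */
            ;
        return ADC10MEM;                            /* 10-bit result, 0..1023 */
    }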
Incidentally, an ADC only converts analog to digital. To go digital to analog, you need a DAC, which this board does not appear to have.
Regarding 5): an ADC doesn't do digital-to-analog conversion, because it's an ADC, not a DAC. But you may use PWM with a low-pass filter to output an analog signal.
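A rough sketch of that idea on an MSP430 (Timer_A in up mode with a reset/set output and an external RC low-pass filter on the PWM pin; the clock rate, 8-bit resolution, and pin routing are assumptions):

    #include <msp430.h>

    /* 8-bit "PWM DAC": the duty cycle on the TA1 output tracks the sample value,
       and an external RC low-pass filter recovers the analog level.  With
       SMCLK = 1 MHz this gives only a ~3.9 kHz carrier, so for audio you would
       raise the clock or lower the resolution to push the carrier well above
       the signal band.  The output pin must also be switched to its Timer_A
       function via PxSEL/PxDIR (device dependent). */
    void pwm_dac_init(void)
    {
        TACCR0  = 255;                      /* 8-bit period */
        TACCTL1 = OUTMOD_7;                 /* reset/set PWM on the TA1 output */
        TACCR1  = 0;                        /* start at zero */
        TACTL   = TASSEL_2 | MC_1 | TACLR;  /* SMCLK, up mode */
    }

    void pwm_dac_write(unsigned char sample)
    {
        TACCR1 = sample;                    /* duty cycle follows the 8-bit sample */
    }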
It is often a bad idea to sample a signal at exactly the Nyquist rate (twice the highest frequency). This will cause lots of aliasing at high frequencies. For example, a signal with frequency F - deltaF, where deltaF is small, will look like F amplitude-modulated at 2*deltaF.
That's why the CD sampling rate is 44.1 kSPS, not 30 kSPS (i.e. twice a 15 kHz upper frequency limit).
You have to sample the signal at a frequency that is at least twice as high as the highest frequency in your signal. Otherwise you get aliasing effects (distortion of the original signal). It is not possible to determine the highest frequency in your signal with Fourier analysis, because to perform an FFT you first have to convert your analog signal to digital values - at a conversion frequency (the very thing you want to determine with the FFT).
The highest frequency in your input signal is defined by the analog input filter that the signal must pass through before analog-to-digital conversion.
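To make the timing concrete, here is a sketch of my own (with assumed numbers) of a Timer_A interrupt that fires at a fixed sample rate chosen to be at least twice the input filter's cutoff. SMCLK is assumed to run at 1 MHz, and the interrupt vector name differs between MSP430 families:

    #include <msp430.h>

    #define SMCLK_HZ     1000000UL     /* assumed SMCLK frequency */
    #define SAMPLE_RATE  8000UL        /* >= 2 x highest frequency passed by the input filter */

    void sample_timer_init(void)
    {
        TACCR0  = (SMCLK_HZ / SAMPLE_RATE) - 1;  /* 1 MHz / 8 kHz = 125 counts per sample */
        TACCTL0 = CCIE;                          /* interrupt on compare 0 */
        TACTL   = TASSEL_2 | MC_1 | TACLR;       /* SMCLK, up mode */
    }

    /* Vector name is device dependent: TIMERA0_VECTOR on older parts,
       TIMER0_A0_VECTOR on newer ones. */
    #pragma vector = TIMERA0_VECTOR
    __interrupt void sample_isr(void)
    {
        /* trigger one ADC conversion here, once per sample period */
    }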
I need to implement a wavetable synthesizer in an ARM Cortex-M3 core. I'm looking for any code or tools to help me get started.
I'm aware of this AVR implementation. I actually converted it to a PIC a while back. Now I am looking for something similar, but a little better sounding.
ANSI C code would be great. Any code snippets (C or C++), samples, tools, or just general information would be greatly appreciated.
Thanks.
The Synthesis Toolkit (STK) is excellent, but it is C++ only:
http://ccrma.stanford.edu/software/stk/
You may be able to extract the wavetable synthesizer code from the STK though.
Two open-source wavetable synthesizers are FluidSynth and TiMidity.
Any ARM synth, even the best ones, can be changed into a wavescanner in less than a day. Scanning the wave from files or generating it mathematically is nearly the same thing audio-wise; a wavetable (WT) provides massive banks of waveforms at almost zero processing cost. You need the waves; the WT oscillator code itself is about 20 lines. So change your waveform knob from 3 positions to 100 to indicate which WAV you are reading, and use a ramp/counter to read the WAV files (as arrays).
From 7 years of synth experience, I'd recommend changing the roughly 20 lines of the oscillator function of your favorite synth so that it reads wave arrays. The WT only uses about 20 lines of logic; the rest of the synthesizer is more important: LFOs, filters, input parameters, preset memory... Use your favorite synth, find a WT wave library as WAV files and folders, and replace its oscillators with WT functions. It will sound almost the same, only with lower processing cost.
A synth normally uses sine, square, saw, and anti-aliased oscillator functions for the wave...
A wavetable synth uses about 20 lines of code at its base, plus tens to hundreds of waves, each wave ideally sampled at every octave. If you can get a wavetable sound library, the synth just loops and pitch-shifts the sounds, and pro synths can also keep multiple octave recordings and mix between them.
WT function:
    load WAV files into N arrays
    change waveform = select a waveform array from the WAV list
    read the waveform array at the desired Hz

Wavescanner function:
    crossfade between 2 waves and assign the crossfade to an LFO, e.g. a sine.
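To make the "20 lines" concrete, here is a minimal single-cycle wavetable oscillator sketch in plain C (my own illustration, not taken from any particular synth): a fixed-point phase accumulator steps through a one-period table at the desired pitch, with linear interpolation between neighbouring samples. The table size, sample rate, and how wavetable[] gets filled (from a WAV file or generated mathematically) are assumptions.

    #include <stdint.h>

    #define TABLE_SIZE   2048               /* one period of the waveform, power of two */
    #define SAMPLE_RATE  44100.0f

    static int16_t  wavetable[TABLE_SIZE];  /* fill from a WAV file or compute mathematically */
    static uint32_t phase     = 0;          /* 32-bit phase accumulator (wraps automatically) */
    static uint32_t phase_inc = 0;          /* phase step per output sample */

    void osc_set_freq(float hz)
    {
        /* the accumulator advances by (hz / SAMPLE_RATE) of a full cycle per sample */
        phase_inc = (uint32_t)(hz * (4294967296.0f / SAMPLE_RATE));
    }

    int16_t osc_next_sample(void)
    {
        uint32_t idx  = phase >> 21;               /* top 11 bits index the 2048-entry table */
        uint32_t frac = (phase >> 5) & 0xFFFF;     /* next 16 bits are the interpolation fraction */
        int32_t  a = wavetable[idx];
        int32_t  b = wavetable[(idx + 1) & (TABLE_SIZE - 1)];
        phase += phase_inc;
        return (int16_t)(a + (((b - a) * (int32_t)frac) >> 16));
    }

Switching waveforms is then just swapping which table the oscillator reads, and a wavescanner crossfades the outputs of two such oscillators.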
The envelope, filter, amplitude, all other functions are independent from the wave generation function in all synths.
Remember that the most powerful psychoacoustic tool for synthesizers is deviation from the pure digital tone of the notes; it's called unison detune. The sonic character of synthesizers mostly comes from chorus and unison detune.
WTs are either single periods of waves or, in more advanced synths, longer sections. The single-period stuff is super easy to write into code. The advanced WTs are sampled per octave, with waves lasting N periods or even 2-3 seconds (e.g. piano), which means their sound quality changes through the octaves, so the complex WTs are crossfaded every octave between multiple octave recordings.