I followed the Audio I/O tutorial to build a real-time audio recording app. However, the callback is called too infrequently to record the whole sound. For example, when I record for 5 seconds, the callback only delivers about a 2-second clip (instead of 5).
Here is some dummy test code that only logs how many bytes have been recorded:
int _audio_read_total;
static void _audio_io_stream_read_cb(audio_in_h handle, size_t nbytes, void *userdata)
{
const void *buffer = NULL;
if (nbytes > 0) {
int error_code = audio_in_peek(handle, &buffer, &nbytes);
_audio_read_total += nbytes;
dlog_print(DLOG_DEBUG, LOG_TAG, "nbytes = %zu, _audio_read_total = %d", nbytes, _audio_read_total); // %zu since nbytes is a size_t
error_code = audio_in_drop(handle); // remove audio data from internal buffer
}
}
static void start_audio_recording(appdata_s *ad)
{
int error_code = audio_in_create(48000, AUDIO_CHANNEL_MONO, AUDIO_SAMPLE_TYPE_S16_LE, &ad->input);
error_code = audio_in_set_stream_cb(ad->input, _audio_io_stream_read_cb, ad);
error_code = audio_in_prepare(ad->input);
}
Attached images include the result of running this code on a Galaxy Gear S3. As you can see in the image, the recording callback is called from time 32.3 to time 37.7 (over 5 seconds), but only 224250 bytes are received, when it should be (37.7 - 32.3) * 48000 * sizeof(short) = 518400. That means only about 43% of the audio is recorded.
Could anyone give me some suggestions to solve this issue?
Yu-Chih
You could try to use:
int audio_in_set_interrupted_cb(audio_in_h input, audio_io_interrupted_cb callback, void *user_data)
This registers a callback function to be invoked when the audio input handle is interrupted or the interrupt is completed. I will explore further and try to reproduce this.
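As a minimal sketch (untested; callback signature taken from the Tizen Audio I/O documentation), registering it alongside the stream callback would look something like this:
static void _audio_io_interrupted_cb(audio_io_interrupted_code_e code, void *userdata)
{
    // Log why the capture session was interrupted (or that the interrupt completed).
    dlog_print(DLOG_DEBUG, LOG_TAG, "audio-in interrupted, code = %d", code);
}

// In start_audio_recording(), after audio_in_set_stream_cb():
//     audio_in_set_interrupted_cb(ad->input, _audio_io_interrupted_cb, ad);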
I am trying to write random noise to a device and let my loop sleep once I have written enough data. My understanding is that each call to snd_pcm_writei writes 162 bytes (81 frames), which at an 8 kHz rate and 16-bit format should be enough audio for ~10 ms. I have verified that ALSA does tell me I have written 81 frames.
I would expect that I can then sleep for a short amount of time before waking up and pushing the next 10 ms worth of data. However, when I sleep for any amount - even a single millisecond - I start to get buffer underrun errors.
Obviously I have made an incorrect assumption somewhere. Can anyone point me to what I may be missing? I have removed most error checking to shorten the code - but there are no errors initializing the ALSA system on my end. I would like to be able to push 10 ms of audio and sleep (even for 1 ms) before pushing the next 10 ms.
#include <alsa/asoundlib.h>
#include <spdlog/spdlog.h>
#include <unistd.h> // for usleep()
int main(int argc, char **argv) {
snd_pcm_t* handle;
snd_pcm_hw_params_t* hw;
unsigned int rate = 8000;
unsigned long periodSize = rate / 100; //period every 10 ms
int err = snd_pcm_open(&handle, "default", SND_PCM_STREAM_PLAYBACK, 0);
snd_pcm_hw_params_malloc(&hw);
snd_pcm_hw_params_any(handle, hw);
snd_pcm_hw_params_set_access(handle, hw, SND_PCM_ACCESS_RW_INTERLEAVED);
snd_pcm_hw_params_set_format(handle, hw, SND_PCM_FORMAT_S16_LE);
snd_pcm_hw_params_set_rate(handle, hw, rate, 0);
snd_pcm_hw_params_set_channels(handle, hw, 1);
int dir = 1;
snd_pcm_hw_params_set_period_size_near(handle, hw, &periodSize, &dir);
snd_pcm_hw_params(handle, hw);
snd_pcm_uframes_t frames;
snd_pcm_hw_params_get_period_size(hw, &frames, &dir);
int size = frames * 2; // two bytes a sample
char* buffer = (char*)malloc(size);
unsigned int periodTime;
snd_pcm_hw_params_get_period_time(hw,&periodTime, &dir);
snd_pcm_hw_params_free(hw);
snd_pcm_prepare(handle);
char* randomNoise = new char[size];
for(int i = 0; i < size; i++)
randomNoise[i] = random() % 0xFF;
while(true) {
err = snd_pcm_writei(handle, randomNoise, size/2);
if(err > 0) {
spdlog::info("Write {} frames", err);
} else {
spdlog::error("Error write {}\n", snd_strerror(err));
snd_pcm_recover(handle, err, 0);
continue;
}
usleep(1000); // <---- This is what causes the buffer underrun
}
}
Try putting the following in /etc/pulse/daemon.conf:
default-fragments = 5
default-fragment-size-msec = 2
and restart Linux.
What I don't understand is why you write a buffer of size "size" to the device while your approximate timing calculations rely on the "periodSize" you declared. Write a buffer of "periodSize" frames to the device instead.
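As a rough sketch of that suggestion (untested), pass the period size, in frames, to the write call so the sleep arithmetic matches what was actually queued:
// snd_pcm_writei() takes a frame count, not a byte count; one period
// of periodSize frames corresponds to the ~10 ms you are timing against.
err = snd_pcm_writei(handle, randomNoise, periodSize);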
I am making a simple game whose audio speed should increase as the player approaches the end of the level. So now I was wondering if there is a way to do this using SDL_Mixer. If SDL_Mixer is not the way to go, could you please tell me how I could modify the audio file itself to make it faster? I am working with an 8-bit .wav file with 2 channels at a sample rate of 22050 Hz.
According to this forum post: https://forums.libsdl.org/viewtopic.php?p=44663, you can use a different library called "SoLoud" to change the playback speed of your sounds on the fly. You can get more details on SoLoud here: http://sol.gfxile.net/soloud/. From what I can tell, you cannot do this using SDL2 alone, and SoLoud seems easy enough to use, so that would be my suggestion.
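As a minimal sketch of that approach (untested; API names taken from the SoLoud documentation linked above):
#include "soloud.h"
#include "soloud_wav.h"

int main()
{
    SoLoud::Soloud soloud; // engine core
    SoLoud::Wav sample;    // one sound sample

    soloud.init();
    sample.load("sound.wav"); // e.g. your 8-bit, 2-channel, 22050 Hz file
    SoLoud::handle h = soloud.play(sample);
    soloud.setRelativePlaySpeed(h, 1.5f); // 50% faster; adjust as the player advances
    // ... run the game loop, updating the speed each frame as needed ...
    soloud.deinit();
    return 0;
}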
A few years back I was trying to achieve something very similar and, after a lot of web searching, I came up with this solution, which uses the Mix_RegisterEffect function and got close:
#include <SDL2/SDL.h>
#include <SDL2/SDL_mixer.h>
#include <iostream>
#include <cstdlib>
#include <cmath>
/* global vars */
Uint16 audioFormat; // current audio format constant
int audioFrequency, // frequency rate of the current audio format
audioChannelCount, // number of channels of the current audio format
audioAllocatedMixChannelsCount; // number of mix channels allocated
static inline Uint16 formatSampleSize(Uint16 format)
{
return (format & 0xFF) / 8;
}
// Get chunk time length (in ms) given its size and current audio format
static int computeChunkLengthMillisec(int chunkSize)
{
/* bytes / samplesize == sample points */
const Uint32 points = chunkSize / formatSampleSize(audioFormat);
/* sample points / channels == sample frames */
const Uint32 frames = (points / audioChannelCount);
/* (sample frames * 1000) / frequency == play length, in ms */
return ((frames * 1000) / audioFrequency);
}
// Custom handler object to control which part of the Mix_Chunk's audio data will be played, with which pitch-related modifications.
// This needed to be a template because the actual Mix_Chunk's data format may vary (AUDIO_U8, AUDIO_S16, etc) and the data type varies with it (Uint8, Sint16, etc)
// The AudioFormatType should be the data type that is compatible with the current SDL_mixer-initialized audio format.
template<typename AudioFormatType>
struct PlaybackSpeedEffectHandler
{
const AudioFormatType* const chunkData; // pointer to the chunk sample data (as array)
const float& speedFactor; // the playback speed factor
int position; // current position of the sound, in ms
const int duration; // the duration of the sound, in ms
const int chunkSize; // the size of the sound, as a number of indexes (or sample points); think of this as an array size when using the proper array type (instead of just Uint8*).
const bool loop; // flags whether playback should stay looping
const bool attemptSelfHalting; // flags whether playback should be halted by this callback when playback is finished
bool altered; // true if this playback has been pitched by this handler
PlaybackSpeedEffectHandler(const Mix_Chunk& chunk, const float& speed, bool loop, bool trySelfHalt)
: chunkData(reinterpret_cast<AudioFormatType*>(chunk.abuf)), speedFactor(speed),
position(0), duration(computeChunkLengthMillisec(chunk.alen)),
chunkSize(chunk.alen / formatSampleSize(audioFormat)),
loop(loop), attemptSelfHalting(trySelfHalt), altered(false)
{}
// processing function to be able to change chunk speed/pitch.
void modifyStreamPlaybackSpeed(int mixChannel, void* stream, int length)
{
AudioFormatType* buffer = static_cast<AudioFormatType*>(stream);
const int bufferSize = length / sizeof(AudioFormatType); // buffer size (as array)
const int bufferDuration = computeChunkLengthMillisec(length); // buffer time duration
const float speedFactor = this->speedFactor; // take a "snapshot" of speed factor
// if there is still sound to be played
if(position < duration || loop)
{
// if playback is unaltered and pitch is required (for the first time)
if(!altered && speedFactor != 1.0f)
altered = true; // flags playback modification and proceed to the pitch routine.
if(altered) // if unaltered, this pitch routine is skipped
{
const float delta = 1000.0/audioFrequency, // normal duration of each sample
vdelta = delta*speedFactor; // virtual stretched duration, scaled by 'speedFactor'
for(int i = 0; i < bufferSize; i += audioChannelCount)
{
const int j = i/audioChannelCount; // j goes from 0 to size/channelCount, incremented 1 by 1
const float x = position + j*vdelta; // get "virtual" index. its corresponding value will be interpolated.
const int k = floor(x / delta); // get left index to interpolate from original chunk data (right index will be this plus 1)
const float proportion = (x / delta) - k; // get the proportion of the right value (left will be 1.0 minus this)
// usually just 2 channels: 0 (left) and 1 (right), but who knows...
for(int c = 0; c < audioChannelCount; c++)
{
// check if k will be within bounds
if(k*audioChannelCount + audioChannelCount - 1 < chunkSize || loop)
{
AudioFormatType leftValue = chunkData[( k * audioChannelCount + c) % chunkSize],
rightValue = chunkData[((k+1) * audioChannelCount + c) % chunkSize];
// put interpolated value on 'data' (linear interpolation)
buffer[i + c] = (1-proportion)*leftValue + proportion*rightValue;
}
else // if k will be out of bounds (chunk bounds), it means we already finished; thus, we'll pass silence
{
buffer[i + c] = 0;
}
}
}
}
// update position
position += bufferDuration * speedFactor; // this is not exact, since a frame may play less than its duration when finished playing, but it's simpler
// reset position if looping
if(loop) while(position > duration)
position -= duration;
}
else // if we already played the whole sound but finished earlier than expected by SDL_mixer (due to faster playback speed)
{
// set silence on the buffer since Mix_HaltChannel() poops out some of it for a few ms.
for(int i = 0; i < bufferSize; i++)
buffer[i] = 0;
if(attemptSelfHalting)
Mix_HaltChannel(mixChannel); // XXX unsafe call, since it locks audio; but no safer solution was found yet...
}
}
// Mix_EffectFunc_t callback that redirects to handler method (handler passed via userData)
static void mixEffectFuncCallback(int channel, void* stream, int length, void* userData)
{
static_cast<PlaybackSpeedEffectHandler*>(userData)->modifyStreamPlaybackSpeed(channel, stream, length);
}
// Mix_EffectDone_t callback that deletes the handler at the end of the effect usage (handler passed via userData)
static void mixEffectDoneCallback(int, void *userData)
{
delete static_cast<PlaybackSpeedEffectHandler*>(userData);
}
// function to register a handler to this channel for the next playback.
static void registerEffect(int channel, const Mix_Chunk& chunk, const float& speed, bool loop, bool trySelfHalt)
{
Mix_RegisterEffect(channel, mixEffectFuncCallback, mixEffectDoneCallback, new PlaybackSpeedEffectHandler(chunk, speed, loop, trySelfHalt));
}
};
// Register playback speed effect handler according to the current audio format; effect valid for a single playback; if playback is looped, lasts until it's halted
void setupPlaybackSpeedEffect(const Mix_Chunk* const chunk, const float& speed, int channel, bool loop=false, bool trySelfHalt=false)
{
// select the register function for the current audio format and register the effect using the compatible handlers
// XXX is it correct to behave the same way to all S16 and U16 formats? Should we create case statements for AUDIO_S16SYS, AUDIO_S16LSB, AUDIO_S16MSB, etc, individually?
switch(audioFormat)
{
case AUDIO_U8: PlaybackSpeedEffectHandler<Uint8 >::registerEffect(channel, *chunk, speed, loop, trySelfHalt); break;
case AUDIO_S8: PlaybackSpeedEffectHandler<Sint8 >::registerEffect(channel, *chunk, speed, loop, trySelfHalt); break;
case AUDIO_U16: PlaybackSpeedEffectHandler<Uint16>::registerEffect(channel, *chunk, speed, loop, trySelfHalt); break;
default:
case AUDIO_S16: PlaybackSpeedEffectHandler<Sint16>::registerEffect(channel, *chunk, speed, loop, trySelfHalt); break;
case AUDIO_S32: PlaybackSpeedEffectHandler<Sint32>::registerEffect(channel, *chunk, speed, loop, trySelfHalt); break;
case AUDIO_F32: PlaybackSpeedEffectHandler<float >::registerEffect(channel, *chunk, speed, loop, trySelfHalt); break;
}
}
// example
// run the executable passing a filename of a sound file that SDL_mixer is able to open (ogg, wav, ...)
int main(int argc, char** argv)
{
if(argc < 2) { std::cout << "missing argument" << std::endl; return 0; }
SDL_Init(SDL_INIT_AUDIO);
Mix_OpenAudio(MIX_DEFAULT_FREQUENCY, MIX_DEFAULT_FORMAT, MIX_DEFAULT_CHANNELS, 4096);
Mix_QuerySpec(&audioFrequency, &audioFormat, &audioChannelCount); // query specs
audioAllocatedMixChannelsCount = Mix_AllocateChannels(MIX_CHANNELS);
float speed = 1.0;
Mix_Chunk* chunk = Mix_LoadWAV(argv[1]);
if(chunk != NULL)
{
const int channel = Mix_PlayChannelTimed(-1, chunk, -1, 8000);
setupPlaybackSpeedEffect(chunk, speed, channel, true);
// loop for 8 seconds, changing the pitch dynamically
while(SDL_GetTicks() < 8000)
speed = 1 + 0.25*sin(0.001*SDL_GetTicks());
}
else
std::cout << "no data" << std::endl;
Mix_FreeChunk(chunk);
Mix_CloseAudio();
Mix_Quit();
SDL_Quit();
return EXIT_SUCCESS;
}
While this works, it's not a perfect solution, since the result has some artifacts (crackling) in most cases, the cause of which I was never able to figure out.
There's also a GitHub gist I created for this a while ago.
I'm trying to use I2S and the internal DAC to play WAV files from SPIFFS on a Heltec WiFi LoRa 32 V2, using the Arduino IDE.
I have an audio amp and an oscilloscope hooked up to DAC2 (pin 25) of the board, and I'm not getting any signal. I've simplified the problem by generating a sine wave (as in the ESP-IDF examples). Here's the code:
#include <Streaming.h>
#include <driver/i2s.h>
#include "freertos/queue.h"
#define SAMPLE_RATE (22050)
#define SAMPLE_SIZE 4000
#define PI (3.14159265)
#define I2S_BCK_IO (GPIO_NUM_26)
#define I2S_WS_IO (GPIO_NUM_25)
#define I2S_DO_IO (GPIO_NUM_22)
#define I2S_DI_IO (-1)
size_t i2s_bytes_write = 0;
static const int i2s_num = 0;
int sample_data[SAMPLE_SIZE];
i2s_config_t i2s_config = {
.mode = (i2s_mode_t)(I2S_MODE_MASTER | I2S_MODE_TX | I2S_MODE_DAC_BUILT_IN), // Only TX
.sample_rate = SAMPLE_RATE,
.bits_per_sample = I2S_BITS_PER_SAMPLE_16BIT,
.channel_format = I2S_CHANNEL_FMT_RIGHT_LEFT, //2-channels
.communication_format = (i2s_comm_format_t)I2S_COMM_FORMAT_I2S,
.intr_alloc_flags = 0,//ESP_INTR_FLAG_LEVEL1
.dma_buf_count = 8,
.dma_buf_len = 64,
.use_apll = false //Interrupt level 1
};
i2s_pin_config_t pin_config = {
.bck_io_num = I2S_BCK_IO,
.ws_io_num = I2S_WS_IO,
.data_out_num = I2S_DO_IO,
.data_in_num = I2S_DI_IO //Not used
};
static void setup_sine_wave()
{
unsigned int i;
int sample_val;
double sin_float;
size_t i2s_bytes_write = 0;
for (i = 0; i < SAMPLE_SIZE; i++)
{
sin_float = sin(i * PI / 180.0);
sin_float *= 127;
sample_val = (uint8_t)sin_float;
sample_data[i] = sample_val;
Serial << sample_data[i] << ",";
delay(1);
}
Serial << endl << "Sine wave generation complete" << endl;
}
void setup() {
pinMode(26, OUTPUT);
Serial.begin(115200);
i2s_driver_install(I2S_NUM_0, &i2s_config, 0, NULL);
//i2s_set_pin(I2S_NUM_0, NULL);
i2s_set_pin(I2S_NUM_0, &pin_config);
i2s_set_dac_mode(I2S_DAC_CHANNEL_RIGHT_EN);
i2s_set_sample_rates(I2S_NUM_0, 22050); //set sample rates
setup_sine_wave();
i2s_set_clk(I2S_NUM_0, SAMPLE_RATE, I2S_BITS_PER_SAMPLE_16BIT, I2S_CHANNEL_MONO);
i2s_write(I2S_NUM_0, &sample_data, SAMPLE_SIZE, &i2s_bytes_write, 500);
i2s_driver_uninstall(I2S_NUM_0); //stop & destroy i2s driver
}
void loop()
{
i2s_driver_install(I2S_NUM_0, &i2s_config, 0, NULL);
i2s_write(I2S_NUM_0, &sample_data, SAMPLE_SIZE, &i2s_bytes_write, 500);
delay(100);
i2s_driver_uninstall(I2S_NUM_0);
delay(10);
}
The code uploads and runs OK, but I still get no signal on pin 25. I also looked at pin 26 (DAC1), but that seems to be used by LoRa_IRQ. Can anyone help me out?
First of all, take a look at how you've set up your pins:
#define I2S_BCK_IO (GPIO_NUM_26)
#define I2S_WS_IO (GPIO_NUM_25)
#define I2S_DO_IO (GPIO_NUM_22)
#define I2S_DI_IO (-1)
According to this specification, pin 26 will output a clock signal, pin 25 will output the line selector (left or right), and pin 22 will output the serial data corresponding to the audio you're sending to your DAC.
.sample_rate = SAMPLE_RATE,
.bits_per_sample = I2S_BITS_PER_SAMPLE_16BIT,
Now you've set your sample rate to 22050 Hz and your bit depth to 16 bits. So pin 25 (word select) should carry a periodic signal at the 22.05 kHz frame rate, pin 26 (bit clock) a much faster one (22050 Hz x 16 bits x 2 channels, roughly 706 kHz), and pin 22 the serial data.
Now to your problem. First, the ESP32 has two internal 8-bit DACs (on GPIO 25 and 26), and they output an analog signal with 8-bit depth. So in reality, with the internal DAC in use, pin 25 should be outputting an analog signal, not serial data on pin 22. Let's take a look at your code:
i2s_set_pin(I2S_NUM_0, &pin_config);
The I2S specification defines a 3-line bus for audio communication. Since you're using the internal DAC, you don't need these three lines, and setting their pins makes the driver assume you want to use them (meaning the DAC pin won't be activated).
//i2s_set_pin(I2S_NUM_0, NULL);
Uncomment this and the driver will assume you want to use the internal DAC.
.bits_per_sample = I2S_BITS_PER_SAMPLE_16BIT,
Again, since the internal DAC only takes 8 bits per sample, the driver only uses the 8 most significant bits. You can set this to 8 bits and avoid any problems.
void setup() {
pinMode(26, OUTPUT);
Serial.begin(115200);
i2s_driver_install(I2S_NUM_0, &i2s_config, 0, NULL);
//i2s_set_pin(I2S_NUM_0, NULL);
i2s_set_pin(I2S_NUM_0, &pin_config);
i2s_set_dac_mode(I2S_DAC_CHANNEL_RIGHT_EN);
i2s_set_sample_rates(I2S_NUM_0, 22050); //set sample rates
setup_sine_wave();
i2s_set_clk(I2S_NUM_0, SAMPLE_RATE, I2S_BITS_PER_SAMPLE_16BIT, I2S_CHANNEL_MONO);
i2s_write(I2S_NUM_0, &sample_data, SAMPLE_SIZE, &i2s_bytes_write, 500);
i2s_driver_uninstall(I2S_NUM_0); //stop & destroy i2s driver
}
In your setup function you're installing the I2S driver, then setting the pins for I2S communication with an external DAC, setting the sample rate to 22050, then setting it to 22050 again, writing one cycle of your sine wave, and finally uninstalling the driver. After you uninstall the driver, it's useless to try to output anything. Here's a more appropriate approach:
void setup() {
Serial.begin(115200);
i2s_driver_install(I2S_NUM_0, &i2s_config, 0, NULL);
i2s_set_pin(I2S_NUM_0, NULL);
i2s_set_dac_mode(I2S_DAC_CHANNEL_BOTH_EN); // You also might be sending data to the wrong channel, so use both.
setup_sine_wave();
}
Now the loop function:
void loop()
{
i2s_driver_install(I2S_NUM_0, &i2s_config, 0, NULL);
i2s_write(I2S_NUM_0, &sample_data, SAMPLE_SIZE, &i2s_bytes_write, 500);
delay(100);
i2s_driver_uninstall(I2S_NUM_0);
delay(10);
}
You don't need to install and uninstall the I2S driver, nor delay inside the loop, since the write function writes to a buffer that is consumed at the specified sample rate.
void loop()
{
i2s_write(I2S_NUM_0, &sample_data, SAMPLE_SIZE, &i2s_bytes_write, 500);
}
This is all you need, theoretically. But there's a major problem. You defined your audio buffer as an int array (a list of 32-bit signed values), and you defined the size of this buffer as 4000:
#define SAMPLE_SIZE 4000
In your write function, the buffer size parameter expects the size of your buffer in bytes, kind of like when you use the malloc function. Since each sample in your buffer is 4 bytes, you're only giving 1/4 of your buffer to the write function. In the end, you're not outputting a full sine wave, but 1/4 of a sine wave.
void loop()
{
i2s_write(I2S_NUM_0, &sample_data, SAMPLE_SIZE * sizeof(int), &i2s_bytes_write, 500);
}
That should do the trick.
I haven't tested this code, but hopefully my explanation gives you some direction for debugging yours.
The I2S driver documentation has some code samples that you can check as well.
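Putting the suggested changes together, an untested sketch of the whole program flow might look like this:
void setup() {
  Serial.begin(115200);
  i2s_driver_install(I2S_NUM_0, &i2s_config, 0, NULL);
  i2s_set_pin(I2S_NUM_0, NULL);                  // NULL selects the internal DAC
  i2s_set_dac_mode(I2S_DAC_CHANNEL_BOTH_EN);     // enable both DAC channels
  i2s_set_sample_rates(I2S_NUM_0, SAMPLE_RATE);
  setup_sine_wave();
}

void loop() {
  // Size is in bytes (SAMPLE_SIZE ints of 4 bytes each); i2s_write blocks
  // until the DMA buffers accept the data, so no extra delay is needed.
  i2s_write(I2S_NUM_0, &sample_data, SAMPLE_SIZE * sizeof(int), &i2s_bytes_write, 500);
}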
I am using an ESP8266 and the ModbusMaster.h library to communicate with an RS485-enabled power meter. Communication works fine, but the responses are confusing me and I cannot get correct values. My power meter shows 1.49 kWh, but the response from Modbus is 16318. Here is my code:
#include <ArduinoOTA.h>
#include <BlynkSimpleEsp8266.h>
#include <SimpleTimer.h>
#include <ModbusMaster.h>
#include <ESP8266WiFi.h>
/*
Debug. Change to 0 when you are finished debugging.
*/
const int debug = 1;
#define ARRAY_SIZE(A) (sizeof(A) / sizeof((A)[0]))
int timerTask1, timerTask2, timerTask3;
float battBhargeCurrent, bvoltage, ctemp, btemp, bremaining, lpower, lcurrent, pvvoltage, pvcurrent, pvpower;
float stats_today_pv_volt_min, stats_today_pv_volt_max;
uint8_t result;
// this is to check if we can write since rs485 is half duplex
bool rs485DataReceived = true;
float data[100];
ModbusMaster node;
SimpleTimer timer;
// tracer requires no handshaking
void preTransmission() {}
void postTransmission() {}
// a list of the registries to query in order
typedef void (*RegistryList[])();
RegistryList Registries = {
AddressRegistry_0001 // consumption only
};
// keep log of where we are
uint8_t currentRegistryNumber = 0;
// function to switch to next registry
void nextRegistryNumber() {
currentRegistryNumber = (currentRegistryNumber + 1) % ARRAY_SIZE( Registries);
}
void setup()
{
// Serial.begin(115200);
Serial.begin(9600, SERIAL_8E1); //, SERIAL_8E1
// Modbus slave ID 1
node.begin(1, Serial);
node.preTransmission(preTransmission);
node.postTransmission(postTransmission);
// WiFi.mode(WIFI_STA);
while (Blynk.connect() == false) {}
ArduinoOTA.setHostname(OTA_HOSTNAME);
ArduinoOTA.begin();
timerTask1 = timer.setInterval(9000, updateBlynk);
timerTask2 = timer.setInterval(9000, doRegistryNumber);
timerTask3 = timer.setInterval(9000, nextRegistryNumber);
}
// --------------------------------------------------------------------------------
void doRegistryNumber() {
Registries[currentRegistryNumber]();
}
void AddressRegistry_0001() {
uint8_t j;
uint16_t dataval[2];
result = node.readHoldingRegisters(0x00, 2);
if (result == node.ku8MBSuccess)
{
for (j = 0; j < 2; j++) // set to 0,1 for two datablocks
{
dataval[j] = node.getResponseBuffer(j);
}
terminal.println("---------- Show power---------");
terminal.println("kWh: ");
terminal.println(dataval[0]);
terminal.println("crc: ");
terminal.println(dataval[1]);
terminal.println("-----------------------");
terminal.flush();
node.clearResponseBuffer();
node.clearTransmitBuffer();
} else {
rs485DataReceived = false;
}
}
void loop()
{
Blynk.run();
// ArduinoOTA.handle();
timer.run();
}
I have tried similar thing but with Raspberry Pi and USB-RS485 and it works.
Sample of NodeJS code is below. It looks similar to Arduino code.
// create an empty modbus client
var ModbusRTU = require("modbus-serial");
var client = new ModbusRTU();
// open connection to a serial port
client.connectRTUBuffered("/dev/ttyUSB0", { baudRate: 9600, parity: 'even' }, read);
function write() {
client.setID(1);
// write the values 0, 0xffff to registers starting at address 5
// on device number 1.
client.writeRegisters(5, [0 , 0xffff])
.then(read);
}
function read() {
// read the 2 registers starting at address 5
// on device number 1.
console.log("Ocitavanje registra 0000: ");
client.readHoldingRegisters(0000, 12)
.then(function(d) {
var floatA = d.buffer.readFloatBE(0);
// var floatB = d.buffer.readFloatBE(4);
// var floatC = d.buffer.readFloatBE(8);
// console.log("Receive:", floatA, floatB, floatC); })
console.log("Potrosnja u kWh: ", floatA); })
.catch(function(e) {
console.log(e.message); })
.then(close);
}
function close() {
client.close();
}
This code displays 1.493748298302 in the console.
How can I implement var floatA = d.buffer.readFloatBE(0); in Arduino? It looks like readFloatBE(0) does the trick, but it's available only in Node.js/JavaScript.
Here is part of the datasheet for my device:
Here is what I am getting as a result from the original software that came with the device:
If someone could point me in a better direction, I would be thankful.
UPDATE:
I found the ShortBus Modbus Scanner software and tested the readings.
The library reads the result as an unsigned integer, but it needs to be interpreted as floating point with the word order swapped, as shown in the image below.
Can someone tell me how to set up the proper conversion, please?
Right, so indeed the issue is with the part done by var floatA = d.buffer.readFloatBE(0);. Modbus returns an array of bytes, and the client has to interpret those bytes, ideally via the library you're using; but if that's not available on Arduino, you may try doing it manually with byte-decoding functions, with the following considerations:
Modbus registers are 16 bits in length, so length 1 = 16 bits and length 2 = 32 bits; hence the data type noted in the docs as float32 means "2 registers used for this value, interpret as float".
Therefore, with client.readHoldingRegisters(0000, 12) you're asking to read the register at address 0 with size 12... this makes no sense; you only need 2 registers.
Also, in your sample Node code you first write 2 registers at address 5 with client.writeRegisters(5, [0, 0xffff]), i.e. register 5 = 0 and register 6 = 0xFFFF. Why? Then you go and read from address 0, in read(), which is the address for Total kWh per your docs.
So, you should get an array of bytes, and you need to decode them as a float. Modbus is big-endian for words and bytes, so you need to use that in the decoding functions. I don't know exactly what is available in Arduino, but hopefully you can figure it out with this extra info.
I suppose that if you just send the buffer to print, you'll get an integer interpretation of the value, hence the problem.
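If it helps, here's a hedged Arduino-side sketch of that decoding (untested; the swapped word order matches what your Modbus scanner reported, so verify it against your meter):
#include <string.h> // memcpy

// Combine the two 16-bit holding registers returned by ModbusMaster into an
// IEEE-754 float. "Word order swapped" means the low word arrives first; if
// the values come out wrong, swap reg0 and reg1 below.
float registersToFloat(uint16_t reg0, uint16_t reg1) {
    uint32_t raw = ((uint32_t)reg1 << 16) | reg0;
    float value;
    memcpy(&value, &raw, sizeof(value)); // reinterpret the raw bits as a float
    return value;
}

// Usage inside AddressRegistry_0001():
//     float kwh = registersToFloat(node.getResponseBuffer(0), node.getResponseBuffer(1));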
I have managed to send audio from a microphone using the code found here.
However, I have not been able to do this using NAudio.
The code from CodeProject has explicit code to encode and decode such as:
G711.Encode_aLaw
G711.Decode_uLaw
to translate and return bytes to send across the network.
Is it possible to get some sample code for NAudio for the CodeProject application above?
Here's a quick C# console app that I wrote using NAudio, with microphone input, speaker output, and u-Law or A-Law encoding. The NAudio.Codecs namespace contains A-Law and u-Law encoders and decoders.
This program does not send data across the network (it's not hard to do, I just didn't feel like doing it here). I'll leave that to you. Instead, it contains a "Sender" thread and a "Receiver" thread.
The microphone DataAvailable event handler just drops the byte buffer into a queue (it makes a copy of the buffer - you don't want to hold on to the actual buffer from the event). The "Sender" thread grabs the queued buffers, converts the PCM data to g.711 and drops it into a second queue. This "drops into a second queue" part is where you'd send to a remote UDP destination for your particular app.
The "Receiver" thread reads the data from the second queue, converts it back to PCM, and feeds it to a BufferedWaveProvider that's being used by the WaveOut (speaker) device. You would replace this input with a UDP socket receive for your networked application.
Note that the program guarantees that the PCM input and output (microphone and speaker) are using the same WaveFormat. That's something that you'd also have to do for networked endpoints.
Anyway, it works. So here's the code. I won't go into too much detail. There are lots of comments to try to help understand what's going on:
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading;
using NAudio.Wave;
using NAudio.Codecs;
namespace G711MicStream
{
class Program
{
delegate byte EncoderMethod( short _raw );
delegate short DecoderMethod( byte _encoded );
// Change these to their ALaw equivalent if you want.
static EncoderMethod Encoder = MuLawEncoder.LinearToMuLawSample;
static DecoderMethod Decoder = MuLawDecoder.MuLawToLinearSample;
static void Main(string[] args)
{
// Fire off our Sender thread.
Thread sender = new Thread(new ThreadStart(Sender));
sender.Start();
// And receiver...
Thread receiver = new Thread(new ThreadStart(Receiver));
receiver.Start();
// We're going to try for 16-bit PCM, 16 kHz sampling, 1 channel.
// This should align nicely with u-law.
CommonFormat = new WaveFormat(16000, 16, 1);
// Prep the input.
IWaveIn wavein = new WaveInEvent();
wavein.WaveFormat = CommonFormat;
wavein.DataAvailable += new EventHandler<WaveInEventArgs>(wavein_DataAvailable);
wavein.StartRecording();
// Prep the output. The Provider gets the same formatting.
WaveOut waveout = new WaveOut();
OutProvider = new BufferedWaveProvider(CommonFormat);
waveout.Init(OutProvider);
waveout.Play();
// Now we can just run until the user hits the <X> button.
Console.WriteLine("Running g.711 audio test. Hit <X> to quit.");
for( ; ; )
{
Thread.Sleep(100);
if( !Console.KeyAvailable ) continue;
ConsoleKeyInfo info = Console.ReadKey(false);
if( (info.Modifiers & ConsoleModifiers.Alt) != 0 ) continue;
if( (info.Modifiers & ConsoleModifiers.Control) != 0 ) continue;
// Quit looping on non-Alt, non-Ctrl X
if( info.Key == ConsoleKey.X ) break;
}
Console.WriteLine("Stopping...");
// Shut down the mic and kick the thread semaphore (without putting
// anything in the queue). This will (eventually) stop the thread
// (which also signals the receiver thread to stop).
wavein.StopRecording();
try{ wavein.Dispose(); } catch(Exception){}
SenderKick.Release();
// Wait for both threads to exit.
sender.Join();
receiver.Join();
// And close down the output.
waveout.Stop();
try{ waveout.Dispose(); } catch(Exception) {}
// Sleep a little. This seems to be accepted practice when shutting
// down these audio components.
Thread.Sleep(500);
}
/// <summary>
/// Grabs the mic data and just queues it up for the Sender.
/// </summary>
/// <param name="sender"></param>
/// <param name="e"></param>
static void wavein_DataAvailable(object sender, WaveInEventArgs e)
{
// Create a local copy buffer.
byte [] buffer = new byte [e.BytesRecorded];
System.Buffer.BlockCopy(e.Buffer, 0, buffer, 0, e.BytesRecorded);
// Drop it into the queue. We'll need to lock for this.
Lock.WaitOne();
SenderQueue.AddLast(buffer);
Lock.ReleaseMutex();
// and kick the thread.
SenderKick.Release();
}
static
void
Sender()
{
// Holds the data from the DataAvailable event.
byte [] qbuffer = null;
for( ; ; )
{
// Wait for a 'kick'...
SenderKick.WaitOne();
// Lock...
Lock.WaitOne();
bool dataavailable = ( SenderQueue.Count != 0 );
if( dataavailable )
{
qbuffer = SenderQueue.First.Value;
SenderQueue.RemoveFirst();
}
Lock.ReleaseMutex();
// If the queue was empty on a kick, then that's our signal to
// exit.
if( !dataavailable ) break;
// Convert each 16-bit PCM sample to its 1-byte u-law equivalent.
int numsamples = qbuffer.Length / sizeof(short);
byte [] g711buff = new byte [numsamples];
// I like unsafe for this kind of stuff!
unsafe
{
fixed( byte * inbytes = &qbuffer[0] )
fixed( byte * outbytes = &g711buff[0] )
{
// Recast input buffer to short[]
short * buff = (short *)inbytes;
// And loop over the samples. Since both input and
// output are 16-bit, we can use the same index.
for( int index = 0; index < numsamples; ++index )
{
outbytes[index] = Encoder(buff[index]);
}
}
}
// This gets passed off to the receiver. We'll queue it for now.
Lock.WaitOne();
ReceiverQueue.AddLast(g711buff);
Lock.ReleaseMutex();
ReceiverKick.Release();
}
// Log it. We'll also kick the receiver (with no queue addition)
// to force it to exit.
Console.WriteLine("Sender: Exiting.");
ReceiverKick.Release();
}
static
void
Receiver()
{
byte [] qbuffer = null;
for( ; ; )
{
// Wait for a 'kick'...
ReceiverKick.WaitOne();
// Lock...
Lock.WaitOne();
bool dataavailable = ( ReceiverQueue.Count != 0 );
if( dataavailable )
{
qbuffer = ReceiverQueue.First.Value;
ReceiverQueue.RemoveFirst();
}
Lock.ReleaseMutex();
// Exit on kick with no data.
if( !dataavailable ) break;
// As above, but we convert in reverse, from 1-byte u-law
// samples to 2-byte PCM samples.
int numsamples = qbuffer.Length;
byte [] outbuff = new byte [qbuffer.Length * 2];
unsafe
{
fixed( byte * inbytes = &qbuffer[0] )
fixed( byte * outbytes = &outbuff[0] )
{
// Recast the output to short[]
short * outpcm = (short *)outbytes;
// And loop over the u-law samples.
for( int index = 0; index < numsamples; ++index )
{
outpcm[index] = Decoder(inbytes[index]);
}
}
}
// And write the output buffer to the Provider buffer for the
// WaveOut devices.
OutProvider.AddSamples(outbuff, 0, outbuff.Length);
}
Console.Write("Receiver: Exiting.");
}
/// <summary>Lock for the sender queue.</summary>
static Mutex Lock = new Mutex();
static WaveFormat CommonFormat;
/// <summary>"Kick" semaphore for the sender queue.</summary>
static Semaphore SenderKick = new Semaphore(0, int.MaxValue);
/// <summary>Queue of byte buffers from the DataAvailable event.</summary>
static LinkedList<byte []> SenderQueue = new LinkedList<byte[]>();
static Semaphore ReceiverKick = new Semaphore(0, int.MaxValue);
static LinkedList<byte []> ReceiverQueue = new LinkedList<byte[]>();
/// <summary>WaveProvider for the output.</summary>
static BufferedWaveProvider OutProvider;
}
}