RtApiWasapi::getDeviceInfo: Unable to retrieve device mix format - audio

I am trying to learn how to use Processing, so I am attempting to use the sound library. When running either of the first two example programs provided at https://processing.org/tutorials/sound/, the IDE responds with this error:
This application has requested the Runtime to terminate it in an unusual way.
Please contact the application's support team for more information.
terminate called after throwing an instance of 'RtAudioError'
what(): RtApiWasapi::getDeviceInfo: Unable to retrieve device mix format.
Could not run the sketch (Target VM failed to initialize).
For more information, read revisions.txt and Help → Troubleshooting.
Also, whenever I try to run a sketch using this library, along with that error, Windows says:
Java(TM) Platform SE binary has stopped working
Windows is collecting more information about the problem. This might take several minutes...
Could you help me resolve this issue? I am using a Windows Vista computer.
This is the second example code:
/**
 * Processing Sound Library, Example 2
 *
 * This sketch shows how to use envelopes and oscillators.
 * Envelopes describe the course of amplitude over time.
 * The Sound library provides an ASR envelope which stands for
 * attack, sustain, release.
 *
 *     .________
 *    .         ---
 *   .             ---
 *  .                 ---
 *    A      S       R
 */

import processing.sound.*;

// Oscillator and envelope
TriOsc triOsc;
Env env;

// Times and levels for the ASR envelope
float attackTime = 0.001;
float sustainTime = 0.004;
float sustainLevel = 0.2;
float releaseTime = 0.2;

// This is an octave in MIDI notes.
int[] midiSequence = {
  60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71, 72
};

// Set the duration between the notes
int duration = 200;
// Set the note trigger
int trigger = 0;

// An index to count up the notes
int note = 0;

void setup() {
  size(640, 360);
  background(255);

  // Create triangle wave and envelope
  triOsc = new TriOsc(this);
  env = new Env(this);
}

void draw() {
  // Once the computer clock has passed the trigger time and not all
  // notes have been played yet, the next note gets triggered.
  if ((millis() > trigger) && (note < midiSequence.length)) {
    // midiToFreq transforms the MIDI value into a frequency in Hz which we use
    // to control the triangle oscillator with an amplitude of 0.8
    triOsc.play(midiToFreq(midiSequence[note]), 0.8);

    // The envelope gets triggered with the oscillator as input and the times and
    // levels we defined earlier
    env.play(triOsc, attackTime, sustainTime, sustainLevel, releaseTime);

    // Create the new trigger according to predefined durations and speed
    trigger = millis() + duration;

    // Advance by one note in the midiSequence
    note++;

    // Loop the sequence
    if (note == 12) {
      note = 0;
    }
  }
}

// This function calculates the respective frequency of a MIDI note
float midiToFreq(int note) {
  return (pow(2, ((note - 69) / 12.0))) * 440;
}

Related

Mee6 Experience Lvl System Clone In NodeJS (Static Value?)

I've been trying to replicate the well-known Discord bot Mee6's experience algorithm in NodeJS (the original bot is in Python), but in every attempt I'm stumped by static values.
function XPValue(min, max) {
  return Math.floor(Math.random() * (max - min + 1)) + min;
}
XPValue(15, 25); // This randomizes as normal
const curLevel = Math.floor(0.5 * Math.sqrt(XPValue(15, 25))); // When put here it becomes a static number and never changes
I don't understand how to get the XP to randomize as I actually want it to using the XPValue function; even though it's passed through that function, it keeps a static value of 2.
The issue was that the variable, once assigned, wouldn't change its value per execution, whereas using a getter fixed this:
const Experience = {
  get random() {
    var rand = Math.floor(Math.random() * 15);
    return rand + (25 - 15);
  }
};
Credits to Zelak (PlexiDev).

pjsip-2.7.1 Assertion failed when calling pjsua_set_snd_dev with ios CallKit

After I updated pjsip from v2.6 to v2.7.1, my app fails an assertion in the function pjsua_set_snd_dev().
According to pjsip's ticket #1941:
To make outgoing call:
func provider(_ provider: CXProvider, perform action: CXStartCallAction) {
    /* 1. We must not start call audio here, and can only do so
     * once the audio session has been activated by the system
     * after having its priority elevated. So, make sure that the sound
     * device is closed at this point.
     */

    /* 2. Provide your own implementation to configure
     * the audio session here.
     */
    configureAudioSession()

    /* 3. Make call with pjsua_call_make_call().
     * Then use pjsua's on_call_state() callback to report significant
     * events in the call's lifecycle, by calling iOS API
     * CXProvider.reportOutgoingCall(with: startedConnectingAt:) and
     * CXProvider.reportOutgoingCall(with: connectedAt:)
     */

    /* 4. If step (3) above returns PJ_SUCCESS, call action.fulfill(),
     * otherwise call action.fail().
     */
}
To handle incoming call:
func provider(_ provider: CXProvider, perform action: CXAnswerCallAction) {
    /* 1. We must not start call audio here, and can only do so
     * once the audio session has been activated by the system
     * after having its priority elevated. So, make sure that the sound
     * device is closed at this point.
     */

    /* 2. Provide your own implementation to configure
     * the audio session here.
     */
    configureAudioSession()

    /* 3. Answer the call with pjsua_call_answer().
     */

    /* 4. If step (3) above returns PJ_SUCCESS, call action.fulfill(),
     * otherwise call action.fail().
     */
}
To start sound device:
func provider(_ provider: CXProvider, didActivate audioSession: AVAudioSession) {
    /* Start call audio media, now that the audio session has been
     * activated after having its priority boosted.
     *
     * Call pjsua API pjsua_set_snd_dev() here.
     */
}
When calling the pjsua API pjsua_set_snd_dev(), this is shown:
Assertion failed: (param && id!=PJMEDIA_AUD_INVALID_DEV), function pjmedia_aud_dev_default_param, file ../src/pjmedia/audiodev.c, line 487.
I found that in v2.7.1, pjsua_set_no_snd_dev() contains
pjsua_var.cap_dev = PJSUA_SND_NO_DEV;
pjsua_var.play_dev = PJSUA_SND_NO_DEV;
but v2.6 does not have this.
PJSUA_SND_NO_DEV and PJMEDIA_AUD_INVALID_DEV are both equal to -3.
Is this a bug, or am I confusing something?
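For what it's worth, here is a minimal C sketch of the workaround this analysis points to (the helper name is mine, and PJMEDIA_AUD_DEFAULT_CAPTURE_DEV / PJMEDIA_AUD_DEFAULT_PLAYBACK_DEV are assumed to be the right "default device" ids here): from CallKit's didActivate callback, hand pjsua_set_snd_dev() explicit default device ids rather than the PJSUA_SND_NO_DEV (-3) values left behind by pjsua_set_no_snd_dev(), which are what trip the assertion.

/* Hedged sketch: invoke this from CXProvider's didActivate callback. */
#include <pjsua-lib/pjsua.h>

static void start_call_audio(void)
{
    /* Pass pjmedia's default-device ids instead of the PJSUA_SND_NO_DEV (-3)
     * values that pjsua_set_no_snd_dev() stored, so that
     * pjmedia_aud_dev_default_param() never sees PJMEDIA_AUD_INVALID_DEV. */
    pj_status_t status = pjsua_set_snd_dev(PJMEDIA_AUD_DEFAULT_CAPTURE_DEV,
                                           PJMEDIA_AUD_DEFAULT_PLAYBACK_DEV);
    if (status != PJ_SUCCESS) {
        PJ_LOG(1, ("app", "pjsua_set_snd_dev() failed: %d", (int) status));
    }
}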
Steve, have you added "App provides Voice over IP services" to the required background modes in Capabilities?
Or you can add it directly in Info.plist.

Decode streaming audio with gstreamer 1.0 and access the waveform data?

The gst version in use is 1.8.1.
Currently I have code that receives a gstreamer encoded stream and plays it through my soundcard. I want to modify it to instead give my application access to the raw uncompressed audio data. This should result in an array of integer sound samples; if I were to plot them I would see the audio waveform (e.g. a perfect tone would be a nice sine wave), and if I were to append the most recent array to the previous one received by the callback I wouldn't see any discontinuity.
This is the current playback code:
https://github.com/lucasw/audio_common/blob/master/audio_play/src/audio_play.cpp
I think I need to change the alsasink to an appsink and set up a callback that will get the latest chunk of audio after it has passed through the decoder. This is adapted from https://github.com/jojva/gst-plugins-base/blob/master/tests/examples/app/appsink-src.c :
_sink = gst_element_factory_make("appsink", "sink");
g_object_set (G_OBJECT (_sink), "emit-signals", TRUE,
              "sync", FALSE, NULL);
g_signal_connect (_sink, "new-sample",
                  G_CALLBACK (on_new_sample_from_sink), this);
And then there is the callback:
static GstFlowReturn
on_new_sample_from_sink (GstElement * elt, gpointer data)
{
  RosGstProcess *client = reinterpret_cast<RosGstProcess*>(data);
  GstSample *sample;
  GstBuffer *app_buffer, *buffer;
  GstElement *source;

  /* get the sample from appsink */
  sample = gst_app_sink_pull_sample (GST_APP_SINK (elt));
  buffer = gst_sample_get_buffer (sample);

  /* make a copy */
  app_buffer = gst_buffer_copy (buffer);

  /* we don't need the appsink sample anymore */
  gst_sample_unref (sample);

  /* get source and push new buffer */
  source = gst_bin_get_by_name (GST_BIN (client->_sink), "app_source");
  return gst_app_src_push_buffer (GST_APP_SRC (source), app_buffer);
}
Can I get at the data in that callback? What am I supposed to do with the GstFlowReturn? If that is for passing data to another pipeline element, I don't want to do that; I'd rather get the data there and be done.
https://github.com/lucasw/audio_common/blob/appsink/audio_process/src/audio_process.cpp
Is the gpointer data passed to that callback exactly what I want (cast to a gint16 array?), or otherwise how do I convert and access it?
The GstFlowReturn is merely a return value for the underlying base classes. If you were to return an error there, the pipeline would probably stop because, well, there was a critical error.
The cb_need_data events are triggered by your appsrc element. This can be used as a throttling mechanism if needed. Since you probably use the appsrc in a pure push mode (as soon as something arrives at the appsink you push it to the appsrc) you can ignore these. You also explicitly disable these events on the appsrc element. (Or do you still use that one?)
The data format in the buffer depends on the caps that the decoder and appsink agreed on. That is usually the decoder's preferred format. You may have some control over this format depending on the decoder, or you can convert it to your preferred format. It may be worthwhile to check the format; Float32 is not that uncommon.
I kind of forgot what your actual question was, I'm afraid...
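To build on the point about checking the negotiated format, here is a small sketch (the helper name is mine; it just pulls one sample the same way the callback does) showing how to read the format, rate and channel count from the caps attached to a pulled sample before interpreting the raw bytes. If a single fixed format is easier to handle, the appsink's caps can instead be pinned; see the sketch at the end of this thread.

/* Sketch: read the negotiated audio caps from a sample pulled off the appsink. */
#include <gst/gst.h>
#include <gst/app/gstappsink.h>

static void
inspect_one_sample (GstAppSink * appsink)
{
  GstSample *sample = gst_app_sink_pull_sample (appsink);
  if (sample == NULL)
    return;  /* appsink is at EOS or flushing */

  GstCaps *caps = gst_sample_get_caps (sample);
  GstStructure *s = gst_caps_get_structure (caps, 0);

  const gchar *format = gst_structure_get_string (s, "format");  /* e.g. "S16LE" or "F32LE" */
  gint rate = 0, channels = 0;
  gst_structure_get_int (s, "rate", &rate);
  gst_structure_get_int (s, "channels", &channels);
  g_print ("format=%s rate=%d channels=%d\n", format, rate, channels);

  /* ...map the buffer here and interpret map.data according to 'format'... */
  gst_sample_unref (sample);
}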
I can interpret the data out of the modified callback below (there is a script that plots it to the screen); it looks like it is signed 16-bit samples in the uint8 array.
I'm not clear about the proper return value for the callback; there is a cb_need_data callback set up elsewhere in the code that gets triggered all the time with this code.
static void // GstFlowReturn
on_new_sample_from_sink (GstElement * elt, gpointer data)
{
  RosGstProcess *client = reinterpret_cast<RosGstProcess*>(data);
  GstSample *sample;
  GstBuffer *buffer;
  GstElement *source;

  /* get the sample from appsink */
  sample = gst_app_sink_pull_sample (GST_APP_SINK (elt));
  buffer = gst_sample_get_buffer (sample);

  GstMapInfo map;
  if (gst_buffer_map (buffer, &map, GST_MAP_READ))
  {
    audio_common_msgs::AudioData msg;
    msg.data.resize(map.size);
    // TODO(lucasw) copy this more efficiently
    for (size_t i = 0; i < map.size; ++i)
    {
      msg.data[i] = map.data[i];
    }
    gst_buffer_unmap (buffer, &map);
    client->_pub.publish(msg);
  }

  /* release the pulled sample so it is not leaked */
  gst_sample_unref (sample);
}
https://github.com/lucasw/audio_common/tree/appsink
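As mentioned above, the negotiated format can also be pinned instead of inspected. A hedged sketch (the function name and the 44100 Hz / stereo choice are assumptions, and it presumes an audioconvert/audioresample pair sits between the decoder and the sink) that restricts the appsink to interleaved signed 16-bit samples:

/* Sketch: force the appsink caps so map.data is always interleaved gint16. */
#include <gst/gst.h>
#include <gst/app/gstappsink.h>

static void
pin_appsink_format (GstElement * sink)
{
  GstCaps *caps = gst_caps_new_simple ("audio/x-raw",
      "format", G_TYPE_STRING, "S16LE",
      "rate", G_TYPE_INT, 44100,
      "channels", G_TYPE_INT, 2,
      NULL);
  gst_app_sink_set_caps (GST_APP_SINK (sink), caps);
  gst_caps_unref (caps);
}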

Recording a WAV file using SuperCollider

I wrote the following code to define a SynthDef that records a sound into the buffer passed as one of the parameters.
(
SynthDef(\recordTone, { |freq, bufnum, duration|
    var w = SinOsc.ar(freq) * XLine.ar(101, 1, duration, add: -1) / 100;
    RecordBuf.ar(w ! 2, bufnum, loop: 0, doneAction: 2);
}).add;
)
I also have the code below, which invokes a Synth for the above SynthDef and tries to write the buffer to a file.
({
    var recordfn = { |freq, duration, fileName|
        var server = Server.local;
        var buf = Buffer.alloc(server, server.sampleRate * duration, 2);
        Synth(\recordTone, [\freq, 440, \bufnum, buf.bufnum, \duration, duration]);
        buf.write(
            "/Users/minerva/Temp/snd/" ++ fileName ++ ".wav",
            "WAVE",
            "int16",
            completionMessage: ["b_free", buf.bufnum]
        );
    };
    recordfn.value(440, 0.5, "test");
}.value)
The output file is being created, but does not contain any audible sound. What am I doing wrong? I've looked through all the SuperCollider documentation I could find, but nothing seems to work! Any pointers are greatly appreciated.
Based on Dan S's answer, I made a few changes to get this working:
(
SynthDef(\playTone, { |freq, duration|
    var w = SinOsc.ar(freq) * XLine.ar(1001, 1, duration, add: -1, doneAction: 2) / 1000;
    Out.ar(0, w ! 2);
}).add;
)
(
SynthDef(\recordTone, { |buffer|
    RecordBuf.ar(In.ar(0, 2), buffer, loop: 0, doneAction: 2);
}).add;
)
(Routine({
    var recordfn = { |freq, duration|
        var server = Server.local;
        var buffer = Buffer.alloc(server, server.sampleRate * duration, 2);
        server.sync;
        server.makeBundle(func: {
            var player = Synth(\playTone, [\freq, freq, \duration, duration]);
            var recorder = Synth.after(player, \recordTone, [\buffer, buffer]);
        });
        duration.wait;
        buffer.write(
            "/Users/minerva/Temp/snd/test.wav",
            "WAVE",
            "int16",
            completionMessage: ["/b_free", buffer]
        );
    };
    recordfn.value(440, 0.1);
}).next)
Your main problem is that in your recordfn function you're instantiating the SynthDef (i.e. STARTING it recording) and writing the Buffer to disk at the same time. Obviously, at the point you START recording, there's no sound in the Buffer, so SuperCollider is doing exactly as you ask and writing the empty silent Buffer out as a file.
Solutions:
The most basic solution is to invoke one function to start recording, and a separate function when it's time to write to disk.
OR if you want it all in one, consider launching a Task within your function in order to wait until the Buffer is ready to be written to disk.
OR instead of RecordBuf use DiskOut which is for directly "spooling" to disk.
A secondary thing: I can't remember right now but I think it might be "WAV" not "WAVE".

RPC communication between Linux and Solaris

I have an RPC server running on Solaris, and an RPC client which runs fine on Solaris.
When I compile and run the same client code on Ubuntu, I get "Error decoding arguments" in the server.
Solaris uses SunRPC (ONC RPC). I'm not sure how to find the version of RPC.
Is there any difference between the RPC implementations available on Linux and Solaris?
Could there be a mismatch between the XDR code generated on Solaris and on Linux?
How should I track down the issue?
Note: Code cannot be posted
@twalberg, @cppcoder, have you resolved the problem? I have the same problem, but I can post my code if it will be helpful. Part of the code is:
/* now allocate a LoopListRequestStruct and fill it with request data */
llrs = malloc(sizeof(LoopListRequestStruct));
fill_llrs(llrs);

/* Now, make the client request to the bossServer */
client_call_status = clnt_call(request_client, ModifyDhctState,
                               (xdrproc_t)xdr_LoopListRequestStruct,
                               (caddr_t)llrs,
                               (xdrproc_t)xdr_void,
                               0,
                               dummy_timeval);
void fill_llrs(LoopListRequestStruct* llrs)
{
    Descriptor_Loop* dl = 0;
    DhctState_d *dhct_state_ptr = 0;
    PackageAuthorization_d *pkg_auth_ptr = 0;

    llrs->TRANS_NUM = 999999; /* strictly arbitrary, use whatever you want */
                              /* the bossServer simply passes this back in */
                              /* in the response you use it to match */
                              /* request/response if you want or you can */
                              /* choose to ignore it if you want */

    /* now set the response program number, this is the program number of */
    /* transient program that was set up using the svc_reg_utils.[ch] */
    /* it is that program that the response will be sent to */
    llrs->responseProgramNum = response_program_number;

    /* now allocate some memory for the data structures that will actually */
    /* carry the request data */
    llrs->ARG_PTR = malloc(sizeof(LoopListRequestArgs));
    dl = llrs->ARG_PTR->loopList.Loop_List_val;

    /* we are using a single descriptor loop at a time, this should always */
    /* be the case */
    llrs->ARG_PTR->loopList.Loop_List_len = 1;
    llrs->ARG_PTR->loopList.Loop_List_val = malloc(sizeof(Descriptor_Loop));

    /* now allocate memory and set the size for the ModifyDhctConfiguration */
    /* this transaction always has 3 descriptors, the DhctMacAddr_d, the */
    /* DhctState_d, and the PackageAuthorization_d */
    dl = llrs->ARG_PTR->loopList.Loop_List_val;
    dl->Descriptor_Loop_len = 2;
    dl->Descriptor_Loop_val =
        malloc((2 * sizeof(Resource_descriptor_union)));

    /* now, populate each descriptor */
    /* the order doesn't really matter I'm just doing it in the order I */
    /* always have done */

    /* first the mac address descriptor */
    dl->Descriptor_Loop_val->type =
        dhct_mac_addr_type;
    strcpy(
        dl->Descriptor_Loop_val[0].Resource_descriptor_union_u.dhctMacAddr.dhctMacAddr,
        dhct_mac_addr
    );

    /* second the dhct state descriptor */
    dl->Descriptor_Loop_val[1].type =
        dhct_state_type;
    dhct_state_ptr =
        &(dl->Descriptor_Loop_val[1].Resource_descriptor_union_u.dhctState);

    if(dis_enable)
        dhct_state_ptr->disEnableFlag = DIS_Enabled;
    else
        dhct_state_ptr->disEnableFlag = DIS_Disabled;

    if(dms_enable)
        dhct_state_ptr->dmsEnableFlag = DMS_Enabled;
    else
        dhct_state_ptr->dmsEnableFlag = DMS_Disabled;

    if(analog_enable)
        dhct_state_ptr->analogEnableFlag = AEF_Enabled;
    else
        dhct_state_ptr->analogEnableFlag = AEF_Disabled;

    if(ippv_enable)
        dhct_state_ptr->ippvEnableFlag = IEF_Enabled;
    else
        dhct_state_ptr->ippvEnableFlag = IEF_Disabled;

    dhct_state_ptr->creditLimit = credit_limit;
    dhct_state_ptr->maxIppvEvents = max_ippv_events;

    /* we don't currently use the powerkey pin, instead we use an */
    /* application layer pin for purchases and blocking so always turn */
    /* pinEnable off */
    dhct_state_ptr->pinEnable = PE_Disabled;
    dhct_state_ptr->pin = 0;

    if(fast_refresh_enable)
        dhct_state_ptr->fastRefreshFlag = FRF_Enabled;
    else
        dhct_state_ptr->fastRefreshFlag = FRF_Disabled;

    dhct_state_ptr->locationX = location_x;
    dhct_state_ptr->locationY = location_y;
}
I've met exactly this error during integration with the same software. The Linux version really does create a bad request. The reason for this behaviour is the serialization of a null C string: the glibc edition of Sun RPC can't encode them, and xdr_string returns zero. The sample you are dealing with sets 'pin' to 0. Just replace 'pin' with "", or create a wrapper over xdr_string(), and the samples will work.
My patch to the PowerKey samples looks like this:
< if (!xdr_string(xdrs, objp, PIN_SZ))
< return (FALSE);
< return (TRUE);
---
> char *t = "";
> return xdr_string(xdrs, *objp? objp : &t , PIN_SZ);
but it can be made simpler, of course. In general you should fix the usage of the generated code; in my case it was the 'pin' variable in the sample sources provided by the software authors, which must be initialized before the xdr_string() call.
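For reference, a wrapper along those lines might look like the sketch below (the function name is mine, and it only special-cases the encode direction; the generated routine that serializes 'pin' would call it in place of xdr_string()):

/* Sketch: encode a possibly-NULL char* as "" so glibc's xdr_string() accepts it. */
#include <rpc/rpc.h>

bool_t
xdr_string_or_empty (XDR *xdrs, char **objp, u_int maxsize)
{
    static char empty[] = "";
    static char *empty_ptr = empty;

    if (xdrs->x_op == XDR_ENCODE && *objp == NULL)
        return xdr_string (xdrs, &empty_ptr, maxsize);
    return xdr_string (xdrs, objp, maxsize);
}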
Note that XDR will handle endianness, but if you use app-specific opaque fields, decoding will break if you don't handle endianness yourself. Make sure integers are sent as XDR integers.
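To illustrate that last point, a minimal sketch (the function name and buffer handling are mine): serializing an int through the XDR primitives writes it in big-endian XDR form, so the peer's xdr_int() restores host order on any platform, whereas copying host-order bytes into an opaque field does not.

/* Sketch: portable integer encoding with xdrmem_create() + xdr_int(). */
#include <rpc/rpc.h>

int
encode_count (char *buf, u_int buflen, int count)
{
    XDR xdrs;

    xdrmem_create (&xdrs, buf, buflen, XDR_ENCODE);
    if (!xdr_int (&xdrs, &count))
        return -1;                       /* buffer too small */
    return (int) xdr_getpos (&xdrs);     /* bytes written (4 for an int) */
}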
