I have been very disappointed to find that, now that I am ready to migrate my HTML5 app onto phones using PhoneGap, the ability to play audio is like perpetual motion: it is almost there, but not quite.
The answer for Android (pre-Ice Cream Sandwich, at least) is the PhoneGap Media API. According to everyone.
The problem is that I REALLY care about what the current time is while playing back my audio. On my Mac running Chrome I timed how long it takes to do:
pos=audio.currentTime
and it is less than a millisecond.
With the PhoneGap Media API I must call an asynchronous function, WAIT until it calls back, and only then do I get the current media position. This takes anywhere from 5 to over 150 milliseconds. It is not consistent, and it is distressingly slow.
It also raises the question: what time is it? If it takes 150 ms to get the time, did I get the time at the beginning of those 150 ms or at the end? I really can't be sure that the time I got is all that close to the current time.
It's a good thing I am not still writing submarine navigation software, because with such a horrible source of time I would never be able to determine where I am.
I am now using the system clock to keep track of time, and using getCurrentPosition() to re-sync from time to time. However, I can still be off by over a hundred ms.
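For reference, a rough sketch of what I mean (syncedPosition, lastSync and the 2-second interval are just for illustration; media is the PhoneGap Media object):
var syncedPosition = 0;   // last position reported by the plugin, in seconds
var lastSync = 0;         // system clock (ms) when that report arrived

function resync() {
    var asked = Date.now();
    media.getCurrentPosition(function (pos) {
        if (pos >= 0) {
            syncedPosition = pos;
            // The reported position corresponds to some unknown point inside
            // the asked..now window, so split the difference.
            lastSync = (asked + Date.now()) / 2;
        }
    });
}

// cheap, synchronous estimate of the current position, in seconds
// (assumes playback is actually running)
function estimatedPosition() {
    return syncedPosition + (Date.now() - lastSync) / 1000;
}

setInterval(resync, 2000); // re-sync every couple of seconds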
My question: Anyone got a better approach?
EDIT:
Looking at the code for Cordova:
/**
 * Get current position of playback.
 *
 * @return position in msec or -1 if not playing
 */
public long getCurrentPosition() {
    if ((this.state == STATE.MEDIA_RUNNING) || (this.state == STATE.MEDIA_PAUSED)) {
        int curPos = this.player.getCurrentPosition();
        this.handler.webView.sendJavascript("cordova.require('cordova/plugin/Media').onStatus('" + this.id + "', " + MEDIA_POSITION + ", " + curPos / 1000.0f + ");");
        return curPos;
    }
    else {
        return -1;
    }
}
It looks to me like this should execute pretty fast...except for that call to sendJavascript(). I wonder if that is where the mysterious delay is originating?
For my purposes I don't need an onStatus event: I need to get the current position as quickly as I can. So I wonder if I can remove that line of code... I guess that would necessitate a change to the Cordova source itself. :(
I hope this question is useful for other users of these modules.
Premise: WiFi.setAutoReconnect(true) does not seem to prevent 100% of disconnections.
I've tested a lot of modules (ESP12F, ESP01) and in some cases I noticed that auto-reconnect does not work properly.
Facts:
I was unable to reproduce the issue while the module is connected to a PC/debugger.
When it did happen, I tried to execute a task triggered by a button press, e.g.:
void loop() {
  if (digitalRead(...) == HIGH) do_something();
}
And the module... did something! So the module was not frozen.
I tried to reset it (both via hardware and software) and the module immediately reconnected.
I have read other sources where this behavior is often described (e.g. https://randomnerdtutorials.com/solved-reconnect-esp32-to-wifi/ ). In brief: try WiFi.reconnect(), and if that does not work, try ESP.restart().
So, the questions:
Why does this happen? Is there a problem with the Arduino libraries, or with the native Espressif interface? Or is it a well-known hardware problem that cannot be solved in software?
If it is indeed so, what techniques do you use to handle disconnections? I have set up a ticker that runs every 30 minutes; if it sees the module has been disconnected for longer than a certain time, it restarts it. E.g.:
void checkWifi() {
  if (lastPing + DELTA < millis()) ESP.restart();
}

// in setup(): Ticker takes the interval (in seconds) first, then the callback
ticker.attach(..., checkWifi);

void loop() {
  if (WiFi.isConnected()) {
    lastPing = millis();
    ...
  }
}
If there is nothing better to do, what do you think of the restart technique? Is it risky to restart frequently? Can it reduce the lifetime of the device?
Thanks to anyone who wants to contribute or exchange impressions!
I am playing a bit with Node.js. I've just started writing something new, and it struck me that my simple "console" app takes quite a long time to respond. This app loads a 5 MB JSON file and turns it into an object, but even that does not take a significant amount of time. My further digging (in quite short and simple code) led me to the conclusion that this single line:
this.generated_on = ( new Date() ).toString();
takes around 2.5s to execute. Further investigation made me understand even less. I've modified it to:
this.generated_on = new Date();
this.generated_on = this.generated_on.toString();
(with console.timeLog calls in between) and the line with toString() was the one that took over 2 seconds to execute. Then I modified the code once again:
this.generated_on = new Date('2019-02-04 20:00:00');
this.generated_on = this.generated_on.toString();
and the results were the other way around: toString() took only 2 ms, while creating the Date object took over 2 s.
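For reference, the console.timeLog measurements I mentioned looked roughly like this (just a sketch of the second variant; the label name is made up):
console.time('gen');
this.generated_on = new Date();
console.timeLog('gen', 'Date constructed');
this.generated_on = this.generated_on.toString();
console.timeEnd('gen'); // prints the total elapsed time for both steps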
Why is it so slow? Why such different results? Is there any faster way to get a formatted current-time string? (I don't care much about execution time for this project, as it works offline, but it still bugs me.)
I think something is off with your development environment. I cannot tell you why your machine runs the code so slowly, and I cannot replicate the problem you describe.
I tried to benchmark the code you have above.
https://repl.it/#act/HotpinkFearfulActiveserverpages
Go here and try clicking the Run button at the top.
Here's the result I'm seeing:
// Case Together:
// this.generated_on = ( new Date() ).toString();

// Case Separately:
// this.generated_on = new Date();
// this.generated_on = this.generated_on.toString();

Together   x 332,222 ops/sec ±7.75% (44 runs sampled)
Separately x 313,162 ops/sec ±8.48% (43 runs sampled)
332,222 ops/sec means an operation took 1/332,222 seconds on average.
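The comparison was set up roughly like this (a sketch using the benchmark npm package, which produces the ops/sec output format above; the actual code is in the repl linked above):
// assumes: npm install benchmark
const Benchmark = require('benchmark');

new Benchmark.Suite()
  .add('Together', function () {
    const generated_on = (new Date()).toString();
  })
  .add('Separately', function () {
    let generated_on = new Date();
    generated_on = generated_on.toString();
  })
  .on('cycle', function (event) {
    console.log(String(event.target)); // e.g. "Together x 332,222 ops/sec ..."
  })
  .run();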
I suggest you use the Node.js built-in module perf_hooks (Performance hooks).
You can find documentation here: https://nodejs.org/dist/latest-v11.x/docs/api/perf_hooks.html#perf_hooks_performance_mark_name
Mark each step from top to bottom and then print the metrics (in milliseconds); you will figure out where the actual issue is.
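A minimal sketch of what that could look like for the two statements from the question (the mark and measure names are just for illustration):
const { performance, PerformanceObserver } = require('perf_hooks');

// print every measure as it is recorded
const obs = new PerformanceObserver((items) => {
  for (const entry of items.getEntries()) {
    console.log(`${entry.name}: ${entry.duration} ms`);
  }
});
obs.observe({ entryTypes: ['measure'] });

performance.mark('start');
const generated_on = new Date();
performance.mark('constructed');
const asString = generated_on.toString();
performance.mark('stringified');

performance.measure('new Date()', 'start', 'constructed');
performance.measure('toString()', 'constructed', 'stringified');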
I was wondering if you could please help. I am using Web Audio to play music for a certain amount of time. After around 90 seconds (depending on the length of the song), I want to call my loadAndPlay() function again to repeat playback. The function will be called every 90 seconds or so.
I would like to use the same method to trigger effects at the same time, using Tuna.js.
It seems that this will not work:
if (context.currentTime > 90 * totalPlays)
{
  loadAndPlay();
}
I have also tried this method:
if (time > 90 * totalPlays) {
  source.onended = function()
  {
    console.log("End. starting again");
    loadAndPlay();
    totalPlays++;
  };
}
time will usually be between 90 and 105. This works sometimes, but unfortunately not all the time. I would like to steer away from source.onended anyway, as it is unreliable.
The same idea applies to the effects, e.g. cueing a delay for 15 seconds:
while (context.currentTime > 90 && context.currentTime < 105) {
  delay.wetLevel = 10;
}
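For reference, the direction I'm considering instead is to poll context.currentTime on a timer rather than block in a while loop, something like this (just a sketch, not working code; the 250 ms interval is arbitrary):
setInterval(function () {
  var t = context.currentTime;
  if (t > 90 * totalPlays) {
    loadAndPlay();
    totalPlays++;
  }
  if (t > 90 && t < 105) {
    delay.wetLevel = 10; // the Tuna delay from above
  }
}, 250);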
If anyone could point out where I'm going wrong, that would be fantastic. Thank you!
I have a very simple program that plays 4 different tones, depending on what button is pressed. I have found that if I play multiple tones or the same tone in rapid succession, there are unpleasant clicking noises produced. I have made sure that these clicks are not present in my audio samples; it is definitely caused by playing the clips quickly one after another.
After googling around, I'm fairly sure that the clicks are due to the rapid change in pitch between clips. Looking at the waveform of the playback from the offending audio, it looks like a clip is first cancelled for a fraction of a second before starting the next clip. I have highlighted the section where this seems particularly obvious.
The clip that showcases these audio clicks can also be downloaded here.
My code is very simple. I am using XInput to read input from a connected controller, which determines the tone to play, and I am using WinMM to output sound from wav files. It is written in the D programming language, but I have modified it to use no D-specific features to make it as C-like as possible and to avoid confusion.
SHORT keyPressed(int vkey)
{
    enum highBit { val = 0x8000 }
    return cast(SHORT)(GetKeyState(vkey) & highBit.val);
}

enum Button
{
    DPAD_UP        = 0x0001,
    DPAD_DOWN      = 0x0002,
    DPAD_LEFT      = 0x0004,
    DPAD_RIGHT     = 0x0008,
    START          = 0x0010,
    BACK           = 0x0020,
    LEFT_THUMB     = 0x0040,
    RIGHT_THUMB    = 0x0080,
    LEFT_SHOULDER  = 0x0100,
    RIGHT_SHOULDER = 0x0200,
    A              = 0x1000,
    B              = 0x2000,
    X              = 0x4000,
    Y              = 0x8000,
}

struct XINPUT_GAMEPAD
{
    WORD wButtons;
    BYTE bLeftTrigger;
    BYTE bRightTrigger;
    SHORT sThumbLX;
    SHORT sThumbLY;
    SHORT sThumbRX;
    SHORT sThumbRY;
}

struct XINPUT_STATE
{
    DWORD dwPacketNumber;
    XINPUT_GAMEPAD Gamepad;

    bool isPressed(int button)
    {
        return cast(bool)(Gamepad.wButtons & button);
    }
}

int main()
{
    HANDLE xinputDLL = initXinput();
    XINPUT_STATE oldState;
    XINPUT_STATE newState;

    while (!keyPressed(VK_ESCAPE))
    {
        oldState = newState;
        XInputGetState(0, &newState);

        enum flags { val = SND_ASYNC | SND_FILENAME | SND_NODEFAULT }

        if (newState.isPressed(Button.A) && !oldState.isPressed(Button.A))
        {
            PlaySoundA(toStringz("Piano.ff.A4.wav"), null, flags.val);
        }
        if (newState.isPressed(Button.B) && !oldState.isPressed(Button.B))
        {
            PlaySoundA(toStringz("Piano.ff.B4.wav"), null, flags.val);
        }
        if (newState.isPressed(Button.X) && !oldState.isPressed(Button.X))
        {
            PlaySoundA(toStringz("Piano.ff.C5.wav"), null, flags.val);
        }
        if (newState.isPressed(Button.Y) && !oldState.isPressed(Button.Y))
        {
            PlaySoundA(toStringz("Piano.ff.F4.wav"), null, flags.val);
        }
    }

    denitXinput(xinputDLL);
    return 0;
}
Assuming that I'm correct in regards to the source of the clicking sounds, I think the solution is to have each sample fade into the next one. However, I am not sure how to do this as the WinMM documentation seems relatively sparse, and I am inexperienced with it.
Is the solution to my problem of clicks when playing audio samples to have each sample fade into the next one? If so, how can I accomplish this using WinMM? If not, is there another solution that I can try?
I know how we can solve this in theory, but I don't have actual working code yet for all cases. (When I do, I'll edit this.)
First, the simple case which kinda works: instead of using PlaySound, try mciSendStringA:
if (auto err = mciSendStringA("play test.wav", null, 0, null))
    writeln(err);
I am not making that up, Windows actually has that function, and it actually works with a lot of little command strings and file formats (though if your program terminates, all sound stops, so make sure the program keeps running e.g. stay in your controller loop or call Sleep(something)).
I've used a lot of Win32 and sometimes I'm amazed by how much stuff it has. Prototype:
extern(Windows) uint mciSendStringA(in char*,char*,uint,void*);
found in winmm.lib.
That basically works, but in my test, playing the same file twice at the same time has no effect. Playing different files together mixes them though. So it is a partial solution.
Next step from that would be to use the mciSendCommand function - a bit lower level than send string, so you can open multiple devices and try to get more overlap that way:
http://msdn.microsoft.com/en-us/library/windows/desktop/dd743675%28v=vs.85%29.aspx
I haven't tried this yet, but it looks fairly simple and I suspect it might be good enough for you. Open up a few devices for each button so you can hit them a few times fast and it cycles through them, hopefully mixing the same sound more than once when needed.
The prototype to that is:
extern(Windows) uint /*MCIERROR*/ mciSendCommandA(MCIDEVICEID,UINT,DWORD,DWORD);
Yes, it casts to void* then to DWORD in the msdn example. Blargh. Relevant structs:
struct MCI_OPEN_PARMSA {
    DWORD dwCallback;
    MCIDEVICEID wDeviceID; // aka uint
    LPCSTR lpstrDeviceType;
    LPCSTR lpstrElementName;
    LPCSTR lpstrAlias;
}

struct MCI_PLAY_PARMS {
    DWORD dwCallback;
    DWORD dwFrom;
    DWORD dwTo;
}
and you can borrow some constants from here too:
https://github.com/AndrejMitrovic/DWinProgramming/blob/master/WindowsAPI/win32/mmsystem.d#L693
(if you are already using the win32 bindings, great! But I think they are kinda a pain for little things so I try to avoid them, preferring to copy/paste prototypes+structs+constants off MSDN as I need them.)
You should be able to get the MSDN example working with those definitions and core.sys.windows.windows. Don't forget pragma(lib, "winmm"); too.
I think a full solution that will certainly work, but is also quite a bit harder, will be using the low level interface to mix the sounds yourself as they happen and send that result to the device. I don't have this working yet and I'm out of time today, but hopefully I can get something to you tomorrow.
The basic steps are:
1) call waveOutOpen to get a device. Set up a callback function which it calls when it needs more data.
2) prepare a buffer - or perhaps more than one - with waveOutPrepareHeader
3) feed data with waveOutWrite when requested by your callback (might want this in a separate thread) with the current notes. Mixing two samples is simply a case of adding the values together (and clipping if they overflow - sounds awful btw but hopefully that won't actually happen) so if you are doing more than one sound, just add them as you go.
Don't forget extern(Windows) on any callback function!
4) Loading your samples probably means reading the .wav file. That's not super hard, Windows has helper functions or you can do it yourself. I'll show code for this too.
What I have so far is in my simpleaudio.d https://github.com/adamdruppe/arsd/blob/master/simpleaudio.d find struct AudioOutput and the WinMM version. It has a horrible API right now that must be radically changed - it was acceptable on Linux but sucks on Windows. A callback feeder instead of write(data) should work better on both platforms, so that's what I'll do.
Problem I'm having with the demo right now is gaps between buffers... leading to clicky sounds. Yeah. But I'm sure it is just latency that should be solved with the proper callback approach and buffer sizing.
That MCI function might work for you as a next step though, maybe even a final step if the multiple devices works.
BTW: you could probably also make it do MIDI commands instead of playing wavs and get all kinds of cool stuff. simpleaudio.d's low-level MIDI is already functioning - the demo main even shows a piano scale. Rigging it up to the Xbox controller shouldn't be too hard... note on when the button is pressed, note off when released, and you don't even have to think about timing. Not really an answer to the question, but a cool thing to play with in the same vein!
OK, I hope I don't mess this up; I have had a look for some answers but can't find anything. I am trying to make a simple sampler in openFrameworks using the FMOD sound player in 3D mode. I can make a single instance work fine (recording a new file using libsndfilerecorder and then playing it back and moving it around in surround).
However I want to have 8 layers of looping audio that I can record and replace one layer at a time in a live show. I get a lot of problems as soon as I have more than 1 layer.
The first part of my question relates to the FMOD 3D modes: it is listener-relative, so I have to define the position of my listener for every sound (I would prefer head-relative mode, but I cannot make that work at all). Again, this works fine when I am using a single player, but with multiple players only the last listener I update actually works.
The main problem I have is that when I use multiple players I get distortion, and often a mix of other currently playing sounds (even when the microphone cannot hear them) in my new recordings. Is there an incompatibility between libsndfilerecorder and FMOD?
Here is where I initialise the players:
for (int i = 0; i < CHANNEL_COUNT; i++) {
    lvelocity[i].set(1, 1, 1);
    lup[i].set(0, 1, 0);
    lforward[i].set(0, 0, 1);
    lposition[i].set(0, 0, 0);
    sposition[i].set(3, 3, 2);
    svelocity[i].set(1, 1, 1);
    //player[1].initializeFmod();
    //player[i].loadSound( "1.wav" );
    player[i].setVolume(0.75);
    player[i].setMultiPlay(true);
    player[i].play();
    setupHold[i] = false;
    recording[i] = false;
    channelHasFile[i] = false;
    settingOsc[i] = false;
}
When I am recording I unload the file and make sure the positions of the player that is not loaded are not updating.
void fmodApp::recordingStart( int recordingId ){
    if (recording[recordingId] == false) {
        setupHold[recordingId] = true; // this stops the position updating
        cout << "Start recording Channel " + ofToString(recordingId+1) + " setup hold is true \n";
        pt = getDateName() + ".wav";
        player[recordingId].stop();
        player[recordingId].unloadSound();
        audioRecorder.setup(pt);
        audioRecorder.setFormat(SF_FORMAT_WAV | SF_FORMAT_PCM_16);
        recording[recordingId] = true; // this starts the libSndFileRecorder
    }
    else {
        cout << "Channel" + ofToString(recordingId+1) + " is already recording \n";
    }
}
And I stop the recording like this.
void fmodApp::recordingEnd( int recordingId ){
    if (recording[recordingId] == true) {
        recording[recordingId] = false;
        cout << "Stop recording" + ofToString(recordingId+1) + " \n";
        audioRecorder.finalize();
        audioRecorder.close();
        player[recordingId].loadSound(pt);
        setupHold[recordingId] = false;
        channelHasFile[recordingId] = true;
        cout << "File recorded channel " + ofToString(recordingId+1) + " file is called " + pt + "\n";
    }
    else {
        cout << "Sorry track" + ofToString(recordingId+1) + "is not recording";
    }
}
I am careful not to interrupt the updating process but I cannot see where I am going wrong.
Many Thanks
To deal with the distortion, I think you will need to lower the volume of each channel on playback; try setting the volume to 1/8 of the maximum. There isn't any limiting going on, so if the sum of the sounds goes above 1.0f you will clip and it will sound bad.
To deal with the crosstalk when recording: I guess you have some sort of feedback going on with the output, i.e. the output sound is being fed back into the input channel, probably by the operating system. If you run another app that makes sound, do you also get that in your recording? If so, then that is probably your problem.
If it works with one channel, try it with just two, instead of jumping straight up to eight channels.
In general, I would try to abstract the playback/record logic and the soundPlayer/recorder out into a separate class. You have a couple of booleans there, and it's really easy to make mistakes with more than one boolean. Is there any way you can replace the booleans with an enum or an integer state variable?
EDIT: I didn't see the date on your question :D I suppose you have managed to do it by now. Maybe this helps somebody else.
I'm not sure I can answer every part of your question, but I can share how I've worked with 3D sound in FMOD. I haven't worked with recording, though.
For my own application a user can place sounds in 3D space around himself. For this I only have one Listener and multiple Sounds. In your code you're making a listener for every sound; are you sure that is necessary? I would imagine that this causes the multiple listeners to pick up multiple sounds and output them all to your soundcard. So from the second sound+listener on, both listeners pick up both sounds? I'm not 100% sure, but it sounds plausible to me.
I made a class to create sound objects (and one listener). Then I use a vector to store the objects and loop through them to render them.
My class SoundBox basically holds all the necessary things for FMOD.
Making a "SoundBox" object and adding it to my soundboxes vector:
SoundBox * box = new SoundBox(box_loc, box_rotation, box_color);
box->loadVideo(ofToDataPath(video_files[soundboxes.size()]));
box->loadSound(ofToDataPath(sound_files[soundboxes.size()]));
box->setVolume(1);
box->setMultiPlay(true);
box->updateSound(box_loc, box_vel);
box->play();
soundboxes.push_back(box);
Constructor for the SoundBox. I use a similar constructor in the same class for the listener, but since the listener will always be at the origin for me, it doesn't take any arguments and just sets all the listener locations to 0. The constructor for the listener only gets called once, while the one for the Sound gets called whenever I want to make a new one. (don't mind the box_color. I'm drawing physical boxes in this case..):
SoundBox::SoundBox(ofVec3f box_location, ofVec3f box_rotation, ofColor box_color) {
    _box_location = box_location;
    _box_rotation = box_rotation;
    _box_color = box_color;

    sound_position.x = _box_location.x;
    sound_position.y = _box_location.y;
    sound_position.z = _box_location.z;

    sound_velocity.x = 0;
    sound_velocity.y = 0;
    sound_velocity.z = 0;
}
Then I just use a for loop to loop through them and play them if they're not already playing. I also have some similar code to select them and move them around.
for (auto box = soundboxes.begin(); box != soundboxes.end(); box++) {
    if (!(*box)->getIsPlaying())
        (*box)->play();
}
I really hope this helps. I'm not a very experienced programmer, but this is how I got FMOD with multiple sounds working in openFrameworks, and I hope you can use some of it. I just dumped as much of my code as I could :D
My main suggestion is to use one listener instead of several. Also, having a class for creating the sounds is useful if you, for instance, want to relocate the sounds after their initial placement.
Hope it helps and good luck :)