I'm working on node-midi, a node.js wrapper for the RtMidi C++ library that provides realtime MIDI I/O, and its simplified wrapper node-easymidi.
The library works great, but there's one thing: I want to read the MIDI piano keyboard input not only in easymidi but also in another application, FL Studio, so it can play sounds from the MIDI input. If I try to connect with easymidi while FL Studio is running, I get the error "MidiInWinMM::initialize: no MIDI input devices currently available.", so I have to exit FL Studio before running the node app.
So, how can I modify the code to work with multiple applications besides the node app? Or do I need to install other libraries or change Windows settings? I've searched Google, Stack Overflow, and GitHub issues, but couldn't find a solution on my own, so I'd like your help here. Thank you.
Here's the code.
var easymidi = require('easymidi');
var inputs = easymidi.getInputs();
var outputs = easymidi.getOutputs();
console.log(inputs, outputs);
// [ 'CASIO USB-MIDI 0' ] [ 'Microsoft GS Wavetable Synth 0', 'CASIO USB-MIDI 1' ]
var input = new easymidi.Input('CASIO USB-MIDI 0');
input.on('noteon', function (msg) {
  console.log(msg);
});
input.on('noteoff', function (msg) {
  console.log(msg);
});
Unfortunately, MIDI devices cannot be shared in Windows. The WinMM MIDI API (which RtMidi uses here, as the MidiInWinMM error shows) opens ports exclusively: if a port is in use by one application, another application cannot connect to it until it is released.
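You can't open the port from two programs at once, but the node app can at least wait its turn. Here is a minimal sketch, using only the easymidi calls shown above, that polls until the CASIO port appears in the input list (assuming, as the error message suggests, that the device disappears from that list while FL Studio holds it) and then opens it:
var easymidi = require('easymidi');
var PORT_NAME = 'CASIO USB-MIDI 0';
// Poll until the port is listed, then open it and attach the handlers.
function openWhenAvailable() {
  var inputs = easymidi.getInputs();
  if (inputs.indexOf(PORT_NAME) === -1) {
    console.log('Port busy or absent, retrying in 2s...');
    setTimeout(openWhenAvailable, 2000);
    return;
  }
  var input = new easymidi.Input(PORT_NAME);
  input.on('noteon', function (msg) { console.log(msg); });
  input.on('noteoff', function (msg) { console.log(msg); });
}
openWhenAvailable();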
Related
I am writing a WinRT app, testing it in VS 2015, x86 debug. I am getting
unwanted audio output that is not created by my application. Here is my code:
using Windows.Media.SpeechRecognition;
SpeechRecognitionResult result = await recEngine.RecognizeAsync();
//recEngine is of type SpeechRecognizer; using dictation grammar.
if (result.Confidence == SpeechRecognitionConfidence.Rejected)
    return;
if (result.RawConfidence < .60)
    return;
//proceed to process the speech-to-text in the result......
If I just return (because the speech was not recognized), I do not get the audio output, which is fine. If the speech is recognized successfully, I process it in my app, but I also get the following audio output (through my desktop speakers):
"I do not recognize the command. Please try again".
Where is this coming from and how do I suppress it?
Sorry, my mistake. I found the audio output; it was created by my own code.
I receive a PCM audio data stream over the network, and that part works fine, so I end up with:
DataReader incoming = args.GetDataReader();
byte[] RcvBuffer = new byte[incoming.UnconsumedBufferLength];
incoming.ReadBytes(RcvBuffer);
I have all the audio data in the buffer.
How can I play this through the telephone speaker? Can you point me in some direction?
Thanks
There are many ways to do that.
You can prepend a WAVE header to your data and use MediaElement for playback; see the documentation for the SetSource method.
If, however, by “telephone speaker” you mean the earpiece, then that is only possible if you are creating a VoIP app.
It took a while, but I sorted it out; maybe someone else will need this in the future.
First problem: since I had just started app development for Windows Phone, I chose Blank App (Windows Phone) instead of Blank App (Windows Phone Silverlight), and so I did not have access to many features that are available in Silverlight projects. My suggestion for beginners: understand what each project type is for.
Like Soonts said, there are many ways to do this; this is the one that I used.
I simplified this code and retyped it, so there may be some typos.
using Microsoft.Xna.Framework.Audio;
using System.IO;
1) Create a stream to hold your incoming data:
MemoryStream stream = new MemoryStream();
2) Write the data from the buffer into the stream:
stream.Write(RcvBuffer, 0, RcvBuffer.Length);
3) I am using SoundEffect to play this through the loudspeaker. The sample rate I use is 8 kHz:
SoundEffect sound;
sound = new SoundEffect(stream.ToArray(), 8000, AudioChannels.Mono);
sound.Play();
Background: I'm coding a Metro-styled app for Win8. I need to be able to play music files. Because of quality and space requirements, we're using encoded audio (mp3/ogg).
I'm using XAudio2 to play sound effects (.wav files), but since I couldn't figure out a way to play encoded audio with it, I decided to play the music files with Media Foundation (IMFMediaPlayer interface).
I downloaded the Metro app samples and found that the Media Engine Native C++ video playback sample was the closest to what I needed.
Now that my app has MediaPlayer playing music, I ran into a problem. If the device running the app is slow enough, MediaPlayer hangs. When I'm running the release version of the app on my device, it's fine and I can hear the music just fine. But when I attach the debugger or run it on a slower device, it hangs when I'm setting the byte stream for the MediaPlayer to play.
Here's some code; you'll find it pretty similar to the sample:
StorageFolder^ installedLocation = Windows::ApplicationModel::Package::Current->InstalledLocation;
m_pickFileTask = Concurrency::task<StorageFile^>(installedLocation->GetFileAsync(filename), m_tcs.get_token());
auto player = this;
m_pickFileTask.then([player](StorageFile^ fileHandle)
{
    player->SetURL(fileHandle->Path);
    Concurrency::task<IRandomAccessStream^> fOpenStreamTask = Concurrency::task<IRandomAccessStream^>(fileHandle->OpenAsync(Windows::Storage::FileAccessMode::Read));
    fOpenStreamTask.then([player](IRandomAccessStream^ streamHandle)
    {
        MEDIA::ThrowIfFailed(
            player->m_spMediaEngine->Pause()
        );
        MEDIA::GetMediaError(player->m_spMediaEngine);
        player->SetBytestream(streamHandle);
        if (player->m_spMediaEngine)
        {
            MEDIA::ThrowIfFailed(
                player->m_spEngineEx->Play()
            );
            MEDIA::GetMediaError(player->m_spMediaEngine);
        }
    });
});
And here's the SetBytestream method:
void SetBytestream(IRandomAccessStream^ streamHandle)
{
    if (m_spMFByteStream != nullptr)
    {
        m_spMFByteStream->Close();
        m_spMFByteStream = nullptr;
    }
    MEDIA::ThrowIfFailed(
        MFCreateMFByteStreamOnStreamEx((IUnknown*)streamHandle, &m_spMFByteStream)
    );
    MEDIA::ThrowIfFailed(
        m_spEngineEx->SetSourceFromByteStream(m_spMFByteStream.Get(), m_bstrURL)
    );
    MEDIA::GetMediaError(m_spEngineEx);
    return;
}
The line where it hangs is:
m_spEngineEx->SetSourceFromByteStream(m_spMFByteStream.Get(), m_bstrURL)
When I'm debugging the app, I can press pause and see the stack. Well, not much of it, but at least I can see that it's stuck indefinitely at
ntdll.dll!77b7f4dc()
Any ideas why my app would hang in such a way?
(OPTIONAL: If you know a better way to play mp3/ogg in a C++ Metro-styled app, let me know.)
I could not figure out why this is happening, but I managed to code a workaround:
IMFSourceReader can be used to decode MP3s and feed the bytes into an XAudio2 source voice.
The XAudio2 audio stream effect sample contains a good example of how to do this.
I've been looking around for an answer on this...
Basically, I want to read data from the serial port (in this case, over USB). I've looked into the node-serialport module, but it keeps stalling after the first result from the serial port. I expected it to just spit out the data as it received it. It's as if a buffer is filling up and needs to be flushed somehow?
I've slightly modified the code from the demos I found here - https://github.com/voodootikigod/node-serialport/tree/master/tests
Here's my code:
var sys = require("sys"),
repl = require("repl"),
serialPort = require("serialport").SerialPort;
// Create new serialport pointer
var serial = new serialPort("/dev/tty.usbmodem1d11" , { baudrate : 9600 });
// Add data read event listener
serial.on( "data", function( chunk ) {
sys.puts(chunk);
});
serial.on( "error", function( msg ) {
sys.puts("error: " + msg );
});
repl.start( "=>" );
I'm using an Arduino, hence the 9600 baud rate.
Any help would be awesome, cheers,
James
Author of node-serialport here. I have tracked down the issue: it is due to a compilation issue with IOWatcher in node.js. I have revised the strategy for reading from the serial port, and it should now function as designed in all cases. Please ensure you are using node-serialport 0.2.6 or greater.
Now go out and build JS controlled robots!!!
I also experienced problems with the serial port read.
This is due to a bug in node.js v0.4.7 (see this issue).
However, it worked after switching to an older version of Node.js (v0.4.0).
It might work with versions up to v0.4.6 as well, but I haven't verified that yet.
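As a side note, even with a fixed build, a "data" event can hand you a partial chunk rather than one complete message. Here is a minimal sketch, assuming the Arduino ends each message with a newline, that reassembles full lines before printing them:
var serialPort = require("serialport").SerialPort;
var serial = new serialPort("/dev/tty.usbmodem1d11", { baudrate: 9600 });
var pending = "";
// Accumulate chunks and emit only complete, newline-terminated lines.
serial.on("data", function (chunk) {
  pending += chunk.toString();
  var lines = pending.split("\n");
  pending = lines.pop(); // keep the trailing partial line for next time
  lines.forEach(function (line) {
    console.log(line);
  });
});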
Is Adobe Media Encoder (AME) Scriptable? I've heard people mention it was "officially scriptable" but I can't find any reference to its scriptable object set.
Has anyone had any experience scripting AME?
Adobe Media Encoder is 'officially' not scriptable, but we can use the ExtendScript API for scripting AME.
The functions below are available through ExtendScript:
1. Adding a file to the batch
With an encode progress callback:
eHost = app.getEncoderHost();
enc = eHost.createEncoderForFormat("QuickTime");
flag = enc.loadPreset("HD 1080i 29.97, H.264, AAC 48 kHz");
if (flag) {
  f = enc.onEncodeProgress = function (progress) {
    $.writeln(progress);
  };
  enc.encode("/Users/test/Desktop/00000.MTS", "/Users/test/Desktop/0.mov");
} else {
  alert("The preset could not be loaded");
}
With an encode finished callback:
eHost = app.getEncoderHost();
enc = eHost.createEncoderForFormat("QuickTime");
flag = enc.loadPreset("HD 1080i 29.97, H.264, AAC 48 kHz");
if (flag) {
  f = enc.onEncodeFinished = function (success) {
    if (success) {
      alert("Encoding finished successfully");
    } else {
      alert("Encoding failed");
    }
  };
  eHost.runBatch();
} else {
  alert("The preset could not be loaded");
}
2. Start batch
eHost = app.getEncoderHost();
eHost.runBatch();
3. Stop batch
eHost = app.getEncoderHost();
eHost.stopBatch();
4. Pause batch
eHost = app.getEncoderHost();
eHost.pauseBatch();
5. Getting preset formats
eHost = app.getEncoderHost();
list = eHost.getFormatList();
6. Getting presets
eHost = app.getEncoderHost();
enc = eHost.createEncoderForFormat("QuickTime");
list = enc.getPresetList();
and many more...
The closest bit of info I've found is:
http://www.openspc2.org/book/MediaEncoderCC/
That resource is actually good if you can read Japanese, or at least use Chrome's built-in translate function; you can see it has pages such as this one:
http://www.openspc2.org/book/MediaEncoderCC/easy/encodeHost/009/index.html
We can perform almost all of the basic functionality through script.
I had a similar question about Soundbooth. I haven't tried scripting Adobe Media Encoder, though; it doesn't show up in the list of applications I could potentially connect to and script with the ExtendScript Toolkit.
I did find this article that might come in handy if you're on Windows. I guess something similar written in AppleScript could do the job on OS X. I haven't tried it, but this Sikuli thing looks nice; maybe it could help with the job.
Adobe Media Encoder doesn't seem to be scriptable. I was wondering: for batch converting, could you use ffmpeg? There seem to be a few scripts out there for that, if you google for ffmpeg batch flv.
HTH,
George
Year 2021
Yes, AME is scriptable in ExtendScript. The AME API doc can be found at https://ame-scripting.docsforadobe.dev/index.html.
The API methods can be invoked locally inside AME or remotely through BridgeTalk.
addCompToBatch and the other alternatives in the API doc seem to be safe to use. This works:
app.getFrontend().addCompToBatch(project, preset, destination);
The method requires the project to be structured so that one and only one comp is at the root of the project.
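As noted above, the same call can also be sent to AME from another Adobe application through BridgeTalk. A minimal sketch, with the caveat that the "ame" target specifier and all the paths below are assumptions to adapt to your setup:
var bt = new BridgeTalk();
bt.target = "ame"; // assumed specifier; a versioned form such as "ame-15.0" should also work
bt.body = 'app.getFrontend().addCompToBatch("C:/work/project.aep", "C:/presets/h264.epr", "C:/out/render.mp4");';
bt.onResult = function (res) {
  $.writeln(res.body); // whatever the remote script returned
};
bt.send();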
encoder.encode, references to which can be found on the web and which supposedly supports encode progress callbacks, is not available in AME 2020 and 2021. As a result, this does not work:
var encoder = app.getEncoderHost().createEncoderForFormat(encoderFormat);
var res = encoder.loadPreset(encoderPreset);
if (res) {
  encoder.encode(project, destination); // error: encode is not a function
}
The method seems to have been removed in AME 2017.1, according to the post reporting the issue: https://community.adobe.com/t5/adobe-media-encoder-discussions/media-encoder-automation-system-with-using-extendscript/td-p/9344018
The official stance at the moment is "no", but if you open the Adobe ExtendScript Toolkit and set the target app to Media Encoder, you will see in the Data Browser that a few objects and methods are already exposed on the app object, like app.getFrontend(), app.getEncoderHost(), etc. There is no official documentation, though, and no support, so you are free to experiment with them at your own risk.
You can use the ExtendScript reflection interface like this:
a = app.getFrontend()
a.reflect.properties
a.reflect.methods
a.reflect.find("addItemToBatch").description
But as far as I can see, no meaningful information can be found this way beyond a list of methods and properties.
More about the ExtendScript reflect interface can be found in the JavaScript Tools Guide CC document.
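For instance, here is a small sketch that walks everything the reflection interface reports for the frontend object; the names it prints are whatever your AME build happens to expose, so the output will vary:
var fe = app.getFrontend();
// Print each exposed method and property name, one per line.
var methods = fe.reflect.methods;
for (var i = 0; i < methods.length; i++) {
  $.writeln("method: " + methods[i].name);
}
var props = fe.reflect.properties;
for (var j = 0; j < props.length; j++) {
  $.writeln("property: " + props[j].name);
}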
It doesn't seem to be. There are some references to it being somewhat scriptable via FCP XML, yet it's not "scriptable" in the accepted sense of the word.
Edit: it looks like they finally got their finger out and made ME scriptable: https://stackoverflow.com/a/69203537/432987
I got here after this page came second in the DuckDuckGo results for "extendscript adobe media encoder". First was a post on the Adobe forums where an Adobe staffer wrote:
Scripting in Adobe Media Encoder is not a supported feature.
and, just to give the finger to anyone seeking to develop solutions for Adobe users on Adobe's own platform:
Also, this is a user-to-user forum, not an official channel for support from Adobe personnel.
I think the answer is "Adobe says no".