Command line to Render MIDI from Kontakt patch [closed] - audio

Closed. This question needs to be more focused. It is not currently accepting answers.
Closed 4 years ago.
I have a bunch of MIDI files to render with the same Kontakt patch.
I would like to automatically render these MIDI files from the command line, without needing to load any program manually or manually assign the Kontakt patch to the MIDI track.
I want to write my own program, "MyProgram", and use it as follows:
MyProgram.exe -MIDI myMidiFile.mid -kontakt myPatch.nki -out myWav.wav
This would render the MIDI file to a WAV file with the specific Kontakt patch I assigned.
I don't have any platform constraint; however, I guess Windows would be best because of the VST ecosystem.
I have no idea how to achieve this; does anyone have an idea?
Cheers

Not possible as specified unless NI has released something that I'm unaware of.
What is possible instead is generic VSTi plugin state persistence (see http://vstdev.richackard.com/doc/vstsdk/faq.html). Because Kontakt is a VST plugin, a command-line VST host that can export and import plugin state as a file would let you control Kontakt over VST and automate anything you want. Once set up, it would be conceptually possible to load and apply this state from a fully automated command-line tool. This could definitely be implemented using the SDK: http://www.steinberg.net/en/company/developers.html
More help on VST host development: http://teragonaudio.com/article/How-to-make-your-own-VST-host.html
I can only find one similar tool that already exists; I don't know for sure if it will work for you, but the forum posts I'm reading suggest it supports VST save states.
http://teragonaudio.com/MrsWatson.html
MrsWatson is open source, so you can extend it to better suit your needs if necessary.

I don't think you will be able to do it directly from the .nki file, because that is Kontakt's own format and there is no function in the VST spec that will load it in that form.
What you may be able to do, though, is load Kontakt into a sequencer/DAW, load the .nki file, save the plugin state to an .fxp file, and then load that .fxp in your application. Note that Kontakt isn't currently VST3, so you'll need to use the VST 2.x SDK.

Related

How to send image as a REST API response? What guidelines to follow? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 1 year ago.
I am making an application using Node.js and Express. I am able to save files to the server using multer; however, when sending responses I have two options:
Send the URI of the image in JSON, and let the front-end call this to display the image.
Send the image data using some form of encoding like Base64 as part of JSON.
Since I am new to web development, I am confused as to which option to use. Some people have said that the first option requires two API calls, so it can be slow, while I have also heard that the second option takes up more memory.
What other things should I consider while choosing, and is there any other way of sending images to the client side?
Option 1
This option is less complex, since no conversion is needed. The two API calls won't slow you down; the image size matters far more. The file can be stored on and served directly from the filesystem, and a file download is quick to implement. Note also that base64 encoding makes the file roughly 33% bigger, which has a huge impact on performance with large files.
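To make the roughly 33% figure concrete, here is a quick Node.js sketch; the 1 KiB buffer is just a stand-in for real image bytes:

```javascript
// Base64 encodes every 3 input bytes as 4 output characters, so an image
// payload embedded in JSON grows by about a third versus sending it as binary.
const binary = Buffer.alloc(1024, 0xab);    // stand-in for a 1 KiB image
const asBase64 = binary.toString('base64');
const overhead = asBase64.length / binary.length;

console.log(`binary: ${binary.length} bytes, base64: ${asBase64.length} chars`);
console.log(`overhead: ${((overhead - 1) * 100).toFixed(0)}%`); // ≈ 34%
```

The overhead is independent of the image format, so it applies equally to JPEG, PNG, etc.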
Option 2
Base64 is more secure, as nobody can hotlink to your images, as described here.
You only need base64 for security reasons, or if you have to transfer the image data as a string because you cannot transfer it as binary.
Use Case
If this is a private, non-production project, just try out both and use the one you like; either way you are learning something. It's only important to stay consistent!
If one option fits you better, just implement it the way you like. You can always refactor that part of the application later, when you have more experience or when the core parts of your application are finished. Sometimes, after working for a while with one of the techniques, it becomes clearer which approach to use.
For learning, it's often better to go ahead and implement something that works, and refactor as problems occur, rather than overengineering small details up front.

LabVIEW data writing in a TDMS file [closed]

Closed. This question needs debugging details. It is not currently accepting answers.
Closed 9 months ago.
I want to acquire pressure data from a pressure sensor. When I write the data using "Write to Measurement File", only part of the data is saved and the rest is lost.
I also tried exporting the data to Excel from the waveform chart, but I got a message saying there is not enough memory.
What should I do to save all the data without losing any?
Is there any way to save the data directly to the Hard drive?
Thanks
I don't like the Express VIs myself. Recording directly to a TDMS file is the way to go. There are great examples of writing to a TDMS file among the LabVIEW examples. I think the TDMS API is pretty intuitive as well; look at the TDMS palette under the File I/O palette. All you really need for this is open, write, then close.
TDMS files are incredibly fast to write to and allow multiple groupings and metadata at multiple levels. I use TDMS all the time in my applications and never lose any data.

How or where can I get separate notes of an instrument for playback in my application? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 4 years ago.
I am looking to create a music creation application, and would like to allow the user to play the individual notes of an instrument. Is there a place online where I can find individual sound files that I may playback for each note, or is there a way of programmatically "generating" each pitch? I am not concerned with sound quality at this point in my development.
EDIT: I am still in the early stages of development. I want the app to be browser based, using Javascript or something similar. A Linux development environment, if that is of relevance at all. The notes will be played via an on-screen interface.
The University of Iowa Electronic Music Studios has a very nice, complete archive of sampled instruments, with one musical note per file. You should also check out Freesound, though that is a much more general-purpose sample-sharing site.
There are plenty of places online to find sampled instruments. If you're not concerned about sound quality, some free soundfonts will most likely do the job.
For example, this site http://soundfonts.homemusician.net/ has pianos, basses, guitars, horn etc. (Google "free sf2" for more)
There are plenty of ways to generate (aka synthesise) tones as well.
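Since you mention a browser-based app in JavaScript, here is one hedged sketch of tone generation using the Web Audio API; note numbers follow the MIDI convention (A4 = note 69 = 440 Hz):

```javascript
// Equal-tempered frequency for a MIDI note number (A4 = note 69 = 440 Hz).
function midiToFreq(note) {
  return 440 * Math.pow(2, (note - 69) / 12);
}

// In a browser, play the pitch with the Web Audio API. The guard lets the
// snippet also load outside a browser, where AudioContext is undefined.
if (typeof AudioContext !== 'undefined') {
  const ctx = new AudioContext();
  const osc = ctx.createOscillator();    // defaults to a sine wave
  osc.frequency.value = midiToFreq(60);  // middle C ≈ 261.63 Hz
  osc.connect(ctx.destination);
  osc.start();
  osc.stop(ctx.currentTime + 0.5);       // sound for half a second
}
```

Wiring each key of your on-screen interface to one oscillator like this gives you every pitch without any sample files at all.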
If you don't mind MIDI files, you can get a free MIDI software piano and create your own files: C.mid, C#.mid, D.mid, etc.
Here's one with a quirky interface but there are many more:
http://download.cnet.com/MidiPiano/3000-2133_4-10542342.html
The easiest way to do this is simply to output MIDI messages to the synthesizer built into every computer. There is no need to create MIDI files or use extra sound fonts.
You didn't mention what language you are using, so it is hard to suggest ways to get started. In all cases though, you'll want to read up a bit on what MIDI actually is.
Basically, MIDI is nothing but control data, commonly used with synthesizers. At a basic level there are note-on and note-off messages, and many other kinds too, such as pitch bend, control change, etc. MIDI supports 16 "channels", which are all sent down the same line, just with different identifiers.
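As a concrete illustration of those messages, here is a small sketch that packs note-on/note-off bytes (the masks keep channel and data values in the 4-bit and 7-bit ranges the MIDI spec requires):

```javascript
// A channel voice message is one status byte followed by two data bytes.
// Note-on status is 0x90 | channel, note-off is 0x80 | channel; the data
// bytes (note number, velocity) are 7-bit values.
function noteOn(channel, note, velocity) {
  return [0x90 | (channel & 0x0f), note & 0x7f, velocity & 0x7f];
}

function noteOff(channel, note) {
  return [0x80 | (channel & 0x0f), note & 0x7f, 0];
}

console.log(noteOn(0, 60, 100)); // middle C on channel 1 → [ 144, 60, 100 ]
console.log(noteOff(0, 60));     // → [ 128, 60, 0 ]
```

Sending the three bytes of a note-on, then later the matching note-off, is all it takes to sound and release one note on a synth.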
A good utility (on Windows) for debugging MIDI messages (and getting a better idea of the protocol in general!) can be found here: http://www.midiox.com/

Easy-to-use AutoHotkey/AutoIt alternatives for Linux [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 11 years ago.
I'm looking for recommendations for an easy-to-use GUI automation/macro platform for Linux.
If you're familiar with AutoHotkey or AutoIt on Windows, then you know exactly the kind of features I need and the level of complexity I'm after. If you aren't familiar, here's a small code snippet showing how easy AutoHotkey is to use:
InputBox, varInput, Please enter some random text...
Run, notepad.exe
WinWaitActive, Untitled - Notepad
SendInput, %varInput%
SendInput, !f{Up}{Enter}{Enter}
WinWaitActive, Save
SendInput, SomeRandomFile{Enter}
MsgBox, Your text`, %varInput% has been saved using notepad!
#n::Run, notepad.exe
Now the above example, although a bit pointless, is a demo of the sort of functionality and simplicity I'm looking for. Here's an explanation for those who don't speak AutoHotkey:
----Start of Explanation of Code ----
Asks user to input some text and stores it in varInput
Runs notepad.exe
Waits till window exists and is active
Sends the contents of varInput as a series of keystrokes
Sends keystrokes to go to File -> Exit
Waits till the "Save" window is active
Sends some more keystrokes
Shows a Message Box with some text and the contents of a variable
Registers a hotkey, Win+N, which when pressed executes notepad.exe
----End of Explanation----
So as you can understand, the features are quite obvious: Ability to easily simulate keyboard and mouse functions, read input, process and display output, execute programs, manipulate windows, register hotkeys, etc. - all being done without requiring any #includes, unnecessary brackets, class declarations, etc. In short: Simple.
Now, I've played around a bit with Perl and Python, but they're definitely not AutoHotkey. They're great for more advanced stuff, but surely there has to be some tool out there for easy GUI automation, right?
PS: I've already tried running AutoHotkey with Wine, but sending keystrokes and registering hotkeys don't work.
I'd recommend the site alternativeto.net to find alternative programs.
It shows three alternatives for AutoIt: AutoKey, Sikuli, and SilkTest. AutoKey seems up to the job.
IronAHK is being developed as a cross-platform flavor of AutoHotkey which can be used on Linux, but it's not a fleshed out product yet.
Sikuli lets you automate your interface using screenshots. It runs on any Java platform, so it is cross-platform.
You should look at Experitest. I'm using the Windows version, but it's Java-based and I think it supports Linux as well.

Service to make an audio podcast from a video one? [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
Video podcast → ??? → Audio-only MP3 player
I'm looking for something that will extract the audio from video, not for a single file, but for an ongoing video podcast.
Ideally I would like a website that sucks in the RSS and spits out a new RSS (I'm thinking of something like FeedBurner), though I would settle for something on my own machine.
If it must be on my machine, it should be quick, transparent, and automatic when I download each episode.
What would you use?
Edit: I'm on an Ubuntu 8.04 machine; so running ffmpeg is no problem; however, I'm looking for automation and feed awareness.
Here's my use case: I want to listen to lectures from Google Video, such as Structure and Interpretation of Computer Programs. These videos come out fairly often, so anything that has to be done manually will need doing fairly often.
Here's one approach I'd thought of:
download the RSS
parse the RSS for enclosures
download the enclosures, keeping track of what has already been downloaded
transcode the files, skipping those already done
reconstruct an RSS with the audio files, remembering to change the metadata
schedule it to run periodically
point the podcatcher at the new RSS feed
I also liked gPodder's approach of using a post-download script.
I wish the Lazy Web still worked.
You could automate this using the open-source command-line tool ffmpeg. Parse the RSS to get the video files, fetch them over the net if needed, then run each one through a command line like this:
ffmpeg -i episode1.mov -ab 128000 episode1.mp3
The -ab switch sets the audio bit rate of the output to 128 kbit/s; adjust as needed.
Once you have the audio files you can reconstruct the RSS feed to link to the audio files if so desired.
How to extract audio from video to MP3:
http://www.dvdvideosoft.com/guides/dvd/extract-audio-from-video-to-mp3.htm
How to Convert a Video Podcast to Audio Only:
http://www.legalandrew.com/2007/03/10/how-to-convert-a-video-podcast-to-audio-only/
When you edit your video, doesn't your editor provide you an option to split out the audio?
What platform is your own machine? What format is the video podcast?
You could possibly get Handbrake to do this (Windows, Linux and Mac), I don't know if it's scriptable at all but I think it can be used to separate audio and video.
Edit: There is a command-line interface for Handbrake, but it appears I was wrong about it accepting non-DVD input.
On the Mac I'd probably rig up something with Applescript and QuickTime - what platform are you on?
