Record LineOut output directly to file with JSyn

I have built a loopstation in JSyn. It allows you to record and play back samples, and by playing multiple samples you can layer up sounds (e.g. one percussion sample, one melody, etc.).
JSyn lets me connect each of the sample players directly to my LineOut, where they are mixed automatically. Now I would like to record the sound, just as the user hears it, to a .wav file, but I am not sure what I should connect the input port of the recorder to.
What is the smartest way to connect the audio output of all samples to the WaveRecorder?
In other words: the Programmer's Guide has an example for this, but I am not sure how to create the "finalMix" used there.

Rather than using multiple LineOuts, just use one LineOut.
You can mix all of your signals together using a chain of MultiplyAdd units.
http://www.softsynth.com/jsyn/docs/javadocs/com/jsyn/unitgen/MultiplyAdd.html
Or you can use a Mixer unit.
http://www.softsynth.com/jsyn/docs/javadocs/com/jsyn/unitgen/MixerStereoRamped.html
Then connect the mix to your WaveRecorder and to your single LineOut.
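For reference, here is a minimal sketch of that wiring, assuming two mono sample players and a single MultiplyAdd as the mixing stage; the player type, gain value, and file name are placeholders rather than anything from your project:

import java.io.File;

import com.jsyn.JSyn;
import com.jsyn.Synthesizer;
import com.jsyn.unitgen.LineOut;
import com.jsyn.unitgen.MultiplyAdd;
import com.jsyn.unitgen.VariableRateMonoReader;
import com.jsyn.util.WaveRecorder;

public class LoopstationRecorder {
    public static void main(String[] args) throws Exception {
        Synthesizer synth = JSyn.createSynthesizer();

        // Two sample players standing in for the loopstation's players.
        VariableRateMonoReader player1 = new VariableRateMonoReader();
        VariableRateMonoReader player2 = new VariableRateMonoReader();

        // MultiplyAdd computes output = inputA * inputB + inputC, so
        // inputB acts as a per-voice gain and inputC carries the running mix.
        MultiplyAdd mix = new MultiplyAdd();

        LineOut lineOut = new LineOut();
        WaveRecorder recorder = new WaveRecorder(synth, new File("session.wav"));

        synth.add(player1);
        synth.add(player2);
        synth.add(mix);
        synth.add(lineOut);

        // Build the "finalMix": player1 * 1.0 + player2.
        player1.output.connect(mix.inputA);
        mix.inputB.set(1.0);
        player2.output.connect(mix.inputC);

        // Send the same mix to both the speakers and the recorder.
        mix.output.connect(0, lineOut.input, 0);
        mix.output.connect(0, lineOut.input, 1);
        mix.output.connect(0, recorder.getInput(), 0);
        mix.output.connect(0, recorder.getInput(), 1);

        synth.start();
        lineOut.start();
        recorder.start();

        // ... queue samples on the players and run the loopstation ...

        recorder.stop();
        recorder.close();
        synth.stop();
    }
}

If you have more than two players, chain one additional MultiplyAdd per player (feeding the previous mix into inputC), or swap in one of the Mixer units linked above.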

Related

Real Time Audio Processing Structure?

I have looked into tons of different resources regarding real-time audio processing, but not many that fit my particular use case. I would like to add filters to microphone input that can be sent, rather than to an output device, to something like a Discord, Zoom, or Skype call.
A basic example would be joining a Zoom call with an alien/robot voice.
Voice modulators that I aim to emulate, such as VoiceMod (not open source), create their own input source that you would select in Discord, but I have not seen this anywhere else. I doubt this is something I could use web-audio-api for. Is this something that requires a server in order to buffer, filter, and redirect audio?

Advice on dynamically combining mpeg-dash mpd data

I'm doing research for a project that's about to start.
We will be supplied with hundreds of 30-second video files that the end user can select (via various filters), and we then want to play them back as if they were one video.
It seems that Media Source Extensions with MPEG-DASH is the way to go.
I feel like it could possibly be solved in the following way, but I'd like to ask whether this sounds right to anyone who has done similar things.
My theory:
Create an MPD for each video (via MP4Box or a similar tool)
The user makes selections (each of which has an MPD)
Read each MPD and get its <Period> elements (most likely only one in each)
Create a new MPD file and insert all the <Period> elements into it in order.
Caveats
I imagine this may be problematic if the videos were all different sizes, formats, etc., but in this case we can assume consistency.
So my question to anyone with MPEG-DASH / MPD experience: does this sound right, or is there a better way to achieve this?
Sounds right; multi-period is the only feasible way in my opinion.
Ideally you would encode all the videos with the same settings to give the end user a consistent experience. However, from a technical point of view it shouldn't be a problem if the quality or even the aspect ratio changes from one period to another. You'll need a player which supports multi-period playback, such as dash.js or Bitmovin.
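If it helps, here is a rough sketch of the merging step using plain Java DOM, assuming the MPDs were generated consistently (e.g. all by MP4Box with the same settings). Class and file names are illustrative, and housekeeping such as Period start times, the overall mediaPresentationDuration, and BaseURL adjustments is left out:

import java.io.File;
import java.util.List;

import javax.xml.parsers.DocumentBuilder;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.transform.Transformer;
import javax.xml.transform.TransformerFactory;
import javax.xml.transform.dom.DOMSource;
import javax.xml.transform.stream.StreamResult;

import org.w3c.dom.Document;
import org.w3c.dom.Element;
import org.w3c.dom.NodeList;

public class MpdCombiner {

    // Copies the <Period> elements of each selected MPD into the first one, in order.
    public static void combine(List<File> selectedMpds, File outputMpd) throws Exception {
        DocumentBuilder builder = DocumentBuilderFactory.newInstance().newDocumentBuilder();

        // Use the first selected MPD as the template for the combined manifest.
        Document combined = builder.parse(selectedMpds.get(0));
        Element root = combined.getDocumentElement();

        for (File mpd : selectedMpds.subList(1, selectedMpds.size())) {
            Document doc = builder.parse(mpd);
            NodeList periods = doc.getElementsByTagName("Period");
            for (int i = 0; i < periods.getLength(); i++) {
                // importNode(deep = true) copies the Period with its AdaptationSets.
                root.appendChild(combined.importNode(periods.item(i), true));
            }
        }

        // NOTE: for multi-period playback each Period normally needs a start attribute
        // (or durations the player can derive), and segment paths must still resolve
        // from the new manifest's location; that bookkeeping is omitted here.

        Transformer transformer = TransformerFactory.newInstance().newTransformer();
        transformer.transform(new DOMSource(combined), new StreamResult(outputMpd));
    }
}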

Using hierarchical blocks in redhawk

Is there a way to use hierarchical blocks in REDHAWK?
For example, say I want to make a digital modulator that is a composition of filters, upsamplers, etc., and I want to use it as a single block in a waveform project that has other hierarchical components as well. How would I combine the already-made filter and upsampler blocks into the digital modulator block using REDHAWK?
You currently cannot create waveforms of waveforms. A waveform can, however, have external ports and external properties, allowing you to chain waveforms together dynamically and treat each one much like a component from a programmatic perspective. For example, in the snippet below I launch two waveforms on the domain and connect them; these waveforms are the examples that come bundled with REDHAWK and have external ports and properties.
>>> from ossie.utils import redhawk
>>> # Attach to the running domain manager
>>> dom = redhawk.attach()
>>> # Launch two copies of the bundled FM_mono_demo waveform
>>> wf1 = dom.createApplication('/waveforms/rh/FM_mono_demo/FM_mono_demo.sad.xml')
>>> wf2 = dom.createApplication('/waveforms/rh/FM_mono_demo/FM_mono_demo.sad.xml')
>>> # Connect them via their external ports
>>> wf1.connect(wf2)
There isn't a construct for a component of components (other than a waveform). As of the REDHAWK 2.1 beta release there is a 'shared address' construct that allows you to do something similar to what you seem to be asking for. The 'shared address' BULKIO pattern was specifically developed to create high-speed connections between components and reduce the processing load caused by IO. Take a look at https://github.com/RedhawkSDR/core-framework/tree/develop-2.1/docs/shared-address and see if this is what you are looking for. It will allow you to launch 'N' components built according to the shared address pattern into a single component host and still retain each individual component's property interfaces, etc.
If you are more specific about why you want to use a hierarchical block, a more targeted answer may be possible.

DirectShow graph separate Avi and Wav (play back, not saving)?

I am new to DirectShow. What I am trying to do is use one IGraphBuilder object to play both a silent AVI video and a separate WAV file at the same time. I can't merge the two into one video, but I would like it to appear as if they were one.
Is there any way to go about using one set of filters to run an AVI and a WAV file concurrently?
Thanks!
You can achieve this both ways: by adding two separate chains into the same graph, or using two separate filter graphs.
The first method gives you a single graph that contains both an AVI playback chain and a WAV playback chain; the second gives you two independent graphs, one per file.
The first approach has the advantage that you get lipsync between video and audio and you control the graph as a single playback object. The other method's advantage is the ability to control video and audio separately, including stopping and changing files independently of one another.

Can a single MIDI track play more than one note at once?

I am writing my own MIDI parser and everything seems to be going nicely.
I am testing against some of the files I see in the wild. I noticed that a MIDI track never appears to have more than one note on at once (producing more than one tone). Is this by design, or can a MIDI track require more than one note to play at once?
(I am not referring to the number of simultaneous tracks; I am referring to the number of tones in a single track.)
The MIDI files I have tested look like this:
ON_NOTE71:ON_NOTE75:ON_NOTE79
ON_NOTE71:OFF_NOTE71:ON_NOTE75:OFF_NOTE75:ON_NOTE79:OFF_NOTE79
Can it look like this?
ON_NOTE71:ON_NOTE73:OFF_NOTE73:OFF_NOTE71
How do I detect this alternative structure?
Yes. Playing more than one note at once is known as polyphony. Different MIDI specifications define support for different levels of polyphony.
See http://www.midi.org/techspecs/gm.php
The number of notes that can play at once is a hardware implementation detail. Your software should allow for any number of notes to be sounding at the same time. I suggest keeping a table of which notes are currently on so that you can send a note off for each one when playback is stopped. Ideally the table should keep a count for each note that is increased on a note on and decreased on a note off. That way, if a certain pitch has two pending note on events, you can send two note off events. You can't know how the device you're communicating with will handle successive note on events for the same pitch, so it's safest to send an equal number of note off events.
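A small sketch of that bookkeeping (the names are illustrative; also remember that a Note On with velocity 0 is conventionally treated as a Note Off):

// Tracks how many note-on events are pending per channel and pitch so that an
// equal number of note-offs can be sent, e.g. when playback is stopped.
public class ActiveNoteTable {
    // counts[channel][pitch] = note-ons not yet matched by a note-off
    private final int[][] counts = new int[16][128];

    public void noteOn(int channel, int pitch) {
        counts[channel][pitch]++;
    }

    public void noteOff(int channel, int pitch) {
        if (counts[channel][pitch] > 0) {
            counts[channel][pitch]--;
        }
    }

    // Calls the sender once per pending note-on and clears the table.
    public void allNotesOff(NoteOffSender sender) {
        for (int channel = 0; channel < 16; channel++) {
            for (int pitch = 0; pitch < 128; pitch++) {
                while (counts[channel][pitch] > 0) {
                    sender.sendNoteOff(channel, pitch);
                    counts[channel][pitch]--;
                }
            }
        }
    }

    public interface NoteOffSender {
        void sendNoteOff(int channel, int pitch);
    }
}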
Yes. Both controllers and software can produce such events.
