Pure Data: Dynamically route an audio signal to different channels

I'm using Pure Data for a project where I'll be playing several audio files at the same time to different speakers.
Let's say I have two files, and I want one to be played on the left channel of the soundcard and the second one on the right channel, so that's the first and second inlets of the [dac~ 1 2] object.
How can I route the audio signal depending on another value?
I'm basically looking for something like the route object, but with some extra parameter, or some way to pack the audio signal with the channel number (1, 2) and use the number to route the signal.
I just found out that Yves Degoyon's "unauthorized" library has the spigot~ object that does what I want, but only with two channels. In the end I would like to be able to output different sounds to eight or nine channels.

Pd-extended is not maintained any more. You can install Zexy for vanilla Pd via the Debian package or the Deken plugin; then you will have the [demultiplex~] object available. However, there may be good reasons not to use an external at all. Here is one way to patch a kind of switchboard. An additional benefit: you can specify your favorite fade time and type.

You can use [demultiplex~] from the Zexy library to route one incoming signal to one of several outlets. For instance, [demultiplex~ 1 2 3 4] will have one inlet and four outlets. The single inlet takes both an incoming signal (which will be routed) and a single float which selects the outlet to which the signal will be routed. For the opposite behaviour (several incoming signals on several inlets, with only one of them being passed to the single outlet), try [multiplex~].
Also note that you can use [mux~] and [demux~] as they are aliases for these same objects.

Based on Max N's answer, you can also use a toggle to modify the volume of the signal and control where it is redirected:
In this case, if the toggle is active, the signal is sent to the left outlet. If it is inactive, the right outlet receives the signal.
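Outside of Pd, the same switchboard idea is easy to sketch in code. Below is a minimal C sketch (all names and the fade constant are invented for illustration): one input block is copied to every output channel, each scaled by its own gain, and the gains are ramped linearly toward 0 or 1 to avoid clicks, which is exactly what the toggle/volume patch does.

#define NCHANNELS 8
#define BLOCK     64
#define FADE_STEP (1.0f / 441.0f)   /* roughly a 10 ms fade at 44.1 kHz */

typedef struct {
    float gain[NCHANNELS];    /* current gain per output channel */
    float target[NCHANNELS];  /* 0.0 or 1.0, set when a new channel is chosen */
} Switchboard;

void switchboard_select(Switchboard *sb, int channel) {
    for (int c = 0; c < NCHANNELS; c++)
        sb->target[c] = (c == channel) ? 1.0f : 0.0f;
}

void switchboard_process(Switchboard *sb, const float *in,
                         float out[NCHANNELS][BLOCK]) {
    for (int i = 0; i < BLOCK; i++)
        for (int c = 0; c < NCHANNELS; c++) {
            float g = sb->gain[c], t = sb->target[c];
            if (g < t) { g += FADE_STEP; if (g > t) g = t; }   /* ramp up */
            else if (g > t) { g -= FADE_STEP; if (g < t) g = t; } /* ramp down */
            sb->gain[c] = g;
            out[c][i] = in[i] * g;    /* every output gets a scaled copy */
        }
}

Calling switchboard_select(&sb, 3) while audio is running crossfades the signal from the old channel to channel 3 over the fade time, instead of switching abruptly.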

Related

How do I create a chain of lazy streams where 2 streams can fetch data from 1?

I'm creating a system where users can chain together nodes, and data flows from one node to the other. At the end is a node which constantly pulls from the node before it and does something with that data (e.g. for audio data, plays it). If you're familiar with FL Studio's Patcher, that's essentially what I'm trying to do.
The trivial implementation, which is to just have an interface like this:
interface Node {
    byte[] getData();
}
where each node stores the one before it would work fine, except that I want to be able to have 2 nodes both requesting data from a single node.
The issue here is that the source node is pulled twice every "step" (the intended behavior would be for the same value to be sent to both nodes). If you had for example an audio signal in the source node, the intended behavior would be for 2 different effects to be done to it, then the signals combined, while what would actually happen is that the signal is chopped up and each effect gets a different part of the signal depending on which happens to be called first.
What is a good way to solve this?
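One common fix (the audio-synthesis answer further down describes the same trick with timer ticks) is to memoize each node's output per step, so the first consumer triggers the computation and the second consumer gets the cached chunk. A minimal C sketch, with all names invented:

#define CHUNK 512   /* bytes per pull, assumed */

typedef struct Node Node;
struct Node {
    Node *input;                        /* upstream node, NULL for a source */
    void (*compute)(Node *self, unsigned char *out); /* may pull its own input */
    unsigned long computed_step;        /* step at which the cache was filled */
    unsigned char cache[CHUNK];
};

/* The final sink increments this before each pull, so it never stays 0. */
static unsigned long current_step;

/* Every consumer calls this instead of compute() directly. */
const unsigned char *node_get_data(Node *n) {
    if (n->computed_step != current_step) {
        n->compute(n, n->cache);        /* evaluated at most once per step */
        n->computed_step = current_step;
    }
    return n->cache;                    /* the second consumer gets the same chunk */
}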

How to handle multiple simultaneous inputs to a FIFO?

My Verilog code generates DAC ramp signals (channel, value) for 8 channels. I am adding this functionality to a project that already has a DAC controller/communicator and associated FIFO. I would like to add the data I generate simultaneously for all 8 channels to this existing FIFO. I have easily done this for a single channel, but I am not sure of the best way to include all the channels.
The ramps are not very fast, and all the clocks are 50 MHz. So, I have many clock cycles (~150+) to work with. However, data could come from multiple channels in the same clock cycle.
Should I create 8 FIFOs (only big enough for a few instructions), 1 per channel? Or is there a more efficient way to do this?
If I lose an occasional data point, that wouldn't be a big problem.
Not sure if this is a simple CDC FIFO or something else, but this can be done in many ways. If you have plenty of time before the arrival of the next data packet, you can:
(1) In the push domain, implement a simple round-robin algorithm that pushes data from each channel sequentially. You might want to add a 3-bit side-band signal for channel identification so that the pop domain can distribute the data further based on its actual origin.
(2) Aggregate all the data before pushing it to the FIFO and make the FIFO data port width equal to the combined width of the 8 input channels. In the pop domain you can de-aggregate it again (if needed); the positioning is deterministic, so this operation is very straightforward (see the sketch below).
If you need some sort of more sophisticated data-flow management, you might be forced to use 8 separate FIFOs.
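For what it's worth, here is a small behavioral C model of option (2), just to show that the pack/unpack positions are fixed. The 16-bit value width and the 128-bit word are assumptions, and the real thing would of course be RTL:

#include <stdint.h>

/* 8 x 16 bits = 128 bits; __int128 is a GCC/Clang extension, standing in
 * for what would simply be a 128-bit wide FIFO data port in hardware. */
typedef unsigned __int128 fifo_word_t;

fifo_word_t pack(const uint16_t ch[8]) {
    fifo_word_t w = 0;
    for (int i = 0; i < 8; i++)
        w |= (fifo_word_t)ch[i] << (16 * i);   /* channel i at a fixed offset */
    return w;
}

void unpack(fifo_word_t w, uint16_t ch[8]) {
    for (int i = 0; i < 8; i++)
        ch[i] = (uint16_t)(w >> (16 * i));     /* deterministic positions */
}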

Has anybody some advice on programming realtime audio synthesis?

I'm currently working on a personal project: creating a library for realtime audio synthesis in Flash. In short: tools to connect wave generators, filters, mixers, etc. with each other and supply the soundcard with raw (realtime) data. Something like Max/MSP or Reaktor.
I already have some working stuff, but I'm wondering if the basic setup that I wrote is right. I don't want to run into problems later on that force me to change the core of my app (although that can always happen).
Basically, what I do now is start at the end of the chain, at the place where the (raw) sound data goes 'out' (to the soundcard). To do that, I need to write chunks of bytes (ByteArrays) to an object, and to get such a chunk I ask whatever module is connected to my 'Sound Out' module to give me its chunk. That module makes the same request to the module connected to its input, and that keeps happening until the start of the chain is reached.
Is this the right approach? I can imagine running into problems if there's a feedback loop, or if there's another module with no output: if I were to connect a spectrum analyzer somewhere, that would be a dead end in the chain (a module with no outputs, just an input). In my current setup, such a module wouldn't work because I only start calculating from the sound-output module.
Has anyone experience with programming something like this? I'd be very interested in some thoughts about the right approach. (For clarity: I'm not looking for specific Flash implementations, which is why I didn't tag this question under flash or actionscript.)
I did a similar thing a while back, and I used the same approach as you do - start at the virtual line out and trace the signal back to the top. I did this per sample though, not per buffer; if I were to write the same application today, I might choose per buffer instead, because I suspect it would perform better.
The spectrometer was designed as an insert module, that is, it would only work if both its input and its output were connected, and it would pass its input to the output unchanged.
To handle feedback, I had a special helper module that introduced a 1-sample delay and would only fetch its input once per cycle.
Also, I think doing all your internal processing with floats, and thus arrays of floats as the buffers, would be a lot easier than byte arrays, and it would save you the extra effort of converting between integers and floats all the time.
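To make the per-buffer pull model concrete, here is a minimal C sketch (the module names, the block size, and the in-place convention are all invented for illustration):

#define BLOCK 512   /* samples per buffer, assumed */

typedef struct Module Module;
struct Module {
    Module *input;                              /* upstream module, NULL for a source */
    void (*process)(Module *self, float *buf);  /* fills/transforms BLOCK samples */
};

/* Pull one block out of the chain, starting from the sink's input. */
void pull_block(Module *m, float *buf) {
    if (m->input)
        pull_block(m->input, buf);   /* fetch upstream data first */
    m->process(m, buf);              /* then transform it in place */
}

The sink just calls pull_block once per output buffer on whatever module is connected to it; sources ignore the incoming buffer contents and overwrite them.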
In later versions you may have different packet rates in different parts of your network. One example would be if you extend it to transfer data to or from disk. Another example would be that low-data-rate control variables, such as one controlling echo delay, may later become part of your network. You probably don't want to process control variables with the same frequency that you process audio packets, but they are still 'real time' and part of the function network. They may, for example, need smoothing to avoid sudden transitions.
As long as you are calling all your functions at the same rate, and all the functions are essentially taking constant time, your pull-the-data approach will work fine. There will be little to choose between pulling data and pushing. Pulling is somewhat more natural for playing audio, pushing is somewhat more natural for recording, but either works and ends up making the same calls to the underlying audio processing functions.
For the spectrometer you've got the issue of multiple sinks for data, but it is not a problem. Introduce a dummy link to it from the real sink. The dummy link can cause a request for data that is not honoured. As long as the dummy link knows it is a dummy and does not care about the lack of data, everything will be OK. This is a standard technique for reducing multiple sinks or sources to a single one.
With this kind of network you do not want to do the same calculation twice in one complete update. For example if you mix a high-passed and low-passed version of a signal you do not want to evaluate the original signal twice. You must do something like record a timer tick value with each buffer, and stop propagation of pulls when you see the current tick value is already present. This same mechanism will also protect you against feedback loops in evaluation.
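A hedged sketch of that tick mechanism, building on the same kind of pull model as above (names invented):

#define BLOCK 512   /* samples per buffer, as in the earlier sketch */

typedef struct TModule TModule;
struct TModule {
    TModule *input;
    void (*process)(TModule *self, const float *in, float *out); /* in is NULL for sources */
    unsigned long tick;       /* tick at which out[] was last computed */
    float out[BLOCK];
};

static unsigned long current_tick;   /* incremented once per complete update */

const float *pull(TModule *m) {
    if (m->tick == current_tick)
        return m->out;                /* already computed this tick: shared
                                         branches reuse the same buffer */
    m->tick = current_tick;           /* set BEFORE pulling upstream: if a
                                         feedback loop comes back to this
                                         module, it gets last tick's data,
                                         i.e. an implicit one-block delay */
    const float *in = m->input ? pull(m->input) : 0;
    m->process(m, in, m->out);
    return m->out;
}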
So, those two issues of concern to you are easily addressed within your current framework.
Rate matching where there are different packet rates in different parts of the network is where the problems with the current approach will start. If you are writing audio to disk then for efficiency you'll want to write large chunks infrequently. You don't want to be blocking your servicing of the more frequent small audio input and output processing packets during those writes. A single rate pulling or pushing strategy on its own won't be enough.
Just accept that at some point you may need a more sophisticated way of updating than a single rate network. When that happens you'll need threads for the different rates that are running, or you'll write your own simple scheduler, possibly as simple as calling less frequently evaluated functions one time in n, to make the rates match. You don't need to plan ahead for this. Your audio functions are almost certainly already delegating responsibility for ensuring their input buffers are ready to other functions, and it will only be those other functions that need to change, not the audio functions themselves.
The one thing I would advise at this stage is to be careful to centralise audio buffer allocation, noticing that buffers are like fenceposts. They don't belong to an audio function; they lie between the audio functions. Centralising the buffer allocation will make it easy to retrospectively modify the update strategy for different rates in different parts of the network.

drop/rewrite/generate keyboard events under Linux

I would like to hook into, intercept, and generate keyboard (make/break) events under Linux before they get delivered to any application. More precisely, I want to detect patterns in the key event stream and be able to discard/insert events into the stream depending on the detected patterns.
I've seen some related questions on SO, but:
either they only deal with how to get at the key events (key loggers etc.), and not how to manipulate the propagation of them (they only listen, but don't intercept/generate).
or they use passive/active grabs in X (read more on that below).
A Small DSL
I explain the problem below, but to make it a bit more compact and understandable, first a small DSL definition.
A_: for make (press) key A
A^: for break (release) key A
A^->[C_,C^,U_,U^]: on A^ send a make/break combo for C and then U further down the processing chain (and finally to the application). If there is no -> then there's nothing sent (but internal state might be modified to detect subsequent events).
$X: execute an arbitrary action. This can be sending some configurable key event sequence (maybe something like C-x C-s for emacs), or execute a function. If I can only send key events, that would be enough, as I can then further process these in a window manager depending on which application is active.
Problem Description
Ok, so with this notation, here are the patterns I want to detect and what events I want to pass on down the processing chain.
A_, A^->[A_,A^]: for an explanation see above; note that the send happens on A^.
A_, B_, A^->[A_,A^], B^->[B_,B^]: basically the same as 1., but overlapping events don't change the processing flow.
A_, B_, B^->[$X], A^: if there was a complete make/break of a key (B) while another key was held (A), X is executed (see above), and the break of A is discarded.
(It's in principle a simple state machine implemented over key events, which can generate (multiple) key events as output; a rough sketch follows below.)
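To illustrate, here is a rough C sketch of that state machine for patterns 1-3. emit() and run_action() are hypothetical helpers standing for "send further down the chain" and $X; it tracks only one overlapping key, assumes a keycode for A, and ignores timeouts and keycode/keysym mapping:

enum { MAKE, BREAK };

void emit(int key, int type);   /* provided elsewhere (hypothetical) */
void run_action(void);          /* $X (hypothetical) */

#define KEY_A 38                /* assumed keycode for A */

static int a_held = 0;          /* between A_ and A^ */
static int action_fired = 0;    /* a full B_,B^ happened while A was held */
static int pending = -1;        /* key whose make was swallowed while A held */

void on_key_event(int key, int type) {
    if (key == KEY_A) {
        if (type == MAKE) {                     /* A_: swallow, remember */
            a_held = 1; action_fired = 0;
        } else {                                /* A^ */
            if (!action_fired) {                /* patterns 1 and 2 */
                emit(KEY_A, MAKE); emit(KEY_A, BREAK);
            }                                   /* pattern 3: discard A^ */
            a_held = 0;
        }
    } else if (type == MAKE) {
        if (a_held) pending = key;              /* swallow B_ until we know more */
        else emit(key, MAKE);
    } else {                                    /* break of some other key */
        if (key == pending && a_held) {         /* pattern 3: B completed under A */
            run_action(); action_fired = 1; pending = -1;
        } else if (key == pending) {            /* pattern 2: A^ came first */
            emit(key, MAKE); emit(key, BREAK); pending = -1;
        } else {
            emit(key, BREAK);
        }
    }
}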
Additional Notes
The solution has to work at typing speed.
Consumers of the modified key event stream run under X on Linux (consoles, browsers, editors, etc.).
Only keyboard events influence the processing (no mouse etc.)
Matching can happen on keysyms (a bit easier), or keycodes (a bit harder). With the latter, I will just have to read in the mapping to translate from code to keysym.
If possible, I'd prefer a solution that works with both USB keyboards as well as inside a virtual machine (could be a problem if working at the driver layer, other layers should be ok).
I'm pretty open about the implementation language.
Possible Solutions and Questions
So the basic question is how to implement this.
I have implemented a solution in a window manager using passive grabs (XGrabKey) and XSendEvent. Unfortunately, passive grabs don't work in this case, as they don't correctly capture B^ in the second pattern above. The reason is that the converted grab ends on A^ and is not continued to B^. A new grab is converted to capture B if it is still held, but only after ~1 sec; otherwise a plain B^ is sent to the application. This can be verified with xev.
I could convert my implementation to use an active grab (XGrabKeyboard), but I'm not sure about the effect on other applications if the window manager holds an active grab on the keyboard all the time. The X documentation refers to active grabs as intrusive and designed for short-term use. If someone has experience with this and there are no major drawbacks to long-term active grabs, then I'd consider this a solution.
I'm willing to look at other layers of key event processing besides window managers (which operate as X clients). Keyboard drivers or mappings are a possibility as long as I can solve the above problem with them. This also implies that the solution doesn't have to be a separate application. I'm perfectly fine with having a driver or kernel module do this for me. Be aware, though, that I have never done any kernel or driver programming, so I would appreciate some good resources.
Thanks for any pointers!
Use XInput2 to make the device (the keyboard) floating, then monitor KeyPress and KeyRelease events on that device, and use XTest to regenerate the KeyPress and KeyRelease events.
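A rough C sketch of that approach (compile with -lX11 -lXi -lXtst). The device id, the lack of error handling, and the placement of the pattern matcher are all simplifications:

#include <X11/Xlib.h>
#include <X11/extensions/XInput2.h>
#include <X11/extensions/XTest.h>

int main(void) {
    Display *dpy = XOpenDisplay(NULL);
    int xi_opcode, ev_base, err_base;
    XQueryExtension(dpy, "XInputExtension", &xi_opcode, &ev_base, &err_base);

    int device_id = 11;   /* assumed: look it up with XIQueryDevice / xinput list */

    /* 1. Float the slave keyboard so its events stop reaching clients. */
    XIDetachSlaveInfo detach = { XIDetachSlave, device_id };
    XIChangeHierarchy(dpy, (XIAnyHierarchyChangeInfo *)&detach, 1);

    /* 2. Select key events from the now-floating device on the root window. */
    unsigned char bits[XIMaskLen(XI_LASTEVENT)] = { 0 };
    XIEventMask mask = { device_id, sizeof(bits), bits };
    XISetMask(bits, XI_KeyPress);
    XISetMask(bits, XI_KeyRelease);
    XISelectEvents(dpy, DefaultRootWindow(dpy), &mask, 1);
    XFlush(dpy);

    /* 3. Event loop: feed keycodes to the pattern matcher, and resynthesize
     *    with XTest whatever should pass through. Fake events come from the
     *    XTest virtual device, not the floated one, so there is no loop. */
    for (;;) {
        XEvent ev;
        XNextEvent(dpy, &ev);
        if (ev.xcookie.type == GenericEvent &&
            ev.xcookie.extension == xi_opcode &&
            XGetEventData(dpy, &ev.xcookie)) {
            XIDeviceEvent *k = (XIDeviceEvent *)ev.xcookie.data;
            int press = (ev.xcookie.evtype == XI_KeyPress);
            /* ...run the state machine on (k->detail, press) here; when it
               decides an event should go through: */
            XTestFakeKeyEvent(dpy, k->detail, press, 0);
            XFlush(dpy);
            XFreeEventData(dpy, &ev.xcookie);
        }
    }
}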

Threads for peer to peer communication

I am trying to make a multi-player network game. Each player is represented by a rectangle on the screen. I am using OpenGL for the graphics, and the user input (commands like MOVE-LEFT, MOVE-RIGHT, etc.) will also be handled by it (or GLUT or something).
I have the following architecture for the game.
There are 4 players (nodes) in the game. Each player sends and receives data using UDP. Each player can send data to any other player.
Data is required to be sent by a player if there is any input from the corresponding user (for example, a MOVE-LEFT command).
Whenever a player (say p1) receives any data from any other player (say p2), such as the new position of player p2 on the screen, player p1's screen should be updated immediately.
I am thinking on the following lines :
Create one thread for handling graphics.
Create 2 more threads, 1 each for receiving and sending data using UDP.
Whenever the graphics thread gets input for 'myposition' from the user, it updates the shared global variable 'myposition'. The network-send thread, which is waiting on this variable, gets activated and tells every other player about its new position.
Similarly whenever 'position' updates are received from any other player 'i', the network-receive thread updates the global variable player[i].position. The graphics thread will now redraw the scene with the updated positions.
Is this design correct? If yes, how good is this design, and how can I improve it?
Network Game Programming is a monster of a topic, so it is hard to say "yes, this is how you design a network architecture". It is completely dependent on your game requirements. What sort of volume of packets do you plan on sending? Will there be a packet sent every frame that indicates Player A is holding the left key? Is this traffic confined to a LAN?
For something as simple as synchronizing movement amongst 4 clients, putting send, receive, and rendering in separate threads seems like overkill. Perhaps as a starting point you should begin with a simpler design and make the move to multithreading when you feel that your packets are not going out fast enough.
For example, you may wish to have a game loop like the following (all within the same thread):
while (running):
    readUdpSocketForIncomingPackets();
    updateGameObjects();
    renderGameObjects();
    sendPacketsToPeers();
At the start of each frame, you read your UDP socket for incoming packets and update positions (and whatever else you are sending to your peers), then draw. As game input is processed, packets are created and accumulated in a packet queue. Doing it this way allows you to perform optimizations, such as cramming/merging multiple messages into one packet, removing duplicate messages (e.g. only sending the latest position update), etc. So at the end of each game loop, the final queue of packets is processed and sent to the peers.
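As an illustration of the "only send the latest position update" point, the outgoing queue can simply be keyed by player, so a newer position overwrites an older one instead of being appended. A hedged C sketch, with invented names:

#define MAX_PLAYERS 4

typedef struct { float x, y; int valid; } PositionMsg;

static PositionMsg outgoing[MAX_PLAYERS];   /* one slot per player */

void queue_position(int player, float x, float y) {
    outgoing[player].x = x;       /* overwrite: only the latest survives */
    outgoing[player].y = y;
    outgoing[player].valid = 1;
}

void send_packets_to_peers(void) {
    for (int i = 0; i < MAX_PLAYERS; i++)
        if (outgoing[i].valid) {
            /* a udp_send(...) call would serialize and send the slot here */
            outgoing[i].valid = 0;
        }
}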
But again, this is a big topic and I'm glossing over a lot of details.
Take a look at gaffer's blog:
http://gafferongames.com/networking-for-game-programmers/what-every-programmer-needs-to-know-about-game-networking/
He's got some great articles that address some fundamentals of network game programming.
