Why is my Clojure project slow on Raspberry Pi? - audio

I've been writing a simple Clojure framework for playing music (and later some other stuff) for my Raspberry Pi. The program parses a given music directory for songs and then starts listening for control commands (such as start, stop, next song) via a TCP interface.
The code is available via GitHub:
https://github.com/jvnn/raspi-framework
The current version works just fine on my laptop, it starts playing music (using the JLayer library) when instructed to, changes songs, and stops just as it should. The uberjar takes a few seconds to start on the laptop as well, but when I try to run it on the Raspberry Pi, things get insanely slow.
Just starting up the program, so that all classes are loaded and the actual program code begins executing, takes well over a minute. I tried running it with the -verbose:class switch, and it seems the JVM spends the whole time just loading tons of classes (for Clojure and everything else).
When the program finally starts, it does react to the commands given, but the playback is very laggy. There is a short sub-second sound, then a pause for almost a second, then another sound, another pause etc... So the program is trying to play something but it just can't do it fast enough. CPU usage is somewhere close to 98%.
Now, having an Android phone and all, I'm sure Java can be executed on such hardware well enough to play some mp3 files without any troubles. And I know that JLayer (or parts of it) is used in the gdx game development framework (that also runs on Android) so it shouldn't be a problem either.
So everything points to me being the problem. Is there something I can do, either with Leiningen (AOT compilation is already enabled for all files), the Raspberry Pi, or my code, that could make things faster?
Thanks for your time!
UPDATE:
I made a tiny test case to rule out some possibilities, and the problem still exists with the following Clojure code:
(ns test.core
  (:import [javazoom.jl.player.advanced AdvancedPlayer])
  (:gen-class))

(defn -main
  []
  (let [filename "/path/to/a/music/file.mp3"
        fis (java.io.FileInputStream. filename)
        bis (java.io.BufferedInputStream. fis)
        player (AdvancedPlayer. bis)]
    (doto player (.play) (.close))))
The project.clj:
(defproject test "0.0.1-SNAPSHOT"
  :description "FIXME: write description"
  :dependencies [[org.clojure/clojure "1.5.1"]
                 [javazoom/jlayer "1.0.1"]]
  :javac-options ["-target" "1.6" "-source" "1.6" "-Xlint:-options"]
  :aot :all
  :main test.core)
So, no core.async and no threading. The playback did get a bit smoother, but it's still roughly 200 ms of music followed by a 200 ms pause.

Most obvious to me is that you have a lot of un-hinted interop code, leading to very expensive runtime reflection. Try running lein check (it's built into Leiningen 2) and fixing the reflection warnings it reports.
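For example, here is a minimal sketch of the idea (the function names are made up for illustration, and the commented project.clj line uses Leiningen's :global-vars option):

;; In project.clj, make the compiler report every reflective interop call:
;;   :global-vars {*warn-on-reflection* true}

;; Un-hinted interop: the method is looked up via reflection on every call.
(defn available-slow [stream]
  (.available stream))

;; Type-hinted: compiles to a direct call on java.io.InputStream, no reflection.
(defn available-fast [^java.io.InputStream stream]
  (.available stream))

Each warning names the file and line where a hint is missing, so you can fix the hot interop paths first.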

Related

Reading USB-MIDI input live in Python

I am looking for a way to read USB-MIDI input live and have triggers that run when a certain note is played. For example, it should run function x when an "e" is played. This is Python 3 based, on either a Windows 10 machine or a Raspberry Pi.
All the information I have found is years to decades old, covering pygame, py-midi, and pyportmidi. Is there any current library that supports this? Pygame seems to rely on polling, which causes a short delay and is a problem for this scenario.
In MIDI-OX, the Monitor displays the notes being played in real time, but I can't do anything useful with it from there, as I need the triggers or events in Python.

LIRC and audio bugging each other out on Raspbian

I'm having a problem with LIRC breaking audio system-wide after firing a command.
For example, I'd do:
irsend send_once Samsung_BN59-01224C KEY_VOLUMEUP --count=5
and afterwards play an audio file, and the program playing that file would seize up and not produce any sound. The same goes for a script I've written that uses the pygame library for Python.
What's worse is that LIRC also stops firing correctly after this bug occurs. I can see infrared light being shot out of the diode, but there might be something off with the timing.
This happens both ways: after playing an audio file, LIRC will stop working, but further audio playback is still possible.
The following happens extremely rarely: sometimes I'm able to play audio after LIRC finishes a command, and the result is a heavily pitched-down version of the original sound that cuts out after around a second or so.
Tested with different remotes; the same results occur. I'm not sure whether the fix a user proposed in this thread (https://github.com/raspberrypi/linux/issues/2993) could cause this, but I'm noting that I used it, since unmodified LIRC has problems with both the receiver and transmitter turned on in /boot/config.txt. The rest of my installation is standard.
Fixed this by reverting the fix I mentioned in the previous paragraph. Apparently using PWM for infrared causes issues with onboard audio on Raspbian. I commented out the lines responsible for the receiver and left the gpio-ir-tx option uncommented, roughly as sketched below. It works fine with just the transmitter on.
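For reference, the relevant part of /boot/config.txt ends up looking roughly like this (the GPIO pin numbers are placeholders for whatever your wiring uses):

# receiver disabled -- enabling it alongside the transmitter broke onboard audio here
#dtoverlay=gpio-ir,gpio_pin=17
# transmitter left enabled
dtoverlay=gpio-ir-tx,gpio_pin=18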

How to cache files with Perl while playing sound files using vlc?

I would like to manually cache files in Perl, so when playing a sound there is little to no delay.
I wrote a program in Perl, which plays an audio file by doing a system call to VLC. When executing it, I noticed a delay before the audio started playing. The delay is usually between about 1.0 and 1.5 seconds. However, when I create a loop which does the same VLC call multiple times in a row, the delay is only about 0.2 - 0.3 seconds. I assume this is because the sound file was cached by Linux. I found Cache::Cache on CPAN, but I don't understand how it works. I'm interested in a solution without using a module. If that's not possible, I'd like to know how to use Cache::Cache properly.
(I know that calling VLC via a system call is a bad idea as far as execution speed goes.)
use Time::HiRes;
use warnings;
use strict;
while (1) {
    my $start = Time::HiRes::time();
    system('vlc -Irc ./media/audio/noise.wav vlc://quit');
    my $end = Time::HiRes::time();
    my $duration = $end - $start;
    print "duration = $duration\n";
    <STDIN>;
}
It's not as easy as just "caching" a file in Perl.
VLC, or whatever program you use, needs to interpret the content of the data (in your case the .wav file).
Either you stick with calling an external program and just hand it a file to play, or you need to implement the whole stack in Perl (probably with Perl XS modules). By the whole stack I mean:
1. Keeping the data (your .wav file) in memory (inside the Perl runtime).
2. Interpreting the data inside Perl.
The second part is where it gets tricky: you would probably need to write a lot of code and/or use third-party modules to get where you want.
So if you just want to make it work fast, stick with system calls. You could also look into Nama, which might give you what you need.
From your question it looks like you are mostly interested in getting the runtime of a .wav file. If it's just about getting information about the file and not about playing the sound, then maybe Audio::Wav is the module for you.
Caching inside Perl does not help you here.
Prime the Linux file cache by reading the file once, for example at program initialisation time (a sketch follows below). It might happen that, by the time you want to play the file, it has already become stale and been evicted from the cache, so if you want to guarantee low access times, put the files on a RAM disk instead.
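A minimal sketch of that priming idea in Perl (prime_cache is a made-up helper name; the path is the test file from the question):

use strict;
use warnings;

# Read the whole file once and discard the data; the kernel keeps the
# pages in its file cache, so the next access (e.g. by VLC) is fast.
sub prime_cache {
    my ($path) = @_;
    open my $fh, '<:raw', $path or die "cannot open $path: $!";
    my $buf;
    1 while read($fh, $buf, 1 << 20);    # read in 1 MiB chunks
    close $fh;
}

prime_cache('./media/audio/noise.wav');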
Alternatively, find and use a different media player with a smaller footprint that does not load dozens of libraries, in order to reduce start-up time. Try paplay from the pulseaudio-utils package, or gst123, or mpg123 with mpg123-pulse.

Sound only plays once in tcl/tk (snack, pulseaudio, Linux)

I'm using the snack package on Linux Mint 13 (Maya) with Tk 8.5 (wish).
My audio output is analog stereo through PulseAudio.
According to this tutorial: http://www.speech.kth.se/snack/tutorial.html
all I have to do to play a sound again is to use the play command again.
I have a sound object and it only plays once no matter how many times I call the play command.
I tried putting a stop command before play, like this:
mysound stop
mysound play
What happens: it plays on the first call but not on the second, plays on the third but not on the fourth, and so on. This was asynchronous, meaning I pushed buttons to repeat the stop-play sequence. Now, this script:
package require snack
snack::sound s
s read knock.wav
after 1000 {s play}; #play sound effect
after 5000 {s play}; #this one doesn't work
after 10000 {s play}; #this one doesn't work
after 15000 {s stop; s play}; #played
after 20000 {s stop; s play}; #not played
after 25000 {s stop; s play}; #played
This is the same behavior I had using button-release events. On Android, the behavior works exactly as the theory says, except that there are huge delays depending on the device (e.g. the sound comes after 2 seconds on one phone and after 200 ms on another with better hardware).
I know the theory is right, and my final question is: how can I improve this so that the Linux implementation plays sounds more robustly? Maybe using MIDI sounds. A solution that works on any UNIX machine would be ideal. Does snack provide that?
Thank you so much, for this is very important to me, and I believe to others as well!
Unfortunately you don't tell us what your system is (which Linux are you using, and what are your audio system and device) or what you are really doing, so please provide a minimal working example.
Here is mine, which works interactively (I routinely use Tkcon for this).
package require sound
snack::sound s1 -load longrunning-audio.wav
s1 play
# wait
s1 stop
s1 play
s1 pause
s1 pause; # resumes playing
I use the sound package instead of snack, because we don't need the graphical elements here.
And as a script:
#!/usr/bin/env tclsh
package require sound
snack::sound s1 -load longrunning-audio.wav
puts "play"; s1 play
after 4000 puts "stop" \; s1 stop
after 6000 puts "play" \; s1 play
after 10000 puts "pause" \; s1 pause
after 12000 puts "resume" \; s1 pause;
after 16000 {apply {{} {
    puts "stop and exit"
    s1 stop
    exit
}}}
# start the event loop
vwait forever
after schedules a command to run after the given time in milliseconds. In a real program you would use some procedures for this; here it is only to simulate some interaction.
Maybe you are suffering from a badly packaged snack, or your audio system is playing tricks on you. I remember a similar problem with some version of PulseAudio in combination with one of the output devices. Only the start of the sound was played, but the audio system stayed active (use snack::audio active to show the current state).
Wow, you've got a lot of callback scripts scheduled at the same time! The after calls themselves take very little time, which means you've got several calls to s play happening at virtually the same time. Are you aware that only the last such call will have any effect? It's effectively as if you did this (in the "1000 ms from now" timeslot):
s play
s play
s stop; s play
s stop; s play
s stop; s play
Yes, they are several callbacks, but they all actually get evaluated one after another; you won't notice any time between them.
I believe that the sound object, s, can only ever be playing or not playing. Also, sounds don't play instantly (obviously), so you've got to allow time between starting playback and stopping it if you want the sound to be heard. Doing what you're doing results in lots of activity at the command-processing level, but not much of it will be observable.

Low-latency sounds on key presses

I am trying to write an application (I'm a GUI first-timer) for my son, who has autism. There is a video player in the top half and a text entry area in the bottom. When letters are typed, sounds are produced to mimic the words in the video.
There have been other posts on this site about playing sounds on key presses using GStreamer via a system call. I have also tried libcanberra, but both seem to have significant delays between sounds. I can write the app in Python or C, but will likely do at least some of it in C.
I also want to mention that the video portion is played by GStreamer. I tried to create two GStreamer instances to avoid expensive system calls, but the audio instance seemed to kill the app when called.
If anyone has any tips on getting faster-responding sounds, I would really appreciate it.
You can upload a raw audio sample directly to PulseAudio, so there is no decoding and (perhaps) some extra context switches are saved, by using the following function from libcanberra:
http://developer.gnome.org/libcanberra/unstable/libcanberra-canberra.html#ca-context-cache
The next ca_context_play() will use it.
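A minimal sketch in C of that cache-then-play pattern (the event id "key-click" and the sample path are placeholders, and error handling is reduced to a single check):

/* Build, assuming pkg-config knows libcanberra:
 *   gcc sound.c -o sound $(pkg-config --cflags --libs libcanberra) */
#include <canberra.h>
#include <stdio.h>
#include <unistd.h>

int main(void) {
    ca_context *ctx = NULL;
    if (ca_context_create(&ctx) < 0) {
        fprintf(stderr, "could not create a canberra context\n");
        return 1;
    }

    /* Upload the decoded sample to the sound server under a name... */
    ca_context_cache(ctx,
                     CA_PROP_EVENT_ID, "key-click",
                     CA_PROP_MEDIA_FILENAME, "/usr/share/sounds/freedesktop/stereo/bell.oga",
                     NULL);

    /* ...so that later plays only reference the cached sample. */
    ca_context_play(ctx, 0, CA_PROP_EVENT_ID, "key-click", NULL);

    sleep(1);                 /* playback is asynchronous; give it a moment */
    ca_context_destroy(ctx);
    return 0;
}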
However, the biggest problem you'll encounter with this scenario (with simultaneous video playback) is that the audio device might be configured with large latency in PulseAudio (up to half a second or more for normal playback). It may be reasonable to file a bug against libcanberra to support a LOW_LATENCY flag, as it currently doesn't attempt to minimize the delay for sound events, as far as I know. That would be great to have.
GStreamer's pulsesink could probably achieve low latency too (it has some properties for that), but I am afraid it won't be as lightweight as libcanberra, and you won't be able to cache a sample, for instance. Ideally, GStreamer could also learn to cache samples, or pre-fill PulseAudio...
