Java ME AudioSamples do not work on Ubuntu - audio

I'm using Ubuntu 11.04 and trying to import the Sun Wireless Toolkit 2.5.2 audio sample project into NetBeans 6.9. On Windows this works, but on Ubuntu, when running the sample MIDlet code, it does not make any sound, not even a simple tone.
After player.start() nothing happens: player.getMediaTime() always returns 0, and a final attempt to stop with player.stop() never returns at all.
I've searched all over the place. This post seems to report an answer for a potentially similar problem when using FreeTTS:
FreeTTS no audio linux ubuntu - no errors
The suggested workaround involves modifying/replacing the player class in the com.sun.speech.freetts.audio package. I wonder how to find the corresponding class that causes the lock in my case.
It would probably be easier to drop Ubuntu and use a Windows VM for development, but I'd at least like to try. Maybe there is a simple solution.
Update: any other type of player plays well on my Ubuntu box. I also tried different non-ME code, such as JLayer- or JOrbis-based players, and even the Java Sound API. All of them work. It's only the Java ME-style Player that breaks down.

Related

Reading USB-midi input live in python

I am looking for a way to read USB MIDI input live and have triggers that run when a certain note is played. For example, it should run function x when an E is played. This is Python 3 based, either on a Windows 10 machine or a Raspberry Pi.
All the information I found is years to a decade old, covering pygame, py-midi, and pyportmidi. Is there any current library that supports this? Pygame seems to rely on polling, causing a short delay, which is a problem for this scenario.
In MIDI-OX, the Monitor displays the notes being played in real time, though I can't do anything useful with it from there, as I need the Python triggers, or events.
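One current option is the mido library, whose python-rtmidi backend can deliver messages through a callback instead of polling. A minimal sketch; the port name below is an assumption (list real names with mido.get_input_names()):

import mido

def on_message(msg):
    # note_on with velocity 0 is effectively note_off on many devices
    if msg.type == 'note_on' and msg.velocity > 0 and msg.note % 12 == 4:
        print("E played -> run function x")  # matches any E, in any octave

# 'USB MIDI Interface 0' is an assumed device name; adjust for your setup
port = mido.open_input('USB MIDI Interface 0', callback=on_message)
input("Listening for MIDI... press Enter to quit\n")
port.close()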

LIRC and audio bugging each other out on Raspbian

I'm having a problem with LIRC breaking audio OS-wide after firing a command.
For example, I'd do:
irsend send_once Samsung_BN59-01224C KEY_VOLUMEUP --count=5
and afterwards, when I play an audio file, the program governing that file seizes up and doesn't play any sound. The same goes for a script I've written that uses the pygame library for Python.
What's worse is that LIRC also stops firing correctly after this bug occurs. I can see infrared light being shot out of the diode, but there might be something off with the timing.
This happens both ways, so after playing an audio file LIRC will stop working, but further audio playback is possible.
The following happens extremely rarely: sometimes I'm able to play audio after LIRC finishes a command, but the result is a heavily pitched-down version of the original sound that cuts out after around a second or so.
I've tested with different remotes, and the same results occur. I'm not sure if the fix that a user proposed in this thread (https://github.com/raspberrypi/linux/issues/2993) could cause this, but I'm putting it out there that I used it, since unmodified LIRC has problems with both the receiver and transmitter turned on in /boot/config.txt. The rest of my installation is standard.
Fixed this by reverting the fix I mentioned in the last paragraph. Apparently using PWM for infrared causes issues with onboard audio on Raspbian. I commented out the lines responsible for the receiver and left the gpio-ir-tx option uncommented. It works fine with just the transmitter on.
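For reference, a sketch of the resulting /boot/config.txt overlay section; the GPIO pin numbers here are illustrative assumptions:

# IR receiver overlay disabled
#dtoverlay=gpio-ir,gpio_pin=18
# IR transmitter kept on the bit-banged (non-PWM) overlay
dtoverlay=gpio-ir-tx,gpio_pin=17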

OpenCV VideoCapture / V4L2 latency when grabbing a new webcam image

For a computer vision project that I am working on I need to grab images using a Logitech C920 webcam. I am using OpenCV's VideoCapture to do that, but the problem I am facing is that the image I take at a certain moment does not show the latest thing the camera sees. That is, if I take an image at timestamp t, it shows what the camera saw at timestamp (t - delta), so to speak.
I measured this by writing a program that increments a counter and shows it on the screen. I pointed the camera at the screen and let it record. When the counter reached a certain value, say 10000, the program would grab an image and save it with the counter value as the filename (e.g. 10000.png). That way I was able to compare the current value of the counter with the value the camera saw. I noticed that most of the time the delay is about 4-5 frames, but it is not a fixed value.
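A hedged reconstruction of that measurement; window size, font, and camera index are illustrative:

import cv2
import numpy as np

# Sketch: display an incrementing counter; when it reaches the target,
# grab a frame and save it under the target's name so the on-screen and
# captured counter values can be compared.
cap = cv2.VideoCapture(0)
target = 10000

for counter in range(target + 1):
    canvas = np.full((200, 600, 3), 255, np.uint8)
    cv2.putText(canvas, str(counter), (20, 120),
                cv2.FONT_HERSHEY_SIMPLEX, 2, (0, 0, 0), 3)
    cv2.imshow("counter", canvas)
    cv2.waitKey(1)  # let the window refresh

ok, frame = cap.read()
if ok:
    cv2.imwrite(f"{target}.png", frame)
cap.release()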
I saw similar posts about this issue, but none of them really helped. Some people recommended putting the frame-grabbing routine into a separate thread that keeps updating a "current_frame" Mat variable. I tried that, but for some reason the issue is still present. Someone else mentioned that the camera worked well on Windows (but I need to use Linux). I tried running the same code on Windows and indeed the delay was only about 1 frame (which might just as well be because the screen did not update fast enough for the camera to see the counter change).
I then decided to run a simple webcam viewer based only on V4L2 code, thinking that the issue might be coming from OpenCV. I again experienced the same delay, which makes me believe that the driver is using some sort of buffer to cache the images.
I am new to V4L2 and I really need to solve this problem as soon as possible, so my questions to you guys are:
Has anyone found a solution for getting the latest image using V4L2 (and maybe OpenCV)?
If there is no way to solve it using V4L2, does anyone know another alternative to fixing this issue on Linux?
Regards,
Mihai
It looks like there will always be a delay between the VideoCapture::grab() call and the moment the frame is actually captured. This is because of frame buffering done at the hardware/OS level, and you cannot avoid it.
OpenCV provides the VideoCapture::get(CV_CAP_PROP_POS_MSEC) method to give you the exact time a frame was captured, but this is only possible if the camera supports it.
Recently a problem was discovered in the V4L OpenCV implementation:
http://answers.opencv.org/question/61099/is-it-possible-to-get-frame-timestamps-for-live-streaming-video-frames-on-linux/
And a few days ago a fix was merged:
https://github.com/Itseez/opencv/pull/3998
In the end, if you have the right setup, you can know the time a frame was taken (and therefore compensate).
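A rough sketch of that compensation, assuming the camera and backend actually fill in per-frame timestamps (using the modern cv2 constant name):

import time
import cv2

# Sketch: compare the backend's capture timestamp for each frame with
# the wall-clock time at which grab() returned; the difference
# approximates how stale the buffered frame is.
cap = cv2.VideoCapture(0)
start = time.monotonic()
for _ in range(100):
    if not cap.grab():  # grab without decoding
        continue
    wall_ms = (time.monotonic() - start) * 1000
    frame_ms = cap.get(cv2.CAP_PROP_POS_MSEC)  # backend capture timestamp
    print(f"grabbed at {wall_ms:.1f} ms, frame stamped {frame_ms:.1f} ms")
cap.release()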
It is possible the problem is with the Linux UVC driver, but I have been using Microsoft LifeCam Cinemas for machine vision on Ubuntu 12.04 and 14.04 machines, and have not seen a 4-5 frame delay. I operate them in low light conditions, though, in which case they reduce the frame rate to 7.5 fps.
One other possible culprit is a delay in the webcam depending on which format is used. The C920 appears to support H.264 (which few webcams do), so Logitech may have put most of its effort into making that work well, yet OpenCV appears not to support H.264 on Linux; see this answer for the formats it does support. The same question also has an answer with a kernel hack(!) to fix an issue with the UVC driver.
PS: to check the format actually used in my case, I added
fprintf(stderr, ">>> palette: %d\n", capture->palette);
at this line in the OpenCV code.
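If rebuilding OpenCV is not an option, newer builds let you query the negotiated format from Python instead (assuming the backend reports it):

import cv2

# Sketch: decode the negotiated FOURCC without patching OpenCV.
cap = cv2.VideoCapture(0)
fourcc = int(cap.get(cv2.CAP_PROP_FOURCC))
print("format:", fourcc.to_bytes(4, "little").decode("ascii", "replace"))
cap.release()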

Which version of linux has support for Dolby Advanced Audio v2?

I am using a Lenovo G500 laptop and my sound card supports Dolby Advanced Audio v2, which works nicely in Windows (i.e. Windows 7, 8 and 8.1). However, I have failed to enable the Dolby sound effect in Linux (I have tried Linux Mint 17 and Fedora 20).
Does anyone have an idea which Linux version has support for this feature, or how I can enable it in a Linux OS?
I would appreciate it if you could point me in the right direction.
Thanks.
I googled up some very good advice on forums that helped me achieve Dolby-like sound on my Kubuntu 19.04 with a Lenovo G780.
1. Install PulseEffects: https://github.com/wwmm/pulseeffects
(repos with deb files are here: https://github.com/wwmm/pulseeffects/wiki/Package-Repositories#debian--ubuntu)
Restart the user session or reboot after this, because PulseAudio will be upgraded, and it may cause problems if you don't restart.
2. Run PulseEffects and close it. It'll create all the settings dirs on first launch. They're required for the next step.
3. Install PulseEffects-Presets from here: https://github.com/JackHack96/PulseEffects-Presets
(I used the suggested script that automatically downloads them to the PulseEffects import dirs; it requires flatpak, which can be installed from the repos with sudo apt install flatpak)
4. Launch PulseEffects again. Select Convolver. Enable it. Click on the wave button. You'll see a list of presets. Enable:
Dolby ATMOS ((128K MP3)) 1.Default.irs
5. Close the dialog and that's it. You can toggle the Convolver in PulseEffects on and off while playing music to compare results. You may play with other presets as well.
For improving sound on a notebook or a tablet, the PulseEffects help pages come with a tutorial on how to achieve this.
The app can be minimized to the tray on GTK-enabled desktops with an additional application: https://github.com/boomshop/pulseffectstray
It's better to enable autostart in the app settings (it will copy its desktop file to ~/.config/autostart with the --gapplication-service command line option, so next time it starts without the GUI).
It is possible to get reasonably close to the Dolby Advanced Audio output on Linux.
TLDR:
Record the result of playing a -0dBFS impulse in Windows with all effects enabled. Save that as a wav file and use it as input to the PulseEffects Convolver.
Step-by-step:
1. Install Audacity in Windows.
2. Configure Audacity to use WASAPI.
3. Select the loopback device as input.
4. Select your laptop speakers as output, making sure all Dolby Advanced Audio effects are enabled.
5. Start recording.
6. Play an impulse audio file (you might need to do this twice; Audacity often doesn't pick up the first impulse).
7. Zoom in and select the area around the recorded impulse (see screenshot below).
8. Export the selection as a WAV file and change the extension to irs.
9. Import this irs file into the Convolver.
Some notes:
Audacity isn't required; presumably any software capable of recording from the output device will be fine.
To avoid any changes introduced by sample-rate conversion, set the sample rate of the output and input devices in Windows to be the same.
When recording, in Step 5, Audacity would not record unless audio was playing. This is probably due to using WASAPI. Just start recording, play the impulse, and if you don't see it in the recording output as a single spike, play it again.
The screenshot is quite heavily zoomed in so that you can see the area where there is data. When selecting what to export, try to make sure the selection is roughly centered on the central peak. It doesn't have to be perfect.
As a useful check to make sure what you are recording has been processed by Dolby Advanced Audio, you can disable all effects on the output device in Windows and record the impulse a second time. This should show up as a single peak sample rather than the symmetric pattern.
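If you don't already have an impulse file for Step 6, here is a minimal sketch using Python's standard library; the sample rate and padding length are assumptions, so match them to your device's settings:

import struct
import wave

# Sketch: one full-scale (-0dBFS) sample surrounded by silence,
# written as 48 kHz mono 16-bit PCM.
RATE, PAD = 48000, 4800  # 0.1 s of silence on each side

with wave.open("impulse.wav", "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)  # 16-bit
    w.setframerate(RATE)
    samples = [0] * PAD + [32767] + [0] * PAD
    w.writeframes(b"".join(struct.pack("<h", s) for s in samples))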
After a bit of research I found this explanation, which seems to have settled my query. It generally says:
There isn't going to be an easy fix for this, unless Dolby releases a Linux driver or publishes more information on what exactly their software is doing (which is unlikely).
Source: Haswell-ThinkPad-problems, linux-low-audio-quality
Beware that PulseEffects has recently changed its name to EasyEffects, but PulseEffects-Presets hasn't updated its config files to cover this change; therefore this answer might not be applicable anymore.

Why is my Clojure project slow on Raspberry Pi?

I've been writing a simple Clojure framework for playing music (and later some other stuff) for my Raspberry Pi. The program parses a given music directory for songs and then starts listening for control commands (such as start, stop, next song) via a TCP interface.
The code is available via GitHub:
https://github.com/jvnn/raspi-framework
The current version works just fine on my laptop, it starts playing music (using the JLayer library) when instructed to, changes songs, and stops just as it should. The uberjar takes a few seconds to start on the laptop as well, but when I try to run it on the Raspberry Pi, things get insanely slow.
Just starting up the program so that all classes are loaded and the actual program code begins executing takes well over a minute. I tried running it with the -verbose:class switch, and it seems the JVM spends the whole time just loading tons of classes (for Clojure and everything else).
When the program finally starts, it does react to the commands given, but the playback is very laggy. There is a short sub-second sound, then a pause of almost a second, then another sound, another pause, etc. So the program is trying to play something, it just can't do it fast enough. CPU usage is somewhere close to 98%.
Now, having an Android phone and all, I'm sure Java can be executed on such hardware well enough to play some MP3 files without any trouble. And I know that JLayer (or parts of it) is used in the gdx game development framework (which also runs on Android), so it shouldn't be a problem either.
So everything points to me being the problem. Is there something I can do, either with Leiningen (AOT is already enabled for all files), the Raspberry Pi, or my code, that could make things faster?
Thanks for your time!
UPDATE:
I made a tiny test case to rule out some possibilities, and the problem still exists with the following Clojure code:
(ns test.core
  (:import [javazoom.jl.player.advanced AdvancedPlayer])
  (:gen-class))

(defn -main
  []
  (let [filename "/path/to/a/music/file.mp3"
        fis (java.io.FileInputStream. filename)
        bis (java.io.BufferedInputStream. fis)
        player (AdvancedPlayer. bis)]
    (doto player (.play) (.close))))
The project.clj:
(defproject test "0.0.1-SNAPSHOT"
  :description "FIXME: write description"
  :dependencies [[org.clojure/clojure "1.5.1"]
                 [javazoom/jlayer "1.0.1"]]
  :javac-options ["-target" "1.6" "-source" "1.6" "-Xlint:-options"]
  :aot :all
  :main test.core)
So: no core.async and no threading. The playback did get a bit smoother, but it's still about 200 ms of music followed by a 200 ms pause.
The most obvious problem to me is that you have a lot of un-hinted interop code, leading to very expensive runtime reflection. Try running lein check (I think that's built in, but you may need a plugin) and fixing the reflection warnings it points out.
