I am looking for a way to read USB-MIDI input live and have triggers that run when a certain note is played. For example, it should run function x when an E is played. This is Python 3, running either on a Windows 10 machine or on a Raspberry Pi.
All the information I have found is years to decades old and based on pygame, py-midi, or pyPortMidi. Is there any current library that supports this? Pygame seems to rely on polling, which causes a short delay and is a problem for this scenario.
In MIDI-OX, the Monitor displays the notes being played in real time, but I can't do anything useful with it from there, as I need the triggers, or events, in Python.
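To make the goal concrete, here is roughly the shape of code I'm hoping to be able to write. This sketch uses mido (with the python-rtmidi backend) purely as an example of the callback style I'm after; it is untested, and the port name is a placeholder (mido.get_input_names() lists the real ones):

import mido  # assumes the python-rtmidi backend is installed as well

# MIDI note numbers for E in every octave (E4 = 64, and 64 % 12 == 4).
E_NOTES = {n for n in range(128) if n % 12 == 4}

def function_x():
    print("E was played!")

def handle(msg):
    # A note_on with velocity 0 is effectively a note_off, so skip it.
    if msg.type == "note_on" and msg.velocity > 0 and msg.note in E_NOTES:
        function_x()

# Placeholder port name; check mido.get_input_names() for the real device.
port = mido.open_input("My USB MIDI Device", callback=handle)
input("Listening for notes, press Enter to quit\n")
port.close()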
I have two AppleScripts (saved as apps) that make webhook calls in a loop to control the volume of my stereo. Each script displays a dialog that asks for a number of ticks to tick the volume up or down, and it then loops, making one webhook call per tick.
Background: I wrote a program called pi_bose that runs on my Raspberry Pi to send commands to my Bose Series 12 stereo. It sends codes on the 28 MHz band using a wire as an antenna plugged into one of the GPIO ports. Node-RED receives the webhook calls and runs that script. But various things can make it fail: the antenna can be loose because the Pi has been bumped, Node-RED isn't running, the program has a small memory leak that causes problems after about six months of use, and sometimes there is background interference that keeps some transmissions from working (I could probably use a longer antenna to address that, I guess). Sometimes whatever is playing on the stereo is just so soft that it's hard to detect the subtle change in volume. And sometimes it seems that the webhook calls happen slowly and the volume does change, just over the course of 20-30 seconds. So...
I know I could do the loop on the Pi itself instead of repeating the webhook call, but I would like to see progress on the Mac itself.
I'd like some sort of cue that gives me feedback each time the webhook call happens: a red dot on the AppleScript app's icon, say, or something in the corner of the screen that appears for a fraction of a second each time the call is made.
Alternatively, I could have the script play some sort of sound, but I would rather not audibly disrupt whatever is playing at the time.
Does anyone know how to do that? Is it even possible to display an icon without a dialog window in AppleScript?
I'm trying to set up my LabVIEW VI and my USB-6001 I/O box to read multiple independent voltages at once while also outputting a single constant voltage.
I've successfully gotten the USB box to output the voltage I want while reading back a single voltage, but so far I've been unable to read back more than one voltage (and when I try, the two readings seem to be coupled to one another in some way).
Here's a screenshot of my VI:
Everything to the right of what's shown in the screenshot should be unimportant to the question.
If anyone is curious, this is to drive multiple LVDTs and read back their respective voltages.
Thank you all for your help!
Look at your DAQ's manual, especially the pages I noted below.
http://www.ni.com/pdf/manuals/374259a.pdf
Page 11
All the AI channels get multiplexed, and the low-side reference can be switched (RSE vs. differential). So the two channels you're sampling require both of those to switch. It might be a settling issue where the ADC is taking a sample before the input value is stable.
To verify this, try using the same low-side configuration (differential or RSE) on both channels. Also try slowing down your sample rate (though your 1 kHz should already be slow enough...).
Page 14
Check this to make sure you have everything connected and grounded correctly.
Page 18
Check this for more details about switching between 2 sources quickly.
Perhaps you could also try it using the DAQmx Express VIs:
http://www.ni.com/tutorial/2744/en/
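If you want to sanity-check the hardware outside LabVIEW, something along these lines with NI's nidaqmx Python package could help isolate the problem. It's only a sketch: the device name "Dev1" and the 2.5 V output value are assumptions, and the point is simply to keep both AI channels in the same terminal configuration within one task.

import nidaqmx
from nidaqmx.constants import TerminalConfiguration, AcquisitionType

with nidaqmx.Task() as ao, nidaqmx.Task() as ai:
    # Constant output voltage on AO0 (on-demand, single sample).
    ao.ao_channels.add_ao_voltage_chan("Dev1/ao0")
    ao.write(2.5)

    # Two inputs in one task, both RSE (or both DIFF), sampled together.
    ai.ai_channels.add_ai_voltage_chan(
        "Dev1/ai0", terminal_config=TerminalConfiguration.RSE)
    ai.ai_channels.add_ai_voltage_chan(
        "Dev1/ai1", terminal_config=TerminalConfiguration.RSE)
    ai.timing.cfg_samp_clk_timing(
        1000, sample_mode=AcquisitionType.FINITE, samps_per_chan=100)

    # Returns one list of samples per channel.
    data = ai.read(number_of_samples_per_channel=100)
    print(data[0][:5], data[1][:5])

If the two readings stop tracking each other with both channels forced to the same reference mode, that points to the mux/settling issue described above.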
For a computer vision project that I am working on, I need to grab images using a Logitech C920 webcam. I am using OpenCV's VideoCapture to do that, but the problem I am facing is that the image I grab at a certain moment does not show the latest thing the camera sees. That is, if I grab an image at timestamp t, it shows what the camera saw at timestamp (t - delta), so to speak.
I verified this by writing a program that increments a counter and shows it on the screen. I pointed the camera at the screen and let it record. When the counter reached a certain value, say 10000, the program would grab an image and save it with the counter value as the filename (e.g. 10000.png). That way I was able to compare the current value of the counter with the value seen by the camera. I noticed that most of the time the delay is about 4-5 frames, but it is not a fixed value.
I saw similar posts about this issue, but none of them really helped. Some people recommended putting the frame-grabbing routine into a separate thread and updating a "current_frame" Mat variable. I tried that, but for some reason the issue is still present. Someone else mentioned that the camera worked well on Windows (but I need to use Linux). I tried running the same code on Windows and indeed the delay was only about 1 frame (which might as well mean that the camera did not see the counter because the screen did not update fast enough).
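To make that concrete, the threaded grabber I tried was structured roughly like this (a simplified Python sketch of the idea; my real code differs, but the structure is the same):

import threading
import cv2

class LatestFrameGrabber:
    def __init__(self, device=0):
        self.cap = cv2.VideoCapture(device)
        self.lock = threading.Lock()
        self.frame = None
        self.running = True
        threading.Thread(target=self._loop, daemon=True).start()

    def _loop(self):
        # Read as fast as possible so the driver's queue never backs up.
        while self.running:
            ok, frame = self.cap.read()
            if ok:
                with self.lock:
                    self.frame = frame

    def read(self):
        # Return a copy of the most recently grabbed frame.
        with self.lock:
            return None if self.frame is None else self.frame.copy()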
I then decided to run a simple webcam viewer based only on V4L2 code, thinking that the issue might be coming from OpenCV. I again experienced the same delay, which makes me believe that the driver is using some sort of buffer to cache the images.
I am new to V4L2 and I really need to solve this problem as soon as possible, so my questions to you guys are:
Has anyone found a solution for getting the latest image using V4L2 (and maybe OpenCV)?
If there is no way to solve it using V4L2, does anyone know another alternative to fixing this issue on Linux?
Regards,
Mihai
It looks like there will always be a delay between the VideoCapture::grab() call and when the frame is actually captured. This is because of frame buffering done at the hardware/OS level, and you cannot avoid it.
OpenCV provides the VideoCapture::get(CV_CAP_PROP_POS_MSEC) method to give you the exact time a frame was captured, but this is only possible if the camera supports it.
Recently a problem was discovered in the V4L OpenCV implementation:
http://answers.opencv.org/question/61099/is-it-possible-to-get-frame-timestamps-for-live-streaming-video-frames-on-linux/
And a few days ago a fix was merged:
https://github.com/Itseez/opencv/pull/3998
In the end, if you have the right setup, you can know the time at which a frame was taken (and compensate accordingly).
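In Python that would look roughly like this (just a sketch; it assumes an OpenCV build that includes the fix above and a camera/driver combination that actually reports capture times, otherwise the property may simply return 0):

import cv2

cap = cv2.VideoCapture(0)
ok, frame = cap.read()
if ok:
    # Time at which this frame was captured, in milliseconds.
    # (The constant is CV_CAP_PROP_POS_MSEC in the old C API.)
    ts_ms = cap.get(cv2.CAP_PROP_POS_MSEC)
    print("frame captured at %.1f ms" % ts_ms)
cap.release()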
It is possible the problem is with the Linux UVC driver, but I have been using Microsoft LifeCam Cinemas for machine vision on Ubuntu 12.04 and 14.04 machines, and have not seen a 4-5 frame delay. I operate them in low light conditions, though, in which case they reduce the frame rate to 7.5 fps.
One other possible culprit is a delay in the webcam depending on which format is used. The C920 appears to support H.264 (which few webcams do), so Logitech may have put most of its effort into making that work well, yet OpenCV appears not to support H.264 on Linux; see this answer for which formats it does support. The same question also has an answer with a kernel hack(!) to fix an issue with the UVC driver.
PS: to check the format actually used in my case, I added
fprintf(stderr, ">>> palette: %d\n", capture->palette);
at this line in the OpenCV code.
I have multiple webcams hooked up to my Raspberry Pi; they are compatible and they work. I want the Pi to take a picture every 5 seconds with every camera, save them to /var/www/picture.jpeg (and picture2.jpeg, picture3.jpeg), and repeat, overwriting the old images. I'm not quite sure how to achieve this and need help!
Thanks!
You could use guvcview or motion. I forget which arguments you need to run it with, but just launch it with --help and you will get all the information. Actually, I've never tried it with more than one camera, but I think it is possible; you may have to search. As for the 5-second interval, there is no problem with either of those two programs.
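If you would rather script it yourself than use motion, a small Python/OpenCV loop is another option. This is an untested sketch and assumes the three cameras show up as video devices 0, 1 and 2:

import time
import cv2

CAMERAS = {0: "/var/www/picture.jpeg",
           1: "/var/www/picture2.jpeg",
           2: "/var/www/picture3.jpeg"}

caps = {idx: cv2.VideoCapture(idx) for idx in CAMERAS}

while True:
    for idx, path in CAMERAS.items():
        ok, frame = caps[idx].read()
        if ok:
            cv2.imwrite(path, frame)  # overwrites the previous image
    time.sleep(5)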
I am trying to write an application (I'm a GUI first-timer) for my son, who has autism. There is a video player in the top half and a text-entry area in the bottom half. When letters are typed, sounds are produced to mimic the words in the video.
There have been other posts on this site about playing sounds on key presses by invoking GStreamer through a system call. I have also tried libcanberra, but both approaches seem to have significant delays between sounds. I can write the app in Python or C, but will likely do at least some of it in C.
I also want to mention that the video portion is being played by GStreamer. I tried to create two GStreamer instances to avoid the expensive system calls, but the audio instance seemed to kill the app when called.
If anyone has any tips on creating faster-responding sounds, I would really appreciate it.
You can upload a raw audio sample directly to PulseAudio, so there is no decoding (and perhaps fewer extra context switches), by using the following function from libcanberra:
http://developer.gnome.org/libcanberra/unstable/libcanberra-canberra.html#ca-context-cache
The next ca_context_play() will use it.
However, the biggest problem you'll encounter in this scenario (with simultaneous video playback) is that the audio device might be configured with a large latency in PulseAudio (up to half a second or more for normal playback). It may be reasonable to file a bug against libcanberra asking for a LOW_LATENCY flag, as it currently doesn't attempt to minimize delay for sound events, as far as I know. That would be great to have.
GStreamer's pulsesink could probably achieve low latency too (it has some properties for that), but I am afraid it won't be as lightweight as libcanberra, and you won't be able to cache a sample, for instance. Ideally, GStreamer could also learn to cache samples, or to pre-fill PulseAudio...
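For reference, the pulsesink route could look roughly like this from Python (an untested sketch using the GObject-introspection GStreamer bindings; the file name and the buffer-time/latency-time values are placeholders, just to show which knobs to turn):

import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

# Decode one short sample and push it to PulseAudio with a smaller
# ring buffer (values are in microseconds).
pipeline = Gst.parse_launch(
    "filesrc location=key.wav ! wavparse ! audioconvert ! audioresample "
    "! pulsesink buffer-time=20000 latency-time=10000")
pipeline.set_state(Gst.State.PAUSED)  # preroll once, up front

def play():
    # On each key press, rewind the already-built pipeline and play it,
    # instead of spawning a new process every time.
    pipeline.seek_simple(Gst.Format.TIME, Gst.SeekFlags.FLUSH, 0)
    pipeline.set_state(Gst.State.PLAYING)

The pipeline is built and prerolled once, which is the main point: the per-keypress cost is just a seek and a state change rather than process startup plus decoding.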