Measuring Multiple Voltages in LabVIEW w/USB-6001

I'm trying to set up my LabVIEW VI and my USB-6001 I/O box to read multiple independent voltages at once, while also outputting a single constant voltage.
I've successfully gotten my USB box to output the voltage I want while reading back a single voltage, but so far I've been unable to read back more than one voltage (and when I do, the two readings seem to depend on one another in some way).
Here's a screenshot of my VI:
Everything to the right of the screenshot window should be unimportant to the question.
If anyone is curious, this is to drive multiple LVDTs and read back their respective voltages.
Thank you all for your help!

Look at your DAQ's manual, especially the pages I noted below.
http://www.ni.com/pdf/manuals/374259a.pdf
Page 11
All the AI channels get multiplexed, and the low-side reference can be switched (RSE vs. differential), so switching between the two channels you're sampling requires both of those to change. It might be a settling issue where the ADC takes its sample before the input value has stabilized.
To verify this, try using the same low-side configuration (differential or RSE) on both channels; see the sketch below. Also try slowing down your sample rate (but your 1 kHz should already be slow enough...).
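If it helps to see what "the same low side on both channels" looks like outside of LabVIEW, here is a minimal sketch using the NI-DAQmx C API (the device name Dev1, the ±10 V range and the 1 kHz rate are illustrative, and error checking is omitted):

#include <NIDAQmx.h>
#include <stdio.h>

int main(void)
{
    TaskHandle task = 0;
    float64 data[2000];   /* 1000 samples x 2 channels */
    int32 numRead = 0;

    DAQmxCreateTask("", &task);
    /* Both ai0 and ai1 use the same terminal configuration (RSE here),
       so the mux doesn't switch the low-side reference between channels. */
    DAQmxCreateAIVoltageChan(task, "Dev1/ai0:1", "", DAQmx_Val_RSE,
                             -10.0, 10.0, DAQmx_Val_Volts, NULL);
    DAQmxCfgSampClkTiming(task, "", 1000.0, DAQmx_Val_Rising,
                          DAQmx_Val_FiniteSamps, 1000);
    DAQmxStartTask(task);
    DAQmxReadAnalogF64(task, 1000, 10.0, DAQmx_Val_GroupByChannel,
                       data, 2000, &numRead, NULL);
    printf("ai0[0] = %f V, ai1[0] = %f V\n", data[0], data[1000]);
    DAQmxClearTask(task);
    return 0;
}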
Page 14
Check this to make sure you have everything connected and grounded correctly.
Page 18
Check this for more details about switching between 2 sources quickly.

Perhaps you could try it using the DAQmx Express VIs:
http://www.ni.com/tutorial/2744/en/

Related

ESP32: BLE transmission speed is very slow

I am trying to build an Android app that interfaces with the ESP32 using BLE. I am using the RxBluetoothKotlin library from Vincent Masselis for the Android side. For the ESP32 side, I am using the default Kolban libraries that are included in the Arduino IDE. My phone is a OnePlus 5T and my ESP32 is a MH ET Live ESP32DevKIT. My Android app can be found here, and my ESP32 program here.
The whole system works pretty much perfectly for me in terms of pure functionality. That is to say, every button does what it's supposed to do, and I get the exact behaviour I had expected to get. However, the communication itself is very slow. Around 200 bytes/second. My test button in the Android app requests a bunch of text data from the ESP32, and displays this in a dialog. It also lists a number which represents the time between request and reception in milliseconds. Using this, I get around 2 seconds for 440 bytes of data. When I send less data, the time decreases approximately linearly with data size. 40 bytes of data will take around 200ms, and 20 bytes or under typically takes less than 100ms.
This seems rather slow to me. From what I understand, I should be able to at least get a few kilobytes per second. I have tried to check the speed using nRF Connect, but I get the same 2 seconds timespan for my data transfer. This suggests that the problem is not in my app, since I also have it with a completely different app. I also put the code in my main loop inside of callbacks instead (which I probably should have done in the first place), but this didn't change things at all. I have tried taking the microcontroller and my phone to a few different locations, hoping to eliminate interference. I have tried to mess with BLEDevice::setPower and BLEDevice::setMTU, as well as setting RxBluetoothGatt.requestMtu(500) on the Android side. Everything so far seems to have had little to no effect. The only thing that did anything, was adding the line "pServer->updatePeerMTU(0,500);" in my loop during the connection phase. This caused the first 23 bytes of data to be repeated whenever I pressed the test button in my app, and made the data transfer take about 3 seconds. If I'm lucky, I can get maybe a bit under 1.8 seconds for 440 bytes, but this is a very small change when I'm expecting an order of magnitude of difference, and might even be down to pure chance rather than anything I did.
Does anyone have an idea of how to increase my transfer speed?
The data transmission speed is mainly influenced by the Bluetooth LE connection interval (between 7.5 ms and 4 seconds), which is negotiated between the master (central) and the peripheral device. The master establishes the connection with a parameter set, and the peripheral can propose changes to that parameter set; in the end, however, the central decides which parameter set is used.
The connection interval cannot be changed directly by an Android application, which normally acts in the central role. Instead, the app can request a connection priority, which is known to influence the connection interval.
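On the Android side the corresponding knob is BluetoothGatt.requestConnectionPriority(BluetoothGatt.CONNECTION_PRIORITY_HIGH). From the ESP32 (peripheral) side you can also propose shorter parameters through the ESP-IDF GAP API; here is a minimal sketch (the helper name and the way you obtain the peer address are illustrative, not from your code, and the central is still free to override the proposal):

#include <string.h>
#include "esp_gap_ble_api.h"

// Call this once the phone has connected; peer_addr is the central's
// address, e.g. taken from the connect event/callback.
void proposeFasterConnection(esp_bd_addr_t peer_addr)
{
    esp_ble_conn_update_params_t params = {};
    memcpy(params.bda, peer_addr, sizeof(esp_bd_addr_t));
    params.min_int = 0x06;   // 6 * 1.25 ms = 7.5 ms (lowest allowed)
    params.max_int = 0x0C;   // 12 * 1.25 ms = 15 ms
    params.latency = 0;      // no slave latency
    params.timeout = 400;    // supervision timeout: 400 * 10 ms = 4 s
    esp_ble_gap_update_conn_params(&params);
}

Even with a larger MTU, throughput stays limited while the interval is long, because only a limited number of packets are exchanged per connection event.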

Intercepting Sound From Other Programs

I want to do a couple of things:
- I want to hear sound from all other programs through Max, and Max only.
- I want to edit that sound in real time and hear only the edited sound.
- I want to slow down the sound, while stacking the non-slowed incoming input onto a buffer, which I can then speed through to catch up.
Is this possible in Max? I have had a lot of difficulty getting even step 1 working. Even if I use my speakers as an input device, I am unable to monitor it, let alone edit it. I am using Max for Live, for what it's worth.
Steps 1 and 2
On Mac, you can use Loopback.
You can set your system output to the Loopback driver, then set the Loopback driver as the input in Max and the speakers as the output.
On Windows you would do the same, but with a different internal audio-routing system such as Jack.
Step 3
You can do that with the buffer~ object. Of course the buffer will have a finite size, and storing hours of audio might be problematic, but minutes shouldn't be a problem on a decent computer. The buffer~ help file will show you the first steps needed to store and read audio from it.

Custom player using NDK/C++/MediaCodec - starvation/buffering in decoder

I have a very interesting problem.
I am running a custom movie player, based on an NDK/C++/CMake toolchain, that opens a streaming URL (MP4, H.264 & stereo audio). In order to restart from a given position, the player opens the stream, buffers frames to some length, then seeks to the new position and starts decoding and playing. This works fine every time, except if we power-cycle the device and follow the same steps.
This was reproduced on a few versions of the software (plugin built against android-22..26) and hardware (LG G6, G5 and LeEco). The issue does not happen if you keep the app open for 10 minutes.
I am looking for possible areas of concern. I have played with decode logic (it is based on the approach described as synchronous processing using buffers).
Edit - More Information (4/23)
I modified the player to pick a stream and then played only video instead of video+audio. This resulted in constant starvation and therefore buffering. The behaviour appears to change across Android versions (I have no firm data on this). I do believe that I am running into decoder starvation. Previously I had set timeouts of 0 for both AMediaCodec_dequeueInputBuffer and AMediaCodec_dequeueOutputBuffer, which I changed on the input side to 1000 and 10000, but this does not make much difference; see the sketch below.
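For reference, the synchronous dequeue calls with those non-zero timeouts look roughly like this (the helper name and the elided fill/render steps are placeholders, not the actual player code):

#include <sys/types.h>
#include <media/NdkMediaCodec.h>

// One iteration of the synchronous decode loop; timeouts are in microseconds.
void drainOnce(AMediaCodec *codec)
{
    ssize_t inIdx = AMediaCodec_dequeueInputBuffer(codec, 1000 /* us */);
    if (inIdx >= 0) {
        // ... read a sample from the extractor, copy it into the input
        //     buffer and submit it with AMediaCodec_queueInputBuffer() ...
    }

    AMediaCodecBufferInfo info;
    ssize_t outIdx = AMediaCodec_dequeueOutputBuffer(codec, &info, 10000 /* us */);
    if (outIdx >= 0) {
        // ... present the frame, then AMediaCodec_releaseOutputBuffer() ...
    }
}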
My player is based on the NDK/C++ interface to MediaCodec; the CMake build passes -DANDROID_ABI="armeabi-v7a with NEON" and -DANDROID_NATIVE_API_LEVEL="android-22", and links the static C++ runtime (c++_static).
Can anyone share what timeouts they have used with success, or anything else that would help avoid starvation and the resulting buffering?
This is solved for now. The starvation was not caused by the decoder; frames were being consumed at a faster pace because the clock values returned were not in sync. I was using clock_gettime with the CLOCK_MONOTONIC clock id, which is the recommended approach, but it always ran fast for the first 5-10 minutes after restarting the device. This device only had a Wi-Fi connection. Changing the clock id to CLOCK_REALTIME ensures correct presentation of images and no starvation.
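In code, the fix amounts to nothing more than the clock id passed to clock_gettime; a minimal sketch of the presentation-clock helper (the function name is mine, for illustration):

#include <stdint.h>
#include <time.h>

// Wall-clock time in nanoseconds, used to pace frame presentation.
// CLOCK_MONOTONIC is the usual recommendation, but on these devices it ran
// fast for the first few minutes after a power cycle, so CLOCK_REALTIME is
// used instead.
static int64_t nowNanos()
{
    timespec ts;
    clock_gettime(CLOCK_REALTIME, &ts);   // was CLOCK_MONOTONIC
    return (int64_t) ts.tv_sec * 1000000000LL + ts.tv_nsec;
}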

OpenCV VideoCapture / V4L2 latency when grabbing a new webcam image

For a computer vision project that I am working on I need to grab images using a Logitech C920 webcam. I am using OpenCV's VideoCapture to do that, but the problem I am facing is that the image I take at a certain moment does not show the latest thing the camera sees. That is, if I take an image at timestamp t, it shows what the camera saw at timestamp (t - delta), so to speak.
I verified this by writing a program that increments a counter and shows it on the screen. I pointed the camera at the screen and let it record. When the counter reached a certain value, say 10000, it would grab an image and save it with the filename "counter_value.png" (e.g. 10000.png). That way I was able to compare the current value of the counter with the current value seen by the camera. I noticed that most of the time the delay is about 4-5 frames, but it is not a fixed value.
I saw similar posts about this issue, but none of them really helped. Some people recommended putting the frame-grabbing routine into a separate thread and updating a "current_frame" Mat variable. I tried that, but for some reason the issue is still present. Someone else mentioned that the camera worked well on Windows (but I need to use Linux). I tried running the same code on Windows and indeed the delay was only about 1 frame (which might as well be because the screen did not update fast enough for the camera to see the counter).
I then decided to run a simple webcam viewer based only on V4L2 code, thinking that the issue might be coming from OpenCV. I again experienced the same delay, which makes me believe that the driver is using some sort of buffer to cache the images.
I am new to V4L2 and I really need to solve this problem as soon as possible, so my questions to you guys are:
Has anyone found a solution for getting the latest image using V4L2 (and maybe OpenCV)?
If there is no way to solve it using V4L2, does anyone know another alternative to fixing this issue on Linux?
Regards,
Mihai
It looks like there will always be a delay between the VideoCapture::grab() call and when the frame is actually taken. This is because of frame buffering done at the hardware/OS level, and you cannot avoid it.
OpenCV provides the VideoCapture::get(CV_CAP_PROP_POS_MSEC) method to give you the exact time a frame was captured, but this is only possible if the camera supports it.
Recently a problem was discovered in the OpenCV V4L implementation:
http://answers.opencv.org/question/61099/is-it-possible-to-get-frame-timestamps-for-live-streaming-video-frames-on-linux/
And a few days ago a fix was merged:
https://github.com/Itseez/opencv/pull/3998
In the end, if you have the right setup, you can know the time a frame was taken (and therefore compensate).
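As a rough sketch of querying that timestamp (the device index is illustrative, and CV_CAP_PROP_POS_MSEC assumes the C-style constants of that OpenCV era):

#include <opencv2/opencv.hpp>
#include <iostream>

int main()
{
    cv::VideoCapture cap(0);        // first V4L2 device
    if (!cap.isOpened())
        return 1;

    cv::Mat frame;
    while (cap.read(frame)) {
        // Capture time in milliseconds; only meaningful if the
        // backend/camera actually reports it.
        double tMs = cap.get(CV_CAP_PROP_POS_MSEC);
        std::cout << "frame captured at " << tMs << " ms" << std::endl;
        cv::imshow("frame", frame);
        if (cv::waitKey(1) == 27)   // Esc quits
            break;
    }
    return 0;
}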
It is possible the problem is with the Linux UVC driver, but I have been using Microsoft LifeCam Cinemas for machine vision on Ubuntu 12.04 and 14.04 machines and have not seen a 4-5 frame delay. I do operate them in low-light conditions, though, in which case they reduce the frame rate to 7.5 fps.
One other possible culprit is a delay in the webcam depending on which format is used. The C920 appears to support H.264 (which few webcams do), so Logitech may have put most of their effort into making that work well, yet OpenCV appears not to support H.264 on Linux; see this answer for which formats it does support. The same question also has an answer with a kernel hack(!) to fix an issue with the UVC driver.
PS: to check the format actually used in my case, I added
fprintf(stderr, ">>> palette: %d\n", capture->palette);
at this line in the OpenCV code.

low latency sounds on key presses

I am trying to write an application (I'm a GUI first-timer) for my son, who has autism. There is a video player in the top half and a text-entry area in the bottom. When letters are typed, sounds are produced to mimic the words in the video.
There have been other posts on this site about playing sounds on key presses using GStreamer as a system call. I have also tried libcanberra, but both seem to have significant delays between sounds. I can write the app in Python or C, but will likely do at least some of it in C.
I also want to mention that the video portion is played by GStreamer. I tried to create two instances of GStreamer, to avoid expensive system calls, but the audio instance seemed to kill the app when called.
If anyone has any tips on creating faster responding sounds I would really appreciate it.
You can upload a raw audio sample directly to PulseAudio, so there is no decoding (and you perhaps save some extra context switches), by using the following function from libcanberra:
http://developer.gnome.org/libcanberra/unstable/libcanberra-canberra.html#ca-context-cache
The next ca_context_play() will use it.
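A minimal sketch of that cache-then-play flow (the event id and file path are made up for this example):

#include <canberra.h>

int main(void)
{
    ca_context *ctx = NULL;
    ca_context_create(&ctx);

    // Upload the decoded sample to the sound server once and keep it cached.
    ca_context_cache(ctx,
                     CA_PROP_EVENT_ID, "letter-a",
                     CA_PROP_MEDIA_FILENAME, "/usr/share/sounds/letter-a.wav",
                     CA_PROP_CANBERRA_CACHE_CONTROL, "permanent",
                     NULL);

    // Later, on each key press: plays from the server-side cache,
    // so no decoding happens at press time.
    ca_context_play(ctx, 0,
                    CA_PROP_EVENT_ID, "letter-a",
                    NULL);

    // In a real app, keep the context alive as long as the app runs;
    // playback is asynchronous.
    ca_context_destroy(ctx);
    return 0;
}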
However, the biggest problem you'll encounter with this scenario (simultaneous video playback) is that the audio device might be configured with a large latency in PulseAudio (up to 1/2 s or more for normal playback). It may be reasonable to file a bug against libcanberra asking for a LOW_LATENCY flag, as it currently doesn't attempt to minimize delay for sound events, AFAIK. That would be great to have.
GStreamer pulsesink could probably get low latency too (it has some properties for that), but I am afraid it won't be as lightweight as libcanberra, and you won't be able to cache a sample for instance. Ideally, GStreamer could also learn to cache samples, or pre-fill PulseAudio...
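If you do go the GStreamer route for the key-press sounds, the pulsesink properties I have in mind are buffer-time and latency-time (both in microseconds); a rough sketch, with illustrative values that will likely need tuning (too small and you get underruns):

#include <gst/gst.h>

int main(int argc, char **argv)
{
    gst_init(&argc, &argv);

    // Example pipeline for a short WAV sample (the path is made up).
    GstElement *pipeline = gst_parse_launch(
        "filesrc location=/usr/share/sounds/letter-a.wav ! wavparse ! "
        "audioconvert ! audioresample ! pulsesink name=out", NULL);

    // Shrink pulsesink's ring buffer so playback starts sooner.
    GstElement *sink = gst_bin_get_by_name(GST_BIN(pipeline), "out");
    g_object_set(sink, "buffer-time", (gint64) 20000,
                       "latency-time", (gint64) 10000, NULL);
    gst_object_unref(sink);

    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    // Wait until the sample finishes (or an error occurs), then clean up.
    GstBus *bus = gst_element_get_bus(pipeline);
    GstMessage *msg = gst_bus_timed_pop_filtered(bus, GST_CLOCK_TIME_NONE,
        (GstMessageType) (GST_MESSAGE_EOS | GST_MESSAGE_ERROR));
    if (msg)
        gst_message_unref(msg);
    gst_object_unref(bus);
    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(pipeline);
    return 0;
}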
