What is Kinect + Linux being used for?

An article on Hackaday piqued my curiosity, and I see Kinect + Linux questions being asked here (mostly about configuration), so I'll venture this question:
It is clear to me that Kinect can be used together with Linux on a "regular PC" -- but I can't help wondering why; that is, what might you actually use this for?
I don't suppose people really like the human/computer interface presented in movies such as "Minority Report" -- surely nobody is actually doing text editing, coding, or business data processing by "hand-waving". So besides just games & exercises, what are examples of actual, real-world, useful (i.e. 'professional') applications of such a setup?
For instance, can it be used for 3D scanning of real-world objects to obtain digital models? What sort of accuracy would such a scan yield?

The Kinect can be used for a wide variety of useful applications. I'm not sure if you are asking specifically about Linux or if Windows ("regular PC") is acceptable, but I'll provide you with some examples that come to mind.
For Linux specifically, it is likely that applications are using the sensor's raw data only, rather than the skeletal tracking feature. Many Kinect applications are on Windows because Microsoft's Kinect SDK is available only on Windows, and it provides the best skeletal tracking accuracy to date.
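To make "raw sensor data" concrete: on Linux the Kinect's depth stream is typically read through the open-source libfreenect driver (OpenNI is another common option). Below is a minimal sketch, assuming libfreenect with its synchronous wrapper is installed; the header paths and depth format constants may differ by distribution, so treat it as illustrative:

/* Grab one raw depth frame from the first Kinect using libfreenect's
 * synchronous API. Build roughly with: gcc grab.c -lfreenect_sync
 * (header install paths vary; some systems use <libfreenect.h> directly). */
#include <stdio.h>
#include <stdint.h>
#include <libfreenect/libfreenect.h>
#include <libfreenect/libfreenect_sync.h>

int main(void)
{
    void *frame = NULL;
    uint32_t timestamp = 0;

    /* Request a single depth frame in millimetres from device index 0. */
    if (freenect_sync_get_depth(&frame, &timestamp, 0, FREENECT_DEPTH_MM) != 0) {
        fprintf(stderr, "No Kinect found or depth stream unavailable\n");
        return 1;
    }

    /* The depth image is 640x480, one 16-bit value per pixel. */
    const uint16_t *depth = (const uint16_t *)frame;
    printf("Depth at image centre: %u mm (timestamp %u)\n",
           depth[240 * 640 + 320], timestamp);

    freenect_sync_stop();
    return 0;
}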
You are right that the Kinect is rarely used where a keyboard & mouse would be faster and more accurate, but note that it is potentially relevant for accessibility.
And yes, it can be used for 3D scanning of real-world objects. I'm not sure about the exact accuracy, but I think it is acceptable for many applications. The main benefits are its low cost and speed.
For examples of 3D scanning, check out:
KinectFusion, a Microsoft Research project
Occipital Structure sensor for 3D scanning. (This is not the Kinect sensor, but provides an example application for 3D scanning. The company has a Kinect-related history as well.)
Styku - 3D body scanning for clothes fitting
Aside from 3D scanning, here are some other examples of applications:
Atlas5D - at-home patient monitoring
GestSure - 'Minority Report' interface for surgical rooms
Jintronix - games, exercises, assessments for physical therapy
There are many depth sensors like the Kinect on the market. The latest notable application would be the iPhone X's depth sensor and Face ID. Many companies in the space are actively working on facial identification now, which would also be useful on Linux. Check out Microsoft's Windows Hello biometric facial ID system - see Microsoft's official website:
Manufacturing of the Kinect sensor and adapter has been discontinued, but the Kinect technology continues to live on in products like the HoloLens, Cortana voice assistant, the Windows Hello biometric facial ID system, and a context-aware user interface.
Kinect has applications in the robotics community as well, though I don't know the specifics. I assume many in the robotics community use Linux when working with the Kinect. The depth and color cameras can be used to provide vision, and the microphone array can be used for audio input.
Generally, the Kinect had a big impact when it was released not just because of its technology but also because of its low price point, even if it's not the most accurate for every application. As this technology improves, I hope many other applications will emerge and become mainstream.
EDIT: also, check out this Hacker News discussion: "Microsoft Has Stopped Manufacturing The Kinect"

Related

What considerations to make for selecting Bluetooth Chipsets for control of LED via PWM?

I am involved in working on new hardware LED products where we are selecting a Bluetooth chipset to use in multiple products controlled by iOS and Android apps, for at minimum the next 3-4 years. Also, I am not the developer; a third party will be contracted for this project.
As part of background research, I wanted to ask for feedback from the Stack Overflow community's experience with programming for Bluetooth, more specifically with custom firmware and GPIO PWM for LEDs.
What kind of challenges did you come across?
Are there any granular details or features to look out for with the hardware?
**Edit: based on the first answer -
Requirements:
BLE 5
I do need OTA update capability
Chip size is not a big constraint; the plastic enclosure can easily accommodate up to 1 inch / 25 mm, or a bit more.
Not a high-temperature application
A single-chip solution that will be programmed with our firmware and drives 4x PWM channels is ideal for our LED strips, avoiding a separate MCU
Cost per unit (lowest average cost/unit) - an important factor at volume, TBD
**Qualities I cannot gauge well myself, being a designer and not an experienced programmer:
Ease of integration/support (lowest cost of development)
Quality of the chip manufacturer's software tools
Quality of the chip's documentation
I have found some questions related to Raspberry Pi that seem generally helpful, but those questions don't help me with features or with the support and documentation of BT SoCs.
**Edit: Yes, we are only considering BLE, and the Nordic Semiconductor chips I have linked below are BLE / Bluetooth 5.
Nordic chips are on my short list; they seem well supported and capable of 3x or 4x PWM channels -- for example the nRF52832 (Nordic nRF52832 spec info) or the newer nRF5340. Does anyone have experience with them?
I really appreciate any answers regarding development considerations with Bluetooth.
I will edit & clarify if needed.
If you wish to support iOS apps, a BLE device is necessary; BT Classic requires a special Apple license (for your product) to be able to connect with iOS apps.
But other than that, your specifications don't really help to rule out ANY chip.
The first question that comes to mind is what other features you already have on your specification list that could be satisfied with a common solution. I.e. if you also need WiFi, don't choose two separate BLE/WiFi chips; buy a chip that can do both (it's all 2.4 GHz RF). If you need OTA updates for your firmware, choose a chip manufacturer with extensive and well-documented tooling.
Consider special requirements:
Do you need a very small chip?
Does it need to run at high temperatures (e.g. inside a light bulb)?
Do you need to run at ultra-low-power?
Does it need a high performance RF transceiver?
Decide whether you need a single-chip solution, that will be programmed with your firmware, or if your firmware will run on a dedicated microcontroller which is connected to the BLE chip.
Unless you have absolutely no special requirements to narrow down the selection, I'd base my decision along these criteria (not ordered):
Ease of integration (lowest cost of development)
Cost per unit (lowest average cost/unit)
Quality of the chip manufacturer's software tools
Quality of the chip's documentation
GPIO PWM output should be possible with almost any programmable BLE chip.
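To make that last point concrete, here is a rough sketch for the nRF52832 mentioned in the question, using Nordic's nrfx PWM driver to drive four LED channels from the chip's hardware PWM peripheral. Driver and config names vary between SDK versions, and the pin numbers are hypothetical, so treat it as illustrative rather than production firmware:

/* Drive 4 LED-strip channels from the nRF52 PWM peripheral via the nrfx
 * driver (nRF5 SDK / nrfx; newer nrfx versions add a context argument to
 * nrfx_pwm_init). Pin numbers 13-16 are placeholders. */
#include <stdint.h>
#include "nrfx_pwm.h"

static nrfx_pwm_t pwm = NRFX_PWM_INSTANCE(0);

/* One duty-cycle value per channel; "individual" load mode maps each field
 * to its own output pin. The sequence is re-read while looping, so writing
 * to this struct updates the outputs. */
static nrf_pwm_values_individual_t duty = { 0, 0, 0, 0 };

static nrf_pwm_sequence_t seq = {
    .values.p_individual = &duty,
    .length              = 4,     /* four 16-bit compare values */
    .repeats             = 0,
    .end_delay           = 0,
};

void led_pwm_init(void)
{
    nrfx_pwm_config_t cfg = {
        .output_pins  = { 13, 14, 15, 16 },     /* hypothetical GPIOs */
        .irq_priority = 6,
        .base_clock   = NRF_PWM_CLK_1MHz,
        .count_mode   = NRF_PWM_MODE_UP,
        .top_value    = 1000,                   /* PWM resolution / period */
        .load_mode    = NRF_PWM_LOAD_INDIVIDUAL,
        .step_mode    = NRF_PWM_STEP_AUTO,
    };
    nrfx_pwm_init(&pwm, &cfg, NULL);            /* no event handler needed */
    nrfx_pwm_simple_playback(&pwm, &seq, 1, NRFX_PWM_FLAG_LOOP);
}

/* Set one channel's duty cycle, value in 0..1000. */
void led_set_channel(int ch, uint16_t value)
{
    ((uint16_t *)&duty)[ch] = value;
}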

Which developer roles or titles are needed to build a software DAW?

So far, I've used many different audio production applications on Mac and Windows platforms. Often, I ponder the idea of creating my own DAW, but I realize that would be an extremely difficult challenge for a single person to undertake (especially if only knowledgeable in one particular area / language of programming).
A flood of ideas / features comes to my mind just at the thought of some of the other DAWs I've used: implementing MIDI in/out, audio routing, mix buses, VST support, a user interface for a piano roll and song view, etc...
So my question is...
Which roles would be required in a team of developers to create a complete Digital Audio Workstation (DAW) Software?
I think the right answer is several good developers (you don't need that many, perhaps 3), a good product manager, a UI designer/graphic artist, and a lot of testers. And a good coffee machine.
The real problem is what kind of DAW you want: portable on Mac and Windows, which OSs, which formats (VST 2, VST 3, AU, RTAS, AAX, Rack Extension, DX), do you want only MIDI and audio tracks, which external MIDI devices do you want to support, do you support OSC, other protocols?
What will be the features of your mixer and integrated effects? Which audio APIs will you support on Windows (WASAPI, ASIO, ...)? Do you want some cloud features? Community or online store integration?
What kind of breakthrough would you have compared to Cubase, Live, PT, DP, Logic, GarageBand, Bitwig, Studio One, Sonar, FL Studio...? Do you want modular patches or just tracks? Will you have advanced integrated controls or MIDI modifiers?
All that is the problem...
This is a very complex question!

Does DirectSound usually support echo cancellation and noise reduction?

I'm currently using the waveInOpen set of Windows API functions to record audio for a VoIP application. I'm now being asked to add echo cancellation, and possibly noise reduction and gain control. I know nothing about DirectSound, but while searching for "echo cancellation" on Google I came across references on MSDN to DirectSound features such as CaptureAcousticEchoCancellationEffect.
If I switch to DirectSound will I get some of these features "for free"? Are they only supported if the hardware supports it, and if so, how often will that hardware be present in the average consumer PC?
Starting with Windows Vista, Microsoft provides a separate component, the Voice Capture DSP:
The voice capture DMO includes the following DSP components:
Acoustic echo cancellation (AEC)
Microphone array processing
Noise suppression
Automatic gain control
Voice activity detection
Applications can turn each component on and off individually.
You can use it in your VoIP application to leverage AEC and noise suppression implemented in software.
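For reference, here is a hedged sketch of how the Voice Capture DSP is instantiated and put into AEC mode (names follow the MSDN "Voice Capture DSP" documentation; building it requires wmcodecdsp.h and the corresponding uuid libraries, and the actual audio streaming through IMediaObject is omitted):

/* Create the Voice Capture DSP (a DMO) and select single-channel AEC mode.
 * COM must already be initialized by the caller. This only configures the
 * object; feeding microphone/speaker audio through it is a separate step. */
#define COBJMACROS
#include <windows.h>
#include <dmo.h>
#include <propsys.h>
#include <wmcodecdsp.h>

HRESULT create_aec_dmo(IMediaObject **out)
{
    IMediaObject   *dmo   = NULL;
    IPropertyStore *props = NULL;
    HRESULT hr;

    hr = CoCreateInstance(&CLSID_CWMAudioAEC, NULL, CLSCTX_INPROC_SERVER,
                          &IID_IMediaObject, (void **)&dmo);
    if (FAILED(hr))
        return hr;

    hr = IMediaObject_QueryInterface(dmo, &IID_IPropertyStore, (void **)&props);
    if (SUCCEEDED(hr)) {
        PROPVARIANT pv = {0};
        pv.vt   = VT_I4;
        pv.lVal = 0;   /* 0 = SINGLE_CHANNEL_AEC in the AEC_SYSTEM_MODE enum */
        hr = IPropertyStore_SetValue(props, &MFPKEY_WMAAECMA_SYSTEM_MODE, &pv);
        IPropertyStore_Release(props);
    }

    *out = dmo;   /* caller streams audio through IMediaObject or releases it */
    return hr;
}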
As far as I know, these features are not properly supported in DirectSound. A hardware device that has support for these features is usually equipped with a special processor/DSP and costs a lot more than a standard hardware device.

Low level audio programming

I wonder: does audio software like Cubase and Audacity use PlaySound calls?
Where can I learn about low-level audio programming? From the information I've found on the web, MCI seems to be the lowest-level audio API in Windows...
Thanks
Edit: I don't ask for information specific for Windows only.
There are several audio APIs to choose from.
The oldest and most widely supported is the waveOut API - look for functions starting with waveOut in MSDN.
A slightly newer one is DirectSound, which is geared more towards games; its main feature over waveOut is positional 3D sound, which professional audio software doesn't use (it was also supposed to have lower latency than waveOut, but that never really materialized).
For low-latency audio, there is ASIO. Professional audio apps support this API, but not all drivers do (it's a standard feature in professional sound cards, but not in gaming or on-board hardware). ASIO can provide much lower latency than waveOut or DirectSound.
Finally, there's the kernel streaming interface, which is the lowest-level audio interface still accessible from user-mode code. This is a direct pipe into Windows's internal mixer, which combines the output from all apps that are currently playing sound into the signal that gets sent to the sound card. It's scarcely documented, though.
There's a driver called ASIO4ALL (just google it) that provides ASIO support on sound cards without ASIO drivers by implementing the ASIO API on top of the kernel streaming interface.
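For orientation, here is a minimal, hedged waveOut sketch that plays one second of a 440 Hz tone through the default device; error handling is trimmed, and it links against winmm.lib:

/* Play one second of a 440 Hz sine wave via the waveOut API. */
#include <windows.h>
#include <mmsystem.h>
#include <math.h>

int main(void)
{
    static short samples[44100];
    for (int i = 0; i < 44100; i++)
        samples[i] = (short)(10000 * sin(2.0 * 3.14159265 * 440.0 * i / 44100.0));

    WAVEFORMATEX fmt = {0};
    fmt.wFormatTag      = WAVE_FORMAT_PCM;
    fmt.nChannels       = 1;
    fmt.nSamplesPerSec  = 44100;
    fmt.wBitsPerSample  = 16;
    fmt.nBlockAlign     = fmt.nChannels * fmt.wBitsPerSample / 8;
    fmt.nAvgBytesPerSec = fmt.nSamplesPerSec * fmt.nBlockAlign;

    HWAVEOUT hwo;
    waveOutOpen(&hwo, WAVE_MAPPER, &fmt, 0, 0, CALLBACK_NULL);

    WAVEHDR hdr = {0};
    hdr.lpData         = (LPSTR)samples;
    hdr.dwBufferLength = sizeof(samples);
    waveOutPrepareHeader(hwo, &hdr, sizeof(hdr));
    waveOutWrite(hwo, &hdr, sizeof(hdr));

    Sleep(1200);   /* crude wait; a real app would use a callback or event */

    waveOutUnprepareHeader(hwo, &hdr, sizeof(hdr));
    waveOutClose(hwo);
    return 0;
}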
I'm a little late to the game here, but I posted a Windows API history last week that might add a little more context. The choice of API really depends on your needs. If you want to avoid 3rd party libraries, it really only comes down to MME, XAudio2, and Core Audio (WASAPI).
A Brief History of Windows Audio APIs
Hope this helps!
Actually, if you are looking for more than Windows-only output support, then the best way to start is to review Phil Burk's PortAudio, available as of this writing at http://www.portaudio.com/.
ASIO is a good quality interface, but it's proprietary and owned by Steinberg.
There are many lower-level interfaces to audio output than MCI in modern Windows. These include, at least, DirectSound, XAudio and WASAPI.
I recommend avoiding the Windows APIs as much as possible, and learning PortAudio instead.
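To show how little code a portable output path takes, here is a hedged sketch of PortAudio's callback model (assuming PortAudio v19, linked with -lportaudio): the library calls your function whenever the device needs more samples.

/* Play a 440 Hz tone for two seconds using PortAudio's default output device. */
#include <math.h>
#include <portaudio.h>

static int sine_callback(const void *input, void *output,
                         unsigned long frames,
                         const PaStreamCallbackTimeInfo *time_info,
                         PaStreamCallbackFlags status, void *user)
{
    static double phase = 0.0;
    float *out = (float *)output;
    (void)input; (void)time_info; (void)status; (void)user;

    for (unsigned long i = 0; i < frames; i++) {
        out[i] = (float)(0.2 * sin(phase));           /* mono float samples */
        phase += 2.0 * 3.14159265 * 440.0 / 44100.0;
    }
    return paContinue;
}

int main(void)
{
    PaStream *stream;
    Pa_Initialize();
    Pa_OpenDefaultStream(&stream, 0, 1, paFloat32, 44100, 256,
                         sine_callback, NULL);
    Pa_StartStream(stream);
    Pa_Sleep(2000);                                   /* play for two seconds */
    Pa_StopStream(stream);
    Pa_CloseStream(stream);
    Pa_Terminate();
    return 0;
}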

How to programmatically use the mobile phone's IrDA to remote control a media player?

Which API or library on which mobile OS should be used when one needs to write code that uses the phone's IrDA port to create the necessary impulses to remote-control consumer electronics, e.g. an HDD media player?
Is a certain mobile OS perhaps better suited for that kind of application than others?
First you need to know that IrDA is not the best choice for remote control. It can be done, but IrDA is by design high-speed/low-range; you can emulate low speeds, but the ranges (IMO) are far from practical usage (a Nokia E50 is able to control a digital camera shutter from 2-3 m... with very, very careful aiming). The amount of hacking needed to achieve this is shown here; you basically need to trick IrDA into sending the correct impulses with the correct frequency.
The second thing is that CIR remote control is not as simple as you might think. There are countless standards that differ in carrier frequency, modulation, wavelength, command codes and so on. You need to know what you want to support. The LIRC site can be very helpful in determining that: http://lirc.sourceforge.net/remotes/. An approachable explanation of what it all means is available here: http://www.sbprojects.com/knowledge/ir/ir.htm
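To give a feel for what one of those standards looks like, here is a small illustrative encoder for the NEC protocol (one of the formats documented on the sbprojects page above). The address/command values are hypothetical, and it only builds the mark/space timing list; actually emitting it through an IrDA port at a ~38 kHz carrier is the hacky part described above.

/* Build the mark/space timing list (in microseconds) for one NEC IR frame:
 * 9 ms leading burst, 4.5 ms space, then 32 bits (address, ~address,
 * command, ~command, LSB first), each bit a 563 us burst followed by a
 * short (0) or long (1) space, plus a trailing burst. */
#include <stdio.h>
#include <stdint.h>
#include <stddef.h>

#define NEC_HDR_MARK   9000
#define NEC_HDR_SPACE  4500
#define NEC_BIT_MARK    563
#define NEC_ZERO_SPACE  563
#define NEC_ONE_SPACE  1688

static size_t nec_encode(uint8_t address, uint8_t command, unsigned timings[68])
{
    uint32_t frame = (uint32_t)address
                   | ((uint32_t)(uint8_t)~address << 8)
                   | ((uint32_t)command << 16)
                   | ((uint32_t)(uint8_t)~command << 24);
    size_t n = 0;

    timings[n++] = NEC_HDR_MARK;
    timings[n++] = NEC_HDR_SPACE;
    for (int bit = 0; bit < 32; bit++) {
        timings[n++] = NEC_BIT_MARK;
        timings[n++] = ((frame >> bit) & 1) ? NEC_ONE_SPACE : NEC_ZERO_SPACE;
    }
    timings[n++] = NEC_BIT_MARK;   /* trailing burst */
    return n;                      /* 67 entries */
}

int main(void)
{
    unsigned t[68];
    size_t n = nec_encode(0x04, 0x08, t);   /* hypothetical address/command */
    for (size_t i = 0; i < n; i++)
        printf("%-5s %u us\n", (i % 2 == 0) ? "mark" : "space", t[i]);
    return 0;
}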
As for ready-made libraries and platforms... I honestly don't know. I've seen it done on PocketPC (Nevo, among others) and Symbian S60 (irRemote). I haven't seen a working J2ME app yet.
The last time I needed an IR remote, I hacked it together using an IR diode, an AVR ATtiny and a surprisingly short piece of assembly :)
