4K MJPEG Camera Video Preview with delay (on Windows 10)

I'm trying to preview the video stream from a 4K camera (Brio) in my application. The application uses DirectShow to open the camera and receive frames. The filter configuration is shown in the image below.
The problem is with high resolutions (i.e. 4096x2160). At 4096x2160, both GraphEdit and my application show a noticeable delay when previewing the video stream.
I'm testing this on Windows 10. Note that the preinstalled Windows 10 Camera application works perfectly at this resolution. I've also tried the same thing with a UWP sample using the MediaCapture API, but the problem is the same.
What am I missing?

The Windows 10 preinstalled Camera application does not use DirectShow; it uses a completely different code path based on the Media Foundation API and is overall more efficient, in JPEG decompression in particular. That is, you cannot directly compare your DirectShow-based graph to what the Windows Store Camera app is doing.
In your situation, the MJPEG Decompressor filter is an outdated piece of software that cannot keep up with this resolution and is the bottleneck. Also, for live video a DirectShow graph needs a Smart Tee filter.
Performance-wise, I would recommend building the media pipeline on Media Foundation, even though it is more difficult and comes with less documentation and fewer samples.
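As a point of reference, the UWP MediaCapture API the question already mentions rides on the same Media Foundation pipeline the Camera app uses. Below is a minimal preview sketch that explicitly selects the 4K MJPEG format; the CaptureElement named PreviewElement and the "MJPG" subtype string are assumptions, not something from the question:

```csharp
using System;
using System.Linq;
using Windows.Media.Capture;
using Windows.Media.MediaProperties;

// Inside an async method of a UWP app with webcam capability declared.
var capture = new MediaCapture();
await capture.InitializeAsync();

// Find the 4096x2160 MJPEG format among the available preview formats.
var props = capture.VideoDeviceController
    .GetAvailableMediaStreamProperties(MediaStreamType.VideoPreview)
    .OfType<VideoEncodingProperties>()
    .FirstOrDefault(p => p.Width == 4096 && p.Height == 2160
                      && string.Equals(p.Subtype, "MJPG", StringComparison.OrdinalIgnoreCase));

if (props != null)
{
    await capture.VideoDeviceController.SetMediaStreamPropertiesAsync(
        MediaStreamType.VideoPreview, props);
}

// PreviewElement is assumed to be a CaptureElement in the XAML page.
PreviewElement.Source = capture;
await capture.StartPreviewAsync();
```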

Related

Live streaming from UWP to Linux/Python Server

I have a UWP app that captures a live video stream (webcam), encodes it in H.264, and sends it through a TCP socket (on a local network; I need high performance) to a Linux device.
Is there a way to do this? I need the video not for playing it but for extracting single frames. I could do that with OpenCV, but it requires a local video file, whereas I'm working with a live stream.
I would send photos instead of a video stream if the time needed to capture one were acceptable, but it takes about 250 ms.
Is RTP required? Does UWP (Windows) provide a way to achieve this?
Thank you
P.S.: The UWP app runs on a HoloLens.
You can use WebRTC to transmit live video from the HoloLens to virtually any target. That's probably the easiest way to do it without going really low-level.
For an introduction, grab this repo and try the sample app, which runs perfectly on the HoloLens: https://github.com/webrtc-uwp/PeerCC/tree/e95f231e1dc9c248ca2ffa040276b8a1265da145/Client
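If the Linux side only needs individual frames, another option worth knowing about is the MediaFrameReader API, which delivers preview frames much faster than the ~250 ms per-photo capture mentioned in the question; you can then serialize each frame over a StreamSocket yourself. A rough sketch, not tested on HoloLens, with the serialization step left as a comment:

```csharp
using System;
using System.Linq;
using Windows.Media.Capture;
using Windows.Media.Capture.Frames;

// Inside an async method of a UWP app with webcam capability declared.
var capture = new MediaCapture();
await capture.InitializeAsync(new MediaCaptureInitializationSettings
{
    StreamingCaptureMode = StreamingCaptureMode.Video,
    // Keep frames in CPU memory so SoftwareBitmap is populated.
    MemoryPreference = MediaCaptureMemoryPreference.Cpu
});

// Pick the first color (RGB) frame source the device exposes.
var source = capture.FrameSources.Values
    .First(s => s.Info.SourceKind == MediaFrameSourceKind.Color);

var reader = await capture.CreateFrameReaderAsync(source);
reader.FrameArrived += (sender, args) =>
{
    using (var frame = sender.TryAcquireLatestFrame())
    {
        var bitmap = frame?.VideoMediaFrame?.SoftwareBitmap;
        if (bitmap != null)
        {
            // Serialize the pixel data and write it to your StreamSocket here.
        }
    }
};
await reader.StartAsync();
```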

Capturing microphone audio using Naudio + WASAPI?

I am looking for an example code on how to capture microphone audio using Naudio + WASAPI.
(I am not interested in direct-to-disk recording; what I need is to process the input buffer in real time in order to apply some audio effects.)
I've searched a lot, but could not find any decent sample online.
Can you please help?
P.S. BASS library and C# examples are welcome as well!
The NAudio source code comes with a demo app that shows how to capture audio using WASAPI. Look in NAudioDemo\RecordingDemo\RecordingPanel.cs.
MSDN has a lot of code samples; while they don't cover NAudio, a few of them show in detail how to use the Windows Audio Session API.
Since WASAPI is a native-only API, there are sample projects that show you how to use it from a native-only app (Here) as well as samples that show you how to build a native component wrapping the API for consumption from a C# application. I couldn't find the direct link to the C#/C++ sample, but it's included in the Windows 8 App Samples package. There's also the option of writing a managed wrapper for the API altogether, but unless you enjoy pain and are looking for an adventure in marshaling, I wouldn't recommend it...
If you're developing for Windows Phone, there's a VoIP sample in the WP8 SDK that covers how to capture and render PCM audio data using WASAPI.
As Mark pointed out, the size of the PCM data buffer may vary over time, partly because WASAPI is a low-latency audio API and therefore keeps as little abstraction as possible between the consumer (your app) and the producer (the driver). Nothing stops you from doing some fixed-size buffering of your own, though, and only passing the data on to your app when your buffer is full; see the sketch below.
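A minimal sketch of that fixed-size buffering pattern with NAudio's WasapiCapture (the 16384-byte block size and the ProcessBlock hook are arbitrary placeholders, not part of NAudio):

```csharp
using System;
using NAudio.CoreAudioApi;
using NAudio.Wave;

class MicCapture
{
    static void Main()
    {
        // Default capture device, shared mode.
        var capture = new WasapiCapture();

        // Accumulate incoming PCM into a fixed-size block before processing,
        // since DataAvailable delivers buffers of varying sizes.
        var block = new byte[16384];
        int filled = 0;

        capture.DataAvailable += (s, e) =>
        {
            int offset = 0;
            while (offset < e.BytesRecorded)
            {
                int toCopy = Math.Min(block.Length - filled, e.BytesRecorded - offset);
                Buffer.BlockCopy(e.Buffer, offset, block, filled, toCopy);
                filled += toCopy;
                offset += toCopy;

                if (filled == block.Length)
                {
                    ProcessBlock(block, capture.WaveFormat); // apply your effect here
                    filled = 0;
                }
            }
        };

        capture.StartRecording();
        Console.WriteLine("Recording... press Enter to stop.");
        Console.ReadLine();
        capture.StopRecording();
    }

    // Placeholder for the real-time effect processing the question asks about.
    static void ProcessBlock(byte[] pcm, WaveFormat format)
    {
    }
}
```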

MKV, MP4, or FLV for web video streaming

I'm currently on the fence about which container I should use for the videos I put on my website.
I recently started uploading videos of gameplay/walkthroughs and saw the need for a container that could hold HD video without limitations on file size, codecs (AAC or AVC), or resolution (in the future I want to be able to support 5K video), plus 5.1 Dolby Digital and higher audio. Of course I don't expect 5K to stream efficiently; I just want it to be available.
This is where the confusion started.
I currently use the .flv container because people say it is better all around: less resource-consumptive, widely used, and supporting the common codecs. The problem with this is simple: it cannot hold the HD content I want to show, 5.1 Dolby audio, or unlimited file sizes.
MP4 is everything I need, but I've heard it can be slow to respond, pseudo-streaming modules are not widely supported, and I don't have time to change containers every time someone wants to update to .mp5, 6, 12, etc.
That's where I started considering .mkv as the container. .MKV also supports everything I want (HD, 3D), all codecs, and limitless file attributes, and it's close to universal. THE ONLY problem is that it cannot be streamed.
I know this is a programmers' site, but maybe in the future, since web connections can only improve, I or someone else could write an Apache module for .mkv streaming. I don't know where to find the source for an Apache module, so I cannot do it at this time.
I'm leaning between .flv and .mkv. I'm not really considering .mp4, because if I want to be future-proof I need .mkv, and if I'm not concerned about the future or updates I should stay with .flv.
What do you all think? Would it really be so difficult to write an .mkv streaming module?
Excluding web streaming, which of the three would be better all around in terms of video quality (AAC/AVC), file size limits, universality, web support, etc.?
Thanks,
You can use the Windows Media streaming platform; it will take care of those problems for you. However, MP4 with H.264 video and AAC audio, streamed/played with Flash, is also a good option.
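On the "MP4 can be slow to respond" concern: that usually comes from the moov atom (the index) being written at the end of the file, which forces players to download the whole file before playback can start. If you do go with MP4, one common fix, assuming an ffmpeg-based toolchain, is to remux with the index moved to the front:

```
ffmpeg -i input.mkv -c copy -movflags +faststart output.mp4
```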

Low audio quality with Microsoft Translator

I'm working on a desktop application built with XNA. It has a text-to-speech feature, and I'm using the Microsoft Translator V2 API to do the job. More specifically, I'm using the Speak method (http://msdn.microsoft.com/en-us/library/ff512420.aspx), and I play the audio with the SoundEffect and SoundEffectInstance classes.
The service works fine, but I'm having some issues with the audio: the quality is not very good and the volume is not loud enough.
I need a way to increase the volume programmatically (I've already tried some basic solutions from CodeProject, but the algorithms are not very good and the resulting audio is of very low quality), or maybe use another API.
Are there good algorithms for boosting audio volume programmatically? Are there other good text-to-speech APIs out there with better audio quality and WAV support?
Thanks in advance
If you are doing offline processing of audio, you can try Audacity; it has very good tools for offline audio processing. If you are processing real-time streaming audio, you can try SoliCall Pro, which creates a virtual audio device and filters all the audio it captures.
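For the purely programmatic route the question asks about, a simple linear gain with clipping over 16-bit PCM samples is the usual first step. This is a minimal sketch, assuming the Speak response is saved as a canonical 16-bit PCM WAV with a 44-byte header; for larger boosts, a compressor or normalizer will sound better than hard clipping:

```csharp
using System;

static class Amplify
{
    // Applies a linear gain to 16-bit little-endian PCM samples,
    // clamping to the short range to avoid wrap-around distortion.
    public static void ApplyGain(byte[] pcm, int offset, float gain)
    {
        for (int i = offset; i + 1 < pcm.Length; i += 2)
        {
            short sample = (short)(pcm[i] | (pcm[i + 1] << 8));
            int boosted = (int)(sample * gain);
            if (boosted > short.MaxValue) boosted = short.MaxValue;
            if (boosted < short.MinValue) boosted = short.MinValue;
            pcm[i] = (byte)(boosted & 0xFF);
            pcm[i + 1] = (byte)((boosted >> 8) & 0xFF);
        }
    }
}

// Usage: wavBytes holds the Speak response saved as a canonical
// 16-bit PCM WAV; 44 is the standard header size (an assumption).
// Amplify.ApplyGain(wavBytes, 44, 1.8f);
```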

Flash + RTMFP + Stratus: Video Quality on Linux

I'm developing a video-chat-like application using Flash RTMFP and Stratus. So far I'm having good success: I can build from source, tweak settings, and get video and audio in both directions.
There's one glaring problem I haven't been able to solve, however: when using a client on a Linux machine, the video received by the other end looks very poor. It's blocky and pixelated, almost as if it's rendering 160x120 in a much larger frame. When sending from a Mac (my other dev machine), the video looks quite good.
I've tried modifying all the settings I can think of -- frame rate, "quality", size, audio settings -- with no discernible improvement. I've tried running it as a local file and from a remote server. The network where I'm working is extremely fast, so that shouldn't be an issue.
Is there anything else I can try? Any suggestions or ideas are greatly appreciated.
Many thanks!
Bad camera or bad camera driver?
Stratus does not change the video encoding; it is simply another variation of the RTMFP protocol for transferring exactly the same compressed stream.
One way to check whether Stratus really plays any role in this is to stream the same content through Adobe Flash Media Server; the development version is free from adobe.com.
I have built Stratus applications and have not experienced any degradation of video quality compared to a Flash Media Server solution. In fact, when the camera quality is set to 100, you won't notice the difference between the raw camera video and the compressed stream when using loopback mode, apart from a possibly limited framerate if you specify bandwidth (the three are intimately related -- bandwidth, framerate, quality -- as per the documentation of Camera.setQuality and Camera.setMode).
