How to implement real-time video encoding using libde265 on Linux

I've been reading a lot about the H.265 encoder, but I'm not really sure how to start a C or Python application that encodes a video stream in real time using the H.265 encoder from libde265. I have already installed the library, and I guess I could use OpenCV to get the input video stream from a USB camera. Has anyone worked on this type of application?

If you are not particular about using libde265 (which I am not familiar with), please give GStreamer a shot. GStreamer has lots of plugins and examples for standard tasks like encoding a stream, and it also integrates well with native development.
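For instance, a minimal sketch in C of such a pipeline might look like the following. The device path, the x265enc element (from gst-plugins-bad), its low-latency settings, and the Matroska file output are my assumptions for illustration, not part of a tested setup:

```c
/* A minimal sketch: encode a USB camera feed to H.265 with GStreamer.
 * Assumes the x265 plugin from gst-plugins-bad is installed.
 * Build with: gcc cam265.c $(pkg-config --cflags --libs gstreamer-1.0) */
#include <gst/gst.h>

int main(int argc, char *argv[])
{
    gst_init(&argc, &argv);

    /* v4l2src reads the USB camera; x265enc does the H.265 encoding;
     * "speed-preset=ultrafast tune=zerolatency" trades quality for
     * real-time behaviour. */
    GstElement *pipeline = gst_parse_launch(
        "v4l2src device=/dev/video0 ! videoconvert ! "
        "x265enc speed-preset=ultrafast tune=zerolatency ! "
        "h265parse ! matroskamux ! filesink location=out.mkv",
        NULL);

    gst_element_set_state(pipeline, GST_STATE_PLAYING);

    /* Block until an error or end-of-stream message is posted on the bus. */
    GstBus *bus = gst_element_get_bus(pipeline);
    GstMessage *msg = gst_bus_timed_pop_filtered(
        bus, GST_CLOCK_TIME_NONE, GST_MESSAGE_ERROR | GST_MESSAGE_EOS);

    if (msg != NULL)
        gst_message_unref(msg);
    gst_object_unref(bus);
    gst_element_set_state(pipeline, GST_STATE_NULL);
    gst_object_unref(pipeline);
    return 0;
}
```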
I have worked on a project similar to yours, where we did H.264 encoding and decoding, along with a few other things, on a live camera feed.
Please find the video of a similar application here: https://www.youtube.com/watch?v=JcpkGDpfVU0
Just my two cents!

Related

Displaying an MJPEG/H.264 live stream (with additional information) on a web page?

Right now my goal is to grab a video stream from an IP surveillance camera and display it on a web page.
The camera can encode the stream as either H.264 or MJPEG, and it transmits the stream over the RTSP protocol.
The stream has to be available on several kinds of devices (mainly computers, Android smartphones, and iPhones).
According to my findings, the best option (in terms of latency) is to transmit the video frames over a WebSocket:
http://phoboslab.org/log/2013/09/html5-live-video-streaming-via-websockets.
Almost all the implementations of this mechanism I've found are based on MJPEG, since it's easier to get the video frames.
There's also an H.264 player, https://github.com/131/h264-live-player (based on https://github.com/mbebenita/Broadway), which I didn't manage to run (I would appreciate any help in that respect).
Now the first question: is it worth trying to work with H.264 (since it saves a lot of bandwidth), or would the H.264 decoding process introduce too much latency?
I would also like to ask whether anyone knows a better solution than the one I'm trying to implement.
Finally, by "additional information" I mean that I might want to include some additional data associated with certain video frames (something like subtitles or telemetry data).

How to control Kurento audio recorder quality

We recently built a demo application using Kurento Media Server to record applicants' video interviews, but the audio quality is poor: some audio is not recognizable, and some of it has high-pitched noise. We've tested it on several models of PC and Mac, so this should not be a device problem.
We're using a RecorderEndpoint with the media profile MediaProfileSpecType.WEBM, and all other settings remain at their defaults.
To fix this problem, we tried:
1. Upgrading to Kurento 6.2.1, which uses Opus as the audio encoder.
2. Calling setMaxOutputBitrate on the recorder; we don't see any improvement, and we don't know which bitrate range can be used.
3. Changing the SDP offer to set up a higher audio bitrate for Opus, but we don't know where to modify it.
None of this has worked so far, so please tell us where to look.
Thanks.
Please check this recording tutorial; the audio should be fine. Just make sure you are only sending audio, not video. That should help.
If the audio is not being recorded correctly, I would try to hear what's coming out of your box through your browser. Try running the hello-world tutorial with a pair of headphones connected to your box, so you don't get echoes.
About #2: if you want to raise the bitrate exchanged between the WebRTC endpoint and the recorder, you need to invoke the setOutputBitrate command on the WebRTC endpoint.

How to do audio stream processing in Linux (RPi) via C?

Hi,
I would like to build an audio effect into an RPi. This effect should be programmed in C. I am not familiar with the software audio interfaces on Linux: the ALSA interface looks very complicated, and PortAudio seems to be an alternative.
Any ideas (maybe with a tutorial)?
With some work you can also get OpenAL to stream and render audio using the C language; then you could perform your processing in that context.
Node.js is available on the RPi and offers audio modules.
PortAudio seems the best approach. A good tutorial can be found here:
http://portaudio.com/docs/v19-doxydocs/tutorial_start.html
Sometimes the interface configuration needs to be done manually.
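To give an idea of what that looks like, here is a minimal sketch of a PortAudio pass-through with a trivial gain "effect" applied in the callback. This is my own illustration (the sample rate, mono channels, and gain value are arbitrary choices), not code from the tutorial:

```c
/* A minimal PortAudio pass-through with a simple gain "effect".
 * Build with: gcc effect.c -lportaudio */
#include <stdio.h>
#include <portaudio.h>

#define SAMPLE_RATE 44100
#define FRAMES      256

/* Called by PortAudio on a high-priority thread for every buffer:
 * copy input to output, applying a gain as the "effect". */
static int effectCallback(const void *input, void *output,
                          unsigned long frameCount,
                          const PaStreamCallbackTimeInfo *timeInfo,
                          PaStreamCallbackFlags statusFlags,
                          void *userData)
{
    const float *in = (const float *)input;
    float *out = (float *)output;
    for (unsigned long i = 0; i < frameCount; i++)
        out[i] = in ? 0.5f * in[i] : 0.0f;  /* halve the volume */
    return paContinue;
}

int main(void)
{
    PaStream *stream;
    if (Pa_Initialize() != paNoError) return 1;

    /* One mono input and one mono output channel, 32-bit float
     * samples, on the default devices. */
    if (Pa_OpenDefaultStream(&stream, 1, 1, paFloat32, SAMPLE_RATE,
                             FRAMES, effectCallback, NULL) != paNoError)
        return 1;

    Pa_StartStream(stream);
    printf("Processing... press Enter to stop.\n");
    getchar();

    Pa_StopStream(stream);
    Pa_CloseStream(stream);
    Pa_Terminate();
    return 0;
}
```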

Capture audio stream from Xtion Pro with OpenNI2?

Has anyone tried to capture the audio stream from the Xtion Pro using the OpenNI2 library?
I searched the Internet and found the audio API in the OpenNI2 source code (Audio API).
It seems that it can only "play" audio, not capture an audio stream.
And it doesn't demonstrate how to use those APIs.
Is there any example code that records the audio stream from the Xtion Pro using OpenNI2?
By the way, my OpenNI version is 2.2.0.33.
Thanks for anyone's help :)
After surveying a lot of information, I found that OpenNI2 doesn't support the audio stream anymore. Hence, someone suggested that I use another library to capture the audio stream from the Xtion Pro.
Now I'm using PortAudio to deal with the audio stream. It's quite a powerful tool and easy to use.
Moreover, it's a cross-platform library that supports Windows, Mac OS, and Unix using C/C++, and the documentation is clear and the example code is understandable.
So, if any newbies like me want to use the Xtion Pro to capture an audio stream, I recommend this library.
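For example, capturing samples with PortAudio's blocking-read API can be as simple as the sketch below (my own illustration; the sample rate, buffer size, and duration are arbitrary):

```c
/* Capture a few seconds of microphone audio with PortAudio's
 * blocking-read API. Build with: gcc capture.c -lportaudio */
#include <stdio.h>
#include <portaudio.h>

#define SAMPLE_RATE 44100
#define FRAMES      512
#define SECONDS     3

int main(void)
{
    float buffer[FRAMES];
    PaStream *stream;

    if (Pa_Initialize() != paNoError) return 1;

    /* Mono float input on the default device; a NULL callback
     * selects the blocking read/write interface. */
    if (Pa_OpenDefaultStream(&stream, 1, 0, paFloat32, SAMPLE_RATE,
                             FRAMES, NULL, NULL) != paNoError)
        return 1;

    Pa_StartStream(stream);
    for (long i = 0; i < SECONDS * SAMPLE_RATE / FRAMES; i++) {
        Pa_ReadStream(stream, buffer, FRAMES);  /* blocks until full */
        /* ... hand `buffer` to whatever needs the samples ... */
    }

    Pa_StopStream(stream);
    Pa_CloseStream(stream);
    Pa_Terminate();
    return 0;
}
```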
Any other suggestions are very welcome :)

Capturing microphone audio using NAudio + WASAPI?

I am looking for example code showing how to capture microphone audio using NAudio + WASAPI.
(I am not interested in direct-to-disk recording; what I need is to process the input buffer in real time in order to apply some audio effects.)
I've searched a lot but could not find any decent sample online.
Can you please help?
P.S. BASS library and C# examples are welcome as well!
The NAudio source code comes with a demo app that shows how to capture audio using WASAPI. Look in NAudioDemo\RecordingDemo\RecordingPanel.cs.
MSDN has a lot of code samples; though they don't cover NAudio, a few of them show in detail how to use the Windows Audio Session API.
Since WASAPI is a native-only API, there are sample projects that show you how to use the API from a native-only app (here), as well as samples that show you how to build a native component that wraps the API for consumption from a C# application. I couldn't find the direct link to the C#/C++ sample, but it's included in the Windows 8 App Samples package. Then there's the option of writing a managed wrapper for the API altogether, but unless you enjoy pain and are looking for an adventure in marshaling, I wouldn't recommend it...
If you're developing for Windows Phone, there's a VOIP sample in the WP8 SDK that covers how to capture and render PCM audio data using WASAPI.
As Mark pointed out, the size of the PCM data buffer might differ over time; this is in part because WASAPI is a low-latency audio API and therefore puts as little abstraction as possible between the consumer (your app) and the producer (the driver). However, nothing stops you from doing some fixed-size buffering of your own and only passing the data on to your app when your own buffer is full.
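For reference, a bare-bones native capture loop looks roughly like the sketch below. This is my own illustration of the standard WASAPI pattern (shared mode, default microphone, error handling omitted), not one of the MSDN samples; note how the packet size reported by GetNextPacketSize varies from call to call:

```c
/* Native WASAPI capture loop in C (shared mode, default microphone).
 * Error handling omitted for brevity. Build with MSVC: cl cap.c ole32.lib */
#define COBJMACROS
#include <windows.h>
#include <initguid.h>      /* defines the CLSIDs/IIDs used below */
#include <mmdeviceapi.h>
#include <audioclient.h>

int main(void)
{
    CoInitializeEx(NULL, COINIT_MULTITHREADED);

    /* Get the default capture (microphone) endpoint. */
    IMMDeviceEnumerator *enumr = NULL;
    CoCreateInstance(&CLSID_MMDeviceEnumerator, NULL, CLSCTX_ALL,
                     &IID_IMMDeviceEnumerator, (void **)&enumr);
    IMMDevice *dev = NULL;
    IMMDeviceEnumerator_GetDefaultAudioEndpoint(enumr, eCapture,
                                                eConsole, &dev);

    /* Activate an audio client in shared mode with the mix format. */
    IAudioClient *client = NULL;
    IMMDevice_Activate(dev, &IID_IAudioClient, CLSCTX_ALL, NULL,
                       (void **)&client);
    WAVEFORMATEX *fmt = NULL;
    IAudioClient_GetMixFormat(client, &fmt);
    IAudioClient_Initialize(client, AUDCLNT_SHAREMODE_SHARED, 0,
                            10000000 /* 1 s buffer, in 100 ns units */,
                            0, fmt, NULL);

    IAudioCaptureClient *capture = NULL;
    IAudioClient_GetService(client, &IID_IAudioCaptureClient,
                            (void **)&capture);
    IAudioClient_Start(client);

    /* Pull whatever packets the driver has produced; the packet size
     * (frame count) varies from call to call. */
    for (int i = 0; i < 100; i++) {
        UINT32 frames = 0;
        IAudioCaptureClient_GetNextPacketSize(capture, &frames);
        while (frames != 0) {
            BYTE *data;
            DWORD flags;
            IAudioCaptureClient_GetBuffer(capture, &data, &frames,
                                          &flags, NULL, NULL);
            /* ... process `frames` frames of PCM at `data` here ... */
            IAudioCaptureClient_ReleaseBuffer(capture, frames);
            IAudioCaptureClient_GetNextPacketSize(capture, &frames);
        }
        Sleep(10);
    }

    IAudioClient_Stop(client);
    CoTaskMemFree(fmt);
    IAudioCaptureClient_Release(capture);
    IAudioClient_Release(client);
    IMMDevice_Release(dev);
    IMMDeviceEnumerator_Release(enumr);
    CoUninitialize();
    return 0;
}
```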
