I come from a signal processing background. When I listen to audio from YouTube, I sometimes feel there is room to improve the audio quality at run time, with the adjustments adapting automatically to the audio content. In other words, it should be possible to write a generalized or feedback-based algorithm.
I know how to test my algorithm on offline audio (a recorded audio file), but I do not know how to interface it with YouTube audio. Is it possible to write something against a Windows API that activates whenever any audio plays on the machine?
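On the native side, WASAPI loopback capture is the usual Windows mechanism for recording whatever the machine is currently playing. As a hedged illustration of the overall idea, though, one way to prototype without a system-wide hook is in the browser: Chrome can capture a tab's audio via getDisplayMedia, and the Web Audio API can then run a processing chain over it. A rough TypeScript sketch (the compressor here is only a stand-in for a real feedback-based algorithm; tab-audio capture support is a Chrome-specific assumption):

    // Sketch: capture the audio of the tab playing YouTube and run it through
    // an adaptive Web Audio chain. The user must pick the tab and enable
    // "share tab audio" in the browser's picker.
    async function processTabAudio(): Promise<void> {
      const stream = await navigator.mediaDevices.getDisplayMedia({
        video: true,
        audio: true,
      });

      const ctx = new AudioContext();
      const source = ctx.createMediaStreamSource(stream);

      // Stand-in for the "feedback based" algorithm: an analyser that measures
      // the incoming content, feeding a compressor whose settings you could
      // re-tune from those measurements.
      const analyser = ctx.createAnalyser();
      const compressor = ctx.createDynamicsCompressor();
      compressor.threshold.value = -24; // dB, placeholder tuning
      compressor.ratio.value = 4;

      source.connect(analyser);
      analyser.connect(compressor);
      compressor.connect(ctx.destination);
    }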
Right now my goal is to grab a streaming video from an IP surveillance camera and display it on a web page.
The camera can encode the stream as either H.264 or MJPEG, and transmits it over the RTSP protocol.
The stream has to be available on several kinds of devices (mainly computers, Android smartphones, and iPhones).
According to my findings it seems like the best option for doing that (in terms of latency) is to transmit the frames of the video through a websocket:
http://phoboslab.org/log/2013/09/html5-live-video-streaming-via-websockets.
Almost all the implementations of this mechanism I've found are based on mjpeg since it's easier to get the video frames.
There's also an H.264 player: https://github.com/131/h264-live-player, based on https://github.com/mbebenita/Broadway, which I didn't manage to run (I would appreciate any help in that respect).
Now the first question is: is it worth trying to work with H.264 (since it saves a lot of bandwidth), or would the H.264 decoding process introduce too much latency?
I would also like to ask if anyone knows a better solution than the one I'm trying to implement.
Finally, when I say "additional information" I mean that I might want to include some additional data associated with certain video frames (something like subtitles or telemetry data).
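For reference, here is roughly the client-side mechanism I have in mind, as a TypeScript sketch. It assumes a hypothetical server at ws://example.com/stream that alternates a small JSON metadata message (the subtitles/telemetry) with a binary message containing one JPEG frame:

    // Sketch of an MJPEG-over-WebSocket player with per-frame metadata.
    const canvas = document.querySelector('canvas')!;
    const ctx2d = canvas.getContext('2d')!;
    const ws = new WebSocket('ws://example.com/stream'); // placeholder URL
    ws.binaryType = 'arraybuffer';

    let pendingMeta: { ts?: number; subtitle?: string } = {};

    ws.onmessage = async (ev: MessageEvent) => {
      if (typeof ev.data === 'string') {
        // Text message: metadata for the next frame.
        pendingMeta = JSON.parse(ev.data);
        return;
      }
      // Binary message: one JPEG frame. Decode and draw it.
      const blob = new Blob([ev.data], { type: 'image/jpeg' });
      const bitmap = await createImageBitmap(blob);
      ctx2d.drawImage(bitmap, 0, 0, canvas.width, canvas.height);
      if (pendingMeta.subtitle) {
        ctx2d.fillText(pendingMeta.subtitle, 10, canvas.height - 10);
      }
    };

An H.264 variant would look similar on the transport side, but each binary message would be fed to a decoder such as Broadway instead of createImageBitmap.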
I am working on a project for large-group broadcasting in WebRTC. Since it needs to work on iOS and Android devices, I am using Kurento and the iOSWEBRTC Cordova plugin to build this. I am curious whether anyone can help improve my plan, or if there is an easier way to achieve this.
We need to have a video/audio conference with 5 people per room, but we also need to be able to show that video to large audiences. My idea is to use Kurento as a middleman and capture the streams into .webm files for live playback while the conference is going on.
Is there a better way to achieve this? And how would I play back the .webm file as it is being recorded? It needs to update and keep playing as more video arrives; basically a live-stream copy of the camera.
I am unsure whether this is the best route, but I figured it would reduce the bandwidth compared to my original idea, which was:
a 5-person conference for the broadcasters, with X viewers each downloading those streams directly. However, I realized the upload bandwidth requirement would be far too high, which is why I settled on this idea. Additionally, the viewers do not have to see the conference in real time the way the broadcasters do: the broadcasters need to be able to see and communicate with each other simultaneously, while the viewers can be a few seconds behind.
TL;DR:
Trying to make a 5-person video conference with video/audio capture, then live stream that to the viewers' players. This would avoid the bandwidth limitations of many PeerConnections. Would this work, or am I forgetting something?
You'll need to look into using an SFU or MCU. An MCU is very costly, but multiplexes video streams and sends down a single video stream to all peers, and can also record that stream. An SFU is a single point of receipt of all streams, and selectively forwards them to clients. It could record off individual streams and then you could do post-processing to make a single recording out of the multiple recorded streams. A mesh network of connections really doesn't work for this use case.
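If you go the Kurento route the question mentions, recording each broadcaster's stream at the server might look roughly like the sketch below (TypeScript/Node). The method names follow the kurento-client Node tutorials, so treat this as a sketch rather than working code; signaling and ICE candidate exchange with the browser are omitted entirely, and the URIs are placeholders.

    // Rough sketch: record one broadcaster's WebRTC stream to a .webm file on
    // the Kurento media server so it can be post-processed or served to viewers.
    const kurentoClient = require('kurento-client') as any; // no official TS typings

    async function recordBroadcaster(sdpOffer: string): Promise<string> {
      const client = await kurentoClient('ws://localhost:8888/kurento'); // assumed KMS URI
      const pipeline = await client.create('MediaPipeline');

      const webRtc = await pipeline.create('WebRtcEndpoint');
      const recorder = await pipeline.create('RecorderEndpoint', {
        uri: 'file:///tmp/broadcaster-1.webm', // assumed path on the KMS host
      });

      await webRtc.connect(recorder);                 // camera -> recorder
      const sdpAnswer = await webRtc.processOffer(sdpOffer);
      await webRtc.gatherCandidates();
      await recorder.record();

      return sdpAnswer; // hand back to the broadcaster over your signaling channel
    }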
I am trying to build a website and mobile apps (iOS, Android) for an internet radio station.
Website users broadcast their music or radio shows, and mobile users just listen to the stations and chat with other listeners.
I searched for a week and made a prototype with the Wowza engine (using HLS and RTMP) and a SHOUTcast server on Amazon EC2.
Using HLS there is a delay of about 5 seconds, while RTMP and SHOUTcast have about a 2-second delay.
Based on this result, I think I should choose RTMP or SHOUTcast.
But I am not sure RTMP or SHOUTcast is the best protocol. :(
What protocol should I choose?
Do I need to provide multiple protocols to cover all platforms?
This is a very broad question. Let's start with the distribution protocol.
Streaming Protocol
HLS has the advantage of allowing users to get the stream in the bitrate that is best for their connection. Clients can scale up/down seamlessly without stopping playback. This is particularly important for video, but for audio even mobile clients are capable of playing 128kbit streams in most areas. If you intend to have a variety of bitrates available and want to change quality mid-stream, then HLS is a good protocol for you.
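For illustration, the "menu" the client picks from is just a master playlist listing variants at different bitrates; the variant paths below are made up:

    #EXTM3U
    #EXT-X-STREAM-INF:BANDWIDTH=64000,CODECS="mp4a.40.2"
    audio_64k/index.m3u8
    #EXT-X-STREAM-INF:BANDWIDTH=128000,CODECS="mp4a.40.2"
    audio_128k/index.m3u8
    #EXT-X-STREAM-INF:BANDWIDTH=256000,CODECS="mp4a.40.2"
    audio_256k/index.m3u8

The player measures its own throughput and simply starts requesting segments from a different variant playlist when conditions change.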
The downside of HLS is compatibility. iOS supports it, but that's about it. Android has HLS support but it is still buggy. (Maybe in another year or two once all the Android 3.0 folks are gone, this won't be as much of an issue.) JWPlayer has some hacks to make HLS work in Flash for desktop users.
I wouldn't bother with RTMP unless you're only concerned with Flash users.
Pure progressive streaming with HTTP is the route I almost always choose to go. Everything can play it. (Even my Palm Pilot's default media player from 12 years ago.) It's simple to implement and well understood.
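To give an idea of how simple this is, here is a minimal sketch of a progressive HTTP streamer in TypeScript/Node. The encoder piping MP3 into stdin is an assumption; a production server would also need buffering (see the latency section below) and proper connection handling.

    import * as http from 'http';

    // Minimal progressive-HTTP audio streamer: whatever MP3 bytes arrive on
    // stdin (e.g. piped from an encoder) are pushed to every connected listener.
    const listeners = new Set<http.ServerResponse>();

    process.stdin.on('data', (chunk: Buffer) => {
      for (const res of listeners) res.write(chunk);
    });

    http.createServer((req, res) => {
      // No Content-Length: the response just keeps going for as long as the
      // listener stays connected.
      res.writeHead(200, { 'Content-Type': 'audio/mpeg' });
      listeners.add(res);
      req.on('close', () => listeners.delete(res));
    }).listen(8000);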
SHOUTcast is effectively HTTP, but a poorly implemented version that has compatibility issues, particularly on mobile devices. It has a non-standard status line in its response which breaks a lot of clients. Icecast is a good alternative, and is what I would recommend for production use today. As another option, I have created my own streaming service called AudioPump which is HTTP as well, and has been specifically built to fix compatibility with oddball mobile clients, such as native Android players on old hardware. It isn't generally available yet, but you can contact me at brad#audiopump.co if you want to try it.
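To make the compatibility point concrete: SHOUTcast v1 replies with the non-standard status line "ICY 200 OK" instead of "HTTP/1.1 200 OK", and that alone is enough to break strict clients. If you are stuck with SHOUTcast, one hedged workaround is a tiny proxy that rewrites just that first line (TypeScript/Node sketch; host and ports are placeholders):

    import * as net from 'net';

    // Sketch: sit in front of a SHOUTcast v1 server and rewrite its
    // non-standard "ICY 200 OK" status line so picky clients accept the stream.
    net.createServer((client) => {
      const upstream = net.connect(8000, 'shoutcast.example.com'); // placeholders
      client.pipe(upstream);

      let fixed = false;
      upstream.on('data', (chunk: Buffer) => {
        if (!fixed) {
          fixed = true;
          // 'latin1' is byte-preserving, so the audio payload is untouched.
          const text = chunk
            .toString('latin1')
            .replace(/^ICY 200 OK/, 'HTTP/1.1 200 OK');
          client.write(Buffer.from(text, 'latin1'));
          return;
        }
        client.write(chunk);
      });
      upstream.on('end', () => client.end());
    }).listen(9000);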
Latency
You mentioned a latency of 2 seconds being desirable. If you're getting 2-second latency with SHOUTcast, something is wrong. You don't want latency that low, particularly if you're streaming to mobile clients. I usually start with a 20-second buffer at a minimum, which is flushed to the client as fast as it can receive it. This enables immediate starting of the stream playback (as it fills up the client-side buffer so it can begin decoding) while providing some protection against buffer underruns due to network conditions. It's not uncommon for mobile users to walk around the corner of a building and lose their nice signal quality. You want your stream to survive that as best as possible, so if you have already sent the data to cover the drop out, the user doesn't have to know or care that their connection became mediocre for a short period of time.
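A sketch of that burst-on-connect idea in TypeScript/Node (the 128 kbit/s figure and the data-flow hooks are assumptions; a real implementation would also align the buffer on MP3 frame boundaries):

    // Keep roughly the last 20 seconds of encoded audio in a rolling buffer and
    // flush it to each new listener immediately, so playback starts at once and
    // survives short network drop-outs.
    const BURST_BYTES = 20 * (128_000 / 8); // 20 s at an assumed 128 kbit/s

    let burst = Buffer.alloc(0);

    function onEncodedAudio(chunk: Buffer, listeners: Set<NodeJS.WritableStream>): void {
      burst = Buffer.concat([burst, chunk]);
      if (burst.length > BURST_BYTES) {
        burst = burst.subarray(burst.length - BURST_BYTES); // keep only the newest bytes
      }
      for (const l of listeners) l.write(chunk);
    }

    function onNewListener(listener: NodeJS.WritableStream, listeners: Set<NodeJS.WritableStream>): void {
      listener.write(burst); // flush the backlog as fast as the client will take it
      listeners.add(listener);
    }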
If you do require low latency, you're looking at the wrong technology entirely. For low latency, check out WebRTC.
You certainly can tweak your traditional internet radio setup to reduce latency, but rarely is that a good idea.
Codec
Codec choice is what will dictate your compatibility more than anything else. MP3 is easily the most compatible, and AAC isn't far behind. If you go with AAC, you get better quality audio for a given bitrate. Most folks use this to reduce their bandwidth bill.
There are licensing fees with MP3, and there may be with AAC depending on what you're using for a codec. Check with a lawyer. I am not one, and the licensing is extremely complicated.
Other codecs include Vorbis and Opus. If you can use Opus, do so as the licensing is wide open and you get good quality for the bandwidth. Client compatibility here though is the killer of Opus. (Maybe in a few years it will be better.) Vorbis is a mediocre codec, but is free and clear.
On the extreme end, I have some stations doing their streaming in FLAC. This is lossless audio quality, but you're paying for roughly 8x the bandwidth you would use for a medium-quality MP3 station. Compatibility for FLAC streamed over HTTP is not good at the moment, but it works alright in VLC.
It is very common to support multiple codecs for your streams. Depending on your budget, if you can't do that, you're best off with MP3.
Finally on encoding, don't go from a lossy codec to another lossy codec if you can help it. Try to get the output stream as close to the input as possible. If you re-encode audio, you lose quality every time.
Recording from Browser
You mentioned users streaming from a browser. I built something like this a couple years ago with the Web Audio API where the audio is captured and then encoded and sent off to Icecast/SHOUTcast servers. Check it out here: http://demo.audiopump.co:3000/ A brief explanation of how it works is here: https://stackoverflow.com/a/20850467/362536
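This is not the code behind the linked demo, but a rough sketch of the same browser-capture idea using the MediaRecorder API (TypeScript; the WebSocket relay endpoint is an assumption, and the relay itself, which repackages chunks for the streaming server, is not shown):

    // Sketch: grab the microphone, let the browser encode it (Opus in WebM
    // here), and ship the chunks over a WebSocket to a relay.
    async function startBrowserSource(): Promise<void> {
      const mic = await navigator.mediaDevices.getUserMedia({ audio: true });
      const ws = new WebSocket('wss://relay.example.com/source'); // placeholder

      const recorder = new MediaRecorder(mic, { mimeType: 'audio/webm;codecs=opus' });
      recorder.ondataavailable = (ev: BlobEvent) => {
        if (ev.data.size > 0 && ws.readyState === WebSocket.OPEN) {
          ws.send(ev.data); // each Blob is one chunk of the encoded stream
        }
      };
      recorder.start(1000); // emit a chunk roughly every second
    }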
Anyway, I hope this helps you get started.
Streaming straight audio/mpeg (mp3 packets) has worked everywhere I've tried.
If you are developing an app, then go with AAC; if you are simply playing via a web browser, then you need an HTML5 implementation, which means MP3. Custom protocols like RTMP or SHOUTcast require additional UI to be built. There are some third-party players available in the open-source world; you can either use them or stick to HTML5 MP3/OGG, as most people nowadays use Chrome or other HTML5-compliant browsers.
I'm working on a desktop application built with XNA. It has a text-to-speech feature, and I'm using the Microsoft Translator V2 API to do the job. More specifically, I'm using the Speak method (http://msdn.microsoft.com/en-us/library/ff512420.aspx), and I play the audio with the SoundEffect and SoundEffectInstance classes.
The service works fine, but I'm having some issues with the audio. The quality is not very good and the volume is not loud enough.
I need a way to improve the volume programmatically (I've already tried some basic solutions from CodeProject, but the algorithms are not very good and the resulting audio is very low quality), or maybe use another API.
Are there some good algorithms to improve the audio programmatically? Are there other good text-to-speech APIs out there with better audio quality and WAV support?
Thanks in advance
If you are doing offline processing of audio, you can try Audacity; it has very good tools for that. If you are processing real-time streaming audio, you can try SoliCall Pro, which creates a virtual audio device and filters all audio it captures.
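If you would rather do the volume boost in your own code, a simple peak-normalize-with-clamp pass is a reasonable starting point. A language-agnostic sketch (shown here in TypeScript; samples are assumed to be floats in the range [-1, 1]):

    // Boost quiet speech by normalizing to a target peak, with a hard safety
    // clamp so boosted samples never exceed full scale.
    function normalizeGain(samples: Float32Array, targetPeak = 0.9): Float32Array {
      let peak = 0;
      for (const s of samples) peak = Math.max(peak, Math.abs(s));
      if (peak === 0) return samples; // silence: nothing to do

      const gain = targetPeak / peak;
      const out = new Float32Array(samples.length);
      for (let i = 0; i < samples.length; i++) {
        out[i] = Math.max(-1, Math.min(1, samples[i] * gain));
      }
      return out;
    }

Peak normalization only helps if the TTS output is quiet but clean; if it already peaks near full scale, you would need dynamic range compression rather than plain gain.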
I'm developing a video chat-like application using Flash RTMFP and Stratus. So far, I'm having good success. I can build from source, tweak settings, and get video and audio in both directions.
There's one glaring problem I haven't been able to solve, however -- when using a client on a Linux machine, the video received by the other end looks very poor. It's blocky and pixellated, almost as if it's rendering 160x120 in a much larger frame. When sending from a Mac (my other dev machine), the video looks quite good.
I've tried modifying all the settings I can think of -- frame rate, "quality", size, audio settings -- with no discernible improvement. I've tried running it as a local file and from a remote server. The network where I'm working is extremely fast, so that shouldn't be an issue.
Is there anything else I can try? Any suggestions or ideas are greatly appreciated.
Many thanks!
Bad camera or bad camera driver?
Stratus does not change the video encoding; it is simply another variation of the RTMFP protocol for transferring exactly the same compressed stream.
One way to check whether Stratus really plays any role in this is to stream the same content through Adobe Flash Media Server; the development version is free from adobe.com.
I have built Stratus applications and have not experienced any degradation in video quality compared to a Flash Media Server solution. In fact, when the camera quality is set to 100, you won't notice the difference between the raw camera video and the compressed stream when using loopback mode, apart from a possibly limited frame rate if you specify a bandwidth limit (the three are intimately related: bandwidth, frame rate, and quality, as per the documentation of Camera.setQuality and Camera.setMode).