Can an RTSP PAUSE request be supported while playing a live video stream? [closed] - rtsp

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Questions asking for code must demonstrate a minimal understanding of the problem being solved. Include attempted solutions, why they didn't work, and the expected results. See also: Stack Overflow question checklist
Closed 9 years ago.
I'm wondering whether the PAUSE request of the RTSP protocol can be supported while playing a live video stream, for example a real-time video stream from a camera.

From RFC 2326:
10.6 PAUSE
The PAUSE request causes the stream delivery to be interrupted
(halted) temporarily. If the request URL names a stream, only
playback and recording of that stream is halted. For example, for
audio, this is equivalent to muting. If the request URL names a
presentation or group of streams, delivery of all currently active
streams within the presentation or group is halted. After resuming
playback or recording, synchronization of the tracks MUST be
maintained.
The specification does not say whether pausing a live stream must or should be available. PAUSE makes sense for live streams in the sense of temporarily not sending data; however, it is up to the server to support the command or not. My guess is that few cameras implement it for live video, so the practical options are either to keep receiving the video and simply not display it, or to disconnect and reconnect later.
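Whether a particular server supports PAUSE can be probed directly: an RTSP request is just a short text message on the control connection, and a server that does not implement the method should answer 501 Not Implemented (or 455 Method Not Valid in This State). A minimal sketch in Node.js, assuming a session is already established; the URL, session ID, and sequence number below are placeholders taken from the earlier DESCRIBE/SETUP/PLAY exchange:

```javascript
// Build a raw RTSP PAUSE request. The URL, Session, and CSeq values
// are placeholders -- in a real client they come from the preceding
// DESCRIBE/SETUP/PLAY exchange on the same connection.
function buildPauseRequest(url, session, cseq) {
  return [
    `PAUSE ${url} RTSP/1.0`,
    `CSeq: ${cseq}`,
    `Session: ${session}`,
    '', '', // an RTSP request ends with a blank line (CRLF CRLF)
  ].join('\r\n');
}

// Pull the status code out of the server's reply line, e.g.
// "RTSP/1.0 501 Not Implemented" -> 501.
function parseStatus(reply) {
  const m = /^RTSP\/1\.0 (\d{3})/.exec(reply);
  return m ? Number(m[1]) : null;
}
```

You would write the request to the existing TCP control socket (e.g. via `net.connect`) and check the reply: 200 means the server paused delivery, while 501 or 455 tells you this camera simply doesn't support pausing a live stream.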

Related

Separate server for video encoding? [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 4 years ago.
I'm making a website that will handle video upload and encoding. My idea was to have the main server handle both client requests and video processing, but from my understanding video encoding is CPU-intensive, so I'm not sure if it's a good idea to have one server do all the work, or to have a separate server for the processing. I want to future-proof myself a bit in case I ever get high volumes of traffic, which would add more processing work for the server.
So my question: is it overkill these days to have a separate server for video encoding, or am I going about this all wrong?
PS: I'm using Node.js.
It would be overkill for someone starting out. As you mentioned, you don't yet have an idea of how much traffic to expect, and it's difficult to project the growth of your web app, since it might grow gradually or take off immediately and hammer your server.
I would approach this by separating and queuing the video-processing work away from the main website. That lets you scale the video-processing portion of your app without requiring you to run the entire website on the same machines.
With a queuing system you can also manage the amount of video you're processing at any point in time: if one server can handle 5 video-processing requests at once, any new request has to wait until a previous one finishes, and so on. It's almost a micro-service type of architecture.
Hope this gives you some ideas.
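The queuing idea above can be sketched as a small promise pool in Node.js: run at most `limit` encoding jobs concurrently, and start the next one as each finishes. The names here are illustrative, not a real library:

```javascript
// Run async tasks with a fixed concurrency limit. `tasks` is an array
// of functions returning promises (e.g. one encoding job each);
// results come back in the original task order.
async function runPool(tasks, limit) {
  const results = new Array(tasks.length);
  let next = 0;
  async function worker() {
    while (next < tasks.length) {
      const i = next++;            // claim the next task index
      results[i] = await tasks[i]();
    }
  }
  // Start up to `limit` workers; each pulls tasks until none remain.
  const workers = Array.from({ length: Math.min(limit, tasks.length) }, worker);
  await Promise.all(workers);
  return results;
}
```

Note that Node.js itself is single-threaded, so for CPU-bound encoding each task would typically spawn an external process (e.g. ffmpeg) rather than encode in JavaScript; the pool just caps how many of those run at once.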

How to save webRTC opus audio stream on server side using nodejs? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
We don’t allow questions seeking recommendations for books, tools, software libraries, and more. You can edit the question so it can be answered with facts and citations.
Closed 4 years ago.
There are some solutions for saving a raw getUserMedia audio stream on the server side, but I want to save the WebRTC-encoded stream, which uses much less channel bandwidth. I'm thinking of a solution that I'm not sure about:
Connect the server and client using WebRTC; the stream from the client is then encoded by the browser, and the server converts the stream to MP3/OGG for later use.
I found two server-side Node.js WebRTC implementations:
1. licode
2. node-webrtc
Is there any other solution or better idea for my problem?
You could give Kurento a try.
I will just link you to this post:
https://stackoverflow.com/a/24960167/1032907
You could give https://github.com/mido22/recordOpus a try.
Basically, it captures the user's microphone and converts the raw PCM data into Opus packets, sends them to the server, and converts back to WAV format; it also provides the option of converting to MP3 and OGG using FFmpeg.
I have recently successfully set up an OpenVidu server on Ubuntu for recording video and audio; it runs the Kurento Media Server under the hood and offers a host of convenient APIs. Running the OpenVidu server with their CloudFormation config is the easiest route, as it takes care of SSL setup, running the Docker container necessary for recording, etc.

Speech to Text (Voice Recognition) Directly from Audio / Transcription [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 3 years ago.
I need to be able to convert or transcribe audio (e.g. from MP3 or another audio format) containing speech into text transcripts, using a speech-to-text (voice recognition) algorithm with high accuracy.
There are many increasingly accurate ways of doing this, but they are designed for speech spoken into the device microphone (e.g. the Google Translate app and its corresponding web API, or the Dragon app for iOS).
I need a way to feed an audio file directly into the speech recognition engine/API.
I don't want to play the audio through a speaker and capture it with a microphone: that takes considerable time for long audio files, and it degrades both the audio quality and the resulting transcription quality.
Does a web service, API, or code for this exist? Is there some kind of wrapper around one of the existing services that presume the microphone will be the source?
There is now a relatively new service that offers automatic speech-to-text transcription, plus a great web interface for human editing of the results:
https://trint.com/
We've used it, and been pleased with the results. The transcription is certainly not perfect, but it's a great start, and it allows ready human editing.
There is also a new API and service available from IBM Bluemix/Watson. You can try the free demo here:
https://speech-to-text-demo.mybluemix.net/
This service does a pretty decent job of converting audio (sourced from the mic or from an audio file) into text. Currently, at least in the demo, it appears that it doesn't accept MP3, but it will accept WAV and other formats. This service has a full API and is primarily designed to be built into applications.

Building a open source media server [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Questions asking us to recommend or find a tool, library or favorite off-site resource are off-topic for Stack Overflow as they tend to attract opinionated answers and spam. Instead, describe the problem and what has been done so far to solve it.
Closed 9 years ago.
I'm looking into the possibilities of building an open-source streaming media server on Debian. It will serve a mix of MP3 and MP4 files, perhaps 10-30 streams at a time, at fairly high quality.
What are the possibilities for a Linux streaming media server that is totally open-source?
XBMC and MythTV are two popular media server software distributions that come to my mind. They are also available as individual packages that you should be able to install on any distro.
In addition to media server functionality, MythTV provides DVR and TV tuner functionality as well.
I've always thought of XBMC and MythTV as stream consumers rather than stream providers. I can't speak to XBMC at all, but Myth can definitely provide streams, and it sets them up pretty much ready to go out of the box. I'm not sure it can handle 30 concurrent streams, though. If you want that many, I'm guessing this will go beyond your home network, and you'll want something that can be hardened and exposed to the internet. I'd recommend MediaTomb as a streaming server. Also consider lots of RAM for a filesystem cache and an extra couple of network cards; I think that's where your bottleneck will form.

Playing audio files on a website [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 9 years ago.
Can you suggest a good way to play audio files on a website?
I am building a browser-based sound board with HTML5 audio and JavaScript. I am testing it in Safari, Chrome, IE, and Firefox, and so far there's no problem. The only issue is the formats the browsers can play, e.g. Firefox will only play .ogg.
To solve this I have a user-agent detector that redirects to a version of the site which has .ogg files on it.
I would recommend using the audio tag. As for fallback, 'Yi Jiang' is dead right; you could use something like jPlayer, which is an HTML5 audio plugin with fallback.
Have you tried an HTML5 Audio Google search?
<audio src="elvis.ogg" controls preload="auto"></audio>
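Rather than sniffing the user agent to decide between .ogg and .mp3 versions of the site, the browser can simply be asked which formats it supports via `canPlayType`. A sketch of that feature-detection approach; the file names are placeholders:

```javascript
// Pick the first source the browser says it can play. `canPlay` is a
// probe such as type => audioEl.canPlayType(type); in the browser it
// returns "", "maybe", or "probably" for a given MIME type.
function pickSource(canPlay, sources) {
  for (const s of sources) {
    if (canPlay(s.type) !== '') return s.src;
  }
  return null;
}

// Browser usage (file names are placeholders):
//   const audio = new Audio();
//   const src = pickSource(t => audio.canPlayType(t), [
//     { type: 'audio/ogg',  src: 'elvis.ogg' },
//     { type: 'audio/mpeg', src: 'elvis.mp3' },
//   ]);
//   if (src) { audio.src = src; audio.play(); }
```

This avoids maintaining separate per-browser versions of the site: one page ships both encodings and each browser picks the one it can decode.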
