How to convert an RTP multicast stream from DVB-T to HLS?

I use MuMuDVB to take the signal from DVB-T and DVB-S and turn it into an RTP multicast stream, and that works; the resulting stream URL is something like rtp://239.1.2.1:60001.
Now I want to know: how can I convert the RTP (or UDP) stream to HTTP Live Streaming (HLS)?
Edit:
I could convert the live stream with ffmpeg, but it's not stable: when an error occurs in ffmpeg the conversion stops, and there is no way to detect the failure and, for example, restart ffmpeg. I am looking for a better way to do this conversion.
Thanks a lot

VLC can probably do this, something along the lines of:
cvlc -vvv rtp://@239.1.2.1:60001
--sout '#std{access=livehttp{seglen=5,delsegs=true,numsegs=5,
index=/path/to/stream.m3u8,
index-url=http://example.org/stream-########.ts},
mux=ts{use-key-frames},
dst=/path/to/stream-########.ts}'
Substitute /path/to/stream* with whatever path you want to serve your playlist and segments from, and http://example.org with your machine's domain name or IP address.
See these command line examples for further pointers.
I'm not sure whether VLC retries more gracefully after input errors than ffmpeg does. In any case, you can script retry-after-failure behavior yourself; here is one example.
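For instance, a small supervisor script can restart the transcoder whenever it exits with an error. A minimal sketch in Python; the ffmpeg arguments below are placeholders, substitute your actual ffmpeg or cvlc invocation:

```python
import subprocess
import time

# Placeholder command; substitute your actual ffmpeg or cvlc invocation.
CMD = ["ffmpeg", "-i", "rtp://239.1.2.1:60001",
       "-c", "copy", "-f", "hls", "/path/to/stream.m3u8"]

def supervise(cmd, max_restarts=None, delay=2.0, run=subprocess.call):
    """Run cmd, restarting it after any non-zero exit, until it succeeds
    or max_restarts is exceeded. Returns the last exit code."""
    restarts = 0
    while True:
        rc = run(cmd)
        if rc == 0:
            return rc
        restarts += 1
        if max_restarts is not None and restarts > max_restarts:
            return rc
        time.sleep(delay)  # brief pause before restarting

if __name__ == "__main__":
    supervise(CMD)
```

The same effect can be had with a `while true; do ffmpeg …; sleep 2; done` shell loop; the script form just makes it easier to add logging or a restart limit later.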

Related

Randomly silencing part of input audio in real time

My machine is running Ubuntu 20 LTS. I want to manipulate live input audio in real time. I have achieved pitch shifting using sox; the command is:
sox -t pulseaudio default -t pulseaudio null pitch +1000
and then routing the audio from "Monitor of Nullsink" .
What I actually want to do is silence randomized parts of the input audio, within a range. What I mean is: randomly mute 1-2 s of the input audio at a time.
The final goal of this project is to write a script that manipulates my voice and makes it seem like my network is bad.
There is no restriction on the method: we may use any language, make an extension, or directly manipulate the input audio with sox, ffmpeg, etc. Anything goes.
Found the solution by using trim in sox. The project can be found at
https://github.com/TathagataRoy1278/Bad_Internet_Audio_Modulator
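For a file-based variant of the same idea, the random mute windows can be planned first and then cut out with sox trim and rejoined with generated silence, as in the linked project. A sketch of just the planning step; the gap bounds are assumptions, and the 1-2 s mute length matches the question:

```python
import random

def mute_plan(duration, min_gap=2.0, max_gap=6.0,
              min_mute=1.0, max_mute=2.0, rng=random):
    """Return (start, length) pairs of non-overlapping segments to
    silence, all fitting inside a clip of `duration` seconds."""
    plan, t = [], 0.0
    while True:
        t += rng.uniform(min_gap, max_gap)        # leave some audio audible
        length = rng.uniform(min_mute, max_mute)  # then mute for 1-2 s
        if t + length > duration:
            break
        plan.append((round(t, 2), round(length, 2)))
        t += length
    return plan
```

Passing an explicit `rng` (e.g. `random.Random(seed)`) makes the dropout pattern reproducible while testing.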

When streaming with Icecast and Liquidsoap the sound keeps rewinding

When I stream with Liquidsoap and Icecast, the stream keeps playing the same part over and over, with a rewind sound...
This is the stream: http://radio.oursound.com.br:8000/oursoundradio
I was unable to find anything about it. This is my .liq script:
source = input.http("http://LINK_TO_MP3.mp3",buffer=10.0, max=20.0,logfile="/tmp/001.log")
source = mksafe(source)
output.icecast(%vorbis,host="localhost",password="password",mount="oursoundradio", source)
I am using Vorbis because when I use MP3, I keep getting this error:
strange error flushing buffer ...
strange error flushing buffer ...
strange error flushing buffer ...
strange error flushing buffer ...
But that is a problem for another day; what I need help with is the stream rewinding. I am completely new to Liquidsoap and Icecast, but I have already read all the documentation and found nothing.
Thanks for the help.
input.http is meant to be used for radio-style HTTP streams that never really end. Liquidsoap is treating your file as such: it gets disconnected when the file is fully downloaded, and is likely looping a buffer. There shouldn't be a "rewind" sound... you're probably hearing a blip of an MP3 artifact. Your station is down right now, or I'd give it a listen to check.
You should use single instead. Untested, but try something like this:
source = once(single("http://example.com/file.mp3"))
Of course in practice, you probably actually want playlist.
I just had the same issue and solved it by running (as root):
apt install --reinstall icecast2

Capturing a PCM audio data stream into a file, and playing the stream via ffmpeg: how?

I would like to do the following four things (separately), and need a bit of help understanding how to approach them.
Dump audio data (from a serial-over-USB port), encoded as PCM, 16-bit, 8kHz, little-endian, into a file (plain binary data dump, not into any container format). Can this approach be used:
$ cat /dev/ttyUSB0 > somefile.dat
Can I press ^C to stop writing the file while the dump is in progress, as in the above command?
Stream audio data (of the same kind described above) directly into ffmpeg for it to play out? Like this:
$ cat /dev/ttyUSB0 | ffmpeg
or, do I have to specify the device port as a "-source" ? If so, I couldn't figure out the format.
Note that I've tried this:
$ cat /dev/urandom | aplay
which works as expected, playing out white noise... but trying the following doesn't help:
$ cat /dev/ttyUSB1 | aplay -f S16_LE
Even though, when opening /dev/ttyUSB1 using picocom at 115200 bps, 8-bit, no parity, I do see gibberish, indicating the presence of audio data exactly when I expect it.
Use the audio data dumped into the file as a source in ffmpeg? If so, how? So far I have the impression that ffmpeg can only read files in standard container formats.
Use pre-recorded audio captured in any format (perhaps .mp3 or .wav) to be streamed by ffmpeg into the /dev/ttyUSB0 device. Should I be using this as a "-sink" parameter, pipe into it, or redirect into it? Also, is it possible that in two terminal windows I use ffmpeg to capture and transmit audio data from/into the same device /dev/ttyUSB0 simultaneously?
My knowledge of digital audio recording/processing formats and codecs is somewhat limited, so I am not sure whether what I am trying to do qualifies as working with 'raw' audio.
If ffmpeg is unable to do what I am hoping to achieve, could gstreamer be the solution ?
PS: If anyone thinks that the question could be improved, please feel free to suggest specific points. I would be happy to add any detail requested, provided I have the information.
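For what it's worth regarding points 1 and 3: ffmpeg can read a headerless dump directly if you describe the format on the command line, e.g. `ffmpeg -f s16le -ar 8000 -ac 1 -i somefile.dat out.wav`. Alternatively, such a dump can be wrapped in a WAV header with Python's standard wave module; a minimal sketch, with placeholder filenames:

```python
import wave

def pcm_to_wav(pcm_path, wav_path, rate=8000, channels=1, sampwidth=2):
    """Wrap a headerless little-endian 16-bit PCM dump in a WAV container."""
    with open(pcm_path, "rb") as f:
        data = f.read()
    with wave.open(wav_path, "wb") as w:
        w.setnchannels(channels)
        w.setsampwidth(sampwidth)  # 2 bytes per sample = 16-bit
        w.setframerate(rate)
        w.writeframes(data)
```

The resulting .wav plays in any player, including ffplay and aplay, with no format flags needed.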

How can I concatenate ATSC streams from DVB card?

I'm trying to make a simple "TV viewer" using a Linux DVB video capture card. Currently I watch TV using the following process (I'm on a Raspberry Pi):
Tune to a channel using azap -r TV_CHANNEL_HERE. This will supply bytes to
device /dev/dvb/adapter0/dvr0.
Open OMXPlayer: omxplayer /dev/dvb/adapter0/dvr0
Watch TV!
The problem comes when I try to change channels. Even if I set the player to cache incoming bytes (tried with MPlayer also), the player can't withstand a channel change (done by restarting azap with a new channel).
I'm thinking this is because of changes in the MPEG TS stream metadata.
I'm looking for a C library that would let me do the following:
Pull cache_size * mpeg_ts_packet_size bytes from the DVR device.
Evaluate each packet and rewrite metadata (PID, etc) as needed.
Populate FIFO with resulting packet.
Set {OMXPlayer,MPlayer} to read from FIFO.
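Point 2 above starts with parsing each packet's header; the 13-bit PID, for example, lives in bytes 1-2 of each 188-byte packet. A minimal sketch of that extraction in Python (pure standard library, no DVB specifics assumed):

```python
TS_PACKET_SIZE = 188
SYNC_BYTE = 0x47

def packet_pid(packet: bytes) -> int:
    """Extract the 13-bit PID from a single MPEG-TS packet."""
    if len(packet) != TS_PACKET_SIZE or packet[0] != SYNC_BYTE:
        raise ValueError("not a valid TS packet")
    # PID = low 5 bits of byte 1, followed by all 8 bits of byte 2
    return ((packet[1] & 0x1F) << 8) | packet[2]
```

Rewriting a PID means patching those same bits, but strictly speaking also fixing the PSI tables (PAT/PMT) that reference it, which is where this approach gets painful.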
The other thing I was thinking would be to use a program that converts MPEG TS into MPEG PS and concatenate the bytes that way.
Thoughts?
Indeed, when you tune to another channel, some metadata can change and invalidate previously cached data.
Unfortunately I'm not familiar with the tools you are using, but your point 2 makes me raise an eyebrow: you will waste your time trying to rewrite transport-stream data.
I would rather suggest stopping and restarting the process on zapping, since it seems to work fine at start.
P.S.:
Here are some tools that can help. Also, I'm not sure at which level your problem is, but VLC can be installed on a Raspberry Pi and it handles TS gracefully.

How to retrieve H.263/H.264 data from a pcap file

I have tried tools like videosnarf, which takes a pcap file as input and creates a raw .h264 file; that file can later be put into a container with ffmpeg and finally played with the VLC player. But videosnarf can only handle H.264 data.
I'm not able to find a similar tool that can dump H.263 data from a pcap file. I tried to decode the H.263 stream from Wireshark, but I have had no luck so far.
I can program in Perl/Python, but I don't know what exact steps to follow to retrieve raw H.263 data from a pcap file, as I haven't worked with pcap capture files before.
sjd,
You can try setting up a sniffer using the Twisted Python library; this would allow you to capture the raw data coming in over your network, as long as you can tell Twisted what port to listen on (or listen on all), where to dump the file, etc., and then do something with that new file (like sending it into ffmpeg to test saving to .mov).
You would have to generate the .sdp file for ffmpeg, so unless you automate that step of the process, it is really annoying. I am currently working on the automation portion but am struggling just as much.
I'm using EventMachine for Ruby with FFMPEG, and .sdp from SIP.
Hope this helps a little.
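If you'd rather work offline from the capture you already have, the classic pcap file format itself is simple enough to read with just the struct module. A sketch that yields each captured frame; pcapng files, and the parsing down from Ethernet/IP/UDP to the RTP payload, are out of scope here:

```python
import struct

PCAP_MAGIC = 0xA1B2C3D4  # classic pcap, microsecond timestamps

def pcap_packets(path):
    """Yield each captured frame (raw bytes) from a classic pcap file."""
    with open(path, "rb") as f:
        header = f.read(24)  # fixed-size global header
        if len(header) < 24:
            raise ValueError("truncated pcap file")
        if struct.unpack("<I", header[:4])[0] == PCAP_MAGIC:
            endian = "<"
        elif struct.unpack(">I", header[:4])[0] == PCAP_MAGIC:
            endian = ">"
        else:
            raise ValueError("not a classic pcap file")
        while True:
            record = f.read(16)  # per-packet record header
            if len(record) < 16:
                break
            _sec, _usec, incl_len, _orig_len = struct.unpack(
                endian + "IIII", record)
            yield f.read(incl_len)
```

From each frame you would still strip the link/IP/UDP headers to reach the RTP payload, and then depacketize the H.263 payload per RFC 4629; that last step is the part no generic tool seems to do for you.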
