When I stream with Liquidsoap and Icecast, the stream keeps playing the same part over and over, with a rewind sound...
This is the stream: http://radio.oursound.com.br:8000/oursoundradio
I was unable to find anything about it. This is my .liq script:
source = input.http("http://LINK_TO_MP3.mp3",buffer=10.0, max=20.0,logfile="/tmp/001.log")
source = mksafe(source)
output.icecast(%vorbis,host="localhost",password="password",mount="oursoundradio", source)
I am using Vorbis, because when I use MP3 I keep getting this error:
strange error flushing buffer ...
strange error flushing buffer ...
strange error flushing buffer ...
strange error flushing buffer ...
But that is for another day; what I need help with is the stream rewinding. I am completely new to Liquidsoap and Icecast...
But I have already read all the documentation and found nothing...
Thanks for the help...
input.http is meant to be used for radio-style HTTP streams that never really end. Liquidsoap is treating it as such, getting disconnected when the file is fully downloaded, and is likely looping a buffer. There shouldn't be a "rewind" sound... you're probably hearing a blip of an MP3 artifact. Your station is down right now, or I'd give it a listen to check.
You should use single instead. Untested, but try something like this:
source = once(single("http://example.com/file.mp3"))
Of course in practice, you probably actually want playlist.
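For a 24/7 station, a minimal sketch along these lines could be a starting point (untested; the playlist path, port, and password are placeholders):

# Rotate through a playlist instead of re-downloading a single HTTP file.
source = playlist("/home/radio/playlist.m3u")
source = mksafe(source)
output.icecast(%vorbis, host="localhost", port=8000,
  password="password", mount="oursoundradio", source)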
I just had the same issue and solved it by running (as root):
apt install --reinstall icecast2
After creating the playlist with mp4 URLs, the loading time between two mp4 files is high and the stream is not running smoothly. Please let me know if this can be fixed by changing some settings on the server.
Let me explain the best practices for that. I hope it helps.
Improve WebRTC Playback Experience
ATTENTION: It does not make sense to play the stream with WebRTC, because it is an already-recorded file and there is no ultra-low-latency requirement. It makes sense to play the stream with HLS. Just keep in mind that WebRTC playback uses more processing resources than HLS. Even so, if you would like to decrease the time it takes to switch streams, please read the following.
Open the embedded player (/usr/local/antmedia/webapps/{YOUR_APP}/play.html).
Find the genericCallback method and decrease the timeout value from 3000 to 1000, or even lower, at the end of the genericCallback method. It's exactly this line.
Decrease the key frame interval of the video. You can set it to 1 second; the generally recommended value is 2 seconds. WebRTC needs a key frame to start the play, so if the key frame interval is 10 seconds (the default in ffmpeg), the player may wait up to 10 seconds to play. A sketch of how to do this with ffmpeg follows below.
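For instance, if you feed the server with ffmpeg, a key frame every 2 seconds can be forced roughly like this (untested; the input file, the RTMP URL, and the stream name are assumptions for an Ant Media ingest):

# Force a key frame every 2 seconds, regardless of the input frame rate.
ffmpeg -re -i recorded.mp4 -c:v libx264 \
  -force_key_frames "expr:gte(t,n_forced*2)" \
  -c:a aac -f flv rtmp://localhost/LiveApp/stream1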
Improve HLS Playback Experience
Open the properties file of the application -> /usr/local/antmedia/webapps/{YOUR_APP}/WEB-INF/red5-web.properties
Add the following property
settings.hlsflags=delete_segments+append_list+omit_endlist
Let me explain what it means.
delete_segments deletes the segment files that fall out of the list, so your disk will not fill up.
append_list adds the new segment files to the existing m3u8 file, so the player thinks it is still playing the same stream.
omit_endlist disables writing EXT-X-ENDLIST at the end of the file, so the player assumes new segments are on their way and keeps waiting for them instead of stopping the stream.
Disable deleting HLS files when the stream ends, so you do not run into a race condition. Continue editing the file /usr/local/antmedia/webapps/{YOUR_APP}/WEB-INF/red5-web.properties and replace the following line
settings.deleteHLSFilesOnEnded=true
with this one:
settings.deleteHLSFilesOnEnded=false
Restart the Ant Media Server
sudo service antmedia restart
I only began using icecast a few days ago, so if I stuffed something up somewhere, please let me know.
I have a weird problem with Icecast. Every time a track "finishes" on Icecast, a section of the end of the currently playing track (I think 64 KB of it) is repeated about 2 to 3 times before the next song plays, and the next song doesn't begin playing from the start, but a few seconds in. I also notice that the playback speed (and hence the pitch) sometimes differs from the original.
I consulted this post and this post (quoted below), which taught me what the <burst-on-connect> and <burst-size> tags are used for. It also taught me this:
What's happening here is that nothing is being added to the buffer, so clients connect, get the contents of that buffer, and then the stream ends. The client must be re-connecting repeatedly, and it keeps getting that same buffer.
Cheers to Brad for that post. A solution to this problem was provided in the comments section of that post: decrease the <source-timeout> of the Icecast server so that it closes the connection quicker and stops any repeating. But this assumes I want to close the mountpoint, and I don't, because what I am using Icecast for is actually a 24/7 radio player. If I did close my mountpoint, then VLC would just turn off and not repeatedly attempt to connect anymore. Unless this is wrong. I don't know.
I use VLC to hear the playback of the icecast streams and I use nodeshout which is a bunch of bindings from libshout built for node.js. I use nodeshout to send data to a bunch of mounts on my icecast server. In the future I plan to make a site that will listen to the icecast streams, meaning it will replace VLC.
icecast.xml
<limits>
<clients>100</clients>
<sources>4</sources>
<queue-size>1008576</queue-size>
<client-timeout>30</client-timeout>
<header-timeout>15</header-timeout>
<source-timeout>30</source-timeout>
<burst-on-connect>1</burst-on-connect>
<burst-size>252144</burst-size>
</limits>
This is a summary of the audio sending code on my node.js server.
nodejs
// These lines are a smaller part of a function; this sets all the information. The variables name, description, etc. come from the function's arguments.
var nodeshout = require("nodeshout");
let shout = nodeshout.create();
shout.setHost('localhost');
shout.setPort(8000);
shout.setUser('source');
shout.setPassword(process.env.icecastPassword); //password in .env file
shout.setName(name);
shout.setDescription(description);
shout.setMount(mount);
shout.setGenre(genre);
shout.setFormat(1); // 0=ogg, 1=mp3
shout.setAudioInfo('bitrate', '128');
shout.setAudioInfo('samplerate', '44100');
shout.setAudioInfo('channels', '2');
return shout
// Meanwhile, somewhere lower in the file, this is a summary of how the audio is sent to the Icecast server.
var nodeshout = require("nodeshout")
var {FileReadStream, ShoutStream} = require("nodeshout") //here is where the FileReadStream and ShoutStream functions come from
const filecontent = new FileReadStream(pathToSong, 65536); // if I change 65536 to a higher value, then more bytes are repeated at the end of the track; if I decrease it, the audio starts sounding buggy and off.
var streamcontent = filecontent.pipe(new ShoutStream(shoutstream))
streamcontent.on('finish', () => {
next()
console.log("Track has finished on " + stream.name + ": " + chosenTrack)
})
I also notice weirder behaviour: after the previous song has its last chunk repeated a few times, that's when the streamcontent.on('finish') event in the Node.js script fires, and only then am I warned that the track has finished.
What I have tried
I tried messing around with the <source-timeout> tag, the number of bytes (or bits, I'm not sure) that are sent from Node.js, and the burst size. I also tried turning bursting off completely, but it results in super strange behaviour.
I also thought creating a new stream per song was a bad idea, as seen in new ShoutStream(shoutstream) when piping the file data, but using the same stream meant the program would throw an error, because it would write the next track to the shoutstream after it had already said it was closed.
If any more information is necessary to figure out what is going on, I can provide it. Thanks for your time.
Edit: I would like to add: Do you think I should manually control how many bytes are sent to icecast and then use the same stream object instead of calling a new one every time?
I found out why the stream didn't play some tracks as opposed to others.
How I got there
I could not switch to Ogg/Vorbis or Ogg/Opus for my stream, so I had to do something with my source client. I double-checked that everything was correct and that my audio files were at the correct bitrate. When I ran ffprobe audio.mp3, sometimes the bitrates did not adhere to the typical rates of 128 kbps, 192 kbps, 320 kbps, and so on; it was always some strange value such as 129852, just to provide an example.
I then downloaded the Checkmate MP3 checker here and checked my audio files, and they were all encoded at a variable bitrate!!! VBR, damn it!
TLDR
I fixed my problem by re-encoding all my tracks to a constant bitrate of 128kbps using ffmpeg.
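In case it helps anyone, the re-encode is a one-liner per file, roughly like this (filenames are placeholders):

# Re-encode to constant 128 kbps MP3, 44.1 kHz stereo.
ffmpeg -i input.mp3 -codec:a libmp3lame -b:a 128k -ar 44100 -ac 2 output.mp3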
Quick edit: I am pretty sure that programs such as Darkice might already support variable-bitrate transfers to Icecast servers, but it would be impractical for me to use Darkice, hence why I stuck with nodeshout.
I want to convert a FLAC file to an MP3 (and, later on, Vorbis) file.
These MP3/Vorbis streams are then transmitted, raw, to a second device that decodes them.
Depending on the quality of the transmission, I want to be able to change the bitrate on-the-fly.
The change must be gapless (hence the "in the PLAYING state" in the title).
The specific encoders are lamemp3enc and vorbisenc (and cannot be changed).
To my knowledge, changing the bitrate while playing is actually not possible with these codecs.
But I guess there are clean and simple ways to change the bitrate without introducing any gap in the stream: I'd like to learn about any of them.
(NB: I did write any, not all, I am not asking for the "best" way, I am not asking for a review, I just want something that works.)
Read through this ..
You will:
block the pad of the element just before lamemp3enc
flush the encoded frames into the queue by sending an EOS to lame, and discard that EOS when it comes out of lame
then set lamemp3enc to the NULL state
change the parameters
set lame to PLAYING or PAUSED - this will preroll it again with new data using the new bitrate
check when lame is in PLAYING, and then you know everything is working
there should be no gap, as the queue holds a lot of old buffers which it keeps sending forward while you are doing the switch
You can take inspiration from the example at the link above. However, you are not removing or adding any elements. Do not forget to set it to the NULL state, as that discards all internal state (well, hopefully, if it's not buggy). Then you would just change the parameters with g_object_set...
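A rough, untested C sketch of those steps might look like this (modeled on the dynamic-pipeline pad-probe pattern; the pad/element variables, the new bitrate of 192, and the probe-id bookkeeping are assumptions):

#include <gst/gst.h>

/* Step 2: catch the EOS coming out of lame, discard it, then reconfigure. */
static GstPadProbeReturn
drop_eos (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  GstElement *lame = GST_ELEMENT (user_data);

  if (GST_EVENT_TYPE (GST_PAD_PROBE_INFO_EVENT (info)) != GST_EVENT_EOS)
    return GST_PAD_PROBE_PASS;

  gst_pad_remove_probe (pad, GST_PAD_PROBE_INFO_ID (info));

  gst_element_set_state (lame, GST_STATE_NULL);           /* step 3: reset state */
  g_object_set (lame, "target", 1, "bitrate", 192, NULL); /* step 4: new kbps    */
  gst_element_sync_state_with_parent (lame);              /* step 5: preroll     */

  /* The blocking probe installed below must be removed here as well
     (id bookkeeping omitted for brevity) so data flows again. */
  return GST_PAD_PROBE_DROP;
}

/* Step 1: runs once the pad feeding lame is blocked. */
static GstPadProbeReturn
on_blocked (GstPad *pad, GstPadProbeInfo *info, gpointer user_data)
{
  GstElement *lame = GST_ELEMENT (user_data);
  GstPad *srcpad  = gst_element_get_static_pad (lame, "src");
  GstPad *sinkpad = gst_element_get_static_pad (lame, "sink");

  /* Watch for the EOS on lame's src pad, then flush lame with an EOS. */
  gst_pad_add_probe (srcpad, GST_PAD_PROBE_TYPE_EVENT_DOWNSTREAM,
      drop_eos, lame, NULL);
  gst_pad_send_event (sinkpad, gst_event_new_eos ());

  gst_object_unref (srcpad);
  gst_object_unref (sinkpad);
  return GST_PAD_PROBE_OK;
}

/* Kick it off: block the pad upstream of the encoder. */
gst_pad_add_probe (pad_before_lame, GST_PAD_PROBE_TYPE_BLOCK_DOWNSTREAM,
    on_blocked, lame, NULL);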
Also, I have never done this myself, so you may want to ask in #gstreamer on Freenode IRC if you are stuck or not sure.
HTH
I use Mumudvb to turn a DVB-T or DVB-S signal into an RTP multicast stream, and that works: the resulting stream URL is something like rtp://239.1.2.1:60001.
Now I want to know: how can I convert the RTP (or UDP) stream to HTTP Live Streaming (HLS)?
Edit:
I could convert the live stream with ffmpeg, but it's not stable: when an error occurs in ffmpeg, the conversion stops and there is no way to detect the failure and, for example, restart ffmpeg. I am looking for a new way to do the conversion.
Thanks a lot
VLC can probably do this, something along the lines of:
cvlc -vvv rtp://@239.1.2.1:60001
--sout '#std{access=livehttp{seglen=5,delsegs=true,numsegs=5,
index=/path/to/stream.m3u8,
index-url=http://example.org/stream-########.ts},
mux=ts{use-key-frames},
dst=/path/to/stream-########.ts}'
Substitute /path/to/stream* with whatever path you want to serve your playlist and segments from, and http://example.org with your machine's domain name or IP address.
See these command line examples for further pointers.
I'm not sure if VLC retries more gracefully after input errors than ffmpeg. In any case, you can script retry after failure behavior, here is one example.
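For instance, a minimal retry wrapper could look like this (an untested sketch; substitute the full --sout chain from above for the '...'):

#!/bin/sh
# Restart cvlc whenever it exits, pausing briefly to avoid a tight loop.
while true; do
  cvlc -vvv 'rtp://@239.1.2.1:60001' --sout '...'
  echo "cvlc exited with status $?, restarting in 2s..." >&2
  sleep 2
done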
I'm trying to make a simple "TV viewer" using a Linux DVB video capture card. Currently I watch TV using the following process (I'm on a Raspberry Pi):
Tune to a channel using azap -r TV_CHANNEL_HERE. This supplies bytes to the device /dev/dvb/adapter0/dvr0.
Open OMXPlayer: omxplayer /dev/dvb/adapter0/dvr0
Watch TV!
The problem comes when I try to change channels. Even if I set the player to cache incoming bytes (I tried with MPlayer too), the player can't withstand a channel change (by restarting azap with a new channel).
I'm thinking this is because of changes in the MPEG TS stream metadata.
Looking for a C library that would let me do the following:
Pull cache_size * mpeg_ts_packet_size bytes from the DVR device.
Evaluate each packet and rewrite metadata (PID, etc) as needed.
Populate FIFO with resulting packet.
Set {OMXPlayer,MPlayer} to read from FIFO.
The other thing I was thinking of would be to use a program that converts MPEG TS into MPEG PS and concatenate the bytes that way.
Thoughts?
Indeed, when you tune to another channel, some metadata can change and invalidate previously cached data.
Unfortunately I'm not familiar with the tools you are using, but your point 2 makes me raise an eyebrow: you will waste your time trying to rewrite Transport Stream data.
I would rather suggest stopping and restarting the player process on zapping, since it seems to work fine at start.
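I have never used azap or omxplayer myself, but the restart could presumably be scripted along these lines (an untested sketch; the sleep duration is a guess at how long the frontend needs to lock):

#!/bin/sh
# Kill the old tuner and player, retune, and restart playback.
CHANNEL="$1"
pkill omxplayer; pkill azap
azap -r "$CHANNEL" &              # feeds /dev/dvb/adapter0/dvr0 again
sleep 2                           # give the frontend time to lock on
omxplayer /dev/dvb/adapter0/dvr0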
P.S.:
Here are some tools that can help. Also, I'm not sure at what level your problem is, but VLC can be installed on a Raspberry Pi and it handles TS gracefully.