Initial Burst node stream - node.js

Does anyone know how to create a Node.js stream with an initial burst?
Let's say we have an MP3 stream: for the first second after a user starts the stream, they should be served at a high rate, and after a few seconds the bit rate should be reduced.
I tried reshaper, but after changing the rate the stream stops with a chunk error.
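For illustration, here is a minimal sketch of one way this could be approached without reshaper: a hand-rolled Transform stream that passes an initial burst through at full speed and then delays chunks to approximate a steady byte rate. This is TypeScript for Node.js; the BurstThrottle class and the 16 KB/s figures are assumptions for the example, not an existing package.

// Minimal sketch (TypeScript, Node.js): forward an initial burst at full
// speed, then throttle to a steady byte rate. BurstThrottle and the
// concrete numbers below are illustrative, not a published module.
import { Transform, TransformCallback } from "stream";
import { createReadStream } from "fs";

class BurstThrottle extends Transform {
  private sentBytes = 0;

  constructor(
    private burstBytes: number,      // bytes sent unthrottled up front
    private bytesPerSecond: number   // steady rate after the burst
  ) {
    super();
  }

  _transform(chunk: Buffer, _enc: BufferEncoding, done: TransformCallback): void {
    this.sentBytes += chunk.length;
    if (this.sentBytes <= this.burstBytes) {
      // Still inside the initial burst: pass the chunk through immediately.
      done(null, chunk);
    } else {
      // After the burst: delay each chunk so the average rate
      // approximates bytesPerSecond.
      const delayMs = (chunk.length / this.bytesPerSecond) * 1000;
      setTimeout(() => done(null, chunk), delayMs);
    }
  }
}

// Example: burst roughly one second of a 128 kbit/s MP3 (~16 KB),
// then keep feeding about 16 KB per second.
createReadStream("song.mp3")
  .pipe(new BurstThrottle(16 * 1024, 16 * 1024))
  .pipe(process.stdout);

Because done() is only called after the delay, the stream's normal backpressure still applies, so the consumer is not flooded once the burst is over.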

Related

How to get low latency in ExoPlayer RTSP live streaming on Android?

I am working on RTSP live streaming. I am getting the live stream in my Android app using the ExoPlayer RTSP stream player, but the latency of that stream is about 3 seconds, while the latency in VLC media player is about 1 second. So how do I reduce latency in ExoPlayer? Is there any way? Please tell me.
What you're facing is buffering latency. VLC uses its own engine and buffering algorithms. However, if you want to reduce buffering latency in ExoPlayer, you need to get familiar with LoadControl. ExoPlayer uses a DefaultLoadControl in its default instantiation. This LoadControl is a class that belongs to the ExoPlayer library, and it determines how long the player spends buffering the stream. If you want to reduce the delay, you need to reduce the LoadControl buffer values.
Here's a quick snippet showing how to build a custom load control (and, further below, a SimpleExoPlayer instance that uses it):
val customLoadControl = DefaultLoadControl.Builder()
    .setBufferDurationsMs(minBuffer, maxBuffer, playbackBuffer, playbackRebuffer)
    .build()
Parameters in a nutshell: minBuffer is the minimum buffered video duration, maxBuffer is the maximum buffered video duration, playbackBuffer is the buffered video duration required to start playing, and playbackRebuffer is the buffered video duration required to resume after a rebuffer.
In your case, you should set the values really low, especially the playbackBuffer and minBuffer parameters. Experiment with small values (they are in milliseconds); a value of 1000 for both minBuffer and playbackBuffer is a reasonable starting point.
How to use the custom load control: after building the custom load control instance and storing it in a variable, pass it when you build your SimpleExoPlayer:
myMediaPlayer = SimpleExoPlayer.Builder(this@MainActivity)
    .setLoadControl(customLoadControl)
    .build()
What to expect:
Using the default values is always recommended. If you mess with the values, the app may crash, the stream may be stuck, or the player may get very glitchy. So manipulate the values intelligently.
Here are the DefaultLoadControl Javadocs.
If you don't know what buffering is: ExoPlayer (or any other player) may need to buffer, i.e. load the upcoming portion of the video/audio and store it in memory, where it is much faster to access, which reduces playback problems. Streamed media in particular needs buffering because it arrives in chunks, so each chunk that arrives is eventually buffered. If you set the required buffered duration to 1000, you are telling ExoPlayer that once 1000 milliseconds of the stream have been buffered, playback should start right away. I believe there is no simpler way to explain this. Best of luck.

How to deal with the processing-time delay of an audio codec when streaming over RTP

In section 2.1 of the Speex codec manual it says:
Every speech codec introduces a delay in the transmission. For Speex, this delay is equal to the frame size, plus some amount
of “look-ahead” required to process each frame. In narrowband operation (8 kHz), the delay is 30 ms, while for wideband (16
kHz), the delay is 34 ms. These values don’t account for the CPU time it takes to encode or decode the frames.
In the RTP Payload Format for the Speex Codec, RFC 5574, it says:
ptime: SHOULD be a multiple of 20 msec
I have a 20 ms frame time of encoded data, so I assume my ptime should be 20.
The delay for the encoding is 30 ms or more, while the time between RTP packets is 20 ms. How is this supposed to work? Is every other RTP payload an empty packet? How do I resolve this?
Seemingly this is an issue with every codec. I must be missing some fundamental understanding of how streaming works.
I have verified that I can stream a pre-encoded buffer and it sounds as intended.
I have tried:
Creating a large queue at the beginning to compensate; however, it quickly drains to zero length.
Sending zero data as the payload
Ideas I haven't yet tried:
Send a packet of all padding and mark the RTP header as padding
Increase the sequence number but not the timestamp until the next actual payload is ready (this sounds like it is against the spec?)
Note: I'm now wondering if the delay mentioned by Speex is already contained within the encoded output, and the delay I am seeing while streaming is due to my limited CPU (embedded).
My note was correct. This question is flawed.
The Speex manual is referring to a delay in the audio output, not an inherent delay of processing time. Therefore the issue in question is not an issue.
I'm glad I asked the question; it helped me come to the solution.
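To make the frame-timing arithmetic behind that conclusion concrete, here is a small TypeScript sketch using only the numbers quoted above (8 kHz narrowband, 20 ms frames, 30 ms algorithmic delay); the variable names are illustrative:

// Frame-timing arithmetic for narrowband Speex over RTP (illustrative only).
const sampleRate = 8000;          // Hz, narrowband Speex
const frameMs = 20;               // one Speex frame / ptime from RFC 5574
const samplesPerFrame = (sampleRate * frameMs) / 1000;  // 160 samples

// The RTP timestamp counts samples, so it advances by 160 each packet,
// and a packet is sent for every 20 ms of captured audio.
console.log(`samples per frame: ${samplesPerFrame}`);

// The 30 ms figure from the Speex manual (frame size + look-ahead) is
// algorithmic delay: it shifts the whole output once by 30 ms, it does
// not add 30 ms of work per frame, so no filler packets are needed.
const algorithmicDelayMs = 30;
console.log(`one-time output latency: ${algorithmicDelayMs} ms`);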

NodeJS Server Side MP3 'Playback'?

This is my current approach to faking a radio-like stream with Node.
Node ReadStream
This ReadStream just reads an MP3 file and streams it to an HTML5-based audio player.
Counter
This counter represents the current playback position of the RadioStream.
It keeps incrementing each second to simulate playback. Once a client connects to the server, the stream starts at the counter's position. The only thing I cannot figure out is the correct increment size of the counter.
Or is there a way to offset an MP3 stream by seconds? (A byte-offset sketch follows this question.)
Metadata
Once I have the correct position, it will be super easy to build a playlist with metadata, such as song name, composer, etc., and push it to the client via Socket.IO.
If you have an idea of how to solve this better, please let me know.
I also tried using Icecast with the following source clients:
VLC: cuesheet support is buggy; it does not forward metadata
Liquidsoap: cuesheet support is buggy; it does not forward metadata
I also tried executing ezstream from Node and starting the counter that increments in seconds, but the counter gets out of sync very quickly.
It looks like my approach is anything but ideal, so how can I solve this in a smarter way?
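For what it's worth, if the MP3 is constant bit rate, the "offset by seconds" part can be reduced to a byte offset passed to fs.createReadStream. Here is a rough TypeScript sketch, assuming a known constant bit rate; the 128 kbit/s figure and the startRadioStreamAt helper are made up for illustration:

// Rough sketch: start reading a CBR MP3 at a given playback position.
// Assumes a constant bit rate; VBR files would need a frame-accurate seek.
import { createReadStream } from "fs";
import type { Readable } from "stream";

const BITRATE_KBPS = 128; // assumed constant bit rate of the file

function startRadioStreamAt(path: string, positionSeconds: number): Readable {
  // bytes per second = bit rate in bits/s divided by 8
  const bytesPerSecond = (BITRATE_KBPS * 1000) / 8;
  const byteOffset = Math.floor(positionSeconds * bytesPerSecond);
  return createReadStream(path, { start: byteOffset });
}

// Example: a client joining 95 seconds into the "broadcast".
startRadioStreamAt("show.mp3", 95).pipe(process.stdout);

Under that assumption the shared counter can simply track wall-clock seconds and be converted to bytes only when a client connects, which sidesteps the increment-size question.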

How to get amplitude of an audio stream in an AudioGraph to build a SoundWave using Universal Windows?

I want to build a SoundWave by sampling an audio stream.
I read that a good method is to get the amplitude of the audio stream and represent it with a Polygon. But suppose we have an AudioGraph with just a DeviceInputNode and a FileOutputNode (a simple recorder).
How can I get the amplitude from a node of the AudioGraph?
What is the best way to periodize this sampling? Is a DispatcherTimer good enough?
Any help will be appreciated.
First, everything you care about is kind of here:
uwp AudioGraph audio processing
But since you have a different starting point, I'll explain some more core things.
An AudioGraph node is already periodized for you -- that's generally how audio works. I think Win10 defaults to periods of 10 ms or 20 ms, but this can (theoretically) be set via the AudioGraphSettings.DesiredSamplesPerQuantum setting together with AudioGraphSettings.QuantumSizeSelectionMode = QuantumSizeSelectionMode.ClosestToDesired. I believe the success of this functionality actually depends on your audio hardware rather than the OS specifically; my PC can only do 480 and 960. This number is how many samples of the audio signal to accumulate per channel (mono is one channel, stereo is two channels, etc.), and it also sets the callback timing as a by-product.
Win10 and most devices default to a 48000 Hz sample rate, which means they measure/output data that many times per second. So with my QuantumSize of 480 for every frame of audio, I am getting 48000/480 = 100 frames every second, which means I'm getting them every 10 milliseconds by default. If you set your quantum to 960 samples per frame, you would get 50 frames every second, or a frame every 20 ms.
To get a callback into that frame of audio every quantum, you need to register an event into the AudioGraph.QuantumProcessed handler. You can directly reference the link above for how to do that.
So by default, a frame of data is stored in an array of 480 floats in [-1, +1]. And to get the amplitude, you just average the absolute values of that data.
This part, including handling multiple channels of audio, is explained more thoroughly in my other post.
Have fun!
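The AudioGraph callback itself is C#/UWP, but the amplitude step described above (average the absolute values of one quantum) is language-agnostic. A tiny TypeScript sketch of just that calculation; the 480-sample figure comes from the answer, and the function name is made up:

// Sketch of the amplitude calculation described above: one quantum of
// 480 float samples in [-1, +1], amplitude = mean of the absolute values.
function frameAmplitude(samples: Float32Array): number {
  let sum = 0;
  for (const s of samples) {
    sum += Math.abs(s);
  }
  return samples.length > 0 ? sum / samples.length : 0;
}

// Example: a fake 480-sample quantum (10 ms at 48 kHz, one channel).
const quantum = new Float32Array(480).map(() => Math.random() * 2 - 1);
console.log(frameAmplitude(quantum));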

How to prevent data throttling with audio codec streaming

I am sampling an incoming audio stream at 8 ksps. I have a codec that takes ~1.6 ms to encode a packet of data (80 samples) into an encoded packet (5 samples). At this rate I get 8000 * 1.662e-3 ≈ 13 samples every encoding cycle, but I need 80 samples every cycle. How do I keep the stream continuous? My only guess is to slow down the bit rate of the outgoing encoded stream, but I'm not sure how to calculate this in general such that the buffers on the incoming side don't fill up and the receiving side's buffers don't get starved.
This seems like a basic tenet of streaming, but I can't find any info on methods. Thanks for any help!
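Spelling out the numbers in the question as a small TypeScript sketch (all values copied from the question, no new assumptions): 80 samples at 8 ksps is 10 ms of audio, and the ~13 samples figure is how much new audio arrives while one packet is being encoded.

// Arithmetic from the question, spelled out (illustrative only).
const sampleRate = 8000;        // samples per second (8 ksps)
const samplesPerPacket = 80;    // input samples per encoded packet
const encodeTimeMs = 1.6;       // measured encode time per packet

// How much audio one packet represents:
const packetDurationMs = (samplesPerPacket / sampleRate) * 1000; // 10 ms

// How many new samples arrive while one packet is being encoded:
const samplesDuringEncode = sampleRate * (encodeTimeMs / 1000);  // ~13

console.log(`packet covers ${packetDurationMs} ms of audio`);
console.log(`~${Math.round(samplesDuringEncode)} samples arrive during each encode`);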
