How to convert (RTSP) RTP data from a stream to a JPG image

/* send RTSP PLAY request */
static void rtsp_play(CURL *curl, const char *uri, const char *range)
{
  CURLcode res = CURLE_OK;
  printf("\nRTSP: PLAY %s\n", uri);
  my_curl_easy_setopt(curl, CURLOPT_RTSP_STREAM_URI, uri);
  my_curl_easy_setopt(curl, CURLOPT_RANGE, range);
  my_curl_easy_setopt(curl, CURLOPT_RTSP_REQUEST, (long)CURL_RTSPREQ_PLAY);
  my_curl_easy_perform(curl);
  /* fp = popen("gst-launch-1.0 videotestsrc ! openh264enc ! rtph264pay config-interval=10 pt=96 ! udpsink rtsp://freja.hiof.no:1935/rtplive/definst/hessdalen03.stream", "r"); */
  /* switch off using range again */
  my_curl_easy_setopt(curl, CURLOPT_RANGE, NULL);
}

Follow these steps to create the GStreamer pipeline (a minimal sketch follows the list):
The RTP payloads (H.264 NALUs) received from the UDP socket need to be depayloaded and combined into complete encoded frames.
Each encoded frame needs to be decoded with an H.264 decoder.
The decoded raw frame (YUV*/NV*) is fed to a JPEG encoder, which creates a JPEG image.
Write that JPEG image to a .jpg file.
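As a minimal sketch of those four steps (not from the original post), the whole chain can be built with gst_parse_launch(). The UDP port (5000) and the RTP caps (H.264, payload 96, clock rate 90000) are assumptions and must match whatever the RTSP SETUP/SDP negotiated.
#include <gst/gst.h>

int main(int argc, char *argv[])
{
  gst_init(&argc, &argv);

  GError *err = NULL;
  GstElement *pipeline = gst_parse_launch(
      "udpsrc port=5000 caps=\"application/x-rtp,media=video,"
      "clock-rate=90000,encoding-name=H264,payload=96\" "
      "! rtph264depay ! h264parse ! avdec_h264 "   /* NALUs -> encoded frames -> raw video */
      "! videoconvert ! jpegenc "                  /* raw YUV/NV* -> JPEG */
      "! multifilesink location=frame-%05d.jpg",   /* one .jpg file per frame */
      &err);
  if (!pipeline) {
    g_printerr("Failed to build pipeline: %s\n", err->message);
    g_clear_error(&err);
    return 1;
  }

  gst_element_set_state(pipeline, GST_STATE_PLAYING);

  /* block until an error or end-of-stream is posted on the bus */
  GstBus *bus = gst_element_get_bus(pipeline);
  GstMessage *msg = gst_bus_timed_pop_filtered(bus, GST_CLOCK_TIME_NONE,
      (GstMessageType)(GST_MESSAGE_ERROR | GST_MESSAGE_EOS));
  if (msg)
    gst_message_unref(msg);

  gst_element_set_state(pipeline, GST_STATE_NULL);
  gst_object_unref(bus);
  gst_object_unref(pipeline);
  return 0;
}
If you need frames on demand instead of one .jpg per decoded frame, swap multifilesink for an appsink and pull the JPEG samples from your own code.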

Related

Interpolate silence in Discord.js stream

I'm making a discord bot with Discord.js v14 that records users' audio as individual files and one collective file. As Discord.js streams do not interpolate silence, my question is how to interpolate silence into streams.
My code is based off the Discord.js recording example.
In essence, a privileged user enters a voice channel (or stage), runs /record and all the users in that channel are recorded up until the point that they run /leave.
I've tried using Node packages like combined-stream, audio-mixer, multistream and multipipe, but I'm not familiar enough with Node streams to use the pros of each to fill in the gaps the cons add to the problem. I'm not entirely sure how to go about interpolating silence, either, whether it be through a Transform (likely requires the stream to be continuous, or for the receiver stream to be applied onto silence) or through a sort of "multi-stream" that swaps between piping the stream and a silence buffer. I also have yet to overlay the audio files (e.g., with ffmpeg).
Would it even be possible for a Readable to await an audio chunk and, if none is given within a certain timeframe, push a chunk of silence instead? My attempt at doing so is below (again, based off the Discord.js recorder example):
// CREDIT TO: https://stackoverflow.com/a/69328242/8387760
const SILENCE = Buffer.from([0xf8, 0xff, 0xfe]);
async function createListeningStream(connection, userId) {
// Creating manually terminated stream
let receiverStream = connection.receiver.subscribe(userId, {
end: {
behavior: EndBehaviorType.Manual
},
});
// Interpolating silence
// TODO Increases file length over tenfold by stretching audio?
let userStream = new Readable({
read() {
receiverStream.on('data', chunk => {
if (chunk) {
this.push(chunk);
}
else {
// Never occurs
this.push(SILENCE);
}
});
}
});
/* Piping userStream to file at 48kHz sample rate */
}
As an unnecessary bonus, it would help if it were possible to check whether a user ever spoke or not to eliminate creating empty recordings.
Thanks in advance.
Related:
Record all users in a voice channel in discord js v12
Adding silent frames to a node js stream when no data is received
After a lot of reading about Node streams, the solution I procured was unexpectedly simple.
Create a boolean variable recording that is true when the recording should continue and false when it should stop
Create a buffer to handle backpressure (i.e., when data comes in at a higher rate than it is output)
let buffer = [];
Create a readable stream for which the receiving user audio stream is piped into
// New audio stream (with silence)
let userStream = new Readable({
// ...
});
// User audio stream (without silence)
let receiverStream = connection.receiver.subscribe(userId, {
end: {
behavior: EndBehaviorType.Manual,
},
});
receiverStream.on('data', chunk => buffer.push(chunk));
In that stream's read method, handle recording with a timer that fires every 20 ms, matching the packet rate of the 48 kHz user audio stream
read() {
if (recording) {
let delay = new NanoTimer();
delay.setTimeout(() => {
if (buffer.length > 0) {
this.push(buffer.shift());
}
else {
this.push(SILENCE);
}
}, '', '20m');
}
// ...
}
In the same method, also handle ending the stream
// ...
else if (buffer.length > 0) {
// Stream is ending: sending buffered audio ASAP
this.push(buffer.shift());
}
else {
// Ending stream
this.push(null);
}
If we put it all together:
const NanoTimer = require('nanotimer'); // node
/* import NanoTimer from 'nanotimer'; */ // es6
const { Readable } = require('stream');
const { EndBehaviorType } = require('@discordjs/voice');
const SILENCE = Buffer.from([0xf8, 0xff, 0xfe]);
// true while recording; set to false elsewhere (e.g. by /leave) to end the stream
let recording = true;
async function createListeningStream(connection, userId) {
// Accumulates very, very slowly, but only when user is speaking: reduces buffer size otherwise
let buffer = [];
// Interpolating silence into user audio stream
let userStream = new Readable({
read() {
if (recording) {
// Pushing audio at the same rate as the receiver
// (Could probably be replaced with a standard, less precise timer)
let delay = new NanoTimer();
delay.setTimeout(() => {
if (buffer.length > 0) {
this.push(buffer.shift());
}
else {
this.push(SILENCE);
}
// delay.clearTimeout();
}, '', '20m'); // Push one chunk every 20 ms to match Discord's 20 ms Opus frames of 48 kHz audio
}
else if (buffer.length > 0) {
// Sending buffered audio ASAP
this.push(buffer.shift());
}
else {
// Ending stream
this.push(null);
}
}
});
// Redirecting user audio to userStream to have silence interpolated
let receiverStream = connection.receiver.subscribe(userId, {
end: {
behavior: EndBehaviorType.Manual, // Manually closed elsewhere
},
// mode: 'pcm',
});
receiverStream.on('data', chunk => buffer.push(chunk));
// pipeline(userStream, ...), etc.
}
From here, you can pipe that stream into a fileWriteStream, etc. for individual purposes. Note that it's a good idea to also close the receiverStream whenever recording = false with something like:
connection.receiver.subscriptions.delete(userId);
Likewise, the userStream should also be closed if it isn't, e.g., the first argument of the pipeline method.
As a side note, although outside the scope of my original question, there are many other modifications you can make to this. For instance, you can prepend silence to the audio before piping the receiverStream's data to the userStream, e.g., to make multiple audio streams of the same length:
// let startTime = ...
let creationTime;
for (let i = startTime; i < (creationTime = Date.now()); i++) {
buffer.push(SILENCE);
}
Happy coding!

Swift How to merge video and audio into a one mp4 file

I want to merge video captured by AVCaptureSession() and audio recorded by AVAudioRecorder() into one mp4 file.
I tried this solution
Swift Merge audio and video files into one video
and this Merging audio and video Swift
but neither of them worked. They both gave me the same error: Thread 1: signal SIGABRT.
So what is the right way to merge audio and video ?
UPDATE:
I found where the app crashes every time. It crashes on the line that tries to get an AVAssetTrack from the AVAsset:
func merge(audio:URL, withVideo video : URL){
// create composition
let mutableComposition = AVMutableComposition()
// Create the video composition track.
let mutableCompositionVideoTrack : AVMutableCompositionTrack = mutableComposition.addMutableTrack(withMediaType: AVMediaTypeVideo, preferredTrackID: kCMPersistentTrackID_Invalid)
// Create the audio composition track.
let mutableCompositionAudioTrack : AVMutableCompositionTrack = mutableComposition.addMutableTrack(withMediaType: AVMediaTypeAudio, preferredTrackID: kCMPersistentTrackID_Invalid)
// create media assets and tracks
let videoAsset = AVAsset(url: video)
let videoAssetTrack = videoAsset.tracks(withMediaType: AVMediaTypeVideo)[0] // ** always crashes here ** //
let audioAsset = AVAsset(url: audio)
let audioAssetTrack = audioAsset.tracks(withMediaType: AVMediaTypeAudio)[0] // ** always crashes here ** //
....
}

Decode streaming audio with gstreamer 1.0 and access the waveform data?

The gst version in use is 1.8.1.
Currently I have code that receives a gstreamer encoded stream and plays it through my soundcard. I want to modify it so that my application instead gets access to the raw, uncompressed audio data. This should result in an array of integer sound samples; if I were to plot them I would see the audio waveform (e.g. a perfect tone would be a nice sine wave), and if I appended the most recent array to the previous one received by the callback I wouldn't see any discontinuity.
This is the current playback code:
https://github.com/lucasw/audio_common/blob/master/audio_play/src/audio_play.cpp
I think I need to change the alsasink to an appsink and set up a callback that will get the latest chunk of audio after it has passed through the decoder. This is adapted from https://github.com/jojva/gst-plugins-base/blob/master/tests/examples/app/appsink-src.c :
_sink = gst_element_factory_make("appsink", "sink");
g_object_set (G_OBJECT (_sink), "emit-signals", TRUE,
"sync", FALSE, NULL);
g_signal_connect (_sink, "new-sample",
G_CALLBACK (on_new_sample_from_sink), this);
And then there is the callback:
static GstFlowReturn
on_new_sample_from_sink (GstElement * elt, gpointer data)
{
RosGstProcess *client = reinterpret_cast<RosGstProcess*>(data);
GstSample *sample;
GstBuffer *app_buffer, *buffer;
GstElement *source;
/* get the sample from appsink */
sample = gst_app_sink_pull_sample (GST_APP_SINK (elt));
buffer = gst_sample_get_buffer (sample);
/* make a copy */
app_buffer = gst_buffer_copy (buffer);
/* we don't need the appsink sample anymore */
gst_sample_unref (sample);
/* get source and push new buffer */
source = gst_bin_get_by_name (GST_BIN (client->_sink), "app_source");
return gst_app_src_push_buffer (GST_APP_SRC (source), app_buffer);
}
Can I get at the data in that callback? What am I supposed to do with the GstFlowReturn? If that is for passing data to another pipeline element, I don't want to do that; I'd rather get the data there and be done.
https://github.com/lucasw/audio_common/blob/appsink/audio_process/src/audio_process.cpp
Is the gpointer data passed to that callback exactly what I want (cast to a gint16 array?), or otherwise how do I convert and access it?
The GstFlowReturn is merely a return value for the underlying base classes. If you return an error there, the pipeline probably stops because.. well, there was a critical error.
The cb_need_data events are triggered by your appsrc element. This can be used as a throttling mechanism if needed. Since you probably use the appsrc in pure push mode (as soon as something arrives at the appsink you push it to the appsrc), you can ignore these. You can also explicitly disable these events on the appsrc element. (Or do you still use one?)
The data format in the buffer depends on the caps that the decoder and the appsink agreed on. That is usually the decoder's preferred format. You may have some control over this format depending on the decoder, or you can convert it to your preferred format afterwards. It may be worthwhile to check the format; Float32 is not that uncommon.
I kind of forgot what your actual question was, I'm afraid..
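One way to take the guesswork out of the format mentioned above (a sketch, not part of the original answer): put audioconvert and audioresample in front of the appsink and fix the appsink caps, so the mapped buffers are guaranteed to be interleaved signed 16-bit samples. The sample rate and channel count below are assumptions; pick whatever your application expects.
/* during pipeline setup, after _sink = gst_element_factory_make("appsink", "sink") */
GstCaps *caps = gst_caps_new_simple ("audio/x-raw",
    "format",   G_TYPE_STRING, "S16LE",        /* signed 16-bit, little endian */
    "layout",   G_TYPE_STRING, "interleaved",
    "rate",     G_TYPE_INT,    16000,          /* assumed sample rate */
    "channels", G_TYPE_INT,    1,              /* assumed channel count */
    NULL);
gst_app_sink_set_caps (GST_APP_SINK (_sink), caps);
gst_caps_unref (caps);
/* and link ... ! decoder ! audioconvert ! audioresample ! appsink,
   so the converters can actually produce the requested format */
With the caps pinned like that, map.data in the callback can be treated directly as gint16 samples.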
I can interpret the data out of the modified callback below (there is a script that plots it to the screen); it looks like signed 16-bit samples stored in the uint8 array.
I'm not clear about the proper return value for the callback; there is a cb_need_data callback set up elsewhere in the code that is getting triggered all the time with this code.
static void // GstFlowReturn
on_new_sample_from_sink (GstElement * elt, gpointer data)
{
RosGstProcess *client = reinterpret_cast<RosGstProcess*>(data);
GstSample *sample;
GstBuffer *buffer;
GstElement *source;
/* get the sample from appsink */
sample = gst_app_sink_pull_sample (GST_APP_SINK (elt));
buffer = gst_sample_get_buffer (sample);
GstMapInfo map;
if (gst_buffer_map (buffer, &map, GST_MAP_READ))
{
audio_common_msgs::AudioData msg;
msg.data.resize(map.size);
// TODO(lucasw) copy this more efficiently
for (size_t i = 0; i < map.size; ++i)
{
msg.data[i] = map.data[i];
}
gst_buffer_unmap (buffer, &map);
client->_pub.publish(msg);
}
}
https://github.com/lucasw/audio_common/tree/appsink
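For reference, this is the conventional shape of a "new-sample" handler (a sketch, not code from the thread): pull the sample, map the buffer, use the data, unmap, unref the sample, and return GST_FLOW_OK so the appsink keeps delivering; returning an error code is what would stop the pipeline.
static GstFlowReturn
on_new_sample_from_sink (GstElement * elt, gpointer user_data)
{
  /* take ownership of the sample */
  GstSample *sample = gst_app_sink_pull_sample (GST_APP_SINK (elt));
  if (!sample)
    return GST_FLOW_EOS;                      /* appsink is shutting down */

  GstBuffer *buffer = gst_sample_get_buffer (sample);
  GstMapInfo map;
  if (gst_buffer_map (buffer, &map, GST_MAP_READ))
  {
    /* with the caps fixed to interleaved S16LE, these bytes are gint16 samples */
    const gint16 *samples = (const gint16 *) map.data;
    gsize n_samples = map.size / sizeof (gint16);
    /* ... hand samples / n_samples to the application here (publish, buffer, ...) ... */
    gst_buffer_unmap (buffer, &map);
  }

  /* release the sample and tell the appsink everything went fine */
  gst_sample_unref (sample);
  return GST_FLOW_OK;
}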

How to capture the first 10 seconds of an mp3 being streamed over HTTP

disclaimer: newbie to nodeJS and audio parsing
I'm trying to proxy a digital radio stream through an expressJS app with the help of node-icecast which works great. I am getting the radio's mp3 stream, and via node-lame decoding the mp3 to PCM and then sending it to the speakers. All of this just works straight from the github project's readme example:
var lame = require('lame');
var icecast = require('icecast');
var Speaker = require('speaker');
// URL to a known Icecast stream
var url = 'http://firewall.pulsradio.com';
// connect to the remote stream
icecast.get(url, function (res) {
// log the HTTP response headers
console.error(res.headers);
// log any "metadata" events that happen
res.on('metadata', function (metadata) {
var parsed = icecast.parse(metadata);
console.error(parsed);
});
// Let's play the music (assuming MP3 data).
// lame decodes and Speaker sends to speakers!
res.pipe(new lame.Decoder())
.pipe(new Speaker());
});
I'm now trying to set up a service to identify the music using the Doreso API. The problem is that I'm working with a stream and don't have a file (and I don't know enough yet about readable and writable streams, so learning is slow). I have been looking around for a while at writing the stream (ideally to memory) until I had about 10 seconds' worth. Then I would pass that portion of audio to my API; however, I don't know if that's possible, or where to start with slicing 10 seconds off a stream. I thought about passing the stream to ffmpeg, as it has a -t option for duration and perhaps that could limit it, but I haven't got that to work yet.
Any suggestions to cut a stream down to 10 seconds would be awesome. Thanks!
Updated: Changed my question as I originally thought I was getting PCM and converting to mp3 ;-) I had it backwards. Now I just want to slice off part of the stream while the stream still feeds the speaker.
It's not that easy.. but I managed it this weekend. I would be happy if you could point out how to improve this code. I don't really like the approach of simulating the "end" of a stream. Is there something like "detaching" or "rewiring" parts of a pipe chain of streams in node?
First, you should create your very own Writable Stream class which itself creates a lame encoding instance. This writable stream will receive the decoded PCM data.
It works like this:
var stream = require('stream');
var util = require('util');
var fs = require('fs');
var lame = require('lame');
var streamifier = require('streamifier');
var WritableStreamBuffer = require("stream-buffers").WritableStreamBuffer;
var SliceStream = function(lameConfig) {
stream.Writable.call(this);
this.encoder = new lame.Encoder(lameConfig);
// we need a stream buffer to buffer the PCM data
this.buffer = new WritableStreamBuffer({
initialSize: (1000 * 1024), // start as 1 MiB.
incrementAmount: (150 * 1024) // grow by 150 KiB each time buffer overflows.
});
};
util.inherits(SliceStream, stream.Writable);
// some attributes, initialization
SliceStream.prototype.writable = true;
SliceStream.prototype.encoder = null;
SliceStream.prototype.buffer = null;
// will be called each time the decoded stream emits "data"
// together with a bunch of binary data as a Buffer
SliceStream.prototype.write = function(buf) {
//console.log('bytes recv: ', buf.length);
this.buffer.write(buf);
//console.log('buffer size: ', this.buffer.size());
};
// this method will be invoked when the setTimeout function
// emits the simulated "end" event. Let's encode to MP3 again...
SliceStream.prototype.end = function(buf) {
if (arguments.length) {
this.buffer.write(buf);
}
this.writable = false;
//console.log('buffer size: ' + this.buffer.size());
// fetch binary data from buffer
var PCMBuffer = this.buffer.getContents();
// create a stream out of the binary buffer data
streamifier.createReadStream(PCMBuffer).pipe(
// and pipe it right into the MP3 encoder...
this.encoder
);
// but dont forget to pipe the encoders output
// into a writable file stream
this.encoder.pipe(
fs.createWriteStream('./fooBar.mp3')
);
};
Now you can pipe the decoded stream into an instance of your SliceStream class, like this (in addition to the other pipes):
icecast.get(streamUrl, function(res) {
var lameEncoderConfig = {
// input
channels: 2, // 2 channels (left and right)
bitDepth: 16, // 16-bit samples
sampleRate: 44100, // 44,100 Hz sample rate
// output
bitRate: 320,
outSampleRate: 44100,
mode: lame.STEREO // STEREO (default), JOINTSTEREO, DUALCHANNEL or MONO
};
var decodedStream = res.pipe(new lame.Decoder());
// pipe decoded PCM stream into a SliceStream instance
decodedStream.pipe(new SliceStream(lameEncoderConfig));
// now play it...
decodedStream.pipe(new Speaker());
setTimeout(function() {
// after 10 seconds, emulate an end of the stream.
res.emit('end');
}, 10 * 1000 /*milliseconds*/)
});
Can I suggest using removeListener after 10 seconds? That will prevent future events from being sent through the listener.
var request = require('request'),
fs = require('fs'),
masterStream = request('-- mp3 stream --')
var writeStream = fs.createWriteStream('recording.mp3'),
handler = function(bit){
writeStream.write(bit);
}
masterStream.on('data', handler);
setTimeout(function(){
masterStream.removeListener('data', handler);
writeStream.end();
}, 1000 * 10);

Libav and xaudio2 - audio not playing

I am trying to get audio playing with libav using XAudio2. The XAudio2 code I am using works with an older ffmpeg using avcodec_decode_audio2, but that has been deprecated in favour of avcodec_decode_audio4. I have tried following various libav examples, but can't seem to get the audio to play. Video plays fine (or rather it just plays too fast right now, as I haven't written any sync code yet).
First the audio gets initialized with no errors, then the video, then the packet loop:
while (1) {
//is this packet from the video or audio stream?
if (packet.stream_index == player.v_id) {
add_video_to_queue(&packet);
} else if (packet.stream_index == player.a_id) {
add_sound_to_queue(&packet);
} else {
av_free_packet(&packet);
}
}
Then in add_sound_to_queue:
int add_sound_to_queue(AVPacket * packet) {
AVFrame *decoded_frame = NULL;
int done = AVCODEC_MAX_AUDIO_FRAME_SIZE;
int got_frame = 0;
if (!decoded_frame) {
if (!(decoded_frame = avcodec_alloc_frame())) {
printf("[ADD_SOUND_TO_QUEUE] Out of memory\n");
return -1;
}
} else {
avcodec_get_frame_defaults(decoded_frame);
}
if (avcodec_decode_audio4(player.av_acodecctx, decoded_frame, &got_frame, packet) < 0) {
printf("[ADD_SOUND_TO_QUEUE] Error in decoding audio\n");
av_free_packet(packet);
//continue;
return -1;
}
if (got_frame) {
int data_size;
if (packet->size > done) {
data_size = done;
} else {
data_size = packet->size;
}
BYTE * snd = (BYTE *)malloc( data_size * sizeof(BYTE));
XMemCpy(snd,
AudioBytes,
data_size * sizeof(BYTE)
);
XMemSet(&g_SoundBuffer,0,sizeof(XAUDIO2_BUFFER));
g_SoundBuffer.AudioBytes = data_size;
g_SoundBuffer.pAudioData = snd;
g_SoundBuffer.pContext = (VOID*)snd;
XAUDIO2_VOICE_STATE state;
while( g_pSourceVoice->GetState( &state ), state.BuffersQueued > 60 ) {
WaitForSingleObject( XAudio2_Notifier.hBufferEndEvent, INFINITE );
}
g_pSourceVoice->SubmitSourceBuffer( &g_SoundBuffer );
}
return 0;
}
I can't seem to figure out the problem. I have added error messages in init, opening the video, codec handling, etc. As mentioned before, the XAudio2 code works with an older ffmpeg, so maybe I have missed something with avcodec_decode_audio4?
If this snippet of code isn't enough, I can post the whole thing; these are just the places in the code where I think the problem would be :(
I don't see you accessing decoded_frame anywhere after decoding. How do you expect to get the data out otherwise?
BYTE * snd = (BYTE *)malloc( data_size * sizeof(BYTE));
This also looks very fishy, given that data_size is derived from the packet size. The packet size is the size of the compressed data, it has very little to do with the size of the decoded PCM frame.
The decoded data is located in decoded_frame->extended_data, which is an array of pointers to data planes (see the AVFrame documentation for details). The size of the decoded data is determined by decoded_frame->nb_samples. Note that with recent Libav versions many decoders return planar audio, so different channels live in different data buffers. For many use cases you need to convert that to an interleaved format, where there is just one buffer containing all the channels. Use libavresample for that.
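The planar-to-interleaved conversion described above can be done with libavresample. Below is a minimal sketch (not from the original answer, error handling trimmed): it converts whatever planar format the decoder produced into interleaved signed 16-bit PCM at the frame's own sample rate, which is what the old avcodec_decode_audio2 path handed to XAudio2. The helper name and the assumption that frame->channel_layout is set are mine.
#include <libavcodec/avcodec.h>
#include <libavresample/avresample.h>
#include <libavutil/channel_layout.h>
#include <libavutil/opt.h>
#include <libavutil/samplefmt.h>

/* Convert one decoded (possibly planar) frame to interleaved S16.
 * Returns an av_malloc'ed buffer; *out_size receives its length in bytes. */
static uint8_t *frame_to_interleaved_s16(AVFrame *frame, int *out_size)
{
  /* if the decoder left channel_layout at 0, take it from the codec context instead */
  uint64_t layout = frame->channel_layout;
  int channels = av_get_channel_layout_nb_channels(layout);

  AVAudioResampleContext *avr = avresample_alloc_context();
  av_opt_set_int(avr, "in_channel_layout",  layout,             0);
  av_opt_set_int(avr, "out_channel_layout", layout,             0);
  av_opt_set_int(avr, "in_sample_rate",     frame->sample_rate, 0);
  av_opt_set_int(avr, "out_sample_rate",    frame->sample_rate, 0);
  av_opt_set_int(avr, "in_sample_fmt",      frame->format,      0);
  av_opt_set_int(avr, "out_sample_fmt",     AV_SAMPLE_FMT_S16,  0);
  avresample_open(avr);

  *out_size = av_samples_get_buffer_size(NULL, channels, frame->nb_samples,
                                         AV_SAMPLE_FMT_S16, 1);
  uint8_t *out = av_malloc(*out_size);

  /* same rate in and out, so the sample count stays frame->nb_samples */
  avresample_convert(avr, &out, *out_size, frame->nb_samples,
                     frame->extended_data, frame->linesize[0], frame->nb_samples);

  avresample_close(avr);
  avresample_free(&avr);
  return out;
}
The returned buffer and *out_size are what should go into g_SoundBuffer.pAudioData and g_SoundBuffer.AudioBytes, instead of data_size bytes copied from the packet.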
