I want to transcode an audio stream from YouTube (webm) to PCM on the fly using a buffer, but ffmpeg can only process the first received buffer, because the subsequent buffers lack metadata. Is there any way to make this work? I've thought about attaching metadata to the subsequent chunks but couldn't get that to work. Maybe there's a better approach?
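One approach that usually avoids the problem is to keep a single long-lived ffmpeg process and write every chunk to its stdin, so ffmpeg sees one continuous webm stream instead of disconnected buffers; the demuxer then only needs the metadata once, at the start. A minimal Python sketch of the idea, where get_webm_chunks() and handle_pcm() are hypothetical stand-ins for however you receive the YouTube buffers and consume the decoded audio:

    import subprocess
    import threading

    def pump_pcm(stream):
        """Drain decoded PCM from ffmpeg's stdout so the pipe never fills up."""
        while True:
            pcm = stream.read(4096)
            if not pcm:
                break
            handle_pcm(pcm)  # hypothetical consumer of the decoded audio

    ffmpeg = subprocess.Popen(
        ["ffmpeg",
         "-i", "pipe:0",   # read the webm stream from stdin
         "-f", "s16le",    # emit raw signed 16-bit little-endian PCM
         "-ar", "48000",   # sample rate
         "-ac", "2",       # channel count
         "pipe:1"],        # write PCM to stdout
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
    )

    threading.Thread(target=pump_pcm, args=(ffmpeg.stdout,), daemon=True).start()

    # Write every received buffer to the same stdin: ffmpeg sees one
    # continuous webm stream, so only the first chunk has to carry metadata.
    for chunk in get_webm_chunks():   # hypothetical chunk source
        ffmpeg.stdin.write(chunk)

    ffmpeg.stdin.close()   # EOF tells ffmpeg the stream is finished
    ffmpeg.wait()

Reading stdout on a separate thread matters: writing and reading the same process synchronously can deadlock once either pipe's buffer fills.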
Related
I am using moviepy (Python) to read the video and audio frames of a video and, after making some changes, write them back to a video file, say new.avi. To preserve the changes and avoid compression, I am using codec='rawvideo' in the write_videofile function. But when I read the video and audio frames back, their counts are different from what was written; they are usually higher.
Can anybody tell me the reason? Is it because of the ffmpeg build being used, or something else? Does it always happen, or is there some problem on my machine? Thank you :-)
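It is hard to say from the description alone, but writers commonly pad the last audio buffer and round durations to whole frames, so a small increase is expected; counting explicitly on both sides of the round trip shows whether it is more than that. A rough sketch of that check with moviepy (file names are made up, and counts() is a hypothetical helper); note that the audio sample count depends on the fps you read the audio with, so compare both files at the same rate:

    from moviepy.editor import VideoFileClip

    def counts(path, audio_fps=44100):
        """Count video frames and audio samples in a clip."""
        clip = VideoFileClip(path)
        n_video = sum(1 for _ in clip.iter_frames())
        # to_soundarray resamples at audio_fps; loading it all into memory
        # is fine for short test clips.
        n_audio = clip.audio.to_soundarray(fps=audio_fps).shape[0]
        clip.close()
        return n_video, n_audio

    original = VideoFileClip("input.mp4")              # made-up file name
    original.write_videofile("new.avi",
                             codec="rawvideo",         # uncompressed video
                             audio_codec="pcm_s16le")  # uncompressed audio
    original.close()

    print("original: ", counts("input.mp4"))
    print("rewritten:", counts("new.avi"))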
I want to add a 5.1 .flac audio track to a .ts file that already has three audio tracks. I tried with tsMuxeR and ffmpeg without success. tsMuxeR does not recognize the .flac track, and with ffmpeg everything seems to work until the very end, but when I check the file the .flac audio track is not included in the output.ts. The .flac track is about 3GB and its length is around two and a half hours.
Thank you so much.
I don't think you'll find any existing software that maps FLAC into an MPEG-2 Transport Stream.
This gives you an idea of the sort of issues you run into: https://xiph.org/flac/ogg_mapping.html
Even if you came up with a reasonable way of mapping FLAC into an MPEG-2 Transport Stream, nothing else would be able to read it.
Unless there is a specified way of mapping FLAC into an MPEG-2 Transport Stream, you are on your own.
But PCM is supported in an MPEG-2 Transport Stream (Blu-ray uses it, for example).
I'd use ffmpeg to transcode your audio from FLAC to PCM and then mux it into your transport stream.
That transcode (FLAC to PCM) is lossless, since FLAC is itself a lossless codec.
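A sketch of that two-step route, driving ffmpeg from Python (file names are made up, and whether pcm_bluray can be encoded and muxed this way depends on your ffmpeg build, so treat the second command as a starting point rather than a recipe):

    import subprocess

    # Step 1: decode FLAC to PCM in a WAV wrapper - this is lossless.
    subprocess.run(
        ["ffmpeg", "-i", "audio.flac", "-c:a", "pcm_s16le", "audio.wav"],
        check=True,
    )

    # Step 2: mux the PCM track into the transport stream alongside the
    # three existing audio tracks. Blu-ray-style PCM (pcm_bluray) is the
    # form of PCM players expect in a transport stream; only newer ffmpeg
    # builds can encode it, so check `ffmpeg -encoders` first.
    subprocess.run(
        ["ffmpeg",
         "-i", "input.ts", "-i", "audio.wav",
         "-map", "0", "-map", "1:a",  # keep all original streams, add the new audio
         "-c", "copy",                # don't re-encode the existing streams
         "-c:a:3", "pcm_bluray",      # the added track becomes the 4th audio stream
         "output.ts"],
        check=True,
    )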
Audio playback: all the examples I've seen for Audio or Media in JavaFX 8 have a source URI, either http: or file:. In my case the bytes will arrive as a ByteArrayInputStream, and I suspect a stream like this is ultimately what the Audio and Media classes process internally. The source would start life as compressed audio, which I would decompress and then feed to the Audio object. I don't see how to feed a byte array into a JavaFX audio object. Would someone please point me at a solution?
Thanks.
I would like to know if any of you have tried to stream RAW audio data. I tried to do it using a WAV file, but that format does not support streaming. Could anyone suggest a container for this (other than Matroska)? :)
Thank you
I discovered OggPCM. I think it is the only reasonably standard container that allows you to stream PCM audio.
A WAV file does not allow you to stream raw data (its RIFF header contains a length field that has to be known up front), so I eliminated it from the list. I wanted to stream some audio data through my wireless network, so bandwidth was not an issue.
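For what it's worth, when both endpoints are your own and bandwidth is not a concern, an alternative is to skip the container entirely and push fixed-format PCM over a plain socket, agreeing on sample rate, channel count, and sample format out of band. A minimal sender sketch in Python (the host, port, file name, and audio parameters are all made up):

    import socket

    HOST, PORT = "192.168.1.50", 5000   # made-up receiver address
    CHUNK = 4096                        # bytes per send

    # Both sides must agree out of band: e.g. 48 kHz, stereo, s16le PCM.
    with socket.create_connection((HOST, PORT)) as sock, \
         open("audio.pcm", "rb") as pcm:    # headerless raw PCM file
        while True:
            chunk = pcm.read(CHUNK)
            if not chunk:
                break
            sock.sendall(chunk)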
I am trying to capture video and audio using AVCaptureSession. I am done with the video capture: I convert the frames into pixel buffers and play the captured video on the server side using ffmpeg and an RTMP server. But how can I convert the audio into data and play it on the server side where the data is received? I would also like to know what format the captured audio is in.
Thanks all,
MONISH
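For what it's worth, AVCaptureSession's audio output (AVCaptureAudioDataOutput) hands you CMSampleBuffers containing Linear PCM, so the server can treat the received bytes as raw LPCM and let ffmpeg wrap or transcode them, provided it is told the exact sample rate, channel count, and sample format. A sketch of the receiving end in Python (the port, file name, and audio parameters are assumptions; read the real values from the AudioStreamBasicDescription of your sample buffers on the iOS side):

    import socket
    import subprocess

    # Assumed capture format - confirm against the sample buffers'
    # AudioStreamBasicDescription; iOS capture may also deliver float PCM.
    SAMPLE_RATE, CHANNELS = "44100", "1"

    ffmpeg = subprocess.Popen(
        ["ffmpeg",
         "-f", "s16le",           # raw signed 16-bit little-endian PCM
         "-ar", SAMPLE_RATE,
         "-ac", CHANNELS,
         "-i", "pipe:0",          # raw audio arrives on stdin
         "-c:a", "aac",           # encode to something RTMP-friendly
         "received_audio.m4a"],   # made-up output; could be an rtmp:// URL
        stdin=subprocess.PIPE,
    )

    server = socket.create_server(("0.0.0.0", 5001))   # made-up port
    conn, _ = server.accept()
    while True:
        data = conn.recv(4096)
        if not data:
            break
        ffmpeg.stdin.write(data)
    ffmpeg.stdin.close()
    ffmpeg.wait()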