GStreamer: read raw image data from stdout and convert it to an h264 stream - linux

I have a program that receives raw image data of known width, height, and format from a USB camera, and writes each frame to stdout. The format is BGR24.
I need to transfer it as an h264 stream using GStreamer, but I have been unable to find out how to encode the raw video stream.
For example, using ffmpeg this is done like this:
my_video_reader | ffmpeg -f rawvideo -pix_fmt bgr24 -s:v 752x480 -i - -f h264 - | <send data here>

Try the pipeline below; it converts raw YUV frames into an H.264 stream:
gst-launch-1.0 videotestsrc ! "video/x-raw,format=I420,width=352,height=288,framerate=30/1" ! videoparse width=352 height=288 framerate=30/1 ! x264enc bitrate=1024 ref=4 key-int-max=20 ! video/x-h264,stream-format=byte-stream,profile=main ! filesink location=v1
If you want to convert a file instead, simply replace videotestsrc with filesrc location=filename.
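For the BGR24-from-stdin case in the question, a minimal sketch would use fdsrc to read stdin and rawvideoparse to attach the caps (element availability and exact property names depend on your GStreamer version, so verify them with gst-inspect-1.0):
my_video_reader | gst-launch-1.0 fdsrc fd=0 ! rawvideoparse format=bgr width=752 height=480 framerate=30/1 ! videoconvert ! x264enc tune=zerolatency bitrate=1024 ! video/x-h264,stream-format=byte-stream ! filesink location=out.h264
videoconvert is needed because x264enc does not accept BGR input directly; replace filesink with whatever transport sends the data on.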

Related

ffmpeg equivalent for sox -t ima

I am trying to use ffmpeg to combine one audio file (ADPCM) and one video file (h264) into a single mp4. The video file conversion works fine, but ffmpeg chokes on guessing the audio input. I can't figure out how to tell ffmpeg which parameters to use to decode the raw audio file.
Currently I first run sox to convert raw audio to wav:
sox -t ima -r 8000 audio.raw audio.wav
... then feed audio.wav from sox as the ffmpeg input:
ffmpeg -i video.raw -i audio.wav movie.mp4
I am trying to avoid the sox step and use audio.raw in ffmpeg directly.
Thank you
Since your audio is headerless, you should tell ffmpeg the sample format and, optionally, the sample rate and channel count, e.g.:
ffmpeg -i video.raw -f s16le -ar 22050 -ac 1 -i audio.raw movie.mp4
To check the supported PCM formats, you can use this command:
ffmpeg -formats 2>&1 | grep -i pcm
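Since the raw audio in the question is IMA ADPCM rather than plain PCM, it may also be worth listing the available ADPCM decoders (whether your build can read a raw IMA ADPCM stream directly is an assumption to verify):
ffmpeg -decoders 2>&1 | grep -i adpcm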

ffmpeg to calculate audio/visual difference between compressed and non-compressed video

I'm trying to calculate the audio + visual difference between a harshly compressed video file and one that hasn't been compressed.
I'm using pipes because ultimately I want this to take its source from a camera stream.
I've managed to get the video results that I'm looking for, but I'm struggling with the audio.
I've added a line to invert the phase of the compressed audio, so that when they add up in the blend they should almost cancel each other out, but that doesn't happen.
ffmpeg -i input.avi -f avi -c:v libxvid -qscale:v 30 -c:a wmav1 - | \
ffmpeg -i - -f avi -af "aeval='-val(0)':c=same" - | \
ffmpeg -i input.avi -i - -filter_complex "blend=all_mode=difference" -c:v libx264 -crf 18 -f avi - | \
ffplay -
I can still hear all the audio, when what I should be hearing is only compression artifacts. Thanks.
To preface, I'm not sure your method would identify audio compression 'artifacts'.
Your command doesn't perform any audio comparison; it only inverts a single channel. Also, the audio and video are compressed twice, and the last ffmpeg command receives the default AVI codecs, mpeg4 and mp3.
Use:
ffmpeg -i input.avi -f matroska -c:v libxvid -qscale:v 30 -c:a wmav1 - |\
ffmpeg -i input.avi -i - -filter_complex "[0][1]blend=all_mode=difference;[1]aselect=gt(n\,0),asetpts=PTS-STARTPTS[1a];[0][1a]amerge,aeval=val(0)-val(1):c=mono" -c:v rawvideo -c:a pcm_s16le -f matroska - |\
ffplay -
I assume your audio is mono. If your audio has N channels, your aeval will need N expressions, where the Mth expression is val(M-1)-val(N+M-1).
I also trim out the first encoded audio frame in order to mitigate encoder delay that Paul mentioned, and it seems to work here.
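For example, with stereo input (N = 2), the formula above should give:
aeval=val(0)-val(2)|val(1)-val(3):c=stereo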
There might be some delay introduced with encoded audio samples. Also your command is incorrect.

FFmpeg Creating mjpeg with boundary string

I am trying to play a real-time MJPEG stream, generated by FFmpeg from my laptop webcam, in an HTML5 "img" tag. When I open my website, only the first JPEG displays. I think the cause is that my MJPEG stream is missing a boundary string. Is it possible to stream MJPEG with boundary strings using FFmpeg?
My cmd command: ffmpeg -f dshow -s 1280x720 -i video="HD WebCam" -f mjpeg -s 1280x720 -b 6000k pipe:1
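As a possible direction, assuming a reasonably recent FFmpeg build: the mpjpeg muxer wraps each JPEG in a multipart MIME part separated by a boundary string, which is what a browser expects behind an "img" tag, e.g.:
ffmpeg -f dshow -s 1280x720 -i video="HD WebCam" -f mpjpeg -q:v 5 pipe:1
The HTTP response serving this stream still needs a matching multipart/x-mixed-replace Content-Type header declaring the boundary.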

How to produce Live video and audio streaming (not VoD) with ffmpeg?

I want to produce a live audio/video stream from a local file.
I tried the following:
ffmpeg -re -thread_queue_size 4 -i source_video_file.ts -strict -2
-vcodec copy -an -f rtp rtp://localhost:10000 -acodec copy -vn -sdp_file saved_sdp_file -f rtp rtp://localhost:20000
and then:
ffplay saved_sdp_file
It seems to work fine, but it behaves like video on demand, because I can replay the stream with ffplay whenever I want.
But I need ffplay to show video/audio only while the ffmpeg streaming instance (the first command above) is running.
How do I achieve this?
Thanks!
This code works for live video streaming:
proc liveStreaming {} {
    global logFile
    # launch ffplay in the background to display the live stream, logging output to $logFile
    exec ffplay -f dshow -i {video=Integrated Webcam} >& $logFile &
}
liveStreaming
Making use of ffmpeg with the following code also works:
proc liveStreaming {} {
    # run ffmpeg, capturing the webcam and displaying it via the SDL output device
    exec ffmpeg -f dshow -i {video=Integrated Webcam} -f sdl2 -
}
liveStreaming
You can also make use of "sdl" if sdl2 doesn't work.
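On the original RTP question: the saved SDP file only describes the session (addresses, ports, codecs); the media itself travels over RTP, so ffplay can only render it while the first ffmpeg command is running. A sketch of the player side, assuming an ffplay build that requires a protocol whitelist for SDP input:
ffplay -protocol_whitelist file,rtp,udp -i saved_sdp_file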

How to record audio stream using ffmpeg?

I have a problem using ffmpeg:
When I try to record video+audio from my webcam, the result contains only the video stream, without any audio at all.
I have tried different codecs, with no luck.
Maybe someone can give me advice?
ffmpeg -f dshow -i video="Logitech HD Webcam C270" -r 25 -s 800x600 -acodec libmp3lame -vcodec mpeg4 -b 3000k -f avi D:\1.avi
Btw: VirtualDub grabs both well.
Thanks.
Assuming that you have installed the driver and codecs, use something like:
ffmpeg -f dshow -i video="Logitech HD Webcam C270" [path]out.mp4
A short explanation is given in the "capture a webcam input" guide. For using DirectShow, see these examples.
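Note that the command above still has no audio input. A sketch that also captures the microphone through the same DirectShow input (the audio device name here is an assumption; list your actual devices first):
ffmpeg -list_devices true -f dshow -i dummy
ffmpeg -f dshow -i video="Logitech HD Webcam C270":audio="Microphone (Realtek Audio)" -vcodec mpeg4 -b:v 3000k -acodec libmp3lame out.avi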