How to rename or define multiple files - gstreamer - rename

I'm a beginner with Raspbian, trying to do some time-lapse.
My camera uploads via FTP directly to the Raspberry Pi, with file names in this format:
192.168.1.140_01_20160118205122254_TIMING.jpg
192.168.1.140_01_20160118205222260_TIMING.jpg
192.168.1.140_01_20160118205322262_TIMING.jpg
The IP camera uploads one image every minute.
I'm using GStreamer to build the time-lapse, but I don't know how to specify the input files:
gst-launch-1.0 -e multifilesrc location="192.168.1.140???.jpg" ! image/jpeg, framerate=12/1 ! \
decodebin ! video/x-raw, width=1296, height=976 ! progressreport name=progress ! \
omxh264enc target-bitrate=15000000 control-rate=variable ! video/x-h264, profile=high ! \
h264parse ! mp4mux ! filesink location=test.mp4
Would it be possible to keep the original output from the camera, and if so what should the pattern be? 192.168.1.140_01_???.jpg?
Or would it be better to rename the output to something like timelapse_0000.jpg, timelapse_0001.jpg and so on? Then I could use timelapse_%04d.jpg.
In that case, how can I do the renaming?
I'm pretty much lost here, so I hope to get some hints.
Thanks

Yes, it would be best to rename the files to sequentially numbered names so that you can feed them to GStreamer's multifilesrc easily. You can use a bash script to do this, as shown here: Renaming files in a folder to sequential numbers
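A minimal sketch of such a rename script (this assumes the camera's zero-padded timestamp names shown above, so a plain shell glob already sorts them chronologically):
a=0
for f in 192.168.1.140_01_*_TIMING.jpg; do
  # printf produces the zero-padded sequential name, e.g. timelapse_0000.jpg
  mv -- "$f" "$(printf 'timelapse_%04d.jpg' "$a")"
  a=$((a+1))
done
After that, multifilesrc location="timelapse_%04d.jpg" in the pipeline above should pick the frames up in order.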

Related

How to capture a raw image (.png) using a nvargus camera in gstreamer

I am trying to capture a raw 4K image for my AI application using the 4K camera shown here. I want to capture a frame every 5 seconds and store it as a .png file, which I will later run through my neural network for detection. I know the command to record 4K video in raw format (.mkv); however, I am not able to capture a single image (frame) at 3840x2160 resolution.
There is a sample command:
gst-launch-1.0 nvarguscamerasrc sensor-id=0 num-buffers=1 ! "video/x-raw(memory:NVMM),format=(string)NV12, width=(int)3840, height=(int)2160" ! nvjpegenc ! filesink location=test.jpg
The above command works, but it only stores a JPEG, which is around 1 MB in size. This is not very clear, and I want the PNG format, which is more detailed. I tried changing the extension in the filename, but it did not work. I am using a Jetson Xavier NX.
EDIT
I have tried to change the encoding by using the following command:
gst-launch-1.0 nvarguscamerasrc sensor-id=0 num-buffers=1 ! "video/x-raw(memory:NVMM), format=(string)NV12, width=(int)3840, height=(int)2160" ! pngenc ! filesink location=test1.png
However, I am getting the following error:
WARNING: erroneous pipeline: could not link nvarguscamerasrc0 to pngenc0, pngenc0 can't handle caps video/x-raw(memory:NVMM), format=(string)NV12, width=(int)3840, height=(int)2160
You would just need to copy the image from Argus in NVMM memory into system memory. The nvvidconv plugin may be used for that:
gst-launch-1.0 nvarguscamerasrc sensor-id=0 num-buffers=1 ! "video/x-raw(memory:NVMM), format=(string)NV12, width=(int)3840, height=(int)2160" ! nvvidconv ! pngenc ! filesink location=test1.png
However, Argus will auto-tune many parameters unless told otherwise, so the first frame may be dark depending on your scene. In that case, you may capture 21 frames and use multifilesink with max-files=1 so that after one second (at 21 fps) you keep only the 21st image, and then convert it to PNG:
gst-launch-1.0 nvarguscamerasrc sensor-id=0 num-buffers=21 ! 'video/x-raw(memory:NVMM), format=NV12, width=3840, height=2160, framerate=21/1' ! nvvidconv ! video/x-raw,format=RGBA ! multifilesink location=test1.rgba max-files=1
gst-launch-1.0 filesrc location=test1.rgba ! videoparse format=rgba width=3840 height=2160 framerate=0/1 ! pngenc ! filesink location=test1.png
Note that pngenc is not very fast on Jetson.

gstreamer wrong colors when converting h264 to raw RGB

I stream from one computer using this command:
gst-launch-1.0 -e v4l2src do-timestamp=true ! video/x-h264,width=1296,height=730,framerate=30/1 ! h264parse ! rtph264pay config-interval=1 ! gdppay ! udpsink host=192.168.1.116 port=5000
So the output is H.264 in YU12 format. I need it in raw RGB, so on the receiver side I use:
gst-launch-1.0 -v udpsrc port=5000 ! gdpdepay ! rtph264depay ! avdec_h264 ! decodebin ! videoconvert ! video/x-raw,format=\(string\)RGB ! videoconvert ! fpsdisplaysink sync=false text-overlay=true
This results in an image with the correct colors.
However, when I pipe this output to another program (I tried a custom one which converts RGB frames to textures, and also ffplay with the pix_fmt rgb24 parameter), the colors are wrong and the picture is shifted in some weird way.
What is weird: when I tried BGR, the red color became correct in the second output, while the fpsdisplaysink output didn't change.
I am using (gst-launch-1.0 --version):
gst-launch-1.0 version 1.4.5
GStreamer 1.4.5
Any help is appreciated.
As noted in the comments: the "-q" option is needed to prevent gst-launch from spitting debug info into the stdout pipe.
Ok, funny story: it looks like when you specify video size in ffplay you use HEIGHTxWIDTH, and in GStreamer you use WIDTHxHEIGHT. This command works fine:
gst-launch-1.0 -q videotestsrc pattern=ball ! video/x-raw,height=320,width=240,framerate=30/1,format=RGB ! fdsink | ffplay -f rawvideo -pixel_format rgb24 -video_size 240x320 -i -
If the colors are shifted you probably have an RGB mixed up with a BGR somewhere.
You can get a list of all the ffplay pixel formats like this:
ffplay -pix_fmts
And the GStreamer pixel formats that videoconvert supports are here:
gst-inspect-1.0 videoconvert
So it turned out it's some kind of piping problem. I am not sure why, but piping via stdout to another program just shifts everything; it looks like the frames start on the wrong byte or something. I even got to the point where the video showed a moving-picture effect, with the picture shifted a bit more in each frame. It is not really a matter of the colorspace used; it happens with every single one.
I am not sure how to remove this problem, so the solution I am posting is not how to remove it but how to avoid it.
Stdout doesn't work correctly, but saving to a file, for instance, does. So I tried using a named pipe, the classic mkfifo.
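The pipe has to be created first (the name pipe here is just an assumption matching the filesink location below):
mkfifo pipe
Then, doing something like this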
gst-launch-1.0 -v udpsrc port=5000 ! gdpdepay ! rtph264depay ! avdec_h264 ! decodebin ! videoconvert ! video/x-raw,height=730,width=1296,framerate=25/1,format=RGB ! videoconvert ! filesink sync=false location=pipe
and then either opening the pipe, or simply redirecting it like
cat pipe | program -
makes it work like a charm. No wrong colors, no shifted picture.
I am not sure what the difference is between named pipes and stdout piping in Linux (I simply never had enough time to study them); I just once read that there is less overhead in named ones.

How to play a raw audio file using gst-launch?

I'm using gst-launch-0.10.
I created a PCM file (at least, I think I have) from an MP3 file using the command:
gst-launch-0.10 filesrc location=my-sound.mp3 ! mad ! audioresample ! audioconvert ! 'audio/x-raw-int, rate=8000, channels=1, endianness=4321, width=16, depth=16, signed=true' ! filesink location=out.raw
I now have an out.raw file.
To test if everything worked, I'd like to play it back.
I tried this (among other things):
gst-launch-0.10 filesrc location=out.raw ! capsfilter caps="audio/x-raw-int, rate=8000, channels=1, endianness=4321, width=16, depth=16, signed=true" ! alsasink can-activate-pull=true
but I get this error every time:
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
ERROR: from element /GstPipeline:pipeline0/GstCapsFilter:capsfilter0: Filter caps do not completely specify the output format
Additional debug info:
gstcapsfilter.c(393): gst_capsfilter_prepare_buf (): /GstPipeline:pipeline0/GstCapsFilter:capsfilter0:
Output caps are unfixed: EMPTY
ERROR: pipeline doesn't want to preroll.
Setting pipeline to NULL ...
Freeing pipeline ...
"Filter caps do not completely specify the output format"? What is missing here?
Well, adding an audioconvert element before alsasink made it work. According to gst-inspect, the audio format specified was compatible with ALSA, so I guess it was a problem with the sound card, and audioconvert somehow converted the data to something my sound card could handle. Just a guess.
I also removed the can-activate-pull=true option; the sound quality with this option activated was very bad. I'd like to understand why.
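For reference, the working playback pipeline would then look something like this (the same caps as above, with audioconvert added and can-activate-pull dropped):
gst-launch-0.10 filesrc location=out.raw ! capsfilter caps="audio/x-raw-int, rate=8000, channels=1, endianness=4321, width=16, depth=16, signed=true" ! audioconvert ! alsasink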

gstreamer split multi-channel wav file into separate channels and encode each channel as mp3, alac etc and save to file

I need to split a multi-channel wav file and encode each channel into mp3 files.
I know about the deinterleave plugin for GStreamer, but I am not sure how to use it on a wav file, or how to encode each channel stream.
I would prefer a GStreamer (or ffmpeg) based solution, as I need to limit I/O; that is, I don't want the intermediate single-channel wav files to be written to storage.
ffmpeg can be used for this in this way. But neither of those switches is available in the Ubuntu ffmpeg build.
snsonic's solution threw the following error:
(gst-launch-0.10:2218): GStreamer-WARNING **: Failed to load plugin '/usr/lib/gstreamer-0.10/libgstpng.so': libpng12.so.0: cannot open shared object file: No such file or directory
Setting pipeline to PAUSED ...
Pipeline is PREROLLING ...
ERROR: from element /GstPipeline:pipeline0/GstWavParse:wavparse0: Internal data flow error.
Additional debug info:
gstwavparse.c(1982): gst_wavparse_loop (): /GstPipeline:pipeline0/GstWavParse:wavparse0:
streaming task paused, reason not-linked (-1)
ERROR: pipeline doesn't want to preroll.
Setting pipeline to NULL ...
Freeing pipeline ...
(gst-launch-0.10:2218): GStreamer-CRITICAL **: gst_caps_unref: assertion `GST_CAPS_REFCOUNT_VALUE (caps) > 0' failed
When I tested with a single-channel wav file without deinterleave, the mp3 file was created, but it contained only noise.
Do you know the number of channels in advance?
If you use gst-launch you can do something like:
gst-launch-1.0 filesrc location="xx.wav" ! wavparse ! deinterleave name=d \
d.src_0 ! <encoder> ! filesink location="out1.mp3" \
d.src_1 ! <encoder> ! filesink location="out2.mp3"
and so on. If you don't know the number of channels beforehand, you'll need to write some code.
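As a concrete sketch for a stereo file (lamemp3enc as the encoder, plus a queue and audioconvert per branch, are my assumptions, not part of the answer above):
gst-launch-1.0 filesrc location="xx.wav" ! wavparse ! deinterleave name=d \
d.src_0 ! queue ! audioconvert ! lamemp3enc ! filesink location="out1.mp3" \
d.src_1 ! queue ! audioconvert ! lamemp3enc ! filesink location="out2.mp3"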
FFmpeg could probably do what you're asking.
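For example, something like this (an untested sketch; -map_channel is presumably one of the switches the question says is missing from the Ubuntu build, so this needs a reasonably recent ffmpeg):
ffmpeg -i input.wav -map_channel 0.0.0 out1.mp3 -map_channel 0.0.1 out2.mp3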
You don't have to know in advance how many channels the wav file has.
You need sox installed.
#!/bin/bash
INPUT="$1"
fbname=$(basename "$INPUT" .wav)
# Number of channels, as reported by sox
N=$(soxi -c "$INPUT")
for i in $(seq 1 "$N")
do
  # Extract channel $i into its own wav file, encode it, then remove the temporary wav
  sox "$INPUT" "${fbname}-${i}.wav" remix $i
  avconv -i "${fbname}-${i}.wav" -f mp2 "${fbname}-${i}.mp3"
  rm "${fbname}-${i}.wav"
done
You may want to replace avconv with ffmpeg, depending on the distribution you are working on.

Segmenting video by GStreamer

gst-launch-0.10 v4l2src ! videorate ! x264enc ! avimux ! filesink location=result.avi
After executing the command I have the video "result.avi".
I need "2012-04-22_15-30-00.avi", "2012-04-22_15-31-00.avi", etc. How do I do that?
Thanks.
You can:
pause recording, rename the recorded file, and restart (this will lose a handful of frames), or
use multifilesink (no frame loss, but a simple naming pattern like file_0001.avi, file_0002.avi, and you need signalling to switch to a new file, e.g. an EOS timer).
There are more ways, but it quickly gets more complicated.
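For what it's worth, on newer GStreamer (1.x) the splitmuxsink element handles this kind of time-based segmenting directly; it cuts on key frames and writes complete, playable files, though with a sequential rather than timestamped naming pattern. A rough sketch (my addition, not from the answer above):
gst-launch-1.0 v4l2src ! videorate ! videoconvert ! x264enc key-int-max=30 ! h264parse ! splitmuxsink location=video%04d.mp4 max-size-time=60000000000
max-size-time is in nanoseconds, so 60000000000 gives one-minute segments; the timestamped names from the question would still need a rename step afterwards.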
