Get image from CSI camera on Google Dev Board - linux

I am trying to capture an image with the Coral camera on a Dev Board Mini, but I am having trouble constructing the right GStreamer pipeline. I have tried these:
gst-launch-1.0 v4l2src device='/dev/video1' num-buffers=1 ! video/x-raw,format=YUY2,width=640,height=480,framerate=30/1 ! jpegenc ! filesink location=image.jpg
gst-launch-1.0 v4l2src device='/dev/video1' num-buffers=1 ! video/x-raw,format=YUY2,width=640,height=480,framerate=30/1 ! videoconvert ! jpegenc ! filesink location=image.jpg
but I only get a green image. How do I construct the proper pipeline?
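One common cause of an all-green capture is grabbing only the very first buffer, before the sensor's auto-exposure and white balance have settled. A sketch that lets the camera warm up and writes one JPEG per frame, keeping the last file written (the num-buffers value and the multifilesink pattern here are just illustrative):
gst-launch-1.0 v4l2src device='/dev/video1' num-buffers=30 ! video/x-raw,format=YUY2,width=640,height=480,framerate=30/1 ! videoconvert ! jpegenc ! multifilesink location=frame-%02d.jpg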

Related

Using videobalance to adjust contrast and brightness in gstreamer pipeline saving camera stream to file

I have a working gstreamer pipeline, using videobalance to adjust the contrast and brightness of a camera stream, the output of which is displayed on screen:
gst-launch-1.0 nvarguscamerasrc sensor-id=0 saturation=0 !
"video/x-raw(memory:NVMM),width=1920,height=1080,framerate=30/1" !
nvvidconv ! videobalance contrast=1.5 brightness=-0.3 ! nvoverlaysink
I want to do the same again, but this time record the camera stream to a file. I tried adding the videobalance element to the pipeline suggested by the authors of the drivers I'm using (which works fine otherwise):
gst-launch-1.0 nvarguscamerasrc sensor-id=0 saturation=0 !
"video/x-raw(memory:NVMM),width=1920,height=1080,framerate=30/1" !
nvv4l2h264enc ! videobalance contrast=1.5 brightness=-0.3 ! h264parse !
mp4mux ! filesink location=test.mp4 -e
But, I get the error:
WARNING: erroneous pipeline: could not link nvv4l2h264enc0 to videobalance0
Any suggestions for where I'm going wrong and/or possible solutions would be greatly appreciated.
NVIDIA encoders output compressed H.264 into NVMM memory, so videobalance, which works on raw video in system memory, can't be linked after the encoder. Apply videobalance before encoding instead, using nvvidconv to copy the frames out of NVMM memory and back in:
gst-launch-1.0 nvarguscamerasrc sensor-id=0 saturation=0 !
"video/x-raw(memory:NVMM),width=1920,height=1080,framerate=30/1" !
nvvidconv ! videobalance contrast=1.5 brightness=-0.3 !
nvvidconv ! "video/x-raw(memory:NVMM)" ! nvv4l2h264enc ! h264parse !
mp4mux ! filesink location=test.mp4 -e
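To see why the original order can't link, gst-inspect-1.0 prints each element's pad capabilities: nvv4l2h264enc outputs compressed video/x-h264, while videobalance only accepts raw video (this assumes a Jetson-style board with the NVIDIA plugins installed):
gst-inspect-1.0 nvv4l2h264enc
gst-inspect-1.0 videobalance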

udpsink doesn't seem to stream anything but filesink works

I am having trouble streaming a PulseAudio monitor via RTP to an audio player like VLC, or to gst-launch with udpsrc.
This command works, and the file contains the audio that is currently being played:
gst-launch-1.0 -v pulsesrc device="alsa_output.pci-0000_00_1b.0.analog-stereo.monitor" ! opusenc ! oggmux ! filesink location=test.ogg
But when I use this:
gst-launch-1.0 -v pulsesrc device="alsa_output.pci-0000_00_1b.0.analog-stereo.monitor" ! opusenc ! rtpopuspay ! udpsink host=0.0.0.0 port=4000
VLC (on an Android phone) tells me that it cannot play the stream with the URI rtp://ip-addr:4000,
and gst-launch on the same machine starts, but the resulting file is empty.
gst-launch-1.0 -v udpsrc uri=rtp://0.0.0.0:4000 ! rtpopusdepay ! oggmux ! filesink location=test.ogg
The GStreamer version is:
$ gst-launch-1.0 --version
gst-launch-1.0 version 1.16.0
GStreamer 1.16.0
I just started this account; I wanted to add this as a comment, but couldn't because of rep limitations.
I'm not experienced with VLC, but I did get your GStreamer pipelines working by adding caps, i.e. the definitions of the RTP stream parameters, before rtpopusdepay.
So instead of:
gst-launch-1.0 -v udpsrc uri=rtp://0.0.0.0:4000 ! rtpopusdepay ! oggmux ! filesink location=test.ogg
you'll need to use:
gst-launch-1.0 udpsrc uri=udp://0.0.0.0:4000 ! application/x-rtp,payload=96,encoding-name=OPUS ! rtpopusdepay ! opusdec ! autoaudiosink
for GStreamer. The mandatory parts are payload and encoding-name; the others you can find with gst-inspect-1.0 rtpopuspay/rtpopusdepay. You might need to change the numbers depending on what you define on the server side and what the defaults are on your machine.
So in conclusion, I got that GStreamer pipeline working by moving the RTP definitions into the caps before rtpopusdepay. As I said, I'm not familiar with VLC, so I don't know how to define those caps there, or whether it even depends on them, but I hope this gives some insight into your work.
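One more thing worth checking: udpsink host=0.0.0.0 is not a routable destination, so a remote phone will never receive those packets. A sketch of a matching sender, where 192.168.1.42 is a placeholder for the receiver's real IP and pt=96 makes the payload number explicit so it matches the receiver's caps:
gst-launch-1.0 -v pulsesrc device="alsa_output.pci-0000_00_1b.0.analog-stereo.monitor" ! opusenc ! rtpopuspay pt=96 ! udpsink host=192.168.1.42 port=4000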

Gstreamer audio problem on embedded linux

I work on embedded Linux. I want to play video with minimum CPU usage, so after I finished compiling, I tried playing video with MPlayer and GStreamer. MPlayer uses 10-20% CPU on average. I want to achieve the same performance with GStreamer, so I tried these commands:
1- gst-launch filesrc location=video_path.mpeg ! mpegdemux ! mpeg2dec ! autovideosink
2- gst-launch-0.10 filesrc location=video_path.mpeg ! dvddemux ! mpegvideoparse ! mpeg2dec ! xvimagesink
These commands use 10-20% CPU on average, which is exactly the figure I want. But audio did not work with either command. I tried adding audio elements but could not get it to work.
I also tried gst-launch-1.0 playbin uri=file:///video_path.mpeg. Audio works with this command, but the CPU usage is too high, so I would rather not use it.
How can I get audio working with commands 1 or 2?
1- gst-launch filesrc location=video_path.mpeg ! mpegdemux ! mpeg2dec
! autovideosink
2- gst-launch-0.10 filesrc location=video_path.mpeg ! dvddemux !
mpegvideoparse ! mpeg2dec ! xvimagesink
With the above two pipelines you are asking GStreamer to play only the video; as a result, you aren't getting any audio.
gst-launch filesrc location=video_path.mpeg ! mpegdemux name=demuxer
demuxer. ! queue ! mpeg2dec ! autovideosink demuxer. ! queue ! mad !
audioconvert ! audioresample ! autoaudiosink
The above pipeline should play both audio and video.
Note: if you have support for hardware decoding, that would reduce CPU usage further.
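To check whether your BSP ships a hardware decoder you could swap in for the software mpeg2dec, listing the installed elements is a quick start (the grep filter is just illustrative):
gst-inspect-0.10 | grep -i mpeg
Substitute any hardware MPEG-2 decoder it lists for mpeg2dec in pipeline 1 or 2.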

gst-launch camera gets wrong color space

I am using the following command to preview the Raspberry Pi camera with a Tinker Board (Tinker OS V2.0.8).
gst-launch-1.0 v4l2src device=/dev/video0 !
video/x-raw,format=NV12,width=640,height=480 ! videoconvert !
autovideosink
But the colour of the image shows green, as below (it is supposed to be white):
So what might the problem be?
Is there any way to adjust the colour balance?
I'm guessing the problem has to do with the format of the output images, NV12, which makes the image look green.
Problem solved:
Based on the tutorial https://tinkerboarding.co.uk/wiki/index.php?title=CSI-camera
on Tinker OS V2.0.8 and later, use the following command to stream video:
gst-launch-1.0 rkcamsrc device=/dev/video0 io-mode=4 isp-mode=2A
tuning-xml-path=/etc/cam_iq/IMX219.xml ! videoconvert !
video/x-raw,format=NV12,width=1800,height=960 ! rkximagesink
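If the colours still come out wrong, it can help to confirm which formats the driver actually exposes before tuning the pipeline further (this assumes the v4l-utils package is installed):
v4l2-ctl --device=/dev/video0 --list-formats-ext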

gstreamer audio error on linux

I am using GStreamer 0.10 on Ubuntu to stream webcam video to an RTMP server. I am getting video output, but there is a problem with the audio. The command below is used for streaming:
gst-launch-0.10 v4l2src ! videoscale method=0 ! video/x-raw-yuv,width=852,height=480,framerate=(fraction)24/1 ! ffmpegcolorspace ! x264enc pass=pass1 threads=0 bitrate=900 tune=zerolatency ! flvmux name=mux ! rtmpsink location='rtmp://..../live/testing' demux. alsasrc device="hw:0,0" ! audioresample ! audio/x-raw-int,rate=48000,channels=2,depth=16 ! pulseaudiosink
By running the above command I got the following error:
gstbaseaudiosrc.c(840): gst_base_audio_src_create (): /GstPipeline:pipeline0/GstAlsaSrc:alsasrc0:
Dropped 13920 samples. This is most likely because downstream can't keep up and is consuming samples too slowly.
So the audio is not audible.
Please help me solve this problem.
Thanks in advance,
Ameeth
I don't understand your pipeline. What is "demux." in the middle?
The problem you are facing is that you have not separated your elements with queues. Put a queue before your sinks and after your sources to give each branch its own thread to run in. That should get rid of the issue.
Since I don't have PulseAudio or an RTMP receiver on my system, I tested the following, and it works:
gst-launch-0.10 v4l2src ! ffmpegcolorspace ! queue ! x264enc pass=pass1 threads=0 bitrate=900000 tune=zerolatency ! queue ! flvmux name=mux ! fakesink alsasrc ! queue ! audioresample ! audioconvert ! queue ! autoaudiosink
You can change it accordingly and use it. The only thing I had to do to make it work and remove the error you are facing was to add the queues.
For me (Logitech C920 on a Raspberry Pi 3 with GStreamer 1.4.4), I was able to get rid of the "Dropped samples" warning by using audioresample to set the sampling rate of the alsasrc to something that flvmux liked. From gst-inspect-1.0 flvmux, it looks like flvmux only supports 5512, 11025, 22050, and 44100 sample rates for x-raw, and 5512, 8000, 11025, 16000, 22050, and 44100 for mp4. Here's my working pipeline:
gst-launch-1.0 -v -e \
uvch264src initial-bitrate=800000 average-bitrate=800000 iframe-period=2000 device=/dev/video0 name=src auto-start=true \
src.vidsrc ! video/x-h264,width=864,height=480,framerate=30/1 ! h264parse ! mux. \
alsasrc device=hw:1 ! 'audio/x-raw, rate=32000, format=S16LE, channels=2' ! queue ! audioresample ! "audio/x-raw,rate=44100" ! queue ! voaacenc bitrate=96000 ! mux. \
flvmux name=mux ! rtmpsink location="rtmp://live-sea.twitch.tv/app/MYSTREAMKEY"
I was surprised that flvmux didn't complain about getting an audio source that was at an unsupported sampling rate. Not sure if that's expected behavior.
