Open web video with Node.js

Hi, I'm trying to build a Node.js app on my Raspberry Pi 4 that, after receiving an input (an NFC tag),
opens a URL that contains a video.
E.g. 'http://www.example.com/webapi/content?rfid=05:66:77:66:93:67'.
The problem is that when I try to open the link with VLC or omxplayer it throws an error and doesn't show any video. I tried putting the link into "Open Network Stream" in VLC on Windows and got the same issue, but if I put the link in a browser the video plays correctly. Other videos open without any problem.
Unfortunately I can't post the link to the site for privacy reasons. Any advice on what the problem might be?
Error log on the Raspberry Pi:
[01cf9b80] vlcpulse audio output error: PulseAudio server connection failure: Connection refused
[01d089f0] dbus interface error: Failed to connect to the D-Bus session daemon: Unable to autolaunch a dbus-daemon without a $DISPLAY for X11
[01d089f0] main interface error: no suitable interface module
[01c72b58] main libvlc error: interface "dbus,none" initialization failed
[01d152c0] main interface error: no suitable interface module
[01c72b58] main libvlc error: interface "globalhotkeys,none" initialization failed
[01c72b58] main libvlc: Running vlc with the default interface. Use 'cvlc' to use vlc without interface.
error: XDG_RUNTIME_DIR not set in the environment.
[01d152c0] skins2 interface error: cannot initialize OSFactory
[01d152c0] [cli] lua interface: Listening on host "*console".
VLC media player 3.0.8 Vetinari
Command Line Interface initialized. Type `help' for help.
> [b1002098] mp4 stream error: no moov before mdat and the stream is not seekable
[b1001e58] prefetch stream error: cannot seek (to offset 48)
[b1054990] mmal_codec decoder: VCSM init succeeded: CMA
[mov,mp4,m4a,3gp,3g2,mj2 @ 0xb104e340] stream 0, offset 0x4d3: partial file
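The key line is "no moov before mdat and the stream is not seekable": the MP4's index (moov atom) sits after the media data and the server apparently doesn't allow seeking, so VLC/omxplayer cannot parse the stream, while a browser can buffer the whole file before playing it. A minimal workaround sketch, assuming you can either download the file locally or (if you control the server) remux it; the /tmp path and file names are just placeholders:
curl -L -o /tmp/video.mp4 'http://www.example.com/webapi/content?rfid=05:66:77:66:93:67'
omxplayer /tmp/video.mp4
# or, server-side, move the moov atom to the front so the file can be streamed:
ffmpeg -i input.mp4 -c copy -movflags +faststart output.mp4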

Related

I can't start libcamera on Raspberry pi. Error undefined symbol

I have little knowledge of Linux and have installed libcamera to stream video from the Raspberry Pi camera to VLC (or Kodi, TVHeadend, etc.), but it gives an error and I don't know what is happening.
libcamera-vid -t 0 --width 1920 --height 1080 --codec h264 --inline --listen -o tcp://0.0.0.0:8888
[0:13:03.287872564] [2248] INFO Camera camera_manager.cpp:293 libcamera v0.0.0+3829-dfc6d711
[0:13:03.852263918] [2263] WARN RPI raspberrypi.cpp:1258 Mismatch between Unicam and CamHelper for embedded data usage!
[0:13:03.862314595] [2263] INFO RPI raspberrypi.cpp:1374 Registered camera /base/soc/i2c0mux/i2c@1/imx219@10 to Unicam device /dev/media2 and ISP device /dev/media0
[0:13:03.868779438] [2248] INFO Camera camera.cpp:1035 configuring streams: (0) 1280x960-YUV420
[0:13:03.872784647] [2263] INFO RPI raspberrypi.cpp:761 Sensor: /base/soc/i2c0mux/i2c@1/imx219@10 - Selected sensor format: 1640x1232-SBGGR10_1X10 - Selected unicam format: 1640x1232-pBAA
libcamera-vid: symbol lookup error: /lib/arm-linux-gnueabihf/libcamera_app.so: undefined symbol: _ZNK9libcamera11ControlList8containsERKNS_9ControlIdE
I hope you can give me a hand, please.
It always gives the same error. I've reinstalled everything and it's still exactly the same. I've been all over the net a thousand times and I haven't found the cause or the solution.
Error:
libcamera-vid: symbol lookup error: /lib/arm-linux-gnueabihf/libcamera_app.so: undefined symbol: _ZNK9libcamera11ControlList8containsERKNS_9ControlIdE
My OS is DietPi on a Raspberry Pi 2. The camera itself works perfectly.
Here is the guide I have followed to install libcamera:
https://libcamera.org/getting-started.html#
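The undefined symbol usually means that libcamera-apps (the package providing libcamera-vid and libcamera_app.so) was compiled against a different libcamera than the one now installed, so the two libraries are out of sync. A rough sketch of rebuilding libcamera-apps against the currently installed libcamera, assuming a source build; the repository URL and CMake steps below are the standard Raspberry Pi ones, not something taken from this question:
git clone https://github.com/raspberrypi/libcamera-apps.git
cd libcamera-apps
mkdir -p build && cd build
cmake .. && make -j2
sudo make install && sudo ldconfig   # refresh the linker cache so the new .so is picked up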

Sink to the virtual v4l2 device

I tried an example on Ubuntu 19.04:
gst-launch-1.0 videotestsrc ! v4l2sink device=/dev/video10
But GStreamer fails:
Setting pipeline to PAUSED ...
ERROR: Pipeline doesn't want to pause.
ERROR: from element /GstPipeline:pipeline0/GstV4l2Sink:v4l2sink0: Cannot identify device '/dev/video10'.
Additional debug info:
v4l2_calls.c(609): gst_v4l2_open (): /GstPipeline:pipeline0/GstV4l2Sink:v4l2sink0:
system error: No such file or directory
Setting pipeline to NULL ...
Freeing pipeline ...
Why doesn't it work? I haven't found this in the documentation; do I need to create /dev/video10 somehow?
I did the same for the default device /dev/video1, but it is an input camera device on my laptop:
sudo gst-launch-1.0 videotestsrc ! v4l2sink
Setting pipeline to PAUSED ...
ERROR: Pipeline doesn't want to pause.
ERROR: from element /GstPipeline:pipeline0/GstV4l2Sink:v4l2sink0: Device '/dev/video1' is not a output device.
Additional debug info:
v4l2_calls.c(639): gst_v4l2_open (): /GstPipeline:pipeline0/GstV4l2Sink:v4l2sink0:
Capabilities: 0x4a00000
Setting pipeline to NULL ...
Freeing pipeline ...
Thanks in advance.
The title of your question suggests you would like to write to a virtual video device. v4l2 devices can be either video input or video output devices. Your camera is a video input (capture) device, so directing a v4l2sink (an endpoint of the pipeline) at it will fail.
You can, however, create a virtual output device. What you are looking for is something like the v4l2loopback module. It allows you to create a virtual /dev/video10 device like this:
modprobe v4l2loopback video_nr=10
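After loading the module you can check that the new node advertises the Video Output capability and then point the original pipeline at it; a quick sketch:
v4l2-ctl --all -d /dev/video10 | grep -i output   # should list "Video Output"
gst-launch-1.0 videotestsrc ! v4l2sink device=/dev/video10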
Another possible solution for the same error message: recreate the v4l2loopback interface:
sudo rmmod -f v4l2loopback
sudo modprobe v4l2loopback
This might apply to others experiencing the error message from the original question, but who are already aware they need a v4l2loopback device as a GStreamer sink.
When trying to stream a video to an existing v4l2loopback device that I had previously streamed to using ffmpeg, I got the same error message:
Device '/dev/video0' is not a output device.
Investigation
When comparing the state of a working loopback video device and a non-working one (i.e. after writing to it with ffmpeg) with v4l2-ctl --all -d 0 using diff, I found the following difference:
--- working 2020-11-19 18:03:52.499440518 +0100
+++ non-working 2020-11-19 18:03:57.472802868 +0100
@@ -3,21 +3,18 @@
Card type : GPhoto2 Webcam
Bus info : platform:v4l2loopback-000
Driver version : 5.9.8
- Capabilities : 0x85208002
- Video Output
+ Capabilities : 0x85208000
Video Memory-to-Memory
Read/Write
Streaming
Extended Pix Format
Device Capabilities
- Device Caps : 0x05208002
- Video Output
+ Device Caps : 0x05208000
Video Memory-to-Memory
Read/Write
Streaming
Extended Pix Format
Priority: 0
-Video output: 0 (loopback in)
Format Video Output:
Width/Height : 960/640
Pixel Format : 'YU12' (Planar YUV 4:2:0)
Somehow that "Video Output" capability is required for gstreamer to work successfully and taken away by my previous ffmpeg call.
The behaviour only occured when I loaded the v4l2loopback module with the exclusive_caps=1 option, see 1.
The solution was to unload / load the v4l2loopback kernel commands, forcefully removing the v4l2loopback kernel module and adding it again using rmmod / modprobe (see above).
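Note that module options are not remembered across rmmod, so if the device was originally created with exclusive_caps=1 and a fixed device number, pass the same options again when re-adding it; a sketch reusing the options mentioned above:
sudo rmmod -f v4l2loopback
sudo modprobe v4l2loopback exclusive_caps=1 video_nr=10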

Raspberry Pi camera mmal Error

Recently I have been using my Pi camera. When I run
sudo raspistill -o myImage.jpg
I get the following error:
mmal: Cannot read camera info, keeping the defaults for OV5647
mmal: mmal_vc_component_create: failed to create component 'vc.ril.camera' (1:ENOMEM)
mmal: mmal_component_create_core: could not create component 'vc.ril.camera' (1)
mmal: Failed to create camera component
mmal: main: Failed to create camera component
mmal: Camera is not detected. Please check carefully the camera module is installed correctly
I have enabled the camera, but it still doesn't work.
What should I do?
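The ENOMEM from mmal_vc_component_create often points at the GPU side rather than the ribbon cable. Two hedged checks, assuming a Raspbian-style /boot/config.txt (gpu_mem=128 is only a commonly suggested value, not something from this question):
vcgencmd get_camera                 # should report supported=1 detected=1
grep gpu_mem /boot/config.txt       # if it is very low, try e.g. gpu_mem=128 and reboot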

OpenWebRTC test-uri No decoder available for type audio/mpeg and owr_video_renderer_get_element assertion

I compiled OpenWebRTC, so it builds its own GStreamer. The test-send-receive example worked fine for audio. This is on Ubuntu 15.04.
I'm launching the test-uri test program like this:
export PATH=/opt/openwebrtc-0.3/bin/:$PATH
export LD_LIBRARY_PATH=/opt/openwebrtc-0.3/lib
export GST_PLUGIN_PATH_1_0=/opt/openwebrtc-0.3/lib/gstreamer-1.0/
test-uri file://$HOME/Videos/small.mp4
The file is:
http://techslides.com/demos/sample-videos/small.mp4
The error:
==== Warning message start ====
Warning in element uridecodebin-1.
Warning: No decoder available for type 'audio/mpeg, mpegversion=(int)4, framed=(boolean)true, stream-format=(string)raw, level=(string)2, base-profile=(string)lc, profile=(string)lc, codec_data=(buffer)1188, rate=(int)48000, channels=(int)1'.
Debugging info: gsturidecodebin.c(939): unknown_type_cb (): /GstPipeline:uri-source-agent-1/GstURIDecodeBin:uridecodebin-1
==== Warning message stop ====
Got "video stream, id: 0" source!
**
ERROR:owr_video_renderer.c:314:owr_video_renderer_get_element: assertion failed: (flip)
gst-launch-1.0 plays this file fine, with audio and video, using the GStreamer that comes with the OS, but the one in /opt/openwebrtc-0.3/lib/gstreamer-1.0/ doesn't play audio.
How can I fix at least the audio warning in the GStreamer built by OpenWebRTC, and then that fatal video renderer error?
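The warning suggests the GStreamer bundled under /opt/openwebrtc-0.3 simply has no AAC decoder registered, while the system GStreamer does. One way to confirm that, assuming the OpenWebRTC prefix also ships the GStreamer command-line tools (if it doesn't, the system gst-inspect-1.0 pointed at the same GST_PLUGIN_PATH_1_0 gives a similar picture):
export GST_PLUGIN_PATH_1_0=/opt/openwebrtc-0.3/lib/gstreamer-1.0/
/opt/openwebrtc-0.3/bin/gst-inspect-1.0 2>/dev/null | grep -i 'aac\|faad'
If nothing shows up, building or copying an AAC decoder plugin (e.g. from gst-plugins-bad or gst-libav) into that plugin path is the usual way to get audio back.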

speechRecognition: jack server is not running

I'm setting up a speech recognizer with the SpeechRecognition Python library.
This is my code so far:
#!/usr/bin/env python3
import speech_recognition as sr
r = sr.Recognizer('es-MX')
with sr.Microphone() as mic:
    audio = r.listen(mic)
print(r.recognize(audio))
On running it I get:
ALSA lib pcm_dsnoop.c:618:(snd_pcm_dsnoop_open) unable to open slave
ALSA lib pcm_dmix.c:1022:(snd_pcm_dmix_open) unable to open slave
ALSA lib pcm.c:2239:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.rear
ALSA lib pcm.c:2239:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.center_l$
ALSA lib pcm.c:2239:(snd_pcm_open_noupdate) Unknown PCM cards.pcm.side
bt_audio_service_open: connect() failed: Connection refused (111)
bt_audio_service_open: connect() failed: Connection refused (111)
bt_audio_service_open: connect() failed: Connection refused (111)
bt_audio_service_open: connect() failed: Connection refused (111)
ALSA lib pcm_dmix.c:1022:(snd_pcm_dmix_open) unable to open slave
Cannot connect to server socket err = No such file or directory
Cannot connect to server request channel
jack server is not running or cannot be started
I'm using SpeechRecognition version 1.3.1 on Linux LXLE 14.04 x64 with Python 3.4.
It's telling you that it cannot record audio on your device. The problem is not really about the JACK server; it also fails to open the ALSA devices and the Bluetooth audio service. Make sure that audio is properly set up on your device (a quick check is sketched after the links below). See also:
PyAudio does not work and bricks sound on ubuntu
PyAudio working, but spits out error messages each time
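Before changing any code it may be worth confirming that ALSA can record from a microphone at all; a quick sketch using the standard ALSA tools (device names will differ on your machine):
arecord -l                        # list capture devices
arecord -d 3 -f cd /tmp/test.wav  # record 3 seconds from the default device
aplay /tmp/test.wav               # play it back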
I was having this same error. If you would like a workaround, you can use the code I wrote. I used the sounddevice library to record audio: while there is audio it saves it into a file, then converts it to text using the SpeechRecognition library. The error comes when Microphone is called, as it uses PyAudio.
https://shepai.github.io/code/PetSHEP/soundLib.py
The following lines should replace your "with microphone" bit:
with sr.AudioFile(filename) as source:
    Audio = r.record(source)
This is how I got round the problem, hope it helps :)
