I am a surfer/kitesurfer and I live in the UAE. I'm trying to build a basic weather station that can provide wind and webcam details for a spot in a remote location. I am using a Pi 4 1GB and I am almost ready to install the station on site. My skills are fairly basic, but this is where I am at:
The Pi 4 runs DDNS so its dynamic address is accessible remotely with port forwarding - done
WeeWX uploads wind and weather info from the sensor to Windguru - this is on track and will be done by the end of the week when a final part arrives
motionEye provides the video streams of camera 1 and camera 2 - done and visible from outside the LAN
Apache/MySQL/WordPress provide a basic interface for people to check the info from their browser - almost done.
Now, regarding point 3... I am noticing that this is crippling the Pi. Running nmon I can see that each camera is using about 110% of a CPU core. That is with minimal video streaming settings and a 1 fps rate. With both cameras running, the Pi is almost inaccessible through VNC or SSH and it gets very hot - I need to keep restarting it as it freezes.
I don't need a live stream; I'd be happy with an image every 30 seconds. Even if I disable video streaming and use still image capture, 'motion' still costs 110% CPU per camera just to monitor it. Is there a better piece of software I could be using?
I tried editing /etc/motion/motion.conf (via sudo nano) hoping to reduce the fps that Motion uses when it initialises the device, but it doesn't affect the CPU usage.
Important to note: my cameras are IP cameras, and Motion connects to each device via an rtsp:// URL.
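For reference, the kind of settings I have been changing look roughly like this (the camera URL and values below are placeholders, not my real config), but the CPU stays pegged regardless:

    # Example motion.conf excerpt - values are placeholders only
    # Pull the IP camera over RTSP
    netcam_url rtsp://user:pass@camera-ip:554/stream
    # Limit how many frames per second Motion captures/analyses
    framerate 1
    # Cap the rate of the live MJPEG stream
    stream_maxrate 1
    # No motion-triggered stills; just a snapshot every 30 seconds
    output_pictures off
    snapshot_interval 30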
Would appreciate any suggestions.
Thanks,
Sean.
Try UV4L or RPi Cam Web Interface as alternatives to Motion.
RPi Cam Web Interface is nicely documented at this site:
https://elinux.org/RPi-Cam-Web-Interface
The preview MJPEG stream from RPi Cam Web Interface can be found at the URL http://YourPiIP:Port/cam_pic_new.php
You can set the quality and size using the 'camera control' bar at the bottom of the preview/control page found at http://YourPiIP:80/html/ (change the port to your forwarding port).
There is also a timelapse function that might provide a different route to a 1 fps JPEG stream; I have not tried this.
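If all you actually want is one frame every 30 seconds, you could also just poll that preview URL from a small script and save the JPEGs. A rough Python sketch (the host, port and filenames are placeholders you would need to adjust to your setup):

    import time
    import urllib.request

    # Preview JPEG endpoint mentioned above - replace host and port with yours
    URL = "http://YourPiIP:80/cam_pic_new.php"

    while True:
        # Fetch the current preview frame and save it with a timestamped name
        with urllib.request.urlopen(URL, timeout=10) as resp:
            data = resp.read()
        filename = time.strftime("snapshot-%Y%m%d-%H%M%S.jpg")
        with open(filename, "wb") as f:
            f.write(data)
        time.sleep(30)   # one image every 30 seconds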
I am currently streaming the preview at 1024x720, ~15 fps, compression quality 30% to several devices on my local network, and the Pi 4 is using only about 10% CPU.
Other comments:
Have you tried setting the GPU memory split on the Pi to 1024?
Also, have you tried running 'top' at the Linux prompt to see which processes are using all the CPU? raspimjpeg uses between 2 and 3% on my Pi 4.
Hope this helps, Heath.
I'm currently working on a research project that revolves around getting to know the transfer speeds of BLE in a simple setting. To be specific, I'll be working with an Arduino Nano 33 BLE board. I'm well aware that BLE v5 is capable of reaching speeds of up to 1 Mb/s (megabits per second), but that is unrealistic in real-world applications. Are there any resources from which I can get real-world transfer speeds for BLE? If not, I'm guessing I will have to work with an experimental setup to find the speeds for my specific use case. Thank you in advance!
The Bluetooth radio on the Arduino Nano 33 BLE is built into the nRF52840 MCU.
More information on the radio can be found here:
https://infocenter.nordicsemi.com/topic/ps_nrf52840/radio.html?cp=4_0_0_5_19
There are definitely ways to determine the real throughput of this peripheral; you just need a packet sniffer (link below) or a transmitter/receiver board-to-board setup.
Packet Sniffer:
https://www.nordicsemi.com/Products/Development-tools/nRF-Sniffer-for-Bluetooth-LE
In the board-to-board setup, the relay board sends the number of packets it received over serial to a PC for analysis. You could configure the relay board to report the number of packets received within a certain interval (using the timers in the nRF52840, which offer sub-microsecond precision).
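On the PC side, the analysis can be as simple as reading those counts over serial and converting them into a bit rate. A rough sketch in Python using pyserial (the port name, payload size and reporting interval are assumptions; the relay is assumed to print one packet count per line):

    import serial  # pyserial

    PORT = "/dev/ttyACM0"      # serial port of the relay board (placeholder)
    PAYLOAD_BYTES = 244        # application payload per packet (assumption)
    INTERVAL_S = 1.0           # reporting interval used on the relay (assumption)

    with serial.Serial(PORT, 115200, timeout=2) as ser:
        while True:
            line = ser.readline().decode(errors="ignore").strip()
            if not line.isdigit():
                continue
            packets = int(line)
            throughput_kbps = packets * PAYLOAD_BYTES * 8 / INTERVAL_S / 1000
            print(f"{packets} packets -> {throughput_kbps:.1f} kbit/s")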
If your project changes, you no longer need to use BLE, and you would like to increase throughput to ~1.5 Mbps, I developed and tested a working configuration of the nRF52840's radio on the Arduino Nano 33 BLE. Link below.
https://forum.arduino.cc/t/stream-binary-data-from-arduino-nano-33-ble-to-pc-via-ble/917206
I have a UWP app that captures a live video stream (webcam), encodes it in H.264, and sends it through a TCP socket (on a local network, I need high performance) to a Linux device.
Is there a way to do this? I don't need the video for playback, only to extract single frames. I could do that with OpenCV, but it requires a local video file, whereas I'm working with a live stream.
I would send photos instead of a video stream if the time needed to capture one were acceptable, but it takes about 250 ms.
Is RTP required? Does UWP (Windows) provide a way to achieve this?
Thank you
P.S.: The UWP app runs in Hololens.
You can use WebRTC to transmit live video from the HoloLens easily to any target. That's probably the easiest way to do it without going really low level.
For an introduction, just grab this repo and try the sample app, which runs perfectly on the HoloLens: https://github.com/webrtc-uwp/PeerCC/tree/e95f231e1dc9c248ca2ffa040276b8a1265da145/Client
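Alternatively, if you end up exposing the stream over RTSP/RTP instead of a raw TCP socket (for example via a small ffmpeg or GStreamer relay), the Linux side can pull individual frames with OpenCV without ever writing a file. A minimal sketch in Python; the URL is a placeholder and this assumes OpenCV was built with the FFmpeg backend:

    import cv2

    # RTSP URL of the relayed HoloLens stream (placeholder)
    cap = cv2.VideoCapture("rtsp://hololens-relay.local:8554/live")

    while cap.isOpened():
        ok, frame = cap.read()     # frame is a decoded BGR image (numpy array)
        if not ok:
            break
        # ... process the single frame here ...
        cv2.imwrite("latest_frame.jpg", frame)

    cap.release()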
I want to build a video security infrastructure with Raspberry Pis.
Please take a look at the rough layout I have in mind:
What the system should be capable of:
The RPis need to stream low-latency video to the web server, which displays it to all clients visiting the website.
If a client authenticates, they can control one RPi by sending commands that get translated into GPIO actions (see the sketch after this list).
All RPis should be controllable simultaneously by different clients in real time.
Some kind of scalability (clients + RPis)
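To make the control path concrete, here is a rough sketch (in Python, just for illustration - the same shape applies in Node) of what I imagine running on each RPi. The server URL, message format and pin numbers are made up:

    import asyncio
    import json
    import websockets           # pip install websockets
    import RPi.GPIO as GPIO

    PINS = {"light": 17, "lock": 27}   # command name -> BCM pin (example mapping)

    GPIO.setmode(GPIO.BCM)
    for pin in PINS.values():
        GPIO.setup(pin, GPIO.OUT)

    async def control_loop():
        # Persistent connection to the VPS that relays authenticated client commands
        async with websockets.connect("wss://your-vps.example/rpi/1") as ws:
            async for message in ws:
                cmd = json.loads(message)            # e.g. {"target": "light", "state": 1}
                pin = PINS.get(cmd.get("target"))
                if pin is not None:
                    GPIO.output(pin, bool(cmd.get("state")))

    asyncio.run(control_loop())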
My Questions:
I want to program everything in Node.js. Is that a good idea?
Could WebRTC and Socket.io help me in this project? If not, is there another library that would help me out?
How many clients could a VPS (8 GB RAM, 4 vCores) handle in this setup?
Is it possible to bring the latency down to under 2 seconds, or even lower?
Anything helps! Thanks!
Need Help!
Let's assume I have a robot in a room with a camera on it. I need the video source to be available live on a website. I don't want any latency at all (assuming that I have a good internet connection). Also, if a user presses any keys while on the website, the robot needs to detect them and act accordingly. Now, I can handle all the actions the robot needs to perform once I get the keys. There's a Raspberry Pi on the robot.
What would be the easiest way to achieve bi-directional communication (one direction being video and the other being plain text) between a browser and my robot, keeping the communication as fast as possible?
PS: I tried initiating a Google Hangout and embedding the video, but there's a latency of at least 1 minute.
Simple to do. Get the camera module for the Raspberry Pi from here:
http://www.adafruit.com/products/1367
You could use Motion JPEG (MJPEG) to transmit the video. Follow the instructions here:
http://blog.miguelgrinberg.com/post/stream-video-from-the-raspberry-pi-camera-to-web-browsers-even-on-ios-and-android
Once you have the URL of your video stream, you can display it on a website.
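Because MJPEG is just a multipart stream of JPEGs, browsers can render it directly in an image tag; a minimal example (the host, port and path are placeholders you would replace with whatever the tutorial above gives you):

    <img src="http://raspberrypi.local:5000/video_feed" alt="Robot camera">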
To send commands to the Raspberry Pi, what's the complication? If your Raspberry Pi has an internet connection (it must, for the video stream), you can write a small program to receive commands from your browser; see the sketch below.
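A rough sketch of such a program in Python using Flask (the route, key names and GPIO pins are just examples, not a prescribed API); the website would send one request per keypress:

    from flask import Flask, request
    import RPi.GPIO as GPIO

    app = Flask(__name__)

    # Example mapping of keys sent by the browser to GPIO pins (BCM numbering)
    KEY_TO_PIN = {"w": 17, "s": 27}

    GPIO.setmode(GPIO.BCM)
    for pin in KEY_TO_PIN.values():
        GPIO.setup(pin, GPIO.OUT)

    @app.route("/key", methods=["POST"])
    def key():
        pressed = request.form.get("key", "")
        pin = KEY_TO_PIN.get(pressed)
        if pin is not None:
            GPIO.output(pin, GPIO.HIGH)   # robot-specific action goes here
        return "ok"

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=8000)

On the browser side, a keydown handler can POST the pressed key to that route with fetch() or a simple form.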
I'm developing a video chat-like application using Flash RTMFP and Stratus. So far, I'm having good success. I can build from source, tweak settings, and get video and audio in both directions.
There's one glaring problem I haven't been able to solve, however -- when using a client on a Linux machine, the video received by the other end looks very poor. It's blocky and pixellated, almost as if it's rendering 160x120 in a much larger frame. When sending from a Mac (my other dev machine), the video looks quite good.
I've tried modifying all the settings I can think of -- frame rate, "quality", size, audio settings -- with no discernible improvement. I've tried running it as a local file and from a remote server. The network where I'm working is extremely fast, so that shouldn't be an issue.
Is there anything else I can try? Any suggestions or ideas are greatly appreciated.
Many thanks!
Bad camera or bad camera driver?
Stratus does not change the video encoding; it is simply another variation of the RTMFP protocol for transferring exactly the same compressed stream.
One way you can check whether Stratus indeed plays any role in this is to try to stream the same stuff through Adobe Flash Media Server; the development version is free from adobe.com.
I have built Stratus applications and have not experienced any degradation of video quality compared to a Flash Media Server solution. In fact, when the camera quality is set to 100, you won't notice a difference between the raw camera video and the compressed stream when using loopback mode, apart from a possibly limited frame rate if you specify a bandwidth limit (the three are intimately related - bandwidth, frame rate, quality - as per the documentation for Camera.setQuality and Camera.setMode).