I am trying to write a script in Python or Node.js that can download a file, image, or video at a constant speed. Say the average download rate of my connection is 10 MB/s; I want to dedicate 3 MB/s to that script, and it must download the media at that constant rate.
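A minimal sketch of the throttling idea in Python, assuming the third-party requests library; the URL, file name, and the 3 MB/s target are placeholders. The trick is to compare how long the bytes received so far should have taken at the target rate with how long they actually took, and sleep the difference:

```python
import time
import requests

TARGET_RATE = 3 * 1024 * 1024  # assumed target: 3 MB/s
CHUNK_SIZE = 64 * 1024         # read the response in 64 KB chunks

def download_throttled(url, path, rate=TARGET_RATE):
    start = time.monotonic()
    received = 0
    with requests.get(url, stream=True) as resp:
        resp.raise_for_status()
        with open(path, "wb") as f:
            for chunk in resp.iter_content(chunk_size=CHUNK_SIZE):
                f.write(chunk)
                received += len(chunk)
                # At the target rate, `received` bytes should take this long:
                expected = received / rate
                elapsed = time.monotonic() - start
                if expected > elapsed:
                    # Ahead of schedule, so pause to hold the rate down.
                    time.sleep(expected - elapsed)

download_throttled("https://example.com/video.mp4", "video.mp4")
```

Note this can only cap the rate; if the connection itself drops below 3 MB/s, the download simply runs slower.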
As I said in the title, I need to record my screen from an Electron app.
My needs are:
high quality (720p or 1080p)
minimum size
record audio + screen + mic
low impact on PC hardware while recording
no long finalization wait after the recorder stops
By minimum size I mean about 400 MB at 720p and 700 MB at 1080p for a 3 to 4 hour recording. We could already achieve this with Bandicam and OBS, so it is possible.
I already tried:
the simple MediaStreamRecorder API via RecordRTC.js; it produces huge files, around 1 GB per hour of 720p video
compressing the output video with FFmpeg afterwards; it can take up to an hour for a 3-hour recording
saving every chunk with the 'ondataavailable' event, immediately compressing each chunk with FFmpeg, then appending all the compressed files (also with FFmpeg); there are two problems: first, the differing PTS values, which can be fixed by tuning the compression arguments; second, and this is the main problem, the audio headers are only present in the first chunk, so this approach produces a video that has audio only for the first few seconds
recording the video with FFmpeg itself; end users need to change some settings manually (Stereo Mix), the configuration is too complex, it slows the whole PC down while recording (e.g. FPS drops, even with -threads set to 1), and in some cases it takes a long time to wrap everything up after recording finishes
searching the internet for applications that can be driven from the command line; I couldn't find much. Famous applications like Bandicam and OBS do have command-line arguments, but there are not many to play with, and not being able to set key options leads to other problems
I don't know what else I can do. Please tell me if you know a way or a simple tool that can be driven from the CLI to achieve this, and guide me through it.
I ended up using the portable mode of a high-level third-party application (OBS Studio) and adding it to our final package. I also created a JS file to control the application through its CLI; a sketch of the idea follows below.
This way I could pre-set my options (such as the CRF value, etc.), and now our average output size for a 3:30-hour recording at 1080p is about 700 MB, which is impressive.
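A minimal sketch of the launcher side, shown in Python for illustration (the real controller is a JS file doing the same thing); the install path is a placeholder, while --portable, --startrecording, and --minimize-to-tray are real OBS Studio launch parameters:

```python
import subprocess

# Placeholder: the portable OBS Studio shipped inside our final package.
OBS_DIR = r"C:\our-app\obs-studio\bin\64bit"

def start_recording():
    # OBS must be launched from its bin directory so it finds its data files.
    # --portable makes it use the config shipped next to the binary, where the
    # profile (encoder, CRF, output folder) is already pre-set by us.
    return subprocess.Popen(
        ["obs64.exe", "--portable", "--startrecording", "--minimize-to-tray"],
        cwd=OBS_DIR,
    )

proc = start_recording()
```

Stopping is the fiddly part: close OBS gracefully rather than killing the process, so the recording file is finalized properly.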
I tried pydub's AudioSegment, but it can only load small files. With large files my CPU runs at 100% and the process just gets killed after a couple of minutes.
I don't have any code written yet. The file is an MP3 audiobook downloaded from piratebay; the duration is 8 hours and the file size is around 300 MB. I want to cut it into multiple files of 30-60 minutes each so that I can sync them with Apple Music (which doesn't allow large files).
Pydub doesn't even load it, so I haven't gotten any further.
There is software that can do this, but I'm trying to do it with Python.
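Since pydub decodes the entire file into memory, one way around it is to let FFmpeg split the file with a stream copy, which never decodes the audio at all. A minimal sketch via Python's subprocess, assuming FFmpeg is installed; the 1800-second segment length (30 minutes) and file names are placeholders:

```python
import subprocess

def split_audiobook(src, segment_seconds=1800):
    # -c copy splits on frame boundaries without re-encoding,
    # so an 8-hour MP3 is processed in seconds with minimal memory.
    subprocess.run([
        "ffmpeg", "-i", src,
        "-f", "segment",
        "-segment_time", str(segment_seconds),
        "-c", "copy",
        "part_%03d.mp3",
    ], check=True)

split_audiobook("audiobook.mp3")
```

This still counts as doing it with Python: the script just delegates the heavy lifting to FFmpeg instead of decoding 8 hours of audio in-process.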
I'm creating a Raspberry Pi Zero W security camera and am attempting to integrate motion detection using Node.js. Images are taken with the Pi camera module at 8 megapixels (3280x2464 pixels, roughly 5 MB per image).
On a Pi Zero, resources are limited, so loading an entire image from file into Node.js may limit how fast I can capture and then evaluate large photographs. Surprisingly, I can capture about two 8 MP images per second in a background time-lapse process, and I hope to keep capturing the largest-size images at roughly one per second. One resource that could help with this is extracting the embedded thumbnail from the large image (the thumbnail size is customizable in the raspistill application).
Do you have thoughts on how I could quickly extract the thumbnail from a large image without loading the full image in Node.js? So far I've found a partial answer here. I'm guessing I would manage this through a buffer somehow?
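The key point is that the EXIF thumbnail lives in the APP1 segment within the first few kilobytes of the JPEG, so only the header ever needs to be read, never the multi-megabyte image data. A sketch of the idea in Python using the third-party exifread package (the question targets Node.js; there the same approach means reading the leading bytes of the file into a Buffer and parsing the EXIF segment):

```python
# pip install exifread  (third-party library, used here as an assumption)
import exifread

def extract_thumbnail(jpeg_path, thumb_path):
    # exifread parses only the EXIF header, not the full image data.
    with open(jpeg_path, "rb") as f:
        tags = exifread.process_file(f)
    thumb = tags.get("JPEGThumbnail")  # raw bytes of the embedded thumbnail
    if thumb is None:
        raise ValueError("no embedded thumbnail found")
    with open(thumb_path, "wb") as out:
        out.write(thumb)

extract_thumbnail("capture.jpg", "capture_thumb.jpg")
```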
I am hosting an 18 MB PDF file in an S3 bucket and trying to fetch it, but it takes a long time on a somewhat slow network. I also tried converting the file to HTML and rendering that, but it grows to around 48 MB, which makes the phone hang. I have moved the S3 bucket to the Singapore region to reduce latency and have also tried piping the file through the server. Now the only option I have left is to break the PDF into one image per page and load them on request. Is there anything I am missing that would make the PDF load time bearable?
You have the following options, as you are facing limitations on end users' devices:
Split large PDF files into several parts and allow users to download these parts separately.
Linearize PDF files; this changes how the file is loaded (the first pages can render before the full download completes) but does not decrease its size, so you may still see crashes on end-user devices.
Optimize the PDF file size by re-compressing the images inside.
Render low-resolution JPEG images of the PDF pages (with Ghostscript or ImageMagick), but please do not use JPEG as the main format: JPEG compression is designed for photographic content such as human faces, not for text.
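For that last option, a minimal sketch of rendering pages with Ghostscript through Python's subprocess; the 72 dpi resolution and output names are assumptions to tune against readability:

```python
import subprocess

def pdf_pages_to_jpegs(pdf_path, out_pattern="page-%03d.jpg", dpi=72):
    # Renders one JPEG per page at low resolution; raise dpi if text is too blurry.
    subprocess.run([
        "gs", "-dBATCH", "-dNOPAUSE",
        "-sDEVICE=jpeg",
        f"-r{dpi}",
        f"-sOutputFile={out_pattern}",
        pdf_path,
    ], check=True)

pdf_pages_to_jpegs("document.pdf")
```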
I'm creating a computer vision project that requires me to capture images on a Raspberry Pi and send them over the network to a server for processing. The processing software only accepts pictures, not videos, but for a good user experience, the faster the photos arrive, the better the system's response time. Currently I'm struggling to capture multiple images quickly: I've tried software such as fswebcam, motion, and pygame.camera, and all have a delay of roughly 1 second, resulting in <=1 fps. I would like to increase this to around 10 fps. In my current setup I run a bash script that takes a picture from a USB webcam and saves it to the Pi's disk, and a separate piece of C code transfers the images over UDP sockets. Is there a way to capture frames and save them to disk faster?
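The ~1-second delay in tools like fswebcam typically comes from reopening and re-initializing the camera for every shot; keeping the device open across frames avoids that cost. For reference, the kind of loop I have in mind, sketched with OpenCV (a library I haven't tried yet; the device index, frame count, and JPEG quality are guesses):

```python
import cv2

cap = cv2.VideoCapture(0)  # assumed: first USB webcam
if not cap.isOpened():
    raise RuntimeError("could not open camera")

for frame_no in range(100):  # grab 100 frames, then stop
    ok, frame = cap.read()
    if not ok:
        break
    # Writing JPEGs is often the real bottleneck; lower quality = faster and smaller.
    cv2.imwrite(f"frame_{frame_no:04d}.jpg", frame,
                [cv2.IMWRITE_JPEG_QUALITY, 80])

cap.release()
```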