Moviepy - Output video not playable - python-3.x

I'm using the library moviepy on Linux Mint 18.1.
Specifically, it's moviepy 0.2.3.2 on python 3.5.2
Since I'm getting started, I tried this simple script, which should concatenate two videos one after the other:
import moviepy.editor as mp
video1 = mp.VideoFileClip("short.mp4")
video2 = mp.VideoFileClip("motivation.mp4")
final_video = mp.concatenate_videoclips([video1,video2])
final_video.write_videofile("composition.mp4")
The two videos are short random videos that I downloaded from YouTube. They both play perfectly, both with VLC and the standard video player provided with Linux Mint.
The script runs fine with no errors, with the final message:
[MoviePy] >>>> Building video composition.mp4
[MoviePy] Writing audio in compositionTEMP_MPY_wvf_snd.mp3
100%|██████████████████████████████| 1449/1449 [00:23<00:00, 59.19it/s]
[MoviePy] Done.
[MoviePy] Writing video composition.mp4
100%|██████████████████████████████| 1971/1971 [11:34<00:00, 2.84it/s]
[MoviePy] Done.
[MoviePy] >>>> Video ready: composition.mp4
The file is indeed created, and it also has a size (about 20 MB). However, when I try to play it, nothing happens: it seems to be corrupted. The standard video player even tells me that "there is no video stream to be played".
If I try to do the same with the interactive console, and use final_video.preview(), I get an AttributeError, along with this traceback:
In [5]: final_video.preview()
Exception in thread Thread-417:
Traceback (most recent call last):
File "/usr/lib/python3.5/threading.py", line 914, in _bootstrap_inner
self.run()
File "/usr/lib/python3.5/threading.py", line 862, in run
self._target(*self._args, **self._kwargs)
File "<decorator-gen-211>", line 2, in preview
File "/usr/local/lib/python3.5/dist-packages/moviepy/decorators.py", line 54, in requires_duration
return f(clip, *a, **k)
File "/usr/local/lib/python3.5/dist-packages/moviepy/audio/io/preview.py", line 49, in preview
sndarray = clip.to_soundarray(tt,nbytes=nbytes, quantize=True)
File "<decorator-gen-184>", line 2, in to_soundarray
File "/usr/local/lib/python3.5/dist-packages/moviepy/decorators.py", line 54, in requires_duration
return f(clip, *a, **k)
File "/usr/local/lib/python3.5/dist-packages/moviepy/audio/AudioClip.py", line 107, in to_soundarray
fps = self.fps
AttributeError: 'CompositeAudioClip' object has no attribute 'fps'
and the video seems frozen at the first frame.
I don't have any clue, since everything seems to work fine (except for the preview, which doesn't work because of the error). I tried reinstalling ffmpeg, but with no success: everything is exactly the same. Without any useful error, I can't figure out how to fix this problem. Can anyone help me?
EDIT: What are the 4 magic letters? R-T-F-M! I solved the problem by setting the kwarg method of mp.concatenate_videoclips to compose, since the original videos have different frame sizes.

To hopefully figure out what's going on, I decided to take a more systematic approach, following these steps:
Create a virtual environment with no packages other than moviepy and its dependencies
Use videos from a different source
Try different codecs and/or other different parameters
Dig into the source code of moviepy
Sacrifice a goat to the Angel of Light, the Deceiver, the Father of Lies, the Roaring Lion, Son of Perdition Satan Lucifer
In each case I will use this script (test.py):
import moviepy.editor as mp
video1 = mp.VideoFileClip("short.mp4")
video2 = mp.VideoFileClip("motiv_30.mp4")
final_video = mp.concatenate_videoclips([video1,video2])
final_video.write_videofile("composition.mp4")
with some minor changes, when needed. I'll update this post as I follow the steps.
1. Create a virtual environment
I created a virtual environment using virtualenv, activated it and installed moviepy with pip. This is the output of pip freeze:
decorator==4.0.11
imageio==2.1.2
moviepy==0.2.3.2
numpy==1.13.3
olefile==0.44
Pillow==4.3.0
tqdm==4.11.2
All with python 3.5.2.
After running test.py, the video is created with no apparent problems. However, the video can't be played, either by VLC or by the default video player of Linux Mint 18.1.
Then, I noticed that mp.concatenate_videoclips has the kwarg method, which is by default set to chain. In the documentation, I read that:
- method="compose", if the clips do not have the same
resolution, the final resolution will be such that no clip has
to be resized.
So, I tried to use the kwarg method="compose", since the two videos have different frame sizes and... it worked. I am an idiot. Oh well, no goats for Satan, I suppose. Lesson learned: RTFM
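For reference, the working version of test.py differs only in the concatenation call:
import moviepy.editor as mp

video1 = mp.VideoFileClip("short.mp4")
video2 = mp.VideoFileClip("motiv_30.mp4")
# method="compose" picks an output resolution large enough that no clip
# has to be resized, which is exactly what's needed here since the two
# clips have different frame sizes.
final_video = mp.concatenate_videoclips([video1, video2], method="compose")
final_video.write_videofile("composition.mp4")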

I was having the same trouble.
I found the solution at https://zulko.github.io/moviepy/FAQ.html
It turns out that the video dimensions cannot be odd, like 1080 x 351, so you just have to check whether each dimension is even and, if it isn't, add or subtract one.
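A minimal sketch of that check (my own illustration, assuming moviepy's resize effect is an acceptable way to drop the odd pixel; the filenames are placeholders):
import moviepy.editor as mp

clip = mp.VideoFileClip("input.mp4")  # placeholder filename
w, h = clip.size

# Round each dimension down to the nearest even number
# (adding one instead of subtracting would work just as well).
even_w = w - (w % 2)
even_h = h - (h % 2)

if (even_w, even_h) != (w, h):
    clip = clip.resize(newsize=(even_w, even_h))

clip.write_videofile("output.mp4")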

Related

How to use yolov5 api with flask offline?

I was able to run the Flask app with yolov5 on a PC with an internet connection. I followed the steps mentioned in the yolov5 docs and used this file: yolov5/utils/flask_rest_api/restapi.py.
But I need to achieve the same offline (on a particular PC). Now the issue is, when I am using the following:
model = torch.hub.load("ultralytics/yolov5", "yolov5", force_reload=True)
It tries to download the model from the internet and throws an error:
urllib.error.URLError: <urlopen error [Errno -2] name or service not known>
How can I get the same results offline?
Thanks in advance.
If you want to run detection offline, you need to have the model already downloaded.
So, download the model (for example yolov5s.pt) from https://github.com/ultralytics/yolov5/releases and store it, for example, in yolov5/models.
After that, replace
# model = torch.hub.load("ultralytics/yolov5", "yolov5s", force_reload=True) # force_reload to recache
with
model = torch.hub.load(r'C:\Users\Milan\Projects\yolov5', 'custom', path=r'C:\Users\Milan\Projects\yolov5\models\yolov5s.pt', source='local')
With this line, you can also run detection offline.
Note: When you start the app for the first time with the updated torch.hub.load, it will download the model if it is not already present (so you do not need to download it from https://github.com/ultralytics/yolov5/releases yourself).
There is one more issue involved here. When this code is run on a machine that has no internet connection at all, you may face the following error:
Downloading https://ultralytics.com/assets/Arial.ttf to /home/<local_user>/.config/Ultralytics/Arial.ttf...
Traceback (most recent call last):
File "/home/<local_user>/Py_Prac_WSL/yolov5-flask-master/yolov5/utils/plots.py", line 56, in check_pil_font
return ImageFont.truetype(str(font) if font.exists() else font.name, size)
File "/home/<local_user>/.local/share/virtualenvs/23_Jun-82xb8nrB/lib/python3.8/site-packages/PIL/ImageFont.py", line 836, in truetype
return freetype(font)
File "/home/<local_user>/.local/share/virtualenvs/23_Jun-82xb8nrB/lib/python3.8/site-packages/PIL/ImageFont.py", line 833, in freetype
return FreeTypeFont(font, size, index, encoding, layout_engine)
File "/home/<local_user>/.local/share/virtualenvs/23_Jun-82xb8nrB/lib/python3.8/site-packages/PIL/ImageFont.py", line 193, in __init__
self.font = core.getfont(
OSError: cannot open resource
To overcome this error, you need to manually download the Arial.ttf file from https://ultralytics.com/assets/Arial.ttf and paste it into the following location on Linux:
/home/<your_pc_user>/.config/Ultralytics
On Windows, paste Arial.ttf here:
C:\Windows\Fonts
The first line of the error message mentions the same thing. After this, the code runs smoothly in offline mode.
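If it helps, the font can also be fetched with a few lines of Python on a machine that does have internet access and then copied across to the offline machine; this is only a convenience sketch, not part of the original fix:
import urllib.request
from pathlib import Path

# Target location on Linux, as described above; on Windows the file goes
# into C:\Windows\Fonts instead.
dest = Path.home() / ".config" / "Ultralytics" / "Arial.ttf"
dest.parent.mkdir(parents=True, exist_ok=True)
urllib.request.urlretrieve("https://ultralytics.com/assets/Arial.ttf", str(dest))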
Further, as mentioned at https://docs.ultralytics.com/tutorials/pytorch-hub/, any custom-trained model other than the ones uploaded to the PyTorch model hub can be accessed with this code:
path_hubconfig = 'absolute/path/to/yolov5'
path_trained_model = 'absolute/path/to/best.pt'
model = torch.hub.load(path_hubconfig, 'custom', path=path_trained_model, source='local') # local repo
With this code, object detection is carried out by the locally saved custom-trained model. Once the custom-trained model is saved locally, this piece of code accesses it directly, avoiding any need for an internet connection.
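Once the model is loaded from the local repo, inference works the same way as with a hub-downloaded model; a small usage sketch (the image path is just a placeholder, and the DataFrame output assumes pandas is installed):
# Run detection with the locally loaded model; the image path is a placeholder.
results = model("data/images/zidane.jpg")

results.print()                    # per-class detection summary
boxes = results.pandas().xyxy[0]   # bounding boxes as a pandas DataFrame
print(boxes[["name", "confidence"]])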

LIBUSB_ERROR_BUSY [-6] Sending a basic packet with Python 3.9 using LibUSB1

I am developing some software in Python 3.9 and I am at the point where I have a device connected to my USB port and would like to send a basic packet to test the interface before I proceed. I am using this example to try to get my interface to work. I am not bothered about speed or byte count; I would just like to see any response on the interface (but on reflection I'm wondering if USB speed could be the issue):
import usb1
import usb.util
import os
import sys
import libusb
import usb.core
from usb import util
import math

dev = usb.core.find(idVendor=0x11ac, idProduct=0x317d)

with usb1.USBContext() as context:
    handle = context.openByVendorIDAndProductID(
        0x11ac,
        0x317d,
    )
    handle.claimInterface(0)
    handle.setInterface(0)
    data = bytearray(b"\xf0\x0f" * (int(math.ceil(0xb5db91 / 4.0))))
    handle.controlWrite(0x40, 0xb0, 0xb5A6, 0xdb91, b"")
    handle.bulkWrite(2, data, timeout=5000)
https://github.com/vpelletier/python-libusb1/issues/21
I have had a look in various forums for several days and cannot seem to get an answer that works. Here is the trace; it's worth noting that from time to time this .py file does run without error but does nothing, and I see no traffic travelling to the USB interface.
Can someone please help me configure a working example of how to send a packet to the interface? I have tried various things like detaching the kernel driver, setting the configuration, etc.
For 4 days I struggled with libusb 0.1 and 1.0; after discovering python-libusb1, I changed my wrapper and got a lot more success.
I also see a lot of examples in forums like this one, and I always get the same response. Also, I'm curious as to where it appears that 0x40 is the endpoint (out).
Traceback (most recent call last):
File "/home/jbgilbert/Desktop/Packets/Backend_replace.py", line 16, in <module>
handle.claimInterface(0)
File "/usr/lib/python3/dist-packages/usb1/__init__.py", line 1213, in claimInterface
libusb1.libusb_claim_interface(self.__handle, interface),
File "/usr/lib/python3/dist-packages/usb1/__init__.py", line 133, in mayRaiseUSBError
__raiseUSBError(value)
File "/usr/lib/python3/dist-packages/usb1/__init__.py", line 125, in raiseUSBError
raise __STATUS_TO_EXCEPTION_DICT.get(value, __USBError)(value)
usb1.USBErrorBusy: LIBUSB_ERROR_BUSY [-6]
My device is a laptop; using lsmod reveals all the drivers linked to that particular endpoint. In this case, because of the presence of a webcam, I was unable to write to an available endpoint. Despite disabling the driver I had no luck, and had to try the code on a machine with fewer onboard accessories, which proved more successful.
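Since LIBUSB_ERROR_BUSY generally means another driver (often the kernel's) already owns the interface, one thing worth trying before claimInterface is python-libusb1's kernel-driver auto-detach. This is only a sketch of that idea, not a guaranteed fix (as noted above, detaching did not help in every case); the endpoint and payload mirror the question's code:
import usb1

VENDOR_ID, PRODUCT_ID = 0x11ac, 0x317d

with usb1.USBContext() as context:
    handle = context.openByVendorIDAndProductID(VENDOR_ID, PRODUCT_ID)
    if handle is None:
        raise RuntimeError("device not found or could not be opened")

    # Ask libusb to detach any kernel driver on claim and re-attach on release.
    handle.setAutoDetachKernelDriver(True)
    # Or detach explicitly:
    # if handle.kernelDriverActive(0):
    #     handle.detachKernelDriver(0)

    handle.claimInterface(0)
    try:
        handle.bulkWrite(2, b"\xf0\x0f", timeout=5000)
    finally:
        handle.releaseInterface(0)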

PyTorch Fashion-MNIST (ETL)

I'm new to Deep Learning and PyTorch, so please do bear with me if some questions seem silly or I'm not asking in the correct format.
I was watching this video as part of a PyTorch series on Deep Learning: https://www.youtube.com/watch?v=8n-TGaBZnk4 . This video specifically is about ETL (using Fashion-MNIST dataset).
I have a few questions on the video at 7:05.
Question 1: In the Fashion-MNIST subclass constructor we passed it the argument:
‘root’, where the instructor mentioned: this is the location on disk where the data is located. Sorry, maybe this is a silly question, but is this where the data is located on the source server's disk (from the URL), or is it the path where you want to save the data on your computer locally?
Question 2: Also for the Fashion-MNIST is the 'root' always the same location path: i.e. './data/FashionMNIST'?
Question 3: If the 'root' defines the path where the data is located on the source server, then where would it be downloaded to locally? I checked my 'Downloads' folder (I'm using a Windows 7 laptop) and couldn't find the files there.
Question 4: The video mentioned that we should check whether the data has already been downloaded in subsequent calls (i.e. we pass the argument download=True).
4(a): What's a good approach to do this? Do we put an if statement in place to check for this? Or is there a smarter way of checking for downloaded data?
4(b): Also what does it mean by "subsequent calls"? Does it mean when we need to call the 'FashionMNIST' constructor again for the test_data download?
Question 5: Finally, I tried running the code below (which is the one in the video) on Spyder IDE (Python 3.5):
import torch
import torchvision
import torchvision.transforms as transforms
train_set = torchvision.datasets.FashionMNIST(
    root='./data/FashionMNIST'
    ,train=True
    ,download=True
    ,transform=transforms.Compose([
        transforms.ToTensor()
    ])
)
I got the output:
Traceback (most recent call last):
File "<ipython-input-3-3ac000b9e90a>", line 10, in <module>
transforms.ToTensor()
File "C:\Program Files\Anaconda3\lib\site-packages\torchvision\datasets\mnist.py", line 68, in __init__
self.download()
File "C:\Program Files\Anaconda3\lib\site-packages\torchvision\datasets\mnist.py", line 136, in download
makedir_exist_ok(self.raw_folder)
File "C:\Program Files\Anaconda3\lib\site-packages\torchvision\datasets\utils.py", line 41, in makedir_exist_ok
os.makedirs(dirpath)
File "C:\Program Files\Anaconda3\lib\os.py", line 241, in makedirs
mkdir(name, mode)
FileNotFoundError: [WinError 206] The filename or extension is too long: './data/FashionMNIST\\FashionMNIST\\raw'
Not sure why I got that error at the end. In addition, I ran the code in a Jupyter notebook, as per the video, and it worked fine. But I'm wondering why it throws that error in the Spyder IDE.
Many thanks in advance.
No genuine question is a silly question. Answering the questions one by one:
Ans 1 & 2:
root is the path on your local disk where the data will be saved; you can give any path you like and it will not cause an issue.
Ans 3:
The URLs etc. are defined within the files, and the path is all you need to provide; to look at the URLs from which the data is downloaded, here is a link.
Ans 4: download=True merely gives it permission to download if the data doesn't exist. The downloader will automatically check whether the data already exists; if it does, it will not download again, even if download is set to True. Again, this happens in the background, so you don't have to worry about it.
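For example (my own sketch, not from the original answer): fetching the test split afterwards with the same root simply re-uses the files already on disk, so nothing is downloaded a second time.
import torchvision
import torchvision.transforms as transforms

# A "subsequent call": same root as before, but the test split this time.
# The files from the first call already exist under root, so torchvision
# skips the download even though download=True.
test_set = torchvision.datasets.FashionMNIST(
    root='./data/FashionMNIST',
    train=False,
    download=True,
    transform=transforms.Compose([transforms.ToTensor()]),
)
print(len(test_set))  # 10000 examples in the Fashion-MNIST test split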
Ans 5: The issue isn't exactly a torch issue; it has more to do with how it is being compiled on Windows. The issue is discussed at length here & here.

How to fix Python 3 PyAutoGUI screenshot error? (macOS)

I keep getting an error with any of PyAutoGUI's screenshot taking functions such as:
pyautogui.locateOnScreen('button.png')
pyautogui.pixelMatchesColor(x, y, (r, g, b))
im = pyautogui.screenshot()
The error I get is:
screencapture: cannot write file to intended destination, .screenshot2018-1009_16-43-26-003190.png
Traceback (most recent call last):
File "~/program.py", line 111, in <module>
pyautogui.locateOnScreen('/images/play!.png')
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/pyscreeze/__init__.py", line 265, in locateOnScreen
screenshotIm = screenshot(region=None) # the locateAll() function must handle cropping to return accurate coordinates, so don't pass a region here.
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/pyscreeze/__init__.py", line 331, in _screenshot_osx
im = Image.open(tmpFilename)
File "/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/PIL/Image.py", line 2609, in open
fp = builtins.open(filename, "rb")
FileNotFoundError: [Errno 2] No such file or directory: '.screenshot2018-1009_16-43-26-003190.png'
I don't tell it to (nor want it to) save the new screenshot image to any directory, and it shouldn't. With the pyautogui.screenshot() function I could manually save it to a real directory in my project, but I don't have the option to do that with the other methods. Any idea how to fix this?
What I've tried:
I looked at all the documentation on PyAutoGUI screenshots I could find online
Restarting computer
Downgrading versions for Pillow and pyscreeze
EDIT:
I tried it on another mac and got the same error.
Tried it on windows bootcamp (windows on my mac) and it works fine.
A possible, very hack-ish fix - I don't actually like this answer, but it was a quick and easy fix (done on OS X with Mojave):
PLEASE NOTE: modifying the source code of libraries you don't understand is usually a bad idea, so do so at your own risk! This worked for me; your mileage may vary.
Go to your file (your file path may be different, I just copied this from your error):
/Library/Frameworks/Python.framework/Versions/3.6/lib/python3.6/site-packages/pyscreeze/__init__.py
find the line under the function "_screenshot_osx" that looks like
tmpFilename = '.screenshot%s.png' % (datetime.datetime.now().strftime('%Y-%m%d_%H-%M-%S-%f'))
Copy it and then comment it out; paste the copied line directly below the commented-out original and modify it to something like this:
tmpFilename = r'<your preferred screenshot folder here>/screenshot%s.png' % (datetime.datetime.now().strftime('%Y-%m%d_%H-%M-%S-%f'))
save the changes, and see if it works.
Also note: pyautogui.locateOnScreen can be a bit finicky, so even if this removes your error you still might not get the coordinates you want (it might return None). That might be related to a different issue. To test that part I do this:
import pyautogui
pyautogui.screenshot('testFull.png')
placePos = pyautogui.locateOnScreen('testFull.png')
print(placePos)
Even the cursor blinking can mess this up though, and OS X has translucent user interfaces, so it's kind of annoying to test this perfectly without careful image curation.
I was facing this same issue on MacOS Mojave after changing to Python 3.8.
Here is my solution.
Go to the same file mentioned by @Richard W.
There, together with all your imports, add the following line so the script can find the folder for tmpFilename:
dirname = os.path.dirname(__file__)
Then, replace the line also mentioned above with:
tmpFilename = os.path.join(dirname,r'screenshot%s.png' % (datetime.datetime.now().strftime('%Y-%m%d_%H-%M-%S-%f')))

play mp3 without default player

I'm trying to play an mp3 without using the default player, so I want to try pyglet, but nothing works:
import pyglet
music = pyglet.resource.media('D:/folder/folder/audio.mp3')
music.play()
pyglet.app.run()
I've tried it this way
music = pyglet.resource.media('D:\folder\folder\audio.mp3')
and like this:
music = pyglet.resource.media('D:\\folder\\folder\\audio.mp3')
but I get this error:
Traceback (most recent call last):
File "C:\Users\User\AppData\Roaming\Python\Python35\site-packages\pyglet\resource.py", line 624, in media
location = self._index[name]
KeyError: 'D:\folder\folder\audio.mp3'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "D:/PC/PyCharm_project/0_TEMP.py", line 3, in <module>
music = pyglet.resource.media('D:\folder\folder\audio.mp3')
File "C:\Users\User\AppData\Roaming\Python\Python35\site-packages\pyglet\resource.py", line 634, in media
raise ResourceNotFoundException(name)
pyglet.resource.ResourceNotFoundException: Resource "D:\folder\folder\audio.mp3" was not found on the path. Ensure that the filename has the correct captialisation.
It's because of the way you load the resource.
Try this code for instance:
import pyglet
music = pyglet.media.load(r'D:\folder\folder\audio.mp3')
music.play()
pyglet.app.run()
The way this works is that it loads a file and places it in a pyglet.resource.media container. Due to namespaces and other things, the code you wrote is only allowed to load resources from the working directory. So instead, you use pyglet.media.load, which is able to load the resource you need into the current namespace (note: I might be misusing the word "namespace" here; for lack of a better term without looking at the source code of pyglet, this is the best description I could come up with).
You could experiment by placing the .mp3 in the script's folder and running your code again, but with a relative path:
import pyglet
music = pyglet.resource.media('audio.mp3')
music.play()
pyglet.app.run()
But I'd strongly suggest you use pyglet.media.load() and have a look at the documentation.
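If you do want to stay with pyglet.resource, the documented approach (as far as I know) is to point the resource loader's search path at your folder and reindex before loading; a sketch, assuming audio.mp3 really is in that folder:
import pyglet

# Tell the resource loader where to look, then rebuild its index.
pyglet.resource.path = ['D:/folder/folder']
pyglet.resource.reindex()

music = pyglet.resource.media('audio.mp3')  # now found via the search path
music.play()
pyglet.app.run()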
