Matplotlib animation error "Requested MovieWriter (ffmpeg) not available" - python-3.x

I'm trying to run an animation using matplotlib's FuncAnimation, and I keep running into the error
"Requested MovieWriter (ffmpeg) not available". I realize this question has been asked before; I have looked at every response to it and none of them has worked.
I'm running a Jupyter notebook on Windows 10.
I've got the following code.
from matplotlib.animation import FuncAnimation
import matplotlib.pyplot as plt

# nice_axes, df_rank_expanded, df_expanded, colors and labels are defined
# earlier in the notebook
def init():
    ax.clear()
    nice_axes(ax)
    ax.set_ylim(.2, 6.8)

def update(i):
    for bar in ax.containers:
        bar.remove()
    y = df_rank_expanded.iloc[i]
    width = df_expanded.iloc[i]
    ax.barh(y=y, width=width, color=colors, tick_label=labels)
    date_str = df_expanded.index[i].strftime('%B %-d, %Y')
    ax.set_title(f'Racial Unemployment - {date_str}', fontsize='smaller')

fig = plt.Figure(figsize=(4, 2.5), dpi=144)
ax = fig.add_subplot()
anim = FuncAnimation(fig=fig, func=update, init_func=init, frames=len(df_expanded),
                     interval=100, repeat=False)
When I run
from IPython.display import HTML
HTML(anim.to_html5_video())
I get the error RuntimeError: Requested MovieWriter (ffmpeg) not available
Here is what I've tried.
1) installing FFmpeg on my system and setting the path value. I followed the instructions here: https://www.wikihow.com/Install-FFmpeg-on-Windows
I verified FFmpeg was installed by typing ffmpeg -version in a cmd window.
2) conda install -c conda-forge ffmpeg
This still results in the ffmpeg not available error.
3) I've followed the instructions here:
Matplotlib-Animation "No MovieWriters Available", which just say to do 1 and 2 above.
4) Here: Stop Matplotlib Jupyter notebooks from showing plot with animation,
which suggests using
HTML(anim.to_jshtml())
However, this gives me an invalid format string error for date_str = df_expanded.index[i].strftime('%B %-d, %Y') (most likely because %-d is a POSIX/glibc-only strftime flag; on Windows the equivalent is %#d).
5) I've set the path variable directly in the Jupyter notebook
plt.rcParams['animation.ffmpeg_path'] = 'C:\FFmpeg\ffmpeg-20200610-9dfb19b-win64-static\bin\ffmpeg.exe'
6) Restarting my kernel
7) Rebooting my system
8) Breaking my computer into small pieces, grinding those through an industrial shredder, burning the pieces, salting the earth where they lay, and then getting an entirely new computer and trying everything all over.
So far, nothing has worked. When I run the sample code at http://louistiao.me/posts/notebooks/embedding-matplotlib-animations-in-jupyter-as-interactive-javascript-widgets/ I can get it to work, but only with their code.
I cannot get my own code to work. Any help would be much appreciated. Thanks!

plt.rcParams['animation.ffmpeg_path'] = 'C:\FFmpeg\ffmpeg-20200610-9dfb19b-win64-static\bin\ffmpeg.exe'
Here you have to add an r in front of 'C:~~~', making it a raw string: r'C:~~~'
plt.rcParams['animation.ffmpeg_path'] = r'C:\FFmpeg\ffmpeg-20200610-9dfb19b-win64-static\bin\ffmpeg.exe'
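Without the r prefix, Python interprets the backslashes as escape sequences (\b in \bin, for example, becomes a backspace character), so matplotlib receives a mangled path. Doubling the backslashes is an equivalent fix:
plt.rcParams['animation.ffmpeg_path'] = 'C:\\FFmpeg\\ffmpeg-20200610-9dfb19b-win64-static\\bin\\ffmpeg.exe'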

I don't know if you have tried this, but for me it worked when I assigned the video to a variable before calling HTML:
from IPython.display import HTML
html = anim.to_html5_video()
HTML(html)

Related

Alternative methods for deprecated gym.wrappers.Monitor()

The example is from Chapter 8 of Grokking Deep Reinforcement Learning, written by Miguel Morales.
The wrappers.Monitor wrapper was deprecated after the book was published. The code in question is as below:
env = wrappers.Monitor(
    env, mdir, force=True,
    mode=monitor_mode,
    video_callable=lambda e_idx: record) if monitor_mode else env
I searched the internet and tried two methods, but both failed.
1- gnwrapper.Monitor
I ran pip install gym-notebook-wrapper, imported gnwrapper, and rewrote the code:
env = gnwrapper.Monitor(gym.make(env_name), directory="./")
A [WinError 2] The system cannot find the file specified error message is returned.
2- gym.wrappers.RecordVideo
I imported it with from gym.wrappers import RecordVideo and rewrote the code:
env = RecordVideo(gym.make(env_name), "./")
An AttributeError: 'CartPoleEnv' object has no attribute 'videos' error message is returned.
Is there any way to solve the problem?
After some experiments, the best way to work around the problem is to downgrade the gym version to 0.20.0, in order to preserve the wrappers.Monitor function.
Moreover, subprocess.Popen and subprocess.check_output do not work as the original author suggested, so I use moviepy (please pip install moviepy if you do not have this library, and put from moviepy.editor import VideoFileClip at the very beginning) to convert the MP4 files to GIFs. The code in get_gif_html is amended as follows:
gif_path = basename + '.gif'
if not os.path.exists(gif_path):
    videoClip = VideoFileClip(video_path)
    videoClip.write_gif(gif_path, logger=None)
gif = io.open(gif_path, 'r+b').read()
The program now completely works.
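For reference, a minimal self-contained sketch of that conversion; video_path is a hypothetical input file name:
import io
import os
from moviepy.editor import VideoFileClip

video_path = 'episode.mp4'                      # hypothetical recorded video
gif_path = os.path.splitext(video_path)[0] + '.gif'
if not os.path.exists(gif_path):
    videoClip = VideoFileClip(video_path)
    videoClip.write_gif(gif_path, logger=None)  # logger=None silences the progress bar
gif = io.open(gif_path, 'r+b').read()           # raw GIF bytes, ready for embedding in HTML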
Edit (24 Jul 2022):
The following solution is for gym version 0.25.0, and I hope the program keeps working for later versions. Windows 10 and Jupyter Notebook are used for the demonstration.
(1) Keep the moviepy improvement stated above.
(2) Import RecordVideo with from gym.wrappers.record_video import RecordVideo and substitute
env = wrappers.Monitor(
    env, mdir, force=True,
    mode=monitor_mode,
    video_callable=lambda e_idx: record) if monitor_mode else env
with this
if monitor_mode:
    env = RecordVideo(env, mdir, episode_trigger=lambda e_idx: record)
    env.reset()
    env.start_video_recorder()
else:
    env = env
The RecordVideo wrapper takes (env, video_folder, episode_trigger) arguments; video_callable and episode_trigger are the same thing.
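For instance, to record only every 100th episode rather than using the record flag, a hypothetical trigger could look like this:
env = RecordVideo(gym.make(env_name), "./videos",
                  episode_trigger=lambda e_idx: e_idx % 100 == 0)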
(3) Work around the loss of env.videos, used by the get_gif_html function, in new versions.
In the old setting, after the video is recorded, the MP4 and meta.json files are stored in the following address with a random file name
C:\Users\xxx\AppData\Local\Temp
https://i.stack.imgur.com/Za3Sd.jpg
The env.videos attribute returned a list of tuples, each consisting of the path of an MP4 file and the path of its meta.json.
I tried to recreate env.videos by
(a) declaring mdir as a global variable:
def get_make_env_fn(**kargs):
    def make_env_fn(env_name, seed=None, render=None, record=False,
                    unwrapped=False, monitor_mode=None,
                    inner_wrappers=None, outer_wrappers=None):
        global mdir
        mdir = tempfile.mkdtemp()
(b) in the demo_progression and demo_last functions, grouping all MP4 and meta.json files, zipping them, and using a list comprehension to build a new env_videos variable (as opposed to env.videos):
env.close()
mp4_files = [(mdir + '\\' + file) for file in os.listdir(mdir) if file.endswith('.mp4')]
meta_json_files = [(mdir + '\\' + file) for file in os.listdir(mdir) if file.endswith('.meta.json')]
env_videos = [(mp4, meta_json) for mp4, meta_json in zip(mp4_files, meta_json_files)]
data = get_gif_html(env_videos=env_videos,
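A slightly more portable sketch of the same grouping, using os.path.join so it is not tied to Windows path separators:
mp4_files = sorted(os.path.join(mdir, f) for f in os.listdir(mdir) if f.endswith('.mp4'))
meta_json_files = sorted(os.path.join(mdir, f) for f in os.listdir(mdir) if f.endswith('.meta.json'))
env_videos = list(zip(mp4_files, meta_json_files))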
These are all the changes.

Cannot import name 'Instalysis' from 'instagramy'

I cannot figure out what this error requires... any ideas for a Python newbie? All prerequisites are installed... this is version 3.9, 64-bit.
Details: "ADO.NET: Python script error.
ImportError: cannot import name 'Instalysis' from 'instagramy' (C:\Python\Python39\lib\site-packages\instagramy\__init__.py)
"
Here's the test script I'm running:
from instagramy import Instalysis

# Instagram user_id of IPL teams
teams = ["chennaiipl", "mumbaiindians",
         "royalchallengersbangalore", "kkriders",
         "delhicapitals", "sunrisershyd",
         "kxipofficial"]

data = Instalysis(teams)

# return the dataframe
data_frame = data.analyis()
data_frame
The instagramy package doesn't have an 'Instalysis' class, but it does have this:
from instagramy import InstagramUser
This will give you all of the info. I also saw that you had parentheses in data_frame = data.analyis(); you will receive an error if you keep them, as analyis isn't a callable function.
Go have a look at the PyPI page here: PyPI - Instagramy
Hope this helps!
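For example, a minimal sketch looping over the team accounts with InstagramUser; the attribute name below is an assumption based on the PyPI readme and may differ between instagramy versions:
from instagramy import InstagramUser

teams = ["chennaiipl", "mumbaiindians", "royalchallengersbangalore"]
for team in teams:
    user = InstagramUser(team)     # scrapes the public profile page
    print(team, user.is_verified)  # is_verified: assumed attribute from the readme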

decodestrings is not an attribute of base64 error in python 3.9.1

After upgrading from Python 3.8.0 to Python 3.9.1, the tremc front-end of the Transmission BitTorrent client throws a "decodestring is not an attribute of base64" error whenever I click on a torrent entry to check the details.
My system specs:
OS: Arch Linux
kernel: 5.6.11-clear-linux
base64.encodestring() and base64.decodestring(), aliases deprecated since Python 3.1, were removed in Python 3.9.
Use base64.encodebytes() and base64.decodebytes() instead.
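A quick illustration of the replacement aliases:
import base64

encoded = base64.encodebytes(b'hello')   # was base64.encodestring before Python 3.9
decoded = base64.decodebytes(encoded)    # was base64.decodestring
assert decoded == b'hello'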
So I went to the site-packages directory and searched for the string decodestring with ripgrep:
rg decodestring
paramiko/py3compat.py
39: decodebytes = base64.decodestring
Upon examining the py3compat.py file, I found this block:
PY2 = sys.version_info[0] < 3

if PY2:
    string_types = basestring  # NOQA
    text_type = unicode  # NOQA
    bytes_types = str
    bytes = str
    integer_types = (int, long)  # NOQA
    long = long  # NOQA
    input = raw_input  # NOQA
    decodebytes = base64.decodestring
    encodebytes = base64.encodestring
So decodebytes has replaced (aliased) the decodestring attribute of base64 for Python versions >= 3.
This removal must be recent, because tremc was working fine up until Python 3.8.*.
I opened the tremc script, found the erring line (line 441), and just replaced the attribute decodestring with decodebytes. A quick fix until the next update.
PS: I checked the GitHub repository, and there's a pull request for it waiting.
If you don't want to wait for the next release and also don't want to hack it the way I did, you can build it fresh from the repository, though that wouldn't be much different from my method.

is it possible to get exactly the same results from tensorflow mfcc and librosa mfcc?

I'm trying to make TensorFlow's mfcc give me the same results as librosa's mfcc.
I have tried to match all the default parameters that librosa uses
in my TensorFlow code and still got a different result.
This is the TensorFlow code I used:
sample_rate = 16000
waveform = contrib_audio.decode_wav(
    audio_binary,
    desired_channels=1,
    desired_samples=sample_rate,
    name='decoded_sample_data')
transwav = tf.transpose(waveform[0])
stfts = tf.contrib.signal.stft(transwav,
                               frame_length=2048,
                               frame_step=512,
                               fft_length=2048,
                               window_fn=functools.partial(tf.contrib.signal.hann_window,
                                                           periodic=False),
                               pad_end=True)
spectrograms = tf.abs(stfts)
num_spectrogram_bins = stfts.shape[-1].value
lower_edge_hertz, upper_edge_hertz, num_mel_bins = 0.0, 8000.0, 128
linear_to_mel_weight_matrix = tf.contrib.signal.linear_to_mel_weight_matrix(
    num_mel_bins, num_spectrogram_bins, sample_rate, lower_edge_hertz,
    upper_edge_hertz)
mel_spectrograms = tf.tensordot(
    spectrograms,
    linear_to_mel_weight_matrix, 1)
mel_spectrograms.set_shape(spectrograms.shape[:-1].concatenate(
    linear_to_mel_weight_matrix.shape[-1:]))
log_mel_spectrograms = tf.log(mel_spectrograms + 1e-6)
mfccs = tf.contrib.signal.mfccs_from_log_mel_spectrograms(
    log_mel_spectrograms)[..., :20]
the equivalent in librosa:
libr_mfcc = librosa.feature.mfcc(wav, 16000)
[Graphs comparing the TensorFlow and librosa results omitted.]
I'm the author of tf.signal. Sorry for not seeing this post sooner, but you can get librosa and tf.signal.stft to match if you center-pad the signal before passing it to tf.signal.stft. See this GitHub issue for more details.
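For illustration, a sketch of that idea: librosa's center=True pads the signal with n_fft // 2 reflected samples on each side, so applying the same padding before the TF stft should align the frames (the signal tensor below is a stand-in for a real waveform):
import tensorflow as tf

n_fft = 2048
signal = tf.random.normal([16000])                   # stand-in for a real waveform
padded = tf.pad(signal, [[n_fft // 2, n_fft // 2]],  # mimic librosa's center=True
                mode='REFLECT')
stfts = tf.signal.stft(padded, frame_length=n_fft,
                       frame_step=512, fft_length=n_fft)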
I spent a whole day trying to make them match. Even rryan's solution (center=False in librosa) didn't work for me, but I finally found out that the TF and librosa STFTs match only when win_length == n_fft in librosa and frame_length == fft_length in TF. That's why rryan's Colab example works; if you set frame_length != fft_length, the amplitudes are very different (although visually, after plotting, the patterns look similar). A typical example: if you choose some win_length/frame_length and then want to set n_fft/fft_length to the smallest power of 2 greater than win_length/frame_length, the results will differ. So you need to stick with the inefficient FFT given by your window size... I don't know why it is so, but that's how it is. Hopefully it will be helpful for someone.
The output of contrib_audio.decode_wav should be a DecodeWav tuple of { audio, sample_rate }, and audio has shape (sample_rate, 1), so what is the purpose of taking the first item of waveform and transposing it?
transwav = tf.transpose(waveform[0])
There is no straightforward way, since librosa's stft uses center=True, which does not comply with tf's stft.
Had it been center=False, the tf/librosa stft would give near enough results; see the Colab snippet.
But even so, trying to port the librosa code into tf is a big headache. Here is what I started and gave up on. Near, but not near enough.
def pow2db_tf(X):
    amin = 1e-10
    top_db = 80.0
    ref_value = 1.0
    log10 = 2.302585092994046
    log_spec = (10.0 / log10) * tf.log(tf.maximum(amin, X))
    log_spec -= (10.0 / log10) * tf.log(tf.maximum(amin, ref_value))
    pow2db = tf.maximum(log_spec, tf.reduce_max(log_spec) - top_db)
    return pow2db

def librosa_feature_like_tf(x, sr=16000, n_fft=2048, hop_length=512, n_mfcc=20):
    # hop_length was undefined in the original snippet; 512 is librosa's default
    mel_basis = librosa.filters.mel(sr, n_fft).astype(np.float32)
    mel_basis = mel_basis.reshape(1, int(n_fft / 2 + 1), -1)
    tf_stft = tf.contrib.signal.stft(x, frame_length=n_fft, frame_step=hop_length, fft_length=n_fft)
    print("tf_stft", tf_stft.shape)
    tf_S = tf.matmul(tf.abs(tf_stft), mel_basis)
    print("tf_S", tf_S.shape)
    tfdct = tf.spectral.dct(pow2db_tf(tf_S), norm='ortho')
    print("tfdct before cut", tfdct.shape)
    tfdct = tfdct[:, :, :n_mfcc]
    print("tfdct after cut", tfdct.shape)
    # tfdct = tf.transpose(tfdct, [0, 2, 1]); print("tfdct after transpose", tfdct.shape)
    return tfdct

x = tf.placeholder(tf.float32, shape=[None, 16000], name='x')
tf_feature = librosa_feature_like_tf(x)
print("tf_feature", tf_feature.shape)

mfcc_rosa = librosa.feature.mfcc(wav, sr).T
print("mfcc_rosa", mfcc_rosa.shape)
For anyone still looking for this: I had a similar problem some time ago, matching librosa's mel filterbanks/mel spectrogram to a TensorFlow implementation. The solution was to use a different windowing approach for the spectrogram and to use librosa's mel matrix as a constant tensor. See here and here.

fipy viewer not plotting

from fipy import *

nx = 50
dx = 1.
mesh = Grid1D(nx=nx, dx=dx)
phi = CellVariable(name="solution variable",
                   mesh=mesh,
                   value=0.)
D = 1.
valueLeft = 1
valueRight = 0
phi.constrain(valueRight, mesh.facesRight)
phi.constrain(valueLeft, mesh.facesLeft)
eqX = TransientTerm() == ExplicitDiffusionTerm(coeff=D)
timeStepDuration = 0.9 * dx**2 / (2 * D)
steps = 100
phiAnalytical = CellVariable(name="analytical value",
                             mesh=mesh)
viewer = Viewer(vars=(phi, phiAnalytical),
                datamin=0., datamax=1.)
viewer.plot()
x = mesh.cellCenters[0]
t = timeStepDuration * steps
try:
    from scipy.special import erf
    phiAnalytical.setValue(1 - erf(x / (2 * numerix.sqrt(D * t))))
except ImportError:
    print("The SciPy library is not available to test the solution to "
          "the transient diffusion equation")
for step in range(steps):
    eqX.solve(var=phi,
              dt=timeStepDuration)
    viewer.plot()
I am trying to implement an example from the FiPy examples list, the 1D diffusion problem, but I am not able to view the result as a plot.
I have defined the viewer as suggested in the code for the example. Still no plot.
The solution vector runs fine,
but I am not able to plot using the viewer. Can anyone help? Thank you!
Your code works fine on my computer. It is probably a problem with a plotting library used by FiPy.
Check if the original example works (just run this from the fipy folder)
python examples/diffusion/mesh1D.py
If not, download FiPy from the GitHub page of the project, https://github.com/usnistgov/fipy, and try the example again. If it still fails, check that the plotting libraries are correctly installed.
In any case, you should specify the platform you are using and the errors you are getting. The first time, I had some problems with the plotting libraries too.
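If the issue is FiPy picking an unavailable plotting backend, you can try forcing one explicitly. As far as I know, FiPy honors a FIPY_VIEWER environment variable; the sketch below assumes Matplotlib is installed:
import os
os.environ['FIPY_VIEWER'] = 'matplotlib'  # set before importing fipy

from fipy import *  # the Viewer factory should now pick the Matplotlib viewer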
