FiPy viewer not plotting

from fipy import *

nx = 50
dx = 1.
mesh = Grid1D(nx=nx, dx=dx)
phi = CellVariable(name="solution variable",
                   mesh=mesh,
                   value=0.)
D = 1.
valueLeft = 1
valueRight = 0
phi.constrain(valueRight, mesh.facesRight)
phi.constrain(valueLeft, mesh.facesLeft)
eqX = TransientTerm() == ExplicitDiffusionTerm(coeff=D)
timeStepDuration = 0.9 * dx**2 / (2 * D)
steps = 100
phiAnalytical = CellVariable(name="analytical value",
                             mesh=mesh)
viewer = Viewer(vars=(phi, phiAnalytical),
                datamin=0., datamax=1.)
viewer.plot()
x = mesh.cellCenters[0]
t = timeStepDuration * steps
try:
    from scipy.special import erf
    phiAnalytical.setValue(1 - erf(x / (2 * numerix.sqrt(D * t))))
except ImportError:
    print("The SciPy library is not available to test the solution to "
          "the transient diffusion equation")
for step in range(steps):
    eqX.solve(var=phi,
              dt=timeStepDuration)
    viewer.plot()
I am trying to implement an example from the FiPy examples list, the 1D diffusion problem, but I am not able to view the result as a plot.
I have defined the viewer exactly as suggested in the example code, but it still doesn't help.
The solution vector runs fine, but I am not able to plot it using the viewer. Can anyone help? Thank you!

Your code works fine on my computer. The problem is probably with the plotting library used by FiPy.
Check whether the original example works (just run this from the FiPy folder):
python examples/diffusion/mesh1D.py
If it doesn't, download FiPy from the project's GitHub page, https://github.com/usnistgov/fipy, and try the example again. If it still fails, check that the plotting libraries are installed correctly.
In any case, you should specify the platform you are using and the errors you are getting. The first time I used FiPy I had some problems with the plotting libraries too.
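If you want to narrow it down, a quick check (just a sketch, assuming Matplotlib is installed) is to confirm that Matplotlib itself can open a window, and then ask FiPy for its Matplotlib-based viewer explicitly instead of the generic Viewer factory:
import matplotlib.pyplot as plt
# if this does not open a window, the problem is your Matplotlib backend, not FiPy
plt.plot([0, 1], [0, 1])
plt.show()

from fipy import MatplotlibViewer  # also importable from fipy.viewers
viewer = MatplotlibViewer(vars=(phi, phiAnalytical), datamin=0., datamax=1.)
viewer.plot()
input("Press <return> to close the window")  # keeps the plot open when run as a script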

Related

Is there any way to switch ImageJ macro code to Python 3 code?

I'm making an app in Python 3 and I want to use some functions from ImageJ. I used the macro recorder to convert them to Python code, but it got really messy and now I don't know what to do next. Can someone help me, please?
Here is the macro recorder code, followed by my code so far.
imp = IJ.openImage("D:/data/data_classify/data_train/1/9700TEST.6.tiff40737183_2.jpg");
//IJ.setTool("line");
//IJ.setTool("polyline");
xpoints = [177,155,114,101,100,159,179];
ypoints = [82,94,109,121,133,163,173];
imp.setRoi(new PolygonRoi(xpoints,ypoints,Roi.POLYLINE));
IJ.run(imp, "Straighten...", "title=9700TEST.6.tiff40737183_2-1.jpg line=30");
My Python 3 code:
import imagej
from scyjava import jimport
ij = imagej.init('2.5.0', mode='interactive')
print(ij.getVersion())
imp = ij.IJ.openImage("D:/data/data_classify/data_train/1/9700TEST.6.tiff40737183_2.jpg")
xpoints = [177,155,114,101,100,159,179]
xpoints_int = ij.py.to_java(xpoints)
ypoints = [82,94,109,121,133,163,173]
ypoints_int = ij.py.to_java(ypoints)
straightener = jimport('ij.plugin.Straightener')
polyRoi = jimport('ij.gui.PolygonRoi')
and I don't know what to do next...
After a few days, I finally found the answer. It is important to understand the parameters of the function you want to call; I referenced:
https://pyimagej.readthedocs.io/en/latest/
https://imagej.nih.gov/ij/developer/api/ij/module-summary.html
In my example, the next thing I needed was a PolygonRoi built from the given coordinates. I found the constructor signature on the site above and decided to use PolygonRoi(int[] xPoints, int[] yPoints, int nPoints, int type).
Next, I found a way to convert my list of coordinates to an int[], which is shown in the PyImageJ tutorial.
For the type argument, you can check the value with print(int(roi.POLYLINE)), which prints 6; the reason is explained in the Roi section of the API docs above.
Finally, pass that PolygonRoi to the Straightener plugin with the line width you want.
Here is my code for driving the ImageJ macro steps from Python 3:
import imagej
from scyjava import jimport
from jpype import JArray, JInt

ij = imagej.init('2.5.0', mode='interactive')
print(ij.getVersion())

imp = ij.IJ.openImage("D:/AI lab/joint_detection/data/1/9700TEST.6.tiff133328134_1.jpg")

# coordinates of the polyline, converted to Java int[] arrays
xpoints = [124,126,131,137,131,128,121,114]
xpoints_int = JArray(JInt)(xpoints)
ypoints = [44,63,105,128,148,172,194,206]
ypoints_int = JArray(JInt)(ypoints)

straightener = jimport('ij.plugin.Straightener')
polyRoi = jimport('ij.gui.PolygonRoi')
roi = jimport('ij.gui.Roi')

# PolygonRoi(int[] xPoints, int[] yPoints, int nPoints, int type); Roi.POLYLINE == 6
new_polyRoi = polyRoi(xpoints_int, ypoints_int, len(xpoints), int(roi.POLYLINE))
imp.setRoi(new_polyRoi)

straightened_img = ij.IJ.run(imp, "Straighten...", "title=9700TEST.6.tiff40737183_2-1.jpg line=30")
ij.IJ.saveAs(straightened_img, 'Jpeg', './test.jpg')

Matplotlib animation error "Requested MovieWriter (ffmpeg) not available"

I'm trying to run an animation using matplotlib FuncAnimation and I keep running into the error
"Requested MovieWriter (ffmpeg) not available". I realize this question has been asked before, and I have looked at every response to this and none have worked.
I'm running a Jupyter notebook on Windows 10.
I've got the following code.
import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation

def init():
    ax.clear()
    nice_axes(ax)
    ax.set_ylim(.2, 6.8)

def update(i):
    for bar in ax.containers:
        bar.remove()
    y = df_rank_expanded.iloc[i]
    width = df_expanded.iloc[i]
    ax.barh(y=y, width=width, color=colors, tick_label=labels)
    date_str = df_expanded.index[i].strftime('%B %-d, %Y')
    ax.set_title(f'Racial Unemployment - {date_str}', fontsize='smaller')

fig = plt.Figure(figsize=(4, 2.5), dpi=144)
ax = fig.add_subplot()
anim = FuncAnimation(fig=fig, func=update, init_func=init, frames=len(df_expanded),
                     interval=100, repeat=False)
When I run
from IPython.display import HTML
HTML(anim.to_html5_video())
I get the error RuntimeError: Requested MovieWriter (ffmpeg) not available
Here is what I've tried.
1) installing ffmpeg on my system, and setting the path value. I followed the instructions here https://www.wikihow.com/Install-FFmpeg-on-Windows
I verified FFmpeg was installed by typing ffmpeg -version in the cmd window
2) conda install -c conda-forge ffmpeg
This still results in the ffmpeg not available error.
3) I've followed the instructions here:
Matplotlib-Animation "No MovieWriters Available", which just says to do 1 and 2 above.
4) Here Stop Matplotlib Jupyter notebooks from showing plot with animation
which suggests using
HTML(anim.to_jshtml())
However, this gives me an invalid format string error for date_str = df_expanded.index[i].strftime('%B %-d, %Y')
5) I've set the path variable directly in the Jupyter notebook:
plt.rcParams['animation.ffmpeg_path'] = 'C:\FFmpeg\ffmpeg-20200610-9dfb19b-win64-static\bin\ffmpeg.exe'
6) Restarting my kernel
7) Rebooting my system
8) Breaking my computer into small pieces, grinding those through an industrial shredder, burning the pieces, salting the earth where they lay, and then getting an entirely new computer and trying everything all over.
So far, nothing has worked. When I run sample code at http://louistiao.me/posts/notebooks/embedding-matplotlib-animations-in-jupyter-as-interactive-javascript-widgets/ I can get it to work, using their code only.
But I cannot get my own code to work. Any help would be much appreciated. Thanks!
plt.rcParams['animation.ffmpeg_path'] = 'C:\FFmpeg\ffmpeg-20200610-9dfb19b-win64-static\bin\ffmpeg.exe'
Here you have to add an r in front of 'C:~~~', i.e. use a raw string r'C:~~~', so the backslashes are not treated as escape sequences:
plt.rcParams['animation.ffmpeg_path'] = r'C:\FFmpeg\ffmpeg-20200610-9dfb19b-win64-static\bin\ffmpeg.exe'
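To confirm that Matplotlib can actually see the binary after setting the path, a quick check (a small sketch, run in the same notebook kernel) is:
import matplotlib.pyplot as plt
import matplotlib.animation as animation

plt.rcParams['animation.ffmpeg_path'] = r'C:\FFmpeg\ffmpeg-20200610-9dfb19b-win64-static\bin\ffmpeg.exe'
print(plt.rcParams['animation.ffmpeg_path'])   # the path you just set
print(animation.writers.list())                # should include 'ffmpeg' if the writer is usable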
I don't know if you have tried this, but for me it worked when I assigned the video to a variable before calling HTML:
from IPython.display import HTML
html = anim.to_html5_video()
HTML(html)

How to run EnergyPlus-FMU using PyFMI

I am having trouble simulating an EnergyPlus FMU with PyFMI. I created the EnergyPlus FMU from the reference building model, and I am using PyFMI 2.5. How do I run the do_step() function?
from pyfmi import load_fmu
model = load_fmu("MyEnergyplus.fmu")
start_time = 0
final_time = 60.0 * 60 * 24 * 3 #seconds
step_size = 60 # seconds
opts = model.simulate_options()
idf_steps_per_hour = 60
ncp = (final_time - start_time)/(3600./idf_steps_per_hour)
opts['ncp'] = ncp
t = 0
status = model.do_step(current_t = t, step_size= step_size, new_step=True)
The error I got:
File "test_fmi2.py", line 15, in <module> status = model.do_step(current_t = t, step_size= step_size, new_step=True)
AttributeError: 'pyfmi.fmi.FMUModelME2' object has no attribute 'do_step'
I double-checked the PyFMI API and didn't find any problem.
How do I get the simulation to run? Thanks.
From the output we can see that the FMU you have loaded is a Model Exchange FMU, which does not have a do_step function (only Co-Simulation FMUs have one). For more information about the different FMU types, please see the FMI specification.
To simulate a Model Exchange FMU, please use the simulate method. The simulate method is also available for Co-Simulation FMUs and is the preferred way to run a simulation.
Not knowing how you set up the FMU, I can at least say that you forgot model.initialize(start_time, final_time).
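A minimal sketch of what that looks like for this FMU (same file name and number of communication points as in the question; simulate() handles initialization and stepping internally):
from pyfmi import load_fmu

model = load_fmu("MyEnergyplus.fmu")          # loaded as FMUModelME2, per the error message

start_time = 0.0
final_time = 60.0 * 60 * 24 * 3               # three days, in seconds
idf_steps_per_hour = 60

opts = model.simulate_options()
opts['ncp'] = int((final_time - start_time) / (3600.0 / idf_steps_per_hour))

res = model.simulate(start_time=start_time, final_time=final_time, options=opts)
# res is a result object; trajectories can be read by variable name, e.g. res['time']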

I can't understand this shape from the python code

I'm new to Python. I was searching on Google for Python code for street sign detection; I found some code, but I can't understand what it means.
elif dominant_color[0] > 80:
    zone_0 = square[square.shape[0]*3//8:square.shape[0]*5//8,
                    square.shape[1]*1//8:square.shape[1]*3//8]
    cv2.imshow('Zone0', zone_0)
    zone_0_color = warnadominan(zone_0, 1)

    zone_1 = square[square.shape[0]*1//8:square.shape[0]*3//8,
                    square.shape[1]*3//8:square.shape[1]*5//8]
    cv2.imshow('Zone1', zone_1)
    zone_1_color = warnadominan(zone_1, 1)

    zone_2 = square[square.shape[0]*3//8:square.shape[0]*5//8,
                    square.shape[1]*5//8:square.shape[1]*7//8]
    cv2.imshow('Zone2', zone_2)
    zone_2_color = warnadominan(zone_2, 1)
Thanks in advance
Assuming square has shape (8, 8), the slices cut out three 2x2 blocks: zone_0 covers rows 3-4 and columns 1-2 (middle height, left of centre), zone_1 covers rows 1-2 and columns 3-4 (upper part, horizontally centred), and zone_2 covers rows 3-4 and columns 5-6 (middle height, right of centre).
I think this diagram might help: (diagram not reproduced here)
Edit: The code is trying to extract the zones shown in the image.
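As a concrete illustration (a toy 8x8 NumPy array made up for this answer, not part of the question's code), each slice picks a block of rows and columns:
import numpy as np

square = np.arange(64).reshape(8, 8)           # stand-in for the image
h, w = square.shape

zone_0 = square[h*3//8:h*5//8, w*1//8:w*3//8]  # rows 3-4, cols 1-2: middle height, left of centre
zone_1 = square[h*1//8:h*3//8, w*3//8:w*5//8]  # rows 1-2, cols 3-4: upper part, horizontal centre
zone_2 = square[h*3//8:h*5//8, w*5//8:w*7//8]  # rows 3-4, cols 5-6: middle height, right of centre

print(zone_0, zone_1, zone_2, sep="\n\n")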

is it possible to get exactly the same results from tensorflow mfcc and librosa mfcc?

I'm trying to make the TensorFlow MFCC give me the same results as the librosa MFCC.
I have tried to match all the default parameters that librosa uses in my TensorFlow code and still got a different result.
This is the TensorFlow code I have used:
sample_rate = 16000
waveform = contrib_audio.decode_wav(
    audio_binary,
    desired_channels=1,
    desired_samples=sample_rate,
    name='decoded_sample_data')
transwav = tf.transpose(waveform[0])
stfts = tf.contrib.signal.stft(transwav,
                               frame_length=2048,
                               frame_step=512,
                               fft_length=2048,
                               window_fn=functools.partial(tf.contrib.signal.hann_window,
                                                           periodic=False),
                               pad_end=True)
spectrograms = tf.abs(stfts)
num_spectrogram_bins = stfts.shape[-1].value
lower_edge_hertz, upper_edge_hertz, num_mel_bins = 0.0, 8000.0, 128
linear_to_mel_weight_matrix = tf.contrib.signal.linear_to_mel_weight_matrix(
    num_mel_bins, num_spectrogram_bins, sample_rate, lower_edge_hertz,
    upper_edge_hertz)
mel_spectrograms = tf.tensordot(
    spectrograms,
    linear_to_mel_weight_matrix, 1)
mel_spectrograms.set_shape(spectrograms.shape[:-1].concatenate(
    linear_to_mel_weight_matrix.shape[-1:]))
log_mel_spectrograms = tf.log(mel_spectrograms + 1e-6)
mfccs = tf.contrib.signal.mfccs_from_log_mel_spectrograms(
    log_mel_spectrograms)[..., :20]
The equivalent in librosa:
libr_mfcc = librosa.feature.mfcc(wav, 16000)
The following are the graphs of the results:
I'm the author of tf.signal. Sorry for not seeing this post sooner, but you can get librosa and tf.signal.stft to match if you center-pad the signal before passing it to tf.signal.stft. See this GitHub issue for more details.
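A small sketch of what that padding looks like (my own illustration, reusing the frame_length/fft_length of 2048 from the question and a signal shaped like transwav above; librosa's center=True reflect-pads by n_fft // 2 on both sides):
import tensorflow as tf

frame_length = 2048
frame_step = 512

# mirror librosa's center=True by reflect-padding n_fft // 2 samples on each side
padded = tf.pad(transwav, [[0, 0], [frame_length // 2, frame_length // 2]], mode='REFLECT')
stfts = tf.signal.stft(padded,                 # tf.contrib.signal.stft in TF 1.x
                       frame_length=frame_length,
                       frame_step=frame_step,
                       fft_length=frame_length)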
I spent a whole day trying to make them match. Even rryan's solution didn't work for me (center=False in librosa), but I finally found out that the TF and librosa STFTs match only when win_length == n_fft in librosa and frame_length == fft_length in TF. That's why rryan's colab example works; if you set frame_length != fft_length, the amplitudes are very different (although visually, after plotting, the patterns look similar). A typical example: if you choose some win_length/frame_length and then set n_fft/fft_length to the smallest power of 2 greater than win_length/frame_length, the results will be different. So you need to stick with the inefficient FFT given by your window size. I don't know why it is so, but that's how it is; hopefully this will be helpful for someone.
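In code, the combination that matched for me looks roughly like this (a sketch using the numbers from the question; wav / wav_tensor stand for the NumPy and tensor versions of the same signal; the point is only that win_length == n_fft on the librosa side and frame_length == fft_length on the TF side):
import librosa
import tensorflow as tf

# librosa: window length equal to the FFT length, no centre padding
S_librosa = librosa.stft(wav, n_fft=2048, win_length=2048, hop_length=512, center=False)

# TensorFlow: frame_length equal to fft_length (tf.contrib.signal.stft in TF 1.x)
S_tf = tf.signal.stft(wav_tensor, frame_length=2048, frame_step=512, fft_length=2048)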
The output of contrib_audio.decode_wav should be a DecodeWav with { audio, sample_rate }, and the audio shape is (sample_rate, 1), so what is the purpose of taking the first item of waveform and transposing it?
transwav = tf.transpose(waveform[0])
There is no straightforward way, since librosa's stft uses center=True, which does not comply with the tf stft.
Had it been center=False, the tf/librosa stft would give near enough results; see the colab snippet.
But even so, trying to port the librosa code into tf is a big headache. Here is what I started and gave up on. Near, but not near enough.
import numpy as np
import librosa
import tensorflow as tf

def pow2db_tf(X):
    amin = 1e-10
    top_db = 80.0
    ref_value = 1.0
    log10 = 2.302585092994046
    log_spec = (10.0 / log10) * tf.log(tf.maximum(amin, X))
    log_spec -= (10.0 / log10) * tf.log(tf.maximum(amin, ref_value))
    pow2db = tf.maximum(log_spec, tf.reduce_max(log_spec) - top_db)
    return pow2db

def librosa_feature_like_tf(x, sr=16000, n_fft=2048, hop_length=512, n_mfcc=20):
    # hop_length was left undefined in my original fragment; librosa's default is n_fft // 4 = 512
    mel_basis = librosa.filters.mel(sr, n_fft).astype(np.float32)
    mel_basis = mel_basis.reshape(1, int(n_fft / 2 + 1), -1)
    tf_stft = tf.contrib.signal.stft(x, frame_length=n_fft, frame_step=hop_length, fft_length=n_fft)
    print("tf_stft", tf_stft.shape)
    tf_S = tf.matmul(tf.abs(tf_stft), mel_basis)
    print("tf_S", tf_S.shape)
    tfdct = tf.spectral.dct(pow2db_tf(tf_S), norm='ortho')
    print("tfdct before cut", tfdct.shape)
    tfdct = tfdct[:, :, :n_mfcc]
    print("tfdct after cut", tfdct.shape)
    # tfdct = tf.transpose(tfdct, [0, 2, 1]); print("tfdct after transpose", tfdct.shape)
    return tfdct

x = tf.placeholder(tf.float32, shape=[None, 16000], name='x')
tf_feature = librosa_feature_like_tf(x)
print("tf_feature", tf_feature.shape)
mfcc_rosa = librosa.feature.mfcc(wav, sr).T
print("mfcc_rosa", mfcc_rosa.shape)
For anyone still looking for this: I had a similar problem some time ago, matching librosa's mel filterbanks / mel spectrogram to a TensorFlow implementation. The solution was to use a different windowing approach for the spectrogram and to use librosa's mel matrix as a constant tensor. See here and here.
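A rough sketch of the "librosa mel matrix as a constant tensor" idea (my own illustration, reusing the sample rate and n_fft from the question; stfts is the STFT tensor computed as above):
import librosa
import tensorflow as tf

sr, n_fft = 16000, 2048

# precompute librosa's mel filterbank once and freeze it as a constant
mel_matrix = tf.constant(librosa.filters.mel(sr, n_fft).T, dtype=tf.float32)  # (n_fft // 2 + 1, n_mels)

magnitudes = tf.abs(stfts)                          # |STFT|, shape (..., frames, n_fft // 2 + 1)
mel_spectrogram = tf.tensordot(magnitudes, mel_matrix, 1)
log_mel = tf.math.log(mel_spectrogram + 1e-6)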
