I am just beginning to learn how to make VTK files. I wanted to start really simple, so I made a 1D structured points VTK file. Below is an extremely simple attempt:
# vtk DataFile Version 2.0
Vis Project Data
ASCII
DATASET STRUCTURED_POINTS
DIMENSIONS 2 1 1
ORIGIN 0 0 0
SPACING 1 1 1
POINT_DATA 2
SCALARS phi float 1
LOOKUP_TABLE default
0.1
0.2
However, whenever I try to load the file in ParaView, I get the error "Incorrect Dimensionality".
This is probably a really stupid question, but I will be forever grateful for an answer.
Thanks!
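In case it helps with debugging, one way to check a legacy file like this outside ParaView is to read it back with the vtk Python package and print what the reader sees. This is only a minimal sketch, and it assumes the file above is saved as simple.vtk:

import vtk

# Read the legacy-format structured points file back in
reader = vtk.vtkStructuredPointsReader()
reader.SetFileName('simple.vtk')  # hypothetical file name for the file shown above
reader.Update()

data = reader.GetOutput()
print('Dimensions:', data.GetDimensions())            # expect (2, 1, 1)
print('Number of points:', data.GetNumberOfPoints())  # expect 2
print('phi:', [data.GetPointData().GetScalars().GetValue(i)
               for i in range(data.GetNumberOfPoints())])  # the two scalar values

If the reader reports the dimensions and values you expect, the file itself is probably fine and the problem is on the ParaView side.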
I have a mono WAV file of a 'glass breaking' sound. When I graphically display its levels in Python using the librosa library, it shows a very large range of amplitudes, between +/- 20000 instead of +/- 1. When I open the same WAV file with Audacity, the levels are between +/- 1.
My question is: what causes this difference in displayed amplitude levels, and how can I correct it in Python? Min-max scaling would distort the sound, and I want to avoid it if possible.
The code is:
from scipy.io import wavfile
fs1, glass_break_data = wavfile.read('test_break_glass_normalized.wav')  # fs1 is the file's sample rate
%matplotlib inline
import matplotlib.pyplot as plt
import librosa.display
sr = 44100  # hard-coded; fs1 returned above already holds the actual rate
x = glass_break_data.astype('float')
plt.figure(figsize=(14, 5))
librosa.display.waveplot(x, sr=sr)
These are the images from the notebook and Audacity:
WAV usually uses integer values to represent individual samples, not floats. So what you see in the librosa plot is accurate for a 16 bit/sample audio file.
Programs like VLC show the format, including bit depth per sample in their info dialog, so you can easily check.
Another way to check the format might be using soxi or ffmpeg.
Audacity normalizes everything to floats in the range of -1 to 1—it does not show you the original format.
The same is true for librosa.load()—it also normalizes to [-1,1]. wavfile.read() on the other hand, does not normalize. For more info on ways to read WAV audio, please see for example this answer.
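If the goal is simply to get the scipy-read samples onto the same [-1, 1] scale that Audacity and librosa.load show, dividing by the maximum of the integer type does that without distorting the signal. A minimal sketch, assuming the file is 16-bit (or some other integer) PCM and reusing the file name from the question:

import numpy as np
import librosa.display
from scipy.io import wavfile

fs, data = wavfile.read('test_break_glass_normalized.wav')
print(data.dtype)  # e.g. int16 for a 16-bit PCM file

# Rescale integer samples to floats in [-1, 1]; this is a pure rescaling, not min-max distortion
if np.issubdtype(data.dtype, np.integer):
    x = data.astype(np.float32) / np.iinfo(data.dtype).max
else:
    x = data.astype(np.float32)  # already float; assumed to be in [-1, 1]

librosa.display.waveplot(x, sr=fs)  # now on the same scale as the Audacity view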
If you use librosa.load instead of wavfile.read, it will normalize the range to [-1, 1]. Note that librosa.load also resamples to 22050 Hz by default; pass sr=None if you want to keep the file's original sample rate.
glass_break_data, fs1 = librosa.load('test_break_glass_normalized.wav')
I have around 500 .ply files that need to be converted to VTK files. I usually use ParaView to convert them manually, but it takes forever. Is there a tool to convert PLY files into VTK files in Python?
Thanks in advance.
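Since ParaView is built on VTK, one option is to script the same conversion with the vtk Python package instead of clicking through the GUI. This is a minimal sketch, assuming the .ply files live in a folder called ply_files and that legacy .vtk polydata output is what you want:

import glob
import os
import vtk

for ply_path in glob.glob('ply_files/*.ply'):   # hypothetical input folder
    reader = vtk.vtkPLYReader()
    reader.SetFileName(ply_path)

    writer = vtk.vtkPolyDataWriter()            # writes legacy-format .vtk polydata
    writer.SetInputConnection(reader.GetOutputPort())
    writer.SetFileName(os.path.splitext(ply_path)[0] + '.vtk')
    writer.Write()                              # Write() runs the pipeline for this file

The meshio package is another possibility (meshio.read followed by meshio.write), if its supported formats cover what you need.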
I have a 3D geometry (vessels) in VTI format which I do not have any information about; I just visualize it with ParaView. I do not know how to read this VTI file and then convert it to a 3D numpy array in Python. Does anyone know how I can do this conversion?
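One approach is to read the file with the vtk package and pull the point data out through its numpy_support helpers. A minimal sketch, assuming the file is called vessels.vti and stores a single scalar array on its points:

import vtk
from vtk.util import numpy_support

reader = vtk.vtkXMLImageDataReader()   # .vti is the XML image-data format
reader.SetFileName('vessels.vti')      # hypothetical file name
reader.Update()

image = reader.GetOutput()
nx, ny, nz = image.GetDimensions()

# The point scalars come back as a flat array; VTK stores x fastest, so reshape to (z, y, x)
flat = numpy_support.vtk_to_numpy(image.GetPointData().GetScalars())
volume = flat.reshape(nz, ny, nx)
print(volume.shape)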
Certain wave plots generated by librosa's display module are just flat lines that fill the entire axes.
I used native sampling rates to load some WAV files into librosa, and my dataset is a mix of stereo and mono files. I know the wave plots are incorrect because they look nothing like the waveform display of the same files in Audacity.
I've tried playing with the figure width, height and DPI, but there is no improvement in the generated waveplots. Below is the waveplot generated by librosa for one of these audio files and the expected wave plot in Audacity.
Librosa Waveplot
Audacity Waveplot
The code used to generate the plot is derived from the librosa documentation:
import librosa
import librosa.display
import matplotlib.pyplot
import numpy
sound, sr = librosa.load(input_dir, sr=None)  # input_dir holds the path to one wav file
matplotlib.pyplot.figure(figsize=(width, height), dpi=dpi)
librosa.display.waveplot(numpy.array(sound), sr=sr)
matplotlib.pyplot.tight_layout()
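To separate a loading problem from a plotting problem, one quick cross-check is to plot the raw samples directly with matplotlib and compare that against the Audacity view. This is just a sketch, assuming input_dir holds the path to one of the affected files:

import numpy as np
import matplotlib.pyplot as plt
import librosa

y, sr = librosa.load(input_dir, sr=None, mono=True)  # mono=True downmixes stereo to a single channel
t = np.arange(len(y)) / sr                           # time axis in seconds

plt.figure(figsize=(14, 5))
plt.plot(t, y, linewidth=0.5)
plt.xlabel('Time (s)')
plt.ylabel('Amplitude')
plt.tight_layout()
plt.show()

If this plot matches Audacity, the audio is loading fine and the issue is in how waveplot is being driven; if it is also a flat filled band, the problem is upstream of the display call.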