gdcm2vtk usage to convert a stack of images into a vti file - vtk

I've managed to compile GDCM with VTK and I have a particular application I would like to use, which is the "gdcm2vtk.exe".
Now, what's the syntax for converting a stack of images into a ".vti" file? So far I have this:
gdcm2vtk Input_Directory file.vti
Now, when I run something like this:
gdcm2vtk "C:/dicom/dicom directory" output.vti
I get an error:
could not find no reader to handle file: "C:/dicom/dicom directory"
Is there anything I'm missing there?

As specified in the documentation, gdcm2vtk does not handle a directory as input.
You may want to convert your DICOM Series into a single DICOM Instance using gdcmimg.
As of GDCM 2.6, gdcm2vtk is able to take a directory as input. Make sure to sort the files according to the well-known Image Orientation (Patient) & Image Position (Patient) strategy instead of relying on filename ordering to reconstruct your VTK (*.vti) file:
$ gdcm2vtk --ipp-sort input_dir output.vti

How do I get started training a custom voice model with Mozilla TTS on Ubuntu 20.04?

I'd like to create a custom voice in Mozilla TTS using audio samples I have recorded but am not sure how to get started. The Mozilla TTS project has documentation and tutorials, but I'm having trouble putting the pieces together -- it seems like there's some basic information missing that someone starting out needs to know to get going.
Some questions I have:
I see that there is a Docker image for Mozilla TTS, but the documentation for it covers creating speech and doesn't mention training. Can I use the Docker image for training?
If I can't use the Docker image for training, how do I get a functional copy of Mozilla TTS running on my system with Python 3? I've tried following the commands that the project provides, but I get dependency errors, version conflicts, or errors about not having sufficient permission to install packages.
What information do I need in order to train the model? What audio formats do I need? I see that I need a metadata.csv file -- what do I need to put in that file? What do I customize in the config file?
Most of the configs reference a scale_stats.npy file -- how do I generate this?
How do I run the training?
After a lot of research and experimentation, I can share my learnings to answer my own questions.
Can the Mozilla TTS Docker image be used for training? (TL;DR: "No")
The Mozilla TTS Docker image is really geared toward playback and doesn't seem equipped to be used for training. At least, even when running a shell inside the container, I could not get training to work. But after figuring out what was causing PIP to be unhappy, the process of getting Mozilla TTS up and running in Ubuntu turns out to be pretty straightforward.
Installing Mozilla TTS using Python 3, PIP, and a Virtual Environment
The documentation for Mozilla TTS doesn't mention anything about virtual environments, but IMHO it really should. Virtual environments ensure that dependencies for different Python-based applications on your machine don't conflict.
I'm running Ubuntu 20.04 on WSL, so Python 3 is already installed. Given that, from within my home folder, here are the commands I used to get a working copy of Mozilla TTS:
sudo apt-get install espeak
git clone https://github.com/mozilla/TTS mozilla-tts
python3 -m venv mozilla-tts
cd mozilla-tts
./bin/pip install -e .
This created a folder called ~/mozilla-tts in my home folder that contains the Mozilla TTS code. The folder is set up as a virtual environment, which means that as long as I execute Python commands via ~/mozilla-tts/bin/python and PIP via ~/mozilla-tts/bin/pip, Python will use only the packages that exist in that virtual environment. That eliminates the need to be root when running pip (since we're not affecting system-wide packages), and it ensures no package conflicts. Score!
Prerequisites for Training a Model
For the best results when training a model, you will need:
Short audio recordings (at least 100) that:
Are in 16-bit, mono PCM WAV format.
Are between 1 and 10 seconds each.
Have a sample rate of 22050 Hz.
Have a minimum of background noise and distortion.
Have no long pauses of silence at the beginning, middle, or end.
A metadata.csv file that references each WAV file and indicates what text is spoken in the WAV file.
A configuration file tailored to your data set and chosen vocoder (e.g. Tacotron, WavGrad, etc).
A machine with a fast CPU (ideally an NVIDIA GPU with CUDA support and at least 12 GB of GPU RAM; you cannot effectively use CUDA if you have less than 8 GB of GPU RAM).
Lots of RAM (at least 16 GB of RAM is preferable).
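If you want to sanity-check your recordings against these audio requirements before training, a short script can flag files that need conversion. Here's a minimal sketch, assuming the soundfile package is installed and that your recordings live under a wavs/ folder (both are assumptions; adjust the path and checks to your setup):
from pathlib import Path
import soundfile as sf

# "wavs" is an assumed location for your recordings; change it to wherever they live.
for path in Path("wavs").glob("**/*.wav"):
    info = sf.info(str(path))
    problems = []
    if info.samplerate != 22050:
        problems.append("sample rate is %d Hz (expected 22050)" % info.samplerate)
    if info.channels != 1:
        problems.append("%d channels (expected mono)" % info.channels)
    if info.subtype != "PCM_16":
        problems.append("subtype is %s (expected 16-bit PCM)" % info.subtype)
    if not 1.0 <= info.duration <= 10.0:
        problems.append("duration is %.1f s (expected 1-10 s)" % info.duration)
    if problems:
        print(str(path) + ": " + "; ".join(problems))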
Preparing the Audio Files
If your source of audio is in a different format than WAV, you will need to use a program like Audacity or SoX to convert the files into WAV format. You should also trim out portions of audio that are just noise, umms, ahs, and other sounds from the speaker that aren't really words you're training on.
If your source of audio isn't perfect (i.e. has some background noise), is in a different format, or happens to be a higher sample rate or different resolution (e.g. 24-bit, 32-bit, etc.), you can perform some clean-up and conversion. Here's a script that is based on an earlier script from the Mozilla TTS Discourse forums:
from pathlib import Path
import os
import subprocess
import soundfile as sf
import pyloudnorm as pyln
import sys

src = sys.argv[1]
rnn = "/PATH/TO/rnnoise_demo"

paths = Path(src).glob("**/*.wav")

for filepath in paths:
    target_filepath = Path(str(filepath).replace("original", "converted"))
    target_dir = os.path.dirname(target_filepath)
    if str(filepath) == str(target_filepath):
        raise ValueError("Source and target path are identical: " + str(target_filepath))

    print("From: " + str(filepath))
    print("To: " + str(target_filepath))

    # Stereo to mono; upsample to 48000 Hz
    subprocess.run(["sox", str(filepath), "48k.wav", "remix", "-", "rate", "48000"])
    # Convert WAV to raw
    subprocess.run(["sox", "48k.wav", "-c", "1", "-r", "48000", "-b", "16", "-e", "signed-integer", "-t", "raw", "temp.raw"])
    # Apply RNNoise
    subprocess.run([rnn, "temp.raw", "rnn.raw"])
    # Convert raw back to WAV
    subprocess.run(["sox", "-r", "48k", "-b", "16", "-e", "signed-integer", "rnn.raw", "-t", "wav", "rnn.wav"])

    subprocess.run(["mkdir", "-p", str(target_dir)])
    # Apply high/low-pass filters and change the sample rate to 22050 Hz
    subprocess.run(["sox", "rnn.wav", str(target_filepath), "remix", "-", "highpass", "100", "lowpass", "7000", "rate", "22050"])

    data, rate = sf.read(target_filepath)
    # Peak-normalize audio to -1 dB
    peak_normalized_audio = pyln.normalize.peak(data, -1.0)
    # Measure the loudness first
    meter = pyln.Meter(rate)  # create BS.1770 meter
    loudness = meter.integrated_loudness(data)
    # Loudness-normalize audio to -25 dB LUFS
    loudness_normalized_audio = pyln.normalize.loudness(data, loudness, -25.0)
    sf.write(target_filepath, data=loudness_normalized_audio, samplerate=22050)

    print("")
To use the script above, you will need to check out and build the RNNoise project:
sudo apt update
sudo apt-get install build-essential autoconf automake gdb git libffi-dev zlib1g-dev libssl-dev
git clone https://github.com/xiph/rnnoise.git
cd rnnoise
./autogen.sh
./configure
make
You will also need SoX installed:
sudo apt install sox
And you will need to install pyloudnorm via ./bin/pip.
Next, customize the script so that rnn points to the path of the rnnoise_demo command (after building RNNoise, you can find it in the examples folder). Then run the script, passing the source path -- the folder where you have your WAV files -- as the first command-line argument. Make sure that the word "original" appears somewhere in the path. The script will automatically place the converted files in a corresponding path, with original changed to converted; for example, if your source path is /path/to/files/original, the script will place the converted results in /path/to/files/converted.
Preparing the Metadata
Mozilla TTS supports several different data loaders, but one of the most common is LJSpeech. To use it, we can organize our data set to follow LJSpeech conventions.
First, organize your files so that you have a structure like this:
- metadata.csv
- wavs/
- audio1.wav
- audio2.wav
...
- last_audio.wav
The naming of the audio files doesn't appear to be significant. But, the files must be in a folder called wavs. You can use sub-folders inside wavs though, if so desired.
The metadata.csv file should be in the following format:
audio1|line that's spoken in the first file
audio2|line that's spoken in the second file
last_audio|line that's spoken in the last file
Note that:
There is no header line.
The columns are joined together with a pipe symbol (|).
There should be one row per WAV file.
The WAV filename is in the first column, without the wavs/ folder prefix, and without the .wav suffix.
The textual description of what's spoken in the WAV is written out in the second column, with all numbers and abbreviations spelled-out.
(I did observe that the steps in the Mozilla TTS documentation have you shuffle the metadata file and then split it into a "training" set (metadata_train.csv) and a "validation" set (metadata_val.csv), but none of the sample configs provided in the repo are actually configured to use these files. I've filed an issue about that because it's confusing and counter-intuitive to a beginner.)
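If your transcripts are already in some machine-readable form, generating metadata.csv is a quick scripting job. Here's a minimal sketch; the transcripts dictionary is a hypothetical stand-in for however you store your transcriptions:
# Hypothetical mapping from WAV base name (no wavs/ prefix, no .wav suffix) to spoken text.
transcripts = {
    "audio1": "line that's spoken in the first file",
    "audio2": "line that's spoken in the second file",
}

# LJSpeech-style rows: pipe-separated, no header, one row per WAV file.
with open("metadata.csv", "w", encoding="utf-8") as f:
    for name, text in transcripts.items():
        f.write(name + "|" + text + "\n")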
Preparing the config.json File
You need to prepare a configuration file that describes how your custom TTS will be configured. This file is used by multiple parts of Mozilla TTS when preparing for training, performing training, and generating audio from your custom TTS. Unfortunately, though this file is very important, the documentation for Mozilla TTS largely glosses over how to customize this file.
To start, create a copy of the default Tacotron config.json file from the Mozilla repo. Then, be sure to customize at least the audio.stats_path, output_path, phoneme_cache_path, and datasets.path settings.
You can customize other parameters if you so choose, but the defaults are a good place to start. For example, you can change the run_name to control the naming of folders containing your datasets.
Do not change the datasets.name parameter (leave it set to "ljspeech"); otherwise you'll get strange errors related to an undefined dataset type. It appears that the dataset name refers to the type of data loader used, rather than what you call your data set. Similarly, I haven't risked changing the model setting, since I don't yet know how that value gets used by the system.
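If you'd rather patch those settings programmatically than edit the JSON by hand, something like the sketch below works. It assumes the default config.json layout (in particular, that datasets is a list with one entry); the paths are placeholders, so double-check the keys against your copy of the file:
import json

with open("config.json", "r", encoding="utf-8") as f:
    config = json.load(f)

# All paths below are placeholders for your own project layout.
config["audio"]["stats_path"] = "/path/to/your/project/scale_stats.npy"
config["output_path"] = "/path/to/your/project/output/"
config["phoneme_cache_path"] = "/path/to/your/project/phoneme_cache/"
config["datasets"][0]["path"] = "/path/to/your/project/"  # folder containing metadata.csv and wavs/

with open("config.json", "w", encoding="utf-8") as f:
    json.dump(config, f, indent=4)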
Preparing scale_stats.npy
Most of the training configurations rely on a statistics file called scale_stats.npy that's generated based on the training set. You can use the ./TTS/bin/compute_statistics.py script inside the Mozilla TTS repo to generate this file. This script requires your config.json file as an input, and is a good step to sanity check that everything looks good up to this point.
Here's an example of a command you can run if you are inside the Mozilla TTS folder you created at the start of this tutorial (adjust paths to fit your project):
./bin/python ./TTS/bin/compute_statistics.py --config_path /path/to/your/project/config.json --out_path /path/to/your/project/scale_stats.npy
If successful, this will generate a scale_stats.npy file under /path/to/your/project/scale_stats.npy. Be sure that the path in the audio.stats_path setting of your config.json file matches this path.
Training the Model
It's now time for the moment of truth -- it's time to start training your model!
Here's an example of a command you can run to train a Tacotron model if you are inside the Mozilla TTS folder you created at the start of this tutorial (adjust paths to fit your project):
./bin/python ./TTS/bin/train_tacotron.py --config_path /path/to/your/project/config.json
This process will take several hours, if not days. If your machine supports CUDA and has it properly configured, the process will run more quickly than if you are just relying on CPU alone.
If you get any errors related to a "signal error" or "signal received", this typically indicates that your machine does not have enough memory for the operation. You can run the training with less parallelism but it will run much more slowly.
Note: on Windows, following GuyPaddock's advice above, I had to use pip install -e . instead of leading with ./bin/pip, and I had to use python instead of python3.
That might be obvious to someone else, but I am not so familiar with Python or with shells that have customized path shortcuts.

Read temperature, humidity, etc. from grib2 files with ecCodes in python3

I am trying to use ecCodes in Python to get various weather information, such as temperature, humidity, etc. out of grib2 files. I am using the GFS files. I would like to be able to extract the data as (lat, lon, alt, $data_point), and as a 2D array for each altitude.
I have tried the example programs located here: https://confluence.ecmwf.int/display/ECC/grib_iterator_bitmap
I can't figure out what I am looking at in the output of that program. When I load the messages using their keys, it is not obvious how to make a grid. When I load the grid, the data doesn't have labels I understand.
@craeft, have a look at https://github.com/ecmwf/cfgrib. cfgrib is the new standard for Python GRIB file handling. It is easy to install and makes it easy to access files. Please install the latest version, because it supports GFS files.
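For a concrete starting point, cfgrib is usually driven through xarray, and GFS files mix several vertical level types, so you filter by one of them when opening. A minimal sketch (the file name, variable names, and filter are illustrative and may differ for your files):
import xarray as xr

# Open only the fields defined on pressure levels; without a filter, cfgrib
# cannot merge the mixed level types in a GFS file into one dataset.
ds = xr.open_dataset(
    "gfs.t00z.pgrb2.0p25.f000",  # illustrative GFS file name
    engine="cfgrib",
    backend_kwargs={"filter_by_keys": {"typeOfLevel": "isobaricInhPa"}},
)

print(ds)                                  # shows variables such as t (temperature) and r (relative humidity)
temperature = ds["t"]                      # dimensions: (isobaricInhPa, latitude, longitude)
print(temperature.sel(isobaricInhPa=500))  # 2D grid of temperature at the 500 hPa level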

Use images in s3 with SageMaker without .lst files

I am trying to create (what I thought was) a simple image classification pipeline between s3 and SageMaker.
Images are stored in an s3 bucket with their class labels in their file names currently, e.g.
My-s3-bucket-dir
cat-1.jpg
dog-1.jpg
cat-2.jpg
..
I've been trying to leverage several related example .py scripts, but most seem to download data sets that are already in .rec format or that contain special manifest or annotation files I don't have.
All I want is to pass the images from s3 to the SageMaker image classification algorithm that's located in the same region, IAM account, etc. I suppose this means I need a .lst file.
When I try to manually create the .lst file, it doesn't seem to like it, and the manual work takes too long to be a good practice.
How can I automatically generate the .lst file (or otherwise send the images/classes for training)?
Things I read made it sound like im2rec.py was a solution, but I don't see how. The example I'm working with now is
Image-classification-fulltraining-highlevel.ipynb
but it seems to download the data as .rec,
download('http://data.mxnet.io/data/caltech-256/caltech-256-60-train.rec')
download('http://data.mxnet.io/data/caltech-256/caltech-256-60-val.rec')
which just skips working with the .jpeg files. I found another example that converts them to .rec, but again it essentially has the .lst already (as .json) and just converts it.
I have mostly been working in a Python Jupyter notebook within the AWS console (in my browser) but I have also tried using their GUI.
How can I simply and automatically generate the .lst or otherwise get the data/class info into SageMaker without manually creating a .lst file?
Update
It looks like im2rec.py can't be run against s3. You'd have to completely download everything from the s3 bucket into the notebook's storage...
Please note that [...] im2rec.py is running locally,
therefore cannot take input from the S3 bucket. To generate the list
file, you need to download the data and then use the im2rec tool. - AWS SageMaker Team
There are 3 options to provide annotated data to the Image Classification algo: (1) packing labels in recordIO files, (2) storing labels in a JSON manifest file ("augmented manifest" option), (3) storing labels in a list file. All options are documented here: https://docs.aws.amazon.com/sagemaker/latest/dg/image-classification.html.
The augmented manifest and .lst file options are quick to implement, since they just require you to create an annotation file, usually with a quick for loop. RecordIO requires you to use the im2rec.py tool, which is a little more work.
Using .lst files is a reasonably easy option: you just need to create them with a quick for loop, like this:
# assuming train_index, train_class, train_pics store the pic index, class and path
with open('train.lst', 'a') as file:
    for index, cl, pic in zip(train_index, train_class, train_pics):
        file.write(str(index) + '\t' + str(cl) + '\t' + pic + '\n')
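If the class is encoded in each file name (as with cat-1.jpg and dog-1.jpg above), you can also build the .lst directly from an S3 listing without downloading the images first. Here's a minimal sketch using boto3; the bucket name and the class-to-label mapping are hypothetical, and it assumes the class is everything before the first hyphen in the file name:
import boto3

s3 = boto3.client("s3")
bucket = "my-s3-bucket-dir"            # hypothetical bucket name
class_to_label = {"cat": 0, "dog": 1}  # assumed mapping from class name to integer label

rows = []
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket):
    for obj in page.get("Contents", []):
        key = obj["Key"]
        if not key.lower().endswith(".jpg"):
            continue
        class_name = key.split("/")[-1].split("-")[0]  # "cat-1.jpg" -> "cat"
        rows.append((len(rows), class_to_label[class_name], key))

# .lst format: index <tab> label <tab> relative path, one row per image
with open("train.lst", "w") as f:
    for index, label, path in rows:
        f.write(str(index) + "\t" + str(label) + "\t" + path + "\n")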

How to add dicom tags for a series of dicom images?

I want to add DICOM tags to a series of DICOM images and save the modified batch.
I have written a simple Python script using pydicom that can edit and add DICOM tags in a single DICOM image, but I want to do the same for a complete image set (say 20 or 30 images).
Can anybody suggest a way to do such a task using pydicom or Python?
Just collect your filenames in a list and process each filename (read the file, edit the contents, save it as new or maybe use the same name).
Have a look at Python's os module. For instance, os.listdir('path') returns a list of filenames found in the given path. If that path points to a directory that contains only DICOM images, you now have a list of DICOM filenames. Next, use os.path.join('path', filename) to get an absolute path that you can use as input for reading a DICOM file with pydicom.
Also you might want to use a for loop.
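Putting those pieces together, a minimal sketch might look like the following; the directory paths and the tag being set are illustrative, so substitute the tags you actually want to add:
import os
import pydicom

src_dir = "/path/to/dicom/images"    # illustrative input directory
out_dir = "/path/to/dicom/modified"  # illustrative output directory
os.makedirs(out_dir, exist_ok=True)

for filename in os.listdir(src_dir):
    ds = pydicom.dcmread(os.path.join(src_dir, filename))
    # Edit or add whatever tags you need, e.g. Series Description (0008,103E):
    ds.SeriesDescription = "Modified series"
    ds.save_as(os.path.join(out_dir, filename))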
Let's suppose you have a list of DICOM image file paths in a list named dicom_paths. Then:
import pydicom

dicom_paths = [...]  # list of image paths here
dicom_data = [pydicom.dcmread(path) for path in dicom_paths]
for dicom_data_item in dicom_data:
    # do what you want here
    ...
Hope it helps

Python3 extract contents of an .iso file

I am trying to find a way to extract the contents of an ISO image to a designated file path. This can be done easily using 7-Zip, but I can't find a Python way to do it. There seems to be a library, isoparser (https://github.com/barneygale/isoparser), but it does not give many examples of how to do this.
Does anyone have experience doing this or can provide some examples?
I'd switch gears and use pycdlib. Check out that library's example of an ISO extraction script. Note: I haven't used either pycdlib or isoparser, but the former looks more friendly.
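For reference, here's roughly what extraction looks like with pycdlib, sketched from its documentation (I haven't run this; the walk/get_file_from_iso usage and the handling of the ";1" version suffix are assumptions to verify against your ISO):
import os
import pycdlib

iso = pycdlib.PyCdlib()
iso.open("cd.iso")

# Walk the ISO 9660 namespace and copy every file out under output_dir.
for dirname, dirlist, filelist in iso.walk(iso_path="/"):
    for filename in filelist:
        iso_path = os.path.join(dirname, filename)
        # Strip the ISO 9660 ";1" version suffix for the local file name.
        local_path = os.path.join("output_dir", iso_path.lstrip("/").split(";")[0])
        os.makedirs(os.path.dirname(local_path), exist_ok=True)
        iso.get_file_from_iso(local_path, iso_path=iso_path)

iso.close()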
Take a look at these lib7zip bindings.
Example usage:
# pip install git+https://github.com/topia/pylib7zip
from lib7zip import Archive, formats
archive = Archive("cd.iso", forcetype="Iso")
#extract all items to the directory, directory will be created if it doesn't exist
archive.extract('output_dir')
# iterate over archive contents
for f in archive:
    if f.is_dir:
        continue
    print("; %12s %s %s" % (f.size, f.mtime.strftime("%H:%M.%S %Y-%m-%d"), f.path))
