Using ParaView filters in Python, ParaView Python API - vtk

I have been using ParaView to visualize and analyse VTU files. I find the calculate gradient filter quite useful. I would like to know if there is a Python API for ParaView that I can use to apply this filter.
I'm looking for something like this.
import paraview as pv
MyFile = "Myfile0001.vtu"
Divergence = pv.filters.GradientOfUnstructuredDataset(MyFile)

ParaView is fully scriptable in Python. Each part of this doc has a 'do it in python' version.
While API documentation does not necessarily exist for every filter, you can use the Python Trace (in the Tools menu), which records actions from the GUI and saves them as a Python script.
EDIT
To get the data back as an array, some additional steps are needed because ParaView works in a client/server mode. You should Fetch the data; then you can manipulate the vtkObject, extract the array, and convert it to NumPy.
Something like
from paraview.simple import *
from vtk.numpy_interface import dataset_adapter as dsa
gridvtu = XMLUnstructuredGridReader(registrationName='grid', FileName=['grid.vtu'])
gradient = GradientOfUnstructuredDataSet(registrationName='Gradient', Input=gridvtu)
gradient.ComputeDivergence = 1  # needed so the filter adds a 'Divergence' point array
vtk_grid = servermanager.Fetch(gradient)
wrapped_grid = dsa.WrapDataObject(vtk_grid)
divergence_array = wrapped_grid.PointData["Divergence"]
Note that divergence_array is a numpy.ndarray
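Since it is a plain NumPy array, ordinary NumPy operations apply to it directly; a tiny illustration (the statistics chosen here are arbitrary):
import numpy as np
print(divergence_array.shape)             # one value per point of the grid
print(np.abs(divergence_array).max())     # largest absolute divergence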
You can also write pure VTK code, as in this example on SO.

Related

Method for converting a skeletonized raster array to a polyline using Python 3.7

I am attempting to convert a rasterized line to a polyline. I have skeletonized the raster, but wish to export it as a shapefile (polyline feature) without resorting to ArcGIS. In ArcGIS there is a single tool, 'Raster to Polyline', which completes this task. I've tried a few Pythonic approaches, but they all seem to produce polygons rather than a single line feature, as observed when running the skeletonization tool from skimage (below).
Any suggestions would be much appreciated.
The code I have up to the question raised above is posted below:
from osgeo import gdal
from skimage import morphology
import matplotlib.pyplot as plt

rasterClines = rasterpath + ClineRasterName
print(rasterClines)
raster = gdal.Open(rasterClines)
band = raster.GetRasterBand(1)
data = band.ReadAsArray()
final = morphology.skeletonize(data)
plt.figure(figsize=(15,15))
plt.imshow(final, cmap='gray')
#Method for exporting 'final' to .shp file
The plot looks correct, but I just can't find a method to export it.
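One way to approach the export (a sketch, not from the original thread): connect neighbouring skeleton pixels into short segments with shapely, merge them into polylines, and write them out with fiona. Both libraries, the output filename and the CRS handling (omitted here) are assumptions on top of the question's code.
import numpy as np
import fiona
from shapely.geometry import LineString, mapping
from shapely.ops import linemerge

# Map array indices to map coordinates (pixel centres) with the raster's geotransform
gt = raster.GetGeoTransform()

def pixel_to_xy(row, col):
    x = gt[0] + (col + 0.5) * gt[1] + (row + 0.5) * gt[2]
    y = gt[3] + (col + 0.5) * gt[4] + (row + 0.5) * gt[5]
    return (x, y)

# Build one short segment between every pair of adjacent skeleton pixels
# (only "forward" neighbours, so each segment is created exactly once)
on_pixels = set(zip(*np.nonzero(final)))
segments = []
for r, c in on_pixels:
    for dr, dc in ((0, 1), (1, 0), (1, 1), (1, -1)):
        if (r + dr, c + dc) in on_pixels:
            segments.append(LineString([pixel_to_xy(r, c), pixel_to_xy(r + dr, c + dc)]))

# Merge the segments into as few continuous polylines as possible
merged = linemerge(segments)
lines = list(merged.geoms) if merged.geom_type == "MultiLineString" else [merged]

# Write the polylines to a shapefile (CRS assignment left out for brevity)
schema = {"geometry": "LineString", "properties": {"id": "int"}}
with fiona.open("centerline.shp", "w", driver="ESRI Shapefile", schema=schema) as shp:
    for i, line in enumerate(lines):
        shp.write({"geometry": mapping(line), "properties": {"id": i}})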

Writing into a Jupyter Notebook from Python

Is it possible for a Python script to write into an IPython Notebook?
with open("my_notebook.ipynb", "w") as jup:
    jup.write("print(\"Hello there!\")")
If there's some package for doing so, can I also control the way cells are split in the notebook?
I'm designing a software tool (that carries out some optimization) to prepare an IPython notebook that can be run on some server performing scientific computations.
I understand that a related solution is to output to a Python script and load it within an IPython Notebook using %load my_python_script.py. However, that requires the user to type something that I would ideally like to avoid.
Look at the nbformat repo on GitHub; the reference implementation is shown there.
From their docs
Jupyter (né IPython) notebook files are simple JSON documents, containing text, source code, rich media output, and metadata. Each segment of the document is stored in a cell.
It also sounds like you want to create the notebook programmatically, so you should use the NotebookNode object.
For the code, something like the following should get you what you need. new_code_cell should be used if you have code cells rather than just plain text cells; text cells should use the existing Markdown formatting.
import nbformat
notebook = nbformat.v4.new_notebook()
text = """Hello There """
notebook['cells'] = [nbformat.v4.new_markdown_cell(text)]
nbformat.write(notebook, 'filename.ipynb')
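For the asker's actual goal (writing runnable code into cells), a minimal sketch along the same lines using new_code_cell next to new_markdown_cell; the file name and cell contents just mirror the question:
import nbformat
nb = nbformat.v4.new_notebook()
nb['cells'] = [
    nbformat.v4.new_markdown_cell("A generated notebook"),   # one Markdown cell
    nbformat.v4.new_code_cell('print("Hello there!")'),      # one code cell
]
nbformat.write(nb, "my_notebook.ipynb")
Each new_*_cell call creates exactly one cell, so how the content is split across cells is fully under your control.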

Why isn't pandas profiling showing any output in IPython?

I have a quick question about pandas_profiling. Basically I'm trying to use pandas profiling, but instead of showing the output it says something like this:
<pandas_profiling.ProfileReport at 0x23c02ed77b8>
Where am I making the mistake? Or does it have anything to do with IPython? Because I'm using IPython in Anaconda.
Try this:
import pandas_profiling
pfr = pandas_profiling.ProfileReport(df)
pfr.to_notebook_iframe()
pandas_profiling creates an object that then needs to be displayed or output. One standard way of doing so is to save it as an HTML file:
profile.to_file(outputfile="sample_file_name.html")
("profile" being the variable you used to save the profile itself)
It doesn't have to do with IPython specifically. Because you're going line by line (instead of running a full block of code that includes a reporting step), the interpreter is showing you the object's repr. The code above should let you see the report once you open the resulting file.
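For completeness, a minimal end-to-end sketch combining both suggestions; the CSV path and variable names here are placeholders:
import pandas as pd
from pandas_profiling import ProfileReport

df = pd.read_csv("data.csv")              # placeholder input
profile = ProfileReport(df)
profile.to_notebook_iframe()              # render inline in a Jupyter/IPython notebook
profile.to_file("sample_file_name.html")  # or write a standalone HTML report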

Convert CartoDB georeference types in Python

I found a dataframe on the CARTO web site, where the frontier is stored as something like:
0106000020E6100000010000000103000000010000001D030000E8ACD8AE0EBDF7BFCD5ADA
E288B5434089C31B1EB0D0F7BFCB057D1C6DB543400E1D88BF88CCF7BFF179B153C2B44340
44AA908B84D6F7BF9D0BCF72F1B343406EFB026F69CFF7BF065C06D547B34340E81D8C7C3D
B5F7BF0BD8C9B95CB34340802C9EE699A7F7BF4844336AF5B343406E59C33BBE97F7BF2320
F7D152B443403422368D7187F7BF87342C4029B443405EAABCA67F77F7BFE03280667EB343
4020A3204F9F7AF7BFE28FBE1450B34340F2D007528D9AF7BFC04AE51506B343406F934569
...
I don't know how I can show this frontier with Python libraries like Bokeh or Matplotlib. I would like to be able to paint the areas inside the line, with a hover showing each area's name.
All the examples I've found work with a public dataset bundled inside the libraries, but I don't have such a dataset.
This looks like hex-encoded WKB (well-known binary); the Shapely library can decode it, see https://shapely.readthedocs.io/en/latest/manual.html?highlight=wkb#shapely.wkb.loads
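A minimal sketch of that decoding step, assuming the value is hex-encoded (extended) WKB as CARTO usually stores it; the_geom_hex is a placeholder for the full string shown above:
from shapely import wkb

geometry = wkb.loads(the_geom_hex, hex=True)
print(geometry.geom_type)                 # the leading 0106... bytes indicate a MultiPolygon

# Shapely exposes plain coordinate sequences that Matplotlib or Bokeh can plot
for polygon in geometry.geoms:
    xs, ys = polygon.exterior.xy          # boundary vertices of each part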

Python: Universal XML parser

I'm trying to make a simple Python 3 program that reads weather information from an XML web source, converts it into a Python-readable object (maybe a dictionary) and processes it (for example, visualizes multiple observations in a graph).
The data source is the national weather service's (direct translation) XML file at the link provided in the code.
What differs from the typical XML parsing questions on Stack Overflow is that there are repeated tags without an in-tag identifier (the <station> tags in my example) and some with one (first line, <observations timestamp="14568.....">). Also, I would like to try parsing it straight from the website, not a local file; of course, I could create a local temporary file too.
What I have so far is simply a loading script that gives a string containing the XML for both the forecast and the latest weather observations.
from urllib.request import urlopen
# Read the 4-day forecast
forecast = urlopen("http://www.ilmateenistus.ee/ilma_andmed/xml/forecast.php").read().decode("iso-8859-1")
# Get the current weather observations
observ = urlopen("http://www.ilmateenistus.ee/ilma_andmed/xml/observations.php").read().decode("iso-8859-1")
In short, I'm looking for as universal a way as possible to parse XML into a Python-readable object (such as a dictionary/JSON or a list) while preserving all of the information in the XML file.
P.S. I would prefer a standard Python 3 module such as xml, which I didn't understand.
Try the xmltodict package for a simple conversion of an XML structure to a Python dict: https://github.com/martinblech/xmltodict
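A minimal sketch using the observations feed from the question. Only the <observations> and <station> tags are taken from the question; the per-station field names used below (name, airtemperature) are assumptions about that feed:
import xmltodict
from urllib.request import urlopen

raw = urlopen("http://www.ilmateenistus.ee/ilma_andmed/xml/observations.php").read()
data = xmltodict.parse(raw.decode("iso-8859-1"))

# Attributes get an '@' prefix and repeated tags become a list of dicts
print(data["observations"]["@timestamp"])
for station in data["observations"]["station"]:
    print(station.get("name"), station.get("airtemperature"))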
