Access mesh imported with gmsh - trimesh

I think I have a fairly common problem and hope you can help me out. I want to examine a 3D mesh with Python using trimesh, but the mesh comes in the .STEP format. I use gmsh to load the file, but I have no idea how to access the resulting mesh or how to convert it into a trimesh.Trimesh.
My code looks like this so far:
import trimesh
import gmsh
import pygmsh
gmsh.initialize()
gmsh.option.setNumber("General.Terminal", 1)
gmsh.model.add("modelo_1")
gmsh.merge("C:/Users/PythonFan/RandomFile.STEP")
gmsh.model.mesh.generate(3)
[...]
x.center_mass
So how do I get from here to there?

I found out what I have to do :D
x = trimesh.Trimesh(**trimesh.interfaces.gmsh.load_gmsh("C:/Users/....STEP"))
This lets me use gmsh's loading abilities while examining the object with trimesh. I'll post my source once I find it again.
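For completeness, a minimal sketch of that approach (the path is the one from the question, and whether gmsh needs extra arguments depends on the file, so treat this only as a starting point):
import trimesh

# load_gmsh runs gmsh on the STEP file and returns the keyword arguments
# (vertices, faces, ...) needed to construct a Trimesh
kwargs = trimesh.interfaces.gmsh.load_gmsh("C:/Users/PythonFan/RandomFile.STEP")
mesh = trimesh.Trimesh(**kwargs)

print(mesh.center_mass)  # the usual trimesh inspection now works
print(mesh.volume)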


The file is in the same directory but cannot be imported

The Python version I am using is 3.8.2.
I searched a lot, and most of the suggested solutions are to use sys.path.append().
But it didn't solve the problem for me. If I use from . import players, it says
ImportError: attempted relative import with no known parent package
If I use import players, it says
ModuleNotFoundError: No module named 'players'
The code I used to try to fix this:
import os
import sys

sys.path.append(".")
sys.path.append(os.getcwd() + "\\players.py")
sys.path.append(os.getcwd())
This still doesn't fix it. It's worth mentioning that at some point sys.path.append(os.getcwd() + "\\players.py") did run.
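For reference, sys.path entries have to be directories rather than .py files, so the usual pattern looks more like the sketch below (assuming players.py sits next to the importing script):
import os
import sys

# append the directory that contains this file, not the .py file itself
sys.path.append(os.path.dirname(os.path.abspath(__file__)))

import players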
Make sure that the name of the file is actually players.py.
And not, for example, "players.py " or "players .py" (note the spaces).
Also check for all other "invisible" Unicode characters that would not necessarily show up.
Also make sure that there is no directory named players.
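A quick way to spot such issues is to print the raw directory listing; repr() makes trailing spaces and other invisible characters visible:
import os

# repr() exposes trailing spaces and non-printable characters in file names
for name in os.listdir("."):
    print(repr(name))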
I deleted the file completely, so I couldn't test it again, but I ran into this problem once more and it turned out to be an import-order problem. When I imported attributes.py first and then content.py, the check failed because it couldn't find attributes.py; swapping their import order solved the problem. I can't explain why, but if you run into such problems, please try swapping the order (the other files are not imported).
I've had similar issues and I've created a new, experimental import library for Python to solve this kind of import error: ultraimport
Instead of from . import players it allows you to write
import ultraimport
players = ultraimport('__dir__/players.py')
This will always work, no matter how you run your code, what your current working directory is, or whether there's another directory somewhere called 'players'.

Using paraview filters in Python, Paraview python api

I have been using ParaView to visualize and analyse VTU files. I find the gradient-calculation filter quite useful, and I would like to know whether there is a Python API for ParaView that I can use to apply this filter.
I'm looking for something like this.
import paraview as pv
MyFile = "Myfile0001.vtu"
Divergence = pv.filters.GradientOfUnstructuredDataset(MyFile)
ParaView is fully scriptable in Python. Each part of this doc has a 'do it in python' version.
While dedicated API documentation does not necessarily exist, you can use the Python Trace (in the Tools menu), which records actions from the GUI and saves them as a Python script.
EDIT
Getting the data back as an array needs some additional steps, because ParaView works in a client/server mode. You should Fetch the data; then you can manipulate the vtkObject, extract the array and convert it to numpy.
Something like
from paraview.simple import *
from vtk.numpy_interface import dataset_adapter as dsa
# read the unstructured grid and apply the gradient filter
gridvtu = XMLUnstructuredGridReader(registrationName='grid', FileName=['grid.vtu'])
gradient = GradientOfUnstructuredDataSet(registrationName='Gradient', Input=gridvtu)
# bring the filter output from the server to the client as a vtkObject
vtk_grid = servermanager.Fetch(gradient)
# wrap it so the point data arrays can be used as numpy arrays
wrapped_grid = dsa.WrapDataObject(vtk_grid)
divergence_array = wrapped_grid.PointData["Divergence"]
Note that divergence_array is a numpy.ndarray
You can also write pure VTK code, as in this example on SO.

Problems Converting Numpy/OpenCV Array Image into a Wand Image

I'm currently trying to perform a Polar to Cartesian Coordinate Image transformation, to display a raw sonar image into a 'fan-display'.
Initially, I have a NumPy array image of type np.float64, which can be seen below:
After doing some searching, I came across this StackOverflow post Inverse transform an image from Polar to Cartesian in OpenCV with a very similar problem, in which the poster seemed to have solved his/her issue by using the Python Wand library (http://docs.wand-py.org/en/0.5.9/index.html), specifically using their set of Distortion functions.
However, when I tried to read the image in with Wand, I instead ended up with the image below, which seems smaller than the original one. The weird thing is that img.size still reports the same dimensions as the original image's shape.
The code for this transformation can be seen below:
print(raw_img.shape)
wand_img = Image.from_array(raw_img.astype(np.uint8), channel_map="I") #=> (369, 256)
display(wand_img)
print("Current image size", wand_img.size) #=> "Current image size (369, 256)"
This is quite problematic, as Wand will then produce the wrong 'fan image'. Is anybody familiar with this kind of problem with the Wand library, and if so, what is the recommended way to fix it?
If this issue can't be resolved, my backup plan is to use OpenCV's cv::remap function (https://docs.opencv.org/4.1.2/da/d54/group__imgproc__transform.html#ga5bb5a1fea74ea38e1a5445ca803ff121). The problem there is that I'm not sure which mapping arrays (i.e. map_x and map_y) to use for the polar->Cartesian transformation, since a mapping built from the transformation equations below:
r = polar_distances(raw_img)
x = r * cos(theta)
y = r * sin(theta)
didn't seem to work and instead threw out errors from OpenCV as well.
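For reference, OpenCV can also build those maps internally via cv2.warpPolar with the inverse flag; a minimal sketch (the output size, centre and normalization here are assumptions, and a real fan display would still need the correct angular range):
import cv2
import numpy as np

# raw_img is assumed to be the (369, 256) float64 polar image from above,
# with rows as angle bins and columns as range bins
polar8 = cv2.normalize(raw_img, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

out_size = (512, 512)                         # width, height of the Cartesian output
center = (out_size[0] // 2, out_size[1] // 2)
max_radius = out_size[0] // 2

fan = cv2.warpPolar(polar8, out_size, center, max_radius,
                    cv2.WARP_POLAR_LINEAR | cv2.WARP_INVERSE_MAP)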
Any kind of help and insight into this issue is greatly appreciated. Thank you!
- NickS
EDIT: I've tried another example image as well, and it still shows a similar problem. First, I imported the image into Python using OpenCV, with these lines of code:
import matplotlib.pyplot as plt
from wand.image import Image
from wand.display import display
import cv2
img = cv2.imread("Test_Img.jpg")
img_rgb = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
plt.figure()
plt.imshow(img_rgb)
plt.show()
which showed the following display as a result:
However, as I continued and tried to open the img_rgb object with Wand, using the code below:
wand_img = Image.from_array(img_rgb)
display(wand_img)
I'm getting the following result instead.
I tried opening the image with wand.image.Image() on the file directly, and it displays correctly with the display() function, so I believe there isn't anything wrong with the Wand installation on the system.
Is there a step I'm missing to convert the NumPy array into a Wand Image? If so, what would it be, and what is the suggested way to do it?
Please keep in mind that the NumPy-to-Wand conversion is crucial here: the raw sonar images are stored as binary data, hence the need for NumPy to turn them into proper images.
Is there a step I'm missing to convert the NumPy array into a Wand Image?
No, but there is a bug in Wand's Numpy implementation in Wand 0.5.x. The shape of OpenCV's ndarray is (ROWS, COLUMNS, CHANNELS), but Wand's ndarray is (WIDTH, HEIGHT, CHANNELS). I believe this has been fixed for the future 0.6.x releases.
If so, what would it be, and what is the suggested way to do it?
Swap the values in img_rgb.shape before passing to Wand.
img_rgb.shape = (img_rgb.shape[1], img_rgb.shape[0], img_rgb.shape[2],)
with Image.from_array(img_rgb) as img:
    display(img)

module 'statsmodels.tsa.api' has no attribute 'arima_model'

I'm trying to use statsmodels.api to work with time-series data and to fit a simple ARIMA model using
sm.tsa.arima_model.ARIMA(dta,(4,1,1)).fit()
but I got the following error
module 'statsmodels.tsa.api' has no attribute 'arima_model'
I'm using statsmodels version 0.9.0 with Spyder version 3.2.8. I'd be pleased to get your help, thanks.
The correct path is:
import statsmodels.api as sm
sm.tsa.ARIMA()
You can see this using a shell with autocompletion, such as IPython.
It is also visible in the examples provided by statsmodels, such as this one.
More information about the package structure can be found here.
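For the statsmodels 0.9.0 API mentioned in the question, the fit would then look roughly like this (dta is assumed to be the questioner's time series, e.g. a pandas Series):
import statsmodels.api as sm

# dta is a placeholder for the time series from the question
model = sm.tsa.ARIMA(dta, order=(4, 1, 1))
results = model.fit()
print(results.summary())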

Weird way of importing Python turtle works but I don't know why

I hope you will understand my question.
I noticed that I get the same results when I import the turtle module in the following two ways.
from turtle import Turtle
t=Turtle()
t.screen.bgcolor("black")
and also
import turtle
turtle.bgcolor("black")
I am confused about this: from turtle import Turtle.
As far as I know, it means "import Turtle.py from the turtle folder/package". I may be wrong; please help me understand this better.
But I can't find any Turtle.py module; I only see turtle.py.
What's weird about it is that it works.
Can anyone tell me why?
I am using Python version 3.6
Python's turtle.py is unusual in that it presents both a function-based interface and an object-oriented interface. Depending on how you import it, you can work with one, or the other, or both.
Here, we are using the object-oriented interface to invoke the screen method bgcolor():
from turtle import Turtle
t = Turtle()
t.screen.bgcolor("black")
I usually write this as:
from turtle import Turtle, Screen
screen = Screen()
screen.bgcolor("black")
t = Turtle()
as having direct access to the Screen object simplifies things. With this style of import, you cannot access the function-based interface.
When we do this simpler import, we have access to both the function-based interface and the object-oriented interface. Here, we're using the function bgcolor() to set the background color:
import turtle
turtle.bgcolor("black")
Using either the function-based or object-oriented interface to turtle.py is fine, but you can get yourself seriously confused when mixing the two.
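As a small illustration of that last point, with the plain import both interfaces sit side by side in the same module, which is exactly where the confusion tends to start:
import turtle

# function-based interface: module-level functions drive an implicit turtle/screen
turtle.bgcolor("black")

# object-oriented interface: explicit Turtle objects from the same module
t = turtle.Turtle()
t.color("white")
t.forward(100)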
