Hi, I have a script I'm working on and it's not working out as well as I want it to.
This is what I have so far:
import bpy

def Key_Frame_Points(): # Gets the key-frame values as an array.
    fcurves = bpy.context.active_object.animation_data.action.fcurves
    for curve in fcurves:
        keyframePoints = fcurves[4].keyframe_points # selects Action channel's axis / attribute
        for keyframe in keyframePoints:
            print('KEY FRAME POINTS ARE #T ', keyframe.co[0])
            KEYFRAME_POINTS_ARRAY = keyframe.co[0]
    print(KEYFRAME_POINTS_ARRAY)

Key_Frame_Points()
When I run this it prints out all the keyframes of the selected object, as I wanted it to. But the problem is that, for some reason, I can't get the values it is printing into a variable. If you run it and check the System Console, it acts oddly: it prints out the values of the keyframed object, but when I ask it to get those values as an array, it only prints out the last frame.
Here is roughly how it looks.
I think what you want to do is add each keyframe.co[1] to an array, which means you want to use KEYFRAME_POINTS_ARRAY.append(keyframe.co[1]). For that to work, you will need to define it as an empty list outside the loop with KEYFRAME_POINTS_ARRAY = []
Note that keyframe.co[0] is the frame that is keyed while keyframe.co[1] is the keyed value at that frame.
Also of note is that you are looping through fcurves but not using each curve.
for curve in fcurves:
    keyframePoints = fcurves[4].keyframe_points
By using fcurves[4] here you are reading the same fcurve every time; you probably meant to use keyframePoints = curve.keyframe_points
So I expect you want to have -
import bpy

def Key_Frame_Points(): # Gets the key-frame values as an array.
    KEYFRAME_POINTS_ARRAY = []
    fcurves = bpy.context.active_object.animation_data.action.fcurves
    for curve in fcurves:
        keyframePoints = curve.keyframe_points
        for keyframe in keyframePoints:
            print('KEY FRAME POINTS ARE frame:{} value:{}'.format(keyframe.co[0], keyframe.co[1]))
            KEYFRAME_POINTS_ARRAY.append(keyframe.co[1])
    return KEYFRAME_POINTS_ARRAY

print(Key_Frame_Points())
You may also be interested in fcurves.find(data_path) to find a specific curve by its path.
There is also fcurve.evaluate(frame), which will give you the curve value at any frame, not just at the keyed frames.
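For example, here is a small illustrative sketch (untested; the data path 'location', the channel index and the frame number are just placeholders):

import bpy

# look up the Z-location F-Curve of the active object by its data path,
# then sample the curve at an arbitrary frame
action = bpy.context.active_object.animation_data.action
fcurve = action.fcurves.find('location', index=2)  # index selects the channel/axis
if fcurve is not None:
    print(fcurve.evaluate(15))  # interpolated value at frame 15, keyed or not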
So I have a system where I need to be able to determine the exact position of my ions and run an equation on the average position of each ion. I found my ion positions were inconsistent because some ions wrap across the periodic boundary, severely changing the position for that one window and leaving me with an average of, say, +20 when the ion just shuffled between +40 and -40.
I was wanting to correct that by implementing a way to unwrap my wrapped coordinates for ions on the edge of my box.
Essentially I was thinking that for each frame in my trajectory, MDAnalysis would check the position of ION 1 in frame 1. Then in frame 2 it would check the same ion once more and compare it to the previous position. If it for example goes from + coordinates to - coordinates then I would have a count that adds +1 meaning that it wrapped once. If it goes from - to + I would have it subtract 1. Then by the end of all of the frames I would have a number that could help me identify how I could perform my analysis.
However, my coding skills are less than lackluster, and I wanted to know how I would go about implementing this. I have essentially gotten the counting down, but the comparison between frames is where I am confused. How would I do this comparison?
Thanks in advance
There are a few ways to answer this question. Firstly,
Essentially I was thinking that for each frame in my trajectory, MDAnalysis would check the position of ION 1 in frame 1. Then in frame 2 it would check the same ion once more and compare it to the previous position. If it for example goes from + coordinates to - coordinates then I would have a count that adds +1 meaning that it wrapped once. If it goes from - to + I would have it subtract 1. Then by the end of all of the frames I would have a number that could help me identify how I could perform my analysis.
You could write your own analysis class.
One untested way to do it is prototyped below -- the tutorial goes into more detail about what each method (_prepare, _conclude, etc.) does.
from MDAnalysis.analysis.base import AnalysisBase
import numpy as np

class CountWrappings(AnalysisBase):
    def __init__(self, universe, select="name NA"):
        super().__init__(universe.universe.trajectory)
        # these are your selected ions
        self.atomgroup = universe.select_atoms(select)
        self.n_atoms = len(self.atomgroup)

    def _prepare(self):
        # self.results is a dictionary of results
        self.results.wrapping_per_frame = np.zeros((self.n_frames, self.n_atoms), dtype=bool)
        self._last_positions = self.atomgroup.positions

    def _single_frame(self):
        # did the sign change on any axis for each ion since the last frame?
        sign_changed = np.sign(self.atomgroup.positions) != np.sign(self._last_positions)
        sign_changed_any_axis = np.any(sign_changed, axis=1)
        # _frame_index is the relative index of the frame being currently analyzed
        self.results.wrapping_per_frame[self._frame_index] = sign_changed_any_axis
        self._last_positions = self.atomgroup.positions

    def _conclude(self):
        self.results.n_wraps = self.results.wrapping_per_frame.sum(axis=0)


n_wraps = CountWrappings(my_universe, select="name NA CL MG")
n_wraps.run()
print(n_wraps.results.wrapping_per_frame)
print(n_wraps.results.n_wraps)
However, I'm not sure that addresses your actual aim:
I was wanting to correct that by implementing a way to unwrap my wrapped coordinates for ions on the edge of my box.
Are you computing the ion positions relative to anything? Potentially you could add bonds between each ion and the center so that you can use the AtomGroup.unwrap() function. Alternatively, is your data compatible with GROMACS? GROMACS has an unwrapping utility called "nojump" that unwraps atoms jumping across box edges, e.g.
gmx trjconv -f my_trajectory.xtc -s my_topology.gro -pbc nojump -o my_unwrapped_trajectory.xtc
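Going back to the bond idea above, here is a rough, untested sketch of how that could look in MDAnalysis (the ion selection and the anchor atom are placeholders):

import MDAnalysis as mda

u = mda.Universe(pdb, xtc)
ions = u.select_atoms("name NA")              # placeholder ion selection
anchor = u.select_atoms("name P")[0]          # hypothetical atom near the centre

# bond every ion to the anchor so they all end up in one fragment
u.add_bonds([(anchor.index, ion.index) for ion in ions])
group = u.select_atoms("name NA or name P")   # the ions plus the anchor

for ts in u.trajectory:
    # make the fragment whole for the current frame; each ion is placed at its
    # minimum-image position relative to the bonded anchor
    group.unwrap(compound="fragments", reference=None)
    # ... per-frame analysis on the unwrapped positions ...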
As Lily mentioned, you could write your own analysis to do this or use GROMACS. However, both Lily's example and the GROMACS implementation of 'nojump' fail to account for box size fluctuations under the NPT ensemble (assuming you've used NPT). von Bulow et al. wrote about this widespread problem a couple of years ago. As far as I'm aware, the only implementation of nojump unwrapping that accounts for box size fluctuations is in LiPyphilic (disclaimer: I am the author of LiPyphilic).
Using LiPyphilic, you can unwrap your trajectory like so:
import MDAnalysis as mda
from lipyphilic.transformations import nojump
u = mda.Universe(pdb, xtc)
ions = u.select_atoms('name NA CLA')
u.trajectory.add_transformations(
    nojump(
        ag=ions,
        nojump_x=True,
        nojump_y=True,
        nojump_z=True,
    )
)
Then, when you do further analysis with your MDAnalysis Universe, the atoms will automatically be unwrapped at each frame.
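As a small illustration of that (reusing u and ions from the snippet above), a running average of the unwrapped positions could then be accumulated in a plain loop:

import numpy as np

# with the nojump transformation attached, every frame already yields
# unwrapped coordinates, so averaging positions across frames is safe
mean_positions = np.zeros((len(ions), 3))
for ts in u.trajectory:
    mean_positions += ions.positions
mean_positions /= len(u.trajectory)
print(mean_positions)  # average unwrapped position of each ion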
I have been trying to calculate the distance between two locations from given latitudes and longitudes.
The values are random and come from generated semantic annotations.
for coordinates in g.objects(None, predicate=URIRef('http://www.w3.org/ns/prov#generatedAtCoordinates')):
    final_coordinates = coordinates
    data = final_coordinates.split(",")
    latitude = float(data[0])
    longitude = float(data[1])
    calc_distance = (latitude, longitude)
    print(geopy.distance.distance(calc_distance, calc_distance).km)
Printing calc_distance gives:
(41.40254048829099, 2.095129446086901)
(41.409459057215216, 2.0941780489196367)
(41.40506757803259, 2.0903966716404754)
(41.40081272870648, 2.0936534707551835)
(41.40569962731158, 2.0947062169907373)
(41.40513399283631, 2.09682546591496)
These are randomly generated values, and I want to move forward from this point. I am using the geopy distance calculator.
I don't think I can do it this way, because I am stuck inside the same loop. How can I define a variable outside the loop, save the coordinates in a list, and then iterate through that list to calculate the distances in Python?
I am sorry if the question is too basic or overwhelming, but my brain has stopped and I can't think.
The original script was:
import geopy.distance
coords_1 = (41.43737599306035, 2.1302325410348084)
coords_2 = (41.42535528994254, 2.1898592767949556)
print(geopy.distance.distance(coords_1, coords_2).km)
Hence, it would start from the first value, then compute the distance between the first and second, then the second and third, then the third and fourth, and so on.
A simple way to solve this is to append all the points to a list. Then pair up the list however you want; let's say you want the distance between adjacent points. Then iterate over the list to get the distance for each pair of points. So something like this:
points_list = []

for coordinates in g.objects(None, predicate=URIRef('http://www.w3.org/ns/prov#generatedAtCoordinates')):
    data = coordinates.split(",")
    latitude = float(data[0])
    longitude = float(data[1])
    calc_distance = (latitude, longitude)
    points_list.append(calc_distance)

# pair up adjacent points: (1st, 2nd), (2nd, 3rd), (3rd, 4th), ...
for point_1, point_2 in zip(points_list, points_list[1:]):
    print(geopy.distance.distance(point_1, point_2).km)
Here is an example of how to split a list into pairs.
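For instance, a quick illustration of the two ways you might pair up the points (dummy values):

points = [(0, 0), (1, 1), (2, 2), (3, 3)]
consecutive_pairs = list(zip(points, points[1:]))        # (p0, p1), (p1, p2), (p2, p3)
disjoint_pairs = list(zip(points[::2], points[1::2]))    # (p0, p1), (p2, p3)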
I am new to Python and image processing. I am trying to find a list of given RGB values in an input image using np.isin and np.where (I wanted to avoid nested looping over all the pixels).
So, here is the input and output
Input: https://imgur.com/eNylzA9
Output: https://imgur.com/lDctkj9
I am using the following code -
fliterlist = [
    [244,240,255],
    [253,239,255],
    [255,234,249],
    [255,230,245],
    [255,229,243],
    [255,228,242]
]
# actual list has more than 100 elements

def imageTest(img, count=0):
    outImg = np.zeros(img.shape, dtype=np.uint8)
    posArray = (np.isin(img, fliterlist)).all(axis=2)
    outImg[np.where(posArray)] = [255, 255, 255]
    outname = './fast/imageTest_' + str(count) + '.jpg'
    outputlist[outname] = outImg
    return
For some reason, I am not getting the output I expected. If I use a double nested loop to iterate over all the pixels, I get the desired result, but here it looks like np.isin is giving me a different output array.
Please help me identify the issue.
Here is an example of the idea, which works perfectly:
image - https://imgur.com/zP3zuLj
Use the inRange function of OpenCV, as follows:
mask = cv2.inRange(image, lower_range, upper_range)
I understand you want to find particular pixel values, so adjust the range accordingly.
In my experience, whatever task you are trying to accomplish, it is usually better to match a range of pixel values rather than exact ones, but that is up to you.
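For instance, a rough sketch of how exact colours from your list could be matched with inRange (untested; note that cv2.imread returns BGR, so you may need to flip the colour order of your RGB list):

import cv2
import numpy as np

img = cv2.imread('input.jpg')
fliterlist = np.array([[244, 240, 255], [253, 239, 255], [255, 234, 249]], dtype=np.uint8)

mask = np.zeros(img.shape[:2], dtype=np.uint8)
for colour in fliterlist:
    # exact match: use the same colour as both the lower and the upper bound
    mask |= cv2.inRange(img, colour, colour)

out = np.zeros_like(img)
out[mask > 0] = (255, 255, 255)
cv2.imwrite('output.jpg', out)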
What I want: strain values LE11, LE22, LE12 at nodal points
My script is:
#!/usr/local/bin/python
# coding: latin-1

# making the ODB commands available to the script
from odbAccess import *
import sys
import csv

odbPath = "my *.odb path"
odb = openOdb(path=odbPath)
assembly = odb.rootAssembly

# count the number of frames
NumofFrames = 0
for v in odb.steps["Step-1"].frames:
    NumofFrames = NumofFrames + 1

# create a variable that refers to the reference (undeformed) frame
refFrame = odb.steps["Step-1"].frames[0]

# create a variable that refers to the node set 'Region Of Interest (ROI)'
ROINodeSet = odb.rootAssembly.nodeSets["ROI"]

# create a variable that refers to the reference coordinate 'REFCOORD'
refCoordinates = refFrame.fieldOutputs["COORD"]

# create a variable that refers to the coordinates of the node
# set in the test frame of the step
ROIrefCoords = refCoordinates.getSubset(region=ROINodeSet, position=NODAL)

# count the number of nodes
NumofNodes = 0
for v in ROIrefCoords.values:
    NumofNodes = NumofNodes + 1

# looping over all the frames in the step
for i1 in range(NumofFrames):
    # create a variable that refers to the current frame
    currFrame = odb.steps["Step-1"].frames[i1+1]
    # create a variable that refers to the strain 'LE'
    Str = currFrame.fieldOutputs["LE"]
    ROIStr = Str.getSubset(region=ROINodeSet, position=NODAL)
    # initialize list
    list = [[]]
    # loop over all the nodes in each frame
    for i2 in range(NumofNodes):
        strain = ROIStr.values[i2]
        list.insert(i2, [str(strain.dataDouble[0]) + ";" + str(strain.dataDouble[1]) + \
            ";" + str(strain.dataDouble[3])])
    # write the list in a new *.csv file (code not included for brevity)

odb.close()
The error I get is:
strain = ROIStr.values [i2]
IndexError: Sequence index out of range
Additional info:
Details for ROIStr:
ROIStr.name
'LE'
ROIStr.type
TENSOR_3D_FULL
ROIStr.description
'Logarithmic strain components'
ROIStr.componentLabels
('LE11', 'LE22', 'LE33', 'LE12', 'LE13', 'LE23')
ROIStr.__getattribute__
'__getattribute__ of openOdb(r'path to .odb').steps['Step-1'].frames[1].fieldOutputs['LE'].getSubset(position=INTEGRATION_POINT, region=openOdb(r'path to.odb').rootAssembly.nodeSets['ROI'])'
When I use the same code for VECTOR objects, like 'U' for nodal displacement or 'COORD' for nodal coordinates, everything works without a problem.
The error happens in the first loop. So, it is not the case where it cycles several loops before the error happens.
Question: Does anyone know what is causing the error in the above code?
Here is the reason you get an IndexError: strains are (obviously) calculated at the integration points. According to the ABQ Scripting Reference Guide:
A SymbolicConstant specifying the position of the output in the element. Possible values are:
NODAL, specifying the values calculated at the nodes.
INTEGRATION_POINT, specifying the values calculated at the integration points.
ELEMENT_NODAL, specifying the values obtained by extrapolating results calculated at the integration points.
CENTROID, specifying the value at the centroid obtained by extrapolating results calculated at the integration points.
In order to use your code, therefore, you should get the strain results using position=ELEMENT_NODAL:
ROIStr = Str.getSubset(region=ROINodeSet, position=ELEMENT_NODAL)
With
ROIStr.values[0].data
you will then get an array containing the 6 independent components of your tensor.
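For example, a small illustrative sketch of pulling out just LE11, LE22 and LE12 per node (untested; it assumes ROIStr was obtained with position=ELEMENT_NODAL as above):

labels = ROIStr.componentLabels   # ('LE11', 'LE22', 'LE33', 'LE12', 'LE13', 'LE23')
i11 = labels.index('LE11')
i22 = labels.index('LE22')
i12 = labels.index('LE12')
# note: with ELEMENT_NODAL there is one value per element sharing a node,
# so the same nodeLabel can appear several times
for v in ROIStr.values:
    print(v.nodeLabel, v.data[i11], v.data[i22], v.data[i12])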
Alternative Solution
For reading time series of results for a node set, you can use the function xyPlot.xyDataListFromField(). I noticed that this approach is much faster than reading the odb directly (as above). The code is also shorter; the only drawback is that you have to have an Abaqus license available to use it (in contrast to the odbAccess approach, which runs with 'abaqus python', needs only an installed version of Abaqus, and does not check out a network license).
For your application, you should do something like:
from abaqus import *
from abaqusConstants import *
from abaqusExceptions import *
import visualization
import xyPlot
import displayGroupOdbToolset as dgo

results = session.openOdb(your_file + '.odb')
# without this, you won't be able to extract the results
session.viewports['Viewport: 1'].setValues(displayedObject=results)

xyList = xyPlot.xyDataListFromField(
    odb=results,
    outputPosition=NODAL,
    variable=(('LE', INTEGRATION_POINT, (
        (COMPONENT, 'LE11'), (COMPONENT, 'LE22'),
        (COMPONENT, 'LE33'), (COMPONENT, 'LE12'),
    )),),
    nodeSets=('ROI',))
(Of course you have to add LE13 etc.)
You will get a list of xyData objects:
type(xyList[0])
<type 'xyData'>
containing the desired data for each node and each output. Its size will therefore be
len(xyList)
number_of_nodes*number_of_requested_outputs
where the first number_of_nodes elements of the list are LE11 at each node, then LE22, and so on.
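For instance, an illustrative bit of indexing based on that ordering (n_outputs being the number of components you requested):

n_outputs = 6                                  # LE11, LE22, LE33, LE12, LE13, LE23
number_of_nodes = len(xyList) // n_outputs
j = 0                                          # pick a node (0-based)
le11_at_node_j = xyList[j]                     # LE11 history at node j
le22_at_node_j = xyList[number_of_nodes + j]   # LE22 history at node j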
You can then transform this into a NumPy array:
LE11_1 = np.array(xyList[0])
would be LE11 at the first node, with dimensions:
LE11_1.shape
(NumberTimeFrames, 2)
That is, for each time frame you have the time and the output variable.
NumPy arrays are also very easy to write on text files (check out numpy.savetxt).
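For example (illustrative only):

import numpy as np

# write the time/LE11 history of the first node to a semicolon-separated file
np.savetxt('LE11_node1.csv', LE11_1, delimiter=';', header='time;LE11', comments='')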
I'm trying to load an array at a specific time frame (for example, if it has 50 frames or time units, then get the array corresponding to the 2nd time frame) from netCDF files (.nc). I'm currently using vtkNetCDFCFReader and getting the data array "vwnd" from the 1st time frame like this:
vtkSmartPointer<vtkNetCDFCFReader> reader = vtkSmartPointer<vtkNetCDFCFReader>::New();
reader->SetFileName(path.c_str());
reader->UpdateMetaData();
vtkSmartPointer<vtkStructuredGridGeometryFilter> geometryFilter = vtkSmartPointer<vtkStructuredGridGeometryFilter>::New();
geometryFilter->SetInputConnection(reader->GetOutputPort());
geometryFilter->Update();
vtkSmartPointer<vtkPolyData> ncPolydata = vtkSmartPointer<vtkPolyData>::New();
ncPolydata = geometryFilter->GetOutput();
vtkSmartPointer<vtkDataArray> dataArray = ncPolydata->GetCellData()->GetArray("vwnd");
The variable arrays are: lat, lon, time, vwnd (vwnd has dimensions (lat, lon)). I'm also interested in getting arrays for lat and lon. Any help would be appreciated.
Thanks in advance
As the dimensions of lat/lon are different from those of vwnd, you will need two vtkNetCDFReaders to read in data with different dimensions. Just remember to set the dimensions after creating the reader.
For example in C++:
vtkNetCDFReader* reader = vtkNetCDFReader::New();
reader->SetFileName(fileName.c_str());
reader->UpdateMetaData();
// here you specify the dimensions the reader should load
reader->SetDimensions(dim);
reader->SetVariableArrayStatus("lat", 1);
reader->SetVariableArrayStatus("lon", 1);
reader->Update();
If you do this correctly, you can read in any of the arrays and store them in a vtkDataArray.
If you want to read in the vwnd data at the second time step, just skip the first lat*lon values.