Mnesia pagination with fragmented table

I have a mnesia table configured as follows:
-record(space, {id, t, s, a, l}).
mnesia:create_table(space, [{disc_only_copies, nodes()},
                            {frag_properties, [{n_fragments, 400}, {n_disc_copies, 1}]},
                            {attributes, record_info(fields, space)}]),
I have at least 4 million records in this table for test purposes.
I have implemented pagination along the lines of "Pagination search in Erlang Mnesia":
fetch_paged() ->
    MatchSpec = {'_', [], ['$_']},
    {Record, Continuation} = mnesia:activity(async_dirty, fun mnesia:select/4,
                                              [space, [MatchSpec], 10000, read], mnesia_frag).

next_page(Cont) ->
    mnesia:activity(async_dirty, fun() -> mnesia:select(Cont) end, mnesia_frag).
When I execute the pagination functions, each batch contains somewhere between 3,000 and 8,000 records, but never 10,000.
What do I have to do to get batches of a consistent size?

The problem is that you expect mnesia:select/4, which is documented as:
select(Tab, MatchSpec, NObjects, Lock) -> transaction abort | {[Object],Cont} | '$end_of_table'
to honor the NObjects limit, NObjects being 10,000 in your example.
But the same documentation also says:
For efficiency the NObjects is a recommendation only and the result may contain anything from an empty list to all available results.
and that is the reason you are not getting consistent batches of 10,000 records: NObjects is not a limit but a recommended batch size.
If you want exactly 10,000 records per batch you will have no option other than writing your own function that accumulates the results, but select/4 is written this way for optimization purposes, so the code you write will most probably be slower than the original.
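In other words, such a wrapper just keeps asking for the next batch and accumulating until it has NObjects records or the table is exhausted. Sketched below in Python purely for brevity (the real implementation would be Erlang around mnesia:select/4 and mnesia:select(Cont)); next_batch is a hypothetical callable standing in for those calls, not any mnesia API:

def fetch_exactly(next_batch, n):
    """Collect paged results until n records are gathered or the source runs out.

    next_batch() is a stand-in for mnesia:select/4 followed by repeated
    mnesia:select(Cont) calls; it should return a list of records, or an
    empty list / None at '$end_of_table'.
    """
    collected = []
    while len(collected) < n:
        batch = next_batch()
        if not batch:
            break
        collected.extend(batch)
    return collected[:n]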
BTW, you can find the mnesia source code on https://github.com/erlang/otp/tree/master/lib/mnesia/src

Related

Natural Nodejs NLP classifier gives heap dump when training more than 2000 data points

We are using the natural Node.js library to classify our users' queries in the following way:
const natural = require("natural");
const cls = new natural.LogisticRegressionClassifier();
cls.addDocument("book", "stationary");
cls.addDocument("pen", "stationary");
// ... roughly 5000+ data points added the same way ...
cls.addDocument("last one", "last one");
cls.train(); // this call throws a heap error and the program crashes
pino.info("StockNames training successfully completed.");
The train() function works fine when the dataset size is a few hundred, but it throws heap errors and crashes when the dataset size is in the few thousands. Any suggestions would be appreciated. Thanks.

PACF function in statsmodels.tsa.stattools gives numbers greater than 1 when using ywunbiased?

I have a dataframe which is of length 177 and I want to calculate and plot the partial auto-correlation function (PACF).
I have the data imported etc and I do:
import matplotlib.pyplot as plt
from statsmodels.tsa.stattools import pacf

nlags = 176
ys = pacf(data[key][array].diff(1).dropna(), alpha=0.05, nlags=nlags, method="ywunbiased")
xs = range(nlags + 1)
plt.figure()
plt.scatter(xs, ys[0])
plt.grid()
plt.vlines(xs, 0, ys[0])
plt.plot(ys[1])
With this method I get numbers greater than 1 for very long lags (around 90), which is incorrect, and I also get a RuntimeWarning: invalid value encountered in sqrt raised from the line return rho, np.sqrt(sigmasq), but since I can't see their source code I don't know what this means.
To be honest, when I search for PACF, all the examples only carry the PACF out to 40 or 60 lags or so, and they never show any significant PACF after lag 2, so I couldn't compare with other examples either.
But when I use:
method="ols"
# or
method="ywmle"
the numbers look correct, so it must be the algorithm used to solve it.
I tried inspect.getsource, but it isn't much use: it just shows that another package is used, and I can't find that one.
If you also know where the problem arises from, I would really appreciate the help.
For your reference, the values for data[key][array] are:
[1131.130005, 1144.939941, 1126.209961, 1107.300049, 1120.680054, 1140.839966, 1101.719971, 1104.23999, 1114.579956, 1130.199951, 1173.819946, 1211.920044, 1181.27002, 1203.599976, 1180.589966, 1156.849976, 1191.5, 1191.329956, 1234.180054, 1220.329956, 1228.810059, 1207.01001, 1249.47998, 1248.290039, 1280.079956, 1280.660034, 1294.869995, 1310.609985, 1270.089966, 1270.199951, 1276.660034, 1303.819946, 1335.849976, 1377.939941, 1400.630005, 1418.300049, 1438.23999, 1406.819946, 1420.859985, 1482.369995, 1530.619995, 1503.349976, 1455.27002, 1473.98999, 1526.75, 1549.380005, 1481.140015, 1468.359985, 1378.550049, 1330.630005, 1322.699951, 1385.589966, 1400.380005, 1280.0, 1267.380005, 1282.829956, 1166.359985, 968.75, 896.23999, 903.25, 825.880005, 735.090027, 797.869995, 872.8099980000001, 919.1400150000001, 919.320007, 987.4799800000001, 1020.6199949999999, 1057.079956, 1036.189941, 1095.630005, 1115.099976, 1073.869995, 1104.48999, 1169.430054, 1186.689941, 1089.410034, 1030.709961, 1101.599976, 1049.329956, 1141.199951, 1183.26001, 1180.550049, 1257.640015, 1286.119995, 1327.219971, 1325.829956, 1363.609985, 1345.199951, 1320.640015, 1292.280029, 1218.890015, 1131.420044, 1253.300049, 1246.959961, 1257.599976, 1312.410034, 1365.680054, 1408.469971, 1397.910034, 1310.329956, 1362.160034, 1379.319946, 1406.579956, 1440.670044, 1412.160034, 1416.180054, 1426.189941, 1498.109985, 1514.680054, 1569.189941, 1597.569946, 1630.73999, 1606.280029, 1685.72998, 1632.969971, 1681.550049, 1756.540039, 1805.810059, 1848.359985, 1782.589966, 1859.449951, 1872.339966, 1883.949951, 1923.569946, 1960.22998, 1930.6700440000002, 2003.369995, 1972.290039, 2018.050049, 2067.560059, 2058.899902, 1994.9899899999998, 2104.5, 2067.889893, 2085.51001, 2107.389893, 2063.110107, 2103.840088, 1972.180054, 1920.030029, 2079.360107, 2080.409912, 2043.939941, 1940.2399899999998, 1932.22998, 2059.73999, 2065.300049, 2096.949951, 2098.860107, 2173.600098, 2170.949951, 2168.27002, 2126.149902, 2198.810059, 2238.830078, 2278.8701170000004, 2363.639893, 2362.719971, 2384.199951, 2411.800049, 2423.409912, 2470.300049, 2471.649902, 2519.360107, 2575.26001, 2584.840088, 2673.610107, 2823.810059, 2713.830078, 2640.8701170000004, 2648.050049, 2705.27002, 2718.3701170000004, 2816.290039, 2901.52002, 2913.97998]
Your time series is pretty clearly not stationary, so the Yule-Walker assumptions are violated.
More generally, PACF is usually appropriate with stationary time series. You might difference your data first, before considering the partial autocorrelations.
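A minimal sketch of that suggestion (the series below is a synthetic random walk used purely for illustration, and nlags is kept modest, like the 40-lag examples the question mentions):

import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import pacf

np.random.seed(0)
levels = pd.Series(np.cumsum(np.random.normal(size=177)) + 1000)  # random-walk-like, non-stationary

diffed = levels.diff(1).dropna()   # first difference -> much closer to stationary
values, confint = pacf(diffed, nlags=40, alpha=0.05, method="ywmle")
print(values[:5])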

Pull random results from a database?

I have been coding in Python for 2 months or so, but I mostly ask a more experienced friend for help when I run into these kinds of issues. Before I begin, I should also mention that I use Python solely for a personal project, so any questions I ask will relate to each other through that.
With those two things out of the way, I have a database of weaponry items that I created using the following script, made in Python 3.X:
#Start by making a list of every material, weapontype, and upgrade.
Materials=("Unobtanium","IvorySilk","BoneLeather","CottonWood","Tin","Copper","Bronze","Gold","Cobalt","Tungsten")
WeaponTypes=("Knife","Sword","Greatsword","Polearm","Battlestaff","Claw","Cane","Wand","Talis","Slicer","Rod","Bow","Crossbow","Handbow","Pistol","Mechgun","Rifle","Shotgun")
Upgrades=("0","1","2","3","4","5","6","7","8","9","10")
ForgeWInputs=[]
#Go through every material...
for m in Materials:
    #And in each material, go through every weapontype...
    for w in WeaponTypes:
        #And in every weapontype, go through each upgrade...
        for u in Upgrades:
            ForgeWInputs.append((m, w, u))
#We now have a list "ForgeWInputs", which contains the 3-element list needed to
#Forge any weapon. For example...
MAT={}
MAT["UnobtaniumPD"]=0
MAT["UnobtaniumMD"]=0
MAT["UnobtaniumAC"]=0
MAT["UnobtaniumPR"]=0
MAT["UnobtaniumMR"]=0
MAT["UnobtaniumWT"]=0
MAT["UnobtaniumBuy"]=0
MAT["UnobtaniumSell"]=0
MAT["IvorySilkPD"]=0
MAT["IvorySilkMD"]=12
MAT["IvorySilkAC"]=3
MAT["IvorySilkPR"]=0
MAT["IvorySilkMR"]=3
MAT["IvorySilkWT"]=6
MAT["IvorySilkBuy"]=10
MAT["IvorySilkSell"]=5
MAT["CottonWoodPD"]=8
MAT["CottonWoodMD"]=8
MAT["CottonWoodAC"]=5
MAT["CottonWoodPR"]=0
MAT["CottonWoodMR"]=3
MAT["CottonWoodWT"]=6
MAT["CottonWoodBuy"]=14
MAT["CottonWoodSell"]=7
MAT["BoneLeatherPD"]=12
MAT["BoneLeatherMD"]=0
MAT["BoneLeatherAC"]=3
MAT["BoneLeatherPR"]=3
MAT["BoneLeatherMR"]=0
MAT["BoneLeatherWT"]=6
MAT["BoneLeatherBuy"]=10
MAT["BoneLeatherSell"]=5
MAT["TinPD"]=18
MAT["TinMD"]=6
MAT["TinAC"]=3
MAT["TinPR"]=5
MAT["TinMR"]=2
MAT["TinWT"]=12
MAT["TinBuy"]=20
MAT["TinSell"]=10
MAT["CopperPD"]=6
MAT["CopperMD"]=18
MAT["CopperAC"]=3
MAT["CopperPR"]=2
MAT["CopperMR"]=5
MAT["CopperWT"]=12
MAT["CopperBuy"]=20
MAT["CopperSell"]=10
MAT["BronzePD"]=10
MAT["BronzeMD"]=10
MAT["BronzeAC"]=5
MAT["BronzePR"]=3
MAT["BronzeMR"]=3
MAT["BronzeWT"]=15
MAT["BronzeBuy"]=30
MAT["BronzeSell"]=15
MAT["GoldPD"]=10
MAT["GoldMD"]=30
MAT["GoldAC"]=0
MAT["GoldPR"]=5
MAT["GoldMR"]=10
MAT["GoldWT"]=25
MAT["GoldBuy"]=50
MAT["GoldSell"]=25
MAT["CobaltPD"]=30
MAT["CobaltMD"]=10
MAT["CobaltAC"]=0
MAT["CobaltPR"]=10
MAT["CobaltMR"]=0
MAT["CobaltWT"]=25
MAT["CobaltBuy"]=50
MAT["CobaltSell"]=25
MAT["TungstenPD"]=20
MAT["TungstenMD"]=20
MAT["TungstenAC"]=0
MAT["TungstenPR"]=7
MAT["TungstenMR"]=7
MAT["TungstenWT"]=20
MAT["TungstenBuy"]=70
MAT["TungstenSell"]=35
WEP={}
WEP["KnifePD"]=0.5
WEP["KnifeMD"]=0.5
WEP["KnifeAC"]=1.25
WEP["SwordPD"]=1.0
WEP["SwordMD"]=1.0
WEP["SwordAC"]=1.0
WEP["GreatswordPD"]=1.67
WEP["GreatswordMD"]=0.67
WEP["GreatswordAC"]=0.5
WEP["PolearmPD"]=1.15
WEP["PolearmMD"]=1.15
WEP["PolearmAC"]=1.15
WEP["CanePD"]=1.15
WEP["CaneMD"]=1.15
WEP["CaneAC"]=0.7
WEP["ClawPD"]=1.1
WEP["ClawMD"]=1.1
WEP["ClawAC"]=0.8
WEP["BattlestaffPD"]=1.15
WEP["BattlestaffMD"]=1
WEP["BattlestaffAC"]=1.25
WEP["TalisPD"]=1.15
WEP["TalisMD"]=0.7
WEP["TalisAC"]=1.15
WEP["WandPD"]=0.0
WEP["WandMD"]=1
WEP["WandAC"]=1.33
WEP["RodPD"]=0.0
WEP["RodMD"]=1.67
WEP["RodAC"]=0.67
WEP["SlicerPD"]=0.67
WEP["SlicerMD"]=0.67
WEP["SlicerAC"]=0.67
WEP["BowPD"]=1.15
WEP["BowMD"]=1.15
WEP["BowAC"]=0.85
WEP["CrossbowPD"]=1.4
WEP["CrossbowMD"]=1.4
WEP["CrossbowAC"]=1
WEP["PistolPD"]=0.65
WEP["PistolMD"]=0.65
WEP["PistolAC"]=1.15
WEP["MechgunPD"]=0.2
WEP["MechgunMD"]=0.2
WEP["MechgunAC"]=1.5
WEP["ShotgunPD"]=1.3
WEP["ShotgunMD"]=1.3
WEP["ShotgunAC"]=0.4
WEP["RiflePD"]=0.75
WEP["RifleMD"]=0.75
WEP["RifleAC"]=1.75
WEP["HandbowPD"]=0.8
WEP["HandbowMD"]=0.8
WEP["HandbowAC"]=1.2
UP={}
UP["0PD"]=1.0
UP["1PD"]=1.1
UP["2PD"]=1.2
UP["3PD"]=1.3
UP["4PD"]=1.4
UP["5PD"]=1.5
UP["6PD"]=1.6
UP["7PD"]=1.7
UP["8PD"]=1.8
UP["9PD"]=1.9
UP["10PD"]=2.0
UP["0MD"]=1.0
UP["1MD"]=1.1
UP["2MD"]=1.2
UP["3MD"]=1.3
UP["4MD"]=1.4
UP["5MD"]=1.5
UP["6MD"]=1.6
UP["7MD"]=1.7
UP["8MD"]=1.8
UP["9MD"]=1.9
UP["10MD"]=2.0
UP["0AC"]=1.0
UP["1AC"]=1.1
UP["2AC"]=1.2
UP["3AC"]=1.3
UP["4AC"]=1.4
UP["5AC"]=1.5
UP["6AC"]=1.6
UP["7AC"]=1.7
UP["8AC"]=1.8
UP["9AC"]=1.9
UP["10AC"]=2.0
UP["0PR"]=1.0
UP["1PR"]=1.1
UP["2PR"]=1.2
UP["3PR"]=1.3
UP["4PR"]=1.4
UP["5PR"]=1.5
UP["6PR"]=1.6
UP["7PR"]=1.7
UP["8PR"]=1.8
UP["9PR"]=1.9
UP["10PR"]=2.0
UP["0MR"]=1.0
UP["1MR"]=1.1
UP["2MR"]=1.2
UP["3MR"]=1.3
UP["4MR"]=1.4
UP["5MR"]=1.5
UP["6MR"]=1.6
UP["7MR"]=1.7
UP["8MR"]=1.8
UP["9MR"]=1.9
UP["10MR"]=2.0
UP["0WT"]=1.0
UP["1WT"]=0.95
UP["2WT"]=0.9
UP["3WT"]=0.85
UP["4WT"]=0.8
UP["5WT"]=0.75
UP["6WT"]=0.7
UP["7WT"]=0.65
UP["8WT"]=0.6
UP["9WT"]=0.55
UP["10WT"]=0.5
def ForgeW(Material, WeaponType, UpgradeLevel):
    """The ForgeW function Forges a Weapon from its base components into a lethal tool."""
    #Get the appropriate material stats...
    OrePD = MAT[Material + "PD"]
    OreMD = MAT[Material + "MD"]
    OreAC = MAT[Material + "AC"]
    #And weapon type stats...
    SmithPD = WEP[WeaponType + "PD"]
    SmithMD = WEP[WeaponType + "MD"]
    SmithAC = WEP[WeaponType + "AC"]
    #And apply the upgrade...
    UpgradePD = UP[UpgradeLevel + "PD"]
    UpgradeMD = UP[UpgradeLevel + "MD"]
    UpgradeAC = UP[UpgradeLevel + "AC"]
    #Then, multiply them all together.
    ProductPD = (OrePD * SmithPD) * UpgradePD
    ProductMD = (OreMD * SmithMD) * UpgradeMD
    ProductAC = (OreAC * SmithAC) * UpgradeAC
    return (ProductPD, ProductMD, ProductAC)
#Recall that ForgeW simply needs its three inputs, which we have a list of. So, let's make our
#database of weapon information.
OmniWeapData={}
#Go through every set of inputs we have...
for Inputs in ForgeWInputs:
    #And create a key in the dictionary by combining their three names. Then, set that
    #key equal to whatever ForgeW returns when those three inputs are put in.
    OmniWeapData[Inputs[0] + Inputs[1] + Inputs[2]] = ForgeW(Inputs[0], Inputs[1], Inputs[2])
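For illustration, here is one entry of the resulting dictionary (the numbers follow directly from the MAT, WEP and UP values above):

print(OmniWeapData["GoldKnife0"])   # -> (5.0, 15.0, 0.0): Gold stats x Knife multipliers x upgrade 0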
I would like to refer to the database created by this code and pull out weapons at random, and frankly I have no idea how. As an example of what I would like to do...
Well, hum. The code in question should spit out a certain number of results based on the complete products of the ForgeW function - if I specify, either within the code or through an input, that I would like 3 outputs, it might output a GoldKnife0, a TinPolearm5, and a CobaltGreatsword10. If I were to run the code again, it should dispense new equipment - not the same three every time.
I apologize if this is too much or too little data - it's my first time asking a question here.
"Take this... it may help you on your quest."
There is a standard-library module called random with a function called choice().
e.g.
>>> import random
>>> random.choice([1, 2, 3])
2
It sounds like you need one item from Materials, one from WeaponTypes, and one from Upgrades.
Also, there is rarely a need for a triple-nested for loop. This should get you started; a sketch follows.
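For example (assuming the Materials, WeaponTypes and Upgrades tuples and the ForgeW function from your script; random_weapons is just an illustrative name):

import random

def random_weapons(count):
    """Forge `count` randomly chosen weapons, e.g. GoldKnife0 or TinPolearm5."""
    results = []
    for _ in range(count):
        m = random.choice(Materials)
        w = random.choice(WeaponTypes)
        u = random.choice(Upgrades)
        results.append((m + w + u, ForgeW(m, w, u)))
    return results

print(random_weapons(3))

Equivalently, since OmniWeapData already holds every forged combination, random.choice(list(OmniWeapData)) picks a random key directly.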

distributed Tensorflow tracking timestamps for synchronization operations

I am new to TensorFlow. Currently, I am trying to evaluate the performance of distributed TensorFlow using Inception model provided by TensorFlow team.
The thing I want is to generate timestamps for some critical operations in a Parameter Server - Worker architecture, so I can measure the bottleneck (the network lag due to parameter transfer/synchronization or parameter computation cost) on replicas for one iteration (batch).
I came up with the idea of adding a customized dummy py_func operator designated of printing timestamps inside inception_distributed_train.py, with some control dependencies. Here are some pieces of code that I added:
def timer(s):
    # threading, getpid, datetime and time are imported elsewhere in the file
    print("-------- thread ID ", threading.current_thread().ident,
          ", ---- Process ID ----- ", getpid(), " ~~~~~~~~~~~~~~~ ", s,
          datetime.fromtimestamp(time.time()).strftime('%Y-%m-%d %H:%M:%S.%f'))
    return False

dummy1 = tf.py_func(timer, ["got gradients, before dequeues token "], tf.bool)
dummy2 = tf.py_func(timer, ["finished dequeueing the token "], tf.bool)
I modified
apply_gradients_op = opt.apply_gradients(grads, global_step=global_step)
with tf.control_dependencies([apply_gradients_op]):
    train_op = tf.identity(total_loss, name='train_op')
into
with tf.control_dependencies([dummy1]):
    apply_gradients_op = opt.apply_gradients(grads, global_step=global_step)
with tf.control_dependencies([apply_gradients_op]):
    with tf.control_dependencies([dummy2]):
        train_op = tf.identity(total_loss, name='train_op')
hoping to print the timestamps before evaluating the apply_gradient_op and after finishing evaluating the apply_gradient_op by enforcing node dependencies.
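For reference, here is a minimal self-contained sketch of this py_func-plus-control_dependencies timing pattern in isolation, assuming TensorFlow 1.x graph mode; the variables and ops below are illustrative stand-ins, not code from inception_distributed_train.py:

import time
import tensorflow as tf  # TF 1.x graph mode

def timer(label):
    # py_func hands string inputs to Python as bytes
    print(label.decode() if isinstance(label, bytes) else label, time.time())
    return False

x = tf.Variable(0.0)

before = tf.py_func(timer, ["before step"], tf.bool)
with tf.control_dependencies([before]):
    step_op = tf.assign_add(x, 1.0)        # stand-in for apply_gradients_op
with tf.control_dependencies([step_op]):
    # created inside the context, so this timer cannot run before step_op
    after = tf.py_func(timer, ["after step"], tf.bool)
with tf.control_dependencies([after]):
    train_op = tf.identity(x, name="train_op")

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(train_op)

Note that the second timer is created inside the control_dependencies([step_op]) block; that is what actually ties its execution order to the op being measured.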
I did similar things inside sync_replicas_optimizer.apply_gradients, by adding two dummy print nodes before and after update_op:
dummy1 = py_func(timer, ["---------- before update_op "], tf_bool)
dummy2 = py_func(timer, ["---------- finished update_op "], tf_bool)

# sync_op will be assigned to the same device as the global step.
with ops.device(global_step.device), ops.name_scope(""):
    with ops.control_dependencies([dummy1]):
        update_op = self._opt.apply_gradients(aggregated_grads_and_vars, global_step)

    # Clear all the gradients queues in case there are stale gradients.
    clear_queue_ops = []
    with ops.control_dependencies([update_op]):
        with ops.control_dependencies([dummy2]):
            for queue, dev in self._one_element_queue_list:
                with ops.device(dev):
                    stale_grads = queue.dequeue_many(queue.size())
                    clear_queue_ops.append(stale_grads)
I understand that apply_gradients_op is the train_op returned by sync_replicas_optimizer.apply_gradients, and that it is the op that dequeues a token (global_step) from the sync_queue managed by the chief worker via the chief_queue_runner, so that a replica can finish the current batch and start a new one.
In theory, apply_gradients_op should take some time, since a replica has to wait before it can dequeue the token (global_step) from the sync_queue. But the timings I got for one replica, such as the time difference around executing apply_gradients_op, are very short (~1/1000 s), and sometimes the print output is nondeterministic (especially for the chief worker). Here is a snippet of the output on the workers (I am running 2 workers and 1 PS):
chief worker (worker 0) output and worker 1 output (snippets not reproduced here)
My questions are:
1) I need to record the time TensorFlow takes to execute an op (such as train_op, apply_gradients_op, compute_gradients_op, etc.)
2) Is this the right direction to go, given my ultimate goal is to record the elapsed time for executing certain operations (such as the difference between the time a replica finishes computing gradients and the time it gets the global_step from sync_token)?
3) If this is not the way it should go, please guide me with some insights about the possible ways I could achieve my ultimate goal.
Thank you so much for reading this long post; I have spent weeks working on this!

Multi-Threading and Parallel Processing in Matlab

I'm coding a project in MATLAB, and I want my execution to be as efficient and fast as possible. To that end, I want to use parallel processing threads in MATLAB, since I have multiple objects working or changing their states in a for loop. Is multi-threading appropriate for this purpose? If so, where should I start, or how can I create a simple thread?
My Code:
% P=501x3 array
for i=1:length(P)
% I used position for example's sake, meaning object changing its state
Object1.position=P(i,:);
Object2.position=P(i,:);
Object3.position=P(i,:);
Object4.position=P(i,:);
% Multiple objects changing their state on each iteration, after some calculation/formulation.
end
What I need is the basic structure of multi-threading for my scenario, if threading is appropriate in my case. Other suggestions for parallel execution or faster processing are also welcome.
Edit 1: P array:
P =
-21.8318 19.2251 -16.0000
-21.7386 19.1620 -15.9640
-21.6455 19.0988 -15.9279
-21.5527 19.0357 -15.8918
-21.4600 18.9727 -15.8556
-21.3675 18.9096 -15.8194
-21.2752 18.8466 -15.7831
-21.1831 18.7836 -15.7468
-21.0911 18.7206 -15.7105
-20.9993 18.6577 -15.6741
-20.9078 18.5947 -15.6377
-20.8163 18.5318 -15.6012
-20.7251 18.4689 -15.5647
-20.6340 18.4061 -15.5281
-20.5432 18.3432 -15.4915
-20.4524 18.2804 -15.4548
-20.3619 18.2176 -15.4181
-20.2715 18.1548 -15.3814
-20.1813 18.0921 -15.3446
-20.0913 18.0293 -15.3078
-20.0015 17.9666 -15.2709
-19.9118 17.9039 -15.2340
-19.8223 17.8412 -15.1970
-19.7329 17.7786 -15.1601
-19.6438 17.7160 -15.1230
-19.5547 17.6534 -15.0860
-19.4659 17.5908 -15.0489
-19.3772 17.5282 -15.0117
-19.2887 17.4656 -14.9745
-19.2004 17.4031 -14.9373
-19.1122 17.3406 -14.9001
-19.0241 17.2781 -14.8628
-18.9363 17.2156 -14.8254
-18.8486 17.1532 -14.7881
-18.7610 17.0907 -14.7507
-18.6736 17.0283 -14.7132
-18.5864 16.9659 -14.6758
-18.4994 16.9035 -14.6383
-18.4124 16.8412 -14.6007
-18.3257 16.7788 -14.5632
-18.2391 16.7165 -14.5255
-18.1526 16.6542 -14.4879
-18.0663 16.5919 -14.4502
-17.9802 16.5296 -14.4125
-17.8942 16.4673 -14.3748
-17.8084 16.4051 -14.3370
-17.7227 16.3429 -14.2992
-17.6372 16.2807 -14.2614
-17.5518 16.2185 -14.2235
-17.4665 16.1563 -14.1856
-17.3815 16.0941 -14.1477
-17.2965 16.0320 -14.1097
-17.2117 15.9698 -14.0718
-17.1271 15.9077 -14.0338
-17.0426 15.8456 -13.9957
-16.9582 15.7835 -13.9576
-16.8740 15.7214 -13.9196
-16.7899 15.6594 -13.8814
-16.7060 15.5973 -13.8433
-16.6222 15.5353 -13.8051
-16.5385 15.4733 -13.7669
-16.4550 15.4113 -13.7287
-16.3716 15.3493 -13.6904
-16.2884 15.2873 -13.6521
-16.2053 15.2253 -13.6138
-16.1223 15.1634 -13.5755
-16.0395 15.1014 -13.5372
-15.9568 15.0395 -13.4988
-15.8742 14.9776 -13.4604
-15.7918 14.9157 -13.4220
-15.7095 14.8538 -13.3835
-15.6273 14.7919 -13.3451
-15.5453 14.7301 -13.3066
-15.4634 14.6682 -13.2681
-15.3816 14.6063 -13.2295
-15.2999 14.5445 -13.1910
-15.2184 14.4827 -13.1524
-15.1370 14.4209 -13.1138
-15.0557 14.3591 -13.0752
-14.9746 14.2973 -13.0366
-14.8936 14.2355 -12.9979
-14.8127 14.1737 -12.9593
-14.7319 14.1120 -12.9206
-14.6513 14.0502 -12.8819
-14.5707 13.9885 -12.8432
-14.4903 13.9267 -12.8044
-14.4100 13.8650 -12.7657
-14.3299 13.8033 -12.7269
-14.2498 13.7416 -12.6881
-14.1699 13.6799 -12.6493
-14.0901 13.6182 -12.6105
-14.0104 13.5565 -12.5717
-13.9308 13.4948 -12.5328
-13.8513 13.4332 -12.4940
-13.7720 13.3715 -12.4551
-13.6927 13.3099 -12.4162
-13.6136 13.2482 -12.3773
-13.5346 13.1866 -12.3384
-13.4556 13.1250 -12.2995
-13.3768 13.0633 -12.2605
-13.2982 13.0017 -12.2216
-13.2196 12.9401 -12.1826
-13.1411 12.8785 -12.1437
-13.0627 12.8169 -12.1047
-12.9845 12.7553 -12.0657
-12.9063 12.6937 -12.0267
-12.8283 12.6321 -11.9877
-12.7503 12.5706 -11.9487
-12.6725 12.5090 -11.9097
-12.5947 12.4474 -11.8706
-12.5171 12.3859 -11.8316
-12.4395 12.3243 -11.7925
-12.3621 12.2628 -11.7535
-12.2848 12.2012 -11.7144
-12.2075 12.1397 -11.6754
-12.1304 12.0781 -11.6363
-12.0533 12.0166 -11.5972
-11.9764 11.9550 -11.5581
-11.8995 11.8935 -11.5190
-11.8228 11.8320 -11.4799
-11.7461 11.7705 -11.4408
-11.6695 11.7089 -11.4017
-11.5930 11.6474 -11.3626
-11.5166 11.5859 -11.3235
-11.4403 11.5244 -11.2844
-11.3641 11.4629 -11.2453
-11.2880 11.4014 -11.2062
-11.2120 11.3398 -11.1671
-11.1360 11.2783 -11.1280
-11.0602 11.2168 -11.0889
-10.9844 11.1553 -11.0497
-10.9087 11.0938 -11.0106
-10.8331 11.0323 -10.9715
-10.7576 10.9708 -10.9324
-10.6821 10.9093 -10.8933
-10.6068 10.8478 -10.8542
-10.5315 10.7863 -10.8150
-10.4563 10.7248 -10.7759
-10.3812 10.6633 -10.7368
-10.3062 10.6018 -10.6977
-10.2312 10.5403 -10.6586
-10.1564 10.4788 -10.6195
-10.0816 10.4173 -10.5804
-10.0068 10.3557 -10.5414
-9.9322 10.2942 -10.5023
-9.8576 10.2327 -10.4632
-9.7831 10.1712 -10.4241
-9.7087 10.1097 -10.3851
-9.6343 10.0482 -10.3460
-9.5600 9.9866 -10.3069
-9.4858 9.9251 -10.2679
-9.4117 9.8636 -10.2289
-9.3376 9.8021 -10.1898
-9.2636 9.7405 -10.1508
-9.1897 9.6790 -10.1118
-9.1158 9.6174 -10.0728
-9.0420 9.5559 -10.0338
-8.9683 9.4944 -9.9948
-8.8946 9.4328 -9.9558
-8.8210 9.3712 -9.9169
-8.7474 9.3097 -9.8779
-8.6739 9.2481 -9.8390
-8.6005 9.1865 -9.8000
-8.5272 9.1250 -9.7611
-8.4539 9.0634 -9.7222
-8.3806 9.0018 -9.6833
-8.3074 8.9402 -9.6444
-8.2343 8.8786 -9.6056
-8.1612 8.8170 -9.5667
-8.0882 8.7554 -9.5279
-8.0152 8.6938 -9.4890
-7.9423 8.6322 -9.4502
-7.8695 8.5705 -9.4114
-7.7967 8.5089 -9.3727
-7.7239 8.4473 -9.3339
-7.6513 8.3856 -9.2951
-7.5786 8.3240 -9.2564
-7.5060 8.2623 -9.2177
-7.4335 8.2006 -9.1790
-7.3610 8.1389 -9.1403
-7.2885 8.0772 -9.1017
-7.2161 8.0155 -9.0630
-7.1438 7.9538 -9.0244
-7.0715 7.8921 -8.9858
-6.9992 7.8304 -8.9472
-6.9270 7.7687 -8.9086
-6.8548 7.7069 -8.8701
-6.7827 7.6452 -8.8316
-6.7106 7.5834 -8.7931
-6.6385 7.5217 -8.7546
-6.5665 7.4599 -8.7161
-6.4945 7.3981 -8.6777
-6.4226 7.3363 -8.6393
-6.3507 7.2745 -8.6009
-6.2788 7.2127 -8.5625
-6.2070 7.1508 -8.5242
-6.1352 7.0890 -8.4858
-6.0635 7.0271 -8.4475
-5.9917 6.9653 -8.4093
-5.9200 6.9034 -8.3710
-5.8484 6.8415 -8.3328
-5.7768 6.7796 -8.2946
-5.7052 6.7177 -8.2564
-5.6336 6.6558 -8.2183
-5.5621 6.5938 -8.1802
-5.4906 6.5319 -8.1421
-5.4191 6.4699 -8.1040
-5.3476 6.4079 -8.0660
-5.2762 6.3459 -8.0280
-5.2048 6.2839 -7.9900
-5.1334 6.2219 -7.9521
-5.0621 6.1599 -7.9142
-4.9908 6.0978 -7.8763
-4.9194 6.0358 -7.8384
-4.8482 5.9737 -7.8006
-4.7769 5.9116 -7.7628
-4.7057 5.8495 -7.7250
-4.6344 5.7874 -7.6873
-4.5632 5.7253 -7.6496
-4.4920 5.6631 -7.6119
-4.4209 5.6010 -7.5743
-4.3497 5.5388 -7.5367
-4.2786 5.4766 -7.4992
-4.2074 5.4144 -7.4616
-4.1363 5.3522 -7.4241
-4.0652 5.2899 -7.3867
-3.9941 5.2277 -7.3492
-3.9231 5.1654 -7.3118
-3.8520 5.1031 -7.2745
-3.7809 5.0408 -7.2372
-3.7099 4.9785 -7.1999
-3.6389 4.9161 -7.1626
-3.5678 4.8538 -7.1254
-3.4968 4.7914 -7.0883
-3.4258 4.7290 -7.0511
-3.3548 4.6666 -7.0140
-3.2838 4.6041 -6.9770
-3.2128 4.5417 -6.9400
-3.1418 4.4792 -6.9030
-3.0708 4.4167 -6.8661
-2.9998 4.3542 -6.8292
-2.9288 4.2917 -6.7923
-2.8578 4.2291 -6.7555
-2.7868 4.1666 -6.7187
-2.7158 4.1040 -6.6820
-2.6448 4.0414 -6.6453
-2.5738 3.9788 -6.6087
-2.5028 3.9161 -6.5720
-2.4318 3.8534 -6.5355
-2.3608 3.7908 -6.4990
-2.2897 3.7280 -6.4625
-2.2187 3.6653 -6.4261
-2.1477 3.6026 -6.3897
-2.0766 3.5398 -6.3534
-2.0056 3.4770 -6.3171
-1.9345 3.4142 -6.2808
-1.8634 3.3513 -6.2446
-1.7924 3.2885 -6.2085
-1.7213 3.2256 -6.1724
-1.6501 3.1627 -6.1363
-1.5790 3.0998 -6.1003
-1.5079 3.0368 -6.0643
-1.4367 2.9739 -6.0284
-1.3656 2.9109 -5.9925
-1.2944 2.8478 -5.9567
-1.2232 2.7848 -5.9210
-1.1519 2.7217 -5.8852
-1.0807 2.6586 -5.8496
-1.0094 2.5955 -5.8140
-0.9381 2.5324 -5.7784
-0.8668 2.4692 -5.7429
-0.7955 2.4060 -5.7074
-0.7242 2.3428 -5.6720
-0.6528 2.2796 -5.6366
-0.5814 2.2163 -5.6013
-0.5100 2.1530 -5.5661
-0.4385 2.0897 -5.5309
-0.3670 2.0264 -5.4957
-0.2955 1.9630 -5.4607
-0.2240 1.8996 -5.4256
-0.1524 1.8362 -5.3906
-0.0808 1.7727 -5.3557
-0.0092 1.7093 -5.3209
0.0625 1.6458 -5.2860
0.1341 1.5822 -5.2513
0.2059 1.5187 -5.2166
0.2776 1.4551 -5.1819
0.3494 1.3915 -5.1474
0.4212 1.3279 -5.1128
0.4931 1.2642 -5.0784
0.5650 1.2005 -5.0440
0.6369 1.1368 -5.0096
0.7089 1.0730 -4.9753
0.7809 1.0092 -4.9411
0.8530 0.9454 -4.9069
0.9251 0.8816 -4.8728
0.9972 0.8177 -4.8388
1.0694 0.7538 -4.8048
1.1416 0.6899 -4.7709
1.2139 0.6260 -4.7370
1.2862 0.5620 -4.7032
1.3585 0.4980 -4.6695
1.4309 0.4339 -4.6358
1.5034 0.3698 -4.6022
1.5759 0.3057 -4.5686
1.6484 0.2416 -4.5352
1.7210 0.1774 -4.5017
1.7936 0.1132 -4.4684
1.8663 0.0490 -4.4351
1.9391 -0.0153 -4.4019
2.0119 -0.0796 -4.3687
2.0847 -0.1439 -4.3356
2.1576 -0.2083 -4.3026
2.2306 -0.2726 -4.2697
2.3036 -0.3371 -4.2368
2.3767 -0.4015 -4.2040
2.4498 -0.4660 -4.1712
2.5230 -0.5305 -4.1385
2.5962 -0.5951 -4.1059
2.6695 -0.6597 -4.0734
2.7429 -0.7243 -4.0409
2.8163 -0.7890 -4.0085
2.8898 -0.8537 -3.9762
2.9634 -0.9184 -3.9439
3.0370 -0.9831 -3.9117
3.1106 -1.0479 -3.8796
3.1844 -1.1128 -3.8476
3.2582 -1.1776 -3.8156
3.3320 -1.2425 -3.7837
3.4060 -1.3075 -3.7519
3.4800 -1.3724 -3.7201
3.5541 -1.4374 -3.6884
3.6282 -1.5025 -3.6568
3.7024 -1.5675 -3.6253
3.7767 -1.6327 -3.5939
3.8510 -1.6978 -3.5625
3.9255 -1.7630 -3.5312
4.0000 -1.8282 -3.4999
4.0745 -1.8935 -3.4688
4.1492 -1.9588 -3.4377
4.2239 -2.0241 -3.4067
4.2987 -2.0895 -3.3758
4.3736 -2.1549 -3.3450
4.4485 -2.2203 -3.3142
4.5236 -2.2858 -3.2835
4.5987 -2.3513 -3.2529
4.6739 -2.4169 -3.2224
4.7491 -2.4825 -3.1920
4.8245 -2.5481 -3.1616
4.8999 -2.6138 -3.1313
4.9755 -2.6795 -3.1011
5.0511 -2.7453 -3.0710
5.1267 -2.8111 -3.0409
5.2025 -2.8769 -3.0110
5.2784 -2.9428 -2.9811
5.3543 -3.0087 -2.9513
5.4303 -3.0747 -2.9216
5.5065 -3.1407 -2.8920
5.5827 -3.2067 -2.8624
5.6590 -3.2728 -2.8330
5.7354 -3.3389 -2.8036
5.8118 -3.4051 -2.7743
5.8884 -3.4713 -2.7451
5.9651 -3.5375 -2.7160
6.0418 -3.6038 -2.6870
6.1187 -3.6701 -2.6580
6.1957 -3.7365 -2.6292
6.2727 -3.8029 -2.6004
6.3498 -3.8693 -2.5717
6.4271 -3.9358 -2.5431
6.5044 -4.0024 -2.5146
6.5819 -4.0690 -2.4862
6.6594 -4.1356 -2.4579
6.7370 -4.2023 -2.4296
6.8148 -4.2690 -2.4015
6.8926 -4.3357 -2.3734
6.9706 -4.4025 -2.3455
7.0486 -4.4694 -2.3176
7.1268 -4.5362 -2.2898
7.2050 -4.6032 -2.2621
7.2834 -4.6702 -2.2345
7.3619 -4.7372 -2.2070
7.4405 -4.8042 -2.1796
7.5191 -4.8713 -2.1523
7.5979 -4.9385 -2.1250
7.6769 -5.0057 -2.0979
7.7559 -5.0730 -2.0709
7.8350 -5.1402 -2.0439
7.9142 -5.2076 -2.0171
7.9936 -5.2750 -1.9903
8.0731 -5.3424 -1.9637
8.1526 -5.4099 -1.9371
8.2323 -5.4774 -1.9106
8.3122 -5.5450 -1.8842
8.3921 -5.6126 -1.8580
8.4721 -5.6803 -1.8318
8.5523 -5.7480 -1.8057
8.6326 -5.8157 -1.7797
8.7130 -5.8835 -1.7539
8.7935 -5.9514 -1.7281
8.8742 -6.0193 -1.7024
8.9549 -6.0873 -1.6768
9.0358 -6.1553 -1.6513
9.1169 -6.2233 -1.6260
9.1980 -6.2914 -1.6007
9.2793 -6.3596 -1.5755
9.3607 -6.4278 -1.5504
9.4422 -6.4960 -1.5254
9.5238 -6.5643 -1.5006
9.6056 -6.6327 -1.4758
9.6875 -6.7011 -1.4511
9.7695 -6.7695 -1.4266
9.8517 -6.8380 -1.4021
9.9340 -6.9066 -1.3778
10.0164 -6.9752 -1.3535
10.0990 -7.0438 -1.3294
10.1817 -7.1125 -1.3053
10.2645 -7.1813 -1.2814
10.3475 -7.2501 -1.2576
10.4306 -7.3189 -1.2338
10.5138 -7.3878 -1.2102
10.5972 -7.4568 -1.1867
10.6807 -7.5258 -1.1633
10.7644 -7.5949 -1.1400
10.8482 -7.6640 -1.1168
10.9321 -7.7332 -1.0938
11.0162 -7.8024 -1.0708
11.1004 -7.8717 -1.0479
11.1848 -7.9410 -1.0252
11.2693 -8.0104 -1.0026
11.3539 -8.0798 -0.9800
11.4387 -8.1493 -0.9576
11.5237 -8.2188 -0.9353
11.6087 -8.2884 -0.9131
11.6940 -8.3581 -0.8910
11.7794 -8.4278 -0.8691
11.8649 -8.4976 -0.8472
11.9506 -8.5674 -0.8255
12.0364 -8.6373 -0.8038
12.1224 -8.7072 -0.7823
12.2085 -8.7772 -0.7609
12.2948 -8.8472 -0.7396
12.3813 -8.9173 -0.7185
12.4679 -8.9875 -0.6974
12.5546 -9.0577 -0.6765
12.6415 -9.1279 -0.6557
12.7286 -9.1983 -0.6350
12.8158 -9.2686 -0.6144
12.9032 -9.3391 -0.5939
12.9907 -9.4096 -0.5735
13.0784 -9.4801 -0.5533
13.1663 -9.5507 -0.5332
13.2543 -9.6214 -0.5132
13.3425 -9.6921 -0.4933
13.4309 -9.7629 -0.4735
13.5194 -9.8337 -0.4539
13.6080 -9.9046 -0.4344
13.6969 -9.9756 -0.4150
13.7859 -10.0466 -0.3957
13.8751 -10.1177 -0.3765
13.9644 -10.1888 -0.3575
14.0539 -10.2600 -0.3386
14.1436 -10.3313 -0.3198
14.2334 -10.4026 -0.3011
14.3235 -10.4740 -0.2826
14.4137 -10.5454 -0.2641
14.5040 -10.6169 -0.2458
14.5946 -10.6884 -0.2277
14.6853 -10.7600 -0.2096
14.7761 -10.8317 -0.1917
14.8672 -10.9035 -0.1739
14.9584 -10.9753 -0.1562
15.0499 -11.0471 -0.1387
15.1414 -11.1190 -0.1213
15.2332 -11.1910 -0.1040
15.3252 -11.2631 -0.0868
15.4173 -11.3352 -0.0697
15.5096 -11.4073 -0.0528
15.6021 -11.4796 -0.0360
15.6948 -11.5519 -0.0194
15.7876 -11.6242 -0.0029
15.8806 -11.6966 0.0135
15.9739 -11.7691 0.0298
16.0673 -11.8417 0.0459
16.1609 -11.9143 0.0619
16.2547 -11.9869 0.0778
16.3486 -12.0597 0.0936
16.4428 -12.1325 0.1092
16.5371 -12.2053 0.1247
16.6317 -12.2783 0.1400
16.7264 -12.3513 0.1552
16.8213 -12.4243 0.1703
16.9164 -12.4974 0.1853
17.0117 -12.5706 0.2001
17.1072 -12.6439 0.2148
17.2029 -12.7172 0.2293
17.2988 -12.7906 0.2437
17.3949 -12.8640 0.2580
17.4911 -12.9376 0.2721
17.5876 -13.0111 0.2861
17.6843 -13.0848 0.3000
Ahsan,
I think parfor might be what you're looking for. It requires the Parallel Computing Toolbox (PCT) to use. It works like this:
parfor i = 1:length(P)
Object1.position = P(i,:);
end
Also, I would recommend using indexed structure fields, as I've done in the example below, as it increases the flexibility of any code that you write. Let me know if this doesn't work for you, and we'll try something else. I know another technique, but it's much messier. Good luck!
parfor i = 1:length(P)
Object(i).position = P(i,:);
end
Edit: Ok, this may not work because parfor is pretty particular, but the general concept should apply to the problem you're trying to solve. I can't be more specific without knowing more about your specific problem. The key point is that inside a parfor you must index something (Object, below) by the parfor's iterator (i, below) and assign that thing a value. I'm not sure if you can use a for-loop inside a parfor, so the first example below may break. The point is, this is how you do parallel processing in MATLAB. Try this:
parfor i = 1:4
Object(i).position = P(i,:);
end
Edit 2:
parfor i = 1:length(P)
Object1 = struct();
Object1.position = P(i, :);
end
Edit 3: Ok, one last thing. You can't use "sliced" structs (structs with fields) as input or output from a parfor so you have to do this:
parfor i = 1:length(P)
positionArray1(i, :) = P(i, :);
end
Object1 = struct('position', positionArray1);
