Our system writes the ManagedThreadID with every log message, and for years we've used it to distinguish a particular unit of work among many in the logs. So far, so good.
Now, we're beginning to use the Task Parallel Library and noticing an interesting effect:
public static void Main(string[] args) {
    WriteLine("BEGIN");
    Parallel.For(0, 32, (index) => {
        WriteLine(" Loop " + index.ToString());
    });
    WriteLine("END");
}

// Logging helper: writes each message prefixed with the current managed
// thread ID, producing the "ThreadID=..., Message=..." lines shown below.
static void WriteLine(string message) {
    Console.WriteLine("ThreadID={0}, Message={1}",
        Thread.CurrentThread.ManagedThreadId, message);
}
The output looks something like:
ThreadID=1, Message=BEGIN
ThreadID=1, Message= Loop 0
ThreadID=3, Message= Loop 16
ThreadID=3, Message= Loop 17
...
ThreadID=4, Message= Loop 4
ThreadID=4, Message= Loop 5
ThreadID=1, Message= Loop 8
ThreadID=1, Message= Loop 9
ThreadID=1, Message= Loop 10
ThreadID=3, Message= Loop 21
ThreadID=4, Message= Loop 6
...
ThreadID=3, Message= Loop 24
ThreadID=3, Message= Loop 25
ThreadID=1, Message= Loop 11
ThreadID=1, Message= Loop 12
ThreadID=1, Message= Loop 13
ThreadID=1, Message= Loop 31
ThreadID=3, Message= Loop 26
...
ThreadID=3, Message= Loop 30
ThreadID=1, Message=END
You'll notice that the ThreadID of the main thread (the one that logged "BEGIN") is occasionally reused by the loop iterations.
My question is: can this happen anywhere else -- such as on the thread pool, or when using other features of the Task Parallel Library? I have spent a ridiculous amount of time trying to find other ways to provoke the behavior and cannot.
The concern is that if we can no longer rely on the ThreadID (we have many tools that rely on this behavior), then we will simply avoid using Parallel.For. But if the problem can manifest in other ways, we'll need to figure out how to avoid them UNTIL we revamp our logging strategy and tooling support.
If there are other ways to provoke the behavior, I'd like to know about them so I can determine whether any of our usage meets such conditions and correct it accordingly. More importantly, I'd like a sample program that provokes the behavior so I can study any side effects in our tooling.
Parallel.For indeed runs one of the worker tasks on the calling thread. The rationale is that since the calling thread must wait until the parallel loop completes, it might as well participate in the parallel operation.
As far as other features of the Task Parallel Library go, methods that block will often use the calling thread. So Parallel.For, Parallel.ForEach, Parallel.Invoke, and blocking PLINQ queries will all reuse the calling thread as one of the workers. On the other hand, operations that just "kick off" some work and immediately return -- like Task.Factory.StartNew, ThreadPool.QueueUserWorkItem, and non-blocking PLINQ queries -- cannot use the calling thread.
As a workaround, you can run the Parallel.For inside a task and wait on the task:
public static void Main(string[] args) {
    WriteLine("BEGIN");
    Task.Factory.StartNew(() =>
        Parallel.For(0, 32, (index) => {
            WriteLine(" Loop " + index.ToString());
        })
    ).Wait();
    WriteLine("END");
}
Warning: the workaround above will not work if Task.Factory.StartNew() is called from a ThreadPool thread. In that case, the Wait call may end up executing the task inline on the calling ThreadPool thread.
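If you need a sample program to study that case, here is a minimal sketch that can provoke it: the outer work item runs on a ThreadPool thread, so the Wait() call may execute the inner task inline there. Whether inlining actually happens depends on the scheduler's state, so treat this as a starting point, not a guaranteed reproduction.

using System;
using System.Threading;
using System.Threading.Tasks;

class InlineWaitDemo {
    static void Main() {
        ThreadPool.QueueUserWorkItem(_ => {
            int outerId = Thread.CurrentThread.ManagedThreadId;
            Task inner = Task.Factory.StartNew(() =>
                Console.WriteLine("outer={0}, inner={1}",
                    outerId, Thread.CurrentThread.ManagedThreadId));
            inner.Wait(); // may run 'inner' inline on this ThreadPool thread
        });
        Console.ReadLine(); // keep the process alive to observe the output
    }
}

If the two IDs match on a given run, the inner task was inlined.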
You would have to avoid the TPL entirely to prevent that behavior.
It sounds like you need to find a different way of identifying a unit of work besides the ThreadID -- possibly using the logical call context to carry that information, if you can't rewrite the code to pass some kind of identifier along explicitly.
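As a minimal sketch of that idea (the key name "UnitOfWorkId" is arbitrary, and this assumes the logical call context flows into the TPL work items, which it does on recent framework versions):

using System;
using System.Runtime.Remoting.Messaging;
using System.Threading.Tasks;

class UnitOfWorkDemo {
    static void Log(string message) {
        // The unit-of-work ID travels with the logical call context,
        // regardless of which thread happens to execute the work.
        object id = CallContext.LogicalGetData("UnitOfWorkId");
        Console.WriteLine("UnitOfWork={0}, Message={1}", id, message);
    }

    static void Main() {
        CallContext.LogicalSetData("UnitOfWorkId", Guid.NewGuid());
        Log("BEGIN");
        Parallel.For(0, 32, i => Log(" Loop " + i));
        Log("END");
    }
}

Every line of output then carries the same unit-of-work ID, no matter which worker thread logged it.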
Related
I have a high-QPS Netty app (2500-3000 QPS per VM).
The event loop groups are configured like this:
bossGroup = new NioEventLoopGroup(2, new DefaultThreadFactory("inbound-netty-boss"));
workerGroup = new NioEventLoopGroup(32 * 5, new DefaultThreadFactory("inbound-netty-worker"));
The request lifecycle looks like this:
Incoming request --> Do X --> Do Y
What I want to do is :
Incoming request --> Do X(params1) --> Do X(params2) after 100ms delay -->Do Y
So far I have tried:
CompletableFuture.runAsync(RunnableX, CompletableFuture.delayedExecutor(
        100, TimeUnit.MILLISECONDS, new ForkJoinPool(32 * 5)));

Executors.newScheduledThreadPool(32 * 5, new ThreadFactoryBuilder().build())
        .schedule(RunnableX, 100, TimeUnit.MILLISECONDS);
And finally
channelHandlerContext.executor().schedule(RunnableX, 100, TimeUnit.MILLISECONDS);
All of these caused a tremendous bottleneck, with QPS dropping to almost 1/10th of the original.
I keep seeing the event loop group threads in RUNNABLE state and all the threads in the dedicated executor in TIMED_WAITING state.
What am I missing? I need to find a way to not block the inbound thread, so that it is free to serve other requests.
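For clarity, the non-blocking shape I'm after looks something like the sketch below, where doX, doY, and workerPool are placeholders for my real handler methods and a shared executor. No stage calls get() or join(), so nothing blocks:

CompletableFuture
        .runAsync(() -> doX(params1), workerPool)
        .thenRunAsync(() -> doX(params2),
                CompletableFuture.delayedExecutor(100, TimeUnit.MILLISECONDS, workerPool))
        .thenRunAsync(() -> doY(), workerPool);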
I run 10 processes with 10 threads each, and they write constantly and quite frequently to 10 log files (one per process) using logging.info() and logging.debug() over a 30-second run.
At some point, usually after about 10 seconds, a deadlock occurs: all of the processes stop.
gdb python [pid] with py-bt and info threads shows that they are stuck here:
Id Target Id Frame
* 1 Thread 0x7ff50f020740 (LWP 1622) "python" 0x00007ff50e8276d6 in futex_abstimed_wait_cancelable (private=0, abstime=0x0, expected=0, futex_word=0x564f17c8aa80)
at ../sysdeps/unix/sysv/linux/futex-internal.h:205
2 Thread 0x7ff509636700 (LWP 1624) "python" 0x00007ff50eb57bb7 in epoll_wait (epfd=8, events=0x7ff5096351d0, maxevents=256, timeout=-1) at ../sysdeps/unix/sysv/linux/epoll_wait.c:30
3 Thread 0x7ff508e35700 (LWP 1625) "python" 0x00007ff50eb57bb7 in epoll_wait (epfd=12, events=0x7ff508e341d0, maxevents=256, timeout=-1) at ../sysdeps/unix/sysv/linux/epoll_wait.c:30
4 Thread 0x7ff503fff700 (LWP 1667) "python" 0x00007ff50e8276d6 in futex_abstimed_wait_cancelable (private=0, abstime=0x0, expected=0, futex_word=0x564f17c8aa80)
at ../sysdeps/unix/sysv/linux/futex-internal.h:205
...[threads 5-6 like 4]...
7 Thread 0x7ff5027fc700 (LWP 1690) "python" 0x00007ff50eb46187 in __GI___libc_write (fd=2, buf=0x7ff50967bc24, nbytes=85) at ../sysdeps/unix/sysv/linux/write.c:27
...[threads 8-13 like 4]...
Stack of thread 7:
Traceback (most recent call first):
File "/usr/lib/python2.7/logging/__init__.py", line 889, in emit
stream.write(fs % msg)
...[skipped useless lines]...
And this code (from logging/__init__.py, around the line in the traceback):
884 #the codecs module, but fail when writing to a
885 #terminal even when the codepage is set to cp1251.
886 #An extra encoding step seems to be needed.
887 stream.write((ufs % msg).encode(stream.encoding))
888 else:
>889 stream.write(fs % msg)
890 except UnicodeError:
891 stream.write(fs % msg.encode("UTF-8"))
892 self.flush()
893 except (KeyboardInterrupt, SystemExit):
894 raise
The stacks of the remaining threads are similar -- waiting for the GIL:
Traceback (most recent call first):
Waiting for the GIL
File "/usr/lib/python2.7/threading.py", line 174, in acquire
rc = self.__block.acquire(blocking)
File "/usr/lib/python2.7/logging/__init__.py", line 715, in acquire
self.lock.acquire()
...[skipped useless lines]...
The documentation says the logging package is thread-safe without additional locks. So why might logging deadlock? Does it open too many file descriptors, or hit some other limit?
Here's how I initialize it (in case it matters):
def get_logger(log_level, file_name='', log_name=''):
if len(log_name) != 0:
logger = logging.getLogger(log_name)
else:
logger = logging.getLogger()
logger.setLevel(logger_state[log_level])
formatter = logging.Formatter('%(asctime)s [%(levelname)s][%(name)s:%(funcName)s():%(lineno)s] - %(message)s')
# file handler
if len(file_name) != 0:
fh = logging.FileHandler(file_name)
fh.setLevel(logging.DEBUG)
fh.setFormatter(formatter)
logger.addHandler(fh)
# console handler
console_out = logging.StreamHandler()
console_out.setLevel(logging.DEBUG)
console_out.setFormatter(formatter)
logger.addHandler(console_out)
return logger
The problem was that I was writing output to the console and to a file, but all of those processes were started with output redirected to a pipe that was never read.
p = Popen(proc_params,
stdout=PIPE,
stderr=STDOUT,
close_fds=ON_POSIX,
bufsize=1
)
So it seems pipes have a limited buffer size, and once it fills up, everything deadlocks.
The explanation is here: https://docs.python.org/2/library/subprocess.html
Note
Do not use stdout=PIPE or stderr=PIPE with this function as that can deadlock based on the child process output volume. Use Popen with the communicate() method when you need pipes.
That note is about convenience functions I don't use, but it evidently applies to a plain Popen as well if you never read the pipes out.
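Assuming the children's console output isn't actually needed, a minimal fix is to point it at os.devnull instead of an unread pipe (proc_params and ON_POSIX are as in the snippet above):

import os
from subprocess import Popen, STDOUT

# Send the child's output to /dev/null instead of a pipe nobody reads,
# so the OS pipe buffer can never fill up and block the logging writes.
devnull = open(os.devnull, 'w')
p = Popen(proc_params,
          stdout=devnull,
          stderr=STDOUT,
          close_fds=ON_POSIX,
          bufsize=1)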
I am new to the BSD TCP stack code, but we have an application that was converted to run the BSD TCP stack in user mode. I am debugging an issue where this application stalls (stops sending data) in a strange situation. Part of the relevant call stack is included below.
Before calling sosend in uipc_socket, I verified that the length stored in the top, _m0->m_pkthdr.len, has the right value. The chain should hold 12 pieces of TCP payload, each 1368 bytes long. In the end, my callback function only got called 10 times instead of 12, and the 10 smaller mbuf chains contained less payload data after they were combined.
I checked each function in the call stack and could not find a loop anywhere that iterates from the head to the end of the mbuf chain, as I expected. The only loop I could find was the nested do ... while() in sosend_generic of uipc_socket.c; however, my code path executed that loop only once, since resid was set to 0 immediately after the (uio == NULL) check.
#2 0x00007ffff1acb868 in ether_output_frame (ifp=0xbf5f290, m=0x7ffff7dfeb00) at .../freebsd_plebnet/net/if_ethersubr.c:457
#3 0x00007ffff1acb7ef in ether_output (ifp=0xbf5f290, m=0x7ffff7dfeb00, dst=0x7fffffff2c8c, ro=0x7fffffff2c70) at .../freebsd_plebnet/net/if_ethersubr.c:429
#4 0x00007ffff1ada20b in ip_output (m=0x7ffff7dfeb00, opt=0x0, ro=0x7fffffff2c70, flags=0, imo=0x0, inp=0x7fffd409e000) at .../freebsd_plebnet/netinet/ip_output.c:663
#5 0x00007ffff1b0743b in tcp_output (tp=0x7fffd409c000) at /scratch/vindu/ec_intg/ims/src/third-party/nse/nsnet.git/freebsd_plebnet/netinet/tcp_output.c:1288
#6 0x00007ffff1ae2789 in tcp_usr_send (so=0x7fffd40a0000, flags=0, m=0x7ffeda1b7270, nam=0x0, control=0x0, td=0x7ffff1d5f3d0) at .../freebsd_plebnet/netinet/tcp_usrreq.c:882
#7 0x00007ffff1a93743 in sosend_generic (so=0x7fffd40a0000, addr=0x0, uio=0x0, top=0x7ffeda1b7270, control=0x0, flags=128, td=0x7ffff1d5f3d0) at .../freebsd_plebnet/kern/uipc_socket.c:1390
#8 0x00007ffff1a9387d in sosend (so=0x7fffd40a0000, addr=0x0, uio=0x0, top=0x7ffeda1b7270, control=0x0, flags=128, td=0x7ffff1d5f3d0) at .../freebsd_plebnet/kern/uipc_socket.c:1434
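As a debugging aid, here is a minimal sketch (not part of the stack above) that walks an mbuf chain by hand, so the chain can be checked at each layer to see where payload goes missing. It assumes m0 is the head of a chain with a valid packet header (M_PKTHDR set):

/* Walk the chain via m_next, summing the per-mbuf lengths, and compare
 * the total against the packet-header length recorded in the head. */
static void
mbuf_chain_check(struct mbuf *m0)
{
    struct mbuf *m;
    int total = 0, count = 0;

    for (m = m0; m != NULL; m = m->m_next) {
        total += m->m_len;
        count++;
    }
    printf("chain: %d mbufs, %d bytes total, m_pkthdr.len=%d\n",
        count, total, m0->m_pkthdr.len);
}

Calling this before sosend() and again inside tcp_usr_send() should help narrow down where the chain loses pieces.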
I'm coding a project in MATLAB and want maximum efficiency and speed of execution. To that end, I want to use parallel processing threads in MATLAB, as I have multiple objects changing their states in a for loop. Is multi-threading appropriate for this purpose? If so, where do I start, and how can I create a simple thread?
My Code:
% P=501x3 array
for i=1:length(P)
% position is used for example's sake; it stands in for an object changing its state
Object1.position=P(i,:);
Object2.position=P(i,:);
Object3.position=P(i,:);
Object4.position=P(i,:);
% Multiple objects change their state on each iteration, after some calculation/formulation.
end
What I need is the basic multi-threading structure for my scenario, if threading is appropriate in my case. Other suggestions for parallel execution or fast processing are welcome.
Edit 1: the P array:
P =
  -21.8318   19.2251  -16.0000
  -21.7386   19.1620  -15.9640
  -21.6455   19.0988  -15.9279
   ...
   17.5876  -13.0111    0.2861
   17.6843  -13.0848    0.3000
(501 rows by 3 columns; the intermediate rows, which continue the same smooth progression, are omitted here)
Ahsan,
I think parfor might be what you're looking for. It requires the Parallel Computing Toolbox (PCT). It works like this:
parfor i = 1:length(P)
    Object1.position = P(i,:);
end
Also, I would recommend using indexed structure fields, as I've done in the second example below, since that increases the flexibility of any code you write. Let me know if this doesn't work for you, and we'll try something else. I know another technique, but it's much messier. Good luck!
parfor i = 1:length(P)
    Object(i).position = P(i,:);
end
Edit: OK, this may not work because parfor is pretty particular, but the general concept should apply to the problem you're trying to solve. I can't be more specific without knowing more about your specific problem. The key point is that, inside a parfor, you must index something (Object, below) by the parfor's iterator (i, below) and assign that thing a value. I'm not sure whether you can use a for loop inside a parfor, so the first example below may break. The point is, this is how you do parallel processing in MATLAB. Try this:
parfor i = 1:4
    Object(i).position = P(i,:);
end
Edit 2:
parfor i = 1:length(P)
    Object1 = struct();
    Object1.position = P(i, :);
end
Edit 3: OK, one last thing. You can't use "sliced" structs (structs with fields) as input to or output from a parfor, so you have to fill a plain array inside the loop and build the struct afterwards:
parfor i = 1:length(P)
    positionArray1(i, :) = P(i, :);
end
Object1 = struct('position', positionArray1);
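Putting it together, a runnable sketch under the same assumptions (the Parallel Computing Toolbox is installed; whether the pool-opening command is parpool or matlabpool depends on your release):

% Open a pool of workers (parpool on newer releases, matlabpool on older ones).
parpool;

% Fill a plain numeric array in parallel; each iteration writes only its
% own slice, which is exactly the access pattern parfor requires.
positionArray1 = zeros(length(P), 3);
parfor i = 1:length(P)
    positionArray1(i, :) = P(i, :);
end

% Build the struct after the loop, outside the parfor.
Object1 = struct('position', positionArray1);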
When analyzing a crash dump file, I often get a stack like this:
0:025> kP
Child-SP RetAddr Call Site
00000000`05a4fc78 00000000`77548638 ntdll!DbgBreakPoint(void) [d:\w7rtm\minkernel\ntos\rtl\amd64\debugstb.asm # 51]
00000000`05a4fc80 00000000`774b39cb ntdll!DbgUiRemoteBreakin(
void * Context = 0x00000000`00000000)+0x38 [d:\w7rtm\minkernel\ntdll\dlluistb.c # 310]
00000000`05a4fcb0 00000000`00000000 ntdll!RtlUserThreadStart(
<function> * StartAddress = 0x00000000`00000000,
void * Argument = 0x00000000`00000000)+0x25 [d:\w7rtm\minkernel\ntos\rtl\rtlexec.c # 3179]
It seems that the process crashed when creating a thread, so I want to find out which thread created the current one. How can I get that?
You can look at the other threads in the process with ~*k to see if there's anything interesting. Other than that, this info simply isn't there in the dump.
-scott