Using TDD to drive out thread-safe code - multithreading

What's a good way to leverage TDD to drive out thread-safe code? For example, say I have a factory method that utilizes lazy initialization to create only one instance of a class, and return it thereafter:
private TextLineEncoder textLineEncoder;
...
public ProtocolEncoder getEncoder() throws Exception {
    if (textLineEncoder == null)
        textLineEncoder = new TextLineEncoder();
    return textLineEncoder;
}
Now, I want to write a test in good TDD fashion that forces me to make this code thread-safe. Specifically, when two threads call this method at the same time, I don't want to create two instances and discard one. This is easily done, but how can I write a test that makes me do it?
I'm asking this in Java, but the answer should be more broadly applicable.

You could inject a "provider" (a really simple factory) that is responsible for just this line:
textLineEncoder = new TextLineEncoder();
Then your test would inject a really slow implementation of the provider, so that the two threads in the test can more easily collide. You could go as far as having the first thread wait on a Semaphore that is released by the second thread; success of the test then means the waiting thread times out, because a properly synchronized factory never lets the second thread into the provider to release it. By giving the first thread a head start you can make sure it's already waiting before the second one arrives.
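A sketch of how such a test might look, assuming getEncoder() is refactored to call a provider; everything except TextLineEncoder is a made-up name:

import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical provider seam: getEncoder() is assumed to call provider.create()
// instead of new TextLineEncoder() directly.
class BlockingEncoderProvider {
    final Semaphore secondCallerArrived = new Semaphore(0);
    final AtomicInteger creations = new AtomicInteger();

    TextLineEncoder create() throws Exception {
        if (creations.incrementAndGet() == 1) {
            // The first caller parks here. If getEncoder() is properly synchronized,
            // no second caller can reach this method to release the permit, so we
            // simply time out -- which is exactly the outcome the test wants.
            secondCallerArrived.tryAcquire(500, TimeUnit.MILLISECONDS);
        } else {
            secondCallerArrived.release(); // a second caller got in: the factory is not thread-safe
        }
        return new TextLineEncoder();
    }
}

public class EncoderFactoryTest {
    @org.junit.Test
    public void onlyOneEncoderIsCreatedUnderContention() throws Exception {
        BlockingEncoderProvider provider = new BlockingEncoderProvider();
        EncoderFactory factory = new EncoderFactory(provider); // hypothetical class under test
        Runnable call = () -> {
            try { factory.getEncoder(); } catch (Exception ignored) { }
        };
        Thread first = new Thread(call);
        Thread second = new Thread(call);
        first.start();
        Thread.sleep(50);  // head start: the first thread is already parked in create()
        second.start();
        first.join();
        second.join();
        org.junit.Assert.assertEquals(1, provider.creations.get());
    }
}

With the naive null-check both threads see textLineEncoder == null, both reach create(), and the assertion fails; once getEncoder() is synchronized the test goes green.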

It's hard, though possible - possibly harder than it's worth. Known solutions involve instrumenting the code under test. The discussion here, "Extreme Programming Challenge Fourteen" is worth sifting through.

In the book Clean Code there are some tips on how to test concurrent code. One tip that has helped me find concurrency bugs is to run more tests concurrently than the CPU has cores.
In my project, running the tests takes about 2 seconds on my quad-core machine. When I want to test the concurrent parts (there are some tests for that), I hold down the hotkey for running all tests in IntelliJ IDEA until I see in the status bar that 20, 50 or 100 test runs are executing. I follow the CPU and memory usage in Windows Task Manager to find out when all the test runs have finished (memory usage goes up by 1-2 GB while they are all running and then slowly goes back down).
Then I close one by one all the test run output dialogs, and check that there were no failures. Sometimes there are failed tests or tests which are in deadlock, and then I investigate them until I find the bug and have fixed it. That has helped me to find a couple of nasty concurrency bugs. The most important thing, when facing an exception/deadlock that should not have happened, is always assuming that the code is broken, and investigating the reason ruthlessly and fixing it. There are no cosmic rays which cause programs to crash randomly - bugs in code cause programs to crash.
There are also frameworks such as http://www.alphaworks.ibm.com/tech/contest which use bytecode manipulation to force the code to do more thread switching, thus increasing the probability of making concurrency bugs visible.

When I recently test-drove an implementation that needed to be thread-safe, I came up with the solution I provided as an answer to this question. Hope that helps even though there are no tests there. Hope the link is OK rather than duplicating the answer...

Chapter 12 of Java Concurrency in Practice is called "Testing Concurrent Programs". It documents testing for safety and liveness, but says this is a hard subject. I am not sure this problem is solvable by the tools of that chapter.

Just off the top of my head, could you compare the instances returned to see if they are indeed the same instance or if they are different? That's probably where I would start in C#; I would imagine you can do the same in Java.
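For instance, a rough sketch of that idea in Java (EncoderFactory is a made-up name for whatever object owns getEncoder()):

import java.util.Set;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class SameInstanceTest {
    @org.junit.Test
    public void everyThreadSeesTheSameEncoder() throws Exception {
        EncoderFactory factory = new EncoderFactory();     // hypothetical owner of getEncoder()
        int threads = 32;
        ExecutorService pool = Executors.newFixedThreadPool(threads);
        CountDownLatch start = new CountDownLatch(1);
        Set<Object> seen = ConcurrentHashMap.newKeySet();  // relies on default identity-based equals
        for (int i = 0; i < threads; i++) {
            pool.submit(() -> {
                start.await();                             // line all callers up...
                seen.add(factory.getEncoder());
                return null;
            });
        }
        start.countDown();                                 // ...and fire them at once
        pool.shutdown();
        pool.awaitTermination(10, TimeUnit.SECONDS);
        org.junit.Assert.assertEquals(1, seen.size());     // two distinct instances would fail this
    }
}

Like all such tests it is probabilistic: it only fails when the race actually happens, which is why the other answers suggest widening the race window or repeating the runs.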

Related

Scala 3 with ScalaFX thread related problem

I have an application that has multiple screens and a process that needs to get UI info from some and update others.
Tried many methods, but the result is always "not a Java FX thread". Without using some kind of thread the UI does not update. Because of the multi-screen nature of the app (not practical to change) I need to fundamentally change the application architecture, which is why I am not posting any code - it's all going to change.
What I can't work out is the best way to do this, and as any changes are likely to require substantial work I am reluctant to try something that has little chance of success.
I know about Platform.runLater and tried adding that to the updates but that was complex and did not seem to be effective.
I do have the code on GitHub - it's a personal learning project that started in Scala 2, but if you have an interest in learning or pointing out my errors I can provide access.
Hope you have enjoyed a wonderful Christmas.
PS: just made the repo public: https://github.com/udsl/Processor6502
The problem is not that Platform.runLater was not working; it's that the process is being called from a loop in a thread, and without a yield the JavaFX thread never gets an opportunity to run. It just appeared to be failing - again I fell foul of an assumption.
The thread calls a method from within a loop which terminates on a condition set by the method.
The process is planned to emulate the execution of 6502 processor instructions in two modes, run and run-slow; run-slow is run with a short delay after each instruction execution.
The updates are to the main screen: the PC, status flags and register contents. The run (debug) screen gets the current instruction display updated, and other items will be added in the future.
The BRK instruction with a zero byte following is captured and sets the execution mode to single-step, essentially acting as a breakpoint, though in the future it will be possible via the debug screen to set a breakpoint and for its execution to restore the original contents. This is to enable the debugging of a future hardware item - time and finances permitting - it's a hobby after all 😊
It turns out that the JavaFX thread issue only happens when an FX control is written to, not when it is read from. Placing all reads and writes in a Platform.runLater was too complex, which is why I was originally searching for an alternative solution, but now that it is only needed to protect the writes it is much less of a hassle.
In the process loop, calling Thread.`yield`() enables the code in the Platform.runLater blocks to be executed on the JavaFX thread, so the UI updates without an exception.
The code in the Run method:
val thread = new Thread {
  override def run =
    while runMode == RunMode.Running || runMode == RunMode.RunningSlow do
      executeIns
      Thread.`yield`()
      if runMode == RunMode.RunningSlow then
        Thread.sleep(50) // slow the loop down a bit
}
thread.start
Note that because yield is a Scala reserved word, it needs to be quoted with backticks!

How to see what started a thread in Xcode?

I have been asked to debug, and improve, a complex multithreaded app, written by someone I don't have access to, that uses concurrent queues (both GCD and NSOperationQueue). I don't have access to a plan of the multithreaded architecture, that's to say a high-level design document of what is supposed to happen when. I need to create such a plan in order to understand how the app works and what it's doing.
When running the code and debugging, I can see in Xcode's Debug Navigator the various threads that are running. Is there a way of identifying where in the source-code a particular thread was spawned? And is there a way of determining to which NSOperationQueue an NSOperation belongs?
For example, I can see in the Debug Navigator (or by using LLDB's "thread backtrace" command) a thread's stacktrace, but the 'earliest' user code I can view is the overridden (NSOperation*) start method - stepping back earlier in the stack than that just shows the assembly instructions for the framework that invokes that method (e.g. __block_global_6, _dispatch_call_block_and_release and so on).
I've investigated and sought various debugging methods but without success. The nearest I got was the idea of method swizzling, but I don't think that's going to work for, say, queued NSOperation threads. Forgive my vagueness please: I'm aware that having looked as hard as I have, I'm probably asking the wrong question, and probably therefore haven't formed the question quite clearly in my own mind, but I'm asking the community for help!
Thanks
The best I can think of is to put breakpoints on dispatch_async, -[NSOperation init], -[NSOperationQueue addOperation:] and so on. You could configure those breakpoints to log their stacktrace, possibly some other info (like the block's address for dispatch_async, or the address of the queue and operation for addOperation:), and then continue running. You could then look though the logs when you're curious where a particular block came from and see what was invoked and from where. (It would still take some detective work.)
You could also accomplish something similar with dtrace if the breakpoints method is too slow.

Why does the window of my VB6 application stall when calling a function written in C?

I'm using the cURL 3.9.7 library to download files from the internet, so I created a dynamic link library (.dll) written in C using VC++ 6.0. The problem is that when I call my function from within my VB6 application, the window locks up and only unlocks after the file has been downloaded. How do I solve this problem?
The problem is that when you call the function from your DLL, it "blocks" your app's execution until it gets finished. Basically, execution goes from the piece of code that makes the function call, to the code inside of the function call, and then only comes back to the next line after the function call after the code inside of the function has finished running. In fact, that's how all function calls work. You can see this for yourself by single-stepping through your code in the VB 6 development environment.
You don't normally notice this because the code inside of a function being called doesn't take very long to execute before control is returned to the caller. But in this case, since the function you're calling from the DLL is doing a lot of processing, it takes a while to execute, so it "blocks" the execution of your application's code for quite a while.
This is a good general explanation for the reason why your application window appears to be frozen. A bit more technically, it's because the message pump that is responsible for processing user interaction with on-screen elements is not running (it's part of your code that has been temporarily suspended until the function that you called finishes processing). This is a bit more difficult for a VB programmer to appreciate, since none of this nitty-gritty stuff is exposed in the world of VB. It's all happening behind the scenes, just like it is in a C program, but you don't normally have to deal with any of it. Occasionally, though, the abstraction leaks, and the nitty-gritty rears its ugly head. This is one of those cases.
The correct solution to this general problem, as others have hinted at, is to run lengthy operations on a background thread. This leaves your main thread (right now, the only one you have, the one your application is running on) free to continue processing user input, while the other thread can process the data and return that processed data to the main thread when it is finished. Of course, a single processor core can't actually do more than one thing at a time, but the magic of the operating system rapidly switching between one task and another means that you can simulate this. The mechanism for doing so involves threads.
The catch comes in the fact that the VB 6 environment does not have any type of support for creating multiple threads. You only get one thread, and that's the main thread that your application runs on. If you freeze execution of that one, even temporarily, your application freezes—as you've already found out.
However, if you're already writing a C++ DLL, there's no reason you can't create multiple threads in a VB 6 app. You just have to handle everything yourself as if you were using another lower-level language like C++. Run the C++ code on a background thread, and only return its results to the main thread when it is completely finished. In the mean time, your main thread is free.
This is still quite a bit of work, though, especially if you're inexperienced when it comes to Win32 programming and the issues surrounding multiple threads. It might be easier to find a different library that supports asynchronous function calls out-of-the-box. Antagony suggests using VB's AsyncRead method. That is probably a good option; as Karl Peterson says in the linked article, it keeps everything in pure VB 6 code, which can be a real time saver as well as a boon to future maintenance programmers. The only problem is that you'll still have to process the data somehow once you obtain it. And if that's slow, you're right back where you started from…
Check out this article, which demonstrates how to asynchronously transfer large files using a little-known method in user controls.

What are the benefits of coroutines?

I've been learning some Lua for game development. I had heard about coroutines in other languages but really only came up against them in Lua. I just don't really understand how useful they are; I've heard a lot of talk about how they can be a way to do multi-threaded things, but aren't they run in order? So what benefit would there be over normal functions that also run in order? I'm just not getting how different they are from functions except that they can pause and let another run for a second. Seems like the use case scenarios wouldn't be that huge to me.
Anyone care to shed some light as to why someone would benefit from them?
Especially insight from a game programming perspective would be nice^^
OK, think in terms of game development.
Let's say you're doing a cutscene or perhaps a tutorial. Either way, what you have is an ordered sequence of commands sent to some number of entities. An entity moves to a location, talks to a guy, then walks elsewhere. And so forth. Some commands cannot start until others have finished.
Now look back at how your game works. Every frame, it must process AI, collision tests, animation, rendering, and sound, among possibly other things. Your code only gets to "think" once per frame. So how do you put this kind of code in, where you have to wait for some action to complete before doing the next one?
If you built a system in C++, what you would have is something that ran before the AI. It would have a sequence of commands to process. Some of those commands would be instantaneous, like "tell entity X to go here" or "spawn entity Y here." Others would have to wait, such as "tell entity Z to go here and don't process anymore commands until it has gone here." The command processor would have to be called every frame, and it would have to understand complex conditions like "entity is at location" and so forth.
In Lua, it would look like this:
local entityX = game:GetEntity("entityX");
entityX:GoToLocation(locX);
local entityY = game:SpawnEntity("entityY", locY);
local entityZ = game:GetEntity("entityZ");
entityZ:GoToLocation(locZ);
repeat
    coroutine.yield();
until (entityZ:isAtLocation(locZ));
return;
On the C++ side, you would resume this script once per frame until it is done. Once it returns, you know that the cutscene is over, so you can return control to the user.
Look at how simple that Lua logic is. It does exactly what it says it does. It's clear, obvious, and therefore very difficult to get wrong.
The power of coroutines is in being able to partially accomplish some task, wait for a condition to become true, then move on to the next task.
Coroutines in a game:
Easy to use, easy to screw up when used in many places.
Just be careful and don't use them in many places.
Don't make your Entire AI code dependent on Coroutines.
Coroutines are good for making a quick fix when a state is introduced which did not exist before.
This is exactly what Java does with sleep() and wait().
Both functions are the best ways to make it impossible to debug your game.
If I were you I would completely avoid any code which has to use a Wait() function like a Coroutine does.
The OpenGL API is something you should take note of. It never uses a wait() function but instead uses a clean state machine which knows exactly what state each object is in.
If you use coroutines you end up with so many stateless pieces of code that it will almost surely be overwhelming to debug.
Coroutines are good when you are making an application like Text Editor ..bank application .. server ..database etc (not a game).
Bad when you are making a game where anything can happen at any point of time, you need to have states.
So, in my view coroutines are a bad way of programming and an excuse to write small stateless code.
But that's just me.
It's more like a religion. Some people believe in coroutines, some don't. The usecase, the implementation and the environment all together will result into a benefit or not.
Don't trust benchmarks which try to prove that coroutines on a multicore CPU are faster than a loop in a single thread: it would be a shame if it were slower!
If this runs later on some hardware where all cores are always under load, it will turn out to be slower - oops...
So there is no benefit per se.
Sometimes it's convenient to use. But if you end up with tons of coroutines yielding and states that went out of scope you'll curse coroutines. But at least it isn't the coroutines framework, it's still you.
We use them on a project I am working on. The main benefit for us is that sometimes with asynchronous code, there are points where it is important that certain parts are run in order because of some dependencies. If you use coroutines, you can force one process to wait for another process to complete. They aren't the only way to do this, but they can be a lot simpler than some other methods.
I'm just not getting how different they are from functions except that
they can pause and let another run for a second.
That's a pretty important property. I worked on a game engine which used them for timing. For example, we had an engine that ran at 10 ticks a second, and you could WaitTicks(x) to wait x number of ticks, and in the user layer, you could run WaitFrames(x) to wait x frames.
Even professional native concurrency libraries use the same kind of yielding behaviour.
Lots of good examples for game developers. I'll give another in the application extension space. Consider the scenario where the application has an engine that can run a users routines in Lua while doing the core functionality in C. If the user needs to wait for the engine to get to a specific state (e.g. waiting for data to be received), you either have to:
multi-thread the C program to run Lua in a separate thread and add in locking and synchronization methods,
abend the Lua routine and retry from the beginning with a state passed to the function to skip anything already done, lest you rerun some code that should only be run once, or
yield the Lua routine and resume it once the state has been reached in C
The third option is the easiest for me to implement, avoiding the need to handle multi-threading on multiple platforms. It also allows the user's code to run unmodified, appearing as if the function they called took a long time.

How can threads be avoided?

I've read a lot recently about how writing multi-threaded apps is a huge pain in the neck, and have learned enough about the topic to understand, at least at some level, why it is so.
I've read that using functional programming techniques can help alleviate some of this pain, but I've never seen a simple example of functional code that is concurrent. So, what are some alternatives to using threads? At least, what are some ways to abstract them away so you needn't think about things like locking and whether a particular library's objects are thread-safe.
I know Google's MapReduce is supposed to help with the problem, but I haven't seen a succinct explanation of it.
Although I'm giving a specific example below, I'm more curious of general techniques than solving this specific problem (using the example to help illustrate other techniques would be helpful though).
I came to the question when I wrote a simple web crawler as a learning exercise. It works pretty well, but it is slow. Most of the bottleneck comes from downloading pages. It is currently single threaded, and thus only downloads a single page at a time. Thus, if the pages can be downloaded concurrently, it would speed things up dramatically, even if the crawler ran on a single processor machine. I looked into using threads to solve the issue, but they scare me. Any suggestions on how to add concurrency to this type of problem without unleashing a terrible threading nightmare?
The reason functional programming helps with concurrency is not because it avoids using threads.
Instead, functional programming preaches immutability, and the absence of side effects.
This means that an operation can be scaled out to N threads or processes without having to worry about messing with shared state.
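As a tiny illustration (not from the answer itself, just a sketch of the principle), a pure function over immutable inputs can be fanned out across threads without any locking:

import java.util.List;
import java.util.stream.Collectors;

// A minimal sketch: countWords touches no shared or mutable state, so mapping it
// across the input in parallel needs no locks and cannot produce a data race.
public class PureMapping {
    static int countWords(String page) {
        return page.isBlank() ? 0 : page.trim().split("\\s+").length;
    }

    public static void main(String[] args) {
        List<String> pages = List.of("one two three", "four five", "six");
        List<Integer> counts = pages.parallelStream()      // fan the work out across cores
                                    .map(PureMapping::countWords)
                                    .collect(Collectors.toList());
        System.out.println(counts);                        // [3, 2, 1]
    }
}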
Actually, threads are pretty easy to handle until you need to synchronize them. Usually, you use a thread pool to add tasks and wait till they are finished.
It is when threads need to communicate and access shared data structures that multi-threading becomes really complicated. As soon as you have two locks, you can get deadlocks, and this is where multithreading gets really hard. Sometimes your locking code can be wrong by just a few instructions. In that case, you may only see the bugs in production, on multi-core machines (if you developed on a single core - happened to me), or they may be triggered by some other hardware or software. Unit testing doesn't help much here; testing finds bugs, but you can never be as sure as in "normal" apps.
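For the easy case described above, a minimal Java sketch might look like this (the URLs and fetch() are placeholders):

import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Independent tasks are handed to a pool and the caller simply waits until they
// are all finished. No shared mutable state, so no locks are needed.
public class PoolSketch {
    public static void main(String[] args) throws InterruptedException {
        List<Callable<String>> tasks = List.of(
                () -> fetch("http://example.com/a"),
                () -> fetch("http://example.com/b"),
                () -> fetch("http://example.com/c"));
        ExecutorService pool = Executors.newFixedThreadPool(3);
        pool.invokeAll(tasks);  // blocks until every task has completed
        pool.shutdown();
    }

    static String fetch(String url) {
        return "";              // placeholder: download the page and return its body
    }
}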
I'll add an example of how functional code can be used to safely make code concurrent.
Here is some code you might want to run in parallel, so you don't have to wait for one file to finish before starting to download the next:
void DownloadHTMLFiles(List<string> urls)
{
    foreach (string url in urls)
    {
        DownloadOneFile(url); // download html and save it to a file with a name based on the url - perhaps used for caching.
    }
}
If you have a number of files the user might spend a minute or more waiting for them all. We can re-write this code functionally like this, and it basically does the exact same thing:
urls.ForEach(DownloadOneFile);
Note that this still runs sequentially. However, not only is it shorter, we've gained an important advantage here. Since each call to the DownloadOneFile function is completely isolated from the others (for our purposes, available bandwidth isn't an issue) you could very easily swap out the ForEach function for another very similar function: one that kicks off each call to DownloadOneFile on a separate thread from a threadpool.
It turns out .NET has just such a function available using Parallel Extensions. So, by using functional programming you can change one line of code and suddenly have something run in parallel that used to run sequentially. That's pretty powerful.
There are a couple of brief mentions of asynchronous models but no one has really explained it so I thought I'd chime in. The most common method I've seen used as an alternative for multi-threading is asynchronous architectures. All that really means is that instead of executing code sequentially in a single thread, you use a polling method to initiate some functions and then come back and check periodically until there's data available.
This really only works in models like your aforementioned crawler, where the real bottleneck is I/O rather than CPU. In broad strokes, the asynchronous approach would initiate the downloads on several sockets, and a polling loop periodically checks to see if they're finished downloading and when that's done, we can move on to the next step. This allows you to run several downloads that are waiting on the network, by context switching within the same thread, as it were.
The multi-threaded model would work much the same, except using a separate thread rather than a polling loop checking multiple sockets in the same thread. In an I/O bound application, asynchronous polling works almost as well as threading for many use cases, since the real problem is simply waiting for the I/O to complete and not so much the waiting for the CPU to process the data.
Another real world example is for a system that needed to execute a number of other executables and wait for results. This can be done in threads, but it's also considerably simpler and almost as effective to simply fire off several external applications as Process objects, then check back periodically until they're all finished executing. This puts the CPU-intensive parts (the running code in the external executables) in their own processes, but the data processing is all handled asynchronously.
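As a rough Java sketch of that last approach (the command lines are placeholders):

import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

// Launch several external programs, then poll until they have all exited.
public class RunAndPoll {
    public static void main(String[] args) throws IOException, InterruptedException {
        List<Process> running = new ArrayList<>();
        for (String input : List.of("job1.dat", "job2.dat", "job3.dat")) {
            running.add(new ProcessBuilder("worker.exe", input).start());
        }
        while (running.stream().anyMatch(Process::isAlive)) {
            Thread.sleep(250);   // check back periodically
            // free to do other work here while the child processes grind away
        }
        // all processes have exited; collect their results here
    }
}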
The Python FTP server library I work on, pyftpdlib, uses the Python asyncore library to handle serving FTP clients with only a single thread, and asynchronous socket communication for file transfers and command/response.
For further reading, see the Python Twisted library's page on Asynchronous Programming - while somewhat specific to using Twisted, it also introduces async programming from a beginner's perspective.
Concurrency is quite a complicated subject in computer science, which demands good understanding of hardware architecture as well as operating system behavior.
Multi-threading has many implementations based on your hardware and your hosting OS, and as tough as it already is, the pitfalls are numerous. It should be noted that in order to achieve "true" concurrency, threads are the only way to go. Basically, threads are the only way for you as a programmer to share resources between different parts of your software while allowing them to run in parallel. Keep in mind that a standard CPU (dual/multi-core aside) can only do one thing at a time. Concepts like context switching now come into play, and they have their own set of rules and limitations.
I think you should seek more generic background on the subject, like you are saying, before you go about implementing concurrency in your program.
I guess the best place to start is the wikipedia article on concurrency, and go on from there.
What typically makes multi-threaded programming such a nightmare is when threads share resources and/or need to communicate with each other. In the case of downloading web pages, your threads would be working independently, so you may not have much trouble.
One thing you may want to consider is spawning multiple processes rather than multiple threads. In the case you mention--downloading web pages concurrently--you could split the workload up into multiple chunks and hand each chunk off to a separate instance of a tool (like cURL) to do the work.
If your goal is to achieve concurrency it will be hard to get away from using multiple threads or processes. The trick is not to avoid it but rather to manage it in a way that is reliable and non-error prone. Deadlocks and race conditions in particular are two aspects of concurrent programming that are easy to get wrong. One general approach to manage this is to use a producer/consumer queue... threads write work items to the queue and workers pull items from it. You must make sure you properly synchronize access to the queue and you're set.
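A bare-bones Java sketch of that producer/consumer pattern, using a built-in blocking queue so the synchronization is already handled (the items are placeholders):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// BlockingQueue handles the locking, so producers and consumers never touch
// shared state directly.
public class WorkQueueSketch {
    public static void main(String[] args) {
        BlockingQueue<String> queue = new ArrayBlockingQueue<>(100);

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 10; i++) {
                    queue.put("http://example.com/page" + i);   // blocks if the queue is full
                }
                queue.put("STOP");                              // poison pill: tells the worker to quit
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread worker = new Thread(() -> {
            try {
                String item;
                while (!(item = queue.take()).equals("STOP")) { // blocks until an item is available
                    System.out.println("processing " + item);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        worker.start();
    }
}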
Also, depending on your problem, you may also be able to create a domain specific language which does away with concurrency issues, at least from the perspective of the person using your language... of course the engine which processes the language still needs to handle concurrency, but if this will be leveraged across many users it could be of value.
There are some good libraries out there.
java.util.concurrent.ExecutorCompletionService takes tasks which return values (Callables), runs them in background threads, then bungs the completed Futures in a queue for you to process further as they complete. Of course, this is Java 5 and later, so isn't available everywhere.
In other words, all your code is single threaded - but where you can identify stuff safe to run in parallel, you can farm it off to a suitable library.
Point is, if you can make the tasks independent, then thread safety isn't impossible to achieve with a little thought - though it is strongly recommended you leave the complicated bit (like implementing the ExecutorCompletionService) to an expert...
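To make that concrete, here is a rough sketch of the completion-service idea applied to the crawler (fetch() and the URLs are placeholders):

import java.util.List;
import java.util.concurrent.CompletionService;
import java.util.concurrent.ExecutorCompletionService;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

// Downloads run on background threads; the single-threaded caller consumes
// pages in whatever order they finish.
public class CrawlerSketch {
    public static void main(String[] args) throws Exception {
        List<String> urls = List.of("http://example.com/1", "http://example.com/2");
        ExecutorService pool = Executors.newFixedThreadPool(4);
        CompletionService<String> completion = new ExecutorCompletionService<>(pool);

        for (String url : urls) {
            completion.submit(() -> fetch(url));   // each task returns the page body
        }
        for (int i = 0; i < urls.size(); i++) {
            String page = completion.take().get(); // blocks until the next finished download
            System.out.println(page.length() + " bytes downloaded");
            // parse the page, extract links, etc. -- all back on this one thread
        }
        pool.shutdown();
    }

    static String fetch(String url) {
        return "";                                 // placeholder: download and return the page
    }
}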
One simple way to avoid threading in your simple scenario is to download from different processes. The main process invokes other processes with parameters that make them download the files to a local directory, and then the main process can do the real job.
I don't think there is any simple solution to those problems. It's not a threading problem; it's the concurrency that breaks the human mind.
You might watch the MSDN video on the F# language: PDC 2008: An introduction to F#
This includes the two things you are looking for. (Functional + Asynchronous)
For python, this looks like an interesting approach: http://members.verizon.net/olsongt/stackless/why_stackless.html#introduction
Use Twisted. "Twisted is an event-driven networking engine written in Python" http://twistedmatrix.com/trac/. With it, I could make 100 asynchronous http requests at a time without using threads.
Your specific example is seldom solved with multi-threading. As many have said, this class of problem is IO-bound, meaning the processor has very little work to do: it spends most of its time waiting for some data to arrive over the wire so it can process that, and similarly it has to wait for disk buffers to flush so that it can put more of the recently downloaded data on disk.
The path to performance is through the select() facility, or an equivalent system call. The basic process is to open a number of sockets (for the web crawler downloads) and file handles (for storing them to disk). Next you set all of the different sockets and file handles to non-blocking mode, meaning that instead of making your program wait until data is available to read after issuing a request, the call returns right away with a special code (usually EAGAIN) to indicate that no data is ready. If you looped through all of the sockets in this way you would be polling, which works but is still a waste of CPU resources, because your reads and writes will almost always return with EAGAIN.
To get around this, all of the sockets and file handles are collected into an fd_set, which is passed to the select system call. Your program then blocks, waiting on ANY of the sockets, and select wakes your program when there's some data on any of the streams to process.
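The question is language-agnostic, so for illustration here is roughly the same pattern using Java NIO's Selector, the Java analogue of select(); the hosts are placeholders, error handling is omitted, and the response bytes are discarded rather than written to disk:

import java.net.InetSocketAddress;
import java.nio.ByteBuffer;
import java.nio.channels.SelectionKey;
import java.nio.channels.Selector;
import java.nio.channels.SocketChannel;
import java.nio.charset.StandardCharsets;
import java.util.List;

// A rough sketch: register non-blocking sockets with a selector, then sleep
// until any of them has something to do.
public class SelectSketch {
    public static void main(String[] args) throws Exception {
        List<String> hosts = List.of("example.com", "example.org");
        Selector selector = Selector.open();
        for (String host : hosts) {
            SocketChannel ch = SocketChannel.open();
            ch.configureBlocking(false);                   // never block on this socket
            ch.connect(new InetSocketAddress(host, 80));
            ch.register(selector, SelectionKey.OP_CONNECT);
        }
        ByteBuffer buf = ByteBuffer.allocate(8192);
        int remaining = hosts.size();
        while (remaining > 0) {
            selector.select();                             // sleep until any channel is ready
            for (SelectionKey key : selector.selectedKeys()) {
                SocketChannel ch = (SocketChannel) key.channel();
                if (key.isConnectable() && ch.finishConnect()) {
                    ch.write(ByteBuffer.wrap(
                            "GET / HTTP/1.0\r\n\r\n".getBytes(StandardCharsets.US_ASCII)));
                    key.interestOps(SelectionKey.OP_READ); // now wait for the response
                } else if (key.isReadable()) {
                    buf.clear();
                    if (ch.read(buf) == -1) {              // server closed: this download is done
                        ch.close();
                        remaining--;
                    }
                    // otherwise: hand buf off to whatever saves the page to disk
                }
            }
            selector.selectedKeys().clear();
        }
    }
}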
The other common case, compute-bound work, is without a doubt best addressed with some sort of true parallelism (as opposed to the asynchronous concurrency presented above) to access the resources of multiple CPUs. In the case that your CPU-bound task is running on a single-threaded architecture, definitely avoid any concurrency, as the overhead will actually slow your task down.
Threads are not to be avoided nor are they "difficult". Functional programming is not necessarily the answer either. The .NET framework makes threading fairly simple. With a little thought you can make reasonable multithreaded programs.
Here's a sample of your webcrawler (in VB.NET)
Imports System.Threading
Imports System.Net

Module modCrawler

    Class URLtoDest
        Public strURL As String
        Public strDest As String
        Public Sub New(ByVal _strURL As String, ByVal _strDest As String)
            strURL = _strURL
            strDest = _strDest
        End Sub
    End Class

    Class URLDownloader
        Public id As Integer
        Public url As URLtoDest
        Public Sub New(ByVal _url As URLtoDest)
            url = _url
        End Sub
        Public Sub Download()
            Using wc As New WebClient()
                wc.DownloadFile(url.strURL, url.strDest)
                Console.WriteLine("Thread Finished - " & id)
            End Using
        End Sub
    End Class

    Public Sub Download(ByVal ud As URLtoDest)
        Dim dldr As New URLDownloader(ud)
        Dim thrd As New Thread(AddressOf dldr.Download)
        dldr.id = thrd.ManagedThreadId
        thrd.SetApartmentState(ApartmentState.STA)
        thrd.IsBackground = False
        Console.WriteLine("Starting Thread - " & thrd.ManagedThreadId)
        thrd.Start()
    End Sub

    Sub Main()
        Dim lstUD As New List(Of URLtoDest)
        lstUD.Add(New URLtoDest("http://stackoverflow.com/questions/382478/how-can-threads-be-avoided", "c:\file0.txt"))
        lstUD.Add(New URLtoDest("http://stackoverflow.com/questions/382478/how-can-threads-be-avoided", "c:\file1.txt"))
        lstUD.Add(New URLtoDest("http://stackoverflow.com/questions/382478/how-can-threads-be-avoided", "c:\file2.txt"))
        lstUD.Add(New URLtoDest("http://stackoverflow.com/questions/382478/how-can-threads-be-avoided", "c:\file3.txt"))
        lstUD.Add(New URLtoDest("http://stackoverflow.com/questions/382478/how-can-threads-be-avoided", "c:\file4.txt"))
        lstUD.Add(New URLtoDest("http://stackoverflow.com/questions/382478/how-can-threads-be-avoided", "c:\file5.txt"))
        lstUD.Add(New URLtoDest("http://stackoverflow.com/questions/382478/how-can-threads-be-avoided", "c:\file6.txt"))
        lstUD.Add(New URLtoDest("http://stackoverflow.com/questions/382478/how-can-threads-be-avoided", "c:\file7.txt"))
        lstUD.Add(New URLtoDest("http://stackoverflow.com/questions/382478/how-can-threads-be-avoided", "c:\file8.txt"))
        lstUD.Add(New URLtoDest("http://stackoverflow.com/questions/382478/how-can-threads-be-avoided", "c:\file9.txt"))

        For Each ud As URLtoDest In lstUD
            Download(ud)
        Next

        ' you will see this message in the middle of the text
        ' pressing a key before all files are done downloading aborts the threads that aren't finished
        Console.WriteLine("Press any key to exit...")
        Console.ReadKey()
    End Sub

End Module
