Office Automation with #import harmful? - visual-c++

http://code.msdn.microsoft.com/office/CppAutomateOutlook-55251528 states:
[...] It is very powerful, but often not recommended because of
reference-counting problems that typically occur when used with the
Microsoft Office applications. [...]
Which reference-counting problems are specifically meant here?
For example, does it apply to this particular example?
As in the example, I just want to open Outlook, create an appointment, and be done.
I wanted to use #import, but this statement makes me afraid to use it ...

A circular reference happens when you have two or more objects holding references to each other, directly or indirectly. In COM, it means that circularly linked objects have called IUnknown::AddRef on each other.
In the case of Excel automation, that could happen if you connect your event handler (sink) to Excel objects sourcing events (via IConnectionPoint::Advise). That way you may be holding a reference to, e.g., the Application object, while the Application object holds a reference to your sink.
This problem is not specific to the smart pointers generated by the VC++ #import directive. It's about how you handle the shutdown of the COM objects when you no longer need them. You should explicitly break all connections you've made (i.e., call IConnectionPoint::Unadvise), and call any explicit shutdown API the object may expose (e.g., Workbook::Close or Application::Quit). Then you should explicitly release your reference (e.g., call workbookPtr.Release() on a smart pointer).
That said, if you don't handle any COM events sourced by Excel, you shouldn't worry much; the chance that you create a circular reference is low. Besides, Excel is an out-of-process COM server, and COM has some garbage-collection logic in place to manage the lifetime of out-of-process servers. However, while your application is still open, the Excel process will be kept alive until all references to its objects have been released, or Application::Quit has been called.
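To make the shutdown sequence described above concrete, here is a minimal teardown sketch (not taken from the linked sample; it assumes the Excel type library has been #imported into an Excel namespace and that dwEventCookie came from an earlier IConnectionPoint::Advise call):

    // Sketch only: tear down an #import-based Excel client in the right order.
    // Assumes something along the lines of
    //   #import "libid:00020813-0000-0000-C000-000000000046"  // Excel type library
    // (any rename() attributes needed to avoid clashes with the Windows headers are omitted).
    #include <windows.h>
    #include <comdef.h>
    #include <ocidl.h>   // IConnectionPoint

    void ShutdownExcel(Excel::_ApplicationPtr& spApp,
                       IConnectionPoint* pConnPoint, DWORD dwEventCookie)
    {
        if (pConnPoint)
        {
            pConnPoint->Unadvise(dwEventCookie);  // 1. break the sink <-> source cycle
            pConnPoint->Release();
        }
        if (spApp)
        {
            spApp->Quit();      // 2. call the explicit shutdown API
            spApp.Release();    // 3. drop our last reference; only now can the
        }                       //    out-of-process EXCEL.EXE instance exit
    }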

That is pretty nonsensical. Programming Office interop with the aid of #import is the standard and recommended way. The smart pointer types it auto-generates are explicitly intended to get reference counting done automatically for you so you can't forget to call Release(). There are a few sharp edges; you do have to understand what smart pointers can and cannot do.
This is otherwise par for the course for the team behind the All-In-One Code Framework. The samples are created by a support team in Shanghai, originally hired to help out in the MSDN forums. They don't have the kind of credentials you'd expect from a Microsoft programmer working in Redmond, and their snippets are not reviewed. Some of them are outright ill-conceived. If you've ever asked questions at the MSDN forums and seen the answers they post, then you know what I mean.
Their Solution2.cpp sample uses late binding through IDispatch. That's definitely the hard way to interop with Office; you get no help whatsoever when you write the code. IntelliSense can't give you any useful information when you write the method call, nor can the compiler tell you that you missed an argument or got an argument type wrong. Your program will fail at runtime with an opaque error code, like DISP_E_BADVARTYPE or DISP_E_BADPARAMCOUNT. And the Release() calls have to be made explicitly, which of course makes it easier to miss one. These are problems you don't have when you use the smart pointers; they give you auto-completion and type checking. You can see for yourself how much smaller and more readable Solution1.cpp is.
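For comparison, here is a minimal sketch of what the smart-pointer route looks like for the scenario in the question (it is not the sample verbatim; the #import line, dependent type libraries and any rename() attributes depend on your Office installation):

    // Sketch only: create an Outlook appointment through #import-generated
    // smart pointers. Dependent type libraries (Office, VBIDE) and rename()
    // attributes are omitted for brevity.
    #include <windows.h>
    #include <comdef.h>
    #include <cstdio>
    #import "libid:00062FFF-0000-0000-C000-000000000046" named_guids   // Outlook type library

    int main()
    {
        CoInitializeEx(nullptr, COINIT_APARTMENTTHREADED);
        try
        {
            // Starts Outlook (or attaches to the running instance).
            Outlook::_ApplicationPtr spApp(__uuidof(Outlook::Application));

            // CreateItem returns IDispatch*; assigning it to _AppointmentItemPtr
            // performs the QueryInterface for us.
            Outlook::_AppointmentItemPtr spAppt =
                spApp->CreateItem(Outlook::olAppointmentItem);

            spAppt->Subject  = _bstr_t(L"Project review");
            spAppt->Location = _bstr_t(L"Room 101");
            spAppt->Save();   // writes the appointment into the default calendar

            // No explicit Release() calls: the smart pointers do that when they
            // go out of scope, even if an exception is thrown.
        }
        catch (const _com_error& e)
        {
            std::printf("COM error 0x%08lX\n", static_cast<unsigned long>(e.Error()));
        }
        CoUninitialize();
        return 0;
    }

IntelliSense can complete Subject, Location and Save for you here, and the compiler checks the types; none of that is available with the IDispatch route.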
Diagnosing a missed call to Release() is otherwise easy: your program completes, but you'll still see Outlook.exe running in Task Manager. That's something you'll get used to checking anyway; it will also happen when you debug your program, find a bug, and stop the program to make a correction. Which of course also prevents Release() from being called, so Outlook will keep running. You have to kill it yourself.
Do consider writing this kind of code in a managed language like C# or VB.NET. You'll get a lot more help if you have a problem and you'll find lots of sample code. And the garbage collector never forgets to make a Release call. It is just a bit slow at doing so.

Related

What is the best way to understand and analyze a multithreading code?

I'm not looking for programming techniques. My question is rather about the best way to understand code developed by a third party.
I have the code for an application in a specific language (it could be C/C++, Java, etc.). This code uses several threads to control different processes. The application generates a log that shows all calls to the relevant functions for each thread.
I have to analyze this code to understand how it operates and be able to improve the algorithm. I have worked very little with threads, so I don't know the most convenient way to start the analysis and follow the execution of each thread.
Could you give me any recommendation?
If you are able to contact any of the code's original developers, having a conversation with them (by voice or by email) and asking them to describe how they intended things to work is always preferable to only trying to reverse-engineer their intent by looking at the code. If you can't contact the developers directly, then perhaps there is a library-specific developer's forum or other on-line resource where you can discuss the library's structure with people who have experience using/debugging it.
If that's not an option (or if you've done that and still don't feel like you understand things well enough), then I often find that profiling (either via a profiling tool, or just by temporarily putting printf() [or similar] tracing-calls into the codebase at various places and seeing what gets printed when) is a good way to find out which parts of the code are actually being used at which stages of the program's execution. That will help you confirm (or disprove) your theories about how the codebase works. Knowing where and when each thread is spawned, where its entry-function is, and where/when it gets joined again by its parent thread are particularly useful.
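For instance, a throwaway tracing helper along these lines (a sketch assuming a C++ codebase; the names are made up) makes it visible which thread reaches which function, and when:

    // Hypothetical tracing helper: sprinkle TRACE_HERE(); into the functions
    // you care about and watch which thread hits them, and in what order.
    #include <chrono>
    #include <cstdio>
    #include <sstream>
    #include <thread>

    inline void TraceHere(const char* file, int line, const char* func)
    {
        std::ostringstream tid;
        tid << std::this_thread::get_id();
        const auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(
            std::chrono::steady_clock::now().time_since_epoch()).count();
        std::printf("[%lld ms][thread %s] %s:%d %s\n",
                    static_cast<long long>(ms), tid.str().c_str(), file, line, func);
    }

    #define TRACE_HERE() TraceHere(__FILE__, __LINE__, __func__)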
Finally, start looking at the various pieces of data (e.g. objects and member variables) each thread examines and/or modifies, and how access to each of those pieces of data is synchronized/serialized. Assuming the code isn't buggy, the critical sections of the codebase are good indicators of where inter-thread communication is happening.

When exactly am I required to set objects to nothing in classic asp?

On one hand the advice to always close objects is so common that I would feel foolish to ignore it (e.g. VBScript Out Of Memory Error).
However it would be equally foolish to ignore the wisdom of Eric Lippert, who appears to disagree: http://blogs.msdn.com/b/ericlippert/archive/2004/04/28/when-are-you-required-to-set-objects-to-nothing.aspx
I've worked to fix a number of web apps with OOM errors in classic ASP. My first (time-consuming) task is always to search the code for unclosed objects, and objects not set to Nothing.
But I've never been 100% convinced that this has helped. (That said, I have found it hard to pinpoint exactly what DOES help...)
This post by Eric is talking about standalone VBScript files, not classic ASP written in VBScript. See the comments, then Eric's own comment:
Re: ASP -- excellent point, and one that I had not considered. In ASP it is sometimes very difficult to know where you are and what scope you're in.
So from this I can say that what he wrote does not apply to classic ASP, i.e. you should still Set everything to Nothing.
As for memory issues, I think that assigning objects (or arrays) to global scope like Session or Application is the main reason for such problems. That's the first thing I would look for and rewrite: hold only a single identifier in the Session, then use a database to manage the data.
Basically by setting a COM object to Nothing, you are forcing its terminator to run deterministically, which gives you the opportunity to handle any errors it may raise.
If you don't do it, you can get into a situation like the following:
Your code raises an error
The error isn't handled in your code and therefore ...
other objects instantiated in your code go out of scope, and their terminators run
one of the terminators raises an error
and the error that propagates is the one raised by that terminator, masking the original error.
I do remember from the dark and distant past that it was specifically recommended to close ADO objects. I'm not sure if this was because of a bug in ADO objects, or simply for the above reason (which applies more generally to any objects that can raise errors in their terminators).
And this recommendation is often repeated, though often without any credible reason. ("While ASP should automatically close and free up all object instantiations, it is always a good idea to explicitly close and free up object references yourself").
It's worth noting that in the article, he's not saying you should never worry about setting objects to nothing - just that it should not be the default behaviour for every object in every script.
Though I do suspect he's a little too quick to dismiss the "I saw this elsewhere" style of coding behaviour, I'm willing to bet that there is a reason Eric didn't consider, one that has caused this to be passed along as a hard and fast rule: dealing with junior programmers.
When you start looking more closely at the Dreyfus model of skill acquisition, you see that at the beginning levels of acquiring a new skill, learners need simple to follow recipes. They do not yet have the knowledge or ability to make the judgement calls Eric qualifies the recommendation with later on.
Think back to when you first started programming. Could you readily judge whether you were "set[ting] expensive objects to Nothing when you are done with them, if you are done with them well before they go out of scope"? Did you really know which objects were expensive, or when they truly went out of scope?
Thus, most entry level programmers are simply told "always set every object to Nothing when you are done with it" because it is within their grasp to understand and follow. Unfortunately, not many programmers take the time to self-educate, learn, and grow into the higher-level Dreyfus stages where you can use the more nuanced situational approach.
And then we come back to my earlier statement: even the best of us started out at that earlier stage, where we reflexively closed all objects because that was the best we were capable of. We left behind large bodies of code that people look at now, projecting our current competence backwards onto that earlier work and assuming we did it for reasons they don't understand.
I've got to get going, but I hope to expand this a little further...

declare code as side-effects free using microsoft code contracts

Is there a way to declare a method as side-effects free using Microsoft Code Contracts (.net 4)?
The [Pure] attribute might be what you're looking for. Just attach it to your method, and Code Contracts will assume it doesn't involve any state changes. Note that it doesn't actually enforce or check anything; it just tells the system to make that assumption, so it's up to you to make sure you're using it appropriately.

Achieving Thread-Safety

Question How can I make sure my application is thread-safe? Are there any common practices, testing methods, things to avoid, or things to look for?
Background I'm currently developing a server application that performs a number of background tasks in different threads and communicates with clients using Indy (using another bunch of automatically generated threads for the communication). Since the application should be highly available, a program crash is a very bad thing, and I want to make sure that the application is thread-safe. No matter what I do, from time to time I discover a piece of code that throws an exception that never occurred before, and in most cases I realize that it is some kind of synchronization bug where I forgot to synchronize my objects properly. Hence my question concerning best practices, testing of thread-safety and things like that.
mghie: Thanks for the answer! I should perhaps be a little more precise. Just to be clear: I know the principles of multithreading, I use synchronization (monitors) throughout my program, and I know how to differentiate threading problems from other implementation problems. But nevertheless, I keep forgetting to add proper synchronization from time to time. Just to give an example, I used the RTL sort function in my code. It looked something like
FKeyList.Sort (CompareKeysFunc);
Turns out that I had to synchronize FKeyList while sorting. It just didn't come to mind when I initially wrote that simple line of code. It's these things I want to talk about. What are the places where one easily forgets to add synchronization code? How do YOU make sure that you added sync code in all the important places?
You can't really test for thread-safety. All you can do is show that your code isn't thread-safe, but if you know how to do that, you already know what to do in your program to fix that particular bug. It's the bugs you don't know about that are the problem, and how would you write tests for those? Apart from that, threading problems are much harder to find than other problems, as the act of debugging can already alter the behaviour of the program. Things will differ from one program run to the next, from one machine to another. The number of CPUs and CPU cores, the number and kind of programs running in parallel, the exact order and timing of things happening in the program - all of this and much more will influence the program's behaviour. [I actually wanted to add the phase of the moon and stuff like that to this list, but you get my meaning.]
My advice is to stop seeing this as an implementation problem, and start to look at this as a program design problem. You need to learn and read all that you can find about multi-threading, whether it is written for Delphi or not. In the end you need to understand the underlying principles and apply them properly in your programming. Primitives like critical sections, mutexes, conditions and threads are something the OS provides, and most languages only wrap them in their libraries (this ignores things like green threads as provided by for example Erlang, but it's a good point of view to start out from).
I'd say start with the Wikipedia article on threads and work your way through the linked articles. I have started with the book "Win32 Multithreaded Programming" by Aaron Cohen and Mike Woodring - it is out of print, but maybe you can find something similar.
Edit: Let me briefly follow up on your edited question. All access to data that is not read-only needs to be properly synchronized to be thread-safe, and sorting a list is not a read-only operation. So obviously one would need to add synchronization around all accesses to the list.
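For illustration only, here is the same idea sketched in C++ (the question's code is Delphi): every access to the shared list, including the sort, goes through one lock.

    // Sketch: all access to the shared key list, including sorting, is guarded.
    #include <algorithm>
    #include <mutex>
    #include <vector>

    class KeyList
    {
    public:
        void Add(int key)
        {
            std::lock_guard<std::mutex> guard(m_lock);
            m_keys.push_back(key);
        }

        void Sort()
        {
            std::lock_guard<std::mutex> guard(m_lock);  // the easy-to-forget part,
            std::sort(m_keys.begin(), m_keys.end());    // just like FKeyList.Sort
        }

    private:
        std::mutex       m_lock;
        std::vector<int> m_keys;
    };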
But with more and more cores in a system constant locking will limit the amount of work that can be done, so it is a good idea to look for a different way to design your program. One idea is to introduce as much read-only data as possible into your program - locking is no longer necessary, as all access is read-only.
I have found interfaces to be a very valuable aid in designing multi-threaded programs. Interfaces can be implemented to have only methods for read-only access to the internal data, and if you stick to them you can be quite sure that a lot of the potential programming errors do not occur. You can freely share them between threads, and the thread-safe reference counting will make sure that the implementing objects are properly freed when the last reference to them goes out of scope or is assigned another value.
What you do is create objects that descend from TInterfacedObject. They implement one or more interfaces which all provide only read-only access to the internals of the object, but they can also provide public methods that mutate the object state. When you create the object you keep both a variable of the object type and an interface pointer variable. That way lifetime management is easy, because the object will be freed automatically when an exception occurs. You use the variable pointing to the object to call all methods necessary to properly set up the object. This mutates the internal state, but since it happens only in the creating thread there is no potential for conflict. Once the object is properly set up you return the interface pointer to the calling code, and since there is no way to access the object afterwards except by going through the interface pointer, you can be sure that only read-only access can be performed. By using this technique you can completely remove the locking inside of the object.
What if you need to change the state of the object? You don't. You create a new one by copying the data from the interface, and mutate the internal state of the new object afterwards. Finally you return an interface pointer to the new object.
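For illustration, roughly the same pattern sketched in C++ terms (shared_ptr plays the role of the reference-counted interface, const access the role of the read-only interface; the names are made up):

    // Sketch: build the object in one thread, then publish only a read-only,
    // reference-counted handle. Readers need no locking; "changing" the data
    // means building a new object.
    #include <memory>
    #include <string>
    #include <vector>

    class Snapshot
    {
    public:
        void Add(const std::string& item) { m_items.push_back(item); }   // set-up phase only
        const std::vector<std::string>& Items() const { return m_items; }
    private:
        std::vector<std::string> m_items;
    };

    std::shared_ptr<const Snapshot> BuildSnapshot()
    {
        auto obj = std::make_shared<Snapshot>();
        obj->Add("first");    // mutation happens before publication,
        obj->Add("second");   // only in the creating thread
        return obj;           // from here on, callers get const access only
    }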
By using this you will only need locking where you get or set such interfaces. It can even be done without locking, by using the atomic interchange functions. See this blog post by Primoz Gabrijelcic for a similar use case where an interface pointer is set.
Simple: don't use shared data. Every time you access shared data you risk running into a problem (if you forget to synchronize access). Even worse, each time you access shared data you risk blocking other threads, which will hurt your parallelization.
I know this advice is not always applicable. Still, it doesn't hurt if you try to follow it as much as possible.
EDIT: Longer response to Smasher's comment. Would not fit in a comment :(
You are totally correct. That's why I like to keep a read-only shadow copy of the main data in each reader thread. I add a version field to the structure (one 4-byte-aligned DWORD) and increment this version in the (lock-protected) data writer. The data reader compares the global and its private version (which can be done without locking) and only if they differ does it lock the structure, duplicate it into local storage, update the local version and unlock. Then it accesses the local copy of the structure. Works great if reading is the primary way to access the structure.
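A rough sketch of that versioned shadow-copy idea, written in C++ for illustration (the names are made up):

    // Writer bumps a version under the lock; each reader keeps a private copy
    // and only re-copies (under the lock) when the version has changed.
    #include <atomic>
    #include <mutex>
    #include <vector>

    struct SharedTable
    {
        std::mutex            lock;
        std::atomic<unsigned> version{0};
        std::vector<int>      data;        // protected by 'lock'
    };

    void Write(SharedTable& shared, int value)
    {
        std::lock_guard<std::mutex> guard(shared.lock);
        shared.data.push_back(value);
        shared.version.fetch_add(1, std::memory_order_release);
    }

    struct Reader
    {
        unsigned         localVersion = 0;
        std::vector<int> localCopy;

        const std::vector<int>& Get(SharedTable& shared)
        {
            // Cheap lock-free check; take the lock only when something changed.
            if (shared.version.load(std::memory_order_acquire) != localVersion)
            {
                std::lock_guard<std::mutex> guard(shared.lock);
                localCopy    = shared.data;
                localVersion = shared.version.load(std::memory_order_relaxed);
            }
            return localCopy;
        }
    };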
I'll second mghie's advice: thread safety is designed in. Read about it anywhere you can.
For a really low level look at how it is implemented, look for a book on the internals of a real time operating system kernel. A good example is MicroC/OS-II: The Real Time Kernel by Jean J. Labrosse, which contains the complete annotated source code to a working kernel along with discussions of why things are done the way they are.
Edit: In light of the improved question focusing on using a RTL function...
Any object that can be seen by more than one thread is a potential synchronization issue. A thread-safe object would follow a consistent pattern in every method's implementation, locking "enough" of the object's state for the duration of the method, or perhaps narrowed to just "long enough". It is certainly the case that any read-modify-write sequence on any part of an object's state must be done atomically with respect to other threads.
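A classic illustration of such a read-modify-write sequence, sketched in C++ (hypothetical counters, shown three ways):

    // value++ is a read-modify-write sequence and is NOT atomic across threads;
    // either protect it with a lock or use an atomic type.
    #include <atomic>
    #include <mutex>

    struct RacyCounter
    {
        int value = 0;
        void Increment() { ++value; }    // broken if called from several threads
    };

    struct LockedCounter
    {
        std::mutex lock;
        int value = 0;
        void Increment() { std::lock_guard<std::mutex> g(lock); ++value; }
    };

    struct AtomicCounter
    {
        std::atomic<int> value{0};
        void Increment() { value.fetch_add(1, std::memory_order_relaxed); }
    };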
The art lies in figuring out how to get useful work done without either deadlocking or creating an execution bottleneck.
As for finding such problems, testing won't be any guarantee. A problem that shows up in testing can be fixed. But it is extremely difficult to write either unit tests or regression tests for thread safety... so, faced with a body of existing code, your likely recourse is constant code review until the practice of thread safety becomes second nature.
As folks have mentioned and I think you know, being certain, in general, that your code is thread safe is impossible (I believe provably impossible but I would have to track down the theorem). Naturally, you want to make things easier than that.
What I try to do is:
Use a known pattern of multithreaded design: a thread pool, the actor model, the command pattern or some such approach. This way, synchronization happens in the same, uniform way throughout the application.
Limit and concentrate the points of synchronization. Write your code so that you need synchronization in as few places as possible, and keep the synchronization code in one or a few places in the code.
Write the synchronization code so that the logical relation between the values is clear both on entering and on exiting the guard. I use lots of asserts for this (your environment may limit this); see the sketch after this list.
Don't ever access shared variables without guards/synchronization. Be very clear about what your shared data is. (I've heard there are paradigms for guardless multithreaded programming, but that would require even more research.)
Write your code as cleanly, clearly and DRY-ly as possible.
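A small sketch of the third point above, asserting the invariant of the guarded data on entering and on leaving the critical section (C++ used for illustration; the class is made up):

    #include <cassert>
    #include <cstddef>
    #include <mutex>
    #include <queue>

    class BoundedQueue
    {
    public:
        explicit BoundedQueue(std::size_t capacity) : m_capacity(capacity) {}

        bool TryPush(int item)
        {
            std::lock_guard<std::mutex> guard(m_lock);
            assert(m_items.size() <= m_capacity);   // invariant on entering the guard
            if (m_items.size() == m_capacity)
                return false;
            m_items.push(item);
            assert(m_items.size() <= m_capacity);   // invariant on exiting the guard
            return true;
        }

    private:
        std::mutex      m_lock;
        std::queue<int> m_items;
        std::size_t     m_capacity;
    };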
My simple answer, combined with those answers, is:
Create your application/program in a thread-safe manner
Avoid using public static variables everywhere
It usually becomes an easy habit, but it takes some time to get used to:
program your logic (not the UI) in a functional programming language such as F#, or even Scheme or Haskell. Functional programming promotes thread-safe practice, and it also pushes you to always code towards purity.
If you use F#, there's also a clear distinction between mutable and immutable objects and variables.
Since methods (or simply functions) are first-class citizens in F# and Haskell, the code you write will also be more disciplined, with less mutable state.
Also, using the lazy evaluation style usually found in these functional languages, you can be sure that your program is safe from side effects, and you'll realize that if your code needs side effects, you have to define them explicitly. If side effects are taken into consideration, your code will be ready to take advantage of composability between the components in your code and of multicore programming.

What are all the ways that you try to make your code more functional?

so that you can make your program concurrent easily in the future.
I focus on making items Immutable. Immutable objects allow you to reason about multi-threaded code a lot easier than "thread safe" objects. The object has one visible state that can be passed between threads without any synchronization. It takes the thought out of multi-threaded programming.
If you're interested, I've published a lot of my work with immutable objects, in particular immutable collections, on Code Gallery. The name of the project is RantPack. In the collection area I have
ImmutableCollection<T>
ImmutableMap<TKey,TValue>
ImmutableAvlTree<T>
ImmutableLinkedList<T>
ImmutableArray<T>
ImmutableStack<T>
ImmutableQueue<T>
There is an additional shim layer (CollectionUtility) which will produce wrapper objects that implement BCL interfaces such as IList<T> and ICollection<T>. They can't fully implement the interfaces, since they are immutable, but all possible methods are implemented.
The source code (C#) including the unit testing is also available on the site.
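For illustration, here is a minimal sketch of the kind of structure listed above; it is not RantPack's implementation, and it is written in C++ rather than C#: a persistent stack whose operations return new values instead of mutating, so any version of it can be read from any thread without locking.

    #include <memory>
    #include <stdexcept>

    template <typename T>
    class ImmutableStack
    {
    public:
        ImmutableStack() = default;   // empty stack

        ImmutableStack Push(T value) const
        {
            return ImmutableStack(std::make_shared<Node>(std::move(value), m_head));
        }

        ImmutableStack Pop() const
        {
            if (!m_head) throw std::runtime_error("empty stack");
            return ImmutableStack(m_head->next);   // shares the remaining nodes
        }

        const T& Top() const
        {
            if (!m_head) throw std::runtime_error("empty stack");
            return m_head->value;
        }

        bool IsEmpty() const { return m_head == nullptr; }

    private:
        struct Node
        {
            Node(T v, std::shared_ptr<const Node> n)
                : value(std::move(v)), next(std::move(n)) {}
            T value;
            std::shared_ptr<const Node> next;
        };

        explicit ImmutableStack(std::shared_ptr<const Node> head) : m_head(std::move(head)) {}

        std::shared_ptr<const Node> m_head;
    };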
I program mainly in Java. I'm waiting patiently for the day when closures will be added to the language. But as I am still stuck on Java 1.4.2, even if they get added, that's not going to be for me for a long time!
That said, my main "functional" way of programming is making a lot of use of the "final" keyword. I try to have as many classes as possible completely immutable, and for the rest to have a clear distinction between what's transient and what's immutable.
Don't use member variables or global variables. Use the local stack of functions/methods. When a method uses only internally scoped variables and call parameters and returns all information using out/inout/reference parameters or return values, it is functional.
Make everything asynchronic.
Use immutable objects, messages, etc.
Communicate via queues.
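A minimal sketch of the "communicate via queues" point, in C++ for illustration: threads exchange (ideally immutable) messages through a small blocking queue instead of sharing state directly. One thread calls Send(), another loops on Receive().

    #include <condition_variable>
    #include <mutex>
    #include <queue>

    template <typename Message>
    class MessageQueue
    {
    public:
        void Send(Message msg)
        {
            {
                std::lock_guard<std::mutex> guard(m_lock);
                m_queue.push(std::move(msg));
            }
            m_ready.notify_one();
        }

        Message Receive()   // blocks until a message is available
        {
            std::unique_lock<std::mutex> guard(m_lock);
            m_ready.wait(guard, [this] { return !m_queue.empty(); });
            Message msg = std::move(m_queue.front());
            m_queue.pop();
            return msg;
        }

    private:
        std::mutex              m_lock;
        std::condition_variable m_ready;
        std::queue<Message>     m_queue;
    };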
Here's a talk from RubyConf 2008 on the subject; it's mostly Ruby-centered, but several concepts remain valid.
http://rubyconf2008.confreaks.com/better-ruby-through-functional-programming-2.html
