I want to convert a PDF to images. I am using LEADTOOLS, and to increase the speed I am using multi-threading in the following way.
string multiPagePDF = @"Manual.pdf";
string destFileName = @"output\Manual";

Task.Factory.StartNew(() =>
{
    using (RasterCodecs codecs = new RasterCodecs())
    {
        CodecsImageInfo info = codecs.GetInformation(multiPagePDF, true);
        ParallelOptions po = new ParallelOptions();
        po.MaxDegreeOfParallelism = 5;
        Parallel.For(1, info.TotalPages + 1, po, i =>
        {
            RasterImage image = codecs.Load(multiPagePDF, i);
            codecs.Save(image, destFileName + i + ".png", RasterImageFormat.Png, 0);
        });
    }
});
Is this thread-safe? Will it result in unexpected output?
I tried this a few times, and there were instances where a specific page appeared twice in the output images.
Solution
According to LEADTOOLS online chat support (which is very helpful, by the way), RasterCodecs.Load is NOT thread-safe, and the above code can produce unexpected output - in my case, page 1 occurred twice in the output set of images. The solution is to create the RasterCodecs instance inside the Parallel.For body so that each iteration accesses its own RasterCodecs.
Amyn,
As you found out, the correct way to use the RasterCodecs object in this case is this:
Task.Factory.StartNew(() =>
{
    using (RasterCodecs codecs = new RasterCodecs())
    {
        CodecsImageInfo info = codecs.GetInformation(multiPagePDF, true);
        ParallelOptions po = new ParallelOptions();
        po.MaxDegreeOfParallelism = 5;
        Parallel.For(1, info.TotalPages + 1, po, i =>
        {
            using (RasterCodecs codecs2 = new RasterCodecs())
            {
                RasterImage image = codecs2.Load(multiPagePDF, i);
                codecs2.Save(image, destFileName + i + ".png", RasterImageFormat.Png, 0);
            }
        });
    }
});
This gives you the same speed benefits when running on a multi-core processor without causing any conflicts between concurrent threads.
The LEADTOOLS RasterCodecs.Load() and RasterCodecs.Save() methods are thread-safe. The reason for creating multiple instances of the RasterCodecs class is that this class internally uses structures that hold many different loading and saving options for files. Using these structures (where these options are changed) across multiple threads can cause unpredictable results. One such property in the loading options structure is the page number. For this reason, using separate instances of this class is recommended.
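For what it's worth, the same per-thread isolation can be achieved without allocating one RasterCodecs per page by using the thread-local overload of Parallel.For. A minimal sketch, not verified against LEADTOOLS (it assumes RasterCodecs and RasterImage are disposable, as in recent LEADTOOLS versions):

Parallel.For(1, info.TotalPages + 1, po,
    () => new RasterCodecs(), // one instance per worker thread
    (i, loopState, threadCodecs) =>
    {
        using (RasterImage image = threadCodecs.Load(multiPagePDF, i))
        {
            threadCodecs.Save(image, destFileName + i + ".png", RasterImageFormat.Png, 0);
        }
        return threadCodecs; // hand the instance back for this thread's next iteration
    },
    threadCodecs => threadCodecs.Dispose()); // clean up when the worker thread finishes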
Related
I want to create a custom Kofax module. During batch processing, the scanned documents get converted to PDF files. I want to fetch these PDF files, manipulate them (add a custom footer to each PDF document), and hand them back to Kofax.
So what I know so far:
create Kofax export scripts
add a custom module to Kofax
I have the APIRef.chm (Kofax.Capture.SDK.CustomModule) and the CMSplit example project. Unfortunately, I'm struggling to get into it. Are there any resources out there that show step by step how to get started with custom module development?
So I know that the IBatch interface represents one selected batch and the IBatchCollection represents the collection of all batches.
I would just like to know how to set up a "Hello World" example that I could add my code to. I don't think I even need a WinForms application, because I only need to manipulate the PDF files, and that's it...
Since I realized that your question was rather about how to create a custom module in general, allow me to add another answer. Start with a C# Console Application.
Add Required Assemblies
A custom module requires several Kofax assemblies, all of which reside in KC's binaries folder (by default C:\Program Files (x86)\Kofax\CaptureSS\ServLib\Bin on a server).
Setup Part
Add a new User Control and a Windows Form for setup. This is purely optional - a CM might not even have a setup form, but I'd recommend adding one regardless. The user control is the most important part here - it adds the menu entry in KC Administration and initializes the form itself:
[InterfaceType(ComInterfaceType.InterfaceIsIDispatch)]
public interface ISetupForm
{
    [DispId(1)]
    AdminApplication Application { set; }

    [DispId(2)]
    void ActionEvent(int EventNumber, object Argument, out int Cancel);
}

[ClassInterface(ClassInterfaceType.None)]
[ProgId("Quipu.KC.CM.Setup")]
public class SetupUserControl : UserControl, ISetupForm
{
    private AdminApplication adminApplication;

    public AdminApplication Application
    {
        set
        {
            value.AddMenu("Quipu.KC.CM.Setup", "Quipu.KC.CM - Setup", "BatchClass");
            adminApplication = value;
        }
    }

    public void ActionEvent(int EventNumber, object Argument, out int Cancel)
    {
        Cancel = 0;
        if ((KfxOcxEvent)EventNumber == KfxOcxEvent.KfxOcxEventMenuClicked && (string)Argument == "Quipu.KC.CM.Setup")
        {
            SetupForm form = new SetupForm();
            form.ShowDialog(adminApplication.ActiveBatchClass);
        }
    }
}
Runtime Part
Since I started with a console application, I could go ahead and put all the logic into Program.cs. Note that this is for demo purposes only; I would recommend moving it into dedicated classes and forms later on. The example below logs into Kofax Capture, grabs the next available batch, and just outputs its name.
class Program
{
    static void Main(string[] args)
    {
        AppDomain.CurrentDomain.AssemblyResolve += (sender, eventArgs) => KcAssemblyResolver.Resolve(eventArgs);
        Run(args);
        return;
    }

    static void Run(string[] args)
    {
        // start processing here
        // todo encapsulate this to a separate class!
        // login to KC
        var login = new Login();
        login.EnableSecurityBoost = true;
        login.Login();
        login.ApplicationName = "Quipu.KC.CM";
        login.Version = "1.0";
        login.ValidateUser("Quipu.KC.CM.exe", false, "", "");

        var session = login.RuntimeSession;

        // todo add timer-based polling here (note: mutex!)
        var activeBatch = session.NextBatchGet(login.ProcessID);
        Console.WriteLine(activeBatch.Name);
        activeBatch.BatchClose(
            KfxDbState.KfxDbBatchReady,
            KfxDbQueue.KfxDbQueueNext,
            0,
            "");

        session.Dispose();
        login.Logout();
    }
}
Registering, COM-Visibility, and more
Registering a Custom Module is done via RegAsm.exe and ideally with the help of an AEX file. Here's an example - please refer to the documentation for more details and all available settings.
[Modules]
Minimal CM
[Minimal CM]
RuntimeProgram=Quipu/CM/Quipu.KC.CM/Quipu.KC.CM.exe
ModuleID=Quipu.KC.CM.exe
Description=Minimal Template for a Custom Module in C#
Version=1.0
SupportsTableFields=True
SupportsNonImageFiles=True
SetupProgram=Minimal CM Setup
[Setup Programs]
Minimal CM Setup
[Minimal CM Setup]
Visible=0
OCXFile=Quipu/CM/Quipu.KC.CM/Quipu.KC.CM.exe
ProgID=Quipu.KC.CM.Setup
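The RegAsm call itself is the standard .NET one - shown here as an example, with the path to be adjusted to your deployment:

RegAsm.exe Quipu.KC.CM.exe /codebase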
Last but not least, make sure your assemblies are COM-visible. In a typical C# project this means setting the ComVisible assembly attribute (or the "Make assembly COM-Visible" option in the project's assembly information):
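// AssemblyInfo.cs (standard .NET attribute; equivalent to the project setting)
using System.Runtime.InteropServices;

[assembly: ComVisible(true)]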
I put up the entire code on GitHub, feel free to fork it. Hope it helps.
Kofax exposes a batch as XML, and DBLite is basically a wrapper for said XML. The structure is explained in AcBatch.htm and AcDocs.htm (found under the CaptureSV directory). Here's the basic idea (just documents are shown):

AscentCaptureRuntime
  Batch
    Documents
      Document

A single document itself has child elements, such as pages, and multiple properties, such as Confidence, FormTypeName, and PDFGenerationFileName. The last one is what you want. Here's how you would navigate down the document collection, storing the filename in a variable named pdfFileName:
IACDataElement runtime = activeBatch.ExtractRuntimeACDataElement(0);
IACDataElement batch = runtime.FindChildElementByName("Batch");
var documents = batch.FindChildElementByName("Documents").FindChildElementsByName("Document");

for (int i = 0; i < documents.Count; i++)
{
    // 1-based index in Kofax
    var pdfFileName = documents[i + 1]["PDFGenerationFileName"];
}
Personally, I don't like this structure, so I created my own wrapper for their wrapper, but that's up to you.
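For illustration only, such a wrapper might start out like this (the class and member names are my own invention; only IACDataElement and the attribute names come from the Kofax API shown above):

// Hypothetical convenience wrapper around Kofax's XML-backed document element.
public class BatchDocument
{
    private readonly IACDataElement element;

    public BatchDocument(IACDataElement element)
    {
        this.element = element;
    }

    // Attribute names as documented in AcDocs.htm.
    public string PdfFileName
    {
        get { return element["PDFGenerationFileName"]; }
    }

    public string FormTypeName
    {
        get { return element["FormTypeName"]; }
    }
}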
With regard to the custom module itself, the sample that ships with the SDK is already a decent start. Basically, you would have a basic form that shows up if the user launches the module manually - which is entirely optional if the work happens in the background, preferably as a Windows Service. I like to start with a console application, adding forms only when needed. Here, I would launch the form as follows, or start the service. Note that I have different branches in case the user wants to install my Custom Module as a service:
else if (Environment.UserInteractive)
{
    // run as module
    Application.EnableVisualStyles();
    Application.SetCompatibleTextRenderingDefault(false);
    Application.Run(new RuntimeForm(args));
}
else
{
    // run as service
    ServiceBase.Run(new CustomModuleService());
}
The runtime itself just logs you into Kofax Capture, registers event handlers, and processes batch by batch:
// login to KC
cm = new CustomModule();
cm.Login("", "");
// add progress event handlers
cm.BatchOpened += Cm_BatchOpened;
cm.BatchClosed += Cm_BatchClosed;
cm.DocumentOpened += Cm_DocumentOpened;
cm.DocumentClosed += Cm_DocumentClosed;
cm.ErrorOccured += Cm_ErrorOccured;
// process in background thread so that the form does not freeze
worker = new BackgroundWorker();
worker.DoWork += (s, a) => Process();
worker.RunWorkerAsync();
Then, your CM fetches the next batch. This can either make use of Kofax's Batch Notification Service, or be based on a timer. For the former, just handle the BatchAvailable event of the session object:
session.BatchAvailable += Session_BatchAvailable;
For the latter, define a timer - preferably with a configurable polling interval:
pollTimer.Interval = pollIntervalSeconds * 1000;
pollTimer.Elapsed += PollTimer_Elapsed;
pollTimer.Enabled = true;
When the timer elapses, you could do the following:
private void PollTimer_Elapsed(object sender, System.Timers.ElapsedEventArgs e)
{
    mutex.WaitOne();
    ProcessBatches();
    mutex.ReleaseMutex();
}
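ProcessBatches itself can reuse the calls from the minimal example above. A sketch, assuming session and login were created at startup, and assuming NextBatchGet returns null when no batch is available (check the API reference for the exact contract):

private void ProcessBatches()
{
    IBatch activeBatch;
    // Keep fetching until no batch is ready for this module.
    while ((activeBatch = session.NextBatchGet(login.ProcessID)) != null)
    {
        Console.WriteLine(activeBatch.Name);
        // ... manipulate the batch's PDF files here ...
        activeBatch.BatchClose(KfxDbState.KfxDbBatchReady, KfxDbQueue.KfxDbQueueNext, 0, "");
    }
}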
I would like to use the Autodesk Design Automation API to extract all text and header information from a .dwg file into a JSON object. Is this possible with the Design Automation API?
Any example would help.
Thank you.
@Kaliph, yes, without a plugin in .NET/C++/Lisp code it is impossible to extract block attributes by script alone. I'd recommend .NET - it will be easier to get started with if you are not familiar with C++.
Firstly, I'd suggest you take a look at the training labs for the AutoCAD .NET API:
https://www.autodesk.com/developer-network/platform-technologies/autocad
Pick the latest version if you have installed the latest version of AutoCAD; the main workflow of the API is the same across versions, though. You can also pick C++ (ObjectARX) if you prefer.
The tutorials above demonstrate how to work with blocks, and the blog post below talks about how to get attributes:
http://through-the-interface.typepad.com/through_the_interface/2006/09/getting_autocad.html
I copied it here for convenience:
using Autodesk.AutoCAD;
using Autodesk.AutoCAD.Runtime;
using Autodesk.AutoCAD.ApplicationServices;
using Autodesk.AutoCAD.DatabaseServices;
using Autodesk.AutoCAD.EditorInput;

namespace MyApplication
{
    public class DumpAttributes
    {
        [CommandMethod("LISTATT")]
        public void ListAttributes()
        {
            Editor ed = Application.DocumentManager.MdiActiveDocument.Editor;
            Database db = HostApplicationServices.WorkingDatabase;

            // Start the transaction
            Transaction tr = db.TransactionManager.StartTransaction();
            try
            {
                // Build a filter list so that only
                // block references are selected
                TypedValue[] filList = new TypedValue[1] {
                    new TypedValue((int)DxfCode.Start, "INSERT")
                };
                SelectionFilter filter = new SelectionFilter(filList);

                PromptSelectionOptions opts = new PromptSelectionOptions();
                opts.MessageForAdding = "Select block references: ";
                PromptSelectionResult res = ed.GetSelection(opts, filter);

                // Do nothing if selection is unsuccessful
                if (res.Status != PromptStatus.OK)
                    return;

                SelectionSet selSet = res.Value;
                ObjectId[] idArray = selSet.GetObjectIds();
                foreach (ObjectId blkId in idArray)
                {
                    BlockReference blkRef = (BlockReference)tr.GetObject(blkId, OpenMode.ForRead);
                    BlockTableRecord btr = (BlockTableRecord)tr.GetObject(blkRef.BlockTableRecord, OpenMode.ForRead);
                    ed.WriteMessage("\nBlock: " + btr.Name);
                    btr.Dispose();

                    AttributeCollection attCol = blkRef.AttributeCollection;
                    foreach (ObjectId attId in attCol)
                    {
                        AttributeReference attRef = (AttributeReference)tr.GetObject(attId, OpenMode.ForRead);
                        string str = "\n Attribute Tag: " + attRef.Tag
                                   + "\n Attribute String: " + attRef.TextString;
                        ed.WriteMessage(str);
                    }
                }
                tr.Commit();
            }
            catch (Autodesk.AutoCAD.Runtime.Exception ex)
            {
                ed.WriteMessage("Exception: " + ex.Message);
            }
            finally
            {
                tr.Dispose();
            }
        }
    }
}
I have a sample on making signs on a drawing. It covers getting attributes and modifying attributes:
https://forge.autodesk.com/cloud_and_mobile/2016/02/sign-title-block-of-dwg-file-with-autocad-io-view-data-api.html
And I also have a sample on getting Table cells of a drawing:
https://forge.autodesk.com/blog/get-cell-data-autocad-table-design-automation-api
Hope these help you build the plugin for your requirements.
What do you mean by "header" information? Can you give an example?
Finding and extracting all text objects is relatively easy if you are familiar with the AutoCAD .NET API (or C++ or Lisp).
Here's an example that extracts blocks and layer names:
https://github.com/Autodesk-Forge/design.automation-.net-custom.activity.sample
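For the text part of your question, here is a minimal sketch along the same lines as the LISTATT command above: it iterates model space and collects DBText/MText contents. It assumes the usings from the sample above plus System.Collections.Generic; the JSON serialization at the end is left to whatever library you prefer:

[CommandMethod("DUMPTEXT")]
public void DumpText()
{
    Database db = HostApplicationServices.WorkingDatabase;
    var texts = new List<string>();

    using (Transaction tr = db.TransactionManager.StartTransaction())
    {
        BlockTable bt = (BlockTable)tr.GetObject(db.BlockTableId, OpenMode.ForRead);
        BlockTableRecord ms = (BlockTableRecord)tr.GetObject(
            bt[BlockTableRecord.ModelSpace], OpenMode.ForRead);

        foreach (ObjectId id in ms)
        {
            DBObject obj = tr.GetObject(id, OpenMode.ForRead);

            DBText dbText = obj as DBText;
            if (dbText != null) { texts.Add(dbText.TextString); continue; }

            MText mText = obj as MText;
            if (mText != null) { texts.Add(mText.Contents); }
        }
        tr.Commit();
    }
    // Serialize 'texts' to JSON here with the JSON library of your choice.
}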
I'm trying to optimize/multi-thread the building of a very large number of models (300+) all at once, to speed up creating the records in this table to be saved to the database in my Rails 4 app.
I tried to move as many object references as possible outside of the threads, with things like memo variables, but I'm just not sure what to try anymore.
The code I have is as follows. I tried to keep the multi-threaded code as small as possible, but I keep running into circular-dependency errors and/or not all of the records are created. Any help is appreciated.
Example 1:
def create
  @checklist = Checklist.new(checklist_params)
  respond_to do |format|
    if @checklist.save
      tasks = Task.where(:active => true)
      checklist_date_as_time = Time.parse(@checklist.date.to_s).at_beginning_of_day
      checklist_id = @checklist.id.to_i
      threads = []
      ActiveRecord::Base.transaction do
        tasks.each do |task|
          time = task.start_time
          begin
            threads << Thread.new do
              complete_time = checklist_date_as_time + time.hour.hours + time.min.minutes
              task.responses.build(task_start_time: complete_time, must_complete_by: complete_time + task.time_window, checklist_id: checklist_id, task_id: task.id)
            end
          end while (time += task.frequency.minutes) < task.end_time
          threads.map(&:join)
          task.save
        end
      end
      format.html { redirect_to @checklist, notice: 'Checklist was successfully created.' }
      format.json { render :show, status: :created, location: @checklist }
    else
      format.html { render :new }
      format.json { render json: @checklist.errors, status: :unprocessable_entity }
    end
  end
end
ActiveRecord is not "thread-safe" in this sense: a single record instance's behaviour/correctness when shared between threads is not defined or guaranteed by the framework.
The easiest answer to your question would be to perform the whole tasks = ...; ActiveRecord::Base.transaction do ... end work in one background thread (frameworks such as DelayedJob might help), so that the "heavy" computation is not part of the response cycle.
Also be aware that using multiple threads might cause you to use multiple connections, essentially draining the ActiveRecord connection pool. It also means that (depending on what happens during task.responses.build) the desired intention of ActiveRecord::Base.transaction { ... } might not hold, due to multiple connection objects being involved.
I have a list of objects in my C# 4.0 app. Suppose this list contains 100 objects of a Student class. Is there any way in the Reactive Framework to execute 10 objects in parallel at a time?
Each student object runs a method that is somewhat time-consuming, about 10 to 15 seconds. So: take the first 10 student objects from the list, wait for all 10 to finish their work, then take the next 10, and so on until the full list is processed.
I have a List<Student> with 100 items.
First, take 10 items from the list and call each object's long-running method in parallel.
Receive each object's return value and update the UI [subscription part].
The next round starts only when the first 10 complete and all their memory is released.
Repeat the same process for all the items in the list.
How do I catch the errors in each process?
How do I release each student object's resources and other resources from memory?
What is the best way to do all these things in the Reactive Framework?
This version will always have 10 students running at a time. As a student finishes, another will start. And as each student finishes, you can handle any error it had and then clean it up (this will happen before the next student starts running).
students
    .ToObservable()
    .Select(student => Observable.Defer(() => Observable.Start(() =>
    {
        // do the work for this student, then return the student plus any error
        try
        {
            student.DoWork();
            return new { Student = student, Error = (Exception)null };
        }
        catch (Exception e)
        {
            return new { Student = student, Error = e };
        }
    })))
    .Merge(10) // let 10 students be executing in parallel at all times
    .Subscribe(studentResult =>
    {
        if (studentResult.Error != null)
        {
            // handle error
        }
        studentResult.Student.Dispose(); // if your Student is IDisposable and you need to free it up
    });
This is not exactly what you asked, since it does not finish the first batch of 10 before starting the next batch. It always keeps 10 running in parallel. If you really want batches of 10, the code can be adjusted as sketched below.
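Here is roughly what that batched variant would look like, using Buffer to form groups of 10 and Concat so the next group only starts when the current one completes. A sketch only; per-student error handling would wrap DoWork in try/catch exactly as above:

students
    .ToObservable()
    .Buffer(10) // form batches of 10 students
    .Select(batch => batch
        .Select(student => Observable.Defer(() => Observable.Start(() => student.DoWork())))
        .Merge()    // run everyone in this batch in parallel
        .ToList())  // completes only when the whole batch has finished
    .Concat()       // do not start the next batch until the current one completes
    .Subscribe(batchResults => { /* update the UI once per finished batch */ });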
My attempt....
var students = new List<Student>();
{....}
var cancel = students
    .ToObservable(Scheduler.Default)
    .Window(10) // split the source into windows of 10 students
    .Subscribe(tenStudents =>
    {
        tenStudents
            .ObserveOn(Scheduler.Default)
            .Do(x => DoSomeWork(x))
            .ObserveOnDispatcher()
            .Do(x => UpdateUI(x))
            .Subscribe();
    });
This to me sounds very much like a problem for TPL. You have a known set of data at rest. You want to partition up some heavy processing to run in parallel and you want to be able to batch process the load.
I don't see anywhere in your problem a source that is async, a source that is data in motion, or a consumer that needs to be reactive. This is my rationale for suggesting that you use TPL instead.
On a separate note, why the magic number of 10 to be processed in parallel? Is this a business requirement, or an attempt to optimize performance? Normally it is best practice to let the TaskPool work out what is best for the client CPU based on the number of cores and current load. I imagine this becomes ever more important with the large variation in devices and their CPU structures (single core, multi-core, many-core, low-power/disabled cores, etc.).
Here is one way you could do it in LinqPad (but note the lack of Rx)
void Main()
{
    var source = new List<Item>();
    for (int i = 0; i < 100; i++) { source.Add(new Item(i)); }

    //Put into batches of ten, but only then pass on the item, not the temporary tuple construct.
    var batches = source.Select((item, idx) => new { item, idx })
                        .GroupBy(tuple => tuple.idx / 10, tuple => tuple.item);

    //Process one batch at a time (serially), but process the items of the batch in parallel (concurrently).
    foreach (var batch in batches)
    {
        "Processing batch...".Dump();
        var results = batch.AsParallel().Select(item => item.Process());
        foreach (var result in results)
        {
            result.Dump();
        }
        "Processed batch.".Dump();
    }
}

public class Item
{
    private static readonly Random _rnd = new Random();
    private readonly int _id;

    public Item(int id)
    {
        _id = id;
    }

    public int Id { get { return _id; } }

    public double Process()
    {
        var threadId = Thread.CurrentThread.ManagedThreadId;
        string.Format("Processing on thread:{0}", threadId).Dump(Id);
        var loopCount = _rnd.Next(10000, 1000000);
        Thread.SpinWait(loopCount);
        return _rnd.NextDouble();
    }

    public override string ToString()
    {
        return string.Format("Item:{0}", _id);
    }
}
I would be interested to find out if you do have a data-in-motion problem or a reactive consumer problem, but have just "dumbed down" the question to make it easier to explain.
Here's another one for releasing objects created by reflection:
We're working with a reporting tool (Active Reports 6) which creates a dll for each report.
We have lots of clients that use similar but still unique reports.
Reports are read through a web interface.
We run multiple sites, one for each client.
Our choices are:
1) Put all the reports in one big project which will be called by all the sites.
Cost: It will need to be recompiled every time we make a small change to any one report, potentially creating problems for all sites.
2) Create a whole bunch of similar little projects, with one for each site - let's say for sake of space that this creates problems, too.
3) Create a "Report Factory" which will use reflection to wire-up report dlls as needed.
We chose "3".
Problem: The final product works fine except for one thing: It won't release the report dll when done.
There is not currently a problem with the operation within a test environment, but if you try to do anything in the folder with the report dlls, you get the following error message: "This action can't be completed because the folder or a file in it is open in another program"
After research on this site and others, we realized that we needed an AppDomain for each call which can be cleanly unloaded.
After still having problems, we realized that the AppDomainSetup object needed to have a setting that allowed it to optimize for multiple users (LoaderOptimization.MultiDomain)
That didn't work.
Unfortunately, the base object (the ActiveReports 6 report) cannot be serialized, so we can't make a deep copy and chuck the original object.
After doing all of this, we're still experiencing problems.
Here is the code (C#):
private object WireUpReport(ReportArgs args)
{
    // The parameter 'args' is a custom type (ReportArgs) which merely contains a
    // name/value pair collection.
    object myReport = null;
    string sPath = String.Empty;
    string sFriendlyName = String.Empty;
    sFriendlyName = System.Guid.NewGuid().ToString();

    Assembly asmReport = null;
    AppDomainSetup ads = null;
    AppDomain adWireUp = null;
    ConstructorInfo ci = null;
    Type myReportType = null;
    Type[] parametypes = null;
    object[] paramarray = null;
    object retObject = null;

    try
    {
        // Get report object
        sPath = GetWireUpPath(args); // Gets the path to the required dll; kept in a config file

        // This parameter is used in an overloaded constructor further down
        ads = new AppDomainSetup();
        ads.ApplicationBase = Path.GetDirectoryName(sPath);
        ads.LoaderOptimization = LoaderOptimization.MultiDomain;
        adWireUp = AppDomain.CreateDomain(sFriendlyName, AppDomain.CurrentDomain.Evidence, ads);
        asmReport = adWireUp.GetAssemblies()[0];
        asmReport = Assembly.LoadFrom(sPath);

        // Create parameters for wireup
        myReportType = asmReport.GetExportedTypes()[0];
        parametypes = new Type[1];
        parametypes[0] = typeof(ReportArgs);
        ci = myReportType.GetConstructor(parametypes);
        paramarray = new object[1];
        paramarray[0] = args;

        // Instantiate object
        myReport = ci.Invoke(paramarray);
        return myReport;
    }
    catch (Exception ex)
    {
        throw ex;
    }
    finally
    {
        // Make sure Assembly object is released.
        if (adWireUp != null)
        {
            AppDomain.Unload(adWireUp);
        }
        if (asmReport != null) { asmReport = null; }
        if (ads != null) { ads = null; }
        if (adWireUp != null) { adWireUp = null; }
        if (ci != null) { ci = null; }
        if (myReportType != null) { myReportType = null; }
        if (parametypes != null) { parametypes = null; }
        if (paramarray != null) { paramarray = null; }
    }
}
The object which is returned from this code is cast as type ActiveReports and then passed around our application.
Any help would be deeply appreciated. Thanks
Your code looks like you are seriously misunderstanding how to interact with a separate AppDomain.
Think of communicating with an AppDomain like talking to someone who's currently in another country. You know where they are, but you can't just walk over and talk to them. If you want them to do something for you, you have to open up a line of communication and tell them what you need.
The way that you open that line of communication is by defining a proxy object that can be created inside the other AppDomain and then cross the boundary back to your current AppDomain. Being able to cross the boundary requires that your object either be marked as [Serializable] or inherit from MarshalByRefObject. Because we actually want to talk to a reference in the other AppDomain and not just have a copy of it, we need the proxy to do the latter.
private class CrossDomainQuery : MarshalByRefObject
{
    public void LoadDataFromAssembly(string assemblyPath)
    {
        var assembly = Assembly.LoadFrom(assemblyPath);
        //TODO: Do something with your assembly
    }
}
There is a method on the AppDomain called CreateInstanceAndUnwrap() that will create an instance of that communication object inside the other AppDomain and then hand you back a __TransparentProxy object that can be cast to the proxy type.
var crossDomainQuery = (CrossDomainQuery)adWireUp.CreateInstanceAndUnwrap(
typeof(CrossDomainQuery).Assembly.FullName,
typeof(CrossDomainQuery).FullName);
Once you have that proxy object, you can call methods on it and they will be invoked in the other AppDomain.
crossDomainQuery.LoadDataFromAssembly(assemblyPath);
So how is this different from what your current example code is doing?
Your current code does not actually execute anything useful inside the other AppDomain.
adWireUp = AppDomain.CreateDomain(sFriendlyName, AppDomain.CurrentDomain.Evidence, ads);
asmReport = adWireUp.GetAssemblies()[0];
asmReport = Assembly.LoadFrom(sPath);
This creates a new AppDomain, but then it loads all of the assemblies from that AppDomain into your current AppDomain. Additionally, it explicitly loads your report assembly into your current AppDomain.
Creating an AppDomain and calling methods on it doesn't mean that your code is executing inside of it any more than reading about another country means that you're now talking to someone inside it.
Even if you do create a proxy object and execute code inside that other AppDomain, there are a few things to be aware of.
1) Both AppDomains must be able to see the type used for the proxy, and you may have to handle AssemblyResolve events for either AppDomain manually (at least temporarily) to help resolve that.
2) AppDomains are fairly expensive to create. Generally, they are not used in situations where you need to spin something up really quickly, take some action and disappear. You should plan on either keeping them around as long as you can or be prepared to take the performance hit on every call.
3) You've said that the report type you're instantiating is not serializable, and being able to serialize the object is a requirement for passing that type back from the other AppDomain. Defining a serializable class that can transport the relevant data across the boundary and using that to pass the report data might be an option (see the sketch after this list), but you'll have to determine whether that works for your particular situation.
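A hedged sketch of that idea (all names here are hypothetical; the point is that only serializable data crosses the AppDomain boundary, never the report object itself, and ReportArgs would also need to be serializable to be passed in):

[Serializable]
public class ReportData
{
    public string ReportName;
    public byte[] RenderedOutput; // e.g. exported PDF bytes, filled in by the proxy
}

private class ReportProxy : MarshalByRefObject
{
    public ReportData RunReport(string assemblyPath, ReportArgs args)
    {
        // Runs inside the worker AppDomain; only the ReportData DTO is marshalled back.
        var assembly = Assembly.LoadFrom(assemblyPath);
        var reportType = assembly.GetExportedTypes()[0];
        object report = Activator.CreateInstance(reportType, args);

        // ... run/export the report here, then copy the results into the DTO ...
        return new ReportData { ReportName = reportType.Name };
    }
}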
Also, as an aside: unless you have logic that depends on variables being set to null, setting everything to null in your finally block does nothing useful and complicates your code.