I have a DSL project using Xtext together with the Language Server Protocol.
Now I want to trigger a Generator from the client (in my case VS Code) to be executed on the server.
Not automatically (I would know how to do that), but explicitly triggered by the user from a VS Code command.
I am aware of the IExecutableCommandService in Xtext and I know how to hook into it.
But I don't know how to retrieve the corresponding Resource from a given file path:
@Override
public Object execute(ExecuteCommandParams params, ILanguageServerAccess access, CancelIndicator cancelIndicator) {
    if ("c4.generator.type".equals(params.getCommand())) {
        // fileURI passed from client to the server
        String fileURI = ((JsonPrimitive) params.getArguments().get(0)).getAsString();
        // This is where I'm stuck
        Resource resource = whatAPItoCallToRetrieveResourceFromFile(fileURI);
        // Call the generator then
        doGenerate(resource);
    }
    return "Unknown Command";
}
The use-case is the same as described in this blog: https://dietrich-it.de/xtext/2011/10/15/xtext-calling-the-generator-from-a-context-menu/
But the description is for Eclipse only (not using LSP).
If you already have the correct URI, you should be able to use an XtextResourceSet to get access to the resource (note that getResource expects an org.eclipse.emf.common.util.URI, not a plain string):
final XtextResourceSet rs = new XtextResourceSet();
final Resource r = rs.getResource(URI.createURI(fileURI), true);
doGenerate(r);
Otherwise, you can get access to the Xtext index and iterate over all resources, searching for the resource of interest, by using access.doReadIndex:
final XtextResourceSet rs = new XtextResourceSet();
final Function<ILanguageServerAccess.IndexContext, Boolean> func =
        (ILanguageServerAccess.IndexContext ctxt) -> {
    for (final IResourceDescription rd : ctxt.getIndex().getAllResourceDescriptions()) {
        if (<check_rd>) {
            Resource r = rs.getResource(rd.getURI(), true);
            doGenerate(r);
        }
    }
    return true;
};
access.doReadIndex(func);
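Alternatively, the ILanguageServerAccess handed to execute can resolve the URI for you, inside the server's own request context. A minimal sketch, assuming the doRead(String, Function) signature of current Xtext versions and reusing the fileURI from your snippet; depending on how your command is scheduled, you may prefer to return the CompletableFuture instead of joining it:
// doRead resolves the file URI to its Resource within the server's resource context
return access.doRead(fileURI, context -> {
    Resource resource = context.getResource();
    doGenerate(resource);
    return "Done";
}).join();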
I want to create a custom Kofax module. When it comes to batch processing, the scanned documents get converted to PDF files. I want to fetch these PDF files, manipulate them (add a custom footer to the PDF document) and hand them back to Kofax.
So what I know so far:
create Kofax export scripts
add a custom module to Kofax
I have the APIRef.chm (Kofax.Capture.SDK.CustomModule) and the CMSplit example project. Unfortunately, I'm struggling to get into it. Are there any resources out there showing step by step how to get started with custom module development?
So I know that the IBatch interface represents one selected batch and the IBatchCollection represents the collection of all batches.
I would just like to know how to set up a "Hello World" example to which I could add my code. I don't think I even need a WinForms application, because I only need to manipulate the PDF files and that's it...
Since I realized that your question was rather about how to create a custom module in general, allow me to add another answer. Start with a C# Console Application.
Add Required Assemblies
The assemblies below are required by a custom module. All of them reside in KC's binaries folder (by default C:\Program Files (x86)\Kofax\CaptureSS\ServLib\Bin on a server).
Setup Part
Add a new User Control and a Windows Form for setup. This is purely optional - a CM might not even have a setup form - but I'd recommend adding one regardless. The user control is the most important part here: it adds the menu entry in KC Administration and initializes the form itself:
[InterfaceType(ComInterfaceType.InterfaceIsIDispatch)]
public interface ISetupForm
{
    [DispId(1)]
    AdminApplication Application { set; }

    [DispId(2)]
    void ActionEvent(int EventNumber, object Argument, out int Cancel);
}

[ClassInterface(ClassInterfaceType.None)]
[ProgId("Quipu.KC.CM.Setup")]
public class SetupUserControl : UserControl, ISetupForm
{
    private AdminApplication adminApplication;

    public AdminApplication Application
    {
        set
        {
            value.AddMenu("Quipu.KC.CM.Setup", "Quipu.KC.CM - Setup", "BatchClass");
            adminApplication = value;
        }
    }

    public void ActionEvent(int EventNumber, object Argument, out int Cancel)
    {
        Cancel = 0;
        if ((KfxOcxEvent)EventNumber == KfxOcxEvent.KfxOcxEventMenuClicked && (string)Argument == "Quipu.KC.CM.Setup")
        {
            SetupForm form = new SetupForm();
            form.ShowDialog(adminApplication.ActiveBatchClass);
        }
    }
}
Runtime Part
Since I started with a console application, I could go ahead and put all the logic into Program.cs. Note that this is for demo purposes only; I would recommend moving it into specific classes and forms later on. The example below logs into Kofax Capture, grabs the next available batch, and just outputs its name.
class Program
{
    static void Main(string[] args)
    {
        AppDomain.CurrentDomain.AssemblyResolve += (sender, eventArgs) => KcAssemblyResolver.Resolve(eventArgs);
        Run(args);
        return;
    }

    static void Run(string[] args)
    {
        // start processing here
        // todo encapsulate this to a separate class!
        // login to KC
        var login = new Login();
        login.EnableSecurityBoost = true;
        login.Login();
        login.ApplicationName = "Quipu.KC.CM";
        login.Version = "1.0";
        login.ValidateUser("Quipu.KC.CM.exe", false, "", "");

        var session = login.RuntimeSession;

        // todo add timer-based polling here (note: mutex!)
        var activeBatch = session.NextBatchGet(login.ProcessID);
        Console.WriteLine(activeBatch.Name);
        activeBatch.BatchClose(
            KfxDbState.KfxDbBatchReady,
            KfxDbQueue.KfxDbQueueNext,
            0,
            "");

        session.Dispose();
        login.Logout();
    }
}
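The KcAssemblyResolver referenced in Main is the author's own helper. A hedged sketch of what such a resolver might do - probe KC's binaries folder for whichever assembly failed to load (the class body and folder are assumptions based on the defaults mentioned above):
using System.IO;
using System.Reflection;

static class KcAssemblyResolver
{
    // Default KC server path; adjust for client installations.
    const string KcBinFolder = @"C:\Program Files (x86)\Kofax\CaptureSS\ServLib\Bin";

    public static Assembly Resolve(ResolveEventArgs eventArgs)
    {
        // eventArgs.Name is the full display name; we only need the short name.
        var fileName = new AssemblyName(eventArgs.Name).Name + ".dll";
        var candidate = Path.Combine(KcBinFolder, fileName);
        return File.Exists(candidate) ? Assembly.LoadFrom(candidate) : null;
    }
}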
Registering, COM-Visibility, and more
Registering a Custom Module is done via RegAsm.exe and ideally with the help of an AEX file. Here's an example - please refer to the documentation for more details and all available settings.
[Modules]
Minimal CM
[Minimal CM]
RuntimeProgram=Quipu/CM/Quipu.KC.CM/Quipu.KC.CM.exe
ModuleID=Quipu.KC.CM.exe
Description=Minimal Template for a Custom Module in C#
Version=1.0
SupportsTableFields=True
SupportsNonImageFiles=True
SetupProgram=Minimal CM Setup
[Setup Programs]
Minimal CM Setup
[Minimal CM Setup]
Visible=0
OCXFile=Quipu/CM/Quipu.KC.CM/Quipu.KC.CM.exe
ProgID=Quipu.KC.CM.Setup
Last but not least, make sure your assemblies are COM-visible.
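The original post illustrated this with a screenshot of the project's assembly settings ("Make assembly COM-Visible"). As a hedged text equivalent, the attribute can also be set in code, and registration might then look like this (paths are illustrative):
// In AssemblyInfo.cs (or tick "Make assembly COM-Visible" in the project settings):
using System.Runtime.InteropServices;

[assembly: ComVisible(true)]

// Registration on the target machine, e.g.:
//   RegAsm.exe Quipu.KC.CM.exe /codebase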
I put up the entire code on GitHub, feel free to fork it. Hope it helps.
Kofax exposes a batch as XML, and DBLite is basically a wrapper for said XML. The structure is explained in AcBatch.htm and AcDocs.htm (found under the CaptureSV directory). Here's the basic idea (only documents are shown):
AscentCaptureRuntime
  Batch
    Documents
      Document
A single document has child elements itself, such as pages, and multiple properties such as Confidence, FormTypeName, and PDFGenerationFileName. The last one is what you want. Here's how you would navigate down the document collection, storing the filename in a variable named pdfFileName:
IACDataElement runtime = activeBatch.ExtractRuntimeACDataElement(0);
IACDataElement batch = runtime.FindChildElementByName("Batch");
var documents = batch.FindChildElementByName("Documents").FindChildElementsByName("Document");
for (int i = 0; i < documents.Count; i++)
{
    // 1-based index in Kofax
    var pdfFileName = documents[i + 1]["PDFGenerationFileName"];
}
Personally, I don't like this structure, so I created my own wrapper for their wrapper, but that's up to you.
With regard to the custom module itself, the sample shipped is already a decent start. Basically, you would have a basic form that shows up if the user launches the module manually - which is entirely optional if the work happens in the background, preferably as a Windows Service. I like to start with a console application, adding forms only when needed. Here, I would launch the form as follows, or start the service. Note that I have different branches depending on whether the user wants to install my Custom Module as a service:
else if (Environment.UserInteractive)
{
    // run as module
    Application.EnableVisualStyles();
    Application.SetCompatibleTextRenderingDefault(false);
    Application.Run(new RuntimeForm(args));
}
else
{
    // run as service
    ServiceBase.Run(new CustomModuleService());
}
The runtime itself just logs you into Kofax Capture, registers event handlers, and processes batch by batch:
// login to KC
cm = new CustomModule();
cm.Login("", "");
// add progress event handlers
cm.BatchOpened += Cm_BatchOpened;
cm.BatchClosed += Cm_BatchClosed;
cm.DocumentOpened += Cm_DocumentOpened;
cm.DocumentClosed += Cm_DocumentClosed;
cm.ErrorOccured += Cm_ErrorOccured;
// process in background thread so that the form does not freeze
worker = new BackgroundWorker();
worker.DoWork += (s, a) => Process();
worker.RunWorkerAsync();
Then, your CM fetches the next batch. This can either make use of Kofax' Batch Notification Service, or be based on a timer. For the former, just handle the BatchAvailable event of the session object:
session.BatchAvailable += Session_BatchAvailable;
For the latter, define a timer - preferably with a configurable polling interval:
pollTimer.Interval = pollIntervalSeconds * 1000;
pollTimer.Elapsed += PollTimer_Elapsed;
pollTimer.Enabled = true;
When the timer elapses, you could do the following:
private void PollTimer_Elapsed(object sender, System.Timers.ElapsedEventArgs e)
{
    mutex.WaitOne();
    ProcessBatches();
    mutex.ReleaseMutex();
}
I have implemented an Azure Function that is triggered by an HttpRequest. A parameter called name is passed as part of the HttpRequest. In the Integration section, I have used the following query to retrieve data from Cosmos DB (as an input binding):
SELECT * FROM c.my_collection pm
WHERE
Contains(pm.first_name,{name})
As you can see, I am sending the name without sanitizing it. Is there any SQL injection concern here?
I searched and noticed that parameterization is available, but that is not something I can do anything about here.
When the binding occurs (the data from the HTTP trigger gets sent to the Cosmos DB input binding), it is passed through a SqlParameterCollection that handles sanitization.
Please view this article:
Parameterized SQL provides robust handling and escaping of user input, preventing accidental exposure of data through “SQL injection”
This will cover any attempt to inject SQL through the name property.
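For comparison, this is roughly the mechanism you would use manually with the older Microsoft.Azure.Documents SDK - the value travels as a parameter and is never concatenated into the SQL text (query and parameter name are illustrative):
using Microsoft.Azure.Documents;

var querySpec = new SqlQuerySpec(
    "SELECT * FROM c WHERE CONTAINS(c.first_name, @name)",
    new SqlParameterCollection { new SqlParameter("@name", name) });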
If you're using Microsoft.Azure.Cosmos instead of Microsoft.Azure.Documents:
public class MyContainerDbService : IMyContainerDbService
{
    private Container _container;

    public MyContainerDbService(CosmosClient dbClient)
    {
        this._container = dbClient.GetContainer("MyDatabaseId", "MyContainerId");
    }

    public async Task<IEnumerable<MyEntry>> GetMyEntriesAsync(string queryString, Dictionary<string, object> parameters)
    {
        if ((parameters?.Count ?? 0) < 1)
        {
            throw new ArgumentException("Parameters are required to prevent SQL injection.");
        }

        var queryDef = new QueryDefinition(queryString);
        foreach (var parm in parameters)
        {
            queryDef.WithParameter(parm.Key, parm.Value);
        }
        var query = this._container.GetItemQueryIterator<MyEntry>(queryDef);

        List<MyEntry> results = new List<MyEntry>();
        while (query.HasMoreResults)
        {
            var response = await query.ReadNextAsync();
            results.AddRange(response.ToList());
        }
        return results;
    }
}
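A call might then look like this (the dbService variable and query text are illustrative); because the value is bound via WithParameter, it never becomes part of the SQL string:
var entries = await dbService.GetMyEntriesAsync(
    "SELECT * FROM c WHERE CONTAINS(c.first_name, @name)",
    new Dictionary<string, object> { { "@name", name } });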
I am trying to set the endpointMap property of a UriEndpointMapping object with the following values:
UriEndpointMapping uriEndpointMapping = new UriEndpointMapping();
Map<String,Object> endpointMap = new HashMap<>();
endpointMap.put("/miservicio/cliente", gateway);
endpointMap.put("/miservicio/cliente.wsdl", wsdlDefinition());
uriEndpointMapping.setEndpointMap(endpointMap);
where gateway is a bean of type int-ws:inbound-gateway, and wsdlDefinition is a method returning a DefaultWsdl11Definition.
When I call localhost:8080/miservicio/cliente.wsdl from the browser, I don't get a response. How should I make this work?
You misunderstood the UriEndpointMapping logic a bit:
* Implementation of the {@code EndpointMapping} interface to map from the full request URI or request URI path to
* endpoint beans.
The WSDL definition logic is a bit different: it is handled as part of the MessageDispatcherServlet startup logic:
private void initWsdlDefinitions(ApplicationContext context) {
    wsdlDefinitions = BeanFactoryUtils
            .beansOfTypeIncludingAncestors(context, WsdlDefinition.class, true, false);
From there, the WSDL definition is selected for an incoming request like this:
protected WsdlDefinition getWsdlDefinition(HttpServletRequest request) {
    if (HttpTransportConstants.METHOD_GET.equals(request.getMethod()) &&
            request.getRequestURI().endsWith(WSDL_SUFFIX_NAME)) {
        String fileName = WebUtils.extractFilenameFromUrlPath(request.getRequestURI());
        return wsdlDefinitions.get(fileName);
    }
    else {
        return null;
    }
}
And let's take some sample from the Spring WS documentation:
<sws:dynamic-wsdl id="holiday"
        portTypeName="HumanResource"
        locationUri="/holidayService/"
        targetNamespace="http://mycompany.com/hr/definitions">
    <sws:xsd location="/WEB-INF/hr.xsd"/>
</sws:dynamic-wsdl>
and this quote:
The id determines the URL where the WSDL can be retrieved. In this case, the id is holiday, which means that the WSDL can be retrieved as holiday.wsdl in the servlet context. The full URL will typically be http://localhost:8080/holidayService/holiday.wsdl
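Applied to your case: the WSDL is looked up by bean name, not via the endpoint map, so the DefaultWsdl11Definition bean must be named cliente for /miservicio/cliente.wsdl to resolve. A minimal sketch, assuming Java config (the body is whatever your wsdlDefinition() already does):
// The bean name ("cliente") must match the file name part of the request URI.
@Bean(name = "cliente")
public DefaultWsdl11Definition wsdlDefinition() {
    DefaultWsdl11Definition wsdl = new DefaultWsdl11Definition();
    // ... existing configuration (port type, location URI, schema) ...
    return wsdl;
}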
This is my first SO question so please let me know if this question is not very clear or if I am missing anything.
FYI SO prevented me from attaching links, so sorry for all the bad formatting.
Overview
I'm trying to read (and write) the "Actual work" for a resource in Project Server Online by using the CSOM library available from Microsoft. Reading and writing the assignments and Actual work works perfectly, as long as I am reading the assignments for the currently authenticated user. If I attempt to read this for another resource, I receive a GeneralSecurityAccessDenied error.
I've done this in the past using Impersonation, which is supposed to be called transparently in the background if the user has the StatusBrokerPermission, but it doesn't seem to be working for me. Impersonation has been removed in 2013+, so that's no longer an option.
Problem summary
The CSOM is supposed to transparently enable statusing extensions to allow status updates to be made for resources other than the currently authenticated user (as long as the user has the status broker permission). This works fine for adding new assignments, but does not work when trying to update actual TimePhased hours via the TimePhased assignments. The assignments cannot be queried, and thus, we cannot call SubmitAllStatusUpdates to submit the hours.
Research
Usage scenarios for the CSOM: https://msdn.microsoft.com/en-us/library/office/jj163082(v=office.15).aspx#pj15_WhatTheCSOM_UsageScenarios
Impersonation Deprecated: https://msdn.microsoft.com/en-us/library/office/ee767690(v=office.15).aspx#pj15_WhatsNew_Deprecated
Picture: Supposed to read on behalf of another user...
People with the same problem # 1: https://social.technet.microsoft.com/Forums/projectserver/en-US/dccdb543-18a1-4a0e-a948-5d861305516e/how-to-get-resource-assignments-summary-view-data-project-server-online-2013?forum=projectonline
People with the same problem # 2: http://uzzai.com/ZB43wp95/ps2013-app-how-to-read-and-update-timephased-data-with-jsom-javascript-csom.html
People with the same problem # 4: https://social.technet.microsoft.com/Forums/Sharepoint/en-US/be27d497-e959-44b6-97cb-8f19fe0278fe/csom-how-to-set-timephase-data-on-an-assignment?forum=project2010custprog
Other things I've tried
Using the CSOM with the MsOnlineClaimsHelper to retrieve the FedAuth cookies for a user (and assigning them using the CookieContainer).
Using the REST/OData API.
a) https://URL.sharepoint.com/sites/pwa/_api/ProjectServer/EnterpriseResources('c39ba8f1-00fe-e311-8894-00155da45f0e')/Assignments/GetTimePhaseByUrl(start='2014-12-09',end='2014-12-09')/assignments
Enabling the "StatusBrokerPermission" for the user
Unchecking the “Only allow task updates via Tasks and Timesheets.” Option within the server settings screen (Task settings and display).
Creating a SharePoint-hosted app and using JSOM code equivalent to the CSOM code above.
a) The code we wrote was JavaScript being executed from within SharePoint app, so we did not need to provide authentication. The user who was logged in had the StatusBrokerPermission.
Using a Provider-hosted SharePoint app and using the CSOM code above. We tried using all authentication methods for CSOM above, with an additional test:
a) using Fiddler to view the FedAuth cookies being set by the SharePoint app authentication, and overriding the WebRequest to manually insert the FedAuth/rtFA cookies: webRequestEventArgs.WebRequestExecutor.WebRequest.CookieContainer = getStaticCookieContainer();
Using timesheets to submit time phased data.
a) We can only create a timesheet for the currently-authenticated user, and cannot populate timesheet lines with projects / assignments not available to him (or a GeneralItemDoesNotExist error is thrown).
Manually issuing a "SubmitAllStatusUpdates" CSOM request using Fiddler, as a different user.
a) The purpose of this test was to determine if we can write time phased data, even if we can’t read it.
Making sure the project was checked out to the current user.
Using administrative delegation for a resource.
Setting all available options within project permissions.
Using the Project Web UI to enter the TimePhased data for other resources.
Using SharePoint permission mode instead of Project Permission Mode.
The code
See failing code screenshot here
using System;
using System.Security;
using Microsoft.ProjectServer.Client;
using Microsoft.SharePoint.Client;

namespace ProjectOnlineActuals
{
    static class Program
    {
        const string projectSite = "https://URL.sharepoint.com/sites/pwa/";
        private const string edward = "c39ba8f1-00fe-e311-8894-00155da45f0e";
        private const string admin = "8b1bcfa4-1b7f-e411-af75-00155da4630b";

        static void Main(string[] args)
        {
            TestActuals();
        }

        private static void TestActuals()
        {
            Console.WriteLine("Attempting test # 1 (login: admin, resource: admin)");
            TestActuals("admin@URL.onmicrosoft.com", "123", admin);

            Console.WriteLine("Attempting test # 2 (login: admin, resource: edward)");
            TestActuals("adminy@hmssoftware.onmicrosoft.com", "123", edward);

            Console.ReadLine();
        }

        private static void TestActuals(string username, string password, string resourceID)
        {
            try
            {
                using (ProjectContext context = new ProjectContext(projectSite))
                {
                    DateTime startDate = DateTime.Now.Date;
                    DateTime endDate = DateTime.Now.Date;

                    Login(context, username, password);

                    context.Load(context.Web); // Query for Web
                    context.ExecuteQuery(); // Execute

                    Guid gResourceId = new Guid(resourceID);
                    EnterpriseResource enterpriseResource = context.EnterpriseResources.GetByGuid(gResourceId);
                    context.Load(enterpriseResource, p => p.Name, p => p.Assignments, p => p.Email);
                    Console.Write("Loading resource...");
                    context.ExecuteQuery();
                    Console.WriteLine("done! {0}".FormatWith(enterpriseResource.Name));

                    Console.Write("Adding new resource assignment to collection...");
                    enterpriseResource.Assignments.Add(new StatusAssignmentCreationInformation
                    {
                        Comment = "testing comment - 2016-02-17",
                        ProjectId = new Guid("27bf182c-2339-e411-8e76-78e3b5af0525"),
                        Task = new StatusTaskCreationInformation
                        {
                            Start = DateTime.Now,
                            Finish = DateTime.Now.AddDays(2),
                            Name = "testing - 2016-02-17",
                        }
                    });
                    Console.WriteLine("done!");

                    Console.Write("Trying to save new resource assignment...");
                    enterpriseResource.Assignments.Update();
                    context.ExecuteQuery();
                    Console.WriteLine("done!");

                    Console.Write("Loading TimePhase...");
                    TimePhase timePhase = enterpriseResource.Assignments.GetTimePhase(startDate.Date, endDate.Date);
                    context.ExecuteQuery();
                    Console.WriteLine("done!");

                    Console.Write("Loading TimePhase assignments...");
                    context.Load(timePhase.Assignments);
                    context.ExecuteQuery();
                    Console.WriteLine("done! Found {0} assignments.".FormatWith(timePhase.Assignments.Count));

                    Console.WriteLine("Updating TimePhase assignments...");
                    foreach (var assignment in timePhase.Assignments)
                    {
                        Console.WriteLine("Updating assignment: {0}. ActualWork: {1}".FormatWith(assignment.Name, assignment.ActualWork));
                        assignment.ActualWork = "9h";
                        assignment.RegularWork = "3h";
                        assignment.RemainingWork = "0h";
                    }

                    timePhase.Assignments.SubmitAllStatusUpdates("Status update comment test 2016-02-17");
                    context.ExecuteQuery();
                    Console.WriteLine("done!");

                    Console.WriteLine("Success (retrieved & updated {0} time phase assignments)!".FormatWith(timePhase.Assignments.Count));
                }
            }
            catch (Exception ex)
            {
                if (ex.ToString().Contains("GeneralSecurityAccessDenied"))
                    Console.WriteLine("ERROR! - GeneralSecurityAccessDenied");
                else
                    throw;
            }
            finally
            {
                Console.WriteLine();
                Console.WriteLine();
            }
        }

        private static void Login(ProjectContext projContext, string username, string password)
        {
            var securePassword = new SecureString();
            foreach (char c in password)
                securePassword.AppendChar(c);
            projContext.Credentials = new SharePointOnlineCredentials(username, securePassword);
        }

        static string FormatWith(this string str, params object[] args)
        {
            return String.Format(str, args);
        }
    }
}
Can anyone help??
Here's another one for releasing objects created by reflection:
We're working with a reporting tool (Active Reports 6) which creates a dll for each report.
We have lots of clients that use similar but still unique reports.
Reports are read through a web interface.
We run multiple sites, one for each client.
Our choices are:
1) Put all the reports in one big project which will be called by all the sites.
Cost: It will need to be recompiled every time we make a small change to any one report, potentially creating problems for all sites.
2) Create a whole bunch of similar little projects, with one for each site - let's say for sake of space that this creates problems, too.
3) Create a "Report Factory" which will use reflection to wire-up report dlls as needed.
We chose "3".
Problem: The final product works fine except for one thing: It won't release the report dll when done.
There is not currently a problem with the operation within a test environment, but if you try to do anything in the folder with the report dlls, you get the following error message: "This action can't be completed because the folder or a file in it is open in another program"
After research on this site and others, we realized that we needed an AppDomain for each call which can be cleanly unloaded.
After still having problems, we realized that the AppDomainSetup object needed to have a setting that allowed it to optimize for multiple users (LoaderOptimization.MultiDomain)
That didn't work.
Unfortunately, the base object (the Active Reports 6 report) cannot be serialized, so we can't make a deep copy and chuck the original object.
After doing all of this, we're still experiencing problems.
Here is the code (C#):
private object WireUpReport(ReportArgs args)
{
    // The parameter 'args' is a custom type (ReportArgs) which merely contains a
    // name/value pair collection.
    object myReport = null;
    string sPath = String.Empty;
    string sFriendlyName = String.Empty;
    sFriendlyName = System.Guid.NewGuid().ToString();

    Assembly asmReport = null;
    AppDomainSetup ads = null;
    AppDomain adWireUp = null;
    ConstructorInfo ci = null;
    Type myReportType = null;
    Type[] parametypes = null;
    object[] paramarray = null;
    object retObject = null;

    try
    {
        // Get Report Object
        sPath = GetWireUpPath(args); // Gets the path to the required dll; kept in a config file

        // This parameter is used in an overloaded constructor further down
        ads = new AppDomainSetup();
        ads.ApplicationBase = Path.GetDirectoryName(sPath);
        ads.LoaderOptimization = LoaderOptimization.MultiDomain;

        adWireUp = AppDomain.CreateDomain(sFriendlyName, AppDomain.CurrentDomain.Evidence, ads);
        asmReport = adWireUp.GetAssemblies()[0];
        asmReport = Assembly.LoadFrom(sPath);

        // Create parameters for wireup
        myReportType = asmReport.GetExportedTypes()[0];
        parametypes = new Type[1];
        parametypes[0] = typeof(ReportArgs);
        ci = myReportType.GetConstructor(parametypes);
        paramarray = new object[1];
        paramarray[0] = args;

        // Instantiate object
        myReport = ci.Invoke(paramarray);
        return myReport;
    }
    catch (Exception ex)
    {
        throw ex;
    }
    finally
    {
        // Make sure Assembly object is released.
        if (adWireUp != null)
        {
            AppDomain.Unload(adWireUp);
        }
        if (asmReport != null)
        {
            asmReport = null;
        }
        if (ads != null)
        {
            ads = null;
        }
        if (adWireUp != null)
        {
            adWireUp = null;
        }
        if (ci != null)
        {
            ci = null;
        }
        if (myReportType != null)
        {
            myReportType = null;
        }
        if (parametypes != null)
        {
            parametypes = null;
        }
        if (paramarray != null)
        {
            paramarray = null;
        }
    }
}
The object which is returned from this code is cast as type ActiveReports and then passed around our application.
Any help would be deeply appreciated. Thanks
Your code looks like you are seriously misunderstanding how to interact with a separate AppDomain.
Think of communicating with an AppDomain like talking to someone who's currently in another country. You know where they are, but you can't just walk over and talk to them. If you want them to do something for you, you have to open up a line of communication and tell them what you need.
The way that you open that line of communication is by defining a proxy object that can be created inside the other AppDomain and then cross the boundary back to your current AppDomain. Being able to cross the boundary requires that your object either be marked as [Serializable] or inherit from MarshalByRefObject. Because we actually want to talk to a reference in the other AppDomain and not just have a copy of it, we need the proxy to do the latter.
private class CrossDomainQuery : MarshalByRefObject
{
    public void LoadDataFromAssembly(string assemblyPath)
    {
        var assembly = Assembly.LoadFrom(assemblyPath);
        //TODO: Do something with your assembly
    }
}
There is a method on the AppDomain called CreateInstanceAndUnwrap() that will create an instance of that communication object inside the other AppDomain and then hand you back a __TransparentProxy object that can be cast to the proxy type.
var crossDomainQuery = (CrossDomainQuery)adWireUp.CreateInstanceAndUnwrap(
    typeof(CrossDomainQuery).Assembly.FullName,
    typeof(CrossDomainQuery).FullName);
Once you have that proxy object, you can call methods on it and they will be invoked in the other AppDomain.
crossDomainQuery.LoadDataFromAssembly(assemblyPath);
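Putting it together, a minimal sketch of the create/use/unload cycle, reusing the sPath and sFriendlyName variables from your example:
var ads = new AppDomainSetup { ApplicationBase = Path.GetDirectoryName(sPath) };
var adWireUp = AppDomain.CreateDomain(sFriendlyName, AppDomain.CurrentDomain.Evidence, ads);
try
{
    // Created inside the new domain; we only ever hold a proxy to it.
    var query = (CrossDomainQuery)adWireUp.CreateInstanceAndUnwrap(
        typeof(CrossDomainQuery).Assembly.FullName,
        typeof(CrossDomainQuery).FullName);
    query.LoadDataFromAssembly(sPath);
}
finally
{
    // Unloading the domain is what actually releases the report dll.
    AppDomain.Unload(adWireUp);
}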
So how is this different from what your current example code is doing?
Your current code does not actually execute anything useful inside the other AppDomain.
adWireUp = AppDomain.CreateDomain(sFriendlyName, AppDomain.CurrentDomain.Evidence, ads);
asmReport = adWireUp.GetAssemblies()[0];
asmReport = Assembly.LoadFrom(sPath);
This creates a new AppDomain, but then it loads all of the assemblies from that AppDomain into your current AppDomain. Additionally, it explicitly loads your report assembly into your current AppDomain.
Creating an AppDomain and calling methods on it doesn't mean that your code is executing inside of it any more than reading about another country means that you're now talking to someone inside it.
Even if you do create a proxy object and execute code inside that other AppDomain, there are a few things to be aware of.
1) Both AppDomains must be able to see the type used for the proxy, and you may have to handle AssemblyResolve events for either AppDomain manually (at least temporarily) to help resolve that - see the sketch after this list.
2) AppDomains are fairly expensive to create. Generally, they are not used in situations where you need to spin something up really quickly, take some action and disappear. You should plan on either keeping them around as long as you can or be prepared to take the performance hit on every call.
3) You've said that the report type that you're instantiating is not serializable, and being able to serialize the object is a requirement for passing that type back from the other AppDomain. Defining a serializable class that can transport relevant data across the boundary and using that to pass the report data might be an option, but you'll have to determine if that works for your particular situation.
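For point 1, the general shape of such a handler looks like this - a hedged sketch only, since where you hook it (parent or child domain) depends on which side fails to resolve the proxy assembly:
ResolveEventHandler resolver = (sender, e) =>
    e.Name == typeof(CrossDomainQuery).Assembly.FullName
        ? typeof(CrossDomainQuery).Assembly
        : null;

AppDomain.CurrentDomain.AssemblyResolve += resolver;
try
{
    // ... CreateInstanceAndUnwrap and proxy calls as shown above ...
}
finally
{
    AppDomain.CurrentDomain.AssemblyResolve -= resolver;
}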
Also, as an aside, unless you have logic that depends on variables being set to null, setting everything to null in your finally does nothing useful and complicates your code.