ServiceStack and Serilog not working properly

I have not been able to successfully implement logging in ServiceStack. I posted here and on the Serilog GitHub repository; the Serilog team believes it is a ServiceStack issue. If you want access to my project, let me know.
https://github.com/serilog/serilog/issues/1267
Cannot deconstruct object into Json serilog and Servicestack IReturn
I am also calling the service from my base class:
public class ServiceBase: Service
{
/// <summary>
/// Examples
/// ILog.Debug(Exception ex, string messageTemplate, params object[] propertyValues)
/// ILog.Info(Exception ex, string messageTemplate, params object[] propertyValues)
/// ILog.Warn(Exception ex, string messageTemplate, params object[] propertyValues)
/// ILog.Error(Exception ex, string messageTemplate, params object[] propertyValues)
/// ILog.Fatal(Exception ex, string messageTemplate, params object[] propertyValues)
/// ILog.ForContext(Type type)
/// ILog.ForContext<T>()
/// ILog.ForContext(ILogEventEnricher enricher)
/// ILog.ForContext(IEnumerable<ILogEventEnricher> enrichers)
/// ILog.ForContext(string propertyName, object value, bool destructureObjects = false)
/// </summary>
public ILog Log = LogManager.GetLogger(typeof(ServiceBase));
}

I found the answer: you need to initialize the logger after app.UseServiceStack(new AppHost { AppSettings = new NetCoreAppSettings(Configuration) });
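For reference, a minimal sketch of that ordering in Startup.Configure, assuming the ServiceStack.Logging.Serilog adapter package is installed (the sink configuration is illustrative):
public void Configure(IApplicationBuilder app)
{
    app.UseServiceStack(new AppHost { AppSettings = new NetCoreAppSettings(Configuration) });

    // Configure the global Serilog logger first...
    Log.Logger = new LoggerConfiguration()
        .MinimumLevel.Debug()
        .WriteTo.Console()
        .CreateLogger();

    // ...then point ServiceStack's LogManager at it (per the answer above,
    // this must happen after app.UseServiceStack).
    LogManager.LogFactory = new SerilogFactory(Log.Logger);
}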

Related

How to use current company in BQL query

I need to know how I can get the user's current company so that I can use it in the Where clause of a BQL query. I'm not talking about the tenant "company"; I'm talking about the companies that are defined on screen CS101500. I'm not seeing anything in AccessInfo that seems to indicate the current company. The current Branch ID is there, but I need the company to which the branch belongs.
Using version 21.203
TIA!
This will require the creation of a custom BQL element class.
IBqlOperand - a BQL scalar operand.
IBqlCreator - a BQL element creator that requires implementing the AppendExpression and Verify methods.
public class CurrentOrganization : IBqlCreator, IBqlOperand
{
#region Methods
#region AppendExpression
/// <summary>
/// Appends the SQL tree expression that corresponds to the BQL command to an SQL tree query.
/// </summary>
/// <param name="exp">The SQL tree expression to be appended.</param>
/// <param name="graph">A graph instance.</param>
/// <param name="info">The information about the BQL command.</param>
/// <param name="selection">The fragment of the BQL command that is translated to an SQL tree expression.</param>
/// <returns></returns>
public bool AppendExpression(ref SQLExpression exp, PXGraph graph, BqlCommandInfo info, BqlCommand.Selection selection)
{
if (graph == null || !info.BuildExpression)
{
return true;
}
PXMutableCollection.AddMutableItem(this);
exp = new SQLConst(graph.BranchOrganizationID());
return true;
}
#endregion
#region Verify
/// <summary>
/// Supplies the operand's value when the BQL command is evaluated in memory rather than translated to SQL.
/// </summary>
/// <param name="cache"></param>
/// <param name="item"></param>
/// <param name="pars"></param>
/// <param name="result"></param>
/// <param name="value"></param>
public void Verify(PXCache cache, object item, List<object> pars, ref bool? result, ref object value)
{
value = cache.Graph.BranchOrganizationID();
}
#endregion
#endregion
}
The .BranchOrganizationID() call is a PXGraph extension method that returns the OrganizationID from a Branch IPrefetchable.
Calls to this prefetchable can be replaced with a BQL PXSelect:
PXSelect<Branch, Where<Branch.branchID,
Equal<Current<AccessInfo.branchID>>>>.Select(...)
Example usage would be as follows:
[PXDefault(typeof(Search<Branch.branchID,Where<Branch.organizationID,Equal<CurrentOrganization>>>))]
You can try to use this code
public int? GetOrganizationID(PXGraph graph, int? branchID)
{
int? accountID = ((Organization)OrganizationMaint.FindOrganizationByID(graph, PXAccess.GetParentOrganizationID(branchID))).BAccountID;
return accountID;
}
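A hedged usage sketch, assuming the helper is defined on (or reachable from) your graph; the current branch comes from the graph's Accessinfo:
// Per the helper above, this returns the BAccountID of the branch's parent organization.
int? organizationBAccountID = GetOrganizationID(this, Accessinfo.BranchID);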
This is the final code that got what I needed, per Josh's solution. Van Hoesen for the win!!
public class CurrentOrganization : IBqlCreator, IBqlOperand
{
public virtual bool AppendExpression(ref SQLExpression exp, PXGraph graph, BqlCommandInfo info, BqlCommand.Selection selection)
{
if ((graph == null) || (!info.BuildExpression))
return true;
PXMutableCollection.AddMutableItem(this);
//exp = new SQLConst(graph.BranchOrganizationID());
Branch branch = PXSelect<Branch, Where<Branch.branchID, Equal<Current<AccessInfo.branchID>>>>.Select(graph);
exp = new SQLConst( branch.OrganizationID);
return true;
}
public void Verify(PXCache cache, object item, List<object> pars, ref bool? result, ref object value)
{
Branch branch = PXSelect<Branch, Where<Branch.branchID, Equal<Current<AccessInfo.branchID>>>>.Select(cache.Graph);
value = branch.OrganizationID;
//value = cache.Graph.BranchOrganizationID();
}
}

Quartz.net RFC 2445 or RFC 5545 instead of CRON

We have a web server running in .NET which uses Quartz to schedule jobs.
The triggers for the jobs are provided in RFC 2445 format, but Quartz uses the CRON format. I would now like to either
A: find a library which can convert my RFC 2445 rule to a CRON rule, or
B: better yet, give Quartz the RFC rule directly.
For the latter, I found some Java libraries but none for .NET.
I also tried writing my own library, but I'm stuck on intervals. An RFC 2445 rule can define a biweekly (or triweekly, or n-weekly) job with
FREQ=WEEKLY;BYDAY=MO;INTERVAL=2
i.e. every other Monday. Yet CRON does not seem to have this functionality.
I had a similar requirement and couldn't find an RFC 5545 compliant library that works with the Quartz scheduler, so I ended up implementing a custom trigger myself, following this suggestion.
In my case we are using the Telerik Scheduler control to populate the RRULE, but you could probably do the same with the iCal.Net library. This is not the full implementation, but it will get you started; the code is UNTESTED.
Another note: "FREQ=WEEKLY;BYDAY=MO;INTERVAL=2" will probably fail if you try to parse it using the Telerik RecurrenceRule, since it is missing DTSTART, DTEND, etc. This is an example of a recurrence rule string that will not fail: "DTSTART:20210309T050000Z\r\nDTEND:20210309T060000Z\r\nRRULE:FREQ=WEEKLY;BYDAY=TU;INTERVAL=1".
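A quick way to check a rule string before handing it to the trigger, using the same Telerik RecurrenceRule.TryParse call the code below relies on (a sketch; exact behaviour depends on the Telerik version):
// Bare RRULE body without DTSTART/DTEND - this will likely fail to parse:
bool bareOk = RecurrenceRule.TryParse("FREQ=WEEKLY;BYDAY=MO;INTERVAL=2", out RecurrenceRule bare);

// Full rule string with DTSTART/DTEND - this is expected to parse:
string full = "DTSTART:20210309T050000Z\r\nDTEND:20210309T060000Z\r\nRRULE:FREQ=WEEKLY;BYDAY=TU;INTERVAL=1";
bool fullOk = RecurrenceRule.TryParse(full, out RecurrenceRule parsed);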
You need to implement the ITrigger interface. A lot of it can be copied from the CronTriggerImpl class and modified.
public interface IRRuleTrigger : ITrigger
{
string RecurrenceRuleString { get; set; }
}
Then you need an implementation class that inherits from AbstractTrigger:
public class MyTriggerImpl: AbstractTrigger, IRRuleTrigger
{
//implement all members here. Look at CronTriggerImpl class in Quartz.Net source. I'm pasting some of the implementation code but not all.
//...
private RecurrenceRule rRule;
/// <summary>
/// Gets or sets the RRULE expression string.
/// </summary>
/// <value>The expression string, e.g. RRULE:FREQ=WEEKLY;BYDAY=MO;INTERVAL=2.</value>
public string RecurrenceRuleString
{
set
{
TimeZoneInfo originalTimeZone = TimeZone;
var success = RecurrenceRule.TryParse(value, out var parsedRule);
if (success) rRule = parsedRule;
}
get => rRule?.ToString();
}
////////////////////////////////////////////////////////////////////////////
//
// Computation Functions
//
////////////////////////////////////////////////////////////////////////////
/// <summary>
/// Gets the next time to fire after the given time.
/// </summary>
/// <param name="afterTime">The time to compute from.</param>
/// <returns></returns>
protected DateTimeOffset? GetTimeAfter(DateTimeOffset afterTime)
{
return rRule?.HasOccurrences == true ?
rRule?.Occurrences.Where(o => o > afterTime).Min()
: null;
}
/// <summary>
/// Returns the time before the given time
/// that this <see cref="IRRuleTrigger" /> will fire.
/// </summary>
/// <param name="date">The date.</param>
/// <returns></returns>
protected DateTimeOffset? GetTimeBefore(DateTimeOffset? date)
{
return rRule?.HasOccurrences == true ?
rRule?.Occurrences.Where(o=> o < date).Max()
: null;
}
}
public class RRuleScheduleBuilder : ScheduleBuilder<IRRuleTrigger>
{
private int misfireInstruction = MisfireInstruction.SmartPolicy;
private RecurrenceRule recurrenceRule;
public override IMutableTrigger Build()
{
MyTriggerImpl myTriggerImpl = new MyTriggerImpl();
myTriggerImpl.MisfireInstruction = misfireInstruction;
myTriggerImpl.RecurrenceRuleString = this.recurrenceRule.ToString();
return myTriggerImpl;
}
/// <summary>
/// Create a RRuleScheduleBuilder with the given string expression, which
/// is presumed to be a valid expression (an ArgumentException is thrown
/// if it is not).
/// </summary>
/// <remarks>
/// </remarks>
/// <param name="recurrenceRuleString">the RRule expression to base the schedule on.</param>
/// <returns>the new RRuleScheduleBuilder</returns>
public static RRuleScheduleBuilder RecurrenceRuleSchedule(string recurrenceRuleString)
{
var success = RecurrenceRule.TryParse(recurrenceRuleString, out var rRule);
if(!success) throw new ArgumentException($"Recurrence Rule String ({recurrenceRuleString}) is invalid.");
return new RRuleScheduleBuilder(rRule);
}
protected RRuleScheduleBuilder(RecurrenceRule rule)
{
this.recurrenceRule = rule ?? throw new ArgumentNullException(nameof(rule), "recurrenceRule cannot be null");
}
}
/// <summary>
/// Extension methods that attach <see cref="RRuleScheduleBuilder" /> to <see cref="TriggerBuilder" />.
/// </summary>
public static class RRuleScheduleTriggerBuilderExtensions
{
public static TriggerBuilder WithRRuleSchedule(this TriggerBuilder triggerBuilder, string recurrenceRuleString)
{
RRuleScheduleBuilder builder = RRuleScheduleBuilder.RecurrenceRuleSchedule(recurrenceRuleString);
return triggerBuilder.WithSchedule(builder);
}
public static TriggerBuilder WithRRuleSchedule(this TriggerBuilder triggerBuilder, string recurrenceRuleString, Action<RRuleScheduleBuilder> action)
{
RRuleScheduleBuilder builder = RRuleScheduleBuilder.RecurrenceRuleSchedule(recurrenceRuleString);
action(builder);
return triggerBuilder.WithSchedule(builder);
}
}
After implementing that, you can create and use your trigger like this:
// Grab the Scheduler instance from the Factory
StdSchedulerFactory factory = new StdSchedulerFactory();
var scheduler = await factory.GetScheduler();
await scheduler.Start();
var job = JobBuilder.Create<MyBusinessClassThatImplementsIJobInterface>()
.WithIdentity("someIdentity", "someGroupName")
.Build();
var trigger = (IRRuleTrigger)TriggerBuilder.Create()
.WithIdentity("someName", "myGroup")
.WithRRuleSchedule(rule.ToString())
.Build();
await scheduler.ScheduleJob(job, trigger);

Store data in google sheets from Azure functions

I am trying to insert data into Google Sheets from Azure Functions.
In the Azure function's Integrate tab I selected a new output of type External Table, selected GoogleSheets, and a connection string was created. But I don't see any documentation showing how to read/insert data from/into the sheets. Any quick sample for a jump start?
Below is a simple example of using Azure function bindings to tabular connectors. I have verified that it works with SQL Server, Google Sheets and Salesforce. Theoretically it should work with any tabular connector as long as it implements the Connector Data Protocol (CDP).
Develop
#r "Microsoft.Azure.ApiHub.Sdk"
#r "Newtonsoft.Json"
using System;
using Microsoft.Azure.ApiHub;
public class Contact
{
public string Id { get; set; }
public string LastName { get; set; }
public string FirstName { get; set; }
}
public static async Task Run(string input, ITable<Contact> table, TraceWriter log)
{
ContinuationToken continuationToken = null;
do
{
var contactsSegment = await table.ListEntitiesAsync(
continuationToken: continuationToken);
foreach (var contact in contactsSegment.Items)
{
log.Info(string.Format("{0} {1}", contact.FirstName, contact.LastName));
}
continuationToken = contactsSegment.ContinuationToken;
}
while (continuationToken != null);
}
For simplicity the example uses a manual trigger. The trigger’s input value is not used.
The example assumes that the connector provides a Contact table with Id, LastName and FirstName columns. The code lists the Contact entities in the table and logs the first and last names.
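Since the question is also about inserting rows, here is a hedged sketch of an insert using the same ITable<Contact> binding and the CreateEntityAsync method from the ITable<TEntity> interface listed below (the values are illustrative):
public static async Task Run(string input, ITable<Contact> table, TraceWriter log)
{
    // Appends a new row to the bound table (a new worksheet row in the Google Sheets case).
    await table.CreateEntityAsync(new Contact
    {
        Id = "3",
        LastName = "Doe",
        FirstName = "Jane"
    });
    log.Info("Row inserted.");
}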
Integrate
{
"bindings": [
{
"type": "manualTrigger",
"direction": "in",
"name": "input"
},
{
"type": "apiHubTable",
"direction": "in",
"name": "table",
"connection": "ConnectionAppSettingsKey",
"dataSetName": "default",
"tableName": "Contact",
"entityId": "",
}
],
"disabled": false
}
ConnectionAppSettingsKey identifies the app setting that stores the connection string.
A tabular connector provides data sets, and each data set contains tables. The name of the default data set is “default”. These concepts are identified by dataSetName and tableName and are specific to each connector:
Connector      Dataset       Table
SharePoint     Site          SharePoint List
SQL            Database      Table
Google Sheet   Spreadsheet   Worksheet
Excel          Excel file    Sheet
entityId must be empty for table bindings.
Bindings
The table binding (exemplified above) is probably the most useful. Here is the full list of supported C# bindings, with their requirements:
Table Client
The parameter type must be ITableClient.
dataSetName, tableName and entityId must be empty.
Table
The parameter type must be ITable<TEntity> (TEntity is a POCO type), ITable<JObject>, IAsyncCollector<TEntity> or IAsyncCollector<JObject>. dataSetName and tableName must be provided. entityId must be empty.
Entity
The parameter type must be TEntity (POCO type) or JObject. dataSetName, tableName and entityId must be provided.
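For the Entity binding, a hedged sketch: reuse the function.json shown above, but set "name" to "contact" and "entityId" to a concrete row identifier (e.g. "1"). The bound parameter then resolves to that single entity:
public static void Run(string input, Contact contact, TraceWriter log)
{
    // The binding retrieves the single entity whose identifier matches entityId.
    log.Info(string.Format("{0} {1}", contact.FirstName, contact.LastName));
}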
 
Interfaces
public interface ITableClient
{
/// <summary>
/// Gets a reference to a data set.
/// </summary>
/// <param name="dataSetName">The name of the data set.</param>
/// <returns>The data set reference.</returns>
IDataSet GetDataSetReference(string dataSetName = null);
/// <summary>
/// Queries the table client for data sets.
/// </summary>
/// <param name="query">The query to be executed.</param>
/// <param name="continuationToken">A continuation token from the server
/// when the operation returns a partial result.</param>
/// <param name="cancellationToken">A cancellation token that can be used
/// by other objects or threads to receive notice of cancellation.</param>
/// <returns>The retrieved data sets. In case of partial result the
/// object returned will have a continuation token.</returns>
Task<SegmentedResult<IDataSet>> ListDataSetsAsync(
Query query = null,
ContinuationToken continuationToken = null,
CancellationToken cancellationToken = default(CancellationToken));
}
 
public interface IDataSet
{
/// <summary>
/// Gets the data set name.
/// </summary>
string DataSetName { get; }
/// <summary>
/// Gets the data set display name.
/// </summary>
string DisplayName { get; }
/// <summary>
/// Gets a reference to a table.
/// </summary>
/// <typeparam name="TEntity">The type of entities in the table.</typeparam>
/// <param name="tableName">The name of the table.</param>
/// <returns>The table reference.</returns>
ITable<TEntity> GetTableReference<TEntity>(string tableName)
where TEntity : class;
/// <summary>
/// Queries the data set for tables.
/// </summary>
/// <param name="query">The query to be executed.</param>
/// <param name="continuationToken">A continuation token from the server
/// when the operation returns a partial result.</param>
/// <param name="cancellationToken">A cancellation token that can be used
/// by other objects or threads to receive notice of cancellation.</param>
/// <returns>The retrieved tables. In case of partial result the
/// object returned will have a continuation token.</returns>
Task<SegmentedResult<ITable<JObject>>> ListTablesAsync(
Query query = null,
ContinuationToken continuationToken = null,
CancellationToken cancellationToken = default(CancellationToken));
}
 
public interface ITable<TEntity>
where TEntity : class
{
/// <summary>
/// Gets the data set name.
/// </summary>
string DataSetName { get; }
/// <summary>
/// Gets the table name.
/// </summary>
string TableName { get; }
/// <summary>
/// Gets the table display name.
/// </summary>
string DisplayName { get; }
/// <summary>
/// Retrieves table metadata.
/// </summary>
/// <param name="cancellationToken">A cancellation token that can be used
/// by other objects or threads to receive notice of cancellation.</param>
/// <returns>The table metadata.</returns>
Task<TableMetadata> GetMetadataAsync(
CancellationToken cancellationToken = default(CancellationToken));
/// <summary>
/// Retrieves the entity with the specified identifier.
/// </summary>
/// <param name="entityId">The entity identifier.</param>
/// <param name="cancellationToken">A cancellation token that can be used
/// by other objects or threads to receive notice of cancellation.</param>
/// <returns>The entity or null if not found.</returns>
Task<TEntity> GetEntityAsync(
string entityId,
CancellationToken cancellationToken = default(CancellationToken));
/// <summary>
/// Queries the table for entities.
/// </summary>
/// <param name="query">The query to be executed.</param>
/// <param name="continuationToken">A continuation token from the server
/// when the operation returns a partial result.</param>
/// <param name="cancellationToken">A cancellation token that can be used
/// by other objects or threads to receive notice of cancellation.</param>
/// <returns>The retrieved entities. In case of partial result the
/// object returned will have a continuation token.</returns>
Task<SegmentedResult<TEntity>> ListEntitiesAsync(
Query query = null,
ContinuationToken continuationToken = null,
CancellationToken cancellationToken = default(CancellationToken));
/// <summary>
/// Adds a new entity to the table.
/// </summary>
/// <param name="entity">The entity to be created.</param>
/// <param name="cancellationToken">A cancellation token that can be used
/// by other objects or threads to receive notice of cancellation.</param>
/// <returns></returns>
Task CreateEntityAsync(
TEntity entity,
CancellationToken cancellationToken = default(CancellationToken));
/// <summary>
/// Updates an existing entity.
/// </summary>
/// <param name="entityId">The entity identifier.</param>
/// <param name="entity">The updated entity.</param>
/// <param name="cancellationToken">A cancellation token that can be used
/// by other objects or threads to receive notice of cancellation.</param>
/// <returns></returns>
Task UpdateEntityAsync(
string entityId,
TEntity entity,
CancellationToken cancellationToken = default(CancellationToken));
/// <summary>
/// Deletes an existing entity.
/// </summary>
/// <param name="entityId">The entity identifier.</param>
/// <param name="cancellationToken">A cancellation token that can be used
/// by other objects or threads to receive notice of cancellation.</param>
/// <returns></returns>
Task DeleteEntityAsync(
string entityId,
CancellationToken cancellationToken = default(CancellationToken));
}
 
Notes
Here are some instructions to try the example:
SQL Server
The script to create and populate the Contact table is below. dataSetName is “default”.
CREATE TABLE Contact
(
Id int NOT NULL,
LastName varchar(20) NOT NULL,
FirstName varchar(20) NOT NULL,
CONSTRAINT PK_Contact_Id PRIMARY KEY (Id)
)
GO
INSERT INTO Contact(Id, LastName, FirstName)
VALUES (1, 'Bitt', 'Prad')
GO
INSERT INTO Contact(Id, LastName, FirstName)
VALUES (2, 'Glooney', 'Ceorge')
GO
Google Sheets
In Google Docs, create a spreadsheet with a worksheet named Contact. The connector cannot use the spreadsheet display name; the internal name (the identifier at the end of the spreadsheet URL) must be used as dataSetName. For example, for https://docs.google.com/spreadsheets/d/1UIz545JF_cx6Chm_5HpSPVOenU4DZh4bDxbFgJOSMz0 the dataSetName is 1UIz545JF_cx6Chm_5HpSPVOenU4DZh4bDxbFgJOSMz0.
Add the column names Id, LastName and FirstName to the first row, then populate the subsequent rows with data.
Salesforce
dataSetName is “default”.
There are some samples here: https://github.com/Azure/azure-webjobs-sdk-templates/tree/dev/Functions.Templates/Templates/ExternalTable-CSharp
Thanks

Serialise & deserialize CRM EntityCollection

I seem to recall that in CRM 4 you could serialize an EntityCollection to and from a file on disk. I would like to do this as part of writing both a backup mechanism and a data transfer for a CRM Online instance.
However, this does not seem to work correctly in CRM 2011: the Attributes collection of each Entity contains a list of empty KeyValuePairOfStringObjects, and the FormattedValues collection of each entity contains a list of empty KeyValuePairOfStringStrings.
Therefore the names and values of the entity's attributes have not been included in the serialization, even though they definitely have values when viewed in the VS debugger.
Is there a way I can programmatically store these collections to a file so that they may later be deserialized and used to restore data to where it came from, or to a parallel target instance, e.g. for offline testing?
Here is my version of the serialization method proposed by #bigtv
private string Serialize(EntityCollection records)
{
string retVal = null;
using(var tw = new StringWriter())
using (var xw = new XmlTextWriter(tw))
{
var ser = new DataContractSerializer(typeof(EntityCollection));
ser.WriteObject(xw, records);
retVal = tw.ToString();
}
return retVal;
}
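For completeness, a matching deserialization sketch for that string (same DataContractSerializer; assumes the usual System.IO, System.Xml and System.Runtime.Serialization usings):
private EntityCollection Deserialize(string xml)
{
    using (var sr = new StringReader(xml))
    using (var xr = XmlReader.Create(sr))
    {
        var ser = new DataContractSerializer(typeof(EntityCollection));
        // Reads the XML produced by Serialize back into an EntityCollection.
        return (EntityCollection)ser.ReadObject(xr);
    }
}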
I had the exact same requirement to save the raw EntityCollection response from a CRM FetchRequest. I got the same result as you from the standard XmlSerializer; the trick is to use the same serializer that CRM uses under the hood.
Take a look at the DataContractSerializer class: MSDN reference is here
This is the helper class I then ended up writing:
class Serialiser
{
/// <summary>
/// The xml serialiser instance.
/// </summary>
private readonly DataContractSerializer dataContractSerialiser;
/// <summary>
/// Initializes a new instance of the <see cref="SerialiserService.Serialiser"/> class.
/// </summary>
/// <param name="typeToSerilaise">The type to serilaise.</param>
public Serialiser(Type typeToSerilaise)
{
this.dataContractSerialiser = new DataContractSerializer(typeToSerilaise);
}
/// <summary>
/// Serialises the specified candidate.
/// </summary>
/// <param name="candidate">The candidate.</param>
/// <returns>A serialised representation of the specified candidate.</returns>
public byte[] Serialise(object candidate)
{
byte[] output;
using (var ms = new MemoryStream())
{
this.dataContractSerialiser.WriteObject(ms, candidate);
var numberOfBytes = ms.Length;
output = new byte[numberOfBytes];
// Note: Only copy the exact stream length to avoid capturing trailing null bytes.
Array.Copy(ms.GetBuffer(), output, numberOfBytes);
}
return output;
}
/// <summary>
/// Deserialises the specified serialised instance.
/// </summary>
/// <param name="serialisedInstance">The serialised instance.</param>
/// <returns>A deserialised instance of the specified type.</returns>
public object Deserialise(byte[] serialisedInstance)
{
object output;
using (var ms = new MemoryStream(serialisedInstance))
using (var reader = XmlDictionaryReader.CreateTextReader(ms, new XmlDictionaryReaderQuotas()))
{
output = this.dataContractSerialiser.ReadObject(reader);
}
return output;
}
}
Usage:
new Serialiser(typeof(EntityCollection));
You can then read or write the byte[] to disk.
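For example, a round-trip through a file might look like this (the file name is illustrative; records is your EntityCollection):
var serialiser = new Serialiser(typeof(EntityCollection));

// Write the serialised collection to disk...
File.WriteAllBytes("contacts.xml", serialiser.Serialise(records));

// ...and later read it back and deserialise it.
var restored = (EntityCollection)serialiser.Deserialise(File.ReadAllBytes("contacts.xml"));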

Merge two objects to produce third using AutoMapper

I know it's AutoMapper and not AutoMerge(r), but...
I've started using AutoMapper and have a need to map A -> B, and to add some properties from C so that B becomes a kind of flat composite of A + C.
Is this possible in AutoMapper, or should I just use AutoMapper to do the heavy lifting and then manually map on the extra properties?
Would this not work?
var mappedB = _mapper.Map<A,B>(aInstance);
_mapper.Map(instanceC,mappedB);
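Note that this assumes both maps are registered; a minimal configuration sketch using the MapperConfiguration API of recent AutoMapper versions:
var config = new MapperConfiguration(cfg =>
{
    cfg.CreateMap<A, B>();
    // The second map lets Map(instanceC, mappedB) fill the extra
    // properties onto the already-mapped B instance.
    cfg.CreateMap<C, B>();
});
IMapper _mapper = config.CreateMapper();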
You can do this with the ValueInjecter
a.InjectFrom(b)
.InjectFrom(c)
.InjectFrom<SomeOtherMappingAlgorithmDefinedByYou>(dOrBOrWhateverObject);
I searched long and hard on this question and ended up implementing an extension method that merges objects together.
I describe the steps on my blog (http://twistyvortek.blogspot.com); here's the code:
using System;
namespace Domain.Models
{
public static class ExtendedMethods
{
/// <summary>
/// Merges two object instances together. The primary instance will retain all non-null values, and the secondary will supply values for any null (or default) properties on the primary.
/// </summary>
/// <typeparam name="T">Type Parameter of the merging objects. Both objects must be of the same type.</typeparam>
/// <param name="primary">The object that is receiving merge data (modified)</param>
/// <param name="secondary">The object supplying the merging properties. (unmodified)</param>
/// <returns>The primary object (modified)</returns>
public static T MergeWith<T>(this T primary, T secondary)
{
foreach (var pi in typeof (T).GetProperties())
{
var priValue = pi.GetGetMethod().Invoke(primary, null);
var secValue = pi.GetGetMethod().Invoke(secondary, null);
// Use Equals for the boxed value-type comparison; == would compare references and never match.
if (priValue == null || (pi.PropertyType.IsValueType && priValue.Equals(Activator.CreateInstance(pi.PropertyType))))
{
pi.GetSetMethod().Invoke(primary, new[] {secValue});
}
}
return primary;
}
}
}
Usage includes method chaining, so you can merge multiple objects into one.
What I would do is use AutoMapper to map some of the properties from your various sources onto the same DTO class, and then use this extension method to merge them together.
var Obj1 = Mapper.Map<MyDto>(Instance1); // MyDto stands in for your common destination DTO type
var Obj2 = Mapper.Map<MyDto>(Instance2);
var Obj3 = Mapper.Map<MyDto>(Instance3);
var Obj4 = Mapper.Map<MyDto>(Instance4);
var finalMerge = Obj1.MergeWith(Obj2)
.MergeWith(Obj3)
.MergeWith(Obj4);
Hope this helps someone.
From what I remember, with AutoMapper you have to define your mappings as one input to one output (maybe this has changed since; I haven't used it for many months).
If this is the case, maybe your mapping should be of KeyValuePair<A,C> (or some sort of object composing both A & C) => B.
This way you can have one defined input parameter mapping to your output object, as sketched below.
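A hedged sketch of that composite-input idea, using the classic static API that the rest of this thread uses (the member names are placeholders):
// Map both halves of the pair onto the flattened destination B.
Mapper.CreateMap<KeyValuePair<A, C>, B>()
    .ForMember(d => d.PropertyFromA, opt => opt.MapFrom(s => s.Key.PropertyFromA))
    .ForMember(d => d.PropertyFromC, opt => opt.MapFrom(s => s.Value.PropertyFromC));

var b = Mapper.Map<KeyValuePair<A, C>, B>(new KeyValuePair<A, C>(aInstance, cInstance));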
There is a nice example of merging multiple sources into a destination using AutoMapper in Owain Wraggs' EMC Consulting Blog.
EDIT: To guard against the old "dead-link" syndrome, the essence of the code from Owain's blog is below.
/// <summary>
/// Helper class to assist in mapping multiple entities to one single
/// entity.
/// </summary>
/// <remarks>
/// Code courtesy of Owain Wraggs' EMC Consulting Blog
/// Ref:
/// http://consultingblogs.emc.com/owainwragg/archive/2010/12/22/automapper-mapping-from-multiple-objects.aspx
/// </remarks>
public static class EntityMapper
{
/// <summary>
/// Maps the specified sources to the specified destination type.
/// </summary>
/// <typeparam name="T">The type of the destination</typeparam>
/// <param name="sources">The sources.</param>
/// <returns></returns>
/// <example>
/// Retrieve the person, address and comment entities
/// and map them on to a person view model entity.
///
/// var personId = 23;
/// var person = _personTasks.GetPerson(personId);
/// var address = _personTasks.GetAddress(personId);
/// var comment = _personTasks.GetComment(personId);
///
/// var personViewModel = EntityMapper.Map<PersonViewModel>(person, address, comment);
/// </example>
public static T Map<T>(params object[] sources) where T : class
{
// If there are no sources just return the destination object
if (!sources.Any())
{
return default(T);
}
// Get the initial source and map it
var initialSource = sources[0];
var mappingResult = Map<T>(initialSource);
// Now map the remaining source objects
if (sources.Count() > 1)
{
Map(mappingResult, sources.Skip(1).ToArray());
}
// return the destination object
return mappingResult;
}
/// <summary>
/// Maps the specified sources to the specified destination.
/// </summary>
/// <param name="destination">The destination.</param>
/// <param name="sources">The sources.</param>
private static void Map(object destination, params object[] sources)
{
// If there are no sources just return the destination object
if (!sources.Any())
{
return;
}
// Get the destination type
var destinationType = destination.GetType();
// Iterate through all of the sources...
foreach (var source in sources)
{
// ... get the source type and map the source to the destination
var sourceType = source.GetType();
Mapper.Map(source, destination, sourceType, destinationType);
}
}
/// <summary>
/// Maps the specified source to the destination.
/// </summary>
/// <typeparam name="T">type of teh destination</typeparam>
/// <param name="source">The source.</param>
/// <returns></returns>
private static T Map<T>(object source) where T : class
{
// Get the source and destination types
var destinationType = typeof(T);
var sourceType = source.GetType();
// Get the destination using AutoMapper's Map
var mappingResult = Mapper.Map(source, sourceType, destinationType);
// Return the destination
return mappingResult as T;
}
}
The resultant calling code is nice and succinct.
public ActionResult Index()
{
// Retrieve the person, address and comment entities and
// map them on to a person view model entity
var personId = 23;
var person = _personTasks.GetPerson(personId);
var address = _personTasks.GetAddress(personId);
var comment = _personTasks.GetComment(personId);
var personViewModel = EntityMapper.Map<PersonViewModel>(person, address, comment);
return this.View(personViewModel);
}
