I know it's AutoMapper and not AutoMerge(r), but...
I've started using AutoMapper and need to map A -> B, and to add some properties from C so that B becomes a kind of flat composite of A + C.
Is this possible in AutoMapper, or should I just use AutoMapper to do the heavy lifting and then manually map on the extra properties?
Would this not work?
var mappedB = _mapper.Map<A, B>(aInstance);
_mapper.Map(instanceC, mappedB);
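For the second call to work, a C -> B map must be configured alongside A -> B. A minimal sketch using the instance-based API (the types A, B, C and their properties are invented for illustration):

```csharp
using System;
using AutoMapper;

public class A { public string Name { get; set; } }
public class C { public int Age { get; set; } }
public class B { public string Name { get; set; } public int Age { get; set; } }

public static class MappingDemo
{
    public static B MapComposite(A a, C c)
    {
        // Both maps must be registered; the second Map call overlays C's
        // properties onto the already-mapped B, leaving unmapped members alone.
        var config = new MapperConfiguration(cfg =>
        {
            cfg.CreateMap<A, B>();
            cfg.CreateMap<C, B>();
        });
        var mapper = config.CreateMapper();

        var b = mapper.Map<A, B>(a);
        mapper.Map(c, b);
        return b;
    }
}
```

Note that mapping C onto an existing B does not reset properties that C lacks: members with no matching source stay at whatever the first map produced.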
You can do this with ValueInjecter:
a.InjectFrom(b)
.InjectFrom(c)
.InjectFrom<SomeOtherMappingAlgorithmDefinedByYou>(dOrBOrWhateverObject);
I searched long and hard on this question and ended up implementing an extension method that merges objects together.
I reference the steps on my blog http://twistyvortek.blogspot.com and here's the code:
using System;
namespace Domain.Models
{
public static class ExtendedMethods
{
/// <summary>
/// Merges two object instances together. The primary instance retains all non-null values, and the secondary supplies values for any null (or default) properties of the primary.
/// </summary>
/// <typeparam name="T">Type Parameter of the merging objects. Both objects must be of the same type.</typeparam>
/// <param name="primary">The object that is receiving merge data (modified)</param>
/// <param name="secondary">The object supplying the merging properties. (unmodified)</param>
/// <returns>The primary object (modified)</returns>
public static T MergeWith<T>(this T primary, T secondary)
{
foreach (var pi in typeof (T).GetProperties())
{
var priValue = pi.GetGetMethod().Invoke(primary, null);
var secValue = pi.GetGetMethod().Invoke(secondary, null);
if (priValue == null || (pi.PropertyType.IsValueType && priValue.Equals(Activator.CreateInstance(pi.PropertyType)))) // Equals, not ==: boxed value types compare by reference with ==
{
pi.GetSetMethod().Invoke(primary, new[] {secValue});
}
}
return primary;
}
}
}
Usage includes method chaining so you can merge multiple objects into one.
What I would do is use AutoMapper to map part of the properties from your various sources into the same DTO class, and then use this extension method to merge them together.
var obj1 = Mapper.Map<MyDto>(instance1); // MyDto stands in for your shared DTO type
var obj2 = Mapper.Map<MyDto>(instance2);
var obj3 = Mapper.Map<MyDto>(instance3);
var obj4 = Mapper.Map<MyDto>(instance4);
var finalMerge = obj1.MergeWith(obj2)
.MergeWith(obj3)
.MergeWith(obj4);
Hope this helps someone.
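To make the merge semantics concrete, here is a self-contained sketch. The DTO type and values are invented for illustration; the extension method is repeated (with the boxed-value-type comparison done via Equals) so the snippet compiles on its own:

```csharp
using System;

public class PersonDto
{
    public string Name { get; set; }
    public int Age { get; set; }
}

public static class ExtendedMethods
{
    // Same merge logic as the answer above: fill null/default properties
    // of the primary object from the secondary object.
    public static T MergeWith<T>(this T primary, T secondary)
    {
        foreach (var pi in typeof(T).GetProperties())
        {
            var priValue = pi.GetGetMethod().Invoke(primary, null);
            var secValue = pi.GetGetMethod().Invoke(secondary, null);
            if (priValue == null || (pi.PropertyType.IsValueType && priValue.Equals(Activator.CreateInstance(pi.PropertyType))))
            {
                pi.GetSetMethod().Invoke(primary, new[] { secValue });
            }
        }
        return primary;
    }
}

public static class MergeDemo
{
    public static PersonDto Run()
    {
        var fromSourceA = new PersonDto { Name = "Alice" };            // Age left at default 0
        var fromSourceC = new PersonDto { Name = "ignored", Age = 42 };
        // Name survives from the primary; the default Age is filled from the secondary
        return fromSourceA.MergeWith(fromSourceC);
    }
}
```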
From what I remember, with AutoMapper you have to define your mappings as one input to one output (maybe this has changed since; I haven't used it for many months).
If this is the case, maybe your mapping should be from KeyValuePair<A, C> (or some sort of object composing both A and C) => B.
This way you have a single defined input type mapping to your output object.
There is a nice example of merging multiple sources into a destination using AutoMapper in Owain Wraggs' EMC Consulting Blog.
EDIT: To guard against the old "dead-link" syndrome, the essence of the code in Owain's blog is below.
/// <summary>
/// Helper class to assist in mapping multiple entities to one single
/// entity.
/// </summary>
/// <remarks>
/// Code courtesy of Owain Wraggs' EMC Consulting Blog
/// Ref:
/// http://consultingblogs.emc.com/owainwragg/archive/2010/12/22/automapper-mapping-from-multiple-objects.aspx
/// </remarks>
public static class EntityMapper
{
/// <summary>
/// Maps the specified sources to the specified destination type.
/// </summary>
/// <typeparam name="T">The type of the destination</typeparam>
/// <param name="sources">The sources.</param>
/// <returns></returns>
/// <example>
/// Retrieve the person, address and comment entities
/// and map them on to a person view model entity.
///
/// var personId = 23;
/// var person = _personTasks.GetPerson(personId);
/// var address = _personTasks.GetAddress(personId);
/// var comment = _personTasks.GetComment(personId);
///
/// var personViewModel = EntityMapper.Map<PersonViewModel>(person, address, comment);
/// </example>
public static T Map<T>(params object[] sources) where T : class
{
// If there are no sources, just return the default destination object
if (!sources.Any())
{
return default(T);
}
// Get the initial source and map it
var initialSource = sources[0];
var mappingResult = Map<T>(initialSource);
// Now map the remaining source objects
if (sources.Length > 1)
{
Map(mappingResult, sources.Skip(1).ToArray());
}
// return the destination object
return mappingResult;
}
/// <summary>
/// Maps the specified sources to the specified destination.
/// </summary>
/// <param name="destination">The destination.</param>
/// <param name="sources">The sources.</param>
private static void Map(object destination, params object[] sources)
{
// If there are no sources just return the destination object
if (!sources.Any())
{
return;
}
// Get the destination type
var destinationType = destination.GetType();
// Iterate through all of the sources...
foreach (var source in sources)
{
// ... get the source type and map the source to the destination
var sourceType = source.GetType();
Mapper.Map(source, destination, sourceType, destinationType);
}
}
/// <summary>
/// Maps the specified source to the destination.
/// </summary>
/// <typeparam name="T">type of the destination</typeparam>
/// <param name="source">The source.</param>
/// <returns></returns>
private static T Map<T>(object source) where T : class
{
// Get the source and destination types
var destinationType = typeof(T);
var sourceType = source.GetType();
// Get the destination using AutoMapper's Map
var mappingResult = Mapper.Map(source, sourceType, destinationType);
// Return the destination
return mappingResult as T;
}
}
The resultant calling code is nice and succinct.
public ActionResult Index()
{
// Retrieve the person, address and comment entities and
// map them on to a person view model entity
var personId = 23;
var person = _personTasks.GetPerson(personId);
var address = _personTasks.GetAddress(personId);
var comment = _personTasks.GetComment(personId);
var personViewModel = EntityMapper.Map<PersonViewModel>(person, address, comment);
return this.View(personViewModel);
}
I need to know how I can get the user's current company so that I can use it in the Where clause of a BQL query. I'm not talking about the tenant "company". I'm talking about the companies that are defined on screen CS101500. I'm not seeing anything on the AccessInfo that seems to indicate the current Company. The current Branch ID is there but, I need the company to which the branch belongs.
Using version 21.203
TIA!
This will require the creation of a custom BQL element class.
IBqlOperand - a BQL scalar operand.
IBqlCreator - a BQL element creator, which requires implementing the AppendExpression and Verify methods.
public class CurrentOrganization : IBqlCreator, IBqlOperand
{
#region Methods
#region AppendExpression
/// <summary>
/// Appends the SQL tree expression that corresponds to the BQL command to an SQL tree query.
/// </summary>
/// <param name="exp">The SQL tree expression to be appended.</param>
/// <param name="graph">A graph instance.</param>
/// <param name="info">The information about the BQL command.</param>
/// <param name="selection">The fragment of the BQL command that is translated to an SQL tree expression.</param>
/// <returns></returns>
public bool AppendExpression(ref SQLExpression exp, PXGraph graph, BqlCommandInfo info, BqlCommand.Selection selection)
{
if (graph == null || !info.BuildExpression)
{
return true;
}
PXMutableCollection.AddMutableItem(this);
exp = new SQLConst(graph.BranchOrganizationID());
return true;
}
#endregion
#region Verify
/// <summary>
/// Supplies the element's value when the BQL command is evaluated in memory rather than translated to SQL.
/// </summary>
/// <param name="cache"></param>
/// <param name="item"></param>
/// <param name="pars"></param>
/// <param name="result"></param>
/// <param name="value"></param>
public void Verify(PXCache cache, object item, List<object> pars, ref bool? result, ref object value)
{
value = cache.Graph.BranchOrganizationID();
}
#endregion
#endregion
}
The .BranchOrganizationID() method call is a PXGraph extension method that returns the OrganizationID from a Branch IPrefetchable. Calls to this prefetchable can be replaced with a BQL PXSelect:
PXSelect<Branch, Where<Branch.branchID,
Equal<Current<AccessInfo.branchID>>>>.Select(...)
Example usage would be as follows:
[PXDefault(typeof(Search<Branch.branchID,Where<Branch.organizationID,Equal<CurrentOrganization>>>))]
You can try to use this code
public int? GetOrganizationID(PXGraph graph, int? branchID)
{
int? accountID = ((Organization)OrganizationMaint.FindOrganizationByID(graph, PXAccess.GetParentOrganizationID(branchID))).BAccountID;
return accountID;
}
This is the final code that got me what I needed, per Josh's solution. Van Hoesen for the win!
public class CurrentOrganization : IBqlCreator, IBqlOperand
{
public virtual bool AppendExpression(ref SQLExpression exp, PXGraph graph, BqlCommandInfo info, BqlCommand.Selection selection)
{
if ((graph == null) || (!info.BuildExpression))
return true;
PXMutableCollection.AddMutableItem(this);
//exp = new SQLConst(graph.BranchOrganizationID());
Branch branch = PXSelect<Branch, Where<Branch.branchID, Equal<Current<AccessInfo.branchID>>>>.Select(graph);
exp = new SQLConst(branch.OrganizationID);
return true;
}
public void Verify(PXCache cache, object item, List<object> pars, ref bool? result, ref object value)
{
Branch branch = PXSelect<Branch, Where<Branch.branchID, Equal<Current<AccessInfo.branchID>>>>.Select(cache.Graph);
value = branch.OrganizationID;
//value = cache.Graph.BranchOrganizationID();
}
}
We have a web server running in .NET which uses Quartz to schedule jobs.
The triggers for the jobs are provided in RFC 2445 format, but Quartz uses the CRON format. I would now like to either
A: find a library which can convert my RFC 2445 rule to a CRON rule, or
B: better yet, feed Quartz the RFC rule directly.
In the latter case, I found some Java libraries but not for .NET.
I also tried writing my own library, but I'm stuck on intervals. An RFC 2445 rule can define a biweekly (or triweekly, or n-weekly) job with
FREQ=WEEKLY;BYDAY=MO;INTERVAL=2
i.e. every other Monday. Yet CRON does not seem to have this functionality.
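The every-n-weeks test itself is easy to express in code, even though CRON cannot. A minimal sketch (the helper name and the assumption that the start date falls on the BYDAY weekday are mine; this is not a Quartz API):

```csharp
using System;

public static class NWeekly
{
    // True when candidate falls on the given weekday in an "active" week,
    // counting whole weeks from the start date (assumed to fall on that weekday).
    public static bool IsOccurrence(DateTime start, DateTime candidate, int intervalWeeks, DayOfWeek day)
    {
        if (candidate.DayOfWeek != day) return false;
        var daysSinceStart = (int)(candidate.Date - start.Date).TotalDays;
        if (daysSinceStart < 0) return false;
        // Week 0, interval, 2*interval, ... are the firing weeks
        return (daysSinceStart / 7) % intervalWeeks == 0;
    }
}
```

For FREQ=WEEKLY;BYDAY=MO;INTERVAL=2 starting on a Monday, this fires on the start Monday and every second Monday after it.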
I have a similar requirement; I couldn't find an RFC 5545-compliant library that works with the Quartz scheduler, and ended up implementing a custom trigger myself, following this suggestion.
In my case we are using the Telerik Scheduler control to populate the RRULE, but you could probably do the same with the iCal.Net library as well. This is not the full implementation, but it will get you started; note that the code is UNTESTED.
Another note: "FREQ=WEEKLY;BYDAY=MO;INTERVAL=2" will probably fail if you try to parse it using Telerik RecurrenceRule, since it's missing DTSTART, DTEND etc. This is an example of a recurrence rule string that will not fail: "DTSTART:20210309T050000Z\r\nDTEND:20210309T060000Z\r\nRRULE:FREQ=WEEKLY;BYDAY=TU;INTERVAL=1".
You need to implement the ITrigger interface. A lot of it can be copied from the CronTriggerImpl class in the Quartz.NET source and modified.
public interface IRRuleTrigger : ITrigger
{
string RecurrenceRuleString { get; set; }
}
Then you need an implementation class that inherits from AbstractTrigger:
public class MyTriggerImpl: AbstractTrigger, IRRuleTrigger
{
//implement all members here. Look at CronTriggerImpl class in Quartz.Net source. I'm pasting some of the implementation code but not all.
//...
private RecurrenceRule rRule;
/// <summary>
/// Gets or sets the RRULE expression string.
/// </summary>
/// <value>The expression string, e.g. RRULE:FREQ=WEEKLY;BYDAY=MO;INTERVAL=2.</value>
public string RecurrenceRuleString
{
set
{
var success = RecurrenceRule.TryParse(value, out var parsedRule);
if (success) rRule = parsedRule;
}
get => rRule?.ToString();
}
////////////////////////////////////////////////////////////////////////////
//
// Computation Functions
//
////////////////////////////////////////////////////////////////////////////
/// <summary>
/// Gets the next time to fire after the given time.
/// </summary>
/// <param name="afterTime">The time to compute from.</param>
/// <returns></returns>
protected DateTimeOffset? GetTimeAfter(DateTimeOffset afterTime)
{
// Cast to nullable so Min returns null instead of throwing when no occurrence is later
return rRule?.Occurrences.Where(o => o > afterTime).Select(o => (DateTimeOffset?)o).Min();
}
/// <summary>
/// Returns the time before the given time
/// that this <see cref="IRRuleTrigger" /> will fire.
/// </summary>
/// <param name="date">The date.</param>
/// <returns></returns>
protected DateTimeOffset? GetTimeBefore(DateTimeOffset? date)
{
// Cast to nullable so Max returns null instead of throwing when no occurrence is earlier
return rRule?.Occurrences.Where(o => o < date).Select(o => (DateTimeOffset?)o).Max();
}
}
public class RRuleScheduleBuilder : ScheduleBuilder<IRRuleTrigger>
{
private int misfireInstruction = MisfireInstruction.SmartPolicy;
private RecurrenceRule recurrenceRule;
public override IMutableTrigger Build()
{
MyTriggerImpl myTriggerImpl = new MyTriggerImpl();
myTriggerImpl.MisfireInstruction = misfireInstruction;
myTriggerImpl.RecurrenceRuleString = this.recurrenceRule.ToString();
return myTriggerImpl;
}
/// <summary>
/// Create a RRuleScheduleBuilder with the given string expression, which
/// is presumed to be a valid expression (an ArgumentException is
/// thrown if it is not).
/// </summary>
/// <remarks>
/// </remarks>
/// <param name="recurrenceRuleString">the RRule expression to base the schedule on.</param>
/// <returns>the new RRuleScheduleBuilder</returns>
public static RRuleScheduleBuilder RecurrenceRuleSchedule(string recurrenceRuleString)
{
var success = RecurrenceRule.TryParse(recurrenceRuleString, out var rRule);
if(!success) throw new ArgumentException($"Recurrence Rule String ({recurrenceRuleString}) is invalid.");
return new RRuleScheduleBuilder(rRule);
}
protected RRuleScheduleBuilder(RecurrenceRule rule)
{
this.recurrenceRule = rule ?? throw new ArgumentNullException(nameof(rule), "recurrenceRule cannot be null");
}
}
/// <summary>
/// Extension methods that attach <see cref="RRuleScheduleBuilder" /> to <see cref="TriggerBuilder" />.
/// </summary>
public static class RRuleScheduleTriggerBuilderExtensions
{
public static TriggerBuilder WithRRuleSchedule(this TriggerBuilder triggerBuilder, string recurrenceRuleString)
{
RRuleScheduleBuilder builder = RRuleScheduleBuilder.RecurrenceRuleSchedule(recurrenceRuleString);
return triggerBuilder.WithSchedule(builder);
}
public static TriggerBuilder WithRRuleSchedule(this TriggerBuilder triggerBuilder, string recurrenceRuleString, Action<RRuleScheduleBuilder> action)
{
RRuleScheduleBuilder builder = RRuleScheduleBuilder.RecurrenceRuleSchedule(recurrenceRuleString);
action(builder);
return triggerBuilder.WithSchedule(builder);
}
}
After implementing that, you can create and use your trigger like this:
// Grab the Scheduler instance from the Factory
StdSchedulerFactory factory = new StdSchedulerFactory();
var scheduler = await factory.GetScheduler();
await scheduler.Start();
var job = JobBuilder.Create<MyBusinessClassThatImplementsIJobInterface>()
.WithIdentity("someIdentity", "someGroupName")
.Build();
var trigger = (IRRuleTrigger)TriggerBuilder.Create()
.WithIdentity("someName", "myGroup")
.WithRRuleSchedule(rule.ToString())
.Build();
await scheduler.ScheduleJob(job, trigger);
I seem to recall that in CRM 4 you could serialize an EntityCollection to and from a file on disk. I would like to do this as part of writing both a backup mechanism and a data transfer for a CRM Online instance.
However, this does not seem to work correctly in CRM 2011, as the Attributes collection of each Entity contains a list of empty KeyValuePairOfStringObjects and the FormattedValues collection of each entity contains a list of empty KeyValuePairOfStringStrings.
Therefore the names and values of the entity's attributes have not been included in the serialization, however they definitely have values when viewed in the VS debugger.
Is there a way I can programmatically store these collections to file so that they may later be deserialized and used to restore data to where it came from, or to a parallel target instance, e.g. for testing offline?
Here is my version of the serialization method proposed by @bigtv:
private string Serialize(EntityCollection records)
{
string retVal = null;
using(var tw = new StringWriter())
using (var xw = new XmlTextWriter(tw))
{
var ser = new DataContractSerializer(typeof(EntityCollection));
ser.WriteObject(xw, records);
retVal = tw.ToString();
}
return retVal;
}
I had the exact same requirement to save the raw EntityCollection response back from a CRM FetchRequest. I got the same result as you from the standard XmlSerializer; the trick is to use the same serializer that CRM is using under the hood.
Take a look at the DataContractSerializer class: MSDN reference is here
This is the helper class I then ended up writing:
class Serialiser
{
/// <summary>
/// The xml serialiser instance.
/// </summary>
private readonly DataContractSerializer dataContractSerialiser;
/// <summary>
/// Initializes a new instance of the <see cref="SerialiserService.Serialiser"/> class.
/// </summary>
/// <param name="typeToSerialise">The type to serialise.</param>
public Serialiser(Type typeToSerialise)
{
this.dataContractSerialiser = new DataContractSerializer(typeToSerialise);
}
/// <summary>
/// Serialises the specified candidate.
/// </summary>
/// <param name="candidate">The candidate.</param>
/// <returns>A serialised representation of the specified candidate.</returns>
public byte[] Serialise(object candidate)
{
byte[] output;
using (var ms = new MemoryStream())
{
this.dataContractSerialiser.WriteObject(ms, candidate);
var numberOfBytes = ms.Length;
output = new byte[numberOfBytes];
// Note: Only copy the exact stream length to avoid capturing trailing null bytes.
Array.Copy(ms.GetBuffer(), output, numberOfBytes);
}
return output;
}
/// <summary>
/// Deserialises the specified serialised instance.
/// </summary>
/// <param name="serialisedInstance">The serialised instance.</param>
/// <returns>A deserialised instance of the specified type.</returns>
public object Deserialise(byte[] serialisedInstance)
{
object output;
using (var ms = new MemoryStream(serialisedInstance))
using (var reader = XmlDictionaryReader.CreateTextReader(ms, new XmlDictionaryReaderQuotas()))
{
output = this.dataContractSerialiser.ReadObject(reader);
}
return output;
}
}
Usage:
new Serialiser(typeof(EntityCollection));
You can then read or write the byte[] to disk.
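Since DataContractSerializer is part of the base class library, the round trip can be demonstrated without the CRM SDK. A self-contained sketch with a stand-in [DataContract] type (EntityCollection itself needs the SDK assemblies):

```csharp
using System;
using System.IO;
using System.Runtime.Serialization;
using System.Xml;

// Stand-in for a CRM type; attributes mark what DataContractSerializer should emit
[DataContract(Namespace = "")]
public class PersonRecord
{
    [DataMember] public string Name { get; set; }
    [DataMember] public int Age { get; set; }
}

public static class RoundTripDemo
{
    public static PersonRecord Run()
    {
        var serialiser = new DataContractSerializer(typeof(PersonRecord));

        byte[] bytes;
        using (var ms = new MemoryStream())
        {
            serialiser.WriteObject(ms, new PersonRecord { Name = "Alice", Age = 30 });
            // ToArray copies exactly Length bytes, like the Array.Copy in the helper above
            bytes = ms.ToArray();
        }

        using (var ms = new MemoryStream(bytes))
        using (var reader = XmlDictionaryReader.CreateTextReader(ms, new XmlDictionaryReaderQuotas()))
        {
            return (PersonRecord)serialiser.ReadObject(reader);
        }
    }
}
```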
In CRM 4.0 I had this generic method in my repository:
public T GetEntityById(Guid id)
{
var entities =
from e in m_context.GetEntities(typeof (T).Name)
where e.GetPropertyValue<Guid>(IdentityFieldName) == id
select e;
return (T) entities.FirstOrDefault();
}
But what about CRM 2011? ICrmEntity with its GetPropertyValue method is gone...
What is alternate to get generic entity by ID?
something like (T) m_context.Retrieve(typeof (T).Name, id, new ColumnSet()).
See here
Found this question when looking for the same answer, this is how I ended up solving it.
public T GetEntityByID<T>(Guid guid) where T : Entity
{
return (T) (_organizationService.Retrieve((typeof(T)).Name, guid, new ColumnSet()));
}
You really want to be using the ToEntity method rather than casting. Sometimes typeof(T).Name will have capitalization differences, so I wrote a helper function as well:
/// <summary>
/// Retrieves the Entity of the given type with the given Id, with the given columns
/// </summary>
/// <typeparam name="T">An early bound Entity Type</typeparam>
/// <param name="service">open IOrganizationService</param>
/// <param name="id">Primary Key of Entity</param>
/// <param name="columnSet">Columns to retrieve</param>
/// <returns></returns>
public static T GetEntity<T>(this IOrganizationService service, Guid id, ColumnSet columnSet)
where T : Entity
{
return service.Retrieve(EntityHelper.GetEntityLogicalName<T>(), id, columnSet).ToEntity<T>();
}
public static string GetEntityLogicalName<T>() where T : Entity
{
return GetEntityLogicalName(typeof(T));
}
public static string GetEntityLogicalName(Type type)
{
var field = type.GetField("EntityLogicalName");
if (field == null)
{
if (type == typeof(Entity))
{
return "entity";
}
else
{
throw new Exception("Type " + type.FullName + " does not contain an EntityLogicalName Field");
}
}
return (string)field.GetValue(null);
}
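The lookup above relies on the generated early-bound proxy classes exposing a public static EntityLogicalName field. A self-contained stand-in showing the same reflection call (the Account class here is a mock, not the SDK type):

```csharp
using System;

// Stand-in for an early-bound CRM proxy class, which declares this public constant
public class Account
{
    public const string EntityLogicalName = "account";
}

public static class ReflectionDemo
{
    public static string GetLogicalName(Type type)
    {
        // GetField with default binding flags finds public static fields, including const
        var field = type.GetField("EntityLogicalName");
        if (field == null)
            throw new Exception("Type " + type.FullName + " does not contain an EntityLogicalName field");
        // Pass null because the field is static; no instance is needed
        return (string)field.GetValue(null);
    }
}
```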
So I have a Windows service that performs a workflow process. The back end uses the Repository and Unit of Work patterns and Unity on top of Entity Framework, with the entities class generated from the edmx. I won't go into a whole lot of detail as it's not necessary, but basically there are five steps that the workflow goes through. A particular process might be at any stage at any point in time (in order, of course). Step one just generates data for step two, which validates the data via a long-running process to another server. Then step three generates a PDF from that data. For each stage we spawn a timer; however, it is configurable to allow more than one timer to be spawned for each stage. Therein lies the problem. When I add a processor to a particular stage, it randomly throws the following error:
The connection was not closed. The connection's current state is connecting.
Reading up on this, it seems obvious that it is happening because the context is being accessed from two threads at once. But this is where it throws me for a loop. All of the information I can find states that we should be using a context instance per thread, which, as far as I can tell, I am doing (see the code below). I am not using the singleton pattern or statics or anything, so I am not really sure why this is happening or how to avoid it. I have posted the relevant bits of my code below for your review.
The base repository:
public class BaseRepository
{
/// <summary>
/// Initializes a repository and registers with a <see cref="IUnitOfWork"/>
/// </summary>
/// <param name="unitOfWork"></param>
public BaseRepository(IUnitOfWork unitOfWork)
{
if (unitOfWork == null) throw new ArgumentNullException("unitOfWork");
UnitOfWork = unitOfWork;
}
/// <summary>
/// Returns a <see cref="DbSet"/> of entities.
/// </summary>
/// <typeparam name="TEntity">Entity type the dbset needs to return.</typeparam>
/// <returns></returns>
protected virtual DbSet<TEntity> GetDbSet<TEntity>() where TEntity : class
{
return Context.Set<TEntity>();
}
/// <summary>
/// Sets the state of an entity.
/// </summary>
/// <param name="entity">object to set state.</param>
/// <param name="entityState"><see cref="EntityState"/></param>
protected virtual void SetEntityState(object entity, EntityState entityState)
{
Context.Entry(entity).State = entityState;
}
/// <summary>
/// Unit of work controlling this repository.
/// </summary>
protected IUnitOfWork UnitOfWork { get; set; }
/// <summary>
/// Attaches a detached entity by marking it as modified.
/// </summary>
/// <param name="entity">Entity to attach.</param>
protected virtual void Attach(object entity)
{
if (Context.Entry(entity).State == EntityState.Detached)
Context.Entry(entity).State = EntityState.Modified;
}
protected virtual void Detach(object entity)
{
Context.Entry(entity).State = EntityState.Detached;
}
/// <summary>
/// Provides access to the ef context we are working with
/// </summary>
internal StatementAutoEntities Context
{
get
{
return (StatementAutoEntities)UnitOfWork;
}
}
}
StatementAutoEntities is the autogenerated EF class.
The repository implementation:
public class ProcessingQueueRepository : BaseRepository, IProcessingQueueRepository
{
/// <summary>
/// Creates a new repository and associated with a <see cref="IUnitOfWork"/>
/// </summary>
/// <param name="unitOfWork"></param>
public ProcessingQueueRepository(IUnitOfWork unitOfWork) : base(unitOfWork)
{
}
/// <summary>
/// Create a new <see cref="ProcessingQueue"/> entry in database
/// </summary>
/// <param name="Queue">
/// <see cref="ProcessingQueue"/>
/// </param>
public void Create(ProcessingQueue Queue)
{
GetDbSet<ProcessingQueue>().Add(Queue);
UnitOfWork.SaveChanges();
}
/// <summary>
/// Updates a <see cref="ProcessingQueue"/> entry in database
/// </summary>
/// <param name="queue">
/// <see cref="ProcessingQueue"/>
/// </param>
public void Update(ProcessingQueue queue)
{
//Attach(queue);
UnitOfWork.SaveChanges();
}
/// <summary>
/// Delete a <see cref="ProcessingQueue"/> entry in database
/// </summary>
/// <param name="Queue">
/// <see cref="ProcessingQueue"/>
/// </param>
public void Delete(ProcessingQueue Queue)
{
GetDbSet<ProcessingQueue>().Remove(Queue);
UnitOfWork.SaveChanges();
}
/// <summary>
/// Gets a <see cref="ProcessingQueue"/> by its unique Id
/// </summary>
/// <param name="id"></param>
/// <returns></returns>
public ProcessingQueue GetById(int id)
{
return (from e in Context.ProcessingQueue_SelectById(id) select e).FirstOrDefault();
}
/// <summary>
/// Gets a list of <see cref="ProcessingQueue"/> entries by status
/// </summary>
/// <param name="status"></param>
/// <returns></returns>
public IList<ProcessingQueue> GetByStatus(int status)
{
return (from e in Context.ProcessingQueue_SelectByStatus(status) select e).ToList();
}
/// <summary>
/// Gets a list of all <see cref="ProcessingQueue"/> entries
/// </summary>
/// <returns></returns>
public IList<ProcessingQueue> GetAll()
{
return (from e in Context.ProcessingQueue_Select() select e).ToList();
}
/// <summary>
/// Gets the next pending item id in the queue for a specific work
/// </summary>
/// <param name="serverId">Unique id of the server that will process the item in the queue</param>
/// <param name="workTypeId">type of <see cref="WorkType"/> we are looking for</param>
/// <param name="operationId">if defined only operations of the type indicated are considered.</param>
/// <returns>Next pending item in the queue for the work type or null if no pending work is found</returns>
public int GetNextPendingItemId(int serverId, int workTypeId, int? operationId)
{
var id = Context.ProcessingQueue_GetNextPending(serverId, workTypeId, operationId).SingleOrDefault();
return id.HasValue ? id.Value : -1;
}
/// <summary>
/// Returns a list of <see cref="ProcessingQueueStatus_dto"/>s objects with all
/// active entries in the queue
/// </summary>
/// <returns></returns>
public IList<ProcessingQueueStatus_dto> GetActiveStatusEntries()
{
return (from e in Context.ProcessingQueueStatus_Select() select e).ToList();
}
/// <summary>
/// Bumps an entry to the front of the queue
/// </summary>
/// <param name="processingQueueId"></param>
public void Bump(int processingQueueId)
{
Context.ProcessingQueue_Bump(processingQueueId);
}
}
We use Unity for dependency injection, some calling code for example:
#region Members
private readonly IProcessingQueueRepository _queueRepository;
#endregion
#region Constructors
/// <summary>Initializes ProcessingQueue services with repositories</summary>
/// <param name="queueRepository"><see cref="IProcessingQueueRepository"/></param>
public ProcessingQueueService(IProcessingQueueRepository queueRepository)
{
Check.Require(queueRepository != null, "processingQueueRepository is required");
_queueRepository = queueRepository;
}
#endregion
The code in the windows service that kicks off the timers is as follows:
_staWorkTypeConfigLock.EnterReadLock();
foreach (var timer in from operation in (from o in _staWorkTypeConfig.WorkOperations where o.UseQueueForExecution && o.AssignedProcessors > 0 select o)
let interval = operation.SpawnInternval < 30 ? 30 : operation.SpawnInternval
select new StaTimer
{
Interval = _runImmediate ? 5000 : interval*1000,
Operation = (ProcessingQueue.RequestedOperation) operation.OperationId
})
{
timer.Elapsed += ApxQueueProcessingOnElapsedInterval;
timer.Enabled = true;
Logger.DebugFormat("Queue processing for operations of type {0} will execute every {1} seconds", timer.Operation, timer.Interval/1000);
}
_staWorkTypeConfigLock.ExitReadLock();
StaTimer is just a wrapper over Timer that adds the operation type. ApxQueueProcessingOnElapsedInterval then basically just assigns work to the processors based on the operation.
I will also add a bit of the ApxQueueProcessingOnElapsedInterval code where we are spawning tasks.
_staTasksLock.EnterWriteLock();
for (var x = 0; x < tasksNeeded; x++)
{
var t = new Task(obj => ProcessStaQueue((QueueProcessConfig) obj),
CreateQueueProcessConfig(true, operation), _cancellationToken);
_staTasks.Add(new Tuple<ProcessingQueue.RequestedOperation, DateTime, Task>(operation, DateTime.Now,t));
t.Start();
Thread.Sleep(300); // so there are fewer conflicts fighting for jobs in the queue table
}
_staTasksLock.ExitWriteLock();
It looks like your service, repository and context are supposed to live for the whole lifetime of your application, but that is incorrect. You can have multiple timers triggered at the same time, which means multiple threads will use your service in parallel and execute its code concurrently. The context is therefore shared among multiple threads, and that throws, because the context is not thread-safe.
The only option is to use a new context instance for each operation you execute. You can, for example, change your classes to accept a context factory instead of a context, and obtain a fresh context for each operation.
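The factory shape could look like the following sketch. The type names are invented, and FakeContext stands in for the real EF context (the actual fix would construct StatementAutoEntities inside Create):

```csharp
using System;

// Stand-in for the EF context; the real factory would new up StatementAutoEntities here
public class FakeContext : IDisposable
{
    public static int Created; // counts instances, to show one context per operation
    public FakeContext() { Created++; }
    public void Dispose() { }
}

public interface IContextFactory
{
    FakeContext Create();
}

public class ContextFactory : IContextFactory
{
    public FakeContext Create() { return new FakeContext(); }
}

public class QueueRepositorySketch
{
    private readonly IContextFactory _factory;

    // Inject the factory (e.g. via Unity) instead of a shared context/unit of work
    public QueueRepositorySketch(IContextFactory factory) { _factory = factory; }

    public void DoOperation()
    {
        // Each operation gets its own short-lived context, so concurrent
        // timer threads never share one context instance
        using (var context = _factory.Create())
        {
            // context.Set<ProcessingQueue>().Add(...); context.SaveChanges();
        }
    }
}
```

The key design change is that the context's lifetime shrinks from "application" to "operation", which removes the shared mutable state that caused the error.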
In case this helps anyone:
In my case, I did ensure that the non-thread-safe DbContext had a transient lifetime (using Ninject), but it was still causing concurrency issues! It turns out that in some of my custom ActionFilters I used dependency injection to get access to the DbContext in the constructor, but ActionFilters have a lifetime that keeps them instantiated across multiple requests, so the context didn't get recreated.
I fixed it by manually resolving the dependency in the OnActionExecuting method instead of in the constructor so that it is a fresh instance every time.
In my case I was getting this problem because I forgot the await keyword before one of my DAL function calls. Putting await there resolved it.