We use the Logging Application Block in our ASP.NET 2.0 application; it is invoked in the following way:
public class BaseLogEntry : LogEntry
{
    public void CloseLog()
    {
        try
        {
            // Dispose the writer so the lock on the log file is released.
            Logger.Writer.Dispose();
        }
        catch (Exception)
        { }
    }
}

public class GeneralLogEntry : BaseLogEntry
{
    /// <summary>
    /// Creates a general log entry with the default priority.
    /// </summary>
    /// <param name="message">The message to log.</param>
    public GeneralLogEntry(string message) : this(message, 2) { }

    /// <summary>
    /// Creates a general log entry with the given priority.
    /// </summary>
    /// <param name="message">The message to log.</param>
    /// <param name="priority">The priority of the entry.</param>
    public GeneralLogEntry(string message, int priority) : base()
    {
        Categories.Add("General");
        Priority = priority;
        Severity = System.Diagnostics.TraceEventType.Information;
        Message = message;
        CloseLog();
    }
}
When we increase the number of worker processes in IIS above 1, the log file names are prefixed with a unique GUID, like this:
068aa49c-2bf6-4278-8f91-c6b65fd1ea3aApplication.log
There are several log files generated by the app, all written by listeners of type "Rolling Flat File Trace Listener".
Is there a way to avoid this?
Originally from: http://ykm001.springnote.com/pages/6348311?print=1 (now a dead link redirecting to a game site):
Known problems
A GUID might be prepended to the filename of the log file
A RollingFileTraceListener instance "owns" the log file it is writing to and
locks it for exclusive write access when it writes the first log entry. It
keeps the file locked until the instance is disposed. If another
RollingFileTraceListener instance is created that points to the same file,
before the first instance is disposed, the second instance cannot open this
file for writing and will write to a new file with a GUID prepended to its
name.
The RollingFileTraceListener indirectly derives from
System.Diagnostics.TextWriterTraceListener. This class changes the filename to
include a GUID when the file with the specified filename cannot be written to.
This is because RollingFileTraceListener indirectly calls the EnsureWriter
method on its base class TextWriterTraceListener. .NET Reflector shows this
code for System.Diagnostics.TextWriterTraceListener.EnsureWriter() in
System.dll (slightly rewritten to improve clarity):
try
{
    this.writer = new StreamWriter(fileNameWithPath, true, encoding1, 0x1000);
    break;
}
catch (IOException)
{
    Guid guid1 = Guid.NewGuid();
    fileName = guid1.ToString() + fileName;
    fileNameWithPath = Path.Combine(folderPath, fileName);
}
Basically it seems to be a known problem; there is a workaround at
http://entlibcontrib.codeplex.com/workitem/7472
Using the NoGUIDRollingFlatFileListener doesn't help; the problem still occurs (even after much time spent recompiling the Logging Application Block). It might well be fixed in EntLib 4, but I'm stuck with EntLib 3.1.
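One crude workaround that sidesteps the collision entirely is to give every worker process its own log file, so two listeners never compete for the same path. A minimal sketch, assuming you can influence the listener's fileName when the process starts (the helper below is hypothetical and not part of EntLib):

using System.Diagnostics;
using System.IO;

public static class LogPaths
{
    // Hypothetical helper: turn C:\logs\Application.log into C:\logs\Application.<pid>.log
    // so each IIS worker process writes to its own file and never hits the lock collision
    // that triggers the GUID-prefixed fallback file.
    public static string PerProcess(string basePath)
    {
        string directory = Path.GetDirectoryName(basePath);
        string name = Path.GetFileNameWithoutExtension(basePath);
        string extension = Path.GetExtension(basePath);
        int pid = Process.GetCurrentProcess().Id;
        return Path.Combine(directory, name + "." + pid + extension);
    }
}

The downside is one file per worker process, but the names stay predictable instead of getting a random GUID prefix.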
Perhaps I should look at alternative logging mechanisms.
Related
We're implementing a new web application in ASP.NET Core 2.0, and I'd like to be able to restrict actions based on a combination of things, rather than one particular user role (like admin, power user, etc.). The problem space looks like this:
Each user has a particular 'home' facility where they hold default permissions based on their job function.
CRUD actions each have a particular permission associated with them.
Each permission can be granted at any number of facilities, just one, or none at all (or some combination thereof).
A particular user could have different permissions at different facilities. For example, a regional manager could have View and Order permissions at all of the facilities they work with, but only the View permission for facilities in neighboring regions.
Currently, we use a home-grown solution that's getting out of hand to limit permissions, but it only works with a user's 'home' facility. We can't, for example, grant someone who orders inventory from another facility permission to view that facility's inventory.
We've attempted to just apply roles for each action in each facility (yikes!) that are generated on the fly, but this led to some users getting permissions they shouldn't have. Not to mention, it's a nightmare to maintain.
How can I extend the Roles Functionality in ASP.NET Core 2.0 to allow my users to have different permissions in different facilities without having to create roles for each action at each facility?
I'd recommend using policies. They give you much finer-grained control. Basically, you start with one or more "requirements". For example, you might start with something like:
public class ViewFacilitiesRequirement : IAuthorizationRequirement
{
}
public class OrderFacilitiesRequirement : IAuthorizationRequirement
{
}
These mostly function as attachments for authorization handlers, so they're pretty basic. The meat comes in those authorization handlers, where you define what meeting the requirement actually means.
public class ViewFacilitiesHandler : AuthorizationHandler<ViewFacilitiesRequirement>
{
    protected override async Task HandleRequirementAsync(AuthorizationHandlerContext context, ViewFacilitiesRequirement requirement)
    {
        // logic here, if user is authorized:
        context.Succeed(requirement);
    }
}
Authorization handlers are dependency injected, so you can inject things like your DbContext, UserManager<TUser>, etc. into them in the normal way and then query those sources to determine whether or not the user is authorized.
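For example, a handler that needs data access could take its dependencies through the constructor. This is only a sketch; FacilityDbContext, UserFacilityPermissions and the "View" permission name are assumptions about your model:

using System.Threading.Tasks;
using Microsoft.AspNetCore.Authorization;
using Microsoft.EntityFrameworkCore;

public class ViewFacilitiesHandler : AuthorizationHandler<ViewFacilitiesRequirement>
{
    private readonly FacilityDbContext _db; // hypothetical EF Core context

    public ViewFacilitiesHandler(FacilityDbContext db)
    {
        _db = db;
    }

    protected override async Task HandleRequirementAsync(
        AuthorizationHandlerContext context, ViewFacilitiesRequirement requirement)
    {
        var userName = context.User.Identity?.Name;
        if (userName == null)
            return; // not authenticated, leave the requirement unmet

        // Hypothetical query: does the user hold the View permission at any facility?
        bool canView = await _db.UserFacilityPermissions
            .AnyAsync(p => p.UserName == userName && p.Permission == "View");

        if (canView)
            context.Succeed(requirement);
    }
}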
Once you've got some requirements and handlers, you need to register them:
services.AddAuthorization(o =>
{
    o.AddPolicy("ViewFacilities", p =>
        p.Requirements.Add(new ViewFacilitiesRequirement()));
});

services.AddScoped<IAuthorizationHandler, ViewFacilitiesHandler>();
In case it's not obvious, a policy can utilize multiple requirements. All will have to pass for the policy to pass. The handlers just need to be registered with the DI container. They are applied automatically based on the type(s) of requirements they apply to.
Then, on the controller or action that needs this permission:
[Authorize(Policy = "ViewFacilities")]
This is a very basic example, of course. You can make handlers that can work with multiple different requirements, and you can build out your requirements a bit more so you don't need as many of them. Or you may prefer to be more explicit and have requirements/handlers for each specific scenario. It's entirely up to you.
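As a rough sketch of the "one handler, several requirements" variant, you can implement IAuthorizationHandler directly and inspect all pending requirements in one place; the claim-based checks here are placeholders for whatever your real permission lookup is:

using System.Linq;
using System.Security.Claims;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Authorization;

public class FacilityPermissionHandler : IAuthorizationHandler
{
    public Task HandleAsync(AuthorizationHandlerContext context)
    {
        // Walk every requirement still pending for the current policy.
        foreach (var requirement in context.PendingRequirements.ToList())
        {
            if (requirement is ViewFacilitiesRequirement && HasPermission(context.User, "View"))
                context.Succeed(requirement);
            else if (requirement is OrderFacilitiesRequirement && HasPermission(context.User, "Order"))
                context.Succeed(requirement);
        }

        return Task.CompletedTask;
    }

    // Placeholder check; replace with a query against your permission store.
    private static bool HasPermission(ClaimsPrincipal user, string permission) =>
        user.HasClaim("permission", permission);
}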
For more detail, see the documentation.
You could create an AuthorizationFilterAttribute and assign it to each API endpoint or Controller class. This will allow you to assign case-by-case permissions to each of your users, then you just need a table containing specific permission IDs.
Here's an implementation that pulls username from basic authentication. You can change authentication to your implementation by changing the OnAuthorization method to retrieve user details from wherever you store it.
/// <summary>
/// Generic Basic Authentication filter that checks if the user is allowed
/// to access a specific permission.
/// </summary>
[AttributeUsage(AttributeTargets.Class | AttributeTargets.Method, AllowMultiple = false)]
public class BasicAuthenticationFilter : AuthorizationFilterAttribute
{
    private readonly bool _active = true;
    private readonly int _permissionId;

    public BasicAuthenticationFilter(int permissionId)
    {
        _permissionId = permissionId;
    }

    /// <summary>
    /// Overridden constructor to allow explicit disabling of this
    /// filter's behavior. Pass false to disable (same as no filter
    /// but declarative).
    /// </summary>
    public BasicAuthenticationFilter(bool active) => _active = active;

    /// <summary>
    /// Override of the Web API filter method to handle Basic Authentication.
    /// </summary>
    public override void OnAuthorization(HttpActionContext actionContext)
    {
        if (!_active) return;

        BasicAuthenticationIdentity identity = ParseAuthorizationHeader(actionContext);
        if (identity == null || !OnAuthorizeUser(identity.Name, identity.Password, actionContext))
        {
            Challenge(actionContext);
            return;
        }

        Thread.CurrentPrincipal = new GenericPrincipal(identity, null);
        base.OnAuthorization(actionContext);
    }

    /// <summary>
    /// Base implementation for user authentication; you'll want to override this,
    /// implementing requirements on a case-by-case basis.
    /// </summary>
    protected virtual bool OnAuthorizeUser(string username, string password, HttpActionContext actionContext)
    {
        if (!Authorizer.Validate(username, password)) // check if user is authentic
            return false;

        using (var db = new DbContext())
        {
            // Entity shape assumed: each UserPermissions row exposes the permission ids granted to that user.
            return db.UserPermissions
                .Where(user => user.UserName == username)
                .Any(user => user.Permission.Contains(_permissionId));
        }
    }

    /// <summary>
    /// Parses the Authorization header and creates user credentials.
    /// </summary>
    protected virtual BasicAuthenticationIdentity ParseAuthorizationHeader(HttpActionContext actionContext)
    {
        string authHeader = null;
        System.Net.Http.Headers.AuthenticationHeaderValue auth = actionContext.Request.Headers.Authorization;
        if (auth?.Scheme == "Basic")
            authHeader = auth.Parameter;
        if (String.IsNullOrEmpty(authHeader))
            return null;

        authHeader = Encoding.Default.GetString(Convert.FromBase64String(authHeader));
        string[] tokens = authHeader.Split(':');
        if (tokens.Length < 2)
            return null;

        return new BasicAuthenticationIdentity(tokens[0], tokens[1]);
    }

    /// <summary>
    /// Sends the Authentication Challenge response.
    /// </summary>
    private void Challenge(HttpActionContext actionContext)
    {
        var host = actionContext.Request.RequestUri.DnsSafeHost;
        actionContext.Response = actionContext.Request.CreateResponse(HttpStatusCode.Unauthorized);
        actionContext.Response.Headers.Add("WWW-Authenticate", String.Format("Basic realm=\"{0}\"", host));
    }
}
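The BasicAuthenticationIdentity type referenced above isn't shown in the answer; a minimal sketch of what it might look like (an assumption: just a GenericIdentity that also carries the parsed password):

using System.Security.Principal;

public class BasicAuthenticationIdentity : GenericIdentity
{
    public BasicAuthenticationIdentity(string name, string password)
        : base(name, "Basic") // authentication type reported by the identity
    {
        Password = password;
    }

    // Carries the password parsed from the Authorization header so OnAuthorizeUser can validate it.
    public string Password { get; private set; }
}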
Then you just add the filter to your API methods:
private const int _orderPermission = 450;

/// <summary>
/// Submit a new order.
/// </summary>
[HttpPost, BasicAuthenticationFilter(_orderPermission)]
public IHttpActionResult Submit([FromBody]OrderData value)
{
    Task.Run(() => ProcessInbound());
    return Ok();
}
I upgraded to Azure SDK 2.5 and switched to semantic logging with EventSources.
Logging works locally with a custom EventListener.
When deployed, logs are written to a storage table, but only the EventId, Pid, Tid etc. are populated, the really interesting fields (Message, Task, Keyword, Opcode) are left blank.
The diagnostics infrastructure log is full of errors with regards to ETW, but I don't know what to make of them:
Failed to load backup EventSource manifest file C:\Resources\{13b7ec61-6424-d4d3-9972-a83e58d8d6bb}\directory\f71b19461fcf494d89d3717b3a13cadf.something.WorkerRole.DiagnosticStore\WAD0103\Configuration\EventSource_Manifest_fe06b63d-39aa-5419-0529-18c4dacf4f68_Ver_20.backup.xml;
EventSource events will be logged without a proper schema until provider sends the manifest packets
Load manifest file failed for C:\Resources\{13b7ec61-6424-d4d3-9972-a83e58d8d6bb}\directory\f71b19461fcf494d89d3717b3a13cadf.something.WorkerRole.DiagnosticStore\WAD0103\Configuration\EventSource_Manifest_fe06b63d-39aa-5419-0529-18c4dacf4f68_Ver_20.xml
Failed to manage manifest version for file C:\Resources\{13b7ec61-6424-d4d3-9972-a83e58d8d6bb}\directory\f71b19461fcf494d89d3717b3a13cadf.something.WorkerRole.DiagnosticStore\WAD0103\Configuration\EventSource_Manifest_fe06b63d-39aa-5419-0529-18c4dacf4f68_Pid_3436.xml
Failed to process EventSource manifest event GUID:fe06b63d-39aa-5419-0529-18c4dacf4f68, event id:0xFFFE
Change in the number of events lost since the last sample: EventsCaptured=2 EventsLogged=1 EventsLost=0
I do not use a manifest file and specify the EventSource via class / attribute name:
<EtwEventSourceProviderConfiguration scheduledTransferPeriod="PT3M" scheduledTransferLogLevelFilter="Information" provider="something.Core">
<DefaultEvents eventDestination="CoreEvents" />
</EtwEventSourceProviderConfiguration>
I must be missing something, but I do not know what.
The remaining diagnostic services all work (infrastructure logs, performance counters, etc.).
The EventId that is being logged is the correct one, but all the important information of the log is missing, I suppose because of an incomplete configuration?
Edit: here is my EventSource code. I won't post the entire thing because it's quite large. I use another type that calls the EventSource methods and handles the formatting of parameters (only if the source is enabled at that level). Most method arguments are of type string; there are no objects or other complex types passed around (the other type takes care of that).
[EventSource(Name = "something.Core")]
public sealed class CoreEventSource : EventSource {
private static readonly CoreEventSource SoleInstance = new CoreEventSource();
static CoreEventSource() {}
private CoreEventSource() {}
public static CoreEventSource Instance {
get { return SoleInstance; }
}
public static EventKeywords AllKeywords = (EventKeywords)(-1);
public class Keywords {
public const EventKeywords None = (EventKeywords)(1 << 1);
public const EventKeywords Infrastructure = (EventKeywords)(1 << 2);
[...]
}
public class Tasks {
public const EventTask None = EventTask.None;
// generic operations
public const EventTask Create = (EventTask)11;
public const EventTask Update = (EventTask)12;
public const EventTask Delete = (EventTask)13;
public const EventTask Get = (EventTask)14;
public const EventTask Put = (EventTask)15;
public const EventTask Remove = (EventTask)16;
public const EventTask Process = (EventTask)17;
}
[Event(1, Message = "Initialization of {0} failed: {1}.", Level = EventLevel.Critical, Keywords = Keywords.Infrastructure)]
public void CriticalInitializationFailure(string component, string details, string exception) {
this.WriteEvent(1, component, details, exception);
}
[Event(2, Message = "[Role '{0}'] Startup: {1}", Level = EventLevel.Informational, Keywords = Keywords.Infrastructure)]
public void RoleStartup(string roleName, string message) {
this.WriteEvent(2, roleName, message);
}
[Event(3, Message = "[Role '{0}'] Stop failed: {1}.", Level = EventLevel.Error, Keywords = Keywords.Infrastructure)]
public void RoleStopFailed(string roleName, string details, string exception) {
this.WriteEvent(3, roleName, details, exception);
}
[Event(4, Message = "An unhandled exception occurred.", Level = EventLevel.Critical, Keywords = Keywords.Infrastructure)]
public void UnhandledException(string exception) {
this.WriteEvent(4, exception);
}
[Event(5, Message = "An unobserved exception occurred in a faulted task.", Level = EventLevel.Critical, Keywords = Keywords.Infrastructure)]
public void UnobservedTaskException(string exception) {
this.WriteEvent(5, exception);
}
[...]
}
Turns out there were quite a few problems with my EventSource. The first thing I'd recommend to anyone working with ETW is to use the Microsoft TraceEvent Library from NuGet, even if you use System.Diagnostics.Tracing, because it comes with a tool that will verify your EventSource code and notify you about problems.
I had to fix the following:
EventSource names must not contain a period ('.').
Task/Opcode pairs must be unique within an EventSource
One must not declare a None field in a custom Keywords or Tasks enumeration
Hope this is of some use to anyone who encounters a similar problem.
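For reference, a version of the declarations above with those fixes applied might look roughly like this (a sketch; the hyphenated provider name and the exact keyword values are assumptions):

[EventSource(Name = "something-Core")] // no '.' in the EventSource name
public sealed class CoreEventSource : EventSource
{
    public static class Keywords
    {
        // no 'None' member; start directly with real keywords
        public const EventKeywords Infrastructure = (EventKeywords)(1 << 1);
    }

    public static class Tasks
    {
        // no 'None' member; keep each Task/Opcode pair used by the events unique
        public const EventTask Create = (EventTask)11;
        public const EventTask Update = (EventTask)12;
    }

    // ... event methods unchanged ...
}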
Another thing that should be taken care of (this is what fixed our case):
- EventSources should only have a Name or a Guid, not both.
In our case, having both caused
- the EtwEventSourceProvider to not log anything, and
- the EtwEventManifestProvider to log the same way you outlined, with empty data points.
Double clicking on a file in Explorer correctly adds the file to the recent list for my application and I can open it again from the popup menu on my application which I have pinned on the start menu.
I've got a special file manager in the application so I am using SHAddToRecentDocs to add the projects opened in the application to recent files. But it just doesn't happen and I can't find what the problem is.
Here's what I got in the registry:
HKEY_CLASSES_ROOT\.abc\Content Type = application/MyApp
HKEY_CLASSES_ROOT\.abc\(Standard) = MyAppProjectFile
HKEY_CLASSES_ROOT\MyAppProjectFile\shell\open\command\(Standard) = "C:\MyApp\MyApp.exe" %1
HKEY_CLASSES_ROOT\Applications\MyApp.exe\shell\open\command\(Standard) = "C:\MyApp\MyApp.exe" %1
There are no other keys under HKCR\Applications\MyApp.exe.
Like I said, I can open the files by double-clicking them in Explorer; they get added to the recent documents and everything looks fine. I can open them from the popup menu fine.
My SHAddToRecentDocs call, which gets a correct path, doesn't seem to be doing anything at all. No link appears in the recent documents folder.
Here's the C# code I use to run SHAddToRecentDocs:
[DllImport("Shell32.dll", CharSet = CharSet.Unicode)]
static extern void SHAddToRecentDocs(ShellAddToRecentDocsFlags flags, string file);
[Flags]
public enum ShellAddToRecentDocsFlags
{
Pidl = 0x001,
Path = 0x002,
}
/// <summary>
/// Adds the file to recent files list in windows.
/// </summary>
/// <param name="fullPath"> Name of the file. </param>
public static void AddFileToRecentFilesList(string fullPath)
{
SHAddToRecentDocs(ShellAddToRecentDocsFlags.Path, fullPath);
}
It turned out that a fix for an FxCop code warning was the reason this didn't work.
The SHAddToRecentDocs P/Invoke was originally defined as follows:
[DllImport("Shell32.dll", CharSet = CharSet.Unicode)]
static extern void SHAddToRecentDocs(ShellAddToRecentDocsFlags flags, string file);
Changing it to the following fixed the issue:
[DllImport("Shell32.dll", BestFitMapping = false, ThrowOnUnmappableChar = true)]
static extern void SHAddToRecentDocs(ShellAddToRecentDocsFlags flags, [MarshalAs(UnmanagedType.LPStr)]string file);
I am designing a static message bus that would allow subscribing to and publishing of messages of an arbitrary type. To avoid requiring observers to unsubscribe explicitly, I would like to keep track of WeakReference objects that point to delegates instead of tracking delegates themselves. I ended up coding something similar to what Paul Stovell described in his blog http://www.paulstovell.com/weakevents.
My problem is this: as opposed to Paul's code, my observers subscribe to messages on one thread, but messages may be published on another. In this case, I observe that by the time I need to notify observers, my WeakReference.Target values are null, indicating that the targets have been collected, even though I know for certain they weren't. The problem persists for both short and long weak references.
Conversely, when subscribing and publishing are done from the same thread, the code works fine. The latter is true even if I end up enumerating over the targets on a new thread from the ThreadPool, as long as the request initially comes from the same thread I subscribe to messages on.
I understand that this is a very specific case, so any help is greatly appreciated.
My question is: should I not be able to reliably access WeakReference objects from multiple threads provided proper thread synchronization is in place? It appears that I cannot, which does not make much sense to me. So, what am I not doing right?
It looks like after reducing my code to a simpler form (see below), it now works fine. This means the problem that caused weak reference targets to be collected too early must reside elsewhere in my code. So, to answer my own question, it appears that weak references can be safely accessed from multiple threads.
Here is my test code:
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Linq;
using System.Linq.Expressions;
using System.Threading;
using System.Threading.Tasks;
namespace Test
{
class Program
{
static void Main(string[] args)
{
Console.WriteLine("Starting the app");
Test test = new Test();
// uncomment these lines to cause automatic unsubscription from Message1
// test = null;
// GC.Collect();
// GC.WaitForPendingFinalizers();
// publish Message1 on this thread
// MessageBus.Publish<Message1>(new Message1());
// publish Message1 on another thread
ThreadPool.QueueUserWorkItem(delegate
{
MessageBus.Publish<Message1>(new Message1());
});
while (!MessageBus.IamDone)
{
Thread.Sleep(100);
}
Console.WriteLine("Exiting the app");
Console.WriteLine("Press <ENTER> to terminate program.");
Console.WriteLine();
Console.ReadLine();
}
}
public class Test
{
public Test()
{
Console.WriteLine("Subscribing to message 1.");
MessageBus.Subscribe<Message1>(OnMessage1);
Console.WriteLine("Subscribing to message 2.");
MessageBus.Subscribe<Message2>(OnMessage2);
}
public void OnMessage1(Message1 message)
{
Console.WriteLine("Got message 1. Publishing message 2");
MessageBus.Publish<Message2>(new Message2());
}
public void OnMessage2(Message2 message)
{
Console.WriteLine("Got message 2. Closing the app");
MessageBus.IamDone = true;
}
}
public abstract class MessageBase
{
public string Message;
}
public class Message1 : MessageBase
{
}
public class Message2 : MessageBase
{
}
public static class MessageBus
{
// This is here purely for this test
public static bool IamDone = false;
/////////////////////////////////////
/// <summary>
/// A dictionary of lists of handlers of messages by message type
/// </summary>
private static ConcurrentDictionary<string, List<WeakReference>> handlersDict = new ConcurrentDictionary<string, List<WeakReference>>();
/// <summary>
/// Thread synchronization object to use with Publish calls
/// </summary>
private static object _lockPublishing = new object();
/// <summary>
/// Thread synchronization object to use with Subscribe calls
/// </summary>
private static object _lockSubscribing = new object();
/// <summary>
/// Creates a work queue item that encapsulates the provided parameterized message
/// and dispatches it.
/// </summary>
/// <typeparam name="TMessage">Message argument type</typeparam>
/// <param name="message">Message argument</param>
public static void Publish<TMessage>(TMessage message)
where TMessage : MessageBase
{
// create the dictionary key
string key = String.Empty;
key = typeof(TMessage).ToString();
// initialize a queue work item argument as a tuple of the dictionary type key and the message argument
Tuple<string, TMessage, Exception> argument = new Tuple<string, TMessage, Exception>(key, message, null);
// push the message on the worker queue
ThreadPool.QueueUserWorkItem(new WaitCallback(_PublishMessage<TMessage>), argument);
}
/// <summary>
/// Publishes a message to the bus, causing observers to be invoked if appropriate.
/// </summary>
/// <typeparam name="TArg">Message argument type</typeparam>
/// <param name="stateInfo">Queue work item argument</param>
private static void _PublishMessage<TArg>(Object stateInfo)
where TArg : class
{
try
{
// translate the queue work item argument to extract the message type info and
// any arguments
Tuple<string, TArg, Exception> arg = (Tuple<string, TArg, Exception>)stateInfo;
// call all observers that have registered to receive this message type in parallel
Parallel.ForEach(handlersDict.Keys
// find the right dictionary list entry by message type identifier
.Where(handlerKey => handlerKey == arg.Item1)
// dereference the list entry by message type identifier to get a reference to the observer
.Select(handlerKey => handlersDict[handlerKey]), (handlerList, state) =>
{
lock (_lockPublishing)
{
List<int> descopedRefIndexes = new List<int>(handlerList.Count);
// search the list of references and invoke registered observers
foreach (WeakReference weakRef in handlerList)
{
// try to obtain a strong reference to the target
Delegate dlgRef = (weakRef.Target as Delegate);
// check if the underlying delegate reference is still valid
if (dlgRef != null)
{
// yes it is, get the delegate reference via Target property, convert it to Action and invoke the observer
try
{
(dlgRef as Action<TArg>).Invoke(arg.Item2);
}
catch (Exception e)
{
// trouble invoking the target observer's reference, mark it for deletion
descopedRefIndexes.Add(handlerList.IndexOf(weakRef));
Console.WriteLine(String.Format("Error looking up target reference: {0}", e.Message));
}
}
else
{
// the target observer's reference has been descoped, mark it for deletion
descopedRefIndexes.Add(handlerList.IndexOf(weakRef));
Console.WriteLine(String.Format("Message type \"{0}\" has been unsubscribed from.", arg.Item1));
MessageBus.IamDone = true;
}
}
// remove any descoped references
descopedRefIndexes.ForEach(index => handlerList.RemoveAt(index));
}
});
}
// catch all Exceptions
catch (AggregateException e)
{
Console.WriteLine(String.Format("Error dispatching messages: {0}", e.Message));
}
}
/// <summary>
/// Subscribes the specified delegate to handle messages of type TMessage
/// </summary>
/// <typeparam name="TArg">Message argument type</typeparam>
/// <param name="action">WeakReference that represents the handler for this message type to be registered with the bus</param>
public static void Subscribe<TArg>(Action<TArg> action)
where TArg : class
{
// validate input
if (action == null)
throw new ArgumentNullException(String.Format("Error subscribing to message type \"{0}\": Specified action reference is null.", typeof(TArg)));
// build the queue work item key identifier
string key = typeof(TArg).ToString();
// check if a message of this type was already added to the bus
if (!handlersDict.ContainsKey(key))
{
// no, it was not, create a new dictionary entry and add the new observer's reference to it
List<WeakReference> newHandlerList = new List<WeakReference>();
handlersDict.TryAdd(key, newHandlerList);
}
lock (_lockSubscribing)
{
// append this new observer's reference to the list, if it does not exist already
if (!handlersDict[key].Any(existing => (existing.Target as Delegate) != null && (existing.Target as Delegate).Equals(action)))
{
// append the new reference
handlersDict[key].Add(new WeakReference(action, true));
}
}
}
}
}
This is an amendment to my previous answer. I have discovered why my original code did not work and this information may be useful for others. In my original code MessageBus was instantiated as Singleton:
public class MessageBus : Singleton<MessageBus> // Singleton<> is my library class
In the example above, it was declared as static:
public static class MessageBus
Once I converted my code to use a static class, things started working. Having said that, I have not yet figured out why the singleton version did not work.
Here at work we're working with an OData WCF Service to create our new API. To fully implement our API, we've started extending the service with custom functions that allow us to trigger specific functionality that can't be exposed through the normal means of OData.
One example is switching a Workspace entity into advanced mode. This requires a lot of checks and data manipulation, so we opted to move it into a separate function. This is the complete code of our Api.svc service:
using System.Net;
using System.ServiceModel.Web;
namespace TenForce.Execution.Web
{
using System;
using System.Data.Services;
using System.Data.Services.Common;
using System.Security.Authentication;
using System.ServiceModel;
using System.Text;
using Microsoft.Data.Services.Toolkit;
using Api2;
using Api2.Implementation.Security;
using Api2.OData;
/// <summary>
/// <para>This class represents the entire OData WCF Service that handles incoming requests and processes the data needed
/// for those requests. The class inherits from the <see cref="ODataService<T>">ODataService</see> class in the toolkit to
/// implement the desired functionality.</para>
/// </summary>
[ServiceBehavior(IncludeExceptionDetailInFaults = true)]
public class Api : ODataService<Context>
{
#region Initialization & Authentication
// This method is called only once to initialize service-wide policies.
public static void InitializeService(DataServiceConfiguration config)
{
config.UseVerboseErrors = true;
config.SetEntitySetAccessRule("*", EntitySetRights.All);
config.SetServiceOperationAccessRule("*", ServiceOperationRights.All);
config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
Factory.SetImplementation(typeof(Api2.Implementation.Api));
}
/// <summary>
/// <para>This function is called when a request needs to be processed by the OData API.</para>
/// <para>This function will look at the headers that are supplied to the request and try to extract the relevant
/// user credentials from these headers. Using those credentials, a login is attempted. If the login is successful,
/// the request is processed. If the login fails, an AuthenticationException is raised instead.</para>
/// <para>The function will also add the required response headers to the service reply to indicate the success
/// or failure of the Authentication attempt.</para>
/// </summary>
/// <param name="args">The arguments needed to process the incoming request.</param>
/// <exception cref="AuthenticationException">Invalid username and/or password.</exception>
protected override void OnStartProcessingRequest(ProcessRequestArgs args)
{
#if DEBUG
Authenticator.Authenticate("secretlogin", string.Empty, Authenticator.ConstructDatabaseId(args.RequestUri.ToString()));
#else
bool authSuccess = Authenticate(args.OperationContext, args.RequestUri.ToString());
args.OperationContext.ResponseHeaders.Add(@"TenForce-RAuth", authSuccess ? @"OK" : @"DENIED");
if (!authSuccess) throw new AuthenticationException(@"Invalid username and/or password");
#endif
base.OnStartProcessingRequest(args);
}
/// <summary>
/// <para>Performs authentication based upon the data present in the custom headers supplied by the client.</para>
/// </summary>
/// <param name="context">The OperationContext for the request</param>
/// <param name="url">The URL for the request</param>
/// <returns>True if the Authentication succeeded; otherwise false.</returns>
private static bool Authenticate(DataServiceOperationContext context, string url)
{
// Check if the header is present
string header = context.RequestHeaders["TenForce-Auth"];
if (string.IsNullOrEmpty(header)) return false;
// Decode the header from the base64 encoding
header = Encoding.UTF8.GetString(Convert.FromBase64String(header));
// Split the header and try to authenticate.
string[] components = header.Split('|');
return (components.Length >= 2) && Authenticator.Authenticate(components[0], components[1], Authenticator.ConstructDatabaseId(url));
}
#endregion
#region Service Methods
/*
* All functions that are defined in this block, are special Service Methods on our API Service that become
* available on the web to be called by external parties. These functions do not belong in the REST specifications
* and are therefore placed here as public functions.
*
* Important to know is that these methods become case-sensitive in their signature as well as their parameters when
* being called from the web. Therefore we need to properly document these functions here so the generated document
* explains the correct usage of these functions.
*/
/// <summary>
/// <para>Switches the specified <see cref="Workspace">Workspace</see> into advanced mode, using the specified
/// Usergroup as the working <see cref="Usergroup">Usergroup</see> for the Workspace.</para>
/// <para>The method can be called using the following signature from the web:</para>
/// <para>http://applicationurl/api.svc/SwitchWorkspaceToAdvancedMode?workspaceId=x&amp;usergroupId=y</para>
/// <para>Where x stands for the unique identifier of the <see cref="Workspace">Workspace</see> entity and y stands for the unique
/// identifier of the <see cref="Usergroup">Usergroup</see> entity.</para>
/// <para>This method can only be invoked by a HTTP GET operation and returns a server response 200 when properly executed.
/// If the request fails, the server will respond with a BadRequest error code.</para>
/// </summary>
/// <param name="workspaceId">The unique <see cref="Workspace">Workspace</see> entity identifier.</param>
/// <param name="usergroupId">The unique <see cref="Usergroup">Usergroup</see> entity identifier.</param>
[WebGet]
public void SwitchWorkspaceToAdvancedMode(int workspaceId, int usergroupId)
{
Api2.Objects.Workspace ws = Factory.CreateApi().Workspaces.Read(workspaceId);
Api2.Objects.Usergroup ug = Factory.CreateApi().UserGroups.Read(usergroupId);
if(!Factory.CreateApi().Workspaces.ConvertToAdvancedPrivilegeSetup(ws, ug))
throw new WebFaultException(HttpStatusCode.BadRequest);
}
#endregion
}
}
The code is a bit large, but basically what these extra functions do is check the supplied headers for each request and authenticate against the application with the provided username and password, to ensure only valid users can work with our OData service.
The problem currently lies in the new function we declared at the bottom. The API requires a user context to be set before executing the functionality. This is normally done through the Authenticator class.
With the debugger, I followed a request and checked whether the Authenticator is being called, and it is. However, when the SwitchWorkspaceToAdvancedMode function is triggered, this context is lost and it appears as if nobody ever logged in.
The function calls are like this:
Create a new Api.svc instance
Trigger the OnStartProcessingRequest
Trigger the Authenticate method
Trigger the SwitchWorkspaceToAdvancedMode method
But this last one receives an error from the API stating that no login occurred and no user context has been set. (We set the current principal on the thread that performed the login.)
From the error messages, I conclude that the actual request for SwitchWorkspaceToAdvancedMode is running on a different thread, and therefore it looks as if no login ever occurred, because the login was done on a different thread.
Am I right in this assumption, and if so, can I prevent this or work around it?
I've solved this issue by adding a new ServiceBehavior to the DataService:
[ServiceBehavior(IncludeExceptionDetailInFaults = true, InstanceContextMode = InstanceContextMode.PerSession)]
This solved the apparent threading issue I had.
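In context, that attribute simply replaces the one on the service class shown in the question; the rest of the class stays as it was:

[ServiceBehavior(IncludeExceptionDetailInFaults = true, InstanceContextMode = InstanceContextMode.PerSession)]
public class Api : ODataService<Context>
{
    // ... unchanged ...
}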