I have implemented my own TraceListener similar to http://blogs.technet.com/b/meamcs/archive/2013/05/23/diagnostics-of-cloud-services-custom-trace-listener.aspx .
One thing I noticed is that the logs show up immediately in my Azure table storage. I wonder if this is expected with custom trace listeners or because I am in a development environment.
My diagnostics.wadcfg:
<?xml version="1.0" encoding="utf-8"?>
<DiagnosticMonitorConfiguration configurationChangePollInterval="PT1M" overallQuotaInMB="4096" xmlns="http://schemas.microsoft.com/ServiceHosting/2010/10/DiagnosticsConfiguration">
<DiagnosticInfrastructureLogs scheduledTransferLogLevelFilter="Information" />
<Directories scheduledTransferPeriod="PT1M">
<IISLogs container="wad-iis-logfiles" />
<CrashDumps container="wad-crash-dumps" />
</Directories>
<Logs bufferQuotaInMB="0" scheduledTransferPeriod="PT30M" scheduledTransferLogLevelFilter="Information" />
</DiagnosticMonitorConfiguration>
I have changed my approach a bit. Now I am defining the listener in the web.config of my web role. I notice that when I set autoFlush to true in the web.config, everything works, but scheduledTransferPeriod is not honored because the Flush method pushes straight to table storage. I would like scheduledTransferPeriod to trigger the flush, or to trigger a flush after a certain number of log entries (i.e., when the buffer is full); then I could also flush on server shutdown. Is there any method or event on the custom TraceListener where I can listen for the scheduledTransferPeriod?
<system.diagnostics>
<!--http://msdn.microsoft.com/en-us/library/sk36c28t(v=vs.110).aspx
By default autoFlush is false.
By default useGlobalLock is true. While we try to be thread-safe, we keep this default for now. Later, if we want to increase performance, we can remove it. See http://msdn.microsoft.com/en-us/library/system.diagnostics.trace.usegloballock(v=vs.110).aspx -->
<trace>
<listeners>
<add name="TableTraceListener"
type="Pos.Services.Implementation.TableTraceListener, Pos.Services.Implementation"
/>
<remove name="Default" />
</listeners>
</trace>
</system.diagnostics>
I have modified the custom trace listener to the following:
namespace Pos.Services.Implementation
{
class TableTraceListener : TraceListener
{
#region Fields
//connection string for azure storage
readonly string _connectionString;
//Custom Azure storage table for logs.
//TODO put in config
readonly string _diagnosticsTable;
[ThreadStatic]
static StringBuilder _messageBuffer;
readonly object _initializationSection = new object();
bool _isInitialized;
CloudTableClient _tableStorage;
readonly object _traceLogAccess = new object();
readonly List<LogEntry> _traceLog = new List<LogEntry>();
#endregion
#region Constructors
public TableTraceListener() : base("TableTraceListener")
{
_connectionString = RoleEnvironment.GetConfigurationSettingValue("DiagConnection");
_diagnosticsTable = RoleEnvironment.GetConfigurationSettingValue("DiagTableName");
}
#endregion
#region Methods
/// <summary>
/// Flushes the entries to the storage table
/// </summary>
public override void Flush()
{
if (!_isInitialized)
{
lock (_initializationSection)
{
if (!_isInitialized)
{
Initialize();
}
}
}
var context = _tableStorage.GetTableServiceContext();
context.MergeOption = MergeOption.AppendOnly;
lock (_traceLogAccess)
{
_traceLog.ForEach(entry => context.AddObject(_diagnosticsTable, entry));
_traceLog.Clear();
}
if (context.Entities.Count > 0)
{
context.BeginSaveChangesWithRetries(SaveChangesOptions.None, (ar) => context.EndSaveChangesWithRetries(ar), null);
}
}
/// <summary>
/// Creates the storage table object. This method does not take the lock itself because the caller already holds it.
/// </summary>
private void Initialize()
{
var account = CloudStorageAccount.Parse(_connectionString);
_tableStorage = account.CreateCloudTableClient();
_tableStorage.GetTableReference(_diagnosticsTable).CreateIfNotExists();
_isInitialized = true;
}
public override bool IsThreadSafe
{
get
{
return true;
}
}
#region Trace and Write Methods
/// <summary>
/// Writes the message to a string buffer
/// </summary>
/// <param name="message">the Message</param>
public override void Write(string message)
{
if (_messageBuffer == null)
_messageBuffer = new StringBuilder();
_messageBuffer.Append(message);
}
/// <summary>
/// Writes the message with a line breaker to a string buffer
/// </summary>
/// <param name="message"></param>
public override void WriteLine(string message)
{
if (_messageBuffer == null)
_messageBuffer = new StringBuilder();
_messageBuffer.AppendLine(message);
}
/// <summary>
/// Appends the trace information and message
/// </summary>
/// <param name="eventCache">the Event Cache</param>
/// <param name="source">the Source</param>
/// <param name="eventType">the Event Type</param>
/// <param name="id">the Id</param>
/// <param name="message">the Message</param>
public override void TraceEvent(TraceEventCache eventCache, string source, TraceEventType eventType, int id, string message)
{
base.TraceEvent(eventCache, source, eventType, id, message);
AppendEntry(id, eventType, eventCache);
}
/// <summary>
/// Adds the trace information to a collection of LogEntry objects
/// </summary>
/// <param name="id">the Id</param>
/// <param name="eventType">the Event Type</param>
/// <param name="eventCache">the EventCache</param>
private void AppendEntry(int id, TraceEventType eventType, TraceEventCache eventCache)
{
if (_messageBuffer == null)
_messageBuffer = new StringBuilder();
var message = _messageBuffer.ToString();
_messageBuffer.Length = 0;
if (message.EndsWith(Environment.NewLine))
message = message.Substring(0, message.Length - Environment.NewLine.Length);
if (message.Length == 0)
return;
var entry = new LogEntry()
{
PartitionKey = string.Format("{0:D10}", eventCache.Timestamp >> 30),
RowKey = string.Format("{0:D19}", eventCache.Timestamp),
EventTickCount = eventCache.Timestamp,
Level = (int)eventType,
EventId = id,
Pid = eventCache.ProcessId,
Tid = eventCache.ThreadId,
Message = message
};
lock (_traceLogAccess)
_traceLog.Add(entry);
}
#endregion
#endregion
}
}
I looked at the source code in the blog post you referred to. If you look at the code for the Details method, you'll notice it calls Trace.Flush(), which essentially writes the trace log data collected so far to table storage. In other words, the custom trace listener is not picking up the scheduled transfer period from the diagnostics.wadcfg file at all.
At this point, I do not think there is a solution for leveraging scheduledTransferPeriod with custom logs. I ended up living with the immediate transfer since I wanted my own table schema. At some point I may write my own transfer interval.
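If I do, the likely shape is a timer plus a buffer-size check inside the listener itself. A rough sketch follows (the one-minute period, the 100-entry threshold, and the FlushIfFull helper are all illustrative, not anything read from the diagnostics configuration):
// Sketch only: a self-managed transfer interval, since the listener cannot
// observe scheduledTransferPeriod from diagnostics.wadcfg.
private System.Threading.Timer _flushTimer;
private const int MaxBufferedEntries = 100;

private void StartFlushTimer()
{
    // Flush on a fixed period, analogous to scheduledTransferPeriod.
    _flushTimer = new System.Threading.Timer(
        _ => Flush(), null, TimeSpan.FromMinutes(1), TimeSpan.FromMinutes(1));
}

// Called from AppendEntry after adding to _traceLog:
private void FlushIfFull()
{
    int count;
    lock (_traceLogAccess)
        count = _traceLog.Count;
    if (count >= MaxBufferedEntries)
        Flush();
}
For shutdown, RoleEnvironment.Stopping would be a natural place to call Flush() one last time.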
We are using Azure Cosmos DB for saving state information of a job processing pipeline. We use the Table API and the corresponding SDK for this. Recently, we noticed that the system was frequently running into the 429 "Request rate is too large" error. Our transactional DTU utilization was way below the maximum configured on the table, but we noticed under the metrics tab that system DTUs used by operations such as enumerating tables etc. were being exhausted, hence the 429.
Our initial fix of removing a 'CreateIfNotExists' method call helped for a while, but recently we have started running into the issue again (though not as frequently as before). It is difficult to debug/troubleshoot since I could not find enough documentation about which SDK method calls exhaust this non-scalable resource. I have enabled logging on our Cosmos DB instance, but I am not sure what to look for in the logs.
Here is the singleton class we use for interfacing with Azure Cosmos DB
public class CosmosDbTableFacade : ICosmosDbTableFacade
{
/// <summary>
/// Initializes a new instance of the <see cref="CosmosDbTableFacade"/> class.
/// </summary>
/// <param name="connectionString">
/// The connection string.
/// </param>
/// <param name="tableName">
/// The table name.
/// </param>
public CosmosDbTableFacade(string connectionString)
{
var storageAccount = CloudStorageAccount.Parse(connectionString);
this.CosmosTableClient = storageAccount.CreateCloudTableClient();
}
/// <summary>
/// Gets or sets the cosmos table.
/// </summary>
public CloudTableClient CosmosTableClient { get; set; }
/// <summary>
/// The execute async.
/// </summary>
/// <param name="tableName">
/// The table Name.
/// </param>
/// <param name="operation">
/// The operation.
/// </param>
/// <returns>
/// The <see cref="Task"/>.
/// </returns>
public Task<TableResult> ExecuteAsync(string tableName, TableOperation operation)
{
var table = this.CosmosTableClient.GetTableReference(tableName);
return table.ExecuteAsync(operation);
}
/// <summary>
/// The execute query segmented async.
/// </summary>
/// <param name="tableName">
/// The table name.
/// </param>
/// <param name="query">
/// The query.
/// </param>
/// <param name="continuationToken">
/// The continuation token.
/// </param>
/// <returns>
/// The <see cref="Task"/> which returns the list of entities.
/// </returns>
public Task<TableQuerySegment<DynamicTableEntity>> ExecuteQuerySegmentedAsync(string tableName, TableQuery query, TableContinuationToken continuationToken)
{
var table = this.CosmosTableClient.GetTableReference(tableName);
return table.ExecuteQuerySegmentedAsync(query, continuationToken);
}
}
The following snippet lists the different queries we are using:
public async Task InsertOrMergeEntityAsync<T>(string tableName, T entity)
where T : TableEntity
{
var insertOrMergeOperation = TableOperation.InsertOrMerge(entity);
var result = await this.CosmosDbTableFacade.ExecuteAsync(tableName, insertOrMergeOperation).ConfigureAwait(false);
ValidateCosmosTableResult(result, "Failed to write to Cosmos Table");
}
public async Task<T> GetEntityAsync<T>(string tableName, string partitionKey, string rowKey)
where T : TableEntity
{
var retrieveOperation = TableOperation.Retrieve<T>(partitionKey, rowKey);
TableResult result = await this.CosmosDbTableFacade.ExecuteAsync(tableName, retrieveOperation).ConfigureAwait(false);
ValidateCosmosTableResult(result, "Failed to read from Cosmos Table");
return result.Result as T;
}
public async Task<IEnumerable<T>> GetEntitiesAsync<T>(string tableName, string filterCondition)
where T : TableEntity
{
var query = new TableQuery().Where(filterCondition);
var continuationToken = default(TableContinuationToken);
var results = new List<T>();
do
{
var currentQueryResults = await this.CosmosDbTableFacade.ExecuteQuerySegmentedAsync(tableName, query, continuationToken).ConfigureAwait(false);
results.AddRange(currentQueryResults.Select(currentQueryResult =>
{
var currentEntity = TableEntity.ConvertBack<T>(currentQueryResult.Properties, null);
currentEntity.RowKey = currentQueryResult.RowKey;
currentEntity.PartitionKey = currentQueryResult.PartitionKey;
currentEntity.Timestamp = currentQueryResult.Timestamp;
currentEntity.ETag = currentQueryResult.ETag;
return currentEntity;
}));
continuationToken = currentQueryResults.ContinuationToken;
}
while (continuationToken != null);
return results;
}
The filter in the last method above contains a partition key and a custom column.
For anyone running into similar issues, the root cause of the metadata DTU throttling in my case turned out to be the GetTableReference(tableName) method (found by deploying a change with that line moved to startup code and monitoring the DTU utilization).
I had this in the request path so that I could dynamically point to the table to read from or write to at runtime, but since it was consuming metadata DTUs, I changed my code to use a singleton for the table reference instead.
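For illustration, the fix boils down to resolving each CloudTable once and reusing it. A sketch of that change against the facade above (the ConcurrentDictionary cache is illustrative, not the exact code we deployed):
// Sketch: resolve each CloudTable once per table name and reuse it,
// instead of calling GetTableReference on every operation.
private readonly ConcurrentDictionary<string, CloudTable> tableCache =
    new ConcurrentDictionary<string, CloudTable>();

private CloudTable GetTable(string tableName)
{
    return this.tableCache.GetOrAdd(
        tableName, name => this.CosmosTableClient.GetTableReference(name));
}

public Task<TableResult> ExecuteAsync(string tableName, TableOperation operation)
{
    return this.GetTable(tableName).ExecuteAsync(operation);
}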
I am creating a UIActivityIndicatorView in my Controller.ViewDidLoad
UIActivityIndicatorView spinner = new UIActivityIndicatorView();
spinner.StartAnimating();
spinner.Hidden = true;
this.Add(spinner);
Then I am binding it with MVVMCross
var set = this.CreateBindingSet<TipView, TipViewModel>();
set.Bind(spinner).For(v => v.Hidden).To(vm => vm.IsBusy).WithConversion("Inverse");
When the View initially loads the UIActivityIndicatorView is spinning and visible. This is incorrect as the IsBusy property is being explicitly set to False in the ViewModel's Init(). I can see this happening and I can see the Converter invert the value.
I know the binding is properly connected because if I fire a command that updates the IsBusy property the Indicator is shown and hidden as I would expect. It is just the initial state that is incorrect.
The StartAnimating method seems to cause the Hidden flag to be overridden. If I do not call StartAnimating, the indicator hides and shows as expected. Of course, that means I have a non-animating indicator.
I could get a WeakReference to the VM, listen to PropertyChanged and call StartAnimating but that seems a bit rubbish.
Does anyone have any better ideas?
Some options you can consider:
Subscribe to PropertyChanged changes and write custom code in the event handler (as you suggest in your question)
Inherit from UIActivityIndicatorView and write a public get/set property which provides the composite functionality (calling StartAnimating/StopAnimating and setting Hidden) in the set handler:
public class MyIndicatorView : UIActivityIndicatorView {
// ctors
private bool _superHidden;
public bool SuperHidden {
get { return _superHidden; }
set { _superHidden = value; if (!value) StartAnimating(); else StopAnimating(); Hidden = value; }
}
}
Provide a View public get/set property and put the composite functionality in that (e.g. set.Bind(this).For(v => v.MyAdvanced)...):
private bool _myAdvanced;
public bool MyAdvanced {
get { return _myAdvanced; }
set { _myAdvanced = value; if (!value) _spinner.StartAnimating(); else _spinner.StopAnimating(); _spinner.Hidden = value; }
}
Write a custom binding for Hidden which replaces the default functionality and contains the combined StartAnimating and Hidden calls (for more on custom bindings, there are a couple of N+1 tutorials).
After reading #slodge's reply I went down the road of a weak event listener and ran the code to Hide and StartAnimating in the view. Having copied and pasted that approach 3 times, I realised something had to change, so I implemented his 4th suggestion and wrote a custom binding. FWIW, here is that custom binding:
/// <summary>
/// Custom Binding for UIActivityIndicator Hidden.
/// This binding will ensure the indicator animates when shown and stops when hidden
/// </summary>
public class ActivityIndicatorViewHiddenTargetBinding : MvxConvertingTargetBinding
{
/// <summary>
/// Initializes a new instance of the <see cref="ActivityIndicatorViewHiddenTargetBinding"/> class.
/// </summary>
/// <param name="target">The target.</param>
public ActivityIndicatorViewHiddenTargetBinding(UIActivityIndicatorView target)
: base(target)
{
if (target == null)
{
MvxBindingTrace.Trace(
MvxTraceLevel.Error,
"Error - UIActivityIndicatorView is null in ActivityIndicatorViewHiddenTargetBinding");
}
}
/// <summary>
/// Gets the default binding mode.
/// </summary>
/// <value>
/// The default mode.
/// </value>
public override MvxBindingMode DefaultMode
{
get { return MvxBindingMode.OneWay; }
}
/// <summary>
/// Gets the type of the target.
/// </summary>
/// <value>
/// The type of the target.
/// </value>
public override System.Type TargetType
{
get { return typeof(bool); }
}
/// <summary>
/// Gets the view.
/// </summary>
/// <value>
/// The view.
/// </value>
protected UIActivityIndicatorView View
{
get { return Target as UIActivityIndicatorView; }
}
/// <summary>
/// Sets the value.
/// </summary>
/// <param name="target">The target.</param>
/// <param name="value">The value.</param>
protected override void SetValueImpl(object target, object value)
{
var view = (UIActivityIndicatorView)target;
if (view == null)
{
return;
}
view.Hidden = (bool)value;
if (view.Hidden)
{
view.StopAnimating();
}
else
{
view.StartAnimating();
}
}
}
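To wire this binding up it still has to be registered with MvvmCross; a minimal sketch of that registration in the Setup class (v3-era API, shown as an assumption about your project setup):
// Sketch: register the custom target binding in Setup so that
// set.Bind(spinner).For(v => v.Hidden) resolves to the binding above.
protected override void FillTargetFactories(IMvxTargetBindingFactoryRegistry registry)
{
    base.FillTargetFactories(registry);
    registry.RegisterCustomBindingFactory<UIActivityIndicatorView>(
        "Hidden", view => new ActivityIndicatorViewHiddenTargetBinding(view));
}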
I have created the following singleton class to handle a Redis connection, and expose BookSleeve functionality:
public class RedisConnection
{
private static RedisConnection _instance = null;
private BookSleeve.RedisSubscriberConnection _channel;
private readonly int _db;
private readonly string[] _keys; // represent channel name
public BookSleeve.RedisConnection _connection;
/// <summary>
/// Initialize all class parameters
/// </summary>
private RedisConnection(string serverList, int db, IEnumerable<string> keys)
{
_connection = ConnectionUtils.Connect(serverList);
_db = db;
_keys = keys.ToArray();
_connection.Closed += OnConnectionClosed;
_connection.Error += OnConnectionError;
// Create a subscription channel in redis
_channel = _connection.GetOpenSubscriberChannel();
// Subscribe to the registered connections
_channel.Subscribe(_keys, OnMessage);
// Dirty hack but it seems like subscribe returns before the actual
// subscription is properly setup in some cases
while (_channel.SubscriptionCount == 0)
{
Thread.Sleep(500);
}
}
/// <summary>
/// Do something when a message is received
/// </summary>
/// <param name="key"></param>
/// <param name="data"></param>
private void OnMessage(string key, byte[] data)
{
// since we are just interested in pub/sub, no data persistence is active
// however, if the persistence flag is enabled, here is where we can save the data
// The key is the stream id (channel)
//var message = RedisMessage.Deserialize(data);
var message = Helpers.BytesToString(data);
//_publishQueue.Enqueue(() => OnReceived(key, (ulong)message.Id, message.Messages));
}
public static RedisConnection GetInstance(string serverList, int db, IEnumerable<string> keys)
{
if (_instance == null)
{
// could include some sort of lock for thread safety
_instance = new RedisConnection(serverList, db, keys);
}
return _instance;
}
private static void OnConnectionClosed(object sender, EventArgs e)
{
// Should we auto reconnect?
}
private static void OnConnectionError(object sender, BookSleeve.ErrorEventArgs e)
{
// How do we bubble errors?
}
}
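As the comment in GetInstance notes, it is not thread-safe as written; a minimal double-checked-locking variant (sketch only) would look like this:
private static readonly object InstanceLock = new object();

public static RedisConnection GetInstance(string serverList, int db, IEnumerable<string> keys)
{
    if (_instance == null)
    {
        lock (InstanceLock)
        {
            // re-check inside the lock so only one instance is ever created
            if (_instance == null)
                _instance = new RedisConnection(serverList, db, keys);
        }
    }
    return _instance;
}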
In OnMessage(), var message = RedisMessage.Deserialize(data); is commented out due to the following error:
RedisMessage is inaccessible due to its protection level.
RedisMessage is an abstract class in BookSleeve, and I'm a little stuck on why I cannot use this.
I ran into this issue because as I send messages to a channel (pub/sub) I may want to do something with them in OnMessage() - for example, if a persistence flag is set, I may choose to begin recording data. The issue is that the data is serialized at this point, and I wish to deserialize it (to a string) and persist it in Redis.
Here is my test method:
[TestMethod]
public void TestRedisConnection()
{
// setup parameters
string serverList = "dbcache1.local:6379";
int db = 0;
List<string> eventKeys = new List<string>();
eventKeys.Add("Testing.FaucetChannel");
BookSleeve.RedisConnection redisConnection = Faucet.Services.RedisConnection.GetInstance(serverList, db, eventKeys)._connection;
// broadcast to a channel
redisConnection.Publish("Testing.FaucetChannel", "a published value!!!");
}
Since I wasn't able to make use of the Deserialize() method, I created a static helper class:
public static class Helpers
{
/// <summary>
/// Serializes a string to bytes
/// </summary>
/// <param name="val"></param>
/// <returns></returns>
public static byte[] StringToBytes(string str)
{
try
{
byte[] bytes = new byte[str.Length * sizeof(char)];
System.Buffer.BlockCopy(str.ToCharArray(), 0, bytes, 0, bytes.Length);
return bytes;
}
catch (Exception ex)
{
/* handle exception omitted */
return null;
}
}
/// <summary>
/// Deserializes bytes to string
/// </summary>
/// <param name="bytes"></param>
/// <returns></returns>
public static string BytesToString(byte[] bytes)
{
try
{
char[] chars = new char[bytes.Length / sizeof(char)];
System.Buffer.BlockCopy(bytes, 0, chars, 0, bytes.Length);
return new string(chars);
}
catch (Exception ex)
{
// removed error handling logic!
return null;
}
}
}
Unfortunately, this is not properly deserializing the string back to its original form, and what I'm getting is something like this: 異汢獩敨慶畬㩥ㄠ, rather than the actual original text.
Suggestions?
RedisMessage represents a pending request that is about to be sent to the server; there are a few concrete implementations of this, typically relating to the nature and quantity of the parameters to be sent. It makes no sense to "deserialize" (or even "serialize") a RedisMessage - that is not its purpose. The only sensible thing to do with one is to Write(...) it to a Stream.
If you want information about a RedisMessage, then .ToString() has an overview, but this is not round-trippable and is frankly intended for debugging.
RedisMessage is an internal class; an implementation detail. Unless you're working on a pull request to the core code, you should never need to interact with a RedisMessage.
At a similar level, there is RedisResult which represents a response coming back from the server. If you want a quick way of getting data from that, fortunately that is much simpler:
object val = result.Parse(true);
(the true means "speculatively test to see if the data looks like a string"). But again, this is an internal implementation detail that you should not have to work with.
Obviously this was an encoding type issue. After a glance at this link, I simply added the UTF8 encoding type, and the output looks fine:
#region EncodingType enum
/// <summary>
/// Encoding Types.
/// </summary>
public enum EncodingType
{
ASCII,
Unicode,
UTF7,
UTF8
}
#endregion
#region ByteArrayToString
/// <summary>
/// Converts a byte array to a string using Unicode encoding.
/// </summary>
/// <param name="bytes">Array of bytes to be converted.</param>
/// <returns>string</returns>
public static string ByteArrayToString(byte[] bytes)
{
return ByteArrayToString(bytes, EncodingType.Unicode);
}
/// <summary>
/// Converts a byte array to a string using specified encoding.
/// </summary>
/// <param name="bytes">Array of bytes to be converted.</param>
/// <param name="encodingType">EncodingType enum.</param>
/// <returns>string</returns>
public static string ByteArrayToString(byte[] bytes, EncodingType encodingType)
{
System.Text.Encoding encoding = null;
switch (encodingType)
{
case EncodingType.ASCII:
encoding = new System.Text.ASCIIEncoding();
break;
case EncodingType.Unicode:
encoding = new System.Text.UnicodeEncoding();
break;
case EncodingType.UTF7:
encoding = new System.Text.UTF7Encoding();
break;
case EncodingType.UTF8:
encoding = new System.Text.UTF8Encoding();
break;
}
return encoding.GetString(bytes);
}
#endregion
--UPDATE--
Even simpler: var message = Encoding.UTF8.GetString(data);
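The underlying problem with the Buffer.BlockCopy helpers was asymmetry: the publish side produced UTF-8 bytes, while BytesToString reinterpreted those raw bytes as UTF-16 chars, which is exactly the kind of CJK-looking output shown above. A quick round trip illustrates the symmetric version:
// Illustrative round trip: encode and decode with the same encoding.
byte[] data = Encoding.UTF8.GetBytes("a published value!!!");
string message = Encoding.UTF8.GetString(data); // back to the original text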
I'm looking at doing some updates into Azure Storage Tables. I want to use the optimistic concurrency mechanism properly. It seems like you'd need to do something like:
Load row to update, possibly retrying failures
Apply updates to row
Save row, possibly retrying network errors
If there is a concurrency conflict, reload the data (possibly retrying failures) and attempt to save again (possibly retrying failures)
Is there some generic class or code sample that handles this? I can code it up, but I have to imagine someone has already invented this particular wheel.
If someone invented this wheel they're not talking, so I went off and (re)invented it myself. This is intentionally very generic, more of a skeleton than a finished product. It's basically just the algorithm I outlined above. The caller has to wire in delegates to do the actual loading, updating and saving of the data. There is basic retry logic built in, but I would recommend overriding those functions with something more robust.
I believe this will work with tables or BLOBs, and single entities or batches, though I've only actually tried it with single-entity table updates.
Any comments, suggestions, improvements, etc would be appreciated.
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.Threading;
using System.Data.Services.Client;
using Microsoft.WindowsAzure.StorageClient;
using System.Net;
namespace SepiaLabs.Azure
{
/// <summary>
/// Attempt to write an update to storage while using optimistic concurrency.
/// Implements a basic state machine. Data will be fetched (with retries), then mutated, then updated (with retries, and possibly refetching & remutating).
/// Clients may pass in a state object with relevant information. eg, a TableServiceContext object.
/// </summary>
/// <remarks>
/// This object natively implements a very basic retry strategy.
/// Clients may want to subclass it and override the ShouldRetryRetrieval() and ShouldRetryPersist() functions to implement more advanced retry strategies.
///
/// This class intentionally avoids checking if the row is present before updating it. This is so callers may throw custom exceptions, or attempt to insert the row instead ("upsert" style interaction)
/// </remarks>
/// <typeparam name="RowType">The type of data that will be read and updated. Though it is called RowType for clarity, you could manipulate a collection of rows.</typeparam>
/// <typeparam name="StateObjectType">The type of the user-supplied state object</typeparam>
public class AzureDataUpdate<RowType, StateObjectType>
where RowType : class
{
/// <summary>
/// Function to retrieve the data that will be updated.
/// This function will be called at least once. It will also be called any time a concurrency update conflict occurs.
/// </summary>
public delegate RowType DataRetriever(StateObjectType stateObj);
/// <summary>
/// Function to apply the desired changes to the data.
/// This will be called after each time the DataRetriever function is called.
/// If you are using a TableServiceContext with MergeOption.PreserveChanges set, this function can be a no-op after the first call
/// </summary>
public delegate void DataMutator(RowType data, StateObjectType stateObj);
/// <summary>
/// Function to persist the modified data. This may be called multiple times.
/// </summary>
/// <param name="data"></param>
/// <param name="stateObj"></param>
public delegate void DataPersister(RowType data, StateObjectType stateObj);
public DataRetriever RetrieverFunction { get; set; }
public DataMutator MutatorFunction { get; set; }
public DataPersister PersisterFunction { get; set; }
public AzureDataUpdate()
{
}
public AzureDataUpdate(DataRetriever retrievalFunc, DataMutator mutatorFunc, DataPersister persisterFunc)
{
this.RetrieverFunction = retrievalFunc;
this.MutatorFunction = mutatorFunc;
this.PersisterFunction = persisterFunc;
}
public RowType Execute(StateObjectType userState)
{
if (RetrieverFunction == null)
{
throw new InvalidOperationException("Must provide a data retriever function before executing");
}
else if (MutatorFunction == null)
{
throw new InvalidOperationException("Must provide a data mutator function before executing");
}
else if (PersisterFunction == null)
{
throw new InvalidOperationException("Must provide a data persister function before executing");
}
//Retrieve and modify data
RowType data = this.DoRetrieve(userState);
//Call the mutator function.
MutatorFunction(data, userState);
//persist changes
int attemptNumber = 1;
while (true)
{
bool isPreconditionFailedResponse = false;
try
{
PersisterFunction(data, userState);
return data; //return the mutated data
}
catch (DataServiceRequestException dsre)
{
DataServiceResponse resp = dsre.Response;
int statusCode = -1;
if (resp.IsBatchResponse)
{
statusCode = resp.BatchStatusCode;
}
else if (resp.Any())
{
statusCode = resp.First().StatusCode;
}
isPreconditionFailedResponse = (statusCode == (int)HttpStatusCode.PreconditionFailed);
if (!ShouldRetryPersist(attemptNumber, dsre, isPreconditionFailedResponse, userState))
{
throw;
}
}
catch (DataServiceClientException dsce)
{
isPreconditionFailedResponse = (dsce.StatusCode == (int)HttpStatusCode.PreconditionFailed);
if (!ShouldRetryPersist(attemptNumber, dsce, isPreconditionFailedResponse, userState))
{
throw;
}
}
catch (StorageClientException sce)
{
isPreconditionFailedResponse = (sce.StatusCode == HttpStatusCode.PreconditionFailed);
if (!ShouldRetryPersist(attemptNumber, sce, isPreconditionFailedResponse, userState))
{
throw;
}
}
catch (Exception ex)
{
if (!ShouldRetryPersist(attemptNumber, ex, false, userState))
{
throw;
}
}
if (isPreconditionFailedResponse)
{
//Refetch the data, re-apply the mutator
data = DoRetrieve(userState);
MutatorFunction(data, userState);
}
attemptNumber++;
}
}
/// <summary>
/// Retrieve the data to be updated, possibly with retries
/// </summary>
/// <param name="userState">The UserState for this operation</param>
private RowType DoRetrieve(StateObjectType userState)
{
int attemptNumber = 1;
while (true)
{
try
{
return RetrieverFunction(userState);
}
catch (Exception ex)
{
if (!ShouldRetryRetrieval(attemptNumber, ex, userState))
{
throw;
}
}
attemptNumber++;
}
}
/// <summary>
/// Determine whether a data retrieval should be retried.
/// Implements a simplistic, constant wait time strategy. Users may override to provide a more complex implementation.
/// </summary>
/// <param name="attemptNumber">What number attempt is this. </param>
/// <param name="ex">The exception that was caught</param>
/// <param name="userState">The user-supplied state object for this operation</param>
/// <returns>True to attempt the retrieval again, false to abort the retrieval and fail the update attempt</returns>
protected virtual bool ShouldRetryRetrieval(int attemptNumber, Exception ex, StateObjectType userState)
{
//Simple, basic retry strategy - try 3 times, sleep for 1000msec each time
if (attemptNumber < 3)
{
Thread.Sleep(1000);
return true;
}
else
{
return false;
}
}
/// <summary>
/// Determine whether a data update should be retried. If the <paramref name="isPreconditionFailedResponse"/> param is true,
/// then the retrieval and mutation process will be repeated as well
/// Implements a simplistic, constant wait time strategy. Users may override to provide a more complex implementation.
/// </summary>
/// <param name="attemptNumber">What number attempt is this. </param>
/// <param name="ex">The exception that was caught</param>
/// <param name="userState">The user-supplied state object for this operation</param>
/// <param name="isPreconditionFailedResponse">Indicates whether the exception is a PreconditionFailed response. ie, an optimistic concurrency failure</param>
/// <returns>True to attempt the update again, false to abort the retrieval and fail the update attempt</returns>
protected virtual bool ShouldRetryPersist(int attemptNumber, Exception ex, bool isPreconditionFailedResponse, StateObjectType userState)
{
if (isPreconditionFailedResponse)
{
return true; //retry immediately
}
else
{
//For other failures, wait to retry
//Simple, basic retry strategy - try 3 times, sleep for 1000msec each time
if (attemptNumber < 3)
{
Thread.Sleep(1000);
return true;
}
else
{
return false;
}
}
}
}
}
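For reference, wiring up the delegates for a single-entity table update might look something like the following sketch. CustomerEntity, the "customers" table, and tableClient (a CloudTableClient) are hypothetical names, and this assumes the 1.x StorageClient TableServiceContext API that the class above imports:
// Sketch: update one entity, with concurrency conflicts handled by AzureDataUpdate.
var updater = new AzureDataUpdate<CustomerEntity, TableServiceContext>(
    ctx => ctx.CreateQuery<CustomerEntity>("customers")
              .Where(c => c.PartitionKey == "pk" && c.RowKey == "rk")
              .First(),
    (row, ctx) => { row.Visits++; },   // apply the change; re-run on conflicts
    (row, ctx) =>
    {
        ctx.UpdateObject(row);         // mark the row for update
        ctx.SaveChangesWithRetries();  // sends If-Match with the tracked ETag
    });

var context = tableClient.GetDataServiceContext();
CustomerEntity updated = updater.Execute(context);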
Our site uses ADFS for auth. To reduce the cookie payload on every request we're turning IsSessionMode on (see Your fedauth cookies on a diet).
The last thing we need to do to get this working in our load balanced environment is to implement a farm-ready SecurityTokenCache. The implementation seems pretty straightforward; I'm mainly interested in finding out whether there are any gotchas we should consider when dealing with SecurityTokenCacheKey and the TryGetAllEntries and TryRemoveAllEntries methods (SecurityTokenCacheKey has a custom implementation of the Equals and GetHashCode methods).
Does anyone have an example of this? We're planning on using AppFabric as the backing store but an example using any persistent store would be helpful- database table, Azure table-storage, etc.
Here are some places I've searched:
In Hervey Wilson's PDC09 session he uses a DatabaseSecurityTokenCache. I haven't been able to find the sample code for his session.
On page 192 of Vittorio Bertocci's excellent book, "Programming Windows Identity Foundation", he mentions uploading a sample implementation of an Azure-ready SecurityTokenCache to the book's website. I haven't been able to find this sample either.
Thanks!
jd
3/16/2012 UPDATE
Vittorio's blog links to a sample using the new .net 4.5 stuff:
ClaimsAwareWebFarm
This sample is an answer to the feedback we got from many of you guys: you wanted a sample showing a farm-ready session cache (as opposed to a token replay cache) so that you can use sessions by reference instead of exchanging big cookies; and you asked for an easier way of securing cookies in a farm.
To come up with a working implementation we ultimately had to use Reflector to analyze the different SessionSecurityToken-related classes in Microsoft.IdentityModel. Below is what we came up with. This implementation is deployed on our dev and QA environments and seems to be working fine; it's resilient to app pool recycles, etc.
In global.asax:
protected void Application_Start(object sender, EventArgs e)
{
FederatedAuthentication.ServiceConfigurationCreated += this.OnServiceConfigurationCreated;
}
private void OnServiceConfigurationCreated(object sender, ServiceConfigurationCreatedEventArgs e)
{
var sessionTransforms = new List<CookieTransform>(new CookieTransform[]
{
new DeflateCookieTransform(),
new RsaEncryptionCookieTransform(
e.ServiceConfiguration.ServiceCertificate),
new RsaSignatureCookieTransform(
e.ServiceConfiguration.ServiceCertificate)
});
// following line is pseudo code. use your own durable cache implementation.
var durableCache = new AppFabricCacheWrapper();
var tokenCache = new DurableSecurityTokenCache(durableCache, 5000);
var sessionHandler = new SessionSecurityTokenHandler(sessionTransforms.AsReadOnly(),
tokenCache,
TimeSpan.FromDays(1));
e.ServiceConfiguration.SecurityTokenHandlers.AddOrReplace(sessionHandler);
}
private void WSFederationAuthenticationModule_SecurityTokenValidated(object sender, SecurityTokenValidatedEventArgs e)
{
FederatedAuthentication.SessionAuthenticationModule.IsSessionMode = true;
}
DurableSecurityTokenCache.cs:
/// <summary>
/// Two level durable security token cache (level 1: in memory MRU, level 2: out of process cache).
/// </summary>
public class DurableSecurityTokenCache : SecurityTokenCache
{
private ICache<string, byte[]> durableCache;
private readonly MruCache<SecurityTokenCacheKey, SecurityToken> mruCache;
/// <summary>
/// The constructor.
/// </summary>
/// <param name="durableCache">The durable second level cache (should be out of process ie sql server, azure table, app fabric, etc).</param>
/// <param name="mruCapacity">Capacity of the internal first level cache (in-memory MRU cache).</param>
public DurableSecurityTokenCache(ICache<string, byte[]> durableCache, int mruCapacity)
{
this.durableCache = durableCache;
this.mruCache = new MruCache<SecurityTokenCacheKey, SecurityToken>(mruCapacity, mruCapacity / 4);
}
public override bool TryAddEntry(object key, SecurityToken value)
{
var cacheKey = (SecurityTokenCacheKey)key;
// add the entry to the mru cache.
this.mruCache.Add(cacheKey, value);
// add the entry to the durable cache.
var keyString = GetKeyString(cacheKey);
var buffer = this.GetSerializer().Serialize((SessionSecurityToken)value);
this.durableCache.Add(keyString, buffer);
return true;
}
public override bool TryGetEntry(object key, out SecurityToken value)
{
var cacheKey = (SecurityTokenCacheKey)key;
// attempt to retrieve the entry from the mru cache.
value = this.mruCache.Get(cacheKey);
if (value != null)
return true;
// entry wasn't in the mru cache, retrieve it from the app fabric cache.
var keyString = GetKeyString(cacheKey);
var buffer = this.durableCache.Get(keyString);
var result = buffer != null;
if (result)
{
// we had a cache miss in the mru cache but found the item in the durable cache...
// deserialize the value retrieved from the durable cache.
value = this.GetSerializer().Deserialize(buffer);
// push this item into the mru cache.
this.mruCache.Add(cacheKey, value);
}
return result;
}
public override bool TryRemoveEntry(object key)
{
var cacheKey = (SecurityTokenCacheKey)key;
// remove the entry from the mru cache.
this.mruCache.Remove(cacheKey);
// remove the entry from the durable cache.
var keyString = GetKeyString(cacheKey);
this.durableCache.Remove(keyString);
return true;
}
public override bool TryReplaceEntry(object key, SecurityToken newValue)
{
var cacheKey = (SecurityTokenCacheKey)key;
// remove the entry from the mru cache.
this.mruCache.Remove(cacheKey);
// remove the entry from the durable cache.
var keyString = GetKeyString(cacheKey);
this.durableCache.Remove(keyString);
// add the new value.
return this.TryAddEntry(key, newValue);
}
public override bool TryGetAllEntries(object key, out IList<SecurityToken> tokens)
{
// not implemented... haven't been able to find how/when this method is used.
tokens = new List<SecurityToken>();
return true;
//throw new NotImplementedException();
}
public override bool TryRemoveAllEntries(object key)
{
// not implemented... haven't been able to find how/when this method is used.
return true;
//throw new NotImplementedException();
}
public override void ClearEntries()
{
// not implemented... haven't been able to find how/when this method is used.
//throw new NotImplementedException();
}
/// <summary>
/// Gets the string representation of the specified SecurityTokenCacheKey.
/// </summary>
private string GetKeyString(SecurityTokenCacheKey key)
{
return string.Format("{0}; {1}; {2}", key.ContextId, key.KeyGeneration, key.EndpointId);
}
/// <summary>
/// Gets a new instance of the token serializer.
/// </summary>
private SessionSecurityTokenCookieSerializer GetSerializer()
{
return new SessionSecurityTokenCookieSerializer(); // may need to do something about handling bootstrap tokens.
}
}
MruCache.cs:
/// <summary>
/// Most recently used (MRU) cache.
/// </summary>
/// <typeparam name="TKey">The key type.</typeparam>
/// <typeparam name="TValue">The value type.</typeparam>
public class MruCache<TKey, TValue> : ICache<TKey, TValue>
{
private Dictionary<TKey, TValue> mruCache;
private LinkedList<TKey> mruList;
private object syncRoot;
private int capacity;
private int sizeAfterPurge;
/// <summary>
/// The constructor.
/// </summary>
/// <param name="capacity">The capacity.</param>
/// <param name="sizeAfterPurge">Size to make the cache after purging because it's reached capacity.</param>
public MruCache(int capacity, int sizeAfterPurge)
{
this.mruList = new LinkedList<TKey>();
this.mruCache = new Dictionary<TKey, TValue>(capacity);
this.capacity = capacity;
this.sizeAfterPurge = sizeAfterPurge;
this.syncRoot = new object();
}
/// <summary>
/// Adds an item if it doesn't already exist.
/// </summary>
public void Add(TKey key, TValue value)
{
lock (this.syncRoot)
{
if (mruCache.ContainsKey(key))
return;
if (mruCache.Count + 1 >= this.capacity)
{
while (mruCache.Count > this.sizeAfterPurge)
{
var lru = mruList.Last.Value;
mruCache.Remove(lru);
mruList.RemoveLast();
}
}
mruCache.Add(key, value);
mruList.AddFirst(key);
}
}
/// <summary>
/// Removes an item if it exists.
/// </summary>
public void Remove(TKey key)
{
lock (this.syncRoot)
{
if (!mruCache.ContainsKey(key))
return;
mruCache.Remove(key);
mruList.Remove(key);
}
}
/// <summary>
/// Gets an item. If a matching item doesn't exist, the default value for the type is returned.
/// </summary>
public TValue Get(TKey key)
{
lock (this.syncRoot)
{
if (!mruCache.ContainsKey(key))
return default(TValue);
mruList.Remove(key);
mruList.AddFirst(key);
return mruCache[key];
}
}
/// <summary>
/// Gets whether a key is contained in the cache.
/// </summary>
public bool ContainsKey(TKey key)
{
lock (this.syncRoot)
return mruCache.ContainsKey(key);
}
}
ICache.cs:
/// <summary>
/// A cache.
/// </summary>
/// <typeparam name="TKey">The key type.</typeparam>
/// <typeparam name="TValue">The value type.</typeparam>
public interface ICache<TKey, TValue>
{
void Add(TKey key, TValue value);
void Remove(TKey key);
TValue Get(TKey key);
}
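Since the AppFabricCacheWrapper in global.asax above is left as pseudo code, here is a sketch of what an AppFabric-backed ICache<string, byte[]> could look like (assumes the Microsoft.ApplicationServer.Caching client; the "SecurityTokenCache" cache name is illustrative):
public class AppFabricCacheWrapper : ICache<string, byte[]>
{
    private readonly DataCache cache;

    public AppFabricCacheWrapper()
    {
        // reads the dataCacheClient section from web.config
        var factory = new DataCacheFactory();
        this.cache = factory.GetCache("SecurityTokenCache");
    }

    public void Add(string key, byte[] value)
    {
        // Put overwrites any existing entry for the key.
        this.cache.Put(key, value);
    }

    public void Remove(string key)
    {
        this.cache.Remove(key);
    }

    public byte[] Get(string key)
    {
        return (byte[])this.cache.Get(key);
    }
}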
Here is a sample that I wrote. I use Windows Azure to store the tokens forever, defeating any possible replay.
http://tokenreplaycache.codeplex.com/releases/view/76652
You will need to place this in your web.config:
<service>
<securityTokenHandlers>
<securityTokenHandlerConfiguration saveBootstrapTokens="true">
<tokenReplayDetection enabled="true" expirationPeriod="50" purgeInterval="1">
<replayCache type="LC.Security.AzureTokenReplayCache.ACSTokenReplayCache,LC.Security.AzureTokenReplayCache, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
</tokenReplayDetection>
</securityTokenHandlerConfiguration>
</securityTokenHandlers>
</service>