Azure Table Storage Warning - WCF Data Services obsolete

After upgrading to version 4.2 of the storage API, I'm getting the following warning that I'm calling obsolete methods in some of my segmented queries.
'Microsoft.WindowsAzure.Storage.Table.CloudTableClient.GetTableServiceContext()'
is obsolete: 'Support for accessing Windows Azure Tables via WCF Data
Services is now obsolete. It's recommended that you use the
Microsoft.WindowsAzure.Storage.Table namespace for working with
tables.'
So far I haven't been able to figure out how to achieve this with the new API, and I haven't been able to find any published examples. The legacy code still runs fine, but if the new API supports something better I'd love to check it out and get rid of this warning. Could someone point me in the right direction on how a segmented query like this would look using the new API?
Here is what my code currently looks like with the warning:
public AzureTablePage<T> GetPagedResults<T>(Expression<Func<T, bool>> whereCondition, string ContinuationToken, int PageSize, string TableName) {
    TableContinuationToken token = GetToken(ContinuationToken);
    var query = AzureTableService.CreateQuery<T>(TableName).Where(whereCondition).Take(PageSize).AsTableServiceQuery(AzureTableClient.GetTableServiceContext());
    var results = query.ExecuteSegmented(token, new TableRequestOptions() { PayloadFormat = TablePayloadFormat.JsonNoMetadata });
    if (results.ContinuationToken != null) {
        return new AzureTablePage<T>() { Results = results.ToList(), HasMoreResults = true, ContinuationToken = string.Join("|", results.ContinuationToken.NextPartitionKey, results.ContinuationToken.NextRowKey) };
    } else {
        return new AzureTablePage<T>() { Results = results.ToList(), HasMoreResults = false };
    }
}

public TableServiceContext AzureTableService {
    get {
        var context = AzureTableClient.GetTableServiceContext();
        context.IgnoreResourceNotFoundException = true;
        return context;
    }
}

public CloudTableClient AzureTableClient {
    get {
        return mStorageAccount.CreateCloudTableClient();
    }
}
Solution
For anyone with the same question, here is the updated code.
/* Add the following using statement */
using Microsoft.WindowsAzure.Storage.Table.Queryable;

public AzureTablePage<T> GetPagedResults<T>(Expression<Func<T, bool>> whereCondition, string ContinuationToken, int PageSize, string TableName) where T : class, ITableEntity, new() {
    TableContinuationToken token = GetToken(ContinuationToken);
    // Build the query against the table reference directly; no TableServiceContext required
    var query = AzureTableClient.GetTableReference(TableName).CreateQuery<T>().Where(whereCondition).Take(PageSize).AsTableQuery();
    var results = query.ExecuteSegmented(token, new TableRequestOptions() { PayloadFormat = TablePayloadFormat.JsonNoMetadata });
    if (results.ContinuationToken != null) {
        return new AzureTablePage<T>() { Results = results.ToList(), HasMoreResults = true, ContinuationToken = string.Join("|", results.ContinuationToken.NextPartitionKey, results.ContinuationToken.NextRowKey) };
    } else {
        return new AzureTablePage<T>() { Results = results.ToList(), HasMoreResults = false };
    }
}
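The GetToken helper isn't shown in the post; a minimal sketch, assuming it simply reverses the "NextPartitionKey|NextRowKey" string that GetPagedResults builds above:

private TableContinuationToken GetToken(string continuationToken) {
    // Hypothetical helper (not in the original post): rebuild the token from
    // the "PartitionKey|RowKey" string emitted by GetPagedResults.
    if (string.IsNullOrEmpty(continuationToken)) return null;
    var parts = continuationToken.Split('|');
    return new TableContinuationToken {
        NextPartitionKey = parts[0],
        NextRowKey = parts.Length > 1 ? parts[1] : null
    };
}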

Please see the Tables Deep Dive blog post that we published when we first introduced the new Table Service Layer. If you need LINQ support, please also see the Azure Storage Client Library 2.1 blog post.
We strongly recommend upgrading to Table Service Layer, because it is optimized for NoSQL scenarios and therefore provides much better performance.
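For readers who skip the blog posts, the basic pattern of the new Table Service Layer looks roughly like this (a minimal illustrative sketch; the table name and keys are made up):

// Illustrative sketch: insert and then point-query an entity with the
// new Microsoft.WindowsAzure.Storage.Table API.
var tableClient = storageAccount.CreateCloudTableClient();
var table = tableClient.GetTableReference("settings");
table.CreateIfNotExists();

table.Execute(TableOperation.Insert(new DynamicTableEntity("partition1", "row1")));

var retrieved = table.Execute(TableOperation.Retrieve<DynamicTableEntity>("partition1", "row1"));
var entity = (DynamicTableEntity)retrieved.Result;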

Related

Storage quota has been exceeded for this service. You must either delete documents first, or use a higher SKU for additional quota

I'm sometimes (not every time) getting this error in Azure Search when I create or update an index. I'm trying to post the data in multiple batches, as per the guideline How to index large data sets in Azure Search.
Is there any way to update existing data in Azure? See How to rebuild an index.
The relevant Azure tiers are listed here: Available tiers.
Here is the simple example code:
private static SearchServiceClient searchServiceClient;

public virtual void CreateSearchServiceClient()
{
    searchServiceClient = new SearchServiceClient(searchServiceName, new SearchCredentials("APIKEY"));
}

public virtual Index CreateIndex()
{
    CreateSearchServiceClient();
    Index index = new Index();
    string scoringProfile = "AzureSearchScoringProfile";
    // Create index if search client service is available
    if (searchServiceClient != null)
    {
        AzureSearchItem azureSearchItem = new AzureSearchItem();
        // Map index schema
        var definition = new Index()
        {
            Name = IndexName,
            Fields = FieldBuilder.BuildForType<AzureSearchItem>(),
            Suggesters = new List<Suggester>() {
                new Suggester()
                {
                    Name = ConstantKeys.AzureSearchTopBarSuggestor,
                    SourceFields = FormatSuggesterFields()
                }
            }
        };
        if (!string.IsNullOrEmpty(scoringProfile))
        {
            definition.ScoringProfiles = new List<ScoringProfile>()
            {
                new ScoringProfile()
                {
                    Name = scoringProfile
                }
            };
        }
        definition = SetAnalyzer(definition);
        // Create or update the index
        index = searchServiceClient.Indexes.CreateOrUpdate(definition);
    }
    return index;
}
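On the "How to index large data sets" part of the question: a minimal sketch of batched uploads, assuming the same Microsoft.Azure.Search SDK and the AzureSearchItem model above (the UploadDocuments name is my choice, and 1000 is the documented per-request document limit; requires using System.Linq):

public virtual void UploadDocuments(IList<AzureSearchItem> items)
{
    // Obtain an index client for the target index
    ISearchIndexClient indexClient = searchServiceClient.Indexes.GetClient(IndexName);
    const int batchSize = 1000; // per-request document limit
    for (int i = 0; i < items.Count; i += batchSize)
    {
        // MergeOrUpload updates documents that already exist and adds new ones
        var batch = IndexBatch.MergeOrUpload(items.Skip(i).Take(batchSize));
        try
        {
            indexClient.Documents.Index(batch);
        }
        catch (IndexBatchException e)
        {
            // Some documents failed; log their keys so they can be retried
            var failedKeys = e.IndexingResults.Where(r => !r.Succeeded).Select(r => r.Key);
            Console.WriteLine("Failed to index: {0}", string.Join(", ", failedKeys));
        }
    }
}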

Azure Cosmos Db, select after row?

I'm trying to select some rows after x rows, something like:
SELECT * from collection WHERE ROWNUM >= 235 and ROWNUM <= 250
Unfortunately, it looks like ROWNUM isn't supported in Azure Cosmos DB.
Is there another way to do this? I've looked at using continuation tokens, but they're not helpful if a user skips straight to page 50; would I need to keep querying with continuation tokens to get to page 50?
I've tried playing around with the page size option, but that has limitations on how many items it can return at any one time.
For example, I have 1,000,000 records in Azure and want to query rows 500,000 to 500,010. I can't do SELECT * from collection WHERE ROWNUM >= 500000 and ROWNUM <= 500010, so how do I achieve this?
If you don't have any filters, you can't retrieve items in a specific range directly via a SQL query in Cosmos DB so far, so you need to use pagination to locate the desired items. As far as I know, pagination is currently supported only via continuation tokens.
Please refer to the sample below:
using JayGongDocumentDB.pojo;
using Microsoft.Azure.Documents.Client;
using Microsoft.Azure.Documents.Linq;
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;

namespace JayGongDocumentDB.module
{
    class QuerySample1
    {
        public static async void QueryPageByPage()
        {
            // Number of documents per page
            const int PAGE_SIZE = 2;
            int currentPageNumber = 1;
            int documentNumber = 1;
            // Continuation token for subsequent queries (NULL for the very first request/page)
            string continuationToken = null;
            do
            {
                Console.WriteLine($"----- PAGE {currentPageNumber} -----");
                // Loads ALL documents for the current page
                KeyValuePair<string, IEnumerable<Student>> currentPage = await QueryDocumentsByPage(currentPageNumber, PAGE_SIZE, continuationToken);
                foreach (Student student in currentPage.Value)
                {
                    Console.WriteLine($"[{documentNumber}] {student.Name}");
                    documentNumber++;
                }
                // Ensure the continuation token is kept for the next page query execution
                continuationToken = currentPage.Key;
                currentPageNumber++;
            } while (continuationToken != null);
            Console.WriteLine("\n--- END: Finished Querying ALL Documents ---");
        }

        public static async Task<KeyValuePair<string, IEnumerable<Student>>> QueryDocumentsByPage(int pageNumber, int pageSize, string continuationToken)
        {
            DocumentClient documentClient = new DocumentClient(new Uri("https://***.documents.azure.com:443/"), "***");
            var feedOptions = new FeedOptions
            {
                MaxItemCount = pageSize,
                EnableCrossPartitionQuery = true,
                // IMPORTANT: Set the continuation token (NULL for the first ever request/page)
                RequestContinuation = continuationToken
            };
            IQueryable<Student> filter = documentClient.CreateDocumentQuery<Student>("dbs/db/colls/item", feedOptions);
            IDocumentQuery<Student> query = filter.AsDocumentQuery();
            FeedResponse<Student> feedResponse = await query.ExecuteNextAsync<Student>();
            List<Student> documents = new List<Student>();
            foreach (Student t in feedResponse)
            {
                documents.Add(t);
            }
            // IMPORTANT: Ensure the continuation token is kept for the next requests
            return new KeyValuePair<string, IEnumerable<Student>>(feedResponse.ResponseContinuation, documents);
        }
    }
}
Output: (screenshot of the paged console output omitted)
Hope it helps you.
Update:
There is no function like ROW_NUMBER() (see How do I use ROW_NUMBER()?) in Cosmos DB so far. I also considered SKIP and TOP; however, TOP is supported while SKIP is not yet (see the feedback item). It seems SKIP is already in progress and will be released in the future.
You could vote for the feedback related to the paging function, or use the continuation-token approach above as a temporary workaround.
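To address the "skip to page 50" concern: a continuation token cannot be fabricated for an arbitrary offset, so a jump has to advance the token page by page. A minimal sketch building on QueryDocumentsByPage above (the GetPage method is mine, added to the same QuerySample1 class, not from the original answer):

// Hypothetical helper: walks continuation tokens forward until the target
// page is reached. Each iteration costs one query, so deep jumps are O(n).
public static async Task<IEnumerable<Student>> GetPage(int targetPage, int pageSize)
{
    string continuationToken = null;
    var page = new KeyValuePair<string, IEnumerable<Student>>(null, new List<Student>());
    for (int i = 1; i <= targetPage; i++)
    {
        page = await QueryDocumentsByPage(i, pageSize, continuationToken);
        continuationToken = page.Key;
        // Ran out of data before reaching the target page
        if (continuationToken == null && i < targetPage)
            return new List<Student>();
    }
    return page.Value;
}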

Access ServiceStack session from ConnectionFilter

I am using SQL Server and database triggers to keep a data-level audit of all changes to the system. This audit includes the userID / name of whomever initiated a change. Ideally I'd like to do something like this in my AppHost.Configure method:
SqlServerDialect.Provider.UseUnicode = true;
var dbFactory = new OrmLiteConnectionFactory(ConnectionString, SqlServerDialect.Provider)
{
    ConnectionFilter = (db =>
    {
        IAuthSession session = this.Request.GetSession();
        if (session != null && !session.UserName.IsNullOrEmpty())
        {
            System.Data.IDbCommand cmd = db.CreateCommand();
            cmd.CommandText = "declare @ci varbinary(128); select @ci = CAST(@Username as varbinary(128)); set context_info @ci";
            System.Data.IDbDataParameter param = cmd.CreateParameter();
            param.ParameterName = "Username";
            param.DbType = System.Data.DbType.String;
            //param.Value = session.UserName;
            param.Value = session.UserAuthId;
            cmd.Parameters.Add(param);
            cmd.ExecuteNonQuery();
        }
        return new ProfiledDbConnection(db, Profiler.Current);
    }),
    AutoDisposeConnection = true
};
container.Register<IDbConnectionFactory>(dbFactory);
Of course, this doesn't work, because this.Request doesn't exist there. Is there any way to access the current session from the ConnectionFilter or ExecFilter on an OrmLite connection?
The other approach I had started, overriding the Db property of Service, no longer works because I've abstracted some activities into their own interfaced implementations to allow for mocks during testing. Each of these is passed a function that is expected to return a DB connection. Example:
// Transaction processor
container.Register<ITransactionProcessor>(new MockTransactionProcessor(() => dbFactory.OpenDbConnection()));
So, how can I ensure that any DML executed has the (admittedly database-specific) context information needed for my database audit triggers?
The earlier multi-tenant ServiceStack example shows how you can use the Request Context to store per-request items; e.g., you can populate the Request Context from a Global Request Filter:
GlobalRequestFilters.Add((req, res, dto) =>
{
    var session = req.GetSession();
    if (session != null)
        RequestContext.Instance.Items.Add("UserName", session.UserName);
});
And access it within your Connection Filter:
ConnectionFilter = (db =>
{
    var userName = RequestContext.Instance.Items["UserName"] as string;
    if (!userName.IsNullOrEmpty()) {
        //... set context_info here
    }
    return db; // the filter must hand the connection back
}),
Another approach is to use a factory pattern, similar to how ServiceStack creates OrmLite db connections in the first place. Since all user-associated calls are made via the ServiceRunner, I piggy-back off of the session that's managed by ServiceStack.
public class TransactionProcessorFactory : ITransactionProcessorFactory
{
    public ITransactionProcessor CreateTransactionProcessor(IDbConnection Db)
    {
        return new TransactionProcessor(Db);
    }
}

public abstract class MyBaseService : Service
{
    private IDbConnection db;

    public override System.Data.IDbConnection Db
    {
        get
        {
            if (this.db != null) return db;
            this.db = this.TryResolve<IDbConnectionFactory>().OpenDbConnection();
            IAuthSession session = this.Request.GetSession();
            if (session != null && !session.UserName.IsNullOrEmpty())
            {
                IDbCommand cmd = db.CreateCommand();
                cmd.CommandText = "declare @ci varbinary(128); select @ci = CAST(@Username as varbinary(128)); set context_info @ci";
                IDbDataParameter param = cmd.CreateParameter();
                param.ParameterName = "Username";
                param.DbType = DbType.String;
                //param.Value = session.UserName;
                param.Value = session.UserAuthId;
                cmd.Parameters.Add(param);
                cmd.ExecuteNonQuery();
            }
            return db;
        }
    }

    private ITransactionProcessor tp = null;

    public virtual ITransactionProcessor TransactionProcessor
    {
        get
        {
            if (this.tp != null) return tp;
            var factory = this.TryResolve<ITransactionProcessorFactory>();
            this.tp = factory.CreateTransactionProcessor(this.Db);
            return tp;
        }
    }
}
For the sake of potential future ServiceStack users, another approach would be to use OrmLite's global insert/update filters combined with Mythz's approach above, so the necessary SQL is injected only when DML actions occur. It isn't 100% coverage, since there may be stored procedures or hand-written SQL, but those can potentially be handled via an IDbConnection extension method that manually sets the desired auditing information, as sketched below.
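A minimal sketch of such an extension method (the name SetAuditContext is hypothetical, not from the original answers; it reuses the context_info SQL shown above):

public static class AuditDbConnectionExtensions
{
    // Hypothetical helper: stamps SQL Server context_info with the user name
    // so audit triggers can read it; call it before stored procs or raw SQL.
    public static void SetAuditContext(this IDbConnection db, string userName)
    {
        using (var cmd = db.CreateCommand())
        {
            cmd.CommandText = "declare @ci varbinary(128); select @ci = CAST(@Username as varbinary(128)); set context_info @ci";
            var param = cmd.CreateParameter();
            param.ParameterName = "Username";
            param.DbType = DbType.String;
            param.Value = userName;
            cmd.Parameters.Add(param);
            cmd.ExecuteNonQuery();
        }
    }
}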

QueryExpression with no results in Dynamics CRM plugin

I wrote the following function to get the SharePointDocumentLocation records regarding an account or contact. However, even though I provide an id that most definitely has an SPDL record associated with it, the count of the returned EntityCollection is always 0. Why does my query not return SPDL records?
internal static EntityCollection GetSPDocumentLocation(IOrganizationService service, Guid id)
{
    SharePointDocumentLocation spd = new SharePointDocumentLocation();
    QueryExpression query = new QueryExpression
    {
        EntityName = "sharepointdocumentlocation",
        ColumnSet = new ColumnSet("sharepointdocumentlocationid"),
        Criteria = new FilterExpression
        {
            Conditions =
            {
                new ConditionExpression
                {
                    AttributeName = "regardingobjectid",
                    Operator = ConditionOperator.Equal,
                    Values = { id }
                }
            }
        }
    };
    return service.RetrieveMultiple(query);
}
The following code does work
using System;
using Microsoft.Xrm.Sdk;
using Microsoft.Xrm.Sdk.Client;
using System.ServiceModel.Description;
using System.Net;
using Microsoft.Xrm.Sdk.Query;

namespace CRMConsoleTests
{
    class Program
    {
        static void Main(string[] args)
        {
            ClientCredentials credentials = new ClientCredentials();
            credentials.Windows.ClientCredential = CredentialCache.DefaultNetworkCredentials;
            Uri orgUri = new Uri("http://localhost/CRMDEV2/XRMServices/2011/Organization.svc");
            Uri homeRealmUri = null;
            using (OrganizationServiceProxy service = new OrganizationServiceProxy(orgUri, homeRealmUri, credentials, null))
            {
                //ConditionExpression ce = new ConditionExpression("regardingobjectid", ConditionOperator.Equal, new Guid(""));
                QueryExpression qe = new QueryExpression("sharepointdocumentlocation");
                qe.ColumnSet = new ColumnSet(new String[] { "sharepointdocumentlocationid", "regardingobjectid" });
                //qe.Criteria.AddCondition(ce);
                EntityCollection result = service.RetrieveMultiple(qe);
                foreach (Entity entity in result.Entities)
                {
                    Console.WriteLine("Results for the first record: ");
                    SharePointDocumentLocation spd = entity.ToEntity<SharePointDocumentLocation>();
                    if (spd.RegardingObjectId != null)
                    {
                        Console.WriteLine("Id: " + spd.SharePointDocumentLocationId.ToString() + " with RoId: " + spd.RegardingObjectId.Id.ToString());
                    }
                }
                Console.ReadLine();
            }
        }
    }
}
It retrieves 4 records, and when I debug the plugin code above it retrieves 3 records.
Everything looks good with your QueryExpression, although I'd write it a little more concisely (something like this):
var qe = new QueryExpression(SharePointDocumentLocation.EntityLogicalName)
{
    ColumnSet = new ColumnSet("sharepointdocumentlocationid")
};
qe.Criteria.AddCondition("regardingobjectid", ConditionOperator.Equal, id);
Since I don't see anything wrong with the QueryExpression, that leaves me with two guesses:
1. You're using impersonation on the IOrganizationService, and the impersonated user doesn't have rights to the SharePointDocumentLocation. You won't get an error; you just won't get any records returned.
2. The id you're passing in is incorrect.
I'd remove the Criteria and see how many records you get back. If you don't get all of the records back, you know your issue is guess #1.
If you get all the records, add regardingobjectid to the ColumnSet and retrieve the first record without any Criteria in the QueryExpression, then call this method passing in the id of the regarding object you got back (see the diagnostic sketch below). If nothing is returned once you add the regardingobjectid constraint, then something else is wrong.
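A minimal sketch of that diagnostic, reusing GetSPDocumentLocation from the question (the DiagnoseSPDocumentLocation name is mine, not from the original answer; requires using System.Linq):

internal static void DiagnoseSPDocumentLocation(IOrganizationService service)
{
    // Step 1: retrieve without Criteria to rule out permissions (guess #1)
    var qe = new QueryExpression("sharepointdocumentlocation")
    {
        ColumnSet = new ColumnSet("sharepointdocumentlocationid", "regardingobjectid")
    };
    EntityCollection all = service.RetrieveMultiple(qe);
    Console.WriteLine("Unfiltered count: " + all.Entities.Count);

    // Step 2: take a known regardingobjectid and re-run the filtered query (guess #2)
    var first = all.Entities.FirstOrDefault(e => e.Contains("regardingobjectid"));
    if (first != null)
    {
        Guid regardingId = ((EntityReference)first["regardingobjectid"]).Id;
        EntityCollection filtered = GetSPDocumentLocation(service, regardingId);
        Console.WriteLine("Filtered count: " + filtered.Entities.Count);
    }
}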
Update
Since this is executing within the delete operation of the plugin, the platform must be performing its cascade deletes before your plugin fires. You can try registering the plugin in the Pre-Validation stage.
Now that I think of it, it must perform the deletion of the cascading entities in the Validation stage, because if one of them couldn't be deleted, the entity itself couldn't be deleted.

tableclient.RetryPolicy Vs. TransientFaultHandling

A colleague and I have been tasked with finding connection-retry logic for Azure Table Storage. After some searching, I found the Enterprise Library suite, which contains the Microsoft.Practices.TransientFaultHandling namespace.
Following a few code examples, I ended up creating an Incremental retry strategy and wrapping one of our storage calls with the retry policy's ExecuteAction callback handler:
/// <inheritdoc />
public void SaveSetting(int userId, string bookId, string settingId, string itemId, JObject value)
{
    // Define your retry strategy: retry 5 times, starting 1 second apart, adding 2 seconds to the interval each retry.
    var retryStrategy = new Incremental(5, TimeSpan.FromSeconds(1), TimeSpan.FromSeconds(2));
    // Wrap the strategy in a policy that detects transient storage errors
    // (this line was missing from the original snippet)
    var retryPolicy = new RetryPolicy<StorageTransientErrorDetectionStrategy>(retryStrategy);
    var storageAccount = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting(StorageConnectionStringName));
    try
    {
        retryPolicy.ExecuteAction(() =>
        {
            var tableClient = storageAccount.CreateCloudTableClient();
            var table = tableClient.GetTableReference(SettingsTableName);
            table.CreateIfNotExists();
            var entity = new Models.Azure.Setting
            {
                PartitionKey = GetPartitionKey(userId, bookId),
                RowKey = GetRowKey(settingId, itemId),
                UserId = userId,
                BookId = bookId.ToLowerInvariant(),
                SettingId = settingId.ToLowerInvariant(),
                ItemId = itemId.ToLowerInvariant(),
                Value = value.ToString(Formatting.None)
            };
            table.Execute(TableOperation.InsertOrReplace(entity));
        });
    }
    catch (StorageException exception)
    {
        ExceptionHelpers.CheckForPropertyValueTooLargeMessage(exception);
        throw;
    }
}
Feeling awesome, I went to show my colleague, and he smugly noted that we could do the same thing without including Enterprise Library, as the CloudTableClient object already has a setter for a retry policy. His code ended up looking like:
/// <inheritdoc />
public void SaveSetting(int userId, string bookId, string settingId, string itemId, JObject value)
{
    var storageAccount = CloudStorageAccount.Parse(CloudConfigurationManager.GetSetting(StorageConnectionStringName));
    var tableClient = storageAccount.CreateCloudTableClient();
    // Set the retry policy on the client; it applies to every request made through it
    tableClient.RetryPolicy = new ExponentialRetry(TimeSpan.FromSeconds(2), 3);
    var table = tableClient.GetTableReference(SettingsTableName);
    table.CreateIfNotExists();
    var entity = new Models.Azure.Setting
    {
        PartitionKey = GetPartitionKey(userId, bookId),
        RowKey = GetRowKey(settingId, itemId),
        UserId = userId,
        BookId = bookId.ToLowerInvariant(),
        SettingId = settingId.ToLowerInvariant(),
        ItemId = itemId.ToLowerInvariant(),
        Value = value.ToString(Formatting.None)
    };
    try
    {
        table.Execute(TableOperation.InsertOrReplace(entity));
    }
    catch (StorageException exception)
    {
        ExceptionHelpers.CheckForPropertyValueTooLargeMessage(exception);
        throw;
    }
}
My question:
Is there any major difference between these two approaches, aside from their implementations? They both seem to accomplish the same goal, but are there cases where it's better to use one over the other?
Functionally speaking, both are the same: they both retry requests in case of transient errors. However, there are a few differences:
Retry policy handling in the storage client library only covers storage operations, while the transient fault handling application block handles not only storage operations but also SQL Azure, Service Bus, and Cache operations in case of transient errors. So if your project uses more than storage but you would like a single approach for handling transient errors, you may want to use the transient fault handling application block.
One thing I like about the transient fault handling block is that you can intercept retry attempts, which you can't do with the storage client's retry policy. For example, look at the code below:
var retryManager = EnterpriseLibraryContainer.Current.GetInstance<RetryManager>();
var retryPolicy = retryManager.GetRetryPolicy<StorageTransientErrorDetectionStrategy>(ConfigurationHelper.ReadFromServiceConfigFile(Constants.DefaultRetryStrategyForTableStorageOperationsKey));
retryPolicy.Retrying += (sender, args) =>
{
    // Log details of the retry.
    var message = string.Format(CultureInfo.InvariantCulture, TableOperationRetryTraceFormat, "TableStorageHelper::CreateTableIfNotExist", storageAccount.Credentials.AccountName, tableName, args.CurrentRetryCount, args.Delay);
    TraceHelper.TraceError(message, args.LastException);
};
try
{
    var isTableCreated = retryPolicy.ExecuteAction(() =>
    {
        var table = storageAccount.CreateCloudTableClient().GetTableReference(tableName);
        return table.CreateIfNotExists(requestOptions, operationContext);
    });
    return isTableCreated;
}
catch (Exception)
{
    throw;
}
In the code example above, I can intercept retry attempts and do something there if I want to. This is not possible with the storage client library.
Having said all of this, it is generally recommended to go with the storage client library's retry policy for retrying storage operations, as it is an integral part of the package and is kept up to date with the latest changes to the library.
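As a side note, the storage client library also lets you override the retry policy per request via TableRequestOptions rather than on the whole client; a minimal sketch (reusing the table and entity variables from the examples above):

// Illustrative: a per-request retry policy overrides the client-level default.
var requestOptions = new TableRequestOptions
{
    RetryPolicy = new LinearRetry(TimeSpan.FromSeconds(1), 5)
};
table.Execute(TableOperation.InsertOrReplace(entity), requestOptions);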
