Whatever I try, I cannot set an extension property on a User object. Here is a reproducible piece of code:
public async Task CleanTest(string extName)
{
    ExtensionProperty ep = new ExtensionProperty
    {
        Name = extName,
        DataType = "String",
        TargetObjects = { "User" }
    };
    App app = (App)(await _client.Applications.Where(a => a.AppId == _managementAppClientId).ExecuteSingleAsync());
    app.ExtensionProperties.Add(ep);
    await app.UpdateAsync();

    GraphUser user = (GraphUser)(await _client.Users.Where(u => u.UserPrincipalName.Equals("email")).ExecuteSingleAsync());
    string propName = FormatExtensionPropertyName(extName); // formats properly as extension_xxx_name
    user.SetExtendedProperty(propName, "testvalue");
    //user.SetExtendedProperty(extName, "testvalue");
    await user.UpdateAsync(); // fails here
}
According to Fiddler, user.UpdateAsync() never even sends a request, and the application fails with this exception:
"The property 'extension_e206e28ff36244b19bc56c01160b9cf0_UserEEEqdbtgd3ixx2' does not exist on type 'Microsoft.Azure.ActiveDirectory.GraphClient.Internal.User'. Make sure to only use property names that are defined by the type."
This issue is also being tracked here:
https://github.com/Azure-Samples/active-directory-dotnet-graphapi-console/issues/28
I've got an alternative workaround for this bug, for those who want to use the version 5.7 OData libraries rather than redirecting to the v5.6.4 versions.
Add a request pipeline configuration handler.
// initialize in the usual way
ActiveDirectoryClient activeDirectoryClient =
    AuthenticationHelper.GetActiveDirectoryClientAsApplication();

// after initialization, add a handler to the request pipeline configuration
activeDirectoryClient.Context
    .Configurations.RequestPipeline
    .OnMessageWriterSettingsCreated(UndeclaredPropertyHandler);
In the handler, change the ODataUndeclaredPropertyBehaviorKinds value on the writer settings to SupportUndeclaredValueProperty.
private static void UndeclaredPropertyHandler(MessageWriterSettingsArgs args)
{
    var field = args.Settings.GetType().GetField("settings",
        BindingFlags.NonPublic | BindingFlags.Instance);
    var settingsObject = field?.GetValue(args.Settings);

    var settings = settingsObject as ODataMessageWriterSettings;
    if (settings != null)
    {
        settings.UndeclaredPropertyBehaviorKinds =
            ODataUndeclaredPropertyBehaviorKinds.SupportUndeclaredValueProperty;
    }
}
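With the handler registered, the update from the question should no longer be rejected client-side. A minimal sketch reusing only the names from the question above:

// With UndeclaredPropertyHandler wired into the request pipeline, the undeclared
// extension property is written out instead of throwing in the client library.
GraphUser user = (GraphUser)(await _client.Users
    .Where(u => u.UserPrincipalName.Equals("email"))
    .ExecuteSingleAsync());
user.SetExtendedProperty(propName, "testvalue");
await user.UpdateAsync(); // the request should now go out over the wire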
Just in case you are still looking for a solution to this problem, or someone else is facing the same issue:
I ran into a similar issue, and it looks like, at least for me, the problem was in the latest version of the "Microsoft.Data.Services.Client" package, 5.7.0 (or in one of its dependencies). When I downgraded to the previous version, 5.6.4, it worked like a charm.
I had the same symptoms: updating the extended property failed without any request even being made (I also used Fiddler).
Hope it helps!
Artem Liman
I'm trying to use the ExchangeService.AutodiscoverUrl() method, but it's not working. It doesn't seem to be getting a URL, and it fails with the error "Cannot read property 'AbsoluteUri' of undefined" from ExchangeCredentials.GetUriWithoutSuffix.
Here is my code ('c' is just a JSON object):
service = new EwsJS.ExchangeService(EwsJS.ExchangeVersion.Exchange2016);
service.Credentials = new EwsJS.ExchangeCredentials(c.UserName, c.Password);
service.AutodiscoverUrl("email@domain.com", RedirectCallback);

// I'm forcing the accepted redirect here.
function RedirectCallback(url) {
    return true;
}
Autodiscover in ews-javascript-api needs a major re-write to work properly.
Autodiscover has now been re-written, and the latest dev build is out with the #next tag.
You can use it by installing npm i ews-javascript-api#next; once the stable build is out you can install the regular build.
var Service = new ExchangeService(ExchangeVersion.Exchange2010_SP1);
Service.Credentials = new WebCredentials(user, pass);
//Autodiscover
Service.AutodiscoverUrl(user, this.RedirectionUrlValidationCallback);
console.log(Service.Url);
I upgraded to version 5.1.1 of ServiceStack OrmLite (via MyGet), and when I try to open a connection to the db, I suddenly get this error:
MySql.Data.MySqlClient.MySqlException: 'The host 127.0.0.1 does not support SSL connections.'
Before the upgrade I was running v 5.1.0, and I got no such error.
I initialize OrmLite as follows:
private void InitOrmLite()
{
    JsConfig.IncludeTypeInfo = true;
    OrmLiteConfig.ThrowOnError = JsConfig.ThrowOnError = true;
    //OrmLiteConfig.BeforeExecFilter = dbCmd => Console.WriteLine(dbCmd.GetDebugString());

    _dbFactory = new OrmLiteConnectionFactory($"Uid={dbUsername};Password={dbPassword};Server={dbAddress};Port={dbPort};Database={dbDatabase}", MySqlDialect.Provider);
    SetTableMeta();
}
and usage is:
using (var _db = _dbFactory.Open())
{
    // AlterTable will create the table if it does not exist, otherwise it adds columns that were added to the POCO
    _db.AlterTable<Customer>(MySqlDialect.Provider);
}
There is a workaround, which I am posting as an answer, but I'd like mythz's input on this =)
And here it is: the workaround I found is to add the following to the connection string:
SslMode=None
So the connection string would be:
$"Uid={dbUsername};Password={dbPassword};Server={dbAddress};Port={dbPort};Database={dbDatabase};SslMode=None",
MySqlDialect.Provider
When doing so, the exception is gone.
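For completeness, a minimal sketch of the initialization from the question with the workaround applied (the variable names are the ones used in InitOrmLite above):

_dbFactory = new OrmLiteConnectionFactory(
    $"Uid={dbUsername};Password={dbPassword};Server={dbAddress};Port={dbPort};Database={dbDatabase};SslMode=None",
    MySqlDialect.Provider);

using (var db = _dbFactory.Open())
{
    // opens without the "host does not support SSL connections" error
    db.AlterTable<Customer>(MySqlDialect.Provider);
}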
I'm using a Node.JS backend on Azure with Easy Tables. The table contains the required columns to support offline syncing.
While testing the sync process I noticed that conflicts keep coming back even though I'm resolving them.
My test:
1. Pull the table content from Azure to an iOS and an Android device
2. Change a record on iOS but don't sync back to Azure
3. Change the same record on Android and sync
4. Now sync iOS
As expected, the conflict is detected correctly and I catch a MobileServicePushFailedException. I am then resolving the error by replacing the local item with the server item:
localItem.AzureVersion = serverItem.AzureVersion;
await result.UpdateOperationAsync(JObject.FromObject (localItem));
However, the next time I sync, the same item fails again with the same error.
The AzureVersion property is declared like this:
[Version]
public string AzureVersion { get; set; }
What exactly is result.UpdateOperationAsync() doing? Does it update my local database? Do I have to do it manually?
And also: am I supposed to trigger an explicit PushAsync() afterwards?
EDIT:
I changed the property from AzureVersion to Version and it works. I noticed that the serverItem's AzureVersion property was NULL even though the JSON contained it. Bug in Json.Net or in the Azure Mobile Client?
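For reference, a minimal sketch of the declaration the edit above describes as working; the comment about keeping the original name is an assumption, not something verified in this thread:

[Version]
public string Version { get; set; }

// If you would rather keep the C# name AzureVersion, explicitly mapping it to the
// "version" system column (for example with Json.NET's [JsonProperty("version")])
// might also work, but that is an untested assumption here.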
You should be using something like the following:
public async Task SyncAsync()
{
    ReadOnlyCollection<MobileServiceTableOperationError> syncErrors = null;

    try
    {
        await this.client.SyncContext.PushAsync();

        await this.todoTable.PullAsync(
            // The first parameter is a query name that is used internally by the client SDK to implement incremental sync.
            // Use a different query name for each unique query in your program.
            "allTodoItems",
            this.todoTable.CreateQuery());
    }
    catch (MobileServicePushFailedException exc)
    {
        if (exc.PushResult != null)
        {
            syncErrors = exc.PushResult.Errors;
        }
    }

    // Simple error/conflict handling. A real application would handle the various errors like network conditions,
    // server conflicts and others via the IMobileServiceSyncHandler.
    if (syncErrors != null)
    {
        foreach (var error in syncErrors)
        {
            if (error.OperationKind == MobileServiceTableOperationKind.Update && error.Result != null)
            {
                // Update failed, reverting to server's copy.
                await error.CancelAndUpdateItemAsync(error.Result);
            }
            else
            {
                // Discard local change.
                await error.CancelAndDiscardItemAsync();
            }

            Debug.WriteLine(@"Error executing sync operation. Item: {0} ({1}). Operation discarded.", error.TableName, error.Item["id"]);
        }
    }
}
Note CancelAndUpdateItemAsync(), which cancels the pending operation and updates the local item with the server's copy, and CancelAndDiscardItemAsync(), which cancels the operation and discards the local change. These are the important calls for your scenario.
This code came from the official HOWTO docs here: https://azure.microsoft.com/en-us/documentation/articles/app-service-mobile-dotnet-how-to-use-client-library/##offlinesync
We decided to implement search functionality in our API, which is developed in ServiceStack, and chose Lucene.Net since we heard it is a great indexer for searches.
We created a worker role whose job is to create the indexes in an Azure Storage folder, guiding ourselves by Leon Cullen's tutorial. We use the AzureDirectory library specified in that post, so we can use the latest Azure SDK.
Then in our API project we added the references for Lucene.Net and AzureDirectory too; our endpoint ended up looking like this:
public object Post(SearchIndex request)
{
    List<Product> products = new List<Product>();
    var pageSize = -1;
    var totalpages = -1;
    int.TryParse(ConfigurationManager.AppSettings["PageSize"], out pageSize);

    if (request.Page.Equals(0))
    {
        request.Page = 1;
    }

    // Get Azure settings
    AzureDirectory azureDirectory;
    try
    {
        // This is the line where we get the Access denied exception thrown at us
        azureDirectory = new AzureDirectory(Microsoft.WindowsAzure.Storage.CloudStorageAccount.Parse(ConfigurationManager.AppSettings["ConnectionStringAzureSearch"]), "indexsearch");

        IndexSearcher searcher;
        using (new AutoStopWatch("Creating searcher"))
        {
            searcher = new IndexSearcher(azureDirectory);
        }

        using (new AutoStopWatch(string.Format("Search for {0}", request.SearchString)))
        {
            string[] searchfields = new string[] { "Id", "Name", "Description" };
            var hits = searcher.Search(QueryMaker(request.SearchString, searchfields), request.Page * pageSize);
            int count = hits.ScoreDocs.Count();

            float temp_totalpages = 0;
            temp_totalpages = (float)hits.ScoreDocs.Count() / (float)pageSize;
            if (temp_totalpages > (int)temp_totalpages)
            {
                totalpages = (int)temp_totalpages + 1;
            }
            else
            {
                totalpages = (int)temp_totalpages;
            }

            foreach (ScoreDoc match in hits.ScoreDocs)
            {
                Document doc = searcher.Doc(match.Doc);
                int producId = int.Parse(doc.Get("Id"));
                Product product = Db.Select<Product>("Id={0}", producId).FirstOrDefault();
                products.Add(product);
            }
        }

        return new SearchIndexResult { result = products.Skip((int)((request.Page - 1) * 10)).Take(pageSize).ToList(), PageSize = pageSize, TotalPages = totalpages };
    }
    catch (Exception e)
    {
        return new HttpResult(HttpStatusCode.NoContent, "azureDirectory. Parameter: " + request.SearchString + ". e: " + e.Message);
    }
}
If we run this locally it works as expected, returning the results we expect. But when we published our API to Azure and tried to access the search endpoint, we received a 403 error with the message 'Access to the path "D:/AzureDirectory" is denied'.
We're confused as to why it is trying to access such a folder at all; the folder name looks wrong and it appears to be a local path. We really don't know why it works fine locally but stops working once it's deployed to Azure.
The worker role runs without a problem, but the API side cannot access the folder in Azure Storage. Are we missing some important configuration step? The tutorial we followed wasn't very clear for beginners using Lucene.Net or Azure Storage, so we fear we might have missed an important step. We've checked our connection strings and everything seems OK, though.
For reference:
https://github.com/azure-contrib/AzureDirectory/blob/master/AzureDirectory/AzureDirectory.cs
When you do this:
azureDirectory = new AzureDirectory(Microsoft.WindowsAzure.Storage.CloudStorageAccount.Parse(ConfigurationManager.AppSettings["ConnectionStringAzureSearch"]), "indexsearch");
this executes:
var cachePath = Path.Combine(Path.GetPathRoot(Environment.SystemDirectory), "AzureDirectory");
var azureDir = new DirectoryInfo(cachePath);
if (!azureDir.Exists)
    azureDir.Create();

var catalogPath = Path.Combine(cachePath, _containerName);
var catalogDir = new DirectoryInfo(catalogPath);
if (!catalogDir.Exists)
    catalogDir.Create();

_cacheDirectory = FSDirectory.Open(catalogPath);
So a simple solution for you might be to keep that cache directory under the site root:
DirectoryInfo info = new DirectoryInfo(HostingEnvironment.MapPath("~/"));
azureDirectory = new AzureDirectory(storageAccount, containerName, new SimpleFSDirectory(info), true);
I got it to work.
I just got the latest version of AzureDirectory from GitHub.
Got the latest NuGet packages for Azure Storage etc.
Recreated the index.
In addition to @brykneval's answer: I tried his solution, but the last parameter, bool compressBlob = false, which he set to true, made my local debugging fail with a 404 exception from the AzureDirectory library, and when I published to the Azure web app it threw an exception with the message: System.IO.InvalidDataException: Block length does not match with its complement.
I removed the last parameter from the constructor and everything works like a charm. Hope this helps anyone.
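Putting the answers together, a rough sketch of how the AzureDirectory could be constructed in the API; the appSetting name and the "indexsearch" container come from the question, the "~/index" cache sub-folder is an assumption, and compressBlob is simply left at its default as suggested above:

var storageAccount = Microsoft.WindowsAzure.Storage.CloudStorageAccount.Parse(
    ConfigurationManager.AppSettings["ConnectionStringAzureSearch"]);

// keep the local Lucene cache under the site root instead of the system drive
var cacheDir = new DirectoryInfo(HostingEnvironment.MapPath("~/index"));
if (!cacheDir.Exists)
    cacheDir.Create();

// container name as in the question; the compressBlob parameter is omitted
var azureDirectory = new AzureDirectory(storageAccount, "indexsearch", new SimpleFSDirectory(cacheDir));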
I'm using ASP.NET MVC 5 and attempting to resolve a few services using the example:
var authService = AppHostBase.Resolve<AuthService>();
authService.RequestContext = System.Web.HttpContext.Current.ToRequestContext();
var response = authService.Authenticate(new Auth
{
    UserName = model.UserName,
    Password = model.Password,
    RememberMe = model.RememberMe
});
or I've also tried:
using (var helloService = AppHostBase.ResolveService<HelloService>())
{
    ViewBag.GreetResult = helloService.Get(name).Result;
    return View();
}
In the first case I needed the RequestContext injected, so I tried that approach; in the second case I was using the example which, as I understand it, has the RequestContext automatically injected through Funq.
ResolveService could not be found when I tried the second approach, and in the first approach RequestContext is not a valid property. Am I missing something simple, or have there been changes to the API?
The documentation does appear to be wrong for this, as there is no longer a ResolveService<T> on AppHostBase. It needs to be updated due to changes in the API.
You can do this in ServiceStack v4 with MVC:
var authService = HostContext.Resolve<AuthService>();
...