I upgraded to version 5.1.1 of ServiceStack OrmLite (via MyGet), and when I try to open a connection to the db, I suddenly get this error:
MySql.Data.MySqlClient.MySqlException: 'The host 127.0.0.1 does not support SSL connections.'
Before the upgrade I was running v5.1.0, and I got no such error.
I initialize OrmLite as follows:
private void InitOrmLite()
{
    JsConfig.IncludeTypeInfo = true;
    OrmLiteConfig.ThrowOnError = JsConfig.ThrowOnError = true;
    //OrmLiteConfig.BeforeExecFilter = dbCmd => Console.WriteLine(dbCmd.GetDebugString());
    _dbFactory = new OrmLiteConnectionFactory(
        $"Uid={dbUsername};Password={dbPassword};Server={dbAddress};Port={dbPort};Database={dbDatabase}",
        MySqlDialect.Provider);
    SetTableMeta();
}
and usage is
using (var _db = _dbFactory.Open())
{
    // AlterTable will create the table if it doesn't exist, otherwise add columns that were added to the POCO
    _db.AlterTable<Customer>(MySqlDialect.Provider);
}
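For reference, a minimal sketch of the kind of POCO the AlterTable call above would target (the fields are hypothetical, not from the original post; [AutoIncrement] comes from ServiceStack.DataAnnotations):
public class Customer
{
    [AutoIncrement]                      // hypothetical schema, for illustration only
    public int Id { get; set; }
    public string Name { get; set; }
    public string Email { get; set; }
}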
There is a workaround, which I am posting as an answer, but I'd like mythz's input on this =)
The workaround I found is to add the following to the connection string:
SslMode=None
So the connection string would be:
$"Uid={dbUsername};Password={dbPassword};Server={dbAddress};Port={dbPort};Database={dbDatabase};SslMode=None",
MySqlDialect.Provider
With that in place, the exception is gone.
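For clarity, the full factory initialization from the question with the workaround applied:
_dbFactory = new OrmLiteConnectionFactory(
    $"Uid={dbUsername};Password={dbPassword};Server={dbAddress};Port={dbPort};Database={dbDatabase};SslMode=None",
    MySqlDialect.Provider);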
In my .NET Framework 4.6.1 application I am using StackExchange.Redis.StrongName 1.2.6 to connect to Azure Redis.
This is the code:
public RedisContext(string connectionString = null)
{
    if (connectionString == null) return;

    Lazy<ConfigurationOptions> lazyConfiguration
        = new Lazy<ConfigurationOptions>(() => ConfigurationOptions.Parse(connectionString));
    var configuration = lazyConfiguration.Value;
    configuration.SslProtocols = SslProtocols.Tls12; //just added
    configuration.AbortOnConnectFail = false;

    Lazy<ConnectionMultiplexer> lazyConnection =
        new Lazy<ConnectionMultiplexer>(() => ConnectionMultiplexer.Connect(configuration));
    _connectionMultiplexer = lazyConnection.Value;

    LogProvider.IsDisabled = true;

    var connectionEndpoints = _connectionMultiplexer.GetEndPoints();
    _lockFactory = new RedisLockFactory(connectionEndpoints.Select(endpoint => new RedisLockEndPoint
    {
        EndPoint = endpoint,
        Password = configuration.Password,
        Ssl = configuration.Ssl
    }));
}
In Azure, I have changed the Redis resource to use TLS 1.2, and in code I have added this line:
configuration.SslProtocols = SslProtocols.Tls12;//just added
And now, nothing works anymore. This is the error I get in Application Insights:
Error connecting to Redis. It was not possible to connect to the redis server(s); ConnectTimeout
I have also tried to add ",ssl=True,sslprotocols=tls12" to the redis connection string, but with the same result.
Try referencing StackExchange.Redis instead of StackExchange.Redis.StrongName. I have done that in a few of my projects and now it works. However, some third-party libraries still use StrongName rather than the regular package. StackExchange.Redis.StrongName is now deprecated: https://github.com/Azure/aspnet-redis-providers/issues/107. I assume you are trying to connect to Azure Redis in relation to them stopping TLS 1.0 and 1.1 support?
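For reference, a minimal connection sketch with the regular package, forcing TLS 1.2 (the cache name and access key are placeholders):
using System.Security.Authentication;
using StackExchange.Redis;

var options = ConfigurationOptions.Parse(
    "<cache-name>.redis.cache.windows.net:6380,password=<access-key>,ssl=True,abortConnect=False");
options.SslProtocols = SslProtocols.Tls12; // force TLS 1.2 now that TLS 1.0/1.1 are disabled
var multiplexer = ConnectionMultiplexer.Connect(options);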
I'm trying to use the ExchangeService.AutodiscoverUrl() method, but it's not working. It doesn't seem to be getting a URL, resulting in the error "Cannot read property 'AbsoluteUri' of undefined" from ExchangeCredentials.GetUriWithoutSuffix.
Here is my code ('c' is just a JSON object):
service = new EwsJS.ExchangeService(EwsJS.ExchangeVersion.Exchange2016);
service.Credentials = new EwsJS.ExchangeCredentials(c.UserName, c.Password);
service.AutodiscoverUrl("email@domain.com", RedirectCallback);

// I'm forcing the accepted redirect here.
function RedirectCallback(url) {
    return true;
}
Autodiscover in ews-javascript-api needed a major re-write to work properly.
Autodiscover has been re-written, and the latest dev build is out under the next tag.
You can use it now by installing npm i ews-javascript-api@next; once the stable build is out you can install the regular build.
var Service = new ExchangeService(ExchangeVersion.Exchange2010_SP1);
Service.Credentials = new WebCredentials(user, pass);
//Autodiscover
Service.AutodiscoverUrl(user, this.RedirectionUrlValidationCallback);
console.log(Service.Url);
Whatever I try, I cannot set an extension property on a User object. Here is a reproducible piece of code:
public async Task CleanTest(string extName)
{
    ExtensionProperty ep = new ExtensionProperty
    {
        Name = extName,
        DataType = "String",
        TargetObjects = { "User" }
    };
    App app = (App)(await _client.Applications.Where(a => a.AppId == _managementAppClientId).ExecuteSingleAsync());
    app.ExtensionProperties.Add(ep);
    await app.UpdateAsync();

    GraphUser user = (GraphUser)(await _client.Users.Where(u => u.UserPrincipalName.Equals("email")).ExecuteSingleAsync());
    string propName = FormatExtensionPropertyName(extName); // formats properly as extension_xxx_name
    user.SetExtendedProperty(propName, "testvalue");
    //user.SetExtendedProperty(extName, "testvalue");
    await user.UpdateAsync(); // fails here
}
According to Fiddler, user.UpdateAsync() doesn't even send a request, and the application fails with an exception:
"The property 'extension_e206e28ff36244b19bc56c01160b9cf0_UserEEEqdbtgd3ixx2' does not exist on type 'Microsoft.Azure.ActiveDirectory.GraphClient.Internal.User'. Make sure to only use property names that are defined by the type."
This issue is also being tracked here:
https://github.com/Azure-Samples/active-directory-dotnet-graphapi-console/issues/28
I've got an alternative workaround for this bug, for those who want to use the version 5.7 OData libraries rather than redirecting to the v5.6.4 versions.
Add a request pipeline configuration handler.
// initialize in the usual way
ActiveDirectoryClient activeDirectoryClient =
    AuthenticationHelper.GetActiveDirectoryClientAsApplication();

// after initialization, add a handler to the request pipeline configuration
activeDirectoryClient.Context
    .Configurations.RequestPipeline
    .OnMessageWriterSettingsCreated(UndeclaredPropertyHandler);
In the handler, change the ODataUndeclaredPropertyBehaviorKinds value on the writer settings to SupportUndeclaredValueProperty.
private static void UndeclaredPropertyHandler(MessageWriterSettingsArgs args)
{
    // The args expose only a wrapper, so reach into its private "settings"
    // field via reflection to get the actual ODataMessageWriterSettings.
    var field = args.Settings.GetType().GetField("settings",
        BindingFlags.NonPublic | BindingFlags.Instance);
    var settingsObject = field?.GetValue(args.Settings);

    var settings = settingsObject as ODataMessageWriterSettings;
    if (settings != null)
    {
        settings.UndeclaredPropertyBehaviorKinds =
            ODataUndeclaredPropertyBehaviorKinds.SupportUndeclaredValueProperty;
    }
}
Just in case you're still looking for a solution to this problem, or someone else is facing the same issue:
I had a similar issue, and it looks like, at least for me, the problem was in the latest version of the "Microsoft.Data.Services.Client" package, 5.7.0 (or in one of its dependencies). When I downgraded to the previous version, 5.6.4, it worked like a charm.
I had the same symptoms: updating an extension property was failing even without any request being made (I also used Fiddler).
Hope it helps!
Artem Liman
I assumed that we should use BasicRedisClientManager or PooledRedisClientManager?
I tried this:
private void dddddd()
{
    for (int i = 0; i <= 1000; i++)
    {
        var client = new BasicRedisClientManager(new string[] { "host1", "host2", "host3" }).GetClient();
        //do something with client
    }
}
This loop runs fine for the first 100-plus iterations, but after that I always get an "Unknown Command Role" error. What is that and how do I fix it? I need help!
I also tried making a new class called MyRedisMgr with a static member to create some sort of singleton, but it didn't work either.
public static BasicRedisClientManager MyMgr = new BasicRedisClientManager(new string[] { "host1", "host2", "host3" });
And then I use it like:
for (int i = 0; i <= 1000; i++)
{
    var client = MyRedisMgr.MyMgr.GetClient();
    //do something with client
}
Please read the documentation on the proper usage of Redis Client Manager which should only be used as a singleton.
The BasicRedisClientManager doesn't have any connection pooling, so every time you call GetClient() you're opening a new TCP connection to the redis-server. Unless you understand the implications, you should be using one of the pooled Redis client managers, e.g. RedisManagerPool.
You also need to always dispose the client after it's used so that it can either be re-used or have its TCP connection disposed of properly.
So your code sample should look like:
//Always use the same singleton instance of a Client Manager
var redisManager = new RedisManagerPool(masterHost);

for (int i = 0; i <= 1000; i++)
{
    using (var redis = redisManager.GetClient())
    {
        //do something with client
    }
}
The "Unknown Command Role" error is due to using an old version of Redis Server. The ROLE command was added in redis 2.8.12 but this API should only be used if your using redis-server v2.8.12+, so you shouldn't be getting this error by default. You can avoid this error by upgrading to either the stable v3.0 or old 2.8 versions of redis-server which has this command.
If you want to continue using an older version, use the INFO command to check which version you're running, then tell ServiceStack.Redis what it is with:
RedisConfig.AssumeServerVersion = 2600; //e.g. v2.6
RedisConfig.AssumeServerVersion = 2612; //e.g. v2.6.12
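If you're unsure of the server version, a quick way to check it from code (a sketch; it assumes ServiceStack.Redis's RedisClient, whose Info property exposes the parsed INFO response):
using (var redis = (RedisClient)redisManager.GetClient())
{
    // "redis_version" comes back from the INFO command, e.g. "2.6.12" => 2612
    Console.WriteLine(redis.Info["redis_version"]);
}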
We decided to implement search functionality in our API, which is developed in ServiceStack, and we chose Lucene.Net since we heard it was a great indexer.
We created a worker role whose job is to create the indexes in an Azure Storage container, guiding ourselves by Leon Cullen's tutorial. We use the AzureDirectory library specified in that post, so we could use the latest Azure SDK.
Then in our API project we added the references for Lucene.Net and AzureDirectory too; our endpoint ended up looking like this:
public object Post(SearchIndex request)
{
    List<Product> products = new List<Product>();
    var pageSize = -1;
    var totalpages = -1;
    int.TryParse(ConfigurationManager.AppSettings["PageSize"], out pageSize);
    if (request.Page.Equals(0))
    {
        request.Page = 1;
    }

    // Get Azure settings
    AzureDirectory azureDirectory;
    try
    {
        // This is the line where we get the Access denied exception thrown at us
        azureDirectory = new AzureDirectory(Microsoft.WindowsAzure.Storage.CloudStorageAccount.Parse(ConfigurationManager.AppSettings["ConnectionStringAzureSearch"]), "indexsearch");

        IndexSearcher searcher;
        using (new AutoStopWatch("Creating searcher"))
        {
            searcher = new IndexSearcher(azureDirectory);
        }

        using (new AutoStopWatch(string.Format("Search for {0}", request.SearchString)))
        {
            string[] searchfields = new string[] { "Id", "Name", "Description" };
            var hits = searcher.Search(QueryMaker(request.SearchString, searchfields), request.Page * pageSize);
            int count = hits.ScoreDocs.Count();
            float temp_totalpages = 0;
            temp_totalpages = (float)hits.ScoreDocs.Count() / (float)pageSize;
            if (temp_totalpages > (int)temp_totalpages)
            {
                totalpages = (int)temp_totalpages + 1;
            }
            else
            {
                totalpages = (int)temp_totalpages;
            }
            foreach (ScoreDoc match in hits.ScoreDocs)
            {
                Document doc = searcher.Doc(match.Doc);
                int producId = int.Parse(doc.Get("Id"));
                Product product = Db.Select<Product>("Id={0}", producId).FirstOrDefault();
                products.Add(product);
            }
        }
        return new SearchIndexResult { result = products.Skip((int)((request.Page - 1) * 10)).Take(pageSize).ToList(), PageSize = pageSize, TotalPages = totalpages };
    }
    catch (Exception e)
    {
        return new HttpResult(HttpStatusCode.NoContent, "azureDirectory. Parameter: " + request.SearchString + ". e: " + e.Message);
    }
}
If we run this locally it works as expected, returning the results we were expecting. But when we published our API to Azure and tried to access the search endpoint, we received a 403 error with the message 'Access to the path "D:/AzureDirectory" is denied'.
We're confused as to why it's trying to access that folder at all; the folder name looks wrong, and it appears to be a local path. We really don't know why it works fine locally but stops working once deployed to Azure.
The worker role runs without any problems; it's the API side that cannot access the folder in Azure Storage. Are we missing some important configuration step? The tutorial we followed wasn't very clear for beginners with Lucene.Net or Azure Storage, so we fear we might have missed an important step. We've checked our connection strings and everything seems OK, though.
For reference:
https://github.com/azure-contrib/AzureDirectory/blob/master/AzureDirectory/AzureDirectory.cs
When you do this:
azureDirectory = new AzureDirectory(Microsoft.WindowsAzure.Storage.CloudStorageAccount.Parse(ConfigurationManager.AppSettings["ConnectionStringAzureSearch"]), "indexsearch");
it executes:
var cachePath = Path.Combine(Path.GetPathRoot(Environment.SystemDirectory), "AzureDirectory");
var azureDir = new DirectoryInfo(cachePath);
if (!azureDir.Exists)
    azureDir.Create();

var catalogPath = Path.Combine(cachePath, _containerName);
var catalogDir = new DirectoryInfo(catalogPath);
if (!catalogDir.Exists)
    catalogDir.Create();

_cacheDirectory = FSDirectory.Open(catalogPath);
So, since the default constructor caches the index under the system drive root (which resolves to D:\AzureDirectory on an Azure web app, hence the access-denied error), a simple solution might be to keep that cache directory under the site root:
DirectoryInfo info = new DirectoryInfo(HostingEnvironment.MapPath("~/"));
azureDirectory = new AzureDirectory(storageAccount, containerName, new SimpleFSDirectory(info), true);
I got it to work:
I got the latest version of AzureDirectory from GitHub.
Got the latest NuGet packages for Azure Storage etc.
Recreated the index.
In addition to @brykneval's answer: I tried his solution, but the last parameter (bool compressBlob = false), which he set to true, made my local debugging fail with a 404 exception from the AzureDirectory library, and when I published to an Azure web app it threw an exception with the message System.IO.InvalidDataException: Block length does not match with its complement.
I removed the last parameter from the constructor and everything works like a charm. Hope this helps anyone.