I'm using TextFileSettings and OrmLiteAppSettings together via MultiAppSettings, but I'd prefer to pre-read all the database settings in one call rather than on demand. Is there a way to do that, so that everything is in memory?
Below is the relevant code:
OracleDialect.Provider.NamingStrategy = new OrmLiteNamingStrategyBase();
OracleDialect.Provider.StringSerializer = new JsonStringSerializer();

var fileSettings = new TextFileSettings(ConfigUtils.GetAppSetting("PathToSecuredFile"));
var dbFactory = new OrmLiteConnectionFactory(
    fileSettings.GetString("LeadDbConfigKey"),
    OracleOrmLiteDialectProvider.Instance);
var dbSettings = new OrmLiteAppSettings(dbFactory);

var multiSettings = new MultiAppSettings(fileSettings, dbSettings);
container.Register<IAppSettings>(c => multiSettings);
Thank you,
Stephen
To preload all db App Settings you can just read the entire ConfigSetting db table into a .NET Dictionary and wrap it in DictionarySettings, e.g.:
using (var db = dbFactory.Open())
{
    var allDbSettings = db.Dictionary<string, string>(
        db.From<ConfigSetting>().Select(x => new { x.Id, x.Value }));

    var multiSettings = new MultiAppSettings(
        fileSettings,
        new DictionarySettings(allDbSettings));
}
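Since DictionarySettings wraps a plain in-memory dictionary, it remains valid after the connection is disposed. For completeness, a hedged usage sketch (the key name below is illustrative, not from the thread): once the wrapped dictionary is registered, subsequent lookups are served from memory with no further DB round-trips:

var appSettings = container.Resolve<IAppSettings>();
// Resolved from the in-memory snapshot, falling back through
// MultiAppSettings' provider ordering:
var someValue = appSettings.GetString("SomeConfigKey");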
I'm trying to update my upload and delete code for Azure.Storage.Blobs version 12.10 and just not having any luck. This is how I used to do it in 12.8:
var name = Guid.NewGuid().ToString();
var fullUri = new UriBuilder()
{
    Scheme = "https",
    Host = "MyStorageAccount.blob.core.windows.net",
    Path = $"myimages/{App.User.EmailAddress}/{name}.jpg",
    Query = subToken
};
var blobClient = new BlobClient(fullUri.Uri);
try
{
    using var myStream = File.OpenRead(file.Path);
    await blobClient.UploadAsync(myStream);
    // the using declaration disposes the stream; no explicit Close() needed
}
catch
{
    // error handling elided in the original snippet
}
All the examples on GitHub use the default storage connection string rather than a SAS token. I have a serverless function that gets me a SAS token, and I already have the container created. I just need a simple sample that uses a pre-existing SAS token and container. Any ideas would be appreciated very much.
John
EDIT: This is the 12.10 code I have been trying, but it doesn't seem to like the URI.
var name = Guid.NewGuid().ToString();
var blobUri = new BlobUriBuilder(new Uri(
    $"https://myAccount.blob.core.windows.net/blobcontainer/{App.User.EmailAddress}/{name}.jpg?{subToken}"));
var completeUri = blobUri.ToUri();
var blobClientNew = new BlobClient(completeUri);

using var myStream3 = File.OpenRead(file.Path);
await blobClientNew.UploadAsync(myStream3);
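Not a confirmed fix, but a minimal 12.x sketch of the kind of sample asked for above, assuming subToken grants write access and does not already start with a '?' (the variable names mirror the question's code):

var name = Guid.NewGuid().ToString();

// The SAS token rides in the query string, so no separate credential object
// is needed. If subToken already starts with '?', strip it first, otherwise
// the interpolated URI ends up with "??" and is rejected.
var blobUri = new Uri(
    $"https://myaccount.blob.core.windows.net/blobcontainer/{App.User.EmailAddress}/{name}.jpg?{subToken}");
var blobClient = new BlobClient(blobUri);

using var stream = File.OpenRead(file.Path);
await blobClient.UploadAsync(stream, overwrite: true);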
I'm trying to migrate from the deprecated Microsoft.WindowsAzure.Storage to Azure.Storage. In my API app, I have a method that I call occasionally to programmatically set the CORS rules in my Azure Storage account.
How do I add CORS rules to the properties using the new Azure.Storage.Blobs?
My original code, which worked under Microsoft.WindowsAzure.Storage, is as follows; _client is an instance of CloudBlobClient. I understand that in Azure.Storage.Blobs I need to use BlobServiceClient, which I now do, but as I said, parts of the following code no longer work because some methods/properties are no longer there. I'm sure they've moved somewhere else, but I haven't been able to figure out where.
public async Task ConfigureCors()
{
    var ALLOWED_CORS_ORIGINS = new List<String> { "http://localhost:49065", "https://myappdomain.com", "https://www.myappdomain", "https://login.microsoftonline.com" };
    var ALLOWED_CORS_HEADERS = new List<String> { "x-ms-meta-qqfilename", "Content-Type", "x-ms-blob-type", "x-ms-blob-content-type" };
    const CorsHttpMethods ALLOWED_CORS_METHODS = CorsHttpMethods.Get | CorsHttpMethods.Delete | CorsHttpMethods.Put | CorsHttpMethods.Options;
    const int ALLOWED_CORS_AGE_DAYS = 5;

    var properties = await _client.GetServicePropertiesAsync();
    properties.DefaultServiceVersion = "2013-08-15";
    await _client.SetServicePropertiesAsync(properties);

    var addRule = true;
    if (addRule)
    {
        var ruleWideOpenWriter = new CorsRule()
        {
            AllowedHeaders = ALLOWED_CORS_HEADERS,
            AllowedOrigins = ALLOWED_CORS_ORIGINS,
            AllowedMethods = ALLOWED_CORS_METHODS,
            MaxAgeInSeconds = (int)TimeSpan.FromDays(ALLOWED_CORS_AGE_DAYS).TotalSeconds
        };

        properties.Cors.CorsRules.Clear();
        properties.Cors.CorsRules.Add(ruleWideOpenWriter);
        await _client.SetServicePropertiesAsync(properties);
    }
}
It looks like I can get and set properties by changing _client.GetServicePropertiesAsync() to _client.GetPropertiesAsync(), but DefaultServiceVersion is no longer there. Also, I can't seem to find the right way to set CORS rules.
I'd appreciate your suggestions. Thanks!
You can use the code below with Azure.Storage.Blobs (I'm using the sync methods; switch to the async versions if you need them):
var properties = blobServiceClient.GetProperties().Value;
properties.DefaultServiceVersion = "xxx";

// In Azure.Storage.Blobs, headers, methods, and origins are
// comma-separated strings rather than lists and enum flags.
var rule = new BlobCorsRule
{
    AllowedHeaders = "x-ms-meta-qqfilename,Content-Type,x-ms-blob-type,x-ms-blob-content-type",
    AllowedMethods = "GET,DELETE,PUT,OPTIONS",
    AllowedOrigins = "http://localhost:49065,https://myappdomain.com,https://www.myappdomain,https://login.microsoftonline.com",
    MaxAgeInSeconds = 3600 // in seconds
};

properties.Cors.Add(rule);
blobServiceClient.SetProperties(properties);
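Since the question's ConfigureCors method is async, here is a minimal sketch of the same calls using the async APIs (reusing the rule object from above):

var properties = (await blobServiceClient.GetPropertiesAsync()).Value;
properties.DefaultServiceVersion = "xxx";
properties.Cors.Clear(); // mirrors the original code, which replaced any existing rules
properties.Cors.Add(rule);
await blobServiceClient.SetPropertiesAsync(properties);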
I need to check whether a collection exists in the DB using Node.js and MongoDB. I am using mongojs as the Node driver. My code is below:
var mongoJs = require('mongojs');
var md5 = require('md5');
var dateTime = require('node-datetime');

var collections = ['f_users'];
var MONGOLAB_URI = "mongodb://user:*****123%40@ds127153.mlab.com:27153/fgdp";
var db = mongoJs(MONGOLAB_URI, collections);

exports.userSignup = function (req, res) {
    var email = req.body.email;
    var password = req.body.password;
    var dob = req.body.dob;
    var dt = dateTime.create();
    var createdDate = dt.format('Y-m-d H:M:S');
    var updateDate = dt.format('Y-m-d H:M:S');

    db.f_user_login.insert(/* document to insert */);
};
Here I need to know whether the collection f_user_login exists inside the db or not. If it does not exist, the code should insert the required document.
I suppose that you first need to add the collection to your db:
var db=mongoJs(MONGOLAB_URI,['f_user_login', 'f_users']);
And then you can try running this:
// mongojs is callback-based, so findOne takes a callback rather than
// returning the document directly
db.f_user_login.findOne({}, function (err, doc) {
    if (doc) {
        // the collection exists (it has at least one document)
    } else {
        // the collection does not exist (or is empty)
    }
});
Hope it helps
When I want to check for the existence of a collection, I use the easy piece of code below:
var nmColl = "MyCollection";
if(db.getCollectionNames().find(function(el) {return el == nmColl;}) == null)
{
//do something
}
This works for MongoDB 3.0 and up. The function db.getCollectionNames() returns all existing collections, and I look up the specified collection name among them. If the collection I need is not there, I can, for example, create it.
I decided to try WebStorm, mainly for the autocomplete feature, but I've got an issue with it.
I require a .js file of my project (which in this case is a driver to communicate with my database), but the autocomplete is not working properly:
var db = require('../../config/database');
var Validator = {};

Validator.isAKnownUserId = function (user_id) {
    var query = 'SELECT * FROM users WHERE id = ?';
    db.   // <- no completion suggestions are offered here
};
The database.js file:
var cassandra = require('cassandra-driver');

// Client connecting to the keyspace used by the application
var client = new cassandra.Client({
    keyspace: keyspace,
    contactPoints: ['127.0.0.1']
});

module.exports = client;
As you can see, nothing special. But, for example, the execute function that is available on cassandra.Client is not autocompleted in my validator.js file, even though it is in database.js.
Furthermore, if I replace
var db = require('../../config/database');
with
var db;
db = require('../../config/database');
or
var db = new require('../../config/database');
then the autocomplete works correctly in my file.
Can someone help me figure out this behavior and how to get proper autocomplete?
Thanks in advance
I am trying to use Apache Thrift for passing messages between applications implemented in different languages. It is not necessarily used as RPC, but more for serializing/deserializing messages.
One application is in Node.js. I am trying to figure out how Apache Thrift works with Node.js, but I can't find much documentation or examples, except for a tiny one regarding Cassandra at:
https://github.com/apache/thrift/tree/trunk/lib/nodejs
Again, I don't need any procedures declared in the .thrift file; I only need to serialize a simple data structure like:
struct Notification {
    1: string subject,
    2: string message
}
Can anyone help me with an example?
I finally found the answer to this question, after wasting a lot of time just looking at the Node.js library source.
//SERIALIZATION:
var buffer = new Buffer(notification);
var transport = new thrift.TFramedTransport(buffer);
var binaryProt = new thrift.TBinaryProtocol(transport);
notification.write(binaryProt);
At this point, the byte array can be found in the transport.outBuffers field:
var byteArray = transport.outBuffers;
For deserialization:
var tTransport = new thrift.TFramedTransport(byteArray);
var tProtocol = new thrift.TBinaryProtocol(tTransport);
var receivedNotif = new notification_type.Notification();
receivedNotif.read(tProtocol);
Also, the following lines need to be added to the index.js file of the Node.js library for Thrift:
exports.TFramedTransport = require('./transport').TFramedTransport;
exports.TBufferedTransport = require('./transport').TBufferedTransport;
exports.TBinaryProtocol = require('./protocol').TBinaryProtocol;
Plus there is also at least one bug in the nodejs library.
The above answer is wrong because it tries to use outBuffers directly, which is an array of buffers. Here is a working example of using Thrift with Node.js:
var util = require('util');
var thrift = require('thrift');
var Notification = require('./gen-nodejs/notification_types.js').Notification;
var TFramedTransport = require('thrift/lib/thrift/transport').TFramedTransport;
var TBufferedTransport = require('thrift/lib/thrift/transport').TBufferedTransport;
var TBinaryProtocol = require('thrift/lib/thrift/protocol').TBinaryProtocol;

var transport = new TFramedTransport(null, function (byteArray) {
    // flush() prepends a 4-byte frame header, which needs to be sliced off.
    byteArray = byteArray.slice(4);

    // DESERIALIZATION:
    var tTransport = new TFramedTransport(byteArray);
    var tProtocol = new TBinaryProtocol(tTransport);
    var receivedNotification = new Notification();
    receivedNotification.read(tProtocol);
    console.log(util.inspect(receivedNotification, false, null));
});

var binaryProt = new TBinaryProtocol(transport);

// SERIALIZATION:
var notification = new Notification({ "subject": "AAAA" });
console.log(util.inspect(notification, false, null));
notification.write(binaryProt);
transport.flush();
DigitalGhost is right; the previous example is wrong.
IMHO, outBuffers is a private property of the transport class and should not be accessed.