Make SubSonic use an existing <connectionStrings> entry instead of a new data provider

I am adding SubSonic to a legacy application. This application already defines a connection string. Is there a way I can use this connection string instead of creating a new Data Provider entry?
I know that one solution is to set it programmatically in code (i.e. SubSonic.DataService.GetInstance("Name").SetDefaultConnectionString("ConnString")). However, is there a more elegant solution?

I think that's the only way to do it. It might throw an exception if there is no placeholder SubSonicService section in the config file, though I don't remember.
// GetInstance just to initialize subsonic.
DataProvider provider = DataService.GetInstance(subsonicProviderName);
// Set the actual database connection string.
// Overrides config file setting.
provider.DefaultConnectionString = connectionString;
DataService.Provider = provider;

How to create a new Authority in jHipster?

I wonder if it is possible to create a new Authority in JHipster. I tried adding a ROLE_WRITER:
/project/src/main/java/location/security/AuthoritiesConstants.java
package location.security;
/**
* Constants for Spring Security authorities.
*/
public final class AuthoritiesConstants {

    public static final String ADMIN = "ROLE_ADMIN";
    public static final String USER = "ROLE_USER";
    public static final String WRITER = "ROLE_WRITER";
    public static final String ANONYMOUS = "ROLE_ANONYMOUS";

    private AuthoritiesConstants() {
    }
}
When I run the app, it does not crash, but when I tried to change a user's ROLE at localhost:9000/#/user-management, it did not offer me the new option.
So I went to the database and added the new ROLE to the JHI_AUTHORITY table, and now it appears in user-management, but I have the feeling that I'm getting into trouble if I mess around with the User entity.
Is there any official way of doing it? (that I am not aware of)
Is there any danger with doing it?
Is there anything else that I should consider?
Thanks
Is there any official way of doing it? (that I am not aware of)
Have you seen src/main/resources/liquibase/authorities.csv? I think that is the right place to add a new authority before production; once you are in production, it is recommended to add your change (an insert into the authority table) as a Liquibase changeset.
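For illustration, before production the new authority is just one more row in that CSV (a sketch; check the exact header in your generated project, as it varies between JHipster versions):

```
name
ROLE_ADMIN
ROLE_USER
ROLE_ANONYMOUS
ROLE_WRITER
```

In production, the equivalent change would be a Liquibase changeset that inserts the same row into the authority table.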
Is there any danger with doing it?
AFAIK a new role will work like the other existing roles in the Spring Security context. Having said that, I might have misunderstood your question.
Is there anything else that I should consider?
Automation: this type of manual change will cause problems in production or in a fresh installation, so we need to automate such changes for both situations.
Although this is already answered, I think it's a good idea to put a link to a related tip posted in the official website of JHipster:
https://www.jhipster.tech/tips/025_tip_create_new_authority.html
I faced the same issue; I added the drop-first: true parameter to src/main/resources/config/application-dev.yml:
....
liquibase:
contexts: dev
drop-first: true
....
It seems to be a parameter that regenerates the database by dropping and recreating it, so it is only suitable for development mode (all existing data is lost).

NodeJS and storing OAuth credentials, outside of the code base?

I am creating a NodeJS API server that will be delegating authentication to an OAuth2 server. While I could store the key and secret along with the source code, I want to avoid doing that, since it feels like a security risk and its lifespan doesn't match that of a server implementation update (key/secret refreshes will likely happen more often).
I could store it in a database or maybe a transient JSON file, but I would like to know what are considered best practices in the NodeJS world, or at least what is considered acceptable. Any suggestions are appreciated.
One option would be to set environment variables as part of your deployment and then access them in the code from the global process object:
var clientId = process.env.CLIENT_ID
var clientSecret = process.env.CLIENT_SECRET
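If a required variable is missing, it is usually better to fail at startup than at the first OAuth call. A minimal sketch of such a check (the helper name requireEnv is my own, not a Node API):

```javascript
// Read a required setting from the environment, failing fast if it is absent.
function requireEnv(name) {
  const value = process.env[name];
  if (!value) {
    throw new Error('Missing required environment variable: ' + name);
  }
  return value;
}

// Usage, assuming the deployment sets these variables:
// const clientId = requireEnv('CLIENT_ID');
// const clientSecret = requireEnv('CLIENT_SECRET');
```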
Since I wanted something that can store multiple values, I just created a JSON file and then read it into a module I called keystore (using an ES6 class):
const fs = require('fs');

class KeyStore {
  load() {
    // Load the JSON file from a location specified in the config
    // or process.env.MYSERVER_KEYSTORE
    this.keys = JSON.parse(fs.readFileSync(process.env.MYSERVER_KEYSTORE, 'utf8'));
  }
  get(keyname) {
    // Return the key I am looking for
    return this.keys[keyname];
  }
}
module.exports = new KeyStore();
I would ideally want to store the file encrypted, but for now I am just storing it in the home directory, readable only by the current user.
If there is another way that is considered better, then I am open to that.

Azure Diagnostics - runtime def vs. wadcfg

I'm trying to understand the various ways to configure the Diagnostics in Windows Azure.
So far I've set a diagnostics.wadcfg that is properly used by Azure as I retrieve its content in the xml blob stored by Diagnostics in the wad-control-container (and the tables are updated at the correct refresh rate).
Now I would like to override some fields from the cscfg, in order, for example, to boost the log transfer period for all instances (without having to update each wad-control-container file, which would be erased on instance recycle anyway).
So in my WebRole.Run(), I get a parameter from RoleEnvironment.GetConfigurationSettingValue() and try to apply it to the current config; but my problem is that the values I read from DiagnosticMonitor.GetDefaultInitialConfiguration() do not correspond to the content of my diagnostics.wadcfg, and setting new values there doesn't seem to have any effect.
Can anyone explain the relationship between what's taken from diagnostics.wadcfg and the values you can set at run-time?
Thanks
GetDefaultInitialConfiguration() will not return your current settings because, as its name states, it builds a default configuration. You have to use the GetCurrentConfiguration method if you need the configuration that is actually in place.
However, if you just need to boost the transfer, you could use, for example, Cerebrata's Azure Diagnostics Manager to quickly kick off an on-demand transfer for your roles.
You could also use the Windows Azure Diagnostics Management cmdlets for powershell. Check out this article.
Hope this helps!
In order to use the values in the wadcfg file, the following code can be used to access the current DiagnosticMonitorConfiguration:
var cloudStorageAccount = CloudStorageAccount.Parse(
    RoleEnvironment.GetConfigurationSettingValue(WADStorageConnectionString));
var roleInstanceDiagnosticManager = cloudStorageAccount.CreateRoleInstanceDiagnosticManager(
    RoleEnvironment.DeploymentId,
    RoleEnvironment.CurrentRoleInstance.Role.Name,
    RoleEnvironment.CurrentRoleInstance.Id);
var dmc = roleInstanceDiagnosticManager.GetCurrentConfiguration();

// Set different logging settings
dmc.Logs....
dmc.PerformanceCounters....

// Don't forget to update
roleInstanceDiagnosticManager.SetCurrentConfiguration(dmc);
The code by Boris Lipshitz doesn't work now (Breaking Changes in Windows Azure Diagnostics (SDK 2.0)): "the DeploymentDiagnosticManager constructor now accepts a connection string to the storage account instead of a CloudStorageAccount object".
Updated code for SDK 2.0+:
var roleInstanceDiagnosticManager = new RoleInstanceDiagnosticManager(
    // Add StorageConnectionString to your role settings for this to work
    CloudConfigurationManager.GetSetting("StorageConnectionString"),
    RoleEnvironment.DeploymentId,
    RoleEnvironment.CurrentRoleInstance.Role.Name,
    RoleEnvironment.CurrentRoleInstance.Id);
var dmc = roleInstanceDiagnosticManager.GetCurrentConfiguration();

// Set different logging settings
dmc.Logs....
dmc.PerformanceCounters....

// Don't forget to update
roleInstanceDiagnosticManager.SetCurrentConfiguration(dmc);

Connections with many databases

We have a webapp where each client has their own db (approx. 700 at the moment).
In SubSonic 2, you had to wrap each call in SharedDBConnectionScope, passing in the right connection string; otherwise you ran the risk of one thread or client getting data from another thread or client.
In SubSonic3 is this still needed? Do I need to wrap the calls like I did in 2.x?
There are easy ways of switching the database now, but do I still have thread issues or can I do away with the call to SharedDBConnectionScope?
SubSonic 3 greatly improved the way to create a provider, either from scratch or by just passing a name and a connection string.
Some Examples:
// Linq Templates:
var db = new YourDB("connectionstring goes here", "System.Data.SqlClient");

// SimpleRepository without app.config:
IDataProvider provider = SubSonic.DataProviders.ProviderFactory.GetProvider(
    connectionString: "Server=localhost;Database=clientdb;Uid=root;",
    providerName: "MySql.Data.MySqlClient");
IRepository repository = new SimpleRepository(provider,
    SimpleRepositoryOptions.RunMigrations);
So basically you can create a provider or repository each time a client connects and use this in your class.

Separate Read/Write Connection for SubSonic

The security policy in our client's production environment requires that we use separate connections for writes and reads to and from the database. We have decided to use SubSonic to generate our DAL, so I am interested in knowing whether this is possible and, if so, how?
You can specify the provider SubSonic uses at runtime. So you would specify the read provider (using your read connection string) when loading from the database, and then specify the write provider (using your write connection string) when you want to save to it.
The following isn't tested but I think it should give you the general idea:
SqlQuery query = new Select()
    .From<Contact>();
query.ProviderName = Databases.ReadProvider;
ContactCollection contacts = query.ExecuteAsCollection<ContactCollection>();

contacts[0].FirstName = "John";
contacts.ProviderName = Databases.WriteProvider;
contacts.SaveAll();
