Why ASP.NET Core AddDataProtection Keys cannot be loaded from AppSettings.json - security

Why isn't there an easy way to load your Data Protection keys from AppSettings.json instead of persisting them to the file system as XML?
Here is the example from the Microsoft documentation.
services.AddDataProtection()
    .PersistKeysToFileSystem(new DirectoryInfo(@"{PATH TO COMMON KEY RING FOLDER}"))
    .SetApplicationName("SharedCookieApp");

services.ConfigureApplicationCookie(options => {
    options.Cookie.Name = ".AspNet.SharedCookie";
    options.Cookie.Path = "/";
});
I'm just wondering why there isn't something like the following.
services.AddDataProtection()
    .PersistKeysToAppSettings("EncryptionKeys")
    .SetApplicationName("SharedCookieApp");

services.ConfigureApplicationCookie(options => {
    options.Cookie.Name = ".AspNet.SharedCookie";
    options.Cookie.Path = "/";
});
I don't understand why storing keys in an XML file would be any different than storing them in your AppSettings.json. I know the format is different, but it's no more or less secure, correct?
I just want to be sure I'm not missing something.
Assumptions:
AppSettings.json is just as secure as some other XML file on disk
Azure AppSettings are securely stored and can only be accessed by permitted accounts
Azure AppSettings values would override any uploaded "developer" values
Developers will not store their production keys in source control, surely, right? :)
I know this would not work for expiring / recycling keys

"It's complicated"
1. We create keys on demand.
2. We create multiple keys: just before a key expires, we create a new one.
3. We need keys to be synchronized between applications.
4. We need keys to be encrypted where possible.
AppSettings gives us none of those things. Applications can't update their own settings files, which rules out 1 and 2; web sites don't copy a changed settings file between instances, which rules out 3; and you can't encrypt values in app settings, which rules out 4.
The format isn't the problem. You could write your own encryption wrapper to cope with #4, but the rest is still necessary, so now you have to change how settings work so they're read/write (and safely read/write), and then persuade web hosts to synchronize your custom settings file between instances.
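For what it's worth, the Data Protection stack does let you own persistence yourself if you accept those constraints. A minimal sketch, assuming you implement the framework's IXmlRepository interface (the class name and the in-memory dictionary here are purely illustrative, not a recommended key store):

using System.Collections.Generic;
using System.Linq;
using System.Xml.Linq;
using Microsoft.AspNetCore.DataProtection.KeyManagement;
using Microsoft.AspNetCore.DataProtection.Repositories;
using Microsoft.Extensions.DependencyInjection;

public class InMemoryXmlRepository : IXmlRepository
{
    // Stand-in for whatever read/write store you would actually use
    // (not thread-safe, purely illustrative).
    private readonly Dictionary<string, XElement> _elements = new Dictionary<string, XElement>();

    public IReadOnlyCollection<XElement> GetAllElements()
        => _elements.Values.Select(e => new XElement(e)).ToList().AsReadOnly();

    public void StoreElement(XElement element, string friendlyName)
        => _elements[friendlyName] = new XElement(element);
}

// Registration:
services.AddDataProtection();
services.Configure<KeyManagementOptions>(options =>
    options.XmlRepository = new InMemoryXmlRepository());

Note that this only moves the storage problem around; the read/write and synchronization requirements above still apply to whatever backing store you choose.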

Related

What does IsEncrypted in local.appsettings.json of Azure function mean?

The local.settings.json file for an Azure Function is created with the property IsEncrypted set to false.
{
    "IsEncrypted": false,
    "Values": {
    }
}
What does this setting mean and how is it used?
This setting indicates whether the values in local.settings.json are encrypted with a local machine key. It is used by the func settings encrypt, decrypt, and add commands of the Azure Functions Core Tools, so flipping it between true and false by hand has no effect on its own.
Usually you don't need to care about this setting; it's false by default. You might encrypt the settings file when you need to transfer it over the internet and want to keep the values protected, bearing in mind that the file can only be decrypted on the machine where it was encrypted.
Here's the doc.

yii2 config\db.php secure?

I'm using the framework Yii2 for the first time and I was wondering whether it is secure to write my database password in plain text in config\db.php, or is there a more secure way to access the database?
It should be secure enough as long as you are not exposing this file to public users.
You can store it basically anywhere on the server (as long as it's not exposed to public users), but the application should be able to read it quickly and easily (so it doesn't add delay to the database connection).
If you are using Git, I believe it is a bad idea to commit any file with a password in it. Personally I keep a separate file with a few crucial passwords outside of Git and load that file in to get the values, like this...
$ini = parse_ini_file('path/passwords.ini', true);

return [
    'class' => 'yii\db\Connection',
    'dsn' => $ini['db']['dsn'],
    'username' => $ini['db']['username'],
    'password' => $ini['db']['password'],
    'charset' => 'utf8',
];
Where passwords.ini would look like...
; My password file
[db]
dsn="mysql:host=localhost;dbname=xxx"
username="xxx"
password="xxx"
You could leave db.php out of Git, but I find this more convenient, especially for other config files where many of the settings do not affect security.
I don't know why you created an extra file. In your common\config folder you will find main-local.php; you can define your connection credentials there.
By default that file will be ignored by git.

How can I sign a JWT token on an Azure WebJob without getting a CryptographicException?

I have a WebJob that needs to create a JWT token to talk with an external service. The following code works when I run the WebJob on my local machine:
public static string SignES256(byte[] p8Certificate, object header, object payload)
{
    var headerString = JsonConvert.SerializeObject(header);
    var payloadString = JsonConvert.SerializeObject(payload);

    CngKey key = CngKey.Import(p8Certificate, CngKeyBlobFormat.Pkcs8PrivateBlob);

    using (ECDsaCng dsa = new ECDsaCng(key))
    {
        dsa.HashAlgorithm = CngAlgorithm.Sha256;
        var unsignedJwtData = Base64UrlEncoder.Encode(Encoding.UTF8.GetBytes(headerString)) + "." + Base64UrlEncoder.Encode(Encoding.UTF8.GetBytes(payloadString));
        var signature = dsa.SignData(Encoding.UTF8.GetBytes(unsignedJwtData));
        return unsignedJwtData + "." + Base64UrlEncoder.Encode(signature);
    }
}
However, when I deploy my WebJob to Azure, I get the following exception:
Microsoft.Azure.WebJobs.Host.FunctionInvocationException: Exception while executing function: NotificationFunctions.QueueOperation ---> System.Security.Cryptography.CryptographicException: The system cannot find the file specified. at System.Security.Cryptography.NCryptNative.ImportKey(SafeNCryptProviderHandle provider, Byte[] keyBlob, String format) at System.Security.Cryptography.CngKey.Import(Byte[] keyBlob, CngKeyBlobFormat format, CngProvider provider)
It says it can't find a specified file, but the parameters I am passing in are not looking at a file location, they are in memory. From what I have gathered, there may be some kind of cryptography setting I need to enable to be able to use the CngKey.Import method, but I can't find any settings in the Azure portal to configure related to this.
I have also tried using JwtSecurityTokenHandler, but it doesn't seem to handle the ES256 hashing algorithm I need to use (even though it is referenced in the JwtAlgorithms class as ECDSA_SHA256).
Any suggestions would be appreciated!
UPDATE
It appears that CngKey.Import may actually be trying to store the certificate somewhere that is not accessible on Azure. I don't need it stored, so if there is a better way to access the certificate in memory, or to convert it to a different kind of certificate that is easier to use, that would work.
UPDATE 2
This issue might be related to Azure Web Apps IIS setting not loading the user profile as mentioned here. I have enabled this by setting WEBSITE_LOAD_USER_PROFILE = 1 in the Azure portal app settings. I have tried with this update when running the code both via the WebJob and the Web App in Azure but I still receive the same error.
I used a decompiler to take a look under the hood at what the CngKey.Import method was actually doing. It turns out it tries to insert the certificate I am using into the "Microsoft Software Key Storage Provider". I don't actually need that; I just need to read the value of the certificate, but it doesn't look like that is possible with this API.
Once I realized a certificate was getting inserted into a store somewhere on the machine, I started thinking about how bad that would be from a security standpoint if your Azure Web App was running in a shared environment, like it does for the Free and Shared tiers. Sure enough, my VM was on the Shared tier. Scaling it up to the Basic tier resolved the issue.
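If you can target .NET Core 3.0 or later, another option (a sketch of an alternative, not the fix described above, and it assumes your blob really is a PKCS#8 ECDSA private key) is to skip CngKey entirely and keep the key purely in memory via ECDsa.ImportPkcs8PrivateKey, so no key storage provider is touched at all. The method name SignES256InMemory is just for illustration:

using System;
using System.Security.Cryptography;
using System.Text;
using Microsoft.IdentityModel.Tokens; // Base64UrlEncoder, as in the original snippet
using Newtonsoft.Json;

public static string SignES256InMemory(byte[] p8PrivateKey, object header, object payload)
{
    var headerString = JsonConvert.SerializeObject(header);
    var payloadString = JsonConvert.SerializeObject(payload);

    // Import the PKCS#8 blob into an in-memory ECDsa instance;
    // nothing is written to a key store, so the user profile / tier doesn't matter.
    using (var ecdsa = ECDsa.Create())
    {
        ecdsa.ImportPkcs8PrivateKey(p8PrivateKey, out _);

        var unsignedJwtData = Base64UrlEncoder.Encode(Encoding.UTF8.GetBytes(headerString))
                            + "." + Base64UrlEncoder.Encode(Encoding.UTF8.GetBytes(payloadString));

        // ES256 = ECDSA over the P-256 curve with SHA-256
        var signature = ecdsa.SignData(Encoding.UTF8.GetBytes(unsignedJwtData), HashAlgorithmName.SHA256);
        return unsignedJwtData + "." + Base64UrlEncoder.Encode(signature);
    }
}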

Virus Scanning Uploaded files from Azure Web/Worker Role

We are designing an Azure website which will allow users to upload content (MP4, DOCX and other MS Office files) which can then be accessed.
Some video content will be encoded into several quality levels before it is streamed (using Azure Media Services).
We need to add an intermediate step so we can scan uploaded files for potential virus risk. Is there functionality built into azure (or third party) which will allow us to call an API to scan content before processing it? We are ideally looking for an API rather than just a background service on a VM, so we can get feedback potentially for use in a web or worker role.
I had a quick look at Symantec Endpoint and Windows Defender, but I'm not sure these offer an API.
I have successfully done this using the open source ClamAV. You don't specify what languages you are using, but as it's Azure I'll assume .Net.
There is a .Net wrapper that should provide the API that you are looking for:
https://github.com/tekmaven/nClam
Here is some sample code (note: this is copied directly from the nClam GitHub repo page and reproduced here just to protect against link rot)
using System;
using System.Linq;
using nClam;

class Program
{
    static void Main(string[] args)
    {
        var clam = new ClamClient("localhost", 3310);
        var scanResult = clam.ScanFileOnServer("C:\\test.txt");  // any file you would like!

        switch (scanResult.Result)
        {
            case ClamScanResults.Clean:
                Console.WriteLine("The file is clean!");
                break;
            case ClamScanResults.VirusDetected:
                Console.WriteLine("Virus Found!");
                Console.WriteLine("Virus name: {0}", scanResult.InfectedFiles.First().VirusName);
                break;
            case ClamScanResults.Error:
                Console.WriteLine("Woah an error occured! Error: {0}", scanResult.RawResult);
                break;
        }
    }
}
There are also APIs available for refreshing the virus definition database. All the necessary ClamAV files can be included in the deployment package and any configuration can be put into the service start-up code.
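If the uploaded bytes live in blob storage or memory rather than on the scanner's local disk, nClam can also stream the content to the ClamAV daemon. A rough sketch (it assumes nClam's SendAndScanFileAsync overload that accepts a Stream, per the project's README; the IsCleanAsync helper name is made up for illustration):

using System.IO;
using System.Threading.Tasks;
using nClam;

public static async Task<bool> IsCleanAsync(Stream uploadedFile)
{
    var clam = new ClamClient("localhost", 3310);

    // Sends the stream to clamd over its INSTREAM protocol; nothing is written to disk.
    var result = await clam.SendAndScanFileAsync(uploadedFile);

    return result.Result == ClamScanResults.Clean;
}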
ClamAV is a good idea, especially now that 0.99 is about to be released with YARA rule support - it will make it really easy for you to write custom rules and let ClamAV use the many good YARA rules in the open today.
Another route, and a bit of shameless plugging, is to check out scanii.com; it's a SaaS for malware/virus detection and it integrates quite nicely with AWS and Azure.
There are a number of options to achieve this:
Firstly you can use ClamAV as already mentioned. ClamAV doesn't always receive the best press for its virus databases but as others have pointed out it's easy to use and is expandable.
You can also install a commercial scanner, such as AVG, Kaspersky, etc. Many of these come with a C API that you can talk to directly, although getting access to this can often be expensive from a licensing point of view.
Alternatively you can make calls to the executable directly using something like the following to capture the output:
var proc = new Process {
    StartInfo = new ProcessStartInfo {
        FileName = "scanner.exe",
        Arguments = "arguments needed",
        UseShellExecute = false,
        RedirectStandardOutput = true,
        CreateNoWindow = true
    }
};

proc.Start();

// Collect the scanner's output so it can be parsed afterwards.
var output = new StringBuilder();
while (!proc.StandardOutput.EndOfStream) {
    output.AppendLine(proc.StandardOutput.ReadLine());
}
proc.WaitForExit();
You would then need to parse the output to get the result and use it within your application.
Finally, now there are some commercial APIs available to do this kind of thing such as attachmentscanner (disclaimer I'm related to this product) or scanii. These will provide you with an API and a more scalable option to scan specific files and receive the response from at least one virus checking engine.
New thing coming Spring/Summer 2020: Advanced Threat Protection for Azure Storage includes malware reputation screening, which detects malware uploads using hash reputation analysis backed by Microsoft Threat Intelligence, covering hashes for viruses, trojans, spyware and ransomware. Note: not every piece of malware can be guaranteed to be detected using the hash reputation analysis technique.
https://techcommunity.microsoft.com/t5/Azure-Security-Center/Validating-ATP-for-Azure-Storage-Detections-in-Azure-Security/ba-p/1068131

What is the laravel way of storing API keys?

Is there a specific file or directory that is recommended for storing API keys? I'd like to take my keys out of my codebase but I'm not sure where to put them.
This is an updated answer for newer versions of Laravel.
First, set the credentials in your .env file. Generally you'll want to prefix it with the name of the service, so in this example I'll use Google Maps.
GOOGLE_KEY=secret_api_key
Then, take a look in config/services.php - it's where we can map environment variables into the app configuration. You'll see some existing examples out of the box. You can add additional configuration under the service name and point it to the environment variable.
'google' => [
    'key' => env('GOOGLE_KEY'),
],
Then when you need to access this key within your app you can get it through the app configuration instead.
// Through a facade
Config::get('services.google.key');
// Through a helper
config('services.google.key');
Be sure not to use env('GOOGLE_KEY') directly throughout your app - it's more performant to go through the app configuration, as it's cached - especially if you run php artisan config:cache as part of your deployment process.
You can make your API keys environment variables and then access them that way. Read more about protecting sensitive configuration from the docs.
You simply create a .env.php file in the root of your project that returns an array of environment variables.
<?php

return array(
    'SECRET_API_KEY' => 'PUT YOUR API KEY HERE'
);
Then you can access it in your app like so.
getenv('SECRET_API_KEY');