Where to keep a p12 file securely on App Engine?

On App Engine, I need a p12 file to create signed URLs:
https://developers.google.com/storage/docs/accesscontrol#Signing-Strings
Google does not describe best practices for keeping this file.
Can I use the WEB-INF directory to store the file? It would then be part of the source code and kept together with the password to open it.
What are best practices here? Or other approaches?
Edit:
What about performance? Is it efficient to load the file over and over again? Does App Engine automatically cache the file across calls (on the same instance), or will I need to load the file once using a servlet and then keep it in a static variable somehow? Are there better ways to achieve this, such as storing the file in a datastore record and keeping it in memcache? How secure would that approach be? Probably not good, right?

On App Engine specifically there are a number of unusual security limitations around file storage. I have found that the best place to store resources securely is the application bundle itself. If you're using the default Maven setup produced by the App Engine Maven skeleton project, this is as simple as placing the file in the appropriate resources directory (typically src/main/resources), from which it is packaged onto the classpath.
Once the p12 is in the correct location, load it using the class loader's getResourceAsStream() method. Then, when building the GoogleCredential, don't use the documented setServiceAccountPrivateKeyFromP12File() method; instead use setServiceAccountPrivateKey() and pass in the PrivateKey you just constructed.
Additionally, you will most likely not want to use any of this against a live App Engine instance, since App Engine already provides the much easier AppIdentityCredential in that case. You will probably want to detect whether your app is running in production and fall back to the service account key only when testing on localhost.
Putting all of these pieces together yields the following method, which works for me:
public static HttpRequestInitializer getDefaultCredentials() throws IOException
{
    List<String> scopes = Arrays.asList(new String[] { DEVSTORAGE_FULL_CONTROL });
    // On a live App Engine instance, the built-in app identity is all you need.
    if (SystemProperty.environment.value() == SystemProperty.Environment.Value.Production)
        return new AppIdentityCredential(scopes);

    // On localhost, build a credential from the service account key bundled
    // on the classpath (src/main/resources in the standard Maven layout).
    GoogleCredential credential;
    try
    {
        String p12Password = "notasecret";
        ClassLoader classLoader = ServiceUtils.class.getClassLoader();
        KeyStore keystore = KeyStore.getInstance("PKCS12");
        InputStream keyFileStream = classLoader.getResourceAsStream("key.p12");
        if (keyFileStream == null)
        {
            throw new IOException("Key file not found.");
        }
        keystore.load(keyFileStream, p12Password.toCharArray());
        // "privatekey" is the alias Google uses in the generated p12 file.
        PrivateKey key = (PrivateKey) keystore.getKey("privatekey", p12Password.toCharArray());
        credential = new GoogleCredential.Builder()
                .setTransport(HTTP_TRANSPORT)
                .setJsonFactory(JSON_FACTORY)
                .setServiceAccountId("YOUR_SERVICE_ACCOUNT_EMAIL@developer.gserviceaccount.com")
                .setServiceAccountPrivateKey(key)
                .setServiceAccountScopes(scopes)
                .build();
    }
    catch (GeneralSecurityException e)
    {
        throw new IOException("Unable to read the service account key.", e);
    }
    return credential;
}

Related

How to get a TokenCredential from a ServiceClientCredential object?

In my application, we are presently using ServiceClientCredentials from Microsoft.Rest. We are migrating parts of our application over to Azure.ResourceManager's ArmClient.
Basically, all of our previous integrations with Azure used Microsoft.Azure.ResourceManager, which exposed clients like BlobClient or SecretClient, and these all accepted ServiceClientCredentials as a valid token type.
Now, with ArmClient, I need to authenticate using DefaultAzureCredential, which derives from Azure.Core's TokenCredential.
Surprisingly, I haven't been able to find any examples of how to create this TokenCredential.
DefaultAzureCredential just works on my local PC, since I'm signed into Visual Studio, but not on my build pipeline, where I use certificate-based auth exposed as a ServiceClientCredential.
This was easier than I thought. The fix ended up being to add a new ServiceCollection extension method and pass in IWebHostEnvironment.
I use that to determine whether we're running in local debug, in which case we can use DefaultAzureCredential, or in prod mode, in which case we should use certificate-based auth.
It looks somewhat like this and works like a charm.
public static IServiceCollection AddDefaultAzureToken(this IServiceCollection services, IWebHostEnvironment environment)
{
    if (environment.IsDevelopment())
    {
        // Local debug: piggyback on the developer's Visual Studio sign-in.
        services.AddSingleton<TokenCredential>(new DefaultAzureCredential());
    }
    else
    {
        // Production: certificate-based auth (placeholder values shown).
        var certCredential = new ClientCertificateCredential(
            "<tenant-id>", "<client-id>", "<path-to-certificate>");
        services.AddSingleton<TokenCredential>(certCredential);
    }
    return services;
}
This works since DefaultAzureCredential and ClientCertificateCredential both derive from TokenCredential, and the L in SOLID, the Liskov Substitution Principle, tells us that any subtype can be substituted wherever its base type is expected without breaking the application.
Note: the above sample is pseudocode and may need slight changes to work in your environment; clean it up to match your team's coding standards.
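For completeness, a consumer might then resolve the registered credential and construct the ArmClient roughly like this (a minimal sketch; the DI plumbing shown is an assumption, while ArmClient taking a TokenCredential is its standard constructor):
// Sketch: resolve the TokenCredential registered above and create an ArmClient.
var provider = services.BuildServiceProvider();
var credential = provider.GetRequiredService<TokenCredential>();
var armClient = new ArmClient(credential);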

How to store file into inetpub\wwwroot instead of local machine folder on UWP application

I am currently developing a UWP application for my school project and one of the pages allows the user to take a picture of themselves. I created the feature by following this tutorial: CameraStarterKit
For now I am storing the pictures taken on my desktop's picture folder. But the requirement of my project is to store the pictures taken in a folder called "Photos" under inetpub\wwwroot.
I don't really understand what wwwroot or IIS is, hence I have no idea how I should modify my code to store the pictures in that folder.
Here is my code for storing to my local desktop:
private async Task TakePhotoAsync()
{
    idleTimer.Stop();
    idleTimer.Start();
    var stream = new InMemoryRandomAccessStream();
    //MediaPlayer mediaPlayer = new MediaPlayer();
    //mediaPlayer.Source = MediaSource.CreateFromUri(new Uri("ms-appx:///Assets/camera-shutter-click-03.mp3"));
    //mediaPlayer.Play();
    Debug.WriteLine("Taking photo...");
    await _mediaCapture.CapturePhotoToStreamAsync(ImageEncodingProperties.CreateJpeg(), stream);
    try
    {
        var file = await _captureFolder.CreateFileAsync("NYPVisitPhoto.jpg", CreationCollisionOption.GenerateUniqueName);
        Debug.WriteLine("Photo taken! Saving to " + file.Path);
        var photoOrientation = CameraRotationHelper.ConvertSimpleOrientationToPhotoOrientation(_rotationHelper.GetCameraCaptureOrientation());
        await ReencodeAndSavePhotoAsync(stream, file, photoOrientation);
        Debug.WriteLine("Photo saved!");
    }
    catch (Exception ex)
    {
        // File I/O errors are reported as exceptions
        Debug.WriteLine("Exception when taking a photo: " + ex.ToString());
    }
}
For saving the file:
private static async Task ReencodeAndSavePhotoAsync(IRandomAccessStream stream, StorageFile file, PhotoOrientation photoOrientation)
{
    using (var inputStream = stream)
    {
        var decoder = await BitmapDecoder.CreateAsync(inputStream);
        using (var outputStream = await file.OpenAsync(FileAccessMode.ReadWrite))
        {
            var encoder = await BitmapEncoder.CreateForTranscodingAsync(outputStream, decoder);
            var properties = new BitmapPropertySet { { "System.Photo.Orientation", new BitmapTypedValue(photoOrientation, PropertyType.UInt16) } };
            await encoder.BitmapProperties.SetPropertiesAsync(properties);
            await encoder.FlushAsync();
        }
    }
}
I would add an answer since there are tricky things about this requirement.
The first is that the app can only access a few folders, and inetpub is not one of them.
Using a brokered Windows Runtime component (I would suggest FullTrustProcessLauncher, which is much simpler to develop and deploy) can let a UWP app access folders the same way traditional desktop applications do; a sketch follows.
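As a minimal sketch (assuming the desktop component and the runFullTrust capability are already declared in Package.appxmanifest), launching the full-trust process from the UWP side looks roughly like this:
// Sketch: start the packaged desktop (full-trust) process from the UWP app.
// Needs: using Windows.ApplicationModel; using Windows.Foundation.Metadata;
if (ApiInformation.IsApiContractPresent("Windows.ApplicationModel.FullTrustAppContract", 1, 0))
{
    // The desktop process can then write to folders the UWP sandbox cannot reach.
    await FullTrustProcessLauncher.LaunchFullTrustProcessForCurrentAppAsync();
}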
While this works for an ordinary folder, the inetpub folder is different in that it requires administrator privileges to write to, unless you turn UAC off.
The desktop component launched by the app does not have adequate privileges to write to that folder, either.
So I think an alternative would be to set up a virtual directory in IIS Manager that maps to a folder in the public Pictures library, and have the app save pictures to that folder, as in the sketch below.
From the website's perspective, a virtual directory is the same as a real folder under inetpub; what differs is the access permissions.
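A minimal sketch of the app side, assuming the Pictures Library capability is declared in the manifest and the folder name "Photos" matches whatever the virtual directory is mapped to:
// Sketch: save captures into a "Photos" folder in the Pictures library,
// which an IIS virtual directory can then expose under the website.
var photosFolder = await KnownFolders.PicturesLibrary.CreateFolderAsync(
    "Photos", CreationCollisionOption.OpenIfExists);
_captureFolder = photosFolder;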
Kennyzx is right that you cannot access the inetpub folder from your UWP application due to permissions.
But if your application fulfills the following criteria, you can use a Brokered Windows Runtime Component (a component within your app) to copy your file to any location in the system:
Your application is a LOB application.
You are only targeting desktop devices (I assume this is true given your requirement).
You are using side-loading for your app's installation and distribution.
If all three are yes, then use a Brokered Windows Runtime Component for UWP. It's not a small thing that can be shown in an example here on SO, so it's worth reading up on and implementing.

Azure Blob Storage to host images / media - fetching with blob URL (without intermediary controller)

In this article, the author provides a way to upload via a WebAPI controller. This makes sense to me.
He then recommends using an API Controller and a dedicated service method to deliver the blob:
public async Task<HttpResponseMessage> GetBlobDownload(int blobId)
{
    // IMPORTANT: This must return HttpResponseMessage instead of IHttpActionResult
    try
    {
        var result = await _service.DownloadBlob(blobId);
        if (result == null)
        {
            return new HttpResponseMessage(HttpStatusCode.NotFound);
        }

        // Reset the stream position; otherwise, download will not work
        result.BlobStream.Position = 0;

        // Create response message with blob stream as its content
        var message = new HttpResponseMessage(HttpStatusCode.OK)
        {
            Content = new StreamContent(result.BlobStream)
        };

        // Set content headers
        message.Content.Headers.ContentLength = result.BlobLength;
        message.Content.Headers.ContentType = new MediaTypeHeaderValue(result.BlobContentType);
        message.Content.Headers.ContentDisposition = new ContentDispositionHeaderValue("attachment")
        {
            FileName = HttpUtility.UrlDecode(result.BlobFileName),
            Size = result.BlobLength
        };

        return message;
    }
    catch (Exception ex)
    {
        return new HttpResponseMessage
        {
            StatusCode = HttpStatusCode.InternalServerError,
            Content = new StringContent(ex.Message)
        };
    }
}
My question is - why can't we just reference the blob URL directly after storing it in the database (instead of fetching via Blob ID)?
What's the benefit of fetching through a controller like this?
You can certainly deliver a blob directly, which avoids using resources of your app tier (VM, App Service, etc.). Just note that if blobs are private, you'd have to provide a special signed URI to the client app (e.g. by appending a shared access signature) to allow the URI to be used publicly for a temporary period of time. You'd generate the SAS within your app tier, as sketched below.
You'd still have all of your access-control logic in your controller, deciding who has rights to the object, for how long, and so on. But you'd no longer need to stream the content through your app (consuming CPU, memory, and network resources), and you'd still be able to use https with direct storage access.
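A rough sketch of generating such a SAS with the classic Microsoft.WindowsAzure.Storage SDK (the container and blob names are placeholders, not from the article):
// Sketch: issue a short-lived, read-only SAS URL for a private blob.
var account = CloudStorageAccount.Parse(connectionString);
var container = account.CreateCloudBlobClient().GetContainerReference("media");
var blob = container.GetBlockBlobReference("photo.jpg");
var sas = blob.GetSharedAccessSignature(new SharedAccessBlobPolicy
{
    Permissions = SharedAccessBlobPermissions.Read,
    SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddMinutes(15)
});
// The client can GET this URL directly from storage over https.
var signedUrl = blob.Uri + sas;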
Quite simply, you can enforce access control centrally when you use a controller. You have far more control over who, what, and why is accessing the file, and you can log requests easily too.
Longer term, you might want to change the location of your files, add a partitioning strategy for scalability, or do something else in your app that requires a change you don't foresee right now. When you use a controller, you isolate the client code from all of those inevitable changes.

Service Fabric reverse proxy port configurability

I'm trying to write an encapsulation that returns the URI of the local reverse proxy for Service Fabric, and I'm having a hard time deciding how to approach configurability for the port (known as "HttpApplicationGatewayEndpoint" in the service manifest, or "reverseProxyEndpointPort" in the ARM template).
The best way I've thought of is to call "GetClusterManifestAsync" from the fabric client and parse the port from there, but I'm not a fan of that for a few reasons. For one, the call returns the manifest as a string XML blob, which isn't guarded against changes to the manifest schema. I've also not yet found a way to query the cluster manager for which node type I'm currently on, so if for some silly reason the cluster has multiple node types and each one has a different reverse proxy port (just being a defensive coder here), the lookup could fail.
It seems like an awful lot of effort to dynamically discover that port number, and I've definitely missed things in the fabric API before, so any suggestions on how to approach this?
Edit:
I see from the example project that it gets the port number from a config package in the service. I would rather not do it that way, as I'd then have to write a ton of boilerplate for every service that needs this, just to read configs and pass the value around. Since this is more or less a constant at runtime, it seems to me it could be treated as such and fetched somewhere from the fabric client?
After some time spent in the object browser, I was able to find the various pieces I needed to put this together properly.
public class ReverseProxyPortResolver
{
    /// <summary>
    /// Represents the port that the current fabric node is configured
    /// to use when using a reverse proxy on localhost
    /// </summary>
    public static AsyncLazy<int> ReverseProxyPort = new AsyncLazy<int>(async () =>
    {
        // Get the cluster manifest from the fabric client & deserialize it
        // into a hardened object
        ClusterManifestType deserializedManifest;
        using (var cl = new FabricClient())
        {
            var manifestStr = await cl.ClusterManager.GetClusterManifestAsync().ConfigureAwait(false);
            var serializer = new XmlSerializer(typeof(ClusterManifestType));
            using (var reader = new StringReader(manifestStr))
            {
                deserializedManifest = (ClusterManifestType)serializer.Deserialize(reader);
            }
        }
        // Fetch the setting from the correct node type
        var nodeType = GetNodeType();
        var nodeTypeSettings = deserializedManifest.NodeTypes.Single(x => x.Name.Equals(nodeType));
        return int.Parse(nodeTypeSettings.Endpoints.HttpApplicationGatewayEndpoint.Port);
    });

    private static string GetNodeType()
    {
        try
        {
            return FabricRuntime.GetNodeContext().NodeType;
        }
        catch (FabricConnectionDeniedException)
        {
            // This code was invoked from a non-fabric started application,
            // likely a unit test
            return "NodeType0";
        }
    }
}
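Usage is then a one-liner (this assumes an awaitable AsyncLazy<T>, e.g. the one from Nito.AsyncEx; the service path below is a made-up example):
// Sketch: the first await runs the resolver; later awaits return the cached port.
int port = await ReverseProxyPortResolver.ReverseProxyPort;
var uri = new Uri($"http://localhost:{port}/MyApp/MyService/api/values");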
News to me in this investigation was that the schemas for all of the Service Fabric XML are squirreled away in an assembly named System.Fabric.Management.ServiceModel.

Google Calendar API and shared hosting issue

I'm trying to use a public Google calendar in a webpage that will need editing functionality.
To that end, I created the calendar and made it public. I then created a Google service account and the related client ID.
I also enabled the Calendar API and added the v3 DLLs to the project.
I downloaded the p12 certificate, and that's when the problems start.
The call to Google is made with an X509 certificate, but the way the .NET Framework is built, it loads the key through a per-user temp folder.
Since the web server is a shared host (GoDaddy), I cannot have the app pool identity modified.
As a result, I'm getting this error:
System.Security.Cryptography.CryptographicException: The system cannot find the file specified.
when calling:
X509Certificate2 certificate = new X509Certificate2(GoogleOAuth2CertificatePath,
    "notasecret", X509KeyStorageFlags.Exportable);
That certificate variable is then used in the Google call:
ServiceAccountCredential credential = new ServiceAccountCredential(
    new ServiceAccountCredential.Initializer(GoogleOAuth2EmailAddress)
    {
        User = GoogleAccount,
        Scopes = new[] { CalendarService.Scope.Calendar }
    }.FromCertificate(certificate));
... but I never get that far.
Question: is there a way to make the call differently, i.e. not use an X509 certificate but JSON instead?
Or can I get the X509 function to use a general temp location rather than a per-user location to which I have no access, since I can't change the identity in the app pool?
Since I'm completely stuck, any help would be appreciated.
One simple option which avoids needing to worry about file locations is to embed the certificate within your assembly. In Visual Studio, right-click the file and show its properties; under Build Action, pick "Embedded Resource".
You should then be able to load the data with something like this:
// In a helper class somewhere...
private static byte[] LoadResourceContent(Type type, string resourceName)
{
    string fullName = type.Namespace + "." + resourceName;
    using (var stream = type.Assembly.GetManifestResourceStream(fullName))
    {
        var output = new MemoryStream();
        stream.CopyTo(output);
        return output.ToArray();
    }
}
Then:
byte[] data = ResourceHelper.LoadResourceContent(typeof(MyType), "Certificate.p12");
var certificate = new X509Certificate2(data, "notasecret", X509KeyStorageFlags.Exportable);
Here MyType is some type which is in the same folder as your resource.
Note that there are lots of different "web" project types in .NET; depending on the exact project type you're using, you may need to tweak this.
