Getting Azure InstanceInput endpoint port

I want my client to communicate with a specific WorkerRole instance, so I'm trying to use InstanceInput endpoints.
My project is based on the example provided in this question: Azure InstanceInput endpoint usage
The problem is that I don't get the external IP address and port for the actual instance when using RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["Endpoint1"].IPEndpoint;
I just get the internal address with the local port (e.g. 10.x.x.x:10100). I know that I can get the public IP address via DNS lookup (xxx.cloudapp.net), but I don't have a clue how to get the correct public port for each instance.
One possible solution would be: get the instance number (from RoleEnvironment.CurrentRoleInstance.Id) and add it to the FixedPortRange minimum (e.g. 10106). This would imply that the first instance always gets port 10106, the second instance always 10107, and so on. This solution seems a bit hacky to me, since I don't know how Windows Azure assigns instances to ports.
Is there a better (correct) way to retrieve the public port for each instance?
Question #2:
Is there any information about the Azure Compute Emulator supporting InstanceInput endpoints? (As I already mentioned in the comments: it seems that the Azure Compute Emulator currently doesn't support InstanceInputEndpoint.)
Second solution (much better):
To get the public port, the property PublicIPEndpoint can be used (I don't know why I didn't notice this property in the first place).
Usage: RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["Endpoint1"].PublicIPEndpoint;
Warning:
The IP address in the property is unused (http://msdn.microsoft.com/en-us/library/microsoft.windowsazure.serviceruntime.roleinstanceendpoint.publicipendpoint.aspx).
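So a client-reachable address can be built by combining the cloud service DNS name with the public port from this property. A minimal sketch, assuming 'yourservice.cloudapp.net' as a placeholder host name:
// Sketch: requires Microsoft.WindowsAzure.ServiceRuntime.
// The host name below is a placeholder; only the Port of PublicIPEndpoint is meaningful.
var endpoint = RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["Endpoint1"];
int publicPort = endpoint.PublicIPEndpoint.Port;
string publicAddress = string.Format("yourservice.cloudapp.net:{0}", publicPort);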
First solution:
As 'artfulmethod' already mentioned, the REST operation Get Deployment retrieves interesting information about the current deployment. Since I encountered some small annoying 'issues', I'll provide the code for the REST client here (in case someone else is having a similar problem):
// Requires: System.IO, System.Net, System.Security.Cryptography.X509Certificates
X509Store certificateStore = new X509Store(StoreName.My, StoreLocation.CurrentUser);
certificateStore.Open(OpenFlags.ReadOnly);
string thumbprint = "xxx"; // enter the thumbprint of the certificate you use to upload the deployment (aka Management Certificate)
X509Certificate2Collection certs =
    certificateStore.Certificates.Find(X509FindType.FindByThumbprint, thumbprint, false);
if (certs.Count != 1)
{
    // client certificate cannot be found - check the thumbprint
}
string url = "https://management.core.windows.net/<subscription-id>/services/hostedservices/<service-name>/deployments/<deployment-name>"; // replace <xxx> with actual values
try
{
    var request = (HttpWebRequest)WebRequest.Create(url);
    request.ClientCertificates.Add(certs[0]);
    request.Headers.Add("x-ms-version", "2012-03-01"); // very important, otherwise you get an HTTP 400 error; specifies the version in which the response is formatted
    request.Method = "GET";
    var response = (HttpWebResponse)request.GetResponse();                      // get response
    string result = new StreamReader(response.GetResponseStream()).ReadToEnd(); // get response body
}
catch (Exception ex)
{
    // handle error
}
The string 'result' contains all the information about the deployment (the format of the XML is described in the 'Response Body' section at http://msdn.microsoft.com/en-us/library/windowsazure/ee460804.aspx).

To get information about your deployments, including the VIPs and public ports for your role instances, use the Get Deployment operation on the Service Management API. The response body includes an InstanceInputList.
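As a rough illustration of pulling the public ports out of that response, here is a sketch using LINQ to XML. The element names (RoleInstance, InstanceEndpoint, Vip, PublicPort) are taken from the 'Response Body' schema linked above and may vary with the x-ms-version you request:
// Sketch: requires System.Xml.Linq; 'result' is the response body from the snippet above.
XNamespace ns = "http://schemas.microsoft.com/windowsazure";
var doc = XDocument.Parse(result);
foreach (var instance in doc.Descendants(ns + "RoleInstance"))
{
    var instanceName = (string)instance.Element(ns + "InstanceName");
    foreach (var ep in instance.Descendants(ns + "InstanceEndpoint"))
    {
        Console.WriteLine("{0}: {1} -> {2}:{3}",
            instanceName,
            (string)ep.Element(ns + "Name"),
            (string)ep.Element(ns + "Vip"),         // public IP of the deployment
            (string)ep.Element(ns + "PublicPort")); // public port for this instance
    }
}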

Related

Configuring Azure ServiceFabric for development lifecycle - how to parameterize host name?

What's a good way to manage deploying code changes to Dev, Test, and Prod environments in Azure? The Azure / Service Fabric site provides an article on specifying port numbers using parameters under How-to guides - Manage application lifecycle (https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-how-to-specify-port-number-using-parameters), but I'm not sure how one manages host names. Is there a host-name-related property that can be included in a Publish Profile .xml file (e.g., Cloud.xml)?
Background: I'm migrating from a self-hosted, on-premises WCF application running as a Windows Service and using WebHttpBinding with http and https endpoints (it uses T4 config file templates to determine the host name and port number depending on the environment). I'm migrating this to an Azure Service Fabric WcfCommunicationListener application (similar to the sample found here: https://github.com/loekd/ServiceFabric.WcfCalc)...
internal sealed class ServiceFabricWcfService : StatelessService
{
    public ServiceFabricWcfService(StatelessServiceContext context) : base(context)
    {
    }

    protected override IEnumerable<ServiceInstanceListener> CreateServiceInstanceListeners()
    {
        yield return new ServiceInstanceListener(CreateRestListener);
    }

    private ICommunicationListener CreateRestListener(StatelessServiceContext context)
    {
        var host = context.NodeContext.IPAddressOrFQDN;
        var endpointConfig = context.CodePackageActivationContext.GetEndpoint("ServiceEndpoint");
        var port = endpointConfig.Port;
        var scheme = endpointConfig.Protocol.ToString();
        var uri = string.Format(CultureInfo.InvariantCulture, "{0}://{1}:{2}/webhost/", scheme, host, port);

        var listener = new WcfCommunicationListener<IJsonService>(context, new JsonServicePerCall(),
            new WebHttpBinding(WebHttpSecurityMode.None), new EndpointAddress(uri));

        // Attach REST (WebHttp) behavior to the endpoint the listener created
        var ep = listener.ServiceHost.Description.Endpoints.Last();
        ep.Behaviors.Add(new WebHttpBehavior());
        return listener;
    }
}
As you can see, the host name is obtained from the StatelessServiceContext's NodeContext - is there a good way to set this up to target different host names for each environment? My clients need to be able to make http/https calls based on host name to determine which environment they connect to. Thanks!
I don't think you can do that, since in the provided example the host variable represents the exact node the service is running on. You can reach it using the cluster name if you open the appropriate port, e.g. http://mycluster.eastus.cloudapp.azure.com:19081/MyApp/MyService
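If the goal is just to steer clients at different environments, one option is to select the cluster DNS name on the client side. A minimal sketch; the cluster names are placeholders and 19081 is the default reverse proxy port:
// Sketch: choose the target cluster by environment (names are placeholders).
string environment = "Prod"; // e.g. from client configuration
string clusterHost = environment == "Prod"
    ? "mycluster-prod.eastus.cloudapp.azure.com"
    : "mycluster-dev.eastus.cloudapp.azure.com";
string serviceUri = string.Format("http://{0}:19081/MyApp/MyService", clusterHost);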

Azure Blob Storage to host images / media - fetching with blob URL (without intermediary controller)

In this article, the author provides a way to upload via a WebAPI controller. This makes sense to me.
He then recommends using an API Controller and a dedicated service method to deliver the blob:
public async Task<HttpResponseMessage> GetBlobDownload(int blobId)
{
    // IMPORTANT: This must return HttpResponseMessage instead of IHttpActionResult
    try
    {
        var result = await _service.DownloadBlob(blobId);
        if (result == null)
        {
            return new HttpResponseMessage(HttpStatusCode.NotFound);
        }

        // Reset the stream position; otherwise, download will not work
        result.BlobStream.Position = 0;

        // Create response message with blob stream as its content
        var message = new HttpResponseMessage(HttpStatusCode.OK)
        {
            Content = new StreamContent(result.BlobStream)
        };

        // Set content headers
        message.Content.Headers.ContentLength = result.BlobLength;
        message.Content.Headers.ContentType = new MediaTypeHeaderValue(result.BlobContentType);
        message.Content.Headers.ContentDisposition = new ContentDispositionHeaderValue("attachment")
        {
            FileName = HttpUtility.UrlDecode(result.BlobFileName),
            Size = result.BlobLength
        };

        return message;
    }
    catch (Exception ex)
    {
        return new HttpResponseMessage
        {
            StatusCode = HttpStatusCode.InternalServerError,
            Content = new StringContent(ex.Message)
        };
    }
}
My question is - why can't we just reference the blob URL directly after storing it in the database (instead of fetching via Blob ID)?
What's the benefit of fetching through a controller like this?
You can certainly deliver a blob directly, which then avoids using resources of your app tier (vm, app service, etc). Just note that, if blobs are private, you'd have to provide a special signed URI to the client app (e.g. adding a shared access signature) to allow this URI to be used publicly (for a temporary period of time). You'd generate the SAS within your app tier.
You'd still have all of your access control logic in your controller, to decide who has the rights to the object, for how long, etc. But you'd no longer need to stream the content through your app (consuming cpu, memory, & network resources). And you'd still be able to use https with direct storage access.
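For reference, generating such a SAS URL might look roughly like this with the classic Microsoft.WindowsAzure.Storage SDK (a sketch; the connection string, container, and blob names are placeholders):
// Sketch: build a short-lived read-only SAS URL for a private blob.
string connectionString = "DefaultEndpointsProtocol=https;AccountName=...;AccountKey=...";
var account = CloudStorageAccount.Parse(connectionString);
var blob = account.CreateCloudBlobClient()
    .GetContainerReference("media")             // placeholder container
    .GetBlockBlobReference("images/photo.jpg"); // placeholder blob

var policy = new SharedAccessBlobPolicy
{
    Permissions = SharedAccessBlobPermissions.Read,
    SharedAccessExpiryTime = DateTimeOffset.UtcNow.AddMinutes(15) // temporary access window
};

// GetSharedAccessSignature returns the query string; append it to the blob URI
// and hand the resulting URL to the client.
string sasUrl = blob.Uri + blob.GetSharedAccessSignature(policy);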
Quite simply, you can enforce access control centrally when you use a controller. You have way more control over who/what/why is accessing the file. You can also log requests pretty easily too.
Longer term, you might want to change the locations of your files, add a partitioning strategy for scalability, or do something else in your app that requires a change that you don't see right now. When you use a controller you can isolate the client code from all of those inevitable changes.

Service Fabric reverse proxy port configurability

I'm trying to write an encapsulation to get the URI for a local reverse proxy for Service Fabric, and I'm having a hard time deciding how I want to approach configurability for the port (known as "HttpApplicationGatewayEndpoint" in the service manifest or "reverseProxyEndpointPort" in the ARM template). The best way I've thought of would be to call "GetClusterManifestAsync" from the fabric client and parse it from there, but I'm also not a fan of that for a few reasons. For one, the call returns a string XML blob, which isn't guarded against changes to the manifest schema. I've also not yet found a way to query the cluster manager to find out which node type I'm currently on, so if for some silly reason the cluster has multiple node types and each one has a different reverse proxy port (just being a defensive coder here), that could potentially fail. It seems like an awful lot of effort to go through to dynamically discover that port number, and I've definitely missed things in the fabric API before, so any suggestions on how to approach this issue?
Edit:
I'm seeing from the example project that it's getting the port number from a config package in the service. I would rather not do it that way, as then I'm going to have to write a ton of boilerplate for every service that needs this, just to read configs and pass them around. Since this is more or less a constant at runtime, it seems to me that it could be treated as such and fetched somewhere from the fabric client?
After some time spent in the object browser I was able to find the various pieces I needed to make this work properly.
// Note: AsyncLazy<T> is not part of the BCL; it is assumed here to come from a
// library such as Nito.AsyncEx (or any equivalent awaitable-lazy implementation).
// Requires: System.Fabric, System.Xml.Serialization, System.IO, System.Linq.
public class ReverseProxyPortResolver
{
    /// <summary>
    /// Represents the port that the current fabric node is configured
    /// to use when using a reverse proxy on localhost
    /// </summary>
    public static AsyncLazy<int> ReverseProxyPort = new AsyncLazy<int>(async () =>
    {
        // Get the cluster manifest from the fabric client & deserialize it into a hardened object
        ClusterManifestType deserializedManifest;
        using (var cl = new FabricClient())
        {
            var manifestStr = await cl.ClusterManager.GetClusterManifestAsync().ConfigureAwait(false);
            var serializer = new XmlSerializer(typeof(ClusterManifestType));
            using (var reader = new StringReader(manifestStr))
            {
                deserializedManifest = (ClusterManifestType)serializer.Deserialize(reader);
            }
        }

        // Fetch the setting from the correct node type
        var nodeType = GetNodeType();
        var nodeTypeSettings = deserializedManifest.NodeTypes.Single(x => x.Name.Equals(nodeType));
        return int.Parse(nodeTypeSettings.Endpoints.HttpApplicationGatewayEndpoint.Port);
    });

    private static string GetNodeType()
    {
        try
        {
            return FabricRuntime.GetNodeContext().NodeType;
        }
        catch (FabricConnectionDeniedException)
        {
            // this code was invoked from a non-fabric started application,
            // likely a unit test
            return "NodeType0";
        }
    }
}
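Usage then reduces to awaiting the lazy value (assuming an awaitable AsyncLazy implementation as noted above):
// Usage sketch: resolved once on first await, cached for subsequent awaits.
int port = await ReverseProxyPortResolver.ReverseProxyPort;
var uri = string.Format("http://localhost:{0}/MyApp/MyService/api/values", port);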
News to me in this investigation was that all of the schemas for any of the Service Fabric XML are squirreled away in an assembly named System.Fabric.Management.ServiceModel.

How can I sign a JWT token on an Azure WebJob without getting a CryptographicException?

I have a WebJob that needs to create a JWT token to talk with an external service. The following code works when I run the WebJob on my local machine:
// Note: Base64UrlEncoder comes from the JWT support libraries
// (e.g. the System.IdentityModel.Tokens.Jwt package); JsonConvert is from Json.NET.
public static string SignES256(byte[] p8Certificate, object header, object payload)
{
    var headerString = JsonConvert.SerializeObject(header);
    var payloadString = JsonConvert.SerializeObject(payload);

    // Import the PKCS#8 private key; this is the call that fails on Azure (see below)
    CngKey key = CngKey.Import(p8Certificate, CngKeyBlobFormat.Pkcs8PrivateBlob);
    using (ECDsaCng dsa = new ECDsaCng(key))
    {
        dsa.HashAlgorithm = CngAlgorithm.Sha256;
        var unsignedJwtData = Base64UrlEncoder.Encode(Encoding.UTF8.GetBytes(headerString)) + "." + Base64UrlEncoder.Encode(Encoding.UTF8.GetBytes(payloadString));
        var signature = dsa.SignData(Encoding.UTF8.GetBytes(unsignedJwtData));
        return unsignedJwtData + "." + Base64UrlEncoder.Encode(signature);
    }
}
However, when I deploy my WebJob to Azure, I get the following exception:
Microsoft.Azure.WebJobs.Host.FunctionInvocationException: Exception while executing function: NotificationFunctions.QueueOperation ---> System.Security.Cryptography.CryptographicException: The system cannot find the file specified. at System.Security.Cryptography.NCryptNative.ImportKey(SafeNCryptProviderHandle provider, Byte[] keyBlob, String format) at System.Security.Cryptography.CngKey.Import(Byte[] keyBlob, CngKeyBlobFormat format, CngProvider provider)
It says it can't find a specified file, but the parameters I am passing in are not file locations; they are in memory. From what I have gathered, there may be some kind of cryptography setting I need to enable to be able to use the CngKey.Import method, but I can't find any settings related to this in the Azure portal.
I have also tried using JwtSecurityTokenHandler, but it doesn't seem to handle the ES256 hashing algorithm I need to use (even though it is referenced in the JwtAlgorithms class as ECDSA_SHA256).
Any suggestions would be appreciated!
UPDATE
It appears that CngKey.Import may actually be trying to store the certificate somewhere that is not accessible on Azure. I don't need it stored, so if there is a better way to access the certificate in memory or convert it to a different kind of certificate that would be easier to use that would work.
UPDATE 2
This issue might be related to Azure Web Apps IIS setting not loading the user profile as mentioned here. I have enabled this by setting WEBSITE_LOAD_USER_PROFILE = 1 in the Azure portal app settings. I have tried with this update when running the code both via the WebJob and the Web App in Azure but I still receive the same error.
I used a decompiler to take a look under the hood at what the CngKey.Import method was actually doing. It looks like it tries to insert the certificate I am using into the "Microsoft Software Key Storage Provider". I don't actually need this; I just need to read the value of the certificate, but it doesn't look like that is possible.
Once I realized a certificate was getting inserted into a store somewhere on the machine, I started thinking about how bad of a thing that would be from a security standpoint if your Azure Web App was running in a shared environment, like it does for the Free and Shared tiers. Sure enough, my VM was on the Shared tier. Scaling it up to the Basic tier resolved this issue.

Microsoft Unity - How to register connectionstring as a parameter to repository constructor when it can vary by client?

I am relatively new to IoC containers so I apologize in advance for my ignorance.
My application is an ASP.NET 4.0 MVC app that uses the Entity Framework with a Repository layer on top of that. It is a multi-tenant application, so the connection string that is used varies by the logged-in client.
The connection string is determined by a 'key' that gets passed in as part of the route which indicates the client. This route data is only present on the first request of the user's session.
The route looks kind of like this: http://{host}/login/dev/
where 'dev' indicates we are using the dev database.
Currently the IoC container is registering all dependencies in the global.asax Application_Start event handler and I have the 'key' hardcoded as follows:
var cnString = CommonServices.GetDBConnection("dev");
container.RegisterType<IRequestMgmtRecipientRepository, RequestMgmtRecipientRepository>(
    new InjectionConstructor(cnString));
Is there a way with Unity to dynamically register the repository based on the logged in client using the route data that is supplied initially?
Note: I am not manually resolving the repositories. They are getting constructed by the container when the controllers get instantiated.
I am stumped.
Thanks!
Quick assumption: you can use the host to identify your tenant.
The following article has a slightly different approach: http://www.agileatwork.com/bolt-on-multi-tenancy-in-asp-net-mvc-with-unity-and-nhibernate-part-ii-commingled-data/. It's using NHibernate, but it is usable.
Based on the above, this hacked-together code may work (not tried/compiled the following; not much of a Unity user, more of a Windsor person :) )
Container.RegisterType<IRequestMgmtRecipientRepository, RequestMgmtRecipientRepository>(new InjectionFactory(c =>
{
    // the following you can get via a static class:
    // HttpContext.Current.Request.Url.Host, if I remember correctly
    var context = c.Resolve<HttpContextBase>();
    var host = context.Request.Headers["Host"] ?? context.Request.Url.Host;
    var connStr = CommonServices.GetDBConnection("dev_" + host); // assumed
    return new RequestMgmtRecipientRepository(connStr);
}));
Scenario 2 (I do not think this was the case):
If the client identifies the tenant (not the host, i.e. http://host1), this suggests you would already need access to a database to look up the client information. In this case the database which holds the client information will also need to have enough information to identify the tenant.
The issue with scenario 2 arises around anonymous users: which tenant is being accessed?
Assuming scenario 2, the InjectionFactory should still work.
Hope this helps
