How to pass web proxy address to Microsoft.WindowsAzure.Storage.OperationContext.UserHeaders?

I am writing some C# code that uses the Azure Resource Manager APIs and my CloudBlobClient needs to use a web proxy. According to the documentation for OperationContext.UserHeaders property at https://msdn.microsoft.com/en-us/library/microsoft.windowsazure.storage.operationcontext.userheaders.aspx, UserHeaders can be used to specify a proxy. Can you please share how this should be done properly?
Edited after Gaurav Mantri's comment.
The Azure clients below allow you to specify a proxy via the httpClientHandler, but the CloudBlobClient does not respect the proxy information from the StorageManagementClient, and there doesn't seem to be a way to pass proxy information to the CloudBlobClient directly. Our users may want to specify different proxies for different connections, and it doesn't seem the current architecture will easily allow this.
//Example code that instantiates clients with proxy information inside the httpClientHandler
//(the proxy address below is a placeholder)
var httpClientHandler = new HttpClientHandler { Proxy = new WebProxy("http://your-proxy:8080"), UseProxy = true };
armCompute = new ComputeManagementClient(tokenCredentials, httpClientHandler);
armStorage = new StorageManagementClient(tokenCredentials, httpClientHandler);
armNetwork = new NetworkManagementClient(tokenCredentials, httpClientHandler);
armResource = new ResourceManagementClient(tokenCredentials, httpClientHandler);
armSubscription = new SubscriptionClient(tokenCredentials, httpClientHandler);

I believe you're misunderstanding it. The documentation states:
Gets or sets additional headers on the request, for example, for proxy
or logging information.
From what I understand, you use this to get or set headers that your proxy will understand, not to specify proxy configuration settings.
To specify proxy settings, you would need to do so in your application configuration file (web.config or app.config).
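For example, a minimal defaultProxy section in web.config or app.config might look like the sketch below (the proxy address is a placeholder):
<configuration>
  <system.net>
    <defaultProxy>
      <proxy usesystemdefault="false" proxyaddress="http://your-proxy:8080" bypassonlocal="true" />
    </defaultProxy>
  </system.net>
</configuration>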

Related

Injecting agent properties into HTTP headers

I have an SSO setup using OpenAM 13.5 protecting an application on IIS with an IIS Web Agent.
The application receives user/session attributes by mapping the appropriate properties in the Agent configuration - everything is working fine. However, I'd like to take things a step further: I'd like to pass the application a few agent properties as HTTP headers, e.g.:
CUSTOM-LOGIN-URL = com.sun.identity.agents.config.login.url
CUSTOM-EDITPASSWORD-URL = (set by a custom agent property)
CUSTOM-EDITPROFILE-URL = (set by a custom agent property)
CUSTOM-LOGOUT-URL = com.sun.identity.agents.config.logout.url
CUSTOM-GOTO-PARAMETER-NAME = com.sun.identity.agents.config.redirect.param
This way I could avoid hardwiring the application to the specific SSO config details.
Do you have any idea on how I could achieve that, possibly without writing code?
That's not possible OOTB. It might be possible by implementing https://backstage.forgerock.com/docs/openam/13.5/apidocs/com/sun/identity/entitlement/ResourceAttribute.html
Please see https://backstage.forgerock.com/docs/openam/13.5/dev-guide/#sec-policy-spi

Keep configurations for DietJS server

Rather than having to change the URL passed to the diet.listen() method on every server that I deploy my application on, there should be a better way to maintain such parameters in the application.
What options do we have to be able to manage such parameters?
You can create a '.json' file at the root of the application and then require it. For example:
var configuration = require('./config.json');
The example expects you to save a file named 'config.json' containing your configuration as JSON. The configuration object will hold all the settings that you might want to make dynamic and read at runtime.
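For instance, a config.json such as { "listen": "http://localhost:8000" } (the key name and URL are illustrative) could then be used like this, assuming the diet app object exposes listen() as in the question:
var server = require('diet');
var config = require('./config.json');
var app = server();
// the URL now comes from config.json instead of being hardcoded per deployment
app.listen(config.listen);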

Azure Elasticsearch and NEST can't add index

In Azure I've created an Elasticsearch and Kibana cluster based on the (VM) template. In my unit test I use the Elasticsearch NEST NuGet package to access my Azure Elasticsearch. A ping just works fine:
var node = new Uri("http://x:5601");
var settings = new ConnectionSettings(node);
var client = new ElasticClient(settings);
var response = client.Ping(new PingRequest());
Assert.IsTrue(response.IsValid);
But when I try to add an index I always get the error "Request must contain an kbn-xsrf header"
I have tried many things and read as many examples as I could, but with no success. Things I would like to know:
Which NuGet version should be used with the created VM? I figured out the Azure environment runs ES 1, so I should use the NuGet package 1.82.
How should I authenticate in my code? I've found SetBasicAuthentication, but this still doesn't seem to work any better.
How to set or work with kbn-xsrf?
Btw, my index-creating unit test looks like this:
var node = new Uri("http://x:5601");
var settings = new ConnectionSettings(node);
settings.SetBasicAuthentication("x", "x");
var client = new ElasticClient(settings);
var response = client.CreateIndex("hotelindex");
Assert.IsTrue(response.IsValid);
The Elastic ARM template deploys with either an internal load balancer or an external load balancer (the external option also deploys an internal load balancer, for the reason below).
Kibana communicates with the cluster through the internal load balancer, and looking at your Uri it looks like you're sending the request to the Kibana endpoint. If you need to access the cluster through the REST API (directly or through a client), you need to deploy an external load balancer as well.
Please note that the template does not configure SSL/TLS for the Kibana or external load balancer public IP addresses, so all communication is unencrypted. This is something that you will need to configure yourself.
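Once an external load balancer is in place, the index creation from the question would target the cluster endpoint rather than Kibana's port 5601. A sketch, assuming the load balancer exposes the usual Elasticsearch port 9200 and using placeholder host/credentials:
var node = new Uri("http://your-external-lb:9200"); // cluster endpoint, not the Kibana endpoint
var settings = new ConnectionSettings(node);
settings.SetBasicAuthentication("es_admin", "password"); // placeholder credentials
var client = new ElasticClient(settings);
var response = client.CreateIndex("hotelindex");
Assert.IsTrue(response.IsValid);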

Microsoft Unity - How to register connectionstring as a parameter to repository constructor when it can vary by client?

I am relatively new to IoC containers so I apologize in advance for my ignorance.
My application is an ASP.NET 4.0 MVC app that uses the Entity Framework with a Repository layer on top of that. It is a multi-tenant application, so the connection string that is used varies by the logged-in client.
The connection string is determined by a 'key' that gets passed in as part of the route which indicates the client. This route data is only present on the first request of the user's session.
The route looks kind of like this: http://{host}/login/dev/
where 'dev' indicates we are using the dev database.
Currently the IoC container is registering all dependencies in the global.asax Application_Start event handler and I have the 'key' hardcoded as follows:
var cnString = CommonServices.GetDBConnection("dev");
container.RegisterType<IRequestMgmtRecipientRepository, RequestMgmtRecipientRepository>(
new InjectionConstructor(cnString));
Is there a way with Unity to dynamically register the repository based on the logged in client using the route data that is supplied initially?
Note: I am not manually resolving the repositories. They are getting constructed by the container when the controllers get instantiated.
I am stumped.
Thanks!
Quick assumption: you can use the host to identify your tenant.
The following article has a slightly different approach: http://www.agileatwork.com/bolt-on-multi-tenancy-in-asp-net-mvc-with-unity-and-nhibernate-part-ii-commingled-data/. It's using NHibernate, but the idea is usable.
Based on the above, this hacked-together code may work (not tried/compiled the following; not much of a Unity user, more of a Windsor person :) ):
container.RegisterType<IRequestMgmtRecipientRepository, RequestMgmtRecipientRepository>(new InjectionFactory(c =>
{
    // alternatively, the host is available via the static HttpContext.Current.Request.Url.Host, if I remember correctly
    var context = c.Resolve<HttpContextBase>();
    var host = context.Request.Headers["Host"] ?? context.Request.Url.Host;
    var connStr = CommonServices.GetDBConnection("dev_" + host); // assumed key format
    return new RequestMgmtRecipientRepository(connStr);
}));
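For c.Resolve<HttpContextBase>() to work, HttpContextBase itself needs to be registered; a minimal sketch, assuming the standard HttpContextWrapper from System.Web:
container.RegisterType<HttpContextBase>(
    new InjectionFactory(_ => new HttpContextWrapper(HttpContext.Current))); // wraps the current request's context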
Scenario 2 (I do not think this was the case):
If the client identifies the tenant (not the host, i.e. http://host1), this suggests you would already need access to a database to look up the client information; in that case the database which holds the client information will also need enough information to identify the tenant.
The issue with scenario 2 will arise around anonymous users: which tenant is being accessed?
Assuming scenario 2, the InjectionFactory should still work.
Hope this helps.

Do I need to replace localhost in the IIS://localhost/MimeMap when reading the Mimemap

I'm reading out the MIME types from IIS's MimeMap using the following code:
_mimeTypes = new Dictionary<string, string>();
//load from iis store.
DirectoryEntry Path = new DirectoryEntry("IIS://localhost/MimeMap");
PropertyValueCollection PropValues = Path.Properties["MimeMap"];
IISOle.MimeMap MimeTypeObj;
foreach (var item in PropValues)
{
// IISOle -> Add reference to Active DS IIS Namespace provider
MimeTypeObj = (IISOle.MimeMap)item;
_mimeTypes.Add(MimeTypeObj.Extension, MimeTypeObj.MimeType);
}
Do I need to replace the localhost part when I deploy it to my live server? If not, why not, and what are the implications of not doing so?
Cheers
It should not be an issue to leave the host as 'localhost'.
After all, you want to get the MimeMap of the machine your app is running on, correct?
One possible complication I can foresee is if you are using a third party as a host. They can do anything they want with host headers, and it may be that localhost is not available for whatever reason.
But you should simply give it a shot and adjust if necessary.
If you leave it as 'localhost', you will have to run this code directly on the server.
If you change it to fetch the machine name instead, you could also consider running it remotely.
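A minimal sketch of that change (assuming the account running the code has ADSI access to the target server; the server name in the comment is a placeholder):
// use the machine name (or a remote server name) instead of the literal "localhost"
string host = Environment.MachineName; // e.g. "WEBSRV01" when targeting that box remotely
DirectoryEntry entry = new DirectoryEntry("IIS://" + host + "/MimeMap");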
