Azure Elasticsearch and NEST can't add index

In Azure I've created an Elasticsearch and Kibana cluster based on the (VM) template. In my unit test I use the Elasticsearch NEST NuGet package to access my Azure Elasticsearch. A ping works fine:
var node = new Uri("http://x:5601");
var settings = new ConnectionSettings(node);
var client = new ElasticClient(settings);
var response = client.Ping(new PingRequest());
Assert.IsTrue(response.IsValid);
But when I try to add an index, I always get the error "Request must contain a kbn-xsrf header".
I have tried many things and read as many examples as I could find, but with no success. Things I would like to know:
Which NuGet package version should be used with the created VM? I figured out the Azure environment runs ES 1, so I should use NuGet package 1.82.
How should I authenticate in my code? I've found SetBasicAuthentication, but it doesn't seem to work any better.
How to set or work with the kbn-xsrf header?
By the way, my index-creating unit test looks like this:
var node = new Uri("http://x:5601");
var settings = new ConnectionSettings(node);
settings.SetBasicAuthentication("x", "x");
var client = new ElasticClient(settings);
var response = client.CreateIndex("hotelindex");
Assert.IsTrue(response.IsValid);

The Elastic ARM template deploys with either an internal load balancer or an external load balancer (which also deploys an internal load balancer, for the reason below).
Kibana communicates with the cluster through the internal load balancer, and looking at your Uri, it looks like you're sending the request to the Kibana endpoint. If you need to access the cluster through the REST API (directly or through a client), you need to deploy an external load balancer as well.
Please note that the template does not configure SSL/TLS for the Kibana or external load balancer public IP addresses, so all communication is unencrypted. This is something that you will need to configure yourself.
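In other words, point your client at the Elasticsearch REST endpoint exposed by the external load balancer (port 9200 by default), not at Kibana on port 5601. A minimal sketch, assuming NEST 1.x and a hypothetical load balancer host name:
// Target the Elasticsearch endpoint behind the external load balancer
// (port 9200 by default), not the Kibana endpoint on port 5601.
// "es-cluster.westeurope.cloudapp.azure.com" is a hypothetical host name.
var node = new Uri("http://es-cluster.westeurope.cloudapp.azure.com:9200");
var settings = new ConnectionSettings(node);
settings.SetBasicAuthentication("username", "password");
var client = new ElasticClient(settings);
var response = client.CreateIndex("hotelindex");
Assert.IsTrue(response.IsValid);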

Related

Blazor WASM Azure Static Web App, Functions not working

I created a simple Blazor WASM web app using C# and .NET 5. It connects to some Functions which in turn get some data from a SQL Server database.
I followed the tutorial of BlazorTrain: https://www.youtube.com/watch?v=5QctDo9MWps
Locally, using Azurite to emulate the Azure stuff, it all works fine.
But after deployment using a GitHub Action, the web app starts, but when it needs to get some data through the Functions, that fails. Running the Function in Postman results in a 503: Function host is not running.
I'm not sure what more I need to configure. I can't find the logging from the Functions. I use the injected ILog, but can't find the log messages in the Azure Portal.
In Azure portal I see my 3 GET functions, but no option to test or see the logging.
With the help of @Aravid I found my problem.
Because I locally needed to tell my client the URL of the API I added a configuration in Client\wwwroot\appsettings.Development.json.
Of course this file doesn't get deployed.
After changing my code in Program.cs to:
// Fall back to the app's own /api/ endpoint when no ApiAddress is configured
var apiAddress = builder.Configuration["ApiAddress"] ?? $"{builder.HostEnvironment.BaseAddress}/api/";
builder.Services.AddHttpClient("Api", options =>
{
    options.BaseAddress = new Uri(apiAddress);
});
My client works again.
I also added my SQL Server connection string in the Application Settings of my Static Web App, and the Functions are working as well.
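For reference, a minimal sketch of how a Function can pick up that connection string, since Application Settings are surfaced to the Functions host as environment variables (the setting name "SqlConnectionString" is hypothetical):
using System;
using Microsoft.Data.SqlClient;

public static class Db
{
    // Reads a hypothetical "SqlConnectionString" Application Setting;
    // on a Static Web App these settings arrive as environment variables.
    public static SqlConnection Open()
    {
        var connectionString = Environment.GetEnvironmentVariable("SqlConnectionString");
        var connection = new SqlConnection(connectionString);
        connection.Open();
        return connection;
    }
}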
I hope somebody else will benefit from this. Took me several hours to figure it out ;)

GCloud Vision API Permission Denied on Second Request

I've gone through all the setup steps to make calls to the Google Vision API from a Node.js App. Link to the guide: https://cloud.google.com/vision/docs/libraries#setting_up_authentication
I'm using the ImageAnnotatorClient from the @google-cloud/vision package to make some text detections.
At first, it looked like everything was set up correctly, but I don't know why it only allows me to do one request.
Further requests will give me the following error:
Error: 7 PERMISSION_DENIED: Your application has authenticated using end user credentials from the Google Cloud SDK or Google Cloud Shell which are not supported by the vision.googleapis.com. We recommend configuring the billing/quota_project setting in gcloud or using a service account through the auth/impersonate_service_account setting. For more information about service accounts and how to use them in your application, see https://cloud.google.com/docs/authentication/
If I restart the Node app it again allows me to do one request to the Vision API but then the subsequent requests keep failing.
Here's my code which is almost the same as in the examples:
const vision = require('@google-cloud/vision');

// Creates a client
const client = new vision.ImageAnnotatorClient();

const detectText = async (imgPath) => {
  const [result] = await client.textDetection(imgPath);
  const detections = result.textAnnotations;
  return detections;
};
It is worth mentioning that this works every time when I run the Node app on my local machine. The problem happens on my Ubuntu Droplet from DigitalOcean.
Again, I set everything up as it is in the guides. Created a Service Account, downloaded the Service Account Key JSON file, set up the environment variable like this:
export GOOGLE_APPLICATION_CREDENTIALS="PATH-TO-JSON-FILE"
I'm also setting the environment variable in the .bashrc file.
What could I be missing? Before setting everything up from scratch and going through the whole process again, I thought it would be good to ask for some help.
So I found the problem. In my case, it was a problem with PM2 not passing the system env variables to the Node app.
So I had everything set up correctly auth-wise but the Node app wasn't seeing the GOOGLE_APPLICATION_CREDENTIALS env var.
I deleted the PM2 process, created a new one, and now it works.

Configuring Azure ServiceFabric for development lifecycle - how to parameterize host name?

What's a good way to manage deploying code changes to Dev, Test, and Prod environments in Azure? The Azure / Service Fabric site provides an article for specifying port numbers using parameters under How-to guides - Manage application lifecycle (https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-how-to-specify-port-number-using-parameters), but I'm not sure how one manages host names - is there a host-name-related property that can be included in a Publish Profile .xml file (e.g., Cloud.xml)?
Background: I'm migrating from a self-hosted, on-premises WCF application running as a Windows Service and using WebHttpBinding with http and https endpoints (it uses T4 config file templates to determine the host name and port number depending on the environment). I'm migrating this to an Azure Service Fabric WcfCommunicationListener application (similar to the sample found here: https://github.com/loekd/ServiceFabric.WcfCalc):
internal sealed class ServiceFabricWcfService : StatelessService
{
    public ServiceFabricWcfService(StatelessServiceContext context) : base(context)
    {
    }

    protected override IEnumerable<ServiceInstanceListener> CreateServiceInstanceListeners()
    {
        yield return new ServiceInstanceListener(CreateRestListener);
    }

    private ICommunicationListener CreateRestListener(StatelessServiceContext context)
    {
        // The host name is whatever node the service instance lands on
        var host = context.NodeContext.IPAddressOrFQDN;
        var endpointConfig = context.CodePackageActivationContext.GetEndpoint("ServiceEndpoint");
        var port = endpointConfig.Port;
        var scheme = endpointConfig.Protocol.ToString();
        var uri = string.Format(CultureInfo.InvariantCulture, "{0}://{1}:{2}/webhost/", scheme, host, port);

        var listener = new WcfCommunicationListener<IJsonService>(context, new JsonServicePerCall(), new WebHttpBinding(WebHttpSecurityMode.None), new EndpointAddress(uri));
        var ep = listener.ServiceHost.Description.Endpoints.Last();
        ep.Behaviors.Add(new WebHttpBehavior());
        return listener;
    }
}
As you can see, the host name is obtained from the StatelessServiceContext's NodeContext - is there a good way to set this up to target different host names for each environment? My clients need to be able to make http/https calls based on host name to determine which environment they connect to. Thanks!
I don't think you can do that, since in the provided example the host variable represents the exact node on which the service is running. You can reach the service using the cluster name if you open the appropriate port, e.g. http://mycluster.eastus.cloudapp.azure.com:19081/MyApp/MyService
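If you do need an environment-specific public host name (for example, to hand back to clients), one option is to read it from the service's configuration package, which can be overridden per environment through application parameter files. A minimal sketch, assuming a Settings.xml section named "HostConfig" with a "PublicHostName" parameter (both names are hypothetical):
private static string GetPublicHostName(StatelessServiceContext context)
{
    // Reads a value from PackageRoot/Config/Settings.xml. The section and
    // parameter names ("HostConfig", "PublicHostName") are hypothetical and
    // would be overridden per environment in ApplicationParameters/*.xml.
    var configPackage = context.CodePackageActivationContext.GetConfigurationPackageObject("Config");
    return configPackage.Settings.Sections["HostConfig"].Parameters["PublicHostName"].Value;
}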

How can I sign a JWT token on an Azure WebJob without getting a CryptographicException?

I have a WebJob that needs to create a JWT token to talk with an external service. The following code works when I run the WebJob on my local machine:
public static string SignES256(byte[] p8Certificate, object header, object payload)
{
    var headerString = JsonConvert.SerializeObject(header);
    var payloadString = JsonConvert.SerializeObject(payload);

    // CngKey.Import is the call that throws on Azure (see the exception below)
    CngKey key = CngKey.Import(p8Certificate, CngKeyBlobFormat.Pkcs8PrivateBlob);
    using (ECDsaCng dsa = new ECDsaCng(key))
    {
        dsa.HashAlgorithm = CngAlgorithm.Sha256;
        var unsignedJwtData = Base64UrlEncoder.Encode(Encoding.UTF8.GetBytes(headerString)) + "." + Base64UrlEncoder.Encode(Encoding.UTF8.GetBytes(payloadString));
        var signature = dsa.SignData(Encoding.UTF8.GetBytes(unsignedJwtData));
        return unsignedJwtData + "." + Base64UrlEncoder.Encode(signature);
    }
}
However, when I deploy my WebJob to Azure, I get the following exception:
Microsoft.Azure.WebJobs.Host.FunctionInvocationException: Exception while executing function: NotificationFunctions.QueueOperation ---> System.Security.Cryptography.CryptographicException: The system cannot find the file specified. at System.Security.Cryptography.NCryptNative.ImportKey(SafeNCryptProviderHandle provider, Byte[] keyBlob, String format) at System.Security.Cryptography.CngKey.Import(Byte[] keyBlob, CngKeyBlobFormat format, CngProvider provider)
It says it can't find a specified file, but the parameters I am passing in are not looking at a file location, they are in memory. From what I have gathered, there may be some kind of cryptography setting I need to enable to be able to use the CngKey.Import method, but I can't find any settings in the Azure portal to configure related to this.
I have also tried using JwtSecurityTokenHandler, but it doesn't seem to handle the ES256 hashing algorithm I need to use (even though it is referenced in the JwtAlgorithms class as ECDSA_SHA256).
Any suggestions would be appreciated!
UPDATE
It appears that CngKey.Import may actually be trying to store the certificate somewhere that is not accessible on Azure. I don't need it stored, so if there is a better way to access the certificate in memory or convert it to a different kind of certificate that would be easier to use that would work.
UPDATE 2
This issue might be related to Azure Web Apps IIS setting not loading the user profile as mentioned here. I have enabled this by setting WEBSITE_LOAD_USER_PROFILE = 1 in the Azure portal app settings. I have tried with this update when running the code both via the WebJob and the Web App in Azure but I still receive the same error.
I used a decompiler to take a look under the hood at what the CngKey.Import method was actually doing. It looks like it tries to insert the certificate I am using into the "Microsoft Software Key Storage Provider". I don't actually need this, I just need to read the value of the certificate, but it doesn't look like that is possible.
Once I realized a certificate was getting inserted into a store somewhere on the machine, I started thinking about how bad of a thing that would be from a security standpoint if your Azure Web App was running in a shared environment, like it does for the Free and Shared tiers. Sure enough, my VM was on the Shared tier. Scaling it up to the Basic tier resolved this issue.
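As an aside, if you can target .NET Core 3.0+ or .NET 5+, the key store can be avoided entirely by importing the PKCS#8 blob into an in-memory ECDsa instance; a minimal sketch, assuming the same inputs as SignES256 above:
// ImportPkcs8PrivateKey keeps the key in memory; nothing is written to a
// CNG key store, so this sidesteps the provider lookup that fails on Azure.
public static string SignES256InMemory(byte[] p8PrivateKey, string unsignedJwtData)
{
    using (var ecdsa = ECDsa.Create())
    {
        ecdsa.ImportPkcs8PrivateKey(p8PrivateKey, out _);
        var signature = ecdsa.SignData(Encoding.UTF8.GetBytes(unsignedJwtData), HashAlgorithmName.SHA256);
        return unsignedJwtData + "." + Base64UrlEncoder.Encode(signature);
    }
}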

How to pass web proxy address to Microsoft.WindowsAzure.Storage.OperationContext.UserHeaders?

I am writing some C# code that uses the Azure Resource Manager APIs and my CloudBlobClient needs to use a web proxy. According to the documentation for OperationContext.UserHeaders property at https://msdn.microsoft.com/en-us/library/microsoft.windowsazure.storage.operationcontext.userheaders.aspx, UserHeaders can be used to specify a proxy. Can you please share how this should be done properly?
Edited after Gaurav Mantri's comment.
The Azure clients below allow you to specify a proxy via the httpClientHandler, but the CloudBlobClient does not respect the proxy information from the StorageManagementClient, and there doesn't seem to be a way to pass the proxy information to the CloudBlobClient. Our users may want to specify different proxies for multiple connections, and it doesn't seem the current architecture will easily allow this.
// Example code that instantiates clients with proxy information inside the httpClientHandler
armCompute = new ComputeManagementClient(tokenCredentials, httpClientHandler);
armStorage = new StorageManagementClient(tokenCredentials, httpClientHandler);
armNetwork = new NetworkManagementClient(tokenCredentials, httpClientHandler);
armResource = new ResourceManagementClient(tokenCredentials, httpClientHandler);
armSubscription = new SubscriptionClient(tokenCredentials, httpClientHandler);
I believe you're understanding it incorrectly. The documentation states:
"Gets or sets additional headers on the request, for example, for proxy or logging information."
From what I understand, you use this to set headers that your proxy understands, not to specify proxy configuration settings.
In order to specify proxy settings, you would need to specify those in your application configuration file (web.config or app.config).
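For example, a minimal sketch of the standard .NET defaultProxy configuration in app.config (the proxy address is hypothetical):
<configuration>
  <system.net>
    <defaultProxy enabled="true">
      <proxy proxyaddress="http://myproxy:8080" bypassonlocal="true" />
    </defaultProxy>
  </system.net>
</configuration>
Note that this sets a single process-wide proxy; per-connection proxies for multiple clients would still need a different approach.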
