What's a good way to manage deploying code changes to Dev, Test, and Prod environments in Azure? The Azure / Service Fabric site provides an article on specifying port numbers using parameters under How-to guides - Manage application lifecycle (https://learn.microsoft.com/en-us/azure/service-fabric/service-fabric-how-to-specify-port-number-using-parameters), but I'm not sure how one manages host names - is there a host-name-related property that can be included in the publish profile .xml file (e.g., Cloud.xml)?
Background: I'm migrating a self-hosted, on-premises WCF application that runs as a Windows Service and uses WebHttpBinding with http and https endpoints (T4 config file templates determine the host name and port number for each environment) to an Azure Service Fabric WcfCommunicationListener application (similar to the sample found here: https://github.com/loekd/ServiceFabric.WcfCalc)...
internal sealed class ServiceFabricWcfService : StatelessService
{
    public ServiceFabricWcfService(StatelessServiceContext context) : base(context)
    {
    }

    protected override IEnumerable<ServiceInstanceListener> CreateServiceInstanceListeners()
    {
        yield return new ServiceInstanceListener(CreateRestListener);
    }

    private ICommunicationListener CreateRestListener(StatelessServiceContext context)
    {
        var host = context.NodeContext.IPAddressOrFQDN;
        var endpointConfig = context.CodePackageActivationContext.GetEndpoint("ServiceEndpoint");
        var port = endpointConfig.Port;
        var scheme = endpointConfig.Protocol.ToString();
        var uri = string.Format(CultureInfo.InvariantCulture, "{0}://{1}:{2}/webhost/", scheme, host, port);

        var listener = new WcfCommunicationListener<IJsonService>(context, new JsonServicePerCall(), new WebHttpBinding(WebHttpSecurityMode.None), new EndpointAddress(uri));
        var ep = listener.ServiceHost.Description.Endpoints.Last();
        ep.Behaviors.Add(new WebHttpBehavior());
        return listener;
    }
}
As you can see, the host name is obtained from the StatelessServiceContext's NodeContext - is there a good way to set this up to target different host names for each environment? My clients need to be able to make http/https calls based on host name to determine which environment they connect to. Thanks!
I don't think that you can do that, since in the provided example the host variable represents the exact node on which the service is running. You can reach the service using the cluster name if you open the appropriate port, e.g. http://mycluster.eastus.cloudapp.azure.com:19081/MyApp/MyService
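In other words, clients pick an environment by targeting that cluster's DNS name. A minimal client-side sketch, assuming hypothetical per-environment cluster names and the default reverse proxy port 19081:

using System;
using System.Collections.Generic;
using System.Net.Http;
using System.Threading.Tasks;

public static class EnvironmentAwareClient
{
    // Hypothetical per-environment cluster DNS names; only the Prod-style name
    // comes from the example above, the others are placeholders.
    private static readonly IReadOnlyDictionary<string, string> ClusterByEnvironment =
        new Dictionary<string, string>
        {
            ["Dev"]  = "mycluster-dev.eastus.cloudapp.azure.com",
            ["Test"] = "mycluster-test.eastus.cloudapp.azure.com",
            ["Prod"] = "mycluster.eastus.cloudapp.azure.com"
        };

    public static async Task<string> CallServiceAsync(string environment, string relativePath)
    {
        // Route the call through the cluster's reverse proxy (port 19081 by default).
        var baseUri = new Uri($"http://{ClusterByEnvironment[environment]}:19081/MyApp/MyService/");
        using (var client = new HttpClient { BaseAddress = baseUri })
        {
            return await client.GetStringAsync(relativePath);
        }
    }
}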
I'm trying to write an encapsulation to get the URI for the local reverse proxy for Service Fabric, and I'm having a hard time deciding how I want to approach configurability for the port (known as "HttpApplicationGatewayEndpoint" in the cluster manifest or "reverseProxyEndpointPort" in the ARM template). The best way I've thought of is to call "GetClusterManifestAsync" from the fabric client and parse it from there, but I'm not a fan of that for a few reasons. For one, the call returns a string XML blob, which isn't guarded against changes to the manifest schema. I've also not yet found a way to query the cluster manager to find out which node type I'm currently on, so if for some silly reason the cluster has multiple node types and each one has a different reverse proxy port (just being a defensive coder here), that could potentially fail. It seems like an awful lot of effort to go through to dynamically discover that port number, and I've definitely missed things in the fabric API before, so any suggestions on how to approach this issue?
Edit:
I see from the example project that it gets the port number from a config package in the service. I would rather not do it that way, as then I'd have to write a ton of boilerplate for every service that needs this, just to read configs and pass the value around. Since this is more or less a constant at runtime, it seems like it could be treated as such and fetched somewhere from the fabric client?
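For context, the config-package read I'd rather not repeat in every service looks roughly like the sketch below (the package, section, and parameter names are hypothetical):

using System.Fabric;

internal static class ConfigPackagePortReader
{
    // Sketch of the per-service boilerplate: read the reverse proxy port from the
    // service's config package. "Config", "ReverseProxy" and "Port" are hypothetical names.
    public static int GetReverseProxyPort(ServiceContext context)
    {
        var configPackage = context.CodePackageActivationContext
            .GetConfigurationPackageObject("Config");
        var value = configPackage.Settings.Sections["ReverseProxy"].Parameters["Port"].Value;
        return int.Parse(value);
    }
}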
After some time spent in the object browser I was able to find the various pieces I needed to make this properly.
using System;
using System.Fabric;                          // FabricClient, FabricRuntime, FabricConnectionDeniedException
using System.Fabric.Management.ServiceModel;  // ClusterManifestType (shipped in the assembly of the same name, see note below)
using System.IO;
using System.Linq;
using System.Xml.Serialization;
// AsyncLazy<T> is not a BCL type; it comes from a helper library (e.g. Nito.AsyncEx) or a custom wrapper.

public class ReverseProxyPortResolver
{
    /// <summary>
    /// Represents the port that the current fabric node is configured
    /// to use when using a reverse proxy on localhost
    /// </summary>
    public static AsyncLazy<int> ReverseProxyPort = new AsyncLazy<int>(async () =>
    {
        // Get the cluster manifest from the fabric client & deserialize it into a hardened object
        ClusterManifestType deserializedManifest;
        using (var cl = new FabricClient())
        {
            var manifestStr = await cl.ClusterManager.GetClusterManifestAsync().ConfigureAwait(false);
            var serializer = new XmlSerializer(typeof(ClusterManifestType));
            using (var reader = new StringReader(manifestStr))
            {
                deserializedManifest = (ClusterManifestType)serializer.Deserialize(reader);
            }
        }

        // Fetch the setting from the correct node type
        var nodeType = GetNodeType();
        var nodeTypeSettings = deserializedManifest.NodeTypes.Single(x => x.Name.Equals(nodeType));
        return int.Parse(nodeTypeSettings.Endpoints.HttpApplicationGatewayEndpoint.Port);
    });

    private static string GetNodeType()
    {
        try
        {
            return FabricRuntime.GetNodeContext().NodeType;
        }
        catch (FabricConnectionDeniedException)
        {
            // This code was invoked from a non-fabric-started application, likely a unit test.
            return "NodeType0";
        }
    }
}
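Consuming it is then just a matter of awaiting the lazy value wherever the local reverse proxy address is needed; a minimal sketch, assuming an awaitable AsyncLazy<T> (e.g. Nito.AsyncEx) and a hypothetical application/service path:

using System;
using System.Threading.Tasks;

public static class ReverseProxyUriExample
{
    // Sketch only: build a local reverse proxy URI from the resolved port.
    // "MyApp/MyService" is a hypothetical application/service path.
    public static async Task<Uri> BuildReverseProxyUriAsync()
    {
        var port = await ReverseProxyPortResolver.ReverseProxyPort;
        return new Uri($"http://localhost:{port}/MyApp/MyService/");
    }
}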
News to me in this investigation was that the schemas for all of the Service Fabric XML are squirreled away in an assembly named System.Fabric.Management.ServiceModel.
In the Service Fabric samples, like WordCount, the web app listens on a port under a subpath, like this:
http://localhost:8081/wordcount
The code for this configuration is as follows (see the file on GitHub: https://github.com/Azure-Samples/service-fabric-dotnet-getting-started/blob/master/Services/WordCount/WordCount.WebService/WordCountWebService.cs):
protected override IEnumerable<ServiceInstanceListener> CreateServiceInstanceListeners()
{
    return new[]
    {
        new ServiceInstanceListener(initParams => new OwinCommunicationListener("wordcount", new Startup(), initParams))
    };
}
With this configuration we can deploy other web apps on the same cluster using the same port (8081)
http://localhost:8081/wordcount
http://localhost:8081/app1
http://localhost:8081/app2
And so on.
But the ASP.NET Core project template is different, and I don't know how to add the subpath in the listener configuration.
The code below is what we have in the project template (Program.cs, class WebHostingService):
protected override IEnumerable<ServiceInstanceListener> CreateServiceInstanceListeners()
{
    return new[] { new ServiceInstanceListener(_ => this) };
}

Task<string> ICommunicationListener.OpenAsync(CancellationToken cancellationToken)
{
    var endpoint = FabricRuntime.GetActivationContext().GetEndpoint(_endpointName);
    string serverUrl = $"{endpoint.Protocol}://{FabricRuntime.GetNodeContext().IPAddressOrFQDN}:{endpoint.Port}";

    _webHost = new WebHostBuilder().UseKestrel()
                                   .UseContentRoot(Directory.GetCurrentDirectory())
                                   .UseStartup<Startup>()
                                   .UseUrls(serverUrl)
                                   .Build();
    _webHost.Start();

    return Task.FromResult(serverUrl);
}
The semantics are a bit different, but it all ends up at the same point.
The problem is that even if I add the subpath at the end of serverUrl it doesn't work, and the web apps always respond on the root http://localhost:8081/
See how I've tried it in the code snippet below:
string serverUrl = $"{endpoint.Protocol}://{FabricRuntime.GetNodeContext().IPAddressOrFQDN}:{endpoint.Port}/app1";
How can I achieve the same result as the "classic" web app using ASP.NET Core?
The goal is to publish on Azure on port 80 to give users a better experience, like:
http://mywebsite.com/app1
http://mywebsite.com/app2
Thank you a lot!
As @Vaclav said, it is necessary to change UseKestrel to UseWebListener.
But the problem is that WebListener binds to the address differently.
See this thread for more details: https://github.com/aspnet/Hosting/issues/749
It is necessary to use + instead of localhost or other machine names in the serverUrl.
So, change the template code from:
Task<string> ICommunicationListener.OpenAsync(CancellationToken cancellationToken)
{
    var endpoint = FabricRuntime.GetActivationContext().GetEndpoint(_endpointName);
    string serverUrl = $"{endpoint.Protocol}://{FabricRuntime.GetNodeContext().IPAddressOrFQDN}:{endpoint.Port}/service1";

    _webHost = new WebHostBuilder().UseKestrel()
                                   .UseContentRoot(Directory.GetCurrentDirectory())
                                   .UseStartup<Startup>()
                                   .UseUrls(serverUrl)
                                   .Build();
    _webHost.Start();

    return Task.FromResult(serverUrl);
}
To
Task<string> ICommunicationListener.OpenAsync(CancellationToken cancellationToken)
{
    var endpoint = FabricRuntime.GetActivationContext().GetEndpoint(_endpointName);
    string serverUrl = $"{endpoint.Protocol}://+:{endpoint.Port}/service1";

    _webHost = new WebHostBuilder().UseWebListener()
                                   .UseContentRoot(Directory.GetCurrentDirectory())
                                   .UseStartup<Startup>()
                                   .UseUrls(serverUrl)
                                   .Build();
    _webHost.Start();

    return Task.FromResult(serverUrl);
}
And it works very well.
Kestrel doesn't support URL prefixes or port sharing between multiple applications. You have to use WebListener instead:
using Microsoft.AspNetCore.Hosting;
...
_webHost = new WebHostBuilder().UseWebListener()
I've not done this yet, but is this GitHub repository useful?
https://github.com/weidazhao/Hosting
About The Sample
This sample demonstrates:
1. How ASP.NET Core can be used in a communication listener of stateless/stateful services. Today the scenario we've enabled is to host ASP.NET Core web application as a stateless service with Service Fabric. We wanted to light up the scenarios that people also can use ASP.NET Core as communication listeners in stateless services and stateful services, similar to what the OwinCommunicationListener does.
2. How to build an API gateway service to forward requests to multiple microservices behind it with the reusable and modular component. Service Fabric is a great platform for building microservices. The gateway middleware (Microsoft.ServiceFabric.AspNetCore.Gateway) is an attempt to provide a building block for people to easily implement the API gateway pattern of microservices on Service Fabric. There are a couple good articles elaborating the API gateway pattern, such as http://microservices.io/patterns/apigateway.html, http://www.infoq.com/articles/microservices-intro, etc. For more information about microservices, check out https://azure.microsoft.com/en-us/blog/microservices-an-application-revolution-powered-by-the-cloud/, http://martinfowler.com/articles/microservices.html.
@Nick Randell
With the sample approach it is possible to run several services on the same port, using their names, like:
http://localhost:20000/service1 <--- Svc in Application1
http://localhost:20000/service2 <--- Svc in Application1
This is possible because there is a Gateway service that maps the addresses service1 and service2 in the URI to the correct services.
But I couldn't find a way to have 2 different Applications running on the same port.
Is it possible?
http://localhost:20000/service1 <--- Svc in Application1
http://localhost:20000/service2 <--- Svc in Application2
I'm trying to work with Azure WebJobs. I understand the way it works, but I don't understand why I need to use two connection strings: one is for the queue holding the messages, but
why is there another one called "AzureWebJobsDashboard"?
What is its purpose?
And where do I get this connection string from?
At the moment I have one Web App and one WebJob in the same solution. I'm experimenting only locally (without publishing anything); the one thing I have up in the cloud is the storage account that holds the queue.
I even tried putting the same connection string in both places (AzureWebJobsDashboard, AzureWebJobsStorage), but it throws an exception:
"Cannot bind parameter 'log' when using this trigger."
Thank you.
There are two connection strings because the WebJobs SDK writes some logs in a storage account. It gives you the possibility of having one storage account just for data (AzureWebJobsStorage) and another one for logs (AzureWebJobsDashboard). They can be the same. Also, you need two of them because you can have multiple job hosts using different data accounts but sending logs to the same dashboard.
The error you are getting is not related to the connection strings but to one of the functions in your code. One of them has a log parameter that is not of the right type. Can you share the code?
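For example, a minimal queue-triggered function with a log parameter the SDK can bind looks roughly like this (the queue name is hypothetical):

using System.IO;
using Microsoft.Azure.WebJobs;

public class Functions
{
    // 'log' must be a type the SDK can bind for this trigger, e.g. TextWriter;
    // an unsupported type produces the "Cannot bind parameter 'log'" error above.
    public static void ProcessQueueMessage([QueueTrigger("myqueue")] string message, TextWriter log)
    {
        log.WriteLine($"Processed: {message}");
    }
}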
Okay, anyone coming here looking for an actual answer of "where do I get the ConnectionString from"... here you go.
On the new Azure portal, you should have a Storage Account resource; mine starts with "portalvhds" followed by a bunch of alphanumerics. Click that to see a resource Dashboard on the right, followed immediately by a Settings window. Look for the Keys submenu under General -- click that. The whole connection string is there (actually there are two, Primary and Secondary; I don't currently understand the difference, but let's go with Primary, shall we?).
Copy and paste that into your App.config file in the connectionString attribute of the AzureWebJobsDashboard and AzureWebJobsStorage items. This presumes that for your environment you only have one Storage Account, and so you want that same storage to be used for data and logs.
I tried this, and at least the WebJob ran without throwing an error.
@RayHAz - Expanding upon your answer above (thanks)...
I tried this: https://learn.microsoft.com/en-us/azure/app-service/webjobs-sdk-get-started
but in .NET Core 2.1 I was getting exceptions about how it couldn't find the connection string.
Long story short, I ended up with the following, which worked for me:
appsettings.json, in a .NET Core 2.1 console app:
{
  "ConnectionStrings": {
    "AzureWebJobsStorage": "---your Azure storage connection string here---",
    "AzureWebJobsDashboard": "---the same connection string---"
  }
}
... and my Program.cs file...
using System;
using System.IO;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.Logging;

namespace YourWebJobConsoleAppProjectNamespaceHere
{
    public class Program
    {
        public static IConfiguration Configuration;

        static void Main(string[] args)
        {
            var builder = new ConfigurationBuilder()
                .SetBasePath(Path.Combine(AppContext.BaseDirectory))
                .AddJsonFile("appsettings.json", true);

            Configuration = builder.Build();

            var azureWebJobsStorageConnectionString = Configuration.GetConnectionString("AzureWebJobsStorage");
            var azureWebJobsDashboardConnectionString = Configuration.GetConnectionString("AzureWebJobsDashboard");

            var config = new JobHostConfiguration
            {
                DashboardConnectionString = azureWebJobsDashboardConnectionString,
                StorageConnectionString = azureWebJobsStorageConnectionString
            };

            var loggerFactory = new LoggerFactory();
            config.LoggerFactory = loggerFactory.AddConsole();

            var host = new JobHost(config);
            host.RunAndBlock();
        }
    }
}
I want my client to communicate with a specific WorkerRole instance, so I'm trying to use InstanceInput endpoints.
My project is based on the example provided in this question: Azure InstanceInput endpoint usage
The problem is that I don't get the external IP address + port for the actual instance when using RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["Endpoint1"].IPEndpoint;
I just get the internal address with the local port (e.g. 10.x.x.x:10100). I know that I can get the public IP address via DNS lookup (xxx.cloudapp.net), but I don't have a clue how to get the correct public port for each instance.
One possible solution would be to get the instance number (from RoleEnvironment.CurrentRoleInstance.Id) and add it to the FixedPortRange minimum (e.g. 10106). This would imply that the first instance always has port 10106, the second instance 10107, and so on. This solution seems a bit hacky to me, since I don't know how Windows Azure assigns instances to ports.
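Roughly what I mean, as a sketch only; it assumes the instance ordinal is the trailing number in RoleEnvironment.CurrentRoleInstance.Id and that the FixedPortRange starts at 10106, which is exactly the assumption I'm not comfortable with:

using System;
using System.Linq;
using Microsoft.WindowsAzure.ServiceRuntime;

internal static class PublicPortGuess
{
    // Hacky sketch: derive the public port from the instance ordinal.
    // Assumes the instance Id ends in "_IN_<n>" and the port range starts at 10106.
    private const int FixedPortRangeMin = 10106;

    public static int GuessPublicPort()
    {
        var instanceId = RoleEnvironment.CurrentRoleInstance.Id; // e.g. "...MyWorkerRole_IN_0"
        var ordinal = int.Parse(instanceId.Split('_').Last());
        return FixedPortRangeMin + ordinal;
    }
}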
Is there a better (correct) way to retrieve the public port for each instance?
Question #2:
Is there any information about the Azure Compute Emulator supporting InstanceInput endpoints? (As I already mentioned in the comments, it seems that the Azure Compute Emulator currently doesn't support InstanceInputEndpoint.)
Second solution (much better):
To get the public port, the property PublicIPEndpoint can be used (I don't know why I didn't notice this property in the first place).
Usage: RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["Endpoint1"].PublicIPEndpoint;
Warning:
The IP address in the property is unused (http://msdn.microsoft.com/en-us/library/microsoft.windowsazure.serviceruntime.roleinstanceendpoint.publicipendpoint.aspx).
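So in practice you pair the Port from PublicIPEndpoint with the cloud service's DNS name yourself; a minimal sketch (the DNS name and endpoint name are placeholders):

using System;
using Microsoft.WindowsAzure.ServiceRuntime;

internal static class ExternalAddressHelper
{
    // Sketch: the IP in PublicIPEndpoint is unused, so combine its Port with the
    // cloud service's DNS name ("yourservice.cloudapp.net" is a placeholder).
    public static Uri GetExternalAddress()
    {
        var endpoint = RoleEnvironment.CurrentRoleInstance
            .InstanceEndpoints["Endpoint1"].PublicIPEndpoint;
        return new UriBuilder("http", "yourservice.cloudapp.net", endpoint.Port).Uri;
    }
}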
First solution:
As 'artfulmethod' already mentioned, the REST operation Get Deployment retrieves interesting information about the current deployment. Since I encountered some small annoying 'issues', I'll provide the code for the REST client here (in case someone else has a similar problem):
// Requires: System.Net, System.IO, System.Security.Cryptography.X509Certificates

X509Store certificateStore = new X509Store(StoreName.My, StoreLocation.CurrentUser);
certificateStore.Open(OpenFlags.ReadOnly);
string footPrint = "xxx"; // enter the thumbprint of the certificate you use to upload the deployment (aka Management Certificate)
X509Certificate2Collection certs =
    certificateStore.Certificates.Find(X509FindType.FindByThumbprint, footPrint, false);
if (certs.Count != 1) {
    // client certificate cannot be found - check the thumbprint
}

string url = "https://management.core.windows.net/<subscription-id>/services/hostedservices/<service-name>/deployments/<deployment-name>"; // replace <xxx> with actual values

try {
    var request = (HttpWebRequest)WebRequest.Create(url);
    request.ClientCertificates.Add(certs[0]);
    request.Headers.Add("x-ms-version", "2012-03-01"); // very important, otherwise you get an HTTP 400 error; specifies in which version the response is formatted
    request.Method = "GET";

    var response = (HttpWebResponse)request.GetResponse();                      // get response
    string result = new StreamReader(response.GetResponseStream()).ReadToEnd(); // get response body
} catch (Exception ex) {
    // handle error
}
The string 'result' contains all the information about the deployment (the format of the XML is described in the 'Response Body' section at http://msdn.microsoft.com/en-us/library/windowsazure/ee460804.aspx).
To get information about your deployments, including the VIPs and public ports for your role instances, use the Get Deployment operation on the Service Management API. The response body includes an InstanceInputList.
I am developing an "Azure web application".
I have created drive and drivePath static members in WebRole as follows:
public static CloudDrive drive = null;
public static string drivePath = "";
I have created a development storage drive in WebRole.OnStart as follows:
LocalResource azureDriveCache = RoleEnvironment.GetLocalResource("cache");
CloudDrive.InitializeCache(azureDriveCache.RootPath, azureDriveCache.MaximumSizeInMegabytes);

CloudStorageAccount.SetConfigurationSettingPublisher((configName, configSetter) =>
{
    // for a console app, reading from App.config
    //configSetter(ConfigurationManager.AppSettings[configName]);

    // OR, if running in the Windows Azure environment
    configSetter(RoleEnvironment.GetConfigurationSettingValue(configName));
});

CloudStorageAccount account = CloudStorageAccount.DevelopmentStorageAccount;
CloudBlobClient blobClient = account.CreateCloudBlobClient();
blobClient.GetContainerReference("drives").CreateIfNotExist();

drive = account.CreateCloudDrive(
    blobClient
        .GetContainerReference("drives")
        .GetPageBlobReference("mysupercooldrive.vhd")
        .Uri.ToString()
);

try
{
    drive.Create(64);
}
catch (CloudDriveException ex)
{
    // handle exception here
    // exception is also thrown if all is well but the drive already exists
}

string path = drive.Mount(azureDriveCache.MaximumSizeInMegabytes, DriveMountOptions.None);
IDictionary<String, Uri> listDrives = Microsoft.WindowsAzure.StorageClient.CloudDrive.GetMountedDrives();
drivePath = path;
The drive stays visible and accessible as long as execution remains inside WebRole.OnStart; as soon as execution leaves WebRole.OnStart, the drive becomes unavailable to the application and the static members get reset (e.g., drivePath is set back to "").
Am I missing some configuration, or is there some other error?
Where's the other code where you're expecting to use drivePath? Is it in a web application?
If so, are you using SDK 1.3? In SDK 1.3, the default mode for a web application is to run under full IIS, which means running in a separate app domain from your RoleEntryPoint code (like OnStart), so you can't share static variables across the two. If this is the problem, you might consider moving this initialization code to Application_Start in Global.asax.cs instead (which runs in the web application's app domain).
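A minimal sketch of that suggestion, assuming the mounting code is pulled out of OnStart into a hypothetical static helper (WebRole.MountDrive) that the web application calls:

using System;

// Global.asax.cs in the web application (sketch; WebRole.MountDrive is a
// hypothetical helper wrapping the mounting code currently in OnStart).
public class Global : System.Web.HttpApplication
{
    protected void Application_Start(object sender, EventArgs e)
    {
        // Runs in the web application's app domain, so statics such as
        // WebRole.drivePath set here are visible to the site's pages.
        WebRole.MountDrive();
    }
}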
I found the solution:
On the development machine, the request originates from localhost, which was making the system crash.
Commenting out the "Sites" tag in ServiceDefinition.csdef resolves the issue.