Can someone please post a sample code for using InstanceInput endpoints?
I used the configuration below in a worker role where a sample WCF service listens on port 8080.
<Endpoints>
  <InstanceInputEndpoint name="InstanceAccess" protocol="tcp" localPort="8080">
    <AllocatePublicPortFrom>
      <FixedPortRange max="10105" min="10101" />
    </AllocatePublicPortFrom>
  </InstanceInputEndpoint>
</Endpoints>
But I was not able to access this WCF service from an external consumer using any of the ports 10101 to 10105. Should we use the public DNS name of the Azure service along with the public ports in the given range?
Also, I was not able to access this endpoint's details from within the worker role's OnStart() method. I used RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["InstanceAccess"], but it does not return a RoleInstanceEndpoint. Am I missing something here?
Here is a sample Visual Studio solution which uses Azure InstanceInput endpoint and hosts a WCF service on a worker role. The WCF service running on each of the individual instances can be accessed using the Azure DNS name and the public port mapped to that instance. I used the following endpoint configuration.
<Endpoints>
  <InstanceInputEndpoint name="Endpoint1" protocol="tcp" localPort="10100">
    <AllocatePublicPortFrom>
      <FixedPortRange max="10110" min="10106" />
    </AllocatePublicPortFrom>
  </InstanceInputEndpoint>
</Endpoints>
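With this configuration, the public ports in the range are handed out to instances in deployment order (instance 0 gets 10106, instance 1 gets 10107, and so on). As a sketch, an external client could then address each instance individually; the contract, the "/Service1" path, the DNS name myservice.cloudapp.net, and the unsecured NetTcpBinding are all assumptions for illustration:

```csharp
using System;
using System.ServiceModel;

// Hypothetical contract for illustration; substitute your actual WCF contract.
[ServiceContract]
public interface ISampleService
{
    [OperationContract]
    string GetInstanceId();
}

public static class InstanceClient
{
    public static void Main()
    {
        // Public ports 10106-10110 are allocated to instances 0-4 in order.
        for (int i = 0; i < 5; i++)
        {
            var address = new EndpointAddress(
                "net.tcp://myservice.cloudapp.net:" + (10106 + i) + "/Service1");
            var factory = new ChannelFactory<ISampleService>(
                new NetTcpBinding(SecurityMode.None), address);
            ISampleService proxy = factory.CreateChannel();
            Console.WriteLine("Instance {0}: {1}", i, proxy.GetInstanceId());
            factory.Close();
        }
    }
}
```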
This endpoint was somehow not accessible from within the worker role (in both the OnStart() and Run() methods), so I fell back to 'localhost'.
// Fall back to localhost in case the endpoint is not resolvable from within the role.
string endpointIP = "localhost:10100";
if (RoleEnvironment.CurrentRoleInstance.InstanceEndpoints.Keys.Contains("Endpoint1"))
{
    IPEndPoint externalEndPoint =
        RoleEnvironment.CurrentRoleInstance.InstanceEndpoints["Endpoint1"].IPEndpoint;
    endpointIP = externalEndPoint.ToString();
}
The solution also contains a console client which uses the hosted DNS name to invoke these individual WCF services.
The InstanceInput endpoint is not working locally, but once deployed it works fine and a different public port is assigned to each instance, based on the configured port range. Note that you cannot create more instances than the port range allows; for example, with a five-port range such as 10101-10105 you can run at most five instances.
Related
I have a worker role that hosts an ApiController, and it currently communicates with the public internet via http and https input endpoints I've defined in its Service Configuration file.
I would like to put this API behind an Azure APIM API, and have all traffic go through there, rather than hitting the worker role directly. I'm most of the way there, but am having trouble ensuring the worker role can't be hit directly from the public internet.
Currently:
I've created an ARM virtual network, and an Azure APIM API
I've configured our API to run on the ARM virtual network
I also created a classic virtual network and configured our worker role to deploy to it
I've defined a peering in the ARM virtual network between it and our classic virtual network
The API's Web service URL is set to the Cloud Service's Site URL value
Our worker role configuration file currently has http and https input endpoints that can be hit from the public internet
I currently have a url that maps to the Virtual IP (VIP) address of my API Management service, and can successfully make requests to my API via that url.
I believe the best way for me to prevent my worker role from being accessed directly from the public internet would be defining Access Control List rules in its configuration file that would only allow calls originating from my APIM API. It would look something like this:
<AccessControls>
  <AccessControl name="APIM">
    <Rule action="permit" description="OnlyPermitAPIM" order="100" remoteSubnet="?" />
  </AccessControl>
</AccessControls>
<EndpointAcls>
  <EndpointAcl role="RoleName" endPoint="httpsIn" accessControl="APIM" />
  <EndpointAcl role="RoleName" endPoint="httpIn" accessControl="APIM" />
</EndpointAcls>
I'm not sure what the correct value would be for the remoteSubnet property. I tried entering the Address space value of my ARM Virtual Network (which my APIM API resides on), but that didn't seem to work, test calls returned a 500 status.
Is this the right approach? Also, is there a way to ensure that my APIM API makes a call directly through the peered virtual networks? Right now I believe it's still going through the public internet.
I was on the right track. The only thing I needed to change was the value of remoteSubnet. Rather than the address space of the ARM virtual network, I needed to include the API Management service's VIP. The relevant section of the .cscfg file looked like this:
<AccessControls>
  <AccessControl name="APIM">
    <Rule action="permit" description="OnlyPermitAPIM" order="100" remoteSubnet="<VIP address of APIM service>/32" />
  </AccessControl>
</AccessControls>
<EndpointAcls>
  <EndpointAcl role="RoleName" endPoint="httpsIn" accessControl="APIM" />
  <EndpointAcl role="RoleName" endPoint="httpIn" accessControl="APIM" />
</EndpointAcls>
I'd like to have all of my configuration settings in one place for all of my Azure Web App services, as well as for resources outside of Azure. Consul's key/value store seems like it could be a good fit (I'm happy to hear other suggestions if something else fits better). From my admittedly limited understanding of Consul, each node requires an agent to be running in order to access the key/value store.
Is this correct? If so, how can I do this, would it be via a continuous webjob in Azure? If not, how can I access the KV store without an agent?
It looks like we will not be able to use Consul with Azure App Service (aka Web Apps) at all.
Here is what I've tried.
1. Naive approach - Consul as a WebJob
Due to networking restrictions, an attempt to connect to ANY localhost port that was not opened by a process belonging to the App Service (Web App) itself ends with the following exception.
An attempt was made to access a socket in a way forbidden by its
access permissions 127.0.0.1:8500.
Reference from documentation:
https://github.com/projectkudu/kudu/wiki/Azure-Web-App-sandbox#networking-restrictionsconsiderations
The only way an application can be accessed via the internet is
through the already-exposed HTTP (80) and HTTPS (443) TCP ports;
applications may not listen on other ports for packets arriving from
the internet. However, applications may create a socket which can
listen for connections from within the sandbox. For example, two
processes within the same app may communicate with one another via TCP
sockets; connection attempts incoming from outside the sandbox, albeit
they be on the same machine, will fail. See the next topic for
additional detail.
Here is an interesting piece:
Connection attempts to local addresses (e.g. localhost, 127.0.0.1) and
the machine's own IP will fail, except if another process in the same
sandbox has created a listening socket on the destination port.
2. Consul spawned from App Service itself
I copied Consul into the Web App (as build output) and added the following lines to the application startup code:
// Path to the consul.exe that was copied into the build output.
var consul = Path.Combine(AppDomain.CurrentDomain.BaseDirectory, "bin/consul/consul.exe");
Process.Start(consul, "agent --data-dir=../../data");
Process.Start(consul, "join my-cluster-dns.name");
... and it joined the cluster, and I was even able to connect to Consul via 127.0.0.1:8500 from the App Service (Web App) itself.
However, it is still a useless setup, as the Consul agent MUST be reachable from the server; all I was able to see from the cluster's standpoint was a dead node with a failing "serf" health check. Again, according to the documentation, there is no way around this: "The only way an application can be accessed via the internet is through the already-exposed HTTP (80) and HTTPS (443) TCP ports".
https://www.consul.io/docs/agent/basics.html
Not all Consul agents in a cluster have to use the same port, but this address MUST
be reachable by all other nodes.
Summary
All-in-all, probably there is no way to properly host/use Consul with Azure App Services.
You don't need a Consul agent to retrieve configuration for your application.
You can use the library Winton.Extensions.Configuration.Consul. It provides a Configuration Provider (docs) which can be integrated within your application.
Here is a sample configuration (a full sample project is available here):
internal sealed class Program
{
    public static IHostBuilder CreateHostBuilder(string[] args)
    {
        return Host
            .CreateDefaultBuilder(args)
            .ConfigureWebHostDefaults(builder => builder.UseStartup<Startup>())
            .ConfigureAppConfiguration(
                builder =>
                {
                    builder
                        .AddConsul(
                            "appsettings.json",
                            options =>
                            {
                                options.ConsulConfigurationOptions =
                                    cco => { cco.Address = new Uri("http://consul:8500"); };
                                options.Optional = true;
                                options.PollWaitTime = TimeSpan.FromSeconds(5);
                                options.ReloadOnChange = true;
                            })
                        .AddEnvironmentVariables();
                });
    }

    public static void Main(string[] args)
    {
        CreateHostBuilder(args).Build().Run();
    }
}
Your app configuration will be updated periodically.
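Once the provider is registered, values are read through the standard IConfiguration abstraction like any other configuration source. A minimal sketch of the Startup class; the configuration key used here is an assumption for illustration:

```csharp
using System.Threading.Tasks;
using Microsoft.AspNetCore.Builder;
using Microsoft.AspNetCore.Http;
using Microsoft.Extensions.Configuration;

public sealed class Startup
{
    private readonly IConfiguration _configuration;

    public Startup(IConfiguration configuration)
    {
        _configuration = configuration;
    }

    public void Configure(IApplicationBuilder app)
    {
        app.Run(async context =>
        {
            // "Logging:LogLevel:Default" is an illustrative key; with
            // ReloadOnChange = true, reads reflect KV updates without a restart.
            string value = _configuration["Logging:LogLevel:Default"] ?? "<not set>";
            await context.Response.WriteAsync(value);
        });
    }
}
```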
I have a Cloud Service Worker Role in Azure which has been set up with a Reserved IP address. The goal of the Reserved IP is that when the worker role makes external requests, they will always come from the same IP. No external traffic is received by the service and no internal communication is required.
EDIT: The Reserved IP was associated with the Cloud Service using the following Azure Powershell command:
Set-AzureReservedIPAssociation -ReservedIPName uld-sender-ip -ServiceName uld-sender
This added the following NetworkConfiguration section into the .cscfg file:
<NetworkConfiguration>
  <AddressAssignments>
    <ReservedIPs>
      <ReservedIP name="uld-sender-ip" />
    </ReservedIPs>
  </AddressAssignments>
</NetworkConfiguration>
Now, when I try and re-deploy the service or update the configuration settings in Azure, I get the following error:
The operation '5e6772fae607ae0ca387457883bf2974' failed: 'Validation
Errors: Error validating the .cscfg file against the .csdef file.
Severity:Error, message:ReservedIP 'uld-sender-ip' was not mapped to
an endpoint. The service definition must contain atleast one endpoint
that maps to the ReservedIP..'.
So, I have tried adding an Endpoint to the .csdef file like so:
<Endpoints>
  <InternalEndpoint name="uld-sender-ip" protocol="tcp" port="8080" />
</Endpoints>
In addition, I have entered NetworkTrafficRules to the .csdef like so:
<NetworkTrafficRules>
  <OnlyAllowTrafficTo>
    <Destinations>
      <RoleEndpoint endpointName="uld-sender-ip" roleName="Sender" />
    </Destinations>
    <AllowAllTraffic />
  </OnlyAllowTrafficTo>
</NetworkTrafficRules>
But I still get the same error.
My understanding is that endpoints are only required for internal communication between worker/web roles, or to open a port to receive external communication.
EDIT: My question is how do you map a Reserved IP to an Endpoint for this scenario?
To avoid the error while updating the configuration settings or re-deploying the service, I ran the Azure PowerShell command to remove the Reserved IP association from the service:
Remove-AzureReservedIPAssociation -ReservedIPName uld-sender-ip -ServiceName uld-sender
Then I was able to edit and save the configuration settings in Azure and/or re-deploy the service. Once the service was updated, I ran the Azure PowerShell command to restore the Reserved IP association with the service:
Set-AzureReservedIPAssociation -ReservedIPName uld-sender-ip -ServiceName uld-sender
This is obviously not the ideal solution, but at least I can make changes to the service when needed. Hope this helps someone.
I have a service fabric cluster deployed with a domain of foo.northcentralus.cloudapp.azure.com
It has a single node type with a single public ip address / load balancer.
Lets say I have the following two apps deployed:
http://foo.northcentralus.cloudapp.azure.com:8101/wordcount/
http://foo.northcentralus.cloudapp.azure.com:8102/visualobjects/
How can I set this up so I can have multiple domains each hosted on port 80? (assuming I own both of these domains obviously)
http://www.wordcount.com
http://www.visualobjects.com
Do I need more than one public IP address in my cluster to support this?
You should be able to do this with a single public IP address through some http.sys magic.
Assuming you're using Katana for your web host (the word count and visual object samples you reference use Katana), then it should be as simple as starting the server using the domain name in the URL:
WebApp.Start("http://visualobjects.com:80", appBuilder => this.startup.Invoke(appBuilder));
The underlying Windows HTTP Server API will register that server with that URL, and any HTTP request that comes in with a Host: visualobjects.com header will automatically be routed to that server. Repeat for any number of servers with their own hostname. This is the host routing that http.sys does for multi-website hosting on a single machine, same as you had in IIS.
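As a sketch of that setup with both servers in one process (the startup classes and hostnames are illustrative; in Service Fabric each service would make its own WebApp.Start call):

```csharp
using System;
using Microsoft.Owin.Hosting;
using Owin;

// Illustrative OWIN startup classes, one per site.
public class WordCountStartup
{
    public void Configuration(IAppBuilder app) { /* configure word count pipeline */ }
}

public class VisualObjectsStartup
{
    public void Configuration(IAppBuilder app) { /* configure visual objects pipeline */ }
}

public static class MultiHostSample
{
    public static void Main()
    {
        // Each Start call registers a host-based URL with http.sys; an incoming
        // request is routed to the matching server by its Host header.
        using (WebApp.Start<WordCountStartup>("http://wordcount.com:80"))
        using (WebApp.Start<VisualObjectsStartup>("http://visualobjects.com:80"))
        {
            Console.WriteLine("Listening; press Enter to exit.");
            Console.ReadLine();
        }
    }
}
```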
The problem you'll run into is with reserving the hostname, which you have to do under an elevated user account before you open the server. Service Fabric has limited support for this in the form of Endpoint configuration in ServiceManifest.xml:
<!-- THIS WON'T WORK FOR REGISTERING HOSTNAMES -->
<Resources>
  <Endpoints>
    <Endpoint Protocol="http" Name="ServiceEndpoint" Type="Input" Port="80" />
  </Endpoints>
</Resources>
The limitation here is that there is nowhere to specify a hostname, so Service Fabric will always register "http://+:[port]". That unfortunately doesn't work if you want to open a server on a specific hostname - you need to register just the hostname you want to use. You have to do that part manually using netsh (and remove any Endpoints for the same port from ServiceManifest.xml, otherwise it will override your hostname registration).
To register the hostname with http.sys manually, you have to run netsh for the hostname, port, and user account under which your service runs, which by default is Network Service:
netsh http add urlacl url=http://visualobjects.com:80/ user="NT AUTHORITY\NETWORK SERVICE"
But you have to do this from an elevated account on each machine the service will run on. Luckily, we have service setup entry points that can run under elevated account privileges.
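As a sketch, the setup entry point is declared in ServiceManifest.xml inside the CodePackage (the script name RegisterUrlAcl.bat is an assumption) and runs before the main entry point:

```xml
<SetupEntryPoint>
  <ExeHost>
    <Program>RegisterUrlAcl.bat</Program>
  </ExeHost>
</SetupEntryPoint>
```

The batch file would contain the netsh command above, and a RunAsPolicy with EntryPointType="Setup" in ApplicationManifest.xml runs it under an elevated account such as LocalSystem.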
edit
One thing you will need to do in order for this to work is open up the firewall on the same port you are listening on. You can do that with the following:
<Resources>
  <Endpoints>
    <Endpoint Protocol="tcp" Name="ServiceEndpoint" Type="Input" Port="80" />
  </Endpoints>
</Resources>
Note that the Protocol is tcp instead of http. This will open up the firewall port but not override the http.sys registration that you set up with the .bat script.
I believe so. You would need separate public IPs, which can then be set up to route from those IPs to the same backend pools, but on different ports.
I started to play with Service Fabric very recently. I added a new Service Fabric cluster on Azure (unsecure) and I created a demo solution with 2 stateless Web API Services as follows:
Endpoint configuration for AnotherAPI is the following:
<Endpoints>
  <!-- This endpoint is used by the communication listener to obtain the port on which to
       listen. Please note that if your service is partitioned, this port is shared with
       replicas of different partitions that are placed in your code. -->
  <Endpoint Protocol="http" Name="ServiceEndpoint" Type="Input" Port="8698" />
</Endpoints>
I am able to access the default controller (ValuesController) using the local endpoint:
http://localhost:8698/api/values
But when I try to use the Azure endpoint I get an ERR_CONNECTION_TIMED_OUT error in Chrome.
http://{azure-ip-address}:8698/api/values
Is there anything that I am missing?
You have to open that port in your Azure cluster via a load balancer probe (and a corresponding load-balancing rule). You can do this at cluster creation time via an ARM template, or after the fact: for an existing cluster, go to the resource group, then the load balancer, then Probes. Note that the default open port in Service Fabric is 19080; if you just switch to that port it will work, provided you are not using SSL.